Acceptability of novel lifelogging technology to determine context of sedentary behaviour in older adults
Objective: Lifelogging, using body-worn sensors (activity monitors and time-lapse photography), has the potential to shed light on the context of sedentary behaviour. The objectives of this study were to examine the acceptability, to older adults, of using lifelogging technology and to indicate its usefulness for understanding behaviour. Method: Six older adults (4 males, mean age: 68 years) wore the equipment (ActivPAL™ and Vicon Revue™/SenseCam™) for 7 consecutive days during free-living activity. The older adults' perception of the lifelogging technology was assessed through semi-structured interviews, including a brief questionnaire (Likert scale), and reference to the researcher's diary. Results: Older adults in this study found the equipment acceptable to wear; it did not interfere with privacy or safety and did not create reactivity, but they reported problems with the technical functioning of the camera. Conclusion: This combination of sensors has good potential to provide lifelogging information on the context of sedentary behaviour.
A frailty model for (interval) censored family survival data, applied to the age at onset of non-physical problems
Family survival data can be used to estimate the degree of genetic and environmental contributions to the age at onset of a disease or of a specific event in life. The data can be modeled with a correlated frailty model in which the frailty variable accounts for the degree of kinship within the family. The heritability (degree of heredity) of the age at a specific event in life (or the onset of a disease) is usually defined as the proportion of variance of the survival age that is associated with genetic effects. If the survival age is (interval) censored, heritability as usually defined cannot be estimated. Instead, it is defined as the proportion of variance of the frailty associated with genetic effects. In this paper we describe a correlated frailty model to estimate the heritability and the degree of environmental effects on the age at which individuals contact a social worker for the first time and to test whether there is a difference between the survival functions of this age for twins and non-twins. © 2009 The Author(s)
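The frailty-based heritability definition used here can be sketched as follows (a minimal illustration in my own notation, assuming an additive genetic-plus-environmental decomposition of the frailty; the paper's exact parameterisation may differ):

```latex
% Sketch of frailty-based heritability (assumed additive decomposition;
% symbols are illustrative, not taken from the paper).
\begin{align}
  Z_j &= G_j + E_j, \qquad G_j \perp E_j,\\
  \operatorname{Var}(Z_j) &= \sigma_G^2 + \sigma_E^2,\\
  h^2 &= \frac{\sigma_G^2}{\sigma_G^2 + \sigma_E^2},
\end{align}
% with Cov(G_j, G_{j'}) within a family scaled by the degree of kinship,
% e.g. 1 for monozygotic twins and 1/2 for dizygotic twins and siblings.
```

Under interval censoring the survival-age variance is not identified, which is why the genetic share is taken of the frailty variance rather than of the age at onset itself.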
Designed Azolopyridinium Salts Block Protective Antigen Pores In Vitro and Protect Cells from Anthrax Toxin
Background: Several intracellularly acting bacterial protein toxins of the AB type, which are known to enter cells by endocytosis, are shown to produce channels. This holds true for protective antigen (PA), the binding component of the tripartite anthrax toxin of Bacillus anthracis. Evidence has been presented that translocation of the enzymatic components of anthrax toxin across the endosomal membrane of target cells and channel formation by the heptameric/octameric PA63 binding/translocation component are related phenomena. Chloroquine and some 4-aminoquinolines, known as potent drugs against Plasmodium falciparum infection of humans, efficiently block the PA63 channel in a dose-dependent way. Methodology/Principal Findings: Here we demonstrate that related positively charged heterocyclic azolopyridinium salts block the PA63 channel in the μM range when both inhibitor and PA63 are added to the same side of the membrane, the cis side, which corresponds to the lumen of acidified endosomal vesicles of target cells. Noise analysis allowed study of the kinetics of plug formation by the heterocycles. In vivo experiments using J774A.1 macrophages demonstrated that the inhibitors of PA63 channel function also efficiently block intoxication of the cells by the combination of lethal factor and PA63, in the same concentration range as that in which they block the channels in vitro. Conclusions/Significance: These results strongly argue in favor of transport of lethal factor through the PA63 channel and suggest that the heterocycles used in this study could represent attractive candidates for development of novel therapeutic strategies against anthrax. © 2013 Beitzinger et al.
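Dose-dependent channel block of this kind is usually summarised by a single-site binding curve; as a hedged sketch (an assumed standard form, not necessarily the exact model fitted in the paper), with c the inhibitor concentration:

```latex
% Single-site (Langmuir-type) titration of channel block; standard form,
% assumed here for illustration.
\begin{equation}
  \frac{G(c)}{G_0} \;=\; \frac{K_{1/2}}{K_{1/2} + c}
\end{equation}
% G(c): membrane conductance at inhibitor concentration c; G_0: conductance
% without inhibitor. Half of the PA63 channels are blocked at c = K_{1/2},
% which the abstract places in the micromolar range.
```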
Two-Particle-Self-Consistent Approach for the Hubbard Model
Even at weak to intermediate coupling, the Hubbard model poses a formidable
challenge. In two dimensions in particular, standard methods such as the Random
Phase Approximation are no longer valid since they predict a finite temperature
antiferromagnetic phase transition prohibited by the Mermin-Wagner theorem. The
Two-Particle-Self-Consistent (TPSC) approach satisfies that theorem as well as
particle conservation, the Pauli principle, the local moment and local charge
sum rules. The self-energy formula does not assume a Migdal theorem. There is
consistency between one- and two-particle quantities. Internal accuracy checks
allow one to test the limits of validity of TPSC. Here I present a pedagogical
review of TPSC along with a short summary of existing results and two case
studies: a) the opening of a pseudogap in two dimensions when the correlation
length is larger than the thermal de Broglie wavelength, and b) the conditions
for the appearance of d-wave superconductivity in the two-dimensional Hubbard
model.Comment: Chapter in "Theoretical methods for Strongly Correlated Systems",
Edited by A. Avella and F. Mancini, Springer Verlag, (2011) 55 pages.
Misprint in Eq.(23) corrected (thanks D. Bergeron
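For orientation, the local sum rules and ansatz the abstract alludes to can be written as follows (standard Vilk-Tremblay conventions; a sketch with assumed notation, not equations quoted from the chapter):

```latex
% TPSC spin/charge susceptibilities and local sum rules
% (standard Vilk--Tremblay form; notation assumed, not copied from the text).
\begin{align}
  \chi_{\mathrm{sp}}(q) &= \frac{\chi_0(q)}{1 - \tfrac{U_{\mathrm{sp}}}{2}\,\chi_0(q)}, &
  \chi_{\mathrm{ch}}(q) &= \frac{\chi_0(q)}{1 + \tfrac{U_{\mathrm{ch}}}{2}\,\chi_0(q)},\\
  \frac{T}{N}\sum_{q}\chi_{\mathrm{sp}}(q) &= n - 2\langle n_\uparrow n_\downarrow\rangle, &
  \frac{T}{N}\sum_{q}\chi_{\mathrm{ch}}(q) &= n + 2\langle n_\uparrow n_\downarrow\rangle - n^2,
\end{align}
% closed by the ansatz U_sp = U <n_up n_dn> / (<n_up><n_dn>); the Pauli
% principle enters through <n_sigma^2> = <n_sigma>.
```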
The stellar and sub-stellar IMF of simple and composite populations
The current knowledge on the stellar IMF is documented. It appears to become top-heavy when the star-formation rate density surpasses about 0.1 Msun/(yr pc^3) on a pc scale, and it may become increasingly bottom-heavy with increasing metallicity and in increasingly massive early-type galaxies. It declines quite steeply below about 0.07 Msun, with brown dwarfs (BDs) and very low mass stars having their own IMF. The most massive star of mass mmax formed in an embedded cluster with stellar mass Mecl correlates strongly with Mecl, being a result of gravitation-driven but resource-limited growth and fragmentation-induced starvation. There is no convincing evidence whatsoever that massive stars form in isolation. Various methods of discretising a stellar population are introduced: optimal sampling leads to a mass distribution that perfectly represents the exact form of the desired IMF and the mmax-to-Mecl relation, while random sampling results in statistical variations of the shape of the IMF. The observed mmax-to-Mecl correlation and the small spread of IMF power-law indices together suggest that optimally sampling the IMF may be a more realistic description of star formation than random sampling from a universal IMF with a constant upper mass limit. Composite populations on galaxy scales, which are formed from many pc-scale star-formation events, need to be described by the integrated galactic IMF. This IGIMF varies systematically from top-light to top-heavy depending on galaxy type and star-formation rate, with dramatic implications for theories of galaxy formation and evolution.
Comment: 167 pages, 37 figures, 3 tables; published in Stellar Systems and Galactic Structure, Vol. 5, Springer. This revised version is consistent with the published version and includes additional references and minor additions to the text, as well as a recomputed Table 1. ISBN 978-90-481-8817-
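To make the contrast with optimal sampling concrete, here is a minimal, hypothetical sketch of random sampling from a single power-law IMF via inverse-transform sampling (the Salpeter-like slope, mass limits, and function name are illustrative; the IMF discussed in the text is multi-segment):

```python
import numpy as np

def sample_powerlaw_imf(n, m_lo=0.5, m_hi=150.0, alpha=2.35, rng=None):
    """Draw n stellar masses from a single power-law IMF xi(m) ~ m**-alpha
    by inverse-transform sampling (illustrative Salpeter-like slope)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(n)
    a = 1.0 - alpha                    # exponent after integrating the IMF
    lo, hi = m_lo**a, m_hi**a
    return (lo + u * (hi - lo)) ** (1.0 / a)

masses = sample_powerlaw_imf(1000)
print(masses.max())  # the most massive star scatters from draw to draw
```

Repeated draws illustrate the abstract's point: under random sampling the most massive star varies stochastically from cluster to cluster, whereas optimal sampling fixes it through the mmax-to-Mecl relation.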
Impact Factor: outdated artefact or stepping-stone to journal certification?
A review of Garfield's journal impact factor and its specific implementation as the Thomson Reuters Impact Factor reveals several weaknesses in this commonly used indicator of journal standing. Key limitations include the mismatch between citing and cited documents, the deceptive display of three decimals that belies the real precision, and the absence of confidence intervals. These are minor issues that are easily amended and should be corrected, but more substantive improvements are needed. There are indications that the scientific community seeks and needs better certification of journal procedures to improve the quality of published science. Comprehensive certification of editorial and review procedures could help ensure adequate procedures to detect duplicate and fraudulent submissions.
Comment: 25 pages, 12 figures, 6 tables.
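For reference, the two-year impact factor for year y is computed as follows; the citing/cited mismatch arises because the numerator counts citations to all of the journal's items while the denominator counts only "citable" items:

```latex
% Two-year journal impact factor (standard Garfield / Thomson Reuters form).
\begin{equation}
  \mathrm{IF}_y \;=\; \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}}
\end{equation}
% C_y(t): citations received in year y by items the journal published in
% year t; N_t: number of citable items (articles, reviews) published in t.
```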
Converting simulated total dry matter to fresh marketable yield for field vegetables at a range of nitrogen supply levels
Simultaneous analysis of the economic and environmental performance of horticultural crop production requires qualified assumptions on the effect of management options, particularly nitrogen (N) fertilisation, on the net returns of the farm. Dynamic soil-plant-environment simulation models for agro-ecosystems are frequently applied to predict crop yield, generally as dry matter per area, and the environmental impact of production. Economic analysis requires conversion of yields to fresh marketable weight, which is not easy to calculate for vegetables, since different species have different properties and special market requirements. Furthermore, the marketable part of many vegetables depends on N availability during growth, which may lead to complete crop failure under sub-optimal N supply in tightly calculated N fertiliser regimes or low-input systems. In this paper we present two methods for converting simulated total dry matter to marketable fresh matter yield for various vegetables and European growth conditions, taking into consideration the effect of N supply: (i) a regression-based function for vegetables sold as bulk or bunching ware and (ii) a population approach for row crops sold by the piece. For both methods, which are to be used in the context of a dynamic simulation model, parameter values were compiled from a literature survey. Implemented in such a model, both algorithms were tested against experimental field data, yielding an Index of Agreement of 0.80 for the regression strategy and 0.90 for the population strategy. Furthermore, the population strategy captured rather well the effect of crop spacing on yield and the effect of N supply on product grading.
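The reported goodness-of-fit statistic is presumably Willmott's index of agreement; a minimal sketch of the conventional (Willmott 1981) form, assumed here rather than taken from the paper's code:

```python
import numpy as np

def index_of_agreement(obs, pred):
    """Willmott's index of agreement d in [0, 1]; 1 means a perfect match
    between observed and predicted values."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    obar = obs.mean()
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obar) + np.abs(obs - obar)) ** 2)
    return 1.0 - num / den

# e.g. index_of_agreement(field_yields, simulated_yields) -> 0.80 or 0.90
# would correspond to the regression and population strategies above.
```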
A post-wildfire response in cave dripwater chemistry
Surface disturbances above a cave have the potential to impact cave dripwater discharge, isotopic composition and solute concentrations, which may subsequently be recorded in the stalagmites forming from these dripwaters. One such disturbance is wildfire; however, the effects of wildfire on cave chemistry and hydrology remain poorly understood. Using dripwater data monitored at two sites in a shallow cave, beneath a forest, in southwest Australia, we provide one of the first cave monitoring studies conducted in a post-fire regime, which seeks to identify the effects of wildfire and post-fire vegetation dynamics on dripwater δ18O composition and solute concentrations. We compare our post-wildfire δ18O data with dripwater δ18O predicted by a forward model based on measured hydro-climatic influences alone. This helps to delineate hydro-climatic and fire-related influences on δ18O. Further, we also compare our data with data from Golgotha Cave – which is in a similar environment but was not influenced by this particular fire – as well as with regional groundwater chemistry, in an attempt to determine the extent to which wildfire affects dripwater chemistry. We find in our forested shallow cave that δ18O is higher after the fire relative to modelled δ18O. We attribute this to increased evaporation due to reduced albedo and canopy cover. The solute response post-fire varied between the two drip sites: at Site 1a, which had a large tree above it that was lost in the fire, we see a response reflecting both a reduction in tree water use and a removal of nutrients (Cl, Mg, Sr, and Ca) from the surface and subsurface. Solutes such as SO4 and K maintain high concentrations, due to the abundance of above-ground ash. At Site 2a, which was covered by lower–middle storey vegetation, we see a solute response reflecting evaporative concentration of all studied ions (Cl, Ca, Mg, Sr, SO4, and K), similar to the trend in δ18O for this drip site. We open a new avenue for speleothem science in fire-prone regions, focusing on the geochemical records of speleothems as potential palaeo-fire archives. © Author(s) 2016
High-resolution regional gravity field recovery from Poisson wavelets using heterogeneous observational techniques
Calibration of Traffic Simulation Models using SPSA
National Technical University of Athens -- Master's thesis. Interdisciplinary-Interdepartmental Postgraduate Studies Programme (D.P.M.S.) "Geoinformatics".
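Only the title and thesis metadata are available here, but the algorithm the title names is well defined: Spall's simultaneous perturbation stochastic approximation (SPSA). A minimal, hypothetical sketch of the generic iteration such a calibration wraps around a traffic-simulator loss (all names and gain constants are illustrative):

```python
import numpy as np

def spsa_minimize(loss, theta0, iters=200, a=0.1, c=0.1, A=10.0,
                  alpha=0.602, gamma=0.101, rng=None):
    """Minimise a noisy scalar loss with SPSA (Spall). Each iteration uses
    only two loss evaluations, independent of the parameter dimension."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, float).copy()
    for k in range(iters):
        ak = a / (k + 1 + A) ** alpha          # decaying step size
        ck = c / (k + 1) ** gamma              # decaying perturbation size
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher draws
        # Simultaneous-perturbation gradient estimate from two evaluations
        ghat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
        theta -= ak * ghat
    return theta
```

Two simulator runs per iteration, regardless of how many parameters are calibrated, is the property that makes SPSA attractive for expensive traffic simulation models.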
