Letter to the Editor: A reply – acknowledged reasonable limitations in a secondary analysis but key conclusions remain in ‘The neural basis of flashback formation: the impact of viewing trauma’
Human kin detection
Natural selection has favored the evolution of behaviors that benefit not only one's genes, but also their copies in genetically related individuals. These behaviors include optimal outbreeding (choosing a mate that is neither too closely related, nor too distant), nepotism (helping kin), and spite (hurting non-kin at a personal cost), and all require some form of kin detection or kin recognition. Yet, kinship cannot be assessed directly; human kin detection relies on heuristic cues that take into account individuals' context (whether they were reared by our mother, or grew up in our home, or were born to our spouse), appearance (whether they smell or look like us), and ability to arouse certain feelings (whether we feel emotionally close to them). The uncertainties of kin detection, along with its dependence on social information, create ample opportunities for the evolution of deception and self-deception. For example, babies carry no unequivocal stamp of their biological father, but across cultures they are passionately claimed to resemble their mother's spouse; to the same effect, 'neutral' observers are greatly influenced by belief in relatedness when judging resemblance between strangers. Still, paternity uncertainty profoundly shapes human relationships, reducing not only the investment contributed by paternal versus maternal kin, but also prosocial behavior between individuals who are related through one or more males rather than through females alone. Because of its relevance to racial discrimination and political preferences, the evolutionary pressure to prefer kin to non-kin has a manifold influence on society at large.
Software Sustainability: The Modern Tower of Babel
The development of sustainable software has been identified as one of the key challenges in the field of computational science and engineering. However, there is currently no agreed definition of the concept. Current definitions range from a composite, non-functional requirement to simply an emergent property. This lack of clarity leads to confusion, and potentially to ineffective and inefficient efforts to develop sustainable software systems. The aim of this paper is to explore the emerging definitions of software sustainability from the field of software engineering in order to contribute to the question: what is software sustainability? The preliminary analysis suggests that the concept of software sustainability is complex and multifaceted, with any consensus towards a shared definition within the field of software engineering yet to be achieved.
Experimentally measured radiative lifetimes and oscillator strengths in neutral vanadium
We report a new study of the V I atom using a combination of time-resolved laser-induced fluorescence and Fourier transform spectroscopy that contains newly measured radiative lifetimes for 25 levels between 24,648 cm⁻¹ and 37,518 cm⁻¹ and oscillator strengths for 208 lines between 3040 and 20000 Å from 39 upper energy levels. Thirteen of these oscillator strengths have not been reported previously. This work was conducted independently of the recent studies of neutral vanadium lifetimes and oscillator strengths carried out by Den Hartog et al. and Lawler et al., and thus serves as a means to verify those measurements. Where our data overlap with their data, we generally find extremely good agreement in both level lifetimes and oscillator strengths. However, we also find evidence that Lawler et al. have systematically underestimated oscillator strengths for lines in the region of 9000 ± 100 Å. We suggest a correction of 0.18 ± 0.03 dex for these values to bring them into agreement with our results and those of Whaling et al. We also report new measurements of hyperfine structure splitting factors for three odd levels of V I lying between 24,700 and 28,400 cm⁻¹.
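As a point of reference (this restatement is not taken from the paper itself): since a dex denotes a factor of ten raised to the stated exponent, a +0.18 dex correction is equivalent to scaling the affected gf-values by roughly a factor of 1.5.

```latex
% Illustrative restatement of the suggested +0.18 dex correction
\log gf_{\mathrm{corrected}} = \log gf_{\mathrm{reported}} + 0.18
\quad\Longleftrightarrow\quad
gf_{\mathrm{corrected}} = 10^{0.18}\, gf_{\mathrm{reported}} \approx 1.5\, gf_{\mathrm{reported}}
```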
Arduous implementation: Does the Normalisation Process Model explain why it's so difficult to embed decision support technologies for patients in routine clinical practice?
Background: Decision support technologies (DSTs, also known as decision aids) help patients and professionals take part in collaborative decision-making processes. Trials have shown favorable impacts on patient knowledge, satisfaction, decisional conflict and confidence. However, they have not become routinely embedded in health care settings. Few studies have approached this issue using a theoretical framework. We explained problems of implementing DSTs using the Normalization Process Model, a conceptual model that focuses attention on how complex interventions become routinely embedded in practice.
Methods: The Normalization Process Model was used as the basis of conceptual analysis of the outcomes of previous primary research and reviews. Using a virtual working environment, we applied the model and its main concepts to examine: the 'workability' of DSTs in professional-patient interactions; how DSTs affect knowledge relations between their users; how DSTs impact on users' skills and performance; and the impact of DSTs on the allocation of organizational resources.
Results: Conceptual analysis using the Normalization Process Model provided insight on implementation problems for DSTs in routine settings. Current research focuses mainly on the interactional workability of these technologies, but factors related to divisions of labor in health care, and the organizational contexts in which DSTs are used, are poorly described and understood.
Conclusion: The model successfully provided a framework for helping to identify factors that promote and inhibit the implementation of DSTs in healthcare, and gave us insights into factors influencing the introduction of new technologies into contexts where negotiations are characterized by asymmetries of power and knowledge. Future research and development on the deployment of DSTs needs to take a more holistic approach and give emphasis to the structural conditions and social norms in which these technologies are enacted.
Turnip mosaic potyvirus probably first spread to Eurasian brassica crops from wild orchids about 1000 years ago
Turnip mosaic potyvirus (TuMV) is probably the most widespread and damaging virus that infects cultivated brassicas worldwide. Previous work has indicated that the virus originated in western Eurasia, with all of its closest relatives being viruses of monocotyledonous plants. Here we report that we have identified a sister lineage of TuMV-like potyviruses (TuMV-OM) from European orchids. The isolates of TuMV-OM form a monophyletic sister lineage to the brassica-infecting TuMVs (TuMV-BIs), and are nested within a clade of monocotyledon-infecting viruses. Extensive host-range tests showed that all of the TuMV-OMs are biologically similar to, but distinct from, TuMV-BIs and do not readily infect brassicas. We conclude that it is more likely that TuMV evolved from a TuMV-OM-like ancestor than the reverse. We did Bayesian coalescent analyses using a combination of novel and published sequence data from four TuMV genes [helper component-proteinase protein (HC-Pro), protein 3 (P3), nuclear inclusion b protein (NIb), and coat protein (CP)]. Three genes (HC-Pro, P3, and NIb), but not the CP gene, gave results indicating that the TuMV-BI viruses diverged from TuMV-OMs around 1000 years ago. Only 150 years later, the four lineages of the present global population of TuMV-BIs diverged from one another. These dates are congruent with historical records of the spread of agriculture in Western Europe. From about 1200 years ago, there was a warming of the climate, and agriculture and the human population of the region greatly increased. Farming replaced woodlands, fostering viruses and aphid vectors that could invade the crops, which included several brassica cultivars and weeds. Later, starting 500 years ago, inter-continental maritime trade probably spread the TuMV-BIs to the remainder of the world.
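As a rough illustration of the dating logic underlying such analyses (the study itself used Bayesian coalescent methods on the gene sequences; the rate and distance below are invented placeholders, not values from the paper), a strict molecular clock converts a pairwise genetic distance into a divergence time:

```python
# Back-of-the-envelope molecular-clock dating, illustrating the principle only.
# The rate and distance are hypothetical placeholders, not values from the paper.

substitution_rate = 1.0e-4   # substitutions per site per year (hypothetical RNA-virus-like rate)
pairwise_distance = 0.20     # substitutions per site between the two lineages (hypothetical)

# Under a strict clock, two diverging lineages accumulate differences at twice
# the per-lineage rate, so the time since their common ancestor is d / (2 * rate).
divergence_time_years = pairwise_distance / (2 * substitution_rate)
print(f"Estimated divergence time: about {divergence_time_years:.0f} years ago")
```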
Prediction of landing gear loads using machine learning techniques
This article investigates the feasibility of using machine learning algorithms to predict the loads experienced by a landing gear during landing. For this purpose, results on drop test data and flight test data are examined. The article focuses on the use of Gaussian process regression for the prediction of loads on the components of a landing gear. For the learning task, comprehensive measurement data from drop tests are available. These include measurements of strains at key locations, such as on the side-stay and torque link, as well as acceleration measurements of the drop carriage and the gear itself, measurements of shock absorber travel, tyre closure, shock absorber pressure and wheel speed. Ground-to-tyre loads are also available through measurements made with a drop test ground reaction platform. The aim is to train the Gaussian process to predict load at a particular location from other available measurements, such as accelerations or measurements of the shock absorber. If models can be successfully trained, then future load patterns may be predicted using only these measurements. The ultimate aim is to produce an accurate model that can predict the load at a number of locations across the landing gear using measurements that are readily available or may be measured more easily than directly measuring strain on the gear itself (for example, these may be measurements already available on the aircraft, or from a small number of sensors attached to the gear). The drop test data models provide a positive feasibility test, which is the basis for moving on to the critical task of prediction on flight test data. For this, a wide range of available flight test measurements are considered as potential model inputs (excluding strain measurements themselves), before attempting to refine the model or use a smaller number of measurements for the prediction.
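As a minimal sketch of the kind of regression task described above, the snippet below fits scikit-learn's GaussianProcessRegressor to synthetic stand-ins for drop-test channels; the feature set, data and kernel are illustrative assumptions rather than the article's actual measurements or model.

```python
# Minimal Gaussian process regression sketch: predict a load channel from
# other drop-test measurements. Data and channel names are illustrative only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins for measured channels (e.g. carriage acceleration, gear
# acceleration, shock absorber travel, shock absorber pressure) and a
# strain/load target at one location on the gear.
X = rng.normal(size=(n, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.3 * np.sin(3 * X[:, 1]) + 0.05 * rng.normal(size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Anisotropic RBF kernel plus a white-noise term to absorb measurement noise.
kernel = 1.0 * RBF(length_scale=np.ones(X.shape[1])) + WhiteKernel(noise_level=1e-2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

# The predictive mean and standard deviation give both a load estimate and an
# uncertainty band, useful when judging whether to trust predictions on unseen
# (e.g. flight-test) conditions.
mean, std = gp.predict(X_test, return_std=True)
print("RMSE:", np.sqrt(np.mean((mean - y_test) ** 2)))
```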
Personalized versus standardized dosing strategies for the treatment of childhood amblyopia: study protocol for a randomized controlled trial
Background: Amblyopia is the commonest visual disorder of childhood in Western societies, affecting predominantly spatial visual function. Treatment typically requires a period of refractive correction (‘optical treatment’) followed by occlusion: covering the nonamblyopic eye with a fabric patch for varying daily durations. Recent studies have provided insight into the optimal amount of patching (‘dose’), leading to the adoption of standardized dosing strategies, which, though an advance on previous ad-hoc regimens, take little account of individual patient characteristics. This trial compares the effectiveness of a standardized dosing strategy (that is, a fixed daily occlusion dose based on disease severity) with a personalized dosing strategy (derived from known treatment dose-response functions), in which an initially prescribed occlusion dose is modulated, in a systematic manner, dependent on treatment compliance.
Methods/design: A total of 120 children aged between 3 and 8 years, diagnosed with amblyopia in association with either anisometropia or strabismus, or both, will be randomized to receive either a standardized or a personalized occlusion dose regimen. To avoid confounding by the known benefits of refractive correction, participants will not be randomized until they have completed an optical treatment phase. The primary study objective is to determine whether, at trial endpoint, participants receiving a personalized dosing strategy require fewer hours of occlusion than those in receipt of a standardized dosing strategy. Secondary objectives are to quantify the relationship between observed changes in visual acuity (logMAR, logarithm of the Minimum Angle of Resolution) and age, amblyopia type, and severity of the amblyopic visual acuity deficit.
Discussion: This is the first randomized controlled trial of occlusion therapy for amblyopia to compare a treatment arm representative of current best practice with an arm representative of an entirely novel treatment regimen based on statistical modelling of previous trial outcome data. Should the personalized dosing strategy demonstrate superiority over the standardized dosing strategy, its adoption into routine practice could bring practical benefits in reducing the duration of treatment needed to achieve an optimal outcome.
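To make the idea of a compliance-modulated dose concrete, here is a toy sketch of one way such an adjustment could work; it is an illustrative assumption only, not the trial's actual algorithm, which is derived from modelled dose-response functions.

```python
# Toy illustration of compliance-modulated occlusion dosing. This is NOT the
# trial's algorithm; it simply shows the idea of rescaling the prescribed dose
# so that the *received* dose tracks a target set by a dose-response model.

def next_prescribed_dose(target_daily_hours: float,
                         prescribed_hours: float,
                         worn_hours: float,
                         max_hours: float = 12.0) -> float:
    """Adjust the prescription so expected wear matches the target dose."""
    compliance = min(worn_hours / prescribed_hours, 1.0) if prescribed_hours > 0 else 0.0
    if compliance == 0.0:
        return prescribed_hours  # no compliance information; leave the prescription unchanged
    adjusted = target_daily_hours / compliance
    return min(adjusted, max_hours)

# Example: target 4 h/day, child wore 2 of the 4 prescribed hours (50% compliance),
# so the prescription is raised to 8 h/day to keep the received dose on target.
print(next_prescribed_dose(target_daily_hours=4.0, prescribed_hours=4.0, worn_hours=2.0))
```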
Between feminism and anorexia: An autoethnography
Critical feminist work on eating disorders has grown substantially since its establishment in the 1980s, and has increasingly incorporated the use of anorexic stories, voices and experiences. Yet rarely do such accounts offer the anorexic a space to respond to the now established feminist conceptions of the problem which structure the books or articles in which they appear. Anorexic, or recovered anorexic, voices are used by the researcher to interpret the role played by gender, even whilst the subjects are invited to respond to, and critique, medical and popular discourses on the disorder. This lack of dialogue is all the more striking in the context of the feminist aim to fight ‘back against the tendency to silence anorexic women's’ own interpretations of their starving, treatment and construction (Saukko, 2008: 34). As someone who suffered from anorexia for 20 years, I offer here an autoethnographic account of my experience of encountering the feminist literature on anorexia in a bid to speak back, or to enter into a dialogue between feminist politics and eating disorder experience.
Display of probability densities for data from a continuous distribution
Based on cumulative distribution functions, Fourier series expansion and Kolmogorov tests, we present a simple method to display probability densities for data drawn from a continuous distribution. It is often more efficient than using histograms.
Comment: 5 pages, 4 figures, presented at Computer Simulation Studies XXIV, Athens, GA, 2011
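A rough sketch of the general idea as described in the abstract: map the data through a fitted reference CDF, expand the deviation of the empirical CDF from uniformity in a low-order Fourier series, and differentiate to obtain a smooth density. The choice of reference distribution, the fixed number of terms, and the omission of the Kolmogorov-test term selection are simplifying assumptions of this sketch, not details from the paper.

```python
# Sketch of a CDF/Fourier-series density display (simplified; see caveats above).
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

rng = np.random.default_rng(1)
data = np.sort(rng.normal(loc=1.0, scale=2.0, size=2000))

# 1) Map the data through a fitted reference CDF; u should be roughly uniform on [0, 1].
mu, sigma = data.mean(), data.std(ddof=1)
u = stats.norm.cdf(data, loc=mu, scale=sigma)
n = len(u)
F_emp = np.arange(1, n + 1) / n            # empirical CDF at the sorted points

# 2) Sine-series expansion of the deviation D(u) = F_emp(u) - u (vanishes at u = 0 and 1).
D = F_emp - u
m = np.arange(1, 4)                        # a few low-order terms (the paper selects this via Kolmogorov tests)
coeffs = 2.0 * trapezoid(D[:, None] * np.sin(np.pi * m * u[:, None]), u, axis=0)

# 3) Differentiate the reconstructed CDF and transform back to the data scale:
#    f(x) = f0(x) * [1 + sum_m d_m * pi * m * cos(pi * m * u(x))].
x = np.linspace(data.min(), data.max(), 400)
ux = stats.norm.cdf(x, loc=mu, scale=sigma)
correction = 1.0 + (coeffs * np.pi * m * np.cos(np.pi * m * ux[:, None])).sum(axis=1)
density = stats.norm.pdf(x, loc=mu, scale=sigma) * correction
print(density[:5])                         # smooth density estimate, with no histogram binning
```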
