A Revised Design for Microarray Experiments to Account for Experimental Noise and Uncertainty of Probe Response
Background
Although microarrays are widely used analysis tools in biomedical research, they are known to yield noisy output that usually requires experimental confirmation. To tackle this problem, many studies have developed rules for optimizing probe design and devised complex statistical tools to analyze the output. However, less emphasis has been placed on systematically identifying the noise component as part of the experimental procedure. One source of noise is the variance in probe binding, which can be assessed by replicating array probes. A second source is poor probe performance, which can be assessed by calibrating the array based on a dilution series of target molecules. Using model experiments for copy number variation and gene expression measurements, we investigate here a revised design for microarray experiments that addresses both of these sources of variance.
Results
Two custom arrays were used to evaluate the revised design: one based on 25-mer probes from an Affymetrix design and the other based on 60-mer probes from an Agilent design. To assess experimental variance in probe binding, all probes were replicated ten times. To assess probe performance, the probes were calibrated using a dilution series of target molecules and the signal response was fitted to an adsorption model. We found that a significant part of the signal variance could be controlled by averaging across probes and removing probes that are non-responsive or poorly responsive in the calibration experiment. Taking this into account, one can obtain a more reliable signal with the added option of obtaining absolute rather than relative measurements.
Conclusion
Assessing technical variance within the experiments, combined with calibrating the probes, makes it possible to remove poorly responding probes and yields more reliable signals for the remaining ones. Once an array is properly calibrated, absolute quantification of signals becomes straightforward, alleviating the need for normalization and reference hybridizations.
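As an illustration of the calibration step described above, the sketch below fits a probe's dilution-series signal to a simple Langmuir-type adsorption model, flags poorly responding probes and inverts the fit to estimate absolute target concentration. The model form, thresholds and data are assumptions for illustration, not the paper's actual pipeline.

```python
# Minimal sketch of probe calibration against a dilution series (illustrative only;
# the paper's actual adsorption model and rejection thresholds are not reproduced here).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, s_max, k_d):
    """Langmuir-type adsorption isotherm: signal saturates at s_max."""
    return s_max * c / (k_d + c)

# Hypothetical dilution series (target concentration, arbitrary units) and the
# replicate-averaged signal of one probe
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
signal = np.array([120.0, 340.0, 900.0, 1900.0, 3100.0, 3800.0])

popt, _ = curve_fit(langmuir, conc, signal, p0=(4000.0, 3.0))
s_max, k_d = popt
residuals = signal - langmuir(conc, *popt)
r2 = 1 - np.sum(residuals**2) / np.sum((signal - signal.mean()) ** 2)

# Flag the probe as non-responsive / poorly responsive if the fit is poor or flat
responsive = (r2 > 0.9) and (s_max > 5 * signal[0])
print(f"S_max={s_max:.0f}, K_d={k_d:.2f}, R^2={r2:.3f}, responsive={responsive}")

# Once calibrated, the fit can be inverted to report an absolute concentration
def concentration_from_signal(s, s_max, k_d):
    return k_d * s / (s_max - s)

print("estimated concentration for signal 2000:", concentration_from_signal(2000.0, s_max, k_d))
```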
Cosmogenic neutron production at the Sudbury Neutrino Observatory
Neutrons produced in nuclear interactions initiated by cosmic-ray muons present an irreducible background to many rare-event searches, even in detectors located deep underground. Models for the production of these neutrons have been tested against previous experimental data, but the extrapolation to deeper sites is not well understood. Here we report results from an analysis of cosmogenically produced neutrons at the Sudbury Neutrino Observatory. A specific set of observables is presented, which can be used to benchmark the validity of Geant4 physics models. In addition, the cosmogenic neutron yield, in units of 10⁻⁴ cm²/(g·μ), is measured to be 7.28 ± 0.09 (stat) +1.59/−1.12 (syst) in pure heavy water and 7.30 ± 0.07 (stat) +1.40/−1.02 (syst) in NaCl-loaded heavy water. These results provide unique insights into this potential background source for experiments at SNOLAB.
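For orientation, the sketch below shows how a yield quoted in units of 10⁻⁴ cm²/(g·μ) relates to raw quantities (neutrons per muon per traversed column density). All input numbers are made-up placeholders and the formula is a simplification, not the SNO analysis.

```python
# Illustrative only: relate a cosmogenic neutron yield in 1e-4 cm^2/(g*mu) to raw counts.
# All inputs below are made-up placeholders, not SNO data.
n_neutrons_detected = 500.0      # background-subtracted neutron count
detection_efficiency = 0.40      # assumed neutron detection efficiency
n_muons = 6.0e4                  # number of through-going muons in the livetime
avg_path_gcm2 = 1.0e3            # mean muon path length through the target, in g/cm^2

yield_per_mu_gcm2 = (n_neutrons_detected / detection_efficiency) / (n_muons * avg_path_gcm2)
print(f"toy yield = {yield_per_mu_gcm2 / 1e-4:.2f} x 10^-4 cm^2/(g*mu)")

# Quoting asymmetric systematics in the same form as the abstract:
central, stat, syst_up, syst_dn = 7.28, 0.09, 1.59, 1.12
print(f"{central} +/- {stat} (stat) +{syst_up}/-{syst_dn} (syst) x 10^-4 cm^2/(g*mu)")
```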
Characterizing genomic alterations in cancer by complementary functional associations.
Systematic efforts to sequence the cancer genome have identified large numbers of mutations and copy number alterations in human cancers. However, elucidating the functional consequences of these variants, and their interactions to drive or maintain oncogenic states, remains a challenge in cancer research. We developed REVEALER, a computational method that identifies combinations of mutually exclusive genomic alterations correlated with functional phenotypes, such as the activation or gene dependency of oncogenic pathways or sensitivity to a drug treatment. We used REVEALER to uncover complementary genomic alterations associated with the transcriptional activation of β-catenin and NRF2, MEK-inhibitor sensitivity, and KRAS dependency. REVEALER successfully identified both known and new associations, demonstrating the power of combining functional profiles with extensive characterization of genomic alterations in cancer genomes
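To make the idea of complementary, mutually exclusive alterations concrete, the toy sketch below greedily ORs together binary alteration profiles to maximise their association with a target functional profile. REVEALER's actual information-coefficient scoring and conditional re-scoring are not reproduced; a plain Pearson correlation stands in, and the data are synthetic.

```python
# Toy sketch of a REVEALER-style greedy search for complementary alterations
# (illustrative; not REVEALER's actual scoring or data).
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 200, 50
alterations = rng.integers(0, 2, size=(n_features, n_samples))   # binary alteration matrix
# synthetic functional profile driven by two complementary alterations (features 3 and 17)
target = np.maximum(alterations[3], alterations[17]) + 0.3 * rng.normal(size=n_samples)

def score(indicator, profile):
    """Association of a combined binary indicator with the functional profile."""
    if indicator.std() == 0:
        return -np.inf
    return np.corrcoef(indicator, profile)[0, 1]

selected = []
combined = np.zeros(n_samples, dtype=int)
for _ in range(3):                                    # pick up to three complementary alterations
    best_j, best_s = None, score(combined, target)
    for j in range(n_features):
        if j in selected:
            continue
        candidate = np.maximum(combined, alterations[j])   # logical OR with the current summary
        s = score(candidate, target)
        if s > best_s:
            best_j, best_s = j, s
    if best_j is None:                                 # no alteration improves the association
        break
    selected.append(best_j)
    combined = np.maximum(combined, alterations[best_j])

print("selected alterations:", selected)               # typically recovers features 3 and 17
```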
Artificial boundary conditions for the linearized Benjamin-Bona-Mahony equation
We consider various approximations of artificial boundary conditions for the linearized Benjamin-Bona-Mahony equation. Continuous (respectively discrete) artificial boundary conditions involve operators that are non-local in time, which in turn requires computing time convolutions and inverting the Laplace transform of an analytic function (respectively the Z-transform of a holomorphic function). In this paper, we derive explicit transparent boundary conditions, both continuous and discrete, for the linearized BBM equation. The equation is discretized with the Crank-Nicolson time discretization scheme, and we focus on the difference between the upwind and the centered discretization of the convection term. We use these boundary conditions to compute solutions with compact support in the computational domain, and also in the case of an incoming plane wave which is an exact solution of the linearized BBM equation. We prove consistency, stability and convergence of the numerical scheme and provide many numerical experiments to show the efficiency of our transparent boundary conditions.
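A minimal sketch of the time discretisation mentioned above, assuming the linearised BBM equation u_t + u_x − u_xxt = 0 with a Crank-Nicolson step and a switch between centred and upwind convection. It uses crude homogeneous Dirichlet boundaries rather than the paper's transparent boundary conditions, and all grid parameters are illustrative.

```python
# Crank-Nicolson sketch for u_t + u_x - u_xxt = 0 (linearized BBM), illustrative only.
import numpy as np

nx, L, dt, nsteps = 400, 40.0, 0.05, 200
dx = L / (nx - 1)
x = np.linspace(0, L, nx)
e = np.ones(nx)

# Second-difference operator for the -u_xxt term
D2 = (np.diag(e[:-1], 1) - 2 * np.diag(e) + np.diag(e[:-1], -1)) / dx**2

# Convection operator: centered or first-order upwind
centered = True
if centered:
    A = (np.diag(e[:-1], 1) - np.diag(e[:-1], -1)) / (2 * dx)
else:
    A = (np.diag(e) - np.diag(e[:-1], -1)) / dx

M = np.eye(nx) - D2                      # operator acting on u_t
lhs = M + 0.5 * dt * A                   # (M + dt/2 A) u^{n+1} = (M - dt/2 A) u^n
rhs_op = M - 0.5 * dt * A

u = np.exp(-((x - 10.0) ** 2))           # numerically compact initial bump
for _ in range(nsteps):
    rhs = rhs_op @ u
    rhs[0] = rhs[-1] = 0.0               # crude Dirichlet boundaries (not transparent BCs)
    lhs_bc = lhs.copy()
    lhs_bc[0, :], lhs_bc[-1, :] = 0.0, 0.0
    lhs_bc[0, 0] = lhs_bc[-1, -1] = 1.0
    u = np.linalg.solve(lhs_bc, rhs)

print("max |u| after propagation:", np.abs(u).max())
```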
A pilot Internet "Value of Health" Panel: recruitment, participation and compliance
Objectives
To pilot the use of a panel of members of the public to provide preference data via the Internet.
Methods
A stratified random sample of members of the general public was recruited and familiarised with the standard gamble procedure using an Internet-based tool. Health states were periodically presented in "sets" corresponding to different conditions during the study. The following were described: Recruitment (the proportion of people approached who were trained); Participation ((a) the proportion of people trained who provided any preferences and (b) the proportion of panel members who contributed to each "set" of values); and Compliance (the proportion, per participant, of preference tasks that were completed). The influence of covariates on these outcomes was investigated using univariate and multivariate analyses.
Results
A panel of 112 people was recruited. 23% of those approached (n = 5,320) responded to the invitation, and 24% of respondents (n = 1,215) were willing to participate (net = 5.5%). However, eventual recruitment rates, following training, were low (2.1% of those approached). Recruitment from areas of high socioeconomic deprivation and among ethnic minority communities was low. Eighteen sets of health state descriptions were considered over 14 months. 74% of panel members carried out at least one valuation task. People from areas of higher socioeconomic deprivation and unmarried people were less likely to participate. An average of 41% of panel members expressed preferences on each set of descriptions. Compliance ranged from 3% to 100%.
Conclusion
It is feasible to establish a panel of members of the general public to express preferences on a wide range of health state descriptions using the Internet, although differential recruitment and attrition are important challenges. Particular attention to recruitment and retention in areas of high socioeconomic deprivation and among ethnic minority communities is necessary. Nevertheless, the panel approach to preference measurement using the Internet offers the potential to provide specific utility data in a responsive manner for use in economic evaluations and to address some of the outstanding methodological uncertainties in this field.
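For readers unfamiliar with the elicitation method, the snippet below illustrates the standard-gamble logic on which the panel tool is based: the utility of a health state on the dead-full-health scale equals the indifference probability. The responses shown are hypothetical.

```python
# Illustrative standard-gamble utility: the respondent is indifferent between living in the
# health state for certain and a gamble giving full health with probability p (death with
# 1 - p); the state's utility on the 0-1 (dead to full health) scale is then p.
def standard_gamble_utility(indifference_p: float) -> float:
    if not 0.0 <= indifference_p <= 1.0:
        raise ValueError("indifference probability must lie in [0, 1]")
    return indifference_p

# Hypothetical panel responses for one health-state description
responses = [0.85, 0.9, 0.7, 0.95, 0.8]
mean_utility = sum(standard_gamble_utility(p) for p in responses) / len(responses)
print(f"mean utility = {mean_utility:.2f}")
```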
What is the evidence for the management of patients along the pathway from the emergency department to acute admission to reduce unplanned attendance and admission? An evidence synthesis
Background
Globally, the rate of emergency hospital admissions is increasing. However, little evidence exists to inform the development of interventions to reduce unplanned Emergency Department (ED) attendances and hospital admissions. The objective of this evidence synthesis was to review the evidence for interventions, conducted during the patient’s journey through the ED or acute care setting, to manage people with an exacerbation of a medical condition to reduce unplanned emergency hospital attendance and admissions.
Methods
A rapid evidence synthesis, using a systematic literature search, was undertaken in the electronic databases MEDLINE, EMBASE, CINAHL, the Cochrane Library and Web of Science for the years 2000–2014. Evidence included in this review was restricted to randomised controlled trials (RCTs) and observational studies (with a control arm) reported in peer-reviewed journals. Studies evaluating interventions for patients with an acute exacerbation of a medical condition in the ED or acute care setting that reported at least one outcome related to ED attendance or unplanned admission were included.
Results
Thirty papers met our inclusion criteria: 19 intervention studies (14 RCTs) and 11 controlled observational studies. Sixteen studies were set in the ED and 14 were conducted in an acute setting. Two studies (one RCT) set in the ED were effective in reducing ED attendance and hospital admission; both of these interventions were initiated in the ED and included a post-discharge community component. Paradoxically, three ED-initiated interventions showed an increase in ED re-attendance. Six studies (one RCT) set in acute care settings were effective in reducing hospital admission, ED re-attendance or re-admission (two in an observation ward, one in an ED assessment unit and three in which the intervention was conducted within 72 h of admission).
Conclusions
There is no clear evidence that specific interventions along the patient journey, from ED arrival to 72 h after admission, reduce ED re-attendance or readmission. Interventions targeted at high-risk patients, particularly the elderly, may reduce ED utilization and warrant future research. Some interventions showing effectiveness in reducing unplanned ED attendances and admissions are delivered by appropriately trained personnel in an environment that allows sufficient time to assess and manage patients.
A meta-analysis of long-term effects of conservation agriculture on maize grain yield under rain-fed conditions
Conservation agriculture involves reduced tillage, permanent soil cover and crop rotations to enhance soil fertility and to supply food from a dwindling land resource. Recently, conservation agriculture has been promoted in Southern Africa, mainly for maize-based farming systems. However, maize yields under rain-fed conditions are often variable. There is therefore a need to identify factors that influence crop yield under conservation agriculture and rain-fed conditions. Here, we studied maize grain yield data from experiments lasting 5 years or more under rain-fed conditions. We assessed the effect of long-term tillage and residue retention on maize grain yield under contrasting soil textures, nitrogen input and climate. Yield variability was measured by stability analysis. Our results show an increase in maize yield over time with conservation agriculture practices that include rotation and high input use in low-rainfall areas, but no difference in system stability under those conditions. We observed a strong relationship between maize grain yield and annual rainfall. Our meta-analysis gave the following findings: (1) 92% of the data show that mulch cover in high-rainfall areas leads to lower yields due to waterlogging; (2) 85% of the data show that soil texture is important in the temporal development of conservation agriculture effects, with improved yields more likely on well-drained soils; (3) 73% of the data show that conservation agriculture practices require high inputs, especially N, for improved yield; (4) 63% of the data show that increased yields are obtained with rotation, although calculations often do not include the variations in rainfall within and between seasons; (5) 56% of the data show that reduced tillage with no mulch cover leads to lower yields in semi-arid areas; and (6) when adequate fertiliser is available, rainfall is the most important determinant of yield in southern Africa. It is clear from our results that conservation agriculture needs to be targeted and adapted to specific biophysical conditions for improved impact.
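As an illustration of the stability analysis mentioned above, the sketch below applies a Finlay-Wilkinson-type regression of treatment yield on an environment index. The abstract does not state which stability method was used, so this is an assumed, generic example with made-up yields.

```python
# Generic yield-stability sketch (Finlay-Wilkinson-type regression of a treatment's yield
# on the environmental mean yield); illustrative only, with hypothetical data.
import numpy as np

# Hypothetical maize grain yields (t/ha): rows = treatments, columns = site-years
yields = np.array([
    [2.1, 3.4, 4.8, 5.9, 6.5],   # conservation agriculture
    [2.6, 3.5, 4.3, 5.1, 5.6],   # conventional tillage
])
env_mean = yields.mean(axis=0)   # environment index (mean yield per site-year)

for name, y in zip(["conservation agriculture", "conventional tillage"], yields):
    slope, intercept = np.polyfit(env_mean, y, 1)
    # slope > 1: responsive but less stable in poor seasons; slope < 1: stable but less responsive
    print(f"{name}: stability slope = {slope:.2f}, intercept = {intercept:.2f}")
```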
A theoretical model of inflammation- and mechanotransduction- driven asthmatic airway remodelling
Inflammation, airway hyper-responsiveness and airway remodelling are well-established hallmarks of asthma, but their inter-relationships remain elusive. To better understand their inter-dependence, we develop a mechanochemical morphoelastic model of the airway wall accounting for local volume changes in airway smooth muscle (ASM) and extracellular matrix in response to transient inflammatory or contractile agonist challenges. We use constrained mixture theory, together with a multiplicative decomposition of growth from the elastic deformation, to model the airway wall as a nonlinear fibre-reinforced elastic cylinder. Local contractile agonist drives ASM cell contraction, generating mechanical stresses in the tissue that drive further release of mitogenic mediators and contractile agonists via underlying mechanotransductive signalling pathways. Our model predictions are consistent with previously described inflammation-induced remodelling within an axisymmetric airway geometry. Additionally, our simulations reveal novel mechanotransductive feedback by which hyper-responsive airways exhibit increased remodelling, for example via stress-induced release of pro-mitogenic and pro-contractile cytokines. Simulation results also reveal the emergence of the persistent contractile tone observed in asthmatics, via either a pathological mechanotransductive feedback loop, a failure to clear agonists from the tissue, or a combination of both. Furthermore, we identify various parameter combinations that may contribute to the existence of different asthma phenotypes, and we illustrate a combination of factors which may predispose severe asthmatics to fatal bronchospasms.
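The multiplicative decomposition of growth referred to above is usually written as the standard split of the deformation gradient; a minimal statement of that convention, with illustrative notation that is not necessarily the paper's, is:

```latex
% Standard multiplicative decomposition of growth (assumed form; notation illustrative).
% The total deformation gradient F is split into an elastic part F_e and a growth part F_g:
\[
  \mathbf{F} = \mathbf{F}_e \, \mathbf{F}_g ,
  \qquad
  J = \det\mathbf{F} = \det\mathbf{F}_e \, \det\mathbf{F}_g ,
\]
% so that local volume change (e.g. in ASM and ECM) enters through det F_g, while stress is
% computed from the elastic part F_e of a fibre-reinforced hyperelastic energy W(F_e):
\[
  \boldsymbol{\sigma} = \frac{1}{\det\mathbf{F}_e}\,
    \frac{\partial W}{\partial \mathbf{F}_e}\,\mathbf{F}_e^{\mathsf{T}} .
\]
```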
Adaptable image quality assessment using meta-reinforcement learning of task amenability
The performance of many medical image analysis tasks is strongly associated with image data quality. When developing modern deep learning algorithms, rather than relying on subjective (human-based) image quality assessment (IQA), task amenability potentially provides an objective measure of task-specific image quality. To predict task amenability, an IQA agent is trained using reinforcement learning (RL) with a simultaneously optimised task predictor, such as a classification or segmentation neural network. In this work, we develop transfer learning or adaptation strategies to increase the adaptability of both the IQA agent and the task predictor so that they are less dependent on high-quality, expert-labelled training data. The proposed transfer learning strategy re-formulates the original RL problem for task amenability in a meta-reinforcement learning (meta-RL) framework. The resulting algorithm facilitates efficient adaptation of the agent to different definitions of image quality, each with its own Markov decision process environment including different images, labels and an adaptable task predictor. Our work demonstrates that IQA agents pre-trained on non-expert task labels can be adapted to predict task amenability as defined by expert task labels, using only a small set of expert labels. Using 6644 clinical ultrasound images from 249 prostate cancer patients, our results for image classification and segmentation tasks show that the proposed IQA method can be adapted using data with as few as 19.7% and 29.6% expert-reviewed consensus labels, respectively, and still achieve comparable IQA and task performance, which would otherwise require a training dataset with 100% expert labels.
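As a concrete, if much simplified, picture of reward-driven task-amenability learning, the toy below trains a linear selection policy with REINFORCE, rewarding it with the downstream task accuracy on the images it retains. The paper's meta-RL formulation, neural networks and ultrasound data are not reproduced; everything here is synthetic.

```python
# Toy REINFORCE-style sketch of an "IQA agent" rewarded by downstream task accuracy on
# the samples it keeps (illustrative only; not the paper's meta-RL algorithm or data).
import numpy as np

rng = np.random.default_rng(1)
n, d = 1000, 3
feats = rng.normal(size=(n, d))                       # stand-in image features
quality = 1.0 / (1.0 + np.exp(-3.0 * feats[:, 1]))    # hidden amenability, tied to feature 1
labels = (feats[:, 0] > 0).astype(int)
flipped = rng.uniform(size=n) > quality               # low-amenability samples get bad labels
labels = np.where(flipped, 1 - labels, labels)

w = np.zeros(d)                                        # linear selection policy (the agent)
lr, baseline = 2.0, None

for step in range(2000):
    p = 1.0 / (1.0 + np.exp(-(feats @ w)))             # keep-probability per image
    keep = rng.uniform(size=n) < p                     # stochastic selection (the action)
    if keep.sum() < 20:
        continue
    preds = (feats[keep, 0] > 0).astype(int)           # fixed, trivial "task predictor"
    reward = float((preds == labels[keep]).mean())     # task performance on retained images
    baseline = reward if baseline is None else 0.9 * baseline + 0.1 * reward
    advantage = reward - baseline
    # REINFORCE update for a Bernoulli keep/drop policy with sigmoid parameterisation
    w += lr * ((keep - p)[:, None] * feats).mean(axis=0) * advantage

print("mean keep-probability, good vs poor images:",
      round(p[~flipped].mean(), 2), round(p[flipped].mean(), 2))
```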
On the mechanisms governing gas penetration into a tokamak plasma during a massive gas injection
A new 1D radial fluid code, IMAGINE, is used to simulate the penetration of gas into a tokamak plasma during a massive gas injection (MGI). The main result is that the gas is in general strongly braked as it reaches the plasma, due to mechanisms related to charge exchange and (to a smaller extent) recombination. As a result, only a fraction of the gas penetrates into the plasma. Also, a shock wave is created in the gas, which propagates away from the plasma, braking and compressing the incoming gas. Simulation results are quantitatively consistent, at least in terms of orders of magnitude, with experimental data for a D2 MGI into a JET Ohmic plasma. Simulations of MGI into the background plasma surrounding a runaway electron beam show that if the background electron density is too high, the gas may not penetrate, suggesting a possible explanation for the recent results of Reux et al in JET (2015 Nucl. Fusion 55 093013).
