Cholesterol reduction using manufactured foods high in monounsaturated fatty acids: a randomized crossover study
In two separate studies, the cholesterol-lowering efficacy of a diet high in monounsaturated fatty acids (MUFA) was evaluated by means of a randomized crossover trial. In both studies subjects were randomized to receive either a high-MUFA diet or the control diet first, which they followed for a period of 8 weeks; following a washout period of 4–6 weeks they were transferred onto the opposing diet for a further period of 8 weeks. In one study subjects were healthy middle-aged men (n 30), and in the other they were young men (n 23) with a family history of CHD recruited from two centres (Guildford and Dublin). The two studies were conducted over the same time period using identical foods and study designs. Subjects consumed 38 % energy as fat, with 18 % energy as MUFA and 10 % as saturated fatty acids (MUFA diet), or 13 % energy as MUFA and 16 % as saturated fatty acids (control diet). The polyunsaturated fatty acid content of each diet was 7 %. The diets were achieved by providing subjects with manufactured foods such as spreads, ‘ready meals’, biscuits, puddings and breads, which, apart from their fatty acid compositions, were identical for both diets. Subjects were blind to which of the diets they were following on both arms of the study. Weight changes on the diets were less than 1 kg. In the groups combined (n 53), mean total and LDL-cholesterol levels were significantly lower at the end of the MUFA diet than the control diet by 0⋅29 (SD 0⋅61) mmol/l.
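A crossover trial of this kind is analysed within subject: each participant's end-of-period value on the MUFA diet is compared with his own value on the control diet. The following is a minimal sketch of such a paired comparison, with made-up cholesterol values rather than study data; a full analysis would also check for period and carry-over effects.

```python
# Minimal sketch of the within-subject comparison used in a two-period
# crossover trial. Values are illustrative, not data from the study.
import numpy as np
from scipy import stats

# End-of-period total cholesterol (mmol/l), one value per diet per subject.
mufa_diet    = np.array([5.1, 5.6, 4.9, 6.0, 5.4, 5.8])
control_diet = np.array([5.5, 5.8, 5.2, 6.3, 5.7, 6.2])

diff = mufa_diet - control_diet              # within-subject differences
t_stat, p_value = stats.ttest_rel(mufa_diet, control_diet)

print(f"mean difference {diff.mean():.2f} (SD {diff.std(ddof=1):.2f}) mmol/l, "
      f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```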
On the evaluation of uncertainties in climate models
The prediction of the Earth's climate system is of immediate importance to many decision-makers. Anthropogenic climate change is a key area of public policy and will likely have widespread impacts across the world over the 21st Century. Understanding potential climate changes, and their magnitudes, is important for effective decision making. The principal tools used to provide such climate predictions are physical models, some of the largest and most complex models ever built. Evaluation of state-of-the-art climate models is vital to understanding our ability to make statements about future climate. This Thesis presents a framework for the analysis of climate models in light of their inherent uncertainties and principles of statistical good practice. The assessment of uncertainties in model predictions to date is incomplete and warrants more attention than it has previously received. This Thesis aims to motivate a more thorough investigation of climate models as fit for use in decision-support. The behaviour of climate models is explored using data from the largest ever climate modelling experiment, the climateprediction.net project. The availability of a large set of simulations allows novel methods of analysis for the exploration of the uncertainties present in climate simulations. It is shown that climate models are capable of producing very different behaviour and that the associated uncertainties can be large. Whilst no results are found that cast doubt on the hypothesis that greenhouse gases are a significant driver of climate change, the range of behaviour shown in the climateprediction.net data set has implications for our ability to predict future climate and for the interpretation of climate model output. It is argued that uncertainties should be explored and communicated to users of climate predictions in such a way that decision-makers are aware of the relative robustness of climate model output.
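One simple way to explore the kind of ensemble-wide uncertainty discussed here is to summarize the spread of a diagnostic across all simulations rather than report a single "best" run. The sketch below is purely illustrative, using synthetic numbers rather than climateprediction.net output.

```python
# Sketch: summarizing spread across a large ensemble of model runs.
# 'ensemble' stands in for one scalar diagnostic per simulation
# (e.g. global-mean warming); the values here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
ensemble = rng.normal(loc=3.0, scale=1.2, size=2000)   # hypothetical runs

low, median, high = np.percentile(ensemble, [5, 50, 95])
print(f"ensemble median {median:.1f} K, 5-95% range {low:.1f} to {high:.1f} K")
```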
Glomerular Filtration Rate Following Pediatric Liver Transplantation—The SPLIT Experience
Impaired kidney function is a well-recognized complication following liver transplantation (LT). Studies of this complication in children have been limited by small numbers and insensitive outcome measures. Our aim was to define the prevalence of, and identify risk factors for, post-LT kidney dysfunction in a multicenter pediatric cohort using measured glomerular filtration rate (mGFR). We conducted a cross-sectional study of 397 patients enrolled in the Studies in Pediatric Liver Transplantation (SPLIT) registry, using mGFR < 90 mL/min/1.73 m² as the primary outcome measure. Median age at LT was 2.2 years. Primary diagnoses were biliary atresia (44.6%), fulminant liver failure (9.8%), metabolic liver disease (16.4%), chronic cholestatic liver disease (13.1%), cryptogenic cirrhosis (4.3%) and other (11.8%). At a mean of 5.2 years post-LT, 17.6% of patients had a mGFR < 90 mL/min/1.73 m². In univariate analysis, factors associated with this outcome were transplant center, age at LT, primary diagnosis, calculated GFR (cGFR) at LT and 12 months post-LT, primary immunosuppression, early post-LT kidney complications, age at mGFR, height and weight Z-scores at 12 months post-LT. In multivariate analysis, independent variables associated with a mGFR < 90 mL/min/1.73 m² were primary immunosuppression, age at LT, cGFR at LT and height Z-score at 12 months post-LT.
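The multivariate step described above amounts to regressing a binary outcome (mGFR < 90 mL/min/1.73 m²) on several candidate predictors at once, commonly via logistic regression. The sketch below illustrates that idea with hypothetical variable names and simulated data; it is not the SPLIT analysis itself.

```python
# Sketch of a multivariable logistic regression for a binary outcome
# (measured GFR < 90 mL/min/1.73 m2). Variable names, coefficients and
# data are hypothetical, not the SPLIT registry data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "age_at_lt_yr":  rng.uniform(0.5, 17.0, n),   # age at transplant
    "cgfr_at_lt":    rng.normal(100.0, 25.0, n),  # calculated GFR at LT
    "height_z_12mo": rng.normal(-1.0, 1.2, n),    # height Z-score at 12 months
})
# Simulate an outcome that depends weakly on the predictors.
linpred = -2.0 + 0.08 * df["age_at_lt_yr"] - 0.02 * (df["cgfr_at_lt"] - 100.0)
df["low_mgfr"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-linpred))).astype(int)

X = sm.add_constant(df[["age_at_lt_yr", "cgfr_at_lt", "height_z_12mo"]])
model = sm.Logit(df["low_mgfr"], X).fit(disp=0)
print(np.exp(model.params))   # odds ratio per unit change in each predictor
```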
Personalized therapy for mycophenolate: Consensus report by the International Association of Therapeutic Drug Monitoring and Clinical Toxicology
When mycophenolic acid (MPA) was originally marketed for immunosuppressive therapy, fixed doses were recommended by the manufacturer. Awareness of the potential for more personalized dosing has led to the development of methods to estimate MPA area under the curve based on the measurement of drug concentrations in only a few samples. This approach is feasible in the clinical routine and has proven successful in terms of correlation with outcome. However, the search for superior correlates has continued, and numerous studies in search of biomarkers that could better predict the perfect dosage for the individual patient have been published. As it was considered timely for an updated and comprehensive presentation of consensus on the status for personalized treatment with MPA, this report was prepared following an initiative from members of the International Association of Therapeutic Drug Monitoring and Clinical Toxicology (IATDMCT). Topics included are the criteria for analytics, methods to estimate exposure including pharmacometrics, the potential influence of pharmacogenetics, development of biomarkers, and the practical aspects of implementation of target concentration intervention. For selected topics with sufficient evidence, such as the application of limited sampling strategies for MPA area under the curve, graded recommendations on target ranges are presented. To provide a comprehensive review, this report also includes updates on the status of potential biomarkers including those which may be promising but with a low level of evidence. In view of the fact that there are very few new immunosuppressive drugs under development for the transplant field, it is likely that MPA will continue to be prescribed on a large scale in the upcoming years. Discontinuation of therapy due to adverse effects is relatively common, increasing the risk for late rejections, which may contribute to graft loss. Therefore, the continued search for innovative methods to better personalize MPA dosage is warranted.
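Limited-sampling strategies of the sort referred to above typically estimate the full MPA AUC from two or three timed concentrations using a published regression equation. The sketch below shows the general form only; the coefficients and sampling times are placeholders, not a validated equation from the consensus report.

```python
# Generic limited-sampling sketch: estimate the MPA AUC(0-12 h) from a few
# timed plasma concentrations via a linear regression equation of the form
#   AUC = a0 + a1*C(t0) + a2*C(t1) + a3*C(t3).
# The coefficients and sampling times are placeholders, not a validated model.

def estimate_mpa_auc(c_pre: float, c_1h: float, c_3h: float,
                     coeffs=(7.8, 4.6, 1.9, 3.2)) -> float:
    """Return an estimated AUC (mg*h/L) from pre-dose, 1 h and 3 h samples."""
    a0, a1, a2, a3 = coeffs
    return a0 + a1 * c_pre + a2 * c_1h + a3 * c_3h

# Example with hypothetical concentrations (mg/L):
auc_estimate = estimate_mpa_auc(c_pre=2.1, c_1h=8.4, c_3h=4.0)
print(f"estimated MPA AUC(0-12 h) ~ {auc_estimate:.1f} mg*h/L")
```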
Getting better judgement working party: bias, guess and expert judgement, in actuarial work ‐ Abstract of the London Discussion
SageFS: the location aware wide area distributed filesystem
Modern distributed applications often have to make a choice about how to maintain data within the system. Distributed storage systems are often self-contained in a single cluster or are a black box, as data placement is unknown by an application. Using wide area distributed storage means either using multiple APIs or losing control of data placement. This work introduces Sage, a distributed filesystem that aggregates multiple backends under a common API. It also gives applications the ability to decide where file data is stored in the aggregation. By leveraging Sage, users can create applications using multiple distributed backends with the same API, and still decide where to physically store any given file. Sage uses a layered design where API calls are translated into the appropriate set of backend calls and then sent to the correct physical backend. This way Sage can hold many backends at once, making them appear as the same filesystem. The performance overhead of using Sage is shown to be minimal over directly using the backend stores, and Sage is also shown to scale with respect to backends used. A case study shows file placement in action and how applications can take advantage of the feature.
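As a rough illustration of the layered design described in the abstract (a common front-end API, multiple pluggable backends, and caller-controlled placement), here is a hypothetical sketch; the class and method names are invented and do not reproduce the actual SageFS interface.

```python
# Hypothetical sketch of an aggregating filesystem layer: a single front-end
# API, pluggable backends, and caller-controlled file placement. Class and
# method names are invented; this is not the real SageFS interface.
from abc import ABC, abstractmethod
from typing import Dict, Optional


class Backend(ABC):
    @abstractmethod
    def put(self, path: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, path: str) -> bytes: ...


class MemoryBackend(Backend):
    """Stand-in for a real store (an object store, cluster FS, etc.)."""
    def __init__(self) -> None:
        self._files: Dict[str, bytes] = {}

    def put(self, path: str, data: bytes) -> None:
        self._files[path] = data

    def get(self, path: str) -> bytes:
        return self._files[path]


class AggregateFS:
    """Translates common API calls into calls on the chosen physical backend."""
    def __init__(self, backends: Dict[str, Backend], default: str) -> None:
        self.backends = backends
        self.default = default
        self._placement: Dict[str, str] = {}      # file path -> backend name

    def write(self, path: str, data: bytes, backend: Optional[str] = None) -> None:
        name = backend or self.default            # the caller decides placement
        self.backends[name].put(path, data)
        self._placement[path] = name

    def read(self, path: str) -> bytes:
        return self.backends[self._placement[path]].get(path)


fs = AggregateFS({"site-a": MemoryBackend(), "site-b": MemoryBackend()},
                 default="site-a")
fs.write("/logs/app.log", b"hello", backend="site-b")   # explicit placement
print(fs.read("/logs/app.log"))
```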
