    On the evaluation of uncertainties in climate models

    The prediction of the Earth's climate system is of immediate importance to many decision-makers. Anthropogenic climate change is a key area of public policy and will likely have widespread impacts across the world over the 21st century. Understanding potential climate changes, and their magnitudes, is important for effective decision making. The principal tools used to provide such climate predictions are physical models, some of the largest and most complex models ever built. Evaluation of state-of-the-art climate models is vital to understanding our ability to make statements about future climate. This Thesis presents a framework for the analysis of climate models in light of their inherent uncertainties and principles of statistical good practice. The assessment of uncertainties in model predictions to date is incomplete and warrants more attention than it has previously received. This Thesis aims to motivate a more thorough investigation of climate models as fit for use in decision support. The behaviour of climate models is explored using data from the largest ever climate modelling experiment, the climateprediction.net project. The availability of a large set of simulations allows novel methods of analysis for the exploration of the uncertainties present in climate simulations. It is shown that climate models are capable of producing very different behaviour and that the associated uncertainties can be large. Whilst no results are found that cast doubt on the hypothesis that greenhouse gases are a significant driver of climate change, the range of behaviour shown in the climateprediction.net data set has implications for our ability to predict future climate and for the interpretation of climate model output. It is argued that uncertainties should be explored and communicated to users of climate predictions in such a way that decision-makers are aware of the relative robustness of climate model output.
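
    The climateprediction.net experiment is a perturbed-physics grand ensemble, and the analyses described above reduce to summarising the spread of an output variable across many simulations. As a minimal illustrative sketch (not code from the Thesis), the Python snippet below computes an ensemble median and a 5-95% range for a synthetic set of end-of-run warming values; the distribution and its parameters are invented for the example.

        import numpy as np

        # Hypothetical ensemble output: end-of-run global-mean warming (deg C)
        # for 1000 perturbed-physics simulations. Real values would come from
        # the climateprediction.net archive; these are synthetic stand-ins.
        rng = np.random.default_rng(0)
        warming = rng.gamma(shape=4.0, scale=0.8, size=1000)

        lo, med, hi = np.percentile(warming, [5, 50, 95])
        print(f"ensemble median warming: {med:.2f} C")
        print(f"5-95% ensemble range: {lo:.2f} to {hi:.2f} C")

    Reporting a range rather than a single best estimate is one way of communicating the relative robustness of model output to decision-makers.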

    Glomerular Filtration Rate Following Pediatric Liver Transplantation—The SPLIT Experience

    Impaired kidney function is a well-recognized complication following liver transplantation (LT). Studies of this complication in children have been limited by small numbers and insensitive outcome measures. Our aim was to define the prevalence of, and identify risk factors for, post-LT kidney dysfunction in a multicenter pediatric cohort using measured glomerular filtration rate (mGFR). We conducted a cross-sectional study of 397 patients enrolled in the Studies in Pediatric Liver Transplantation (SPLIT) registry, using mGFR < 90 mL/min/1.73 m² as the primary outcome measure. Median age at LT was 2.2 years. Primary diagnoses were biliary atresia (44.6%), fulminant liver failure (9.8%), metabolic liver disease (16.4%), chronic cholestatic liver disease (13.1%), cryptogenic cirrhosis (4.3%) and other (11.8%). At a mean of 5.2 years post-LT, 17.6% of patients had an mGFR < 90 mL/min/1.73 m². In univariate analysis, factors associated with this outcome were transplant center, age at LT, primary diagnosis, calculated GFR (cGFR) at LT and 12 months post-LT, primary immunosuppression, early post-LT kidney complications, age at mGFR, and height and weight Z-scores at 12 months post-LT. In multivariate analysis, independent variables associated with an mGFR < 90 mL/min/1.73 m² were primary immunosuppression, age at LT, cGFR at LT and height Z-score at 12 months post-LT.
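
    As an illustration of the multivariate step, a logistic regression of the binary outcome (mGFR < 90 mL/min/1.73 m²) on the reported independent predictors might look like the Python sketch below. The SPLIT registry data are not public, so the data frame here is synthetic and the variable names are hypothetical.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Synthetic stand-in for the cohort (n = 397); all values invented.
        rng = np.random.default_rng(1)
        n = 397
        df = pd.DataFrame({
            "age_at_lt": rng.exponential(3.0, n),       # age at LT, years
            "cgfr_at_lt": rng.normal(110.0, 25.0, n),   # calculated GFR at LT
            "height_z_12mo": rng.normal(-0.5, 1.0, n),  # height Z-score, 12 mo
        })
        # Outcome made to depend weakly on cGFR at LT, for demonstration only.
        logit_p = -1.5 - 0.02 * (df["cgfr_at_lt"] - 110.0)
        df["low_mgfr"] = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(int)

        model = smf.logit("low_mgfr ~ age_at_lt + cgfr_at_lt + height_z_12mo", data=df).fit()
        print(model.summary())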

    Personalized therapy for mycophenolate: Consensus report by the International Association of Therapeutic Drug Monitoring and Clinical Toxicology

    When mycophenolic acid (MPA) was originally marketed for immunosuppressive therapy, fixed doses were recommended by the manufacturer. Awareness of the potential for more personalized dosing has led to development of methods to estimate MPA area under the curve based on the measurement of drug concentrations in only a few samples. This approach is feasible in the clinical routine and has proven successful in terms of correlation with outcome. However, the search for superior correlates has continued, and numerous studies in search of biomarkers that could better predict the perfect dosage for the individual patient have been published. As it was considered timely for an updated and comprehensive presentation of consensus on the status for personalized treatment with MPA, this report was prepared following an initiative from members of the International Association of Therapeutic Drug Monitoring and Clinical Toxicology (IATDMCT). Topics included are the criteria for analytics, methods to estimate exposure including pharmacometrics, the potential influence of pharmacogenetics, development of biomarkers, and the practical aspects of implementation of target concentration intervention. For selected topics with sufficient evidence, such as the application of limited sampling strategies for MPA area under the curve, graded recommendations on target ranges are presented. To provide a comprehensive review, this report also includes updates on the status of potential biomarkers including those which may be promising but with a low level of evidence. In view of the fact that there are very few new immunosuppressive drugs under development for the transplant field, it is likely that MPA will continue to be prescribed on a large scale in the upcoming years. Discontinuation of therapy due to adverse effects is relatively common, increasing the risk for late rejections, which may contribute to graft loss. Therefore, the continued search for innovative methods to better personalize MPA dosage is warranted.
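
    Limited sampling strategies of the kind referred to above are typically multiple linear regression equations that estimate AUC(0-12h) from two or three timed concentrations. The Python sketch below shows the general form; the time points are typical of published strategies, but the coefficients are placeholders, not a validated equation.

        def estimate_mpa_auc(c0: float, c1: float, c3: float) -> float:
            """Estimate MPA AUC(0-12h) in mg*h/L from concentrations (mg/L)
            taken pre-dose (c0) and at 1 h (c1) and 3 h (c3) post-dose.
            Coefficients are hypothetical placeholders."""
            intercept, b0, b1, b3 = 8.0, 1.5, 1.0, 3.5
            return intercept + b0 * c0 + b1 * c1 + b3 * c3

        # Example: compare the estimate against a target exposure window.
        auc = estimate_mpa_auc(c0=2.1, c1=8.4, c3=4.2)
        print(f"estimated AUC(0-12h): {auc:.1f} mg*h/L")

    In practice such an equation is only valid for the population and formulation in which it was derived, which is one reason the report grades its recommendations by level of evidence.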

    Book Review: Handbook of Lipid Research 2. “The Fat Soluble Vitamins”

    Book Reviews: Diabetes, Obesity, and Hyperlipidemias

    SageFS: the location aware wide area distributed filesystem

    Modern distributed applications often have to make a choice about how to maintain data within the system. Distributed storage systems are often self-contained in a single cluster or are a black box, as data placement is unknown by an application. Using wide area distributed storage either means using multiple APIs or loss of control over data placement. This work introduces Sage, a distributed filesystem that aggregates multiple backends under a common API. It also gives applications the ability to decide where file data is stored in the aggregation. By leveraging Sage, users can create applications using multiple distributed backends with the same API, and still decide where to physically store any given file. Sage uses a layered design where API calls are translated into the appropriate set of backend calls and then sent to the correct physical backend. This way Sage can hold many backends at once, making them appear as the same filesystem. The performance overhead of using Sage is shown to be minimal over directly using the backend stores, and Sage is also shown to scale with respect to the number of backends used. A case study shows file placement in action and how applications can take advantage of the feature.
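
    A minimal Python sketch of the layered design described above: a common API object translates each call into an operation on a caller-selected backend. Class and method names here are illustrative, not Sage's actual API.

        from typing import Dict

        class MemoryBackend:
            """Stand-in for one physical store (e.g. a remote object store)."""
            def __init__(self) -> None:
                self._files: Dict[str, bytes] = {}
            def put(self, path: str, data: bytes) -> None:
                self._files[path] = data
            def get(self, path: str) -> bytes:
                return self._files[path]

        class AggregateFS:
            """Common API over several backends; the caller picks placement."""
            def __init__(self, backends: Dict[str, MemoryBackend]) -> None:
                self.backends = backends
            def write(self, path: str, data: bytes, backend: str) -> None:
                # Placement is explicit: the caller names the target backend.
                self.backends[backend].put(path, data)
            def read(self, path: str, backend: str) -> bytes:
                return self.backends[backend].get(path)

        fs = AggregateFS({"cluster": MemoryBackend(), "wan": MemoryBackend()})
        fs.write("/logs/a.txt", b"hello", backend="wan")
        print(fs.read("/logs/a.txt", backend="wan"))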

    Toxicogenetic markers of liver dysfunction

    Using Cyclosporine Neoral Immediately After Liver Transplantation
