
    SNPmplexViewer--toward a cost-effective traceability system

    Background: Beef traceability has become mandatory in many regions of the world and is typically achieved through unique numerical codes on ear tags and animal passports. DNA-based traceability uses the animal's own DNA code to identify it and the products derived from it. Using SNaPshot, a primer-extension-based method, a multiplex of 25 SNPs in a single reaction has been used to reduce the cost of genotyping a panel of SNPs useful for identity control. Findings: To further decrease SNaPshot's cost, we introduce the Perl script SNPmplexViewer, which facilitates the analysis of trace files for reactions performed without fluorescent size standards. SNPmplexViewer automatically aligns reference and target trace electropherograms, run with and without fluorescent size standards, respectively. It produces a modified target trace file containing a normalised trace in which the reference size standards are embedded, and also outputs aligned images of the two electropherograms together with a difference profile. Conclusions: Modified trace files generated by SNPmplexViewer enable genotyping of SNaPshot reactions performed without fluorescent size standards, using common fragment-sizing software packages. SNPmplexViewer's normalised output may also improve the genotyping software's performance. Thus, SNPmplexViewer is a general, free tool that reduces SNaPshot's cost and enables fast viewing and comparison of trace electropherograms for fragment analysis. SNPmplexViewer is available at http://cowry.agri.huji.ac.il/cgi-bin/SNPmplexViewer.cgi.
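    The core alignment step can be illustrated with a toy sketch (in Python rather than Perl, on synthetic traces, and assuming a simple integer shift between the two runs; the real tool must also cope with stretch and local distortion): cross-correlating the mean-centered electropherograms yields the lag that best superimposes them, after which the reference run's size-standard peak positions can be mapped into the target run's coordinates.

```python
import random

def best_lag(reference, target):
    """Return the lag s such that target[j] ~ reference[j - s], found by
    maximizing the cross-correlation of the mean-centered traces."""
    n = len(reference)
    ref_mean = sum(reference) / n
    tgt_mean = sum(target) / len(target)
    ref = [x - ref_mean for x in reference]
    tgt = [x - tgt_mean for x in target]

    def score(lag):
        # Sum products over the overlapping region only.
        return sum(tgt[j] * ref[j - lag]
                   for j in range(max(0, lag), min(n, n + lag)))

    return max(range(-(n - 1), n), key=score)

def transfer_standards(standard_peaks, lag):
    """Map reference size-standard peak positions into target-run coordinates."""
    return [p + lag for p in standard_peaks]

# Toy example: the target run is the reference run delayed by 5 data points.
random.seed(0)
reference = [random.random() for _ in range(200)]
target = ([0.0] * 5 + reference)[:200]
lag = best_lag(reference, target)
standards_in_target = transfer_standards([100, 150], lag)
```

    A size standard peaking at point 100 in the reference run is thus located at point 100 + lag in the target run, which is what lets common fragment-sizing software process the modified target file.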

    Recent Advances in Our Understanding of the Role of Meltwater in the Greenland Ice Sheet System

    Purpose of the review: This review discusses the role that meltwater plays within the Greenland ice sheet system. The ice sheet's hydrology is important because it affects mass balance through its impact on meltwater runoff processes and ice dynamics. The review considers recent advances in our understanding of the storage and routing of water through the supraglacial, englacial, and subglacial components of the system and their implications for the ice sheet. Recent findings: There have been dramatic increases in surface meltwater generation and runoff since the early 1990s, due both to increased air temperatures and to decreasing surface albedo. Processes in the subglacial drainage system have similarities to those of valley glaciers, and in a warming climate the efficiency of meltwater routing to the ice sheet margin is likely to increase. The behaviour of the subglacial drainage system appears to limit the impact of increased surface melt on annual rates of ice motion in sections of the ice sheet that terminate on land, while the large volumes of meltwater routed subglacially deliver significant volumes of sediment and nutrients to downstream ecosystems. Summary: Considerable advances have been made recently in our understanding of Greenland ice sheet hydrology and its wider influences. Nevertheless, critical gaps persist in our understanding both of hydrology-dynamics coupling, notably at tidewater glaciers, and of runoff processes, which ensures that projecting Greenland's future mass balance remains challenging. (Nienow, Sole and Cowton's Greenland research has been supported by UK NERC research grants NER/O/S/2003/00620, NE/F021399/1, NE/H024964/1, NE/K015249/1 and NE/K014609/1; Slater has been supported by a NERC PhD studentship.)

    Evidence of an active volcanic heat source beneath the Pine Island Glacier

    Tectonic landforms reveal that the West Antarctic Ice Sheet (WAIS) lies atop a major volcanic rift system. However, identifying subglacial volcanism is challenging. Here we show geochemical evidence of a volcanic heat source upstream of the fast-melting Pine Island Ice Shelf, documented by seawater helium isotope ratios at the front of the Ice Shelf cavity. The localization of mantle helium to glacial meltwater reveals that volcanic heat induces melt beneath the grounded glacier and feeds the subglacial hydrological network crossing the grounding line. The observed transport of mantle helium out of the Ice Shelf cavity indicates that volcanic heat is supplied to the grounded glacier at a rate of ~2500 ± 1700 MW, roughly half that of the active Grímsvötn volcano on Iceland. Our finding of a substantial volcanic heat source beneath a major WAIS glacier highlights the need to understand subglacial volcanism, its hydrologic interaction with the marine margins, and its potential role in the future stability of the WAIS.

    Implementation and performance of adaptive mesh refinement in the Ice Sheet System Model (ISSM v4.14)

    Accurate projections of the evolution of ice sheets in a changing climate require a fine mesh/grid resolution in ice sheet models to correctly capture fundamental physical processes, such as the evolution of the grounding line, the region where grounded ice starts to float. The evolution of the grounding line indeed plays a major role in ice sheet dynamics, as it is a fundamental control on marine ice sheet stability. Numerical modeling of a grounding line requires significant computational resources since the accuracy of its position depends on grid or mesh resolution. A technique that improves accuracy with reduced computational cost is the adaptive mesh refinement (AMR) approach. We present here the implementation of the AMR technique in the finite element Ice Sheet System Model (ISSM) to simulate grounding line dynamics under two different benchmarks: MISMIP3d and MISMIP+. We test different refinement criteria: (a) distance around the grounding line, (b) an a posteriori error estimator, the Zienkiewicz–Zhu (ZZ) estimator, and (c) different combinations of (a) and (b). In both benchmarks, the ZZ error estimator presents high values around the grounding line. In the MISMIP+ setup, this estimator also presents high values in the grounded part of the ice sheet, following the complex shape of the bedrock geometry. The ZZ estimator helps guide the refinement procedure such that AMR performance is improved. Our results show that computational time with AMR depends on the required accuracy, but in all cases, it is significantly shorter than for uniformly refined meshes. We conclude that AMR without an associated error estimator should be avoided, especially for real glaciers that have a complex bed geometry.
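    Refinement criterion (a) can be sketched in a few lines (a minimal illustration, not ISSM code: the mesh is reduced to element centroids and the grounding line to a polyline of sample points):

```python
import math

def refine_by_distance(element_centroids, grounding_line, radius):
    """Criterion (a): flag for refinement every element whose centroid lies
    within `radius` of any grounding-line point."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return [any(dist(c, g) <= radius for g in grounding_line)
            for c in element_centroids]

# Toy mesh: grounding line along x = 10, element centroids at x = 0..20, y = 5.
gl = [(10.0, float(y)) for y in range(0, 11)]
centroids = [(float(x), 5.0) for x in range(0, 21)]
flags = refine_by_distance(centroids, gl, radius=3.0)
```

    Flagged elements would then be subdivided and the criterion re-evaluated as the grounding line migrates; an error-estimator criterion such as ZZ plays the same role but flags elements by estimated discretization error instead of distance.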

    Coupling computer-interpretable guidelines with a drug-database through a web-based system – The PRESGUID project

    BACKGROUND: Clinical Practice Guidelines (CPGs) available today are not extensively used, due to a lack of proper integration into clinical settings, a lack of links to knowledge-related information resources, and a lack of decision support at the point of care in a particular clinical context. OBJECTIVE: The PRESGUID project (PREScription and GUIDelines) aims to improve the assistance provided by guidelines. The project proposes an online service enabling physicians to consult computerized CPGs linked to drug databases for easier integration into the healthcare process. METHODS: Computable CPGs are structured as decision trees and coded in XML format. Recommendations related to drug classes are tagged with ATC codes. A mapping module couples the computerized guidelines with a drug database, which contains detailed information about each usable specific medication. In this way, therapeutic recommendations are backed by current, up-to-date information from the database. RESULTS: Two authoritative CPGs, originally disseminated as static textual documents, have been implemented to validate the computerization process and to illustrate the usefulness of the resulting automated CPGs and their coupling with a drug database. We discuss the advantages of this approach for practitioners and the implications for both guideline developers and drug database providers. Other CPGs will be implemented and evaluated in real conditions by clinicians working in different health institutions.
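    The coupling idea can be sketched as follows (the XML element names, the sample guideline, and the two-entry "database" are all illustrative, not PRESGUID's actual schema, though A10BA02 and A10BB09 are real ATC codes): the decision tree is walked using the patient's answers until a recommendation node is reached, and its ATC tag is then resolved against the drug database.

```python
import xml.etree.ElementTree as ET

# Hypothetical guideline fragment: a decision tree whose leaf recommendations
# carry ATC codes for drug classes.
GUIDELINE = """
<node question="Type 2 diabetes confirmed?">
  <node answer="yes" question="Renal impairment?">
    <node answer="no"><recommend atc="A10BA02"/></node>
    <node answer="yes"><recommend atc="A10BB09"/></node>
  </node>
</node>
"""

DRUG_DB = {  # toy stand-in for a full drug database keyed by ATC code
    "A10BA02": ["metformin 500 mg", "metformin 850 mg"],
    "A10BB09": ["gliclazide 80 mg"],
}

def walk(node, answers):
    """Follow the branch matching each successive answer until a
    recommendation node is reached; return its ATC code."""
    rec = node.find("recommend")
    if rec is not None:
        return rec.get("atc")
    for child in node.findall("node"):
        if child.get("answer") == answers[0]:
            return walk(child, answers[1:])
    return None

root = ET.fromstring(GUIDELINE)
atc = walk(root, ["yes", "no"])
drugs = DRUG_DB.get(atc, [])
```

    Because the guideline stores only the ATC code, the list of concrete medications shown to the physician always reflects the current content of the drug database rather than the text of the guideline.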

    Decoding of Superimposed Traces Produced by Direct Sequencing of Heterozygous Indels

    Direct Sanger sequencing of a diploid template containing a heterozygous insertion or deletion results in a difficult-to-interpret mixed trace formed by two allelic traces superimposed onto each other. Existing computational methods for deconvolution of such traces require knowledge of a reference sequence or the availability of both direct and reverse mixed sequences of the same template. We describe a simple yet accurate method, which uses dynamic programming optimization to predict superimposed allelic sequences solely from a string of letters representing peaks within an individual mixed trace. We used the method to decode 104 human traces (mean length 294 bp) containing heterozygous indels of 5 to 30 bp, with a mean of 99.1% of bases per allelic sequence reconstructed correctly and unambiguously. Simulations with artificial sequences have demonstrated that the method yields accurate reconstructions when (1) the allelic sequences forming the mixed trace are sufficiently similar, (2) the analyzed fragment is significantly longer than the indel, and (3) multiple indels, if present, are well spaced. Because these conditions occur in most encountered DNA sequences, the method is widely applicable. It is available as a free Web application, Indelligent, at http://ctap.inhs.uiuc.edu/dmitriev/indel.asp.
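    The problem the method inverts can be illustrated with a toy forward model (a simplification: real traces carry peak heights and noise, not clean base sets, and this sketch is not Indelligent's algorithm): a heterozygous deletion shifts one allele relative to the other, so every position downstream of the indel superimposes two, often different, bases.

```python
def superimpose(allele_a, allele_b):
    """Simulate direct sequencing of a heterozygous template: each position
    of the mixed trace shows the set of bases contributed by both alleles."""
    length = max(len(allele_a), len(allele_b))
    mixed = []
    for i in range(length):
        peaks = set()
        if i < len(allele_a):
            peaks.add(allele_a[i])
        if i < len(allele_b):
            peaks.add(allele_b[i])
        mixed.append("".join(sorted(peaks)))
    return mixed

# Allele B carries a 3 bp deletion relative to allele A, so positions before
# the indel show single peaks and positions after it show double peaks.
allele_a = "ACGTACGTACGT"
allele_b = allele_a[:4] + allele_a[7:]   # delete 3 bases after position 4
mixed = superimpose(allele_a, allele_b)
```

    The dynamic programming described in the abstract runs this model in reverse: given only the per-position peak string, it searches for the pair of allelic sequences whose superposition best explains the mixed trace.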

    Algorithms for optimizing drug therapy

    BACKGROUND: Drug therapy has become increasingly efficient, with more drugs available for treatment of an ever-growing number of conditions. Yet, drug use is reported to be suboptimal in several respects, such as dosage, patient adherence and outcome of therapy. The aim of the current study was to investigate the possibility of optimizing drug therapy using computer programs available on the Internet. METHODS: One hundred and ten officially endorsed text documents, published between 1996 and 2004, containing guidelines for drug therapy in 246 disorders, were analyzed with regard to information about patient-, disease- and drug-related factors and the relationships between these factors. This information was used to construct algorithms for identifying optimum treatment in each of the studied disorders. These algorithms were categorized in order to define as few models as possible that could still accommodate the identified factors and the relationships between them. The resulting program prototypes were implemented in HTML (user interface) and JavaScript (program logic). RESULTS: Three types of algorithms were sufficient for the intended purpose. The simplest type is a list of factors, each of which implies that the particular patient should or should not receive treatment; this is adequate in situations where only one treatment exists. The second, more elaborate type is required when treatment can be provided using drugs from different pharmacological classes and the selection of drug class depends on patient characteristics; an easily implemented set of if-then statements was able to manage the identified information in such instances. The third type was needed in the few situations where the selection and dosage of drugs depended on the degree to which one or more patient-specific factors were present; in these cases the implementation of an established decision model based on fuzzy sets was required. Computer programs based on one of these three models could be constructed for all but one of the studied disorders. The single exception was depression, where reliable relationships between patient characteristics, drug classes and outcome of therapy remain to be defined. CONCLUSION: Algorithms for optimizing drug therapy can, with presumably rare exceptions, be developed for any disorder using standard Internet programming methods.
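    The second algorithm type can be sketched as a plain set of if-then rules (the factors, thresholds and drug classes below are illustrative, loosely inspired by hypertension guidance, and are not taken from the study's 110 documents):

```python
def select_drug_class(patient):
    """Type-2 algorithm sketch: choose a drug class by ordered if-then rules
    over patient characteristics. Earlier rules take precedence."""
    if patient.get("pregnant"):
        return "methyldopa"                 # avoid classes contraindicated in pregnancy
    if patient.get("heart_failure"):
        return "ACE inhibitor"              # compelling indication overrides age rule
    if patient.get("age", 0) >= 55:
        return "calcium channel blocker"
    return "ACE inhibitor"                  # default first-line class

choice = select_drug_class({"age": 62, "heart_failure": False})
```

    The first algorithm type would reduce to a checklist of such conditions each returning treat/do-not-treat, while the third would replace the hard thresholds with fuzzy membership degrees that are combined before a class and dosage are chosen.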

    Diverse M-Best Solutions by Dynamic Programming

    Many computer vision pipelines involve dynamic programming primitives such as finding a shortest path or the minimum energy solution in a tree-shaped probabilistic graphical model. In such cases, extracting not merely the best but the set of M-best solutions is useful for generating a rich collection of candidate proposals for downstream processing. In this work, we show how M-best solutions of tree-shaped graphical models can be obtained by dynamic programming on a special graph with M layers. The proposed multi-layer concept is optimal for searching M-best solutions, and so flexible that it can also approximate M-best diverse solutions. We illustrate its usefulness with applications to object detection, panorama stitching and centerline extraction.
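    For a chain (the simplest tree), the multi-layer idea can be sketched as dynamic programming that keeps the M best partial solutions at every (position, state) pair, which is equivalent to searching a graph with M layers (a minimal illustration of exact M-best search, not the paper's code, and without the diversity term):

```python
import heapq

def m_best_chain(unary, pairwise, m):
    """Return the m lowest-cost label sequences of a chain model.
    unary[t][s]: cost of state s at position t; pairwise[p][s]: transition cost.
    Each (position, state) keeps its m best partial solutions, so any of the
    global m best survives truncation -- the search stays exact."""
    n, k = len(unary), len(unary[0])
    # best[s]: list of (cost, labeling) partial solutions ending in state s
    best = [[(unary[0][s], [s])] for s in range(k)]
    for t in range(1, n):
        new_best = []
        for s in range(k):
            cands = [(c + pairwise[p][s] + unary[t][s], lab + [s])
                     for p in range(k) for (c, lab) in best[p]]
            new_best.append(heapq.nsmallest(m, cands))
        best = new_best
    finals = [sol for per_state in best for sol in per_state]
    return heapq.nsmallest(m, finals)

# Toy chain: 3 positions, 2 states, switching states costs 1.
unary = [[0, 1], [0, 2], [0, 1]]
pairwise = [[0, 1], [1, 0]]
solutions = m_best_chain(unary, pairwise, m=3)
```

    Setting m = 1 recovers ordinary Viterbi-style decoding; for tree-shaped models the same top-m merge is applied at every junction during message passing.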