
    Quantitative Evidence for Revising the Definition of Primary Graft Dysfunction after Lung Transplant

    RATIONALE: Primary graft dysfunction (PGD) is a form of acute lung injury that occurs after lung transplantation. The definition of PGD was standardized in 2005. Since that time, clinical practice has evolved, and this definition is increasingly used as a primary endpoint for clinical trials; therefore, validation is warranted. OBJECTIVES: We sought to determine whether refinements to the 2005 consensus definition could further improve construct validity. METHODS: Data from the Lung Transplant Outcomes Group multicenter cohort were used to compare variations on the PGD definition, including alternate oxygenation thresholds, inclusion of additional severity groups, and effects of procedure type and mechanical ventilation. Convergent and divergent validity were compared for mortality prediction and concurrent lung injury biomarker discrimination. MEASUREMENTS AND MAIN RESULTS: A total of 1,179 subjects from 10 centers were enrolled from 2007 to 2012. Median length of follow-up was 4 years (interquartile range = 2.4-5.9). No mortality differences were noted between no PGD (grade 0) and mild PGD (grade 1). Significantly better mortality discrimination was evident for all definitions using later time points (48, 72, or 48-72 hours; P < 0.001). Biomarker divergent discrimination was superior when collapsing grades 0 and 1. Additional severity grades, use of mechanical ventilation, and transplant procedure type had minimal or no effect on mortality or biomarker discrimination. CONCLUSIONS: The PGD consensus definition can be simplified by combining lower PGD grades. Construct validity of grading was present regardless of transplant procedure type or use of mechanical ventilation. Additional severity categories had minimal impact on mortality or biomarker discrimination.

    Bacterial consortium for copper extraction from sulphide ore consisting mainly of chalcopyrite

    The mining industry is seeking bacterial consortia for the economical extraction of copper from low-grade ores. The main objective was to identify an optimal bacterial consortium, assembled from several bacterial strains, for leaching copper from chalcopyrite. The major native bacterial species involved in the bioleaching of sulphide ore (Acidithiobacillus ferrooxidans, Acidithiobacillus thiooxidans, Leptospirillum ferrooxidans and Leptospirillum ferriphilum) were isolated, and assays were performed with the individual bacteria and with each in combination with At. thiooxidans. The consortium composed of At. ferrooxidans and At. thiooxidans removed 70% of the copper from the selected ore in 35 days, a significant difference from the other consortia, which removed only 35% of the copper in the same period. To validate the assays, the process was scaled up in columns, where the bacterial consortium achieved a higher percentage of copper extraction relative to the control.

    The CAMALIOT project

    This invited presentation was given at an information event about the European Space Agency’s (ESA) Navigation Innovation and Support Programme (NAVISP) hosted by the Austrian Agency for the Promotion of Science (FFG) in preparation for the ESA Ministerial Conference 2022. The presentation covered the CAMALIOT project, which is currently funded through NAVISP and by FFG, outlining the initial results and the next steps in the project. In particular, information was provided about the CAMALIOT crowdsourcing campaign (being run by IIASA), the status of the CAMALIOT machine learning infrastructure, and the science use cases in the project.

    A Cloud-native Approach for Processing of Crowdsourced GNSS Observations and Machine Learning at Scale: A Case Study from the CAMALIOT Project

    Modern smartphones running Android version 7.0 and higher allow the acquisition of raw dual-frequency multi-constellation GNSS observations. This paves the way for GNSS community data to be potentially exploited for precise positioning, GNSS reflectometry, or geoscience applications at large. The continuously expanding global GNSS infrastructure, along with the enormous volume of prospective GNSS community data, however, brings major challenges related to data acquisition, storage, and subsequent processing for deriving various parameters of interest. In addition, such large datasets can no longer be managed manually, creating the need for fully automated and sophisticated data processing pipelines. Application of Machine Learning Technology for GNSS IoT data fusion (CAMALIOT) was an ESA NAVISP Element 1 project (NAVISP-EL1-038.2) with activities aimed at addressing the aforementioned points related to GNSS community data and their exploitation for scientific applications with the use of Machine Learning (ML). This contribution provides an overview of the CAMALIOT project, covering the designed and implemented cloud-native software for GNSS processing and ML at scale, the Android application developed for retrieving GNSS observations from the modern generation of smartphones through dedicated crowdsourcing campaigns, the related data ingestion and processing, and the GNSS analysis of both conventional and smartphone observations. Using the developed GNSS engine, which employs an Extended Kalman Filter, example processing results for Zenith Total Delay (ZTD) and Slant Total Electron Content (STEC) are provided, based on the analysis of observations collected with geodetic-grade GNSS receivers and from local measurement sessions in which a Xiaomi Mi 8 collected GNSS observations using the developed Android application. For smartphone observations, ZTD is derived in a differential manner based on a single-frequency double-difference approach employing GPS and Galileo observations, whereas satellite-specific STEC time series are obtained through carrier-to-code leveling based on the geometry-free linear combination of observations from both GPS and Galileo constellations. Although the ZTD and STEC time series from smartphones were derived on a demonstration basis, a rather good level of consistency of these estimates with respect to the reference time series was found. For the considered periods, the RMS of the differences between the derived smartphone-based time series of differential zenith wet delay and the reference values was below 3.1 mm. For the satellite-specific STEC time series compared against the reference STEC time series, the RMS of the offset-reduced differences was below 1.2 TECU. Smartphone-based observations require special attention, including additional processing steps and a dedicated parameterization, to obtain reliable atmospheric estimates. Although their measurement quality is lower than that of traditional sources of GNSS data, augmenting ground-based networks of fixed high-end GNSS receivers with GNSS-capable smartphones would form an interesting source of complementary information for various studies relying on GNSS observations.
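    As a rough illustration of the carrier-to-code leveling step described above, the sketch below derives satellite-specific STEC from the geometry-free combinations of dual-frequency code and carrier-phase observations over one continuous arc. The frequency pair, the function name, and the omission of differential code biases are illustrative assumptions, not details of the CAMALIOT processing engine.

```python
import numpy as np

# GPS L1/L2 frequencies in Hz (an analogous Galileo pair could be used instead)
F1, F2 = 1575.42e6, 1227.60e6
# metres of geometry-free ionospheric delay per TECU (1 TECU = 1e16 electrons/m^2)
K = 40.3e16 * (1.0 / F2**2 - 1.0 / F1**2)

def stec_from_arc(p1, p2, l1, l2):
    """Slant TEC (TECU) for one continuous phase arc via carrier-to-code leveling.

    p1, p2: pseudoranges on the two frequencies [m]
    l1, l2: carrier-phase observations on the two frequencies [m]
    The geometry-free combinations cancel geometry, clocks and troposphere,
    leaving the frequency-dependent ionospheric delay; the precise but
    ambiguous phase combination is leveled to the noisy but unambiguous code
    combination through their arc-averaged offset. Differential code biases
    are ignored in this sketch.
    """
    p_gf = p2 - p1                 # code geometry-free combination [m]
    l_gf = l1 - l2                 # phase geometry-free combination [m], same iono sign
    offset = np.mean(p_gf - l_gf)  # leveling constant over the arc
    return (l_gf + offset) / K     # leveled ionospheric delay -> STEC [TECU]
```

    Leveling the phase to the code preserves the low noise of the carrier while removing its arbitrary ambiguity; receiver and satellite code biases would still have to be handled separately in a full analysis.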

    Determination of high-precision tropospheric delays using crowdsourced smartphone GNSS data

    The Global Navigation Satellite System (GNSS) is a key asset for tropospheric monitoring. Currently, GNSS meteorology relies primarily on geodetic-grade stations. However, such stations are too costly to be densely deployed, which limits the contribution of GNSS to tropospheric monitoring. In 2016, Google released the raw GNSS measurement application programming interface for smartphones running on Android version 7.0 and higher. Given that there are now billions of Android smartphones worldwide, utilizing these devices for atmospheric monitoring represents a remarkable scientific opportunity. In this study, smartphone GNSS data collected in Germany as part of the Application of Machine Learning Technology for GNSS IoT Data Fusion (CAMALIOT) crowdsourcing campaign in 2022 were utilized to investigate this idea. Approximately 20 000 raw GNSS observation files were collected there during the campaign. First, a dedicated data processing pipeline was established that consists of two major parts: machine learning (ML)-based data selection and ionosphere-free precise point positioning (PPP)-based zenith total delay (ZTD) estimation. The proposed method was validated with a dedicated smartphone data collection experiment conducted on a rooftop of the ETH campus. The results confirmed that ZTD estimates of millimeter-level precision could be achieved with smartphone data collected in an open-sky environment. The impacts of the observation time span and of the utilization of multi-GNSS observations on ZTD estimation were also investigated. Subsequently, the crowdsourced data from Germany were processed by PPP, with the ionospheric delays interpolated using observations from surrounding GNSS stations of the Satellite Positioning Service of the German National Survey (SAPOS). The ZTDs derived from ERA5 and an ML-based ZTD product served as benchmarks. The results revealed that an accuracy of better than 10 mm can be achieved by utilizing selected high-quality crowdsourced smartphone data. This study demonstrates high-precision ZTD determination with crowdsourced smartphone GNSS data and reveals success factors and current limitations.
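    As a minimal sketch of the ionosphere-free PPP step mentioned above: the first-order ionospheric delay is removed by combining dual-frequency pseudoranges, and the ZTD compared against ERA5 and the ML-based product is the sum of a modelled hydrostatic zenith delay and an estimated wet zenith delay. The function names, the GPS frequency pair, and the crude 1/sin(elevation) mapping are illustrative assumptions; the actual PPP engine and mapping functions used in the study are not described here.

```python
import numpy as np

F1, F2 = 1575.42e6, 1227.60e6  # GPS L1/L2 frequencies in Hz (illustrative pair)

def ionosphere_free(p1, p2):
    """First-order ionosphere-free pseudorange combination used in PPP [m]."""
    return (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)

def slant_tropo(zhd, zwd, elevation_rad):
    """Slant tropospheric delay [m] from zenith delays.

    In PPP the zenith hydrostatic delay (zhd) is modelled a priori and the
    zenith wet delay (zwd) is estimated per station and epoch; the reported
    ZTD is zhd + zwd. A crude 1/sin(e) mapping is used here in place of the
    separate hydrostatic and wet mapping functions of a real engine.
    """
    mapping = 1.0 / np.sin(elevation_rad)
    return (zhd + zwd) * mapping

# Example: a ZTD of 2.40 m observed at 30 degrees elevation maps to ~4.80 m of slant delay.
print(slant_tropo(2.30, 0.10, np.radians(30.0)))
```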