The impact of sequence database choice on metaproteomic results in gut microbiota studies
Background: Elucidating the role of gut microbiota in physiological and pathological processes has recently emerged as a key research aim in life sciences. In this respect, metaproteomics, the study of the whole protein complement of a microbial community, can provide a unique contribution by revealing which functions are actually being expressed by specific microbial taxa. However, its wide application to gut microbiota research has been hindered by challenges in data analysis, especially related to the choice of the proper sequence databases for protein identification.
Results: Here, we present a systematic investigation of variables concerning database construction and annotation and evaluate their impact on human and mouse gut metaproteomic results. We found that both publicly available and experimental metagenomic databases lead to the identification of unique peptide assortments, suggesting parallel database searches as a means to gain more complete information. In particular, the contribution of experimental metagenomic databases proved to be essential when dealing with mouse samples. Moreover, the use of a "merged" database, containing all metagenomic sequences from the population under study, was found to be generally preferable over the use of sample-matched databases. We also observed that taxonomic and functional results are strongly database-dependent, in particular when analyzing the mouse gut microbiota. As a striking example, the Firmicutes/Bacteroidetes ratio varied up to tenfold depending on the database used. Finally, assembling reads into longer contigs provided significant advantages in terms of functional annotation yields.
Conclusions: This study contributes to identifying host- and database-specific biases that need to be taken into account in a metaproteomic experiment, providing meaningful insights into how to design gut microbiota studies and perform metaproteomic data analysis. In particular, the use of multiple databases and annotation tools should be encouraged, even though this requires appropriate bioinformatic resources.
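To illustrate how strongly a taxonomic summary like the Firmicutes/Bacteroidetes ratio can depend on the chosen database, the ratio can be computed from per-peptide phylum assignments. The following is a minimal sketch; the two assignment lists are entirely hypothetical and are not data from the study:

```python
from collections import Counter

def firmicutes_bacteroidetes_ratio(peptide_phyla):
    """Compute the F/B ratio from a list of phylum assignments,
    one entry per identified peptide."""
    counts = Counter(peptide_phyla)
    return counts["Firmicutes"] / counts["Bacteroidetes"]

# Hypothetical assignments obtained with two different sequence databases
public_db = ["Firmicutes"] * 60 + ["Bacteroidetes"] * 30 + ["Proteobacteria"] * 10
matched_db = ["Firmicutes"] * 30 + ["Bacteroidetes"] * 60 + ["Proteobacteria"] * 10

print(firmicutes_bacteroidetes_ratio(public_db))   # 2.0
print(firmicutes_bacteroidetes_ratio(matched_db))  # 0.5
```

The same spectra, annotated against different databases, can thus yield markedly different community-level conclusions, which is the bias the study quantifies.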
Mistle: bringing spectral library predictions to metaproteomics with an efficient search index
Motivation
Deep learning has moved to the forefront of tandem mass spectrometry-driven proteomics, and accurate prediction of peptide fragmentation is more feasible than ever. Still, at this point spectral prediction is mainly used to validate database search results or for confined search spaces. Fully predicted spectral libraries have not yet been efficiently adapted to the large search space problems that often occur in metaproteomics or proteogenomics.
Results
In this study, we showcase a workflow that uses Prosit for spectral library predictions on two common metaproteomes and implement an indexing and search algorithm, Mistle, to efficiently identify experimental mass spectra within the library. Hence, the workflow emulates a classic protein sequence database search with protein digestion but builds a searchable index from spectral predictions as an in-between step. We compare Mistle to popular search engines, both on a spectral and database search level, and provide evidence that this approach is more accurate than a database search using MSFragger. Mistle outperforms other spectral library search engines in terms of run time and proves to be extremely memory efficient with a 4- to 22-fold decrease in RAM usage. This makes Mistle universally applicable to large search spaces, e.g. covering comprehensive sequence databases of diverse microbiomes.
Availability and implementation
Mistle is freely available on GitHub at https://github.com/BAMeScience/Mistle
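Conceptually, a spectral library search scores each experimental spectrum against predicted reference spectra and reports the best match. The sketch below shows a common baseline for this, cosine similarity over binned peaks; it is a generic illustration of the principle, not Mistle's actual index or scoring algorithm, and the library and query spectra are hypothetical:

```python
import math

def bin_spectrum(peaks, bin_width=1.0):
    """Sum peak intensities into fixed-width m/z bins."""
    binned = {}
    for mz, intensity in peaks:
        b = int(mz / bin_width)
        binned[b] = binned.get(b, 0.0) + intensity
    return binned

def cosine_similarity(a, b):
    """Normalized dot product between two binned spectra."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def library_search(query_peaks, library):
    """Return (peptide, score) of the best-matching library spectrum."""
    query = bin_spectrum(query_peaks)
    return max(((pep, cosine_similarity(query, bin_spectrum(peaks)))
                for pep, peaks in library.items()),
               key=lambda entry: entry[1])

# Hypothetical predicted library and experimental spectrum
library = {
    "PEPTIDEK": [(175.1, 1.0), (276.2, 0.6), (413.3, 0.3)],
    "ANOTHERK": [(147.1, 1.0), (262.1, 0.8)],
}
query = [(175.1, 0.9), (276.2, 0.7), (413.3, 0.2)]
best_peptide, score = library_search(query, library)
print(best_peptide)  # PEPTIDEK
```

This linear scan over the library is what an efficient search index avoids: the point of an indexed approach is to score a query against only a small candidate subset rather than every predicted spectrum.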
International Coordination of Long-Term Ocean Biology Time Series Derived from Satellite Ocean Color Data
In this paper, we will describe plans to coordinate the initial development of long-term ocean biology time series derived from global ocean color observations acquired by the United States, Japan and Europe. Specifically, we have been commissioned by the International Ocean Color Coordinating Group (IOCCG) to coordinate the development of merged products derived from the OCTS, SeaWiFS, MODIS, MERIS and GLI imagers. Each of these missions will have been launched by the year 2002 and will have produced global ocean color data products. Our goal is to develop and document the procedures to be used by each space agency (NASA, NASDA, and ESA) to merge chlorophyll, primary productivity, and other products from these missions. This coordination is required to initiate the production of long-term ocean biology time series which will be continued operationally beyond 2002. The purpose of the time series is to monitor interannual to decadal-scale variability in oceanic primary productivity and to study the effects of environmental change on upper ocean biogeochemical processes.
Ad hoc learning of peptide fragmentation from mass spectra enables an interpretable detection of phosphorylated and cross-linked peptides
Mass spectrometry-based proteomics provides a holistic snapshot of the entire protein set of living cells on a molecular level. Currently, only a few deep learning approaches exist that involve peptide fragmentation spectra, which represent partial sequence information of proteins. Commonly, these approaches lack the ability to characterize less studied or even unknown patterns in spectra because of their use of explicit domain knowledge. Here, to elevate unrestricted learning from spectra, we introduce ‘ad hoc learning of fragmentation’ (AHLF), a deep learning model that is end-to-end trained on 19.2 million spectra from several phosphoproteomic datasets. AHLF is interpretable, and we show that peak-level feature importance values and pairwise interactions between peaks are in line with corresponding peptide fragments. We demonstrate our approach by detecting post-translational modifications, specifically protein phosphorylation based on only the fragmentation spectrum without a database search. AHLF increases the area under the receiver operating characteristic curve (AUC) by an average of 9.4% on recent phosphoproteomic data compared with the current state of the art on this task. Furthermore, use of AHLF in rescoring search results increases the number of phosphopeptide identifications by a margin of up to 15.1% at a constant false discovery rate. To show the broad applicability of AHLF, we use transfer learning to also detect cross-linked peptides, as used in protein structure analysis, with an AUC of up to 94%
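The abstract reports identification gains at a constant false discovery rate; in proteomics, FDR is commonly estimated by target-decoy competition. The sketch below counts accepted target PSMs at a fixed target-decoy FDR threshold. It is a generic, simplified illustration of that procedure, not part of AHLF, and the score lists are hypothetical:

```python
def identifications_at_fdr(psms, fdr_threshold=0.01):
    """Count target PSMs accepted at a target-decoy FDR threshold.

    psms: list of (score, is_decoy) tuples; higher score = better match.
    Simplified sketch: real tools typically use (decoys + 1) / targets
    and report q-values rather than a single cutoff.
    """
    targets = decoys = accepted = 0
    for score, is_decoy in sorted(psms, key=lambda psm: -psm[0]):
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        if targets and decoys / targets <= fdr_threshold:
            accepted = targets
    return accepted

# Hypothetical PSM scores before and after rescoring: better score
# separation between targets and decoys admits more identifications
# at the same FDR threshold.
before = [(10, False), (9, True), (8, False), (7, False), (6, True)]
after = [(10, False), (9, False), (8, False), (7, True), (6, False)]
print(identifications_at_fdr(before, 0.01), identifications_at_fdr(after, 0.01))  # 1 3
```

This is the mechanism behind the reported gains: rescoring does not change which spectra were measured, only how well true matches are separated from decoys.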
The BAM Data Store: Piloting an OpenBIS-Based Research Data Infrastructure in Materials Science and Engineering
As a partner in several NFDI consortia, the Bundesanstalt für Materialforschung und -prüfung (BAM, German federal institute for materials science and testing) contributes to research data standardization efforts in various domains of materials science and engineering (MSE). To implement a central research data management (RDM) infrastructure that meets the requirements of MSE groups at BAM, we initiated the Data Store pilot project in 2021. The resulting infrastructure should enable researchers to digitally document research processes and store related data in a standardized and interoperable manner. As a software solution, we chose openBIS, an open-source framework that is increasingly being used for RDM in MSE communities.
The pilot project was conducted for one year with five research groups across different organizational units and MSE disciplines. The main results are presented for the use case “nanoPlattform”. The group registered experimental steps and linked associated instruments and chemicals in the Data Store to ensure full traceability of data related to the synthesis of ~400 nanomaterials. The system also supported researchers in implementing RDM practices in their workflows, e.g., by automating data import and documentation and by integrating infrastructure for data analysis.
Based on the promising results of the pilot phase, we will roll out the Data Store as the central RDM infrastructure of BAM starting in 2023. We further aim to develop openBIS plugins, metadata standards, and RDM workflows to contribute to the openBIS community and to foster RDM in MSE
Comprehensive evaluation of peptide de novo sequencing tools for monoclonal antibody assembly
Monoclonal antibodies are biotechnologically produced proteins with various applications in research, therapeutics and diagnostics. Their ability to recognize and bind to specific molecule structures makes them essential research tools and therapeutic agents. Sequence information of antibodies is helpful for understanding antibody–antigen interactions and ensuring their affinity and specificity. De novo protein sequencing based on mass spectrometry is a valuable method to obtain the amino acid sequence of peptides and proteins without a priori knowledge. In this study, we evaluated six recently developed de novo peptide sequencing algorithms (Novor, pNovo 3, DeepNovo, SMSNet, PointNovo and Casanovo), which were not specifically designed for antibody data. We validated their ability to identify and assemble antibody sequences on three multi-enzymatic data sets. The deep learning-based tools Casanovo and PointNovo showed an increased peptide recall across different enzymes and data sets compared with spectrum-graph-based approaches. We evaluated different error types of de novo peptide sequencing tools and their performance for different numbers of missing cleavage sites, noisy spectra and peptides of various lengths. We achieved a sequence coverage of 97.69–99.53% on the light chains of three different antibody data sets using the de Bruijn assembler ALPS and the predictions from Casanovo. However, low sequence coverage and accuracy on the heavy chains demonstrate that complete de novo protein sequencing remains a challenging issue in proteomics that requires improved de novo error correction, alternative digestion strategies and hybrid approaches such as homology search to achieve high accuracy on long protein sequences.
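The assembly step described above stitches overlapping de novo peptide reads from different enzymatic digests into a longer contig. The following toy de Bruijn-style sketch illustrates that idea: it builds a graph of overlapping (k-1)-mers from peptide k-mers and walks the unambiguous path. It is a simplified illustration of the principle, not the ALPS assembler, and the peptide reads are hypothetical:

```python
from collections import defaultdict

def assemble(peptides, k=4):
    """Toy de Bruijn-style assembly: nodes are (k-1)-mers, edges come
    from peptide k-mers; follow the single path from the start node.
    Assumes overlapping, error-free reads with one unambiguous path."""
    edges = defaultdict(list)
    indegree = defaultdict(int)
    for pep in peptides:
        for i in range(len(pep) - k + 1):
            kmer = pep[i:i + k]
            left, right = kmer[:-1], kmer[1:]
            if right not in edges[left]:
                edges[left].append(right)
                indegree[right] += 1
    # Start at the unique node with no incoming edge
    node = next(n for n in list(edges) if indegree[n] == 0)
    contig = node
    while edges.get(node):
        node = edges[node][0]  # toy: take the only outgoing edge
        contig += node[-1]
    return contig

# Hypothetical overlapping de novo reads from different enzymes
print(assemble(["PEPTIDEK", "TIDEKLMN", "KLMNQRST"], k=4))  # PEPTIDEKLMNQRST
```

Real assemblies are far harder: sequencing errors, ambiguous amino acids (e.g. I/L) and repeated regions create branching paths, which is why heavy-chain assembly in the study remained incomplete.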
Viewing the proteome: How to visualize proteomics data?
Proteomics has become one of the main approaches for analyzing and understanding biological systems. Yet similar to other high-throughput analysis methods, the presentation of the large amounts of obtained data in easily interpretable ways remains challenging. In this review, we present an overview of the different ways in which proteomics software supports the visualization and interpretation of proteomics data. The unique challenges and current solutions for visualizing the different aspects of proteomics data, from acquired spectra via protein identification and quantification to pathway analysis, are discussed, and examples of the most useful visualization approaches are highlighted. Finally, we offer our ideas about future directions for proteomics data visualization.
FIORA: Local neighborhood-based prediction of compound mass spectra from single fragmentation events
Non-targeted metabolomics holds great promise for advancing precision medicine and biomarker discovery. However, identifying compounds from tandem mass spectra remains a challenging task due to the incomplete nature of spectral reference libraries. Augmenting these libraries with simulated mass spectra can provide the necessary references to resolve unmatched spectra, but generating high-quality data is difficult. In this study, we present FIORA, an open-source graph neural network designed to simulate tandem mass spectra. Our main contribution lies in utilizing the molecular neighborhood of bonds to learn breaking patterns and derive fragment ion probabilities. FIORA not only surpasses state-of-the-art fragmentation algorithms, ICEBERG and CFM-ID, in prediction quality, but also facilitates the prediction of additional features, such as retention time and collision cross section. Utilizing GPU acceleration, FIORA enables rapid validation of putative compound annotations and large-scale expansion of spectral reference libraries with high-quality predictions
DeNovoGUI: an open source graphical user interface for de novo sequencing of tandem mass spectra
De novo sequencing is a popular technique in proteomics for identifying peptides from tandem mass spectra without having to rely on a protein sequence database. Despite the strong potential of de novo sequencing algorithms, their adoption threshold remains quite high. We here present a user-friendly and lightweight graphical user interface called DeNovoGUI for running parallelized versions of the freely available de novo sequencing software PepNovo+, greatly simplifying the use of de novo sequencing in proteomics. Our platform-independent software is freely available under the permissive Apache2 open source license. Source code, binaries, and additional documentation are available at http://denovogui.googlecode.com.
Machine learning methods for compound annotation in non-targeted mass spectrometry—A brief overview of fingerprinting, in silico fragmentation and de novo methods
Non-targeted screenings (NTS) are essential tools in different fields, such as forensics, health and environmental sciences. NTSs often employ mass spectrometry (MS) methods due to their high throughput and sensitivity in comparison to, for example, nuclear magnetic resonance–based methods. As the identification of mass spectral signals, called annotation, is labour intensive, numerous supporting tools based on machine learning (ML) have been developed. However, both the diversity of mass spectral signals and the sheer quantity of different ML tools developed for compound annotation present a challenge for researchers in maintaining a comprehensive overview of the field.
In this work, we illustrate which ML-based methods are available for compound annotation in non-targeted MS experiments and provide a nuanced comparison of the ML models used in MS data analysis, unravelling their unique features and performance metrics. Through this overview, we support researchers in judiciously applying these tools in their daily research. This review also offers a detailed exploration of methods and datasets to show gaps in current methods and promising target areas, offering a starting point for developers intending to improve existing methodologies.
