55 research outputs found
An Explainable AI-Based Computer Aided Detection System for Diabetic Retinopathy Using Retinal Fundus Images
Diabetic patients have a high risk of developing diabetic retinopathy (DR), which is one of the major causes of blindness. With early detection and the right treatment, patients may be spared from losing their vision. We propose a computer-aided detection system that takes retinal fundus images as input and detects all types of lesions that define diabetic retinopathy. The aim of our system is to assist eye specialists by automatically identifying healthy retinas and referring the images of unhealthy ones. For the latter cases, the system offers an interactive tool in which the doctor can examine the local lesions that our system marks as suspicious. The final decision remains in the hands of the ophthalmologists. Our approach consists of a multi-class detector that locates and recognizes all candidate DR-defining lesions. If the system detects at least one lesion, the image is marked as unhealthy. The lesion detector is built on the Faster R-CNN ResNet-101 architecture, which we train by transfer learning. We evaluate our approach on three benchmark data sets, namely Messidor-2, IDRiD, and E-Ophtha, by measuring the sensitivity (SE) and specificity (SP) of the binary classification into healthy and unhealthy images. The results that we obtain for Messidor-2 and IDRiD are (SE: 0.965, SP: 0.843) and (SE: 0.83, SP: 0.94), respectively. For the E-Ophtha data set we follow the literature and perform two experiments, one where we detect only microaneurysms (SE: 0.939, SP: 0.82) and the other where we detect only exudates (SE: 0.851, SP: 0.971). Besides the high effectiveness that we achieve, the other important contribution of our work is the interactive tool offered to the medical experts, which highlights all suspicious lesions detected by the proposed system.
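The image-level decision rule and the SE/SP evaluation described above can be summarized in a short sketch. The detector interface (`detect` output as a list of scored boxes) and the score threshold are hypothetical placeholders for illustration, not the authors' actual code.

```python
# Minimal sketch (assumed interfaces): an image is marked unhealthy if at least
# one candidate DR lesion is detected with sufficient confidence, and the
# resulting binary labels are scored with sensitivity and specificity.
from typing import Dict, List, Tuple

def is_unhealthy(detections: List[Dict], score_threshold: float = 0.5) -> bool:
    """Unhealthy if any detected lesion exceeds the confidence threshold."""
    return any(d["score"] >= score_threshold for d in detections)

def sensitivity_specificity(y_true: List[int], y_pred: List[int]) -> Tuple[float, float]:
    """Binary healthy (0) / unhealthy (1) evaluation."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    se = tp / (tp + fn) if (tp + fn) else 0.0
    sp = tn / (tn + fp) if (tn + fp) else 0.0
    return se, sp
```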
Hybrid Deep Learning Gaussian Process for Diabetic Retinopathy Diagnosis and Uncertainty Quantification
Diabetic Retinopathy (DR) is one of the microvascular complications of Diabetes Mellitus, which remains one of the leading causes of blindness worldwide. Computational models based on Convolutional Neural Networks represent the state of the art for the automatic detection of DR from eye fundus images. Most of the current work addresses this problem as a binary classification task. However, including grade estimation and quantification of prediction uncertainty can potentially increase the robustness of the model. In this paper, a hybrid Deep Learning-Gaussian process method for DR diagnosis and uncertainty quantification is presented. This method combines the representational power of deep learning with the ability of Gaussian process models to generalize from small datasets. The results show that uncertainty quantification in the predictions improves the interpretability of the method as a diagnostic support tool. The source code to replicate the experiments is publicly available at https://github.com/stoledoc/DLGP-DR-Diagnosis
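A hybrid pipeline of this kind can be sketched as a pretrained CNN that produces feature embeddings and a Gaussian process classifier fit on those embeddings, whose predictive probabilities carry an uncertainty signal. The backbone (ResNet-50) and kernel (RBF) below are illustrative assumptions, not the authors' configuration; their implementation is in the linked repository.

```python
# Minimal sketch, assuming a torchvision backbone and scikit-learn GP classifier.
import numpy as np
import torch
from torchvision import models
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def embed(images: torch.Tensor) -> np.ndarray:
    """Map a batch of fundus images (N, 3, 224, 224) to CNN feature vectors."""
    backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()      # drop the classification head
    backbone.eval()
    with torch.no_grad():
        return backbone(images).numpy()

def fit_gp(train_images: torch.Tensor, labels: np.ndarray) -> GaussianProcessClassifier:
    """Fit a GP classifier on the CNN embeddings of the training images."""
    gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
    gp.fit(embed(train_images), labels)
    return gp

# Predicted class probabilities (gp.predict_proba) double as an uncertainty
# signal: values near 0.5 flag cases that merit referral to a specialist.
```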
Mapping and characterization of structural variation in 17,795 human genomes
A key goal of whole-genome sequencing for studies of human genetics is to interrogate all forms of variation, including single-nucleotide variants, small insertion or deletion (indel) variants and structural variants. However, tools and resources for the study of structural variants have lagged behind those for smaller variants. Here we used a scalable pipeline to map and characterize structural variants in 17,795 deeply sequenced human genomes. We publicly release site-frequency data to create the largest, to our knowledge, whole-genome-sequencing-based structural variant resource so far. On average, individuals carry 2.9 rare structural variants that alter coding regions; these variants affect the dosage or structure of 4.2 genes and account for 4.0–11.2% of rare high-impact coding alleles. Using a computational model, we estimate that structural variants account for 17.2% of rare alleles genome-wide, with predicted deleterious effects that are equivalent to loss-of-function coding alleles; approximately 90% of such structural variants are noncoding deletions (mean 19.1 per genome). We report 158,991 ultra-rare structural variants and show that 2% of individuals carry ultra-rare megabase-scale structural variants, nearly half of which are balanced or complex rearrangements. Finally, we infer the dosage sensitivity of genes and noncoding elements, and reveal trends that relate to element class and conservation. This work will help to guide the analysis and interpretation of structural variants in the era of whole-genome sequencing.
Occurrence of Clavispora lusitaniae, the teleomorph of Candida lusitaniae, among clinical isolates
Of 13 clinical isolates of Candida lusitaniae from diverse geographical regions, 7 represented the mating types (6 alpha, 1 a) of the ascomycete Clavispora lusitaniae. Selected nonfertile isolates showed significant DNA relatedness (greater than 90%) to representatives of both mating types. Phenotypic physiological characteristics, such as cellobiose fermentation and rhamnose assimilation, proved insufficient for separation of Clavispora lusitaniae and Clavispora opuntiae.
Segmentation of pigment signs in fundus images for retinitis pigmentosa analysis by using deep learning
The adoption of Deep Learning (DL) algorithms into the practice of ophthalmology could play an important role in the screening and diagnosis of eye diseases in the coming years. In particular, DL tools interpreting ocular data derived from low-cost devices, such as a fundus camera, could support massive screening also in resource-limited countries. This paper explores a fully automatic method supporting the diagnosis of Retinitis Pigmentosa by means of the segmentation of pigment signs in retinal fundus images. The proposed approach relies on a U-Net-based deep convolutional network. At present, this is the first approach to pigment sign segmentation in retinal fundus images that does not depend on hand-crafted features but automatically learns a hierarchy of increasingly complex features directly from the data. We assess performance by training the model on the public RIPS dataset and compare against state-of-the-art approaches that work on the same dataset. The experimental results show an improvement of 15% in the F-measure score.
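The overall scheme, an encoder-decoder with skip connections producing a per-pixel lesion map that is scored against the ground-truth mask with the F-measure, can be sketched as follows. The depth, channel widths, grayscale input, and the pixel-wise F1 below are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch of a U-Net-style encoder-decoder for binary pigment-sign
# segmentation, plus a pixel-wise F-measure (assumed setup, not the authors' model).
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)        # per-pixel lesion logit

    def forward(self, x):
        e1 = self.enc1(x)                       # full resolution
        e2 = self.enc2(self.pool(e1))           # half resolution
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)

def f_measure(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-8) -> float:
    """Pixel-wise F1 between a binarized prediction and the ground-truth mask."""
    pred, target = pred.bool(), target.bool()
    tp = (pred & target).sum().item()
    precision = tp / (pred.sum().item() + eps)
    recall = tp / (target.sum().item() + eps)
    return 2 * precision * recall / (precision + recall + eps)
```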
- …
