317 research outputs found

    Efficacy of Radiomics and Genomics in Predicting TP53 Mutations in Diffuse Lower Grade Glioma

    An updated classification of diffuse lower-grade gliomas (LGG) was established in the 2016 World Health Organization Classification of Tumors of the Central Nervous System based on molecular mutations such as TP53. This study investigates machine learning methods for predicting TP53 mutation status using radiomics and genomics features, respectively. The radiomics features comprise patients' age and imaging features extracted from conventional MRI; the genomics features comprise patients' gene expression profiles obtained from RNA sequencing. The study uses a total of 105 LGG patients, divided into a training set (80 patients) and a testing set (25 patients). Three TP53 mutation prediction models are constructed according to the source of the training features: a TP53-radiomics model, a TP53-genomics model, and a TP53-radiogenomics model. Radiomics feature selection is performed using a recursive feature selection method. For the genomics data, the edgeR method is used to select genes differentially expressed between TP53-mutated and TP53-wild-type cases in the training set. The classification model is built with a Random Forest and cross-validated using repeated 10-fold cross-validation. Finally, the predictive performance of the three models is assessed on the testing set. The TP53-radiomics, TP53-radiogenomics, and TP53-genomics models achieve predictive accuracies of 0.84±0.04, 0.92±0.04, and 0.89±0.07, respectively. These results show the promise of non-invasive MRI radiomics features, and of fusing radiomics with genomics features, for predicting TP53 mutation status
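    The training protocol described above (Random Forest with repeated 10-fold cross-validation, then a held-out 80/25 split) can be sketched as follows. The feature values are synthetic stand-ins for the selected radiomics/genomics features, not data from the study:

```python
# Sketch of the TP53-prediction training loop: a Random Forest classifier
# cross-validated with repeated 10-fold CV on the 80-patient training set,
# then scored on the 25-patient held-out test set. Features are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import (RepeatedStratifiedKFold,
                                     cross_val_score, train_test_split)

rng = np.random.default_rng(0)
X = rng.normal(size=(105, 20))        # 105 patients, 20 selected features
y = rng.integers(0, 2, size=105)      # TP53 mutated (1) vs. wild-type (0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=80, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=0)
cv_acc = cross_val_score(clf, X_train, y_train, cv=cv, scoring="accuracy")

clf.fit(X_train, y_train)
test_acc = clf.score(X_test, y_test)
print(f"CV accuracy {cv_acc.mean():.2f} ± {cv_acc.std():.2f}, "
      f"test accuracy {test_acc:.2f}")
```

    With random features the accuracies are near chance; the point is the evaluation structure, not the numbers.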

    Personalized Prediction of Tumor Recurrence With Image-Guided Physics-Informed Computational Model in High-Grade Gliomas

    High-grade gliomas are infiltrating tumors characterized by diffusive invasion and proliferative growth. Tumor heterogeneity across and within patients makes it challenging to determine the spatial extent of a tumor after surgical resection. Traditionally, predictions of tumor growth after surgical resection rely on generalized models and population-based observations, which do not account for individual patient differences. To address this gap, we propose a personalized, image-guided computational model (a digital twin) that incorporates physics-based modeling to predict tumor recurrence. The digital twin involves an inverse modeling step followed by a recurrence model that accounts for varying surgical effects. The physics-guided inverse model minimizes a discrete loss to estimate patient-specific diffusion (D) and proliferation (ρ) parameters from pre-operative magnetic resonance imaging (MRI) of 133 patients. The analysis uses a publicly available dataset from The Cancer Imaging Archive (TCIA). The model is personalized in that patient-specific parameters estimated from real patient data are used to assess risk for both high-aggressive and low-aggressive tumor groups. The prognostic index for each patient reveals the interplay between tumor aggressiveness, surgical resection, and survival outcome. The results demonstrate that, despite varying levels of surgical resection, patients with high-aggressive tumors have worse survival outcomes, with a median survival of 141-153 days due to rapid regrowth (0.10/day); the low-aggressive group exhibits slower growth (0.06/day) and a median survival of 158-171 days. Furthermore, by integrating patient-specific diffusion and proliferation rates, the proposed method captures significant variability in tumor aggressiveness within high-grade gliomas
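    Physics-informed tumor-growth twins of this kind are commonly built on the reaction-diffusion (Fisher-Kolmogorov) equation, dc/dt = D ∇²c + ρ c(1 − c), with D and ρ being the patient-specific parameters the inverse model estimates. A minimal 1-D forward simulation, with illustrative (not study-derived) parameter values, looks like:

```python
# Minimal 1-D sketch of the reaction-diffusion growth model underlying
# such digital twins:  dc/dt = D * d2c/dx2 + rho * c * (1 - c)
# D (diffusion) and rho (proliferation) are illustrative values only.
import numpy as np

D, rho = 0.10, 0.06        # mm^2/day, 1/day (illustrative)
dx, dt = 1.0, 0.1          # grid spacing (mm), time step (days)
x = np.arange(0, 100, dx)
c = np.exp(-((x - 50.0) ** 2) / 10.0)   # initial tumor cell density

for _ in range(int(30 / dt)):           # simulate 30 days of regrowth
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2  # Laplacian
    c = c + dt * (D * lap + rho * c * (1 - c))              # explicit Euler
    c = np.clip(c, 0.0, 1.0)            # density stays in [0, 1]

print(f"tumor burden after 30 days: {c.sum() * dx:.1f}")
```

    The explicit scheme is stable here because D·dt/dx² = 0.01 is well below the 0.5 limit; an inverse model would fit D and ρ so that the simulated density matches the imaging-derived tumor extent.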

    Class Activation Mapping and Uncertainty Estimation in Multi-Organ Segmentation

    Deep learning (DL)-based medical imaging and image segmentation algorithms achieve impressive performance on many benchmarks. Yet their efficacy in future clinical applications may be limited by their inability to reason about uncertainty or to indicate probable areas of failure in their predictions. It is therefore desirable that a deep learning segmentation model be able to reliably predict its confidence and map that confidence back to the original imaging cases to help interpret its decisions. In this work, uncertainty estimation for a multi-organ segmentation task is evaluated as a way to interpret predictive modeling in DL solutions. We use the state-of-the-art nnU-Net to segment 15 abdominal organs (spleen, right kidney, left kidney, gallbladder, esophagus, liver, stomach, aorta, inferior vena cava, pancreas, right adrenal gland, left adrenal gland, duodenum, bladder, prostate/uterus) using 200 patient cases from the Multimodality Abdominal Multi-Organ Segmentation Challenge 2022. The softmax probabilities from different variants of nnU-Net are then used to compute the knowledge uncertainty of the deep learning framework. Knowledge uncertainty from an ensemble of DL models is used to quantify and visualize class activation maps for two example segmented organs. Our preliminary results show that class activation maps may be used to interpret the prediction decisions made by the DL model used in this study
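    Knowledge (epistemic) uncertainty from an ensemble's softmax outputs is typically computed as the mutual information between the prediction and the model choice: the entropy of the mean softmax minus the mean per-model entropy. A sketch with synthetic probabilities standing in for the nnU-Net ensemble outputs:

```python
# Ensemble-based knowledge (epistemic) uncertainty from softmax outputs:
#   I = H(mean_m softmax_m) - mean_m H(softmax_m)
# Voxel-wise maps of I can be overlaid on the image for interpretation.
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy (nats) along the class axis."""
    return -np.sum(p * np.log(p + eps), axis=axis)

# M=5 model variants, C=16 classes (15 organs + background), N=1000 voxels;
# probabilities are synthetic stand-ins for real ensemble outputs
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 1000, 16))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

mean_p = probs.mean(axis=0)              # predictive distribution, (N, C)
total_unc = entropy(mean_p)              # total uncertainty per voxel
aleatoric = entropy(probs).mean(axis=0)  # expected data uncertainty
knowledge_unc = total_unc - aleatoric    # model disagreement (epistemic)

print(f"mean knowledge uncertainty: {knowledge_unc.mean():.3f} nats")
```

    Because entropy is concave, the knowledge term is non-negative: it is zero only where all ensemble members agree, which is what makes it useful for flagging probable failure regions.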

    Innovative Computing in Engineering and Medicine II

    Chairs: Drs. Khan Iftekharuddin, Dean Krusienski, & Jiang Li, Department of Electrical and Computer Engineering

    Standardized evaluation of algorithms for computer-aided diagnosis of dementia based on structural MRI: the CADDementia challenge

    Algorithms for computer-aided diagnosis of dementia based on structural MRI have demonstrated high performance in the literature, but are difficult to compare as different data sets and methodology were used for evaluation. In addition, it is unclear how the algorithms would perform on previously unseen data, and thus, how they would perform in clinical practice when there is no real opportunity to adapt the algorithm to the data at hand. To address these comparability, generalizability and clinical applicability issues, we organized a grand challenge that aimed to objectively compare algorithms based on a clinically representative multi-center data set. Using clinical practice as the starting point, the goal was to reproduce the clinical diagnosis. Therefore, we evaluated algorithms for multi-class classification of three diagnostic groups: patients with probable Alzheimer's disease, patients with mild cognitive impairment and healthy controls. The diagnosis based on clinical criteria was used as reference standard, as it was the best available reference despite its known limitations. For evaluation, a previously unseen test set was used consisting of 354 T1-weighted MRI scans with the diagnoses blinded. Fifteen research teams participated with a total of 29 algorithms. The algorithms were trained on a small training set (n = 30) and optionally on data from other sources (e.g., the Alzheimer's Disease Neuroimaging Initiative, the Australian Imaging Biomarkers and Lifestyle flagship study of aging). The best performing algorithm yielded an accuracy of 63.0% and an area under the receiver-operating-characteristic curve (AUC) of 78.8%. In general, the best performances were achieved using feature extraction based on voxel-based morphometry or a combination of features that included volume, cortical thickness, shape and intensity. The challenge is open for new submissions via the web-based framework: http://caddementia.grand-challenge.org
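    The evaluation described above, multi-class accuracy and AUC over three diagnostic groups on a blinded test set, can be sketched as follows. The labels and predicted probabilities are synthetic stand-ins for a challenge submission, and the one-vs-rest AUC shown is one common multi-class convention, not necessarily the exact statistic the challenge used:

```python
# Sketch of a CADDementia-style evaluation: 3-class accuracy and AUC
# (AD vs. MCI vs. control) on a blinded test set of 354 scans.
# y_true and the probabilities below are synthetic.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=354)       # blinded reference diagnoses
p = rng.dirichlet(np.ones(3), size=354)     # submitted class probabilities

acc = accuracy_score(y_true, p.argmax(axis=1))
auc = roc_auc_score(y_true, p, multi_class="ovr")  # one-vs-rest AUC
print(f"accuracy {acc:.3f}, AUC {auc:.3f}")
```

    Random probabilities score near chance (accuracy ≈ 0.33, AUC ≈ 0.5), which puts the best submission's 63.0% accuracy and 78.8% AUC in context.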

    Special Section Guest Editorial: Machine Learning In Optics

    This guest editorial summarizes the Special Section on Machine Learning in Optics

    Two-Stage Transfer Learning for Facial Expression Classification in Children

    Studying facial expressions can provide insight into the development of social skills in children and provide support to individuals with developmental disorders. In affected individuals, such as children with Autism Spectrum Disorder (ASD), atypical interpretations of facial expressions are well documented. In computer vision, many popular state-of-the-art deep learning architectures (VGG16, EfficientNet, ResNet, etc.) are readily available with weights pre-trained for general object recognition. Transfer learning uses these pre-trained models to improve generalization on a new task. In this project, transfer learning is applied to leverage a pre-trained general object recognition model for facial expression classification. Through this method, the base and middle layers are preserved to exploit the existing neural architecture. The investigated method begins with a packaged base architecture trained on ImageNet. In the first transfer learning step, this foundation's task is changed from general object classification to facial expression classification. The second transfer learning step performs a domain change from adult to child data. Finally, the trained network is evaluated on the child facial expression classification task
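    The two transfer steps can be sketched in PyTorch. A tiny stand-in CNN replaces the real ImageNet-pretrained backbone (e.g., torchvision's VGG16) so the sketch is self-contained; the head size of 7 expression classes is an assumption for illustration:

```python
# Two-stage transfer learning sketch: (1) task change -- swap the
# object-recognition head for an expression head and freeze the base;
# (2) domain change -- fine-tune the remaining parameters on child data.
import torch
import torch.nn as nn

base = nn.Sequential(                      # stand-in pretrained extractor
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())
model = nn.Sequential(base, nn.Linear(8, 1000))  # 1000-way "ImageNet" head

# Step 1: replace head (1000 classes -> 7 expressions) and freeze the
# base/middle layers to preserve the learned generic features
model[1] = nn.Linear(8, 7)
for p in base.parameters():
    p.requires_grad = False

# Step 2: fine-tune only the trainable parameters on (dummy) child data
opt = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)
x = torch.randn(4, 3, 64, 64)              # batch of child face images
y = torch.randint(0, 7, (4,))              # expression labels
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
print(f"fine-tuning loss: {loss.item():.3f}")
```

    In practice the second step may also unfreeze upper layers with a small learning rate; freezing everything but the head is the simplest variant.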

    Monocular Camera Viewpoint-Invariant Vehicular Traffic Segmentation and Classification Utilizing Small Datasets

    The work presented here develops a viewpoint-independent computer vision framework for vehicle segmentation and classification from roadway traffic camera systems installed by the Virginia Department of Transportation (VDOT). An automated technique for extracting a region of interest is presented to speed up processing. The VDOT traffic videos are analyzed for vehicle segmentation using an improved robust low-rank matrix decomposition technique, together with a new and effective thresholding method that improves segmentation accuracy while simultaneously speeding up segmentation. Size and shape descriptors from morphological properties, and textural features from the Histogram of Oriented Gradients (HOG), are extracted from the segmented vehicles. A multi-class support vector machine classifier is then employed to categorize traffic into vehicle types, including passenger cars, passenger trucks, motorcycles, buses, and small and large utility trucks. Multiple vehicle detections are handled through an iterative k-means clustering over-segmentation process. The proposed algorithm reduced the processed data by an average of 40%. Compared with recent techniques, it showed an average improvement of 15% in segmentation accuracy and was on average 55% faster than the compared segmentation techniques. Moreover, a comparative analysis against 23 different deep learning architectures is presented; the resulting algorithm outperformed the compared deep learning algorithms in vehicle classification accuracy, and timing analysis showed that it can operate in real-time scenarios
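    The low-rank idea behind the segmentation step can be sketched with plain SVD: stack frames as columns of a matrix, treat a rank-1 approximation as the static background, and threshold the residual to get moving-vehicle masks. The paper's robust decomposition and adaptive thresholding are replaced here by truncated SVD and a fixed threshold on synthetic frames, purely for illustration:

```python
# Background subtraction via low-rank approximation: the static scene is
# (approximately) rank-1 across frames, so the SVD residual isolates
# moving objects. Synthetic frames with one "vehicle" in two frames.
import numpy as np

rng = np.random.default_rng(0)
h, w, n_frames = 24, 32, 20
background = rng.uniform(size=(h, w))
frames = np.repeat(background[None], n_frames, axis=0)
frames[9:11, 8:16, 10:20] += 0.8        # vehicle visible in frames 9-10

M = frames.reshape(n_frames, -1).T      # pixels x frames matrix
U, s, Vt = np.linalg.svd(M, full_matrices=False)
low_rank = s[0] * np.outer(U[:, 0], Vt[0])   # rank-1 background model
residual = np.abs(M - low_rank)

masks = (residual > 0.3).T.reshape(n_frames, h, w)  # foreground masks
print(f"foreground pixels detected in frame 10: {masks[10].sum()}")
```

    On these frames the residual is large only at the vehicle's pixels in the frames where it appears; a robust decomposition (e.g., robust PCA) serves the same role while tolerating noise and slow illumination changes better than plain SVD.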