
    Development and validation of a multimodal neuroimaging biomarker for electroconvulsive therapy outcome in depression: A multicenter machine learning analysis

    Background Electroconvulsive therapy (ECT) is the most effective intervention for patients with treatment-resistant depression. A clinical decision support tool could guide patient selection to improve the overall response rate and avoid ineffective treatments with adverse effects. Initial small-scale, monocenter studies indicate that both structural magnetic resonance imaging (sMRI) and functional MRI (fMRI) biomarkers may predict ECT outcome, but it is not known whether those results generalize to data from other centers. The objective of this study was to develop and validate neuroimaging biomarkers for ECT outcome in a multicenter setting.

    Methods Multimodal data (i.e. clinical, sMRI and resting-state fMRI) were collected from seven centers of the Global ECT-MRI Research Collaboration (GEMRIC). We used data from 189 depressed patients to evaluate which data modalities, or combinations thereof, provide the best predictions of treatment remission (HAM-D score ⩽7) using a support vector machine classifier.

    Results Remission classification using a combination of gray matter volume and functional connectivity yielded well-performing models with an average area under the curve (AUC) of 0.82–0.83 when trained and tested on samples from the three largest centers (N = 109), and performance remained acceptable under leave-one-site-out cross-validation (0.70–0.73 AUC).

    Conclusions These results show that multimodal neuroimaging data can be used to predict remission with ECT for individual patients across different treatment centers, despite significant variability in clinical characteristics across centers. Future development of a clinical decision support tool applying these biomarkers may be feasible.
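    The sketch below gives a concrete picture of the evaluation scheme described in this abstract: a support vector machine classifying remission from concatenated multimodal features, assessed with leave-one-site-out cross-validation. It is a minimal illustration assuming scikit-learn; the feature arrays, labels, and site assignments are random stand-ins, not the GEMRIC data or pipeline.

    ```python
    # Minimal sketch: SVM remission classification with leave-one-site-out
    # cross-validation. All data below are synthetic placeholders.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import LeaveOneGroupOut
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n_patients = 189
    gmv = rng.normal(size=(n_patients, 100))    # gray matter volume features (placeholder)
    fc = rng.normal(size=(n_patients, 200))     # functional connectivity features (placeholder)
    X = np.hstack([gmv, fc])                    # simple multimodal feature concatenation
    y = rng.integers(0, 2, size=n_patients)     # remission labels (HAM-D <= 7), placeholder
    site = rng.integers(0, 7, size=n_patients)  # treatment center membership, placeholder

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))

    # Leave-one-site-out: train on all centers but one, test on the held-out center.
    aucs = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=site):
        if len(np.unique(y[test_idx])) < 2:
            continue                            # AUC undefined for a single-class site
        clf.fit(X[train_idx], y[train_idx])
        scores = clf.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], scores))

    print(f"mean leave-one-site-out AUC: {np.mean(aucs):.2f}")
    ```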

    250 AI-Supported Sleep Staging from Activity and Heart Rate

    Introduction Polysomnography (PSG) is considered the gold standard for sleep staging but is labor-intensive and expensive. Wrist wearables are an alternative to PSG because of their small form factor and continuous monitoring capability. In this work, we present a scheme to perform automated sleep staging via deep learning in the MESA cohort, validated against PSG. The scheme uses actigraphic activity counts and two coarse heart rate measures (only the mean and standard deviation for each 30-s sleep epoch) to perform multi-class sleep staging. Our method outperforms existing techniques in three-stage classification (i.e., wake, NREM, and REM) and is feasible for four-stage classification (i.e., wake, light, deep, and REM).

    Methods Our technique uses a convolutional neural network coupled with a sequence-to-sequence network architecture to capture the temporal correlations in sleep for classification. Supervised training was performed with the PSG stage label for each sleep epoch as the target. We used data from MESA participants randomly assigned to non-overlapping training (N=608) and validation (N=200) cohorts. The under-representation of deep sleep in the data leads to class imbalance, which diminishes deep sleep prediction accuracy. To address this imbalance, we use a novel loss function that is minimized in the network training phase.

    Results Our network achieves accuracies of 78.66% and 72.46% for three-class and four-class sleep staging, respectively. Our three-stage classifier is especially accurate at measuring NREM sleep time (predicted: 4.98 ± 1.26 hrs vs. actual: 5.08 ± 0.98 hrs from PSG). Similarly, our four-stage classifier yields highly accurate estimates of light sleep time (predicted: 4.33 ± 1.20 hrs vs. actual: 4.46 ± 1.04 hrs from PSG) and deep sleep time (predicted: 0.62 ± 0.65 hrs vs. actual: 0.63 ± 0.59 hrs from PSG). Lastly, we demonstrate the feasibility of our method for sleep staging from Apple Watch-derived measurements.

    Conclusion This work demonstrates the viability of high-accuracy, automated multi-class sleep staging from actigraphy and coarse heart rate measures that are device-agnostic and therefore well suited for extraction from smartwatches and other consumer wrist wearables.

    Support This work was supported in part by NIH grant 1R21AG068890-01 and the American Association of University Women.
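    To make the architecture and the imbalance-aware training concrete, the PyTorch sketch below pairs a small per-epoch convolutional encoder with a bidirectional GRU over the epoch sequence and trains it with class-weighted cross-entropy. The layer sizes, channel layout (activity counts plus per-epoch heart-rate mean and standard deviation), and class weights are illustrative assumptions, not the authors' exact architecture or loss.

    ```python
    # Illustrative sketch: CNN epoch encoder + sequence model for sleep staging,
    # with a class-weighted loss to counter under-represented deep sleep.
    import torch
    import torch.nn as nn

    class SleepStager(nn.Module):
        def __init__(self, n_channels=3, n_stages=4, hidden=64):
            super().__init__()
            # Per-epoch feature extractor over samples within each 30-s epoch.
            self.encoder = nn.Sequential(
                nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),        # one feature vector per epoch
            )
            # Sequence model over consecutive epochs captures temporal structure.
            self.rnn = nn.GRU(64, hidden, batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * hidden, n_stages)

        def forward(self, x):
            # x: (batch, n_epochs, n_channels, samples_per_epoch)
            b, t, c, s = x.shape
            feats = self.encoder(x.view(b * t, c, s)).squeeze(-1).view(b, t, -1)
            out, _ = self.rnn(feats)
            return self.head(out)               # (batch, n_epochs, n_stages) logits

    # Class weights roughly inverse to stage frequency (placeholder values) stand
    # in for the abstract's imbalance-aware loss: wake, light, deep, REM.
    weights = torch.tensor([1.0, 1.0, 4.0, 1.5])
    criterion = nn.CrossEntropyLoss(weight=weights)

    model = SleepStager()
    x = torch.randn(2, 120, 3, 30)              # 2 nights, 120 epochs, 30 samples/epoch
    labels = torch.randint(0, 4, (2, 120))      # placeholder PSG stage labels
    logits = model(x)
    loss = criterion(logits.reshape(-1, 4), labels.reshape(-1))
    ```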

    Super-Resolution PET Imaging Using Convolutional Neural Networks


    Longitudinal predictive modeling of tau progression along the structural connectome

    Tau neurofibrillary tangles, a pathophysiological hallmark of Alzheimer’s disease (AD), exhibit a stereotypical spatiotemporal trajectory that is strongly correlated with disease progression and cognitive decline. Personalized prediction of tau progression is therefore vital for the early diagnosis and prognosis of AD. Evidence from both animal and human studies suggests that tau is transmitted along the brain’s preexisting neural connectivity conduits. We present an analytic graph diffusion framework for individualized predictive modeling of tau progression along the structural connectome. To account for physiological processes that actively generate and clear tau alongside passive diffusion, our model uses an inhomogeneous graph diffusion equation with a source term and provides closed-form solutions to this equation for linear and exponential source functionals. Longitudinal imaging data from two cohorts, the Harvard Aging Brain Study (HABS) and the Alzheimer’s Disease Neuroimaging Initiative (ADNI), were used to validate the model. The clinical data used for developing and validating the model include regional tau measures extracted from longitudinal positron emission tomography (PET) scans based on the 18F-Flortaucipir radiotracer and individual structural connectivity maps computed from diffusion tensor imaging (DTI) by means of tractography and streamline counting. Two-timepoint tau PET scans were used to assess goodness of model fit, and three-timepoint tau PET scans were used to assess predictive accuracy by comparing predicted and observed tau measures at the third timepoint. Our results show high consistency between predicted and observed tau and differential tau in region-based analyses. While the prognostic value of this approach needs to be validated in a larger cohort, our preliminary results suggest that our longitudinal predictive model, which offers an in vivo macroscopic perspective on tau progression in the brain, is a promising, personalizable predictive framework for AD.
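    The toy example below illustrates the general idea of network diffusion along a connectome that this abstract builds on: regional tau evolves under a graph Laplacian built from streamline counts, with a source term for generation and clearance. The connectome, diffusivity, seed pattern, and constant source are made-up placeholders, and the homogeneous closed form shown (matrix exponential) plus numerical integration of the source term is a generic sketch, not the paper's fitted model or its linear/exponential source solutions.

    ```python
    # Illustrative sketch of graph diffusion of tau along a toy connectome:
    # dx/dt = -beta * L @ x + s, with L the graph Laplacian of the streamline
    # count matrix and s a constant generation/clearance source term.
    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(1)
    n_regions = 6                               # tiny toy connectome
    C = rng.random((n_regions, n_regions))
    C = (C + C.T) / 2                           # symmetric streamline-count matrix
    np.fill_diagonal(C, 0.0)
    L = np.diag(C.sum(axis=1)) - C              # graph Laplacian

    beta = 0.1                                  # diffusivity (would be fitted per subject)
    x0 = np.zeros(n_regions); x0[0] = 1.0       # seed tau burden in one region
    s = 0.02 * np.ones(n_regions)               # constant source term (placeholder)
    t = 2.0                                     # years between PET scans

    A = -beta * L
    x_no_source = expm(A * t) @ x0              # closed form for pure diffusion (s = 0)

    # Inhomogeneous case integrated numerically for simplicity.
    sol = solve_ivp(lambda tau, x: A @ x + s, (0.0, t), x0, t_eval=[t])
    x_with_source = sol.y[:, -1]

    print("pure diffusion:     ", np.round(x_no_source, 3))
    print("diffusion + source: ", np.round(x_with_source, 3))
    ```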