Decoding 2.3 million ECGs: interpretable deep learning for advancing cardiovascular diagnosis and mortality risk stratification
The electrocardiogram (ECG) is widely considered the primary test for evaluating cardiovascular diseases. However, the use of artificial intelligence to advance these medical practices and learn new clinical insights from ECGs remains largely unexplored. Utilising a dataset of 2,322,513 ECGs collected from 1,558,772 patients with 7 years of follow-up, we developed a deep learning model with state-of-the-art granularity for the interpretable diagnosis of cardiac abnormalities, gender identification, and hypertension screening solely from ECGs, which are then used to stratify the risk of mortality. The model achieved area under the receiver operating characteristic curve (AUC) scores of 0.998 (95% confidence interval (CI), 0.995-0.999), 0.964 (0.963-0.965), and 0.839 (0.837-0.841) for the three diagnostic tasks, respectively. Using ECG-predicted results, we find high risks of mortality for subjects with sinus tachycardia (adjusted hazard ratio (HR) of 2.24, 1.96-2.57) and atrial fibrillation (adjusted HR of 2.22, 1.99-2.48). We further use salient morphologies produced by the deep learning model to identify key ECG leads that achieved similar performance for the three diagnoses, and we find that the V1 ECG lead is important for hypertension screening and mortality risk stratification of hypertensive cohorts, with an AUC of 0.816 (0.814-0.818) and a univariate HR of 1.70 (1.61-1.79) for the two tasks, respectively. Using ECGs alone, our developed model showed cardiologist-level accuracy in interpretable cardiac diagnosis and advanced mortality risk stratification. In addition, it shows potential to facilitate clinical knowledge discovery for gender and hypertension detection, insights which are not readily available.
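The abstract reports AUC scores with 95% confidence intervals for each diagnostic task. As a minimal illustration (not the authors' code, and with made-up toy data), the AUC can be computed directly as the Mann-Whitney statistic: the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case, with ties counted as one half.

```python
import numpy as np

def auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive scores higher,
    counting ties as 0.5."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: 2 positives, 2 negatives; 3 of the 4 pairs are ranked correctly.
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

Confidence intervals like those quoted in the abstract are commonly obtained by bootstrapping, i.e. recomputing this statistic on resampled patient sets and taking the 2.5th and 97.5th percentiles; the paper does not specify its exact CI procedure, so that detail is an assumption.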
Theoretical Analysis of Emission Spectra of Electronic Transitions of Molecules in Dense Media
MentalQLM: A Lightweight Large Language Model for Mental Healthcare Based on Instruction Tuning and Dual LoRA Modules
Mental disorders pose significant challenges to healthcare systems and have profound social implications. The rapid development of large language models (LLMs) presents new opportunities for improving mental healthcare. However, existing approaches primarily rely on instruction tuning and few-shot in-context learning with massive datasets and large-scale backbone models, leading to significant computational costs. To address these limitations, we propose MentalQLM, a novel lightweight LLM that leverages a dual Low-Rank Adaptation (LoRA) strategy for parameter-efficient fine-tuning. The development of our proposed MentalQLM includes two key stages. Firstly, we perform dataset pruning based on perplexity and diversity analysis to reduce computational load, and apply the first LoRA module during instruction tuning to adapt the base LLM for mental health classification. Secondly, we introduce a dense layer augmented with a second LoRA module, fine-tuned specifically to boost performance on complex multi-class classification problems. Experimental results demonstrate that our proposed MentalQLM, with only 0.5 billion parameters, achieves an average weighted F1-score of 0.778 on mental disorder diagnosis across five benchmark datasets. It outperforms the state-of-the-art instruction-tuned MentaLLaMA-Chat-13B model by 3.2% and the few-shot tuned GPT-4 model by 17.7%. This promising performance, combined with its significantly lower resource requirements, positions our developed MentalQLM as a cost-effective and efficient solution for real-world mental healthcare applications, especially in computationally constrained environments. Code is available at https://github.com/tortorish/MentalQLM
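The parameter-efficiency claim rests on the LoRA mechanism itself: the pretrained weight matrix is frozen and only a low-rank residual is trained. The following is a minimal numerical sketch of one LoRA-adapted linear layer (dimensions, rank, and scaling here are illustrative assumptions, not values from the paper; the actual model stacks such adapters inside a 0.5B-parameter LLM).

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 32, 4, 8  # hypothetical sizes; rank r << min(d_in, d_out)

W = rng.normal(size=(d_in, d_out))     # frozen pretrained weight (not updated)
A = rng.normal(size=(d_in, r)) * 0.01  # trainable low-rank factor
B = np.zeros((r, d_out))               # zero-initialised, so training starts at the base model

def lora_forward(x, W, A, B, alpha, r):
    """Forward pass of a LoRA-adapted linear layer:
    frozen base weight plus a scaled low-rank update (alpha / r) * A @ B."""
    return x @ (W + (alpha / r) * (A @ B))

x = rng.normal(size=(2, d_in))

# With B initialised to zero, the adapted layer reproduces the frozen base exactly.
assert np.allclose(lora_forward(x, W, A, B, alpha, r), x @ W)

# Only the low-rank factors are trained: far fewer parameters than the full weight.
print(A.size + B.size, "trainable vs", W.size, "frozen")  # → 384 trainable vs 2048 frozen
```

The dual-LoRA design in the abstract applies one such adapter set during instruction tuning and a second one on an added dense classification head; how the two stages share or freeze parameters beyond that is detailed in the paper, not here.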
