25 research outputs found

    Large language models in healthcare and medical domain: A review

    The deployment of large language models (LLMs) within the healthcare sector has sparked both enthusiasm and apprehension. These models exhibit the remarkable capability to provide proficient responses to free-text queries, demonstrating a nuanced understanding of professional medical knowledge. This comprehensive survey delves into the functionalities of existing LLMs designed for healthcare applications, elucidating the trajectory of their development from traditional Pretrained Language Models (PLMs) to the present state of LLMs in the healthcare sector. First, we explore the potential of LLMs to amplify the efficiency and effectiveness of diverse healthcare applications, particularly focusing on clinical language understanding tasks. These tasks encompass a wide spectrum, ranging from named entity recognition and relation extraction to natural language inference, multi-modal medical applications, document classification, and question answering. Additionally, we conduct an extensive comparison of the most recent state-of-the-art LLMs in the healthcare domain, while also assessing the utilization of various open-source LLMs and highlighting their significance in healthcare applications. Furthermore, we present the essential performance metrics employed to evaluate LLMs in the biomedical domain, shedding light on their effectiveness and limitations. Finally, we summarize the prominent challenges and constraints faced by large language models in the healthcare sector, offering a holistic perspective on their potential benefits and shortcomings. This review provides a comprehensive exploration of the current landscape of LLMs in healthcare, addressing their role in transforming medical applications and the areas that warrant further research and development.
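
    The review's emphasis on clinical language understanding tasks and on biomedical evaluation metrics can be illustrated with a small, self-contained example. The Python sketch below scores hypothetical clinical named-entity-recognition output against gold annotations using entity-level precision, recall, and F1, one of the standard metrics in this area; the sentence, spans, and labels are made up for illustration and are not drawn from the review.

# Minimal sketch: entity-level precision/recall/F1 for scoring clinical NER
# output (e.g., from an LLM) against gold annotations. Entities are
# (start, end, label) tuples; exact-match scoring. Example data is hypothetical.

def entity_f1(gold, predicted):
    gold_set, pred_set = set(gold), set(predicted)
    tp = len(gold_set & pred_set)                      # exact span + label matches
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# "Patient denies chest pain but reports dyspnea on exertion."
gold = [(15, 25, "SYMPTOM"), (38, 57, "SYMPTOM")]
pred = [(15, 25, "SYMPTOM")]            # the model missed the second entity
print(entity_f1(gold, pred))            # -> (1.0, 0.5, 0.666...)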

    Use of electronic personal health record systems to encourage HIV screening: an exploratory study of patient and provider perspectives

    Background: When detected, HIV can be effectively treated with antiretroviral therapy. Nevertheless, in the U.S. approximately 25% of those who are HIV-infected do not know it. Much remains unknown about how to increase HIV testing rates. New Internet outreach methods have the potential to increase disease awareness and screening among patients, especially as electronic personal health records (PHRs) become more widely available. In the US Department of Veterans Affairs medical care system, 900,000 veterans have indicated an interest in receiving electronic health-related communications through the PHR. We therefore sought to evaluate the optimal circumstances and conditions for outreach about HIV screening. In an exploratory, qualitative research study, we examined patient and provider perceptions of Internet-based outreach to increase HIV screening among veterans who use the Veterans Health Administration (VHA) health care system.
    Findings: We conducted two rounds of focus groups with veterans and healthcare providers at VHA medical centers. The study's first phase elicited general perceptions of an electronic outreach program to increase screening for HIV, diabetes, and high cholesterol. Using phase 1 results, outreach message texts were drafted and then presented to participants in the second phase. Analysis followed modified grounded theory. Patients and providers indicated that electronic outreach through a PHR would provide useful information and would motivate patients to be screened for HIV. Patients believed that electronic information would be more convenient and understandable than information provided verbally. Patients saw little difference between messages about HIV and messages about diabetes and cholesterol. Providers, however, felt patients would disapprove of HIV-related messages due to stigma. Providers expected increased workload from the electronic outreach, and thus suggested adding primary care resources and devising methods to smooth the flow of patients getting screened. When given a choice between unsecured email and the PHR as the delivery mechanism for disease screening messages, both patients and providers preferred the PHR.
    Conclusions: There is considerable potential to use PHR systems for electronic outreach and social marketing to communicate to patients about, and increase rates of, disease screening, including for HIV. Planning for direct-to-patient communications through PHRs should include providers and address provider reservations, especially about workload increases.

    Evaluation of open and closed-source LLMs for low-resource language with zero-shot, few-shot, and chain-of-thought prompting

    As the global deployment of Large Language Models (LLMs) increases, the demand for multilingual capabilities becomes more crucial. While many LLMs excel in real-time applications for high-resource languages, few are tailored specifically for low-resource languages. The limited availability of text corpora for low-resource languages, coupled with their minimal utilization during LLM training, hampers the models' ability to perform effectively in real-time applications. Additionally, evaluations of LLMs are significantly less extensive for low-resource languages. This study offers a comprehensive evaluation of both open-source and closed-source multilingual LLMs on a low-resource language, Bengali, which remains notably underrepresented in computational linguistics. Despite the limited number of models pre-trained exclusively on Bengali, we assess the performance of six prominent LLMs, three closed-source (GPT-3.5, GPT-4o, Gemini) and three open-source (Aya 101, BLOOM, LLaMA), across key natural language processing (NLP) tasks, including text classification, sentiment analysis, summarization, and question answering. These tasks were evaluated using three prompting techniques: Zero-Shot, Few-Shot, and Chain-of-Thought (CoT). This study found that the default hyperparameters of these pre-trained models, such as temperature, maximum token limit, and the number of few-shot examples, did not yield optimal outcomes and led to hallucination issues in many instances. To address these challenges, ablation studies were conducted on key hyperparameters, particularly temperature and the number of shots, to optimize Few-Shot learning and enhance model performance. The focus of this research is on understanding how these LLMs adapt to low-resource downstream tasks, emphasizing their linguistic flexibility and contextual understanding. Experimental results demonstrated that the closed-source GPT-4o model, utilizing Few-Shot learning and Chain-of-Thought prompting, achieved the highest performance across multiple tasks: an F1 score of 84.54% for text classification, 99.00% for sentiment analysis, a BERTScore F1 of 72.87% for summarization, and 58.22% for question answering. For transparency and reproducibility, all methodologies and code from this study are available in our GitHub repository: https://github.com/zabir-nabil/bangla-multilingual-llm-eval
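
    Since the abstract above turns on prompting technique and decoding hyperparameters (notably temperature and the number of few-shot examples), a minimal sketch of few-shot classification with an explicit temperature setting is given below. It assumes the OpenAI Python client and a closed-source model such as GPT-4o; the English example texts, label set, and parameter values are purely illustrative and are not taken from the authors' repository (their actual Bengali evaluation code is linked above).

# Minimal sketch: few-shot sentiment classification with a controlled
# temperature, in the spirit of the ablations described above. Assumes the
# OpenAI Python client; examples, labels, and settings are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

few_shot_examples = [
    ("The service was wonderful.", "positive"),
    ("I will never come back here.", "negative"),
]

def classify(text, temperature=0.2, model="gpt-4o"):
    messages = [{"role": "system",
                 "content": "Classify the sentiment as positive or negative."}]
    for example, label in few_shot_examples:
        messages.append({"role": "user", "content": example})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": text})
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=temperature,  # lower values are one lever against hallucination
        max_tokens=5,
    )
    return response.choices[0].message.content.strip()

print(classify("The food was excellent but the wait was long."))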

    Fibro-CoSANet: pulmonary fibrosis prognosis prediction using a convolutional self attention network

    Idiopathic pulmonary fibrosis (IPF) is a restrictive interstitial lung disease that causes lung function to decline through scarring of lung tissue. Although lung function decline is assessed by forced vital capacity (FVC), determining the accurate progression of IPF remains a challenge. To address this challenge, we proposed Fibro-CoSANet, a novel end-to-end multi-modal learning-based approach to predict FVC decline. Fibro-CoSANet utilized computed tomography images and demographic information in a convolutional neural network framework with a stacked attention layer. Extensive experiments on the OSIC Pulmonary Fibrosis Progression dataset demonstrated the superiority of our proposed Fibro-CoSANet by achieving a new state-of-the-art modified Laplace log-likelihood score of −6.68. This network may benefit research areas concerned with designing networks to improve the prognostic accuracy of IPF. The source code for Fibro-CoSANet is available at: https://github.com/zabir-nabil/Fibro-CoSANet.
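
    The modified Laplace log-likelihood quoted above is the evaluation metric of the OSIC Pulmonary Fibrosis Progression challenge (higher, i.e. less negative, is better). The sketch below computes the metric as it is commonly defined for that challenge: the predicted uncertainty is floored at 70 mL, the absolute FVC error is capped at 1000 mL, and the per-sample scores are averaged. This is an illustration of the metric only, not code from the Fibro-CoSANet repository.

# Minimal sketch of the modified Laplace log-likelihood used to score FVC
# predictions in the OSIC Pulmonary Fibrosis Progression challenge.
# Toy data below is hypothetical.
import numpy as np

def laplace_log_likelihood(fvc_true, fvc_pred, sigma):
    """fvc_* in mL; sigma is the model's predicted uncertainty per sample."""
    sigma_clipped = np.maximum(sigma, 70.0)                    # floor on uncertainty
    delta = np.minimum(np.abs(fvc_true - fvc_pred), 1000.0)    # cap on absolute error
    per_sample = (-np.sqrt(2.0) * delta / sigma_clipped
                  - np.log(np.sqrt(2.0) * sigma_clipped))
    return float(np.mean(per_sample))

fvc_true = np.array([2750.0, 3100.0])   # measured FVC in mL (toy values)
fvc_pred = np.array([2800.0, 2900.0])   # model predictions
sigma    = np.array([200.0, 250.0])     # predicted uncertainty per measurement
print(laplace_log_likelihood(fvc_true, fvc_pred, sigma))  # roughly -6.5 for this toy case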