11 research outputs found

    Motivational Interviewing Improves Medication Adherence: a Systematic Review and Meta-analysis

    No full text
    BACKGROUND: Randomized clinical trials (RCTs), mostly conducted among minority populations, have reported that motivational interviewing (MI) can improve medication adherence. OBJECTIVES: To evaluate the impact of MI, and of the MI delivery format, fidelity assessment, fidelity-based feedback, counselors' background, and MI exposure time, on adherence. DATA SOURCES: We searched the MEDLINE database for studies published from 1966 through February 2015. STUDY ELIGIBILITY CRITERIA: We included RCTs that compared MI to a control group and reported a numerical measure of medication adherence. DATA SYNTHESIS: The main outcome was medication adherence, defined as any subjective or objective measure reported as the proportion of subjects with adequate adherence, or as mean adherence and standard deviation. For categorical variables we calculated the relative risk (RR) of medication adherence, and for continuous variables we calculated the standardized mean difference (SMD) between the MI and control groups. RESULTS: We included 17 RCTs. Ten targeted adherence to HAART. For studies reporting a categorical measure (n = 11), the pooled RR for medication adherence was higher for MI compared with control (1.17; 95% CI 1.05-1.31; p < 0.01). For studies reporting a continuous measure (n = 11), the pooled SMD for medication adherence was positive (0.70; 95% CI 0.15-1.25; p < 0.01) for MI compared with control. The characteristics significantly (p < 0.05) associated with medication adherence were telephonic MI and fidelity-based feedback among studies reporting categorical measures; group MI and fidelity assessment among studies reporting continuous measures; and delivery by nurses or research assistants. Effect sizes differed in magnitude, creating high heterogeneity. CONCLUSION: MI improves medication adherence across different exposure times and counselors' educational levels. However, the evaluation of MI characteristics associated with success yielded inconsistent results. Larger studies targeting diverse populations with a variety of chronic conditions are needed to clarify the effect of different MI delivery modes, fidelity assessment, and provision of fidelity-based feedback.
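    The two effect measures named in the abstract can be sketched as follows. This is a minimal illustration of a single-trial RR and SMD (Cohen's d with a pooled SD), not the meta-analytic pooling the review performed; the example numbers are hypothetical.

    ```python
    import math

    def relative_risk(adherent_mi, n_mi, adherent_ctrl, n_ctrl):
        """RR of adherence: proportion adherent in the MI arm divided by
        the proportion adherent in the control arm."""
        return (adherent_mi / n_mi) / (adherent_ctrl / n_ctrl)

    def standardized_mean_difference(mean_mi, sd_mi, n_mi, mean_ctrl, sd_ctrl, n_ctrl):
        """Cohen's d: difference in mean adherence divided by the pooled SD."""
        pooled_sd = math.sqrt(((n_mi - 1) * sd_mi**2 + (n_ctrl - 1) * sd_ctrl**2)
                              / (n_mi + n_ctrl - 2))
        return (mean_mi - mean_ctrl) / pooled_sd

    # Hypothetical single-trial data, for illustration only
    rr = relative_risk(adherent_mi=80, n_mi=100, adherent_ctrl=65, n_ctrl=100)
    smd = standardized_mean_difference(85.0, 12.0, 100, 78.0, 13.0, 100)
    ```

    An RR above 1 and an SMD above 0 both favor MI, which is the direction of the pooled estimates (RR 1.17, SMD 0.70) reported above.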

    In vitro models of multiple drug resistance

    No full text

    Viscosity

    No full text

    The ChatGPT Artificial Intelligence Chatbot: How Well Does It Answer Accounting Assessment Questions?

    No full text
    ABSTRACT ChatGPT, a large language model chatbot, has garnered considerable attention for its ability to respond to users' questions. Using data from 14 countries and 186 institutions, we compare ChatGPT and student performance on 28,085 questions from accounting assessments and textbook test banks. As of January 2023, ChatGPT provides correct answers for 56.5 percent of questions and partially correct answers for an additional 9.4 percent of questions. When considering point values for questions, students significantly outperform ChatGPT, with a 76.7 percent average on assessments compared to 47.5 percent for ChatGPT if no partial credit is awarded and 56.5 percent if partial credit is awarded. Still, ChatGPT performs better than the student average for 15.8 percent of assessments when we include partial credit. We provide evidence of how ChatGPT performs on different question types, accounting topics, class levels, open/closed assessments, and test bank questions. We also discuss implications for accounting education and research.
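    The point-weighted scoring described above can be sketched as follows. The half-points-for-partial rule is an assumption for illustration; the abstract does not state how partial credit was weighted.

    ```python
    def assessment_score(questions, partial_credit=True):
        """Point-weighted percentage score for a list of (points, outcome)
        pairs, where outcome is 'correct', 'partial', or 'incorrect'.
        With partial_credit, a 'partial' answer earns half its points
        (an assumed scheme, not the paper's documented rule)."""
        total = sum(pts for pts, _ in questions)
        earned = 0.0
        for pts, outcome in questions:
            if outcome == "correct":
                earned += pts
            elif outcome == "partial" and partial_credit:
                earned += pts / 2
        return 100 * earned / total

    # Hypothetical four-question assessment: (points, outcome)
    qs = [(2, "correct"), (2, "partial"), (1, "incorrect"), (5, "correct")]
    strict = assessment_score(qs, partial_credit=False)   # 70.0
    lenient = assessment_score(qs, partial_credit=True)   # 80.0
    ```

    This illustrates why ChatGPT's reported score rises from 47.5 to 56.5 percent when partial credit is awarded: partially correct answers contribute nothing under the strict rule but add points under the lenient one.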
