13 research outputs found
Estimation and psychometric analysis of component profile scores via multivariate generalizability theory
Component Universe Score Profile (CUSP) analysis is introduced in this paper as a psychometric alternative to multivariate profile analysis. The theoretical foundations of CUSP analysis are reviewed, which include multivariate generalizability theory and constrained principal components analysis. Because CUSP combines generalizability theory and principal components analysis, its accuracy and precision at small sample sizes were evaluated via a simulation analysis. The stability and accuracy of CUSP analysis were also assessed using a large sample of examinee data (n = 5,000) from the College Board Advanced Placement (AP) Statistics subject test. The simulation showed that CUSP reliability estimates and coordinate estimates were generally unbiased when sample sizes were larger than n = 100. Subtest covariance structure caused significant bias in reliability and coordinate estimates when the covariance matrix was compound symmetric. Coordinate standard error estimates were significantly biased when sample sizes were very small (n < 100). The AP data analysis, like the simulation, showed that profile estimates were consistent and stable for larger samples, but profile scores were inconsistent for the small sample condition (n = 50). CUSP external analysis illustrated the use of an external variable (gender) to define and predict AP Statistics profiles. The estimated external gender profile was not useful for classifying examinees by gender, achieving accuracy approximately equal to chance assignment.
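A minimal numeric sketch (not the authors' code) of why a compound-symmetric subtest covariance matrix is a hard case for component-based methods: with equal variances and a single common covariance, the eigenstructure is degenerate, so the trailing components have no unique orientation. The matrix size and values below are illustrative assumptions.

```python
import numpy as np

# Compound-symmetric covariance for k subtests: equal variances on the
# diagonal, a single common covariance off-diagonal.
k = 4            # number of subtests (illustrative)
var, cov = 1.0, 0.6
sigma = np.full((k, k), cov)
np.fill_diagonal(sigma, var)

# Eigenvalues: one dominant value var + (k-1)*cov, then k-1 tied values
# var - cov. The tie means components 2..k are rotationally arbitrary,
# which destabilizes component/profile estimates.
eigvals = np.sort(np.linalg.eigvalsh(sigma))[::-1]
print(eigvals)  # ≈ [2.8, 0.4, 0.4, 0.4]
```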
Detection of Differential Item Functioning in Dichotomous and Polytomous Items: The Exploratory Correspondence Analysis Method
Abstract not available.
A resource efficient and reliable standard setting method for OSCEs: Borderline regression method using standardized patients as sole raters in clinical case encounters with medical students
Finding a reliable, practical and low-cost criterion-referenced standard setting method for performance-based assessments has proved challenging. The borderline regression method of standard setting for OSCEs has been shown to estimate reliable scores in studies using faculty as raters. Standardized patients (SPs) have been shown to be reliable OSCE raters but have not been evaluated as raters using this standard setting method. Our study sought to find whether SPs could be reliably used as sole raters in an OSCE of clinical encounters using the borderline regression standard setting method. SPs were trained on a five-point global rating scale. In an OSCE for medical students, SPs completed skills checklists and the global rating scale. The borderline regression method was used to create case passing scores. We estimated the dependability of the final pass or fail decisions and the absolute dependability coefficients for global ratings, checklist scores, and case pass-score decisions using generalizability theory. The overall dependability estimate was 0.92 for pass or fail decisions for the complete OSCE. Dependability coefficients of individual case passing scores ranged from 0.70 to 0.86, demonstrating high dependability. Based on our findings, the borderline regression method of standard setting can be used with SPs as sole raters in a medical student OSCE to produce a dependable passing score. For those already using SPs as raters, this can provide a practical criterion-referenced standard setting method for no additional cost or faculty time.
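The borderline regression method described above has a standard computational core: regress examinees' checklist scores on the raters' global ratings, then take the fitted checklist score at the "borderline" point of the rating scale as the case passing score. A minimal sketch with fabricated data follows; the scores, sample size, and the choice of rating 2 as the borderline category are illustrative assumptions, not details from the study.

```python
import numpy as np

# Fabricated OSCE case data: each pair is one student's SP global rating
# (1-5 scale) and checklist score (percent).
global_ratings   = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5, 5])
checklist_scores = np.array([40, 48, 52, 60, 58, 63, 72, 70, 85, 88])

# Ordinary least-squares fit: checklist = b0 + b1 * rating
b1, b0 = np.polyfit(global_ratings, checklist_scores, 1)

# Assume rating 2 is the "borderline" category on this scale; the case
# passing score is the regression prediction at that point.
BORDERLINE = 2
passing_score = b0 + b1 * BORDERLINE
print(round(passing_score, 1))
```

With these fabricated numbers the fitted passing score lands near the middle of the borderline students' observed checklist scores, which is the intuition behind the method.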
Validity evidence for a novel instrument assessing medical student attitudes toward instruction in implicit bias recognition and management
Abstract
Background
Implicit bias instruction is becoming more prevalent in health professions education, with calls for skills-based curricula moving from awareness and recognition to management of implicit bias. Evidence suggests that health professionals and students learning about implicit bias (“learners”) have varying attitudes about instruction in implicit bias, including the concept of implicit bias itself. Assessing learner attitudes could inform curriculum development and enable instructional designs that optimize learner engagement. To date, there are no instruments with evidence for construct validity that assess learner attitudes about implicit bias instruction and its relevance to clinical care.
Methods
The authors developed a novel instrument, the Attitude Towards Implicit Bias Instrument (ATIBI), and gathered evidence for three types of construct validity: content, internal consistency, and relationships to other variables.
Results
The authors used a modified Delphi technique with an interprofessional team of experts, as well as cognitive interviews with medical students, leading to item refinement to improve content validity. Seven cohorts of medical students (N = 1,072) completed the ATIBI. Psychometric analysis demonstrated high internal consistency (α = 0.90). Exploratory factor analysis resulted in five factors. Analysis of a subset of 100 medical students demonstrated a moderate correlation with similar instruments, the Integrative Medicine Attitude Questionnaire (r = 0.63, 95% CI: [0.59, 0.66]) and the Internal Motivation to Respond Without Prejudice Scale (r = 0.36, 95% CI: [0.32, 0.40]), providing evidence for convergent validity. Scores on our instrument had low correlation with the External Motivation to Respond Without Prejudice Scale (r = 0.15, 95% CI: [0.09, 0.19]) and the Groningen Reflection Ability Scale (r = 0.12, 95% CI: [0.06, 0.17]), providing evidence for discriminant validity. Analysis resulted in eighteen items in the final instrument; it is easy to administer, both in paper form and online.
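The internal-consistency statistic reported here (Cronbach's α) has a simple closed form over an item-response matrix. A hedged sketch with simulated data follows; the item count, sample size, and response model are illustrative assumptions (the ATIBI itself has eighteen items).

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulated responses: a shared latent trait plus item-level noise makes
# the items positively intercorrelated, yielding a high alpha.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
items = latent + 0.8 * rng.normal(size=(200, 6))  # 6 hypothetical items
print(round(cronbach_alpha(items), 2))
```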
Conclusion
The Attitude Towards Implicit Bias Instrument is a novel instrument that produces reliable and valid scores and may be used to measure medical student attitudes related to implicit bias recognition and management, including attitudes toward acceptance of bias in oneself, implicit bias instruction, and its relevance to clinical care.
Novel use of an OSCE to assess medical students’ responses to a request for a low value diagnostic imaging test: A mixed methods analysis
Objective
Evaluate medical students' communication skills with a standardized patient (SP) requesting a low value test, and describe the challenges students identify in addressing the request.
Methods
In this mixed-methods study, third-year students from two medical schools obtained a history, performed a physical examination, and counseled an SP presenting with uncomplicated low back pain (LBP) who requests an MRI that is not indicated. SP raters evaluated student communication skills using a 14-item checklist. Post-encounter, students reported whether they ordered an MRI and what challenges they faced.
Results
Students who discussed practice guidelines and the risks of unnecessary testing with the SP were less likely to order an MRI. Students cited several challenges in responding to the SP's request, including patient characteristics and circumstances, lack of knowledge about MRI indications and alternatives, and lack of communication skills to address the patient's request.
Conclusions
Most students did not order an MRI for uncomplicated LBP, but only a small number of students educated the patient about the evidence against unnecessary imaging or the harms of unnecessary testing.
Practice implications
Knowledge about unnecessary imaging in uncomplicated LBP may be insufficient to ensure adherence to best practices; longitudinal training in challenging conversations is needed.
