
    The educational impact of assessment: a comparison of DOPS and MCQs

    Aim: To evaluate the impact of two different assessment formats on the approaches to learning of final year veterinary students. The relationship between approach to learning and examination performance was also investigated. Method: An 18-item version of the Study Process Questionnaire (SPQ) was sent to 87 final year students. Each student responded to the questionnaire with regard to DOPS (Direct Observation of Procedural Skills) and a Multiple Choice Examination (MCQ). Semi-structured interviews were conducted with 16 of the respondents to gain a deeper insight into the students’ perceptions of assessment. Results: Students adopted a deeper approach to learning for DOPS and a more surface approach with MCQs. There was a positive correlation between an achieving approach to learning and examination performance. Analysis of the qualitative data revealed that deep, surface and achieving approaches were reported by the students, and seven major influences on their approaches to learning were identified: motivation, purpose, consequence, acceptability, feedback, time pressure and individual differences between students. Conclusions: The format of DOPS has a positive influence on approaches to learning. There is a conflict for students between preparing for final examinations and preparing for clinical practice.
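    As a purely illustrative aside (not the study’s own analysis code), the sketch below shows how an 18-item SPQ might be scored into three assumed 6-item subscales (deep, surface, achieving) and how the achieving subscale score could be correlated with examination marks. The item-to-subscale mapping, the Likert range and all data here are hypothetical.

    # Minimal sketch, assuming an 18-item SPQ with three 6-item subscales;
    # all responses and examination marks below are simulated, not study data.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    n_students = 87

    # Hypothetical Likert responses (1-5) for 18 items per student.
    responses = rng.integers(1, 6, size=(n_students, 18))

    # Assumed item-to-subscale mapping; the real instrument's mapping may differ.
    subscales = {"deep": slice(0, 6), "surface": slice(6, 12), "achieving": slice(12, 18)}
    scores = {name: responses[:, idx].sum(axis=1) for name, idx in subscales.items()}

    # Hypothetical examination marks, loosely tied to the achieving score.
    exam_marks = 40 + 0.5 * scores["achieving"] + rng.normal(0, 5, n_students)

    # Correlate the achieving subscale with examination performance.
    r, p = pearsonr(scores["achieving"], exam_marks)
    print(f"achieving vs. examination performance: r = {r:.2f}, p = {p:.3f}")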

    Factors influencing students’ receptivity to formative feedback emerging from different assessment cultures

    Introduction: Feedback after assessment is essential to support the development of optimal performance, but often fails to reach its potential. Although different assessment cultures have been proposed, the impact of these cultures on students’ receptivity to feedback is unclear. This study aimed to explore factors which aid or hinder receptivity to feedback. Methods: Using a constructivist grounded theory approach, the authors conducted six focus groups in three medical schools, in three separate countries, with different institutional approaches to assessment, ranging from a traditional summative assessment structure to a fully implemented programmatic assessment system. The authors analyzed data iteratively, then identified and clarified key themes. Results: Helpful and counterproductive elements were identified within each school’s assessment system. Four principal themes emerged. Receptivity to feedback was enhanced by assessment cultures which promoted students’ agency, by the provision of authentic and relevant assessment, and by appropriate scaffolding to aid the interpretation of feedback. Provision of grades and comparative ranking provided a helpful external reference but appeared to hinder the promotion of excellence. Conclusions: This study has identified important factors emerging from different assessment cultures which, if addressed by programme designers, could enhance the learning potential of feedback following assessments. Students should be enabled to have greater control over assessment and feedback processes, which should be as authentic as possible. Effective long-term mentoring facilitates this process. The trend of curriculum change towards constructivism should now be mirrored in assessment processes in order to enhance receptivity to feedback.

    Changing the culture of assessment: the dominance of the summative assessment paradigm

    Background: Despite growing evidence of the benefits of including assessment for learning strategies within programmes of assessment, practical implementation of these approaches is often problematic. Organisational culture change is often hindered by personal and collective beliefs which encourage adherence to the existing organisational paradigm. We aimed to explore how these beliefs influenced proposals to redesign a summative assessment culture in order to improve students’ use of assessment-related feedback. Methods: Using the principles of participatory design, a mixed group comprising medical students, clinical teachers and senior faculty members was challenged to develop radical solutions to improve the use of post-assessment feedback. Follow-up interviews were conducted with individual members of the group to explore their personal beliefs about the proposed redesign. Data were analysed using a socio-cultural lens. Results: Proposed changes were dominated by a shared belief in the primacy of the summative assessment paradigm, which prevented radical redesign solutions from being accepted by group members. Participants’ prior assessment experiences strongly influenced proposals for change. As participants had largely experienced only a summative assessment culture, they found it difficult to conceptualise radical change in the assessment culture. Although all group members participated, students were less successful at persuading the group to adopt their ideas. Faculty members and clinical teachers often used indirect techniques to close down discussions. The strength of individual beliefs became more apparent in the follow-up interviews. Conclusions: Naïve epistemologies and prior personal experiences were influential in the assessment redesign but were usually not expressed explicitly in a group setting, perhaps because of cultural conventions of politeness. In order to successfully implement a change in assessment culture, firmly held intuitive beliefs about summative assessment will need to be clearly understood as a first step.

    Assessment of examiner leniency and stringency ('hawk-dove effect') in the MRCP(UK) clinical examination (PACES) using multi-facet Rasch modelling

    BACKGROUND: A potential problem of clinical examinations is known as the hawk-dove problem: some examiners are more stringent and require a higher performance than other examiners who are more lenient. Although the problem has been known qualitatively for at least a century, we know of no previous statistical estimation of the size of the effect in a large-scale, high-stakes examination. Here we use FACETS to carry out multi-facet Rasch modelling of the paired judgements made by examiners in the clinical examination (PACES) of MRCP(UK), where identical candidates were assessed in identical situations, allowing calculation of examiner stringency. METHODS: Data were analysed from the first nine diets of PACES, which were taken between June 2001 and March 2004 by 10,145 candidates. Each candidate was assessed by two examiners on each of seven separate tasks, with the candidates assessed by a total of 1,259 examiners, resulting in a total of 142,030 marks. Examiner demographics were described in terms of age, sex, ethnicity, and total number of candidates examined. RESULTS: FACETS suggested that about 87% of main effect variance was due to candidate differences, 1% due to station differences, and 12% due to differences between examiners in leniency-stringency. Multiple regression suggested that greater examiner stringency was associated with greater examiner experience and being from an ethnic minority. Male and female examiners showed no overall difference in stringency. Examination scores were adjusted for examiner stringency, and it was shown that for the present pass mark the outcome for 95.9% of candidates would be unchanged using adjusted marks, whereas 2.6% of candidates would have passed, even though they had failed on the basis of raw marks, and 1.5% of candidates would have failed, despite passing on the basis of raw marks. CONCLUSION: Examiners do differ in their leniency or stringency, and the effect can be estimated using Rasch modelling. The reasons for differences are not clear, but there are some demographic correlates, and the effects appear to be reliable across time. Account can be taken of differences, either by adjusting marks or, perhaps more effectively and more justifiably, by pairing high- and low-stringency examiners, so that raw marks can be used in the determination of pass and fail.
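    To make the adjustment logic concrete, the toy simulation below is a minimal sketch, not the FACETS multi-facet Rasch analysis used in the study: it assumes a single effective examiner stringency per candidate, draws candidate ability, station difficulty and examiner stringency from hypothetical distributions, mimics adjustment by adding the (here, known) stringency back onto the raw mark, and reports how many borderline pass/fail decisions flip.

    # Minimal sketch of stringency adjustment under hypothetical facet values;
    # this is an illustration only, not the study's FACETS/Rasch estimation.
    import numpy as np

    rng = np.random.default_rng(1)
    n_candidates, n_stations = 1000, 7

    ability = rng.normal(0.0, 1.0, n_candidates)      # candidate facet (assumed scale)
    difficulty = rng.normal(0.0, 0.2, n_stations)     # station facet (small variance)
    stringency = rng.normal(0.0, 0.5, n_candidates)   # one effective examiner stringency per candidate (simplification)

    # Raw mark: ability minus station difficulty minus examiner stringency, plus noise,
    # averaged over the seven stations.
    raw = (ability[:, None] - difficulty[None, :] - stringency[:, None]
           + rng.normal(0, 0.3, (n_candidates, n_stations))).mean(axis=1)

    # Adjusted mark: add the examiner's stringency back, as a stand-in for the
    # model-based adjustment described in the abstract.
    adjusted = raw + stringency

    pass_mark = 0.0
    raw_pass, adj_pass = raw >= pass_mark, adjusted >= pass_mark
    flipped = np.mean(raw_pass != adj_pass)
    print(f"candidates whose pass/fail outcome changes after adjustment: {flipped:.1%}")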

    Expertise in performance assessment: assessors’ perspectives

    The recent rise of interest in the medical education community in individual faculty making subjective judgments about medical trainee performance appears to be directly related to the introduction of notions of integrated competency-based education and assessment for learning. Although it is known that assessor expertise plays an important role in performance assessment, the roles played by different factors remain to be unraveled. We therefore conducted an exploratory study with the aim of building a preliminary model to gain a better understanding of assessor expertise. Using a grounded theory approach, we conducted seventeen semi-structured interviews with individual faculty members who differed in professional background and assessment experience. The interviews focused on participants’ perceptions of how they arrived at judgments about student performance. The analysis resulted in three categories (assessor characteristics, assessors’ perceptions of the assessment tasks, and the assessment context) and three recurring themes within these categories: perceived challenges, coping strategies, and personal development. Central to understanding the key processes in performance assessment appear to be the dynamic interrelatedness of the different factors and the developmental nature of the processes. The results are supported by the literature on expertise development and are in line with findings from social cognition research. The conceptual framework has implications for faculty development and the design of programs of assessment.