Benefits and obstacles: factors affecting the uptake of CAA in undergraduate courses
This short paper introduces and outlines a piece of research investigating the use of Computer Assisted Assessment (CAA) with undergraduate students, in order to identify the benefits of CAA as well as the perceived obstacles to its adoption. It is hoped that this research will ultimately be able to inform the future use of CAA at undergraduate level, especially in blended learning environments. The research is currently in progress at the University of Bradford as part of the author's PhD and is feeding into the university's Pathfinder project on e-assessment. The author hopes to take advantage of the 11th International CAA Conference to raise various issues related to this research project with professional colleagues and receive feedback; this should enable decisions to be made on progress to date and inform how the project may be developed in the future.
Assessing understanding of complex learning outcomes and real-world skills using an authentic software tool: a study from Biomedical Sciences
We describe a study conducted during 2009-12 into innovative assessment practice, evaluating an assessed coursework task on a final-year Medical Genetics module for Biomedical Science undergraduates. An authentic e-assessment coursework task was developed, integrating objectively marked online questions with an online DNA sequence analysis tool (BLAST), routinely used by NHS and research professionals. The aim was to combine the assessment of understanding of complex module learning outcomes with real-world authentic skills highly valued in the workplace. This approach challenges the oft-heard accusation that online computer-marked tests can lack validity and authenticity in higher education. The study demonstrates the content and construct validity of this form of e-assessment, showing that careful question design, allied with integration of the real-life BLAST tool, enables instructors to assess complex higher-order understanding and requires students to demonstrate skills relevant to the workplace. A study of three years of test results and measures of internal consistency also shows the reliability of this assessment. In addition, the results of surveys of student opinion and positive feedback from student module feedback questionnaires suggest that it is effective in terms of face validity.
Keywords
Authentic assessment; technology enhanced assessment; assessing deeper learning
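Students in the study used BLAST through the NCBI web interface, but the same kind of sequence identification the coursework asked for can also be scripted. The Biopython sketch below is a minimal illustration of that sort of task, assuming a blastn search against the NCBI nt database with a placeholder query sequence; it is not the assessment's actual implementation.

```python
from Bio.Blast import NCBIWWW, NCBIXML

# Placeholder query sequence; in the coursework, students analysed DNA
# sequences relevant to the Medical Genetics module (sequence is illustrative).
query = "AGCTTAGCTAGGCTAGCTAGGCTAGCTAGGATCGATCGATCGTAGCTAGCT"

# Submit a nucleotide BLAST search against the NCBI 'nt' database.
# This is a network call to the NCBI servers and may take some time.
handle = NCBIWWW.qblast("blastn", "nt", query)
record = NCBIXML.read(handle)

# Report the top hits: the kind of output a student would interpret
# when identifying an unknown sequence.
for alignment in record.alignments[:3]:
    best_hsp = alignment.hsps[0]
    print(alignment.title[:70], f"E-value: {best_hsp.expect:.2e}")
```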
Secure, reliable and effective institution-wide e-assessment: paving the way for new technologies
This short paper addresses a number of the key themes of the 12th International CAA Conference, with particular regard to evaluation, innovation and strategic developments. It is based on the current findings and experiences from two interrelated CAA projects underway at the University of Bradford: "Embedded support processes for e-Assessment" and "Integrating thin client systems and smart card technology to provide flexible, accessible and secure e-Assessment". These two projects, along with specific aims in the University's Learning, Teaching and Assessment Strategy and other projects conducted as part of the institution's e-Strategy, aim to establish an effective and efficient system for online summative and formative assessment at the University of Bradford that will meet the needs of a Higher Education Institution in the 21st century. This is very much a work in progress, and it is hoped that it will be written up as a long paper for a future CAA conference.
Freeing the hoop jumpers: Eportfolio assessment to raise learner engagement on PgCert HE programmes
The idea of professional development has gradually become an accepted and established part of teaching in higher education (Dearing, 1997; DfES, 2003; Browne, 2010). It is now the norm for new university teaching staff in the UK to complete a postgraduate certificate in Higher Education Practice, Learning and Teaching in HE, or Academic Practice as recommended or even mandatory initial professional development (Laycock & Shrives, 2009). While these certificate programmes are now well-established in the sector and are valued for raising the profile of university teaching and educational scholarship (Shrives, 2012), it is not uncommon for learners to view them as a hoop-jumping exercise, and therefore adopt strategic approaches to get through the programme, resulting in disappointing learning gains.
We present an analysis of the barriers to engagement that can cause PgCert learners to take such a hoop-jumping approach to their programme, drawing from policy, literature, and participant views. We then propose a teaching and assessment model to address these barriers using an eportfolio approach. Eportfolio use is not new in PgCert programmes and staff development; it is used notably at York St. John University, where learners create a portfolio to evidence how they meet the UK Professional Standards Framework (UKPSF) and use it as an 'aide memoire' in a summatively assessed dialogue (Asghar, 2014). However, the challenges to engagement that the current study found among our learners led us to propose a different portfolio approach.
There is of course no single right way to design deep learning into a PgCert programme, but we hope that the research-informed eportfolio model presented here may be useful to other practitioners who seek, like us, to remove the hoops from reflective teaching practice.
In search of Osiris: random item selection, fairness and defensibility in high-stakes e-assessment
This paper explores issues of fairness and reliability in e-assessments where objectively marked questions are randomly selected from an item bank. In the UK Higher Education sector, there are several compelling reasons for wanting to use such assessments for summative purposes, but randomly selected items do raise concerns for students, instructors and institutions. Drawing upon latent trait analysis and computer adaptive testing, the paper advocates the use of calibrated item banks and proposes the innovative 'OSIRIS' (objective standardisation in random item selection) method of modifying student grades based on the difficulty level of the questions they received, demonstrating this with data from a large-scale online assessment. The paper concludes that the OSIRIS modified-grades method, in conjunction with following best practice for quality assurance in item banking, may mean that random item selection could in fact enhance the fairness and reliability of assessments, rather than being viewed as a risk.
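The OSIRIS method itself is defined in the paper; as a rough illustration of the underlying idea, the Python sketch below adjusts a raw score by the mean calibrated difficulty of the items a student happened to draw, relative to the bank average. The function name, the linear adjustment, and the weight parameter are all assumptions for illustration, not the paper's actual standardisation formula.

```python
from statistics import mean

def osiris_style_adjustment(raw_score, drawn_difficulties, bank_difficulties,
                            weight=1.0):
    """Illustrative grade modification in the spirit of OSIRIS (hypothetical).

    raw_score          -- percentage score the student achieved (0-100)
    drawn_difficulties -- calibrated difficulty of each item the student
                          received (higher = harder), e.g. from latent
                          trait analysis of past responses
    bank_difficulties  -- calibrated difficulties of every item in the bank
    weight             -- assumed scaling factor (hypothetical parameter)

    A student who drew harder-than-average items gets a small upward
    adjustment, and vice versa. The linear form here is an assumption;
    the paper derives its own standardisation.
    """
    difficulty_gap = mean(drawn_difficulties) - mean(bank_difficulties)
    return raw_score + weight * difficulty_gap

# Example: a student scores 62% on items that were, on average,
# 5 difficulty points harder than the bank average.
bank = [40, 45, 50, 55, 60, 65, 70]
drawn = [55, 60, 65]
print(osiris_style_adjustment(62, drawn, bank))  # -> 67.0
```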
Student engagement with topic-based facilitative feedback on e-assessments
This three-year study investigates how undergraduate students engage with topic-based formative feedback on e-assessments consisting of multiple choice and extended matching questions. After submitting the assessment, the student does not receive directive feedback on individual questions; instead they are shown diagnostic facilitative feedback on the different subject topic areas covered in the test. The study looks into student engagement with this type of topic-based feedback: engagement is measured in terms of time commitment, number of questions answered, and the distribution of the timing of student effort. Through quantitative analysis of three years of student data, the paper explores whether there is evidence of different engagement patterns between the stronger and weaker students, as measured by performance on the subsequent summative module examination. The paper concludes that there is evidence that the more successful students engaged with the formative assessments significantly more than the mid-ranking students, and the least successful students engaged least of all. Qualitative questionnaire data also indicate positive student attitudes towards this kind of feedback and suggest that the feedback is mostly used to evaluate the revision process.
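The engagement measures named in the abstract (time commitment, questions answered, timing of effort) lend themselves to straightforward log analysis. The pandas sketch below, with hypothetical column names and invented values, shows one way such metrics might be derived from attempt logs and compared across attainment bands; it illustrates the kind of analysis described, not the study's actual code or data.

```python
import pandas as pd

# Hypothetical attempt log: one row per question answered in the formative
# test. Column names and values are assumptions for illustration.
log = pd.DataFrame({
    "student_id": [1, 1, 1, 2, 2, 3],
    "timestamp": pd.to_datetime([
        "2024-03-01 10:00", "2024-03-01 10:05", "2024-03-20 09:00",
        "2024-03-28 22:00", "2024-03-28 22:30", "2024-03-10 14:00",
    ]),
    "exam_mark": [72, 72, 72, 48, 48, 60],  # later summative exam result
})

# Per-student engagement metrics: volume of questions answered and the
# span of days over which the effort was spread.
per_student = log.groupby("student_id").agg(
    questions_answered=("timestamp", "size"),
    first_attempt=("timestamp", "min"),
    last_attempt=("timestamp", "max"),
    exam_mark=("exam_mark", "first"),
)
per_student["span_days"] = (
    per_student["last_attempt"] - per_student["first_attempt"]
).dt.days

# Band students by summative exam performance and compare mean engagement.
per_student["band"] = pd.cut(per_student["exam_mark"], bins=[0, 50, 65, 100],
                             labels=["low", "mid", "high"])
print(per_student.groupby("band", observed=True)[
    ["questions_answered", "span_days"]].mean())
```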
e-Assessment for learning: Can online multiple-choice and extended matching questions really provide useful formative feedback?
This article investigates the impact on learning of generic feedback provided automatically after online selected-response questions, such as multiple-choice or extended matching questions. Students on a foundation degree biology module were given the opportunity to engage with a formative e-assessment task which gave them detailed and extensive feedback on each question, in order to help focus their learning and revision in preparation for an online summative assessment. Quantitative data analysis was used to ascertain whether there was an association between student progress and their level of engagement with the formative feedback task. In addition, students were surveyed on their attitudes towards this kind of formative e-assessment. The article concludes that student attitudes to formative e-assessment are generally very positive and that there is evidence to suggest that increased engagement with formative e-assessment tasks like this can be linked to increased progress, especially if the students see the feedback as part of the learning process.
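An association between engagement and progress could be tested in several ways. One simple possibility, shown below with SciPy and invented numbers, is a rank correlation between an engagement measure and the gain from a baseline mark to the summative exam mark. This sketches the kind of quantitative analysis the article describes, not its actual method or data.

```python
from scipy.stats import spearmanr

# Hypothetical per-student data (illustrative values, not the study's data):
# formative questions attempted, and progress measured as the gain in
# percentage points from a baseline mark to the summative exam mark.
engagement = [5, 12, 30, 45, 60, 75, 90, 110]   # questions attempted
progress = [-2, 1, 4, 3, 8, 7, 12, 10]          # mark gain

# Spearman's rank correlation handles monotonic but non-linear trends,
# which suits coarse engagement counts.
rho, p_value = spearmanr(engagement, progress)
print(f"rho = {rho:.2f}, p = {p_value:.3f}")
```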
