This blog touches on the move away from examination-based assessments to the longer ‘timed’ essays and open-book equivalents that it became necessary to adopt this year, and the implications of that move for academic integrity. If you haven’t already, I would recommend reading Dr Isabel Hopwood’s blog on the recent TESTA findings at Bristol, as it briefly covers this and other assessment topics.
I’ll be writing about three students’ experiences of the assessments that were changed this year. The first student studied Economics. Before the need to change assessments, Economics students had a very exam-heavy workload, with two-thirds of summative assessments in years 1 and 3 carried out by exam, and a mammoth 92% in year 2. Given how accustomed the student was to exams, it was unsurprising that they quite liked them, particularly for the mathematics elements of the course.
As a result of the need to change assessment methods, many of these exams became either coursework essays or 7-day open-book distance exams. The student commented that the move to more essays didn’t work as well, partially because the cohort was not clear on whether the marking criteria had been changed alongside the change of assessment method. Something that may have confused the students further was the inclusion of numerical problems within their ‘essays’. This student commented that essays work best when there isn’t a right or wrong answer. These two comments align well: the exam questions were designed to test specific skills under invigilated, closed-book conditions, so simply converting them into essays left students confused. The same could be said of a 7-day open-book assessment the student completed, which they noted had been made so difficult that even with the resources available to them the cohort struggled to answer all of the questions.
On the whole, however, this Economics student really quite liked the move to 7-day open-book exams, finding them less stressful because there was more time to prepare. Another student I spoke to studied Law. Law is similar to Economics in that two-thirds of its assessments are typically timed examinations. All of these exams would usually have been crammed into one week; instead they were changed to 7-day open-book essays. Whilst these essays were not easy, and assessment work and deadlines were still constant, the student much preferred them to the exams. They liked that the essays tested application of knowledge rather than memory of knowledge, and that the notes they had written throughout the course were actually of use. Whilst the essays were still stressful, the student felt less pressured by them.
I asked each student I spoke to whether, from their perspective, more collusion, cheating or plagiarism was happening because of this lack of examinations. The Law student responded that it wasn’t worth cheating, as it would be found out and could have knock-on consequences for your career; as such, they didn’t believe much plagiarism happened during the last assessment season.
I also spoke to a student who studied Criminology, and their perspective on plagiarism was quite different. Most of their assessments had been coursework to begin with; the small number of exams had been turned into 24-hour open-book exams. However, students didn’t seem clear on what exactly counted as ‘cheating’. As such, there was much talk in course group chats during assessment periods, with some students muting the chats so they were not involved and others calling out those discussing assessments as behaving inappropriately. I asked how the school had dealt with this, if they knew. I believe the school has tried to assess the impact by sending students a survey asking questions such as whether they have ever paid for an essay. I am doubtful, though, as to whether any student would admit to it.
Ultimately, the point of this blog is to discuss constructive alignment and how it links to academic integrity. Constructive alignment refers to the construction of knowledge by the student (leaning on the constructivist belief that knowledge cannot simply be imparted or transmitted), and the alignment of the learning activities set with the intended learning outcomes (Biggs, 2003). In practice, this means aligning learning objectives with taught content and with the assessment task, whilst taking into account the conditions under which the assessment will be undertaken.
Most students I have spoken to, both for this blog and as part of the University Quality Team’s School Reviews, said they much preferred completing essays over a longer period of time rather than under examination conditions: there was less pressure to perform in one 2–3-hour window, and they could use the notes they had taken to actually apply their knowledge rather than simply regurgitate it. Whilst few students thought collusion or other types of plagiarism were occurring, they did comment that it seemed more prevalent where students were answering the same questions. Where students picked their own essay topic or title, they were aware that if it were too similar to another student’s it would likely arouse suspicion.
So, what does this mean for future practice and for deterring collusion and plagiarism? For any assessment that does not have a right or wrong answer, and where students can choose their own topic, a piece of coursework or a 7-day open-book assessment would be best. This way students feel less stressed and more engaged in practically applying their knowledge, and there is little risk of plagiarism, as any copying would be too obvious. I would encourage anyone who currently prescribes essay titles to let their students pick their own: not only does this reduce the risk of collusion, it also enables students to dive deep into an area of the unit they are personally interested in, potentially involving wider reading and more research, which is always a good thing!
For assessments with right or wrong answers, those testing mathematical skills, or those simply testing memory of knowledge, exams may be what we return to. Whilst I would like to say that the more we can limit exam use the better, I don’t feel a 24-hour open-book assessment is a substitute for an invigilated exam where academic integrity is concerned. If we use this method for closed-answer or mathematical knowledge testing, the risk of group-chat collusion is potentially too high; it could be easily remedied by returning these assessments to in-person invigilated examinations when restrictions allow.