Catriona Johnson, Lecturer in Academic Development for BILT, provides an overview of our Show Tell and Talk event on AI and assessment
Integrating AI into assessment
Sam Bell from the Business School set the scene with a quick poll to gauge to what extent participants at the Show Tell and Talk thought AI should be integrated into assessments. She asked for a show of hands on three options: should academics a) actively encourage students to use AI, b) limit its use with clear parameters, or c) completely prohibit it? Most people in the room favoured the first two options, with only a small minority considering a complete ban to be the way forward.
Sam’s engaging presentation went on to explain how AI has been successfully integrated into one of her final-year undergraduate coursework assessments, using strategies such as restricting the number of sources (which could then be checked more thoroughly by the tutor) and asking students to use ChatGPT for feedback on their coursework rather than to generate the original text. Sam also talked about using marking rubrics to make clear to students that they would be assessed on the quality of their ideas rather than on a perfect academic writing style. This seemed to be a key message for students: their final reflective statements for this piece of coursework revealed that the most common use of AI was to enhance the grammar and vocabulary of their writing.
Guidance for staff and students
Pete Peasey from the DEO then gave a thought-provoking overview of the BILT Associate Project, which has compared AI policies and guidance across a number of HE institutions. He explained that although much of the staff-facing guidance is reactive and lacks detail, some universities, such as King’s and LSE, have created constructive advice for assessment reform in response to the widespread accessibility of GenAI. In terms of student-facing guidance, UoB’s guide to using AI at university (developed by Study Skills) was described as exemplary, as it provides meaningful, detailed advice for students. Despite this excellent resource, students and staff at Bristol have called for even clearer advice on institutional use of AI to help reduce anxiety around this issue.
However, recent focus groups at UoB have highlighted some key differences of opinion between staff and students about how to redesign assessments in response to AI. Pete explained that many members of staff felt that a return to traditional in-person exams, or the use of proctoring software to monitor online exams, would be a guaranteed way to prevent AI use. Students were, perhaps unsurprisingly, strongly against both suggestions, favouring assessments which incorporate AI, e.g. evaluating an AI-generated text, an approach that could help develop AI literacy skills and prepare students for a future in which AI is commonplace. Despite these differences of opinion, it was agreed that central university guidance on academic integrity in relation to AI and assessment should be preventative rather than punitive.
Suggestions from the audience
During the Q&A session at the end, a participant understandably raised the issue of the increased workload caused by rising academic integrity cases. It was acknowledged that certain factors add to marking time, including the need to check sources and to verify the score produced by Turnitin’s AI detection tool, which often generates false positives. Academics discussed the need for a more programmatic approach to dealing with AI cases, so that tutors on individual units weren’t working in isolation trying to cope with heavy marking loads. A call was also made for more AI literacy training for staff to increase expertise in this area.
Another member of the audience emphasised the need to create a disincentive for students to use ChatGPT by designing assessments that AI tools would struggle to produce without sophisticated prompts or lengthy editing, e.g. personalised reflective statements. One way to integrate this into an assessment would be to ask students to insert speech bubbles throughout an essay showing how they have made links between key concepts or constructed a coherent argument; this would also place more value on the process of writing an academic text rather than on a final polished product. Another suggestion for discouraging the use of AI was to make assessment tasks more achievable by reducing the word count (even down to 200 words for formatives) so that students would be less likely to resort to ChatGPT when short of time.
Final summary
Overall, it was clear that there are already many creative strategies in place across the university for managing and integrating AI in assessments. Here’s a summary of some of these suggestions:
· Limit the number of sources for coursework so they can be checked for authenticity
· Allow students to use AI to get feedback on their work, rather than to produce it
· Talk openly with students about the drawbacks of AI and how their own work is superior
· Create disincentives to use AI with authentic and motivating assessment design
· Design manageable assessment tasks, e.g. reduced word counts for formatives
· Add elements which are harder for AI to produce, e.g. ask students to insert reflective comments on the process of writing their assessment
If you can add to this list, please leave a comment below explaining how your School is adjusting assessment practices in response to the rise of AI. It would be great to hear from you.