AI, Designed for All

The question of AI and assessment

With re-sits taking place or on the horizon, the exam season for the 2023/24 academic year is almost over. How has it all gone? To what extent do you feel your assessments have been AI-proofed? What are you going to do with your assessments moving forward?

I don’t know about you, but it is very rare these days to engage in a conversation about teaching and learning without AI coming into it. More and more we see professional development events with AI in the line-up of presentations. Just last week I was at a Digital Accessibility conference with presentations on ‘Putting AI in its place: Generative AI to improve Digital Accessibility’ and ‘Alt-text: Will AI save the day?’, and today colleagues are attending an event where AI is one of the themes. Suffice it to say, AI is not going away and we need to find a way to live with it, just as we have learnt to live with Google, and, more importantly, to live with AI having an impact on our assessments.

Assessments form a crucial part of the learning journey in most education contexts, with qualifications being awarded based on the outcomes of summative assessments. But how do we know that it is our students’ knowledge that is being recognized and not AI’s? This question can, of course, lead us, as educators, to second-guess our students or doubt their true abilities. How does this impact our students? A recent article on GenAI and trust in teacher-student relationships (Luo, 2024) highlighted students’ fears and the lack of trust they felt from their lecturers in the assessment process due to suspicions about AI use. There was also mention of an absence of transparency where AI scores for submitted assignments were not made available to students. Do we want our students to be afraid of assessments and to feel inhibited in their learning journeys? There is also the question of parity. Putting academic integrity aside for a moment, if students are engaging with AI, some may be more skilled at interacting with it than others and obtain better results, which will aid their success. Other students may have paid for more sophisticated GenAI applications and get a better outcome than those who rely on free services found online. Does this raise the question of whether all students should have equal exposure to AI and be trained in how to use it effectively for tasks?

Traditionally, assessments have assessed academic intelligence: what we know about a subject and how we apply this knowledge. While this has a place in the learning journey and is an important aspect of demonstrating knowledge of a subject, we know that in some cases AI is able to use and apply knowledge more effectively than humans. So, what kind of knowledge should we be assessing? Bearman and Luckin (2020) suggest assessing ‘meta-knowing (knowing what knowledge is and how to use it) and perceived self-efficacy (judging how well our intelligence can equip us to succeed in particular situations)’ (p. 55). They provide an example of developing critical appraisal assessments, which are not innovative in themselves but are not always included as summative assessments or assessments of learning. A particular example could be requiring students to critically appraise a scientific paper against a checklist and compare their appraisal with those of their peers before reflecting on the value of this kind of task. In certain cases the peer could be AI.

If academic intelligence is to be assessed, though, I feel it can only be done reliably and validly by taking the two-lane approach to assessment that the University of Sydney has proposed (Liu and Bridgeman, 2023). In other words, for high-stakes summative assessments, or assessments of learning, where you are assessing students’ academic knowledge (lane 1), you ensure the assessments are carried out in person and supervised. Lane 2 is for those assessments that are more authentic in that they encourage students to engage with AI, preparing them for a future where AI will be integrated into much of what they do, rather like the assessment mentioned above.

And so I return to the question I asked at the start – what are you going to do with your assessments moving forward? Please do share your thoughts in the comments.

References

Bearman, M. and Luckin, R. (2020) ‘Preparing university assessment for a world with AI: Tasks for human intelligence’, in Re-imagining University Assessment in a Digital World, pp. 49-63.

Liu, D. and Bridgeman, A. (2023) ‘Embracing the future of assessment at the University of Sydney’, Teaching@Sydney.

Luo, J. (2024) ‘How does GenAI affect trust in teacher-student relationships? Insights from students’ assessment experiences’, Teaching in Higher Education, pp. 1-16.
