Higher Education continues to face thorny challenges from AI, as outlined at this week’s QAA Insights Conference. In one of the breakout groups, titled “Compassion in Education”, two speakers presented different perspectives on how to respond to the messy problems educators face in this fast-moving tech space.
First up was Prof David Webster from the University of Liverpool, speaking on how AI can threaten our ability to offer a compassionate and humane education. He of course acknowledged the benefits of AI, but the talk focused on the perils. The big one he pointed out is the shift in how students work and in their perception of something being hard. In the context of cheap apps promising hacks and shortcuts to learning, is the natural and perfectly legitimate struggle of learning being set aside? Importantly, he noted that AI products are neither neutral nor altruistic services; they are created by for-profit entities with their own agendas. In this tangle, outsourcing learning means putting the very act of learning at risk.
Universities are therefore left with multiple, and familiar, dilemmas: whether to ban, regulate or embrace AI; how to develop AI literacy; how to work around the arms race of detection tools; how to address the myriad direct and indirect ethical quandaries; and how to fulfil our duty of care to prepare students for the future.
What are his solutions? There are no easy take-away answers here, only further provocations for us all. Universities must shift from being reactive to offering AI leadership. In this vein, “the job of the University is to invent the future”. What does that mean for us in Bristol?
The second speaker was Prof Mary Davis of Oxford Brookes University, presenting her research and practice on inclusive guidance for students on ethical decision-making with AI. For context, Mary pushes back against the use of AI detection software. Instead, she wants us to focus on more positively framed approaches that encourage students to become more active in their thinking about AI use. In her words, “we shouldn’t spend our time detecting AI, we should be detecting learning”.
She shared details of the declaration forms she uses as a starting point for students to acknowledge AI use in all its expressions. Her advice is to make declaration forms as simple as possible, with prompts such as which AI tools were used and how they were used. Her review of student responses from 2023 breaks the types of use down as follows: 20% spelling and grammar; 13% research aid; 12% learning about a topic; 9% rephrasing; 9% planning and structuring; and then a smattering of reducing word count, creating content and giving feedback.
The seesaw metaphor was helpful for understanding the upskilling and downskilling encountered in these uses. Travelling down on the seesaw might mean using AI as the author or producer of work to avoid the learning process, while travelling up might mean using AI as an assistive tool to develop knowledge and ability. This helps reframe AI use away from a binary of good or ill and towards a more nuanced view of its application in learning.
This data came from an AI course she developed, taken by 500 students, which employed Universal Design for Learning principles. While Mary sees AI literacy skills as subject-specific, the ethical decision-making covered in the course can be taught regardless of discipline.
She used a traffic light system to help students navigate inappropriate use (stop! Red light!), at-risk practice (check! Amber light!) and appropriate use (go! Green light!). The topic of authorship is a good example of what this looks like in practice:

- Green light for appropriate use: ethical use where the student is still the author of the assignment
- Amber light for at-risk practice: relying on AI tools for part of the assignment
- Red light for inappropriate use: unethical use where the student is no longer the author of the assignment
Students used an interactive tool with examples to help them reflect on the traffic light system. Follow-up questionnaires and focus groups demonstrated the effectiveness of her practice. Students stated that the course helped them explore AI without relying on it to complete assignments, learn the boundaries of AI, and understand how detrimental AI can be to integrity. Students also shared how they applied their learning from the course positively, and one noted: “I didn’t really learn anything new but it has made me confident to know I am approaching using AI in an ethical way”. That validation of appropriate behaviours is just as valuable as students improving their ethical use practices.
Some of the takeaways from this talk included prioritising accessibility and inclusion in any teaching about ethical AI, the importance of gathering feedback on what you are doing, working collaboratively with students, and taking an agile approach so that guidance can be amended as needed.
I want to finish by sharing some of the questions Mary posed on the final slide of her presentation, as they are great questions for us all. I hope you will share your thoughts with BILT in the comments section below, on our social media channels or by emailing us at bilt-info@bristol.ac.uk. As ever, we want to hear about your practice with AI!
- How are you addressing inclusion and accessibility issues with students’ use of AI?
- Do you use a declaration form? How is it working? If not, how are you finding out about student practices with AI?
- How are you teaching ethical decision making?
- What kind of guidance do students need?