Every month, I attend JISC’s national community meet-up to discuss shared educator experiences of all things AI. This month, my suggested topic was picked up: “AI rejection by students in their learning and assessments”. Colleagues kindly shared their experiences and thoughts on ways forward. In this informal blog, I share some insights and reflections on how we can respond to this specific emerging issue.

To start, I asked JISC attendees what level of rejection staff are experiencing. Most people had anecdotal evidence that matched my own observations: roughly 2–20% of students reject the integration of AI within their studies.

What does this rejection look like?

Walking out of lectures is on the dramatic and rare side, but it does happen. More frequently, this subset of students articulates dissatisfaction and states that they simply do not want any AI anywhere. Rejection is concentrated in the Arts and Humanities and is less frequent in Science and Engineering, though exceptions occur. This can sometimes lead to class disruption and interpersonal tensions.

In some universities, serious student complaints about “being forced to engage with AI” have emerged, particularly in cases where teaching materials are heavily generated by AI, or where assessments are marked by AI and students are provided with AI-generated feedback. At UoB, 5% of BLUE Unit survey respondents were dissatisfied with AI feedback and marking, while 7% were dissatisfied with AI-generated learning content (as opposed to 25% who were positive towards AI learning content).

Why does rejection occur?

Many students have significant concerns about AI’s production, use and role in their education and wider society (n.66/376), and some report actively choosing not to use AI for value-based or ethical reasons. The biggest reason for rejection is on ethical grounds, with sustainability and bias/discrimination most frequently cited. About 23% of UoB students do not use AI at all (for all reasons, not just ethical).

Many students are frustrated with other students’ inappropriate use of AI, which erodes their sense of trust in the technology and its place in their learning environment. Some students articulate annoyance with pro-AI stances, including the central University tone and arguments around the inevitability of AI in society (6% dissatisfied).

42% of UoB students state that HE students should not use AI technologies in relation to any assessed work, while 50% express concern about how to use AI appropriately within their studies (Oldfield et al. 2025).

A 2026 national survey by HEPI found that a minority of students think that AI has negatively impacted their experiences, noting concerns around skills erosion, social isolation, perceptions of fairness, and future employment.

What approaches can bring about resolution to AI rejection?

Resolution option: dialogue. Often, staff find that when they open a dialogue with students about their reasons for rejecting AI, students learn a bit more about the “behind the scenes” development and pedagogic rationale behind the learning activity or assessment. In many cases, this conversation resolves the problem and students accept the AI integration.

This prompts us to reflect on how useful it can be not just to tell students what we are doing, but to explain why. This can help them to understand the value and benefits of activities.

It is important that such conversations are handled with care so as not to be combative. The emotive nature of differing moral and ethical standpoints can quickly escalate into conflict. This can be avoided by inviting conversation, rather than framing discussion around ethical positions being “wrong” or irrelevant.

Resolution option: provide opportunities to develop a critical understanding of how AI technologies are produced and how they relate to learning. Both staff and students need to be equipped to engage with AI responsibly, and students want AI literacy skills relevant to their studies and their future careers. This ensures that students have the chance to voice concerns within their learning contexts, and to develop confidence and trust that their teachers approach AI with criticality.

Students can access free online training to complete in their own time, in addition to subject-specific guidance provided by teachers.

Resolution option: transparency. There should be greater transparency around AI tools embedded within university platforms, course activities and staff use of AI. Transparency enables trust and informed consent when engaging with AI. It also allows students to see the care and caution with which staff engage with AI.

Resolution option: voice and choice. Opportunities to learn about or use AI should acknowledge that some students avoid AI due to ethical, environmental or intellectual concerns. Courses should not assume or require AI use; instead, where possible and reasonable, students should have choices. This reflects our institutional aim to develop inclusive assessments and to respect student choice. This option enables student autonomy and agency in their learning. It also reflects the diversity of ethical and moral engagement with AI, where many arguments against AI are rooted in legitimate concerns.

Next steps and further resources:

UoB student data used in this blog: Oldfield et al.’s (2025) BILT Student Survey (n.402); DEO Digital Insights Survey (n.358); BILT’s AI Hackathon (n.50) by Esther Ng; and thematic analysis of BLUE unit surveys (n.125) by author.

Note: AI in this blog refers to all types of AI. However, it may be assumed that most student responses relate to generative AI as this is what the qualitative evidence demonstrates.
