Title slide featuring a skyline image of the Bristol Campus and the name of the authors

Case study: A Reflective Approach to AI: Co-creating resources to support HE students’ critical use of AI

What’s this project?   

This research examined University of Bristol students’ understanding and consideration of AI technologies, addressing themes of students’ ethical and environmental concerns over AI and the importance of students’ voices in institutional-level policies and practices related to AI. Based on the study findings, we also developed a set of student-facing resources and activities that can be used across disciplines and that aim to support students’ critical consideration of these technologies in their studies.  

Our research was funded by a BILT Education Development grant and ties into BILT’s Curriculum framework categories of Disciplinary and interdisciplinary, Personal development and Sense of belonging.

Who’s involved?   

The project members are Alison Oldfield (School of Education), Rosey Crow (Library Services), Christy Fisher (Library Services), Kerrianne Orriss (Library Services), Maxime Perrot (School of Education) and Tim Worth (Study Skills). 

Why do it? 

As artificial intelligence (AI) becomes increasingly embedded in university life, students are navigating a complex landscape of opportunities, challenges, and ethical dilemmas. Our study asked students across all faculties and levels of study about their AI use, their concerns about AI technology, and what support they would like from the university around these tools. We then developed student-facing resources and activities that respond to these issues and can be shared across the University.

What we did and what we found:

Understanding Student Use and Concerns 

The research began with a university-wide survey of 402 students, revealing that 86% reported using an AI tool in their studies. ChatGPT (77%), Google Translate (41%), and Grammarly (39%) were the most commonly used tools. Students reported having used AI for a variety of study-related tasks, most prominently summarising information, simplifying complex concepts, and finding research materials.

Despite this widespread use of AI, students also expressed significant concerns about the technology. These included misinformation, bias, academic integrity (including accusations of cheating), data privacy, and environmental impact. One student shared, “I know some specific AI tools are useful, but I am not confident I could sort out the good from the bad.” Another voiced ethical concerns: “I think AI is fundamentally unethical and environmentally damaging and should not be used.”  

Co-Developing Activities with Students  

Building on the survey information about students’ uses of, understanding of, and concerns about AI technologies, we developed five student-facing activities, refined through workshops and focus groups with 29 students from across the university. These activities aimed to foster critical thinking, prompt consideration of the ethics around AI production and use, encourage reflection on our relationships with AI technologies through drawing and metaphor, and support practical evaluation skills. The images below are from two of the activities.

Images 1 and 2: Image 1 shows one student’s drawing for an introductory activity, which asked students to ‘draw AI as if it was an animal’. Image 2 shows a set of ‘dilemma cards’ that prompt debate around different AI-related issues. 

Feedback from students at the workshops included: 

“I’ve never really sat there and thought about what my relationship to AI is… it got me thinking, ‘What am I actually using it for?’” 

“I now understand why certain actions were flagged as problematic… I’ve not really been able to understand the intricacy behind it.” 

“The environmental regulation aspect… opened my eyes to a whole lot of stuff I didn’t know about before.” 

Research Findings:  

AI’s ambiguity 

The project revealed that students view AI as ambiguous and ever-changing, and that they struggle to identify which technologies are, or make use of, AI beyond common generative AI tools. For example, while nearly all survey respondents recognised ChatGPT as an AI technology, only 27% said they considered Google Translate to be an AI tool.

Broad and variable use of AI technologies 

While the majority of students said they have used AI in their studies, use of these technologies is variable and uneven. Students expressed concerns about fairness: for example, subscription-based AI tools were seen as offering an unfair advantage to those who paid for them, and some students reported fearing false accusations of AI-generated work. Some students also reported actively choosing not to use AI because of its potential impact on their learning and issues such as copyright, unethical labour practices and AI’s environmental impact.

Cross-discipline vs. bespoke support 

The teaching activities developed in this research address broader concerns noted across the student body and can be facilitated by whole-student services such as libraries and Study Skills. However, variable student experiences and concerns across disciplines suggest a need to supplement these broader supports with bespoke guidance for specific fields and courses. As one student said, “Subject-specific workshops could also be helpful because your perspective as a computer science student vs my perspective as a language student – they’re going to be completely different and the issues are going to be different as well.”

Being clear and non-judgemental 

Importantly, students in this research called for clearer guidance on AI usage at university, non-judgmental spaces to discuss AI in their studies and lives, and discipline-specific support. They valued practical tools over vague advice. “They talk about… having your critical thinking mindset on, but it’s like, what specifically should I do?” one student asked. 

Students as active participants in the AI ‘debate space’ 

This research underscores the importance of student voice in shaping university responses to AI. Some students said they want to be informed, critical participants in the evolving AI landscape. The survey and focus groups demonstrated a wide range of student views on AI technologies, from positivity about the educational benefits to significant ethical concerns about its use. AI is, as one student described it, still in ‘a debate space.’  

It is perhaps in this ‘debate space’ where educators can support students to consider AI technologies as ‘active citizens who make informed decisions for more humane and just communities’ (Krutka et al, 2022, p. 231). Supporting students to understand and scrutinise the roles of these technologies in their lives and learning is a good first step – and listening to student experiences, concerns and views is an important part of that process.   

Outputs and Impact 

The full report on our findings and the student-facing activities, with activity instructions, are available to download here: A Reflective Approach to AI: Co-creating resources to support HE students’ critical use of AI – Research Outputs – University of Bristol 

  • Findings have been shared at the 2025 BILT conference, where UoB teaching staff expressed interest in incorporating the resources into their teaching in 2025/26, and in a workshop at the recent British Educational Research Association (BERA) Conference in Brighton.
  • The workshops and focus groups informed the development of a library guide, AI and the Library. One of the resources, Using the CRAAP framework, is included in the guide and has attracted interest from other institutions. 
  • Team members have contributed to university policy and BILT online training modules for university staff. 

Next Steps 

  • The activities will be incorporated into central and embedded library workshops.  
  • In the School of Education, workshops are being developed in collaboration with the school integrity officer using these activities, to be delivered to all students in timetabled sessions in TB1.  
  • A research article has been submitted to the journal Technology, Pedagogy and Education.
  • Further research interests include how students use AI across subjects; how spectrums of use affect fairness and peer collaboration; intentional or critical refusal of AI; student contributions to university practice and policy; and how pedagogical approaches can support students’ development of criticality, scrutiny and reflection around AI. 

For more information, contact the authors:
