The Investigation

Over the past two years, the use of generative AI, and especially Large Language Models (e.g. ChatGPT, Copilot and Gemini), has become widespread. These tools show great promise for educators, both in helping students connect with new approaches to their own learning and in helping with teaching-related tasks to reduce workloads. However, using these tools without nuance carries substantial potential risks. Bristol students see AI as part of their future but have some concerns about its use (Tierney et al, 2025). To understand how University of Bristol staff are deploying AI tools, what they feel the strengths of these tools are, and what their concerns are, we conducted a university-wide survey in 2025.

The Discoveries

We had 74 respondents, with nearly equal representation from the three faculties and a good balance between pathway 1 and pathway 3 staff and across different career stages.

The tasks staff reported using AI for were varied, but the most common were administrative, and related to the creation of assessment material. 

In general, the outputs were perceived as effective for most tasks, with the clear exception of generating images and diagrams for teaching materials.

The primary benefit that staff reported was saving time. This fits with the administrative nature of many of the tasks reported, and supports the UK government’s view that AI might be able to free up time for educators. However, overall, staff disagreed that generative AI produces better, more creative or more diverse materials. They most strongly disagreed that generative AI increases students’ depth of knowledge and understanding: a concerning outcome if generative AI use were to be expanded without careful thought.

Staff’s concerns about AI were varied. The most common related to the reliability of responses: something which might change as LLMs develop. The second most common concern was that AI would reduce critical thinking; this has also been perceived as a risk by Bristol students (Tierney et al, 2025). Issues around bias, discrimination and ethics also figured prominently.

Interestingly, about a fifth of respondents reported they had never used AI, yet they still took the time to complete the survey. This is a group whose voice needs to be heard and whose needs should be considered.

The university released guidance for staff in the 2024-25 academic year, which it continues to update, as well as study-skills tutorials for students. Despite this, at the time of the survey, only a fifth of staff thought they had enough guidance, and some participants were not aware of the guidance at all. In this fast-moving field, support and training for staff is a key need identified by survey participants.

Next Steps

Beyond the quantitative data gathered, we also collected qualitative data and conducted in-depth interviews with a number of respondents. This is the first of several BILT posts; stay tuned for more results, and for some case studies we have gathered in the course of our research.

Contact

Project members: Claire Hudson, Sarah Zaghloul, Jessica Irving, Shan Hua
