Bristol University Press got wind of our research into what students think about AI in higher education and invited me to present on a panel at their recent event “How is AI changing the teaching and academic landscape?”. This title set the tone for thought-provoking reflections from the panel and the audience alike.
The first speaker was Mark Carrigan of the University of Manchester. He positioned his talk as a step forward from the initial panic caused by mass access to AI, towards bigger questions about what AI means for educational standards. He articulated the challenges and opportunities we face and the need for AI literacy. He cautioned against a self-defeating reliance on AI automation, recognising that we still depend on human interaction. Issues around data use and legal risks were also highlighted, and he noted that AI comes with a demand on resources that can endanger attempts at sustainability. There are, thankfully, some hopeful considerations in the mix too. These opportunities include broad innovation potential, the ability to be agile and flexible, and the ways we can bring our humanity/humaneness into AI.
I was next to speak and presented findings from research I’ve been leading with Peter Peasey in the Digital Education Office. We spoke to 67 students across the University earlier this year and then undertook qualitative analysis of the data collected, with support from our colleague Joe Gould in the Curriculum Enhancement Team. Some of the takeaways demonstrate that the AI-related topics academics consider important do align with what students are concerned about – which is reassuring! I shared some facts and figures too, such as how c.80% of students already use AI, while many of those who don’t avoid it for moral reasons. I also shared observations made by the research team during student discussions, such as how many students do not know what is and isn’t AI, and how most students do not seem to understand the ethical issues of bias in AI. An academic paper is just about to be submitted and once that’s done, I’ll be sharing full details and results on this blog!
The event also welcomed Colin Gavaghan, Professor of Digital Futures (UoB). Colin is based at the Bristol Digital Futures Institute (BDFI), which seeks to understand, and get ahead of, how digital technologies change our world. He tackled the difficult issue of AI detection, exploring different ways to deal with generative AI specifically and what it takes to detect AI outputs. This was a futures-thinking presentation full of tricky questions about the future of detection tools like Turnitin and the implications detection has for universities.
Finally, David Beer, Professor of Sociology at the University of York, situated all the discussions in relation to algorithmic thinking. Much of his talk stemmed from his recent book “The Tensions of Algorithmic Thinking: Automation, Intelligence and Politics of Knowing” (BUP). Topics included the pursuit of posthuman security, the limits of algorithmic thinking and living in algorithmic times. With regard to AI, David asked how these tensions will persist and what they will mean for knowledge in the future.
Hundreds of people from around the world attended the session, which was kindly chaired by Sanja Milivojevic from the Bristol Digital Futures Institute at the University of Bristol.
*This blog will be updated with a link to the event recording once it goes live.
Keep an eye on the BILT blog for more AI updates and resources in the coming weeks and months!