Last month I attended two AI events for higher education. Speakers explored familiar territory, such as how AI can be leveraged to help the sector, alongside perennial provocations about the challenges and opportunities the technology affords.
First up was the European University Association’s 2025 AI conference. The opening plenary speakers reflected on principles and policies in AI, and I was pleased to see ethics prioritised. Suggestions for tackling this area included setting up specialist institutional ethics committees. There were plenty of implications for research, notably statutory conflicts for data transfers between the EU and USA (including remote data access) and the danger that AI can already re-identify participants in anonymised datasets. In the future, this latter issue may affect the legal thresholds for anonymity.
Another big topic was AI competencies, with some 70% of SMEs reporting problems recruiting for them. Of course, this may partly be due to the lack of a clear definition of what AI competencies actually are. It prompted me to reflect on how we articulate the skills students develop and how this relates to the Bristol Skills Profile. Speakers noted that the risks of de-skilling are high because of AI, and that staff need to be proactive in reflecting on the types of tasks students need to be able to carry out, especially since some de-skilling can be very dangerous (the medical and engineering fields were highlighted here).
Some speakers argued that the sector needs to move at speed and work directly with AI companies to build competence and competitiveness. This was framed as a challenge for leaders to move away from traditional operational models because “we are in a different world”. Many of us know what slow-moving beasts universities are and how fast AI technology is shifting, so this is no easy undertaking. Future thinking was paramount here, with speakers advising institutions to start forecasting, to think about the key skills of the future, and to remain imaginative while also foregrounding institutional accountability. A real focus was on how AI right now isn’t very trustworthy, and how skills for building trustworthy AI approaches will therefore be valuable.
In relation to education specifically, one speaker suggested that the near future of AI will be fully personalised, with access to all of a person’s past decisions. When this happens, students might be considered “augmented humans”, and it will be for universities to find new ways to test their performance in this context.
For Vilnius University, rethinking lectures and exams in relation to AI has been transformative. They have shifted from testing what students know to testing how they think. They provided an overview of their approach in the table below, a useful prompt as we reflect on assessment design. This was all facilitated through the creation of two “AI knowledge twins”, bespoke chatbots that students were directed to interact with (you can chat with them here: Paul AI; Goda AI).
| Before AI integration | After AI integration |
| --- | --- |
| Lectures focused on repeating content | Lectures became interactive and case-based |
| Students asked basic, factual questions | Students used AI for basics; deeper class discussions |
| Exams tested memorisation and recall | Exams tested legal reasoning and application |
| No AI use allowed | All AI tools encouraged, incl. AI knowledge twins |
| Traditional prep, lots of repetition | More time spent on problem-solving and reflection |
Another education-focused talk discussed PhD students. Lessons learned from five Flanders institutions suggested that AI training for PhD students needs to be tailored to them, offer domain-specific content, set clear prerequisites and learning outcomes, and follow a modular design.
The other event I attended was hosted by the King’s Institute for AI (all sessions will be added to their YouTube channel). One of the most enjoyable sessions was a panel of creative academics reflecting on the fiction and reality of AI’s impact on society, education and research. I must admit there were far more nerdy references to science fiction than is normally socially palatable, so I had a blast! From Battlestar Galactica to Astro Boy, Marvin the Paranoid Android to HAL in 2001: A Space Odyssey, Terminator’s Skynet to Altered Carbon, a galaxy of sci-fi references filled the lively chat. All this grounded perspectives on how fiction is typically ahead of reality, yet humanity’s relationship to technology is always shifting and brings with it a myriad of terrors and delights.
There was some useful signposting, notably to the DAIR Institute, which works to prevent harms from AI technology while also imagining new technologies for the future. There were also reading recommendations, including Jordan S. Carroll’s 2024 book “Speculative Whiteness: Science Fiction and the Alt-Right”. The relevance of this book is palpable: just last month, the Grok AI tool was found to be regurgitating right-wing misinformation, as widely reported in the press, and it’s not an isolated incident for popular AI tools.
Presentations included reflections from venture-capitalist perspectives connected to the King’s Entrepreneurship Institute. Skills like disruptive thinking, problem solving and compelling communication were mentioned. When asked (by me) how this fast-moving industry navigates its gender disparity, Clare Zhang of Playfair Capital discussed her efforts to include women through initiatives like the female founder network, which has collectively raised £600m. The men on the panel noted the need to think about the issue more, with one citing an ambition of at least 20% female representation among co-founders (he achieved 30%), though they did not discuss wider workforce demographics or the manosphere cultural tendencies in this industry.
I found the presentation from Prof David Whetham (KCL) of great value to education leaders thinking about how to get staff to act mindfully and ethically when using AI. He demonstrated the power of normalising ethics discussions and peer-to-peer value transmission. As Professor of Ethics and the Military Profession, his research explores the prevention of war crimes, asking “how do you get people to want to do the right thing?”. He mentioned a 2018 report from the Red Cross that asked an even better question: “why don’t people break the rules?”. It turns out the greatest protection comes down to organisational culture: collective mindsets with shared values and peer-to-peer norms are powerful tools. A big takeaway is that one-off interventions like training aren’t as impactful as frequent small bursts that continuously socialise values.
Key practices include a deck of cards with question prompts to get ethical discussion going. This is supplemented with talking-head videos, case studies and other content that deepens reflection and engagement on ethical issues in a structured manner. (One of our own Bristol colleagues is developing an AI deck for students right now, so expect more on that in the near future!) All of this research leads back to thinking about leadership in AI and how we socialise values (and which values!) around it.
To close, why not share your thoughts and learning from AI conferences you’ve attended this year?
Plus, here are some extra questions to dwell on and discuss:
What are your key ethical questions on AI in higher education?
What questions do we pose as leaders, and within our disciplinary contexts?



