
International perspectives on AI in Higher Education: Part 1, Keynotes

I’ve just attended the 18th annual International Technology, Education and Development Conference in Valencia. With over 420 presentations to choose from, there’s been a lot to digest! One of the biggest themes, unsurprisingly, is AI. In Part 1 of my conference round-up, I explore what the keynote speakers think we should focus on.

The delightful Mike Sharples, The Open University, kicked things off with a deep dive into Social Generative AI. Drawing on his decades of engagement with AI, Mike is a bottomless bounty of insight!

The main idea behind social generative AI is to think about a new era where humans and machines engage in extended dialogue. An example is conversation across languages using real-time speech translation. It’s a way of moving beyond our current understanding of gen-AI as a series of prompts and responses. Instead, everyone interacts together – sometimes human to human, other times human to AI, and even AI to AI directly. Mike also discussed how AI is constantly changing, noting that Big Tech is heavily invested in growing AI’s capacity for higher-level reasoning.

There are many new roles for social generative AI in education. Mike provides some examples:

Co-designer: AI assists a group of students throughout a design process. AI helps the students define the problem, challenge assumptions, brainstorm ideas and produce prototypes.

Open textbook writer: AI summarises, translates, compares and adapts textbooks for open discussion.

Mediator: AI moderates a discussion to explore differences and reach agreements.

Socratic opponent: students interact with AI to develop arguments.

Of course, ethics is really important and was addressed throughout the presentation. If teaching is, essentially, a caring profession, how can AI serve a role when it can never care? AI is optimised for efficiency, and efficiency can lead to selfishness. AI currently uses human languages, but it may develop other ways to communicate outside of them. When we already can’t comprehend huge neural networks, will humans be sidelined? What’s really important for Mike is that we bring human care and empathy to AI in education. We need digital literacy to address the many flaws that AI brings to the table. This can include collectively building good educational AI.

On to the second keynote, Sarah Newman of Harvard University. She draws on creative and interdisciplinary perspectives to explore the pitfalls and opportunities of AI for educators. Her AI research started about 10 years ago, through the lens of ethics and philosophy.

She presented some beautiful examples of her installation art and children’s book engaging with themes of morality and bias in machine learning. These evocative images and metaphors help to explore AI in terms that humans can connect to in different ways.

Sarah’s work asks us to challenge our assumptions about AI. We don’t need to be able to fix an engine to drive a car, so do we really need to know the back-end of development to use AI? Do we need to know chemistry to take our medication? No! We should also question the terms we use: what do we really mean by artificial? How intelligent is AI, really, and what do we mean by intelligence?

The AI Pedagogy Project at Harvard informs much of her work. It’s a rich resource that I recommend colleagues explore (aipedagogy.org). Inspiring examples of practice (assignments) and vetted resources from across the world are available on the project website. There’s also a straightforward intro to AI and a tutorial on how to use large language models (LLMs). This is perfect for those brand new to AI.

Sarah thereafter presented a big list of dos and don’ts when working with AI. Some of the highlights include:

  • Don’t put your head in the sand. Students are going into an AI-filled world and need AI literacy.
  • Do learn the basics and experiment with AI. Be able to articulate your critiques of AI.
  • Be cautious of the AI hype.
  • Know the risks (privacy, bias, misinformation and incorrect information, environmental costs, copyright, increasing inequities).
  • Don’t assume your students want to cheat.
  • Lead with trust: revisit pedagogical concerns with your students, and discuss what and why they want to learn.
  • Don’t forget who owns the tools. They are mostly developed by for-profit organisations and not designed with your best interests in mind.
  • Be careful how you share your data.

Check out Part 2 of this mini-blog series to catch what international colleagues had to say about AI in education as they shared their practice and research from across the globe. Part 3 will include reflections on some questions we are grappling with on AI.
