Earth hemisphere with digitised connections, in yellow

International perspectives on AI in Higher Education: Part 2, global views

Welcome to Part 2 of my round-up of highlights from the 18th annual International Technology, Education and Development Conference in Valencia. International colleagues had a lot to say, drawing on practice and research on AI in education.

Assessment in the Era of Generative AI

A pilot project allowed Aerospace students to use ChatGPT in their summative exam (M. Baume et al., Germany). 100 students volunteered, and the academics hosted the exam in Moodle with ChatGPT accessible within the platform. Half the questions didn’t need AI, while the other half invited (sensible) use of AI. The experiment showed that students used ChatGPT for all questions, and most students were neutral when reflecting on the experience. They did feel, however, that it reflected “real-world scenarios” and that it shifted the focus away from memorising towards understanding concepts. Unfortunately, the researchers felt that allowing ChatGPT meant students relied on it too heavily and did not study properly.

On aspects of marking, an exploration of the potential of LLMs (Large Language Models) to provide feedback on students’ work was presented (J. Lievens, Belgium). I slightly held my breath at this, given that my own research showed UoB students reject the concept entirely! (That research is coming to BILT soon.) It was interesting to see how rapidly developing GPT models are becoming better able to engage with higher-order thinking. The literature review presented offered plenty of evidence suggesting that AI feedback may be getting closer in quality to that of humans. Plenty of food for thought here – though I remain sceptical, especially if it isn’t moderated with human intervention.

New ways of evaluating students in primary education considered modern skills such as teamwork and creativity, with an emphasis on critical thinking and problem solving (E. Benedik & A. Gruber, Austria). Here are some examples:

  • Digital storytelling (creating a short film in under 24 hours)
  • Sketch notes (using drawings and text to assess understanding of key topics in a visually engaging way)
  • Digital workbooks and portfolios (reflecting on learning across a term and showcasing the learning journey, inviting multimedia engagement)
  • Podcasts.

For the researchers, this isn’t just about tests; it’s about creating experiences where students get to create and express themselves.

Students’ and teachers’ perspectives on AI

A study from Turkey (H. Cirali Sarica, Hacettepe University) showed how teachers using AI to develop e-learning materials saw Gen-AI tools as useful for saving time, reducing workload, learning quickly, and broadening access through language diversity. They experienced some challenges, however, such as cost and access limitations, difficulties writing AI prompts, and some complexity in the AI interfaces.

A fascinating talk on student AI literacies from Singapore (S.H.S. Ng, H.Y. Can, InsPIRE NTU, Nanyang Technological University) showed how at-home use of and play with AI is informing how students use AI in their education. Singapore is a hyper-technologically-engaged country, so high digital literacy is commonplace. Students put AI to many uses in their education, such as a writing buddy, learning assistant, task handler and artwork generator. Some see it as useful, for example, for handling simple coding, leaving more time for higher-order thinking. They acknowledge outputs aren’t always perfect and need human verification before use. The researchers used an interesting metaphor: AI is seen as an exoskeleton that scaffolds a human to do things faster and better. They recommend not taking a punitive approach to AI use, equipping faculty with the skills to work with AI, and keeping a sharp eye on the rapid changes in the field.

A survey of 83 students at the University of Andorra (M. Bleda Bejar, A. Dorca Josa, B. Oliveras Prat) showed that 98% of students already use ChatGPT and consider LLMs reliable, though they could not offer robust justifications for this opinion. Most students believed that it is important to improve the reliability of AI use by cross-referencing with reliable sources, seeking transparency from the AI tool, and independently verifying outputs. This underlines the role of critical thinking when using AI.

I was really impressed with the work of the Norwegian team who have surveyed over 2,800 students on their use and views of AI (C. Bjelland, K. Ludvigsen, A. Møgelvang, Western Norway University of Applied Sciences, https://www.hvl.no/en/alu/). As showcased in the work of many others, students use AI to support different stages of the writing process, from generating ideas to seeking help with structure and getting feedback on argument and discussion. Students find this helps them to be more creative and to save time. They also use AI as a teacher, buddy and dialogue partner. One student who doesn’t have many peers to chat with outside the classroom found that AI offers a way to discuss topics outside of lectures. This made me wonder about loneliness and connection, and whether AI use could limit sociability. Many students commented that AI undermines the very essence of what it means to be human, some have become paranoid about its long-term use, and some don’t think it’s even possible to avoid AI. In response to growing AI use, the pedagogy team have developed a 20-hour course for all teachers, including admin leaders and library staff, to encourage engagement with AI, so that the whole learning community develops a shared space for discussing lessons (via live online sessions).
