This blog is written by visiting PhD researcher Priscila Gonsales. She presents steps for responding to AI alongside a review of a recent report by Education International. The blog will be of interest to those who want to stay up to date on the latest discourse on AI, including ethical and governance issues.
It is almost always the Bing-Bard-ChatGPT trio that comes to mind when the subject is generative AI (GenAI) and its impacts on education. However, a research map released in July by Stanford University shows that almost 16,000 GenAI models have already been created worldwide. These large language models (LLMs) are trained on huge datasets and use deep learning techniques to produce written outputs. Faced with this astonishing advance in GenAI, educational institutions have the opportunity to reflect on and rethink their strategies.
Firstly, it is important to go beyond a superficial debate about how to “apply” AI in the classroom or in pedagogical projects. There is a persistent view of technology as neutral, as well as a tendency towards technological solutionism. Natural language technologies, in constant development, carry with them issues of social, political and economic power, environmental impact, and the exploitation of human labour, among others. In this sense, the initial step should be to understand how AI works and how it impacts society.
Following this, educational institutions could establish internal areas of concern for technology governance. Governance does not simply mean defining a set of rules for teams to follow, but rather establishing collaborative, participatory means for reflection. This can include seeking consensus on processes and practices, and creating action guidelines permeated by transparency, ethics and responsibility.
Education International’s 2023 report
In October, Education International launched a report that highlights the unintended consequences of artificial intelligence and its applications in teaching processes (AI&ED). Education International is a global organisation that brings together 383 unions of teachers and other education workers.
Defining AI as a “domain of computer science that seeks to develop machines capable of performing tasks that normally require human intelligence”, the document analyses AI&ED from two perspectives (teaching and learning with AI, and teaching and learning about AI), applied to three categories: student, teacher and institution. For each category, the report provides a detailed table commenting on the main types of application that exist today, with student-centred applications typically receiving the most investment, amounting to millions of dollars. In this sense, the commercialisation of education has become an increasingly worrying issue.
The report highlights issues that are often overlooked. These include the valorisation of collective and participatory spaces and concern about the reduction of the teaching role. In addition to current debates about bias, the report notes how AI can reinforce inequalities, exploit data unethically, and incorporate outdated approaches to pedagogy. It also highlights the limited evidence for the effectiveness or safety of AI in education, or for the benefits usually listed by developers, captured in sentiments such as “Companies prioritize profit over effectiveness.” The intention is to highlight the importance of AI literacy (AI Literacies), which involves not only the technological dimension (how AI works) but also the human dimension (ethics, social impacts, rights). Teaching about AI should support human rights and social justice, support teachers’ professional development, and promote student agency, which can only be achieved through collective engagement.
The report also questions the broad push for personalised learning powered by AI, something that has been proposed for nearly a hundred years as a solution to various educational problems, such as disengagement and achievement gaps. This is because personalised learning is deeply influenced by the Silicon Valley perspective, which overemphasises technology and individualism to the detriment of community, in addition to erasing the potential for the social interactions that are so fundamental to the educational environment.
Another aspect involves the devaluation of teachers when decisions about what and how students should learn are made by the commercial organisations that develop AI. This results in the transformation of education into a commodity, where students and teachers are seen as service providers. Reflections about AI&ED should contemplate human rights and social justice, as well as strengthen education as a democratic, public-good environment. It is necessary to ensure that AI is used in a responsible and ethical way, and to do so, the report suggests that teachers be involved in making decisions about the use of AI in their educational activities. Trade unionists also have a fundamental role to play in defending greater transparency and accountability in the use of AI in education, whether through the regulation and supervision of AI&ED or through monitoring its use.
I’m running a study as part of my current PhD research on critical AI literacies (University of Campinas and University of Bristol), which values participatory processes and can help schools and education departments organise a collaborative approach to choosing AI technology.
The following infographic, despite being mentioned in the UNESCO/IESALC guide on page 14 (accessible version), is still under development, and points to some important challenges that educational institutions need to deal with. I initially called it AI audit, but I’m considering changing it to AI governance, as that seems more appropriate to the educational environment.