In this piece, Professor of Education and the Academic Director of the Perivoli Africa Research Centre, Leon Tikly, shares insights on the importance of decolonising Artificial Intelligence and reflects on the role of universities in supporting this movement.  

Gaurav Saxena:  

Thank you so much for joining me, Leon. We know that there’s growing discussions about the harms caused by AI and there are increasing calls to decolonise AI. What are your views on this idea? Do you see decolonising AI as a meaningful or a worthwhile goal? 

 
Leon Tikly:    
Yes, I do see this as a really critical question for higher education at the moment, so I’m really pleased that we’re able to discuss it.

I think there’s an urgent need to decolonise AI. But before that, it’s important, I think, to understand what we mean by decolonisation and what we mean by coloniality. So I think that AI needs to be understood as operating within a wider context of what [Walter] Mignolo and others describe as the colonial matrix of power. AI is emerging at a time when we’re seeing a deepening crisis of global capitalism. We’re seeing deepening inequalities within and between countries. And we also see a crisis in Western modernity. There is a crisis in what we understand by the so-called liberal world order and a rise of ethno-nationalism and other kinds of morbid symptoms of the current crisis of global capitalism. And closely linked to all of this is a crisis of legitimacy about who defines knowledge and intelligence, and AI in this context becomes an important focus because it has a key role to play in terms of innovation. It’s playing an increasingly significant role in all aspects of our lives. But of course, it’s not neutral as a technology. It’s currently owned largely by big tech, and it operates increasingly within paywalls. It’s been developed not in an open, democratic way, but in a way that reflects the interests and the identities of those who have developed the large language models, many of whom are white males living in the Global North. So, it’s far from a neutral technology in that sense. And this is borne out in key aspects of AI.

So, it really can be seen as a key instrument in ongoing processes of data extraction, and this, of course, relates directly to research in higher education. I mean, historically, we know research between the Global North and Global South has been very unequal. It’s often been defined by, owned by, and controlled by researchers in the Global North, and it’s been based on extractivist principles since colonial times. So, you know, data has been freely extracted from the Global South as a means to know and ultimately to control black and brown populations in the Global South, and AI exacerbates this. It provides a new way of doing this, in a way that doesn’t acknowledge the sovereignty of Indigenous, Southern knowledge systems and languages. Rather, [it] extracts data that is used in large language models, [and] the benefit of that accrues to corporations in the Global North; in fact, human activity becomes raw material in that sense. Lots of the data processing plants rely on intensive labour, and they’re often located in the Global South, where people sometimes work in quite desperate conditions, with low pay and poor conditions of labour. So, in a very material sense it reflects Northern dominance.

But then also, one can see the environmental devastation that’s linked with AI. As we know, it’s a hugely energy-consuming technology, so it contributes directly to the climate crisis that we’re experiencing. And in that sense, it’s again people in the Global South who are often most at risk: black and brown people in the Global South and in the Global North, and poor people in the North and the South, who are often most at risk of the effects of climate change exacerbated by AI. So, in that sense, it becomes very important to understand how AI is working.

The other issue, of course, is the way that the large language models themselves are constructed. They’re trained largely on data from the Global North. So, they already reflect an epistemic bias, and not just an epistemic bias, but a linguistic bias as well, because the languages of large language models are often dominant global languages rather than Indigenous languages. So, it also has the effect of reproducing epistemic injustice and epistemicide, the ongoing process of colonialism, of marginalising Indigenous knowledges and devaluing Indigenous languages. And so, it reproduces that kind of epistemic bias as well. So, it’s really important to take a critical view of AI, not as a neutral technology, but rather as an instrument of coloniality, and one that I feel can also be put to good use as part of a broader decolonising project.

 
Gaurav Saxena: 
Thank you so much, Leon, that was very insightful. Now to segue to the next question, if we were to take the idea of decolonising AI seriously, what do you think should change in how AI is being developed, designed, or used? 

 
Leon Tikly: 
That’s a very good question. And I think, you know, this is critical. So, we’ve talked a little bit about ownership. At the moment the large language models, the most widely used ones at least, are often controlled by big tech, and there’s a need to develop alternative models. And there are efforts to do this. I mean, some Indigenous groups have been working to develop, for example, Māori speech recognition systems, and to engage with African knowledge systems to inform AI, so that large language models are not taught to think and respond on Western, individualistic, competitive assumptions. There are efforts to infuse AI with ideas like Ubuntu, the African concept of relationality. So, these efforts need to be supported, and of course that requires resources, but it also points to a very important role for higher education. If higher education is to contribute to the common good, the role of higher education is to contribute to the knowledge commons, to develop knowledge that can draw on all the archives of the world and contribute to human development broadly understood, not just the interests of the few. Then helping to develop these kinds of large language models is a genuinely [good way that you] can draw on those archives, not in an exploitative or an extractive way, but in a way that legitimises, acknowledges, and values different knowledges and languages.

And of course, there are dangers in doing this, because you might just end up reproducing inequalities if you’re not careful. I think in that sense, universities [should] play a role in developing [large language models] through research and innovation [that is] more open, more epistemically expansive, [and] holistic. It needs to be done by working co-creatively with communities rather than just reproducing the kind of extraction that has commonly been the case so far. [So] to work with communities to understand how communities perceive and value problems, but also to benefit from the resources of communities whilst, again, making sure that those contributions are properly acknowledged and remunerated.

There’s a real need to extend some of this work that’s being done with Indigenous languages, because it’s through language that people access knowledge. Language is critical for accessing different knowledge systems as well as the affective side, and the values that underpin different cosmologies and knowledges. So, it’s important that large language models are not developed in an extractive way, but in a genuinely co-creative way [that] legitimises, values, and rewards communities that have been historically marginalised through colonialism.

 
Gaurav Saxena: 

Leon, you were talking about higher education and about the role of universities. What do you think the role of academics would be in supporting the movement to decolonise AI? 

 
Leon Tikly: 

Well, hopefully a very critical one. But I think there’s a clear need for much more attention to the kinds of skills that are needed in higher education to develop a critical awareness of the coloniality of AI, but also of how this might be countered. We’ve discussed already the issue of research and how research into large language models can be extended and opened up. But of course, teaching is another crucial area, where people develop skills related to critically using AI and to critically understanding it. Part of the debate now is often focused on things like plagiarism and assessment and the originality of people’s work. But there’s another important question here about teaching our students whose knowledge is recognised and validated through AI. What are the exclusions? Which knowledges are excluded? How can we try to decentre, within different disciplinary contexts, the Western-centric nature of many large language models and encourage our students to see beyond this [and] understand their limitations? So that one day they might be able to use AI in a more decolonial frame that serves their own interests.

But I think there are also issues not just for teaching and for pedagogy, but also for leadership. How do our leaders in higher education engage with these issues? Most of the debates now strike me as being very instrumental in nature. They seem to be about securing the reputation of assessment and ultimately the university, rather than about critically engaging with AI as a colonial technology. And I think leaders need to play a key role here in moving this debate along, as with other areas of decolonisation.  

But there’s something else as well, which is the extent to which AI is also being used for surveillance. I haven’t really touched on this so far, but of course, it’s deeply implicated in things like racial profiling. And it can be used as a means of surveillance of students and academic staff. So, there are really important issues there, as well as about how universities use AI in their own processes, in their own security and other kinds of processes [like] profiling students and so on. So, making our students aware of some of these issues and challenges [is crucial].

 
Gaurav Saxena: 
Leon, there is a risk that attempts to decolonise AI may again end up reproducing colonial power relations in new forms. How can we ensure that our efforts to decolonise AI do not simply replicate the same colonial patterns that we have been trying to fight against? 

 
Leon Tikly: 
Well, there we need to engage in a critical pedagogy. So, it needs to be one where we remain aware of our positionality as academics: if we’re based in the Global North, if we are lighter skinned, if we are male, that we are aware of these dynamics. And, as I was mentioning before, AI, large language models, also need to be co-created with our communities, and we need to be held to account. So, I think there are enormous issues around governance and voice. Who has a say over the way that the technology is used and mobilised in higher education and in other settings? And then, of course, there’s the research piece as well. How can we develop new large language models that are reflective of epistemic pluralism, but also accessible in a way that would make them accountable to people as well? So often, it’s a very opaque process of developing large language models, and if this can be opened up to scrutiny by people, especially those who are often the most marginalised in terms of language and culture, that becomes very important.

 
Gaurav Saxena:  

My final question to you is, looking ahead, what do you think are the most important questions or challenges universities should be grappling with in relation to AI and decolonisation? 
 

Leon Tikly:   
Well, I think how can AI be mobilised for the common good rather than for the private interests of technology firms? How can we ensure that everybody has equal access to AI? Another issue, of course, is that with the increasing monetisation and commercialisation of AI, it becomes the preserve of the few. And some universities will have greater access than other universities. Universities in Africa, for example, need to have full access to AI, and it should be open source as well. It shouldn’t be behind paywalls. But then I think also, it’s really important to see that the other side of epistemic justice is not just about access, but also about inclusion. It’s really important that universities go out of their way to ensure that the large language models that they help to develop and use in higher education contexts are epistemically and linguistically inclusive. So, I think those are the key priorities.

Many thanks to all of our collaborators for taking the time to contribute to this series.

View the other posts in this series here: https://bilt.online/category/decolonising/decolonising-ai/
