In this conversation, Professor of Education Richard Watermeyer shares his views on the provocative question of decolonising AI. He reflects on the current geopolitical and economic pressures facing universities and advocates for greater transparency and more critical and reflexive engagement with the use of AI tools.
Gaurav Saxena: Thank you so much, Richard, for joining me today. What views do you have about the movement to decolonise AI? Do you think that it’s even a worthwhile goal?
Richard Watermeyer: Well, the relationship between AI tools and, more to the point, those who are their architects, vendors, etc., is fundamentally underpinned by an economic rationale. We utilise these tools because they help us to achieve more with less. My understanding would be that any attempt at decolonisation rests with social justice and social equality perspectives. [It is about] how we make things better. [But] is there a premise that AI tools are designed for those purposes? I would suggest not. I don’t think I’ve come across any sense in which the tools are rationalised on a social justice mission or agenda. It is predominantly an economic rationalisation. So, I think my immediate response is that there is an invisibility. Decolonisation of AI, for me, goes back to thinking about all the data and the processes through which we do knowledge production and arrive at specific forms of knowledge. AI is biased in and of itself: it takes pre-existing bias within data, and the organisation of that bias can sometimes be elevated into an even greater form of bias.
We’re in a particular geopolitical moment where my sense is that these concerns are beginning to take a bit of a backseat, [as are] things like EDI in the UK context. [There is] a backlash against so-called wokeism, [concerns about] academic freedom, mixed with the whole economic backdrop [within which] higher education currently sits, [making] financial burden the dominant concern. [So], the ideological and political projects around that social justice dimension become deprioritised. [They are] morally justified, morally important, yet not necessarily fully compatible with the current priorities and [with where] the general political economy of higher education currently is.
Gaurav Saxena: What should the role of universities be in efforts to decolonise AI?
Richard Watermeyer: So, there are all sorts of aspirations and ambitions tied to the university as a civic institution, as a public institution, as one which ostensibly services the public good. But I think what universities should do is fundamentally challenged by the conditions within which they operate. They are economically frail. We’re still contending with the legacy of the COVID pandemic, and basically still contending with the legacy of the 2008 global economic crash. [So] what they can do is bound up with what they should do. I think it is [about] working through a new value proposition. I think what universities should do is first work towards being more economically viable and resilient, and then, once you’ve got that kind of platform, you can begin to manoeuvre and invest more in moral and genuinely societally impactful research and education. That comes from having a house that’s built not on sand but on strong foundations. And currently universities don’t have those foundations.
Gaurav Saxena: What do you think the role of other academic colleagues can be in the movement to decolonise AI?
Richard Watermeyer: Potentially, but the degree of illiteracy is so massive [that] this is not even on the agenda. From all the conversations I’ve had with people, one [takeaway] is that while use of the tools is broad, [the way] a lot of people are utilising them is very nascent. The guardrails and governance for the utilisation of the tools are less than nascent. You know, it’s basically not there. “Responsible AI [use]” gets bandied about a lot, [but] the good practice guidelines or clear directive policies that could help people understand it are often very decontextualised. I think there’s also a very clear sense of the utilisation of the tools being very uncritical. I think the tools are used quite unreflexively. And I think that’s often because they provide quick-fire and easy solutions to things. So, I don’t think that we’re even at a stage yet where practice itself is properly critically informed [and] reflexively guided.
Gaurav Saxena: And how do you think that we can generally develop these competencies?
Richard Watermeyer: Well, I think the first thing is transparency and openness. You need to start having the conversations about it, which are currently submerged, I think. There are strong and reasonable arguments made around the deleterious impacts. My own personal take is that the future is one that is AI, and the present is one that is AI-infused. [But] how can we work with the tools to do a better job of decolonising the curriculum? I think these are really interesting and profitable questions. How we get there is by being honest and critical in terms of the use of the tools and having transparency put in place. [I think] you need to have agile guardrails, informed governance, [and] open conversation [about AI] use.
These are the core things for me that will enable a more profitable, moral, and responsible use of the tools that might then help service those other broader public missions of the university.
It’s about having broad literacy and a critical take so that the tools aren’t just used without consideration. [It’s about] having a much sharper eye in terms of when [we are] and when we’re not utilising the tools, and for what purposes. That gets to the very foundations of how we understand knowledge production [and] how we make value of what we produce as knowledge producers. [And] that is all being affected by increasing, albeit invisible, use of these tools.
Many thanks to all of our collaborators for taking the time to contribute to this series.
View the other posts in this series here: https://bilt.online/category/decolonising/decolonising-ai/