Thoughts from a summer of reading and the developments I can make to my practice.
The first thing I noticed when embarking on my summer reading list was how many of the articles had not been written by native English speakers. This led me to wonder whether, in English-speaking academia, we are desperately holding on to hegemony, such that when we do focus on the use of AI we assume that the motivation for its use is to cheat and that it poses a threat to academic integrity.
Prakash (n.d.) highlights the fact that AI is levelling the playing field by enabling more international writers to engage in academic discourse with confidence. The more I read, the more I wondered whether these authors might be using AI for legitimate reasons and whether I needed to let go of my prejudices. I spotted verbs like ‘posits’ that I had been condemning my students for using because they are ‘just not natural English’ and must be ‘AI speak’. They clearly are natural English, if natural means frequently used. Language evolves over time, and I feel that perhaps I need to adjust my understanding of what proper ‘English language’ is in light of the dominance of AI.
If non-native speakers are using AI to develop into more confident academic writers, why should my students not want to do the same? It is incumbent on us to slow them down, perhaps, and to help them be more judicious in their use of AI, not to punish them for wanting to be proficient before they actually are. This links powerfully to inclusivity (Al Kadi and Ali, 2024), which universities uphold as a core value. For some students AI is a lifeline and one of the few ways they can cope with the demands of academia. Are we wrong, then, to make them feel that its use is a failure on their part?
Following this, I shifted my focus to reading about how students can use AI to build confidence and proficiency. I found that many of the authors were looking at Grammarly, a tool I have spotted my students using and one I used myself for my Master’s.
Al Kadi and Ali (2024) argue that Grammarly gives non-judgmental feedback. This made me reflect that sometimes, in an effort to give nuanced feedback, I might be being unnecessarily judgmental. Students often dislike the dehumanising nature of automated feedback, but if that feedback is non-judgmental they may find it easier to respond to it effectively, because their affective needs are being attended to.
Ranalli (2022) highlights the timeliness of AI feedback. Giving students live feedback as they write could make them more confident to work independently. When they then have tutorials with us, we can have conversations about their use of language, knowing that they may already come informed about the choices they have made with the help of AI. Al Kadi and Ali (2024) reinforce this idea by stressing that Grammarly can give the type of feedback that students value: the micro-level language error correction that often isn’t on their feedback transcripts. As lecturers, we tend to see attending to minor language errors as unimportant compared with text organisation and quality of ideas, which is what feedback typically focuses on. This dismisses the anxiety students feel about “getting it wrong”. Fan (2023) argues that we should attend to students’ belief that language issues are the biggest obstacle to their writing, a legacy of the way they have previously been taught. Why not, then, work with Grammarly rather than stop students from using it?
As I read further, I found that the free version of Grammarly is not always reliable in its error correction, and students do not always know how to interrogate the software, so the information they get is not nuanced enough (Fan, 2023). The premium version does give more detailed tutorials, and students do have the opportunity to make informed judgements about the knowledge it is imparting (Al Kadi and Ali, 2024). Rather than discouraging my students from using it, I should instead make time for them to show me what they have learned from it so that we can evaluate the tool together, and I can then encourage them to triangulate with other tools to check Grammarly’s reliability. However, this also raised complex questions about inclusivity, as not all students will be able to access the premium version and so receive this additional level of support. Al Kadi and Ali (2024) suggest that students who are already competent academic writers benefit more from AI because they know how to interrogate it and are less likely simply to let it do the work for them. Zhang and Hyland (2018) remind us of the importance of scaffolding any work we do with our students and AI, so that it is not just the already proficient who benefit from these tools.
I still need to train myself in the use of AI tools, and then I definitely need to carve out time in workshops to work with my students on their use of AI. The issue is not a lack of willingness to do so, but finding that time in an already highly pressurised curriculum. At least I am now sure that it is a worthwhile endeavour.
References
Al-Kadi, A. and Mohammed Ali, J. (2024) A Holistic Approach to ChatGPT, Gemini, and Copilot in English Learning and Teaching. Language Teaching Research Quarterly.
Fan, N. (2023) Exploring the Effects of Automated Written Corrective Feedback on EFL Students’ Writing Quality: A Mixed-Methods Study. SAGE Open, April–June 2023, pp. 1–17. doi:10.1177/21582440231181296
Prakash, A., Aggarwal, S. and Varghese, J.J. (n.d.) Writing without borders: AI and cross-cultural convergence in academic writing quality. Humanities and Social Sciences Communications. https://doi.org/10.1057/s41599-025-05484-6
Ranalli, J. (2022) Automated written corrective feedback: Error-correction performance and timing of delivery. Language Learning & Technology, 26(1), pp. 1–25.
Zhang, Z.V. and Hyland, K. (2018) Student engagement with teacher and automated feedback on L2 writing. Assessing Writing, 36, pp. 90–102.