Artificial Intelligence is now so accessible that, with a few clever steps, it can produce good-quality essays that plagiarism checkers rarely flag. The higher education sector is abuzz with talk of how this will affect the rate of cheating and the impact on student learning. What does it mean for our practice, and do we need to rethink how we design assessments?
ChatGPT is the big player on the AI stage right now (others exist, and more are coming). It’s a free online tool that accepts text prompts and produces text output in any style requested. Entire online forums are dedicated to tips on getting the most out of it, so even a novice can get a high-quality result quickly.
There are legitimate ways ChatGPT could be used, but it is already proving problematic for higher education. With something so easy at their fingertips, it’s not a leap to imagine the uncertain student reaching for the assistance; as with any form of cheating, nervous desperation is a common rationale. Unlike essay mills, the tool is free, removing the financial barrier that previously stood between a desperate student and contract cheating. In fact, the silver lining of ChatGPT may be that it spells the end of essay mills altogether.
Hüseyin Ç. Ö. has drawn together detailed examples of how ChatGPT works in practice and demonstrates how effective it can be – I recommend you have a read. One thing that stands out is how difficult it is to phrase an essay prompt that ChatGPT can’t answer well; only the most obscure framings defeat it. A major point is that this issue will only worsen: right now, ChatGPT isn’t connected to JSTOR or countless other academic repositories, and it can’t scrape the live web, but once it can, the quality and nature of the system will be transformed.
As a teacher, Hüseyin Ç. Ö. has chosen to forgo take-home summative assignments entirely, replacing them with in-class and end-of-term exams, despite disliking this choice. It is a sensible one, as it sidesteps the tool’s availability altogether. But what other choices do we have?
Perhaps we could integrate such tools into our practice, using them to generate materials for students to critique. It could be fun to have students test their ability to discern AI content from human content, or to ask them to assess the output against the same rubrics they are graded on, much as in peer assessment. ChatGPT is often wrong, so seeing which errors students pick up would be a good test. Critical thinking about the benefits and risks to disciplinary contexts could also fit well with units covering ethics. It’s also worth remembering that the more the tool is used, the more it learns and the better its responses become. Personally, I don’t think there is any point in pretending the tool doesn’t exist; it needs to be designed around or engaged with head on.
It’s early days for higher education’s engagement with these accessible advances in AI, and there is likely no perfect response to ChatGPT and its ilk. Some in academia want to embrace and integrate this type of AI, while others would ban it outright. We are going to see very different approaches across the sector, so I am eager to find out which succeed. If you have any thoughts or practice on this topic you would like to share, please do get in touch!
University of Bristol Guidance for staff: Impact of Artificial Intelligence, such as ChatGPT, in Assessments
Additional resource: check out this overview by Dr Torrey Trust (Amherst, Massachusetts) which includes some tips on how to use ChatGPT in support of teaching design
2 thoughts on “The Rise of ChatGPT”
Excellent points. The issue is similar, in Translation Studies, when it comes to Machine Translation tools. We’ve adopted a new MT policy this year which includes recommendations to tutors along similar lines to your suggestions. I also agree that ‘authentic’ assessment will mean doing things that go substantially beyond existing practice or knowledge, and/or take reader/audience/client perspectives into account. This is something that MT can’t really do. Beyond that, many of the caveats to MT also apply to chat bots: users are at the mercy of the quality of the corpus, and there is an implicit premise of a ‘correct answer’. Super important issue, but one where Bristol possesses unique strengths.
Thanks Christophe – it’s interesting to see comparable experience coming through, and the challenges we face as technology changes.