
Exclude, Embrace, Ban?

Do we already know the best way to deal with Generative AI?

If you’ve given a lecture in the last 30 years, you’ll have lived this moment: 

A tutor asks a question during a lecture. The room fills with the patter of fingers on keyboards as each student delegates the task to an internet search engine. When a response is offered, it is a regurgitation of a Wiki page rather than a good measure of the student’s comprehension. 

Following this, a reflective teacher may well ask themselves, “how do I respond to students using technology in a way I didn’t anticipate?” If we remove the existential hyperbole and scale the problem up (a lot), I think the same question is posed by generative AI (GenAI) today.

As I have been working with BILT, the DEO and others to develop guidance for AI-responsive assessment design, I’ve found that scaling the problem down to this microcosmic precedent has helped me consider the choices open to us. Essentially, it boils down to three options, and we can consider the consequences (and perhaps the motivations) for each by reflecting on how well each has worked in the past.

Prohibition: 

Bemoaning the laziness of the collective student body, the lecturer bans the use of laptops and smartphones in class.

This is a book-burning in lieu of a response, really. It is unrealistic, unenforceable, and becomes a self-fulfilling prophecy. If you think telling students not to bring their phones into class doesn’t lead to phones under the table, you are kidding yourself.

History shows that prohibition almost always leads to an underground market and a brooding anger at authority. We should expect the same if we try to ban GenAI in our classrooms or institutions. We would essentially be asking students to adopt our moral objection to the tool and, as they’re intelligent enough to question authority, they won’t. What they will probably do is use GenAI without our knowledge, without our influence, and likely with some pointed, rebellious resentment.

Prevention:

Bemoaning the attainment-focused logic of contemporary education, the lecturer wishes their students had the confidence to be wrong. They see using search engines as a self-defeating response to a culture they need to help students challenge. They reflect upon and reform the question they ask to make it harder to Google a response.

As part of a broader response, prevention can have a place, but never without explanation. As an absolute strategy, prevention is prohibition’s more passive-aggressive cousin, so it carries the same risks with the added horror of making everyone feel shame for using Google behind the well-meaning teacher’s back. What’s more, a total-prevention strategy wouldn’t discourage the use of GenAI; it would simply place it beyond intellectual consideration. Essentially, the message would be, “play with your toys but don’t bring them to class”. We should, I think, avoid this at all costs. Look at the internet, particularly social media, for a sense of the dangers here. We simply cannot afford to let a generation of graduates think that GenAI exists beyond critical scrutiny.

Integration:

Far from banning the internet or seeking to stop students from using it, the lecturer encourages its use, normalising its critical application. The class may collectively evaluate a Wiki site, combine findings into preliminary research activities or summative definitions, or contextualise an online definition with personal examples. 

This is not integration for the sake of it (a gimmick) nor is it unregulated. To paraphrase Rancière, the lecturer in this instance validates the intelligence of the student whilst exploring some shared ignorance. 

This approach recognises that the student’s behaviour represents an intelligent (if initially uncritical) application of a tool to a problem. By allowing it, the teacher normalises asking questions about the tool’s efficacy and appropriateness in application. Therefore, rather than an alternative to prevention, integration is a crucial part of that strategy too, replacing force and patronage with experiential knowledge regarding the limits and value of the tool. 

It is ‘exploring shared ignorance’ in that the tutor explores the tool’s use with the student. I think this is the only sustainable way we can respond (and keep responding) to GenAI. 

Existential dread about the sustainability of education ‘as we know it’ is everywhere at the moment, but frankly when isn’t it? Moreover, isn’t that an important question to keep asking? Every time I’ve walked into a classroom ‘education as I knew it’ had to radically reform, adjust, and adapt to the people I met. This is normal for a teacher, and I think that this small-scale, tactical agility is going to need to scale up. GenAI has evolved at an impressive rate in the year it has been widely available, so strategic responses will need to be more agile and tactical to remain relevant.

How does that translate into an assessment? More on that in the next post.

