And they’re off! 🏁
With the number crunching complete, we had our sets of feedback at the ready. The next challenge was deciding how best to deliver that feedback in a way that would encourage meaningful engagement and learning.
We quickly decided this needed to be a live session to enable discussion between students and with our teaching team. The session opened with a brief talk introducing roughly how AI models work (it’s easy to forget that, at its core, it’s not so different from predictive text on an old Nokia), along with some initial reflections on their limitations and ethical considerations. The bulk of the session followed a format our students already knew well from peer review: small-group discussions in which students worked together to review their AI feedback.
Initially, this session plan seemed to have a good chance of achieving our aims. However, a few days beforehand, a thoughtful colleague raised an important point: like all good engineers, our students value efficiency, and the most efficient response to this feedback would be to scan for the crosses, make changes, and move on. We worried that this process would offer minimal opportunity to build critical thinking or to engage with whether the logic of the AI output held up. Since, for the foreseeable future, responsibility and trust still rest with the engineer rather than the AI, this didn’t feel like a positive outcome.
To nudge students into deeper reflection, we introduced a twist. For roughly 5% of the rubric points that had initially passed, we asked the AI to argue the opposite case. The AI, unsurprisingly, often took the easy route, fabricating quotes or inventing logic to support its new stance (classic AI “hallucinations”). We were open with students about our mischievous AI, and this change transformed the task. Now, students had to put their critical thinking to the test, evaluating each piece of feedback and deciding whether they:
- agreed with it,
- believed it to be a hallucination, or
- disagreed with the judgment but did not consider it a hallucination.
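To make the twist concrete, here is a minimal sketch of how such a flip might work. This is purely illustrative: the rubric structure, the `select_points_to_flip` and `build_prompt` helpers, and the prompt wording are all our assumptions, not the pipeline actually used.

```python
import random

# Hypothetical rubric results: a list of (criterion, passed) pairs.
FLIP_FRACTION = 0.05  # roughly 5% of passing points get challenged

def select_points_to_flip(results, fraction=FLIP_FRACTION, seed=0):
    """Pick a random ~fraction of the criteria that initially passed."""
    rng = random.Random(seed)
    passed = [criterion for criterion, ok in results if ok]
    if not passed:
        return set()
    k = max(1, round(len(passed) * fraction))
    return set(rng.sample(passed, k))

def build_prompt(criterion, flip):
    """Stand-in prompt builder: for flipped criteria, ask the model to
    argue the opposite case; otherwise ask it to justify the pass."""
    if flip:
        return (f"The report was judged to PASS on '{criterion}'. "
                "Argue the opposite case: explain why it should fail.")
    return f"Explain why the report passes on '{criterion}'."
```

A teaching team could then generate one prompt per criterion, with a handful of them secretly inverted, and let students hunt for the resulting hallucinations.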
Our aim for the live session was to create a controlled environment in which students could critically assess AI outputs and reflect on the role of AI in scientific communication. The result was a lively, thoughtful session, full of discussion and debate.
We’ll be back soon with reflections on how it all played out and what students thought. Stay tuned!