To paraphrase a line from a film: 'some of the time, it works every time'.
In this second post of the blog series on feedback, attention turns to the question 'how often does feedback help improve work?'.
To most people, this seems an essential question to ask in any educational context, but in terms of question design there is a whole set of knotty subsidiary questions nested within it.
On the one hand, there are broader, ideological considerations; on the other, more practical, pragmatic, spreadsheety ones.
- There's a dimension of frequency to the question ('I always get feedback'/'I never get any feedback').
- There's a question about the quality of the feedback itself ('It helped me to apply a new critical perspective'/'I had no idea what that particular comment meant').
- There's an element of student agency, and of the actions students take to improve following feedback ('I was really happy with the grade I got on it'/'I've changed the way I engage with theory now').
- There's a consideration of the extent to which there are (assessment) opportunities for feedback subsequently to be enacted ('I now know how I'm going to shape my conclusions in future essays'/'I've moved on to new topics – that feedback isn't really relevant now').
And that’s setting aside any definitional fuzziness of ‘feedback’ itself.
As mentioned in the first blog of the series, Boud and Molloy's call for feedback to be 'repositioned as a fundamental part of curriculum design, not an episodic mechanism delivered by teachers to learners' is perhaps a helpful consideration. On this view, feedback could include self-assessment or peer feedback – in an extreme case, the question might even prompt a response in which notions of feedback are decoupled from assessment entirely [I very often get feedback from my children about how to improve as a parent; thankfully no assessment as yet]. There's an excellent paper by Naomi E. Winstone and David Boud (2020) on this very entanglement of assessment and feedback.
But the reality is that this question is intrinsically linked to experiences of assessment – and it is a question which poses a challenge across the whole HE sector.
Continuing down the rabbit hole necessitates asking – well, what exactly do we even mean by assessment?
In education circles (and Berkeley Squares) we often talk about, and perhaps even sublimate, a binary of formative versus summative assessment (do look aghast at the thought that summative assessment could possibly be formative!), but there is often less discussion of formative or summative feedback. 'Feedforward' is a term positioned to acknowledge this, and there are already some great approaches showcased on the BILT site which explore it further (The LeapForward Project).
But even this parlance comes with a raft of considerations. Undoubtedly, then, efforts to ensure that feedback 'works every time' most (or more) of the time are complex.
Some of the efforts might involve exploring assessment and feedback environments holistically, as TESTA or the EAT project seek to do. Other initiatives may seek to establish precise areas for improvement, as outlined in BILT’s case studies section.
One area which the Curriculum Enhancement Programme team, in collaboration with Bristol SU, is currently developing is a set of workshops exploring students' confidence in engaging with their feedback on a programme.
By understanding more about the different experiences of utilising feedback, the team hopes to share some insights and strategies as to how feedback can be more effective, more often.
One way of considering these different dimensions might look a bit like this:

If you’re interested in understanding more about the approach, and how feedback can help improve work more often, it would be great to hear from you!



