In the first part of this blog double-header, we explored some of the parameters of the Assessment Experience Questionnaire.
In this second blog, there’s a bit more opportunity to explore how the questionnaire responses measure up against the university’s revised Assessment and Feedback strategy.
For a quick refresher, the strategy can be found here, and a helpful visualisation is available in the ASSET tool, developed by the Curriculum Enhancement Programme team.



Looking initially at integrated assessment design, it’s encouraging to note that a majority of students felt this degree of connectivity on their programme.

Just to bring a raincloud to this picnic, though: this question is potentially orientated towards content, and not specifically towards assessment and feedback, but there’s clearly a sense of alignment and potentially a narrative progression to students’ experience of their course. And for me, anything badged ‘learning’ is the ultimate criterion of value. Over-modularisation and ‘learning in silos’ are among the experiences TESTA has aimed to address, and they also figure in the principles of the ‘Designed for All’ strand of the strategy.
Assessment types are an area where there is often an opportunity for a decisive vox pop:

This was the second-highest-scoring question for ‘strongly agree’ in the 40-item questionnaire. Trying to read into the ‘neither agree nor disagree’ responses, I’m left wondering whether ‘N/A’ has an equivalence here, whether experience of only one of these assessment types precludes judgement, or whether this implies a laissez-faire attitude to grade distribution. It’s also worth considering where this aspect of the student experience intersects with the ‘Designed for All’ section of the A&F strategy.
On the authentic assessment side, here’s question 37:

If you’ve raised an eyebrow (or two?) at the phrase ‘real-world problems’, that’s okay – there’s a useful discussion to be had around this…but not for this blog.
Pairing our last two questions gives real insight into some of students’ experiences of feedback and its potential.

One research paper that has been of interest in understanding more about feedback practices is Mulliner and Tucker (2015). One of the stand-out features of the study was the reported gap in perceptions: 93% of students said they always acted on feedback, whereas only 4% of educators thought students always did so.
Another finding from the study was that most staff and students thought individual typed feedback was an effective form of feedback (86% of staff and 63% of students). In terms of engagement and feedback as a form of dialogue, perhaps the score for this next question is a little unsurprising.

As a stepping-off point, hopefully this run-through has given some granular insight into the predominant themes around assessment and feedback at the university, and invited some reflection on our own academic experiences and practices.
Finally, here’s a cross-tabulation of two of the questions – for even more granularity!

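For anyone curious how a table like this is put together, here’s a minimal sketch in Python using pandas. The file name and question column names are hypothetical stand-ins for illustration, not the actual questionnaire export:

```python
# A minimal sketch of cross-tabulating two questionnaire items with pandas.
# Assumes a CSV with one row per respondent and one column per question;
# the file name and column names below are hypothetical.
import pandas as pd

responses = pd.read_csv("aeq_responses.csv")

# Count respondents for each combination of answers to the two questions,
# with row/column totals added via margins=True.
crosstab = pd.crosstab(
    responses["q_feedback_helps_improve"],
    responses["q_acted_on_feedback"],
    margins=True,
)

print(crosstab)
```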