
Time to form an orderly AEQ (Part Two)

In the first part of this blog double-header, we explored some of the parameters around the Assessment Experience Questionnaire.

In this second blog, there’s an opportunity to explore how the questionnaire responses measure up against the university’s revised Assessment and Feedback strategy.

For a quick refresher, the strategy can be found here, and a helpful visualisation is available in the ASSET tool, developed by the Curriculum Enhancement Programme team.


Looking initially at integrated assessment design, it’s encouraging to note that the majority of students felt this degree of connectivity on their programme.

Bar chart for Question 26, 'I can apply what I have learned in previous units to new units of study':
Strongly agree: 111 responses (14.5%)
Agree: 492 responses (64.2%)
Neither agree nor disagree: 122 responses (15.9%)
Disagree: 36 responses (4.7%)
Strongly disagree: 5 responses (0.7%)
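
For anyone who wants to reproduce the percentage splits from the raw counts, here’s a minimal sketch in Python (the counts are the Question 26 figures above; the dictionary and variable names are purely illustrative):

```python
# Minimal sketch: derive the percentage split for an AEQ item from its raw counts.
# The counts are the Question 26 responses quoted above; names are illustrative only.
q26_counts = {
    "Strongly agree": 111,
    "Agree": 492,
    "Neither agree nor disagree": 122,
    "Disagree": 36,
    "Strongly disagree": 5,
}

total = sum(q26_counts.values())  # 766 respondents answered this item
for label, count in q26_counts.items():
    print(f"{label}: {count} responses ({count / total:.1%})")
```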

Just to bring a raincloud to this picnic, though: this question is potentially orientated towards content rather than specifically towards assessment and feedback, but there’s clearly a sense of alignment, and potentially a narrative progression, to students’ experience of their course. And for me, anything badged ‘learning’ is the ultimate criterion of value. Over-modularisation and ‘learning in silos’ are among the experiences that TESTA has aimed to address, and they also figure in the principles of the Designed for All strand of the strategy.

Assessment type is an area where there is often an opportunity for a decisive vox pop:

Bar chart for Question 36, 'Open book assessments are a better measure of my ability than conventional exams':
Strongly agree: 227 responses (37.1%)
Agree: 195 responses (26.1%)
Neither agree nor disagree: 185 responses (24.8%)
Disagree: 66 responses (8.8%)
Strongly disagree: 24 responses (3.2%)

This was the second highest scoring question for ‘strongly agree’ in the 40-item questionnaire. Trying to read into the ‘neither agree nor disagree’ responses, I’m left wondering whether ‘N/A’ has an equivalence here, whether experience of only one of these assessment types precludes judgement, or whether this implies a laissez-faire approach to grade distribution. There’s scope to consider where this aspect of the student experience intersects with the Designed for All section of the A&F strategy.

On the authentic assessment side, here’s question 37:  

Bar chart for Question 37, 'Some assessments encourage me to explore real world problems':
Strongly agree: 197 responses (26.3%)
Agree: 365 responses (48.7%)
Neither agree nor disagree: 111 responses (14.8%)
Disagree: 59 responses (7.9%)
Strongly disagree: 18 responses (2.4%)

If you’ve raised an eyebrow (or two?) a notch at the phrase ‘real world problems’, that’s okay – there’s a useful discussion to be had around this…but not for this blog. 

Pairing our last two questions gives a real insight into some of the experiences of feedback and its potential.  

Bar chart for Question 39, 'I pay careful attention to my feedback':
Strongly agree: 174 responses (23.2%)
Agree: 420 responses (55.9%)
Neither agree nor disagree: 102 responses (13.6%)
Disagree: 46 responses (6.1%)
Strongly disagree: 9 responses (1.2%)

One research paper that has been of interest for understanding more about feedback practices is Mulliner and Tucker (2015). One of the stand-out findings of the study was that 93% of students said they always acted on feedback, whereas only 4% of educators thought students always acted on feedback.

Another finding from their study was that most staff and students thought individual typed feedback was an effective form of feedback (86% of staff and 63% of students). In terms of engagement and feedback as a form of dialogue, the score on this next question is perhaps a little unsurprising.

Bar chart for Question 25, 'My feedback feels like a conversation':
Strongly agree: 22 responses (2.9%)
Agree: 125 responses (16.3%)
Neither agree nor disagree: 183 responses (23.9%)
Disagree: 327 responses (42.7%)
Strongly disagree: 109 responses (14.2%)

As a stepping-off point, hopefully this run-through will have given some granular insight into the predominant themes around assessment and feedback at the university, and invited some reflection on our own academic experiences and practices.

Finally, just a cross-tabulation of two of the questions – for even more granularity! 

Cross-tabulation of Question 32, 'I do not understand how to do well in my assessments' (vertical axis), against Question 23, 'I can see the point of doing practice assessments' (horizontal axis).
Key points: 12.37% of respondents disagreed with both Q32 and Q23, while 20.83% disagreed with Q32 and agreed with Q23.
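
If you’re curious how a cross-tabulation like this might be produced from the raw responses, here’s a minimal sketch using pandas (the data frame and column names are illustrative assumptions, not the actual AEQ dataset):

```python
import pandas as pd

# Minimal sketch: cross-tabulate two Likert items as percentages of all respondents.
# 'responses' stands in for the AEQ data; the column names are assumptions.
responses = pd.DataFrame({
    "q32_do_not_understand_how_to_do_well": ["Disagree", "Disagree", "Agree", "Neither agree nor disagree"],
    "q23_see_point_of_practice_assessments": ["Disagree", "Agree", "Agree", "Neither agree nor disagree"],
})

crosstab = pd.crosstab(
    responses["q32_do_not_understand_how_to_do_well"],    # vertical axis (Q32)
    responses["q23_see_point_of_practice_assessments"],   # horizontal axis (Q23)
    normalize="all",  # each cell as a proportion of all respondents
) * 100  # express as percentages

print(crosstab.round(2))
```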

1 thought on “Time to form an orderly AEQ (Part Two)”

  1. Has anyone been systematically gathering stats on the ‘viewing’ of feedback from Turnitin assignments? We’re grappling with key questions: what proportion of students read their feedback? how many actually ACT on it? how many act on it EFFECTIVELY?!

    I just grabbed some data from an OTA my 2nd year cohort took in January (effectively a 1.5k essay). As of this moment, 45% had ‘viewed’ their feedback, according to Turnitin (N=274).

    Breaking this down by grade level is interesting: of those who achieved a 1st or 2.1, 63% and 65% respectively viewed their feedback. For 2.2s it was 38%, for 3rds 42%, and for fails (N=28), 36%.

    A number of potential hypotheses suggest themselves, e.g., high achieving students are more likely to view their feedback because this forms a systematic part of their learning and achievement; students performing less well are less likely to read their feedback because they are demoralised by (turned off by) their low mark; poorly performing students’ tendency to not read their feedback is a cause, not an effect, of their low achievement; mid range students are less likely to read their feedback if they feel their mark is ‘good enough’; less than half of students overall read their feedback because feedback received in the past has been of little use to them. And various permutations thereof!

    Of course this says nothing about the QUALITY of the feedback and whether students expect to find it helpful. And it would be useful to look at this across units to discern patterns, e.g., high performers always/usually look at their feedback.

    Anyway, the data are out there. It just needs people with the time and tools to look at it! 😉

    –Lloyd
