
Time to form an orderly AEQ (Part One) 

For the past few years, the Curriculum Enhancement Programme (CEP) team have been running the TESTA programme within the University of Bristol. There’s plenty on the methodology of TESTA here, but for the purpose of these two blogs, what’s salient is that all participants in the study undertake an Assessment Experience Questionnaire (version 5.2 to be precise – more on that to follow).  

As we reach the end of the CEP programme this summer, there’s an opportunity to take a step back and consider the institutional picture from the 768* individual responses across a whole range of different schools and programmes since 2019.  

This first blog outlines some of the design features and considerations (and caveats) around the Assessment Experience Questionnaire as well as some clear indicators of students’ perceptions, whilst the second blog covers some of the knottier aspects which the data presents.  

Questionnaire design  

The questionnaire consists of 40 statements which are answered on a Likert scale of ‘strongly agree’, ‘agree’, ‘neither agree nor disagree’, ‘disagree’, and ‘strongly disagree’.  

The statements are designed to reflect different themes linked to students’ experiences of assessment on the programme as a whole, and the questionnaire is generally undertaken by students in the final year of their study. There’s a bit more background here if you want to explore it. 

AEQ 5.2 was developed as a successor to previous versions, to address changing issues in HE programme design (particularly, post-Covid, around online learning design). The latest themes include dimensions such as ‘integrated assessment design’, ‘quality of feedback’ and ‘attitude to formative assessment’.  

Questions for each scale are distributed throughout the questionnaire, rather than in sections, and some of the questions are negatively-worded (with the scores then reversed) to improve the accuracy of data collection.  
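As a quick illustration of that reverse-scoring step, here is a minimal sketch in Python; the scale mapping is standard Likert coding, but the item numbers flagged as negatively worded are purely hypothetical examples, not the actual AEQ 5.2 scoring key.

```python
# Hypothetical sketch of reverse-scoring negatively worded Likert items.
# The NEGATIVELY_WORDED item numbers are illustrative, not the real AEQ 5.2 key.

LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neither agree nor disagree": 3,
    "agree": 4,
    "strongly agree": 5,
}

NEGATIVELY_WORDED = {3, 11, 27}  # example item numbers only

def score_item(item_number: int, response: str) -> int:
    """Convert a Likert response to 1-5, flipping negatively worded items."""
    raw = LIKERT[response.strip().lower()]
    return 6 - raw if item_number in NEGATIVELY_WORDED else raw

# 'Strongly agree' on a negatively worded item counts as 1 after reversal
print(score_item(11, "Strongly agree"))  # -> 1
print(score_item(5, "Strongly agree"))   # -> 5
```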

For example, the following questions comprise the ‘personalisation of feedback’ domain:  

As we often emphasise during staff debrief sessions, not only is this very much perceptual data, but with any wording of a statement there is an opportunity for ambiguity to creep in. Question 21, for instance, could easily be read as a form of co-operation and individual rapport, but could likewise be perceived as ad hominem. Question 28 has two elements – student feedback literacy, but also the legitimacy of the marker’s judgement. So the caveat with this data is that in a typical TESTA approach at programme level, it would be triangulated with focus groups, a sample marking analysis and an audit of assessment types on the programme.  

With such qualifications in mind, what can we glean from some of this data? 

Let’s start with the two statements which had the strongest consensus of agreement amongst students (86.4% and 89.4% agree/strongly agree respectively).

Here they are:   

Does this data correspond to engagement with optional formative assessment tasks? Is this an aspirational framing of the statements? I would position this as an affirmation to continue or develop cost-neutral approaches to formative assessment.  
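(For anyone curious how a figure like 86.4% agree/strongly agree is derived, here is a minimal sketch assuming a simple list of raw responses; the toy data below is invented for illustration and is not the CEP dataset.)

```python
# Minimal sketch: percentage of respondents answering agree/strongly agree.
# The responses list is invented toy data, not the CEP dataset.

responses = [
    "strongly agree", "agree", "neither agree nor disagree",
    "agree", "disagree", "strongly agree",
]

agreeing = sum(r in {"agree", "strongly agree"} for r in responses)
print(f"{agreeing / len(responses) * 100:.1f}% agree/strongly agree")  # -> 66.7%
```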

To add a different angle to this, let’s have a look at the final two graphs in this blog. 

Whilst ostensibly another positive endorsement, and sidestepping deliberations regarding the terminology of ‘formative feedback’ versus ‘formative assessment’, it does seem that there is potentially a gap between formative assessment/feedback as valued, concrete experiences of deliberate practice, and the further learning gain that comes from aligning those formative tasks with the summative assessments which follow.  

[The way I tried to explain it to myself was: I think physical exercise is good for me to do; I’m not always sure if I’m training properly, but I got some good advice from my swimming coach ahead of my next football match.] 

And I’d argue that this is borne out in this final graph:  

What these graphs do not illustrate is the changes and improvements over time, and there’s no doubt from working with colleagues across the institution that there is a continuing collective endeavour to carefully refine and improve the experiences of assessment and feedback, for all those involved.  

In the next blog we’ll look a little further to the assessment and feedback horizon in the form of attitudes to assessment types and what we know so far about students’ experiences of assessment authenticity, integrated assessment design and students’ feedback literacy.  

[*Note: not every question aggregates completely to 768.]
