In June, I attended the Assessment in Higher Education International Conference #AssessmentConf23. It was a jam-packed two-day programme of speakers, combining traditional keynote and research presentations with masterclasses, mini-keynotes and round-table presentations. I was also able to present some of my own research findings about institutional blind spots in the management of assessment in universities.
In this post I share some of the key insights I gleaned from the conference that may be of interest to colleagues at the University, supporting Bristol’s assessment priorities and delivering changes to assessment.
Supporting Bristol’s Assessment Priorities
The University’s Assessment and Feedback Strategy 2022-2030 has three core principles: integrated, designed for all, and authentic. This year’s conference had many insights into practice from other institutions that can inform how we engage with Bristol’s assessment priorities.
Day 1 included a series of mini-keynotes, where colleagues had a small window to capture the attention of the whole delegation. The topic that stole the limelight, online and in the room, was the idea of the #unessay – an assessment type which allows students to choose both the topic and the format of their assignment. Milena Marinkova and Joy Robbins shared their experience of adopting the unessay in their undergraduate module. They found that the unessay was popular with both students and teachers, and they argue that it is integral to the future suite of assessment methods: it enables students to approach the assessment more holistically and to bring in learning or ideas from other units, and it is viewed as a more inclusive form of assessment. Notwithstanding the operational and policy challenges of bringing the unessay to students at Bristol, it has the potential to deliver an assessment type that addresses all three of the University’s priorities. You can read more about how the unessay has been used at Newcastle University and the Royal Holloway History Department online.
Implementing synoptic assessment
As part of a series of practice-sharing roundtables, Pam Birtill and colleagues shared their experiences of implementing a synoptic approach to assessment at the University of Leeds since 2021, following their initial scoping project. A synoptic approach to assessment can enable students to make links between the different elements of a subject. They tried different approaches to delivering this type of assessment change, including using ‘mega-modules’ with competency-based assessment in one school, and separating assessment from teaching in another. They adopted an institution-wide, team-based problem-solving approach to surface and overcome potential obstacles, including creating appropriate structures within their student systems and virtual learning environment, and communicating synoptic assessment to students. As their implementation evolves, they have adopted design thinking to support assessment change.
Programmatic assessment design principles – examples from Europe
Reflecting its international nature, the conference had contributions from across the globe. As UK higher education institutions slowly introduce programme-level approaches to assessment, institutions in Europe are delivering more radical changes to assessment. Liesbeth Baartman and her colleagues shared their findings from delivering ‘programmatic’ assessment in the Netherlands. They conceptualise programmatic assessment as a ‘longitudinal collection of data points about student learning’, and surface a series of design principles for delivering this holistic approach. This approach recognises that the number and types of data points may differ, depending on the learning outcomes and whether assessment is high or low stakes. They brought this explanation to life with their image of a portrait: the higher the stakes, the more data points are needed for a clear picture of student learning.
The wicked problem of delivering assessment change
Woven throughout the array of conference sessions, which covered authenticity, literacy, digital practices, timing and volume, was a golden thread about the challenges of delivering assessment change. The smaller sessions were sandwiched between two thought-provoking keynotes on tackling the ‘wicked problem’ of assessment change.
In the opening keynote, Sally Everett shared insights from leaders of assessment change across the UK sector that struck a chord with the room. Delivering assessment change is difficult, and the complexity of navigating internal structures, frameworks and individuals feels like ‘wading through treacle’. Sally offered suggestions to those ‘navigating choppy waters’ and invited delegates, and all those bringing about change to assessment, to consider themselves tempered radicals: a community capable of delivering incremental changes to assessment practice that evolve and are long lasting. You can find more resources on institution-wide change in assessment through AdvanceHE.
To bring the conference to a close, Paul Kleiman reflected on how we as a sector, and an international community, can continue to innovate assessment practice in the digital age. We were reminded that genuine and large-scale change to assessment is possible; we did it during the pandemic. But Paul suggested that in those unprecedented times there was a collective will, a necessity to adapt, for rigid structures to flex, changing the feeling of wading through treacle to something much more achievable. As the sector potentially faces its next unprecedented challenge – artificial intelligence – he called for more of the radical thinking that emerged in the pandemic: malleable assessment protocols, processes and practices that enable innovation.
As Head of Assessment, the conference re-affirmed to me the importance of taking a holistic approach to delivering assessment change within large organisations, and I’m looking forward to shaping our policies, processes and underpinning systems to support the University in delivering its new assessment strategy and structure of the academic year.