500 Words

What’s in a grade?

Surely a 2:1 by any other name would be as sweet?

Numerical grading of assessments is something that has bothered me for a long time. I’ve had many conversations with colleagues and students over the past couple of years and I’ve realised I’m not alone in this feeling. Of course, I’ve been met with many protests about how we ‘need’ these numbers, but no argument has ever really convinced me. There are a number of reasons why I’ve come to believe that numbers are useless in grading – a bold claim, I know – and I’ll try to convince you, too, over the next few paragraphs.

The main and overriding reason for my distaste for numbers is that they make students focus on the number. Whether you’ve been given a 62, 63 or 64 for an essay means absolutely nothing when it comes to what you can do to improve. If you’re happy with the number that has been assigned to your essay, you don’t think much more about it. A lot of students won’t even bother reading the feedback (if there is any). A student doesn’t sit back and think ‘what did I do right this time?’; they are content with their number. Similarly, if a student doesn’t get the number they feel they ‘deserved’ – whether for the effort they put in or their perceived understanding of the topic – they feel upset, frustrated and sometimes angry. They may read the feedback, but only a small proportion of these students will go away and specifically work on the points for improvement; the majority believe they have been hard done by in some way.

I’m not alone in my belief – both Chris Rust and Dylan Wiliam, two prominent scholars in the field of assessment, have argued against the use of numbers in assessment marking. In a recent interview with BILT, Chris Rust said that the one thing he would change about higher education would be the use of numbers in assessment[1], and Dylan Wiliam advocates that students be given only written feedback[2] (though with teachers recording grades for their own use).

I can already hear the main arguments against this point, and they are loudest from the courses that need accreditation; courses like Engineering, Medicine and Dentistry, which already have very high-achieving cohorts of students. Students who, I imagine, would argue for these numbers. A number ranks them against others on the course, and they use it as a measure of how well they are doing – not of whether they have sufficient knowledge to become a successful engineer or doctor. Why do we need any more than a pass/fail in these subjects? Surely you either have the knowledge or you don’t? Any other assessment – one that assesses how well a student interacts with a patient or how an engineer approaches a problem – can be better ‘graded’ using a written statement about their performance rather than a number.

All programmes in all universities in the UK boil down to five ‘grades’ anyway. You either leave university with a 1st, 2:1, 2:2, 3rd or a pass (or you fail, but we won’t go into that here). Essentially, you spend £27k on one of those five classifications. In the vast majority of graduate situations, all that matters is the overall grade (or classification) – and arguably, that doesn’t really matter at all[3]. Almost three quarters of students across UK universities get a 2:1 or above – what does that really tell you about the student?

I’ve come up with a solution; an approach in which students, instead of ever getting a grade, would just get a report. A paragraph or two (or three) about what they did well and where they could improve. For courses where students need to have a certain level of understanding or knowledge, this could include a pass/fail option too. This feedback would accumulate over the three or four years of their programme to create a picture of a student who had progressed and grown, who had worked on areas that needed improvement and who had developed academically.

Additionally, students would have the same personal tutor throughout their degree who understood their progress not only academically, but also socially and in their day-to-day lives. From taking all their washing home at the weekend to being a regular at the launderette. From rarely exercising to being President of the running society. It would highlight students who had overcome struggles in their personal, social or academic life and come out the other side. Students who had persevered and were determined. Personal tutors could then share this as part of a running report throughout their programme, which would be given to employers as part of a university portfolio, rather than a degree classification.

This approach to grading (i.e. not grading) would also encourage assessments to be more authentic. There’s not much you can write about a student who has successfully crammed three months of learning about quantum physics to regurgitate in an exam, but you can talk about how they interacted in a laboratory environment and contributed to discussions and debate on the subject. A student who has produced a print advert would better show their marketing prowess than would an essay written about it.

A bigger emphasis on written feedback may translate to a bigger marking load for academics, but we could change assessments to reduce summative assessment in favour of a more programme-focussed approach. Feedback on these assessments would tie into the overall learning outcomes for the degree and therefore ensure students are always working towards the programme as a whole, rather than taking individual modules that don’t add up to one.

The implications for the removal of numerical grading are huge and would have major impacts on nearly all areas of the University. It is a radical concept and I’m not even sure where you would or could start. But it is something to think about in a time when student and staff mental health is being pushed to its limit and in an educational climate that increasingly focuses on results rather than on an individual’s improvement.

Amy Palmer


[1] http://bilt.online/an-interview-with-chris-rust

[2] https://blog.learningsciences.com/2019/03/19/10-feedback-techniques/

[3] https://www.bbc.co.uk/news/education-45939993

Meet the BILT Fellows

Meet the BILT Fellows: Zoe Palmer

We asked our Fellows to write us a short blog about their background and what they are doing as part of their BILT Fellowship. The following blog is from Zoe Palmer, who has been a BILT Fellow since September 2018.

For the past six years (on and off!) I have been teaching in the School of Physiology, Pharmacology and Neuroscience in what is now the Faculty of Life Sciences.  Within our school we teach our own undergraduate and postgraduate students, but also students on professional programmes; vets, dentists and medics.  My involvement with the medical programme also extends to recently being appointed lead for teaching block one of year two of the new medical curriculum (MB21), and I have been developing material for an optional three-week pharmacology skills development and training unit.  In addition, I am involved with outreach, widening participation and public engagement.  This summer I co-organised the first Biomedical Sciences International Summer School.  This new faculty-wide endeavour is aimed at external undergraduates who don’t have the opportunity to undertake many practical classes at their home universities and so visit us to take advantage of our laboratories and teaching skills.

I am particularly interested in assessment and during my BILT fellowship I intend to investigate methods of quality assurance in exam setting.  I recently submitted my CREATE Level 2 portfolio which included a project in which I retrospectively analysed and evaluated the reliability of standard setting exam papers.  Standard setting is a process whereby exam papers are scrutinised by a team of experts to (in theory) create a robust and fair pass mark, as opposed to employing an arbitrary pass mark of, for example, 50%.  The results of this investigation were thought-provoking.  I would like to use this preliminary work to explore whether there might be a more rigorous and accurate method of generating the pass mark for exams.  This, and finding out more about assessment processes across the university and beyond, will aid us in implementing best practice and making evidence-based decisions to ensure that our assessments are valid and fit for purpose.