
Computer-based Diagnostic Assessment of Young Learners with Automated Feedback – an International Trial

This event is part of the School of Education’s ‘Bristol Conversations in Education’ seminar series. These seminars are free and open to the public.

Speaker: Tony Clark, Cambridge Assessment English

An effective diagnostic test can play a key role in the language learning process, allowing specific strengths and weaknesses in students’ linguistic development to be identified and then addressed (Jang, 2012). This paper describes the development of an online diagnostic test by Cambridge Assessment English that assesses English grammatical knowledge at A2 level.   
 
As most language tests to date have been proficiency or achievement tests, relatively little research has been done in the field of diagnostic language assessment and there is no real agreement on exactly what it entails (Alderson, 2005; Alderson, Brunfaut, & Harding, 2015; Davies, 1999; Lee, 2015). In response to this gap, we decided to create a test based on the learning-oriented assessment framework (Jones & Saville, 2016). Another aim was to trial a faster, more iterative way of working, better suited to the continuous rapid changes in technology: producing an initial prototype, trialling it, and then improving it based on the trial results.
 
Aimed at learners of approximately 15 years old, the test provides detailed diagnostic feedback on seven grammar categories at both individual and class levels, aiming to improve curriculum and lesson planning and accommodate students’ learning needs. The test was trialled internationally and surveys and focus groups were designed to investigate student and teacher perspectives. As well as discussing the results of the trial, the paper also outlines planned modifications for the next version of the Diagnostic Grammar Test and the implications of this research for wider pedagogical practice.

CLICK TO REGISTER


Extrapolating the widening participation agenda to the recruitment of underserved groups in medical research: Assessment, ethnicity, and language

This event is part of the School of Education’s ‘Bristol Conversations in Education’ seminar series. These seminars are free and open to the public.

Speaker: Dr Talia Isaacs, UCL Institute of Education, University College London

Widening participation has long been a strategic objective in UK higher education, with government targets for increasing the diversity of student intake in university admissions (e.g., HEFCE; see Rose et al., 2019). The argument for catering to a wider demographic naturally extends to the healthcare sector, including health intervention research, which tests the safety and effectiveness of different medical treatments for patients (Bartlett et al., 2005). Although examining the composition of the recruited sample in a study and the extent to which it is representative of the target population to which the results will be extrapolated has not traditionally been a focus in health intervention research in the UK (Brown et al., 2014), there are some signs of change. One informal indicator is the ongoing work of a National Institute for Health Research (NIHR) Clinical Research Network (CRN) on including underserved/underrepresented groups in the context of clinical trials (Rochester et al., in progress). This project is likely to inform future requirements for research funding applications.

In this talk, couched under the broader theme of trends in the field of language assessment, interdisciplinarity, and the role of our professional associations, I will discuss why this specific-purposes topic is relevant. Language testers need to engage with and lend their expertise to different stakeholder groups, including domain experts from other fields, as part of what has been termed "indigenous assessment" (Jacoby & McNamara, 1999). This is particularly important for improving the quality of assessments used for gatekeeping purposes and for promoting social justice (principles of inclusion, equitability, fairness, etc.; Shohamy, 2001). By way of an example, I will focus specifically on the role of language as a criterion for including or excluding patients from participating in trials (Isaacs et al., 2016). I will argue that adequate operationalization of the language proficiency construct is potentially high stakes for patients in this context and should be a research priority, notwithstanding barriers to conducting interdisciplinary research.

CLICK TO REGISTER

Teaching Stories

The Primary Experience: What Can We Learn about Cross-Institutional Changes?

The following post was written by Dr. Isabel Hopwood-Stephens, a TESTA Researcher.

As one of the TESTA researchers attached to BILT, I’m going to be involved in collecting and analysing data about Bristol undergraduates’ experience of assessment. The aim of TESTA is to provide an evidence-based starting point for discussions among Programme Teams about how students’ experience of assessment might be improved, thereby increasing their engagement with their study and satisfaction with the course.

This is done by sharing any issues identified in the analysis and providing ideas which are likely to involve teaching staff making changes to aspects of the assessment experience; for example, offering detailed verbal feedback on a draft of an essay, which the student can use to improve it, before the essay is submitted for grading, or explicitly discussing and exemplifying the marking criteria with students to help them internalise standards.

Having a good idea about how to improve students’ experience of assessment is one thing, though; making the required modifications to working habits to enact those ideas is another. My recent research into the factors that enable or inhibit changes to assessment practice among primary school teachers has provided some interesting pointers.

As part of my study into primary teachers changing their assessment practice, I looked at the main vehicle for teachers’ professional development in primary schools: the staff meeting. I was expecting to find that staff meetings with particular characteristics – where teachers could discuss how they worked, were encouraged to raise questions, and where the focus on learning was clear – would be significantly linked to subsequent reports of school-wide changes to their assessment practice.

Instead, I found out that the characteristics of the wider workplace seemed more influential. Teachers who felt that their workplaces encouraged collaborative, cross-departmental working and innovation were more likely to also report school-wide changes to how they carried out assessment.

This made me think that the kind of professional learning that helps primary teachers to change the ways they do their job takes place during the wider working day, through ongoing conversation with colleagues, rather than within the confines of a staff meeting. When I looked at communication style between teaching colleagues, I also found that the activities which school-wide changes to teaching practice seemed to entail – negotiation and agreement of shared goals; reflection upon and review of progress; sharing of best practice; questioning and clarification of aims – were underpinned by an open and dynamic communication style that facilitated the involvement of all in discussion and decision-making.

This research was conducted with primary teachers in state-maintained primary schools, a working environment which we might consider somewhat removed from the more selective and purposeful atmosphere of a university. However, it will be interesting to see whether the characteristics of the working environment and the interpersonal communication style experienced by academic staff play a role in enabling programme-wide changes to aspects of practice as a result of participating in TESTA.

500 Words, News

My Retirement from Competitive Baking

Yesterday, after an excruciating three-week wait, it was the Education Services Charity Bake Off Final. I had made it through to the final after winning my heat (cheese and rosemary scones, if you must know) and I had been practising for my chance at winning the title ever since.

I was as happy with my cake as a novice baker could be, having opted for a chocolate and passionfruit cake, and eagerly awaited the results as the morning went on. By the time it came to 1pm, when colleagues from across the office gathered around waiting for our Director to announce the winner, I was actually nervous.

I didn’t win. I didn’t expect to win – there were some amazing cakes on offer from some equally amazing bakers – but no one likes to lose, do they? I spent the afternoon texting my husband about how I was never going to bake again and fantasising about throwing my rolling pin away when I got home.

And I don’t plan on entering another baking competition; I didn’t like waiting around for weeks not knowing what the result was going to be – yet this is exactly what so many 17- and 18-year-olds are going through today.

Having sat their exams months ago, they have spent their summer nervously awaiting the results that will determine their future: whether they go to university or not; whether, if they do choose university, that university is their ‘first choice’; or whether they have to go through ‘clearing’ (an awful process and an even more awful word for it – surely there is a better way it can be done?*).

But there is no option for a university student to ‘never bake again’ – doing a degree is like a three-year baking competition. For the few students who do well in all of their assessments this is fine (read: smash the soufflé), but for the majority of students who struggle through at least some of their degree, the process of endlessly awaiting the next result is hugely detrimental to their wellbeing – and yet we continue to assess in this way.

As adults, we don’t experience this same kind of stress. The wait to hear if you’ve been accepted for a mortgage, or if your latest paper has been accepted into a journal, is about as close as we come. But these are annual occurrences at best and, as adults, we have the experience of knowing we can always resubmit a paper or apply for a different mortgage. I wonder, if we experienced the continual insecurity and nerves that students face around assessment, whether we would still choose to assess in this way.

A move towards more formative and less summative assessment may be one way to reduce this insecurity; a move away from numerical grading may be another. But it is difficult to know what balance could be reached between keeping students motivated and removing the carrot of a grade they are happy with.

So, while I’ll be hanging up my apron for the foreseeable future, I’ll be thinking of all the students starting in September (and coming back) who will be facing another year of blind bakes and wondering what we can do to help reduce the anxiety around results and assessments this causes.  

*If this area interests you, I highly recommend this WonkHE piece on making university admissions truly inclusive – including two very viable recommendations.  

Amy Palmer

Teaching Stories

Strategic Students and Question Spotting

The following piece was written by Helen Heath, a BILT Fellow, Reader in Physics and (soon to be!) University Education Director (Quality).

Why do we think that students being strategic in their learning is a bad thing? Is this an example of emotive conjugation, as brilliantly illustrated by Anthony Jay and Jonathan Lynn in the “Yes Minister” series: “I give confidential security briefings. You leak. He has been charged under section 2a of the Official Secrets Act.”?

“I only have time for important things, you have concentrated on the wrong things, students are question spotting rather than learning.”

Academics are very strategic in the tasks they decide to undertake. They pick tasks that will result in promotion, they tune their lectures to give students what they want to get those good questionnaire responses, and they leave undone jobs that they have decided are not worth the time and effort. Yet we seem to criticise students for the same behaviour. We decide not to read the majority of the 200 papers in the Senate pack, quickly reviewing the headings and deciding what matters to us; this is a sensible use of precious time. But when a student decides they don’t have time to read and understand the whole textbook, so they look at previous examinations to see what topics are more likely to come up, this is “question spotting”.

But is “question spotting” such a bad idea? Academically, there is some sense to it. If a question (or a variation of a question) about the same topic appears every year, then the examiner is sending a message that this is a topic they regard as important. We might hope that students had realised what the key topics were in other ways. We might stress these key topics in our lectures. We might like to think our students were able to just “get” what is key, but that’s a high-level skill, and the key topics may only become obvious when they have reappeared in subsequent years. When students are struggling with the nuts and bolts of a subject, it’s not surprising that they can’t see the wood for the trees.

Many weaker students find it difficult to scaffold their learning and identify the key elements that will enable them to succeed later. They use every piece of information they can to work out what these key topics are, and that includes judging what we regard as important by what we assess them on. The topics we choose to emphasise in our final assessment must be important, so question spotting is a way of understanding what it is that academics regard as important.

I’d suggest that this strategic planning is not only useful for passing examinations but is also a useful life skill. The difficulty arises where students question spot and learn by rote with no understanding. The symptom of this in Physics is often a good answer to a question that looks like the one that was asked, but is slightly different.

The HEA training materials used in the programme-focussed assessment training for the pilot project encouraged academics to consider what the threshold topics in their area are. Much has been written about threshold concepts in physics; a recent paper even suggests that there are too many to count (“Identifying Threshold Concepts in Physics: too many to count”, R. Serbanescu, 2017). If this is the case, we need to guide the students by deciding what we think is key. If we fail to do that, then we shouldn’t blame the students for looking at what our assessment indicated was key. Assessment does drive learning, and if we are assessing the same topic repeatedly then it is driving the students to learn that topic.

One mechanism we have tried in physics, which has some advantages, is giving the students a list of questions from which a subset is guaranteed to appear on the paper, making up ~40% of the material. These direct students towards the bare bones of the course: if they can answer this set of questions, they should at least be able to reproduce the basic information in the course. Looking at our definition of what constitutes a third-class performance in assessment (“some grasp of the issues and concepts underlying the techniques and material taught”, UoB 21-point scale, 40–50 descriptor), the ability simply to regurgitate some basic concepts with reasonable accuracy could be seen to meet it. Ideally students would want to go further but, in some cases, they haven’t had the time to absorb a particular piece of knowledge and digest it in the depth we would expect. While there are time constraints on the acquisition of knowledge in a Higher Education programme, inevitably almost everyone will come up against a concept that they are unable to grasp before the assessment.

And is learning by rote so bad? I do not set out to prove Pythagoras’ theorem every time I need to use it for a question.

Forms of assessment should have a range of tasks that test both use of tools and deeper concepts, but students should not be criticised for directing their learning towards topics they think are likely to come up in an examination. By putting these topics on the examination regularly we have declared them to be important.

Teaching Stories

Assessing Celebrity Cultures

Rumour had it that both the teaching and assessment on the third-year English Literature Celebrity Cultures module was pushing boundaries to introduce students to new ways of thinking. Intrigued, I arranged a meeting with its unit leaders, Rowena Kennedy-Epstein and Andrew Blades, to find out more about what they were up to.

The Celebrity Cultures unit has been running for just one academic year, but already word has got around that this unit is one worth taking. Andrew and Rowena came up with the concept of the course through a desire for students to reflect on course materials in a more “personal, idiosyncratic” way. They recognised a disconnect between the way academics thought and the way students were encouraged to think.

“… as scholars we are deeply involved in the emotional life of our material. And I think we felt that the students here didn’t quite understand kind of their political positions within how to engage with our texts and cultures, and this is set up, I guess, in some ways to think about that.”

The course material covers gender studies, cultural studies, critical race studies and queer studies, but it’s also about how students find materials. Andrew and Rowena use celebrities as the central concept, thinking about how we, as individuals and as a society, create icons; how we obsess over certain things; how we look at things; what and how we expect things to be as opposed to how they are. These are ideas about the political world that are then interrogated through the idea of celebrity.

In terms of planning the course, Rowena and Andrew sat down and thought through its structure and assignments simultaneously, making the transition between materials and assessment seamless and organic. There are several things that set this unit apart from others on the degree.

Each week, students were tasked with writing a 250-word lecture reflection, considering what had struck them the most about the content. Students could either do this in the time between the lecture and the seminar, or at the beginning of the seminar, where the first 15 minutes of each session was handed over to students to either write this reflection or discuss the lecture with others in their group.

The lecture reflection also had additional benefits – lecture theatres were full. In part this is down to the reflective piece, but also to the fact that lectures were delivered by multiple speakers, with a number of guest academics from across the Faculty of Arts taking the lectern each week, turning each session into a mini-conference: lectures were a mix of scripted material, reflection and discussion between academics, film clips, etc. This didn’t come without its organisational difficulties, but the benefits for students were huge – Andrew observed that in his entire career he had not seen lecture theatres so full! Students did not know in advance what each week’s lecture would cover, so they had to come.

These lecture reflections formed part of a portfolio of work across the unit, in which students chose their best two reflections to submit alongside a traditional essay (75% for the portfolio), with a group presentation making up the remaining 25%. Students continued to write throughout the course, creating a sense of continual reflection, which removed the emphasis on the ‘final’ assessment. Andrew and Rowena both said how high the quality of work was across the board, and this was undoubtedly because the students were given their own voice to reflect on what they had learnt. As well as the two lecture responses and the essay, there’s a 500-word piece they call a ‘meditation’ – on a particular celebrity figure or phenomenon. This is a one-off creative-critical piece, and each of the three seminar tutors produced their own and presented it at a lecture at the beginning of term.

“Students will often hide behind what they think to be a scholarly style and behind certain buzz phrases… which are often ways of clouding the very things that they want to express. Academic, scholarly language is a learned, artificial language; none of us speak like that. And in fact, it can often be really inarticulate in what it’s trying to say and deliberately obscure [it]. And I think, in a way, parting the clouds over that and demystifying it, to some extent, brought out better quality writing, which had fewer technical terms, and fewer of the technical terms that are actually often misused.”

The majority of students on the unit enjoyed this way of learning and being assessed, yet a few found the academic freedom difficult. Rethinking education in this way won’t always feel comfortable for every student, but ‘Celebrity Cultures’ definitely addresses some of the problems students currently find with more traditional units: a heavy emphasis on a final, summative assessment without much room for practice, and difficulty engaging with lectures and course materials, are both solved through the design and delivery of this unit. Although the study of celebrity isn’t applicable to all, the educational elements certainly are.

Amy Palmer


Assessment in Higher Education Conference 2019

This event will be the seventh international Assessment in Higher Education conference. This research and academic development conference is a forum for critical debate of research and innovation focused on assessment and feedback practice and policy. The themes for our 2019 conference will invite a wide range of papers, practice exchanges and posters. Themed poster presentations, accompanied by a short pitch from the authors, have been a particular strength of the conference and have encouraged networking by delegates.

Keynote Speakers

Phil Dawson: Associate Professor at Deakin University

Bruce Macfarlane: Professor of Higher Education at the University of Bristol

500 Words

What’s in a grade?

Surely a 2:1 by any other name would be as sweet?

Numerical grading of assessments is something that has bothered me for a long time. I’ve had many conversations with colleagues and students over the past couple of years and I’ve realised I’m not alone in this feeling. Of course, I’ve been met with many protests of how we ‘need’ to have these numbers, but no argument has ever really convinced me. There are a number of reasons why I’ve come to realise that numbers are useless in grading – a bold claim, I know – and I’ll try and convince you, too, over the next few paragraphs.

The main and overriding reason for my distaste for numbers is the very fact that they make students focus on the number. Whether you’ve been given a 62, 63 or 64 for an essay means absolutely nothing when it comes to what you can do to improve. If you’re happy with the number that has been assigned to your essay, you don’t think much more about it. A lot of students won’t even bother reading the feedback (if there is any). A student doesn’t sit back and think ‘what did I do right this time?’; they are content with their number. Similarly, if a student doesn’t get the number they feel they ‘deserved’ – whether for the effort they put in or their perceived understanding of the topic – they feel upset, frustrated and sometimes angry. They may read the feedback, but only a small proportion of these students would go away and specifically work on the points for improvement, with the majority believing that they had been hard done by in some way.

I’m not alone in my belief – both Chris Rust and Dylan Wiliam, two prominent scholars in the field of assessment, have argued against the use of numbers in assessment marking. In a recent interview with BILT, Chris Rust said that the one thing he would change about higher education would be the use of numbers in assessment[1], and Dylan Wiliam advocates students only being given written feedback[2] (though with teachers recording grades for their own use).

I can already hear the main arguments against this point, and they are loudest from the courses that need accreditation; courses like Engineering, Medicine and Dentistry, which already have very high-achieving cohorts of students. Students who, I imagine, would argue for these numbers: the numbers rank them against others on the course, and they use them as a measure of how well they are doing – not of whether they have sufficient knowledge to become a successful engineer or doctor. Why do we need any more than a pass/fail in these subjects? Surely you either have the knowledge or you don’t? Any other assessment – one that assesses how well a student interacts with a patient or how an engineer approaches a problem – can be better ‘graded’ using a written statement about their performance rather than a number.

All programmes in all universities in the UK boil down to five ‘grades’ anyway. You either leave university with a 1st, 2:1, 2:2, 3rd or a pass (or you fail, but we won’t go into that here). Essentially, you spend £27k on one of those five classifications. In the vast majority of graduate situations, all that matters is what their overall grade (or classification) is – and arguably, that doesn’t really matter at all[3]. Almost three quarters of students across UK universities get a 2:1 or above – what does that really tell you about the student?

I’ve come up with a solution: an approach in which students, instead of ever getting a grade, would just get a report – a paragraph or two (or three) about what they did well and where they could improve. For courses where students need to demonstrate a certain level of understanding or knowledge, this could include a pass/fail option too. This feedback would accumulate over the three or four years of their programme to create a picture of a student who had progressed and grown, who had worked on areas that needed improvement and who had developed academically.

Additionally, students would have the same personal tutor throughout their degree who understood their progress not only academically, but also socially and in their day-to-day lives. From taking all their washing home at the weekend to being a regular at the launderette. From rarely exercising to being President of the running society. It would highlight students who had overcome struggles in their personal, social or academic life and come out the other side. Students who had persevered and were determined. Personal tutors could then share this as part of a running report throughout their programme, which would be given to employers as part of a university portfolio, rather than a degree classification.

This approach to grading (i.e. not grading) would also encourage assessments to be more authentic. There’s not much you can write about a student who has successfully crammed three months of learning about quantum physics to regurgitate in an exam, but you can talk about how they interacted as part of a laboratory environment and contributed to discussions and debate on the subject. A student who has produced a print advert would better show their marketing prowess than an essay written about it.

A bigger emphasis on written feedback may translate to a bigger marking load for academics, but we could change assessments to reduce summative assessment in favour of a more programme-focussed approach. Feedback on these assessments would tie into the overall learning outcomes for the degree and therefore ensure students are always working towards the programme as a whole, rather than taking individual modules that don’t add up to a whole.

The implications for the removal of numerical grading are huge and would have major impacts on nearly all areas of the University. It is a radical concept and I’m not even sure where you would or could start. But it is something to think about in a time when student and staff mental health is being pushed to its limit and in an educational climate that increasingly focuses on results rather than on an individual’s improvement.

Amy Palmer


[1] http://bilt.online/an-interview-with-chris-rust

[2] https://blog.learningsciences.com/2019/03/19/10-feedback-techniques/

[3] https://www.bbc.co.uk/news/education-45939993

500 Words, News

Should we go ‘The Whole Hog’ with programme-level assessment?

The following post was written by Amy Palmer, BILT Digital Resources Officer.

Since the launch of BILT in 2017, the implementation of programme-level assessment across the University has been a widely-discussed topic. But what do we really mean by programme-level assessment?

Tansy Jessop, while delivering her TESTA workshop in January, outlined her ‘Five Hogs of Programme-Level Assessment’, breaking down the term into five different ways this assessment framework could be implemented.

The first, ‘The Whole Hog’, advocates an integrated and connected assessment plan, running through entire programmes and using capstone and cornerstone assessments to bring together learning from different modules. Teaching is separated from the [summative] assessment, allowing students to make their own connections between content in different modules. This approach is the most widespread understanding of what ‘programme-level assessment’ is, and is arguably the simplest to implement, as there is a clear split between teaching and summative assessment.

The next, ‘Half the Hog’, still has an assessment piece that runs throughout the entire programme, separate from individual modules, but it doesn’t require all assessments to be disconnected from teaching. This connective assessment could be a research project that runs from first to third (or fourth) year and draws on concepts from all of the individual modules. A benefit of this ‘Hog’ is that there is an overall reduction in summative assessments across the degree to make room for the programmatic assessment piece.

The ‘Other half of the Hog’ employs synoptic assessment across a number of modules (i.e. 50% of the degree modules are assessed via a synoptic assessment while the other 50% have assessments that are directly related to their module’s content). Each module has a combination of formative assessments and one summative assessment, and the synoptic assessment integrates concepts, makes connections between the modules and is challenging for students.

The next hog – or hogs – ‘Both the Hogs together’ (originally named ‘Eat the Hogs Together’, but we didn’t think that was appropriate for our plant-based friends 😊), is when both the curriculum and the assessment design are done as a team, using TESTA programme and student evidence to inform the assessments. Summative assessment is reduced across the entire degree so that students engage more with formative assessments. Teams are encouraged to integrate assessment design into the shared process so that everyone has a shared understanding and practice.

The final hog, ‘The Warthog’, is the most radical of the approaches. Instead of taking modules in parallel, students take one module at a time in blocks (for example, one module runs in weeks 1–4, the second in weeks 5–8, etc.). Assessments are joined up through shared units that weave across the programme. This method has been adopted to some extent at Plymouth University through their immersive induction module in first year.

Some of these ‘hogs’ would be easier to achieve than others, but we don’t know yet which one would create the best outcomes for students. With the amount of modular choice available across most degree programmes, a singular approach would have to be taken at least within a faculty, and potentially across the entire university – it wouldn’t be possible for one programme to undertake a ‘Warthog’ approach while another employed ‘Half the Hog’. But how do we decide which approach to take? And how would this one approach be implemented across the hundreds of programmes we have on offer with limited time for programme teams to sit down and redesign their assessments?

There are examples of institutions where programme-level assessment has been successfully put into practice (Brunel’s IPA and Bradford’s PASS are two good examples), but we need to understand the impact it has had on student learning, outcomes and wellbeing (for both staff and students) before deciding whether going the ‘Whole Hog’ is the right approach for Bristol.

Student Voice

In conversation with a fourth year Liberal Arts student

Check out this snippet of a conversation our Student Fellow Zoe Backhouse recorded with a fellow fourth year Liberal Arts student on the topic of assessment. Want to know why Europe’s doing HE better than the UK, and why playing Donald Trump in class may not be a bad thing? Read on…

Z: How was your assessment on your year abroad?

A: Well, when I was in Amsterdam it was broken down so much into different areas. It wasn’t all reduced down to an essay because that isn’t the one mode of intelligence in the world.

A: One of my assessments was I became Federica Mogherini, who’s the Foreign Minister for the EU, and we played out a simulation of the Middle East. Everybody was a different country – someone was Donald Trump! – and literally I learned so much about applying the theory and the logic and actually putting it in a practical sense. I think that’s just so important because university should be about teaching skills that can be transferred to employability.

I also loved how we did presentations abroad. At Utrecht you had to lead a seminar for 45 minutes after a 20 minute presentation. In your presentation you couldn’t just read from a piece of paper like everyone does at Bristol. You would stand and deliver a lesson, not looking down at notes, you’d talk to people and have eye contact. And then you had to lead a discussion amongst your peers.

I found it pretty nerve-wracking and I’m quite a confident public speaker. But that’s because the way we’ve always been indoctrinated here is… it’s just very insular. I don’t know, I just think there is a lack of discussion in general in all forms. Discussion only happens as an internal monologue that gets reproduced in an essay. People can’t have conversations in seminars because they get nervous, because they feel like they’d look stupid. I think you should take that away.

We used to be marked on class participation at Utrecht which was like 20% of the mark. I actually do think that’s really important? In the UK people are so scared of saying something because they think there’s only one right answer. In our education system we’re taught that there’s only one right answer and it’s at the back of the book and don’t look and don’t copy and don’t speak to anyone else about it. But it’s not that. Art is about taking things and reinterpreting them and making them better. So I think discussion has been lost from education.

I did another module called Digital Citizens. And literally, we were just coming in to talk about what was going on in the news that day, we’d all just sit around and have a discussion. One of the requirements of that course was to write a journalistic article which was liberating. And it wasn’t just GCSE journalism, it was like, can you write a legitimate article? So I wrote about how data analytics is perpetuating gender stereotypes.

You did have essays as well because that’s important. It’s just about diversifying assessment, and making people feel more comfortable and able in their abilities as opposed to constantly critiquing people and telling them they’re wrong all the time because they don’t fit one style of system.