500 Words, News

My Retirement from Competitive Baking

Yesterday, after an excruciating three-week wait, the Education Services Charity Bake Off Final took place. I had made it through to the final after winning my heat (cheese and rosemary scones, if you must know) and had been practising for my chance at winning the title ever since.

I was as happy with my entry as a novice baker could be, having opted for a chocolate and passionfruit cake, and eagerly awaited the results as the morning went on. By the time it came to 1pm, when colleagues from across the office gathered around waiting for our Director to announce the winner, I was actually nervous.

I didn’t win. I didn’t expect to win – there were some amazing cakes on offer from some equally amazing bakers – but no one likes to lose, do they? I spent the afternoon texting my husband about how I was never going to bake again and fantasising about throwing my rolling pin away when I got home.

And I don’t plan on entering another baking competition; I didn’t like waiting around for weeks not knowing what the result was going to be – yet this is exactly what so many 17 and 18-year-olds are going through today.

Having sat their exams months ago, they have spent their summer nervously awaiting the results that will determine their future: whether they go to university or not; whether, if they do choose university, that university is their ‘first choice’; or whether they have to go through ‘clearing’ (an awful process and an even more awful word for it – surely there is a better way it could be done?*).

But there is no option for a university student to ‘never bake again’ – doing a degree is like a three-year baking competition. For the few students who do well in all of their assessments this is fine (read: smash the soufflé), but for the majority of students, who struggle through at least some of their degree, the process of endlessly awaiting the next result is hugely detrimental to their wellbeing – and yet we continue to assess in this way.

As adults, we don’t experience this same kind of stress. The wait to hear whether you’ve been accepted for a mortgage, or whether your latest paper has been accepted into a journal, is about as close as we come. But these are annual occurrences at best and, as adults, we have the experience of knowing we can always resubmit a paper or apply for a different mortgage. I wonder, if we experienced the continual insecurity and nerves that students face around assessment, whether we would still choose to assess in this way.

A move towards more formative and less summative assessment could be one way to reduce this insecurity; a move away from numerical grading could be another. But it is difficult to know what balance could be struck between keeping students motivated and removing the carrot of a grade they are happy with.

So, while I’ll be hanging up my apron for the foreseeable future, I’ll be thinking of all the students starting (and coming back) in September who will be facing another year of blind bakes, and wondering what we can do to reduce the anxiety that results and assessments cause.

*If this area interests you, I highly recommend this WonkHE piece on making university admissions truly inclusive – including two very viable recommendations.  

Amy Palmer

Teaching Stories

Strategic Students and Question Spotting

The following piece was written by Helen Heath, a BILT Fellow, Reader in Physics and (soon to be!) University Education Director (Quality).

Why do we think that students being strategic in their learning is a bad thing? Is this an example of emotive conjugation, as brilliantly illustrated by Antony Jay and Jonathan Lynn in the “Yes Minister” series: “I give confidential security briefings. You leak. He has been charged under section 2a of the Official Secrets Act.”?

“I only have time for important things, you have concentrated on the wrong things, students are question spotting rather than learning.”

Academics are very strategic in the tasks they decide to undertake. They pick tasks that will result in promotion, they tune their lectures to give students what they want to get those good questionnaire responses, and they leave undone the jobs they have decided are not worth the time and effort. Yet we seem to criticise students for the same behaviour. We decide not to read the majority of the 200 papers in the Senate pack, quickly reviewing the headings and deciding what matters to us; this is a sensible use of precious time. A student decides they don’t have time to read and understand the whole textbook, so they look at previous examinations to see which topics are more likely to come up – and this is “question spotting”.

But is “question spotting” such a bad idea? Academically, there is some sense to it. If a question (or a variation of a question) about the same topic appears every year, then the examiner is sending a message that this is a topic they regard as important. We might hope that students had realised what the key topics were in other ways. We might stress these key topics in our lectures. We might like to think our students are able to just “get” what is key, but that’s a high-level skill, and the key topics may only become obvious once they have reappeared in subsequent years. When students are struggling with the nuts and bolts of a subject, it’s not surprising that they can’t see the wood for the trees.

Many weaker students find it difficult to scaffold their learning and to identify the key elements that will enable them to succeed later. They use every piece of information they can to work out what these key topics are, and that includes judging what we regard as important by what we assess them on. The topics we choose to emphasise in our final assessment must be important, so question spotting is a way of understanding what it is that academics regard as important.

I’d suggest that this strategic planning is not only useful for passing examinations but is also a useful life skill. The difficulty arises where students question spot and learn by rote with no understanding. The symptom of this in physics is often a good answer to a question that looks like the one that was asked, but is slightly different.

The HEA training materials used in the programme-focused assessment training for the pilot project encouraged academics to consider what the threshold topics in their area are. Much has been written about threshold concepts in physics; a recent paper even suggests that there are too many of them to count (“Identifying Threshold Concepts in Physics: too many to count”, R. Serbanescu, 2017). If this is the case, we need to guide the students by deciding what we think is key. If we fail to do that, then we shouldn’t blame the students for looking at what we indicated was key through our assessment. Assessment does drive learning, and if we are assessing the same topic repeatedly then it is driving the students to learn that topic.

One mechanism we have tried in physics, which has some advantages, is giving the students a list of questions, a subset of which is guaranteed to appear on the paper and make up ~40% of the material. These direct students towards the bare bones of the course: if they can answer this set of questions, they should at least be able to reproduce the basic information in the course. Looking at our definition of what constitutes a third-class performance in assessment (“some grasp of the issues and concepts underlying the techniques and material taught”, UoB 21-point scale, 40–50 descriptor), the ability to simply regurgitate some basic concepts with reasonable accuracy could be seen to meet this. Ideally students would want to go further but, in some cases, they haven’t had the time to absorb that particular piece of knowledge and digest it in the depth we would expect. While there are time constraints on the acquisition of knowledge in a Higher Education programme, almost everyone will inevitably come up against a concept that they are unable to grasp before the assessment.

And is learning by rote so bad? I do not set out to prove Pythagoras’ theorem every time I need to use it for a question.

Forms of assessment should have a range of tasks that test both use of tools and deeper concepts, but students should not be criticised for directing their learning towards topics they think are likely to come up in an examination. By putting these topics on the examination regularly we have declared them to be important.

500 Words, News

Should we go ‘The Whole Hog’ with programme-level assessment?

The following post was written by Amy Palmer, BILT Digital Resources Officer.

Since the launch of BILT in 2017, the implementation of programme-level assessment across the University has been a widely-discussed topic. But what do we really mean by programme-level assessment?

Tansy Jessop, while delivering her TESTA workshop in January, outlined her ‘Five Hogs of Programme-Level Assessment’, breaking down the term into five different ways this assessment framework could be implemented.

The first, ‘The Whole Hog’, advocates an integrated and connected assessment plan running through the entire programme, using capstone and cornerstone assessments to bring together learning from different modules. Teaching is separated from the [summative] assessment, allowing students to make their own connections between content in different modules. This approach is the most widespread understanding of what ‘programme-level assessment’ is, and is arguably the simplest to implement, as there is a clear split between teaching and summative assessment.

The next, ‘Half the Hog’, still has an assessment piece that runs throughout the entire programme, separate from individual modules, but it doesn’t require all assessments to be disconnected from teaching. This connective assessment could be a research project that runs from first to third (or fourth) year and draws on concepts from all of the individual modules. A benefit of this ‘Hog’ is that there is an overall reduction in summative assessments across the degree to make room for the programmatic assessment piece.

The ‘Other half of the Hog’ employs synoptic assessment across a number of modules (i.e. 50% of the degree’s modules are assessed via a synoptic assessment, while the other 50% have assessments that are directly related to their module’s content). Each module has a combination of formative assessments and one summative assessment, and the synoptic assessment integrates concepts, makes connections between the modules and is challenging for students.

The next pig (or pigs), ‘Both the Hogs together’ (originally named ‘Eat the Hogs Together’, but we didn’t think that was appropriate for our plant-based friends 😊), is when both the curriculum and the assessment design are done as a team, using TESTA (programme and student evidence) to inform the assessments. Summative assessment is reduced across the entire degree so that students engage more with formative assessments. Teams are encouraged to integrate assessment into the shared process so that everyone has a shared understanding and practice.

The final hog, ‘The Warthog’, is the most radical of the approaches. Instead of taking modules in parallel, students take one module at a time, in blocks (for example, one module runs in weeks 1–4, a second in weeks 5–8, etc.). Assessments are joined up through shared units that weave across the programme. This method has been adopted to some extent at Plymouth University through their immersive induction module in first year.

Some of these ‘hogs’ would be easier to achieve than others, but we don’t know yet which one would create the best outcomes for students. With the amount of modular choice available across most degree programmes, a singular approach would have to be taken at least within a faculty, and potentially across the entire university – it wouldn’t be possible for one programme to undertake a ‘Warthog’ approach while another employed ‘Half the Hog’. But how do we decide which approach to take? And how would this one approach be implemented across the hundreds of programmes we have on offer with limited time for programme teams to sit down and redesign their assessments?

There are examples of institutions where programme-level assessment has been successfully put into practice (Brunel’s IPA and Bradford’s PASS are two good examples), but we need to understand the impact it has had on student learning, outcomes and wellbeing (for both staff and students) before deciding whether going the ‘Whole Hog’ is the right approach for Bristol.

Student Voice

In conversation with a fourth year Liberal Arts student

Check out this snippet of conversation our Student Fellow Zoe Backhouse recorded with a fellow fourth year Liberal Arts student on the topic of assessment.  Want to know why Europe’s doing HE better than the UK and why playing Donald Trump in class may not be a bad thing? Read on…

Z: How was your assessment on your year abroad?

A: Well, when I was in Amsterdam it was broken down so much into different areas. It wasn’t all reduced down to an essay because that isn’t the one mode of intelligence in the world.

One of my assessments was I became Federica Mogherini, who’s the Foreign Minister for the EU, and we played out a simulation of the Middle East. Everybody was a different country – someone was Donald Trump! – and literally I learned so much about applying the theory and the logic and actually putting it in a practical sense. I think that’s just so important because university should be about teaching skills that can be transferred to employability.

I also loved how we did presentations abroad. At Utrecht you had to lead a seminar for 45 minutes after a 20 minute presentation. In your presentation you couldn’t just read from a piece of paper like everyone does at Bristol. You would stand and deliver a lesson, not looking down at notes, you’d talk to people and have eye contact. And then you had to lead a discussion amongst your peers.

I found it pretty nerve-wracking and I’m quite a confident public speaker. But that’s because the way we’ve always been indoctrinated here is… it’s just very insular. I don’t know, I just think there is a lack of discussion in general in all forms. Discussion only happens as an internal monologue that gets reproduced in an essay. People can’t have conversations in seminars because they get nervous, because they feel like they’d look stupid. I think you should take that away.

We used to be marked on class participation at Utrecht which was like 20% of the mark. I actually do think that’s really important? In the UK people are so scared of saying something because they think there’s only one right answer. In our education system we’re taught that there’s only one right answer and it’s at the back of the book and don’t look and don’t copy and don’t speak to anyone else about it. But it’s not that. Art is about taking things and reinterpreting them and making them better. So I think discussion has been lost from education.

I did another module called Digital Citizens. And literally, we were just coming in to talk about what was going on in the news that day, we’d all just sit around and have a discussion. One of the requirements of that course was to write a journalistic article which was liberating. And it wasn’t just GCSE journalism, it was like, can you write a legitimate article? So I wrote about how data analytics is perpetuating gender stereotypes.

You did have essays as well because that’s important. It’s just about diversifying assessment, and making people feel more comfortable and able in their abilities as opposed to constantly critiquing people and telling them they’re wrong all the time because they don’t fit one style of system.

500 Words, Uncategorized

Informal exploratory writing: three activities you can try with your students

The following post was written by Amy Palmer, BILT Digital Resources Officer. 

Studies have shown there is a strong correlation between the amount of writing a learner completes and their attainment (Arum and Roksa, 2011). John Bean, in his book ‘Engaging Ideas’ (2011), outlines a number of methods to increase the amount of informal writing your students undertake. He groups these under the theme of ‘thinking pieces’, and he highlights a number of benefits. He believes thinking pieces:

  • Promote critical thinking.
  • Change the way students approach reading – writing down their thoughts forces them to consider alternative and opposing arguments to the piece they are reading.
  • Produce higher levels of class preparation and richer discussions in class. Similar to the point above, if informal exploratory writing is done at the point of reading, students are more prepared with arguments and counter-points in discussion classes.
  • Are enjoyable to read, and make a nice change for markers from the normality of essays.
  • Help you get to know your students better, as you can see how their arguments are formed and where their beliefs lie.
  • Help assess learning problems along the way. Like any increase in formative work, the teacher can see any gaps in learning at an earlier point and assess whether this is the case for others in the cohort.

Bean describes 22 different exploratory writing tasks, which you can find in his ‘Engaging Ideas’ book; we have selected three to share in this blog.

Bio-poems

This task is easier to apply in some disciplines than in others (philosophers, historians and politicians come to mind first as subjects) and is designed to make students think about the personal dimensions of a subject being studied in a course. A bio-poem is semi-structured and goes as follows:

  • Line 1: First name of the character
  • Line 2: Four traits that describe the character
  • Line 3: Relative of (brother of, sister of, etc)
  • Line 4: Lover of (list three things or people)
  • Line 5: Who feels (three items)
  • Line 6: Who needs (three items)
  • Line 7: Who fears (three items)
  • Line 8: Who gives (three items)
  • Line 9: Who would like to (three items)
  • Line 10: Resident of
  • Line 11: Last name

(Gere, 1985:222)

Not only does this make the subject more human and therefore more memorable, but it also provides a great revision tool when it comes to exams. If this is done as a task before the class, each person’s poem can be discussed to see differences they have found in their perception of the subject.

Writing dialogues between two different theorists/arguments

This task asks students to write a ‘meeting of the minds’ piece (Bean, 2011:136), where they conjure a script between two theorists arguing different sides (e.g. Hobbes and Locke arguing over responsibility in a state). This encourages the students to truly consider each side of the argument and also prepares them for discussion in class. This can be done as an individual task or in small groups, and suits many disciplines.

Writing during class to ask questions or express concerns

Less creative than our other two suggestions, this task asks students to ‘freewrite’ during a break in the class. You could ask students to summarise the lecture so far, or write down any puzzlements or questions they have. At the end of the freewriting time (which should be a maximum of five minutes), ask a couple of students to feed back. Not only do students practise writing, but you also get real-time feedback, and students can ask questions partway through the lecture.

References

Bean, J., 2011. Engaging Ideas: The Professor’s Guide to Integrating Writing, Critical Thinking and Active Learning in the Classroom. Jossey-Bass, United States of America.

Gere, A. R. (ed.), 1985. Roots in the Sawdust: Writing to Learn Across the Disciplines. Urbana, Ill.: National Council of Teachers of English.

Arum, R. and Roksa, J., 2011. Academically Adrift: Limited Learning on College Campuses. University of Chicago Press, Chicago.

News

Should all assessments be inclusive?

The following post was written by Emilie Poletto-Lawson, an Educational Developer and BILT Fellow. 

I am a BILT Fellow based in Academic Staff Development, where I work as an Educational Developer. I have been working on the BILT theme of assessment – focusing on inclusive assessment – since February 2018. I am undertaking a literature review with a view to making recommendations around inclusive assessment principles that we can embed into our units and programmes at the University of Bristol, to work alongside our institutional principles on Assessment and Feedback.

From my reading to date, the main takeaway is that inclusivity is predominantly discussed as a means of supporting students with disabilities. It is very much viewed as a deficit approach to considering assessment. However, I strongly believe it is far more than that: we want to be inclusive of all learners, and for inclusive assessment to actually be more inclusive.

As part of my BILT Fellow role I recently attended an event at the University of Leicester called “Making IT* Happen: from strategy to action (*Inclusive Teaching)”, led by Pete Quinn and Mike Wray (blog available here). The focus was very much on supporting disabled students in our institutions and ensuring universities are legally compliant with the Equality Act. In preparation for the event, the experts highlighted good practice in the work we do at Bristol; for example, we received positive feedback on our institutional website regarding inclusivity (http://www.bristol.ac.uk/disability-services/study-support/reasonable-adjustments/), and in particular on the videos created by Louise Howson from Academic Staff Development (http://www.bristol.ac.uk/staffdevelopment/academic/resources/learning-and-teaching-resources/learning-and-teaching-videos/).

Regarding the literature review I am working on: when searching the keywords “inclusive”, “assessment” and “higher education”, I obtained 9,596 results in ERIC, and yet, going through the abstracts, not that many articles encompass all three parameters. It appears there might be a gap in the literature here, despite inclusivity having been key to university strategies in the UK and beyond for a number of years now. The key themes emerging from my searching so far can be seen below.

[Figure: Inclusive Assessment in Higher Education map, created by Emilie Poletto-Lawson with MindMeister, 21.09.2018]

In the US literature, the inclusive aspect of articles relates to the idea of an inclusive campus, looking at inclusivity from the selection process (access) through to students completing their degree (success). In the UK, the literature shows an acknowledged need for policies, strategies and processes, as well as professional development, to bring about inclusive practices.

Initial readings suggest there is a rhetoric of inclusivity as a given good, but it is difficult to identify concrete examples, especially when it comes to assessment. The literature review is the first step in articulating a clear definition before focusing on what inclusive assessment means for us at the University of Bristol.

If you are interested in this topic why not read “Against being Inclusive” by Jeffrey Carlson, interim provost and Vice President for Academic Affairs at Dominican University? I appreciate it might be an odd recommendation since this post advocates that all assessments should be inclusive, but I think this article, published in 2016, does offer food for thought and reinforces the need to clearly define what we mean by “inclusivity” before we move to making recommendations at Bristol.

News

Tensions with Programme-Level Assessment

The following post was written by Helen Heath, a reader in Physics and BILT Fellow. 

I am a reader in the School of Physics at Bristol and currently first-year coordinator and head of the newly formed Physics Education Group (PEG), a group for the pathway 3 staff within the School of Physics. I’ve been a BILT Fellow since September 2017, with a focus on programme-level assessment, and this has been renewed for the next academic year.

While looking into programme-level assessment I have felt that there are significant tensions. The move to more formative and less summative assessment should help student development, but fewer summative assessments inevitably mean that those assessments are higher stakes. Devising good, innovative assessment methods targeted at testing the learning outcomes is desirable, but too many different types of assessment can be confusing for students.

As part of the University’s move towards programme-level assessment, several Schools took part in assessment labs and have been working on moving their assessment to a more programme-level approach. I am in the process of interviewing those involved with these pilot projects. I would like to understand the challenges they have faced in programme design, and the institutional changes (e.g. to regulations) that need to be made to enable the implementation of the revised assessment schemes they have proposed. I am also interested in how perceptions of what programme-level assessment is vary across the University.

Although the content of courses varies, the principles of good assessment design should be applicable to all programmes. However, it seems clear to me from the first interviews that there are structural differences between programmes that can hinder or help the adoption of this approach. Joint honours programmes, for example, can pose a challenge. The programme-level approach can offer the chance of an assessment that brings together the strands in these types of programmes, but this then requires clarity in the management of a joint assessment. Schools that have very many different joint honours programmes need to avoid a confusing range of different assessments.

Schools piloting programme-level assessment are also at very different points in their curriculum review cycles. Adopting a programme-level approach where a review is underway anyway is much more natural than making large changes to assessment when the results of a previous review are still working their way through programmes.

As a second strand of my work, I have been thinking about what programme-level assessment might look like in my own School. We have joint programmes with Schools in our own Faculty and outside, as well as several “Physics with” programmes, but we have a well-defined core in all programmes which could be assessed at year level. A “straw person” proposal for a way forward will be submitted to our next Teaching Committee; hopefully it will promote a lively discussion.