News

An introduction from Tansy Jessop, our Visiting Professor

Tansy

Here I am in my Christmas jumper, looking slightly silly #dachshundthroughthesnow, and telling you a bit about myself. First things first, I do have a twelve-year-old black and tan sausage dog whose origins are close to Bristol. So call my stint at BILT a bit of a return on behalf of my hound! I am absolutely thrilled and honoured to be a Visiting Professor at BILT for the year. My undergraduate years were spent at the University of Cape Town, not dissimilar in size and feel to Bristol but a campus university rather than a city one. From my four years in the fraught 1980s at UCT, I remember feeling both adrift and excited; mystified, enthralled and slightly confused at the relevance of T. S. Eliot and Catullus. My studies seemed slightly irrelevant in a context of tear gas and angry fists thrust in the air. As I look back, I now know I was experiencing what many students feel but cannot name in relation to their studies. Sarah Mann has written the best work on student alienation, and as I read it, I know for myself that this is the root of much of student disengagement in higher education, particularly for first-generation students.

My interest in alienation and in engaging students is a huge spur to my work in learning and teaching. In leading the ‘Transforming the Experience of Students through Assessment’ (TESTA) research and change process for nearly ten years, and working with students and staff in many UK universities, I have encountered alienation in many guises. The defining feature of alienation is an absence of meaning or connection with something expected to bring meaning. In the context of assessment, it shows in students disgruntled with the treadmill of repetitive assessments; overloaded with content; finding that their curiosity is not ignited by assessment; or having little in the way of a pedagogic relationship with their tutors through feedback. Students often experience their modular curriculum as fragmented, with knowledge in one unit seeming unrelated to another. TESTA exposes some of the structural flaws in compartmentalised modular curricula. It calls for a much more programmatic and joined-up approach to teaching and learning.

But alienation is not all bad. It is part of what higher education is about, as students wrestle with multiple perspectives and try to pick their way through different ways of understanding their disciplines. The soupy sea of ambivalence that higher education invites students to swim in is bound to be a bit unsettling. However, there are wonderful pedagogic ways of lighting beacons along the way for students. Through TESTA, I have seen academics embrace new ways of doing formative assessment, engaging students in challenging, playful and exciting learning which prepares them for summative tasks. I have seen academics stand back and see the whole programme for the first time. This new way of seeing is often a catalyst for programme teams to draw back from content-heavy, facts-first approaches and to partner with their students in slow learning. The ‘slow professor’ approach to teaching, learning and assessment is all about creating spaces for students to engage, integrate and apply their learning. I hope over the coming months to share some of these ideas and engage various programmes in the TESTA process. I am really looking forward to getting to know the community at the University of Bristol, with or without my dog. Definitely without my Christmas jumper.

Berg, M. and Seeber, B. (2016) The Slow Professor: Challenging the Culture of Speed in the Academy. Toronto: University of Toronto Press.

Mann, S. (2001) ‘Alternative Perspectives on the Student Experience: alienation and engagement’, Studies in Higher Education, 26(1).

News

Tensions with Programme-Level Assessment

The following post was written by Helen Heath, a Reader in Physics and a BILT Fellow.

I am a Reader in the School of Physics at Bristol, currently first-year coordinator and head of the newly formed Physics Education Group (PEG), which is a group for the Pathway 3 staff within the School of Physics. I’ve been a BILT Fellow since September 2017, with a focus on programme-level assessment, and this has been renewed for the next academic year.

While looking into programme-level assessment, I have felt that there are significant tensions. The move to more formative and less summative assessment should help student development, but fewer summative assessments inevitably mean that each one is higher stakes. Devising good, innovative assessment methods targeted at testing the learning outcomes is desirable, but too many different types of assessment can be confusing to students.

As part of the University’s move towards programme-level assessment, several Schools took part in assessment labs and have been working on moving their assessment to a more programme-level approach. I am in the process of interviewing those involved with these pilot projects. I would like to understand the challenges they have faced in programme design, and the institutional changes, e.g. to regulations, that need to be made to enable the implementation of the revised assessment schemes they have proposed. I am also interested in how perceptions of what programme-level assessment is vary across the University.

Although the content of courses varies, the principles of good assessment design should be applicable to all programmes. However, it seems clear to me from the first interviews that there are structural differences between programmes that can hinder or help the adoption of this approach. Joint honours programmes, for example, can pose a challenge. The programme-level approach can offer the chance of an assessment which brings together the strands in these types of programmes, but this then requires clarity in the management of a joint assessment. Schools that have many different joint honours programmes need to avoid a confusing range of different assessments.

Schools piloting programme-level assessment are also at very different points in their curriculum review cycle. Adopting a programme-level approach where a review is underway anyway is much more natural than making large changes to assessment when the results of a previous review are still working their way through programmes.

As a second strand of my work, I have been thinking about what programme-level assessment might look like in my own School. We have joint programmes with Schools in our own Faculty and outside, as well as several “Physics with” programmes, but we have a well-defined core in all programmes which could be assessed at year level. A “straw person” proposal for a way forward will be submitted to our next Teaching Committee; I hope it will promote a lively discussion.

News

Introduction to 2018/19 from the BILT Director

This year’s BILT calendar kicked off with a fully booked seminar by Debby Cotton and Rebecca Turner on the use of four-week “immersion modules” as a means of easing the transition of new undergraduates into their disciplines and into good habits of university study. BILT has a full calendar of seminars, covering topics ranging from contract cheating and student autonomy to the embedding of skills. Please see our Events page for the full list.

We begin the 2018/19 academic year with a new set of nine BILT Associates, drawn from around the University, who will be working with BILT in areas of mutual interest so that the educational work of colleagues can be supported and disseminated across the institution. See this graphic to make sense of the growing BILT community. Please watch out for further opportunities to join our community in the coming months.

[Image: the growing BILT community]

Our Fellows and Associates are exploring our two themes, Assessment and Rethinking Spaces, as well as three additional projects outside the themes.

Among these opportunities will be the establishment of funded Learning Communities where we will invite individuals to join a cross-disciplinary team to work for a year on a defined topic. Learning Communities will operate largely autonomously but with BILT support, and they will bring together individuals with cognate interests where synergies and mutual interests can be exploited to their best advantage. These Communities will run from January for a full calendar year.

To support colleagues in schools even more thoroughly, we will soon be announcing a discretionary small/seedcorn funding scheme that staff can use to begin educational work and innovation within the year. The aim of such funding is to provide more agile and responsive resourcing as the educational landscape changes ever more quickly.

We are also increasing our work with the University’s strategic Bristol Futures programme. We are bringing the thinking around embedding its key themes (‘Sustainable Futures’, ‘Innovation and Enterprise’ and ‘Global Citizenship’) into BILT by introducing three BILT-Bristol Futures Academic Fellows, each of whom will take charge of the development of one theme, its intellectual rationale and its supporting resources. As part of this support, many schools will also see BILT colleagues working closely with key individuals in their academic programmes as we begin to transform our pedagogies, assessment and curricula along the lines envisaged by the Bristol Futures project.

This year we are also seeking to significantly increase our work with our students, beginning with the introduction of Student Fellows, who will complement and work alongside existing Academic Fellows and Associates towards realising practices of co-design and co-creation of our education.

I have left until last the introduction of our new BILT Visiting Professor for 2018/19. We thank the outgoing Visiting Professor, Christopher Rust, who has worked with many of you to develop our collective thinking around assessment. We welcome our new Visiting Professor, Tansy Jessop, who has led a key assessment change project in the sector (TESTA) and who will bring her considerable expertise and warm manner to help us make a step change in our move towards programme-level assessment this year.

Alvin Birdi

[Image: Chris Rust delivering a workshop at the symposium]
An interview with...

An interview with… Chris Rust

We spoke to Chris Rust, Professor Emeritus of Oxford Brookes University and author of ‘Assessment Literacy: The Foundation for Improving Student Learning’ and numerous other publications on assessment and pedagogy. Chris was BILT’s first visiting professor and has facilitated a number of workshops for BILT. He was the keynote speaker in BILT’s launch symposium in June 2017 on Assessment and Feedback.

What are the most common problems you tend to observe with current assessment practices?

I think the most common problem is a lack of alignment, or a fudging of alignment, between the learning outcomes and the task set. And then there is a further fudging when it comes to the assessment criteria (which may bear little or no connection to the outcomes), the fact that it is all then finally reduced to one virtually meaningless number (the mark), and the subsequent opacity of the feedback given. There may be four or five excellent outcomes, but then the task chosen to assess them may be an essay, or a report, or an exam, or whatever (regardless of whether that will actually assess whether the outcomes have been met or not), and the assessment criteria then tend to focus on the medium of the task rather than the individual outcomes: structure, fluency, grammar, spelling, referencing, etc. Now, while those may all be important, they almost certainly do not explicitly feature in the learning outcomes. And then finally, the worst sin of all, the assessment decisions are aggregated.

What benefits do students experience through a programme level approach to assessment?

Well, the programme specification and the subsequent programme-level outcomes should be the vital things the student needs to achieve to merit the qualification. So focussing on them should benefit both the teaching staff and the student. The problem with unitised or modular programmes is that outcomes can be atomised at the lower level to the point that they don’t add up to the espoused programme outcomes, or reach their greater depth and complexity. A programme-level approach should also benefit students by explicitly encouraging the integration of learning from the different units or modules.

How can Universities help students to understand these benefits?

By being explicit at all times – in programme and module documentation, and when assessment tasks are set and discussed – and also by ensuring that assessment tasks are valid and, wherever possible, authentic.

What are the most valuable resources/articles you use?

I have summarised a lot of the useful research in a freely available paper: ‘What do we know about assessment?’ I would also recommend the Australian website Assessment Futures (found here).

What one piece of advice would you give to help improve students’ assessment literacy?

You must involve students in the activity of assessment – marking work and having to think like assessors – whether through marking exercises, giving self- and/or peer feedback, or actually allocating marks.

You advocate ‘quick and dirty’ feedback – what does this mean?

I only advocate this when detailed, individualised feedback may not be logistically possible, or perhaps not necessary. In the case of, say, weekly lab reports, it is much more useful to take them in, sample them, and then send an all-class e-mail with generic feedback than for students to receive detailed individualised feedback on a report they did three weeks ago, when they have since done another two. I would also class online, possibly multiple-choice, quizzes in this category. They may not be able to assess at the higher end of Bloom’s taxonomy (discuss!), but they can give instant feedback to students on how much they have understood this week’s topic and, depending on the software, can also give hints and tips when the answer is wrong.

What inspired you to first start looking at assessment practice and advocating change?

When I did my MEd at Bristol, I had a session from David Satterly and was introduced to his book Assessment in Schools, which highlights many of the problems in assessment practice that sadly still exist today, over 30 years later. And out of all of them, I am especially incensed by the misuse of numbers in assessment, and the fact that university assessment systems get away with doing things a first-year statistics student would fail for.

Are there any models you would recommend following to redesign programme assessment? 

Yes. I particularly like the idea of requiring programmes to identify cornerstone and capstone modules, which are where the programme outcomes are explicitly assessed. I also think that Brunel’s system of allowing the separation of what they call study blocks from assessment blocks is especially ingenious and clearly allows for all sorts of creativity by the programme team.

Can you think of any case studies from other institutions that would inspire staff to change their programme assessment?

Further to what I said above, I think the Brunel model is certainly worth the effort needed to understand it because of the potential it opens up.

What is your view on 100 point marking scales and would you advocate use of any different forms of marking scales?

If I had my way I would ban the use of numbers in the assessment process completely – they are worse than unhelpful, and I have written on this at length! See for example: Rust, C. (2011) “The unscholarly use of numbers in our assessment practices; what will make us change?” International Journal for the Scholarship of Teaching and Learning, Vol. 5, No. 1, January 2011 (available here). I would advocate much simpler grading – pass/fail, or perhaps pass/merit/distinction, or at most a four-point scale (perhaps based on Biggs’ SOLO taxonomy) – specifically for each learning outcome.

What one film/book/resource would you like to share with the academic community?

In addition to those already mentioned, maybe the video A Private Universe (available here). It is quite old now but still totally relevant regarding issues of teaching and the failure of many of our assessment practices.

If you could change one thing about HE in the UK what would it be?

Banning the use of numbers in assessment.

Who was your favourite teacher at school/university and why? 

That’s hard – I went to a boys’ grammar school – much easier to list the bad teachers, and why. Not sure about favourite but I can only remember two good teachers at school – Mr Allen for English and Mr Thomas for maths – and they were good in that they explained things in easily accessible ways, with humanity and humour, had passion for their subjects and appeared to care about us learning.

Meet the BILT Fellows

Meet the BILT Fellows: Helen Heath

We asked our Fellows to write us a short blog about their background and what they are doing as part of their BILT Fellowship. The following blog is from Helen Heath, who has been a BILT Fellow since September 2017.

Programme Level Assessment – A return to finals?

The summative assessment for my degree started on a Thursday morning with a bell ringing. At that point the waiting students were able to run to their allocated desks and start writing. I was elbowed out of the way by someone I’d considered a friend for three years. The summative assessment concluded, seven exam papers later, the following Monday lunchtime: seven papers in four and a half days, with a Sunday “off” after the first six exams. I don’t think anyone ever explained how the various papers contributed to the overall mark, and I’ve certainly never had a transcript. “The past is a foreign country; they do things differently there.”1

I can testify that finals were not stress free.

I am currently a BILT Academic Fellow considering programme-level assessment. This is not a return to finals but a rethink of how to design and implement assessment of programmes. In the past I have worked with programme directors as a “critical friend” during the development stage of programmes. No programme director sets out to design a programme with an incoherent and inappropriate assessment regime – one that puts too much stress on students, doesn’t assess the skills or knowledge they hope students will acquire, or fails to provide the formative assessment students need to develop. The new programme directors I worked with were all keen to devise appropriate assessments, in quantity and level, with some innovation in assessment methods to provide good-quality feedback in novel and helpful ways. However, programmes don’t always remain as designed.

We are aware that students are stressed. Their workload is at times difficult to cope with, and they struggle with the many different demands on them. At the same time, university staff are struggling to provide meaningful feedback to increasing cohorts of students. We are advised to reduce the assessment load, but students want more feedback. A solution could be more formative assessment and less summative. But if we do move in that direction, the summative assessment becomes higher stakes and therefore, presumably, more stressful.

Jessop and Tomas2 write: “The idea that well-executed formative assessment could revolutionise student learning has not yet taken hold.” The same paper suggests that a large variety of assessment types can confuse students. Here, a programme-level approach to the design of formative and summative assessment might help.

As an example, in my subject area it is generally considered that weaker students do better on coursework. In exams with numerical questions, students may not be able to start a question, or may get it completely wrong. To improve performance on their unit, a lecturer may therefore introduce some element of summative coursework, and there are examples in the literature of this being very successful. From personal experience: when one lecturer introduced continuous summative assessment, through problem sheets, to a fourth-year unit, performance on that unit improved. A success! Except that in this case students reported spending so much time on this one unit that the other units suffered. A possibly more serious consequence is that all lecturers, seeing the success of the approach, adopt it, and students end up swamped in coursework.

I’d also suggest that the frequently heard complaint that students only do work that is assessed is much more likely to be true when the assessed work takes up most of their time. By over-assessing to ensure the work is done, we ensure that unassessed work isn’t done. A move towards programme-level assessment would develop programme teams with an overall view of how the programme is assessed: a move from a model where the unit is owned by a lecturer to one where the programme is owned by a team, and where there isn’t competition to get students to work on “your unit”.

In my School we are taking a step backwards by removing the assessed coursework element from the core lecture units in our first year. Work will still be submitted for formative feedback. The change from historic practice is that students will be required to engage with the work to pass the unit, but the marks won’t count. The aim is to move students towards using the coursework as a formative exercise – using it to identify where they are having conceptual difficulties rather than (as is anecdotally the case) searching the Internet for a very similar solved problem to copy without understanding, just to secure the additional marks. Working on the problems is the learning experience, not writing down the correct answers. What will happen? Watch this space.

1 L.P. Hartley, The Go-Between

2 Jessop, T. and C. Tomas (2017). “The implications of programme assessment patterns for student learning.” Assessment & Evaluation in Higher Education 42(6): 990-999.

Meet the BILT Fellows

Meet the BILT Fellows: Paul Wyatt

We asked our Fellows to write us a short blog about their background and what they are doing as part of their BILT Fellowship. The following blog is from Paul Wyatt, who has been a BILT Fellow since September 2017.

Biography

I’ve worked in the University for some 20 years and in that time been very much at the heart of teaching, its innovation and quality. I was Director of Undergraduate Studies in the School of Chemistry for 13 years, Director of ChemLabS, and Faculty Quality Assurance Team Chair (as it was then) for the Faculty of Science for five years. I’ve taught quite a variety of students over the years. Once upon a time I taught chemistry and physics in a secondary school to 11- to 18-year-old boys, and in the professional development courses I ran in industry I would sometimes teach adults nearing retirement. I suppose my first taste of the satisfaction that can come from teaching came in my early 20s, upon seeing 12-year-old boys simply bubbling over with excitement about chemistry.

I’ve co-authored four textbooks in chemistry (two undergraduate and two postgraduate) which are now course texts in some US institutions and have been translated into both Chinese and Japanese. With my own teaching I like to mix up the media, using whatever works best in the situation. While I use a blackboard on the one hand, I’m also a big fan of using technology where it genuinely improves the teaching experience (and it can – my iPad in lectures is so much clearer than the visualiser), but not where it is used for its own sake or – for reasons no one can put their finger on – simply doesn’t work. The last couple of years have been quite experimental for me in this regard, using polling software and flipping the class.

I am one of the University’s Pathway 3 Professors.

BILT Fellow

I started my BILT Fellowship on 1st September 2017. With programme-level assessment as a starting point, reading some of the literature started a process of thinking a bit more deeply about the activities we have in the School of Chemistry, and it began to dawn on me that there are several things we do that do not really work very well. Furthermore, the things that don’t work very well have been tinkered with for years and yet continue to not work very well.

Also, I hear people say that they ‘don’t know the answers’, and yet all too readily the answer to, for example, students not attending lectures is to introduce a register. Well, it’s about time we put to work the information out there in the educational literature. So I set about developing a resource that digests the educational literature to provide some evidence-based, concrete solutions to the problems that we have. The School of Chemistry can be the framework in which to set that, but the application should be much broader. Simply: ‘what can we do better?’

Having been Director of Undergraduate Studies in Chemistry for over a decade, I’m a bit shocked to realise that, while we might have completely redesigned the course, improved the labs immeasurably and put in far more robust assessment processes over the years – all good stuff – somehow we missed some important deep-seated issues.  Until we fix these our NSS scores will never hit the big time.

This year I have had three BSc students who have been doing educational projects:

  • Virtual and Augmented Learning to Improve Student Learning & Engagement
  • Student-Student Interactions for Enhancing the Learning Experience
  • Efficacy of handouts currently used in the School of Chemistry

All these projects have caused the students themselves to reflect on their learning, and not just in their university years. The four of us have had many open discussions; they have been very honest with me about when they have displayed superficial learning, and about things that they don’t think worked for social cohesion. They have also quizzed me about why the School does things the way it does. At times their questions made me realise the magnitude of the issues. They have provided a good sounding board for the issues which have emerged, which are, very broadly:

  • social cohesion (at every level)
  • communication (to the students)
  • student-student interactions

Sometimes the task ahead of us for effecting change looks very daunting. A book that I have found very encouraging, and which details how teaching methods were totally transformed at a research-intensive university, is “Improving How Universities Teach Science” by Carl Wieman (ISBN 978-0-674-97207-0).  It demonstrates that monumental changes to teaching can be made within an institution, and it has top tips on how to achieve them.