
Informal exploratory writing: three activities you can try with your students

The following post was written by Amy Palmer, BILT Digital Resources Officer. 

Studies have shown there is a strong correlation between the amount of writing a learner completes and their attainment (Arum and Roksa, 2011). John Bean, in his book ‘Engaging Ideas’ (2011), outlines a number of methods to increase the amount of informal writing your students undertake. He groups these under the theme of ‘thinking pieces’, and he highlights a number of benefits. He believes thinking pieces:

  • Promote critical thinking.
  • Change the way students approach reading – writing down their thoughts as they read forces students to consider alternative and opposing arguments to the piece in front of them.
  • Produce higher levels of class preparation and richer discussions in class. Similar to the point above, if informal exploratory writing is done at the point of reading, students are more prepared with arguments and counter-points in discussion classes.
  • Are enjoyable to read, and make a nice change for markers from the normality of essays.
  • Help you get to know your students better, as you can see how their arguments are formed and where their beliefs lie.
  • Help assess learning problems along the way. As with any increase in formative work, the teacher can spot gaps in learning earlier and check whether the same gaps exist for others in the cohort.

Bean describes 22 different exploratory writing tasks, which you can find in his ‘Engaging Ideas’ book; we have selected three to share in this blog.

Bio-poems

This task is easier to apply in some disciplines than in others (philosophers, historians and politicians come to mind first) and is designed to make students think about the personal dimensions of a subject being studied in a course. A bio-poem is semi-structured and goes as follows:

  • Line 1: First name of the character
  • Line 2: Four traits that describe the character
  • Line 3: Relative of (brother of, sister of, etc)
  • Line 4: Lover of (list three things or people)
  • Line 5: Who feels (three items)
  • Line 6: Who needs (three items)
  • Line 7: Who fears (three items)
  • Line 8: Who gives (three items)
  • Line 9: Who would like to (three items)
  • Line 10: Resident of
  • Line 11: Last name

(Gere, 1985:222)

Not only does this make the subject more human and therefore more memorable, but it also provides a great revision tool when it comes to exams. If the poems are written as a task before the class, they can be discussed in class to surface differences in how each student perceives the subject.

Writing dialogues between two different theorists/arguments

This task asks students to write a ‘meeting of the minds’ piece (Bean, 2011:136), in which they script a conversation between two theorists arguing different sides (e.g. Hobbes and Locke debating the responsibilities of the state). This encourages students to truly consider each side of the argument and also prepares them for discussion in class. It can be done individually or in small groups, and suits many disciplines.

Writing during class to ask questions or express concerns

Less creative than our other two suggestions, this task asks students to ‘freewrite’ during a break in the class. You could ask students to summarise the lecture so far, or to write down any puzzlements or questions they have. At the end of the freewriting time (which should be a maximum of five minutes), ask a couple of students to feed back. Not only do students practise writing, but you also get real-time feedback, and students can ask questions part way through the lecture.

References

Arum, R. and Roksa, J., 2011. Academically Adrift: Limited Learning on College Campuses. Chicago: University of Chicago Press.

Bean, J., 2011. Engaging Ideas: The Professor’s Guide to Integrating Writing, Critical Thinking and Active Learning in the Classroom. San Francisco: Jossey-Bass.

Gere, A. R. (ed.), 1985. Roots in the Sawdust: Writing to Learn Across the Disciplines. Urbana, IL: National Council of Teachers of English.


Tensions with Programme-Level Assessment

The following post was written by Helen Heath, a reader in Physics and BILT Fellow. 

I am a Reader in the School of Physics at Bristol, currently first-year coordinator and head of the newly formed Physics Education Group (PEG), a group for the pathway 3 staff within the School of Physics. I have been a BILT Fellow since September 2017, with a focus on programme level assessment, and the fellowship has been renewed for the next academic year.

While looking into programme level assessment I have felt that there are significant tensions. A move to more formative and less summative assessment should help student development, but fewer summative assessments inevitably mean that each one is higher stakes. Devising good, innovative assessment methods targeted at testing the learning outcomes is desirable, but too many different types of assessment can be confusing to students.

As part of the University’s move towards programme level assessment, several Schools took part in assessment labs and have been working on moving their assessment to a more programme-level approach. I am in the process of interviewing those involved with these pilot projects. I would like to understand the challenges they have faced in programme design, and the institutional changes (e.g. to regulations) that need to be made to enable implementation of the revised assessment schemes they have proposed. I am also interested in how perceptions of what programme level assessment is vary across the University.

Although the content of courses varies, the principles of good assessment design should be applicable to all programmes. However, it seems clear to me from the first interviews that there are structural differences between programmes that can hinder or help the adoption of this approach. Joint honours programmes, for example, can pose a challenge. The programme level approach can offer the chance of an assessment that brings together the strands of these programmes, but this then requires clarity in the management of a joint assessment. Schools with many different joint honours programmes need to avoid a confusing range of different assessments.

Schools piloting programme level assessment are also at very different points in their curriculum review cycle. Adopting a programme level approach when a review is underway anyway is much more natural than making large changes to assessment while the results of a previous review are still working their way through programmes.

As a second strand of my work I have been thinking about what programme level assessment might look like in my own School. We have joint programmes with Schools in our own Faculty and beyond, as well as several “Physics with” programmes, but all our programmes share a well-defined Core, which could be assessed at year level. A “straw person” proposal for a way forward will be submitted to our next Teaching Committee; hopefully it will promote a lively discussion.


Meet the BILT Fellows: Helen Heath

We asked our Fellows to write us a short blog about their background and what they are doing as part of their BILT Fellowship. The following blog is from Helen Heath, who has been a BILT Fellow since September 2017.

Programme Level Assessment – A return to finals?

The summative assessment for my degree started on a Thursday morning with a bell ringing. At that point the waiting students could run to their allocated desks and start writing. I was elbowed out of the way by someone I’d considered a friend for three years. The summative assessment concluded, seven exam papers later, the following Monday lunchtime: seven papers in four and a half days, with a Sunday “off” after the first six exams. I don’t think anyone ever explained how the various papers contributed to the overall mark, and I’ve certainly never had a transcript. “The past is a foreign country; they do things differently there.”1

I can testify that finals were not stress free.

I am currently a BILT academic fellow considering programme level assessment. This is not a return to finals but a rethink of how to design and implement assessment across a programme. In the past I have worked with programme directors as a “critical friend” during the development stage of programmes. No programme director sets out to design a programme with an incoherent and inappropriate assessment regime: one that puts too much stress on students, fails to assess the skills or knowledge they hope students will acquire, and doesn’t provide the formative assessment students need to develop. The new programme directors I worked with were all keen to devise assessments that were appropriate in quantity and level, with some innovation in assessment methods to provide good quality feedback in novel and helpful ways. However, programmes don’t always remain as designed.

We are aware that students are stressed. Their workload is at times difficult to manage, and they struggle with its many competing demands. At the same time, university staff are struggling to provide meaningful feedback to increasing cohorts of students. We are advised to reduce the assessment load, but students want more feedback. A solution could be more formative assessment and less summative; yet if we make that move, the summative assessment that remains becomes higher stakes and therefore, presumably, more stressful.

Jessop and Tomas2 write: “The idea that well-executed formative assessment could revolutionise student learning has not yet taken hold.” The same paper suggests that a large variety of assessment types can cause students confusion. Here, a programme level approach to the design of formative and summative assessment might help.

As an example, in my subject area it is generally held that weaker students do better on course work. In exams with numerical questions, students may be unable to start a question, or may get it completely wrong. To improve performance on their unit, a lecturer may therefore introduce an element of summative course work, and there are examples in the literature of this being very successful. From personal experience: when one lecturer introduced continuous summative assessment, through problem sheets, to a 4th year unit, performance on that unit improved. A success! Except that in this case students reported spending so much time on this one unit that their other units suffered. A possibly more serious consequence is that all lecturers see the success of the approach, adopt it, and students end up swamped in course work.

I’d also suggest that the frequently heard complaint that students only do work that is assessed is much more likely to be true when the assessed work takes up most of their time: by over-assessing to ensure the work is done, we guarantee that unassessed work isn’t done. A move towards programme level assessment would develop programme teams with an overall view of how the programme is assessed, shifting from a model where each unit is owned by a lecturer to one where the programme is owned by a team, and where there is no competition to get students to work on “your unit”.

In my School we are taking a step backwards by removing the assessed course work element from the core lecture units in our first year. Work will still be submitted for formative feedback. The change from historic practice is that students are required to engage with the work to pass the unit, but the marks won’t count. The aim is to move students towards using the course work as a formative exercise, identifying where they have conceptual difficulties, rather than (as is anecdotally the case) searching the Internet for a very similar solved problem to copy without understanding, just to secure the additional marks. Working on the problems is the learning experience, not writing down the correct answers. What will happen? Watch this space.

1 L.P. Hartley, The Go-Between

2 Jessop, T. and C. Tomas (2017). “The implications of programme assessment patterns for student learning.” Assessment & Evaluation in Higher Education 42(6): 990-999.