In the previous episode, I outlined some theory that explains why screencast can work well as dialogic feedback. But how effective is it in practice? Does it live up to its theoretical promise? And how can we refine and develop the technique to enhance learning?
While there have been a few empirical studies, we still have more to learn about how students receive and perceive screencast, and how this can help us develop our approaches. In this post, I describe the findings from my own research and the implications if you want to use the screencast method in your teaching.
The study: screencast feedback on draft coursework
I investigated the use of screencast with students in a final year undergraduate class in Strategy (in Spring 2020, just before Covid hit).
- The 54 students in my three seminar groups were invited to submit a draft essay for formative feedback, prior to completing their portfolio of coursework.
- Of these, 38 provided some writing for me to review, which ranged in length from 800 to about 2,000 words.
- For each of these, I produced screencast feedback. These varied in duration from 12 to 44 minutes, with an average of 22:57.
- Of those receiving screencast, 28 responded to a questionnaire asking them to rate the usefulness of the feedback, but more importantly to write about it (feedback on feedback!).
All respondents found their screencast useful. But to understand how and in what ways, I examined their written responses. Two major themes emerged from this qualitative analysis.
Theme 1: Talking it through with my tutor
Students found that the detail and specificity in the personalised feedback and the way it was conveyed meant it was…
"unbelievably helpful to be talked through the assignment, rather than just have a few comments"
(Student 20)
making it feel…
"almost like having a meeting with the marker without the logistics and time [constraints]"
(Student 1)
as if…
"the teacher led me to review my essay"
(Student 3)
because it was…
"much more engaging and interactive when compared to written feedback"
(Student 18).
Importantly, students’ reactions to the tone and nuance of the verbal commentary confirmed that…
"you can learn a lot more by the way something is said, rather than merely stating [it in writing]"
(Student 11)
So students perceived the screencast as akin to sitting with their tutor while talking through the work in detail: it felt interactive, even though it wasn't. But several students did ask follow-up questions, to which I responded with further (briefer) recordings, email, or live chat. So there was some 'conversation' beyond the screencast (as indeed there had been before the formative submissions).
We can interpret all this in terms of the dialogic framework we looked at in the previous episode:
- The structural dimension entails the ‘mechanics’ of the screencast and follow-up exchanges by email or video call.
- The cognitive dimension here is the content of the feedback itself – the commentary, suggestions, and detail of the critique.
- The social-affective dimension is the implied interaction between the students and me, and how they felt about and reacted to that 'dialogue'.
Based on the evidence from the questionnaire responses, we can argue that the structural dimension (the screencast) enabled a suitably ‘social and interpersonal negotiation’ of feedback (Yang and Carless, 2013, p. 287) that encouraged students to engage with the feedback content itself. In other words, they gained a better understanding of their feedback because of how it was conveyed to them in this more ‘emotion-rich’ way.

Theme 2: Puts me in the mind of the marker
If the first theme draws attention to the social-affective dimension of the feedback triangle, the second illuminates how the screencast enabled students to engage with the cognitive dimension in a new way. In contrast to traditional written feedback, students were…
"able to follow the reasoning behind the comments and so [get] a clearer idea on what to improve"
(Student 10)
because the feedback provided…
"insight into exactly what the lecturer was thinking/feeling … what worked, whether it left the assessor with the effect […] intended, and most importantly … what [could be done] to improve"
(Student 11)
In other words, the screencast opened up a new perspective on their work…
"to see how someone sees it as a whole and how it is possible to change the structure"
(Student 16)
Because…
"it was easy to follow the logic of the tutor, whereas, with written feedback, the comments can seem disjointed and impersonal"
(Student 20)
This meant students felt able to…
"fully understand what the marker meant rather than … feedback in short [written] phrases [that] can be unhelpful and misleading"
(Student 1).
Again, we can interpret this theme through the lens of the feedback triangle. It indicates that the screencasts engaged students in a more 'active role' in processing the feedback (Yang and Carless, 2013). As dialogic theory suggests, the cognitive, affective, and structural dimensions reinforced each other to convey meaning. This enabled the students to see their work from the marker's viewpoint. And this helped to combat the problem of students and tutor operating within different 'epistemological frames' (Nicol and Macfarlane‐Dick, 2006) that can cause mutual misunderstandings – in other words, thinking and talking at cross purposes!
Implications for practice
This study adds to the evidence that screencast can be dialogic, and thus a more effective form of feedback than traditional written comments. But what else can we say to help us optimise its use and hone our screencast practice? To maximise feedback effectiveness, we can exploit the idea that screencasts are 'conversational' (see Theme 1). But this should be part of a broader dialogic process. In particular, to increase impact in the social-affective dimension, there should be an ongoing conversation underpinning the tutor–student relationship (Yang and Carless, 2013). Without that, there is a risk that students see a screencast as an isolated 'transaction' rather than one component of a developmental process.
However, we must not assume screencasts are necessarily the ‘best’ method for feedback in all situations: as one student wrote,
"[it] depends on the individual giving the feedback – as some teachers provide thorough responses through word form"
(Student 4).
Similarly, simply using a screencast to 'transmit' terse, vague, and impersonal comments would not be dialogic at all, offering little learning value. Screencasts not only need to be part of a greater dialogic whole, but designed with dialogue in mind: 'the key is not the technology per se, but its role in advancing student learning' (Yang and Carless, 2013, p. 294).
This may require shaping the feedback 'technology' to optimise its effects on the cognitive and affective dimensions by holding learning conversations across a variety of times, places, and channels to suit learners' needs. Partnering with students can be key here: if they benefit from seeing through our eyes as teachers, then we can design more effective learning environments by also seeing through their eyes as learners.
In the next episode I’ll offer some guidance, tips, and techniques for designing your screencasts as part of those wider learning conversations, drawing from the How to screencast — good practice guide v1.2.
References
Nicol, D.J., Macfarlane‐Dick, D., 2006. Formative assessment and self‐regulated learning: a model and seven principles of good feedback practice. Stud. High. Educ. 31, 199–218. https://doi.org/10.1080/03075070600572090
Yang, M., Carless, D., 2013. The feedback triangle and the enhancement of dialogic feedback processes. Teach. High. Educ. 18, 285–297. https://doi.org/10.1080/13562517.2012.719154