- “Workload ‘pushing young teachers to the brink’” (BBC News, 15th April 2017).
- “Teachers ‘wasting time on marking in coloured pens’” (BBC News, 21st October 2016) (quoting Nick Gibb).
- “Inspectors are still looking for detailed marking despite pleas not to, Ofsted admits” (25th November 2016).
The insidious role of marking in teacher workload and misery has been a growing complaint for some time, and with some justification. Too often an external auditor, be it a senior leader, an Ofsted inspector or a parent, expects to be able to see evidence of teachers’ work writ clear, ideally in a specific colour of pen. Of course, simply writing lots of comments on students’ work does not mean that students listen to or follow the advice; this is well established as the greatest challenge in giving feedback. Therefore a clear response from the student is called for. Along with a third pen colour. Of course, this does not fundamentally address underlying issues such as the quality of teacher comments, and so many students easily slip into bad response habits: writing “okay” or “thank you”, or simply copying out their targets without any developed understanding of their next step. This doesn’t lead to progress, and high-quality feedback is known to be the cheapest high-impact intervention schools and teachers can offer… so clearly more marking is called for. And so it goes on.
This issue has become serious enough to generate responses from unions and Ofsted, and to prompt the establishment of a Marking Policy Review Group, which specifically addressed the issue of teacher workload, titling its March 2016 report “Eliminating unnecessary workload around marking”. Two key culprits stand out: “deep” marking, where an extensive quantity of written feedback is given, and triple impact marking, where a written dialogue develops between teachers and students. Both generate an intense workload for little proven impact. Yet both are driven by the same goal – ensuring that feedback has impact, one of the greatest challenges in assessment, as I discussed in my last post.
This is not just an externally imposed problem. I have found myself adding more and more to my ‘depth’ marking over recent years: seeking to address literacy, give targets, identify the strengths the work shows, model effective answers and give directives for the application of targets… in short, to make each piece of marking the perfect ‘solution’ to student progress. Too rarely have I stopped to think carefully about the impact of each piece of feedback, or about which parts of this exhaustive process actually best supported students’ learning. When students made progress it felt irresponsible to tinker. When they struggled it felt dangerous to step back and reduce my input… so I generally added more.
In a blog post on 1st December 2016, David Didau threw down a challenge to school leaders: let teachers reduce their marking time (the time spent actually writing comments for students) and experiment with other ways of giving feedback, particularly giving whole-class feedback and creating model work based on a reading of students’ work. This seems very similar to the model advocated by the Michaela school and fits well with Elliott et al.’s (2016) finding that dialogic and triple impact marking generate significant workload without clear evidence of impact.
At JMS we did indeed pilot this model of feedback across various subjects and key stages in order to reflect on the purpose of feedback and the impact it could have. There were a lot of positives: once teachers got into the swing it was a dramatic workload-saver. It drew my attention to exactly how much time I had been spending rewriting the same comments on several students’ work. Instead, using this model we produced a single class feedback sheet, which we started terming the ‘Examiner’s Report’, and then focused on how we would ensure that students took the key messages on board. As with any feedback model, simply telling students what had gone well and what needed improving was not enough. Modelling helped, but even combined, both methods rely on students being able to identify which aspects of the general feedback apply to their own work. Those with lower confidence had a tendency to be over-critical of their work and risked focusing on fixing problems which did not apply. Those with a limited grasp of the assessment criteria could not always see which bits of feedback applied to them.
One-to-one conversations with those students who struggled to apply the feedback were crucial. I think our openness that we were trying something new and wanted their feedback on it also helped; students seemed more willing to admit early on if they were struggling to understand the feedback. This may be because ‘problems’ could be safely located with the ‘new’ model, rather than in themselves or the teacher, which facilitated questions and dialogue.
For me, the process has given a new emphasis to the importance of dialogue in feedback. I am not advocating extended written discussion, or even a specific pen colour. Workload has to be a consideration, but so does turnaround time if the effort is to pay off for the students. However, I am convinced of the value to my students of seeing the feedback I give as the first step in a dialogic process: one in which we discuss what went well and how that was achieved, what the next steps are, and how they will try to meet them.
This does not have to be a laborious written dialogue built up in different colours over several weeks, with books and folders passed back and forth. Often, verbal discussion is quicker and more directly relevant to the student or small group with whom I wish to discuss their work. Tools such as ‘examiner’s report’ marking can play a valuable part in this by cutting down time wasted marking repetitively, while shaping my thoughts on how to move students forward and giving us a clear starting point for dialogue beyond the piece of work itself. However, I have found whole-class feedback to be very much the start of a process, and not sufficient on its own. In whatever form, I need my students to respond directly to my feedback so that I can be sure it is doing its job.
Questions that helped me to reflect on student responses to feedback:
- How widespread is this error and is it something I need to address with the whole class?
- Is this something the students can fix themselves? If so, when am I going to give them time to do that?
- How will I know if this feedback has ‘sunk in’? What am I expecting students to do with it or how am I expecting their thinking to develop? When am I going to give them time to do that?
- What is the most time-efficient way to work with the student on this development point?
Reread Didau’s original post here: