Why teacher-assessed grades are higher than exam results & the very good reasons why they should be.

Updated: Sep 2, 2020

Right, I really have had enough of all the rubbish around teachers’ apparent pathological wish to upgrade their students beyond their actual achievements. The vast majority of responsible teachers don’t do that, because a student who is not as good as they have been told they are will fail when they move to the next step of their education. And generally, teachers don’t want students to fail.

Yes, the predicted / projected grades furnished by teachers may well be higher than actual exam result grades have been in previous years, and it absolutely SHOULD be that way. There are some very good reasons why.

Here are some of them:

1. Exams are more complicated than they seem. They do NOT represent the culmination of a straight-line race to the winning post. There are many, many, many ways in which an exam can produce a result which does not accurately reflect a student’s studies over the period of the exam course. This is why so much work is done in schools on exam skills. Any teacher who wants to do the best by their students in terms of helping them to get the best exam results will spend a significant amount of time on the mechanics of doing an exam. In some subjects, for example English Language, each question in the paper needs to be approached in a different way, so there might be lessons on Question 4, for instance, to ensure that students go into the exam equipped to succeed.

Removing exams and looking only at what students know and can do means that the judgment rests, in a way, more purely on what they have learned, and on the knowledge and skills that they display in class and in submitted classwork and homework, than on a random spot-check on a specific day, in a specific format and in a high-stress environment. Unlike the snapshot test of exams, in-class education and the acquisition of skills and knowledge over time IS generally cumulative, so that by the end of the course the teacher should have a pretty accurate idea of what their students are capable of. There are students who reach the pinnacle of their ability in school, and there are those who still have much more to give when they leave. Teachers can take into consideration those soft skills, the ones which lead them to identify a student as gifted in a particular subject, or conversely as having difficulty acquiring the necessary skills.

2. If you don’t sit an exam, you can’t mess up an exam. Related to the previous point, sometimes a student will get a surprisingly low result, and when the school retrieves the exam script we immediately see why. This doesn’t really work the other way: exam mistakes can drag a result down, but a really good performance doesn’t inflate one. Examples of mistakes which I have seen include:

a. In a French A level, if you answer a question in the wrong language you score 0, regardless of the quality of the language or the worth of the arguments. A bilingual student wrote a 10-mark comprehension answer in excellent French; her answers to the question were cogent, incisive and well-expressed, but as the answer was supposed to be in English, she got 0 marks for that question, and only 8 marks from the other question where she made a similar mistake. She under-performed, coming in two grades under what had been predicted. Was her exam result a better adjudication of her ability than teacher assessment would have been?

b. Again in a French A level, a student looked at the two questions about the François Truffaut film which we had studied together, and decided she didn’t like either of them. She therefore panicked and answered a question on a film which we had watched and studied in outline, but not in nearly enough detail for her to perform at her best. Predictably she received only half the marks she would normally achieve on her essay question, and dropped a grade overall from what she had been predicted.

c. It’s not only students who make mistakes. Examiners, marking piles and piles of scripts at home, can mess up badly. In a French GCSE, a student surprisingly and seriously under-performed; when we got the paper back, we discovered that the examiner had omitted to mark two double pages of answers. The grade went up when this was pointed out.

d. Back in the day when I was sitting my History O level (yes, I know – I’m very, very old…) I had studied the General Strike of 1926 in great detail. When I opened my paper I was dismayed to find that there was no question on the General Strike. I therefore wrote three essays on other questions, and I was checking my work when I discovered that there WAS a question on the General Strike. In a moment of madness, I crossed out one entire essay and answered the General Strike question in note form in about twenty minutes instead. Unsurprisingly I dropped a grade from what my teachers had predicted.

3. Some students do not perform well in exams. A fear of exams can become phobic, and the repeated testing required for a student to proceed through the British system can become so intimidating that some students will never do well on paper in the system. A student who excels in class can fall apart in the exam hall. A former student of mine, now a journalist enjoying growing success, fell into this category. She would do well in class and we would predict her high grades. Then she would step into the exam hall and it would all fall apart. She writes about it in her first piece to be published in the Independent. (Incidentally, words cannot express how proud I am of her!) So what, as teachers, should we have predicted for her? Should her predicted grades have been based upon what we knew she was capable of in a stress-free, learning-rich educational environment? Or should those grades have reflected the way in which we knew she would perform in the white-hot pressure of the exam setting?

So, to recap, there are very good reasons why teachers’ assessments of their students are, and I would say SHOULD BE, generally the same as or better than those students’ exam results.

Young people are not automatons. The transition from teaching and learning to sitting exams is not as simple as data in -> computation -> data out. Human beings are fallible and erratic, with flashes of genius and dunderhead moments, and the best preparation in the world will not absolutely ensure that a student doesn’t misread or misinterpret the question, or fail to notice an important detail in the rubric, or overlook a question. Tired, time-poor, overstretched examiners won’t always get it right, nor will every child or young person have the self-possession, confidence and nerve to nail every detail on the day.

Is the exam result necessarily the best summary of how any young person makes the transition from school to the rest of their lives? Probably not. We rely on exams because they are economical, convenient, standardised and easily measurable. But don’t let’s ever think that makes them necessarily better than teacher judgement.