So results are in and, surprise, surprise, there’s not much change. A slight 0.5% improvement in ‘pass’ rates, but given that the new 4 was supposed to be equivalent to a C/D borderline grade, that’s to be expected. But wait… ‘pass’ rate? Aren’t there three other grades to consider here? Aren’t grades 1-3 passes too? It would seem not, since the government has firmly labelled not only 4s as ‘standard passes’ but also 5s as ‘strong passes.’ Who cares about the rest? The 33.9%? Meh. May they proceed on to endless resits, doomed to Groundhog Day repeats of failure for the next few years, their confidence dwindling to the point that they feel worthless. Who cares? Passes are what we’re after. Because, standards.
And passes we’ll get. Well, 66.1% will get them. Almost without fail, every single year. Because that’s what the system is set up to ensure. No matter that we have to pull the grade boundaries down. One exam board had to pull the Maths ‘standard pass’ boundary down to 21% this year to ensure that the ‘right’ number of candidates passed. Ofqual had to rescue a whole group of higher-tier Science students from U grades by getting examiners to re-mark their papers at foundation level so they could at least achieve grade 3s. And it’s probably right that they do so. We can’t have whole cohorts of students fall victim to the whims and follies of government ministers who throw the system into chaos and then skip off to another department. But it creates some very serious difficulties for us all.
For example, the new ‘harder’ A Levels were designed to challenge those pupils who had met the new, harder standards of GCSE. But they haven’t ‘met’ that standard – they’ve just been given grades for lower marks. So the gap is even bigger, putting more pressure on A Level teachers and creating difficulty for pupils.
And given that GCSE results are set largely in line with KS2 outcomes in English and Maths, what of other subjects? While there is, theoretically, a possibility that Ofqual will change those boundaries if exam boards make the case that a cohort of pupils were ‘better’ in, say, PE, it rarely happens. Where is the incentive for exam boards to do so? How do they prove it? There is no baseline data for PE – only the performances of previous years, which were set in line with other subjects based on baseline data in English and Maths. And even the National Reference Test, which is designed to check whether progress is indeed linear, is only done in English and Maths. (And in February, before GCSEs – to pupils who have just done mocks. A test to test that the test is working!) I’m not holding out much hope.
So for the pupil who has played sport all her life, or who has played musical instruments and gained grades in them, her potential grades in PE or Music are tied to how she did in Maths and English when she was 11. Consider also that these subjects, sitting outside the EBacc, are opted into – that they are more likely to be chosen by students who have an aptitude and existing experience. Tying their results to a cohort average across the whole range of subjects seems even more ridiculous. But it happens. Talk to statisticians at Ofqual and they will patiently tell you that the maths shows them that pupils who do well in SATs are indeed more likely to do well in PE/Art/Music etc. Of course it does. And not just because of the linearity of the measurements but also because of psychology. A child will be given target grades in those subjects based on their SATs data. For five years they and their teachers will work to ensure they hit those targets: targets based on their performance in English and Maths when they were 11. Some, of course, will buck the trend. But most will become self-fulfilling prophecies – statistically fulfilled prophecies. In order for a whole subject, across the whole nation, to buck that linearity, chief examiners across subjects would have to notice that, on the whole, the cohort this year seemed better than the last. And the last could well have been better too, but standardised grade boundaries have driven them into the layered sediment of year upon year of results based on expectations. It’s hard to see in that sediment where there might have been improvements. They’ll have to notice and be motivated to act. Let’s face it, the chances are slim.
So we changed everything and nothing. And in the meantime, the ‘harder’ content and the removal or downgrading of aspects of assessment that allowed pupils to show skills other than performing in exams created another layer of stress for teachers, pupils and parents. There were more revision sessions, more schools moving to a three-year GCSE, more reported cases of exam stress, mental health issues and self-harm, more Easter holidays given up for study support, more worry, more money spent on resources… for what? So that a minister can stand up and say the reforms worked? That we are now as competitive as Singapore and Finland?
It’s really no wonder that private schools are now almost exclusively rejecting the ‘new’ GCSEs and opting instead for the more stable IGCSE. No wonder either that Wales and Northern Ireland decided to stick with the old system. The reforms, sold to us as a way to make us more internationally competitive and more like private schools, turned out to be a pup. And meanwhile, leading experts on adolescent brain development, such as neuroscientist Sarah-Jayne Blakemore, are pointing out to this profession that putting exam pressure on 15- and 16-year-olds is one of the worst things we could do. That we would be far better testing them at 18, because their brains are literally at a critical stage of both emotional and intellectual change at 15 and 16. It’s like opening a pupa to check there’s a fully formed butterfly inside. It would seem we only want to be evidence-informed when it suits us.
So why do we do it? Because we’ve always done it? Because the infrastructure of our schools was designed for a leaving age of 16 with optional education to 18? Because it’s just too hard to reconfigure that infrastructure into a system of primary, middle and high schools with an end assessment point at 18? Because our eyes are on another problem? Whatever the reason, it’s becoming clear that our exam system is not fit for purpose. It fails, habitually, a third of our children. It places huge pressure on the mental health of pupils. It warps our perceptions of what constitutes good education, good teaching, a broad and balanced curriculum. It doesn’t even raise standards. It doesn’t even raise spirits.