Ofqual’s decision to adopt a modified form of its proposal for grading GCSEs and A-levels this summer has been met with consternation and, in some cases, despair. Its decision to put increased weighting on historical data is not the best (or least worst) option available. It owes more to statistical modelling than to a humane approach to a complex and difficult situation.
Ofqual is faced with a real dilemma. There isn’t an easy, quick or totally fair solution to the grading of pupils this year at GCSE and A-level. It faces a set of imperfect choices and needs to determine priorities. I’d propose that prioritising young people’s futures should be the key driver in any decisions made.
Schools, their leaders and teachers may feel aggrieved by Ofqual’s current decision. The removal of performance tables, and with them this year’s flawed data, lessens some of the potential damage from an accountability perspective. However, at an emotional level, Ofqual’s decision will do harm. People who have given everything to improve outcomes for young people will feel badly let down, possibly totally despondent.
But the greatest immediate, and possibly long-term, impact is on young people, who may or may not be given places on the post-16 courses they have applied for. Employment opportunities and prospects will be affected. A-level results will have an impact on the university places offered.
The use of historical evidence is particularly problematic given the curriculum and examination upheaval of recent years. In using historical data you would probably want to look across at least three summer examination seasons, the period from 2017 to 2019, to provide data for statistical modelling.
In each of these years new GCSE examinations were introduced. Ofqual’s own data will show that in the first year(s) of a new examination there is a far higher degree of deviation from the mean than in later years. This is partly due to teachers and schools coming to terms with the new specifications and examinations. This year would have been the point at which some of the earlier volatility in Maths and English (2017), and in the twenty further GCSE examinations introduced in 2018, had worked its way out of the system. Instead, Ofqual has decided to re-introduce the turmoil.
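The volatility point can be illustrated with a toy calculation. The school-level figures below are entirely hypothetical (they are not Ofqual’s data): the idea is simply that results scatter more widely around the mean in the first year of a new specification than they do once schools have adjusted.

```python
from statistics import pstdev

# Hypothetical school-level pass rates (%) for a new GCSE specification.
# Year 1: schools are still coming to terms with the new exam, so results
# scatter widely around the mean; by year 3 they have converged.
year1 = [38, 72, 55, 41, 68, 49, 77, 35]
year3 = [54, 60, 57, 52, 61, 55, 63, 50]

spread_y1 = pstdev(year1)  # deviation from the mean in the first year
spread_y3 = pstdev(year3)  # deviation from the mean two years later

print(f"Year 1 spread: {spread_y1:.1f}; Year 3 spread: {spread_y3:.1f}")
# The first-year spread is markedly larger: this is the volatility that a
# 2017-2019 baseline bakes into any 2020 statistical model.
```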
The 2017 data is particularly problematic as it also includes the ECDL Effect, which produced a significant increase in Progress 8 scores in some schools. Effectively including the impact of this qualification in the 2020 calculations makes a mockery of removing it from the performance tables in the first place.
However, reducing the number of years included in any statistical model means that instead of looking at performance over time you have a far less statistically reliable snapshot. There is no easy solution.
Part of the significant angst being expressed is that the increased weighting placed on historical data means all those schools that have busted a gut to improve the outcomes, and hence the life chances, of young people will feel their efforts have been in vain.
Young people who sat GCSEs in 2017, 2018 and 2019 will mostly have started at their secondary schools in 2012, 2013 and 2014 respectively. In school improvement terms that’s a lifetime ago. The idea that schools will not have improved can be modelled statistically, but it utterly fails the common sense test.
There is a difference between fairness and a complicated statistical model. Fairness lies in the assumptions being made, not the mathematics being used. Any assumption built into the model for grading young people this year must pass two tests: first, it must do no harm; second, it must do good, promoting a course of action that is in the best interests of the young person.
I have some simple rules that can guide us. First, any model adopted must ensure that no school, and as a consequence its pupils, is statistically worse off than in previous years. The historical model can be used to ensure this.
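That “no worse off” rule can be sketched as a simple floor applied to the model’s output. The function and figures below are a hypothetical illustration, not Ofqual’s actual calculation: the point is only that a school’s 2020 outcome is never allowed to fall below its historical baseline.

```python
def apply_floor(modelled: dict, historical: dict) -> dict:
    """Ensure no school's 2020 outcome falls below its historical baseline.

    Both arguments map school name -> % of pupils at grade 4 or above
    (an illustrative metric). The modelled figure is kept only when it is
    at least as good as the historical one.
    """
    return {school: max(modelled[school], historical[school])
            for school in modelled}

# Hypothetical schools: one the model would mark down, one it would not.
historical = {"School A": 62.0, "School B": 48.0}
modelled = {"School A": 57.0, "School B": 51.0}

floored = apply_floor(modelled, historical)
print(floored)  # School A is held at its historical level; School B keeps its rise
```

Under this rule the historical data can only protect schools, never penalise them, which is the opposite of how increased historical weighting is currently expected to operate.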
Secondly, schools must be allowed to provide evidence of improved performance overall or in subject specific areas. This evidence must be looked at in a spirit of generosity; the evidence is always given the benefit of the doubt.
For example, these two graphs are from GL Assessment Science Progress tests. The graph above shows the results for Year 7 in September 2017. The data is representative of the results the school tends to get for each Year 7 cohort.
The graph below is for the Year 9 cohort taken in Summer Term 2018 (please note it is a different cohort from the Year 7 above) and shows the level of progress now being made in Science.
This is the cohort that would be taking their GCSEs this summer. The improvements are due to work done by both the school overall and the Science Department in particular since these young people joined the school in 2015. The 10-point improvement represents nearly two GCSE grades per pupil; this is now going to be ignored. That is neither fair nor right. A similar trend is seen in English Progress Test results.
The flaw in my proposal, the potential “unfairness”, is that some young people might get grades higher than they would have achieved had they sat their examinations. I’m OK with that. The idea that the improvements schools have made and the work these pupils have done go unrewarded is even less acceptable to me.
Limiting the improvement any school can make overall will militate against excessive rises or massive grade inflation. But for a year group who have already suffered in different ways, due to COVID-19, I say give them the benefit of the doubt. Don’t take even more away from them.