This month Waldo (known as Wally in GB) had his 25th birthday. There are now numerous pictures in which the bespectacled Waldo, wearing his red and white hooped sweatshirt and hat, waits to be found in the crowd. It’s not easy, but it’s great fun and very satisfying when you locate him.
It is a similar challenge for teachers and leaders to find their progress outliers amongst the many pupils who are making expected or better progress. You can come at this from a “keep the wolves away from the door” perspective; a few pupils making disastrous progress will implode your class/school data this summer. Or from a humane “every child matters to me” approach. The challenges and potential benefits are the same.
With Progress 8 data published this week, the issue of outliers will again be part of a wider discussion. If you have a pupil expected to get an Attainment 8 score of 48 and they only get 18 points, a class of 30 pupils each needs to gain a grade better than predicted to get the Progress 8 score back on track. The numbers are different, but the same issue occurs in primary school data, with extra impact created by the smaller number of pupils in each school’s cohort.
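The arithmetic behind that claim is worth seeing laid out. A minimal sketch, not the official DfE calculation: treating the class Progress 8 score as simply the average gap between each pupil’s actual and expected Attainment 8 score, and each extra grade in one subject slot as worth roughly one point, the hypothetical numbers below show how one outlier moves the whole class.

```python
# Hypothetical class of 30: everyone is expected to score 48,
# but one pupil comes in 30 points short (18 instead of 48).
expected = [48] * 30
actual = [48] * 29 + [18]

# Simplified class "Progress 8": average of (actual - expected).
progress = sum(a - e for a, e in zip(actual, expected)) / len(actual)
print(progress)  # -1.0 -> the single outlier costs the class a whole point

# If all 30 pupils gain one grade somewhere (+1 point each),
# the lost point is clawed back.
recovered = [a + 1 for a in actual]
progress_after = sum(a - e for a, e in zip(recovered, expected)) / len(recovered)
print(progress_after)  # 0.0 -> back on track
```

The real measure weights English and Maths slots double and banks pupils into prior-attainment groups, but the lever is the same: a 30-point individual shortfall spread over 30 pupils is one whole grade each.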
There are different types of outliers. One is the pupil who arrives from another school just before census day or the Year 6 SATs and creates great statistical damage, without you ever really having a chance to educate them. Putting this pupil centre stage, it’s time to hold schools accountable for a pupil’s outcomes in proportion to the length of time they have been in the school (acknowledgement to Datalab for this idea). It would stop some unscrupulous or fearful schools trying to move pupils on at the last moment in the hope of protecting themselves, with no regard for the child or young person.
Another type of outlier has been with you for five or seven years. Their underachievement may be fuelled by a personal or family implosion. Disadvantaged and looked-after children may be disproportionately represented in this group. You may have little control over the circumstances or situation; just wrap around them as much support and pastoral care as you can. The other in-house outlier has been slowly underachieving for years and years; spot them, for these are the ones we can do significantly more with.
At a primary level we have made the assumption that this year’s Key Stage 2 SATs, the associated conversion to a scaled score and the regression formula from Key Stage 1 to 2 will all be the same as last year’s. Whilst this assumption is likely to be wrong, it is accurate enough for our purpose. We’re looking for the outliers and the biggest underperformers rather than seeking the actual progress score.
Pupils in Year 6 all sat last year’s SATs papers. These are then marked to give a raw score, converted into a scaled score and entered into the spreadsheet alongside the pupil’s Key Stage 1 data. The spreadsheet compares the expected Key Stage 2 outcome, generated from the Key Stage 1 data, with the current Key Stage 2 data from the SATs paper. Heather provided knowledge of the methodology and Simon (@MathsMrCox) knowledge of macros to produce the spreadsheet that churned out this data. All this detailed work goes on behind the scenes. The tests are low stakes for the pupils. It’s simply part of finding out what they should, but don’t, know: assessment as part of the learning process.
There’s still time between this data collection point (late November) and the May SATs for the normal day-to-day teaching programme to impact on the progress children have made; this is a snapshot in time. Some pupils are flying: look at Raj’s reading score. Others might need extra help: Tim is struggling. Some need targeted help: Debra is doing well in Reading but less so in Maths.
A copy of the spreadsheet we use is here: Calculating KS2 Progress – Excel Spreadsheet
We now know where our greatest efforts are needed. There are learning gaps to fill and confidence levels to build. There’s time to do both. But if we don’t do something about these outliers now, and this is urgent, I worry about the impact of our pernicious accountability system on our academies. Equally, if we don’t improve the rate of progress of some pupils, I worry that they will start secondary school behind their peers. They will be placed in lower sets and the expectations made of them will be lower; they’ll fall further behind. I worry that those starting from a low prior attainment point, even if they make decent progress, will still be behind by the time they are in Year 11. I tend to worry a lot.
St. Mary’s use the same thinking and essentially the same methodology. Using FFT predictions, pupils’ current grades are compared to targets to see who the outliers and potential big underachievers currently are. The analysis is completed by our Data Manager using the Management Information System.
English and Maths are a real problem this year as the current grades, whilst not quite made up, are a real finger-in-the-air guesstimate. (It was the same last year in primary schools.) We simply don’t know what they’ll be and won’t until after the summer’s examination season. Next year we’ll have the same problem with the EBacc suite of new GCSEs. However, there’s something slightly comforting in walking past the Progress Board in the staff room, full of pupil pictures, knowing that staff have eyes on the issue. Maybe I should worry a little less.
What would really help my worrying tendency would be to sort the lack of progress at source. To that end we are collecting progress data in two different ways. On-going, as part of our assessment system, we find out what pupils don’t know or can’t do, then use re-teaching and remedial small-group tuition to sort any underachievement problems close to first teaching. The process from Early Years to Sixth Form is to try to stop pupils falling behind. It sits in the class teacher’s hands.
A couple of times a year we make a Red-Amber-Green-Gold judgement of the progress made against the curriculum, which is reported to parents. This process is fraught with issues akin to those of levelling, but it gives us a way of asking potentially useful questions. The percentages, whilst giving an air of accuracy, are in no way hugely reliable, so trying to draw significant and far-reaching conclusions would be silly.
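The kind of summary those judgements produce is simple to picture. A small sketch with invented class names and judgements: tallying the RAG-Gold percentages per class gives the prompts for questions, not firm conclusions.

```python
from collections import Counter

# Invented example data: one RAG-Gold judgement per pupil, per class.
judgements = {
    "6A": ["Gold", "Green", "Green", "Green", "Amber", "Gold"],
    "6B": ["Amber", "Red", "Amber", "Green", "Red", "Amber"],
}

for cls, grades in judgements.items():
    counts = Counter(grades)
    total = len(grades)
    summary = ", ".join(
        f"{g}: {100 * counts.get(g, 0) // total}%"
        for g in ("Gold", "Green", "Amber", "Red")
    )
    print(f"{cls} -> {summary}")
```

A class that comes out mostly Green/Gold, or mostly Amber/Red, is exactly the distribution that should prompt questions about challenge, curriculum difficulty and consistency of judgement rather than a verdict on the teaching.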
If loads of pupils are green or gold, is the teaching brilliant or the challenge too low? Vice versa, if lots of pupils are amber or red, is the curriculum unrealistically difficult or just badly taught? Are all teachers making progress judgements against the curriculum in a consistent manner? I haven’t a clue what the answers are, but if middle and senior leaders continuously ask and reflect on these questions I’m sure our work will continue to evolve. The pupils will do fantastically well, teaching will flourish and the statistics will look after themselves.