Too often when talking about school improvement, leaders will reel off an impressive list of actions they are taking to improve pupil outcomes, and rightly so; helping pupils improve is at the core of what we do. But what is actually working?
I was once asked, for example, by a Trust leader what our higher ability score was in the autumn and in the summer. On the surface that looks harmless, but I knew its true purpose. The data was being used to measure the impact that a higher ability programme of three revision workshops over the year had had on pupil outcomes.
Again, it is easy to think that there is nothing wrong with this, but my position is that we actually have no idea whether the workshops had any impact on the pupils, and claiming that they did is flawed for several reasons.
Firstly, many of the pupils did not attend all the sessions, and there was no tracking to ensure that the strategies taught were actually being implemented by the pupils.
Secondly, it ignores the fact that other actions specifically tailored to higher ability pupils were being implemented alongside the revision sessions.
Thirdly, it discounts natural rates of expected progression. I understand that learning is not linear; however, if pupils are expected to make one grade of progress over a year, we can assume that roughly equates to a third of a grade per term.
Why does this matter? The data I sent to the Trust leader suggested that our higher ability pupils made a grade of improvement and therefore met expected progress. That may sound good, but it actually suggests there was no "extra" added value, because pupils are, technically, supposed to improve by a grade over the year anyway.
In the end, the programme was considered a success. The leader was able to meet their performance management requirements, add another golden thread to their career, and redirect their attention to new areas of school improvement.
Multiply this way of measuring impact over the years and you could be stuck with a school improvement programme with an action list a mile long, without knowing what actually works. Every time the grades go up, another action gains its wings; and if the grades go down, it is put down to poor leadership or blamed further down the ladder.
What leaders are then stuck with is a list of solutions, some plausible and some pointless, but fear and worry lead schools to repeat everything again, with a few more actions added just in case. Only a few will be brave enough to throw the list away; the fear of the grades dropping overshadows common sense.
I, too, have a history of adding to growing lists of actions, but I have really started to think about what is "actually" working.
Here are some brief pointers that I’ve picked up that may be of some use for fellow school leaders:
- Slow down. Consider a three-year school improvement plan focused on implementing just a few actions.
- Spend time really thinking about the implementation design (treat it as a school improvement blueprint), with clear and realistic expectations.
- Stay focused on what is being implemented but be flexible to amend and pivot when necessary.
- Impact will not always be instant, and it is not always data-driven at the start.
- And finally, see what the impact is actually demonstrating, not what you want it to.
Hopefully, this will be of some help.