The unstoppable growth of evidence
In 1999, a young Dr Robert Coe published an excitingly titled Manifesto for Evidence Based Education. You can still read it; it’s very good. The updated version from 2019 is even better.
With deep foresight, Rob argued that:
“Evidence-based” is the latest buzz-word in education. Before long, everything fashionable, desirable and Good will be “evidence-based”. We will have Evidence-Based Policy and Evidence-Based Teaching, Evidence-Based Training – who knows, maybe even Evidence-Based Inspection… evidence, like motherhood and apple pie, is in danger of being all things to all people.
– Professor Rob Coe
It’s safe to say that we have now reached the apple pie stage of evidence in education: the recent White Paper promises evidence-based policy and Ofsted’s new strategy offers evidence-based inspection.
If you’re reading this, then, like me, you’re among the converted when it comes to the promise of evidence, but my faith is increasingly being tested. For a long time, I thought any move towards more evidence use was unquestionably good, but I now wonder if more evidence use is always a good thing.
To see why, look at this sketch of what I think are plausible benefit-harm ratios for five different levels of evidence use.
Level 1: no use
This is what happens when teachers use their intuition and so on to make decisions. This provides a baseline against which we can judge the other levels of evidence use.
Level 2: superficial use
I think in some ways, this is what Rob foresaw. For me, this stage is characterised by finding evidence to justify, rather than inform, decisions. The classic example is finding evidence to justify Pupil Premium spending long after the decision has been made.
I think this is a fairly harmless activity, and the only real downside is that it wastes time, which is why there is a slightly increased risk of an overall harm. Equally, I think it’s plausible that superficial use might focus our attention on more promising things, which could also increase the likelihood of a net benefit.
Level 3: emerging use
For me, this is where it starts to get risky. I also dare say that if you’re reading this, there’s a good chance that you fall into this category – at least for some things you do. So why do I think there’s such a risk of a net harm? Here are three reasons:
- Engaging with evidence is time consuming so there might be more fruitful ways of spending our time.
- We might misinterpret the evidence and make some overconfident decisions. There’s decent evidence that retrieval practice is beneficial, but some teachers and indeed whole schools have used this evidence to justify spending significant portions of lessons retrieving prior learning, which everything I know about teaching tells me is probably not helpful. This is an example of what some people have called a lethal mutation.
- There’s also a risk of overcommitment. If we think that ‘the evidence says’ we should adopt a particular approach, then there is a risk that we keep at it despite signals that it’s not having the benefit we hoped for.
Of course, even emerging evidence use may be immensely beneficial. I think there are three basic mechanisms by which evidence can help us to be more effective:
- Deciding what to do – for instance, the EEF’s tiered model guides schools towards focusing on quality teaching, targeted interventions and wider strategies with the most effort going into quality teaching.
- Deciding what to do exactly – what is formative assessment exactly? This is a question I routinely ask teachers and the answers are often quite vague. Evidence can help us define quality.
- Deciding how to do things – a key insight from the EEF’s work is that both the what and the how matter. Effective implementation and professional development can be immensely valuable.
The interplay of these different mechanisms, and other factors, will determine for any single decision whether the net impact of evidence use is beneficial or harmful.
Level 4: developing use
At this level, we’re likely spending even more time engaging with evidence. But we’re also likely reaping more rewards.
I think the potential for dramatic negative impacts starts to be mitigated by better evaluation. At level three, we were relying on ‘best bets’, but we had little idea of whether they were actually working in our setting. Although imperfect, some local evaluation protects us from larger net harms.
Level 5: sophisticated use
Wow – we have become one with research evidence. At this stage, we become increasingly effective at maximising the beneficial mechanisms outlined in level 3, but we do this with far greater quality.
Crucially, the quality of local evaluation is even better, which almost completely protects us from net harm – particularly over the medium to long term. Also, at this stage, the benefits arguably become cumulative, meaning that things get better and better over time – how marvellous!
The categories I’ve outlined are very rough and you will also notice that I have cunningly avoided offering a full definition of what I even mean by evidence-informed practice.
I think there are lots of implications of these different levels of evidence use, but I’ll save them for another day. What do you think? Is any evidence use a good thing? Am I being a needless gatekeeper about evidence use? Do these different levels have implications for how we should use evidence?
A version of this blog was originally published on the Research Schools Network site.