Looking again at the evidence for peer tutoring

In 2011, the first version of what went on to become the Teaching and Learning Toolkit was published with peer tutoring as a very promising-looking strand.

Today, the Toolkit has been substantially improved and the evidence still looks very promising, yet there is relatively little interest in the approach.

Jonathan Haslam has speculated that this may be due to two evaluations of peer tutoring published by the EEF in 2015 that led to the headline in the Tes that ‘peer tutoring is ineffective and detrimental’.

Over the past couple of months, I have been looking again at the evidence for peer tutoring and I agree with Jonathan that it is a mistake to dismiss the approach too hastily.

The many flavours of peer tutoring

Once you dig into the evidence about peer tutoring, it is striking how many different forms exist. My immediate question is to what extent it makes sense to analyse them together or separately: do we lump them all into one group, or split them into smaller and smaller ones?

At a minimum, I suggest being very clear about what is actually done in each case. The literature can easily confuse the uninitiated because so much of it emphasises distinctions such as cross-age tutoring and reciprocal tutoring.

Crucially, I think the nature of peer tutoring differs dramatically between phases and subjects, and depending on whether the approach is used in class or as an intervention.

Why didn’t Paired Reading work?

On the surface, Paired Reading looked like a promising approach, given the existing evidence for peer tutoring.

The EEF trial involved Year 9 pupils tutoring Year 7 pupils and found the programme was no better than usual practice for the Year 7 pupils but had a small negative impact on the Year 9 pupils.

This is a surprising finding, given the existing evidence. One intriguing possibility is that schools have simply improved over time, so peer tutoring is no longer good enough. As an analogy, Ford’s Model T car was great compared to a horse-drawn carriage, but it is no match for a modern car. Is peer tutoring an outdated car?

This is one of three plausible interpretations offered by the EEF when the paired reading trial was published alongside another peer tutoring trial involving maths.

Reinterpreting the data

I find the suggestion above compelling. It resonates with one of my favourite papers, in which the researchers reported a diminishing impact of peer tutoring approaches across five trials spanning nine years.

However, after looking closely at the evidence, I now wonder if the bigger reason is that Paired Reading was just implemented badly. Three things stand out to me as red flags.

First, the programme was a substitute, not a supplement, in nearly all schools, and it is always much tougher to show an impact for a substitute. In many of the schools, the intervention replaced English lessons. Perhaps the Year 9 pupils appeared to suffer in particular because the programme was simply not a great use of one of their English lessons.

Next, the text selection was poor. Pupils were responsible for choosing texts and were guided to apply the ‘five finger test’: put a hand on a page of a potential book and judge its suitability by whether the tutee can read most of the words on that page.

To me, this just seems a bit crude and a world away from the key message in the EEF’s Toolkit about maximising the quality of the interaction.

Third, I’m sceptical about how the programme selected and paired pupils. Strikingly, it paired some struggling Year 9 readers with struggling Year 7 readers, which does not seem ideal.

Reflections

All in all, I think the news of peer tutoring’s death has been greatly exaggerated. I also think there are some wider insights about how we use evidence.

First, it’s important to go back to the underlying studies. It’s easy to get caught up in one interpretation of the evidence. I have also particularly enjoyed reading some of the work of Professor Carol Fitz-Gibbon, who wrote about peer tutoring in the 1990s. Her writing is engaging and full of no-nonsense advice that is often missing from academic work.

Second, evidence can never tell us what will work, only what has worked in the past. This insight is central to how Professor Steve Higgins encourages teachers to use evidence. Taking the idea further, Steve suggests that the onus is on us to consider how we will do better than the people who have tried and failed with these approaches before.

Taking up the challenge of how to do better, my colleague Louise has described some of the key considerations that have gone into the design of our peer tutoring programme.

Cutting red meat will make schools greener

The past few years have been filled with heartening examples of schools’ engagement with their wider civic role: the warmth with which they welcomed Ukrainian families, the care for the vulnerable and for all pupils at the heart of their Covid response, and the help they are offering their communities with the rising cost of living.

These and myriad other ongoing pressures mean schools are stretched, so tackling climate change too can easily feel like a request too far. After all, schools can’t fix all of society’s problems. But the truth is that by virtue of the size of the education system alone, not to mention its immeasurable influence, schools are needed to drive sustainability. Climate change is an existential threat to humanity, and there is compelling evidence that the wars, diseases and poverty we are already battling will only become worse if nothing is done.

The good news is that we can achieve a massive impact without time-consuming curriculum reviews and resource-intensive capital investments. Schools can maximise their impact by focusing on a single key issue: serving less meat (especially red meat from cows, sheep and pigs).

Writing for Schools Week, I’ve set out in more detail how serving less meat can deliver this impact.

How many pupils does Ofsted guide to the ‘wrong’ school?

The problem

I have been thinking about the exemption from routine Ofsted inspections introduced for Outstanding schools in 2014.

Inevitably, some of these schools are no longer Outstanding, but we do not know which ones, so some families choose the ‘wrong’ school. How many pupils have been affected?

Before I get into the detail, I want to confess my sympathy for Ofsted as an organisation and for its staff. I am not ideological about inspection, and I dislike that Ofsted is so often cast as the ‘baddies’. However, I am sceptical that inspections add much value to the school system or are cost-effective. For me, this is an empirical question, not a political one.

Context

We can get our bearings by looking at how many pupils attend schools, split by inspection grade and the year the grade was awarded. I have excluded the lowest grades for clarity.

So, a little under 1.3 million pupils attend Outstanding-rated schools, which were, on average, inspected in 2013, though some grades go back to 2006.

Note that I am using data that I downloaded from Get Information About Schools a couple of months ago. The data also takes a while to filter through from Ofsted, but the most recent inspections are irrelevant given the assumptions I explain later.

Estimating the size of the problem

This will be a rough estimate, so I want to be transparent and show my working. You’re welcome to offer a better estimate by changing some of my assumptions, adding more complexity or correcting my mistakes – although I hope there are none to find!

I’m interested in two related questions:

1. How many pupils have joined schools that were not actually Outstanding?

2. How many pupils have joined schools that were not actually Outstanding but chose those schools because of Ofsted’s guidance?

1. How many pupils have joined schools that were not actually Outstanding?

We need to start by recognising that this is a long-term issue, so we can’t just look at the pupils in school today: we need to estimate the number that have passed through these schools. To do this, I will assume that the cohort size in each school has remained constant.

The table below shows the number of pupils by the year their Outstanding grade was awarded. I have estimated the number in each year group as one-sixth of the total. I have then calculated the number of year groups affected.

So far, I don’t think anyone would disagree much with these approximations, although you could give more precise estimates.

Next, I need to make two assumptions concerning:

  1. How long a school remains Outstanding
  2. After this period, the proportion of schools that are no longer Outstanding

I want my estimate to be conservative, so I will say that schools rated Outstanding remain Outstanding for five years. After that, half are no longer actually Outstanding. This second estimate is a bit of a guess, but it mirrors an estimate by Amanda Spielman.

So, if we put these two numbers into our simple model, we get the following table, which estimates that 280,000 pupils have joined schools that were not actually Outstanding.
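
If you want to play with these assumptions yourself, here is a minimal sketch of the model in Python. The per-year pupil counts are illustrative placeholders rather than the real Get Information About Schools figures; I have only chosen them to total roughly 1.3 million and to land in the same ballpark as my table.

```python
# A minimal sketch of the estimation model. The pupil counts below are
# illustrative placeholders, not the real Get Information About Schools
# data; they are chosen only to land in the same ballpark as my table.

# Hypothetical: pupils in schools still holding an Outstanding grade,
# keyed by the year the grade was awarded.
pupils_by_grade_year = {
    2006: 10_000,
    2008: 20_000,
    2010: 50_000,
    2012: 120_000,
    2014: 340_000,
    2016: 400_000,
    2018: 360_000,
}  # roughly 1.3 million pupils in total, as above

CURRENT_YEAR = 2023
YEAR_GROUPS = 6  # each year group estimated as one-sixth of the total


def affected_pupils(grace_years: int, stale_share: float) -> float:
    """Pupils who joined a school after its grade plausibly went stale.

    grace_years: how long a school is assumed to remain genuinely
        Outstanding after inspection.
    stale_share: of schools past that window, the proportion assumed
        to be no longer Outstanding.
    """
    total = 0.0
    for grade_year, pupils in pupils_by_grade_year.items():
        cohort = pupils / YEAR_GROUPS  # annual intake per year group
        stale_years = max(0, CURRENT_YEAR - grade_year - grace_years)
        total += cohort * stale_years * stale_share
    return total


# Conservative base case: five years of grace, then half go stale.
print(f"{affected_pupils(grace_years=5, stale_share=0.5):,.0f}")
```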

Even if we make very conservative assumptions, this issue still affects a lot of pupils: it’s the classic situation where multiplying a big number by a small number still gives quite a big number.

Suppose all schools remain Outstanding for seven years, and after that, just 25% are no longer Outstanding; this still affects 75,000 pupils.
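
Because the sketch above is parameterised, this gentler scenario is a one-line change; with my placeholder counts it comes out close to that figure.

```python
# Gentler assumptions: seven years of grace, then only a quarter stale.
print(f"{affected_pupils(grace_years=7, stale_share=0.25):,.0f}")
```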

2. How many pupils have joined schools that were not actually Outstanding but chose those schools because of Ofsted’s guidance?

To answer this question, we need to multiply our answer to question 1 by the proportion of pupils who have gone to a different school based on the Outstanding rating. My best estimate of this is 10%, which would mean around 28,000 pupils.

My estimate is based on multiple sources, including comparing differences in the ratio of pupils to capacity between Good and Outstanding schools. This is the estimate that I am least confident about, though. Note that this is an average estimate for the population: individuals will vary a lot based on their values, priorities and their local context – especially their available alternative options.
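
Since this second question simply scales the first answer, the arithmetic is a one-liner; remember that the 10% is my best estimate rather than a measured quantity.

```python
# Question 2 scales the question 1 estimate by the share of pupils
# whose school choice actually hinged on the Outstanding rating.
chose_due_to_rating = 0.10  # my best estimate, and my least certain number
print(f"{280_000 * chose_due_to_rating:,.0f}")  # about 28,000
```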

So what?

First, I’m very open to better estimates of the magnitude, but I think this is a real issue. Again, this is empirical, not political.

Second, defenders of the Outstanding exemption policy might reasonably argue that it refocused inspection by allowing more frequent inspections of poorly rated schools. The trouble with this argument is that Ofsted has never generated rigorous evidence that inspections aid school improvement. This would be a fairly simple evaluation if there were the will to do it: you could simply randomise the frequency of inspections. But it is easy to understand why an organisation would be unwilling to take the risk.

Third, this issue is not over. Today, tomorrow and next year, families will choose schools for their children based on Ofsted grades, and some will be misled, even with the accelerated timeline for post-pandemic inspections, and quite apart from the myriad other challenges to making valid inferences. There is also a risk of the reverse happening: families being sceptical about older Outstanding grades and placing less weight on them in their decision-making.

Fourth, if Ofsted had a theory of change that set out its activities, the outcomes it hopes to achieve – and avoid – and the specific mechanisms that might lead to those outcomes, we could have more grown-up conversations about inspection. To judge Ofsted’s impact and implementation, we need to understand its exact intent.

Finally, as part of the promised review of education, we should think hard about these kinds of issues and how to minimise their impact.