How could the ITT market review succeed?

The ITT Market Review aims to ensure consistently high-quality training in a more efficient and effective market. It is currently out for consultation and could dramatically reshape how we prepare teachers in England.

The main recommendation is that all providers should implement a set of new quality requirements and that an accreditation process should ensure that providers can meet these requirements.

The new quality requirements cover:

  • A greater focus on the curriculum linked to the Core Content Framework
  • The identification of placement schools and assurance that these placements are aligned with the training curriculum
  • The identification and training of mentors, including introducing the new role of lead mentors
  • The design and use of a detailed assessment framework
  • Processes for quality assurance throughout the programme
  • The structures and partnerships needed to deliver a programme and hold partners accountable for the quality of their work
  • An expectation that courses last at least 38 weeks, with at least 28 weeks in schools

How could the review succeed?

Instead of focusing on the nature of the proposals – should courses last at least 38 weeks? Is the Core Content Framework appropriate? – I want to analyse the proposals in their own terms: are they likely to achieve their stated goals? To do this, it helps to think about potential mechanisms that could lead to improvements and potential support factors and unintended effects.

Mechanism 1: Removing less effective providers

Less effective providers could be removed from the market, and this could raise average quality. This could happen in three ways. First, providers might decide it is all too much and not re-apply, which is only a problem if they are a strong provider. Second, some providers may merge. Third, some providers may try but fail to meet the requirements. Time will tell how many of the 240 accredited ITT providers fall into each category.

How do we accurately assess the quality of provision?

Getting this right is fundamental if removing less effective providers is a crucial mechanism for strengthening the market. However, we should consider how many strong providers we are willing to sacrifice for every less effective provider we remove, given the fundamental trade-off between false positives and false negatives in any selection process.

The distribution of provider quality and how the assessment is done will influence the relative trade-off between sensitivity and specificity. Do we know the distribution of provider quality? My hunch is that most providers are similar, but there are long tails of stronger and weaker providers. If this is the case, do we draw the line to chop off the tail of weaker providers, or do we cut into the body of similar providers?
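To make this trade-off concrete, here is a minimal sketch. All the numbers are illustrative assumptions, not data about the actual market: providers are drawn from a narrow quality distribution, assessed with some noise, and wherever we draw the accreditation line, some genuinely strong providers fall below it while some weaker ones pass.

```python
import random

random.seed(42)

# Illustrative assumptions: 240 providers, true quality clustered
# around 60; the accreditation exercise observes quality with noise.
N = 240
true_quality = [random.gauss(60, 8) for _ in range(N)]
assessed = [q + random.gauss(0, 6) for q in true_quality]

THRESHOLD = 50  # hypothetical cut-off applied to the noisy assessment

weak_removed = strong_removed = weak_kept = 0
for q, a in zip(true_quality, assessed):
    removed = a < THRESHOLD
    if removed and q < THRESHOLD:
        weak_removed += 1      # correct removal
    elif removed:
        strong_removed += 1    # false positive: strong provider lost
    elif q < THRESHOLD:
        weak_kept += 1         # false negative: weak provider survives

print(f"correctly removed: {weak_removed}, "
      f"strong providers sacrificed: {strong_removed}, "
      f"weak providers kept: {weak_kept}")
```

Raising the threshold removes more of the weak tail but sacrifices more strong providers; with a tightly clustered distribution, moving the line even slightly starts cutting into the body of similar providers.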

The second consideration is how to judge provider quality. The consultation offers a high-level process on page 29 involving a desk-based exercise with providers responsible for submitting evidence. But who will apply the quality requirements to the evidence submitted? Civil servants supported by some expert input? This might work well for some aspects, such as assessing quality assurance processes, but the heart of the reforms – the curriculum – is much harder to assess.

To maximise the accuracy of judgements, it makes sense to work in phases: an initial separation of those that very clearly do or do not meet the criteria, followed by a more intensive stage for those that might. Failing that, an appeal mechanism might be wise. A phased approach could improve the accuracy of the assessments while making the most of everyone’s finite resources.

While still thinking about the distribution of provider quality, it is worth asking if there is enough meaningful variation. If most providers are pretty similar, then at best, we can only make relatively minor improvements by removing the least effective providers. There might be more meaningful variation at the subject level or even at the level of individual tutors. If true, and we could accurately measure this variation, this would hint at a very different kind of market review (licences for ITT tutors, anyone? No?). For context, eight providers each developed over 500 teachers last year, and UCL almost reached 1,600 – should we look at a more granular level for these providers?

The less effective providers are gone; what next?

We now need to replace the capacity we removed by introducing new providers or expanding the remaining higher-quality providers. Removing lots of less effective providers is a promising sign the mechanism is working, but it poses a challenge: can we bring in new capacity that is – quite a bit – better? This may depend on how much we lose: is it 5, 15 or even 50 per cent?

Do we know if new providers will join? It would probably be wise to determine if this is likely – and the potential quality – before removing existing providers. The quality requirements set a high bar for new entrants, so a rush of new providers seems unlikely. That said, some Trusts and Teaching School Hubs may come forward, especially if given the grant funding for the set-up work advocated in the review. Other organisations, such as the ECF and NPQ providers not already involved in ITT, including Ambition Institute, may consider applying.

Can we replace the capacity we remove with capacity that is – quite a lot – better?

Expanding existing strong providers seems desirable and straightforward enough, but we should heed the warnings from countless unsuccessful efforts to scale promising ideas. Spotting barriers to scalability – before you hit them – is often tricky. Sir David Carter observed that when Trusts grow to the point that the CEO can no longer line manage all of the headteachers, a scalability barrier has been reached: new systems and processes are needed to continue operating effectively.

What are the barriers for an ITT provider? The brilliant people and the delicate partnerships with placement schools that have often developed over several years are challenging to scale. No doubt there are many more.

Before we forget, what about those providers that merged to get through the application process? How do we ensure that best practice is embedded across their work? Again, this isn’t easy, and we will likely have to base the judgements on providers’ plans rather than the actual implementation, given the timeline. Nonetheless, it seems likely that money and time will help. An analogy is the Trust Capacity Fund, which provides additional funding to expanding Trusts for focused capacity building.

Summary

If we think that removing less effective providers is an effective mechanism for the ITT Market Review, then we should:

  • Purposefully design and implement the selection process
  • Plan for how to replace the removed capacity
  • Ensure that time and money are not undue obstacles
  • Consider phasing the approach

In part two, I explore another mechanism – programme development – that the ITT Market Review might use to achieve its goals.
