The Evidence Peter Principle: the misuse and abuse of evidence

By Howard White 

This article is the second of three in a series "Reflections on the evidence architecture".

Back in the early 70s, I read my father's copy of Laurence Peter and Raymond Hull's management book, The Peter Principle. People are promoted to their level of incompetence. A good salesperson gets promoted to manager, a good teacher gets promoted to head teacher, and so on. The authors fill their book with case studies of people who excelled in the first job but floundered in the second. The result is an organisation full of people who are not very good at their jobs.

Who knew that nearly 50 years later I would meet the Peter Principle in a different guise: the Evidence Peter Principle. The Evidence Peter Principle is that data or studies suited to one purpose are used for another purpose to which they are not suited. The result is an "evidence-based system" not based on the appropriate evidence.

In a recent article I presented the evidence-based project cycle. The cycle illustrates the different types of evidence that should be used at each stage of the project cycle.

So, project identification begins with prevalence data to identify priority problems. Decision-makers then consult the global evidence base to identify programmes which have worked elsewhere to address these problems.

The most promising programmes are then tested in the local context, first through formative evaluation. However, in the first case of the Evidence Peter Principle, there is a tendency to use prevalence data to identify both problems and solutions, skipping both the global evidence on what works and formative evaluation.

The problem is widespread, so I won't name names. But, for example, I recently read a country education strategy which claimed to be evidence-based, yet the only evidence cited was prevalence data. A key table listed problems and their solutions, with no mention of local formative research on the sources of these problems or of what the global evidence base says about addressing them.

Data at the base of the knowledge pyramid

There is even a tendency to conflate data and evidence: "we will consult the data to see what works". Data are not evidence. Data are the bottom layer of my knowledge pyramid: the foundation on which evidence is built. But data need to be analysed in studies, and the findings translated, before they become the evidence we use to assess programme effectiveness.

You can read more about the evidence architecture here.

An evidence-based programme will have a good monitoring and evaluation system, including monitoring and process evaluations. Implementation data – such as expenditure, staffing, activities and people reached – tell project management whether the programme is on track to meet its intermediate targets. Outcome monitoring tells us whether a programme is on track – but not what to do if it is not. Yet it remains common – especially among advocates of ‘results frameworks’ – to use before-and-after outcome monitoring data to judge whether a programme "is working". This is another case of the Evidence Peter Principle: promoting monitoring data to an impact evaluation function to which they are not suited.
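To see why before-and-after comparisons mislead, here is a minimal sketch in Python with entirely made-up numbers. If outcomes improve for everyone anyway (a secular trend), a before-and-after comparison credits that trend to the programme; a comparison against a randomized control group nets the trend out.

```python
import random

random.seed(1)

N = 10_000          # children per group (hypothetical)
BASELINE = 0.40     # literacy rate before the programme (invented)
TREND = 0.05        # secular improvement that happens regardless
EFFECT = 0.02       # true programme effect (invented)

# Outcomes after one year: everyone benefits from the trend,
# but only the treated group gets the programme effect on top.
treated = [random.random() < BASELINE + TREND + EFFECT for _ in range(N)]
control = [random.random() < BASELINE + TREND for _ in range(N)]

rate = lambda xs: sum(xs) / len(xs)

# Before-and-after monitoring compares the treated group to its own
# baseline, so the secular trend is wrongly credited to the programme.
print(f"before/after 'impact': {rate(treated) - BASELINE:+.3f}")      # ~ +0.07

# An impact evaluation compares treated to a randomized control group,
# netting out the trend and recovering something close to the true effect.
print(f"RCT estimate:          {rate(treated) - rate(control):+.3f}")  # ~ +0.02
```

The before-and-after figure overstates the impact more than threefold in this toy example, which is exactly the risk of promoting monitoring data to an evaluation role.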

To judge whether a programme is working, an impact evaluation is needed – preferably a randomized controlled trial (RCT). As we have learned in the last 15 years, RCTs can be applied in a wide range of settings with many possible variations in design; see my blog post on selling RCTs for more details.

An impact evaluation tells us if the programme works or not and, with suitable mixed-method design, can give valuable lessons on design and implementation. This evidence can tell decision-makers whether to expand, modify or close the programme.

However, in the last case of the Evidence Peter Principle, single studies are inappropriately used to make global policy statements. Deworming is perhaps the best-known case, but it is far from the only example. Single studies are not suitable for that purpose; for that we need an assessment of the global evidence in systematic reviews.
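Systematic reviews of effectiveness typically pool study results through meta-analysis. A minimal sketch of fixed-effect, inverse-variance pooling – with invented effect sizes, not real deworming results – shows why one study alone can mislead:

```python
# Hypothetical effect sizes (standardized mean differences) and standard
# errors from five studies of the same intervention; all numbers invented.
studies = [(0.45, 0.20), (0.05, 0.10), (0.15, 0.12), (-0.02, 0.15), (0.10, 0.08)]

# Inverse-variance weighting: more precise studies count for more.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.2f} (SE {pooled_se:.2f})")
# A global policy statement based on the first study alone (0.45) would
# look very different from the pooled estimate across all five.
```

The point is not the arithmetic but the discipline: a single striking result is one draw from a distribution of findings, and only the body of evidence tells us where the centre of that distribution lies.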

One reason for the persistence of the Evidence Peter Principle is the under-investment in the evidence architecture. That will be the subject of the last article in this blog series.

To contribute to the global debate on these issues join us at the What Works Global Summit 2019 in Mexico City this October.
