By Howard White, CEO, The Campbell Collaboration, originally posted here
The sight of people sleeping rough on snow-strewn winter streets is an affront to our aspiration to ensure minimum living standards for all citizens. And people who are street homeless are just the tip of the iceberg of the broader problem of inadequate accommodation: staying with friends or relatives, being crammed into sub-standard rented accommodation, and so on. Why is this happening? Do we simply not know what to do to tackle homelessness?
The last decade has seen the rise of evidence-based policy, the cornerstone of which is rigorous studies that determine which programmes work and which don’t. Traditional process evaluations give useful information about issues which arise in implementation. And they can give insights about design problems.
But process evaluations cannot give a definitive, quantified answer to the question of the extent to which an intervention reduces homelessness. And they cannot say whether interventions successfully address related issues, such as homeless people’s and families’ access to services. That usually requires studies with a well-defined control group: a comparable group who are not in the programme. This is best achieved through randomized controlled trials, though non-experimental approaches can also produce credible results.
However, there is a growing evidence base of what works to combat homelessness. The Centre for Homelessness Impact is working with the Campbell Collaboration to produce maps of the existing evidence. The first stage of this work has mapped 131 studies, which were identified in a systematic review of homelessness conducted by the Norwegian Institute of Public Health (FHI) in Oslo.
This database shows the recent growth in rigorous studies. There were just under two studies a year published prior to 2000, an average of 4 a year from 2000 to 2009 and nearly 10 a year since 2010.
What do we learn from this preliminary work?
First, the evidence map is not a desert. In many sectors we see huge evidence gaps where no studies have been done. This is largely not the case for homelessness, with the exception of legislation where it is generally difficult to find appropriate study designs. Moreover, the studies cover a broad range of outcomes, not just housing stability. There are already studies we can learn from.
Second, the evidence is not evenly distributed. The largest concentrations of studies are on health and social care interventions, followed by accommodation-based approaches. There is relatively less evidence for education and for employment and skills interventions. Similarly, the outcome focus is on health and housing stability, closely followed by other measures of wellbeing. There are far fewer studies addressing employment and income for those at risk of homelessness.

Evidence maps show what evidence is available, not what the evidence says. But some initial impressions can be given which help lay out the road map for further work.
Reviews in many areas show that most things don’t work. There is growing recognition of the ‘80 per cent rule’. Eighty per cent of things don’t work, and that includes many interventions which seem like ‘no brainers’.
But a first look at studies of homelessness suggests exceptions to this pattern, perhaps because so few interventions have been rigorously tested. Most studies do find interventions to be effective, meaning they are better than ‘usual services’. The control group in most studies doesn’t get nothing: it gets the usual services which are available in the absence of the programme. The FHI review of randomized controlled trials of the impact on housing stability identified all the following as successful interventions: Housing First, critical time intervention, both abstinence-contingent housing and non-abstinence-contingent housing (the latter with high-intensity case management), subsidised housing, and residential treatment. More detailed work is needed to assess the extent to which these findings are subject to common biases in academic research, such as publication bias and outcome reporting bias: that is, papers with ‘null findings’ never see the light of day, and authors don’t discuss, or even report, null findings in their published papers. Or it may be that the studies do not properly account for attrition (people dropping out of the programme), thus overstating impact.
Second, where things do work, effect sizes are small. An intervention ‘working’ doesn’t mean that it solves the problem; it usually means a relatively small percentage shift in the outcome. Again, homelessness appears to be an exception. At least some studies show large effects, as did the FHI review for intervention categories such as case management. For example, a study of a Critical Time Intervention for mentally ill men being discharged from a shelter in New York found that the number of homeless nights in the 18 months following discharge was 30 for those who received the Critical Time Intervention, compared to 91 for those receiving usual services. A study of Housing First with Assertive Community Treatment in Canada found that participants found housing more quickly (73 days compared to 220 for those with usual services) and spent more than twice as long in stable housing (281 versus 115 days at the end of the study).
Third, in many sectors multi-component interventions are found to be most effective. There are two possible reasons for this, which are not mutually exclusive. First, those at risk of homelessness often face multiple, interacting problems such as mental ill-health, substance abuse and unemployment. Hence a package of interventions is needed: not just housing support, but also mental health interventions and support in job searching. Giving one alone may not be sufficient. The second explanation is that different things work for different people. For one person, dealing with substance abuse may be sufficient to get them back on track. For another, it may be getting a job. It matters for programme design and targeting to know which of these is the case, an answer which may vary by context and individual.
So, whilst there is an evidence base on which to build a platform for evidence-based policies, there are some important caveats.
1. Most of the evidence comes from North America.
Few studies are from the United Kingdom. Of the 89 RCTs listed in the FHI review, 74 are from the USA and another five from Canada; just seven are from the UK. Of these, six are based in London, one of which also includes Manchester, and the seventh is from Oxford. None are from Scotland, Wales or Northern Ireland.
Overseas studies are relevant, but we do need to be sensitive to differences in context which may mean that things that worked elsewhere work less well (or better) here. Local studies of promising interventions should be undertaken. Evidence-based policy is in any case not a blueprint approach of ‘it worked in Ontario, so let’s do it in Oxford’. Rather, programmes which worked elsewhere are adapted to the local context and then tested in that context.
2. Even if most things seem to work we can’t do everything.
There are two parts to this. For multi-component interventions, which bits are necessary? This matters not just so that we do the bits that work, but because the more complicated a programme, the more likely it is to go wrong. More generally, we need to know what works best. More specifically, which interventions are most cost effective? This is where values also play a role. It is very plausible that prevention is the most cost-effective approach, as it avoids all the costs incurred once a person falls into the vicious circle of homelessness, unemployment and substance abuse. But that does not mean we can ignore those sleeping rough. Resource allocation is ultimately a political decision, but one which should be informed by robust evidence.
Currently that evidence is not readily available or used. The mission of the new Centre for Homelessness Impact is to bring rigorous evidence of what works in an accessible, usable form for use by policy makers and practitioners.