Interview with Emily Tanner-Smith

Interview conducted by Ciara Keenan and bio written by Jennifer Hanratty
This article was originally posted on the Meta-evidence website on 12 February 2018.

We are delighted to have Emily Tanner-Smith as our guest this week. Professor Tanner-Smith is a Research Scientist at the Prevention Science Institute and Associate Professor at the University of Oregon, the current Methods Editor for the Campbell Collaboration, and former director of the Meta-Analysis Center at the Peabody Research Institute, Vanderbilt University. Her research focuses on adolescent substance use and addiction, and on applying research synthesis methods to better understand variability in the effectiveness of interventions by context, setting, and population. Exploring heterogeneity to better understand what works, for whom, and under what circumstances is a key theme in Professor Tanner-Smith's work.

What’s an annoying misconception you have heard about meta-analysis, and how did/would you respond to it?

That heterogeneity is always a “problem” that invalidates the findings from a meta-analysis. This misconception likely arose from the fact that many early meta-analyses aimed to estimate a common (fixed-effect) mean effect size across primary studies. However, I would say most modern meta-analyses are no longer simply interested in estimating the common average effect across studies. Rather, as scholars have come to increasingly embrace a complexity perspective, many meta-analyses now assume multiple true effects in the population, and are explicitly interested in quantifying and potentially explaining heterogeneity in effects. This is why we have seen an increase in the use of meta-analytic methods that can be used to explore heterogeneity in effects (such as subgroup analysis, meta-regression analysis, and network meta-analysis), as the field moves beyond questions about whether programs work (or not) to instead focus on questions about for whom and under what conditions a program works best.
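To make this concrete, here is a minimal Python sketch (ours, not from the interview) of how heterogeneity is typically quantified in a random-effects meta-analysis: it computes Cochran's Q, the DerSimonian-Laird estimate of the between-study variance (tau-squared), and the I-squared statistic. The effect sizes and variances are invented for five hypothetical studies.

```python
import numpy as np

# Hypothetical effect sizes (e.g., standardized mean differences) and
# their sampling variances from five primary studies.
yi = np.array([0.10, 0.35, 0.52, 0.20, 0.68])
vi = np.array([0.04, 0.02, 0.05, 0.03, 0.06])

# Fixed-effect (inverse-variance) weights and Cochran's Q statistic.
w = 1 / vi
mu_fe = np.sum(w * yi) / np.sum(w)
q = np.sum(w * (yi - mu_fe) ** 2)
df = len(yi) - 1

# DerSimonian-Laird estimate of the between-study variance tau^2.
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

# I^2: the share of total variability attributable to heterogeneity.
i2 = max(0.0, (q - df) / q) * 100

# Random-effects pooled estimate using tau^2-adjusted weights.
w_re = 1 / (vi + tau2)
mu_re = np.sum(w_re * yi) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))

print(f"Q = {q:.2f} (df = {df}), tau^2 = {tau2:.3f}, I^2 = {i2:.1f}%")
print(f"Random-effects mean = {mu_re:.3f} (SE = {se_re:.3f})")
```

On this view, a non-trivial tau-squared is not a nuisance to be explained away but a quantity of substantive interest, which subgroup or meta-regression analyses can then attempt to explain.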

Which meta-analysis has most historical significance for you?

Petrosino and colleagues’ (2004; updated in 2013) review of Scared Straight and juvenile awareness programs for preventing juvenile delinquency. These ‘Scared Straight’ programs began in the 1970s; in them, at-risk youth visit adult prisons and interact with adult inmates, and the programs often use shock or scare tactics to warn youth about the dangers of a life of crime. During an era of public approval of “Tough on Crime” strategies, Scared Straight programs often resonated with the public and the media as a promising crime deterrence policy. The Petrosino et al. (2004; 2013) meta-analysis was the first comprehensive systematic review to synthesize evidence on the effects of these programs. Based on their synthesis of nine trials, the review authors concluded that Scared Straight programs were not an effective crime prevention strategy and might actually be harmful, potentially increasing the likelihood of delinquent or criminal activity. In light of this evidence of ineffectiveness and potential harm, Scared Straight programs are not funded by the U.S. Department of Justice. This meta-analysis was particularly influential in that it highlighted the crucial role of scientific evidence versus non-scientific knowledge derived from popular opinion or the media when evaluating the effects of social programs.

Why is a forest plot called a forest plot?

Forest plots are one of the most efficient graphical tools for summarizing the findings from a meta-analysis. Forest plots typically show the effects from all studies included in the meta-analysis, displayed as individual effect size estimates and their corresponding confidence intervals. Most forest plots will also display a diamond at the bottom, which represents the estimated mean effect size across studies (and its corresponding confidence and/or prediction interval). The individual effect size estimates from each study are usually displayed as squares proportional to the weight received in the meta-analysis, which serves to draw visual attention to studies with the most precise effect estimates. Without such weighting, visual attention would be drawn to studies with the widest confidence intervals, and hence the lowest precision. Although the exact historical origins of the name “forest plot” are unclear, Lewis and Clarke (2001) suggest these plots are called forest plots because they include a forest of lines, namely the horizontal lines that represent confidence intervals around individual study effects. The forest plot was not named for a particular researcher, although Richard Peto reportedly joked in 1990 that the plot was named for the breast cancer researcher Pat Forrest (Lewis & Clarke, 2001: p. 1480).

[Image: forest plot]
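For readers curious how such a plot is constructed, below is a minimal matplotlib sketch (again ours, with invented numbers and labels) that draws weighted squares for individual studies, horizontal lines for their 95% confidence intervals, and a diamond at the bottom for the pooled estimate.

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical study effects, variances, and labels (not from a real review).
yi = np.array([0.10, 0.35, 0.52, 0.20, 0.68])
vi = np.array([0.04, 0.02, 0.05, 0.03, 0.06])
labels = [f"Study {i + 1}" for i in range(len(yi))]

w = 1 / vi                       # fixed-effect weights
mu = np.sum(w * yi) / np.sum(w)  # pooled mean effect
se = np.sqrt(1 / np.sum(w))
ci = 1.96 * np.sqrt(vi)          # 95% CIs for individual studies

fig, ax = plt.subplots(figsize=(6, 4))
ys = np.arange(len(yi), 0, -1)   # plot studies top to bottom

# Horizontal CI lines (the "forest"), with squares scaled by weight.
ax.errorbar(yi, ys, xerr=ci, fmt="none", ecolor="black", capsize=3)
ax.scatter(yi, ys, s=80 * w / w.max(), marker="s", color="black", zorder=3)

# Diamond at the bottom for the pooled estimate and its 95% CI.
dx = [mu - 1.96 * se, mu, mu + 1.96 * se, mu]
dy = [0, 0.25, 0, -0.25]
ax.fill(dx, dy, color="black")

ax.axvline(0, linestyle="--", color="grey")  # line of no effect
ax.set_yticks(list(ys) + [0])
ax.set_yticklabels(labels + ["Pooled"])
ax.set_xlabel("Effect size")
plt.tight_layout()
plt.show()
```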

What do you want to see more of in published research?

[Image: logic model]

As someone who conducts meta-analyses of complex social interventions, one thing I would like to see in the primary evaluation research is better description of program logic models. Logic models can be used to depict the theorized causal pathways of an intervention, highlighting the important core components (or ‘kernels’) of an intervention. Logic models are also useful for specifying the potential pathways by which an intervention is theorized to operate, as well as the different settings or contexts in which the intervention may be more or less effective. Logic models reported in the primary evaluation literature, along with those created by systematic reviewers, can thus help identify items that a meta-analyst should measure during data collection and potentially examine in the meta-analysis: for example, items measuring the presence/absence or strength of key intervention components, potential mediators or mechanisms of program effects, and potential moderators of program effects.

What’s been a really exciting advance in the method?

One particularly exciting advance is the increased use of individual participant data meta-analysis methods. Traditional meta-analysis methods for synthesizing aggregate study data can be incredibly informative, but are inherently limited for examining participant-level characteristics, such as age, gender, race, ethnicity, or baseline functioning, that may moderate program effects. Historically, a major barrier to conducting individual participant data meta-analyses has been the lack of access to participant-level study data. However, given recent cultural shifts that value and embrace open science, I am optimistic that meta-analysts will see improved access to such data as researchers, funders, and journals become increasingly committed to data sharing for promoting transparent and replicable science. This cultural shift, combined with advances in statistical methods for synthesizing both aggregate and participant-level data, should greatly improve our understanding of variability in program effects.
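One common way to do this is the two-stage approach, sketched below in Python with entirely simulated trial data: stage one estimates a treatment-by-age interaction within each trial by ordinary least squares, and stage two pools those interaction estimates with inverse-variance weights. All sample sizes, coefficients, and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate participant-level data for a hypothetical trial: the outcome
# depends on treatment, age, and a treatment-by-age interaction.
def simulate_trial(n, interaction):
    treat = rng.integers(0, 2, n)
    age = rng.normal(15, 2, n)  # adolescent sample
    y = 0.2 * treat + 0.05 * age + interaction * treat * (age - 15)
    y += rng.normal(0, 1, n)
    return treat, age, y

trials = [simulate_trial(n, b)
          for n, b in [(200, 0.10), (150, 0.12), (300, 0.08)]]

# Stage 1: within each trial, estimate the treatment-by-age interaction
# by ordinary least squares, along with its sampling variance.
est, var = [], []
for treat, age, y in trials:
    x = np.column_stack([np.ones_like(y), treat, age - 15,
                         treat * (age - 15)])
    beta, *_ = np.linalg.lstsq(x, y, rcond=None)
    resid = y - x @ beta
    sigma2 = resid @ resid / (len(y) - x.shape[1])
    cov = sigma2 * np.linalg.inv(x.T @ x)
    est.append(beta[3])   # interaction coefficient
    var.append(cov[3, 3])

# Stage 2: pool the interaction estimates with inverse-variance weights.
est, var = np.array(est), np.array(var)
w = 1 / var
pooled = np.sum(w * est) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
print(f"Pooled treatment-by-age interaction: {pooled:.3f} (SE = {se:.3f})")
```

This kind of participant-level moderator analysis is exactly what aggregate-data meta-analysis cannot do well, since it only sees study-level summaries such as mean age.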

Further reading:

Paper debunking Myths and Urban Legends about Meta-Analysis
http://shell.cas.usf.edu/~pspector/ORM/AguinisORM-11.pdf

Addressing common criticisms of meta-analysis
https://www.meta-analysis.com/downloads/Intro_Criticisms_optim.pdf

Podcast debating meta-analyses and systematic reviews
https://soundcloud.com/everything-hertz/4-meta-analysis-or-mega-silliness

Petrosino and colleagues’ Scared Straight meta-analysis
https://staging1.campbellcollaboration.org/library/juvenile-delinquency-scared-straight-etc-programmes.html

Lewis and Clarke’s (2001) discussion of the history of the forest plot
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1120528/

Step-by-step guide to reading forest plots
https://www.students4bestevidence.net/tutorial-read-forest-plot/


Meta-evidence is a blog for interviews and tips on evidence synthesis brought to you by Campbell UK & Ireland.
