When Enough Is Enough

Sep 12, 2022

As an economist, a policy researcher, and a generally curious person, I’m always looking for more data, more context, more information to help make decisions. That goes for everything from relatively low-stakes decisions about where to order takeout to higher-stakes choices about how to lead Mathematica. Data has never been more readily available to inform decisions in more facets of our lives, and we’re constantly taking in information—star ratings, online reviews, and even expert recommendations—to make calculated choices.

Having this wealth of information at our fingertips can be extremely valuable and instill confidence that we’re making the right choice, but it can also lead to situations where we’re constantly looking for more information rather than making the best decision based on what we already know.

It makes sense that as the stakes of a decision rise, so too should the reliability of the information used to make it. Given the effort we put into making consumer, personal, and even professional choices, imagine what goes through the minds of the politicians, policy makers, and program administrators tasked with high-stakes decisions about multibillion-dollar programs that have the capacity to improve the lives of people around the world.

Efforts to encourage the adoption of evidence-based policy have mostly been met with greater acceptance of the value of more and more evidence—and greater resistance to actually doing much with it. Talk about paralysis by analysis.

Eventually, we need to know when to say enough is enough.

The fact is that we as an evidence community need to do a better job of describing evidence not as a knowledge base to add to, but as an evolving spectrum of insights to take from. We need to become more comfortable talking about the arc of evidence and do a better job of explaining when it points to the need for action, as opposed to the need for more research. While “more evidence is needed” may be a viable conclusion in the world of policy research briefs, it’s also a luxury policy makers often don’t have. How can researchers better adapt to this reality?

Luckily, there are plenty of examples of high-stakes decisions driven by various forms of evidence already out there. In fact, I was drawn to write about this after re-listening to a popular podcast about the dangers of youth tackle football. There is emerging evidence about the dangers of concussions and growing evidence of a link between football and chronic traumatic encephalopathy, a degenerative brain disease, but nothing yet that establishes a causal link. That hasn’t stopped parents from thinking twice about letting their kids play youth tackle football; enrollment fell by more than 620,000 between 2008 and 2018.

We’ve seen similar swings in habits like smoking, where the prevalence of the problem was so clear that nitpicking the research eventually became counterproductive to public health. Likewise, the evidence linking mining to black lung disease never came from randomized controlled trials, but the dangers were perfectly clear. With malaria, which kills more than half a million people each year, current proven control methods like bed nets haven’t been sufficient to eradicate the disease, which was reason enough for the Bill & Melinda Gates Foundation to make unprecedented investments in vaccine research and development. “If we have the chance to save millions of lives, and a clear plan to make it happen, we have an obligation to act,” said Bill Gates during one such grant announcement in 2008.

My colleague Matt Stagner gave an insightful speech a couple of years ago at the Association for Public Policy Analysis & Management conference, where he made a passionate appeal to move beyond the caricature of the distant, aloof researcher and embrace the emotional aspects of our work. I couldn’t agree more.

That doesn’t mean turning our backs on the rigor and objectivity that give credence to our research. It does mean taking on a bigger role in helping decision makers understand that using evidence is about weighing risks: knowing how and when to condition our course of action on the evidence, and on the risks of not acting—for example, on topics like youth concussions, where the evidence may not be conclusive but certainly points in the same direction. It means having honest conversations with decision makers about building this risk profile into the work before we even set out to collect the data. And it means doing things differently, rethinking what has too often become an all-or-nothing approach rooted in strict adherence to randomized controlled trials.
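
To make that risk weighing concrete, here is a minimal sketch comparing the expected cost of acting now with the expected cost of waiting for conclusive evidence. Every probability and cost below is a hypothetical placeholder; the point is the structure of the calculation, not the numbers.

```python
# Expected-cost comparison for acting on suggestive (not conclusive) evidence.
# All figures are hypothetical placeholders.

def expected_cost(p_harm, cost_if_harm, cost_if_no_harm):
    """Expected cost of an option, given the chance the suspected harm is real."""
    return p_harm * cost_if_harm + (1 - p_harm) * cost_if_no_harm

P_HARM = 0.6        # evidence is suggestive but not conclusive
HARM_COST = 100.0   # cost absorbed if the harm is real and we did nothing
ACTION_COST = 20.0  # cost of the protective policy, paid in either case

act = expected_cost(P_HARM, ACTION_COST, ACTION_COST)  # policy cost either way
wait = expected_cost(P_HARM, HARM_COST, 0.0)           # harm cost only if real

print(f"expected cost of acting now: {act:.0f}")   # 20
print(f"expected cost of waiting:    {wait:.0f}")  # 60
```

Even with a 40 percent chance that the harm turns out not to be real, waiting is the costlier option in this toy example. That is exactly the calculation a reflexive call for more evidence quietly skips.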

The truth is that many of the relevant questions can be addressed by becoming savvier about how we use other methods, or combinations of methods. My colleagues have discussed the value of Bayesian approaches designed to help discover what is most likely to work by putting new research findings in the context of an existing evidence base. That’s just one example of how the policy research landscape continues to evolve.
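
As a concrete illustration, here is a minimal sketch of the simplest form such an approach can take: a normal-normal Bayesian update that places a new study’s estimate in the context of a prior built from the existing evidence base. The model choice and every number below are hypothetical, not a description of my colleagues’ actual methods.

```python
# Minimal normal-normal Bayesian update: shrink a new study's estimate
# toward a prior built from the existing evidence base.
# All numbers are hypothetical placeholders.

def bayesian_update(prior_mean, prior_se, study_effect, study_se):
    """Combine a prior effect estimate with a new study's estimate,
    both measured on the same outcome scale."""
    prior_precision = 1.0 / prior_se ** 2
    study_precision = 1.0 / study_se ** 2
    post_precision = prior_precision + study_precision
    post_mean = (prior_mean * prior_precision
                 + study_effect * study_precision) / post_precision
    return post_mean, post_precision ** -0.5  # posterior mean and standard error

# Hypothetical example: the evidence base suggests a 2-point gain (se 1.5);
# a new study reports a 5-point gain (se 2.0).
mean, se = bayesian_update(2.0, 1.5, 5.0, 2.0)
print(f"posterior effect: {mean:.2f} points (se {se:.2f})")  # roughly 3.08 (se 1.20)
```

The posterior lands between the new finding and the prior, with a smaller standard error than either input alone. That is what lets a decision maker ask “how likely is this to work?” rather than only “did this one study clear a significance threshold?”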

The emergence and acceptance of other forms of evidence is an important step forward for the evidence community. We also need to do a better job of calling out instances where uncertainty about the evidence reflects not well-meaning questions or concerns, but individuals or groups who already oppose the evidence digging in their heels. When the stakes are as high as they are on questions like youth concussions, we as the research community have an obligation to say so, and to bring evidence to the table that supports how to set policy now, rather than make our traditional call for more evidence. To maintain credibility, any advocacy for evidence-based policy must also include a discussion of these hard questions and a commitment from the research community to evaluate our own methods as rigorously as we evaluate the programs we study.

As an evidence community, we should welcome those challenges. If we want decision makers to turn to our expertise as they look for the most effective and efficient ways to serve the public, we need to seek out opportunities to better explain the continuum of evidence available to them—warts and all—rather than discounting and dismissing that which does not rise to previously established benchmarks of acceptability.

If we want to grow the practice of evidence-based policy, we can’t simply advocate for more randomized controlled trials. We need to be passionate advocates for the best available, most accessible research possible. We need to know when to say enough is enough.

About the Author

Paul Decker

President and Chief Executive Officer