
Who’s good at forecasts?

How to sort the best from the rest

By Philip Tetlock and Dan Gardner

At the turn of the 21st century The Economist had sage words that deserve to be read again. “In every way that people, firms, or governments act and plan”, it wrote, “they are making implicit forecasts about the future.” That is a fundamental fact. Our decisions rest on expectations of what the future will look like and how those decisions will play out in it. Forecasts are as important as the decisions they inform.

So how good are they? That question is both obvious and critical. And yet, in most instances, we really don’t know the answer.

Of course a few fields, such as meteorology, closely monitor predictive accuracy. And there are curious cases—recall the pundits in 2012 who predicted a President Romney—where we have a good idea who got it right. But mostly, no one knows. And because of that, no one can be sure if forecasts are the best available, or if they can be improved, or how. We can’t even be sure that the forecasts guiding our decisions are more insightful than what we would hear from oracles examining goat guts. Worse still, we often don’t know that we don’t know.

As supporters of Larry Summers and Janet Yellen argued over which economist was better suited to succeed Ben Bernanke at the Federal Reserve, Steven Rattner, a Wall Street financier, sided with Mr Summers. Mr Summers’s “batting average and qualifications” put him “at the top of the heap”, Mr Rattner wrote. Qualifications, arguably. But batting average? Mr Rattner didn’t say what Mr Summers’s batting average was, and he didn’t provide anything remotely like the evidence needed to calculate one. He merely recalled a few examples of Mr Summers saying prescient things, as if video of a batter slugging a couple of home runs proved he had the best batting average in the league. So why was Mr Rattner so sure that Mr Summers was the best forecaster?

We suspect that Mr Rattner did what we all routinely do in this sort of situation: he resorted to what Daniel Kahneman, a Nobel laureate, has dubbed “attribute substitution”. Asked a really tough question, we unconsciously replace it with a much easier one. “How good is Larry Summers’s forecasting?” is a very tough question. So Mr Rattner substituted: “Can I think of times when Larry nailed it?” The answer to that was obvious. And so Mr Rattner became convinced he knew something he did not really know.

Avoiding this trap, and acknowledging ignorance, is the first step to doing better. But then comes the challenge of generating real insight into forecasting accuracy. How can one compare forecasting ability?


The only reliable method is to conduct a forecasting tournament in which independent judges ask all participants to make the same forecasts in the same timeframes. And forecasts must be expressed numerically, so there can be no hiding behind vague verbiage. Words like “may” or “possible” can mean anything from probabilities as low as 0.001% to as high as 60% or 70%. But 80% always and only means 80%.
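How, concretely, do numbers make forecasters comparable? One standard yardstick for probability forecasts is the Brier score: the squared gap between the probability a forecaster gave and what actually happened, averaged over all questions, with lower scores being better. The sketch below is purely illustrative—the forecasts and outcomes are invented—but it shows how two people answering the same questions can be ranked.

```python
# Illustrative only: the forecasts and outcomes below are invented.
# Each forecaster states a probability (0-1) for the same set of questions;
# the outcome is 1 if the event occurred, 0 if it did not.

def brier_score(probabilities, outcomes):
    """Mean squared error of probabilistic forecasts. Lower is better;
    a forecaster who always says 50% scores 0.25 on binary questions."""
    return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / len(outcomes)

outcomes     = [1, 0, 0, 1, 1]              # what actually happened
forecaster_a = [0.8, 0.3, 0.1, 0.7, 0.9]    # confident and mostly right
forecaster_b = [0.6, 0.5, 0.4, 0.5, 0.6]    # hedges everything near 50%

print("A:", brier_score(forecaster_a, outcomes))  # ~0.048
print("B:", brier_score(forecaster_b, outcomes))  # ~0.196
```

Because both forecasters answered the same questions in the same numerical terms, “who has the better batting average” becomes arithmetic rather than anecdote.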

In the late 1980s one of us (Philip Tetlock) launched such a tournament. It involved 284 economists, political scientists, intelligence analysts and journalists and collected almost 28,000 predictions. The results were startling. The average expert did only slightly better than random guessing. Even more disconcertingly, experts with the most inflated views of their own batting averages tended to attract the most media attention. Their more self-effacing colleagues, the ones we should be heeding, often don’t get on to our radar screens.


That project proved to be a pilot for a far more ambitious tournament currently sponsored by the Intelligence Advanced Research Projects Activity (IARPA), part of the American intelligence world. Over 5,000 forecasters have made more than 1m forecasts on more than 250 questions, from euro-zone exits to the Syrian civil war. Results are pouring in and they are revealing. We can discover who has better batting averages, not take it on faith; discover which methods of training promote accuracy, not just track the latest gurus and fads; and discover methods of distilling the wisdom of the crowd.
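What does “distilling the wisdom of the crowd” look like in practice? The simplest distillation is an unweighted average of everyone’s probabilities; a slightly sharper one gives more weight to forecasters with better track records. The sketch below is illustrative only—the names, numbers and weighting scheme are invented, and the tournament’s real aggregation algorithms are more elaborate.

```python
# Toy illustration of aggregating a crowd's forecasts for one question.
# Names, probabilities and weights are invented for this sketch.

forecasts = {          # each forecaster's probability for the same question
    "alice": 0.85,
    "bob":   0.60,
    "carol": 0.90,
}
track_record = {       # accuracy weight earned on past questions (higher = better)
    "alice": 3.0,
    "bob":   1.0,
    "carol": 2.5,
}

# Unweighted average: every voice counts equally.
unweighted = sum(forecasts.values()) / len(forecasts)

# Weighted average: better past performers count for more.
total_weight = sum(track_record.values())
weighted = sum(forecasts[n] * track_record[n] for n in forecasts) / total_weight

print(f"unweighted crowd: {unweighted:.2f}")  # 0.78
print(f"weighted crowd:   {weighted:.2f}")    # 0.83
```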

The big surprise has been the support for the unabashedly elitist “super-forecaster” hypothesis. The top 2% of forecasters in Year 1 showed that their edge was more than luck. If it were just luck, the “supers” would regress to the mean: yesterday’s champs would be today’s chumps. But they actually got better. When we randomly assigned “supers” to elite teams, they blew the lid off IARPA’s performance goals. They beat the unweighted average (the wisdom of the overall crowd) by 65%; beat the best algorithms of four competitor institutions by 35-60%; and beat two prediction markets by 20-35%.
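Why does persistence matter so much? A quick simulation makes the logic of regression to the mean concrete: in a world where scores are pure luck, the forecasters who top the table in Year 1 land back at the average in Year 2. The parameters below are invented purely for illustration.

```python
# Illustrative simulation (invented parameters): if Year-1 rankings were pure
# luck, the top performers should regress to the mean in Year 2.
import random

random.seed(0)
N = 5000  # forecasters

# Luck-only world: every score is independent noise around the same mean
# (lower score = better, as with a Brier score).
year1 = [random.gauss(0.25, 0.05) for _ in range(N)]
year2 = [random.gauss(0.25, 0.05) for _ in range(N)]

# Take the top 2% in Year 1 and see how the same people do in Year 2.
top = sorted(range(N), key=lambda i: year1[i])[: N // 50]
print(sum(year1[i] for i in top) / len(top))  # well below 0.25: they look brilliant
print(sum(year2[i] for i in top) / len(top))  # back near 0.25: it was luck
```

The actual “supers” did the opposite of the luck-only pattern—they improved—which is what makes the hypothesis hard to dismiss.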

Over to you

To avoid slipping back to business as usual—believing we know things that we don’t—more tournaments in more fields are needed, and more forecasters. So we invite you, our readers, to join the 2014-15 round of the IARPA tournament. Current questions include: Will America and the EU reach a trade deal? Will Turkey get a new constitution? Will talks on North Korea’s nuclear programme resume? To volunteer, go to the tournament’s website at www.goodjudgmentproject.com. We predict with 80% confidence that at least 70% of you will enjoy it—and we are 90% confident that at least 50% of you will beat our dart-throwing chimps.

Philip Tetlock: Leonore Annenberg University Professor of Psychology and Management, University of Pennsylvania
Dan Gardner: journalist and author of “Future Babble: Why Pundits Are Hedgehogs and Foxes Know Best” (Plume)