In the news and opinion business, late December and early January are the dead season as journalists go on vacation, leaving behind their canned top 10 lists for last year and forecasts for the new year.
I’m not much of a fan of either, but I do approve of the modest increase in recent years in pundits grading themselves on their predictions from the previous January. Nobody has contributed more to this salutary trend over the decades than Philip E. Tetlock, a heavyweight professor of psychology and political science at the University of Pennsylvania. Along with Jonathan Haidt, Lee Jussim, and a few other brave souls, Tetlock is part of a coterie of heterodox social scientists who have been documenting the debilitating stranglehold that political correctness has on social psychology.
In recent years, the highbrow spooks at the government’s Intelligence Advanced Research Projects Activity, an agency set up in 2006 in shame over the Iraq WMD call, have funded four years of Tetlock’s Good Judgment Project, a tournament to find the best amateur foreign-policy forecasters. In a development reminiscent of the followers of statistical maven Bill James taking over baseball general managers’ offices, it turns out that the best volunteers consistently beat the government’s own analysts, even though the professionals have access to classified information.
Tetlock, teaming up with journalist Dan Gardner, has now published, for the frequent-flyer market (e.g., guys who liked Michael Lewis’ Moneyball or, at the high end, Daniel Kahneman’s Thinking, Fast and Slow), a popular book on what he’s learned from his annual forecasting tournament: Superforecasting: The Art and Science of Prediction.
The first lesson I took away is that, just as we’ve seen with baseball statistics, there is a lot of underutilized analytic talent out there.
Another message from Tetlock is that hard work pays off in forecasting. For example, I signed up for Tetlock’s tournament one year, but quickly realized that it would be a huge amount of labor to keep up to date on a lot of overseas issues that I’m not very interested in—e.g., “Will Serbia be officially granted European Union candidacy by 31 December 2011?” And I would probably just end up being humiliatingly wrong on average.
For example, one January 2014 question was: Would “the number of registered Syrian refugees reported by the United Nations Refugee Agency as of 1 April 2014” be under 2.6 million?
That year’s winner, software engineer Tim Minto, initially decided there was a 55 percent chance of the number coming in under 2.6 million. But he then updated his estimate 33 times over the next three months, moving his forecast by an average of 3.5 percentage points with each adjustment. Tetlock observes:
It’s not dramatic. It’s even, to be candid, a tad boring. Tim will never become a guru who shares his visionary insights in TV appearances, bestselling books, and corporate gigs. But Tim’s way works.
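Tetlock’s tournaments grade forecasters by Brier score, the mean squared error between stated probabilities and the eventual 0-or-1 outcome, which is why Minto’s many small nudges pay off. A minimal sketch, with made-up numbers standing in for Minto’s actual forecasts:

```python
# Illustrative sketch of Brier scoring (the numbers below are
# invented for the example, not Minto's real forecast series).

def brier(forecasts, outcome):
    """Time-averaged Brier score for a binary question.
    forecasts: one probability per scoring period.
    outcome: 1 if the event happened, else 0.
    Lower is better: 0.0 is perfect, 0.25 is a permanent coin flip."""
    return sum((p - outcome) ** 2 for p in forecasts) / len(forecasts)

# A forecaster who says 55 percent once and never revisits it:
static = [0.55] * 10

# An updater who starts at the same 55 percent but nudges toward
# certainty by a few percentage points as evidence accumulates:
updater = [0.55, 0.58, 0.62, 0.65, 0.70, 0.75, 0.80, 0.86, 0.92, 0.97]

outcome = 1  # suppose the event did occur
print(brier(static, outcome))   # penalized every period for standing pat
print(brier(updater, outcome))  # much lower: rewarded for the small updates
```

Because the score is averaged over every period the question is open, a string of small, boring corrections beats a confident early call that is never revisited.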
Tetlock’s book features numerous other tips on forecasting. For example, the best forecasts were obtained when past high-scoring individuals were teamed together and encouraged to argue over the reasons behind their estimates.
Still, as much as I admire Tetlock’s work and recommend his book, let’s look at why forecasting the most interesting developments is so hard that questions about the most memorable events often don’t even suggest themselves ahead of time.
Take a look back at some of the more striking events of 2015.
For example, Tetlock’s question about whether the number of Syrian refugees would be under 2.6 million in three months was a good one in that the answer wasn’t obvious until late in the time span when Minto’s probability estimates headed toward 100 percent.
On the other hand, from the perspective of early 2016, a really interesting question about Syrian refugees would have been: While this book’s manuscript is at the printers, will a European national leader decide on a whim to invite in a million of them?
The name “Merkel” doesn’t appear in the index.
Let’s think through what it would have taken to generate questions about the most interesting events of 2015. To have asked contestants to predict the probability of Merkel’s Boner, you would first have had to imagine the possibility of it happening at all. But inviting home a million-Muslim mob was a remarkably stupid decision by Dr. Merkel, and the universe of possible stupid decisions by politicians is perhaps too large to pester contestants to evaluate.
Nonetheless, Merkel’s blunderkrieg was more or less accurately foreseen in 1973 by the French novelist Jean Raspail in his book The Camp of the Saints, based on his sense of where the zeitgeist was headed. In hindsight, Raspail’s prophecy appears brilliant. Still, you can imagine the technical problems of phrasing questions ahead of time that are both broad enough and specific enough. Raspail envisioned, for example, a French-Hindu-impoverished-by-sea invasion rather than a German-Muslim-smartphone-by-land hegira. Is that close enough?
Moreover, Raspail being right 42 years ahead of time isn’t much use in an annual contest that, by its nature, can’t look more than 12 months ahead.
Also, Raspail missed key aspects of what happened in 2015. He imagined that the refugees would be starving masses who overcame European resistance by their pitifulness. But instead, the invaders turned out to be strutting military-age youths with smartphones, giving Germany’s surrender a weird sexual vibe that nobody yet has explained satisfactorily even in retrospect.
Copyright 2017 TakiMag.com and the author. This copy is for your personal, noncommercial use only. You can order reprints for distribution by contacting us at firstname.lastname@example.org.