Who Cares About Forecast Accuracy?

Gardner and Tetlock note that while prediction is hard, we should be able to do better. For example, we could attend less to “hedgehogs” who know “one big thing” and whose forecasts are “beaten rather soundly even by the [random] chimp.” Yet we seem surprisingly uninterested in improving our forecasts:

Corporations and governments spend staggering amounts of money on forecasting, and one might think they would be keenly interested in determining the worth of their purchases and ensuring they are the very best available. But most aren’t. They spend little or nothing analyzing the accuracy of forecasts and not much more on research to develop and compare forecasting methods. Some even persist in using forecasts that are manifestly unreliable. … This widespread lack of curiosity … is a phenomenon worthy of investigation.

I can confirm that this disinterest is real. For example, when I try to sell firms on internal prediction markets, in which employees trade on forecasts of things like sales and project completion dates, such firms usually don’t doubt my claims that these forecasts are cheap and more accurate. Nevertheless, they usually aren’t interested.
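To make the mechanism concrete, here is a minimal sketch of one standard way such an internal market can be run: a logarithmic market scoring rule (LMSR) market maker, whose quoted prices double as consensus probability forecasts. The outcome names, liquidity parameter, and trade below are illustrative assumptions, not details of any actual deployment.

```python
import math

class LMSRMarket:
    """Minimal logarithmic market scoring rule (LMSR) market maker.

    A sketch of the kind of internal prediction market described above:
    employees buy shares in outcomes, and the resulting prices serve as
    consensus probability forecasts. Names and parameters are illustrative.
    """

    def __init__(self, outcomes, liquidity=100.0):
        self.b = liquidity                       # larger b: deeper market, slower price moves
        self.shares = {o: 0.0 for o in outcomes}

    def _cost(self):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(q / self.b) for q in self.shares.values()))

    def price(self, outcome):
        # Current price of an outcome = the market's implied probability.
        total = sum(math.exp(q / self.b) for q in self.shares.values())
        return math.exp(self.shares[outcome] / self.b) / total

    def buy(self, outcome, n_shares):
        # A trader pays the change in the cost function; each share of the
        # winning outcome pays out one unit at resolution.
        before = self._cost()
        self.shares[outcome] += n_shares
        return self._cost() - before

# Hypothetical usage: an employee who thinks "ships on time" is underpriced
# buys shares, moving the consensus forecast toward her belief.
market = LMSRMarket(["ships on time", "ships late"])
print(round(market.price("ships on time"), 3))   # 0.5 at launch
paid = market.buy("ships on time", 30)
print(round(paid, 2), round(market.price("ships on time"), 3))
```

Because traders profit only by moving prices toward what actually happens, the current price is a forecast that someone has paid to stand behind.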

TV weather forecasters are not usually chosen based on their forecast accuracy. Top business professors tell me that firms usually aren’t interested in doing randomized experiments to test their predictions about business methods. Furthermore, a well-connected reporter told me that a major DC-area media firm recently abandoned a large project to collect pundit prediction track records, supposedly because readers just aren’t interested.
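For a sense of what such a track-record project would actually have to compute, here is a minimal sketch using the Brier score, a standard accuracy measure for probability forecasts; the forecasters and numbers below are hypothetical.

```python
def brier_score(record):
    """Mean squared error of probability forecasts against 0/1 outcomes; lower is better."""
    return sum((p - outcome) ** 2 for p, outcome in record) / len(record)

# Hypothetical records: (forecast probability that event happens, whether it happened).
hedgehog = [(0.9, 0), (0.9, 1), (0.9, 0), (0.1, 1)]     # confident, often wrong
fox      = [(0.6, 0), (0.7, 1), (0.4, 0), (0.4, 1)]     # hedged, modestly calibrated
chimp    = [(0.5, outcome) for _, outcome in hedgehog]  # always 50/50

for name, record in [("hedgehog", hedgehog), ("fox", fox), ("chimp", chimp)]:
    print(f"{name}: {brier_score(record):.3f}")
```

The always-50/50 “chimp” scores 0.25 on this measure, so a confident pundit who is usually wrong is, in the lead essay’s phrase, “beaten rather soundly even by the chimp.”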

Gardner and Tetlock are heartened to see a big research project testing new ways to aggregate forecasts, as that “holds out the promise of improving our ability to peer into the future.” But I can’t be much encouraged without a better understanding of our disinterest in using already well-tested and simpler methods. After all, a good diagnosis usually precedes a good prognosis.

So let me try to tackle this puzzle head-on. Surprising disinterest in forecast accuracy could be explained either by its costs being higher, or its benefits being lower, than we expect.

The costs of creating and monitoring forecast accuracy might be higher than we expect if in general thinking about times other than the present is harder than we expect. Most animals seem to focus almost entirely on reacting to current stimuli, as opposed to remembering the past or anticipating the future. We humans are proud that we attend more to the past and future, but perhaps this is still harder than we let on, and we flatter ourselves by thinking we attend more than we do.

The benefits of creating and monitoring forecast accuracy might be lower than we expect if the function and role of forecasting is less important than we think, relative to the many functions and roles served by our pundits, academics, and managers.

Consider first the many possible functions and roles of media pundits. Media consumers can be educated and entertained by clever, witty, but accessible commentary, and can coordinate to signal that they are smart and well-read by quoting and discussing the words of the same few focal pundits. Also, impressive pundits with prestigious credentials and clear “philosophical” positions can let readers and viewers gain by affiliation with such impressiveness, credentials, and positions. Being easier to understand and classify helps “hedgehogs” to serve many of these functions.

Second, consider the many functions and roles of academics. Academics are primarily selected and rewarded for their impressive mastery and application of difficult academic tools and methods. Students, patrons, and media contacts can gain by affiliation with credentialed academic impressiveness. In forecasting, academics are rewarded much more for showing mastery of impressive tools than for accuracy.

Finally, consider the many functions and roles of managers, both public and private. By being personally impressive, and by being identified with attractive philosophical positions, leaders can inspire people to work for and affiliate with their organizations. Such support can be threatened by clear tracking of leader forecasts, if that tracking calls leader impressiveness into question.

Even in business, champions need to assemble supporting political coalitions to create and sustain large projects. As such coalitions are not lightly disbanded, they are reluctant to allow last-minute forecast changes to threaten project support. It is often more important to assemble crowds of supporting “yes-men” to signal sufficient backing than it is to get accurate feedback and updates on project success. Also, since project failures are often followed by a search for scapegoats, project managers are reluctant to allow the creation of records showing that respected sources seriously questioned their projects.

Often, managers can increase project effort by getting participants to see an intermediate chance of the project making important deadlines, so that both success and failure seem live possibilities. Accurate estimates of the chances of making deadlines can undermine this impression management. Similarly, overconfident managers who promise more than they can deliver are often preferred, since such managers push their teams harder when the teams fall behind, and so deliver more overall.

Even if disinterest in forecast accuracy is explained by forecasting being only a minor role for pundits, academics, and managers, might we still hope for reforms to encourage more accuracy? If there is hope, I think it comes mainly from the fact that we pretend to care more about forecast accuracy than we actually do. We don’t need new forecasting methods so much as a new social equilibrium, one that makes forecast hypocrisy more visible to a wider audience, and so shames people into avoiding such hypocrisy.

Consider two analogies. First, there are cultures where few requests made of acquaintances are denied. Since it is rude to say “no” to a difficult request, people instead say “yes,” but then don’t actually deliver. In other cultures, it is worse to say “yes” but not deliver, because observers remember and think less of those who don’t deliver. The difference is less in the technology of remembering, and more in the social treatment of such memories.

A second analogy is that in some cultures, people who arrange to meet at particular times actually show up across a wide range of surrounding times. While this might once have been reasonable given uncertain travel times and unreliable clocks, such practices continued long after these problems were solved. In other cultures, people show up close to scheduled meeting times, because observers remember and think less of those who are late.

In both of these cases, it isn’t enough to just have a way to remember behavior. Track-record technology must be combined with a social equilibrium that punishes those with poor records, and that thereby encourages rivals and victims to collect and report such records. The lesson I take for forecast accuracy is that it isn’t enough to devise ways to record forecast accuracy; we also need a matching social respect for such records.

Might governments encourage a switch to more respect for forecast accuracy? Yes: by not explicitly discouraging it! Today, the simplest way to create forecast track records that get attention and respect is by making bets. In a bet, the parties work to define the disputed issue clearly enough to resolve later, and the bet payoff creates a clear record of who was right and wrong. Anti-gambling laws now discourage such bets—shouldn’t we at least eliminate this impediment to more respect for forecast accuracy records?

And once bets are legal we should go further, to revive our ancestors’ culture of respect for bets. It should be shameful to visibly disagree and yet evade a challenge to more clearly define the disagreement and bet a respectable amount on it. Why not let “put your money where your mouth is” be our forecast-accuracy-respecting motto?
