About July 2011
Experts must love making predictions. They keep right on predicting, even though by any reasonable standard, they’re terrible at it.
Many of them, though intelligent and well-informed, nonetheless have difficulty even beating a random guess about future events—or, if you will, beating the proverbial dart-throwing chimp. This applies to many realms of human activity, but above all to politics, and the subject of expert political judgment forms this month’s theme at Cato Unbound.
Once we grasp that the experts aren’t so reliable at predicting the future, a question arises immediately: How can we do better? Some events will always be unpredictable, of course, but this month’s lead authors, Dan Gardner and Philip E. Tetlock, suggest a few ways that the experts might still be able to improve.
To discuss the essay with them, we’ve invited economist and futurologist Robin Hanson of George Mason University, Professor of Finance and Cato Adjunct Scholar John H. Cochrane, and political scientist Bruce Bueno de Mesquita. Each will offer a commentary on Gardner and Tetlock’s essay, followed by a discussion among the panelists lasting through the end of the month.
Dan Gardner and Philip E. Tetlock review the not-too-promising record of expert predictions of political and social phenomena. The truth remains that, for all our social science, the world manages to surprise us far more often than not. Rather than giving up or simply declaring in favor of populism, however, they suggest several ways to improve expert predictions, including greater attention to styles of thinking as well as a “forecasting tournament” in which different methodologies would compete against one another, generating empirical data about what actually works. Still, they concede that our ability to predict the future will probably always be sharply limited.
Robin Hanson argues that most people aren’t interested in the accuracy of predictions because predictions often aren’t about knowing the future. They are about affiliating with an ideology or signaling one’s authority. Whether a prediction eventually comes true has little bearing on either purpose, of course, least of all at the moment it is made. He suggests that one way to make predictions more accurate might be to lift both the social stigma and the legal prohibitions against gambling. Unlike mere predictions, wagers carry real consequences for those who make them. Which, Hanson argues, they should.
John H. Cochrane offers a limited defense of the hedgehogs: Economics is full of uncertainty because the agents within the system are aware of the theories and possible actions of the other agents. Trying to capture all of them produces a hopeless muddle. What is needed instead are explanations of principle, and of the tendencies that arise all other things being equal. This calls for a hedgehoggy worldview after all. “Especially around policy debates,” he argues, “keeping the simple picture and a few basic principles in mind is the only hope.”
We should not be surprised when experts fail to predict the future, says Bruce Bueno de Mesquita. Expertise doesn’t mean good judgment; rather, expertise is an accumulation of many facts about a subject. That we commonly prefer the pronouncements of experts suggests a bias in favor of “wisdom” and against the scientific method. He argues that statistically rigorous game theory can do better by examining the beliefs and objectives of the major players in a given situation, and he welcomes forecasting tournaments as a means of refining the method.