IARPA and Open Testing Methods

I find myself overwhelmingly in agreement with Gardner and Tetlock. The IARPA undertaking may well be the right and most opportune forum for testing alternative theories, methods, and hunches about prediction. Just tell me how to participate, and I am happy to do so. I have already provided Tetlock with dozens of forecasts made by my students over the past two or three years; these are ready for comparison against other predictions and analyses of the same events. What I don’t know is which questions IARPA wants forecasts on and whether those questions are structured in a way suitable to my method. My method requires an issue continuum (it can handle dichotomous choices, but those tend to be uninteresting because they leave no space for compromise), a specification of the players interested in influencing the outcome, and each player’s current bargaining position, salience, flexibility, and potential clout. Preferences are assumed to be single-peaked, and players are assumed to update and learn in accordance with Bayes’ Rule. Bring on the questions and let’s start testing.
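To make those input requirements concrete, here is a minimal sketch of how such a question might be encoded: players arrayed on an issue continuum, each with a position, salience, clout, and flexibility. The power-weighted median used as the baseline forecast, along with every name and number below, is an illustrative assumption only, not the full bargaining-and-updating model described above.

```python
# Illustrative sketch only: a minimal data structure for an issue-continuum
# forecasting question, plus a power-weighted median as a crude baseline forecast.
# The class, function, and sample numbers are hypothetical; this is not the full
# game-theoretic model, which also involves bargaining, single-peaked utilities,
# and Bayesian updating by the players.
from dataclasses import dataclass
from typing import List


@dataclass
class Player:
    name: str
    position: float     # current bargaining position on the issue continuum (e.g., 0-100)
    salience: float     # how much the player cares about this issue (0-1)
    clout: float        # potential influence the player can bring to bear
    flexibility: float  # willingness to shift position (0 = rigid, 1 = fully flexible)


def weighted_median_forecast(players: List[Player]) -> float:
    """Return the position at which cumulative effective power (clout * salience)
    first reaches half of the total -- a simple baseline for single-peaked
    preferences on a continuum, used here purely as an illustration."""
    ordered = sorted(players, key=lambda p: p.position)
    total = sum(p.clout * p.salience for p in ordered)
    running = 0.0
    for p in ordered:
        running += p.clout * p.salience
        if running >= total / 2:
            return p.position
    return ordered[-1].position


if __name__ == "__main__":
    question = [  # hypothetical inputs for a single forecasting question
        Player("A", position=20, salience=0.9, clout=0.5, flexibility=0.3),
        Player("B", position=55, salience=0.6, clout=0.8, flexibility=0.5),
        Player("C", position=80, salience=0.4, clout=0.7, flexibility=0.7),
    ]
    print(f"Baseline forecast position: {weighted_median_forecast(question):.1f}")
```

The weighted median appears here only because it is a simple, well-known starting point for single-peaked preferences on a continuum; a fuller model would let the players bargain and revise their positions from such a starting point rather than stop there.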

As for decomposing my model, the information to do so is pretty much all in print, so no problem there. The nature of game-theoretic reasoning, however, is that the parts are interactive rather than independent and additive, so I am not sure exactly what would be advanced by decomposition. But I am happy to leave that to impartial arbiters. Perhaps what Gardner and Tetlock have in mind is testing not only the predicted policy outcome on each issue but also the model’s predictions about the trajectory that leads to the equilibrium result: how player positions, influence, salience, and flexibility change over time, for instance. As long as they have the resources to do the evaluation, great!
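As a purely hypothetical illustration of what scoring such a trajectory prediction could involve, the sketch below compares predicted and observed player positions round by round and reports a mean absolute error per round. The rounds, players, and numbers are invented, and this is not any agreed evaluation protocol.

```python
# Hypothetical sketch of scoring a predicted trajectory against observed data:
# each round records every player's position, and the score is the mean absolute
# position error per round. All rounds, players, and numbers are invented.
from typing import Dict, List

Round = Dict[str, float]  # player name -> position in that round


def trajectory_error(predicted: List[Round], observed: List[Round]) -> List[float]:
    """Mean absolute position error for each round, over players present in both."""
    errors = []
    for pred, obs in zip(predicted, observed):
        shared = set(pred) & set(obs)
        errors.append(sum(abs(pred[p] - obs[p]) for p in shared) / len(shared))
    return errors


if __name__ == "__main__":
    predicted = [{"A": 20, "B": 55}, {"A": 30, "B": 50}, {"A": 42, "B": 46}]
    observed = [{"A": 20, "B": 55}, {"A": 28, "B": 52}, {"A": 45, "B": 45}]
    print([round(e, 2) for e in trajectory_error(predicted, observed)])
```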

Also from this issue

Lead Essay

  • Dan Gardner and Philip E. Tetlock review the not-too-promising record of expert predictions of political and social phenomena. The truth remains that for all our social science, the world manages to surprise us far more often than not. Rather than giving up or simply declaring in favor of populism, however, they suggest several ways to improve expert predictions, including greater attention to styles of thinking as well as a “forecasting tournament” in which different methodologies will compete against one another to gain empirical data about the process. Still, they concede that our ability to predict the future will probably always be sharply limited.

Response Essays

  • Robin Hanson argues that most people aren’t interested in the accuracy of predictions because predictions often aren’t about knowing the future. They are about affiliating with an ideology or signaling one’s authority. The outcomes of predictions have nothing to do with either, of course, especially in the present. He suggests that one way to make predictions more accurate might be to lift both the social stigma and legal prohibitions against gambling. Unlike mere predictions, wagers carry real consequences for those who make them. Which, Hanson argues, they should.

  • John H. Cochrane offers a limited defense of the hedgehogs: Economics is full of uncertainty because the agents within the system are aware of the theories and possible actions of the other agents. Trying to capture all of this produces a hopeless muddle. Instead, what are needed are explanations of principle and of the tendencies that arise, all other things being equal. This calls for a hedgehoggy worldview after all. “Especially around policy debates,” he argues, “keeping the simple picture and a few basic principles in mind is the only hope.”

  • We should not be surprised when experts fail to predict the future, says Bruce Bueno de Mesquita. Expertise doesn’t mean good judgment; rather, expertise is an accumulation of many facts about a subject. That we commonly prefer the pronouncements of experts suggests a bias in favor of “wisdom” and against the scientific method. He argues that statistically rigorous game theory can do better by examining the beliefs and objectives of major players in a given situation, and he welcomes forecasting tournaments as a means of refining the method.