I find myself overwhelmingly in agreement with Gardner and Tetlock. The IARPA undertaking may well be the right and most opportune forum for testing alternative theories, methods, and hunches about prediction. Just tell me how to participate and I am happy to do so. I have already provided Tetlock with dozens of forecasts made by my students over the past two or three years; these are ready for comparison with other predictions and analyses of the same events. What I don’t know is which questions IARPA wants forecasts on and whether those questions are structured in a way suited to my method. My method requires an issue continuum (it can handle dichotomous choices, but those tend to be uninteresting because they leave no room for compromise), along with a specification of the players interested in influencing the outcome and each player’s current bargaining position, salience, flexibility, and potential clout. Preferences are assumed to be single-peaked, and players are assumed to update and learn in accordance with Bayes’ Rule. Bring on the questions and let’s start testing.
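To make the required inputs concrete, here is a minimal sketch of the kind of data structure the method calls for and one very simple way such inputs can be aggregated: a clout-and-salience-weighted median on the issue continuum. This is only an illustrative simplification, not the actual game-theoretic model; all player names and numbers are hypothetical.

```python
# Illustrative sketch only: players on a one-dimensional issue
# continuum, each with a position, clout, and salience. The forecast
# rule here -- a power-weighted median -- is a common simplification
# of bargaining-model outputs, not the full model described above.
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    position: float   # stated position on a 0-100 issue continuum
    clout: float      # potential capability to influence the outcome
    salience: float   # how much the player cares about the issue (0-1)

def weighted_median(players):
    """Return the position at which half of the effective power
    (clout * salience) lies on either side."""
    ranked = sorted(players, key=lambda p: p.position)
    total = sum(p.clout * p.salience for p in ranked)
    running = 0.0
    for p in ranked:
        running += p.clout * p.salience
        if running >= total / 2:
            return p.position
    return ranked[-1].position

# Hypothetical players on a 0-100 policy question.
players = [
    Player("A", 20.0, clout=0.5, salience=0.9),
    Player("B", 50.0, clout=0.3, salience=0.6),
    Player("C", 80.0, clout=0.2, salience=0.4),
]
print(weighted_median(players))  # -> 20.0: A's effective power dominates
```

The point of the sketch is simply that every input the method requires — positions, clout, salience — must be elicited before any forecast can be produced, which is why the structure of IARPA's questions matters.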
As for decomposing my model, the information needed to do so is pretty much all in print, so no problem there, although the nature of game-theoretic reasoning is that the parts are interactive rather than independent and additive, so I am not sure exactly what decomposition would advance. But I am happy to leave that to impartial arbiters. Perhaps what Gardner and Tetlock have in mind is testing not only the predicted policy outcome on each issue but also the model’s predictions about the trajectory that leads to the equilibrium result: how player positions, influence, salience, and flexibility change over time, for instance. As long as they have the resources to do the evaluation, great!
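The trajectory idea can be illustrated with a toy iteration in which player positions shift round by round until they settle. The updating rule here (each player moving halfway toward a power-weighted mean) is purely illustrative and is not the model's actual bargaining dynamics; weights and starting positions are hypothetical.

```python
# Toy illustration only: a round-by-round trajectory of player
# positions drifting toward a power-weighted mean. This mimics the
# idea of evaluating the path to equilibrium, not the actual
# game-theoretic updating rule.

def step(positions, weights):
    """One bargaining round: each player moves halfway toward the
    current power-weighted mean position (an illustrative rule)."""
    mean = sum(p * w for p, w in zip(positions, weights)) / sum(weights)
    return [p + 0.5 * (mean - p) for p in positions]

positions = [20.0, 50.0, 80.0]   # hypothetical starting positions
weights = [0.45, 0.18, 0.08]     # clout * salience, hypothetical

trajectory = [positions]
for _ in range(5):
    positions = step(positions, weights)
    trajectory.append(positions)
# Positions converge toward the weighted mean; evaluating the model's
# trajectory would mean scoring each round's predicted positions, not
# just the final equilibrium.
```

Scoring such intermediate rounds against observed bargaining positions is exactly the kind of richer test the trajectory proposal would permit, given the resources to collect the round-by-round data.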