It is true, as Dan Gardner and Philip Tetlock point out, that economic forecasting isn’t very good, and that financial forecasting is next to useless. At least these fields are better off than political forecasting: economic and financial forecasters routinely use statistical models, compare judgmental and statistical forecasts with outcomes, and systematically improve. (I refer to real forecasters here, not the clowns on TV.) But many movements of the economy and financial markets remain far beyond anyone’s ability to foresee.
Unforecastability Is a Good Sign
It is also true, as they hint, that the reason for this is the inherent unforecastability of the system, not the incompetence of the forecasters. One should not conclude from “you didn’t forecast the crash” that “economists don’t know what they’re doing,” or “the economy is all screwed up and needs lots of regulating.”
In fact, many economic events should be unforecastable, and their unforecastability is a sign that the markets and our theories about them are working well.
This statement is clearest in the case of financial markets. If anyone could tell you with any sort of certainty that “the market will go up tomorrow,” you could use that information to buy today and make a fortune. So could everyone else. As we all tried to buy, the market would go up today, right to the point at which nobody could tell whether tomorrow’s value would be higher or lower.
An “efficient” market should be unpredictable. If markets went steadily up and delivered return without risk, then markets would not be working as they should.
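A toy simulation makes the point (purely illustrative, not a model of any real market): if today’s price already equals the expected value of tomorrow’s, then price changes carry only news, and yesterday’s move says nothing about today’s.

```python
# A minimal sketch of market efficiency, assuming zero interest rates and
# no risk premium so that price = expected future value. Fundamental value
# follows a random walk; the price change each day is just that day's news.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
news = rng.normal(0.0, 1.0, n)   # unforecastable information arriving each day
value = np.cumsum(news)          # fundamental value: a random walk
price = value                    # price bid up to the expected value of tomorrow's price
changes = np.diff(price)

# Serial correlation of price changes is ~0: the past does not predict the future.
autocorr = np.corrcoef(changes[:-1], changes[1:])[0, 1]
print(f"lag-1 autocorrelation of price changes: {autocorr:.3f}")
```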
Much the same happens throughout economics. Consumption should depend on “permanent” income, as Milton Friedman pointed out. That means today’s consumption should depend on consumers’ best guess of future prospects, just as a stock price is investors’ best guess of future returns. Changes in consumption, driven by changes in information, therefore should be just as unpredictable as stock prices.
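One stylized way to write that logic, in the spirit of Robert Hall’s 1978 formalization (a sketch, not the full model):

```latex
% Consumption equals the expectation of permanent income y^p, so its change
% reflects only the revision of that expectation, i.e., news, which is
% unforecastable by construction:
c_t = E_t\!\left[y^p\right]
\quad\Rightarrow\quad
c_{t+1} - c_t = \left(E_{t+1} - E_t\right)\!\left[y^p\right],
\qquad
E_t\!\left[c_{t+1} - c_t\right] = 0.
```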
Economics often predicts unpredictability even when markets are not working well. A bank run is an undesirable outcome, but the theory of bank runs says they should be unpredictable. If anyone knew the run would happen tomorrow, it would happen today.
Gardner and Tetlock cite complex systems and nonlinear dynamics, but even these mathematical structures have failed as forecasting tools for economic and financial systems. Complex and nonlinear dynamic systems are predictable in principle; they are just very sensitive to initial conditions. Tests for nonlinearities in the natural sciences found them popping up all over. Except in the stock market. The fact that we who study the system are part of the system, and that people can read our papers and forecasts and change their behavior as a result, means that we can be no smarter than the system we study. Indeed, this makes the social sciences uniquely unforecastable.
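To see “deterministic but sensitive to initial conditions” concretely, here is the standard textbook example, the logistic map; it is a mathematical illustration, not an economic model:

```python
# The logistic map is fully deterministic: given the exact starting point,
# every future value is "predictable." Yet two starts differing in the ninth
# decimal place diverge completely within a few dozen iterations.
def logistic_path(x, r=4.0, steps=50):
    path = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)
        path.append(x)
    return path

a = logistic_path(0.400000000)
b = logistic_path(0.400000001)  # perturb the initial condition by 1e-9
for t in (0, 10, 30, 50):
    print(f"t={t:2d}  a={a[t]:.6f}  b={b[t]:.6f}  gap={abs(a[t] - b[t]):.2e}")
```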
Some trends in economics are nonetheless predictable. When things get out of whack, you can tell they will converge. Unemployment of 9% won’t last forever (unless the government really screws things up); a huge debt-to-GDP ratio must be resolved by growth, default, or inflation. If you take a billion people, terrorize them back to the stone age, and then get out of the way a bit, their wealth and incomes will grow very fast for a while as they catch up (China). But even here, the slow movement of predictable long-run trends is swamped by shorter-run unpredictable variation.
Risk Management Rather than Forecast-and-Plan
The answer is to change the question and focus on risk management, as Gardner and Tetlock suggest. There is a set of events that could happen tomorrow: Chicago could have an earthquake, there could be a run on Greek debt, the Administration could decide “Heavens, Dodd–Frank and Obamacare were huge mistakes, let’s fix them” (okay, not the last one). Attached to each event is some probability that it will happen.
Now “forecasting,” as Gardner and Tetlock characterize it, is an attempt to figure out which event really will happen, whether the coin will land heads or tails, and then to make a plan based on that knowledge. It’s a fool’s game.
Once we recognize that uncertainty will always remain, risk management becomes much wiser than forecast-and-plan. Just the step of naming the events that could happen is useful. Then, for each event, say: “if this happens, let’s make sure we have a contingency plan in place so we’re not really screwed.” Suppose you’re counting on diesel generators to keep cooling water flowing through a reactor. What if someone forgets to fill the tank?
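A toy sketch of that habit of mind follows; the events, probabilities, and losses are invented purely for illustration.

```python
# Risk management as scenario-and-contingency thinking: list the events,
# attach rough probabilities, and ask what a contingency plan would save.
# All numbers here are made up for illustration.
scenarios = [
    # (event,                          probability, loss unprepared, loss with plan)
    ("earthquake hits headquarters",          0.01,           100.0,           20.0),
    ("run on sovereign debt we hold",         0.05,            50.0,           10.0),
    ("backup generators fail to start",       0.10,            10.0,            2.0),
]

for event, p, loss, planned_loss in scenarios:
    # Expected loss avoided is a crude priority score; a real decision would
    # also weight how painful each state of the world is (see the note below).
    saved = p * (loss - planned_loss)
    print(f"{event:35s} expected loss avoided by planning: {saved:5.2f}")
```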
The good use of “forecasting” is to get a better handle on probabilities, so we focus our risk management resources on the most important events. But we must still pay attention to events, and buy insurance against them, based as much on the painfulness of the event as on its probability. (Note to economics techies: what matters is the risk-neutral probability, probability weighted by marginal utility.)
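In symbols, one standard rendering of that note:

```latex
% The risk-neutral probability of state i scales the true probability \pi_i
% by marginal utility u'(c_i), so painful states count for more than their
% raw odds suggest:
\pi_i^{*} \;=\; \frac{\pi_i \, u'(c_i)}{\sum_j \pi_j \, u'(c_j)} .
```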
So it’s not really the forecast that’s wrong; it’s what people do with it. If we all understood the essential unpredictability of the world, especially of rare and very costly events; if we got rid of the habit of mind that asks for a forecast and then makes “plans” as if that were the only state of the world that could occur; and if we instead focused on laying out all the bad things that could happen and made sure we had insurance or contingency plans, then both personal decisions and public policies might be a lot better.
Foxes and Hedgehogs
Gardner and Tetlock admire the “foxes,” who “used a wide assortment of analytical tools, sought out information from diverse sources, were comfortable with complexity and uncertainty, and were much less sure of themselves… they frequently shifted intellectual gears.” By contrast, “hedgehogs” “tended to use one analytical tool in many different domains, … preferred keeping their analysis simple and elegant by minimizing ‘distractions’ and zeroing in on only essential information.”
There is another very important kind of “forecast,” however, and here I think some “hedgehog” traits have an advantage.
Gardner and Tetlock have in mind what economists call “unconditional” forecasting: using historical correlations to guess what comes next, with no need of structural understanding. We often do this in economic forecasting, and rightly. For example, the slope of the yield curve gives a good signal of whether a recession is coming. But this does not mean that if the government changes that slope, it will change the chance of recession. Forcing the weather forecaster to lie will not produce a sunny weekend. Leading indicators, confidence surveys, and more formal regression-based statistical forecasts all operate this way.
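A stylized version of such a rule, with Φ the standard normal distribution function and α, β placeholders rather than estimated numbers:

```latex
% Probability of recession read off the current yield-curve slope,
% the ten-year yield minus the three-month yield. A negative \beta
% means a flatter or inverted curve raises the predicted odds.
\Pr\left(\text{recession within a year}\right)
  = \Phi\!\left(\alpha + \beta\,\bigl(y^{10\text{y}}_t - y^{3\text{m}}_t\bigr)\right),
\qquad \beta < 0 .
```

Nothing in the fitted α and β says what happens if policy moves the slope directly.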
But economics is really concerned with conditional forecasting: predicting the answers to questions such as “if we pass a trillion-dollar stimulus, how much more GDP will we get next year?” “If we raise taxes on ‘the rich,’ how much less will they work, and how much revenue will we actually raise?” “If the Fed monetizes $600 billion of long-term debt, how much will GDP increase, how much inflation will we get, and how soon?” “If you tell insurance companies they have to take everyone at the same price no matter how sick, how many people will sign up for insurance?”
Here we are trying to “predict” the effect of a policy, how much the future will change if a policy is enacted. Despite popular impression, the vast majority of economists spend the vast majority of their time on these sorts of questions, not on unconditional forecasts. Asking the average economist whether unemployment will go down next quarter is about as useless as asking a meteorological researcher who studies the physics of tornadoes whether it will rain over the weekend. He probably doesn’t even have a window in his office.
It was once hoped that really understanding the structure of the economy would also help in the sort of unconditional forecasting that Gardner and Tetlock are more interested in. Alas, that turned out not to be true. Big “structural” macroeconomic models predict no better than simple correlations. Even if you understand many structural linkages from policy to events, there are so many other unpredictable shocks that imposing “structure” just doesn’t help with unconditional forecasting.
But economics can be pretty good at this kind of conditional, structural forecasting. We really do know what happens if you impose minimum wages, taxes, tariffs, and so on. We have a lot of experience with regulatory capture. At least we know the signs and general effects; assigning numbers is a lot harder. But those are useful predictions, even if they typically dash youthful liberal hopes and dreams.
Doing good forecasting of this sort, however, rewards some very hedgehoggy traits.
Focusing on “one analytical tool” (basic supply and demand, a nose for free markets, unintended consequences, regulatory capture) is essential. People who use a wide range of analytical tools, mixing economic, political, sociological, psychological, Marxist-radical, and other perspectives, end up hopelessly muddled.
Keeping analysis “simple and elegant” and “minimizing distractions” is vital too, rather than being “comfortable with complexity and uncertainty,” or even being “much less sure of oneself.” Especially around policy debates, one is quickly drowned in mind-numbing detail. Keeping the simple picture and a few basic principles in mind is the only hope.
Gardner and Tetlock admire statistical modeling, but in conditional forecasting it is usually a smokescreen, serving only to hide the central stories about which we actually know something.
Milton Friedman was a hedgehog. And he got the big picture of cause and effect right in a way that the foxes around him completely missed. Take just one example: his 1968 American Economic Association presidential address, in which he said that continued inflation would not bring unemployment down, but would instead lead to stagflation. He used simple, compelling logic from one intellectual foundation. He ignored big computer models, statistical correlations, and all the muddle around him. And he was right.
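The core of that argument fits in one line (modern textbook notation, not Friedman’s own): unemployment falls below its natural rate only while inflation runs ahead of expectations, and expectations catch up.

```latex
% Expectations-augmented Phillips curve: u^* is the natural rate,
% \pi_t actual inflation, \pi^e_t expected inflation. Once \pi^e_t
% rises to meet \pi_t, the gap closes and only the inflation remains.
u_t \;=\; u^{*} \;-\; \gamma\left(\pi_t - \pi^{e}_t\right),
\qquad \gamma > 0 .
```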
In political forecasting, the record of predicting cause and effect is even worse. U.S. foreign policy is littered with failed cause-and-effect predictions: if we give them money, they’ll love us; if we invade, they will welcome us as liberators; if we pay both sides, they will work for peace rather than keep the war and the subsidies going forever.
But the few who get it right are hedgehogs. Ronald Reagan was a hedgehog, sticking to a few core principles that proved to be right.
Good hedgehogs are not know-it-alls. Friedman didn’t produce a quarterly inflation forecast, and he argued against all the “fine tuning” in which the Fed indulges to this day. Good hedgehogs stick to a few core principles because they know that nobody really knows detailed answers.
Principles matter. They produce wiser conditional forecasts. That’s a good thing for this forum, because otherwise the Cato Institute should disband!