About this Issue

Did the collapse of the financial sector cause the recession or did the recession cause the collapse of the financial sector? Was the supply of money too “easy” in the run-up to the collapse or was it too “tight”? Did the exhaustion of monetary policy tools necessitate a surge of government spending to prop up the economy during our near-depression? Or did the failure to use available monetary policy tools in part cause our near-depression? Should we be worried about getting hammered by high inflation, or should we worry inflation is not high enough?

If you think you know the answers to these questions, it might be time to think again. In this month’s edition of Cato Unbound, we’re exploring “The Monetary Lessons of the Not-So-Great Depression.” Leading off the discussion with a probing, provocative essay, Bentley University monetary economist Scott Sumner argues that just about everybody is getting it wrong. To tell us whether Sumner’s getting it right, we’ve lined up a diverse, top-notch panel of money specialists including James Hamilton of the University of California, San Diego, George Selgin of the University of Georgia, and Jeffrey Hummel of San Jose State University.

Lead Essay

The Real Problem was Nominal

A recent series of articles in The Economist argued that the current financial crisis has exposed important flaws in modern economic theory. I will make a slightly different argument. The sub-prime crisis that began in late 2007 was probably just a fluke, and has few important implications for either financial economics or macroeconomics.[1] The much more severe crisis that swept the entire world in late 2008 was a qualitatively different problem, which has been misdiagnosed by those on both the left and the right. Most economists simply assumed that a severe intensification of the financial crisis depressed spending throughout much of the world. In fact, the causation reversed in the second half of 2008, as falling nominal income began worsening the debt crisis.

Because central bankers misdiagnosed the problem, they were not able to come up with an effective policy response. It was as if a doctor prescribed medicine for a common cold to someone whose illness had progressed to pneumonia. And because economists were confused by the nature of the problem, it appeared as if modern macro offered no solutions. Thus policymakers turned in desperation to old-fashioned Keynesian fiscal stimulus, an idea that had been almost totally discredited by the 1980s.

We cannot hope to understand what happened late last year without first recognizing that the proximate cause of the crash was not a financial crisis, but rather a steep decline in nominal spending. Like any other fall in aggregate demand, this represented a failure of monetary policy. Severe demand-side recessions are almost never the result of special interest politics — the losses are too great and too widespread — but instead represent an intellectual failure by well-meaning public servants and the academic economists who advise them. To see how this happened I’ll trace out a brief history of monetary theory, and then show how the current crisis resulted not from a failure of modern macroeconomics, but rather a failure to take seriously some of the most promising recent developments in the field.

A Very Brief History of Monetary Economics

In the mid-1700s David Hume developed all of the key ideas necessary to understand the current crisis. He is most famous for his exposition of the quantity theory of money, which explains why autonomous changes in the money supply lead to proportional changes in the price level. He also noticed that in the period before wages and prices have had time to fully adjust, a change in the money supply caused output to move in the same direction. Hume even understood that a change in the velocity of money had an identical effect to a change in the money supply:

“If the coin be locked up in chests, it is the same thing with regard to prices, as if it were annihilated.” David Hume — Of Money

If we think of Hume’s theory in terms of the famous equation of exchange (M*V = P*Y), we can see that he had a fairly sophisticated understanding of all four variables. What we would call an “aggregate demand shock” was triggered by a change in the money supply, a change in the velocity of circulation, or both. In the long run the effects of diminished nominal spending show up in the form of lower prices, but in the short run output would also fall. And this is exactly what happened in late 2008. The level of nominal gross domestic product (NGDP) began falling rapidly. No one should be surprised that real output also fell sharply. The only question is why NGDP fell.
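Hume’s mechanism can be written compactly. Taking growth rates of the equation of exchange turns the identity into a statement about nominal spending (this is a standard textbook identity, not something original to this essay):

```latex
M V = P Y
\;\Longrightarrow\;
\underbrace{\%\Delta M + \%\Delta V}_{\text{NGDP growth}}
\;\approx\;
\underbrace{\%\Delta P}_{\text{inflation}} \;+\; \underbrace{\%\Delta Y}_{\text{real growth}}
```

A fall on the left side, whether in M (Hume’s coin “locked up in chests”) or in V, must show up on the right as lower inflation, lower real growth, or, in the short run, both.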

You might wonder whether I was being intentionally provocative with my assertion that Hume had the answer to all our problems. Maybe a little bit, but Milton Friedman made a similar observation in 1975 regarding the Phillips Curve:

As I see it, we have advanced beyond Hume in two respects only: first, we now have a more secure grasp on the quantitative magnitudes involved; second, we have gone one derivative beyond Hume. [2]

The importance of “quantitative magnitudes” is obvious, but what about the phrase “one derivative beyond Hume”? Friedman simply meant that Hume thought of nominal shocks in terms of one-time changes in the price level, or NGDP, whereas Friedman suggested that what really mattered were changes in the growth rate of prices. Today we think in terms of unanticipated changes, though in practice we haven’t advanced far beyond Friedman’s focus on changes in the rate of growth.

Between the early 1990s and 2007, NGDP grew at just over five percent per year. Because the real GDP growth rate averaged nearly three percent, we ended up with a bit more than two percent inflation, which was widely believed to be the Fed’s implicit target. Beginning around August 2008, however, NGDP slowed sharply, and then fell at a rate of more than four percent over the following several quarters. Indeed the decline in NGDP during 2009 is likely to be the steepest since 1938. This produced what may end up being the deepest and most prolonged recession since 1938.

Many conservative economists favor lowering the rate of NGDP growth to three percent per year or even less. This may or may not be a good idea as a long-run goal, but as of early 2008 the U.S. economy featured many wage and debt contracts negotiated under the expectation that NGDP would keep growing at about five percent per year. Because nominal GDP is essentially total national gross income, if it falls sharply it becomes much harder for debtors to repay loans, and much harder for companies to pay wages and salaries. The almost inevitable consequence is that unemployment rises sharply, and debt default rates soar.

At this point the reader may be a bit exasperated, as I seem to be ignoring two very obvious problems:

  1. The “real problem” was obviously the financial crisis, and NGDP fell as a consequence.
  2. Monetary policy was obviously highly expansionary, and so one can hardly invoke Hume’s “tight money” explanation for this crisis.

But is there any reason to accept these two “obvious” assumptions? I will show that neither assumption is well-supported, and that this misdiagnosis explains how the world stumbled into a deep slump. The real problem was not a “real” problem at all. It was a nominal problem, and the severe intensification of the debt crisis was a symptom of an ordinary Humean nominal shock. Furthermore, monetary policy was not “easy” but rather was highly contractionary in the only sense that matters, that is, relative to the stance expected to hit the Fed’s implicit nominal targets.

My brief history of macro from Hume to Friedman left out one important strain of monetary economics: the interest-rate approach developed by Knut Wicksell and taken to an extreme by John Maynard Keynes. Wicksell argued that central banks should adjust the interest rate on short-term loans as needed to stabilize the price level. The rate that maintained a stable price level was called the “natural rate” and could vary with the business cycle. Keynes’ most important contribution to this theory was to argue that in a depressed economy with falling prices the natural rate might become negative. Because central banks cannot reduce nominal interest rates below zero, this would seem to make monetary stimulus relatively ineffective in a deep depression. He called this scenario a “liquidity trap.”

Today we know that there are important flaws in the interest-rate approach to monetary policy, and two competing approaches suggest ways of escaping from a liquidity trap. The monetarists recommend “quantitative easing,” or injecting more cash into the economy than the public wishes to hold. The only way to get rid of these excess (real) cash balances is to spend them on goods, services, and assets, thus driving aggregate demand higher.

In addition to the interest-rate and quantity-of-money approaches, there is also a third “price of money” approach to monetary policy. According to this view it is always possible to produce inflation by lowering the price of money either in terms of a commodity like gold or in terms of foreign exchange. This was the approach used by FDR in 1933, when conventional monetary tools seemed ineffective. Although FDR’s dollar depreciation policy was highly effective in boosting NGDP, a promising recovery in real GDP was aborted in late 1933 by the ill-advised National Industrial Recovery Act, which sharply raised wage rates. Nevertheless, the price-of-money approach is so effective that Lars Svensson has called it a “foolproof” escape from a liquidity trap.

Much of recent macro theory has focused on showing how and why monetary policy can be highly effective in a liquidity trap. Thus I was quite surprised to observe the general sense of powerlessness that seemed to grip the world’s central bankers as the crisis intensified last fall. In early October 2008, the world economy was in free fall, with forecasts of falling prices and output going well into 2009. And yet there was a general sense that monetary policy could do nothing to arrest this collapse, despite the fact that the Fed’s target rate was still 200 basis points above zero, and the ECB’s target rate was 425 basis points above zero. By the time the Fed cut rates close to zero in December 2008, almost all of the attention was focused on fiscal stimulus. How did this happen? Why did policymakers ignore what we teach our students in best-selling money and banking textbooks?

In The Economics of Money, Banking and Financial Markets, Frederic Mishkin says: “Monetary policy can be highly effective in reviving a weak economy even if short-term interest rates are already near zero.” [3]

In the next two sections I will trace out the series of errors that led to this policy failure.

Misdiagnosing the Stance of Monetary Policy

Twentieth-century macroeconomics reached its nadir in 1938, when zero interest rates and falling prices led many economists to dismiss the importance of monetary policy. Keynes had argued that monetary policy could only impact demand by changing interest rates, and Joan Robinson drew the logical implication that easy money couldn’t possibly have caused the German hyperinflation, as interest rates were not particularly low. Today we tend to sneer at that sort of crude Keynesianism, but when I argued with my fellow economists that money was actually very tight last fall, the most common retort was “how can that be, interest rates have been cut close to zero?” Some might argue that at least the real interest rate is a good indicator of the stance of monetary policy. But this is also false, as tight money can easily depress real rates in a forward-looking model. And, even if it were true, real rates rose very sharply in the late summer and fall of 2008, so one can hardly use that as an excuse for the profession’s failure to recognize tight money.

The second most common rationale for believing money was “easy” was to point to the huge expansion of the monetary base that began in the fall of 2008. But the monetary base is not much more reliable than interest rates. Friedman and Schwartz showed that money was very tight during the early 1930s, and yet the monetary base rose sharply during that period. It is true that the increase in the base was even more rapid in this recession, but that simply reflects the fact that beginning in October 2008 the Fed began paying banks to hoard excess reserves. According to James Hamilton, the Fed did this to prevent its large injections of liquidity during the banking crisis from creating high inflation. This is probably true, but it raises the question of why so many economists thought monetary policy had become ineffective. The Fed certainly had it within its power to boost NGDP last fall. Indeed, the Fed’s official excuse for paying interest on reserves was called a “confession of contractionary intent” by Robert Hall and Susan Woodward.

When I point out that neither interest rates nor the base tell us anything about the stance of monetary policy, the final fallback position is often that the broader aggregates also increased during the past year. But these broader aggregates were widely viewed as being discredited during the 1980s, when they sent out false alarms about high inflation. In fact, that episode did much to discredit monetarism, or at least the more dogmatic form that advocated targeting the money supply. Late in his life even Milton Friedman suggested that it might be better for the Fed to target inflation forecasts. If they had done so last September, I believe the current unemployment rate would be about five percent. Instead, policymakers turned to fiscal policy.

In a recent paper criticizing fiscal stimulus John Cochrane made this impassioned plea:

Some economists tell me, “Yes, all our models, data, and analysis and experience for the last 40 years say fiscal stimulus doesn’t work, but don’t you really believe it anyway?” This is an astonishing attitude. How can a scientist “believe” something different than what he or she spends a career writing and teaching?

I had the same sense of frustration regarding monetary policy. Here is what Mishkin’s best-selling text says about indicators of monetary policy:

It is dangerous always to associate the easing or the tightening of monetary policy with a fall or a rise in short-term nominal interest rates.

Mishkin then argues that:

Other asset prices besides those on short-term debt instruments contain important information about the stance of monetary policy because they are important elements in various monetary policy transmission mechanisms.

I looked at seven key economic indicators during the summer and fall of 2008, and between July and November each one signaled that monetary policy was highly contractionary:

  1. Real interest rates soared much higher.
  2. Inflation expectations fell sharply, and by October were negative.
  3. Stock markets crashed.
  4. Commodity prices fell precipitously.
  5. Beginning in August, industrial production plunged.
  6. The dollar soared in value against the euro.
  7. In the spring and early summer housing prices had briefly stabilized, but in August a renewed decline set in. This time there were also sharp declines in markets (mostly in the middle of the country) that had avoided the sub-prime excesses. At this point the housing downturn also spread to Canada.

By early October it was obvious that monetary policy had completely lost credibility. This means that the market expectation for inflation and NGDP growth over the following 12 months had fallen far below any plausible estimate of the Fed’s implicit target. At the time few noticed this problem, as economists focused all their attention on what to do about banking.

Even when the fall in aggregate demand was noticed, its implications were not understood. Many economists associate a loss of monetary credibility with high inflation, not excessively low inflation. This bias was mirrored in the media, which frequently made the bizarre announcement that inflation was “no longer a problem,” when in fact millions were losing their jobs precisely because inflation was far too low.

A third problem was that many economists assumed that the financial crisis was causing the decline in aggregate demand, whereas the reverse was more nearly true. This should come as no surprise; monetary policy almost never appears to be the cause of deflation to those living in the same time and place. And how could it be otherwise? Deflationary policies cause massive losses to an economy, and would never be intentionally undertaken except when absolutely necessary to maintain an external currency peg. In both 1930s America and 1990s Japan, local observers failed to see the monetary origins of their deflation, and instead blamed it on problems with the financial system. Only with the perspective of time and distance were economists able to look dispassionately at the falling price-level data and ask why monetary policy was not more expansionary.

A fourth problem was that many economists focused on inflation, whereas NGDP growth is a far more revealing indicator of deflationary policies. It is true that headline inflation rates did turn negative toward the end of 2008, but this could be brushed off as merely reflecting a sharp fall in energy prices. After all, the Fed had (correctly) discounted the high inflation rate of mid-2008, noting that the core rate remained relatively low. So why shouldn’t they take comfort from the fact that in late 2008 and early 2009 the core rate never turned negative? There are several reasons.

First, because many wages and prices are very sticky, a deflationary monetary policy may affect output before it has much impact on inflation. But the more important reason is that the core rate almost certainly understated the decline in prices during the past 12 months. For instance, BLS data shows housing prices (which comprise nearly 40 percent of the core index) rising 2.1 percent in the 12 months ending June 2009, whereas it is obvious that the housing sector was experiencing severe deflation. Even if the government data correctly measure the “rental equivalent,” which they do not, those prices are not relevant for considering the macroeconomic impact of deflation. Construction workers lose their jobs when house prices fall; the fact that rents under existing 12-month leases respond much more slowly is of little importance. Thus economists ended up being reassured by economic data that was highly misleading.

In my view, the expected growth rate in NGDP is the best indicator of whether monetary policy is too loose or too tight. But even if we end up targeting inflation instead, it is essential that policymakers engage in “level targeting.” This does not necessarily mean keeping prices absolutely level — you could target a price-level path that rises at 2 percent per year. But it does require that policy have a “memory,” and make up for past under- or over-shooting of the target. The most devastating demand shocks are those that change the expected trajectory of NGDP and inflation many years out into the future. Oddly, these almost always occur around the September to November period, and in 1929, 1937, and 2008 we saw very similar stock and commodity market crashes in the autumn as investors realized NGDP growth was moving to a much lower trajectory for years to come. Only the 2008 crash was associated with a banking crisis, but the three inflection points had many similar characteristics.
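To make the “memory” point concrete, here is a minimal sketch (the numbers are illustrative, not the essay’s data) comparing the inflation a level-targeting central bank owes after an undershoot with what a memory-less inflation targeter would aim for:

```python
# Sketch (illustrative numbers only): a 2% price-level target path has
# "memory" -- after an undershoot, policy must make up the shortfall.

def required_inflation_level_target(p0, years_elapsed, actual_price, trend=0.02):
    """Inflation needed over the next year to return to the target path."""
    target_next_year = p0 * (1 + trend) ** (years_elapsed + 1)
    return target_next_year / actual_price - 1

# Start at price level 100; the target path rises 2% per year.
p0 = 100.0
# Suppose one year of 1.4% deflation instead of 2% inflation.
actual = p0 * (1 - 0.014)

# Level targeting: policy must deliver catch-up inflation.
catch_up = required_inflation_level_target(p0, 1, actual)

# Memory-less inflation targeting just aims for 2% from wherever prices are,
# letting the shortfall in the price level persist forever.
memory_less = 0.02

print(f"catch-up inflation under level targeting: {catch_up:.1%}")
print(f"inflation under memory-less targeting:    {memory_less:.1%}")
```

Under level targeting, the expectation of above-trend catch-up inflation is precisely what raises velocity and cushions the original deflationary shock.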

Richmond Fed economist Robert Hetzel showed that during the summer of 2008 all sorts of indicators were signaling that money was too tight well before the failure of Lehman in mid-September. Although there is no doubt that the intensification of the financial crisis further reduced demand, most observers lost sight of how falling demand worsened the banking sector. Even in normal times going from five percent NGDP growth to a nearly five percent rate of decline would place a severe burden on banks. But these weren’t ordinary times. This massive deflationary shock was superimposed on a banking system that was already reeling from the earlier sub-prime crisis. The results were predictable. All sorts of commercial and industrial loans that were unrelated to the sub-prime mess, and which would have been repaid with healthy NGDP growth, became much more questionable. As asset prices declined sharply from the deflationary monetary policies, bank balance sheets deteriorated. Policymakers misdiagnosed the problem as financial, and thought it could be “fixed” by injecting more money into the banking system. But with NGDP falling rapidly, each bailout never seemed to be enough. The policies were about as effective as bailing water out of a boat without first plugging the leak in the hull.

Targeting the Forecast

Lars Svensson has advocated a policy of targeting the forecast — setting the central bank’s policy instrument at the level most likely to hit its policy goal. Thus, if a central bank had a goal of two-percent inflation, it should set the fed funds rate at a level where its own forecasters were forecasting two-percent inflation. Once one starts to think of monetary policy this way, any other policy seems unacceptable. After all, why would any central bank ever want to adopt a policy stance that was expected to fail? Ben Bernanke also seemed to find the logic of Svensson’s idea to be quite appealing, and hinted that the Fed also saw things this way, at least with regard to its longer-term forecasts:

The [macroeconomic] projections also function as a plan for policy — albeit as a rough and highly provisional one. As I mentioned earlier, FOMC participants will continue to base their projections on the assumption of ‘appropriate’ monetary policy. Consequently, the extended projections will provide a sense of the economic trajectory that Committee participants see as best fulfilling the Federal Reserve’s dual mandate, given the initial conditions and the constraints imposed by the structure of the economy. [Italics added.]

As we all know by now, things didn’t turn out this way. Beginning in the fall of 2008, it became apparent that the Fed’s forecast was well below any plausible target. (They have no official inflation target, but it is assumed that most FOMC members favor a rate of roughly two percent.) Any doubts about whether policy had lost credibility were erased when Bernanke asked for fiscal stimulus. In the standard new Keynesian model, where the central bank targets the inflation rate, there is no role for fiscal policy in stabilizing nominal spending. His move was a tacit admission of failure.

It is difficult to understand how this happened, but three conceptual errors may have played a role.

First, monetary policymakers may have assumed that they were “running out of ammunition” as rates approached zero. But on closer examination this cannot have been the whole story, as the target rate was still two percent in early October, and at that time the Fed adopted a policy of paying interest on reserves to prevent market rates from falling below their target.

A second problem was that policy was too backward-looking. Hetzel argued that the Fed was frightened by the high inflation rates in the “headline CPI” during mid-2008, and thus was reluctant to ease aggressively. In their meeting of September 16, 2008, after both the failure of Lehman and a severe stock market decline, the Fed stated that the risks of recession and inflation were roughly balanced. Keep in mind that the U.S. had already been in a mild recession for the first seven months of 2008, and the recession intensified greatly in August and September. Furthermore, the market forecast of inflation over the next five years had plunged to an annual rate of only 1.23 percent just before the meeting. Both the inflation and growth outlook called for aggressive easing. There can be no more perfect example of the problem with backward-looking policy that ignores market forecasts.

The third problem resulted from misreading the lessons of monetarism. As I noted earlier, monetarism was pretty much a spent force after the 1980s. Nevertheless, many monetarist insights became a part of the consensus new Keynesian model. Some of these were very valuable, ideas such as the importance of expectations, and also that monetary policy was more effective than fiscal policy. But one idea turned out to cause great mischief. This was the belief that any large increase in the money supply must inevitably be followed by high inflation. This idea still holds sway despite the fact that prices have trended downward in Japan since 1994, despite huge monetary injections. Part of the problem is a widespread belief in mysterious “long and variable lags” in the impact of monetary policy. Although monetarists are generally associated with the view that markets are efficient, they ignored the fact that any increase in the money supply expected to be inflationary should immediately show up in the financial markets, particularly in the so-called TIPS spread, which is the difference in yields between indexed and conventional bonds.
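The TIPS spread calculation itself is trivial, which is part of the point: the market’s inflation forecast is observable in real time. A minimal sketch, with hypothetical yields chosen purely for illustration:

```python
# Sketch: the TIPS spread ("breakeven inflation") is the yield on a
# conventional Treasury minus the yield on an inflation-indexed (TIPS)
# bond of the same maturity. The yields below are hypothetical.

def breakeven_inflation(nominal_yield, tips_yield):
    """Market-implied average expected inflation over the common maturity."""
    return nominal_yield - tips_yield

# Hypothetical 5-year yields:
nominal_5y = 0.028  # 2.8% conventional Treasury
tips_5y = 0.026     # 2.6% inflation-indexed Treasury

spread = breakeven_inflation(nominal_5y, tips_5y)
print(f"5-year breakeven inflation: {spread:.2%}")
```

A breakeven far below the central bank’s (implicit) target is a real-time signal that the market expects policy to undershoot, regardless of what the monetary base is doing.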

My own research on the Great Depression convinces me that most monetarists (and Keynesians as well) underestimate the difficulty of identifying monetary shocks. Modern macro theory suggests that what really matters is not a change in today’s money supply, but rather a change in the expected future path of money. Fed actions that do not have this effect, such as the injections of liquidity into the banking system in late 2008, do not raise inflation expectations. Actions that do credibly raise the future expected path of the money supply, such as the 1933 devaluation of the dollar, have an immediate effect on all sorts of asset prices, especially stocks and commodities.

A sharp devaluation of the dollar would not have been appropriate in the second half of 2008, as much of the world faced the same problems. But we did need some type of credible policy of price-level or NGDP targeting. I have advocated a policy where the Fed pegs the price of a 12-month forward NGDP futures contract and lets purchases and sales of that contract lead to parallel open market operations. In essence, this would mean letting the market determine the monetary base and the level of interest rates expected to lead to five-percent NGDP growth. When I first proposed this idea in the 1980s, I envisioned the advantage in terms of traders observing local demand shocks before the central bank. The logic behind this idea is often called “the wisdom of the crowds.” But I no longer see this as its primary advantage. Although last fall the market forecast turned out to be far more accurate than the Fed’s forecast, in general the Fed forecasts pretty well.

This crisis has dramatized two other advantages to futures targeting, each far more important than the “efficient markets” argument. One advantage is that the central bank would no longer have to choose a policy instrument. Their preferred instrument, the fed funds rate, proved entirely inadequate once nominal rates hit zero. Under futures targeting each trader could look at their favorite policy indicator, and use whatever structural model of the economy they preferred. A few years ago I published this idea under the title “Let a Thousand Models Bloom.” I am not an “Austrian” economist, but this proposal is very Austrian in spirit. (And my preferred policy target, NGDP, is also the nominal aggregate that Hayek thought was most informative.)

Only last fall did I realize that there was another, even more powerful advantage of futures targeting: credibility. The same people forecasting the effects of monetary policy would also be those setting monetary policy. Under the current regime, the Fed sets policy and the market forecasts the effects of policy. To consider why this is so important, consider the Fed’s current dilemma. They have already pumped a lot of money into the economy, but prices have fallen over the past year as base velocity plummeted. Certainly if they pumped trillions more into the money supply at some point expectations would turn around. But when this occurred, velocity might increase as well, and that same monetary base could suddenly become highly inflationary. This problem does not occur under a futures targeting regime. Rather, the market forecasts the money supply required to hit the Fed’s policy goal, under the assumption that they will hit that goal. Today we have no idea how much money is needed, because the current level of velocity reflects the (quite rational) assumption that policy will fail to boost NGDP at the desired rate.
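The dilemma described above is just the equation of exchange at work. A minimal sketch with made-up magnitudes (not actual Fed data) shows why a swollen base is harmless while velocity is depressed, yet becomes inflationary the moment velocity recovers:

```python
# Sketch (hypothetical magnitudes): nominal spending via M * V = NGDP.

base_before = 0.9       # monetary base, in trillions (illustrative)
velocity_before = 16.0  # NGDP divided by the base
ngdp_before = base_before * velocity_before

# The central bank doubles the base, but base velocity halves as
# banks hoard excess reserves:
base_after = base_before * 2
velocity_crisis = velocity_before / 2
ngdp_crisis = base_after * velocity_crisis  # unchanged: no inflation

# If expectations turn and velocity merely returns to its old level,
# the same doubled base now implies twice the nominal spending:
ngdp_rebound = base_after * velocity_before  # highly inflationary

print(ngdp_before, ngdp_crisis, ngdp_rebound)
```

Under futures targeting the market would forecast the base consistent with on-target NGDP, conditional on the target being hit, so this whiplash between the two velocity regimes never arises.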

Futures targeting will not happen in the near future. But this thought experiment provides insights into what sort of policy would have worked last fall. The most important thing that policymakers could do right now would be to set an explicit CPI or NGDP target path and commit to make up for any future under- or over-shooting. Perhaps the easiest way to see the value of this approach is to recall the stabilizing speculation that occurs under a credible currency peg. If a government commits to peg a currency within a band plus or minus one percent around a fixed rate, then speculators will tend to buy the currency if it falls toward the lower limit, and vice versa. Now consider a price-level target rising two percent each year. Robert Barro argued that if inflation undershot a price-level target, investors would expect higher future inflation. These expectations would immediately increase velocity and hence aggregate demand, thus mitigating the original deflationary shock. In contrast, under our current “memory-less” inflation-targeting regime just the opposite occurs. After the 1.4 percent deflation over the past year, investors are (quite rationally) expecting below-target inflation for years to come. The sort of severe downturns in aggregate demand that occurred in the fall of 1929, 1937, and 2008, when long-term expectations became unanchored, cannot occur under a credible regime of price-level targeting.

Concluding Remarks

I have devoted my career to three research areas: the Great Depression, liquidity traps, and forward-looking monetary regimes that utilize market expectations. The insights derived from this research gave me a unique and at times quite frustrating perspective on the crisis as it was unfolding. As a result, I became something of a “monetary crank,” arguing that a lack of money was causing or worsening many of our most pressing problems. Most of the time monetary cranks are spouting nonsense; economic problems can rarely be solved by printing money. But occasionally they are right. One of those times was 1933. I believe that late 2008 was another.

It is especially important for free-market economists to never lose sight of the harm that can be done by deflationary monetary policies. Because falling NGDP is almost never blamed on monetary policy, the public will end up blaming the free-market system. And I have some sympathy for this error. Monetary policy is incredibly counterintuitive, with tight money often accompanied by low interest rates and a bloated monetary base. It is no surprise that the public failed to see the role played by the Fed in the Great Depression, and instead blamed laissez-faire economic policies. The same process occurred in Argentina a few years ago, with the same political result as in America. We may (correctly) argue that the Hoover administration’s policies were not really laissez-faire, but the public instinctively understood that Hoover’s deviations from laissez-faire were not big enough to cause national income to fall by half. And they were right.

Only when Friedman and Schwartz showed that the Depression was a failure of government monetary policy, not laissez-faire, was free market ideology able to regain real intellectual respectability. If I am right, we may have made essentially the same mistake in late 2008 as in the early 1930s. Fortunately, the downturn was much milder this time, but it was still very traumatic, particularly when added on to an economy already weakened by the sub-prime fiasco. I hope other free market economists will give serious consideration to the interpretation of this crisis discussed here and in an outstanding recent paper by Robert Hetzel. A forward-looking monetary policy aimed at low and stable inflation or NGDP growth is the best way to restore the prestige of free markets.


Scott Sumner is professor of economics at Bentley University.

Notes

[1] That is not to say there are no lessons for regulation. I would like to see the government retreat from its policy of encouraging homeownership through sponsored entities such as Fannie Mae and Freddie Mac.

[2] Milton Friedman, “25 Years After the Rediscovery of Money: What Have We Learned?: Discussion,” American Economic Review 65 (May, 1975), p. 177.

[3] Frederic Mishkin, The Economics of Money, Banking and Financial Markets (8th ed.), p. 607.

Response Essays

It’s Harder than It Looks

Let me begin with Professor Sumner’s opening assertion that “the sub-prime crisis that began in late 2007 was probably just a fluke, and has few important implications for either financial economics or macroeconomics.” I maintain instead that problem loans are indeed the principal cause of our present difficulties. Ashcraft and Schuermann (2008) analyzed a pool of about 4,000 mortgages originated in 2006 by the now-bankrupt New Century Financial Corporation. All the loans in the pool were subprime, meaning one would have significant concerns about the borrowers’ ability to repay on the basis of their credit history. Furthermore, almost all of the loans called for the monthly interest payments to increase between 25 percent and 45 percent within 2-1/2 years, even if there was no change in short-term interest rates. Yet somehow 79 percent (by dollar value) of the MBS tranches created from this pool were rated AAA by both Standard & Poor’s and Moody’s, and 95 percent were rated at least A. Something was going seriously, seriously wrong in the allocation of credit in 2006.

The only way that a significant fraction of these loans would be repaid would be if house prices continued to rise at the rates of the early 2000s. With rapidly rising house prices, the borrower has every incentive not to default, but instead should refinance and pocket the capital gain. As long as that continued, the low default rates and low correlations among default rates that went into these risk assessments might seem justified. But, when you look at the fundamentals, it was impossible for the house price appreciation to continue.

This pool of New Century loans appears representative of the $1.7 trillion in new subprime loans that were initiated in the United States between 2004 and 2006 (Ashcraft and Schuermann, 2008). Over this period there was an additional trillion dollars in alt-A loans, for which inadequate documentation of borrowers’ incomes or high loan-to-value ratios would also lead one to anticipate significant problems with repayment in an environment of falling house prices. These years also saw $3.6 trillion in new mortgages purchased or “guaranteed” by government-sponsored enterprises such as Fannie Mae and Freddie Mac. These enterprises did not have anywhere near the capital to make such guarantees credible, and the ultimate perception of the safety of such loans by investors resulted from the assumption that the U.S. government itself would back up the loans if the GSEs could not.

This huge misdirection of capital into the U.S. housing market caused household mortgage debt to more than triple between 1995 and 2007. [1] House prices in the major U.S. metropolitan areas doubled between 2000 and 2005. [2] A downward correction in house prices and significant wave of defaults was inevitable. Credit-default swaps, complicated collateralized debt obligations constructed from underlying troubled mortgages, and off-balance sheet entities such as structured investment vehicles left many key financial institutions with leveraged exposure to this downturn. Fears of their failure crippled lending, sending economic activity into a nosedive in the fall of 2008. There is a serious indictment of policy to be made here, for which the Federal Reserve should share some of the blame. But I only attribute a small part of this failure to the excessively low interest rates of 2003-2005, and see the primary policy errors as a problem of inadequate regulatory supervision (Hamilton, 2007).

To be sure, these problems were greatly aggravated by the economic downturn itself. Rising unemployment rates and falling income exacerbated default rates. We can quite legitimately ask whether there were options open to the Federal Reserve in the fall of 2008 that might have mitigated the damage. I agree with Professor Sumner that, if the Federal Reserve had been able to achieve a five percent annual growth rate for nominal GDP over the last four quarters instead of the ‑2.5 percent actually achieved, the average American would have been better off. But I disagree on the mechanisms and tools that the Fed could have used to try to steer us toward such an outcome.

Figure: Growth of nominal GDP at an annual rate, 2000:Q1 to 2009:Q2.

Sumner appeals to the equation of exchange,

M x V = P x Y

where M is a measure of the money supply, V its velocity, and nominal GDP is written as the product of the overall price level (P) with real GDP (Y). Sumner reminds us of Hume’s notion that velocity V in part depends on the extent to which households decide to keep their coins locked in chests. If we thought of V in the above equation as determined by institutional details of how often people get paid or visit the grocery store, then we would be tempted to conclude that by choosing the appropriate value for the money supply M, the Fed could deliver a desired target for nominal GDP.

But one runs into an immediate practical problem in the bewildering variety of different magnitudes that might be thought to correspond to the money supply M, such as M1 (checkable deposits plus currency held by the public) or the monetary base (currency plus reserves, the latter being the electronic credits that private banks could use to turn into currency if they wished). Obviously two different M’s must imply two different V’s for the above equation, and we can’t think of both V’s as being entirely unaffected by actions taken by the Fed. For that matter, there are different concepts one could use for P x Y on the right-hand side of the above equation. Is it the dollar value of sales of all final goods and services (that is, nominal GDP, as my discussion above assumed), the dollar value of all transactions (as the earlier monetary theorists supposed and the notion of a dollar physically changing hands invites), the dollar value of consumption expenditures, or something else? It is clear that there is a long menu of different values we might refer to when we talk about the “velocity of money,” and at most one of these can actually determine the level of nominal GDP.

When you dive into such details, you are led to the conclusion that the above equation is not a theory of income determination, but instead is a definition of V. If we use a different measure of the money supply or a different measure of nominal transactions, then we must be talking about a different number V. What the equation really does is define a value of V for which the resulting expression is true by definition.
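The point that the equation defines V rather than determining nominal GDP can be made concrete with a few lines of arithmetic. All dollar figures below are invented round numbers for illustration, not actual Fed data.

```python
# Illustration of the point above: M x V = P x Y defines V rather than
# determining nominal GDP. All dollar figures are invented round numbers,
# not actual Fed data.

nominal_gdp = 14_000  # P x Y in billions of dollars (hypothetical)

money_measures = {
    "M1": 1_600,             # checkable deposits plus currency (hypothetical)
    "monetary base": 1_700,  # currency plus bank reserves (hypothetical)
}

# For any chosen M, the "velocity" consistent with the identity is just
# the residual V = (P x Y) / M, true by construction.
for name, m in money_measures.items():
    print(f"{name}: implied V = {nominal_gdp / m:.2f}")

# Two measures of M imply two different V's for the same nominal GDP, so
# the identity alone cannot tell us which V "determines" spending.
```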

As Milton Friedman himself was quite clear, it is ultimately an empirical question as to whether a given candidate V so defined simply changes passively in response to the Fed implementing a change in the favored measure of M. The diagram below plots the growth rate of M1 along with the negative of the growth rate of the velocity implied when M1 is our measure of the money supply and nominal GDP is the measure of transactions. The strong impression is that in quarters in which M1 grew a lot, the M1-velocity shrank by an offsetting amount, leaving the quarter-to-quarter correlation between M1 growth and nominal GDP growth quite weak.

Figure. Top panel: annual growth rate of M1, 1980:Q1 to 2009:Q2. Bottom panel: annual growth rate of the ratio of M1 to nominal GDP.

The same conclusion emerges if you prefer to use the monetary base as your measure of the money supply.

It’s necessary to spell out a mechanism other than the equation of exchange by which the Federal Reserve is asserted to have the power to achieve a particular target for nominal GDP. One mechanism that we all agree is relevant in normal times is the Fed’s control of the short-term interest rate. Lowering this will usually stimulate demand and eventually lead to faster nominal GDP growth. However, there are two problems with advocating this tool in the fall of 2008. First, most of us are persuaded that the stimulus from such a policy takes some time to affect the economy, so that, if implemented in October 2008, it is not a realistic vehicle for preventing a decline in 2008:Q4 nominal GDP.  Second, we quickly reached a point at which the fed funds rate fell to its nominal floor, an essentially zero percent interest rate, from which there is no more down to go.

Now, I am in agreement with Professor Sumner that this does not mean that monetary policy is completely ineffective in such a situation. I do not endorse the “liquidity trap” scenario, though my reasons are different from those given by Sumner. I believe that once the nominal T-bill rate falls to zero, not much is achieved by further open market purchases of T-bills by the Federal Reserve. However, there is still an opportunity for monetary stimulus in such a situation through purchase of assets other than T-bills, and I believe that this was something the Fed should have tried in the fall of 2008.

Unfortunately, until the beginning of 2009, the Federal Reserve was doing everything it could to prevent its actions from stimulating the economy in the usual fashion. It was viewing the slowly unfolding credit problems as primarily a crisis in lending, in which the Fed felt it needed to step in as lender of last resort on what ultimately proved to be a massive scale. [3] The Fed wanted to lend extensively, but did not want to see currency held by the public increase. For this reason, it sold off a significant portion of its holdings of T-bills through September 2008, in a paired set of actions, lending with one hand and selling T-bills with the other, that might be described as “sterilizing” the lending operations so as to prevent them from having an effect on the money supply. [4] When the Fed ran out of T-bills to sell, it asked the Treasury to create some more for the Fed to use just for sterilization. More importantly, in October the Fed began paying interest on reserves, in effect borrowing directly from banks, and creating an incentive for banks to hold the newly created deposits as a staggering accumulation of excess reserves, again preventing its actions from increasing the value of M1. Excess reserves amounted to $833 billion by August 2009, or more than the sum of all the currency issued by the Federal Reserve between its creation in 1913 and 2008.

I agree with Sumner that the Fed made an error in abandoning its traditional goal of monetary expansion. The sterilization efforts ultimately made it much harder for the Fed to do what I think they now understand would be desirable, namely, provide a more traditional stimulus to aggregate nominal GDP. In my opinion, the preferred policy in the fall of 2008 would have been to acknowledge more aggressively the losses financial institutions had absorbed on existing loans, impose those losses on stockholders, creditors, and taxpayers, and retain as the Fed’s first priority the stimulus of nominal GDP rather than trying to lend to everybody.

Let me mention one other real-world complication that has made it difficult for the Fed to achieve a faster growth of nominal GDP. Macroeconomists often like to think of inflation in terms of an aggregate price index, the variable P in the equation of exchange. But, particularly in the current environment, aggressive stimulus by the Fed is unlikely to show up as higher wages or the prices of most services, but instead would raise relative commodity prices and could in a worst case scenario precipitate a currency crisis, both of which would be highly destabilizing in their own right. I agree with Sumner that the Fed could and should have done more, but would caution that it is also possible for the Fed to try to do too much. I come back to the perspective with which I opened — given the earlier regulatory lapses, significant economic losses could not have been prevented by any monetary policy that could have been implemented in the fall of 2008.

Finally, I would like to comment on Sumner’s intriguing suggestion that we might be able to sidestep all of the complications as to how the Fed achieves a particular target for nominal GDP growth by having it fix the price of a futures contract settled on the basis of nominal GDP growth. Perhaps there are some more details of what Sumner has in mind that I am missing, but I don’t really understand how it could work. The essence of any Fed operation is an exchange of assets of equivalent value. Traditionally, the Fed uses the money it creates to purchase a T-bill, thereby increasing the quantity of money in circulation. However, a futures contract is not an asset, but instead is an agreement between two parties for which neither party initially compensates the other. The market value of that agreement at inception is, by definition, zero.  The Fed may participate in the market with an infinitely large position, say, and thereby cause the contract to price in five percent nominal GDP growth. But the initial act of taking one side or the other of such contracts will not create or destroy any money.

For example, you and I might agree today that if fourth-quarter GDP growth is above five percent, I pay you a sum based on the gap, and if it is below five percent, you pay me. Let’s say we make exactly that agreement, so that the futures price starts out precisely where the Fed wants it to be. If at expiry nominal GDP growth equals five percent, the value of the contract to either party is, again by definition, zero, because neither of us compensates the other. Hence, if the contracts when issued were priced for a five percent growth (as I understand they’re supposed to be, under this system), and the economy experienced that hoped-for 5 percent growth, there is no avenue whereby the money supply could increase or decrease over the period of time covered by the contract as a result of Federal Reserve participation in the market.
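The settlement arithmetic of this example can be sketched directly. The target, the scaling factor, and the contract size below are hypothetical illustrations, not details of any actual proposal.

```python
# A minimal sketch of the settlement arithmetic in the example above.
# The target, scaling, and contract size are hypothetical illustrations,
# not details of any actual proposal.

def settlement_payoff(actual_growth, target=0.05, dollars_per_point=1_000_000):
    """Payoff to the long side at expiry: positive if nominal GDP growth
    beats the target, negative if it falls short, zero exactly on target."""
    return (actual_growth - target) * 100 * dollars_per_point

# Growth exactly on target: the contract's value is zero and no money
# changes hands, regardless of how large a position the Fed took.
print(settlement_payoff(0.05))

# Growth above or below target: one side compensates the other, but only
# at expiry, after the period the contract was supposed to influence.
print(settlement_payoff(0.07))
print(settlement_payoff(0.03))
```

Note that the payoff is zero at inception by construction and zero again at expiry if growth hits the target, which is the crux of Hamilton's objection: no money is created or destroyed along that path.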

If, for sake of argument, we ignore the issues discussed above and think of V as institutionally fixed and view M x V = P x Y as determining nominal GDP, it would be necessary to have 5 percent growth of M to achieve 5 percent growth of nominal GDP. I do not see how the Fed “fixing” the price of a futures contract based on nominal GDP growth could achieve that objective. This is simply a manifestation of the broader point I’ve been making above. If the Fed does not have the economic ability to achieve a particular target for nominal GDP growth, then it is not feasible for it to fix the expiry value of a futures contract settled on the basis of nominal GDP growth.

In conclusion, Sumner is raising some important issues and I agree with him on some of the key points. But while I agree that monetary policy in the fall of 2008 could have been improved upon, doing so was much trickier than might appear.


James D. Hamilton is professor of economics at the University of California, San Diego

Notes

[1] Federal Reserve Board Flow of Funds, Table L.2.

[2] Case-Shiller/S&P Home Price Index for 20 metropolitan areas, http://www2.standardandpoors.com/spf/pdf/index/CSHomePrice_Release_0825….

[3] Keister and McAndrews (2009) provide an exposition of this view.

[4] See Hamilton (2009) for more details.

References

Ashcraft, Adam, and Til Schuermann. 2008. “Understanding the Securitization of Subprime Mortgage Credit,” working paper, Federal Reserve Bank of New York (http://www.newyorkfed.org/research/staff_reports/sr318.html).

Hamilton, James D. 2007. “Commentary: Housing and the Monetary Transmission Mechanism.” In Housing, Housing Finance, and Monetary Policy, Federal Reserve Bank of Kansas City, pp. 415-422.

Hamilton, James D. 2009. “Concerns about the Fed’s New Balance Sheet,” in The Road Ahead for the Fed, edited by John D. Ciorciari and John B. Taylor, Stanford: Hoover Institution Press.

Keister, Todd, and James McAndrews. 2009. “Why Are Banks Holding So Many Excess Reserves?” Federal Reserve Bank of New York Staff Papers (http://www.newyorkfed.org/research/staff_reports/sr380.pdf).

Between Fulsomeness and Pettifoggery: A Reply to Sumner

Scott Sumner’s general views on macroeconomics are so much in harmony with my own that, in commenting on the present essay, I’m hard pressed to steer clear of the Scylla of fulsomeness without being drawn into a Charybdis of pettifoggery.

The best I can do is to risk a little of each. So let me start by indicating the considerable extent of my agreement with Sumner. Like him, I believe that monetary policy should strive, not to achieve any particular values of interest rates, employment, or inflation, but simply to maintain a steady growth rate of overall nominal spending. Such a policy seems to me, after all, the most straightforward, practical counterpart of the textbook ideal of keeping an economy’s “aggregate demand schedule” from shifting, or at least from shifting in an unpredictable manner, so as to keep output at its long-run or “natural” value instead of causing it to fluctuate around it. Although the simplest textbook story would seem to warrant a policy aimed at preserving an absolutely constant value of aggregate spending, Sumner rightly argues that a constant growth rate of spending may also be consistent with keeping real output (as well as other real macroeconomic variables) at their “natural” levels. I believe that he is also correct in claiming that, if an economy is accustomed to some steady rate of growth of spending, a mere reduction in that rate can be depressing.

It follows from all this that I also share Sumner’s view that monetary policy became excessively tight at some point in 2008. Certainly it was too tight starting in November of that year, when nominal GDP growth turned negative. I believe it was also too tight in September, when the growth rate, though still positive, plunged from just over 2.5 percent to zero.

I am, on the other hand, less inclined than Sumner is to believe that monetary policy was excessively tight before September, for reasons I’ll come to shortly.

Finally, I share Sumner’s view that the extent to which monetary policy is either tight or loose cannot be established simply by observing either the level of short-run nominal interest rates or the growth rate of the monetary base or of some monetary aggregate. Short run interest rates depend just as much on the demand for as on the supply of credit; and any given growth rate of the monetary base (or of any monetary aggregate) may be either too slow or too fast depending on whether the real demand for the assets in question is growing more or less rapidly than the supply. Indeed, Sumner’s response to those who, relying on interest rate or money stock data alone, have insisted that monetary policy has been adequately (if not more than adequately) accommodative since September 2008 is very similar to my response to David Henderson and Jeff Hummel’s claim, based on similar data, that easy money played no part in inflating the housing bubble.

Now to disagree a little. First, what about the boom? Sumner’s silence is so conspicuous that one might imagine that he sees no role at all for overly-easy money in causing the subprime crisis. That stance may appear especially odd given Sumner’s evident desire to distinguish his views from those of a run-of-the-mill “monetary crank,” for he might readily do this by admitting — as genuine cranks fail to do — that central banks are just as capable of creating too much as too little money.

But for Sumner the way out isn’t quite so simple, because he has chosen to treat the average rate of NGDP growth “between the early 1990s and 2007” as his preferred policy target. Doing so makes it awkward, to say the least, for him to admit that rates only modestly above this target were associated with the pumping up of the housing bubble in the first place, and also with the preceding dot-com boom.

So my one substantial disagreement with Sumner is that I think five percent an unnecessarily and perhaps dangerously high figure — one that is less likely than lower rates to maximize welfare in the long run, yet more likely to perpetuate boom-bust cycles. Instead I favor a spending growth rate target closer to three or even two percent. Such a target would succeed in keeping the rate of factor price inflation at a modest level, while holding the rate of CPI inflation close to zero, and even allowing it to fall below zero with surges in productivity. (I also think “final sales of domestic product” a better measure of overall demand than nominal GDP, though this is a relatively minor point.) My reasons have to do in part with the sort of considerations Milton Friedman put forth in his famous (but too-often brushed-aside) essay on “The Optimum Quantity of Money,”[1] and also with the tendency, in the absence of complete indexation, of wages and other input prices to lag behind output prices when nominal incomes (and firm revenues) grow rapidly enough to raise equilibrium prices in both factor and output markets. This tendency can cause a temporary “profit inflation” that may in turn cause asset prices to be bid up. Eventually, though, input prices catch up, bringing both profits and asset prices back down. Surges in the rate of nominal income growth, even ones that seem relatively modest in percentage terms, are especially likely to fuel unsustainable booms.[2] The less rapidly equilibrium factor prices are made to rise, the less likely it is that input sellers will find themselves playing catch-up, and the lower the risk of money-driven cycles.
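The wage-lag mechanism just described can be illustrated with a toy simulation. It assumes output prices track nominal spending growth at once while wages track it with a one-period lag; all numbers below are hypothetical.

```python
# A toy simulation of the wage-lag "profit inflation" mechanism described
# above. It assumes output prices track nominal spending growth at once
# while wages track it with a one-period lag; all numbers are hypothetical.

spending_growth = [0.05, 0.05, 0.09, 0.09, 0.05, 0.05]  # surge in periods 2-3

price, wage = 1.0, 1.0
prev_g = 0.05  # wages start out keeping pace with the accustomed 5% trend
margins = []   # price/wage ratio, a crude proxy for the profit margin
for g in spending_growth:
    price *= 1 + g      # output prices respond immediately
    wage *= 1 + prev_g  # wages respond with a lag
    margins.append(price / wage)
    prev_g = g

# The margin rises during the surge and falls back once wages catch up:
# a temporary "profit inflation" that can feed an asset-price boom.
print([round(m, 3) for m in margins])
```

Under these assumptions the price/wage ratio is flat at the trend growth rate, jumps when spending growth accelerates, and reverts once wage growth catches up, which is the boom-bust pattern Selgin describes.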

The relevance of this perspective, regarding the dangers of a five percent spending growth target, to the last two boom-bust cycles may best be seen by inspecting the patterns of spending, price level, and wage growth over the course of the last 19 years or so:

Evidently, with a trend rate of growth of nominal GDP of around five percent, even relatively small deviations around the trend can be destabilizing in the presence of nominal rigidities. As Bill Niskanen has observed, during the Greenspan era, “[a]lthough the standard deviation of demand around this [5.4 percent] trend was only 1.3 percent, this variation had significant effects on asset prices and the real economy.” [3]

In arguing that five percent nominal GDP growth is excessive, I don’t mean to deny that there is some risk involved in moving to a lower rate, that is, to a rate as much as three percentage points below what has been customary. Nor do I wish to deny that it may be prudent to approach the preferred target gradually. But the costs involved in implementing such a target sooner rather than later are in my opinion likely to be small compared to those of even one more round of boom and bust.

Of course it may well be true, as Sumner suggests, that the gradual deceleration of income growth that took place between January and September 2008 was particularly ill-timed, in that it coincided with a collapse in asset values that was already placing a severe strain on the credit system. What argument could there have been, on the other hand, for failing to take advantage of the fait accompli of decelerated spending — and correspondingly tempered expectations — as of early September 2008, to inaugurate a less trouble-prone regime, instead of rushing to reestablish the status-quo ante?

The sudden move from decelerating growth to genuine collapse of spending in the latter part of September was, of course, another matter. Personally, I follow John Taylor in blaming that collapse not on the decision to allow Lehman Brothers to fail, but on the scare tactics Bernanke and Paulson used in their efforts to cow Congress into approving the Treasury’s massive bailout scheme.[4] Whether Bernanke’s doom-saying helped trigger the collapse or not, the Fed ought to have put a stop to it, and could have done so (in both my and Sumner’s opinion) by conventional means. Indeed, at least one of the Fed’s innovations at the time — its decision to begin paying interest on bank reserves in October — appears to have been perfectly counterproductive, in a manner all-too-reminiscent of the Fed’s foolish doubling of bank reserve requirements during 1936 and 1937.

To summarize and conclude: I largely agree with Scott’s arguments, and particularly with his claims (1) that tight money was the proximate cause of the post-September 2008 recession, and (2) that a policy of nominal income growth targeting might have prevented the recession. I disagree with him concerning how rapidly income should have been allowed to grow. The actual difference — a mere three percentage points or so — seems too small to rule out the possibility that our views are informed by the same fundamental beliefs, and that we may eventually come to an agreement.

 

Notes

[1] In The Optimum Quantity of Money and Other Essays (Chicago: Aldine, 1969), pp. 1-30.

[2] See my Less Than Zero: The Case for a Falling Price Level in a Growing Economy (London: Institute of Economic Affairs, 1997) and also David Beckworth, “Aggregate Supply-Driven Deflation and its Implications for Macroeconomic Stability,” The Cato Journal 28 (3) (Fall 2008): 363-84. Other recent papers that reach broadly similar conclusions using a New Keynesian macroeconomic framework are Niloufar Entekhabi, “Technical Change, Wage and Price Dispersion, and the Optimal Rate of Inflation,” University of Quebec, March 2008, and Stephanie Schmitt-Grohé and Martín Uribe, “Optimal Inflation Stabilization in a Medium-Scale Macroeconomic Model,” July 15, 2006.

[3] William A. Niskanen, “An Unconventional Perspective on the Greenspan Record.” The Cato Journal 26 (2) (Spring/Summer 2006): 333-5.

[4] John Taylor, Getting off Track (Stanford: Hoover Institution Press, 2008), pp. 25-30.

Explanation vs. Prescription

Like others, I was unfamiliar with the work of Scott Sumner before he started his blog, “The Money Illusion,” last February. But since then, his posts have proved so powerful, innovative, and challenging that he has commanded the attention of prominent macroeconomists from nearly all perspectives. Sumner’s distinctive analysis — what might be labeled “neo-monetarism” — strives to provide both an explanation for the financial crisis and a prescription for monetary policy.

Let me comment first on his causal explanation, with an excursion of my own into the development of economic thought. The business cycle remains the major unresolved problem in macroeconomics. None of the competing theories have yet achieved a consensus within the profession, in part because none are fully satisfactory. As a result, the business cycle has, over my lifetime, migrated from the front end of most macro texts to the back end, situated behind the topics of growth and inflation, about which economists know and agree more. So we must approach this problem with a measure of epistemic humility.

Outside of real business cycle theory and Austrian business cycle theory, all the alternatives, including Sumner’s, blame depressions and recessions on negative shocks to what economists call aggregate demand, the total level of spending. Orthodox monetarists attributed such shocks to declines in the rate of monetary growth, whereas traditional Keynesians blamed declining autonomous expenditures. Both of these sources are captured in the well known equation of exchange: MV = Py, in which MV (money times its velocity) is equivalent to aggregate demand, and Py represents nominal GDP, the product of the price level and real output. In other words, a fall in velocity (V) is equivalent to a Keynesian fall in autonomous expenditures, which can happen only if people in the aggregate are holding (or hoarding) more money. Although this basic truth is sometimes overlooked in the recent debates over fiscal policy, in which economists replay (often with far less theoretical sophistication, despite greater mathematical pizzazz) the forgotten Keynes versus the Classics controversies, a negative shock to aggregate demand must involve either (a) a decline in the money stock’s growth rate or (b) an increase in the demand for money.
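The two channels in that last sentence can be made explicit in growth-rate form. Taking logs of MV = Py and differencing (a standard approximation, not spelled out in the essay) gives

```latex
\Delta\ln M + \Delta\ln V \;\approx\; \Delta\ln P + \Delta\ln y
```

so a negative shock to nominal spending growth on the right-hand side must register on the left as slower money growth, falling velocity (that is, increased money holding), or some combination of the two.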

During the 1980s the behavior of velocity became more erratic than seemed consistent with monetarist predictions. Many macroeconomists turned toward a New Keynesian synthesis, in which shocks can arise from either M or V, and the goal of monetary policy is to offset them. The best way to do that, according to New Keynesians, is with some kind of interest-rate target, like the famous Taylor Rule, which allegedly adjusts for the impact of inflationary expectations on observed interest rates, so that the Fed can stabilize growth of MV and thus Py.

The recent recession actually raises two related questions: (1) what caused the initial downturn in late 2007; and (2) why a mild, garden-variety recession started to turn into a major financial panic in late 2008. The same two questions apply to the Great Depression, despite the fact that the current recession is so far nowhere near as severe. A recession that began in 1929 only turned into a Great Depression beginning in October 1930 with the most massive series of banking panics not just in U.S. history but also in world history. Milton Friedman and Anna Jacobson Schwartz’s seminal 1963 study, A Monetary History of the United States, 1867-1960, decisively confirmed for almost all macroeconomists the role of this severe banking crisis in bringing on what Friedrich Hayek called a “secondary deflation,” although economic historians still debate what triggered the banking panics and what caused the initial recession. Friedman and Schwartz held Fed-induced monetary tightness responsible for both; Keynesians continue to blame velocity shocks; the Austrians attribute the initial 1929 downturn to a malinvestment bubble brought on by monetary expansion during the 1920s; whereas New Classical economists along with other supply-siders sometimes point to such supply-side shocks as the Smoot-Hawley tariff.

Sumner attributes the mild recession that began in 2007 to the supply-side shock of subprime defaults. His real concern, however, is the subsequent financial panic. Although he identifies monetary policy that was too tight as the underlying cause of increasing distress, he defines “tight” and “loose” relative to velocity rather than relative to the money stock. What he is really saying is that for some unspecified reason the economy was hit with a negative velocity shock, and the Fed failed to respond promptly and strongly enough. I would like to see him address in greater detail the origins of this shock; attributing it to an expected future decline in nominal GDP doesn’t get us very far. Did these expectations result from the subprime crisis, from an unpredictable attack of Keynesian “animal spirits,” from the declining rates of monetary growth over the previous five years, or from something else? In fact, I believe he goes a bit too far when he suggests that it was the fall in aggregate demand that caused all the financial failures. Surely, once the process is underway, you can have both reinforcing each other, as clearly happened during the Great Depression.

Sumner’s focus on 2008 is thus consistent with a variety of stories about the earlier onset of recession, even the Austrian story that David Henderson and I critiqued in our Cato Briefing Paper and in our reply to critics, as well as our preferred story that brings in volatile international savings flows. Yet Sumner has convinced me that in light of the looming financial panic, whatever its source, Ben Bernanke’s response of targeted bailouts was too tight as well as misdirected. Beginning with the Fed’s creation of the Term Auction Facility in December 2007, nearly every dollar that Bernanke injected into financial institutions was sterilized with the withdrawal of dollars through the sale of Treasury securities. Not until September 17, 2008, did a panicked Fed finally set off a monetary explosion, doubling the base in less than four months.

Even then, as Sumner astutely emphasizes, Bernanke accompanied this inflationary step with the deliberately deflationary step of paying interest on bank reserves. Henderson observes in a recent post how this stands in marked contrast to what Alan Greenspan did when faced with a mere whiff of panic in anticipation of Y2K and after 9/11. In both instances, Greenspan flooded the system with liquidity and then, when any financial uneasiness calmed, rapidly pulled the money back out, a policy far more consistent with the implications of Friedman’s research. I thus am persuaded that financial failures under Bernanke would have been far less serious if the Fed had simply started expanding the base well before September, and had done so without any direct bailouts that exacerbated moral hazard.

I also wholeheartedly accept Sumner’s criticisms of the current obsession with interest rates as the indicator of monetary policy. I have repeatedly stated myself that interest rates, whether real or nominal, have never proved an adequate gauge of what central banks are doing: not during the Great Depression, when nominal rates were very low despite a collapsing money stock; not during the Great Inflation of the 1970s, when nominal rates were high despite an expanding money stock; not during Japan’s lost decade; and not under Greenspan or Bernanke. Moreover, Sumner is absolutely right that zero interest rates are no obstacle to an expansionary monetary policy. The Fed could easily increase the monetary base well beyond its current $1.7 trillion with traditional open market operations alone, up to the Treasury’s total outstanding debt of nearly $7 trillion, while avoiding any loans whatsoever to specific depositories, investment banks, or other financial institutions. Eventually some of those reserves would be converted into currency, which the public would start spending.

What does cripple monetary expansion is paying interest on reserves, something other major central banks were already doing before the Fed. The practice is not merely deflationary, other things equal. It essentially converts monetary policy into fiscal policy, since in effect the Fed is now doing the same thing as the Treasury, borrowing money on one side of the balance sheet, through interest earning reserve deposits, in order to spend or lend it on the other side. Symptomatic of this subtle transition in the Fed’s role was the fact that in the midst of its monetary explosion, the Fed’s total balance sheet exceeded the monetary base by half a trillion dollars. That difference represented money that the Treasury had borrowed from the public for the express purpose of lending it to the Fed, which in turn employed it for more loans, in this case, primarily foreign currency swaps. In short, interest-earning reserves have created a self-fulfilling Keynesian liquidity trap. And if one inspects some of the most advanced academic writing on monetary policy, one discovers that some central bankers now view centrally planning the economy’s interest rate as their primary function, with the ultimate ideal of separating that role entirely from anything happening to the money stock.
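The balance-sheet arithmetic behind this point can be made concrete with a stylized sketch. The figures below are rough, illustrative approximations of the magnitudes cited in the essay (a base of $1.7 trillion and a half-trillion-dollar gap), not precise Fed data:

```python
# Stylized Fed balance sheet during the 2008 expansion, in $ trillions.
# When the Treasury borrows from the public and parks the proceeds at the
# Fed, total Fed assets can exceed the monetary base by that amount.
# All figures are illustrative approximations, not actual data.

monetary_base = 1.7           # currency plus bank reserves (approximate)
treasury_deposits = 0.5       # Treasury borrowing lent on to the Fed (approximate)

total_balance_sheet = monetary_base + treasury_deposits
print(round(total_balance_sheet, 2))                    # 2.2
print(round(total_balance_sheet - monetary_base, 2))    # 0.5, the "half a trillion" gap
```

The point of the sketch is simply that the gap between total assets and the base measures borrowing-and-relending activity that is fiscal in character, since it changes the Fed's balance sheet without changing the stock of high-powered money.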

This brings us to the second issue of monetary prescriptions. Here I decisively depart from Sumner’s recommendations. I agree with him that the Taylor Rule or other forms of interest-rate targeting are inadequate. But his alternative, to somehow have central banks target expectations about nominal GDP growth, has its own defects. It does, admittedly, preclude a Fed tightening during negative supply shocks, when the price level should be allowed to rise to reflect increasing scarcity. Indeed, it was probably such shocks from climbing oil and commodity prices in 2007 that encouraged a Fed reaction to the subprime crisis that was too tight. But this advantage over straight inflation targeting is something Sumner’s Rule has in common with the Taylor Rule.

On the down side, Sumner’s Rule implicitly shares the current bias against any price deflation at all. Sudden, sharp deflation, which generates serious economic dislocations, should be distinguished from mild, secular deflation. The latter has historically been benign, and George Selgin has argued that it is actually optimal. Of course, Sumner could in theory set the target for the growth of nominal GDP expectations at zero or even at a negative rate. But the more critical defect of Sumner’s Rule is its blithe assumption that money, unlike any other good or service, requires not merely government provision but detailed, sophisticated, and flexible government management.

Which brings me full circle to my earlier caution about epistemic humility. No one yet knows or understands the full causes and cures for the business cycle, and any claim to the contrary is pure intellectual hubris. As we have already observed, Sumner himself is somewhat tentative about what brought on the initial downturn in 2007 and silent on how or why this evolved into a negative velocity shock. It may well be that Sumner’s Rule would outperform the alternatives tried so far, but in light of the Fed’s inept and often disastrous record before the Great Moderation of the two decades following the mid-1980s, that is not saying a lot. Would Sumner be willing to bet against any future research or financial innovations either discrediting his rule or making it obsolete? Moreover, even if Sumner’s Rule is the best we can ever expect from the State’s central bank, how likely is it that the rule will be adopted and consistently applied? Does anyone really believe that political pressures had absolutely no influence on Bernanke’s bailouts?

Given that the financial sector, from which business cycles apparently emanate, is one of the most heavily regulated within the U.S. today and, in fact, has never been fully deregulated, we should instead be looking at deregulation and privatization, rather than better fine-tuning. Sumner has, I believe, made a major contribution to the debate over the recent recession. Yet with all his close attention to the work of Milton Friedman and his sympathy for free markets, I am puzzled that he has said little (as far as I know) about Friedman’s conclusion that private currency issued under the Aldrich-Vreeland Act headed off a panic in 1914 and would have done a far better job than the Federal Reserve during the Great Depression. Nor am I aware of his addressing the arguments of Selgin and Lawrence H. White that free banking would spontaneously stabilize MV through the automatic operation of the clearing system. In the final analysis, only abolition of the Fed, elimination of government fiat money, and complete deregulation of banks and other financial institutions offer any long-term hope of bringing better macroeconomic stability.

The Conversation

Almost on the Money: Replies to Hamilton, Selgin, and Hummel

Reply to James Hamilton

I greatly appreciate that three distinguished economists took the time to write very extensive and thoughtful comments on my recent essay in Cato Unbound. Although there are some points on which we disagree, I’d like to start by clearing up a few misconceptions, in order to show that we aren’t all that far apart.

I agree with much of what Professor Hamilton says about regulation, and would defer to his greater expertise in that area. I happen to favor a requirement of a 20 percent down payment on mortgages, because of the moral hazard problem created by FDIC and “too big to fail.” I have always been strongly opposed to the government-sponsored Fannie Mae and Freddie Mac, regarding them as potential “time bombs.” But we knew that before the crisis. Here’s what I meant by the statement that the crisis was a “fluke.”

  1. Like the October 19, 1987 stock market crash, it is totally inexplicable and unlikely to be repeated. The “villains” were often very smart people, and often lost huge amounts of their own money. How can that be explained? Any attempts to install regulations specifically tailored toward this crisis are likely to be misguided; the next problem will be different.
  2. It has no implications for monetary policy. The only proper goal is targeting expected NGDP, even if we are in the midst of a bubble or a financial crisis.
  3. It did not cause the severe slump that began in August 2008; tight money did.

Hamilton suggests that the financial crisis caused the intensification of the recession in late 2008. But the term “cause” has a slippery meaning. The late 1930 American banking panic made the Great Depression much worse. So there is a sense in which it could be said to have “caused” a deep slump. But Friedman and Schwartz showed that aggressive monetary policy could have offset the impact on the money supply. I would add that in both late 1930, and late 2008, a more expansionary monetary policy would have meant a much smaller financial crisis. So in my view both slumps were caused by monetary policy errors.

I think Professor Hamilton misunderstood the intent of my reference to the equation of exchange. He writes:

When you dive into such details, you are led to the conclusion that the above equation [MV=PY] is not a theory of income determination, but instead is a definition of V. If we use a different measure of the money supply or a different measure of nominal transactions, then we must be talking about a different number V.

I entirely agree, and regret leaving the impression that I thought the equation of exchange had some theoretical implications, particularly monetarist implications. It is merely a definition of velocity, as Hamilton indicates. I had in mind a version using the monetary base, base velocity, real GDP, and the deflator. I mentioned this equation only because it was an easy shorthand way of describing Hume’s three key insights: More money means higher prices and real output in the short run. Only prices are affected in the long run (in proportion to the increase in M.) And changes in V have the same impact on NGDP as changes in M.
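That definitional point, and the shorthand role of the identity, can be illustrated with a small numerical sketch. All figures below are hypothetical round numbers chosen purely for arithmetic clarity:

```python
# Illustrative only: the equation of exchange M*V = P*Y, treated (as Hamilton
# says) as a definition of V for whatever money measure is chosen -- here the
# monetary base. All numbers are hypothetical.

def base_velocity(monetary_base, nominal_gdp):
    """V is defined as nominal GDP divided by the chosen money measure."""
    return nominal_gdp / monetary_base

M = 0.9      # monetary base, $ trillions (hypothetical)
P_Y = 14.4   # nominal GDP, $ trillions (hypothetical)
V = base_velocity(M, P_Y)   # 16, by construction

# Hume's third insight: a fall in V with M unchanged lowers NGDP one-for-one.
V_shocked = V * 0.95
print(round(M * V_shocked, 2))   # 13.68: a 5% fall in V cuts NGDP by 5%

# Offsetting the shock requires a proportional rise in M:
M_needed = P_Y / V_shocked
print(round(M_needed, 4))        # 0.9474
```

Nothing in the arithmetic is a theory; the substantive claim is only that the central bank can move M enough to offset movements in V.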

Hamilton then argued:

It is necessary to spell out a mechanism other than the equation of exchange by which the Federal Reserve is asserted to have the power to achieve a particular target for nominal GDP. One mechanism that we all agree is relevant in normal times is the Fed’s control of the short-term interest rate.

I agree that this is a widely held view, but I think it is wrong. In my view the so-called “liquidity effect,” or the response of short-term rates to changes in Fed policy, is a sort of epiphenomenon that hides more important causal forces. I see things this way: A contractionary monetary shock that is expected to be permanent will lower future expected NGDP (for monetarist reasons). This will dramatically lower current asset prices for two reasons. First, because the lower future price level tends to lower current prices. Second, and much more importantly, with sticky wages, a nominal shock will depress output, and the expectation of recession will lower the real value of stocks and commodities (and perhaps some real estate). This transmission mechanism was crystal clear in the massive monetary shocks of the interwar period, and recurred in late 2008. Even worse, this time it occurred during a financial crisis, and the falling asset prices dramatically worsened that crisis.

Professor Hamilton also argues that

… most of us are persuaded that the stimulus from such a policy takes some time to affect the economy, so that, if implemented in October 2008, it is not a realistic vehicle for preventing a decline in 2008:Q4 nominal GDP.

Indeed, this is the key issue that separates us.

I agree that most economists look at things that way, but I think they are wrong, at least if the actions were taken in September. (By October, some decline in Q4 NGDP was inevitable.) Most economists assume that interest rates or the money supply are good indicators of the stance of monetary policy. Because of this they mis-identify monetary shocks, and this leads to estimates of long and variable lags in the effect on NGDP. But this view is hard to reconcile with the fact that when shocks are easily identifiable, the lags seem very short. The most expansionary monetary shock in U.S. history, by far, was the 1933 dollar devaluation and the decision to leave the gold standard. During the first four months of this policy, the WPI rose by 14 percent and industrial production soared 57 percent, regaining half the ground lost in the previous 3 ½ years. And these stunning gains in nominal output occurred during one of the worst financial crises in American history, when much of the banking system was shut down for months. Other easily identified monetary shocks, such as the 17 percent decline in the U.S. monetary base between late 1920 and late 1921, had an immediate and severe impact on both prices and output.

Hamilton also questioned my proposal to use NGDP futures contracts to implement monetary policy. I first published this idea in 1989, but my most recent article from 2006 is more accessible. The idea is that each time there is a futures transaction, the Fed conducts a parallel open market operation with ordinary Treasury securities. Last October I would have expected NGDP growth to come in at under five percent. Thus I would have sold NGDP futures short to the Fed. Each time I did so the Fed would conduct a parallel open market purchase. The opposite would occur if investors expected the economy to overheat, and bought NGDP futures. This plan would essentially replace the 12-member Federal Open Market Committee with an NGDP futures market comprised of thousands of traders.
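The mechanics of the parallel open market operations can be sketched in a stylized way. The per-contract response size below is invented purely for illustration; the actual proposal leaves such parameters open:

```python
# Stylized sketch of NGDP futures targeting: each net futures trade against
# the central bank triggers a parallel open market operation of a fixed
# dollar size per contract. The size is invented for illustration.

OMO_PER_CONTRACT = 100_000_000  # $ of Treasuries per contract (hypothetical)

def parallel_omo(contracts_shorted, contracts_bought):
    """
    Traders short NGDP futures when they expect growth below target,
    prompting open market purchases (base expansion); traders going long
    prompt open market sales. Returns the net purchase in dollars.
    """
    net_short = contracts_shorted - contracts_bought
    return net_short * OMO_PER_CONTRACT

# An October 2008-style scenario: bearish traders are net short 50
# contracts, so the Fed makes a net $5 billion open market purchase.
print(parallel_omo(80, 30))   # 5000000000
```

The design intent, as the essay notes, is that the aggregate of these trader-triggered operations, rather than a committee vote, determines the stance of policy.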

Even without futures targeting, a Fed commitment to five-percent NGDP growth last year would have helped prevent a severe collapse in late 2008. Current changes in aggregate demand are strongly linked to expected future changes — something the Fed can control. If NGDP was expected to grow at a five percent rate from 2008:Q3 to 2009:Q4, asset prices and real output both would have done much better in the fourth quarter.

Although we disagree on some points, I’d like to add that before I began my blog I was strongly influenced by Hamilton’s critique of Fed policy on his blog, econbrowser.com. His insightful analysis of the Fed’s new procedures influenced my thinking.

Reply to George Selgin

I was surprised by how much George Selgin and I agreed upon. As with Hamilton, I will argue that we may even be a bit closer than he thinks. Selgin says:

I am, on the other hand, less inclined than Sumner is to believe that monetary policy was excessively tight before September.

I actually think this is a very defensible view. On my blog I have argued that the first key mistakes occurred in the Fed’s September 16 meeting. I now think that in retrospect a slightly easier policy would have been desirable in August as well. Selgin may have been referring to my preference for five percent NGDP growth, which we fell short of in late 2007 and the first half of 2008 when NGDP growth was closer to three percent. However I also favor “level targeting,” meaning a five percent trend line with attempts to make up for undershooting or overshooting. Because the housing bubble was associated with more than five percent NGDP growth, a bit slower growth was acceptable during the relapse from that bubble. So I actually agree with Selgin that September 2008 is a reasonable date for policy going seriously off track.

If I suggested otherwise, that was sloppy exposition on my part. It probably reflected my belief that the intensification of the crisis was partly caused by slow NGDP growth after July 2008. Another way of making this point is to consider targeting the forecast. By Lars Svensson’s criteria we were not off course until late September, when even the Fed must have realized that NGDP growth was likely to come in too low. On the other hand, TIPS spreads indicated that inflation expectations fell sharply in August and early September, and my guess is that NGDP growth expectations also declined at this time.

I was silent on the origins of the crisis only because of a lack of space and a desire to concentrate on the issue that most economists overlooked — the role of tight money in the severe crash of late 2008. I agree that monetary policy was too easy during the 2004-06 housing bubble (although easy money alone cannot explain the bubble, as we have often had even more inflationary policies without such an unusual movement in real housing prices). I agree with Selgin that my view doesn’t explain the size of the tech and housing bubbles; to be honest, I have no explanation. If I did have an explanation, then the factors causing the bubble should have been observable in real time, in which case the efficient market hypothesis (the view that it is hard to beat the market) would have been violated.

The most important issue raised by Selgin is the appropriate rate of NGDP growth. I strongly favored a five percent target in mid-2008, as wage and debt contracts already incorporated that expectation. Should we now gradually move to a lower rate, as Selgin suggests? Perhaps:

Pro:

  1. The “optimal quantity of money” argument suggests mild deflation is ideal.
  2. Less distortion from our taxes on capital, which are not indexed to inflation.

Con:

  1. Harder to adjust real wage rates if there is money illusion.
  2. More likely to encounter liquidity traps.

If we went to a forward-looking policy, I’d worry much less about liquidity traps, and I’d be much more supportive of a two or three percent target. As it is, I am ambivalent about the optimal rate in the long run, but I think it’s probably in the three to five percent range. The less competent Fed policy at the zero rate bound, the stronger the argument for a bit higher trend rate.

I don’t quite understand Selgin’s argument that a higher trend rate of inflation makes bubbles more likely. This seems to violate the super-neutrality of money, unless Selgin is also assuming that higher trend rates are generally associated with more inflation variance (which is defensible I suppose, but this is a problem that can be fixed with an explicit mandate from Congress for X percent NGDP growth expectations.) Other than that, however, I agree with most of what Selgin says about NGDP targeting, although I have a few doubts about mild deflation under the current policy regime.

Reply To Jeffrey Hummel

Professor Hummel did a fine job of summarizing how my views of the crisis are part of a long tradition of demand-side theories of business cycles, which includes both the Keynesian and monetarist camps. He then makes an interesting comparison of this recession to the Great Contraction. In both cases, the initial slump was greatly worsened by a financial crisis. But I think there are also a few differences that command attention.

In the 1929-32 slump, the economy was already deep in depression even before the first U.S. banking crisis (of December 1930). Indeed, the August 1929 to December 1930 slump was by itself worse than the entire current recession. Almost equally severe slumps occurred in 1920-21, 1937-38 and 1981-82, all without any significant financial crises. So there is ample precedent for nominal shocks (or declines in AD) triggering recessions as severe as the one we are now experiencing. In addition, even the early (very mild) stage of this recession was accompanied by financial distress, which is another difference from 1929-30. Indeed, I think the fact that banking problems triggered the onset of recession in December 2007 has led many economists to overestimate its role in the intensification of the recession that began in August 2008.

I would also like to emphasize that although Friedman and Schwartz pointed to the banking panics of the early 1930s as key factors worsening the contraction, they saw it working through demand-side channels: the banking crises reduced the multiplier and the monetary aggregates. I am not sure that Hummel disagrees with any of this, but I think it is helpful to always sharply distinguish between supply-side and demand-side causal factors. Hummel rightly notes that banking distress can be viewed as a supply shock, but in my view the most severe outcomes usually occur when it spills over into demand-side problems. However, I do think Hummel misunderstood one aspect of my argument. I do not think that expectations of falling NGDP explain all of the intensification of the financial crisis after September 2008, just most of it.

Hummel makes a very good point when he suggests that paying interest on reserves essentially converts monetary policy into fiscal policy. I hadn’t thought of it that way. I had argued that, even at the current reserve interest rate of 0.25 percent, reserves dominated T-bills in both liquidity and rate of return. So they are actually a sort of government bond. And there is a reason why zero-interest cash has always been treated separately as “high-powered money.” Bonds just don’t have the same expansionary impact. Indeed this is one point that I believe all four of us might agree on, but which much of the profession has overlooked. (Hamilton was one of the first to note the problems with interest on reserves.)

Hummel concludes with an argument that our centrally planned monetary regime will never produce satisfactory results, and that we should consider a laissez-faire monetary system, such as free banking. I have a few comments on this issue:

  1. I agree that in its first few decades the Fed made things worse, as it foreclosed some private mechanisms for dealing with banking crises such as occurred in 1914. I do think the Fed has gotten much better, and I (perhaps stubbornly) think that further improvements are likely.
  2. I am not sure about the White/Selgin argument that a regime of free banking would stabilize NGDP, but I also admit that I don’t have a clear idea as to how such a regime would be likely to perform.
  3. Even if the free banking argument is correct, and the Fed should be abolished, I would continue to advocate the ideas expressed in this essay. As I indicated, we are currently stuck with the system we have, and if it performs poorly (particularly in the direction of deflation), people will almost certainly blame the resulting problems on the failures of capitalism, not monetary policy. (As interest rates are low during deflation.)
  4. My proposal for using market forecasts of NGDP as policy targets goes some way toward making monetary policy less discretionary. For instance, this would take the Fed out of the business of setting interest rates. Let’s consider an even simpler futures targeting regime, one stabilizing CPI futures. How does this compare to the old gold standard? First, it replaces one commodity with a basket of goods. Second, it replaces the spot price with a futures price. But note the similarities; both regimes replace discretionary policy with a clear monetary rule. In both cases you would have the sort of policy envisioned in the Constitution, which gave Congress the power to set the value of money. Other researchers such as Bill Woolsey and David Glasner have discussed how this basic idea could be combined with a free banking system. So I hope free-market economists won’t let the perfect be the enemy of the good. A market-oriented futures-targeting approach could be a useful first step toward moving away from a bureaucratic, government-run monetary regime.

Again, I’d like to thank all three reviewers for taking the time to give serious consideration to my ideas.

I have one question: Does anyone have any thoughts on my proposal to charge a negative interest rate on excess reserves as a way of reducing the hoarding of base money? Sweden recently adopted this proposal.

There Are Limits to What Monetary Policy Can Accomplish

On the causes of the crisis, Sumner asserts that the villains lost large amounts of their own money.  Some did, but others made out very well indeed personally while losing vast sums of other people’s money.  That latter feature identifies some profound incentive problems that need to be corrected.  Getting into those in detail would distract us a bit from the interesting main thesis to which Sumner directs our attention, and I’m sure we’d find many areas of agreement as well as some disagreements if we pursued those questions in more depth.  But for this forum I will simply reiterate my conviction that these problems in financial markets were a key cause of our present problems, and are not well described as a failure of monetary policy as conventionally understood in the sense that they did not result primarily from wrong values chosen for interest rates, inflation, or the money supply in the events leading up to September 2008.

The core question to be discussed here is what the Fed might have done differently beginning in September 2008.  I am challenging the suggestion that the nominal growth rate of GDP for 2008:Q4 represents a magnitude that the Fed could have chosen to be whatever it wished in September 2008, had it only followed the right policy or operated within the right institutional environment.

I have studied the additional material on the GDP-futures targeting idea that Sumner suggested I look at, but confess I remain as perplexed as before about how exactly this is supposed to work.  My basic confusion may arise from the fact that I think of private participation in futures markets as determining an equilibrium price of the contract, whereas Sumner is evidently thinking that the quantity of such contracts is itself a relevant magnitude to which the Fed might make a quantitative response.  With equally informed risk-neutral speculators, there would be an infinite demand for one side of the contract at any price other than that which corresponds to the expected GDP growth rate, and zero demand at the equilibrium price.  One can write down more complicated models with heterogeneous beliefs, risk aversion, and liquidity constraints, that do have particular implications for the quantities of contracts held in equilibrium, but I am most doubtful that a tight argument can be made relating the volume of positions taken on one side to a specific targeted economic objective.

I reiterate that the main question is whether the Fed, by whatever mechanism, has the capacity to control the expiry value of the contract, that is, the capacity to control the value of nominal GDP.  Insofar as it does not, I do not see how any such scheme can work.  Let me try to make the point by taking the argument to its logical extreme.  Suppose we decided that we’re really unsatisfied with the orbit of Mars around the sun, and propose that the Fed needs to take responsibility for setting it right.  We then set up a futures contract and allow market participants to wager on where Mars will be one year hence, and propose that the Fed shall make open market operations on the basis of some scheme relating to the price or quantity of such wagers.  If we are clever enough at designing this scheme, will it deliver the path that we might prefer Mars to follow?

Granted, this is a facetious example, since we know that the Fed in reality has no control whatever over the orbit of Mars, but it seems to have something to do with the growth rate of nominal GDP. Granted too that expectations play a role in the influence the Fed has on nominal GDP. But it is equally clear to me that there are many things that happen to GDP that are not in the control of the Fed. After all, which number is it that the Fed is asserted to be able to control, the advance estimate, the revised estimate, or some Platonic truth that our best measures can only imperfectly reflect?

I grant that steps taken by the Fed in September 2008 might have affected the path of nominal GDP for 2008:Q4.  But those same measures would have had much bigger consequences for subsequent prices and output.  The more wildly one tried to adjust those magnitudes that are directly in control of the Fed in a futile bid to achieve a particular target it cannot control, whether it be 2008:Q4 nominal GDP or the 2009:Q4 position of Mars, the more instability it would introduce into the real economy.

I therefore come back to my suggestion that it is necessary to spell out the mechanism by which the Fed’s actions alter the course of nominal GDP.  Sumner is welcome to describe changes in velocity V as “monetary shocks,” if he so wishes.  But it is another matter altogether to explain exactly how the Fed can prevent them.

My view is that the Fed lacks the power to control the orbit of Mars, and lacked in September 2008 any tools that could have delivered a 5% annual growth rate for nominal GDP for 2008:Q4.

A (Gentle) Nudge Toward Gentle Deflation

I’m glad you agree, Scott, that our views are very close. I’d like to make them closer still, even identical. Naturally I’d prefer to make them so by nudging yours a bit further toward mine!

You wonder why I attach as much importance as I do to the trend growth rate of nominal spending, and wonder whether, in suggesting that more rapid growth may lead to bigger cycles, I am violating the assumption of monetary super-neutrality. You also generously suggest a defense, viz: that a higher mean spending growth rate will tend to be associated with greater variability of spending. My argument does in fact take the last assumption for granted; but there’s more to it than that. So I’m bound to say that, yes, I deny that money would be super-neutral even if the first and second moments of the nominal growth rate time series were uncorrelated.

I know perfectly well that this is asking for trouble — that all kinds of formal macro-models suggest that money is super-neutral, or would be if currency bore nominal interest. (The qualification allows that the equilibrium demand for real money balances may depend in practice on the mean rate of inflation; still it doesn’t imply any link between the latter rate and the amplitude of cycles.) As an appeal to formal models won’t do, let me try an informal approach. The price system has always got plenty of work to do; and I take it we both agree that this work isn’t accomplished without cost. So much is taken for granted in all theories of nominal rigidities. I submit further that, absent complete indexation, the more prices have to change, the more work the price system has to bear, and the greater both the costs of continuous adjustment and the extent to which optimal price-adjustment strategies will allow prices to differ from their full-information values.

Now, bearing this in mind, allow me to resort to a reductio ad absurdum. Imagine an array of nominal income growth rate target percentages, starting at 5 and doubling from there; and tell me please when you would start being uncomfortable (considerations of real money demand aside) with recommending the rate in question. Here goes: 10, 20, 40, 80, … are you saying “stop” yet? If so, then you yourself doubt that money is truly super-neutral, even putting mean-variance correlation aside; and I suspect it’s because you are certain (as I am) that at high rates of income growth not only will various price indices vary more around their mean rates of change, but that within any given market at any point in time there will be considerably more dispersion of prices around their full-information-flexible-price values.

So far we have plenty of waste from high spending growth, but not cycles. Allow, though, that (1) the authorities aren’t able, despite their best efforts, to perfectly adhere to their chosen growth targets, so that realized nominal income growth is in fact stochastic; (2) that factor price adjustments are generally more costly than adjustments to final-goods prices; and, finally, that (3) regardless of the feedback rule employed, the variance of realized nominal spending growth is positively related to its mean value; and you have almost the whole basis for my favoring a lower target.

“Almost.” Because I also believe there is no good reason for not setting nominal income growth so low as to allow final goods prices to decline at an average rate equal to the rate of growth of total factor productivity. This is the “productivity norm” — a form of nominal income targeting in which the target growth rate is set equal to the growth rate of factor input. The last can, without doing too much violence to reality, generally be reckoned at about 2 percent per year; it is, in any event, hard to estimate more accurately than that. The point of allowing for it is to eliminate the need for any substantial, general downward adjustment of factor prices, and prices of labor services especially, an allowance that seems prudent in light of these prices’ relatively high degree of downward “stickiness.”
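The arithmetic of the productivity norm can be sketched in a few lines. This is a toy illustration of my own, using assumed round numbers, not part of Selgin's formal argument:

```python
# Toy illustration of the productivity norm (assumed round numbers):
# nominal income grows with factor inputs, so final-goods prices fall
# at roughly the rate of total factor productivity (TFP) growth.
factor_input_growth = 0.02   # assumed ~2% annual growth of factor inputs
tfp_growth = 0.02            # assumed TFP growth

ngdp_growth = factor_input_growth                # the productivity-norm target
real_output_growth = factor_input_growth + tfp_growth
inflation = ngdp_growth - real_output_growth     # small-rate approximation

print(f"NGDP target: {ngdp_growth:.0%}, inflation: {inflation:.0%}")
```

Under these assumptions the implied inflation rate is simply the negative of TFP growth, which is the sense in which the norm lets prices "decline at an average rate equal to the rate of growth of total factor productivity."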

And what about output prices? Aren’t they also sticky downward? Since everyone’s talking about where macroeconomists have gone wrong, let me put in my choice for biggest screw-up: it’s the treatment of the degree of nominal rigidity (as represented in Calvo-pricing probability parameters and such) as being independent of the sort of shocks price-setting agents confront. Now, it’s all well and good to speak of a negative velocity shock as giving rise to a sort of externality — Yeager calls it the “who-goes-first” problem — whereby the private gains from downward price adjustments are small or nonexistent even though the social benefits are substantial. But the same can’t reasonably be said for positive productivity shocks. Indeed, it makes little sense to call these “shocks” at all because, although they may come as a surprise to many, they are generally not only anticipated but sought after by the very agents responsible (in the absence of any monetary policy response) for changing prices in response to them. More concretely, they are almost always deliberate results of efforts to cut unit production costs, which efforts are aimed at allowing producers to compete more effectively with their rivals by cutting prices in turn. Consequently, no good end is served by arranging monetary policy so as to spare the “average” producer the need to lower prices in response to a general improvement in productivity. On the contrary: such a strategy is likely to complicate the price-adjustment problem faced by producers, making output market price signals that much “noisier.” What’s more, it will also make factor price signals noisier, by forcing needed upward real wage adjustments to be accomplished by raising relatively sticky nominal wage rates.

Call it hubris, but I daresay that, once the mathematics get worked out so that someone can make a DSGE model realistic enough to allow for the differential stickiness not only of output prices and “wages,” but also of all prices depending on the “shocks” taking place, while also taking money’s usefulness into account, that model will suggest a Ramsey-optimal monetary policy that looks a helluvalot like what I pled for in my 1997 pamphlet. (Which was, after all — to be rather more modest — simply what many prominent economists pled for before their ideas were swept aside by the wake of the Keynesian diversion. [1]) Indeed, several recent models of this sort already come very close despite not allowing for it. [2] Needless to say, allowing money balances to be either a direct or an indirect source of utility only strengthens the productivity norm case.

Lastly, on the matter of the ideal income growth rate, I think that, so far as the productivity norm is concerned, there’s no reason to fear that it would occasion a negative neutral nominal interest rate, for the simple reason that the real neutral rate will almost certainly never fall short of the rate of productivity growth (which informs future real income expectations).

Turning to free banking, in essence the argument here is that it simplifies the task of nominal income targeting, and especially of targeting nominal income by means of a McCallum-type monetary base feedback rule, by making for a more stable relation between the stock of base money on the one hand and the equilibrium level of nominal spending on the other. In other words, free banking can help stabilize the income velocity of the monetary base. To understand why, first consider a run-of-the-mill model of the precautionary demand for money of the sort often used to represent the public’s demand for money but originally developed by Francis Edgeworth to model banks’ demand for reserves.[3] Such models typically make the aggregate demand for reserves proportional to the standard deviation of an individual bank’s net reserve loss, which in turn increases with the total volume of bank-money payments, though less than proportionately (e.g., the “square root law”).
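The "square root law" is easy to illustrate numerically. The following is my own toy sketch with invented numbers, not Edgeworth's formal model: treat each of N payments as an independent, equally likely inflow or outflow of a fixed size, so that the standard deviation of a bank's net reserve loss, and hence its precautionary reserve demand, grows with the square root of payment volume.

```python
import math

# Toy Edgeworth-style precautionary reserve demand (invented numbers):
# with N independent payments of size s, each equally likely to be an
# inflow or an outflow, the net reserve loss has standard deviation
# s * sqrt(N), so desired reserves rise less than proportionately
# with payment volume.
def desired_reserves(n_payments, payment_size=1.0, safety_factor=3.0):
    std_net_loss = payment_size * math.sqrt(n_payments)
    return safety_factor * std_net_loss

print(desired_reserves(10_000))   # 300.0
print(desired_reserves(40_000))   # 600.0: 4x the payments, only 2x the reserves
```

The point of the exercise is only the scaling: quadrupling the volume of payments merely doubles the reserves demanded.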

Next suppose that the stock of bank reserves is fixed. In that case, there will be a unique equilibrium volume of payments consistent with reserve-market clearing. It follows that, for any particular bank money aggregate, there must be a tendency for the supply of that aggregate to adjust so as to compensate for changes in its velocity if reserve-market equilibrium is to be preserved. I develop the argument in some detail in my Theory of Free Banking [4] and (more formally) in a short paper I wrote for the Economic Journal’s “Policy Forum.” [5] Diagrammatically, it looks like this:

Admittedly, the argument refers to a tendency for the total volume of bank money expenditures to bear a stable relation to the available stock of bank reserves. To get to stable base income velocity you have to assume (1) that total bank-money transactions are a relatively stable (if nevertheless changing) multiple of income transactions and (2) that targeting the base is equivalent to targeting bank reserves.

The first of these assumptions isn’t all that heroic. The second is where free banking becomes crucial, for the assumption depends, not only on banks being free to set their reserve ratios without having to heed binding statutory requirements, but also on their collective reserves being unaffected by changes in the public’s desired currency-deposit ratio. So long as the public can’t simply dispense with paper currency altogether, the last requirement can be met only by letting commercial banks meet public requests for paper currency using their own notes, as they do, for instance, in Scotland today, and as they did in many more instances (and with fewer regulatory restrictions) both in Scotland and dozens of other places in the past.

What all this boils down to is that there are forces at work in the banking system that can make stabilizing nominal income easier, and that policymakers should take as much advantage of those forces as possible. Doing so will make it easier to achieve some desired growth rate of nominal spending by simply controlling the growth rate of the monetary base: something any central banker, or even a computer program, can do.

Of course there’s a lot more to the case for free banking than this, including arguments to the effect that, despite not involving deposit insurance, it would offer better protection against runs and panics than existing arrangements do. But this isn’t the place for me to go into them, so I’ll settle for hoping that the point about base velocity stabilization will whet your appetite enough to encourage you to look into my, Kevin Dowd’s, and Larry White’s writings on the topic.

Notes

[1] Cf. George Selgin (1995), “The ‘productivity norm’ versus zero inflation in the history of economic thought,” History of Political Economy 27 (4).

[2] See, for example, Rochelle M. Edge, Thomas Laubach and John C. Williams (2005), “Monetary policy and shifts in long-run productivity growth” (mimeo, Federal Reserve Board); Stephanie Schmitt-Grohe and Martin Uribe (2006), “Optimal Inflation in a Medium-Scale Macroeconomic Model,” mimeo.

[3] F. Y. Edgeworth (1888), “The mathematical theory of banking.” Journal of the Royal Statistical Society 51 (1) (March).

[4] Totowa, New Jersey: Rowman & Littlefield, 1988.

[5] “Free banking and monetary control,” The Economic Journal 104 (1994).

From Discretion to Futures Targeting, One Step at a Time

I agree with Professor Hamilton’s argument that monetary policy was not the primary cause of the housing bubble, so I will focus on the futures targeting idea. I think it will help if we separate out several issues, to make it clear exactly where we disagree.

When Hamilton uses the “Mars” example, he is suggesting that monetary authorities might not have powerful enough tools to create five percent NGDP growth. But under a fiat regime any central bank could create hyperinflation, as long as it is not restricted to buying conventional assets like Treasury securities. Since the Fed has recently shown a willingness to buy unconventional assets, it has the resources to create hyperinflation if it wishes. I assume that Hamilton agrees with this, so henceforth I will assume that the debate is not about whether the Fed has enough tools to create at least five percent NGDP growth over the next 12 months, but rather whether doing so would cause other problems.

First, let me clear up one misconception. Hamilton thought I was claiming that in September 2008 the Fed had the ability to create five percent NGDP growth in 2008:Q4. That was not my claim. Rather, I argued that they could have created five percent expected NGDP growth over the following 12 months, and also that had they done so, the 2008:Q4 results would have been far better. Perhaps we would not have had five percent NGDP growth in Q4, but much closer to that number than the actual negative six percent. My basic argument (and it is also a theme in recent research by Woodford and others) is that changes in current AD are powerfully affected by expected changes in future AD. When expectations of NGDP going several years forward fell sharply last fall, current AD fell sharply. Firms see no reason to throw good money after bad. When it is clear that a major recession is imminent, firms respond by immediately slashing production. Had expectations of five percent NGDP growth over 12 months been maintained throughout the financial crisis, any near-term slowdown in NGDP would have been much milder. Unless I am mistaken, this is a well-established proposition in modern business cycle theory.

With this in mind, let’s concentrate on my argument that the Fed can create five percent NGDP growth expectations over 12 months, even in a financial crisis. I think it is useful to break this argument down into three components:

  1. Is five percent NGDP growth over the next 12 months the proper target?
  2. If so, should the Fed do as Lars Svensson has recommended and target the forecast?  That is, should it set policy in such a way that its own internal forecasting unit expects exactly five percent NGDP growth?
  3. If it should target the forecast, can it do so more effectively by tying policy to market forecasts?

Obviously, I don’t know that five percent NGDP growth over the next 12 months is exactly right. Perhaps a different number or a different time period is preferable. Indeed, perhaps an entirely different variable (such as the CPI) is appropriate. But by any reasonable criterion policy was clearly too tight in the Svenssonian sense last fall. The Fed was even calling for help from fiscal policy. So let’s take a step back, and work toward my proposal in baby steps, in the hope that we can better see where Hamilton disagrees.

I think we all agree that the Fed ought to have some objective. Because of the “one tool/two targets” problem, most economists proceed under the assumption that it should be possible to express that objective as a single target (which of course may be a hybrid target incorporating a weighted average of nominal and real goals, such as NGDP, or the Taylor rule formula). So the Fed’s goal is to achieve X percent growth in a particular nominal aggregate. Svensson says that they should set policy at a level where they are expected to hit that goal. I have never seen anyone object to Svensson’s proposal on the grounds that Hamilton uses to object to my proposal. So let’s assume that there are no objections to Svensson’s policy criterion. Obviously, we are still a long way from NGDP futures contracts. Svensson contemplates a discretionary regime where the FOMC decides which instrument setting is most likely to hit their target.

Now let’s expand the FOMC from 12 members to everyone on Wall Street. Does this make Svensson’s policy infeasible? I don’t see why it would. And now let’s have each voter write down their preferred instrument setting (for the base) and then pick the median vote as the actual policy setting. And now let’s compensate each member, ex post, on how well they voted. If actual future NGDP turns out to be above target, those who voted for relatively easy money are penalized, and vice versa. And now let’s change “one man, one vote” to “one dollar, one vote,” where each FOMC member gets to choose how many dollars they want to “bet.” Any problems yet? If not, we have just arrived at NGDP futures targeting.

I am using this procedure for two reasons. First, in the hope that if we move to NGDP futures targeting in baby steps, the change won’t seem so radical. (Sort of like gradually heating up a frog in a pan of water — although I am not comparing Hamilton to a frog!) The second reason is to make it easier to find out where we disagree, i.e. where Hamilton thinks the proposed policy breaks down.
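The dollar-weighted voting step in the construction above can be made concrete with a toy aggregation rule. This is entirely my own sketch; the function and the numbers are invented for illustration:

```python
# Toy "one dollar, one vote" aggregation: each trader names a preferred
# setting for the monetary base and stakes some dollars; policy is set
# at the dollar-weighted median of the votes. (Illustrative numbers only.)
def dollar_weighted_median(votes):
    """votes: list of (preferred_setting, dollars_staked) pairs."""
    votes = sorted(votes)                  # order by preferred setting
    half = sum(d for _, d in votes) / 2.0
    cumulative = 0.0
    for setting, dollars in votes:
        cumulative += dollars
        if cumulative >= half:
            return setting

# Three traders: a tight-money vote, a consensus vote, an easy-money vote.
votes = [(900.0, 10), (1000.0, 50), (1100.0, 40)]
print(dollar_weighted_median(votes))       # 1000.0
```

Because the policy setting is the weighted median rather than the mean, no single deep-pocketed trader can drag the outcome arbitrarily far by staking more money; he can only move it toward the next voter's preferred setting.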

I do think that Hamilton may have misinterpreted the proposal in one important respect. He suggests that if everyone had the same views, then traders would take an infinitely large long or short position. But this ignores the fact that each futures transaction triggers a parallel open market operation. This changes the money supply and thus the expected future NGDP. Thus, suppose everyone thought that under the current instrument setting, NGDP would come in below the five percent target. In that case they would begin buying NGDP futures contracts. Each purchase would trigger a parallel open market purchase of ordinary Treasury bonds by the Fed. As the monetary base increased, then all these like-minded investors would begin to expect faster NGDP growth. The purchases would continue until the expected NGDP growth rose to five percent.
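The feedback loop just described can be sketched as a toy simulation. This is entirely my own construction: the link between the monetary base and expectations is an invented function, chosen only so that the loop has something to converge on, and none of the parameters come from the proposal itself.

```python
# Toy simulation of the futures-purchase / open-market-operation loop
# (invented expectations function and parameters, for illustration only).
# Each futures purchase triggers a parallel open market purchase; the
# growing base raises expected NGDP growth; trading stops at the target.
TARGET = 0.05

def expected_ngdp_growth(base):
    # Hypothetical link from base to expectations, with diminishing returns.
    return 0.10 * (1 - 1 / (1 + base / 100))

base = 50.0        # initial monetary base (arbitrary units)
step = 1.0         # open market purchase triggered by each futures contract
trades = 0
while expected_ngdp_growth(base) < TARGET and trades < 10_000:
    base += step   # futures purchase -> parallel OMO expands the base
    trades += 1

print(f"expected growth {expected_ngdp_growth(base):.3f} after {trades} trades")
```

Under any expectations function of this diminishing-returns shape, the demand for contracts is finite: trading stops once the base is large enough that expected growth reaches the target, rather than exploding without limit.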

Now let’s assume we are in a liquidity trap, or the “Mars” example, and that no amount of open market purchases would increase expected NGDP growth up to five percent over the next 12 months. In that case, the Fed would buy up the entire world stock of assets, including all foreign stocks and bonds. I will concede that in that case we would fall short of the five percent NGDP target. But, on the plus side, Americans would own the entire world stock of assets, and all income from capital would go to U.S. taxpayers. Of course, I don’t think this reductio ad absurdum case would actually occur, and I am sure that Hamilton doesn’t either. Under any reasonable model of base money demand there is no infinite demand for futures contracts. So I think we can assume that at some point the transactions would cause a large enough change in the base to equate the public’s expected NGDP growth rate with the five percent policy target. When that happens, the NGDP futures market will reach equilibrium and trading will cease.

Hamilton also raised this issue:

After all, which number is it that the Fed is asserted to be able to control, the advance estimate, the revised estimate, or some Platonic truth that our best measures can only imperfectly reflect?

In order for contracts to be settled after the 12-month forward NGDP is realized, we need a measurable estimate of actual NGDP, so that rules out the Platonic ideal. I prefer the revised estimate, as it should be closer to the actual NGDP, which is presumably what the monetary authority wishes to stabilize.

I thank Professor Hamilton for providing me with an opportunity to expand on my proposal, beyond the sketchy description in the original essay. It is a complex subject, and also one that might be unfamiliar to many Cato readers. I also see that George Selgin has a reply; I will respond within a few hours.

Yes, the Lags Are Long and Variable

I agree with Sumner that the Fed has the ability to create hyperinflation. The questions are how long it would take for this to happen and by which mechanism it would transpire. The mechanism I have in mind is a currency crisis in which everyone tries to get out of their dollar holdings. Saying that the Fed has the ability to create such a situation does not imply that it could achieve exactly four percent inflation, or even a number remotely close to that, for the coming quarter. These kinds of shifts are by their nature impossible to control with any precision. The Fed has the ability eventually to achieve significant inflation or significant deflation. That is very different from the ability to give us a particular desired number for the coming three months.

Sumner says he is not claiming that the Fed in September 2008 could have achieved five percent nominal GDP growth for 2008:Q4. May I ask why not, under his view of the world? In denying that the Fed has the ability to achieve an objective over the next three months, he seems to be implicitly endorsing the notion that there are lags in the effects of monetary policy. But if he agrees with me that it’s not feasible to do this within three months, how does he know we could do it within 12?

Or perhaps he is wishing to emphasize the difference between realizations and expectations — the Fed can’t actually deliver five percent nominal GDP growth, but it can nevertheless persuade the public this is what is going to happen? Then let me rephrase my position — there is nothing the Fed could have done in September 2008 that would have caused me to expect that the GDP growth rate in 2008:Q4 was going to be five percent. The reason that is my position is because I maintain that there is nothing the Fed could have done over the subsequent three months that would have achieved a five percent growth rate.

Given a longer time frame (and I think 12 months is still too short), I believe a target average for the nominal growth rate becomes more credible. But that’s because I believe in long and variable lags, with much that can happen that is beyond the Fed’s control that matters for output and prices. The Fed influences, but does not control, nominal GDP growth.

As for the mechanics of futures trading, my observation that with risk-neutral, equally informed traders there is an infinite demand for the contract at anything other than the equilibrium price has nothing to do with the specifics of how Sumner intends to implement this plan. It is a feature of any and all futures contracts. If I expect a growth rate of six percent and you offer me a contract at five percent, I make $1 expected profit if I buy $100 of contracts, and I make $1 million expected profit if I buy $100 million of contracts. If my goal is to maximize expected profit, I buy an infinite quantity of contracts. This is why I say that it is the equilibrium price of the contract, not the quantity of contracts written, that is the relevant aggregator of private beliefs. I do not think it is workable to implement a plan that is geared to the quantity of contracts written.
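Hamilton's point reduces to simple arithmetic, which can be written out as follows (the numbers are the ones from the paragraph above; the function is my own shorthand):

```python
# With risk neutrality, expected profit is linear in position size, so any
# perceived mispricing invites an unbounded position.
expected_growth = 0.06   # the trader's belief about NGDP growth
contract_rate = 0.05     # the rate at which the contract is offered

def expected_profit(position_dollars):
    return position_dollars * (expected_growth - contract_rate)

print(expected_profit(100))          # about $1
print(expected_profit(100_000_000))  # about $1 million
```

Since nothing in the linear payoff caps the position, only the price at which beliefs balance, not the quantity of contracts, pins down an equilibrium.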

Agreements and Disagreements

The discussion seems to be careening into ever more technical issues, with George Selgin now invoking “Calvo-pricing probability parameters” and “Ramsey-optimal monetary policy.” Although I find this all quite fascinating, I fear it is too esoteric for the bulk of Cato Unbound’s readers. And unnecessarily so, because the essential questions under consideration can be examined and expressed in accessible terminology. So let me try to enumerate them. This should help clarify the agreements as well as disagreements. If I misunderstand or mischaracterize anyone’s position, I expect that he will correct me.

1. What initiated the recession that began in 2007?

Scott Sumner and James Hamilton both think the initiating event was a supply-side, real shock, emanating from the financial sector, particularly subprime mortgages. Selgin in contrast identifies what he considers an easy monetary policy beginning after 2001 as setting the stage for the subprime crisis. Sumner and Hamilton concede that monetary policy may have made a minor contribution to the subprime crisis but do not assign it a primary role. I actually think Selgin is correct about low interest rates (with other factors) being a significant causal factor, but I deny that those low rates resulted primarily from Fed policy, which according to the various monetary measures, was not all that easy after 2001.

Hamilton partly attributes the subprime crisis, in turn, to inadequate financial regulation, whereas Selgin and I (and possibly Sumner?) believe the opposite to be true: the problem was too much regulation and subsidization of the financial system, although this controversy has not really been a focus of our debate.

2. What made the recent recession so severe, beginning in mid-2008?

All four participants in the discussion seem to agree that the economy was hit with a negative velocity shock (i.e., an increase in the demand for money) that deepened the recession. Sumner, Selgin, and I further contend that the Fed could have dampened that shock better than it actually did, although we seem to have minor disagreements about exactly how. At one extreme, Sumner argues that a precise targeting of nominal-GDP expectations in operation by September 2008 would have prevented most subsequent financial failures, whereas I suspect that outright expansion of the monetary base should have started sooner, could not have been conducted with much precision, and while far superior to Bernanke’s policies, may not have been as fully successful as Sumner projects.

Hamilton is the only one who has been specific about the source of the velocity shock, blaming the mortgage crisis. But he denies that a more expansionary monetary policy could have made a lot of difference, presumably because some kind of liquidity trap would have counteracted any expansion with further declines in velocity. Interestingly enough, this parallels the online debate that Arnold Kling (here and here), Bryan Caplan (here and here), and Bill Woolsey (here and here) have been carrying on over the efficacy of monetary policy. I should hastily add that Hamilton does not go to the bizarre lengths of Kling, who asserts that monetary policy, within a very broad range, is always offset by changes in velocity.

By the way, a belief that the Fed was too tight in 2008 is shared by David Henderson and now, to some degree, by Tyler Cowen, who recently stated that expansionary monetary policy “would have eased the crisis, in my very rough guesstimate, by one-third.” Cowen, however, also endorses most of the Fed’s and Treasury’s targeted bailouts, unlike Henderson, Sumner, Selgin, and me.

3. What is the best gauge of monetary policy?

On this question, there are some interesting disputes that have not yet surfaced. Hamilton embraces the mainstream view that interest rates offer the best indicator for judging and conducting monetary policy. Sumner argues persuasively that interest rates are seriously misleading, yet he accepts the mainstream view that the monetary measures are no longer helpful either. His alternative indicator is expectations about nominal GDP. Although Selgin appears to agree, quibbling merely about the optimal rate of nominal GDP growth, in other writings he has employed interest rates and even the Taylor Rule to evaluate monetary policy. Moreover, I very much doubt Sumner’s precision targeting of the forecast is compatible with Selgin’s support for a return to some kind of commodity money base. I’ve always understood Selgin’s “productivity norm” as a second-best approximation of what would happen under free banking.

As for me, while I accept the truism that less volatility in nominal GDP is tantamount to diminished business cycles, I cling to the now unfashionable view that what happens to the money stock still matters. Despite alternative measures and sometimes-erratic velocity, analyzing central bank policy requires a subtle examination of many factors, with assorted monetary measures receiving prominence. But this leads to my final point of contention.

4. How should we characterize velocity shocks?

One of the most perceptive and telling of Hamilton’s observations was the following from his second comment: “Sumner is welcome to describe changes in velocity V as ‘monetary shocks,’ if he so wishes.” Given their fixation on nominal GDP, both Sumner and Selgin treat changes in money or velocity as virtually equivalent. Why should such a question of semantics be important? Because conflating the two variables obfuscates causation. No matter what happens to aggregate demand, Sumner and Selgin have tautologically defined it as the result of monetary policy, good or bad. I think we should distinguish between a decline in aggregate demand that stems from a money-stock shock and one that stems from a velocity shock not counteracted by monetary expansion. Doing so is not just of historical interest; it opens up a possible argument about market failure versus government failure. At least some velocity shocks could arise from changing preferences rather than government policy, and surely that should matter in some way, especially to an Austrian economist like Selgin. Recognizing the distinction between shocks to M and to V also requires Sumner and Selgin to take monetary measures more seriously. Unfortunately, what in fact triggered the recent velocity shock remains a major under-addressed question in our discussion so far.

Score-Keeping with Selgin

George, every time I read your arguments it makes me want to chop a few more tenths of a percent off my proposed NGDP growth target. This was no exception. However, I am still not completely convinced by your arguments.

Let’s start with this claim:

(3) regardless of the feedback rule employed, the variance of realized nominal spending growth is positively related to its mean value.

I agree with everything you said up to this point, but I am not convinced of this. But in the spirit of compromise let’s first discuss where I am persuaded. I like your discussion of how a very high trend inflation rate degrades the price system. Here is an example that drives home that the “menu cost” argument is about much more than just menus. My father was a real estate broker from about 1965 to 1985. When he started out he had a pretty good store of knowledge that he could use to advise clients about housing values. When he saw a home for sale, he could recall what similar homes had sold for in earlier years. By 1975, however, prices were rising so fast that this store of human capital rapidly depreciated. In principle he could have memorized a price index table, and also memorized the dates at which each home was sold, but this is beyond the cognitive ability of most individuals. If the higher trend rate of inflation caused homes to be listed at the “wrong” price, then this would have made the real estate broker industry less productive, and thus would have increased both transactions costs and price dispersion around the equilibrium value of homes.

So I see the main impact of the degradation of the price system as being less efficiency, not more NGDP volatility. I’ll acknowledge that once you introduce any sort of real impact from higher trend inflation, it is always possible to construct a model where that somehow makes it harder for the central bank to stabilize NGDP growth. It’s just that I don’t see a plausible model where this occurs. I think we may be subtly swayed by the fact that real world cases of high inflation are associated with more volatility — but that is because they almost always lack an explicit target.

Although I am not convinced that NGDP growth rates would be more volatile, there is much in your argument where I do agree, and I’ll add suboptimal price dispersion to the two other factors supporting a low NGDP target: the implicit tax on cash, and the tax on capital that results from a non-indexed tax system. (And as an aside, the complexities of indexing all financial transactions to inflation are enormous, so the tax problem is important.) So now it is three to two in your favor. By the way, I also agree that factor price adjustments (especially wages) are generally more costly than final goods price adjustments, which is why I favor NGDP targeting.

I also agree that price declines due to factor improvements don’t cause much of a problem. But here is what I do worry about. In the U.S. the trend rate of real GDP growth is about three percent. Population growth is about one percent. So if total factor productivity grows at about two percent per year, we would average roughly one percent deflation and two percent NGDP growth under your plan. Average nominal wage rates would grow at about one percent per year. If there is an irrational psychological barrier to small nominal wage cuts, and if that barrier is less significant if real wages are cut an equal amount through inflation, then a low trend rate of nominal wage rate increase could increase labor market distortions. I don’t want to make too much of this issue, as there are two arguments that cut the other way. Because of “life cycle effects,” the average worker becomes more productive over time, and thus each worker gets a larger pay increase, on average, than the overall average wage of workers at any given age. In addition, if we went to a Congressionally mandated rule of five percent NGDP growth, the same psychological importance now given to zero percent pay raises might simply be shifted up to four percent wage increases. Indeed we might be almost there already. Nevertheless, I have seen studies that show a discontinuity in contractual wage increases right at the zero percent point, so I don’t think my worry is completely unfounded.
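The arithmetic in the paragraph above can be laid out explicitly. The growth rates are the stylized figures just stated, combined additively as a standard small-rate approximation:

```python
# Stylized U.S. figures from the discussion above, combined additively
# (a small-rate approximation, so every result is "about" right).
real_gdp_growth = 0.03
population_growth = 0.01
ngdp_growth = 0.02        # roughly the productivity-norm target in this case

inflation = ngdp_growth - real_gdp_growth               # about -1% (deflation)
real_wage_growth = real_gdp_growth - population_growth  # per worker, about 2%
nominal_wage_growth = real_wage_growth + inflation      # about +1%

print(f"inflation {inflation:+.0%}, nominal wage growth {nominal_wage_growth:+.0%}")
```

So the worry is precisely that one-percent trend: average nominal wages drifting up only about one percent a year leaves much less room above the zero-wage-change barrier than a five percent NGDP path would.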

In a perfect world I think you have the better of the argument here, so let me distinguish between three scenarios.

  1. In a world where wage and debt contracts were negotiated under a five percent expected NGDP growth rate, and where the banking system is already weakened for other reasons, it is best to aim for five percent NGDP growth until the crisis is over, and then bring the rate down gradually. I’m not saying you disagree here, but I just wanted to emphasize that even if I accepted your entire argument, I would not have changed the central thrust of my recent essay, which was a critique of policy in late 2008.
  2. Now let’s assume the crisis is over, but we still have a less-than-enlightened Fed, which doesn’t seem able to operate in a zero rate environment. In that case a higher trend rate of NGDP could be justified on the basis that it would make it less likely that policymakers would bump up against the zero rate bound. You suggested that an equilibrium sub-zero nominal rate is unlikely under a productivity norm, but I am not convinced. Japan has averaged something like one percent productivity growth and one percent deflation (in the GDP deflator) for the past 15 years. And, of course, their short term nominal rate was stuck at zero for a good part of that period. I’m not saying the Japanese case exactly matches your proposal, but it’s close enough that I don’t have confidence that we would always be able to avoid sub-zero equilibrium nominal rates — especially if policy mistakes were made. This doesn’t necessarily apply to the optimal monetary regimes that you and I contemplate, but rather is an attempt to save capitalism from having its reputation trashed by inept central bankers, who use a policy procedure that leaves them prone to deflationary mistakes.
  3. Now let’s assume we do have an optimal monetary regime, preferably forward-looking enough that liquidity traps are avoided. In that case, you probably win. I doubt that my observation that there is a discontinuity at zero nominal wage changes is enough to overcome the two (now three) arguments on your side. So let’s go on to look at your proposal for free banking.
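The zero-bound worry in scenario two is just the Fisher equation at work. A minimal sketch, with illustrative numbers loosely patterned on the Japanese case mentioned above:

```python
# Fisher equation: nominal rate = real rate + expected inflation.
# Numbers are illustrative, loosely patterned on the Japanese case above.
def nominal_rate(real_rate, expected_inflation):
    return real_rate + expected_inflation

# ~1% deflation with a 1% equilibrium real rate: already at the zero bound.
print(nominal_rate(1.0, -1.0))   # 0.0
# If the equilibrium real rate dips to 0.5% during a slump, the
# market-clearing nominal rate goes negative, which cash makes unattainable.
print(nominal_rate(0.5, -1.0))   # -0.5
```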

I always have trouble visualizing what sort of macroeconomic outcome would result from a regime of free banking. At least your proposal for a fixed reserve base helps anchor the price level. Then the only real macro issue is whether it could stabilize the demand for reserves. (There is also a longstanding microeconomic debate about whether competitive currency issue leads to wasteful non-price competition, as banks compete to get people to hold their non-interest-bearing banknotes.)

I think you make a fairly persuasive case for the proposition that free banking could cushion the financial system against one important type of money demand shock — indeed the type of shock that arguably caused the Great Depression. I am thinking of a scenario where people wish to hoard cash and/or bank deposits, perhaps because of financial instability. As long as these balances are hoarded, banks would have an incentive to accommodate any increase in demand for money by issuing banknotes or expanding bank accounts. If I am not mistaken, this is because banks need only hold reserves in order to meet deposit outflows or banknote redemption. And obviously if money is being hoarded, this does not trigger any call on the banks’ reserve base. So if I have the intuition right, this part of your argument seems fine.
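A toy balance-sheet illustration of that intuition (all figures hypothetical):

```python
# Toy illustration: under free banking, an increased demand to HOLD bank
# money is met by expanding deposits against an unchanged reserve base,
# because hoarded balances trigger no redemptions. Figures hypothetical.
reserves = 10.0    # bank reserves (base money held by banks)
deposits = 100.0   # bank money held by the public
spending = 500.0   # nominal spending

# The public decides to hoard 20 more in deposits; banks accommodate by
# expanding their balance sheets. No deposit outflow or note redemption
# occurs, so reserves are untouched and spending need not fall.
deposits += 20.0
base_velocity = spending / reserves

print(deposits)        # 120.0
print(base_velocity)   # 50.0 -> unchanged by the hoarding
```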

To make it work perfectly you need to assume that, as you put it:

To get to stable base income velocity you have to assume (one) that total bank-money transactions are a relatively stable (if nevertheless changing) multiple of income transactions.

I could easily foresee this assumption failing to hold — for instance, if the ratio of total transactions to income transactions were to change dramatically. Recall that total transactions (which includes those occurring in financial markets), are much larger than income transactions.

At the same time I can’t really argue with your conclusion, which strikes an appropriately non-dogmatic stance:

What all this boils down to is that there are forces at work in the banking system that can make stabilizing nominal income easier, and that policymakers should take as much advantage of those forces as possible. Doing so will make it easier to achieve some desired growth rate of nominal spending by simply controlling the growth rate of the monetary base, something any central banker, or even a computer program, can do.

I would just add that even if free banking combined with a fixed reserve base of fiat money could deliver improved macroeconomic stability, a regime of free banking and a reserve base adjusted by the market to deliver stable expected NGDP growth could do even better. So perhaps I can nudge you a bit in my direction on the idea of using the market to help stabilize NGDP expectations by appropriately adjusting the size of the fiat reserve base.

Taxing Banks for Holding Reserves

In his first reply to the three of us, Scott Sumner asked: “Does anyone have any thoughts on my proposal to charge a negative interest rate on excess reserves as a way of reducing the hoarding of base money? Sweden recently adopted this proposal.” (I assume Scott meant “charge a positive interest rate,” which is the same as paying negative interest on excess reserves.) I’ll take the bait with a frank answer, even though I know next to nothing about how this has actually worked in Sweden. I think it is a dreadful idea, especially dangerous at the tail end of a crisis.

I have little doubt that, in the short run, the proposal would encourage banks to try to offload their reserves. Since in the aggregate they cannot reduce the monetary base, it would indeed encourage both further expansion of bank balance sheets through lending and conversion of reserves into currency held by the public. In other words, it would increase velocity. But let me first mention some technical issues. With reserve requirements virtually a dead letter, applying only to M1, charging interest on excess reserves (rather than total reserves) would in effect tax clearing balances that banks hold against their M2 and M3 liabilities. So it would unintentionally cause banks to encourage their customers to shift into M1 deposits, with consequences that I haven’t thought through. The tax would also comprise an indirect subsidy to money market mutual funds, which issue an M2 security but would not bear the tax. Doesn’t this raise the long-run specter of major financial disintermediation?

Charging interest on the banks’ deposits at the Fed poses no administrative problems, but banks also hold a lot of their reserves in the form of vault cash, especially for their ATMs, and this cash currently does not earn any interest. Unless you figure out a way to also charge interest on vault cash, banks would merely shift the composition of their reserves. On the other hand, charging interest on vault cash could eventually lead to the bright idea of imposing a similar tax on the public’s currency. Having the government directly tax everyone’s cash balances would, to my mind, be a nightmare, not least because it gives an inept or avaricious central bank the ability, with a high enough tax, to set off a serious, velocity-induced inflation without actually increasing the money stock.

Whether the tax remains confined to banks or expanded to the general public, the effect on the yield of Treasury securities might be intriguing. Holding Treasuries would be a way of evading the tax. Would Treasuries entirely supplant reserves as the relevant component of the monetary base, and would the policy drive up prices of Treasuries to the point where they earned zero or even negative returns? Perhaps some fancy modeling from practitioners of the Fiscal Theory of the Price Level could answer these questions. One good thing is almost certain: charging interest on reserves would cripple the Fed’s discount window (and Term Auction Facility), because what bank would want to pay twice, borrowing reserves at some specified discount rate and then paying additional interest on those same reserves?

Even if the proposal is kept within reasonable limits, I have strong theoretical objections. I still haven’t gotten my head around the balance sheet implications of turning reserve liabilities into assets (or into liabilities that pay negative interest). But why would Scott, with his opposition to interest-rate targeting and to paying interest on reserves, want to actually increase the Fed’s ability to manipulate interest rates? This only undercuts his oft-repeated claim that mere monetary expansion (indeed, a mere expectation of such expansion) can completely offset and overwhelm any decline in velocity (i.e., any increase in the demand for money). The proposal simultaneously represents another step moving central banks away from their traditional role of controlling the money supply toward the central planning of the economy’s interest rates.

I’ve already pointed out that one way to think about paying interest on reserves is that it converts monetary into fiscal policy. Another way is that it combines two separate functions: monetary policy with federally subsidized financial intermediation. It thereby fuses the central bank’s traditional activity with that of such agencies as Fannie, Freddie, and the Federal Home Loan Bank System. Although I realize the Fed is already doing this through the wide variety of assets it now purchases, allowing it to convert bank reserves at will from a form of borrowing to a form of lending, and then back again, only exacerbates the potential chaos. I thought the disaster of Regulation Q and significant financial disintermediation during the 1970s had taught economists the lesson that central banks should not be allowed to play with interest-rate fire.

Call me an old-line Friedmanite, but if the economy must suffer under a central bank, it should be one that is circumscribed as much as possible. That means not giving the Fed additional powers, but stripping it of the powers to pay interest on reserves and to create subsidiary structured investment vehicles (like those with the label of Maiden Lane that made loans to Bear Stearns and AIG), as well as denying it the power to borrow money with its own securities, as Bernanke has advocated. Indeed, let’s go all the way with Friedman, and abolish the discount window, then eliminate all remaining reserve requirements (which the Fed is scheduled to gain the option to do in 2012), remove the Fed’s virtual monopoly on hand-to-hand currency, and while we are at it, prevent it from intervening in foreign exchange markets. Not one of these is essential for controlling the monetary base. Confine the Fed exclusively to open market operations using Treasury securities. All this strikes me as entirely consistent with Scott’s confidence in monetary policy’s efficacy.

Clearing up Some Miscommunication

Hamilton and I are still miscommunicating on two points.

Professor Hamilton continues to argue that if the Fed promised to buy or sell unlimited amounts of 12 month forward NGDP futures contracts at a five percent premium over current NGDP, it might lead to an almost infinite demand for such contracts if someone expected six percent NGDP growth. One problem with this argument is that a margin payment would be required for each trader. And wealth is not infinite. Nonetheless, I do concede that a very wealthy individual or investment bank could theoretically purchase an extremely large number of such contracts. But Hamilton overlooks a much more important point: the policy envisions each trade changing the money supply.

In the example cited by Hamilton, each trader took a “long” position on NGDP, expecting above five percent NGDP growth. Thus each trade triggers an offsetting open market sale by the Fed. What Hamilton appears to overlook is that long before an infinite position was established, the monetary base would fall to zero. When traders forecast NGDP they don’t make unconditional forecasts; rather, they make forecasts conditional on the money supply that will prevail after the trade is completed. I defy anyone to come up with a plausible model where six percent NGDP growth would be expected at a zero money supply. Yet that is the implication of Hamilton’s counterfactual. Note: I am not talking about a zero growth rate, but rather a zero level of the money supply.

I think this counterargument is sufficient. But in case anyone is worried about some rogue trader willing to sacrifice huge amounts of wealth in order to destabilize monetary policy, I did suggest in my 2006 paper that the Fed could limit the net position of any one trader to some reasonable level, where “reasonable” is defined as a level where other institutions such as investment banks would be able to provide a sufficient counterweight if the expected future NGDP diverged significantly from its target value.
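A stylized simulation of this feedback may make the self-limiting nature of long positions clearer; the forecast rule and all numbers are purely illustrative:

```python
# Stylized sketch of the NGDP-futures feedback described above.
# The forecast rule and all numbers are illustrative assumptions.

base = 100.0          # monetary base (hypothetical units)
target_growth = 5.0   # NGDP growth target, percent

def expected_ngdp_growth(base):
    # Toy conditional forecast: expected nominal growth rises with the base,
    # calibrated so a base of 100 implies 6% expected growth.
    return 6.0 * base / 100.0

# Each long futures contract bought from the Fed triggers an offsetting
# open-market sale that shrinks the base by one unit. Long positions are
# therefore self-limiting: each trade tightens policy, pulling the
# conditional forecast back toward the target.
trades = 0
while expected_ngdp_growth(base) > target_growth and base > 0:
    base -= 1.0
    trades += 1

print(trades, base, expected_ngdp_growth(base))   # 17 83.0 4.98
```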

The second issue has to do with policy lags and the potential for monetary policy to impact short run movements in NGDP. I would like to clear up two misconceptions.

First, I do not believe that the Fed would ever be incapable of hitting a five percent NGDP growth target over a three month time frame. The miscommunication was my fault, as I said something that could have easily been read the other way. I said that I wasn’t claiming that the Fed had this ability. I meant this to refer to my proposed policy regime. They are certainly capable of rapidly creating explosive inflation if they wish to. What I meant to say is that if the Fed had an appropriate policy — say, targeting one or two year forward NGDP growth at five percent per annum — then they might still be unable to prevent NGDP from coming in somewhat below target in the next quarter. This is the familiar “one tool, two targets” dilemma. But is this really such a big problem? We often have fairly wide variation in quarter-to-quarter NGDP growth without sliding into recession.

Another way of looking at this is that when we do have recessions, we virtually always have one or two year forward NGDP growth coming in well below pre-recession estimates. The only exception that comes to mind is the 1974 recession, which was distorted by both a big energy shock and the removal of price controls (which increased measured inflation). I don’t see how anyone could say energy prices are the main problem in 2009, as they have been generally lower than in 2007, when unemployment was quite low. Rather, the problem is now low NGDP. (Although I agree with Hamilton that energy prices did slow the economy in early 2008, when unemployment was still low.)

I also think it is important not to confuse two distinct problems. One is the very real difficulty that policymakers have in convincing the markets that they intend to shift to a different NGDP trajectory. And the second is the much less serious problem of convincing the markets that they intend to keep targeted NGDP on roughly the same growth track as had persisted for several decades. The latter problem doesn’t require changing people’s minds, but rather reassuring them that any near term shortfall will be made up as soon as reasonably possible. If such an assurance is provided, it will greatly reduce the near term fall in NGDP, even if there are real shocks such as a dysfunctional banking system. At worst you would get a bit of stagflation, but that is appropriate in the face of supply shocks under either flexible inflation targets like the Taylor Rule, or my preferred NGDP targeting regime.

I simply don’t see any examples of severe demand-side recessions that were not accompanied by equally sharp declines in one and two year forward-looking NGDP expectations. This happened in 1920-21, 1929-30, 1937-38, 1981-82, and 2008-09. I can’t say it’s impossible; it is theoretically possible that unemployment might soar to 9.7 percent in an environment where the public continued to expect on-target NGDP growth over the next year or two. But until I see something remotely close to that scenario, I will continue to believe that the odds of it occurring are vanishingly small, and not a consideration for policy formation.

As an aside, yesterday Hamilton’s blog (econbrowser.com) ran a guest post by David Papell that clearly shows the importance of targeting price level or NGDP expectations. Papell points out that throughout almost all of the 1970s the public continued to lag behind reality in their inflation forecasts. Let’s suppose they consistently missed by two or three percent. In that case, a Fed policy of targeting inflation expectations would lead to persistently high inflation. In contrast, a price level target that was consistently underestimated by two or three percent would result in roughly on-target inflation, except at the very beginning. And since wage contracts are generally linked to expected inflation, any short term price level movements would have little effect on wage growth, and thus would make it relatively easy to get back on target without a major change in employment. We need to stabilize expectations: not inflation expectations, but price level or, better yet, NGDP expectations.
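A toy simulation of the point, with an assumed two-point forecast bias (all numbers are illustrative, not Papell’s estimates):

```python
# Suppose the public's inflation forecast persistently runs 2 points below
# actual inflation. Under a rule that sets policy so EXPECTED inflation hits
# a 2% target, actual inflation runs persistently at 4%. Under a price-LEVEL
# target, the bank re-aims at the path each period, so the bias produces a
# one-time level miss rather than persistently high inflation.
bias = 2.0          # public underestimates inflation by 2 points
target_pi = 2.0     # inflation target, percent

# Regime 1: target expected inflation -> actual = expected + bias, every year.
inflation_targeting = [target_pi + bias for _ in range(5)]

# Regime 2: target a 2% price-level growth path.
path = [100.0 * 1.02 ** t for t in range(6)]
level = 100.0
price_level_targeting = []
for t in range(1, 6):
    expected_level = path[t]                        # bank aims at the path
    new_level = expected_level * (1 + bias / 100)   # bias pushes level 2% high
    price_level_targeting.append(100 * (new_level / level - 1))
    level = new_level

print(inflation_targeting)                          # [4.0, 4.0, 4.0, 4.0, 4.0]
print([round(x, 2) for x in price_level_targeting]) # [4.04, 2.0, 2.0, 2.0, 2.0]
```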

Perhaps this is just wishful thinking, but I continue to believe Hamilton and I aren’t that far apart. But we need to get beyond hypotheticals like hyperinflation that neither of us favors. Consider this statement by Hamilton:

Given a longer time frame (and I think 12 months is still too short), I believe a target average for the nominal growth rate becomes more credible. But that’s because I believe in long and variable lags, with much that can happen that is beyond the Fed’s control that matters for output and prices. The Fed influences, but does not control, nominal GDP growth.

Let’s say 12 months is too short. Let’s suppose Hamilton thinks the Fed should be looking 24 months down the road. My response would be “fine, let’s target 24 month forward NGDP at a level about 10 percent above current levels” (i.e., five percent per annum). Suppose he prefers four percent per annum. Fine. I’d prefer using market forecasts, but if Hamilton doesn’t think that would work, then let’s use Fed discretion. I claim even that policy would have made the near-term recession far less severe; Hamilton is more skeptical. But what is the worst that could happen? We’d still be implementing Hamilton’s preferred policy. In contrast, the Fed is going to deliver mid-2010 NGDP at a level roughly equal to mid-2008 levels. That’s 10 percent below trend, and eight percent below the hypothetical goal I gave Hamilton. I don’t know exactly what target he favors, but from his blog it is clear he thinks it would have been better if AD had not fallen so sharply late last year.
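The compounding behind the figures in this paragraph, written out as a quick check (the growth rates are the ones stated above; the arithmetic is mine):

```python
# Compounding check for the 24-month targets discussed above.
years = 2
trend = 1.05 ** years - 1       # 5% per annum over 24 months
goal = 1.04 ** years - 1        # the hypothetical 4% per annum goal
actual = 0.0                    # mid-2010 NGDP roughly at mid-2008 levels

print(100 * trend)              # ~10.25, i.e. "about 10 percent above current levels"
print(100 * (goal - actual))    # ~8.16, i.e. "eight percent below" the goal
```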

Let’s Make a Deal

Okay, Scott; I won’t insist on two percent NGDP growth; why don’t we just split the remaining difference, making it 2.5, and call it a day. I’m also very happy to accept your suggestion that, for safety’s sake, a move toward free banking ought to be coupled with an arrangement in which the reserve base is adjusted by means of NGDP futures contracting. If free banking really works as my theory says it might, the contracting will in fact lead to steady growth in the base (that is, in bank reserves). Otherwise, the base will vary more. Either way, though, there’s no need for anything beyond a sort of night-watchman version of “monetary policy.” Like I said, a computer could handle it. Indeed, it seems to me that neither of our proposals can be expected to work as planned unless the monetary authority’s wings are clipped in a manner that altogether rules out a revival of monetary discretion. To this extent, at least, it seems we both favor doing away with central banking in the usually understood sense of the term: the one in which central bankers agonize every couple of months or so about the right monetary policy stance.

I’d like nevertheless to make a remark or two concerning the dangers you see in a full-blown productivity-norm regime. The Japanese example is, indeed, troublesome to me: I frankly don’t understand why, earlier this decade, short-run rates there remained obstinately stuck at zero despite the fact that the rate of (total factor) productivity growth exceeded the rate of deflation. I suppose that expectations had been traumatized somehow. Nevertheless, I think Japan’s experience actually offers some support for both of our positions. Consider that, during the first part of its “lost decade” of the 90s, Japan’s inflation rate (according to the GDP deflator) was positive; whereas by the time inflation gave way to substantial deflation, at the end of the decade, Japanese total factor productivity growth had fallen very low; most estimates put it around 0.5 percent. There was, in short, no point during the lost decade when Japan’s situation resembled a productivity norm.

From roughly 1999 through 2005, on the other hand, Japan’s deflation rate did more or less match its rate of productivity growth. But by then the Japanese economy was growing again, if only modestly. This happened in part precisely because the Japanese government had at last turned to quantitative easing: had it not done so Japan’s deflation might well have proceeded well beyond productivity-norm bounds. In short, Japan’s case suggests that deflation (insofar as it doesn’t exceed the bounds of productivity growth) and zero interest rates are each of them red herrings: Japan’s economy tanked when its NGDP growth rate fell dramatically, and it began to recover when the rate stabilized again, even though it stabilized at a very low value. (It has since slumped badly again.)

Turning to the problem of downward nominal wage adjustments, the argument you raise reminds me of Akerlof, Dickens, and Perry’s influential paper on “The Macroeconomics of Low Inflation.” [1] I remember puzzling over their own version of it and eventually concluding that it wasn’t a good reason for resisting productivity-norm deflation, let alone for tolerating inflation. For starters, it begs the question: if we are to resort to monetary expansion as a means to avoid downward pressure on money wage rates, even when that pressure reflects a relative drop in demand only (with positive demand pressure elsewhere), then where do we stop? Should we regret the fact that equilibrium nominal wage rates in, say, the hoop-skirt and slide-rule industries, or at firms like the Tucker (automobile) Corporation, are no longer positive? Could even the most expansionary monetary policy possibly have kept them so?

Okay, that’s being extreme. But there’s a more fundamental point here, which is that there’s a big difference between downward pressure on nominal wage rates that signifies falling real (that is, relative) demand for products made by the labor in question, and downward pressure that merely signifies a general drop in demand. The difference is, simply, that in the former case it isn’t clear that nominal wage rates need to change at all. The “flexible wage” equilibrium may be one in which they stay the same, with labor shifting from the low- to the high-demand firms and industries. That’s transparently the case where labor is both homogeneous and perfectly mobile. Of course, these assumptions don’t generally hold. But even so it’s far from clear that using general wage inflation as a means for getting relative wage rates down in depressed firms and industries without resort to nominal wage cuts serves to preserve rather than to undermine overall economic efficiency.

Finally, I admit that in proposing a link between the NGDP growth rate and the amplitude of the business cycle I’ve gone way out on a limb. It’s very hard to make a theoretical case for this connection. But here goes: as trend NGDP growth increases, prices become less sensitive to current economic conditions (the greater price-dispersion argument), including any short-run fluctuation in NGDP growth (remember how I said that we have to assume NGDP growth isn’t targeted perfectly). This flattens the short-run Phillips Curve, making real output vary more around its natural level.

Oh well, at least I’ve given it a shot.

Now I’d like to say a thing or two about Jeff’s remarks. First of all, Jeff, I’m sorry about all that jargon in my last post. You know it isn’t my style. But I was after all making a digression on where mainstream macroeconomics had gone wrong; and I wanted to put it in terms that the “mainstreamers” I have in mind would regard as reasonably concrete, just in case any of them was listening! I’m sorry if in doing so I seemed to forget my real audience.

I hope I may kiss and make up by saying that my reaction to the idea of charging interest on bank reserves is much the same as Jeff’s. That is, I don’t much like it. For starters it reminds me of schemes like Gesell’s for punishing “hoarders,” which amounted to making them scapegoats for central bankers’ mistakes. Also, if the goal is to reduce reserve demand, the more efficient way to do that is to hurry up and abolish statutory reserve requirements, as Jeff suggests. In any event I’m bound to oppose any reform that risks driving the demand for reserves, and especially reserves held for interbank settlement purposes, to zero, for then how will the value of the fiat dollar be determined once we free bankers succeed in allowing commercial banks to substitute their own notes for outstanding Federal Reserve notes?

Finally, I’ve found the exchange between Scott and Prof. Hamilton very helpful. The practical workings of NGDP futures targeting were, I confess, something of a mystery to me. But in response to Prof. Hamilton’s probing Scott has clarified them a lot. Perhaps there’s still a flaw in the idea — and I hope Prof. Hamilton will not relent if that’s so. Still I hope there isn’t, because the scheme seems to offer more promise than any other monetary targeting scheme I’m aware of for making monetary discretion perfectly superfluous.

Notes

[1] Brookings Papers on Economic Activity, 1996.

Defining the Stance of Monetary Policy Is Harder than It Looks

Jeffrey Hummel has listed four points in a response to our recent discussion, and then one more in a separate post. I agree with much of what he has to say, but there are enough points of disagreement that it’s worth going through these one at a time:

  1. I agree that the financial crisis was a real shock, but would add that I (and I believe Hamilton as well) believe high oil prices also played some role in the early stages of the economic slowdown (2007-08). On regulation, I have mixed views about whether more or less of it is desirable. I do feel that deposit insurance and “too big to fail” have created perverse incentives, and that this explains some of the excessive risk taken by banks in recent years, and by S&Ls in the 1980s. If it is politically impossible to come up with a more sensible insurance scheme, then I think we might want to consider one more regulation. I am attracted to the Canadian system, which I believe has a 20 percent minimum requirement for down payments on mortgages. Even though I don’t think the financial crisis made a severe recession inevitable, there is no question that it at the very least interacted with flawed monetary policy in a highly destructive way. So I think some regulation can be defended on the basis of “second best.” Earlier I mentioned that I would favor less government involvement in other areas, particularly all the entities and regulations that encourage home ownership. I believe most if not all of us agree on this point.
  2. I agree with most of point two, with one exception. I do not believe that Hamilton thinks monetary policy was ineffective last year, at least in the usual Keynesian sense of a liquidity trap. Just a couple days ago in his blog (econbrowser.com) Hamilton mentioned that the Fed began paying interest on reserves last fall in order to prevent high inflation. That view is certainly not consistent with the Keynesian view that monetary policy was ineffective at zero rates. Hamilton can offer his own view, but I am pretty sure that he felt policy could still have boosted AD, but that long and variable policy lags prevented it from having any near term impact on the sharp drop in output last fall.
  3. I agree with Michael Woodford that changes in the expected future path of monetary policy are far more important than changes in the current setting of the monetary instrument. On the other hand I disagree with Woodford’s view that the short term interest rate is the proper monetary instrument and indicator. Instead, I think changes in the future supply of money are of key importance. So I view policy in terms of the monetarist “excess cash balance” transmission mechanism. I disagree with the monetarist view in two areas. First, I focus on changes in the expected future path of policy, not the current setting of policy. Second, although I believe it is best to think of monetary policy in terms of changes in the money supply, because velocity can be unstable I believe changes in NGDP forecasts are the most useful indicator of the stance of policy.
  4. I define monetary policy in terms of expected changes in NGDP. My view that monetary policy was therefore highly contractionary last fall is considered rather bizarre by almost all economists. Hummel suggests that a contractionary monetary policy is one that decreases the money supply, and if NGDP falls because of lower velocity, that should be considered a velocity shock, not tight money. Let’s examine this issue from a different angle. Suppose the Fed had steadily raised rates from two percent to eight percent during 2008. And suppose everything else was the same. The money supply still increased and NGDP still fell sharply in the second half. I claim that most economists would have viewed policy as being highly contractionary. And I think Hummel would agree that most economists would have had this view, as he himself notes that “most economists” view interest rates as the best indicator of monetary policy. But Hummel also states that he finds my critique of the interest rate view of monetary policy to be “persuasively” argued.

So far I haven’t addressed Hummel’s view of policy. But I have shown that the vast majority of economists view my definition of the stance of monetary policy as being bizarre, solely for reasons that Hummel himself finds inadequate: the interest rate indicator. So maybe my view isn’t so preposterous. Now let’s consider the money supply. The Fed nearly doubled the monetary base last fall. The base is the monetary aggregate that I find most useful, because it is a tool that the Fed can directly control.

Many economists think changes in the base are the most straightforward way of defining the stance of monetary policy. But not all. Friedman and Schwartz, as well as some other monetarists, believe that the Fed conducted a “tight money” policy during the early 1930s, despite the fact that the monetary base rose sharply. So now we have two groups that are willing to call money “tight” at the same time as the Fed is pumping lots more money into the economy. One group is mainstream Keynesians of all sorts, those who focus on interest rates. That’s most of the profession. And another group is traditional monetarists, those who think money was tight in the early 1930s.

I don’t know what version of “M” Hummel prefers. But I do know that economists are all over the map as to what the terms “easy money” and “tight money” really mean. In that case I am inclined to throw up my hands and ask this pragmatic question:

In a fiat money world where the central bank has almost limitless ability to pump money into the economy, and impact the expected growth of nominal aggregates, what is the most useful definition of the stance of monetary policy?

Since I believe that the Fed should target the expected growth rate of NGDP on a daily basis, I decided the most useful way to think of “easy money” was as a policy expected to lead to above-target nominal growth, and vice versa. Is this so unusual? I notice that those who favor targeting interest rates (Keynesians) define the stance of monetary policy in terms of interest rates. And I notice that many who favor targeting the money supply (monetarists) tend to define the stance of monetary policy in terms of the money supply. I prefer to target NGDP expectations. So that’s my policy indicator.

In a later post Hummel thoroughly demolished my interest penalty on reserves idea. I just have a few comments. I probably directed the question at the wrong group of economists. It was a thought experiment intended to show that the Fed had not run out of ammunition, as most economists seem to believe. However I don’t believe that this group of economists thinks the Fed ran out of ammunition. So we are outliers. I agree with Hummel that negative interest is not optimal, and that some of my other policies are more than adequate. But I do think he slightly overstates the potential problems.

  • The impact on bank profits can be eliminated by a two tier scheme; negative interest on excess reserves and positive interest on required reserves. These rates can be calibrated so that banks remain competitive with MMMFs.
  • Vault cash is easy to deal with, just cap the amount of interest-free vault cash at a level above what any reasonable bank would want to hold. If the Fed had done so, any future open market operations would, at the margin, go toward expanding the broader aggregates, rather than being hoarded. Then most banks wouldn’t have to bother counting vault cash each week. (Do they do this now? If so, the definition of reserves could include vault cash.)
  • The yield on T-bills might go slightly negative, but only up to the point where it becomes profitable to hoard cash in safety deposit boxes. Again, this isn’t my first choice (NGDP futures targeting is) but I think it could be made workable.
  • I'm not worried about a tax on cash held by the public; it is currently technically impossible to implement. On the other hand, at some point during the 21st century I expect all cash to be removed from circulation and replaced with some sort of electronic money (debit cards or cash cards). I don't like the idea, but I think it is inevitable. At that point interest rate targets will once again be feasible in a liquidity trap, and Keynesian economics may be reborn for a third time.
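The two-tier scheme in the first bullet can be made concrete with a quick back-of-the-envelope calculation. The figures below are purely hypothetical, chosen only to show how the two rates might be calibrated so that a bank's overall return on reserves stays near zero while hoarding excess reserves is still penalized at the margin:

```python
# Hypothetical two-tier interest scheme on bank reserves: a penalty
# (negative) rate on excess reserves, offset by a positive rate on
# required reserves. All numbers are illustrative, not calibrated.

def net_reserve_interest(required, excess, rate_required, rate_excess):
    """Annual interest a bank earns (or pays) on its reserve holdings."""
    return required * rate_required + excess * rate_excess

# Illustrative bank: $10 billion required, $10 billion excess reserves.
required, excess = 10.0, 10.0             # in $ billions
rate_required, rate_excess = 0.02, -0.02  # +2% on required, -2% on excess

net = net_reserve_interest(required, excess, rate_required, rate_excess)
print(f"Net interest on reserves: ${net:+.2f} billion per year")
# The bank roughly breaks even overall, yet every additional dollar of
# excess reserves costs it 2 cents a year -- an incentive to lend
# rather than hoard.
```

The calibration point is that profitability and marginal incentives can be set independently: the required-reserve rate fixes the bank's overall return, while the excess-reserve penalty operates at the margin.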

Despite these reservations, I agree with much of what Jeffrey Hummel wrote, and think he did a good job clarifying the issues for Cato readers.

Some Final Thoughts

Let me take this opportunity to thank the participants for an interesting discussion and offer my perspective on the areas of agreement and disagreement.

I agree with Sumner that achieving a higher level of nominal GDP growth over the last year would have been helpful and that the Fed put insufficient weight on this objective.  In particular, I have misgivings about the Fed's deliberate decision to encourage an explosion of excess reserves.  I nevertheless believe that there were profound problems in financial markets which, once we had entered the fall of 2008, monetary policy could have done little to prevent.  I have emphasized that quarterly changes in nominal GDP, or if you prefer, quarterly shocks to velocity, are beyond the power of the Federal Reserve to control or offset.

In his last entry, Sumner appears to be in agreement with this last claim, and indicates a willingness to discuss nominal GDP targeting as a loose multi-year objective.  This is a much less controversial idea than I originally understood him to be proposing.  My own preference would be to include nominal GDP growth among the many indicators we watch, but there may be substantial common ground between this view and what Sumner proposes.

On the possibility of using futures markets to implement the idea, I am afraid that we are unable to reach common ground here.  My observation that the quantity of futures contracts written cannot be used as a guide to forming monetary policy has nothing to do with whether the Fed is on one side of the contract or not.  Yes, the volume of Fed open market operations is a meaningful number, and yes, if the Fed sets the money supply to zero that will have consequences.  But the whole notion of using these contracts as a guide for policy seemed to rest on the premise that the volume of positions taken by private participants is a meaningful number.  I continue to maintain that price, not quantity, is the relevant signal, and I remain unpersuaded that these contracts could be used in the way that Sumner envisions.

We Can’t Agree on Everything, George…

Otherwise we’ll have nothing to debate at the Southern Economic meetings in November. But seriously, I’d like to offer a conditional acceptance of your productivity norm. Here are the two conditions:

1. Assume that inflation expectations have been reduced to roughly zero. Ideally this would be done very gradually, to prevent any disruption of labor markets.

2. Assume that the Fed has adopted a forward-looking procedure that is not susceptible to “liquidity traps.”

I think the second point is rather important, and in my blog I will continue to argue for mild inflation for the time being. I agree that Japan is not a perfect match for the productivity norm, but the numbers are close enough that I fear combining mild deflation with what I regard as a very dysfunctional system of interest rate targeting and a backward-looking Taylor Rule. Nevertheless, I feel good that at least our ideal systems are converging; I could certainly live with a 2.5 percent NGDP target if implemented with a forward-looking policy regime, such as NGDP futures targeting.

I don't have anything more to say about nominal wage stickiness or the effect of higher trend inflation on the size of business cycles, primarily because I am a bit of an agnostic on both points. I don't worry too much about the Gesell proposal being adopted, partly for reasons I discussed in my reply to Hummel (which was posted after you sent this reply). So let's talk a bit about NGDP futures targeting, as this was the area where James Hamilton was most skeptical.

I now regret not discussing this issue more fully early on. If I am not mistaken, Hamilton argues that the price of NGDP futures would be much more informative than the quantity of futures traded (actually the net long or short position). And in 1995 I did publish a paper suggesting that the Fed could use the price of NGDP futures as a guide to policy. But in 1997 Bernanke and Woodford published a paper (also in the JMCB) that was highly critical of this approach, citing a "circularity problem": if the markets understood that the Fed was looking at NGDP futures prices, and using them to target NGDP at five percent growth, then the equilibrium price would stay close to the target, in anticipation that any deviation would be "corrected" by a change in Fed policy. In that case the market price might stick to the target, and thus fail to provide the Fed with early warning of velocity shocks.

Bernanke and Woodford also suggested an alternative: what the Fed really needs is not the market prediction of the target variable, but the market prediction of the instrument setting most likely to hit the NGDP (or inflation) target. That is what my 1989 and 2006 papers proposed, as did a 1994 proposal by Kevin Dowd that appeared in the Economic Journal.

Not everyone sees my current proposal as achieving that goal. Therefore in 1997 I also proposed a way of eliciting market forecasts of the optimal policy instrument in a somewhat more straightforward manner. In that paper (also JMCB) I proposed a contingent auction of NGDP (or CPI) futures. Traders would fill out a set of schedules indicating how many NGDP futures they wished to buy and sell at a variety of different instrument settings. The policy instrument might be the monetary base, or the fed funds rate. For instance, if it were the base then the Fed would set the monetary base at the level that most closely equilibrated the aggregate net long and short positions of the traders. And before they bid on the contracts the traders would be told that this method would be employed to implement monetary policy. This provides an alternative method of ascertaining the market’s view as to which instrument setting is most likely to result in on-target NGDP growth. Some Treasury bond auctions use a similar procedure.
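The contingent-auction mechanics may be easier to see in miniature. The sketch below is my own toy illustration, not Sumner's 1997 specification: each hypothetical trader submits a schedule of desired net long (+) or short (-) NGDP futures positions at a few candidate settings of the monetary base, and the central bank adopts the setting at which the aggregate net position comes closest to zero.

```python
# Toy sketch of a contingent NGDP-futures auction (illustrative only).
# Traders submit desired net positions (+ long / - short, in contracts)
# at each candidate monetary-base setting (in $ billions).

candidate_bases = [800, 850, 900, 950]  # hypothetical base settings

# Trader schedules: desired net position at each candidate setting.
# A larger base raises expected NGDP, so demand shifts toward short.
schedules = {
    "trader_A": [40, 18, -10, -35],
    "trader_B": [25, 10,  -5, -20],
    "trader_C": [30,  5, -15, -40],
}

def aggregate_net(i):
    """Sum of all traders' desired net positions at base setting i."""
    return sum(s[i] for s in schedules.values())

# The central bank adopts the setting that most nearly equilibrates
# aggregate long and short positions (net position closest to zero).
best = min(range(len(candidate_bases)), key=lambda i: abs(aggregate_net(i)))
print(f"Chosen base: ${candidate_bases[best]} billion "
      f"(aggregate net position: {aggregate_net(best)} contracts)")
```

Because the schedules are submitted before the setting is chosen, and traders know this rule will determine policy, their bids embody their forecasts of which instrument setting delivers on-target NGDP growth.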

In my blog, some commenters have argued that this approach makes more sense. Robin Hanson, who is a leading expert on prediction markets, told me the same thing. So why do I cling to the futures market approach presented in my 1989 and 2006 papers? The main reason is that it does not require policymakers to decide which instrument is best. For instance, today most economists favor an interest rate instrument, so it is likely that the Fed would choose that approach. But what would happen if there were no non-negative interest rate that equilibrated the net short and long positions, in other words, if we were in a liquidity trap? The beauty of my original proposal is that the central bank does not need to decide on a particular instrument setting. The market would literally be directing the open market operations, and each trader could look at whatever policy indicator they thought most informative.

Hamilton was also skeptical of my argument that a more expansionary monetary policy would have greatly reduced the severity of the financial crisis last fall. My argument had two parts. First, I agree with Frederic Mishkin that monetary policy is still highly effective at the zero rate bound. Thus a more expansionary policy could have prevented expected 2009 and 2010 NGDP from falling sharply last fall. And second, most of the second wave of the financial crisis was due to falling NGDP, not bad lending.

For the sake of argument assume that $1 trillion of the estimated $4 trillion in losses was due to poorly thought out mortgage lending, i.e. loans that would have been defaulted on even with stable five percent NGDP growth. Then assume that the other $3 trillion in losses, which became apparent only late last year when estimates of NGDP plunged sharply, represented higher quality residential mortgages, as well as commercial and industrial loans that would have been sound at a five percent NGDP growth rate. If I am right, and most of the financial losses last fall and winter were due to falling NGDP, then a modest uptick in nominal growth should modestly trim those losses. I think we have had a modest uptick in nominal growth in the past few months. Yes, it is very small relative to the huge decline in NGDP, which over the past 12 months has fallen about eight percent below trend. But nevertheless things are looking a bit better.

Take a look at today’s AP story showing how this modest upswing affected the IMF’s estimates of total losses from the financial crisis.

ISTANBUL, Turkey – Likely losses from the financial crisis in the three years to 2010 have been reduced by $600 billion to $3.4 trillion as the world economy grows faster than previously expected, the International Monetary Fund said Wednesday.

And now imagine not a slight uptick, but a much more expansionary policy in 2008, one that kept NGDP growing at five percent rate in 2009 and 2010, even if a somewhat weak fourth quarter was unavoidable. Then assume the recent uptick represented 20 percent of the total collapse in NGDP. What kind of estimate does that imply for the share of financial losses that are due to a weak economy? I think it is very possible that only about $1 trillion of the $4 trillion in losses is due to bad lending. After all, look how little economic growth it took to trim $0.6 trillion off those estimated losses.
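The arithmetic behind that estimate can be spelled out explicitly. The inputs are the rough magnitudes used above, not precise data:

```python
# Back-of-the-envelope version of the loss decomposition in the text.
# All figures are the rough magnitudes quoted above, not precise data.

total_losses = 4.0        # estimated total financial losses, $ trillions
trimmed_by_uptick = 0.6   # IMF reduction after the modest recovery
share_of_collapse = 0.20  # assume the uptick undid 20% of the NGDP fall

# If reversing 20% of the NGDP collapse trimmed $0.6 trillion, then
# reversing all of it would trim proportionally more:
ngdp_driven_losses = trimmed_by_uptick / share_of_collapse  # $3.0T
bad_lending_losses = total_losses - ngdp_driven_losses      # $1.0T

print(f"Losses attributable to falling NGDP: ${ngdp_driven_losses:.1f} trillion")
print(f"Losses attributable to bad lending:  ${bad_lending_losses:.1f} trillion")
```

The linear extrapolation is the strong assumption here; if loss estimates respond nonlinearly to nominal growth, the $1 trillion residual for bad lending could be larger or smaller.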

One minor point: I didn't say that the Fed has no influence over near-term (quarterly) NGDP movements. I think that by stabilizing longer-term NGDP expectations the Fed would also reduce quarterly NGDP volatility. However, I agree with the view that there is an irreducible minimum of near-term NGDP fluctuation that is beyond the influence of monetary policy. But that view is logically consistent with the view that longer-term NGDP uncertainty can make near-term velocity even more unstable than necessary.

Sometimes it is difficult to convey important distinctions that seem like slight nuances. So I thank all the referees for trying to work through these ideas, and apologize if they weren’t always expressed in the clearest possible fashion.

Parting Shots

For my last comment in the conversation portion of our exchange I want to mention how much I have enjoyed and learned from the discussion. I expressed my admiration for Scott Sumner in my initial contribution, and I will add to that my amazement at his ability to respond calmly, rapidly, and in detail to every query or objection raised, including nearly all those made in the comments he receives on his blog. I don’t know how he does it; I couldn’t reply that copiously even if I devoted every waking hour to the activity. As for George Selgin, those who have read my review of his latest book know how much I respect his work. And James Hamilton’s frequent posts at Econbrowser have been some of the most useful explanations and depictions of the Fed’s machinations over the last year or more. He and the others have all brought great insights to the discussion.

I conclude with an elaboration on why the distinction between monetary and velocity shocks deserves far more attention than Scott is willing to give it. The fundamental economic question about business cycles, although often unarticulated, is whether they are primarily market failures or government failures and, if they are market failures, whether government can do anything to alleviate them. This was the basic issue that divided Austrians and orthodox Monetarists on one side from traditional Keynesians on the other. Nearly all other macro differences stemmed from their answers to this question, despite the fact that the conclusions of any particular economist may have had no explicit political motivation. Consider the scrap heap of abandoned Keynesian fallacies — secular stagnation, exhaustion of investment opportunities, administered prices, totally endogenous money stock, etc., etc. — or the fact that New Keynesians have now embraced inflexible prices and wages, the very neoclassical explanation for cyclical unemployment that the early Keynesians rejected. There has been a collective, if unconscious, effort on the part of Keynesians to opportunistically seize whatever handy argument will prove that depressions or recessions require government intervention.

The distinction between money and velocity goes to the heart of this debate. Unless one embraces a pure real business cycle approach, in which depressions and recessions are neither market failures nor government failures but economically optimal, one must blame either government mismanagement of the monetary and financial system, erratic velocity driven by irrational bubbles and "animal spirits," or some combination of the two. Lumping the two together not only obscures the fundamental issue, but also biases the answer toward the view that some government involvement is necessary. The more we get enmeshed in technical controversies over rules versus discretion, over inflation versus interest-rate versus nominal GDP targeting, and over other secondary minutiae that simply assume the necessity of a government central bank, however fascinating they may be, the more we lose sight of the central question of what does in fact cause the business cycle.

Scott's last reply to me echoes Hamilton's initial denigration of the equation of exchange (sometimes misleadingly called "the quantity equation") because there are alternative definitions of M and therefore V. Multiple definitions of M and V merely reflect differences in how we organize our thoughts about the evidence. Thus, if we focus on the behavior of M1 and M2 from 1930 to 1933 during the Great Depression, we observe the bank panics causing a dramatic fall in the money stock. But if instead we focus on the monetary base, we will describe the panics as significantly decreasing base velocity (that is, increasing base money demand). In algebraic terms, VmKm = Vb, where the subscripts m and b refer respectively to a broader definition of money and to the base, and Km is the relevant multiplier for whatever M we are using.[1] If Km falls dramatically, ceteris paribus, Vb must be falling. Notice also that, without a sufficient change in the multiplier, the two velocities always move in the same direction.
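These identities are easy to verify numerically. The figures below are stylized, loosely in the spirit of 1930-33 rather than actual data: the broad money stock contracts while the base grows slightly, so the multiplier Km collapses and base velocity Vb falls with it.

```python
# Numerical check of the equation-of-exchange identities in the text:
# Km = M/B, Vb = Vm * Km, and hence M*Vm = B*Vb (both equal NGDP).
# All figures are stylized, not actual 1930-33 data.

def multiplier_and_base_velocity(M, B, Vm):
    Km = M / B       # money multiplier
    Vb = Vm * Km     # base velocity implied by the identity
    assert abs(M * Vm - B * Vb) < 1e-9  # both sides equal nominal GDP
    return Km, Vb

# Before the panics: broad money $45B on a $7B base, with Vm = 2.0.
Km0, Vb0 = multiplier_and_base_velocity(M=45.0, B=7.0, Vm=2.0)
# After the panics: M falls to $30B even as the base grows to $8B.
Km1, Vb1 = multiplier_and_base_velocity(M=30.0, B=8.0, Vm=2.0)

print(f"Multiplier:    {Km0:.2f} -> {Km1:.2f}")
print(f"Base velocity: {Vb0:.2f} -> {Vb1:.2f}")
# With Vm held fixed, the collapse in the multiplier shows up
# one-for-one as a fall in base velocity (a rise in base money demand).
```

Whether one narrates this as "the money stock fell" or "base velocity fell" is thus a choice of accounting frame, which is precisely the point of the paragraph above.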

These alternative ways of describing the same phenomenon should enhance our understanding, rather than bewilder us, as is clear from Friedman and Schwartz's sophisticated handling of these matters in their Monetary History.[2] Their narrative about the depression is quite clear: they accuse the Fed of pursuing an actively tight policy prior to the onset of the banking panics in October 1930. Once the bank panics began, the Fed failed to counteract the falling multiplier with a sufficient expansion of the base. When the Fed finally did expand the base, it was too little and too late. Which measure is better depends on both convenience and what questions we are asking. What we should be looking at to understand the Great Moderation and the current recession is a puzzle to be explored,[3] not brushed under the rug with a tautological definition of "monetary shocks."

In the final analysis, I find the quest for better, self-enforcing central-bank rules naïve. When the market fails (as it often does), the alleged solution is more (or better) government regulation. And when the government fails, the alleged solution is more (or better) government regulation. This asymmetry, a dramatic illustration of Harold Demsetz’s nirvana fallacy, has already made the financial system one of the most, if not the most, highly regulated and subsidized sectors in the U.S. economy. It is high time to reverse the trend.

Notes

[1] The money multiplier is defined as Km = M/B, where B is the monetary base. If you substitute that definition into the above equation, you get MVm = BVb, which must be true, because both sides of the new equation are equal to nominal GDP.

[2] Milton Friedman and Anna Jacobson Schwartz, A Monetary History of the United States, 1867-1960 (Princeton: Princeton University Press, 1963).

[3] As David Henderson and I have attempted to do in our Cato Briefing. (http://www.cato.org/pub_display.php?pub_id=9756).

Final Thoughts and Thanks

I appreciate Jeffrey’s kind remarks on my monetary ideas and feel very fortunate that James Hamilton, George Selgin and Jeffrey Hummel took the time to give careful consideration to my essay.

There's not much for me to disagree with in Jeffrey's final post (although I ended up a bit more optimistic than he is about the prospects for improving the Fed). I have benefited from reading his writings on economics and history (a subject he knows far more about than I do). I have also greatly benefited from George Selgin's writings on issues such as free banking and the productivity norm. James Hamilton has done excellent work on the Great Depression that was useful in my own research, and his blog, econbrowser.com, has helped me to better understand Fed policy over the past two years, even if we ended up with somewhat different positions. So I thank all three for their participation in Cato Unbound.