A (Gentle) Nudge Toward Gentle Deflation

I’m glad you agree, Scott, that our views are very close. I’d like to make them closer still, even identical. Naturally I’d prefer to make them so by nudging yours a bit further toward mine!

You wonder why I attach as much importance as I do to the trend growth rate of nominal spending, and ask whether, in suggesting that more rapid growth may lead to bigger cycles, I am violating the assumption of monetary super-neutrality. You also generously suggest a defense, viz., that a higher mean spending growth rate will tend to be associated with greater variability of spending. My argument does in fact take that last assumption for granted; but there’s more to it than that. So I’m bound to say that, yes, I deny that money would be super-neutral even if the first and second moments of the nominal growth rate series were uncorrelated.

I know perfectly well that this is asking for trouble — that all kinds of formal macro-models suggest that money is super-neutral, or would be if currency bore nominal interest. (The qualification allows that the equilibrium demand for real money balances may depend in practice on the mean rate of inflation; still it doesn’t imply any link between the latter rate and the amplitude of cycles.) As an appeal to formal models won’t do, let me try an informal approach. The price system has always got plenty of work to do; and I take it we both agree that this work isn’t accomplished without cost. So much is taken for granted in all theories of nominal rigidities. I submit further that, absent complete indexation, the more prices have to change, the more work the price system has to bear, and the greater both the costs of continuous adjustment and the extent to which optimal price-adjustment strategies will allow prices to differ from their full-information values.

Now, bearing this in mind, allow me to resort to a reductio ad absurdum. Imagine an array of nominal income growth rate targets, starting at 5 percent and doubling from there; and tell me, please, when you would start being uncomfortable (considerations of real money demand aside) with recommending the rate in question. Here goes: 5, 10, 20, 40, 80, … are you saying “stop” yet? If so, then you yourself doubt that money is truly super-neutral, even putting the mean-variance correlation aside; and I suspect it’s because you are certain (as I am) that at high rates of income growth not only will various price indices vary more around their mean rates of change, but within any given market at any point in time there will be considerably more dispersion of prices around their full-information flexible-price values.

So far we have plenty of waste from high spending growth, but not cycles. Allow, though, (1) that the authorities aren’t able, despite their best efforts, to adhere perfectly to their chosen growth targets, so that realized nominal income growth is in fact stochastic; (2) that factor price adjustments are generally more costly than adjustments to final-goods prices; and, finally, (3) that regardless of the feedback rule employed, the variance of realized nominal spending growth is positively related to its mean value; and you have almost the whole basis for my favoring a lower target.

“Almost.” Because I also believe there is no good reason for not setting nominal income growth so low as to allow final goods prices to decline at an average rate equal to the rate of growth of total factor productivity. This is the “productivity norm”: a form of nominal income targeting in which the target growth rate is set equal to the growth rate of factor input. The last can, without doing too much violence to reality, generally be reckoned at about 2 percent per year; it is, in any event, hard to estimate more accurately than that. The point of allowing for it is to eliminate the need for any substantial, general downward adjustment of factor prices, and prices of labor services especially, an allowance that seems prudent in light of those prices’ relatively high degree of downward “stickiness.”
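To make the arithmetic explicit: the 2 percent factor-input figure comes from the text above, while the TFP growth figure is my own illustrative assumption. Under the norm, inflation equals the nominal income target minus real output growth, which works out to minus the TFP growth rate.

```python
# Productivity-norm arithmetic (illustrative numbers only).
# Under the norm, nominal income grows at the rate of factor-input growth,
# so output prices fall at roughly the rate of total factor productivity growth.

factor_input_growth = 0.02   # ~2% per year, as reckoned in the text
tfp_growth = 0.02            # assumed TFP growth rate (illustrative)

nominal_income_target = factor_input_growth            # the productivity-norm target
real_output_growth = factor_input_growth + tfp_growth  # inputs plus productivity
inflation = nominal_income_target - real_output_growth

print(f"nominal income target: {nominal_income_target:.0%}")
print(f"implied inflation:     {inflation:.0%}")  # -2%: gentle deflation
```

With any other TFP assumption the same identity holds: prices fall at whatever rate productivity happens to grow.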

And what about output prices? Aren’t they also sticky downward? Since everyone’s talking about where macroeconomists have gone wrong, let me put in my choice for biggest screw-up: it’s the treatment of the degree of nominal rigidity (as represented in Calvo-pricing probability parameters and such) as being independent of the sort of shocks price-setting agents confront. Now, it’s all well and good to speak of a negative velocity shock as giving rise to a sort of externality — Yeager calls it the “who-goes-first” problem — whereby the private gains from downward price adjustments are small or nonexistent even though the social benefits are substantial. But the same can’t reasonably be said for positive productivity shocks. Indeed, it makes little sense to call these “shocks” at all because, although they may come as a surprise to many, they are generally not only anticipated but sought after by the very agents responsible (in the absence of any monetary policy response) for changing prices in response to them. More concretely, they are almost always deliberate results of efforts to cut unit production costs, which efforts are aimed at allowing producers to compete more effectively with their rivals by cutting prices in turn. Consequently, no good end is served by arranging monetary policy so as to spare the “average” producer the need to lower prices in response to a general improvement in productivity. On the contrary: such a strategy is likely to complicate the price-adjustment problem faced by producers, making output market price signals that much “noisier.” What’s more, it will also make factor price signals noisier, by forcing needed upward real wage adjustments to be accomplished by raising relatively sticky nominal wage rates.

Call it hubris, but I daresay that, once the mathematics get worked out so that someone can make a DSGE model realistic enough to allow for the differential stickiness not only of output prices and “wages,” but of all prices depending on the “shocks” taking place, while also taking money’s usefulness into account, that model will suggest a Ramsey-optimal monetary policy that looks a hell of a lot like what I pled for in my 1997 pamphlet. (Which was, after all — to be rather more modest — simply what many prominent economists pled for before their ideas were swept aside in the wake of the Keynesian diversion. [1]) Indeed, several recent models of this sort already come very close despite not allowing for money’s usefulness. [2] Needless to say, allowing money balances to be either a direct or an indirect source of utility only strengthens the productivity norm case.

Lastly, on the matter of the ideal income growth rate, I think that, so far as the productivity norm is concerned, there’s no reason to fear that it would occasion a negative neutral nominal interest rate, for the simple reason that the real neutral rate will almost certainly never fall short of the rate of productivity growth (which informs future real income expectations).
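The reasoning can be checked with illustrative numbers of my own choosing: under the norm, expected inflation equals minus the TFP growth rate, so the nominal neutral rate stays non-negative whenever the real neutral rate is at least as high as productivity growth.

```python
# Illustrative check: the nominal neutral rate under productivity-norm deflation.
real_neutral_rate = 0.03   # assumed: real neutral rate, >= productivity growth
tfp_growth = 0.02          # assumed productivity growth; prices fall at this rate

expected_inflation = -tfp_growth
nominal_neutral_rate = real_neutral_rate + expected_inflation
assert nominal_neutral_rate >= 0  # holds whenever real_neutral_rate >= tfp_growth
print(f"nominal neutral rate: {nominal_neutral_rate:.0%}")  # 1%
```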

Turning to free banking: the argument here, in essence, is that it simplifies the task of nominal income targeting, and especially of targeting nominal income by means of a McCallum-type monetary base feedback rule, by making for a more stable relation between the stock of base money on the one hand and the equilibrium level of nominal spending on the other. In other words, free banking can help stabilize the income velocity of the monetary base. To understand why, first consider a run-of-the-mill model of the precautionary demand for money of the sort often used to represent the public’s demand for money but originally developed by Francis Edgeworth to model banks’ demand for reserves.[3] Such models typically make the aggregate demand for reserves proportional to the standard deviation of an individual bank’s net reserve loss, which in turn increases with the total volume of bank-money payments, though less than proportionately (e.g., the “square root law”).
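A quick simulation conveys the flavor of the square-root law (this is my own illustrative sketch, not Edgeworth’s original formulation): treat each payment as shifting a unit of reserves randomly toward or away from a bank, and the standard deviation of the bank’s net reserve loss grows roughly with the square root of the number of payments.

```python
import math
import random

# Illustrative sketch of the "square root law": a bank's net reserve loss over
# n independent unit payments has standard deviation ~ sqrt(n), so the
# precautionary reserve demand it implies rises less than proportionately
# with the volume of payments.

def net_loss_std(n_payments: int, trials: int = 1000) -> float:
    """Estimate the standard deviation of a bank's net reserve loss."""
    losses = []
    for _ in range(trials):
        loss = sum(random.choice((-1, 1)) for _ in range(n_payments))
        losses.append(loss)
    mean = sum(losses) / trials
    return math.sqrt(sum((x - mean) ** 2 for x in losses) / trials)

random.seed(0)
for n in (100, 400, 1600):                # quadrupling the payment volume...
    print(n, round(net_loss_std(n), 1))   # ...roughly doubles the std (~sqrt(n))
```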

Next suppose that the stock of bank reserves is fixed. In that case, there will be a unique equilibrium volume of payments consistent with reserve-market clearing. It follows that, for any particular bank money aggregate, there must be a tendency for the supply of that aggregate to adjust so as to compensate for changes in its velocity if reserve-market equilibrium is to be preserved. I develop the argument in some detail in my Theory of Free Banking [4] and (more formally) in a short paper I wrote for the Economic Journal’s “Policy Forum.” [5] Diagrammatically, it looks like this:

Admittedly, the argument refers to a tendency for the total volume of bank money expenditures to bear a stable relation to the available stock of bank reserves. To get to stable base income velocity you have to assume (1) that total bank-money transactions are a relatively stable (if nevertheless changing) multiple of income transactions and (2) that targeting the base is equivalent to targeting bank reserves.
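The mechanism can be put in toy-model terms (the functional form follows the square-root law discussed above; the parameter values are mine, purely illustrative): with aggregate reserve demand proportional to the square root of payments, a fixed reserve stock pins down a unique equilibrium payments volume, so the bank money supply must move inversely with its velocity.

```python
# Toy version of the argument: reserve demand R = k * sqrt(T), where T is the
# volume of bank-money payments. With the reserve stock fixed at R_bar,
# equilibrium payments are T* = (R_bar / k) ** 2, regardless of velocity.

k = 0.5        # assumed precautionary-demand coefficient (illustrative)
R_bar = 100.0  # fixed stock of bank reserves (illustrative)

T_star = (R_bar / k) ** 2   # equilibrium payments volume: 40,000

# If bank-money velocity V changes, the equilibrium money stock M = T*/V
# adjusts to offset it, leaving total spending T* unchanged.
for V in (4.0, 5.0, 8.0):
    M = T_star / V
    print(f"V = {V}: M = {M:,.0f}, M*V = {M * V:,.0f}")  # M*V stays at 40,000
```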

The first of these assumptions isn’t all that heroic. The second is where free banking becomes crucial, for the assumption depends not only on banks being free to set their reserve ratios without having to heed binding statutory requirements, but also on their collective reserves being unaffected by changes in the public’s desired currency-deposit ratio. So long as the public can’t simply dispense with paper currency altogether, the last requirement can be met only by letting commercial banks meet public requests for paper currency using their own notes, as they do, for instance, in Scotland today, and as they did in many more instances (and with fewer regulatory restrictions) both in Scotland and dozens of other places in the past.

What all this boils down to is that there are forces at work in the banking system that can make stabilizing nominal income easier, and that policymakers should take as much advantage of those forces as possible. Doing so will make it easier to achieve some desired growth rate of nominal spending by simply controlling the growth rate of the monetary base, something any central banker, or even a computer program, can do.
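As a rough sketch of what such a rule might look like (a simplified illustration of the McCallum-type idea, not McCallum’s exact specification, and with parameter values of my own choosing): grow the base at the spending-growth target, subtract recent average base-velocity growth, and partially correct for last period’s target miss.

```python
# Simplified McCallum-type monetary base feedback rule (illustrative sketch).

def base_growth(target: float, avg_velocity_growth: float,
                last_spending_growth: float, feedback: float = 0.5) -> float:
    """Base growth = target, adjusted for velocity drift and the last miss."""
    return target - avg_velocity_growth + feedback * (target - last_spending_growth)

# Example: a 2% spending-growth target, base velocity drifting up 1% per year,
# and spending growth that came in at 1% last period.
print(f"{base_growth(0.02, 0.01, 0.01):.1%}")  # 1.5%
```

Mechanical as it looks, this is the sense in which “even a computer program” could run the policy, and the more stable base velocity is, the smaller the adjustment terms need to be.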

Of course there’s a lot more to the case for free banking than this, including arguments to the effect that, despite not involving deposit insurance, it would offer better protection against runs and panics than existing arrangements do. But this isn’t the place for me to go into them, so I’ll settle for hoping that the point about base velocity stabilization will whet your appetite enough to encourage you to look into the writings of Kevin Dowd, Larry White, and myself on the topic.


[1] Cf. George Selgin (1995), “The ‘productivity norm’ versus zero inflation in the history of economic thought,” History of Political Economy 27 (4).

[2] See, for example, Rochelle M. Edge, Thomas Laubach, and John C. Williams (2005), “Monetary policy and shifts in long-run productivity growth” (mimeo, Federal Reserve Board); Stephanie Schmitt-Grohé and Martín Uribe (2006), “Optimal Inflation in a Medium-Scale Macroeconomic Model” (mimeo).

[3] F. Y. Edgeworth (1888), “The mathematical theory of banking.” Journal of the Royal Statistical Society 51 (1) (March).

[4] Totowa, New Jersey: Rowman & Littlefield, 1988.

[5] “Free banking and monetary control,” The Economic Journal 104 (1994).
