I. Marx vs. Smith and food banks
When Heinz produces too many Bagel Bites, or Kellogg produces too many Pop-Tarts, or whatever, these mammoth food-processing companies can donate their surplus food to Feeding America, a national food bank. Feeding America then distributes these corporate donations to local food banks throughout the country.

What’s the economically optimal way to allocate the donations across the country?

Option one is what you might call “full communism.” Under full communism, Feeding America collects the food donations and then top-down tells individual food banks what endowments they will be receiving, based on Feeding America’s own calculation of which food banks need what.

Prior to 2005, this was indeed what occurred: food was distributed by centralized assignment. Full communism!

The problem was one of distributed versus centralized knowledge. While Feeding America had very good knowledge of poverty rates around the country, and thus could measure need in different areas, it was not as good at dealing with idiosyncratic local issues.

Food banks in Idaho don’t need a truckload of potatoes, for example, and Feeding America might fail to take this into account. Or maybe the Chicago regional food bank just this week received a large direct donation of peanut butter from a local food drive, and then Feeding America comes along and says that it has two tons of peanut butter that it is sending to Chicago.

To an economist, this problem screams of the Hayekian knowledge problem. Even a benevolent central planner will be hard-pressed to efficiently allocate resources in a society since it is simply too difficult for a centralized system to collect information on all local variation in needs, preferences, and abilities.

This knowledge problem leads to option two: market capitalism. Unlike poorly informed central planners, the decentralized price system – i.e., the free market – can (often but not always) do an extremely good job of aggregating local information to efficiently allocate scarce resources. This result is known as the First Welfare Theorem.

Such a system was created for Feeding America with the help of four Chicago Booth economists in 2005. Instead of centralized allocation, food banks were given fake money – with needier food banks being given more – and allowed to bid for different types of food in online auctions. Prices are thus determined by supply and demand.

At midnight each day all of the (fake) money spent that day is redistributed, according to the same formula as the initial allocation. Accordingly, any food bank which does not bid today will have more money to bid with tomorrow.
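The nightly recycling rule can be sketched in a few lines of Python (the bank names, need shares, and balances are hypothetical illustrations, not Feeding America’s actual figures):

```python
# Hypothetical sketch of the midnight redistribution: every unit of fake
# money spent in today's auctions is reissued to food banks in proportion
# to their need-based shares (the same formula as the initial allocation).
need_shares = {"Chicago": 0.5, "Boise": 0.3, "Tulsa": 0.2}    # sum to 1
balances    = {"Chicago": 100.0, "Boise": 60.0, "Tulsa": 40.0}
spent_today = {"Chicago": 30.0, "Boise": 0.0, "Tulsa": 10.0}  # Boise sat out

total_spent = sum(spent_today.values())
for bank in balances:
    balances[bank] -= spent_today[bank]                # pay for today's wins
    balances[bank] += need_shares[bank] * total_spent  # midnight reissue
```

Note how a bank that sat out today (Boise) ends the day with more money to bid with tomorrow, while the total supply of fake money is unchanged.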

Under this system, the Chicago food bank does not have to bid on peanut butter if it has just received a large peanut butter donation from another source. The Idaho food bank, in turn, can skip bidding on potatoes and instead bid for extra peanut butter at a lower price. It’s win-win-win.

By all accounts, the system has worked brilliantly. Food banks are happier with their allocations; donations have gone up as donors have more confidence that their donations will actually be used. Chalk one up for economic theory.

II. MV=PY, information frictions, and food banks
This is all pretty neat, but here’s the really interesting question: what is optimal monetary policy for the food bank economy?

Remember that food banks are bidding for peanut butter or cereal or mini pizzas with units of fake money. Feeding America has to decide if and how the fake money supply should grow over time, and how to allocate new units of fake money. That’s monetary policy!

Here’s the problem for Feeding America when thinking about optimal monetary policy. Feeding America wants to ensure that changes in prices are informative for food banks when they bid. In the words of one of the Booth economists who helped design the system:

“Suppose I am a small food bank; I really want a truckload of cereal. I haven’t bid on cereal for, like, a year and a half, so I’m not really sure I should be paying for it. But what you can do on the website, you basically click a link and when you click that link it says: This is what the history of prices is for cereal over the last 5 years. And what we wanted to do is set up a system whereby by observing that history of prices, it gave you a reasonable instinct for what you should be bidding.”

That is, food banks face information frictions: individual food banks are not completely aware of economic conditions and only occasionally update their knowledge of the state of the world. This is because obtaining such information is time-consuming and costly.

Relating this to our question of optimal monetary policy for the food bank economy: How should the fake money supply be set, taking into consideration this friction?

Obviously, if Feeding America were to randomly double the supply of (fake) money, then all prices would double, and this would be confusing for food banks. A food bank might go online to bid for peanut butter, see that the price has doubled, and mistakenly think that demand specifically for peanut butter has surged.

This “monetary misperception” would distort decision making: the food bank wants peanut butter, but might bid for a cheaper good like chicken noodle soup, thinking that peanut butter is really scarce at the moment.

Clearly, random variation in the money supply is not a good idea. More generally, how should Feeding America set the money supply?

One natural idea is to copy what real-world central banks do: target inflation.

The Fed targets something like 2% inflation. But, if the price of a box of pasta and other foods were to rise 2% per year, that might be confusing for food banks, so let’s suppose a 0% inflation target instead.

It turns out inflation targeting is not a good idea! In the presence of the information frictions described above, inflation targeting will only sow confusion. Here’s why.

As I go through this, keep in the back of your mind: if households and firms in the real-world macroeconomy face similar information frictions, then – and this is the punchline of this entire post – perhaps inflation targeting is a bad idea in the real world as well.

III. Monetary misperceptions
I demonstrate the following argument rigorously in a formal mathematical model in a paper, “Monetary Misperceptions: Optimal Monetary Policy under Incomplete Information,” using a microfounded Lucas Islands model. The intuition for why inflation targeting is problematic is as follows.

Suppose the total quantity of all donations doubles.

You’re a food bank and go to bid on Cheerios, and find that there are twice as many boxes of Cheerios available today as yesterday. You’re going to want to bid something like half of yesterday’s price.

Every other food bank looking at every other item will have the same thought. Aggregate inflation thus would be something like -50%, as all prices would drop by half.

As a result, under inflation targeting, the money supply would simultaneously have to double to keep inflation at zero. But this would be confusing: seeing the quantity of Cheerios double but the price remain the same, you won’t be able to tell whether the price has remained the same because
(a) The central bank has doubled the money supply
or
(b) Demand specifically for Cheerios has jumped up quite a bit

It’s a signal extraction problem, and rationally you’re going to put some weight on both of these possibilities. However, only the first possibility actually occurred.
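The weights a food bank rationally places on the two possibilities can be made concrete with the textbook normal-normal signal-extraction formula (the shock variances here are illustrative assumptions):

```python
# A food bank observes one noisy signal: the log price change of Cheerios,
# p = m + d, where m is an aggregate (money-supply) component and d is a
# Cheerios-specific demand component. With normal priors, the best guess
# of the good-specific component is a fixed fraction of the observed change.
var_m = 1.0  # variance of aggregate money shocks (assumed)
var_d = 1.0  # variance of good-specific demand shocks (assumed)

weight_on_demand = var_d / (var_d + var_m)  # signal-extraction ratio

p_observed = 0.10  # Cheerios price up 10 percent
est_demand = weight_on_demand * p_observed
est_money  = (1 - weight_on_demand) * p_observed
# With equal variances, the bank attributes half of the price change to
# Cheerios-specific demand -- even if the true cause was entirely monetary.
```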

This problem leads to all sorts of monetary misperceptions, as money supply growth creates confusion – hence the title of my paper.

Inflation targeting, in this case, is very suboptimal. Price level variation provides useful information to agents.

IV. Optimal monetary policy
As I work out formally in the paper, optimal policy is instead something close to a nominal income (NGDP) target. Under log utility, it is exactly a nominal income target. (I’ve previously written more critically about nominal income targeting here.)

Nominal income targeting in this case means that the money supply should not respond to aggregate supply shocks. In the context of our food banks, this result means that the money supply should not be altered in response to an increase or decrease in aggregate donations.

Instead, if the total quantity of all donations doubles, then the price level should be allowed to fall by (roughly) half. This policy prevents the confusion described above.
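In the MV=PY terms of this post’s section II title, with velocity held fixed for illustration, the two regimes can be sketched as:

```python
def price_level(M, Y, V=1.0):
    """Quantity-equation price level: P = M * V / Y."""
    return M * V / Y

M0, Y0 = 100.0, 100.0           # illustrative units of fake money and food
P0 = price_level(M0, Y0)        # baseline price level

Y1 = 2 * Y0                     # total donations double

# Inflation targeting: the money supply must double to hold P fixed --
# the policy that creates the confusion described above.
P_inflation_target = price_level(2 * M0, Y1)

# Nominal income (NGDP) targeting: M is left alone, so P falls by half,
# and the lower price level itself signals the aggregate abundance.
P_ngdp_target = price_level(M0, Y1)
```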

Restating, the intuition is this. Under optimal policy, the aggregate price level acts as a coordination mechanism, analogous to the way that relative prices convey useful information to agents about the relative scarcity of different goods. When total donations double, the aggregate price level halves, signaling that aggregate output is less scarce.

It turns out that nominal income targeting is only exactly optimal (as opposed to approximately optimal) under some special conditions. I’ll save that discussion for another post though.

Feeding America, by the way, does not target constant inflation. They instead target “zero inflation for a given good if demand and supply conditions are unchanged.” This alternative is a move in the direction of a nominal income target.

V. Real-world macroeconomic implications
I want to claim that the information frictions facing food banks also apply to the real economy, and as a result, the Federal Reserve and other central banks should consider adopting a nominal income target. Let me tell a story to illustrate the point.

Consider the owner of an isolated bakery. Suppose one day, all of the customers seen by the baker spend twice as much money as the customers from the day before.

The baker has two options. She can interpret this increased demand as customers having come to appreciate the superior quality of her baked goods, and thus increase her production to match the new demand. Alternatively, she could interpret this increased spending as evidence that there is simply more money in the economy as a whole, and that she should merely increase her prices proportionally to account for inflation.

Economic agents confounding these two effects is the source of economic booms and busts, according to this model. This is exactly analogous to the problem faced by food banks trying to decide how much to bid at auction.

To the extent that these frictions are quantitatively important in the real world, central banks like the Fed and ECB should consider moving away from their inflation targeting regimes and toward something like a nominal income target, as Feeding America has.

VI. Summing up
Nominal income targeting has recently enjoyed a surge in popularity among academic monetary economists, so the fact that this result aligns with that intuition is pretty interesting.

To sum up, I’ll use a metaphor from Selgin (1997).

Consider listening to a symphony on the radio. Randomly turning the volume knob up and down merely detracts from the musical performance (random variation in the price level is not useful). But, the changing volume of the orchestra players themselves, from quieter to louder and back down again, is an integral part of the performance (the price level should adjust with natural variations in the supply of food donations). The changing volume of the orchestra should not be smoothed out to maintain a constant volume (constant inflation is not optimal).

Central banks may want to consider allowing the orchestra to do its job, and reconsider inflation targeting as a strategy.

Behavioral economists have a concept called loss aversion. It’s almost always described something like this:

“Loss aversion implies that one who loses $100 will lose more satisfaction than another person will gain satisfaction from a $100 windfall.”
Wikipedia, as of December 2015

Sounds eminently reasonable, right? Some might say so reasonable, in fact, that it’s crazy that those darn neoclassical economists don’t incorporate such an obvious, fundamental fact about human nature in their models.

It is crazy – because it’s not true! The pop definition of loss aversion given above – that ‘losses hurt more than equivalent-sized gains’ – is precisely the concept of diminishing marginal utility (DMU) that is boringly standard in price theory.

Loss aversion is, in fact, a distinct and (perhaps) useful concept. But somewhat obnoxiously, behavioral economists, particularly in their popular writings, have a tendency to conflate it with DMU in a way that makes the concept seem far more intuitive than it is, and in the process wrongly makes standard price theory look bad.

I’m not just cherry-picking a bad Wikipedia edit. I name names at the bottom of this post, listing where behavioral economists – Thaler, Kahneman, Sunstein, Dubner, etc. – have (often!) given the same misleading definition. It’s wrong! Loss aversion is about reference dependence.

To restate, what I’m claiming is this:

1. Behavioral economists use an incorrect definition of loss aversion when writing for popular audiences
2. This incorrect definition is in fact the property of DMU that is assumed in all of neoclassical economics
3. DMU is much more intuitive than the real definition of loss aversion, and so by using a false definition of loss aversion behavioral economists make neoclassical economics look unnecessarily bad and behavioral economics look misleadingly good

Let me walk through the difference between DMU and loss aversion painstakingly slowly:

Diminishing marginal utility
“Diminishing marginal utility” is the idea that the more you have of something, the less you get out of having a little bit more of it. For example:

If you own nothing but $1,000 and the clothes on your back, and I then give you $100,000, that is going to give you a heck of a lot more extra happiness than if you had $100 million and I gave you $100,000.

An important corollary follows immediately from this: losses hurt more than gains!

I made a super high quality illustration to depict this:

What we have here is a graph of your utility as a function of your wealth under extremely standard (i.e., non-behavioral) assumptions. The fact that the line flattens out as you get to higher wealth levels is the property of DMU.

We can also see that equivalently sized losses hurt more than gains. As you go from 10k wealth to 2k wealth (middle green line to bottom green line), your utility falls by more than the amount your utility rises if you go from 10k wealth to 18k wealth (middle green to top green lines), despite the change in wealth being the same 8k in both directions.
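To put concrete numbers on the graph, take log utility as the concave function (one standard choice; the wealth levels are the ones from the figure):

```python
import math

def u(wealth):
    """Log utility: a standard specification with diminishing marginal utility."""
    return math.log(wealth)

gain = u(18_000) - u(10_000)  # utility change from gaining 8k
loss = u(10_000) - u(2_000)   # utility change from losing 8k

# The same 8k change in wealth hurts far more on the way down:
# the gain is about 0.59 utils, the loss about 1.61 utils.
```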

Standard economics will always assume DMU, thus capturing exactly the intuition of the idea described in the above Wikipedia definition of loss aversion.

More mathematically – and I’m going to breeze through this – if your utility is purely a function of your wealth, Utility=U(W), then we assume that U'(W)>0 but U''(W)<0, i.e. your utility function is concave. With these assumptions, the result that U(W+ε)-U(W) < U(W)-U(W-ε) follows from taking a Taylor expansion. See proof attached below.
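For completeness, here is my own sketch of that Taylor-expansion argument:

```latex
U(W \pm \varepsilon) = U(W) \pm U'(W)\varepsilon + \tfrac{1}{2}U''(W)\varepsilon^{2} + o(\varepsilon^{2})
\quad\Longrightarrow\quad
\underbrace{\bigl[U(W+\varepsilon)-U(W)\bigr]}_{\text{gain}}
- \underbrace{\bigl[U(W)-U(W-\varepsilon)\bigr]}_{\text{loss}}
\approx U''(W)\varepsilon^{2} < 0
```

since U''(W) < 0 by concavity: the utility gained is smaller than the utility lost.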

Loss aversion
Loss aversion is a consequence of reference dependence and is an entirely different beast. The mathematical formulation was first made in Tversky and Kahneman (1991).
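For concreteness, a minimal sketch of their value function, using the parameter estimates commonly attributed to Tversky and Kahneman (loss-aversion coefficient λ ≈ 2.25, curvature ≈ 0.88):

```python
def kt_value(x, alpha=0.88, lam=2.25):
    """Tversky-Kahneman value function, defined over gains and losses x
    measured relative to a reference point (not over total wealth)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# Reference dependence: the same final wealth of $20,000 feels different
# depending on whether you arrived from $10,000 below or $10,000 above.
felt_gain = kt_value(10_000)   # started with 10k, gained 10k
felt_loss = kt_value(-10_000)  # started with 30k, lost 10k
# The loss looms lambda = 2.25 times larger than the equal-sized gain.
```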

In words, loss aversion says this: Suppose you have nothing but the clothes you’re wearing and $10,000 in your pocket, and then another $10,000 appears in your pocket out of nowhere. Your level of utility/happiness will now be some quantity given your wealth of $20,000.

Now consider a situation where you only own your clothes and the $30,000 in your pocket. Suppose suddenly $10,000 in your pocket disappears. Your total wealth is $20,000 – that is, exactly the same as in the prior situation. Loss aversion predicts that your level of utility will be lower in this situation than in the first, despite the fact that in both situations your wealth is exactly $20,000, because here you lost money to get there.

Perhaps this concept of loss aversion is reasonable in some situations. It doesn’t seem crazy to think that people don’t like to lose things they had before. But this concept is entirely different from the idea that ‘people dislike losses more than they like gains’ which sloppy behavioral economists go around blathering about. It’s about reference dependence! Your utility depends on your reference point: did you start with higher or lower wealth than you currently have?

In their academic papers, behavioral economists are very clear on the distinction. The use of math in formal economic models imposes precision. But when writing for a popular audience in the less precise language of English – see below for examples – the same economists slip into using an incorrect definition of loss aversion.

Conclusion
So, please, don’t go around claiming that behavioral economists are incorporating some brilliant newfound insight that people hate losses more than they like gains. We’ve known about this in price theory since Alfred Marshall’s 1890 Principles of Economics.

Addendum
It’s kind of silly for me to write this post without naming names. Here we go:

1. Richard Thaler, one of the founding fathers of behavioral economics, in his 2015 bestseller, Misbehaving:
2. Richard Thaler, in the 2008 bestseller, Nudge:
3. Cass Sunstein (Oct. 2015), Harvard law and behavioral economics professor:
4. Daniel Kahneman, Nobel Prize-winning behavioral economist, in his 2011 bestseller, Thinking Fast and Slow:
5. Stephen Dubner (Nov. 2005):
6. New York Times (Dec. 2013):
7. The Economist (Feb. 2015):

I should note that Tversky and Kahneman, in their original paper describing loss aversion, are admirably clear in their usage of the concept: the title of their QJE paper is Loss Aversion in Riskless Choice: A Reference-Dependent Model, explicitly highlighting the notion of reference dependence.

Until very recently – see last month’s WSJ survey of economists – the FOMC was widely expected to raise the target federal funds rate this week at its September meeting.

Whether or not the Fed should be raising rates is a question that has received much attention from a variety of angles. What I want to do in this post is answer that question from a very specific angle: the perspective of a New Keynesian economist.

Why the New Keynesian perspective? There is certainly a lot to fault in the New Keynesian model (see e.g. Josh Hendrickson). However, the New Keynesian framework dominates the Fed and other central banks across the world. If we take the New Keynesian approach seriously, we can see what policymakers should be doing according to their own preferred framework.

The punch line is that the Fed raising rates now is the exact opposite of what the New Keynesian model of a liquidity trap recommends. If you’re a New Keynesian, this is the critical moment in monetary policy.

For New Keynesians, the zero lower bound can cause a recession, but it need not result in a deep depression, as long as the central bank credibly promises to create an economic boom after the zero lower bound (ZLB) ceases to be binding. That promise of future growth is sufficient to prevent a depression.
If the central bank instead promises to return to business as normal as soon as the ZLB stops binding, the result is a deep depression while the economy is trapped at the ZLB, like we saw in 2008 and continue to see in Europe today. The Fed appears poised to validate earlier expectations that it would indeed return to business as normal.

If the New Keynesian model is accurate, this is extremely important. By not creating a boom today, the Fed is destroying any credibility it has for the next time we hit the ZLB (which will almost certainly occur during the next recession). It won’t credibly be able to promise to create a boom after the recession ends, since everyone will remember that it did not do so after the 2008 recession. The result, according to New Keynesian theory, will be another depression.

I. The theory: an overview of the New Keynesian liquidity trap
I have attached at the bottom of this post a reference sheet going into more detail on Eggertsson and Woodford (2003), the definitive paper on the New Keynesian liquidity trap. Here, I summarize at a high level – skip to section II if you are familiar with the model.

A. The NK model without a ZLB
Let’s start by sketching the standard NK model without a zero lower bound, and then see how including the ZLB changes optimal monetary policy.

The basic canonical New Keynesian model of the economy has no zero lower bound on interest rates and thus no liquidity traps (in the NK context, a liquidity trap is defined as a period when the nominal interest rate is constrained at zero). Households earn income through labor and use that income to buy a variety of consumption goods and consume them to receive utility. Firms, which have some monopoly power, hire labor and sell goods to maximize their profits. Each period, a randomly selected subset of firms cannot change their prices (Calvo price stickiness).
With this setup, the optimal monetary policy is to have the central bank manipulate the nominal interest rate such that the real interest rate matches the “natural interest rate,” which is the interest rate that would prevail in the absence of economic frictions. The intuition is that by matching the actual interest rate to the “natural” one, the central bank causes the economy to behave as if there were no frictions, which is desirable.

In our basic environment without a ZLB, a policy of targeting zero percent inflation via a Taylor rule for the interest rate exactly achieves the goal of matching the real rate to the natural rate. Thus optimal monetary policy results in no inflation, no recessions, and everyone’s the happiest that they could possibly be.

B. The NK liquidity trap
The New Keynesian model of a liquidity trap is exactly the same as the model described above, with one single additional equation: the nominal interest rate must always be greater than or equal to zero.

This small change has significant consequences. Whereas before zero inflation targeting made everyone happy, now such a policy can cause a severe depression. The problem is that sometimes the interest rate should be less than zero, and the ZLB can prevent it from getting there.

As in the canonical model without a ZLB, optimal monetary policy would still have the central bank match the real interest rate to the natural interest rate. Now that we have a zero lower bound, however, if the central bank targets zero inflation, then the real interest rate won’t be able to match the natural interest rate if the natural interest rate ever falls below zero! And that, in one run-on sentence, is the New Keynesian liquidity trap.

Optimal policy is no longer zero inflation. The new optimal policy rule is considerably more complex, and I refer you to the attached reference sheet for full details.
But the essence of the idea is quite intuitive: if the economy ever gets stuck at the ZLB, the central bank must promise that as soon as the ZLB is no longer binding, it will create inflation and an economic boom.

The promise of a future boom increases the inflation expectations of forward-looking households and firms. These increased inflation expectations reduce the real interest rate today, which in turn encourages consumption today, diminishing the depth of the recession. All of this today, despite the fact that the boom won’t occur until perhaps far into the future! Expectations are important; indeed, they are the essence of monetary policy.

C. An illustration of optimal policy
Eggertsson (2008) illustrates this principle nicely in the following simulation. Suppose the natural rate is below the ZLB for 15 quarters. The dashed line shows the response of the economy to a zero-inflation target, and the solid line the response to the optimal policy described above.

Under optimal policy (solid line), we see in the first panel that the interest rate is kept at zero even after period 15, when the ZLB ceases to bind. As a result, we see in panels two and three that the depth of the recession is reduced to almost zero; there is no massive deflation; and there’s a nice juicy boom after the liquidity trap ends. In contrast, under the dashed line – which you can sort of think of as closer to the Fed’s current history-independent policy – there is deflation and economic disaster.

II. We’re leaving the liquidity trap; where’s our boom?
To be completely fair, we cannot yet say that the Fed has failed to follow its own model. We first must show that the ZLB has only recently ceased, or will soon cease, to be binding. Otherwise, a defender of the Fed could argue that the lower bound ceased to bind years ago and that the Fed has already held rates low for an extended period.
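The commitment mechanism described in section I can be sketched in a toy perfect-foresight version of the two-equation NK model (the parameter values and the six-quarter promise below are my own assumptions, not the optimal rule or Eggertsson’s calibration):

```python
# Toy perfect-foresight NK experiment: IS curve plus Phillips curve, with
# the natural rate negative for 15 quarters. All parameters are assumed.
beta, sigma, kappa = 0.99, 1.0, 0.02
T = 40                  # horizon in quarters
zlb_spell = 15          # natural rate is negative for quarters 0..14
r_nat = [-0.02 if t < zlb_spell else 0.01 for t in range(T)]

def simulate(extra_zero_quarters):
    """Solve backward under perfect foresight. Policy: i = max(0, r_nat),
    except held at zero for extra_zero_quarters after the trap ends."""
    x = [0.0] * (T + 1)   # output gap; terminal condition x_T = 0
    pi = [0.0] * (T + 1)  # inflation; terminal condition pi_T = 0
    for t in reversed(range(T)):
        i = 0.0 if t < zlb_spell + extra_zero_quarters else max(0.0, r_nat[t])
        x[t] = x[t + 1] - sigma * (i - pi[t + 1] - r_nat[t])  # IS curve
        pi[t] = beta * pi[t + 1] + kappa * x[t]               # Phillips curve
    return x[:T], pi[:T]

x_exit, _ = simulate(0)  # business as usual the moment the ZLB lifts
x_hold, _ = simulate(6)  # promise six extra quarters at zero

# Holding rates at zero past the trap creates a small boom after quarter 15
# and a much milder recession during the trap itself.
```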
The difficulty is that estimating the natural interest rate is extremely challenging, as famously argued by Milton Friedman (1968). That said, several different models using varied estimation methodologies all point to the economy still being on the cusp of the ZLB – and thus to the thesis of this post: the Fed is acting in serious error.

Consider, most tellingly, the New York Fed’s own model! The NY Fed’s medium-scale DSGE model is at its core the exact same as the basic canonical NK model described above, with a lot of bells and whistles grafted on. The calibrated model takes in a whole jumble of data – real GDP, financial market prices, consumption, forecast inflation, the kitchen sink – and spits out economic forecasts. It can also tell us what it thinks the natural interest rate is. From the perspective of the New York Fed DSGE team, the economy is only just exiting the ZLB:

Barsky et al. (2014) of the Chicago Fed perform a similar exercise with their own DSGE model and come to the same conclusion:

Instead of using a microfounded DSGE model, John Williams and Thomas Laubach – president of the Federal Reserve Bank of San Francisco and director of monetary affairs at the Board of Governors, respectively – use a reduced-form model estimated with a Kalman filter. Their model finds that the natural rate is in fact still below its lower bound (in green):

David Beckworth has a cruder but more transparent regression model here and also finds that the economy remains on the cusp of the ZLB (in blue):

If anyone knows of any alternative estimates, I’d love to hear about them in the comments.

With this fact established, we have worked through the entire argument. To summarize:

1. The Fed thinks about the world through a New Keynesian lens.
2. The New Keynesian model of a liquidity trap says that to prevent a depression, the central bank must keep rates low even after the ZLB stops binding, in order to create an economic boom.
3. The economy is only just now coming off the ZLB.
4. Therefore, a good New Keynesian should support keeping rates at zero.
5. So: why is the Fed about to raise rates?!

III. What’s the strongest possible counterargument?
I intend to conclude all future posts by considering the strongest possible counterarguments to my own. In this case, I see only two interesting critiques.

A. The NK model is junk
This argument is something I have a lot of sympathy for. Nonetheless, it is not a very useful point, for two reasons.

First, the NK model is the preferred model of Fed economists. As mentioned in the introduction, this exercise is useful because the Fed’s actions should be consistent with its own method of thought. Or its method of thought must change.

Second, other models give fairly similar results. Consider the more monetarist model of Auerbach and Obstfeld (2005), in which the central bank’s instrument is the money supply instead of the interest rate (I again attach my notes on the paper below). Instead of prescribing that the Fed hold interest rates lower for longer as in Eggertsson and Woodford, Auerbach and Obstfeld’s cash-in-advance model shows that to defeat a liquidity trap the Fed should promise a one-time permanent level expansion of the money supply. That is, the expansion must not be temporary: the Fed must continue to be “expansionary” even after the ZLB has ceased to bind, by keeping the money supply expanded. This is not dissimilar in spirit to Eggertsson and Woodford’s recommendation that the Fed continue to be “expansionary” even after the ZLB ceases to bind, by keeping the nominal rate at zero.

B. The ZLB ceased to bind a long time ago
The second possible argument against my indictment of the Fed is that the natural rate has long since crossed the ZLB threshold, and that the FOMC has therefore already targeted a zero interest rate for a sufficiently long time.
This is no doubt the strongest argument a New Keynesian Fed economist could make for raising rates now. That said, I am not convinced, partly because of the model estimates shown above. More convincing to me is the fact that we have not seen the boom that would accompany interest rates being below their natural rate: inflation has been quite low, and growth has certainly not boomed. Ideally we’d have some sort of market measure of the natural rate (e.g. a prediction market). As a bit of an aside: as David Beckworth forcefully argues, it’s a scandal that the Fed Board does not publish its own estimates of the natural rate. Such data would help settle this point.

I’ll end things there. The New Keynesian model currently dominates macroeconomics, and its answer to whether the Fed should be raising rates in September is a resounding no. If you’re an economist who finds value in the New Keynesian perspective, I’d be extremely curious to hear why you support raising rates in September if you do – or, if not, why you’re not speaking up more loudly.

I comment on Josh Hendrickson’s interesting post. While it certainly is hard for me to believe that the natural rate of interest could be negative, it’s difficult to find a satisfying alternative explanation for the sustained output gap of the past seven years coexisting with the federal funds rate at the zero lower bound plus positive inflation.

JP Koning makes the case that even if Greece were to leave the Eurozone and institute a new currency (call it the New Drachma), Athens would still not have independent monetary policy: if households and firms continue to post prices in euros rather than New Drachmas, Greek monetary policy would not be able to affect the Greek economy. As JP explains:

“Consider what happens if the euro remains the economy’s preferred accounting unit, even as Greek drachmas begin to circulate as a medium of exchange.
No matter how low the drachma exchange rate goes, there can be no drachma-induced improvement in competitiveness. After all, if olive oil producers accept payment in drachmas but continue to price their goods in euros, then a lower drachma will have no effect on Greek olive oil prices, the competitiveness of Greek oil vis-à-vis, say, Turkish oil remaining unchanged. If a Greek computer programmer continues to price their services in euros, the number of drachmas required to hire him or her will have skyrocketed, but the programmer’s euro price will have remained on par with a Finnish programmer’s wage.”

Thus, if the New Drachma is not adopted as the dominant unit of account, Greece would still be at the mercy of the ECB – and worse, now without any voice in ECB decision-making.

I think this story is largely correct, but I want to throw out a counterpoint for discussion, which perhaps demonstrates that leaving the Eurozone could benefit Greece.

Currency reform and rewriting of debt contracts
One of the most important actions a government takes when it institutes a new currency or a currency reform is to legally redenominate all old contracts (issued under domestic law) in the new currency. In particular, debt automatically becomes priced in the new currency.

In American history, this occurred during Franklin Roosevelt’s 1933 “currency reform,” when the dollar was devalued relative to gold and gold clauses in existing contracts were invalidated. To quote from Amity Shlaes’ “The Forgotten Man: A New History of the Great Depression”:

“Next Roosevelt set to work invalidating gold clauses in contracts. Since the previous century, gold clauses had been written into both government bond and private contracts between individual businessmen. The clauses committed signatories to paying not merely in dollars but in gold dollars.
The boilerplate phrase was that the obligation would be “payable in principal and interest in United States gold coin of the present standard of value.” The phrase “the present standard” referred, or so many believed, to the moment at which the contract had been signed. The line also referred to gold, not paper, just as it said. This was a way of ensuring that, even if a government did inflate, an individual must still honor his original contract. Gold clause bonds had historically sold at a premium, which functioned as a kind of meter of people’s expectation of inflation. In order to fund World War I, for instance, Washington had resorted to gold clause bonds, backing Liberty Bonds sold to the public with gold. Now, in the spring of 1933, upon the orders of Roosevelt, the Treasury was making clear that it would cease to honor its own gold clauses. This also threw into jeopardy gold clauses in private contracts between individuals. The notion would be tested in the Supreme Court later; meanwhile, bond and contract holders had to accept the de facto devaluation of their assets. The deflation had hurt borrowers, and now this inflationary act was a primitive revenge. To end the gold clause was an act of social redistribution, a$200 billion transfer of wealth from creditor to debtor, a victory for the populists.” [Chapter 5]

Unfortunately I can’t find a citation right now, but I believe Argentina did the same thing when it replaced the austral with the peso; and that this relabeling almost always occurs during currency reforms.

Thus after a currency reform, the price of existing debt, at the very least, would be in the new currency.

Debt: the most important nominal friction?
And there’s a good argument to be made that the most important “sticky” price is the price of debt. Selgin’s “Less Than Zero”, Sheedy (2014), and Mian and Sufi's new book make this argument.

Debt contracts are almost always both (a) fixed in nominal, not real, terms and (b) not contingent on aggregate economic conditions. In perfectly complete markets, on the other hand, we would expect debt contracts to be state-contingent. Contracts would be written in such a way that (perhaps by tracking an inflation index and some index of real economic conditions) if inflation or economic growth increases, borrowers would pay more back to their lender; and if inflation or economic growth went down, borrowers would pay less. Both borrowers and lenders would ex ante prefer this type of arrangement, but transaction costs make such contracts prohibitively expensive.
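To make the incompleteness concrete, here is a minimal sketch in Python comparing a fixed nominal repayment to a hypothetical NGDP-indexed contract; all the numbers are illustrative, not drawn from any of the papers cited above:

```python
# Sketch: fixed nominal debt vs. a hypothetical state-contingent
# (nominal-income-indexed) contract. All numbers are illustrative.

def real_burden(repayment, nominal_income):
    """Repayment as a share of the borrower's nominal income."""
    return repayment / nominal_income

fixed_repayment = 105.0  # borrow 100, repay 105 regardless of conditions

# Borrower's nominal income next year under three aggregate outcomes
outcomes = {"boom": 110.0, "expected": 105.0, "bust": 95.0}

for name, income in outcomes.items():
    # Fixed nominal contract: repayment does not adjust
    fixed = real_burden(fixed_repayment, income)
    # Indexed contract: repayment scales with aggregate nominal income
    indexed_repayment = 105.0 * (income / 105.0)
    indexed = real_burden(indexed_repayment, income)
    print(f"{name}: fixed burden = {fixed:.3f}, indexed burden = {indexed:.3f}")
```

The indexed burden stays constant at 1.0 across outcomes, while the fixed nominal burden rises in the bust precisely when the borrower is least able to pay – the risk-sharing that complete markets would provide but that transaction costs rule out.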

For more intuition on this see Chapter III of Less Than Zero and the introduction to Sheedy’s paper. As for why this nominal friction may be more important than the traditional nominal frictions that economists worry about – that is, sticky prices and sticky wages – I would again suggest a look at Sheedy’s paper where he calibrates his model and finds that the central bank should care 90% about this nominal debt “stickiness” and 10% about traditional price stickiness. However, the relative importance of these two categories of frictions is very much still an open question.

If non-state contingent debt is indeed the most important nominal friction, then perhaps if Greece were to rewrite existing debt contracts when instituting a New Drachma, the new Greek central bank would have enough control over the nominal economy to pull Greece out of its depression.

(Of course, after the switch over to the New Drachma, Greek households and firms could – unless further legislation prevented them – write *new* contracts denominated in euros. JP’s Latin America pricing hysteresis example would seem to suggest that this is very possible.)

In short
To summarize, JP writes, “As long as a significant portion of Greek prices are expressed in euros, Greece’s monetary policy will continue to be decided in Frankfurt, not Athens.” While true, it is at least conceivable that a government-mandated relabeling of existing debt contracts (as has occurred historically during currency reforms) could ensure that debt prices, which are perhaps the most important prices, are no longer expressed in euros but instead in New Drachma.

Summary:

1. NGDP growth is equal to real GDP growth plus inflation. Thus, under NGDP targeting, if the potential real growth rate of the economy changes, then the full-employment inflation rate changes.
2. New Keynesians advocate that the Fed adjust the NGDP target one for one with changes in potential GDP. However, this rule would be extremely problematic for market monetarists.
3. Most importantly, it is simply not possible to estimate potential GDP in real time: an accurate structural model will never be built.
4. Further: such a policy would give the Fed huge amounts of discretion; unanchor long term expectations, especially under level targeting; and be especially problematic if technological growth rapidly accelerates as some predict.

I want to discuss a problem that I see with nominal GDP targeting: structural growth slowdowns. This problem isn’t exactly a novel insight, but it is an issue with which I think the market monetarist community has not grappled enough.

I. A hypothetical example

Remember that nominal GDP growth (in the limit) is equal to inflation plus real GDP growth. Consider a hypothetical economy where market monetarism has triumphed, and the Fed maintains a target path for NGDP growing annually at 5% (perhaps even with the help of a NGDP futures market). The economy has been humming along at 3% RGDP growth, which is the potential growth rate, and 2% inflation for (say) a decade or two. Everything is hunky dory.

But then – the potential growth rate of the economy drops to 2% due to structural (i.e., supply side) factors, and potential growth will be at this rate for the foreseeable future.

Perhaps there has been a large drop in the birth rate, shrinking the labor force. Perhaps a newly elected government has just pushed through a smorgasbord of measures that reduce the incentive to work and to invest in capital. Perhaps, most plausibly (and worrisomely!) of all, the rate of innovation has simply dropped significantly.

In this market monetarist fantasy world, the Fed maintains the 5% NGDP path. But maintaining 5% NGDP growth with potential real GDP growth at 2% means 3% steady state inflation! Not good. And we can imagine even more dramatic cases.
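The arithmetic behind that claim is just the NGDP identity rearranged; a quick sketch (growth rates in percentage points, illustrative values only):

```python
# NGDP growth = real GDP growth + inflation (approximately, in growth rates).
# Under a fixed NGDP target, inflation absorbs any change in potential growth.

def implied_inflation(ngdp_target, potential_growth):
    """Steady-state inflation implied by an NGDP growth target (in pct points)."""
    return ngdp_target - potential_growth

print(implied_inflation(5.0, 3.0))  # potential at 3%: inflation runs at 2%
print(implied_inflation(5.0, 2.0))  # potential falls to 2%: inflation rises to 3%
```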

II. Historical examples

Skip this section if you’re convinced that the above scenario is plausible

Say a time machine transports Scott Sumner back to 1980 Tokyo: a chance to prevent Japan’s Lost Decade! Bank of Japan officials are quickly convinced to adopt an NGDP target of 9.5%, the rationale behind this specific number being that the average real growth in the 1960s and 70s was 7.5%, plus a 2% implicit inflation target.

Thirty years later, trend real GDP in Japan is around 0.0%, by Sumner’s (offhand) estimation and I don’t doubt it. Had the BOJ maintained the 9.5% NGDP target in this alternate timeline, Japan would be seeing something like 9.5% inflation today.

Counterfactuals are hard: of course much else would have changed had the BOJ been implementing NGDPLT for over 30 years, perhaps including the trend rate of growth. But to a first approximation, the inflation rate would certainly be approaching 10%.

Or, take China today. China saw five years of double digit real growth in the mid-2000s, and not because the economy was overheating. I.e., the 12.5% and 14% growth in real incomes in China in 2006 and 2007 were representative of the true structural growth rate of the Chinese economy at the time. To be conservative, consider the 9.4% growth rate average over the decade, which includes the meltdown in 2008-9 and a slowdown in the earlier part of the decade.

Today, growth is close to 7%, and before the decade is up it very well could have a 5 handle. If the People’s Bank had adopted NGDP targeting at the start of the millennium with a 9.4% real growth rate in mind, inflation in China today would be more than 2 percentage points higher than what the PBOC desired when it first set the NGDP target! That’s not at all trivial, and would only become a more severe issue as the Chinese economy finishes converging with the developed world and growth slows still further.

This isn’t only a problem for countries playing catch-up to the technological frontier. France has had a declining structural growth rate for the past 30 years, at first principally because of declining labor hours/poor labor market policies and then compounded by slowing productivity and population growth. The mess that is Russia has surely had a highly variable structural growth rate since the end of the Cold War. The United States today, very debatably, seems to be undergoing at least some kind of significant structural change in economic growth as well, though perhaps not as drastic.

Source: Margaret Jacobson, “Behind the Slowdown of Potential GDP”

III. Possible solutions to the problem of changing structural growth

There are really only two possible solutions to this problem for a central bank to adopt.

First, you can accept the higher inflation, and pray to the Solow residual gods that the technological growth rate doesn’t drop further and push steady state inflation even higher. I find this solution completely unacceptable. Higher long term inflation is simply never a good thing; but even if you don’t feel that strongly, you at least should feel extremely nervous about risking the possibility of extremely high steady state inflation.

Second, you can allow the central bank to periodically adjust the NGDP target rate (or target path) to adjust for perceived changes to the structural growth rate. For example, in the original hypothetical, the Fed would simply change its NGDP target path to grow at 4% instead of 5% as previously so that real income grows at 2% and inflation continues at 2%.

This second solution, I think, is probably what Michael Woodford, Brad DeLong, Paul Krugman, and other non-monetarist backers of NGDP targeting would support. Indeed, Woodford writes in his Jackson Hole paper, “It is surely true – and not just in the special model of Eggertsson and Woodford – that if consensus could be reached about the path of potential output, it would be desirable in principle to adjust the target path for nominal GDP to account for variations over time in the growth of potential.” (p. 46-7) Miles Kimball notes the same argument: in the New Keynesian framework, an NGDP target rate should be adjusted for changes in potential.

However – here’s the kicker – allowing the Fed to change its NGDP target is extremely problematic for some of the core beliefs held by market monetarists. (Market monetarism as a school of thought is about more than merely just NGDP targeting – see Christensen (2011) – contra some.) Let me walk through a list of these issues now; by the end, I hope it will be clear why I think that Scott Sumner and others have not discussed this issue enough.

IVa. The Fed shouldn’t need a structural model

For the Fed to be able to change its NGDP target to match the changing structural growth rate of the economy, it needs a structural model that describes how the economy behaves. This is the practical issue facing NGDP targeting (level or rate). However, the quest for an accurate structural model of the macroeconomy is an impossible pipe dream: the economy is simply too complex. There is no reason to think that the Fed’s structural model could do a good job predicting technological progress. And under NGDP targeting, the Fed would be entirely dependent on that structural model.

Ironically, two of Scott Sumner’s big papers on futures market targeting are titled, “Velocity Futures Markets: Does the Fed Need a Structural Model?” with Aaron Jackson (their answer: no), and “Let a Thousand Models Bloom: The Advantages of Making the FOMC a Truly 'Open Market'”.

In these, Sumner makes the case for tying monetary policy to a prediction market, and in this way having the Fed adopt the market consensus model of the economy as its model of the economy, instead of using an internal structural model. Since the price mechanism is, in general, extremely good at aggregating dispersed information, this model would outperform anything internally developed by our friends at the Federal Reserve Board.

If the Fed had to rely on an internal structural model to adjust the NGDP target to match structural shifts in potential growth, this elegance would be completely lost! But it’s more than just a loss in elegance: it’s a huge roadblock to effective monetary policymaking, since the accuracy of said model would be highly questionable.

IVb. Rules are better than discretion

Old Monetarists always strongly preferred a monetary policy based on well-defined rules rather than discretion. This is for all the now-familiar reasons: the time-inconsistency problem; preventing political interference; creating accountability for the Fed; etc. Market monetarists are no different in championing rule-based monetary policy.

Giving the Fed the ability to modify its NGDP target is simply an absurd amount of discretionary power. It’s one thing to give the FOMC the ability to decide how to best achieve its target, whether that be 2% inflation or 5% NGDP. It’s another matter entirely to allow it to change that NGDP target at will. It removes all semblance of accountability, as the Fed could simply move the goalposts whenever it misses; and of course it entirely recreates the time inconsistency problem.

IVc. Expectations need to be anchored

Closely related to the above is the idea that monetary policy needs to anchor nominal expectations, perhaps especially at the zero lower bound. Monetary policy in the current period can never be separated from expectations about future policy. For example, if Janet Yellen is going to mail trillion dollar coins to every American a year from now, I am – and hopefully you are too – going to spend all of my or your dollars ASAP.

Because of this, one of the key necessary conditions for stable monetary policy is the anchoring of expectations for future policy. Giving the Fed the power to discretionarily change its NGDP target wrecks this anchor completely!

Say the Fed tells me today that it’s targeting a 5% NGDP level path, and I go take out a 30-year mortgage under the expectation that my nominal income (which remember is equal to NGDP in aggregate) will be 5% higher year after year after year. This is important as my ability to pay my mortgage, which is fixed in nominal terms, is dependent on my nominal income.

But then Janet Yellen turns around and tells me tomorrow, “Joke’s on you pal! We’re switching to a 4% level target.” It’s simply harder for risk-averse consumers and firms to plan for the future when there’s so much possible variation in future monetary policy.
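A rough sketch of how far the two level paths diverge over the life of that 30-year mortgage (illustrative numbers, nominal income normalized to 100 at origination):

```python
# Divergence of nominal-income level paths: a 5% path vs. a switch to 4%.
# A borrower who planned around the 5% path ends up with much lower
# nominal income than expected, while the mortgage payment is fixed.

def level_path(growth, years, start=100.0):
    """Nominal income level after compounding at `growth` for `years` years."""
    return start * (1 + growth) ** years

expected = level_path(0.05, 30)  # the path the borrower planned around
actual = level_path(0.04, 30)    # the path after the target is cut
shortfall = 1 - actual / expected

print(f"expected: {expected:.0f}, actual: {actual:.0f}, shortfall: {shortfall:.0%}")
```

After 30 years, nominal income ends up roughly 25% below the level the borrower planned for, even though the announced growth rate changed by only one percentage point.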

IVd. Level targeting exacerbates this issue

Further, level targeting exacerbates this entire issue. The push for level targeting over growth rate targeting is at least as important to market monetarism as the push for NGDP targeting over inflation targeting, for precisely the reasoning described above. To keep expectations on track, and thus not hinder firms and households trying to make decisions about the future, the central bank needs to make up for past mistakes, i.e. level target.

However, level targeting has issues even beyond those that rate targeting has, when the central bank has the ability to change the growth rate. In particular: what happens if the Fed misses the level target one year, and decides at the start of the next to change its target growth rate for the level path?

For instance, say the Fed had adopted a 5% NGDP level target in 2005, which it maintained successfully in 2006 and 2007. Then, say, a massive crisis hits in 2008, and the Fed misses its target for say three years running. By 2011, it looks like the structural growth rate of the economy has also slowed. Now, agents in the economy have to wonder: is the Fed going to try to return to its 5% NGDP path? Or is it going to shift down to a 4.5% path and not go back all the way? And will that new path have as a base year 2011? Or will it be 2008?

(Note: I am aware that had the Fed been implementing NGDPLT in 2008 the crisis would have been much less severe, perhaps not even a recession! The above is for illustration.)

(Also, I thank Joe Mihm for this point.)

IVe. This problem for NGDP targeting is analogous to the velocity instability problem for Friedman’s k-percent rule

Finally, I want to make an analogy that hopefully emphasizes why I think this issue is so serious. Milton Friedman long advocated that the Fed adopt a rule whereby it would have promised to keep the money supply (M2, for Friedman) growing at a steady rate of perhaps 3%. Recalling the equation of exchange, MV = PY, we can see that when velocity is constant, the k-percent rule is equivalent to NGDP targeting!
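In growth-rate terms the equivalence is immediate: taking logs and differentiating MV = PY gives money growth plus velocity growth equals inflation plus real growth, i.e. NGDP growth. A sketch with illustrative numbers (growth rates in percentage points):

```python
# Equation of exchange in growth rates: m + v = p + y = NGDP growth.
# With velocity constant (v = 0), a k-percent money rule pins down NGDP growth.

def ngdp_growth(money_growth, velocity_growth):
    """NGDP growth implied by money growth and velocity growth (pct points)."""
    return money_growth + velocity_growth

print(ngdp_growth(3.0, 0.0))   # stable velocity: 3% money growth gives 3% NGDP growth
print(ngdp_growth(3.0, -2.0))  # velocity falls 2%: NGDP growth drops to 1%
```

The second line is the k-percent rule breaking down: once velocity moves on its own, a fixed money-growth rule no longer stabilizes nominal income.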

In fact, velocity used to be quite stable:

Source: FRED

For the fifteen or twenty years after 1963, when Friedman and Schwartz published A Monetary History, the rule probably would have worked brilliantly. But between high inflation and financial innovation in the late 70s and 80s, the stable relationship between velocity, income, and interest rates began to break down, and the k-percent rule would have been a disaster. This is because velocity – roughly the inverse of real, income-adjusted money demand – is a structural, real variable that depends on the technology of the economy and household preferences.

The journals of the 1980s are somewhat famously a graveyard of structural velocity models attempting to find a universal model that could accurately explain past movements in velocity and accurately predict future movements. It was a hopeless task: the economy is simply too complex. (I link twice to the same Hayek essay for a reason.) Hence the title of the Sumner and Jackson paper already referenced above.

Today, instead of hopelessly modeling money demand, we have economists engaged in the even more hopeless task of attempting to develop a structural model for the entire economy. Even today, when the supply side of the economy really changes very little year-to-year, we don’t do that good of a job at it.

And (this is the kicker) what happens if the predictability of the structural growth rate breaks down to the same extent that the predictability of velocity broke down in the 1980s? What if, instead of the structural growth rate only changing a handful of basis points each year, we have year-to-year swings in the potential growth rate on the order of whole percentage points? I.e., one year the structural growth is 3%, but the next year it’s 5%, and the year after that it’s 2.5%?

I know that at this point I’m probably losing anybody that has bothered to read this far, but I think this scenario is entirely more likely than most people might expect. Rapidly accelerating technological progress in the next couple of decades as we reach the “back half of the chessboard”, or even an intelligence explosion, could very well result in an extremely high structural growth rate that swings violently year to year.

However, it is hard to argue either for or against the techno-utopian vision I describe and link to above, since trying to estimate the future of productivity growth is really not much more than speculation. That said, it does seem to me that there are very persuasive arguments that growth will rapidly accelerate in the next couple of decades. I would point those interested in a more full-throated defense of this position to the work of Robin Hanson, Erik Brynjolfsson and Andrew McAfee, Nick Bostrom, and Eliezer Yudkowsky.

If you accept the possibility that we could indeed see rapidly accelerating technological change, an “adaptable NGDP target” would essentially force the future Janet Yellen to engage in an ultimately hopeless attempt to predict the path of the structural growth rate and to chase after it. I think it’s clear why this would be a disaster.

V. An anticipation of some responses

Before I close this out, let me anticipate four possible responses.

1. NGDP variability is more important than inflation variability

Nick Rowe makes this argument here and Sumner also does sort of here. Ultimately, I think this is a good point, because of the problem of incomplete financial markets described by Koenig (2013) and Sheedy (2014): debt is priced in fixed nominal terms, and thus ability to repay is dependent on nominal incomes.

Nevertheless, the fact that NGDP targeting has other good things going for it does not resolve the problem that if the potential growth rate falls, the long-run inflation rate would be higher. This is welfare-reducing for all the standard reasons. Because of this, it seems to me that there’s not really a good way of determining whether NGDP level targeting or price level targeting is preferable, and it’s certainly not the case that NGDPLT is the monetary policy regime to end all other monetary policy regimes.

2. Target NGDP per capita instead!

You might argue that if the most significant reason that the structural growth rate could fluctuate is changing population growth, then the Fed should just target NGDP per capita. Indeed, Scott Sumner has often mentioned that he actually would prefer an NGDP per capita target. To be frank, I think this is an even worse idea! This would require the Fed to have a long term structural model of demographics, which is just a terrible prospect to imagine.

3. Target nominal wages/nominal labor compensation/etc. instead!

Sumner has also often suggested that perhaps nominal aggregate wage targeting would be superior to targeting NGDP, but that it would be too politically controversial. Funnily enough, the basic New Keynesian model with wage stickiness instead of price stickiness (and no zero lower bound) would recommend the same thing.

I don’t think this solves the issue. Take the neoclassical growth or Solow model with Cobb-Douglas technology and preferences and no population growth. On the balanced growth path, the growth rate of wages = the potential growth rate of the economy = the growth rate of technology. For a more generalized production function and preferences, wages and output still grow at the same rate.

In other words, the growth rate of real wages parallels that of the potential growth rate of the economy. So this doesn’t appear to solve anything, as it would still require a structural model.
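A minimal numerical check of the balanced-growth-path claim, using Cobb-Douglas production with labor-augmenting technology (parameter values are illustrative):

```python
# On a balanced growth path with Y = (A*L)^(1-alpha) * K^alpha, the
# competitive wage w = (1-alpha)*Y/L grows at the rate of technology A.

alpha, g = 0.3, 0.02   # capital share, technology growth rate
A, L, K = 1.0, 1.0, 1.0

wages = []
for t in range(3):
    Y = (A * L) ** (1 - alpha) * K ** alpha
    wages.append((1 - alpha) * Y / L)
    A *= 1 + g          # labor-augmenting technology grows at g
    K *= 1 + g          # on the BGP, capital grows at the same rate (L fixed)

growth = wages[1] / wages[0] - 1
print(growth)  # approximately 0.02: wages grow at the rate of technology
```

So a nominal wage target inherits exactly the same dependence on the (unobservable) technology growth rate that an NGDP target has.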

4. Set up a prediction market for the structural growth rate!

I don’t even know if this would work well with Sumner’s proposal. But perhaps it would. In that case, my response is… stay tuned for my critique of market monetarism, part two: why handing policymaking over to prediction markets is a terrible idea.

VI. In conclusion

The concerns I outline above have driven me from an evangelist for NGDP level targeting to someone extremely skeptical that any central banking policy can maintain monetary equilibrium. It disturbs me that optimal policy under NGDP targeting necessitates a structural model of the economy, since a successful such model – as Sumner persuasively argues – will never be built. The prospect that NGDP targeting might collapse in the face of rapidly accelerating technological growth worries me, since it does seem to me that this very well could occur. And even setting aside the techno-utopianism, the historical examples described above, such as Japan after 1980, demonstrate that we have seen very large shifts in the structural growth rate in actual real-world economies.

I want to support NGDPLT: it is probably superior to price level or inflation targeting anyway, because of the incomplete markets issue. But unless there is a solution to this critique that I am missing, I am not sure that NGDP targeting is a sustainable policy for the long term, let alone the end of monetary history.

I found an interesting 1970 AER paper that adds land to the Solow model in continuous time and verifies the result, discussed last week, that as the rate of return on capital approaches the growth rate of the economy the price of land will approach infinity. The paper is a mess – the notation is disgusting, and doing continuous time instead of discrete time adds nothing but pain.

The bottom line is this. The rate of return on land and capital should be equal in equilibrium (factor price equalization). If the interest rate – i.e., the rate of return on capital – is low, then the rate of return on land must be equally low.

First, the rate of return on land and capital should be equal on the balanced growth path (equation 6). Where K is capital, L is land, P is the price of land in terms of goods, and g is the rate of exogenous economic growth, we have the equilibrium condition:
F_K = F_L/P + (∂P/∂t)/P

Translated to English: the marginal product of capital must equal the marginal product of land plus the rate of increase in the price of land. Perhaps even better: the interest rate on capital equals the rent from land plus the capital gains from land. Or, one last rephrasing: the return on capital equals the return on land, where part of the return on land is price appreciation.

From this, an equilibrium relationship between effective capital (k = K/N) and effective land (l = PL/N), where N is effective labor, can be derived with a little algebra (equation 8):
l = F_L / [(F_K - g)*β]

Where beta is a constant (the ratio of initial effective labor to land).

This is the key result. If the rate of return on capital is equal to the growth rate, effective land goes to infinity: dividing by zero. And thus the price of land goes to infinity, as the supply of land is perfectly inelastic, i.e. fixed.
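The blow-up is easy to see numerically. A sketch of equation (8), with arbitrary illustrative values for F_L and beta:

```python
# Effective land l = F_L / ((F_K - g) * beta) from equation (8).
# As the return on capital F_K approaches the growth rate g, l diverges.

def effective_land(F_L, F_K, g, beta):
    """Equilibrium effective land from equation (8)."""
    return F_L / ((F_K - g) * beta)

F_L, g, beta = 1.0, 0.03, 1.0  # arbitrary illustrative values
for F_K in (0.05, 0.04, 0.032, 0.0301):
    print(F_K, effective_land(F_L, F_K, g, beta))
```

As the gap F_K − g shrinks from 2 percentage points to 1 basis point, effective land rises from 50 to roughly 10,000, and it diverges in the limit.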

To repeat what I wrote last week: we don’t see the price of land going to infinity, which would seem to be a challenge for secular stagnationists.

Some thoughts on Eggertsson and Mehrotra (2014), the first formalization of the “secular stagnation” thesis. Nothing innovative here, I just wanted to collect my thoughts all in one place.

Model overview
First, a brief review of Eggertsson and Mehrotra’s model for easy reference. (Simon Wren-Lewis has a short summary of the math.)

The paper describes a three-period overlapping generations model, where the middle generation receives an endowment (or, in an extension, labors for an income). The young and old generations do not receive incomes; the young borrow from the middle generation, and the old live off savings from their time in the middle generation. The amount the young can borrow is constrained by a purely exogenous “debt limit”. The key result is that if this debt constraint (exogenously) drops – a “deleveraging shock” – then the demand for loans falls, forcing the natural rate of interest to fall permanently, potentially to a permanently negative level.

Once a price level and downward nominal wage rigidity are introduced, we can then have a permanent zero lower bound situation where the natural rate is permanently and unattainably negative – secular stagnation, by definition. This causes output to be permanently below potential.

Now, various thoughts, from more to less interesting:

1. Lack of capital
This model does not include capital. I suspect a model with capital and a negative interest rate would have negative or zero investment, whereas in the economy today we of course have positive net investment.

The authors do note they want to include capital in the next iteration of the model.

2. Lack of land
There is also no land in this model. Of course in modern times land is not typically included as a factor in the production function. Solow once joked, “If God had meant there to be more than two factors of production, he would have made it easier for us to draw three-dimensional diagrams.”

But Nick Rowe, I think, makes a good case that in a model attempting to analyze permanently negative interest rates, land must be included.

The argument goes like this: think of land as an asset like any other, where the price of land equals the present discounted value of the future returns to land. It can be shown that as the interest rate approaches the growth rate of the economy, the value of the land goes to infinity.
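The standard way to see this: if land pays a rent growing at the economy’s growth rate g, discounted at interest rate r, the present value is the Gordon-growth sum rent/(r − g), which diverges as r approaches g. A quick sketch with illustrative numbers:

```python
# Price of land as the present discounted value of rents growing at rate g,
# discounted at rate r: P = rent / (r - g) for r > g (Gordon growth formula).

def land_price(rent, r, g):
    """PDV of a rent stream growing at g, discounted at r."""
    if r <= g:
        return float("inf")  # the discounted sum does not converge
    return rent / (r - g)

rent, g = 1.0, 0.02  # illustrative values
for r in (0.05, 0.03, 0.021, 0.02):
    print(r, land_price(rent, r, g))
```

The price climbs from about 33 to 100 to roughly 1,000 as r falls toward g, and is unbounded at r = g.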

Back in the real world, of course, we have not seen land prices go to infinity. So perhaps adding land to this model would prevent us from having secular stagnation without the price of land blowing up.

Section three of this Stefan Homburg (2014) paper discusses this further, and Homburg models the result more formally here. Another interesting post from Rowe here, and comments from Matt Rognlie here.

(Side note: by the same logic, perhaps a fall in the natural rate explains the housing “bubble” of the last decade?)

3. Debt limit as exogenous
The debt limit is purely exogenous. It seems likely that there would be important and interesting general equilibrium effects if it were endogenized. There is not much to say on this point, but it’s very important.

4. OLG modelling instead of representative agent
This model uses OLG as its basic framework instead of a representative agent.

Importantly, this is different from the last decade and a half of research on the liquidity trap (Krugman 1998, Eggertsson and Woodford 2003, Auerbach and Obstfeld 2005), which all used representative agent models. In these models, the long-run steady-state natural rate is determined by the discount factor, which forces it to be positive. Thus, the economy can only be in a liquidity trap (ZLB) situation temporarily.

It’s only in this OLG environment that we can have a permanently negative natural rate. That seems very interesting to me – what else might we be missing by using the representative agent model? (…Probably not much.)

Turning away from mathematical formalization, I wonder if one way we could think about this is: what if the natural rate was expected to remain at the ZLB for a period longer than the remainder of a person’s life (say >60 years)? Would that create some kind of a trap situation?

Conclusion
Overall, I’m simply not convinced that this is a useful model. The idea that the natural rate could be permanently negative simply seems extremely unlikely. Also, the lack of inclusion of land seems to be a big oversight.

Update: Josh Hendrickson makes the interesting point that if money (with a fixed nominal return of 0%) is added to the economy, the Eggertsson-Mehrotra result does not hold.

In 2008, Christina and David Romer published an interesting paper demonstrating that FOMC members are useless at forecasting economic conditions compared to the Board of Governors staff, and presented some evidence that mistaken FOMC economic forecasts were correlated with monetary policy shocks.

I’ve updated their work with another decade of data, and find that while the FOMC remained bad at forecasting over the extended period, the poor forecasting was not correlated with monetary policy shocks.

First, some background.

Background
Before every FOMC meeting, the staff at the Board of Governors produces the Greenbook, an in-depth analysis of current domestic and international economic conditions and, importantly for us, forecasts of all kinds of economic indicators a year or two out. The Greenbook is only released to the public with a major lag, so the last data we have is from 2007.

The FOMC members – the governors and regional bank presidents – prepare consensus economic forecasts twice a year, usually February and July, as part of the Monetary Policy Report they must submit to Congress. (Since October 2007, FOMC members have prepared projections at four FOMC meetings per year. That data, from the end of 2007, is not included in my dataset here, but I’ll probably put it in when I update it in the future as more recent Greenbooks are released.)

Summary of Romer and Romer (2008)
The Romers took around 20 years of data from these two sources, 1979 through 2001, and compared FOMC forecasts to staff forecasts. They estimate a regression of the form

$X_t = \alpha + \beta S_t + \gamma P_t + \varepsilon_t$

Where X is the realized value of the variable (e.g. actual GDP growth in year t+1), S is the staff’s projection of the variable (e.g. the staff’s projected GDP growth next year), and P is the FOMC’s projection of the variable (e.g. the FOMC’s projected GDP growth next year).
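To make the specification concrete, here is a minimal sketch of a forecast-efficiency regression of this form. The data are entirely synthetic (the actual estimation used the Greenbook and Monetary Policy Report series); here the FOMC forecast is constructed as the staff forecast plus noise, so its coefficient should come out near zero:

```python
import numpy as np

# Synthetic illustration of the forecast-efficiency regression
# X = alpha + beta*S + gamma*P + error. All series are made up;
# the "FOMC" forecast P is the "staff" forecast S plus pure noise,
# so it adds no information and its coefficient should be near zero.
rng = np.random.default_rng(0)
n = 80
truth = rng.normal(2.0, 1.0, n)        # "true" future value (e.g. inflation)
S = truth + rng.normal(0, 0.3, n)      # staff forecast: informative
P = S + rng.normal(0, 0.5, n)          # FOMC forecast: staff plus noise
X = truth + rng.normal(0, 0.2, n)      # realized value

design = np.column_stack([np.ones(n), S, P])
coef, *_ = np.linalg.lstsq(design, X, rcond=None)
alpha, beta, gamma = coef
print(f"beta (staff) = {beta:.2f}, gamma (FOMC) = {gamma:.2f}")
```

The second regression in the paper, with the shock measure on the left-hand side, has the same right-hand-side structure.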

They find “not just that FOMC members fail to add information, but that their efforts to do so are counterproductive.” Policymakers were no good at forecasting over this period.

They then ask whether the mistaken forecasts cause the FOMC to make monetary policy errors, i.e. monetary policy shocks. The two use their own Romer and Romer (2004) measure, which I’ve updated here, as the measure of monetary policy shocks. They then estimate the regression

$M_t = \alpha + \beta S_t + \gamma P_t + u_t$

Where M is the measure of shocks, and P and S are as before. They ran this regression only from 1979 through 1996, since that was as far as the shock measure extended in the 2004 paper.

They find that, “The estimates suggest that forecast differences may be one source of monetary shocks… An FOMC forecast of inflation one percentage point higher than the staff forecast is associated with an unusual rise in the federal funds rate of approximately 30 basis points.”

That seemed like a very interesting result to me when I first read this paper. Could bad monetary policymaking be explained by the hubris of policymakers who thought they could forecast economic conditions better than the staff? It turns out, after I updated the data, this result does not hold.

Updating the data
I followed the same methodology as when I updated Romer and Romer (2004): first replicating the data to ensure I had the correct method before collecting the new data and updating. The data is from 1979 through 2007, and all my work is available here and here.

I find, first, that policymakers remained quite poor economic forecasters. Here is the updated version of Table 1 from the paper, with the old values for comparison:

The coefficients on the FOMC forecasts for inflation and unemployment are still right around zero, indicating that FOMC forecasts for these two variables contain no useful information.

However, it appears that once we extend the monetary policy shock regression from 1996 to 2007, the second result – that forecast differences are a source of monetary policy shocks – does not hold. Here is the updated version of Table 2 from the paper, again with old values for comparison:

When the Romers published their paper, the R-squared on the regression of monetary shocks on all three variables was 0.17. This wasn’t exactly the strongest correlation, but for the social sciences it’s not bad, especially considering that the monetary shock measure is fairly ad hoc.

As we can see in the updated regression, the R-squared falls to 0.05 with the extended data, too small to support the original finding. Thus, unfortunately, this result does not appear to hold.
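For reference, the R-squared being compared here is just one minus the ratio of residual to total variation. A minimal sketch with synthetic data (none of these numbers are from the paper):

```python
import numpy as np

# R-squared of an OLS fit: 1 - SS_residual / SS_total.
# Synthetic data for illustration only.
rng = np.random.default_rng(2)
x = rng.normal(0, 1, 50)
y = 0.3 * x + rng.normal(0, 1, 50)       # weak relationship, so low R-squared
X = np.column_stack([np.ones(50), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(f"R-squared = {r2:.3f}")
```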

I’ve updated the Romer and Romer (2004) series of monetary policy shocks. The main takeaway is this graph of monetary policy shocks by month, since 1969, where the gray bars indicate recession:

When the two published their paper, they only had access to data up through 1996, since Fed Greenbooks – upon which the series is based – are released with a large lag. I’ve updated it through 2007, the latest available, and will update it again next month when the 2008 Greenbooks are released.

The two interesting points in the new data are

1. The negative policy shock before and during the 2001 recession
2. The negative policy shock in 2007, before the Great Recession

Below I go into the more technical details of how this measure is constructed and my methodology, but the graph and the two points above are the main takeaways.

How is the R&R measure constructed?
First, the Romers derive a series of intended changes in the federal funds rate. (This is easy starting in the 1990s, since the FOMC began announcing when it wanted to change the FFR; before that, the two had to trawl through meeting minutes to figure it out.) They then use the Fed’s internal Greenbook forecasts to purge the intended-FFR series of monetary policy actions taken in response to information about future economic developments, specifically RGDP growth, inflation, and unemployment.

In other words, they regress the change in the intended FFR around forecast dates on RGDP growth, inflation and unemployment. Then, as they put it, “Residuals from this regression show changes in the intended funds rate not taken in response to information about future economic developments. The resulting series for monetary shocks should be relatively free of both endogenous and anticipatory actions.”

The equation they estimate is:

$\Delta ff_m = \alpha + \beta ffb_m + \sum_{i=-1}^{2}\gamma_i \widetilde{\Delta y}_{mi} + \sum_{i=-1}^{2} \lambda_i \left(\widetilde{\Delta y}_{mi} - \widetilde{\Delta y}_{m-1,i}\right) + \sum_{i=-1}^{2} \phi_i \tilde{\pi}_{mi} + \sum_{i=-1}^{2} \theta_i \left(\tilde{\pi}_{mi} - \tilde{\pi}_{m-1,i}\right) + \rho \tilde{u}_{m0} + \varepsilon_m$

• Δff is the change in the intended FFR around meeting m
• ffb is the level of the target FFR before the change at meeting m (included to capture any tendency toward mean reversion)
• π, Δy, and u are the Greenbook forecasts of inflation, real output growth, and the unemployment rate; note that both the current forecast and its revision since the previous meeting enter
• The i subscripts refer to the horizon of the forecast: -1 is the previous quarter, 0 the current quarter, 1 the next quarter, and 2 the quarter after that
• All horizons are relative to the date of the forecast corresponding to meeting m; e.g. if the meeting is in early July 1980 and the forecast is from late June 1980, the contemporaneous forecast is for the second quarter of 1980
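As a rough sketch of this construction (my actual replication used Excel and MATLAB; everything below, from the series to the policy-rule coefficients, is synthetic and made up), the shock series is just the residual from regressing the intended-FFR change on the forecast variables:

```python
import numpy as np

# Sketch of the Romer-Romer (2004) shock construction on synthetic data:
# regress the intended funds-rate change on Greenbook-style forecasts,
# then keep the residuals as the "shock" series. Variable names follow
# the equation above; none of these numbers are real.
rng = np.random.default_rng(1)
m = 120                                   # number of FOMC meetings
ffb = rng.normal(5.0, 2.0, m)             # target FFR level before the meeting
dy = rng.normal(2.5, 1.0, (m, 4))         # output-growth forecasts, i = -1..2
pi = rng.normal(3.0, 1.0, (m, 4))         # inflation forecasts, i = -1..2
dy_rev = rng.normal(0, 0.3, (m, 4))       # revisions since the last meeting
pi_rev = rng.normal(0, 0.3, (m, 4))
u0 = rng.normal(6.0, 1.0, m)              # current-quarter unemployment forecast

# A stylized policy rule plus a true shock, so residuals are recoverable.
true_shock = rng.normal(0, 0.25, m)
dff = (-0.1 * ffb + 0.05 * dy.sum(axis=1) + 0.08 * pi.sum(axis=1)
       - 0.05 * u0 + true_shock)

X = np.column_stack([np.ones(m), ffb, dy, dy_rev, pi, pi_rev, u0])
coef, *_ = np.linalg.lstsq(X, dff, rcond=None)
shocks = dff - X @ coef                   # residuals = monetary policy shocks
print(shocks[:5])
```

Because the synthetic policy rule responds only to the forecast variables, the residuals here recover the injected shocks almost exactly; in the real data, the residuals are whatever part of the intended-FFR change the forecasts cannot explain.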

The Romers show in their paper that, by this measure, negative monetary policy shocks have large and significant effects on output and the price level.

It is worth noting the limitations of this measure. It is based on the federal funds rate instrument, which is not a very good indicator of the stance of monetary policy. Additionally, if the FOMC changes its target FFR between meetings, any shock associated with that decision would not be captured by this measure.

Results
First, I replicated the Romer and Romer (2004) results to confirm I had the correct method. Then I collected the new data in Excel and ran the regression specified above in MATLAB. The data is available here and here (though there might have been errors when uploading to Google Drive).

The residuals are shown in the graph above, an updated version of Figure 1a in Romer and Romer (2004).

The coefficients and statistics on the regression are (this is an updated version of table 1 in the original paper):

Last, for clarity, I show the monetary policy shock measure below with the five largest outliers removed. This makes some periods, especially the 2007 shock, much clearer. Again, I will update this next month when the 2008 Greenbooks are released. It should be very interesting to see how large a negative shock there was in 2008.

Update: I've updated this last graph with 2008 data. Interestingly, the 2008 shock is not exceptionally large.