Summary:

1. NGDP growth is equal to real GDP growth plus inflation. Thus, under NGDP targeting, if the potential real growth rate of the economy changes, then the full-employment inflation rate changes.
2. New Keynesians advocate that the Fed adjust the NGDP target one for one with changes in potential GDP. However, this rule would be extremely problematic for market monetarists.
3. Most importantly, it is simply not possible to estimate potential GDP in real time: an accurate structural model will never be built.
4. Further: such a policy would give the Fed huge amounts of discretion; unanchor long-term expectations, especially under level targeting; and be especially problematic if technological growth rapidly accelerates, as some predict.

I want to discuss a problem that I see with nominal GDP targeting: structural growth slowdowns. This problem isn’t exactly a novel insight, but it is an issue with which I think the market monetarist community has not grappled enough.

I. A hypothetical example

Remember that nominal GDP growth (in the limit) is equal to inflation plus real GDP growth. Consider a hypothetical economy where market monetarism has triumphed, and the Fed maintains a target path for NGDP growing annually at 5% (perhaps even with the help of an NGDP futures market). The economy has been humming along at 3% RGDP growth, which is the potential growth rate, and 2% inflation for (say) a decade or two. Everything is hunky-dory.
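In growth-rate form, this is just an accounting identity (exact in logs, and a close approximation at annual rates):

$g_{NGDP} = g_{RGDP} + \pi$

so in this hypothetical economy, $\pi = 5\% - 3\% = 2\%$.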

But then – the potential growth rate of the economy drops to 2% due to structural (i.e., supply side) factors, and potential growth will be at this rate for the foreseeable future.

Perhaps there has been a large drop in the birth rate, shrinking the labor force. Perhaps a newly elected government has just pushed through a smorgasbord of measures that reduce the incentive to work and to invest in capital. Perhaps, most plausibly (and worrisomely!) of all, the rate of innovation has simply dropped significantly.

In this market monetarist fantasy world, the Fed maintains the 5% NGDP path. But maintaining 5% NGDP growth with potential real GDP growth at 2% means 3% steady state inflation! Not good. And we can imagine even more dramatic cases.

II. Historical examples

Skip this section if you’re convinced that the above scenario is plausible.

Say a time machine transports Scott Sumner back to 1980 Tokyo: a chance to prevent Japan’s Lost Decade! Bank of Japan officials are quickly convinced to adopt an NGDP target of 9.5%, the rationale behind this specific number being that the average real growth in the 1960s and 70s was 7.5%, plus a 2% implicit inflation target.

Thirty years later, trend real GDP growth in Japan is around 0%, by Sumner’s (offhand) estimation, and I don’t doubt it. Had the BOJ maintained the 9.5% NGDP target in this alternate timeline, Japan would be seeing something like 9.5% inflation today.

Counterfactuals are hard: of course much else would have changed had the BOJ been implementing NGDPLT for over 30 years, perhaps including the trend rate of growth. But to a first approximation, the inflation rate would be approaching 10%.

Or, take China today. China saw five years of double digit real growth in the mid-2000s, and not because the economy was overheating. I.e., the 12.5% and 14% growth in real incomes in China in 2006 and 2007 were representative of the true structural growth rate of the Chinese economy at the time. To be conservative, consider the 9.4% growth rate average over the decade, which includes the meltdown in 2008-9 and a slowdown in the earlier part of the decade.

Today, growth is close to 7%, and before the decade is up it very well could have a 5 handle. If the People’s Bank had adopted NGDP targeting at the start of the millennium with a 9.4% real growth rate in mind, inflation in China today would be more than 2 percentage points higher than what the PBOC desired when it first set the NGDP target! That’s not at all trivial, and would only become a more severe issue as the Chinese economy finishes converging with the developed world and growth slows still further.
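For concreteness, here is the back-of-the-envelope calculation behind each of these scenarios (a sketch using the rough figures quoted in this post, not official data):

```python
# Steady-state inflation implied by a fixed NGDP growth target:
# inflation = NGDP target - potential real growth.
scenarios = {
    # name: (NGDP target %, current potential growth %)
    "Hypothetical US": (5.0, 2.0),        # target set when potential was 3%
    "Japan (9.5% target)": (9.5, 0.0),    # 7.5% 1960s-70s trend growth + 2%
    "China (11.4% target)": (11.4, 7.0),  # 9.4% trend growth + 2%
}

for name, (target, potential) in scenarios.items():
    print(f"{name}: implied inflation = {target - potential:.1f}%")
```

The last line of output is the Chinese case: 4.4% inflation, more than 2 percentage points above the implicit 2% goal.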

This isn’t only a problem for countries playing catch-up to the technological frontier. France has had a declining structural growth rate for the past 30 years, at first principally because of declining labor hours/poor labor market policies and then compounded by slowing productivity and population growth. The mess that is Russia has surely had a highly variable structural growth rate since the end of the Cold War. The United States today, very debatably, seems to be undergoing at least some kind of significant structural change in economic growth as well, though perhaps not as drastic.

Source: Margaret Jacobson, “Behind the Slowdown of Potential GDP”

III. Possible solutions to the problem of changing structural growth

There are really only two possible solutions to this problem for a central bank to adopt.

First, you can accept the higher inflation, and pray to the Solow residual gods that the technological growth rate doesn’t drop further and push steady state inflation even higher. I find this solution completely unacceptable. Higher long term inflation is simply never a good thing; but even if you don’t feel that strongly, you at least should feel extremely nervous about risking the possibility of extremely high steady state inflation.

Second, you can allow the central bank to periodically adjust the NGDP target rate (or target path) to account for perceived changes to the structural growth rate. For example, in the original hypothetical, the Fed would simply change its NGDP target path to grow at 4% instead of the previous 5%, so that real income grows at 2% and inflation continues at 2%.

This second solution, I think, is probably what Michael Woodford, Brad DeLong, Paul Krugman, and other non-monetarist backers of NGDP targeting would support. Indeed, Woodford writes in his Jackson Hole paper, “It is surely true – and not just in the special model of Eggertsson and Woodford – that if consensus could be reached about the path of potential output, it would be desirable in principle to adjust the target path for nominal GDP to account for variations over time in the growth of potential.” (p. 46-7) Miles Kimball notes the same argument: in the New Keynesian framework, an NGDP target rate should be adjusted for changes in potential.

However – here’s the kicker – allowing the Fed to change its NGDP target is extremely problematic for some of the core beliefs held by market monetarists. (Market monetarism as a school of thought is about more than just NGDP targeting – see Christensen (2011) – contra some.) Let me walk through a list of these issues now; by the end, I hope it will be clear why I think that Scott Sumner and others have not discussed this issue enough.

IVa. The Fed shouldn’t need a structural model

For the Fed to be able to change its NGDP target to match the changing structural growth rate of the economy, it needs a structural model that describes how the economy behaves. This is the practical issue facing NGDP targeting (level or rate). But the quest for an accurate structural model of the macroeconomy is a pipe dream: the economy is simply too complex. There is no reason to think that the Fed’s structural model could do a good job predicting technological progress. And under an adjustable NGDP target, the Fed would be entirely dependent on that structural model.

Ironically, two of Scott Sumner’s big papers on futures market targeting are titled, “Velocity Futures Markets: Does the Fed Need a Structural Model?” with Aaron Jackson (their answer: no), and “Let a Thousand Models Bloom: The Advantages of Making the FOMC a Truly 'Open Market'”.

In these, Sumner makes the case for tying monetary policy to a prediction market, and in this way having the Fed adopt the market consensus model of the economy as its model of the economy, instead of using an internal structural model. Since the price mechanism is, in general, extremely good at aggregating dispersed information, this model would outperform anything internally developed by our friends at the Federal Reserve Board.

If the Fed had to rely on an internal structural model to adjust the NGDP target to match structural shifts in potential growth, this elegance would be completely lost! But it’s more than just a loss of elegance: it’s a huge roadblock to effective monetary policymaking, since the accuracy of said model would be highly questionable.

IVb. Rules are better than discretion

Old Monetarists always strongly preferred a monetary policy based on well-defined rules rather than discretion. This is for all the now-familiar reasons: the time-inconsistency problem; preventing political interference; creating accountability for the Fed; etc. Market monetarists are no different in championing rule-based monetary policy.

Giving the Fed the ability to modify its NGDP target is simply an absurd amount of discretionary power. It’s one thing to give the FOMC the ability to decide how best to achieve its target, whether that be 2% inflation or 5% NGDP growth. It’s another matter entirely to allow it to change that target at will. It removes all semblance of accountability, as the Fed could simply move the goalposts whenever it misses; and of course it entirely recreates the time inconsistency problem.

IVc. Expectations need to be anchored

Closely related to the above is the idea that monetary policy needs to anchor nominal expectations, perhaps especially at the zero lower bound. Monetary policy in the current period can never be separated from expectations about future policy. For example, if Janet Yellen is going to mail trillion dollar coins to every American a year from now, I am – and hopefully you are too – going to spend all of my or your dollars ASAP.

Because of this, one of the key necessary conditions for stable monetary policy is the anchoring of expectations for future policy. Giving the Fed the power to discretionarily change its NGDP target wrecks this anchor completely!

Say the Fed tells me today that it’s targeting a 5% NGDP level path, and I go take out a 30-year mortgage under the expectation that my nominal income (which, remember, is equal to NGDP in aggregate) will be 5% higher year after year after year. This matters because my ability to pay my mortgage, which is fixed in nominal terms, depends on my nominal income.

But then Janet Yellen turns around and tells me tomorrow, “Joke’s on you pal! We’re switching to a 4% level target.” It’s simply harder for risk-averse consumers and firms to plan for the future when there’s so much possible variation in future monetary policy.

IVd. Level targeting exacerbates this issue

Further, level targeting exacerbates this entire issue. The push for level targeting over growth rate targeting is at least as important to market monetarism as the push for NGDP targeting over inflation targeting, for precisely the reasoning described above. To keep expectations on track, and thus not hinder firms and households trying to make decisions about the future, the central bank needs to make up for past mistakes, i.e. level target.

However, level targeting has issues even beyond those that rate targeting has, when the central bank has the ability to change the growth rate. In particular: what happens if the Fed misses the level target one year, and decides at the start of the next to change its target growth rate for the level path?

For instance, say the Fed had adopted a 5% NGDP level target in 2005, which it maintained successfully in 2006 and 2007. Then a massive crisis hits in 2008, and the Fed misses its target for, say, three years running. By 2011, it looks like the structural growth rate of the economy has also slowed. Now, agents in the economy have to wonder: is the Fed going to try to return to its 5% NGDP path? Or is it going to shift down to a 4.5% path and not go back all the way? And will that new path use 2011 as its base year, or 2008?

(Note: I am aware that had the Fed been implementing NGDPLT in 2008 the crisis would have been much less severe, perhaps not even a recession! The above is for illustration.)

(Also, I thank Joe Mihm for this point.)

IVe. This problem for NGDP targeting is analogous to the velocity instability problem for Friedman’s k-percent rule

Finally, I want to make an analogy that hopefully emphasizes why I think this issue is so serious. Milton Friedman long advocated that the Fed adopt a rule whereby it would have promised to keep the money supply (M2, for Friedman) growing at a steady rate of perhaps 3%. Recalling the equation of exchange, MV = PY, we can see that when velocity is constant, the k-percent rule is equivalent to NGDP targeting!
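To see the equivalence, take the equation of exchange in growth-rate form:

$\frac{\Delta M}{M} + \frac{\Delta V}{V} = \frac{\Delta P}{P} + \frac{\Delta Y}{Y}$

If velocity is constant, $\Delta V / V = 0$, and k% money growth delivers exactly k% growth in $PY$ – that is, in NGDP.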

In fact, velocity used to be quite stable:

Source: FRED

For the fifteen or twenty years after 1963, when Friedman and Schwartz published A Monetary History, the rule probably would have worked brilliantly. But between high inflation and financial innovation in the late 70s and 80s, the stable relationship between velocity, income, and interest rates broke down, and the k-percent rule would have been a disaster. This is because velocity – roughly the inverse of real, income-adjusted money demand – is a structural, real variable that depends on the technology of the economy and household preferences.

The journals of the 1980s are somewhat famously a graveyard of structural velocity models attempting to find a universal model that could accurately explain past movements in velocity and accurately predict future movements. It was a hopeless task: the economy is simply too complex. (I link twice to the same Hayek essay for a reason.) Hence the title of the Sumner and Jackson paper already referenced above.

Today, instead of hopelessly modeling money demand, we have economists engaged in the even more hopeless task of attempting to develop a structural model for the entire economy. Even today, when the supply side of the economy really changes very little year-to-year, we don’t do that good of a job at it.

And (this is the kicker) what happens if the predictability of the structural growth rate breaks down to the same extent that the predictability of velocity broke down in the 1980s? What if, instead of the structural growth rate only changing a handful of basis points each year, we have year-to-year swings in the potential growth rate on the order of whole percentage points? I.e., one year the structural growth is 3%, but the next year it’s 5%, and the year after that it’s 2.5%?
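To illustrate, here is what a fixed 5% NGDP target would deliver under an invented path of potential growth with exactly the kind of swings just described:

```python
ngdp_target = 5.0  # fixed NGDP growth target, in percent

# An invented, volatile path for the structural growth rate (percent).
potential_growth = [3.0, 5.0, 2.5, 4.5, 6.0, 1.5]

for year, g in enumerate(potential_growth, start=1):
    # With NGDP growth pinned at the target, inflation absorbs the swings.
    inflation = ngdp_target - g
    print(f"Year {year}: potential growth {g:.1f}% -> inflation {inflation:+.1f}%")
```

Inflation whipsaws between negative and well above target, even though the central bank hits its NGDP path perfectly every year.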

I know that at this point I’m probably losing anybody who has bothered to read this far, but I think this scenario is entirely more likely than most people might expect. Rapidly accelerating technological progress in the next couple of decades as we reach the “back half of the chessboard”, or even an intelligence explosion, could very well result in an extremely high structural growth rate that swings violently year to year.

However, it is hard to argue either for or against the techno-utopian vision I describe and link to above, since trying to estimate the future of productivity growth is really not much more than speculation. That said, it does seem to me that there are very persuasive arguments that growth will rapidly accelerate in the next couple of decades. I would point those interested in a more full-throated defense of this position to the work of Robin Hanson, Erik Brynjolfsson and Andrew McAfee, Nick Bostrom, and Eliezer Yudkowsky.

If you accept the possibility that we could indeed see rapidly accelerating technological change, an “adaptable NGDP target” would essentially force the future Janet Yellen to engage in an ultimately hopeless attempt to predict the path of the structural growth rate and to chase after it. I think it’s clear why this would be a disaster.

V. An anticipation of some responses

Before I close this out, let me anticipate four possible responses.

1. NGDP variability is more important than inflation variability

Nick Rowe makes this argument here, and Sumner sort of does as well here. Ultimately, I think this is a good point, because of the problem of incomplete financial markets described by Koenig (2013) and Sheedy (2014): debt is priced in fixed nominal terms, and thus the ability to repay is dependent on nominal incomes.

Nevertheless, the fact that NGDP targeting has other good things going for it does not resolve the problem that if the potential growth rate changes, the long-run inflation rate changes with it. This is welfare-reducing for all the standard reasons. Because of this, it seems to me that there’s not really a good way of determining whether NGDP level targeting or price level targeting is better, and it’s certainly not the case that NGDPLT is the monetary policy regime to end all other monetary policy regimes.

2. Target NGDP per capita instead!

You might argue that if the most significant reason that the structural growth rate could fluctuate is changing population growth, then the Fed should just target NGDP per capita. Indeed, Scott Sumner has often mentioned that he actually would prefer an NGDP per capita target. To be frank, I think this is an even worse idea! It would require the Fed to have a long-term structural model of demographics, which is just a terrible prospect to imagine.

3. Target nominal wages/nominal labor compensation/etc. instead!

Sumner has also often suggested that perhaps nominal aggregate wage targeting would be superior to targeting NGDP, but that it would be too politically controversial. Funnily enough, the basic New Keynesian model with wage stickiness instead of price stickiness (and no zero lower bound) would recommend the same thing.

I don’t think this solves the issue. Take the neoclassical growth or Solow model with Cobb-Douglas technology and preferences and no population growth. On the balanced growth path, the growth rate of wages = the potential growth rate of the economy = the growth rate of technology. For a more generalized production function and preferences, wages and output still grow at the same rate.

In other words, the growth rate of real wages tracks the potential growth rate of the economy. So this doesn’t appear to solve anything, as it would still require a structural model.
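As a quick check, take the Cobb-Douglas case, $Y = K^{\alpha}(AL)^{1-\alpha}$: with competitive labor markets, the real wage is the marginal product of labor,

$w = \frac{\partial Y}{\partial L} = (1-\alpha)\frac{Y}{L}$

and on the balanced growth path $Y/L$ grows at the rate of labor-augmenting technology – precisely the structural growth rate the Fed would need to estimate.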

4. Set up a prediction market for the structural growth rate!

I don’t even know if this would work well with Sumner’s proposal. But perhaps it would. In that case, my response is… stay tuned for my critique of market monetarism, part two: why handing policymaking over to prediction markets is a terrible idea.

VI. In conclusion

The concerns I outline above have driven me from an evangelist for NGDP level targeting to someone extremely skeptical that any central banking policy can maintain monetary equilibrium. The idea of optimal policy under NGDP targeting necessitating a structural model of the economy disturbs me, for a successful such model – as Sumner persuasively argues – will never be built. The prospect that NGDP targeting might collapse in the face of rapidly accelerating technological growth worries me, since it does seem to me that this very well could occur. And even setting aside the techno-utopianism, the historical examples described above, such as Japan in the 1980s, demonstrate that we have seen very large shifts in the structural growth rate in actual real-world economies.

I want to support NGDPLT: it is probably superior to price level or inflation targeting anyway, because of the incomplete markets issue. But unless there is a solution to this critique that I am missing, I am not sure that NGDP targeting is a sustainable policy for the long term, let alone the end of monetary history.

I found an interesting 1970 AER paper that adds land to the Solow model in continuous time and verifies the result, discussed last week, that as the rate of return on capital approaches the growth rate of the economy, the price of land approaches infinity. The paper is a mess – the notation is disgusting, and doing continuous time instead of discrete time adds nothing but pain.

The bottom line is this. The rates of return on land and capital should be equalized in equilibrium (an arbitrage condition). If the interest rate – i.e., the rate of return on capital – is low, then the rate of return on land must be just as low.

First, the rate of return on land and capital should be equal on the balanced growth path (equation 6). Where K is capital, L is land, P is the price of land in terms of goods, and g is the rate of exogenous economic growth, we have the equilibrium condition:

$F_K = \frac{F_L}{P} + \frac{\dot{P}}{P}$

Translated to English: the marginal product of capital must equal the marginal product of land plus the rate of increase in the price of land. Perhaps even better: the interest rate on capital equals the rent from land plus the capital gains from land. Or, one last rephrasing: the return on capital equals the return on land, where part of the return on land is price appreciation.

From this, an equilibrium relationship between effective capital (k = K/N) and effective land (l = PL/N), where N is effective labor, can be derived with a little algebra (equation 8):

$l = \frac{F_L}{(F_K - g)\beta}$

where $\beta$ is a constant (the ratio of initial effective labor to land).

This is the key result. If the rate of return on capital is equal to the growth rate, effective land goes to infinity: dividing by zero. And thus the price of land goes to infinity, as the supply of land is perfectly inelastic, i.e. fixed.
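A quick numeric illustration of that division by zero, with made-up values for F_L and β:

```python
F_L = 1.0    # marginal product of land (made-up value)
beta = 1.0   # ratio of initial effective labor to land (made-up value)
g = 0.02     # growth rate of the economy

# l = F_L / ((F_K - g) * beta) blows up as the return on capital F_K -> g.
for F_K in [0.05, 0.03, 0.025, 0.021, 0.0201]:
    l = F_L / ((F_K - g) * beta)
    print(f"F_K = {F_K:.4f} -> effective land l = {l:,.0f}")
```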

To repeat what I wrote last week: we don’t see the price of land going to infinity, which would seem to be a challenge for secular stagnationists.

Some thoughts on Eggertsson and Mehrotra (2014), the first formalization of the “secular stagnation” thesis. Nothing innovative here, I just wanted to collect my thoughts all in one place.

Model overview
First, a brief review of Eggertsson and Mehrotra’s model for easy reference. (Simon Wren-Lewis has a short summary of the math.)

The paper describes a three-period overlapping generations model, where the middle generation receives an endowment (or, in an extension, labors for an income). The young and old generations do not receive incomes; the young borrow from the middle generation, and the old live off savings from their time in the middle generation. The amount the young can borrow is constrained by a purely exogenous “debt limit”. The key result is that if this debt constraint (exogenously) drops – a “deleveraging shock” – then the demand for loans drops, forcing the natural rate of interest to fall, potentially permanently below zero.
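Here is a minimal numerical sketch of the loan market in the endowment version, under log utility and with made-up parameter values of my own (not the paper’s calibration). The point is just the comparative static: a tighter debt limit drags the natural rate down, potentially below zero. Note that rates here are per generation, not per year.

```python
def natural_rate(D, Y=1.0, beta=0.96, g=0.0):
    """Gross natural rate in a stylized three-period OLG loan market.

    The young borrow D / (1 + r) (their debt limit binds); the middle
    generation, with endowment Y and repaying debt D, saves
    beta / (1 + beta) * (Y - D) under log utility. Loan market clearing,
    (1 + g) * borrowing = saving, pins down the gross rate below.
    """
    return (1 + beta) / beta * (1 + g) * D / (Y - D)

# A "deleveraging shock": tightening the debt limit D lowers the natural rate.
for D in [0.35, 0.30, 0.25]:
    print(f"debt limit D = {D:.2f} -> natural rate {natural_rate(D) - 1:+.1%}")
```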

Once a price level and downward nominal wage rigidity are introduced, we can then have a permanent zero lower bound situation where the natural rate is permanently and unattainably negative – secular stagnation, by definition. This causes output to be permanently below potential.

Now, various thoughts, from more to less interesting:

1. Lack of capital
This model does not include capital. I suspect a model with capital and a negative interest rate would have negative or zero investment, whereas in the economy today we of course have positive net investment.

The authors do note they want to include capital in the next iteration of the model.

2. Lack of land
There is also no land in this model. Of course in modern times land is not typically included as a factor in the production function. Solow once joked, “If God had meant there to be more than two factors of production, he would have made it easier for us to draw three-dimensional diagrams.”

But Nick Rowe, I think, makes a good case that in a model attempting to analyze permanently negative interest rates, land must be included.

The argument goes like this: think of land as an asset like any other, where the price of land equals the present discounted value of the future returns to land. It can be shown that as the interest rate approaches the growth rate of the economy, the value of the land goes to infinity.
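Concretely, if a unit of land pays rent $R$ that grows with the economy at rate $g$, and is discounted at the interest rate $r > g$, its price is

$P = \sum_{t=1}^{\infty} \frac{R(1+g)^{t-1}}{(1+r)^{t}} = \frac{R}{r-g}$

which blows up as $r \to g$.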

Back in the real world, of course, we have not seen land prices go to infinity. So perhaps adding land to this model would prevent us from having secular stagnation without the price of land blowing up.

Section three of this Stefan Homburg (2014) paper discusses this further, and Homburg models the result more formally here. Another interesting post from Rowe here, and comments from Matt Rognlie here.

(Side note: by the same logic, perhaps a fall in the natural rate explains the housing “bubble” of the last decade?)

3. Debt limit as exogenous
The debt limit is purely exogenous. It seems likely that there would be important and interesting general equilibrium effects if it were endogenized. There is not much to say on this point, but it’s very important.

4. OLG modelling instead of representative agent
This model uses OLG as its basic framework instead of a representative agent.

Importantly, this is different from the last decade and a half of research on the liquidity trap (Krugman 1998, Eggertsson and Woodford 2003, Auerbach and Obstfeld 2005), which all used representative agent models. In these models, the long-run steady-state natural rate is determined by the discount factor, which forces it to be positive. Thus, the economy can only be in a liquidity trap (ZLB) situation temporarily.

It’s only in this OLG environment that we can have a permanently negative natural rate. That seems very interesting to me – what else might we be missing by using the representative agent model? (…Probably not much.)

Turning away from mathematical formalization, I wonder if one way we could think about this is: what if the natural rate was expected to remain at the ZLB for a period longer than the remainder of a person’s life (say >60 years)? Would that create some kind of a trap situation?

Conclusion
Overall, I’m simply not convinced that this is a useful model. The idea that the natural rate could be permanently negative simply seems extremely unlikely. Also, the lack of inclusion of land seems to be a big oversight.

Update: Josh Hendrickson makes the interesting point that if money (with a fixed nominal return of 0%) is added to the economy, the Eggertsson-Mehrotra result does not hold.

In 2008, Christina and David Romer published an interesting paper demonstrating that FOMC members are useless at forecasting economic conditions compared to the Board of Governors staff, and presenting some evidence that mistaken FOMC economic forecasts were correlated with monetary policy shocks.

I’ve updated their work with another decade of data, and find that while the FOMC remained bad at forecasting over the extended period, the poor forecasting was not correlated with monetary policy shocks.

First, some background.

Background
Before every FOMC meeting, the staff at the Board of Governors produces the Greenbook, an in-depth analysis of current domestic and international economic conditions and, importantly for us, forecasts of all kinds of economic indicators a year or two out. The Greenbook is only released to the public with a major lag, so the last data we have is from 2007.

The FOMC members – the governors and regional bank presidents – prepare consensus economic forecasts twice a year, usually February and July, as part of the Monetary Policy Report they must submit to Congress. (Since October 2007, FOMC members have prepared projections at four FOMC meetings per year. That data, from the end of 2007, is not included in my dataset here, but I’ll probably put it in when I update it in the future as more recent Greenbooks are released.)

Summary of Romer and Romer (2008)
The Romers took around 20 years of data from these two sources, from 1979 to 2001, and compared FOMC forecasts to staff forecasts. They estimate a regression of the form

$X = \alpha + \beta S + \gamma P + \varepsilon$

Where X is the realized value of the variable (e.g. actual GDP growth in year t+1), S is the staff’s projection of the variable (e.g. the staff’s projected GDP growth next year), and P is the FOMC’s projection of the variable (e.g. the FOMC’s projected GDP growth next year).

They find “not just that FOMC members fail to add information, but that their efforts to do so are counterproductive.” Policymakers were no good at forecasting over this period.

They then ask if the mistaken forecasts cause the FOMC to make monetary policy errors that cause monetary policy shocks. The two use their own Romer and Romer (2004) measure, which I’ve updated here, as the measure of monetary policy shocks. They then estimate the regression

$M = \alpha + \beta P + \gamma S + \varepsilon$

Where M is the measure of shocks, and P and S are as before. They only ran this regression from 1979 through 1996, as that was the latest the measure of shocks went up to in the 2004 paper.

They find that, “The estimates suggest that forecast differences may be one source of monetary shocks… An FOMC forecast of inflation one percentage point higher than the staff forecast is associated with an unusual rise in the federal funds rate of approximately 30 basis points.”

That seemed like a very interesting result to me when I first read this paper. Could bad monetary policymaking be explained by the hubris of policymakers who thought they could forecast economic conditions better than the staff? It turns out, after I updated the data, this result does not hold.

Updating the data
I followed the same methodology as when I updated Romer and Romer (2004): first replicating the data to ensure I had the correct method before collecting the new data and updating. The data is from 1979 through 2007, and all my work is available here and here.

I find, first, that policymakers remained quite poor economic forecasters. Here is the updated version of Table 1 from the paper, with the old values for comparison:

The coefficient on the FOMC forecast for inflation and unemployment is still right around zero, indicating that FOMC forecasts for these two variables contain no useful information.

However, it appears that once we extend the monetary policy shock regression from 1996 to 2007, the second result – that forecast differences are a source of monetary policy shocks – does not hold. Here is the updated version of Table 2 from the paper, again with old values for comparison:

When the Romers published their paper, the R-squared on the regression of monetary shocks on all three variables was 0.17. This wasn’t exactly the strongest correlation, but for the social sciences it’s not bad, especially considering that the monetary shock measure is fairly ad hoc.

As we can see in the updated regression, the R-squared is down to 0.05 with the extended data. That is too weak a relationship to hang the result on. Thus, unfortunately, this result does not appear to hold.

I’ve updated the Romer and Romer (2004) series of monetary policy shocks. The main takeaway is this graph of monetary policy shocks by month, since 1969, where the gray bars indicate recession:

When the two published their paper, they only had access to data up through 1996, since Fed Greenbooks – upon which the series is based – are released with a large lag. I’ve updated it through 2007, the latest available, and will update it again next month when the 2008 Greenbooks are released.

The two interesting points in the new data are:

1. The negative policy shock before and during the 2001 recession
2. The negative policy shock in 2007 before the Great Recession

Below I’ll go into the more technical notes of how this measure is constructed and my methodology, but the graph and the two points above are the main takeaway.

How is the R&R measure constructed?
First, the Romers derive a series of intended changes in the federal funds rate. (This is easy starting in the 1990s, since the FOMC began announcing when it wanted to change the FFR; before that, the two had to trawl through meeting minutes to figure it out.) They then use the Fed’s internal Greenbook forecasts of inflation and real growth to control the intended FFR series for monetary policy actions taken in response to information about future economic developments, specifically RGDP growth, inflation, and unemployment.

In other words, they regress the change in the intended FFR around forecast dates on RGDP growth, inflation and unemployment. Then, as they put it, “Residuals from this regression show changes in the intended funds rate not taken in response to information about future economic developments. The resulting series for monetary shocks should be relatively free of both endogenous and anticipatory actions.”

The equation they estimate is:

$\Delta ff_m = \alpha + \beta\, ffb_m + \sum_{i=-1}^{2}\gamma_i\, \Delta\tilde{y}_{mi} + \sum_{i=-1}^{2}\lambda_i\,(\Delta\tilde{y}_{mi} - \Delta\tilde{y}_{m-1,i}) + \sum_{i=-1}^{2}\phi_i\,\tilde{\pi}_{mi} + \sum_{i=-1}^{2}\theta_i\,(\tilde{\pi}_{mi} - \tilde{\pi}_{m-1,i}) + \rho\,\tilde{u}_{m0} + \epsilon_m$

• Δff is the change in the intended FFR around meeting m
• ffb is the level of the target FFR before the change in the meeting m (included to capture any mean reversion tendency)
• π, Δy, and u are the forecasts of inflation, real output growth, and the unemployment rate; note both that the current forecast and the change since the last meeting are used
• The i subscripts refer to the horizon of the forecast: -1 is the previous quarter, 0 the current quarter, 1 the next quarter, and 2 the quarter after that
• All relative to the date of the forecast corresponding to meeting m; i.e. if the meeting is in early July 1980 and the forecast is in late June 1980, the contemporaneous forecast is for the second quarter of 1980
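In code, the whole construction is one OLS regression plus its residuals. A minimal sketch, assuming a pandas DataFrame df with one row per FOMC meeting and hypothetical column names for the variables defined above:

```python
import pandas as pd
import statsmodels.api as sm

def rr_shocks(df: pd.DataFrame) -> pd.Series:
    """Romer-Romer-style shocks: residuals from regressing the intended
    FFR change on the pre-meeting level and Greenbook forecasts."""
    regressors = (
        ["ffr_before"]                               # level before the meeting
        + [f"dy_{i}" for i in (-1, 0, 1, 2)]         # output growth forecasts
        + [f"dy_rev_{i}" for i in (-1, 0, 1, 2)]     # revisions since last meeting
        + [f"pi_{i}" for i in (-1, 0, 1, 2)]         # inflation forecasts
        + [f"pi_rev_{i}" for i in (-1, 0, 1, 2)]     # inflation revisions
        + ["u_0"]                                    # current-quarter unemployment
    )
    X = sm.add_constant(df[regressors])
    fit = sm.OLS(df["d_ffr_intended"], X).fit()
    return fit.resid  # the monetary policy shock series

# usage: shocks = rr_shocks(df); shocks.plot()
```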

The Romers show in their paper that, by this measure, negative monetary policy shocks have large and significant effects on output and the price level.

It is worth noting the limitations of this measure. It is based on the federal funds rate instrument, which is not a very good indicator of the stance of monetary policy. Additionally, if the FOMC changes its target FFR between meetings, any shock associated with that decision would not be captured by this measure.

Results
First, I replicated the Romer and Romer (2004) results to confirm I had the correct method. Then I collected the new data in Excel and ran the regression specified above in MATLAB. The data is available here and here (though there might have been errors when uploading to Google Drive).

The residuals are shown above in graph form; it is an updated version of figure 1a in Romer and Romer (2004).

The coefficients and statistics on the regression are (this is an updated version of table 1 in the original paper):

Last, for clarity, I have the monetary policy shock measure below with the top five extremes removed. This makes some periods more clear, especially the 2007 shock. Again, I will update this next month when the 2008 Greenbooks are released. It should be very interesting to see how large a negative shock there was in 2008.

Update: I've updated this last graph with 2008 data. Interestingly, the 2008 shock is not exceptionally large.

From Matt Rognlie, an interesting point:

"A few years ago, I read an aside in Stiglitz’s Nobel autobiography that really shook me:

Economists spend enormous energy providing refined testing to their models. Economists often seem to forget that some of the most important theories in physics are either verified or refuted by a single observation, or a limited number of observations (e.g. Einstein’s theory of relativity, or the theory of black holes).

I really think that this is true: we often do very complicated, nontransparent estimation and testing of models, when in reality one or two carefully selected stylized facts could be much more decisive."

That from here, a debate between Rognlie and Miles Kimball on sticky prices and wages.

When Larry Summers was still a candidate for Fed Chair, and the econoblogosphere was still debating whether or not he ought to be nominated, it became oddly fashionable to argue that it would be a good thing if Summers were confirmed and then turned out not to be politically independent.

See, for example, Matt Yglesias: “If Summers’ close ties to Barack Obama and his team make people expect that monetary policy will err on the side of looseness and inflation overshooting, that'll be good for the economy.”

This seems to me to be dangerously short-term thinking. Fed independence is a hard-won achievement, and not something that should be tossed aside lightly. To demonstrate this, I want to walk through two episodes of Federal Reserve history.

The Treasury-Fed Accord of 1951
The extent of Fed subservience to the Treasury during and immediately after World War II is pretty staggering. Robert Hetzel has a good narrative summary of the events leading up to the Accord, the so-called “Treasury-Fed War”.

Starting in 1942, the Fed pegged the interest rates on Treasury securities. The interest rates on T-bills (Treasuries with maturities of less than one year) were never allowed to rise above 0.375%, and the rates on Treasury bonds (longer-term bonds) were pegged at no more than 2.5%.

How did the Fed keep interest rates low on Treasuries? By printing money to purchase them. Such large purchases were required that the Fed ended up owning basically the entire outstanding quantity of T-bills (though the public still held Treasury bonds).

This expansion in the money supply – truly, a monetization of the debt – led to high inflation as soon as the war was over and price controls were ended. Inflation in mid-1947 was 17.6%, and had only dropped to 9.5% a year later despite a looming recession.

By the second half of 1950, the FOMC was becoming severely antsy about all the inflation that it was creating (and recognized it was creating) as well as its loss of independence. In 1948, Chairman Marriner Eccles had not been reappointed by President Truman in order to demonstrate “who’s boss”, according to a Truman staff member. Starting in August 1950, the FOMC began raising the pegged rate very slightly and very slowly.

By February 1951, however, the Fed had had enough and declared that it was no longer willing to peg Treasuries. This came after six months of very public battles between the Federal Reserve and the Treasury, involving inaccurate media leaks, considerable political pressure, and what seemed often to be simply lies from the Treasury and White House.

The Truman administration finally relented, and the Accord was agreed upon, making the Fed independent and ending Treasury pegging after a very brief transition period. However, it seems that the White House may not have truly given up, as Fed Chairman Thomas McCabe was forced out less than two weeks after the agreement was released on March 4, 1951.

William McChesney Martin, who had been an Assistant Secretary of the Treasury, was appointed as McCabe’s replacement. He was considered an ally of Truman, but once in office at the Fed, he strongly supported its independence, creating the relationship between the Fed and Treasury that we see today. According to one Truman administration official, several years later Truman ran into Martin in New York. The former president said only one word – “traitor” – before walking away.

Thus, it should be clear that Federal Reserve independence from the Treasury was something that had to be fought for.

Nixon pressures Burns
The Federal Reserve again came under political pressure during the Nixon administration. This time, instead of keeping the cost of debt down, President Nixon sought to ensure his reelection. Evidence comes from the Nixon Tapes and from members of the Nixon administration.

Nixon had felt burned after his loss in the 1960 presidential election, for which he blamed a weakening of the economy as November 1960 approached. He was concerned enough that, as soon as Martin’s term expired during his presidency (Martin had managed to remain as chair until then), he appointed Arthur Burns. Burns was considered a serious economist, but also a strong supporter of the Republican Party.

When Burns was first nominated, Nixon reportedly told him: “My relations with the Fed will be different than they were with Bill Martin there. He was always six months too late doing anything. I’m counting on you, Arthur, to keep us out of a recession… I know there’s this myth of the autonomous Fed [barked a quick laugh].”

Another Nixon administration member states that Nixon was responsible for anonymous leaks that were used to pressure Burns into loosening policy. One floated a proposal to expand the size of the Board of Governors, so that Nixon could pack it in a fashion similar to FDR’s court packing attempt; another proposed giving the White House more direct control over the FOMC; and a third falsely stated that Burns requested a pay raise when, in fact, he requested a cut.

The Nixon Tapes present the most irrefutable evidence. Nixon can be heard talking with his Secretary of the Treasury about Burns, stating, “War is going to be declared if he [Burns] doesn’t come around some.”

There are also tapes of Nixon and Burns, where Nixon says that “I don’t want to go out of town [Washington] too fast,” and, “This will be the last conservative administration in Washington.” He also declares the contemporaneous “liquidity problem” – that is, one of too much liquidity causing inflation – “just bullshit.”

Meanwhile, this appeared to have some effect on Burns. FOMC minutes indicate dissension among other members, but money supply growth increased, the federal funds rate dropped, and the discount rate was lowered.

This is the danger of having a Fed chair that is too close to the presidential administration: his or her sound economic judgment will be overridden by political beliefs.

Conclusion
This discussion is not quite as timely now that Summers has pulled his name from the race, though there have been whispers about Tim Geithner being named. Nevertheless, it is never a bad time to reiterate the importance of central bank independence.

Of course, independence should never be an excuse for lack of accountability. This, then, is just one more point on the long list of reasons why the Fed should stick to an explicit monetary policy rule that would ensure its insulation from political pressure and allow the public to judge it for any deviations from target.

(…not to mention, if this policy rule has a smart target and uses a smart instrument, then it will lead to economic prosperity.)

Ashok Rao has a post highlighting JOLTS data that shows that despite a steady rise in job openings since 2009, hiring has pretty much been flat. In particular, this graph (blue is openings, red is hiring):

[…]

What we see is a “V-type” recession for openings. That is, they rapidly crashed during the depths of the recession, but recovered at a pace proportional to the fall. On the other hand, hires evince a more “L-type” pattern, characterized by a quick fall without a similar recovery.

He concludes that structural factors might be more important in holding back the labor market recovery than is commonly believed: “The labor-mismatch hypothesis of unemployment is very appealing.”

This is a fairly perplexing problem for, and a problem that does not receive enough attention from, those who think that the economy is currently demand-constrained and not very supply-constrained. However, I’m not convinced.

Falling recruiting intensity: a structural change in the way employers post openings
Ashok makes the smart point that because of data quality problems, we should concern ourselves with the change in the number of openings, rather than the absolute level. “Errors are systematic rather than random. That is, even if there is a degree of false openings, we care not about the absolute levels, but rate of change thereof.” (emphasis original)

In the graph above, we see that openings have risen significantly since 2009, so Ashok’s logic is consistent.

However, I think he may be too quick to discard the possibility that there has been a structural change in the way that employers advertise vacancies. In other words, the openings data error might be systematic, but not constant.

Two recent papers, both by the wonderful UChicago professor Steven Davis along with Jason Faberman and John Haltiwanger, provide evidence for this hypothesis. They argue that there is empirical evidence of a decline in “recruiting intensity” per job vacancy.

The three build an index of recruiting intensity. From early 2007 to late 2009, it plummets by 20% and as of September 2011 remained significantly below the pre-recession trend.

As for the definition of recruiting intensity, I’m just going to quote straight from the first paper I linked to: “‘Recruiting intensity’ is shorthand for the other instruments employers use to influence the pace of new hires – e.g., advertising expenditures, screening methods, hiring standards, and the attractiveness of compensation packages.”

(I’m not sure, but it sounds like a residual in their model? I’m still teaching myself matching models, which they use, and have only looked at money matching models. For more, check out the last term of equation 8 in the second paper.)

Thus, the number of vacancies has risen, but the effort employers put into sorting through possible hires has fallen.

In short, the supply of able workers has not fallen; the demand for them has dropped, as evidenced by the drop in employer spending (of time and money) on recruiting. Employers are listing openings without necessarily intending to fill them.

Some intuition

Intuitively, this is a result that makes sense to me. First, there is an ongoing secular decline in the cost of advertising job openings thanks to websites like Monster and CareerBuilder. This does not explain the sharp decline in 2008, though.

Instead, I think the marginal benefit of posting a job vacancy probably rises during periods of mass unemployment (while the marginal cost stays the same). With a larger pool of possible hires, an employer has a greater chance of finding someone worth hiring.

All else equal, an employer is more likely to post an opening if there is a larger supply of potential hires.

Concluding thoughts
I’ve made it this far without mentioning what I believe to be far and away the most convincing evidence that the labor market has few structural problems: wages remain depressed in all markets!

A major implication of the claim that there is a skills mismatch in the labor market would be rising wages in supply-constrained fields. I have not seen any such data.

Last, I know that there exists data on job openings from private online job boards (e.g. Monster.com). I, however, am unable to find it. It would be interesting to see how this data matches up with the JOLTS data.

(Yichuan Wang also makes a good point in Ashok’s comment section about government-created supply constraints such as UI and food stamps being reduced as aggregate demand/nominal spending recovers.)

The future of Fannie Mae and Freddie Mac is in the news with a speech by the President last week outlining his proposed reforms for the pair of housing finance giants.

What role should the government play in the housing market in the aftermath of the second collapse of the American housing finance sector in three decades (the first being the S&L crisis)? All theory and evidence indicates that the role should be minimal, and that Fannie and Freddie should be abolished.

What do Fannie and Freddie do?
First, a quick review of what the GSEs do. Their main role, at least before the crisis, was mortgage pooling and securitization.

A bank issues mortgages to a set of homeowners. It then goes to Fannie or Freddie and trades this bundle of mortgages for a security (a mortgage-backed security, MBS) that pays regular, guaranteed dividends corresponding to the mortgage payments on the underlying loans. In exchange, the GSE takes a cut, usually around 20 basis points (0.2%). The GSE guarantee ensures payment, even in the event of default.
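A toy version of the cash flows, with invented rates (including an assumed servicing fee, which the description above doesn’t cover):

```python
mortgage_rate = 0.060    # rate paid by the underlying homeowners (invented)
guarantee_fee = 0.0020   # the GSE's ~20 basis point cut for its guarantee
servicing_fee = 0.0025   # kept by the originating bank (assumed)

# MBS investors receive the mortgage payments net of the fees, with the
# GSE guaranteeing the payments even if homeowners default.
investor_yield = mortgage_rate - guarantee_fee - servicing_fee
print(f"Pass-through yield to MBS investors: {investor_yield:.2%}")
```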

The real impact of the GSEs on the housing market comes from the fact that they can only purchase “conforming loans”. Conforming loans must meet a set of criteria. For example, the loan has to be below a certain size ($417,000 for one-unit loans in 2008), which historically increased annually with average home prices. Anything above this amount was called a jumbo loan. Other criteria include a maximum loan-to-value ratio and proper documentation.

The impact of the GSEs on the housing market
I would claim that the GSEs have three principal impacts on the housing market.

1. Standardization
First, the GSEs help standardize financial products in the mortgage market. Because of the conforming loan standard, lenders are incentivized to produce products and make loans that conform to these standards.

This is a good thing: standardizing complicated financial products helps consumers make informed decisions. However, standardization has not made mortgages fully transparent to consumers: mortgages are still widely considered confusing, and the amount of work the CFPB is doing on mortgages is evidence of that.

2. Subsidizing home ownership
Through its (once implicit, now explicit) federal government-backed participation in the mortgage market, Fannie and Freddie lower the cost of home ownership and thus encourage more home ownership – and less renting – than would otherwise occur.

Many economists, even many of those who believe Fannie and Freddie should be abolished, believe this is a noble cause. According to this line of thinking, home ownership has positive externalities. Homeowners take better care of their homes than renters (as a renter… I admit this is probably accurate) and are potentially more actively involved in their communities.

The benefits, however, are hard to quantify. This makes them difficult to justify against the costs of propping up Fannie and Freddie.

What is more quantifiable is the impact on the rate of homeownership in the U.S., and that impact appears to be small. The U.S. does not have the highest homeownership rate in the world, despite having unique government involvement in housing finance. Indeed, we do not even have above average homeownership. The 2007 rate was 68%, which is pretty much smack dab in the middle among developed countries.

Thus, even if homeownership is something to be encouraged (which is far from clear), Fannie and Freddie do not do much to encourage it.

3. Subsidizing a particular type of housing finance

Source: Michael Lea, “Alternative Forms of Mortgage Finance: What Can We Learn from Other Countries?”

As can be seen in the graph above, the US is unique among peer nations in having a mortgage market that is dominated by long term fixed rate loans (colored purple). This is largely a result of the distortion of the market by the GSEs, though also, importantly, because of strong consumer preferences.

Consider: why would a bank want to make a 30 year fixed rate loan? If market interest rates go up, the bank will not receive any benefit. If rates go down, homeowners can refinance. While good for the borrower, this is not attractive for the bank.

This is also evident from the historical record. Before the government got involved in the mortgage market during the New Deal, residential mortgages were typically much shorter, 5-10 years, and carried a variable rate of interest. A mortgage could in effect last longer than 5-10 years if the borrower refinanced at the end of the loan, but it meant that the borrower bore the interest risk.

However, if the GSEs were abolished, it is highly unlikely that the 30 year fixed rate mortgage would disappear. As mentioned earlier, conforming loans have a size limit, above which loans are known as jumbo loans. 30 year fixed rate jumbo loans still exist; they just carry a higher interest rate, a premium of something like 20 basis points.

If you are a homeowner and receiving this subsidy, this might sound like a good enough reason to keep the GSEs around. For America as a whole, however, this does not seem like a positive benefit. Why should this particular type of mortgage be subsidized? Is the social benefit of long term, fixed rate mortgages higher than the private benefit? That would be a tough argument to make.

Even homeowners should remember that though the GSEs subsidize their fixed rate mortgages, it is not for free. Bailing out the GSEs, or even paying for their operation in normal times, means paying higher taxes in the long run.

Conclusion
Besides perhaps creating some limited standardization of financial products, there does not appear to be anything socially useful that Fannie and Freddie do that the private market could not on its own. Without any justification for their existence, and with the high fiscal cost of maintaining them, the two should be wound down and abolished.

One of the most important ideas to emerge from market monetarist writing in the blogosphere, in my opinion, is the “Sumner critique.” This critique, named after Scott Sumner, has been worded many different ways, but I would state it like this:

If the Fed is successfully stabilizing a nominal anchor – e.g. if the Fed is successfully maintaining 2% inflation or 5% NGDP growth – then the fiscal multiplier will be zero, since the Fed will offset any fiscal impact on aggregate demand.

Two important caveats. First, fiscal stimulus or austerity can still have real effects through supply side effects. Even if the Fed tightens policy after a tax cut, that tax cut will still be beneficial by reducing incentive distortions.

Second, if the Fed is not doing its job, the fiscal multiplier might not be zero. The word “successfully” is an important one in the definition.

The intuition is very straightforward. If Congress passes fiscal stimulus to boost demand, higher demand will boost inflation (and NGDP growth, if that is the Fed’s target), and the Fed will offset it by tightening monetary policy. If Congress cuts spending or raises taxes to reduce the deficit, that will reduce inflation (and NGDP growth), and the Fed will offset it by loosening monetary policy.

The important implication is that Congress should have no role in responding to recessions, since stimulus will only increase the national debt and replace action that the Fed would have taken anyway, which would not have increased the national debt. Additionally, if Congress implements austerity, one should not be extraordinarily concerned about the economic effects except for the supply-side consequences.

This is the tenet of market monetarism which has been the hardest for progressive supporters of the movement to accept.

In this post, I want to walk through a simple model to demonstrate that the Sumner critique is accurate, and what assumptions are necessary to the model to produce the result. The model is from a 1998 paper by Nick Rowe and Simon Power. By walking through the assumptions in the model, we can see what needs to be true for the Sumner critique to be accurate.

The model

The model is a game theoretic one. There are two players, Congress (which controls fiscal policy) and the Fed (which controls monetary policy). Aggregate demand (AD) is determined by the sum of fiscal policy (F) and monetary policy (M).

Neither Congress nor the Fed intrinsically cares about the level of monetary policy. M only matters for its effect on aggregate demand.

On the other hand, Congress and the Fed both care about the total amount of fiscal policy, F. Both have the same optimal target for fiscal policy, F*. If F is too low, perhaps the roads start to crumble; if it is too high, the tax burden becomes too heavy or the debt grows too much.

Congress and the Fed also both care about the total level of aggregate demand. Neither wants it to be too high for fear of inflation or too low for fear of recession. However, they disagree about the optimal level of AD.

Assume that Congress wants AD to be higher than the Fed does. Perhaps Congress wants higher inflation to reduce the debt, or hopes that higher than expected inflation in the short run will boost employment and thus popular support.

AD*F > AD*M

We can represent Congress’ and the Fed’s dissatisfaction as actual fiscal policy F diverges from desired fiscal policy F* and as actual aggregate demand AD diverges from desired aggregate demands AD*F and AD*M respectively. We assign each a loss function that is quadratic, though the form of the loss function isn’t important.

It is clear that Congress and the Fed each has to take into account the other’s action when setting its own policy to best achieve its objective. But what type of game is this?

The game type that makes most sense to me is known as Stackelberg competition. In Stackelberg games, one player moves before the other. In this case, it makes sense for Congress to move first and set F. After viewing this action, the Fed moves and sets M.

My argument would be that Congress is slow-moving and typically only sets the level of spending and taxation once a year (…if it’s doing its job). The Fed, meanwhile, meets every six weeks to decide policy and can meet more frequently if conditions warrant.

Congress thus optimizes first. It seeks to minimize its loss function, LF. It knows that no matter the F that it chooses, the Fed will be able to set M such that aggregate demand equals the Fed’s goal. That is, the Fed’s reaction function is M = AD*M – F.

Proof:

min [choose M] LM = (F – F*)² + (AD – AD*M)²

Substitute for the definition of AD

min [choose M] LM = (F – F*)² + (M + F – AD*M)²

Take the first order condition (second order conditions check out)

dLM/dM = 2M + 2F – 2AD*M = 0

Solve:

M = AD*M – F

Bingo! This is the essence of what we’re trying to show. The Fed will always set M to offset any attempted manipulation of AD by Congress. Realizing that its choice of F will have no impact on aggregate demand, Congress sets F = F* so that it can at least hit that objective exactly.

Mathematically:

min [choose F] LF = (F – F*)² + (AD – AD*F)²

s.t. M = AD*M – F

Plug the constraint into the objective function (AD = M + F = AD*M)

min [choose F] LF = (F – F*)² + (AD*M – AD*F)²

Take the first order condition (second order conditions check out)

dLF/dF = 2F – 2F* = 0

Solve

F = F*

The Sumner critique mechanism is evident here, where Congress refuses to engage in any sort of Keynesian aggregate demand management because it rationally anticipates the monetary offset cited by Sumner.
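The game is simple enough to verify numerically. Here is a sketch with made-up targets (any numbers with AD*F > AD*M work), using scipy for the nested minimizations:

```python
from scipy.optimize import minimize_scalar

F_star = 2.0     # shared fiscal policy target F* (made-up)
AD_star_M = 5.0  # the Fed's desired aggregate demand (made-up)
AD_star_F = 7.0  # Congress' desired aggregate demand (made-up, higher)

def fed_loss(M, F):
    # The Fed moves second, taking F as given; AD = M + F.
    return (F - F_star) ** 2 + (M + F - AD_star_M) ** 2

def congress_loss(F):
    # Congress moves first, anticipating the Fed's best response to its F.
    M = minimize_scalar(fed_loss, args=(F,)).x
    return (F - F_star) ** 2 + (M + F - AD_star_F) ** 2

F = minimize_scalar(congress_loss).x
M = minimize_scalar(fed_loss, args=(F,)).x
print(f"F = {F:.2f} (F* = {F_star}), AD = {F + M:.2f} (AD*_M = {AD_star_M})")
# -> F = F* and AD = AD*_M: full monetary offset.
```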

What assumptions were necessary?

As with any model, this one requires assumptions about how the economy works. As a particularly simple model, this one has quite a few. Below I pick out a few key assumptions and discuss whether they are necessary to achieve the result that the Sumner critique is accurate.

Assumption 1: The Fed can affect aggregate demand precisely as much as desired.

This assumption might seem very important to the conclusion (and the real life accuracy of it, even as a very good approximation, is certainly debatable), but it’s not.

Say instead that the Fed sets M, but that AD = M-bar + F, where M-bar = M + ϵ and ϵ is a random error term normally distributed around zero.

(If the error was distributed around anything else, that would mean that the Fed is incompetent, and would be a serious issue. Plausibly, one could argue that the Fed is incompetent when the fed funds rate is near zero since it seems to believe itself to be impotent.)

This error term means that sometimes fiscal stimulus would have a positive ‘net’ multiplier – but that also sometimes it would have a negative one! The expected value of any kind of stimulus, however, would still be zero, and Congress would thus still not attempt to influence AD.

Assumption 2: The Fed is the Stackelberg follower and moves second.

In some respects this is the key assumption, at least for the intuition behind the Sumner critique. However, Rowe and Power show in the paper that even if the game is simultaneous Nash – i.e., Congress and the Fed set their policies at the same time without seeing what the other player is going to do – AD ends up being equal to AD*M. In other words, the Sumner critique still holds.

It is true that if Congress is modeled as the Stackelberg follower, then AD ends up higher than desired by the central bank (but still not as high as Congress wants it). But this assumption seems implausible.

The one argument that could be made here is that monetary policy impacts the economy with long lags. This is actually what many argue, saying for example that the Fed is unable to offset spending cuts like the sequester since any actions it takes won’t have any effect until it is too late.

Such an argument is worthy of another post, but suffice it to say that the supposed problem of monetary policy lags is vastly overblown. As most economists accept today and all dominant DSGE models assume, the most important aspect of monetary policy is its effects on expectations. If the Fed is expected to offset spending cuts, even if it cannot do so right away, the fact that it is expected to do so in the near future is highly stabilizing.

Assumption 3: The Fed’s reaction function is invariant to fiscal policy.

This assumption was pointed out by Noah Smith. I quote his example: “Consider a world in which the Fed targets a 3% growth rate for NGDP if there is no stimulus, but raises the growth rate target in the event of a stimulus. In this case, it would make perfect sense to say ‘fiscal stimulus increased NGDP growth,’ in the sense that we normally think of causality.”

This is an important point, but there doesn’t seem to be any reason to believe that the Fed’s goals change if Congress changes the level of fiscal policy.

In conclusion, the important assumptions underlying the simple model outlined above that demonstrates the Sumner critique seem reasonable.