Some thoughts on Eggertsson and Mehrotra (2014), the first formalization of the “secular stagnation” thesis. Nothing innovative here, I just wanted to collect my thoughts all in one place.

Model overview
First, a brief review of Eggertsson and Mehrotra’s model for easy reference. (Simon Wren-Lewis has a short summary of the math.)

The paper describes a three-period overlapping generations model in which the middle generation receives an endowment (or, in an extension, earns labor income). The young and old generations receive no income; the young borrow from the middle generation, and the old live off savings from their time in the middle generation. The amount the young can borrow is capped by a purely exogenous “debt limit”. The key result is that if this debt limit (exogenously) drops (a “deleveraging shock”), the demand for loans falls, forcing the natural rate of interest to fall permanently, potentially to permanently below zero.

Once a price level and downward nominal wage rigidity are introduced, we can then have a permanent zero lower bound situation where the natural rate is permanently and unattainably negative – secular stagnation, by definition. This causes output to be permanently below potential.

Now, various thoughts, from more to less interesting:

1. Lack of capital
This model does not include capital. I suspect a model with capital and a negative interest rate would have negative or zero investment, whereas in the economy today we of course have positive net investment.

The authors do note they want to include capital in the next iteration of the model.

2. Lack of land
There is also no land in this model. Of course in modern times land is not typically included as a factor in the production function. Solow once joked, “If God had meant there to be more than two factors of production, he would have made it easier for us to draw three-dimensional diagrams.”

But Nick Rowe, I think, makes a good case that in a model attempting to analyze permanently negative interest rates, land must be included.

The argument goes like this: think of land as an asset like any other, where the price of land equals the present discounted value of the future returns to land. It can be shown that as the interest rate approaches the growth rate of the economy, the value of the land goes to infinity.
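
To see why, write the price of land as the discounted value of a rent R that grows with the economy at rate g. (This is my notation and standard Gordon-growth algebra, not anything from the paper.) Assuming r > g,

P = \sum_{t=1}^{\infty} \frac{R(1+g)^{t-1}}{(1+r)^t} = \frac{R}{r-g}

As r falls toward g, the denominator goes to zero and P explodes; for r ≤ g the sum diverges outright. So a natural rate permanently below the growth rate is hard to square with finite land prices.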

Back in the real world, of course, we have not seen land prices go to infinity. So perhaps adding land to this model would prevent us from having secular stagnation without the price of land blowing up.

Section three of this Stefan Homburg (2014) paper discusses this further, and Homburg models the result more formally here. Another interesting post from Rowe here, and comments from Matt Rognlie here.

(Side note: by the same logic, perhaps a fall in the natural rate explains the housing “bubble” of the last decade?)

3. Debt limit as exogenous
The debt limit is purely exogenous. It seems likely that there would be important and interesting general equilibrium effects if it were endogenized. There is not much more to say on this point, but it is an important limitation.

4. OLG modelling instead of representative agent
This model uses OLG as its basic framework instead of a representative agent.

Importantly, this is different from the last decade and a half of research on the liquidity trap (Krugman 1998, Eggertsson and Woodford 2003, Auerbach and Obstfeld 2005), all of which used representative agent models. In those models, the long-run steady-state natural rate is determined by the discount factor, which forces the long-run natural rate to be positive. Thus, the economy can only be in a liquidity trap (ZLB) situation temporarily.

It’s only in this OLG environment that we can have a permanently negative natural rate. That seems very interesting to me – what else might we be missing by using the representative agent model? (…Probably not much.)

Turning away from mathematical formalization, I wonder if one way we could think about this is: what if the natural rate were expected to remain at the ZLB for a period longer than the remainder of a person’s life (say, more than 60 years)? Would that create some kind of trap situation?

Conclusion
Overall, I’m simply not convinced that this is a useful model. The idea that the natural rate could be permanently negative seems extremely unlikely, and the omission of land seems like a big oversight.

 

Update: Josh Hendrickson makes the interesting point that if money (with a fixed nominal return of 0%) is added to the economy, the Eggertsson-Mehrotra result does not hold.

 

In 2008, Christina and David Romer published an interesting paper demonstrating that FOMC members are useless at forecasting economic conditions compared to the Board of Governors staff, and presented some evidence that mistaken FOMC economic forecasts were correlated with monetary policy shocks.

I’ve updated their work with another decade of data, and find that while the FOMC remained bad at forecasting over the extended period, the poor forecasting was not correlated with monetary policy shocks.

First, some background.

Background
Before every FOMC meeting, the staff at the Board of Governors produces the Greenbook, an in-depth analysis of current domestic and international economic conditions and, importantly for us, forecasts of all kinds of economic indicators a year or two out. The Greenbook is only released to the public with a major lag, so the last data we have is from 2007.

The FOMC members – the governors and regional bank presidents – prepare consensus economic forecasts twice a year, usually February and July, as part of the Monetary Policy Report they must submit to Congress. (Since October 2007, FOMC members have prepared projections at four FOMC meetings per year. That data, from the end of 2007, is not included in my dataset here, but I’ll probably put it in when I update it in the future as more recent Greenbooks are released.)

Summary of Romer and Romer (2008)
The Romers took around 20 years of data from these two sources, from 1979 to 2001, and compared FOMC forecasts to staff forecasts. They estimate a regression of the form

X_t = \alpha + \beta S_t + \gamma P_t + \epsilon_t

where X is the realized value of the variable (e.g. actual GDP growth in year t+1), S is the staff’s projection of the variable (e.g. the staff’s projected GDP growth next year), and P is the FOMC’s projection of the variable (e.g. the FOMC’s projected GDP growth next year).
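
(For concreteness, here is a minimal sketch of this regression in Python with pandas and statsmodels; the file and column names are placeholders of my own, not the Romers’ data.)

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file: one row per forecast date, with the realized value
# and the two competing forecasts for a given variable.
df = pd.read_csv("forecasts.csv")

# realized ~ staff + fomc mirrors X = a + b*S + c*P + e.
model = smf.ols("realized ~ staff + fomc", data=df).fit()
print(model.summary())
# If FOMC members add no information, the coefficient on fomc should be
# near zero once the staff forecast is included.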

They find “not just that FOMC members fail to add information, but that their efforts to do so are counterproductive.” Policymakers were no good at forecasting over this period.

They then ask if the mistaken forecasts cause the FOMC to make monetary policy errors that show up as monetary policy shocks. The two use their own Romer and Romer (2004) measure, which I’ve updated here, as the measure of monetary policy shocks. They then estimate the regression

M_t = \alpha + \beta_\pi (P^\pi_t - S^\pi_t) + \beta_y (P^y_t - S^y_t) + \beta_u (P^u_t - S^u_t) + \epsilon_t

where M is the measure of shocks, and P and S are as before, with one forecast-difference term for each of inflation, output growth, and unemployment. They only ran this regression from 1979 through 1996, as that was as far as the measure of shocks went in the 2004 paper.
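
(Continuing the sketch above, with the same caveat that the column names are hypothetical, the shock regression would look like:)

# Monetary shocks regressed on FOMC-minus-staff forecast differences.
for var in ("inf", "gdp", "unemp"):
    df[f"d_{var}"] = df[f"fomc_{var}"] - df[f"staff_{var}"]
shock_model = smf.ols("shock ~ d_inf + d_gdp + d_unemp", data=df).fit()
print(shock_model.rsquared)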

They find that, “The estimates suggest that forecast differences may be one source of monetary shocks… An FOMC forecast of inflation one percentage point higher than the staff forecast is associated with an unusual rise in the federal funds rate of approximately 30 basis points.”

That seemed like a very interesting result to me when I first read this paper. Could bad monetary policymaking be explained by the hubris of policymakers who thought they could forecast economic conditions better than the staff? It turns out, after I updated the data, this result does not hold.

Updating the data
I followed the same methodology as when I updated Romer and Romer (2004): first replicating the data to ensure I had the correct method before collecting the new data and updating. The data is from 1979 through 2007, and all my work is available here and here.

I find, first, that policymakers remained quite poor economic forecasters. Here is the updated version of Table 1 from the paper, with the old values for comparison:

The coefficients on the FOMC forecasts for inflation and unemployment are still right around zero, indicating that FOMC forecasts for these two variables contain no useful information.

However, it appears that once we extend the monetary policy shock regression from 1996 to 2007, the second result – that forecast differences are a source of monetary policy shocks – does not hold. Here is the updated version of Table 2 from the paper, again with old values for comparison:

When the Romers published their paper, the R-squared on the regression of monetary shocks on all three forecast differences was 0.17. This wasn’t exactly the strongest correlation, but for the social sciences it’s not bad, especially considering that the monetary shock measure is fairly ad hoc.

As we can see in the updated regression, the R-squared falls to 0.05 with the extended data – too weak to support the original finding. Thus, unfortunately, this result does not appear to hold.

 

I’ve updated the Romer and Romer (2004) series of monetary policy shocks. The main takeaway is this graph of monetary policy shocks by month, since 1969, where the gray bars indicate recession:

When the two published their paper, they only had access to data up through 1996, since Fed Greenbooks – upon which the series is based – are released with a large lag. I’ve updated it through 2007, the latest available, and will update it again next month when the 2008 Greenbooks are released.

The two interesting points in the new data are

  1. The negative policy shock before and during the 2001 recession
  2. The negative policy shock in 2007 before the Great Recession

Below I’ll go into the more technical notes of how this measure is constructed and my methodology, but the graph and the two points above are the main takeaway.

How is the R&R measure constructed?
First, the Romers derive a series of intended changes in the federal funds rate. (This is easy starting in the 1990s, since the FOMC began announcing when it wanted to change the FFR; before that, the two had to trawl through meeting minutes to figure it out.) They then use the Fed’s internal Greenbook forecasts to purge the intended-FFR series of monetary policy actions taken in response to information about future economic developments – specifically, forecasts of RGDP growth, inflation, and unemployment.

In other words, they regress the change in the intended FFR around forecast dates on RGDP growth, inflation and unemployment. Then, as they put it, “Residuals from this regression show changes in the intended funds rate not taken in response to information about future economic developments. The resulting series for monetary shocks should be relatively free of both endogenous and anticipatory actions.”

The equation they estimate is:

\Delta ff_m = \alpha + \beta ffb_m + \sum_{i=-1}^{2}\gamma_i \tilde{\Delta y}_{m,i} + \sum_{i=-1}^{2} \lambda_i \left(\tilde{\Delta y}_{m,i} - \tilde{\Delta y}_{m-1,i}\right) + \sum_{i=-1}^{2} \phi_i \tilde{\pi}_{m,i} + \sum_{i=-1}^{2} \theta_i \left(\tilde{\pi}_{m,i} - \tilde{\pi}_{m-1,i}\right) + \rho \tilde{u}_{m,0} + \epsilon_m

  • Δff is the change in the intended FFR around meeting m
  • ffb is the level of the target FFR before the change in the meeting m (included to capture any mean reversion tendency)
  • π, Δy, and u are the forecasts of inflation, real output growth, and the unemployment rate; note that both the current forecast and the change since the last meeting’s forecast enter
  • The i subscripts refer to the horizon of the forecast: -1 is the previous quarter, 0 the current quarter, 1 the next quarter, and 2 the quarter after that
  • All horizons are relative to the date of the forecast corresponding to meeting m; e.g. if the meeting is in early July 1980 and the forecast is in late June 1980, the contemporaneous forecast is for the second quarter of 1980
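
In code, this is just OLS on a stacked set of forecast terms. A rough sketch in Python (the column names are my own placeholders, not the Romers’ replication files; “m1” stands for the i = -1 horizon):

import pandas as pd
import statsmodels.api as sm

# One row per FOMC meeting m. Hypothetical columns:
#   d_ff: change in intended FFR; ffb: pre-meeting target level
#   gy_m1..gy_2, pi_m1..pi_2: Greenbook forecasts by horizon
#   *_rev: revision since the previous meeting's forecast
#   u_0: current-quarter unemployment forecast
df = pd.read_csv("greenbook_meetings.csv")

horizons = ["m1", "0", "1", "2"]
cols = ["ffb"]
for i in horizons:
    cols += [f"gy_{i}", f"gy_{i}_rev", f"pi_{i}", f"pi_{i}_rev"]
cols.append("u_0")

res = sm.OLS(df["d_ff"], sm.add_constant(df[cols])).fit()
shocks = res.resid  # the monetary policy shock series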

The Romers show in their paper that, by this measure, negative monetary policy shocks have large and significant effects on output and the price level.

It is worth noting the limitations of this measure. It is based on the federal funds rate instrument, which is not a very good indicator of the stance of monetary policy. Additionally, if the FOMC changes its target FFR between meetings, any shock associated with that decision would not be captured by this measure.

Results
First, I replicated the Romer and Romer (2004) results to confirm I had the correct method. Then I collected the new data in Excel and ran the regression specified above in MATLAB. The data is available here and here (though there might have been errors when uploading to Google Drive).

The residuals are shown above in graph form; it is an updated version of figure 1a in Romer and Romer (2004).

The coefficients and statistics on the regression are (this is an updated version of table 1 in the original paper):

Last, for clarity, I have plotted the monetary policy shock measure below with the top five extremes removed. This makes some periods clearer, especially the 2007 shock. Again, I will update this next month when the 2008 Greenbooks are released. It should be very interesting to see how large a negative shock there was in 2008.

 

Update: I've updated this last graph with 2008 data. Interestingly, the 2008 shock is not exceptionally large.

 

From Matt Rognlie, an interesting point:

"A few years ago, I read an aside in Stiglitz’s Nobel autobiography that really shook me:

Economists spend enormous energy providing refined testing to their models. Economists often seem to forget that some of the most important theories in physics are either verified or refuted by a single observation, or a limited number of observations (e.g. Einstein’s theory of relativity, or the theory of black holes).

I really think that this is true: we often do very complicated, nontransparent estimation and testing of models, when in reality one or two carefully selected stylized facts could be much more decisive."

That’s from here, a debate between Rognlie and Miles Kimball on sticky prices and wages.

 

When Larry Summers was still a candidate for Fed Chair, and the econoblogosphere was still debating whether or not he ought to be nominated, it became oddly fashionable to argue that it would be a good thing if Summers were confirmed and then ran a Fed that was not politically independent.

See, for example, Matt Yglesias: “If Summers’ close ties to Barack Obama and his team make people expect that monetary policy will err on the side of looseness and inflation overshooting, that'll be good for the economy.”

This seems to me to be dangerously short-term thinking. Fed independence is a hard-won achievement, and not something that should be tossed aside lightly. To demonstrate this, I want to walk through two episodes of Federal Reserve history.

The Treasury-Fed Accord of 1951
The extent of Fed subservience to the Treasury during and immediately after World War II is pretty staggering. Robert Hetzel has a good narrative summary of the events leading up to the Accord, the so-called “Treasury-Fed War”.

Starting in 1942, the Fed pegged the interest rates on Treasury securities. The interest rates on T-bills (Treasuries with maturities of less than one year) were never allowed to rise above 0.375%, and the rates on Treasury bonds (longer-term bonds) were pegged at no more than 2.5%.

How did the Fed keep interest rates low on Treasuries? By printing money to purchase them. Such large purchases were required that the Fed ended up owning basically the entire outstanding quantity of T-bills (though the public still held Treasury bonds).

This expansion in the money supply – truly, a monetization of the debt – led to high inflation as soon as the war was over and price controls were ended. Inflation in mid-1947 was 17.6%, and had only dropped to 9.5% a year later despite a looming recession.

By the second half of 1950, the FOMC was becoming severely antsy about all the inflation that it was creating (and recognized it was creating) as well as its loss of independence. In 1948, Chairman Marriner Eccles had not been reappointed by President Truman in order to demonstrate “who’s boss”, according to a Truman staff member. Starting in August 1950, the FOMC began raising the pegged rate very slightly and very slowly.

By February 1951, however, the Fed had had enough and declared that it was no longer willing to peg Treasuries. This came after six months of very public battles between the Federal Reserve and the Treasury, involving inaccurate media leaks, considerable political pressure, and what seemed often to be simply lies from the Treasury and White House.

The Truman administration finally relented, and the Accord was agreed upon, making the Fed independent and ending Treasury pegging after a very brief transition period. However, it seems that the White House may not have truly given up, as Fed Chairman Thomas McCabe was forced out less than two weeks after the agreement was released on March 4, 1951.

William McChesney Martin, who had been an Assistant Secretary of the Treasury, was appointed as McCabe’s replacement. He was considered an ally of Truman, but once at the Fed he strongly supported its independence, creating the relationship between the Fed and Treasury that we see today. According to one Truman administration official, several years later Truman ran into Martin in New York. The former president said only one word – “traitor” – before walking away.

Thus, it should be clear that Federal Reserve independence from the Treasury was something that had to be fought for.

Nixon pressures Burns
The Federal Reserve again came under political pressure during the Nixon administration. This time, instead of keeping the cost of debt down, President Nixon sought to ensure his reelection. Evidence comes from the Nixon Tapes and from members of the Nixon administration.

Nixon had felt burned after his loss in the 1960 presidential election, for which he blamed a weakening of the economy as November 1960 approached. He was concerned enough that, as soon as Martin’s term expired during his presidency (Martin had managed to remain as chair until then), he appointed Arthur Burns. Burns was considered a serious economist, but also a strong supporter of the Republican Party.

When Burns was first nominated, Nixon reportedly told him: “My relations with the Fed will be different than they were with Bill Martin there. He was always six months too late doing anything. I’m counting on you, Arthur, to keep us out of a recession… I know there’s this myth of the autonomous Fed [barked a quick laugh].”

Another Nixon administration member states that Nixon was responsible for anonymous leaks that were used to pressure Burns into loosening policy. One floated a proposal to expand the size of the Board of Governors, so that Nixon could pack it in a fashion similar to FDR’s court packing attempt; another proposed giving the White House more direct control over the FOMC; and a third falsely stated that Burns requested a pay raise when, in fact, he requested a cut.

The Nixon Tapes present the most irrefutable evidence. Nixon can be heard talking with his Secretary of the Treasury about Burns, stating, “War is going to be declared if he [Burns] doesn’t come around some.”

There are also tapes of Nixon and Burns, where Nixon says that “I don’t want to go out of town [Washington] too fast,” and, “This will be the last conservative administration in Washington.” He also declares the contemporaneous “liquidity problem” – that is, one of too much liquidity causing inflation – “just bullshit.”

Meanwhile, this appeared to have some effect on Burns. FOMC minutes indicate dissension among other members, but money supply growth increased, the federal funds rate dropped, and the discount rate was lowered.

This is the danger of having a Fed chair who is too close to the presidential administration: his or her sound economic judgment can be overridden by political loyalties.

Conclusion
This discussion is not quite as timely now that Summers has pulled his name from the race, though there have been whispers about Tim Geithner being named. Nevertheless, it is never a bad time to reiterate the importance of central bank independence.

Of course, independence should never be an excuse for lack of accountability. This, then, is just one more point on the long list of reasons why the Fed should stick to an explicit monetary policy rule that would ensure its insulation from political pressure and allow the public to judge it for any deviations from target.

(…not to mention, if this policy rule has a smart target and uses a smart instrument, then it will lead to economic prosperity.)

 

Ashok Rao has a post highlighting JOLTS data that shows that despite a steady rise in job openings since 2009, hiring has pretty much been flat. In particular, this graph (blue is openings, red is hiring):

[…]

What we see is a “V-type” recession for openings. That is, they crashed rapidly during the depths of the recession, but recovered at a pace proportional to the fall. On the other hand, hires evince a more “L-type” recession, characterized by a quick fall without a similar recovery.

He concludes that structural factors might be more important in holding back the labor market recovery than is commonly believed: “The labor-mismatch hypothesis of unemployment is very appealing.”

This is a fairly perplexing problem for, and a problem that does not receive enough attention from, those who think that the economy is currently demand-constrained and not very supply-constrained. However, I’m not convinced.

Falling recruiting intensity: a structural change in the way employers post openings
Ashok makes the smart point that because of data quality problems, we should concern ourselves with the change in the number of openings, rather than the absolute level. “Errors are systematic rather than random. That is, even if there is a degree of false openings, we care not about the absolute levels, but rate of change thereof.” (emphasis original)

In the graph above, we see that openings have risen significantly since 2009, so Ashok’s logic is consistent.

However, I think he may be too quick to discard the possibility that there has been a structural change in the way that employers advertise vacancies. In other words, the openings data error might be systematic, but not constant.

Two recent papers, both by the wonderful UChicago professor Steven Davis along with Jason Faberman and John Haltiwanger, provide evidence for this hypothesis. They argue that there is empirical evidence of a decline in “recruiting intensity” per job vacancy.

The three build an index of recruiting intensity. From early 2007 to late 2009, it plummeted by 20%, and as of September 2011 it remained significantly below the pre-recession trend.

As for the definition of recruiting intensity, I’m just going to quote straight from the first paper I linked to: “‘Recruiting intensity’ is shorthand for the other instruments employers use to influence the pace of new hires – e.g., advertising expenditures, screening methods, hiring standards, and the attractiveness of compensation packages.”

(I’m not sure, but it sounds like a residual in their model? I’m still teaching myself matching models, which they use, and have only looked at matching models of money. For more, check out the last term of equation 8 in the second paper.)

Thus, the number of vacancies has risen, but the effort employers put into sorting through possible hires has fallen.

In short, the supply of able workers has not fallen; the demand for them has dropped, as evidenced by the drop in employer spending (of time and money) on recruiting. Employers are listing openings without necessarily intending to fill them.

Some intuition
*Speculation ahead*

Intuitively, this is a result that makes sense to me. First, there is an ongoing secular decline in the cost of advertising job openings thanks to websites like Monster and CareerBuilder. This does not explain the sharp decline in 2008, though.

Instead, I think the marginal benefit of posting a job vacancy probably rises during periods of mass unemployment (while the marginal cost stays the same). With a larger pool of possible hires, an employer has a greater chance of finding someone worth hiring.

All else equal, an employer is more likely to post an opening if there is a larger supply of potential hires.

Concluding thoughts
I’ve made it this far without mentioning what I believe to be far and away the most convincing evidence that the labor market has few structural problems: wages remain depressed in all markets!

A major implication of the claim that there is a skills mismatch in the labor market would be rising wages in supply-constrained fields. I have not seen any such data.

Last, I know that there exists data on job openings from private online job boards (e.g. Monster.com). I, however, am unable to find it. It would be interesting to see how this data matches up with the JOLTS data.

(Yichuan Wang also makes a good point in Ashok’s comment section about government-created supply constraints such as UI and food stamps being reduced as aggregate demand/nominal spending recovers.)

 

The future of Fannie Mae and Freddie Mac is in the news with a speech by the President last week outlining his proposed reforms for the pair of housing finance giants.

What role should the government play in the housing market in the aftermath of the second collapse of the American housing finance sector in three decades (the first being the S&L crisis)? All theory and evidence indicates that the role should be minimal, and that Fannie and Freddie should be abolished.

What do Fannie and Freddie do?
First, a quick review of what the GSEs do. Their main role, at least before the crisis, was mortgage pooling and securitization.

A bank issues mortgages to a set of homeowners. It then goes to Fannie or Freddie and trades this bundle of mortgages for a security (a mortgage-backed security, or MBS) that pays regular, guaranteed dividends corresponding to the mortgage payments on the underlying loans. In exchange, the GSE takes a cut of the payments, usually around 20 basis points (0.2%). The GSE guarantee ensures payment even in the event of default.

The real impact of the GSEs on the housing market comes from the fact that they can only purchase “conforming loans”. Conforming loans must meet a set of criteria. For example, the loan has to be below a certain size ($417,000 for one-unit loans in 2008), a threshold that historically increased annually with average home prices. Anything above this amount is called a jumbo loan. Other criteria include a maximum loan-to-value ratio and proper documentation.

The impact of the GSEs on the housing market
I would claim that the GSEs have three principal impacts on the housing market.

1. Standardization
First, the GSEs help standardize financial products in the mortgage market. Because of the conforming loan standard, lenders are incentivized to produce products and make loans that conform to these standards.

This is a good thing: standardizing complicated financial products helps consumers make informed decisions. However, the standard has not made mortgages fully transparent to consumers, who still find them confusing – the fact that the CFPB is doing so much work on mortgages is evidence of that.

2. Subsidizing home ownership
Through their (once implicit, now explicit) federal government-backed participation in the mortgage market, Fannie and Freddie lower the cost of home ownership and thus encourage more home ownership – and less renting – than would otherwise occur.

Many economists, even many of those who believe Fannie and Freddie should be abolished, believe this is a noble cause. According to this line of thinking, home ownership has positive externalities. Homeowners take better care of their homes than renters (as a renter… I admit this is probably accurate) and are potentially more actively involved in their communities.

The benefits, however, are hard to quantify. This makes them difficult to weigh against the costs of propping up Fannie and Freddie.

What is more quantifiable is the impact on the rate of homeownership in the U.S., and that impact appears to be small. The U.S. does not have the highest homeownership rate in the world, despite having unique government involvement in housing finance. Indeed, we do not even have above average homeownership. The 2007 rate was 68%, which is pretty much smack dab in the middle among developed countries.

Thus, even if homeownership is something to be encouraged (which is far from clear), Fannie and Freddie do not do much to encourage it.

3. Subsidizing a particular type of housing finance

Source: Michael Lea, “Alternative Forms of Mortgage Finance: What Can We Learn from Other Countries?”

As can be seen in the graph above, the US is unique among peer nations in having a mortgage market that is dominated by long term fixed rate loans (colored purple). This is largely a result of the distortion of the market by the GSEs, though also, importantly, because of strong consumer preferences.

Consider: why would a bank want to make a 30 year fixed rate loan? If market interest rates go up, the bank will not receive any benefit. If rates go down, homeowners can refinance. While good for the borrower, this is not attractive for the bank.

This is also evident from the historical record. Before the government got involved in the mortgage market during the New Deal, residential mortgages were typically much shorter, 5-10 years, and carried a variable rate of interest. A mortgage could in effect last longer than 5-10 years if the borrower refinanced at the end of the loan, but it meant that the borrower bore the interest risk.

However, if the GSEs were abolished, it is highly unlikely the 30-year fixed rate mortgage would disappear. As mentioned earlier, conforming loans have a size limit, above which loans are known as jumbo loans. 30-year fixed rate jumbo loans still exist; they just carry a higher interest rate, a premium of something like 20 basis points.

If you are a homeowner receiving this subsidy, this might sound like a good enough reason to keep the GSEs around. For America as a whole, however, it is not. Why should this particular type of mortgage be subsidized? Is the social benefit of long term, fixed rate mortgages higher than the private benefit? That would be a tough argument to make.

Even homeowners should remember that though the GSEs subsidize their fixed rate mortgages, the subsidy is not free. Bailing out the GSEs, or even paying for their operation in normal times, means paying higher taxes in the long run.

Conclusion
Besides perhaps creating some limited standardization of financial products, there does not appear to be anything socially useful that Fannie and Freddie do that the private market could not do on its own. Without any justification for their existence, and with the high fiscal cost of maintaining them, the two should be wound down and abolished.

 

One of the most important ideas to emerge from market monetarist writing in the blogosphere, in my opinion, is the “Sumner critique.” This critique, named after Scott Sumner, has been worded many different ways, but I would state it like this:

If the Fed is successfully stabilizing a nominal anchor – e.g. if the Fed is successfully maintaining 2% inflation or 5% NGDP growth – then the fiscal multiplier will be zero, since the Fed will offset any fiscal impact on aggregate demand.

Two important caveats. First, fiscal stimulus or austerity can still have real effects through supply side effects. Even if the Fed tightens policy after a tax cut, that tax cut will still be beneficial by reducing incentive distortions.

Second, if the Fed is not doing its job, the fiscal multiplier might not be zero. The word “successfully” is an important one in the definition.

The intuition is very straightforward. If Congress passes fiscal stimulus to boost demand, higher demand will boost inflation (and NGDP growth, if that is the Fed’s target), and the Fed will offset it by tightening monetary policy. If Congress cuts spending or raises taxes to reduce the deficit, that will reduce inflation (and NGDP growth), and the Fed will offset it by loosening monetary policy.

The important implication is that Congress should have no role in responding to recessions, since stimulus will only increase the national debt and replace action that the Fed would have taken anyway, which would not have increased the national debt. Additionally, if Congress implements austerity, one should not be extraordinarily concerned about the economic effects except for the supply-side consequences.

This is the tenet of market monetarism which has been the hardest for progressive supporters of the movement to accept.

In this post, I want to walk through a simple model to demonstrate that the Sumner critique is accurate, and what assumptions are necessary to the model to produce the result. The model is from a 1998 paper by Nick Rowe and Simon Power. By walking through the assumptions in the model, we can see what needs to be true for the Sumner critique to be accurate.

The model

The model is a game theoretic one. There are two players, Congress (which controls fiscal policy) and the Fed (which controls monetary policy). Aggregate demand (AD) is determined by the sum of fiscal policy (F) and monetary policy (M).

AD = F + M

Neither Congress nor the Fed intrinsically cares about the level of monetary policy. M only matters for its effect on aggregate demand.

On the other hand, Congress and the Fed both care about the total amount of fiscal policy, F. Both have the same optimal target for fiscal policy, F*. If F is too low, then perhaps the roads start to crumble; if too high, then the tax burden will be too great or the debt will grow too much.

Congress and the Fed also both care about the total level of aggregate demand. Neither wants it to be too high for fear of inflation or too low for fear of recession. However, they disagree about the optimal level of AD.

Assume that Congress wants AD to be higher than the Fed does. Perhaps Congress wants higher inflation to reduce the debt, or hopes that higher than expected inflation in the short run will boost employment and thus popular support.

AD*F > AD*M

We can represent Congress’ and the Fed’s dissatisfaction as actual fiscal policy F diverges from desired fiscal policy F* and as actual aggregate demand AD diverges from the desired levels AD*F and AD*M, respectively. We assign each a loss function that is quadratic, though the exact form of the loss function isn’t important.

LF = (F – F*)² + (AD – AD*F)²

LM = (F – F*)² + (AD – AD*M)²

It is clear that Congress and the Fed each has to take into account the other’s action when setting its own policy to best achieve its objective. But what type of game is this?

The game type that makes most sense to me is known as Stackelberg competition. In Stackelberg games, one player moves before the other. In this case, it makes sense for Congress to move first and set F. After viewing this action, the Fed moves and sets M.

My argument would be that Congress is slow-moving and typically only sets the level of spending and taxation once a year (…if it’s doing its job). The Fed, meanwhile, meets every six weeks to decide policy and can meet more frequently if conditions warrant.

Congress thus optimizes first. It seeks to minimize its loss function, LF. It knows that no matter the F that it chooses, the Fed will be able to set M such that aggregate demand equals the Fed’s goal. That is, the Fed’s reaction function is M = AD*M – F.

Proof:

min [choose M] LM = (F – F*)² + (AD – AD*M)²

Substitute for the definition of AD

min [choose M] LM = (F – F*)² + (M + F – AD*M)²

Take the first order condition (second order conditions check out)

dLM/dM = 2M + 2F – 2AD*M = 0

Solve

M = AD*M – F

Bingo! This is the essence of what we’re trying to show. The Fed will always set M to offset any attempted manipulation of AD by Congress. Realizing that the level of F chosen will have no impact on aggregate demand, Congress sets F = F* so that it can at least achieve that objective perfectly.

Mathematically:

min [choose F] LF = (F – F*)² + (AD – AD*F)²

s.t. M = AD*M – F

Plug the constraint into the objective function (since AD = M + F = AD*M)

min [choose F] LF = (F – F*)² + (AD*M – AD*F)²

Take the first order condition (second order conditions check out)

dLF/dF = 2F – 2F* = 0

Solve

F = F*

Then the Fed chooses M such that AD is precisely what it wants. M = AD*M – F* and AD = AD*M.

The Sumner critique mechanism is evident here, where Congress refuses to engage in any sort of Keynesian aggregate demand management because it rationally anticipates the monetary offset cited by Sumner.
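
The backward induction is simple enough to check symbolically. Here is a minimal sketch with sympy (the notation is mine):

import sympy as sp

F, M = sp.symbols("F M")
F_star, AD_M, AD_F = sp.symbols("F_star AD_M AD_F")

# Fed moves last: minimize L_M over M, taking F as given.
L_M = (F - F_star)**2 + (M + F - AD_M)**2
M_react = sp.solve(sp.diff(L_M, M), M)[0]
print(M_react)  # AD_M - F: the Fed fully offsets F

# Congress moves first, internalizing the Fed's reaction function.
L_F = (F - F_star)**2 + (M_react + F - AD_F)**2
F_opt = sp.solve(sp.diff(L_F, F), F)[0]
print(F_opt)    # F_star: Congress gives up on demand management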

What assumptions were necessary?

As with any model, this one requires assumptions about how the economy works. As a particularly simple model, this one has quite a few. Below I pick out a few key assumptions and discuss whether they are necessary to achieve the result that the Sumner critique is accurate.

Assumption 1: The Fed can affect aggregate demand precisely as much as desired.

This assumption might seem very important to the conclusion (and the real life accuracy of it, even as a very good approximation, is certainly debatable), but it’s not.

Suppose instead that the Fed sets M, but that AD = M-bar + F, where M-bar = M + ϵ and ϵ is a random error term normally distributed around zero.

(If the error was distributed around anything else, that would mean that the Fed is incompetent, and would be a serious issue. Plausibly, one could argue that the Fed is incompetent when the fed funds rate is near zero since it seems to believe itself to be impotent.)

This error term means that fiscal stimulus would sometimes have a positive ‘net’ multiplier – but sometimes a negative one! The expected value of any kind of stimulus, however, would still be zero, and Congress would thus still not attempt to influence AD.
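
(A toy numpy simulation of that claim, with arbitrary numbers of my own choosing:)

import numpy as np

rng = np.random.default_rng(0)
AD_target = 100.0

for F in (0.0, 5.0):
    M = AD_target - F                      # Fed's reaction function
    eps = rng.normal(0.0, 1.0, 500_000)    # control error
    AD = M + eps + F                       # AD = M-bar + F
    print(F, round(AD.mean(), 3))
# Realized AD wobbles around 100 either way: the expected effect of
# fiscal policy on aggregate demand is zero.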

Assumption 2: The Fed is the Stackelberg follower and moves second.

In some respects this is the key assumption, at least for the intuition behind the Sumner critique. However, Rowe and Power show in the paper that even if the game is simultaneous Nash – i.e., Congress and the Fed set their policies at the same time without seeing what the other player is going to do – AD ends up being equal to AD*M. In other words, the Sumner critique still holds.

It is true that if Congress is modeled as the Stackelberg follower, then AD ends up higher than desired by the central bank (but still not as high as Congress wants it). But this assumption seems implausible.

The one argument that could be made here is that monetary policy impacts the economy with long lags. This is actually what many argue, saying for example that the Fed is unable to offset spending cuts like the sequester since any actions it takes won’t have any effect until it is too late.

Such an argument is worthy of another post, but suffice it to say that the supposed problem of monetary policy lags is vastly overblown. As most economists accept today and all dominant DSGE models assume, the most important aspect of monetary policy is its effects on expectations. If the Fed is expected to offset spending cuts, even if it cannot do so right away, the fact that it is expected to do so in the near future is highly stabilizing.

Assumption 3: The Fed’s reaction function is invariant to fiscal policy.

This assumption was pointed out by Noah Smith. I quote his example: “Consider a world in which the Fed targets a 3% growth rate for NGDP if there is no stimulus, but raises the growth rate target in the event of a stimulus. In this case, it would make perfect sense to say ‘fiscal stimulus increased NGDP growth,’ in the sense that we normally think of causality.”

This is an important point, but there doesn’t seem to be any reason to believe that the Fed’s goals change if Congress changes the level of fiscal policy.

In conclusion, the important assumptions underlying the simple model outlined above that demonstrates the Sumner critique seem reasonable.

 

Ben Bernanke’s term as chairman of the Fed ends in January, and it is widely expected that he will not stay on. Speculation has recently heated up as to who his successor will be.

The conventional wisdom is that current Vice Chairman Janet Yellen and former Treasury Secretary Lawrence Summers are the most likely possibilities, with former Vice Chairman Roger Ferguson as the dark horse. However, until two weeks ago, all the smart money was on Yellen. That has changed, particularly in the past couple of days, due to a pair of articles by Ben White and Ezra Klein referencing anonymous sources identifying Summers as the leading candidate.

What does the literature tell us about choosing the best Fed chair?

A mathematical model of the chairman’s leadership style

Where Yellen and Summers differ most is leadership style: Summers is taken to be more aggressive and outspoken. Summers is to Yellen as Alan Greenspan was to Bernanke. (Greenspan was a famously authoritarian chairman, whereas Bernanke is a consensus-builder.)

Petra Gerlach-Kristen, an economist at the Swiss National Bank, develops a simple model of a committee of monetary policymakers headed by a chairman.

The chairman can either be aggressive or democratic. The aggressive chairman is modeled as strategically proposing an interest rate that pulls the average opinion of the group closer to his ideal. For example, the aggressive chairman proposes a 0.5 percentage point increase in the federal funds rate when he really wants a 0.25 percentage point increase and the majority of the committee supports no change in the rate.
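
(To illustrate the flavor of that strategic exaggeration, here is a toy of my own construction – not Gerlach-Kristen’s actual model, which uses voting rules rather than simple averaging:)

# Toy: the committee adopts the average of the chairman's proposal and
# the members' preferred rate changes (in percentage points).
members = [0.0, 0.0, 0.0, 0.25]
chair_ideal = 0.25
n = len(members) + 1

honest = (sum(members) + chair_ideal) / n    # democratic chairman
proposal = n * chair_ideal - sum(members)    # strategic exaggeration
outcome = (sum(members) + proposal) / n      # lands on chair_ideal

print(honest, proposal, outcome)  # 0.1, 1.0, 0.25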

The result of the model is that interest rate setting under an autocratic chairman, such as Summers, is worse than under a less autocratic one, such as Yellen.

It’s an interesting model, but the number of simplifying assumptions (e.g. an FOMC member only votes with the chairman if the chairman’s preferred interest rate is within +/- 23 basis points of the member’s preferred rate) required to build a model of the FOMC is too great to allow one to draw any really useful conclusions from it. The FOMC has 12 voting members, which is not small enough to be analyzed easily using game theory and not nearly large enough to be treated as a large population. Not to mention that the motivations of voting members are multiple, complex, and varied.

An historical analysis

Christina and David Romer analyze the history of the Fed, combing through its minutes and transcripts, to tease out what makes a good chair in a paper titled simply “Choosing the Federal Reserve Chair: Lessons from History.”

Based on a review of the performance of past chairmen, they find (perhaps reassuringly?) that when choosing a Fed chair, “The key criterion should be economic beliefs.”

How then can economic beliefs be understood? Romer and Romer argue the most reliable evidence comes straight from the prospective candidate’s mouth: speeches, writing, and testimony are (again, perhaps reassuringly) reliable indicators of an economist’s beliefs.

Unfortunately this historical analysis is forced to rely on a tiny sample size, as the Fed has had only six chairmen since gaining full independence from the Treasury in 1951.

Nonetheless, this suggests to me an alternative candidate who should be under consideration to be Bernanke’s successor: Christina Romer herself! Romer has written many great papers on the Great Depression, the Japanese Lost Decade, the operating procedure of the FOMC, and much more. Best of all, she is an avowed supporter of NGDP targeting.

This is also a major point against Summers, who, as Scott Sumner points out, failed to push for aggressive monetary expansion during the recession despite being a member of President Obama’s inner circle.

Time for a regime change

Last is the concept of a regime change. Numerous papers, particularly Eggertsson (2008), suggest that the US was able to escape the depths of the Great Depression because the election of Franklin Roosevelt represented such a significant “regime change” that there was a shock to expectations of future growth. This boosted confidence and spending, causing growth to return.

Could such a regime shift occur today with the appropriate appointment of a new chair? Certainly. Japan is currently going through a regime shift with the recent election of Shinzo Abe and the appointment of Haruhiko Kuroda to the Bank of Japan, as (there she is again!) Christina Romer (2013) discusses.

Who could create such a regime shift today? Not Yellen or Summers; they are almost the definition of mainstream. Romer might have a chance, especially if she pushed hard on NGDP targeting.

But if Obama wants to go for a regime shift, go big or go home. Many market monetarist economists – e.g. Scott Sumner, David Beckworth, Nick Rowe – would make great chairs in their own right, but also would create a shock to expectations. Not to mention that the market monetarist understanding of monetary policy is superior to other paradigms.

More on that as this blog continues.

 

This week, Senators John McCain, Elizabeth Warren, Maria Cantwell, and Angus King introduced legislation to essentially reinstate the Glass-Steagall Act of 1933, which was repealed in 1999. It would force financial institutions to be either pure commercial banks or pure investment banks.

Glass-Steagall was wrong and indeed harmful the first time around, and it is wrong and harmful now.

The arguments for Glass-Steagall usually fall along one of four lines, none of which has merit.

No conflict of interest problem
The most common argument is that having commercial banking activities (such as taking in deposits and making loans) and investment banking activities (securities underwriting) leads to conflicts of interest. For example, suppose a bank made a loan to a firm, and then the firm suddenly faced some sort of cash shortage. The bank could be inappropriately tempted to underwrite a bond or equity issuance of the firm to ensure that the firm is able to pay the loan back.

However, a 1994 matched-pair study by Randy Kroszner and Raghuram Rajan provides fairly conclusive evidence that this is not the case. The two match bonds underwritten by commercial banks between 1921 and 1929 with similar securities underwritten by pure investment banks during the same time span. The matching criteria include S&P/Moody’s ratings, date of issuance, maturity, payment provisions, size, and conversion provisions.

Kroszner and Rajan then compare performance over time and find statistically significant evidence that commercial bank underwritten issues defaulted less frequently than investment bank underwritten issues, and that when they did default, investors recovered more. The difference is even more pronounced when the volume of the issues is considered, since commercial banks underwrote larger issues. By 1940, 28% of investment bank underwritten issues had defaulted, whereas only 11% of commercial bank underwritten issues had.

Instead, by comparing ratings and yields at issue with subsequent performance, the study provides empirical evidence of efficient markets: investors appear to have rationally discounted the price of bonds issued by commercial banks for potential conflicts.

Decreasing instead of increasing riskiness
A second popular argument is the idea that a commercial bank that has investment banking activities will be riskier and more likely to fail than a pure commercial bank. This argument has it exactly backwards.

According to Eugene White (1986), cited in Kroszner and Rajan, commercial banks that underwrote security issuances before Glass-Steagall were less likely to fail than pure commercial banks, and had on average the same earnings variance and capital ratios.

There is a very straightforward theoretical explanation for this: diversification. If a commercial bank’s only source of revenue is its commercial banking deposit and loans operations, then if this service experiences a temporary shock, the bank has nothing else to help it ride out the storm. This is not the case if the bank can also get revenue from securities activities.

Thus, it would seem that the imposition of a separation between commercial and investment banking between 1933 and 1999 could very well have been harmful by preventing banks from diversifying their revenue streams.

Political power; economies of scope
One reason to reinstate Glass-Steagall that has recently become popular is the idea that the law restrained the political power of banks. Investment bankers and commercial bankers have conflicting interests, and thus the political influence of each is offset by the other, Luigi Zingales writes in the Financial Times.

This is silly. The idea would seem ludicrous in any other context. Should farms be restricted to growing only one type of crop so that the sway of the farm lobby and the subsidies it receives would be reduced by, say, corn farmers duking it out with tomato farmers for funds?

Of course not. Farms that grow more than one crop are more productive because of economies of scope: goods or services are produced more cheaply when a firm produces a range of similar goods or services. The same is true in finance. By providing both commercial and investment banking services, a firm becomes more efficient. This means lower costs for consumers and yes, higher profits for firms.

The feel good factor
“I would say that ‘passing this law will annoy Jamie Dimon’ is a sufficient reason to do it,” writes Matt Yglesias of Slate on the legislation.

After the 2008 crisis, it’s certainly tempting to agree. Unfortunately, vengeance is not a sustainable basis for a prosperous society.

Last, it should be noted that countries known for financial stability, such as Canada, have nothing like Glass-Steagall, and that Glass-Steagall would have done nothing to prevent the failure of the pure investment bank Lehman Brothers or pure commercial bank Washington Mutual in 2008.

Simply put, there is significant evidence that reinstating Glass-Steagall would not just be mistaken, but harmful. It would prevent banks from stabilizing their income by diversifying its sources, and would block the economies of scope that lower costs. Congress should reject the legislation.