Scott Sumner has famously proposed that the Fed stabilize monetary policy by pegging nominal GDP futures contracts in such a way as to ensure that expectations for nominal income growth remain steady. For more details, see here; the merits (and demerits) of this proposal are not the subject of this post (but will be the subject of a future post).

One major problem: markets for NGDP futures don’t exist in the wild!

To be fair, Scott and the Mercatus Center have helped to fund a competition on the Hypermind website, where there is a fixed pot of prize money for correctly guessing the year-over-year NGDP growth rate. But this is not a two-sided market, and the pot of prize money is fixed – as the number of bettors increases, the expected winnings decrease.

Well, good news: an NGDP futures market is now live on the Augur blockchain. The specific contract is simply a binary option: will the growth rate in NGDP from 2018Q1 to 2019Q1 be greater than 4.5%?

The current price/probability implied by this contract can be viewed on the Augur aggregator website predictions.global: just search “NGDP”, or the permalink is here.

 

Background on Augur

For those unfamiliar, Augur is a new cryptocurrency project built on the Ethereum platform – it launched just last Thursday – that allows holders of its token, “REP”, to create prediction markets. To speculate on such markets, an investor must use the Ethereum cryptocurrency (ETH).

The platform is decentralized: for everyone who wants to bet that NGDP growth will exceed 4.5%, there must be a counterparty who takes the other side of the bet. That is, the creators of Augur are not acting as market makers for the contract. The price of the contract moves to equilibrate supply and demand in a decentralized market: if a share that pays out 1 ETH when NGDP growth exceeds 4.5% trades at 0.7 ETH, that indicates that the market assigns a 70% (risk-neutral) probability to NGDP growth exceeding 4.5%.
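
For concreteness, here is a minimal sketch in Python of that price-to-probability arithmetic (the 1 ETH payoff normalization and the numbers are illustrative assumptions, not Augur specifics):

```python
# Minimal sketch: mapping a binary contract's price to an implied probability.
# Assumes (illustratively) that a YES share settles at 1 ETH if NGDP growth
# exceeds 4.5% and at 0 ETH otherwise.

def implied_probability(price: float, payoff: float = 1.0) -> float:
    """Risk-neutral probability implied by a binary option's price."""
    return price / payoff

def expected_profit(price: float, p_subjective: float, payoff: float = 1.0) -> float:
    """Expected profit per YES share, given your own probability estimate."""
    return p_subjective * payoff - price

print(implied_probability(0.7))                # 0.7 -> market says 70%
print(expected_profit(0.7, p_subjective=0.8))  # 0.1 ETH of expected edge per share
```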

Of course, one can create prediction markets for literally anything. There were markets for who would win the World Cup; today, the most popular markets are cryptocurrency futures, where traders can bet on, e.g., whether the price of Ethereum will close the year above $500. A more interesting market asks, “Which stablecoin will have the largest market cap by the end of 2018?”, though no one has bet on this contract yet. Another interesting contract asks whether a new cryptocurrency project will launch on schedule.

 

The NGDP prediction market

Back to the NGDP prediction market. As mentioned, there is currently only one contract on Augur related to NGDP, and it is a simple binary option on whether NGDP growth will be greater or less than 4.5%. A buy order thus indicates that you believe yes, NGDP growth will exceed 4.5%, while a sell order indicates that you think growth will fall short of that threshold.

Hopefully, richer contracts will emerge in the future, allowing speculators to bet on more specific outcomes: different time horizons, particular growth rates rather than a simple binary threshold, etc. This would provide richer data for policymakers and academic study.

 

The importance of blockchains and decentralization

To wax unabashedly lyrical for a moment, besides the practical macroeconomic application, what’s great about this is that it demonstrates how technology is liberation.

The major obstacle to creating an NGDP futures market in the past has been that prediction markets have been heavily restricted by government regulators. Intrade, a political prediction market, famously had to shut down due to a regulatory crackdown.

Augur, however, is a decentralized market which was only made possible in the last decade by the development of blockchain technology. There is no central point of failure, like with Intrade, where the CFTC can apply regulatory pressure to force a shutdown.

(Related. Despite the protection provided by decentralization, the regulatory environment around cryptocurrencies is notoriously… murky. And the state of regulation is possibly even more murky when it comes to predictions markets in particular. Calibrate your risk aversion appropriately before getting involved with this stuff.)

 

Viewing current probabilities

How can you check out the current market price/probability? One way is to download the Augur app and view the state of the market in the app, including the full order book (pictured below). The faster method, mentioned above, is to check out predictions.global, a great website that is layered on top of Augur and posts the current probabilities on all Augur contracts. Just search “NGDP” and you’ll see the contract (permalink here).

The Augur NGDP prediction market order book

 

Conclusion

As of this writing, the market-implied (risk-neutral) probability that NGDP growth exceeds 4.5% is 90%, but only 0.05 ETH ($23.41 at current exchange rates) has been bet on the market. The market creation fee is set at 0%, so bettors keep essentially all of their winnings, unlike on many other Augur markets. So go and make some bets!

That’s the news. You can probably stop reading here. In the rest of this post, for those interested, I want to provide some resources and give an extremely limited sketch of how – hypothetically ☺ – one would go about setting up a prediction market on Augur. Given the number of people who have asked, I’ll highlight that there is no coding required. Augur has an app that makes things pretty easy!

 

 

 

A step-by-step guide to creating a new market on Augur

This guide is not meant to be comprehensive in any way – it is just a list to set things off in the right direction. Fortunately, there are a fair number of resources online; I’ll link to many, but if any step is unclear, be sure to search for articles and videos that provide walkthroughs.

  1. Buy some Ethereum

The simplest way to do this is via Coinbase. Here’s a guide for how to get set up with that. How much should you buy? You’ll probably need something like 0.03 ETH (~$15 at present) at the bare minimum to set up the market, though a good chunk of that will be rebated to you after the market ends.

If you want a more precise number for how much ETH to purchase, you’ll need to wait until step 7; you can come back to this step then.

 

  2. Download Metamask

Metamask is a cryptocurrency wallet that is embedded in your browser. Instead of a central party like Coinbase holding your coins, you will now be the custodian of your coins.

Simply add the browser extension. Don’t lose your key! This video gives an overview of Metamask.

 

  3. Transfer your ETH to Metamask

If you bought via Coinbase, simply transfer. This article explains.

 

  4. Set up a MyEtherWallet (MEW)

Create an account and connect it to your Metamask.

 

  5. Buy some REP via MEW

Again, if you want an exact amount, you’ll need to do step 7 first and then come back here. Using MEW, swap some of your ETH for some of Augur’s native token, REP. This video again walks through the process.

 

  6. Download and install the Augur app. Open it and wait for it to sync.

 

  7. Set up your market!

This video and this video walk through how to do that. You'll need to use some of your ETH and REP to create the market.

 

  8. Monitor your market on http://predictions.global/ and, hopefully, watch the bets roll in.
 

I.
The efficient market hypothesis says that you can’t pick out which stocks are undervalued versus which are overvalued. Likewise, I claim that you can’t pick out which restaurants are underpriced versus which restaurants are overpriced.

Think you’ve found a great company, so that their stock will outperform on a risk-adjusted basis? Nope, someone else has already incorporated that information into the stock price and pushed the price up.

Think you’ve found a great restaurant which offers meals at a decent price? Nope, they’ve already raised their prices to the point where the extra cost just equals the extra utility you get from their extra delicious cuisine.

 

II.
A. But, first of all, we need to emphasize that this is on a risk-adjusted basis. A portfolio of stocks might have higher expected returns – but only if it’s riskier.

This applies to restaurants as well as to stocks – trying a new exotic cuisine could be eye-opening and awesome, or awful. Admittedly, this is quantitatively much less important for restaurants.

(This is the essence of modern asset pricing theory.)

B. As with stocks, fund managers will not consistently deliver alpha to their investors: if any manager could consistently deliver alpha, that manager would simply raise their fees to capture it for themselves. (This is the essence of the “rational model of active management” of Berk and Green 2004.)

 

III.
Second of all: cheap restaurants and cheap managers might exist, but finding them can entail very high search costs.

Truly great cheap restaurants might exist, but you have to pay a lot in time, money, and energy spent searching and reading reviews to pinpoint them. These search costs, this time wasted digging around on Yelp, are real costs: they take time and money that you could otherwise have spent on better food or anything else which gives you utility.

This is likewise true of asset managers. Cheap asset managers that provide alpha might truly exist, but you would have to spend so much time and money searching for and evaluating candidate managers that these search costs would eat up the alpha. Otherwise, other investors would already have found the manager and grabbed that alpha.

(This is the essence of Garleanu and Pedersen’s “Efficiently Inefficient” model.)

 

IV.
Third and finally: the utility of eating out at a restaurant is not just a result of tastiness and search costs. It incorporates every stream of services provided by the restaurant: convenience of location most of all, but also quality of service, ambience, and the social aspect of the other patrons. If a given restaurant scores higher on these marks – e.g., a restaurant full of beautiful fashion models – then we should expect the quality of the food to be lower.

Similarly, to a lesser extent, with assets or with asset managers. Assets provide more than just a stream of returns: they provide the service of liquidity, or a “convenience yield”. We can think of people enjoying the comfort provided by liquid assets, much like they enjoy the ambience of a nice restaurant. And just as a restaurant full of fashion models will – all else equal – have lower quality food, an asset or manager that offers higher liquidity should be expected to provide a lower pecuniary return.

(The idea of a convenience yield has been discussed by Cochrane, Koning, and others. This is also the entirety of the value behind cryptocurrencies.)

[Personal aside: This area is a core component of my own research agenda, as I currently envision it.]

 

V.
Conclusion: in equilibrium, assets or asset managers should not be undervalued or overvalued, on a risk-adjusted, fee-adjusted, search cost-adjusted, liquidity-adjusted basis. Likewise, in equilibrium, restaurants should not be underpriced or overpriced, once one takes into account their riskiness; the time spent searching for them on Yelp and reading reviews; and the ambience and other “convenience yield” services provided by the restaurant.

 

Most people are probably somewhat overconfident. Most people – myself surely included – tend to overestimate their own talents, and they (we) are overly confident in the precision of their estimates, underestimating uncertainty.

This bias has viscerally real, important consequences. Governments are overconfident that they can win wars quickly and easily; overconfident CEOs have a higher tendency to undertake mergers and issue more debt than their peers.

I claim, however, that this bias does not matter for asset pricing in particular. That is, stock prices (and other asset prices) are not affected by overconfident investors.

In fact, I claim that any kind of behavioral bias cannot in and of itself affect stock prices.

The idea that behavioral biases, on their own, can affect asset prices is one of the most widely held misconceptions about financial markets – perhaps the single most widely held. Just because most people (myself included!) are blinded by cognitive biases – overconfidence, status quo bias, confirmation bias, etc. – does not mean that stock prices are at all affected or distorted.

If this seems crazy, let me try putting it another way: just because behavioral biases exist does not mean that you can get rich by playing the stock market and exploiting the existence of these biases.

The trick is that it only takes the existence of one rational unconstrained arbitrageur to keep prices from deviating away from their rational level.

To see this, consider two extremes.

All it takes is one
First, suppose everyone in the world is perfectly rational and unbiased, except for one poor fellow, Joe Smith. Joe is horribly overconfident, and thinks he’s smarter than everyone else. He invests all of his money in Apple stock, insisting that everyone else is undervaluing the company, and pushing the Apple share price up.

Of course, since every other investor is perfectly rational and informed, they will notice this and immediately race to go short Apple, betting against it until the price of the Apple stock is pushed back to the rational level.

Now, consider the inverse situation. Everyone in the world is systematically biased and cognitively limited, except for one rational informed Jane Smith. Perhaps more realistically, instead of Jane Smith, the one rational agent is some secretive hedge fund.

Now, billions of irrational investors are pushing prices away from their rational value. However, as long as Rational Hedge Fund LLC has access to enough capital, this one rational agent can always buy an undervalued stock until the price gets pushed up to its rational level, or short an overvalued stock until the price gets pushed down to the rational level. Rational Hedge Fund LLC profits, and prices are kept at their rational levels.

Even more realistically, instead of a single hypervigilant rational hedge fund keeping all stocks at their respective rational levels, there could be many widely dispersed investors each with specialized knowledge in one stock or one industry, collectively working to keep prices in line.

The marginal investor
The real world, of course, is somewhere between these two extremes. Most people have a host of cognitive biases, which leads to “noise traders” randomly buying and selling stocks. However, there is also a small universe of highly active, often lightning fast rational investors who quickly arbitrage away any price distortions for profit.

It is these marginal investors who determine the price of stocks, not the biased investors. This is why I say that “cognitive biases don’t matter for stock prices” – the existence of any unconstrained rational investors ensures that biases will not flow through to asset pricing.

The important caveat: the “limits to arbitrage”
There is an extremely important caveat to this story.

Note that I quietly slipped in the requirement that Rational Hedge Fund LLC must have “access to enough capital.” If the rational investors cannot raise enough money to bet against the noisy irrational traders, then prices cannot be pushed to their rational equilibrium level.

(The importance of access to capital is more than just the ability to apply price pressure. It’s also important for the marginal investor to be able to withstand the riskiness of arbitrage.)

This assumption of frictionless access to capital clearly does not hold perfectly in the real world: lending markets are troubled by principal-agent problems, moral hazard, and other imperfections.

This (very important) friction is known as the “limits to arbitrage.”
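
To make the two extremes concrete, here is a minimal simulation sketch (all parameter values are illustrative assumptions): noise traders randomly push the price around, and a single rational arbitrageur trades against the mispricing, but only up to a capital constraint.

```python
import random

# Minimal sketch (illustrative parameters): noise traders push the price away
# from fundamental value; one rational arbitrageur trades against the gap,
# but can move the price by at most `capital` per period.
random.seed(0)

def average_mispricing(capital, fundamental=100.0, impact=0.1, periods=10_000):
    total = 0.0
    for _ in range(periods):
        noise_demand = random.gauss(0, 10)           # noise traders' net demand
        price = fundamental + impact * noise_demand  # price pressure from noise
        gap = fundamental - price
        price += max(-capital, min(capital, gap))    # constrained arbitrage
        total += abs(price - fundamental)
    return total / periods

print(average_mispricing(capital=100.0))  # deep pockets: mispricing ~ 0
print(average_mispricing(capital=0.2))    # constrained: mispricing persists
```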

Summing up
It is irrationality in conjunction with limits to arbitrage that allows market prices to diverge from their rational levels. Cognitive biases alone are not a sufficient condition for market inefficiency; irrationality and limits to arbitrage are both necessary.

More pithily: Peanut butter alone is not enough to make a PB&J sandwich, and behavioral biases alone are not enough to make the stock market inefficient.

 

The Efficient Market Hypothesis (EMH) was famously defined by Fama (1991) as “the simple statement that security prices fully reflect all available information.”

That is, you can’t open the Wall Street Journal, read a news article from this morning about Google’s great earnings numbers that were just released, and make money by buying Google stock. The positive information contained in the earnings numbers would already have been incorporated into Google’s share price.

To put it another way, the EMH simply says that there is no such thing as a free lunch for investors.

Does this imply that stock prices (or other asset prices) are unpredictable? No! The EMH unequivocally does not mean that prices or returns are unpredictable.

This fallacy arises all the time. Some author claims to have found a way to predict returns and so declares, “The EMH is dead.” Return predictability does not invalidate the EMH. This is important – the empirical evidence shows that returns are indeed eminently predictable.

The key lies with risk premia.

I. What are risk premia?
The price of a stock (or any other asset) can be decomposed into two parts:

  1. The (discounted) expected value of the stock
  2. A “risk premium”

The first part is the standard discounted present-value that you might read about in an accounting textbook. The second is the compensation required by the stock investor in order to bear the risk that the stock might drop in value, known as a risk premium.

To understand risk premia, suppose that I offer you the following deal. You can pay me $x, and then get to flip a coin: heads I give you $100, tails you get nothing. How much would you be willing to pay to have this opportunity?

Although the expected value of this bet is $50, you’re probably only going to be willing to pay something like $45 for the chance to flip the coin, if that. The five-dollar difference is the compensation you demand in order to bear the risk that you could lose all your money – the risk premium.
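
A quick numerical version of this coin-flip logic (a sketch: the square-root utility function and the $100 starting wealth are illustrative assumptions, not claims from the post):

```python
from math import sqrt
from scipy.optimize import brentq

# Sketch: maximum willingness to pay for the coin flip under concave
# (square-root) utility, starting from $100 of wealth.
w0 = 100.0

def indifference(x):
    """Expected utility of paying $x for the flip, minus utility of passing."""
    eu_bet = 0.5 * sqrt(w0 - x + 100) + 0.5 * sqrt(w0 - x)
    return eu_bet - sqrt(w0)

x_max = brentq(indifference, 0.0, 99.0)  # price that leaves you indifferent
print(round(x_max, 2))       # 43.75: well below the $50 expected value
print(round(50 - x_max, 2))  # 6.25: the risk premium
```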

II. Return predictability is compensation for risk
The above decomposition suggests that return predictability can either be the result of

  1. The ability to truly predict movements in the underlying value of the stock
  2. The ability to predict risk premia

If the first type of predictability were possible, this would in fact invalidate the EMH. However, the second sort of predictability – predictability of risk premia – allows for stock returns to be predictable, even under the EMH.

This is because, if only risk premia are predictable, then there is still no free lunch.

Sure, you can predict that a stock portfolio will outperform the market over the next year. However, this excess return is simply compensation for the fact that this set of stocks is extra risky – i.e., the portfolio has a high risk premium.

As an extreme example, consider the well-known fact that buying and holding a diverse basket of stocks predictably has higher expected returns than buying and holding short-term Treasury bills.

Is this a free lunch? Does the existence of the stock market invalidate the EMH? No. This return predictability exists only because equities are fundamentally riskier than T-bills.

III. Summing up
This is all to say that while returns may be predictable, it is likely that any profits earned from such predictable strategies are merely compensation for extra risk.

The EMH says that there is no free lunch from investing. Just because returns are predictable does not mean you can eat for free.

 

Postscript. There is another (outdated) theory, the “random walk hypothesis”, defined as the claim that returns are not predictable. This is different from the EMH, which says that asset prices reflect all available information. The random walk hypothesis has been shown to be clearly empirically false, per links above.

 

Update: see important caveat at the bottom of this post.

This post continues the discussion from Scott Sumner’s thoughtful reply to my critique of NGDP targeting from 2015. (Note to frequent readers: I previously published a reply to Scott, which I have since deleted.)

In short:

  1. Some economists see zero inflation as optimal in the long run. NGDP targeting cannot achieve this in the long run, except under discretion, as I discussed in my original post.
  2. On the other hand, as I discuss below, many models prescribe the Friedman rule for the optimal long-run rate of inflation. This can, in fact, be achieved under NGDP targeting, even without discretion!

I. The benefit of NGDP targeting is that inflation can fluctuate in the short run. But can NGDP targeting achieve a long-run optimal inflation rate?

Targeting NGDP rather than targeting inflation allows inflation to fluctuate in the short run. This is the major benefit of NGDP targeting, since it makes sense to have higher inflation in the short run when there is a cyclical growth slowdown and lower inflation when there is a growth boom (see Selgin, Sumner, Sheedy, and myself).

This is an argument about the short or medium run, at the frequency of business cycles (say 2-5 years).

Separately, you could imagine – whether or not inflation is allowed to vary in the short run, as it would be under NGDP targeting – that there is a long-run rate of inflation which is optimal. That is, is there a “best” inflation rate at which the economy should ideally settle, at a 10+ year horizon?

If there is an optimal long-run inflation rate, you would hope that this could be achieved under NGDP targeting in the long-run, even while inflation is allowed to fluctuate in the short run.

II. The optimal long-run inflation rate
Economists have thought a lot about the question of what the long-run optimal inflation rate is. There are two competing answers [1]:

1. No inflation: One strand of the literature argues that the optimal long-run inflation rate is precisely zero, based on price stickiness. The argument goes: if the price level is kept stable, sticky prices cannot distort relative prices.

2. Friedman rule: Alternatively, another strand of the literature going back to Milton Friedman argues that the optimal inflation rate is the negative of the short-term risk-free real interest rate (i.e. slight deflation). The argument here is that this would set the nominal risk-free interest rate to zero. In this world, there would be no opportunity cost to holding money, since both cash and risk-free bonds would pay zero interest, and the economy could be flush with liquidity and the optimum quantity of money achieved.

These two schools of thought clearly contradict each other. We will consider each separately.

What we want to know is this: could NGDP targeting achieve the optimal inflation rate in the long run (even while allowing beneficial short-run fluctuations in inflation)?

III. NGDP targeting and zero long-run inflation
In a previous blog post, I critiqued NGDP targeting by pointing out that it could not achieve zero inflation in the long run unless the central bank could discretionarily change the NGDP target. In other words, I was arguing, based on the first strand of the literature, that NGDP targeting was deficient in this respect.

The accounting is simple: NGDP growth = real growth + inflation. Under NGDP targeting without discretion, the growth rate of NGDP is fixed. But, real growth varies in the long run due to changing productivity growth – for example, real growth was higher in the 1960s than it has been in recent decades. As a result, the long-run inflation rate must vary and thus is unanchored.
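
To see the arithmetic concretely, here is a small sketch (the 5% target and the growth rates are made-up illustrations):

```python
# Sketch of the accounting critique: with a fixed NGDP growth target and no
# discretion, long-run inflation inherits any long-run swings in real growth.
ngdp_target = 0.05  # fixed 5% NGDP growth target

for era, real_growth in [("1960s-style growth", 0.04),
                         ("recent-decades growth", 0.02)]:
    inflation = ngdp_target - real_growth  # NGDP growth = real growth + inflation
    print(era, round(inflation, 3))        # 1% vs. 3%: inflation is unanchored
```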

Zero inflation can be achieved in the long run, but only at the cost of trusting the central bank to act discretionarily and appropriately modify the long-run NGDP target.

I think that such discretion would be problematic, for reasons I outline in the original post. I’ll note, however, that I (now) assess that the benefits of NGDP targeting in preventing short-run recessions outweigh this smaller long-run cost.

IV. NGDP targeting and the Friedman rule
On the other hand – and I haven’t seen this result discussed elsewhere before – NGDP targeting can achieve the Friedman rule for the optimal inflation rate in the long run without discretion. That is, under the logic of the second strand of literature, NGDP targeting can achieve the optimum. Here’s the accounting logic:

The Friedman rule prescribes that the optimal inflation rate, pi*, be set equal to the negative of the real interest rate r so that the nominal interest rate is zero:
pi* = -r

Here’s the kicker: Under a wide class of models (with log utility), the long-run real interest rate equals the rate of technological progress g plus the rate of time preference b. See Baker et al (2005) for a nice overview. As a result, the optimal inflation rate under the Friedman rule can be written:
pi* = -r = -(b+g)

This can be achieved under NGDP targeting without discretion! Here’s how.

Suppose that the central bank targets a nominal GDP growth rate of -b, that is, an NGDP path that declines at the rate of time preference. Recall again, under NGDP targeting, NGDP growth = g + pi. Since the central bank is targeting an NGDP growth rate of -b, if we rearrange to solve for inflation, we get that
pi = NGDP growth - g = -b - g

That’s the optimal inflation rate implied by the Friedman rule shown above. This result holds even if the long-run rate of productivity growth (g) changes.

Thus, we have shown that if the central bank targets an NGDP path that declines at the rate of time preference, then in the long run the Friedman rule will be achieved.
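
A quick numerical check of this result (the parameter values are made up for illustration):

```python
# Sketch: under an NGDP growth target of -b, long-run inflation equals
# -(b + g) -- the Friedman rule -- no matter where productivity growth g
# settles. Parameter values are illustrative.
b = 0.02  # rate of time preference

for g in [0.01, 0.02, 0.03]:       # long-run productivity growth varies
    ngdp_growth = -b               # policy target: NGDP declines at rate b
    pi = ngdp_growth - g           # accounting: pi = NGDP growth - g
    pi_friedman = -(b + g)         # Friedman rule: pi* = -r = -(b + g)
    print(g, round(pi, 4), round(pi_friedman, 4), abs(pi - pi_friedman) < 1e-12)
```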

To summarize, under such a regime, the economy would get the short-run benefits of flexible inflation for which NGDP targeting is rightfully acclaimed, while still achieving the optimal long-run inflation rate.

This is a novel point in support of NGDP targeting, albeit a very specific version of NGDP targeting: an NGDP target of negative the rate of time preference.

V. Summing up
There’s still the tricky problem that economists can’t even agree on whether the Friedman rule or no-inflation is superior.

So, to sum up once more:

  1. NGDP targeting cannot achieve zero inflation in the long run without discretion, as discussed in my original post. This is unfortunate if zero inflation is long-run optimal.
  2. However, NGDP targeting – if targeting a growth rate of -b – can in fact achieve the Friedman rule in the long run without discretion. This is fortunate if the Friedman rule is the long-run optimal inflation rate.

To close this out, I’ll note that an alternative middle ground exists… an NGDP target of 0%. This would see a long-run inflation rate of -g: not as low as -g-b as prescribed by the Friedman rule; but not as high as 0% as prescribed by no-inflationistas.

Such a policy is also known as a “productivity norm” (since long-run inflation is the negative of productivity growth), advocated prominently by George Selgin (1997).

 

 

[1] I ignore ZLB considerations, which typically imply a higher optimal inflation rate, since many advocates of NGDP targeting do not see the ZLB as a true policy constraint (myself included).

 

 

Update: Selgin points out in correspondence, and Sumner in the comments below, that the above discussion implicitly uses variables in per capita terms. To the extent that there is time variation in the growth rate of the labor supply, discretion will still be required.

 

I. Marx vs. Smith and food banks
When Heinz produces too many Bagel Bites, or Kellogg produces too many Pop-Tarts, or whatever, these mammoth food-processing companies can donate their surplus food to Feeding America, a national food bank. Feeding America then distributes these corporate donations to local food banks throughout the country.

What’s the economically optimal way to allocate the donations across the country?

Option one is what you might call “full communism.” Under full communism, Feeding America collects the food donations and then top-down tells individual food banks what endowments they will be receiving, based on Feeding America’s own calculation of which food banks need what.

Prior to 2005, this was indeed what occurred: food was distributed by centralized assignment. Full communism!

The problem was one of distributed versus centralized knowledge. While Feeding America had very good knowledge of poverty rates around the country, and thus could measure need in different areas, it was not as good at dealing with idiosyncratic local issues.

Food banks in Idaho don’t need a truckload of potatoes, for example, and Feeding America might fail to take this into account. Or maybe the Chicago regional food bank just this week received a large direct donation of peanut butter from a local food drive, and then Feeding America comes along and says that it has two tons of peanut butter that it is sending to Chicago.

To an economist, this problem screams of the Hayekian knowledge problem. Even a benevolent central planner will be hard-pressed to efficiently allocate resources in a society since it is simply too difficult for a centralized system to collect information on all local variation in needs, preferences, and abilities.

This knowledge problem leads to option two: market capitalism. Unlike poorly informed central planners, the decentralized price system – i.e., the free market – can (often but not always) do an extremely good job of aggregating local information to efficiently allocate scarce resources. This result is known as the First Welfare Theorem.

Such a system was created for Feeding America with the help of four Chicago Booth economists in 2005. Instead of centralized allocation, food banks were given fake money – with needier food banks being given more – and allowed to bid for different types of food in online auctions. Prices are thus determined by supply and demand.

At midnight each day all of the (fake) money spent that day is redistributed, according to the same formula as the initial allocation. Accordingly, any food bank which does not bid today will have more money to bid with tomorrow.
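
A minimal sketch of that recycling rule (the banks, need-based shares, and amounts are all made up for illustration):

```python
# Sketch of the midnight money-recycling rule described above. Money spent
# at auction during the day is pooled and handed back out according to the
# fixed need-based shares, so a bank that sits out today's auctions ends
# the day with relatively more to bid with tomorrow.
need_shares = {"Chicago": 0.5, "Idaho": 0.3, "Omaha": 0.2}
balances = {"Chicago": 100.0, "Idaho": 60.0, "Omaha": 40.0}
spent_today = {"Chicago": 30.0, "Idaho": 0.0, "Omaha": 10.0}

pot = sum(spent_today.values())                 # 40 units of fake money recycled
for bank in balances:
    balances[bank] -= spent_today[bank]         # debited at auction...
    balances[bank] += pot * need_shares[bank]   # ...then redistributed by formula
print(balances)  # Idaho spent nothing, so it gains purchasing power overnight
```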

Under this system, the Chicago food bank does not have to bid on peanut butter if it has just received a large peanut butter donation from another source. The Idaho food bank, in turn, can skip bidding on potatoes and bid for extra peanut butter at a lower price. It’s win-win-win.

By all accounts, the system has worked brilliantly. Food banks are happier with their allocations; donations have gone up as donors have more confidence that their donations will actually be used. Chalk one up for economic theory.

II. MV=PY, information frictions, and food banks
This is all pretty neat, but here’s the really interesting question: what is optimal monetary policy for the food bank economy?

Remember that food banks are bidding for peanut butter or cereal or mini pizzas with units of fake money. Feeding America has to decide if and how the fake money supply should grow over time, and how to allocate new units of fake money. That’s monetary policy!

Here’s the problem for Feeding America when thinking about optimal monetary policy. Feeding America wants to ensure that changes in prices are informative for food banks when they bid. In the words of one of the Booth economists who helped design the system:

“Suppose I am a small food bank; I really want a truckload of cereal. I haven’t bid on cereal for, like, a year and a half, so I’m not really sure I should be paying for it. But what you can do on the website, you basically click a link and when you click that link it says: This is what the history of prices is for cereal over the last 5 years. And what we wanted to do is set up a system whereby by observing that history of prices, it gave you a reasonable instinct for what you should be bidding.”

That is, food banks face information frictions: individual food banks are not completely aware of economic conditions and only occasionally update their knowledge of the state of the world. This is because obtaining such information is time-consuming and costly.

Relating this to our question of optimal monetary policy for the food bank economy: How should the fake money supply be set, taking into consideration this friction?

Obviously, if Feeding America were to randomly double the supply of (fake) money, then all prices would double, and this would be confusing for food banks. A food bank might go online to bid for peanut butter, see that the price has doubled, and mistakenly think that demand specifically for peanut butter has surged.

This “monetary misperception” would distort decision making: the food bank wants peanut butter, but might bid for a cheaper good like chicken noodle soup, thinking that peanut butter is really scarce at the moment.

Clearly, random variation in the money supply is not a good idea. More generally, how should Feeding America set the money supply?

One natural idea is to copy what real-world central banks do: target inflation.

The Fed targets something like 2% inflation. But, if the price of a box of pasta and other foods were to rise 2% per year, that might be confusing for food banks, so let’s suppose a 0% inflation target instead.

It turns out inflation targeting is not a good idea! In the presence of the information frictions described above, inflation targeting will only sow confusion. Here’s why.

As I go through this, keep in the back of your mind: if households and firms in the real-world macroeconomy face similar information frictions, then – and this is the punchline of this entire post – perhaps inflation targeting is a bad idea in the real world as well.

III. Monetary misperceptions
I demonstrate the following argument rigorously in a formal mathematical model in a paper, “Monetary Misperceptions: Optimal Monetary Policy under Incomplete Information,” using a microfounded Lucas Islands model. The intuition for why inflation targeting is problematic is as follows.

Suppose the total quantity of all donations doubles.

You’re a food bank and go to bid on cheerios, and find that there are twice as many boxes of cheerios available today as yesterday. You’re going to want to bid at a price something like half as much as yesterday.

Every other food bank looking at every other item will have the same thought. Aggregate inflation thus would be something like -50%, as all prices would drop by half.

As a result, under inflation targeting, the money supply would simultaneously have to double to keep inflation at zero. But this would be confusing: Seeing the quantity of cheerios double but the price remain the same, you won’t be able to tell if the price has remained the same because
(a) The central bank has doubled the money supply
or
(b) Demand specifically for cheerios has jumped up quite a bit

It’s a signal extraction problem, and rationally you’re going to put some weight on both of these possibilities. However, only the first possibility actually occurred.

This problem leads to all sorts of monetary misperceptions, as money supply growth creates confusion – hence the title of my paper.

Inflation targeting, in this case, is very suboptimal. Price level variation provides useful information to agents.
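
The signal extraction logic can be sketched in a few lines (the variances are illustrative; this is the classic Lucas-islands weighting, not the full model from the paper):

```python
# Sketch of the signal extraction problem. A food bank observes only its own
# price, p_i = m + z_i: an aggregate money-supply component m plus a
# good-specific demand shock z_i. The rational (Bayesian) estimate of z_i
# puts weight var_z / (var_z + var_m) on the observed price, so the noisier
# the money supply, the less informative any individual price is.
def signal_weight(var_m: float, var_z: float) -> float:
    return var_z / (var_z + var_m)

# Money supply moves to offset aggregate supply shocks (inflation targeting):
print(signal_weight(var_m=1.0, var_z=1.0))  # 0.5 -> price changes half-discounted
# Money supply ignores supply shocks (a nominal income target):
print(signal_weight(var_m=0.0, var_z=1.0))  # 1.0 -> price changes fully trusted
```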

IV. Optimal monetary policy
As I work out formally in the paper, optimal policy is instead something close to a nominal income (NGDP) target. Under log utility, it is exactly a nominal income target. (I’ve written about nominal income targeting before more critically here.)

Nominal income targeting in this case means that the money supply should not respond to aggregate supply shocks. In the context of our food banks, this result means that the money supply should not be altered in response to an increase or decrease in aggregate donations.

Instead, if the total quantity of all donations doubles, then the price level should be allowed to fall by (roughly) half. This policy prevents the confusion described above.

Restating, the intuition is this. Under optimal policy, the aggregate price level acts as a coordination mechanism, analogous to the way that relative prices convey useful information to agents about the relative scarcity of different goods. When total donations double, the aggregate price level halves, signaling that aggregate output is less scarce.

It turns out that nominal income targeting is only exactly optimal (as opposed to approximately optimal) under some special conditions. I’ll save that discussion for another post though.

Feeding America, by the way, does not target constant inflation. They instead target “zero inflation for a given good if demand and supply conditions are unchanged.” This alternative is a move in the direction of a nominal income target.

V. Real-world macroeconomic implications
I want to claim that the information frictions facing food banks also apply to the real economy, and as a result, the Federal Reserve and other central banks should consider adopting a nominal income target. Let me tell a story to illustrate the point.

Consider the owner of an isolated bakery. Suppose one day, all of the customers seen by the baker spend twice as much money as the customers from the day before.

The baker has two options. She can interpret this increased demand as customers having come to appreciate the superior quality of her baked goods, and thus increase her production to match the new demand. Alternatively, she could interpret this increased spending as evidence that there is simply more money in the economy as a whole, and that she should merely increase her prices proportionally to account for inflation.

Economic agents confounding these two effects is the source of economic booms and busts, according to this model. This is exactly analogous to the problem faced by food banks trying to decide how much to bid at auction.

To the extent that these frictions are quantitatively important in the real world, central banks like the Fed and ECB should consider moving away from their inflation targeting regimes and toward something like a nominal income target, as Feeding America has.

VI. Summing up
Nominal income targeting has recently enjoyed a surge in popularity among academic monetary economists, so the fact that this result aligns with that intuition is pretty interesting.

To sum up, I’ll use a metaphor from Selgin (1997).

Consider listening to a symphony on the radio. Randomly turning the volume knob up and down merely detracts from the musical performance (random variation in the price level is not useful). But, the changing volume of the orchestra players themselves, from quieter to louder and back down again, is an integral part of the performance (the price level should adjust with natural variations in the supply of food donations). The changing volume of the orchestra should not be smoothed out to maintain a constant volume (constant inflation is not optimal).

Central banks may want to consider allowing the orchestra to do its job, and reconsider inflation targeting as a strategy.

 

Behavioral economists have a concept called loss aversion. It’s almost always described something like this:

“Loss aversion implies that one who loses $100 will lose more satisfaction than another person will gain satisfaction from a $100 windfall.”
Wikipedia, as of December 2015

Sounds eminently reasonable, right? Some might say so reasonable, in fact, that it’s crazy that those darn neoclassical economists don’t incorporate such an obvious, fundamental fact about human nature in their models.

It is crazy – because it’s not true! The pop definition of loss aversion given above – that ‘losses hurt more than equivalent-size gains’ – is precisely the concept of diminishing marginal utility (DMU) that is boringly standard in neoclassical price theory.

Loss aversion is, in fact, a distinct and (perhaps) useful concept. But somewhat obnoxiously, behavioral economists, particularly in their popular writings, have a tendency to conflate it with DMU in a way that makes the concept seem far more intuitive than it is, and in the process wrongly makes standard price theory look bad.

I’m not just cherry-picking a bad Wikipedia edit. I name names at the bottom of this post, listing where behavioral economists – Thaler, Kahneman, Sunstein, Dubner, etc. – have (often!) given the same misleading definition. It’s wrong! Loss aversion is about reference dependence.

To restate, what I’m claiming is this:

  1. Behavioral economists use an incorrect definition of loss aversion when writing for popular audiences
  2. This incorrect definition is in fact the property of DMU that is assumed in all of neoclassical economics
  3. DMU is much more intuitive than the real definition of loss aversion, and so by using a false definition of loss aversion behavioral economists make neoclassical economics look unnecessarily bad and behavioral economics look misleadingly good

Let me walk through the difference between DMU and loss aversion painstakingly slowly:

Diminishing marginal utility
“Diminishing marginal utility” is the idea that the more you have of something, the less you get out of having a little bit more of it. For example:

If you own nothing but $1,000 and the clothes on your back, and I then give you $100,000, that is going to give you a heck of a lot more extra happiness than if you had $100 million and I gave you $100,000.

An important corollary follows immediately from this: losses hurt more than gains!

I made a super high quality illustration to depict this:

What we have here is a graph of your utility as a function of your wealth under extremely standard (i.e., non-behavioral) assumptions. The fact that the line flattens out as you get to higher wealth levels is the property of DMU.

We can also see that equivalently sized losses hurt more than gains. As you go from 10k wealth to 2k wealth (middle green line to bottom green line), your utility falls by more than the amount your utility rises if you go from 10k wealth to 18k wealth (middle green to top green lines), despite the change in wealth being the same 8k in both directions.

Standard economics will always assume DMU, thus capturing exactly the intuition of the idea described in the above Wikipedia definition of loss aversion.

More mathematically – and I’m going to breeze through this – if your utility is purely a function of your wealth, Utility=U(W), then we assume that U'(W)>0 but U''(W)<0, i.e. your utility function is concave. With these assumptions, the result that U(W+ε)-U(W) < U(W)-U(W-ε) follows from taking a Taylor expansion. See proof attached below.
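
For readers who want the one-line version, here is a sketch of the result using strict concavity directly (equivalent to the Taylor-expansion route):

```latex
% Since W is the midpoint of W+\varepsilon and W-\varepsilon,
% strict concavity of U gives
U(W) = U\!\left(\tfrac{1}{2}(W+\varepsilon) + \tfrac{1}{2}(W-\varepsilon)\right)
     > \tfrac{1}{2}U(W+\varepsilon) + \tfrac{1}{2}U(W-\varepsilon),
% and rearranging yields exactly
U(W+\varepsilon) - U(W) < U(W) - U(W-\varepsilon).
```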

Loss aversion
Loss aversion is a consequence of reference dependence and is an entirely different beast. The mathematical formulation was first made in Tversky and Kahneman (1991).

In words, loss aversion says this: Suppose you have nothing but the clothes you’re wearing and $10,000 in your pocket, and then another $10,000 appears in your pocket out of nowhere. Your level of utility/happiness will now be some quantity given your wealth of $20,000.

Now consider a situation where you only own your clothes and the $30,000 in your pocket. Suppose suddenly $10,000 in your pocket disappears. Your total wealth is $20,000 – that is, exactly the same as the prior situation. Loss aversion predicts that in this situation, your level of utility will be lower than in the first situation, despite the fact that in both situations your wealth is exactly $20,000, because you lost money to get there.

Perhaps this concept of loss aversion is reasonable in some situations. It doesn’t seem crazy to think that people don’t like to lose things they had before.

But this concept is entirely different from the idea that ‘people dislike losses more than they like gains’ which sloppy behavioral economists go around blathering about. It’s about reference dependence! Your utility depends on your reference point: did you start with higher or lower wealth than you currently have?
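
To make the contrast concrete, here is a minimal sketch (the curvature and loss-aversion parameters are the commonly cited Tversky and Kahneman (1992) estimates; the wealth numbers mirror the story above):

```python
# Sketch: standard utility depends only on final wealth, while the
# Kahneman-Tversky value function depends on the change relative to a
# reference point. Parameters lam = 2.25 and alpha = 0.88 are commonly
# cited estimates.
def dmu_utility(wealth: float) -> float:
    """Standard concave utility: only final wealth matters."""
    return wealth ** 0.5

def kt_value(change: float, lam: float = 2.25, alpha: float = 0.88) -> float:
    """Reference-dependent value: losses loom larger than gains."""
    if change >= 0:
        return change ** alpha
    return -lam * (-change) ** alpha

# Scenario 1: $10,000 -> $20,000 (a gain). Scenario 2: $30,000 -> $20,000 (a loss).
print(dmu_utility(20_000))                  # 141.4...: identical in both scenarios
print(kt_value(10_000), kt_value(-10_000))  # ~3311 vs. ~-7450: the reference matters
```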

In their academic papers, behavioral economists are very clear on the distinction. The use of math in formal economic models imposes precision. But when writing for a popular audience in the less-precise language of English – see below for examples – the same economists slip into using an incorrect definition of loss aversion.

Conclusion
So, please, don’t go around claiming that behavioral economists are incorporating some brilliant newfound insight that people hate losses more than they like gains. We’ve known about this in price theory since Alfred Marshall’s 1890 Principles of Economics.

 

Addendum
It’s kind of silly for me to write this post without naming names. Here we go:

1. Richard Thaler, one of the founding fathers of behavioral economics, in his 2015 bestseller, Misbehaving:

2. Richard Thaler, in the 2008 bestseller, Nudge:

3. Cass Sunstein (Oct. 2015), Harvard law and behavioral economics professor:

4. Daniel Kahneman, Nobel Prize-winning behavioral economist, in his 2011 bestseller, Thinking Fast and Slow:

5. Stephen Dubner (Nov. 2005):

6. New York Times (Dec. 2013):

7. The Economist (Feb. 2015):

I should note that Tversky and Kahneman in their original paper describing loss aversion are admirably clear in their usage of the concept: the title of their QJE paper is Loss Aversion in Riskless Choice: A Reference-Dependent Model, explicitly highlighting the notion of reference dependence.

References

 

Until very recently – see last month’s WSJ survey of economists – the FOMC was widely expected to raise the target federal funds rate this week at their September meeting. Whether or not the Fed should be raising rates is a question that has received much attention from a variety of angles. What I want to do in this post is answer that question from a very specific angle: the perspective of a New Keynesian economist.

Why the New Keynesian perspective? There is certainly a lot to fault in the New Keynesian model (see e.g. Josh Hendrickson). However, the New Keynesian framework dominates the Fed and other central banks across the world. If we take the New Keynesian approach seriously, we can see what policymakers should be doing according to their own preferred framework.

The punch line is that the Fed raising rates now is the exact opposite of what the New Keynesian model of a liquidity trap recommends.

If you’re a New Keynesian, this is the critical moment in monetary policy. For New Keynesians, the zero lower bound can cause a recession, but need not result in a deep depression, as long as the central bank credibly promises to create an economic boom after the zero lower bound (ZLB) ceases to be binding.

That promise of future growth is sufficient to prevent a depression. If the central bank instead promises to return to business as normal as soon as the ZLB stops binding, the result is a deep depression while the economy is trapped at the ZLB, like we saw in 2008 and continue to see in Europe today. The Fed appears poised to validate earlier expectations that it would indeed return to business as normal.

If the New Keynesian model is accurate, this is extremely important. By not creating a boom today, the Fed is destroying any credibility it has for the next time we hit the ZLB (which will almost certainly occur during the next recession). It won’t credibly be able to promise to create a boom after the recession ends, since everyone will remember that it did not do so after the 2008 recession.

The result, according to New Keynesian theory, will be another depression.

I. The theory: an overview of the New Keynesian liquidity trap

I have attached at the bottom of this post a reference sheet going into more detail on Eggertsson and Woodford (2003), the definitive paper on the New Keynesian liquidity trap. Here, I summarize at a high level – skip to section II if you are familiar with the model.

A. The NK model without a ZLB

Let’s start by sketching the standard NK model without a zero lower bound, and then see how including the ZLB changes optimal monetary policy.

The basic canonical New Keynesian model of the economy has no zero lower bound on interest rates and thus no liquidity traps (in the NK context, a liquidity trap is defined as a period when the nominal interest rate is constrained at zero). Households earn income through labor and use that income to buy a variety of consumption goods and consume them to receive utility. Firms, which have some monopoly power, hire labor and sell goods to maximize their profits. Each period, a random selection of firms are not allowed to change their prices (Calvo price stickiness).

With this setup, the optimal monetary policy is to have the central bank manipulate the nominal interest rate such that the real interest rate matches the “natural interest rate,” which is the interest rate which would prevail in the absence of economic frictions. The intuition is that by matching the actual interest rate to the “natural” one, the central bank causes the economy to behave as if there are no frictions, which is desirable.

In our basic environment without a ZLB, a policy of targeting zero percent inflation via a Taylor rule for the interest rate exactly achieves the goal of matching the real rate to the natural rate. Thus optimal monetary policy results in no inflation, no recessions, and everyone’s the happiest that they could possibly be.

B. The NK liquidity trap

The New Keynesian model of a liquidity trap is exactly the same as the model described above, with one single additional equation: the nominal interest rate must always be greater than or equal to zero.

This small change has significant consequences. Whereas before zero inflation targeting made everyone happy, now such a policy can cause a severe depression.

The problem is that sometimes the interest rate should be less than zero, and the ZLB can prevent it from getting there. As in the canonical model without a ZLB, optimal monetary policy would still have the central bank match the real interest rate to the natural interest rate.

Now that we have a zero lower bound, however, if the central bank targets zero inflation, then the real interest rate won’t be able to match the natural interest rate if the natural interest rate ever falls below zero!

And that, in one run-on sentence, is the New Keynesian liquidity trap.

Optimal policy is no longer zero inflation. The new optimal policy rule is considerably more complex and I refer you to the attached reference sheet for full details. But the essence of the idea is quite intuitive:

If the economy ever gets stuck at the ZLB, the central bank must promise that as soon as the ZLB is no longer binding it will create inflation and an economic boom.

The intuition behind this idea is that the promise of a future boom increases the inflation expectations of forward-looking households and firms. These increased inflation expectations reduce the real interest rate today. This in turn encourages consumption today, diminishing the depth of the recession today.

All of this happens today, despite the fact that the boom won’t occur until perhaps far into the future! Expectations are important; indeed, they are the essence of monetary policy.
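
The mechanism can be seen in a two-period toy version of the model (a sketch; the parameter values, and the assumption that period 2 output is back at potential, are my own illustrative simplifications, not Eggertsson and Woodford's calibration):

```python
# Two-period sketch of the New Keynesian liquidity trap. The Euler equation
# links today's output gap to tomorrow's gap and the real interest rate:
#   x1 = x2 - sigma * (i1 - expected_inflation - r_nat)
sigma = 1.0    # interest sensitivity of demand
r_nat = -0.02  # natural rate is negative in period 1, so the ZLB binds
i1 = 0.0       # nominal rate stuck at zero
x2 = 0.0       # assume output is back at potential in period 2

def output_gap_today(expected_inflation: float) -> float:
    return x2 - sigma * (i1 - expected_inflation - r_nat)

print(output_gap_today(0.00))  # -0.02: recession under 'business as usual'
print(output_gap_today(0.02))  #  0.00: promised future inflation closes the gap
```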

C. An illustration of optimal policy

Eggertsson (2008) illustrates this principle nicely in the following simulation. Suppose the natural rate is below zero for 15 quarters. The dashed line shows the response of the economy to a zero-inflation target, and the solid line the response to the optimal policy described above.

Under optimal policy (solid line), we see in the first panel that the interest rate is kept at zero even after period 15, when the ZLB ceases to bind. As a result, we see in panels two and three that the depth of the recession is reduced to almost zero under this policy; there is no massive deflation; and there’s a nice juicy boom after the liquidity trap ends.

In contrast, under the dashed line – which you can roughly think of as the Fed’s current history-independent policy – there is deflation and economic disaster.

II. We’re leaving the liquidity trap; where’s our boom?

To be completely fair, we cannot yet say that the Fed has failed to follow its own model. We first must show that the ZLB only recently has ceased or will cease to be binding. Otherwise, a defender of the Fed could argue that the lower bound could have ceased to bind years ago, and the Fed has already held rates low for an extended period.

The problem for showing this is that estimating the natural interest rate is extremely challenging, as famously argued by Milton Friedman (1968). That said, several different models using varied estimation methodologies all point to the economy still being on the cusp of the ZLB, and thus the thesis of this post: the Fed is acting in serious error.

Consider, most tellingly, the New York Fed’s own model! The NY Fed's medium-scale DSGE model is at its core exactly the same as the basic canonical NK model described above, with a lot of bells and whistles grafted on. The calibrated model takes in a whole jumble of data – real GDP, financial market prices, consumption, forecast inflation, the kitchen sink – and spits out economic forecasts.

It can also tell us what it thinks the natural interest rate is. From the perspective of the New York Fed DSGE team, the economy is only just exiting the ZLB:

Barsky et al (2014) of the Chicago Fed perform a similar exercise with their own DSGE model and come to the same conclusion:

Instead of using a microfounded DSGE model, John Williams and Thomas Laubach – president of the Federal Reserve Bank of San Francisco and director of monetary affairs of the Board of Governors, respectively – use a reduced-form model estimated with a Kalman filter. Their model finds that the natural rate is in fact still below zero (in green):

David Beckworth has a cruder but more transparent regression model here and also finds that the economy remains on the cusp of the ZLB (in blue):

If anyone knows of any alternative estimates, I’d love to hear in the comments.

With this fact established, we have worked through the entire argument. To summarize:

  1. The Fed thinks about the world through a New Keynesian lens
  2. The New Keynesian model of a liquidity trap says that to prevent a depression, the central bank must keep rates low even after the ZLB stops being binding, in order to create an economic boom
  3. The economy is only just now coming off the ZLB
  4. Therefore, a good New Keynesian should support keeping rates at zero.
  5. So: why is the Fed about to raise rates?!

III. What’s the strongest possible counterargument?

I intend to conclude all future posts by considering the strongest possible counterarguments to my own. In this case, I see only two interesting critiques:

A. The NK model is junk

This argument is something I have a lot of sympathy for. Nonetheless, it is not a very useful point, for two reasons.

First, the NK model is the preferred model of Fed economists. As mentioned in the introduction, this is a useful exercise as the Fed’s actions should be consistent with its method of thought. Or, its method of thought must change.

Second, other models give fairly similar results. Consider the more monetarist model of Auerbach and Obstfeld (2005) where the central bank’s instrument is the money supply instead of the interest rate (I again attach my notes on the paper below).

Instead of prescribing that the Fed hold interest rates lower for longer as in Eggertsson and Woodford, Auerbach and Obstfeld’s cash-in-advance model shows that to defeat a liquidity trap the Fed should promise a one-time permanent level expansion of the money supply. That is, the expansion must not be temporary: the Fed must continue to be “expansionary” even after the ZLB has ceased to be binding by keeping the money supply expanded.

This is not dissimilar in spirit to Eggertsson and Woodford’s recommendation that the Fed continue to be “expansionary” even after the ZLB ceases to bind by keeping the nominal rate at zero.

B. The ZLB ceased to bind a long time ago

The second possible argument against my above indictment of the Fed is the argument that the natural rate has long since crossed the ZLB threshold and therefore the FOMC has targeted a zero interest rate for a sufficiently long time.

This is no doubt the strongest argument a New Keynesian Fed economist could make for raising rates now. That said, I am not convinced, partly because of the model estimations shown above. More convincing to me is the fact that we have not seen the boom that would accompany interest rates being below their natural rate. Inflation has been quite low and growth has certainly not boomed.

Ideally we’d have some sort of market measure of the natural rate (e.g. a prediction market). As a bit of an aside, as David Beckworth forcefully argues, it’s a scandal that the Fed Board does not publish its own estimates of the natural rate. Such data would help settle this point.

I’ll end things there. The New Keynesian model currently dominates macroeconomics, and its verdict on whether the Fed should raise rates in September is a resounding no. If you’re an economist who finds value in the New Keynesian perspective, I’d be extremely curious to hear why you support raising rates in September if you do – or, if not, why you’re not speaking up more loudly.


I comment on Josh Hendrickson's interesting post. While I certainly find it hard to believe that the natural rate of interest could be negative, it's difficult to find a satisfying alternative explanation for the sustained output gap of the past seven years coexisting with a federal funds rate at the zero lower bound and positive inflation.


JP Koning makes the case that even if Greece were to leave the Eurozone and institute a new currency (call it the New Drachma), Athens would still not have independent monetary policy: if households and firms continue to post prices in euros rather than New Drachmas, Greek monetary policy would not be able to affect the Greek economy. As JP explains:

“Consider what happens if the euro remains the economy's preferred accounting unit, even as Greek drachmas begin to circulate as a medium of exchange. No matter how low the drachma exchange rate goes, there can be no drachma-induced improvement in competitiveness. After all, if olive oil producers accept payment in drachmas but continue to price their goods in euros, then a lower drachma will have no effect on Greek olive oil prices, the competitiveness of Greek oil vis-à-vis, say, Turkish oil, remaining unchanged. If a Greek computer programmer continues to price their services in euros, the number of drachmas required to hire him or her will have skyrocketed, but the programmer's euro price will have remained on par with a Finnish programmer's wage.”

Thus, if the New Drachma is not adopted as the dominant unit of account, Greece would still be at the mercy of the ECB, and worse, now without any voice in ECB decision-making.
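
To put hypothetical numbers on the olive oil example: suppose oil sells for 10 euros per litre, and the New Drachma launches at parity with the euro before depreciating to 2 drachmas per euro. With the euro price set first, the drachma price simply scales with the exchange rate:

$$ P^{ND} = e \cdot P^{EUR}: \qquad 1 \times 10 = 10 \;\text{ND} \;\longrightarrow\; 2 \times 10 = 20 \;\text{ND}, \qquad P^{EUR} = 10 \;\text{euros throughout}. $$

The drachma price doubles mechanically, but the euro price – the one that Turkish competitors and international buyers actually compare – never moves, so the devaluation buys no competitiveness.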

I think this story is largely correct, but I want to throw out a counterpoint for discussion, which perhaps demonstrates that leaving the Eurozone could benefit Greece.

Currency reform and rewriting of debt contracts
One of the most important actions a government takes when it institutes a new currency or a currency reform is to legally redenominate all old contracts (issued under domestic law) into the new currency. In particular, existing debt automatically becomes priced in the new currency.
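
Concretely: with some statutory conversion rate $k$ (New Drachmas per euro – a number I am inventing purely for illustration) fixed on conversion day, redenomination amounts to

$$ \text{a debt of } X \text{ euros} \;\xrightarrow{\text{by law}}\; \text{a debt of } kX \text{ New Drachmas}, $$

so that from conversion day onward the real burden of that debt is governed by the drachma price level, which Athens rather than Frankfurt controls.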

In American history, this occurred during Franklin Roosevelt’s 1933 “currency reform”, when the dollar was devalued relative to gold and gold clauses in existing contracts were invalidated. To quote from Amity Shlaes’ “The Forgotten Man: A New History of the Great Depression”:

“Next Roosevelt set to work invalidating gold clauses in contracts. Since the previous century, gold clauses had been written into both government bond and private contracts between individual businessmen. The clauses committed signatories to paying not merely in dollars but in gold dollars. The boilerplate phrase was that the obligation would be “payable in principal and interest in United States gold coin of the present standard of value.” The phrase “the present standard” referred, or so many believed, to the moment at which the contract had been signed. The line also referred to gold, not paper, just as it said. This was a way of ensuring that, even if a government did inflate, an individual must still honor his original contract. Gold clause bonds had historically sold at a premium, which functioned as a kind of meter of people’s expectation of inflation. In order to fund World War I, for instance, Washington had resorted to gold clause bonds, backing Liberty Bonds sold to the public with gold.

Now, in the spring of 1933, upon the orders of Roosevelt, the Treasury was making clear that it would cease to honor its own gold clauses. This also threw into jeopardy gold clauses in private contracts between individuals. The notion would be tested in the Supreme Court later; meanwhile, bond and contract holders had to accept the de facto devaluation of their assets. The deflation had hurt borrowers, and now this inflationary act was a primitive revenge. To end the gold clause was an act of social redistribution, a $200 billion transfer of wealth from creditor to debtor, a victory for the populists.” [Chapter 5]

Unfortunately I can’t find a citation right now, but I believe Argentina did the same thing when it replaced the austral with the peso, and that this relabeling almost always occurs during currency reforms.

Thus, after a currency reform, existing debt, at the very least, would be priced in the new currency.

Debt: the most important nominal friction?
And there’s a good argument to be made that the most important “sticky” price is the price of debt. Selgin’s “Less Than Zero”, Sheedy (2014), and Mian and Sufi's new book make this argument.

Debt contracts are almost always both (a) fixed in nominal, not real, terms and (b) not contingent on aggregate economic conditions. In perfectly complete markets, on the other hand, we would expect debt contracts to be state-contingent. Contracts would be written in such a way that (perhaps by tracking an inflation index and some index of real economic conditions) if inflation or economic growth increased, borrowers would pay more back to their lenders; and if inflation or economic growth decreased, borrowers would pay less. Both borrowers and lenders would ex ante prefer this type of arrangement, but transaction costs make such contracts prohibitively expensive.
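
As a concrete sketch of what state-contingency could look like – my own stylized illustration, not a contract taken from Sheedy’s paper – imagine a loan whose repayment is scaled by realized nominal GDP relative to its expected path:

```python
# A stylized sketch of a state-contingent debt contract: repayment is
# scaled by realized nominal GDP relative to its expected path, so the
# lender shares in aggregate risk. All numbers below are made up.

def fixed_nominal_repayment(principal: float, rate: float) -> float:
    """Conventional contract: the payment is fixed in nominal terms."""
    return principal * (1 + rate)

def ngdp_indexed_repayment(principal: float, rate: float,
                           ngdp_actual: float, ngdp_expected: float) -> float:
    """State-contingent contract: the payment scales with the economy."""
    return principal * (1 + rate) * (ngdp_actual / ngdp_expected)

principal, rate = 100.0, 0.05
ngdp_expected = 1.045  # expected nominal GDP index: 4.5% growth
for ngdp_actual in (1.00, 1.045, 1.09):  # recession / on-target / boom
    fixed = fixed_nominal_repayment(principal, rate)
    indexed = ngdp_indexed_repayment(principal, rate, ngdp_actual, ngdp_expected)
    print(f"NGDP {ngdp_actual:.3f}: fixed pays {fixed:.2f}, indexed pays {indexed:.2f}")
```

In the recession state the borrower’s repayment automatically shrinks, so the lender shares in aggregate risk much as an equity holder would; the conventional fixed nominal contract instead concentrates the entire aggregate shock on the borrower.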

For more intuition on this see Chapter III of Less Than Zero and the introduction to Sheedy’s paper. As for why this nominal friction may be more important than the traditional nominal frictions that economists worry about – that is, sticky prices and sticky wages – I would again suggest a look at Sheedy’s paper where he calibrates his model and finds that the central bank should care 90% about this nominal debt “stickiness” and 10% about traditional price stickiness. However, the relative importance of these two categories of frictions is very much still an open question.

If non-state-contingent debt is indeed the most important nominal friction, then perhaps if Greece were to rewrite existing debt contracts when instituting a New Drachma, the new Greek central bank would have enough control over the nominal economy to pull Greece out of its depression.

(Of course, after the switch over to the New Drachma, Greek households and firms could – unless further legislation prevented them – write *new* contracts denominated in euros. JP’s Latin America pricing hysteresis example would seem to suggest that this is very possible.)

In short
To summarize, JP writes, “As long as a significant portion of Greek prices are expressed in euros, Greece’s monetary policy will continue to be decided in Frankfurt, not Athens.” While true, it is at least conceivable that a government-mandated relabeling of existing debt contracts (as has occurred historically during currency reforms) could ensure that debt prices, which are perhaps the most important prices, are no longer expressed in euros but instead in New Drachmas.