Tuesday, November 25, 2014

Abstract and notes for Ignite Seattle talk

If there's a bright center of the econoblogosphere, you're at the blog that it's farthest from. I thought I'd try and do some outreach by submitting a talk to Ignite Seattle; this post is the draft abstract/outline along with some graphics to be used in the presentation. If you have any comments, let me know!


The graph that could transform economics

In 1912, Henry Norris Russell presented a graph of star color (temperature) versus luminosity at a Royal Astronomical Society meeting. That graph inspired Arthur Eddington to develop a theory of how stars worked while still ignorant of what made them shine -- in fact, the traditional thinking at the time was decidedly wrong. [See more here]

I've put together a graph of economic growth versus the amount of printed money based on a theory that assumes we have no idea how supply and demand work, and it shows universal behavior in economies across the world. The theory is built on information entropy from information theory. It has deep implications for economic policy, and could transform how the field of economics is practiced.

Contrary to traditional economic thinking, prices in the market place do not communicate information about droughts, harvests or quality, but rather confirm that supply and demand have matched up ignorance of those factors on either side of a transaction. And this modesty -- admitting ignorance about human behavior or what factors boost economic growth -- leads us to question traditional economic thinking on subjects from Seattle's new minimum wage to how to deal with the aftermath of the financial crisis.

Information entropy and transactions. From my post here.


Outline -- talk will be based on these posts (a series of sub-minute summaries to fit within the 5-minute limit):
I'll probably show this graph because it's awesome:

Saturday, November 22, 2014

Is market monetarism wrong because the market is wrong?

That title will probably get me in trouble. The other candidate was "Are incorrect models a source of excess volatility?", but that was boring. 

First here's the background reading. James Hamilton at econbrowser put up a post earlier this month I've only now read. In it there was a very interesting graph:

Movement of 10-year treasury on news of additional QE. Source James Hamilton at econbrowser.com.

My reaction was "that's a big, sudden shift ... wait, wha?!?"

The 10-year treasury is not impacted by QE as I rather decisively showed in the data here, so why should it drop on news of additional QE? Hamilton continues: "But over the next few days, the yield started climbing back up. By the end of April, the 10-year yield was higher than it had been before the Fed’s announcement." Hamilton then considers and rejects the idea that this was due to inflation expectations. The best answer seems to be that the market was wrong and then randomly drifted back to where it should be. [See diagram at the bottom of this post.]

[As an aside, market inflation expectations are also wrong.]

Note that about a year ago I discussed the idea that markets might not know what they are doing, in order to explain a set of observations from Scott Sumner about interest rates; the first two are here:
  1. Moves toward easier money usually lower short term rates. The effect on long term rates is unpredictable.
  2. Moves toward tighter money usually raise short term rates. The effect on long term rates is unpredictable.
The market appears to think monetary expansion lowers interest rates. However, in the information transfer model (ITM), if inflation is high (the IT index 'kappa' is low), monetary expansion will lead to higher interest rates through the income/inflation effect. If inflation is low (kappa is high), then monetary expansion will lead to lower interest rates since the income/inflation effect is muted. I then explained Sumner's rules like this:
  1. Markets like what they think is easier money, but the long run depends on whether the information transfer index is high or low. 
  2. Markets don't like what they think is tighter money, but the long run depends on whether the information transfer index is high or low.
If the market knew how monetary expansion impacts interest rates, then we wouldn't get things like the incorrect adjustment in the graph at the top of this post.
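As a concrete (and heavily simplified) sketch of that kappa dependence: in the simple ITM, NGDP goes roughly as M^(1/κ), so the long-run rate -- which tracks NGDP/M -- rises with monetary expansion when κ is low and is flat or falls as κ approaches 1. The functional form below is my illustration of the sign flip, not the fitted model:

```python
import numpy as np

def long_run_rate_response(kappa, dlogM=0.01):
    """Sign of the long-run interest rate response to a small monetary
    expansion, in a toy ITM where NGDP ~ M**(1/kappa) and the long rate
    tracks NGDP/M (the income/inflation effect)."""
    # d log(NGDP/M) = (1/kappa - 1) * d log M
    return np.sign((1.0 / kappa - 1.0) * dlogM)

print(long_run_rate_response(0.5))  # 1.0: high-inflation regime, rates rise
print(long_run_rate_response(1.0))  # 0.0: the effect is muted
```

The long-run sign is all this toy captures; the short-run "liquidity effect" move the market prices in is a separate (and opposite) mechanism.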

Another potential area where the market appears to be in error is exchange rates. The immediate responses to expansionary policy in Japan and the Eurozone were a falling Yen and Euro ... but in the ITM the Euro should rise if the supply of Euros expands: relative demand for currency explains the behavior of exchange rates, and more Euros means more demand for Euros.

There are two takeaways from this:

  • Since "the market" appears to have something like a monetarist view, immediate market responses to news should confirm e.g. Scott Sumner's model. The Fed announces QE, and the market expects a rise in the stock market, lower interest rates, a fall in the dollar and a return to target inflation. The market moves are taken as evidence that the Fed hasn't "run out of ammunition". However, in the long run, the economy moves back towards the ITM trends ... and you get pieces from Sumner like this: Were market monetarists wrong about Japan?
  • If the market frequently moved in the wrong direction in particular venues, that would become a source of excess volatility. I'm not saying all venues! There are many where the market appears to get things right. However, there are excess volatility problems in exchange rates (mentioned above) and stocks [pdf].

Regarding stocks, James Hamilton closes his post with a question about whether the rise in the Nikkei on news of QE from the BoJ will last. If stocks rise on what the market believes is good macro news, but that belief is incorrect and the news should be considered neutral (e.g. QE) under the correct model, then there will be excess volatility and market moves should be discounted [1].
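To make the excess volatility point concrete, here is a minimal simulation (my own toy, not derived from the ITM): a log-price series that jumps on "neutral" news and then decays back to trend shows more volatility than one that correctly ignores the news.

```python
import numpy as np

rng = np.random.default_rng(42)

def price_path(n_steps, n_news, jump, revert_rate=0.1, noise=0.01):
    """Log-price around trend: each news event adds a jump (the market
    misreading neutral news), which then decays back at revert_rate."""
    news_times = set(rng.choice(n_steps, size=n_news, replace=False).tolist())
    p, shock = np.zeros(n_steps), 0.0
    for t in range(1, n_steps):
        if t in news_times:
            shock += jump                # misreaction to neutral news
        shock *= (1.0 - revert_rate)     # drift back toward the correct price
        p[t] = shock + rng.normal(0.0, noise)
    return p

calm = price_path(2000, n_news=0, jump=0.0)
jumpy = price_path(2000, n_news=40, jump=0.05)
print(jumpy.std() > calm.std())  # True: misreactions alone add volatility
```

Everything except the decay back to trend washes out of the comparison, so the extra variance comes entirely from the jump-and-revert pattern in the graph at the top of the post.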

This also implies that market monetarism won't work. No matter the market expectations set by forward guidance or NGDP targets, they won't lead to the desired outcomes unless the underlying model is correct. You could guide inflation and NGDP with the ITM if it is correct -- but then if the ITM is correct, only certain values of NGDP growth and inflation are attainable.

Schematic drawing. Market belief in purple and the correct model in orange. The market considers neutral news to raise the given price, overshoots and returns to the original price.

[1] These are theoretical musings, and the ITM may well be wrong. Actually, the ITM does not as yet predict when the prices should return to trend (the market can be irrational far longer than you can remain solvent). So if you lose money shorting the Nikkei based on this low-traffic blog by a non-economist, it's your own fault.

Monday, November 17, 2014

Because empirical success

I've occasionally had discussions with some commenters, most recently Philippe on this thread, that such and such explanatory variable can't possibly be used to explain what I'm trying to explain because it doesn't make sense to that commenter. In the recent case, Philippe suggested that it didn't make sense that physical currency (the currency component of the monetary base, which I call M0) was an explanatory variable for inflation. Philippe suggested other forms of money that should matter for inflation, like M1 or central bank reserves.

You could probably put the brief back and forth between Nick Rowe and myself in the previous few posts into this framework, except in that case I'm disagreeing with Nick by saying expectations (the explanatory variable) can't explain the price level or inflation. Nick suggests that expectations (and central bank targets or guidance) are an explanatory variable, while I'm saying expectations do not explain much more than the most recent NGDP and M0 numbers.

Maybe the information transfer model is wrong and expectations and M1 matter for inflation. Whatever model you have, though, it can't be too different from the ITM. Why? Because empirical success. Here are the information transfer model results for the price level and inflation. First, here they are for the core PCE measure:

And here are the results for core CPI:

This is more empirically successful than any economic model of inflation that has ever been published. The P* model from the Federal Reserve was comparable in the 1990s, but choked on new data. Maybe the ITM will choke on new data, too. However, time and data will tell, not theoretical arguments. The details of the model are all written down in the "for beginners" posts linked on the sidebar of this blog. Have a go yourself! All of the data I've used is from FRED (except the Japanese monetary base data, which is from here).

Also note that the fact that both fits are good means that core PCE and core CPI inflation are not independent measures. The information transfer model describes them both equally well given the data we have, so there is no telling which is the "real" measure of inflation. What is interesting is that this observation holds regardless of whether you believe the information transfer model or not. The ITM fit to each measure implicitly defines a global transformation you can perform to turn CPI data into PCE data, meaning that PCE = f(CPI), so any property of CPI can be mapped to a property of PCE. The next time you see an economist chide someone for confusing the two measures, you can come to the rescue with the retort: there is no economically meaningful difference between PCE and CPI inflation. It's like the fact that there is no physically meaningful difference between measuring distances in kilometers or miles.
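A sketch of that implied transformation (the fitted forms and coefficients below are made up for illustration; in practice they would come from the ITM fits): since each index is a monotone function of the same underlying state, inverting one fit and composing it with the other gives PCE = f(CPI).

```python
import numpy as np

# Hypothetical monotone fits of each index to a common underlying state x
# (coefficients invented for this illustration):
def cpi_fit(x):
    return 0.9 * x**1.05

def pce_fit(x):
    return 0.8 * x**1.00

def cpi_to_pce(cpi_value):
    """PCE = f(CPI) = f_pce(f_cpi^{-1}(CPI))."""
    x = (cpi_value / 0.9) ** (1.0 / 1.05)  # invert cpi_fit
    return pce_fit(x)

# Any CPI reading maps to the PCE reading at the same underlying state
print(np.isclose(cpi_to_pce(cpi_fit(2.0)), pce_fit(2.0)))  # True
```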

And it's not like the model for the US is a fluke; here is Japan:

The empirical accuracy doesn't mean various theoretical criticisms of the information transfer model are prima facie wrong; it just means they should be discounted (i.e. given a low prior probability of being correct) unless they are accompanied by a model of comparable empirical success. Or as I said to Philippe:
The benefit of using M0 as the explanatory variable for inflation is that it gives an incredibly accurate model. If you have done [sic] better model that uses M1 or MB that explains the price level, I'm interested in hearing about it! But if you're just making hand waving arguments without any empirical evidence, that seems like a step backwards from the ITM.
The ITM doesn't just do inflation -- the interest rate model is pretty good too:

If you're of the opinion that the Fed's expansion of reserves will result in inflation, that people's expectations matter, or even that human decisions and behavior matter in a macroeconomic system at all, I'd first like to see some lines going through some data points.

Update 7:30pm MST, for Nick's comment below: 

Both of the error results for the PCE and CPI inflation models above have approximately zero mean. First, PCE inflation error:

Second, CPI inflation error:

Tuesday, November 11, 2014

In which I irritate Nick Rowe (again)

I am currently on travel for work (again) so light updates this week, but I thought I could irritate Nick Rowe some more. He has a new post up where he shows how police could regulate the speed of drivers without actually pulling anyone over:


I took on two other analogies before here:


... and my critique is similar this time.

The speed limit analogy would make sense if we all had NGDP/inflation meters/pedals. However, since many of the influences that go into NGDP or inflation are beyond our control, it's hard for individuals to target them.

A restaurant owner can't have her patrons enjoy the food 1% more while paying 3% more (to hit 2% inflation, including quality adjustments). She can't help it if suddenly her chief ingredients become cheaper and competition forces her to reduce prices ... This has been happening in the computer industry for a while now, with actual deflation in the tech sector price level.

That's the gist of the linked post; here's some new stuff.


A tiny tweak to the behavior model turns the speed limit equilibrium into a boom-bust cycle. If the police don't pull anyone over (and no one is seen being pulled over), people will start to speed more often and the police will have to start pulling people over (concrete steps). Eventually enough people are pulled over (or witness such events) that speeding subsides and we're back to the zero-enforcement scenario, which leads to speeding again and the cycle begins anew.

In the economic version, you'd see central bank targets start to lose their effectiveness, followed by concrete steps.
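Here's a toy version of that tweak (the coefficients are invented; this is a sketch of the mechanism, not a calibrated model): make enforcement respond to speeding with a lag, and the fixed point turns into a persistent cycle.

```python
import numpy as np

def enforcement_cycle(steps=200):
    """Boom-bust speed limit toy: speeding rises when visible enforcement
    is low; enforcement ramps up (with a lag) once speeding is common."""
    speeders = np.zeros(steps)
    enforcement = np.zeros(steps)
    speeders[0] = 0.1
    for t in range(1, steps):
        # drivers speed more when they see little enforcement
        speeders[t] = np.clip(speeders[t-1] + 0.1 * (0.5 - enforcement[t-1]), 0, 1)
        # police react to last period's level of speeding
        enforcement[t] = np.clip(enforcement[t-1] + 0.1 * (speeders[t-1] - 0.5), 0, 1)
    return speeders

s = enforcement_cycle()
print(s[-50:].std() > 0.01)  # True: no settling down; the cycle persists
```

The lag is what matters: if enforcement responded instantly, the system would sit at the fixed point instead of orbiting it.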


You can get the same speed limit equilibrium without expectations. In an ideal gas, the velocity of particles follows a Maxwell-Boltzmann distribution with average speed ~ √(kT/m). Nothing actually acts on the molecules to achieve this at the micro level -- the effect of the cops in this case is an emergent entropic force that does not exist for individuals.
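A quick numerical illustration of that emergence (a standard kinetic-exchange toy, nothing ITM-specific): random pairwise energy swaps, with no rule acting on any individual molecule, still settle the ensemble into a stable distribution with a well-defined average.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10000
energies = np.full(n, 1.0)               # start all molecules identical
for _ in range(50):
    idx = rng.permutation(n)
    a, b = idx[:n // 2], idx[n // 2:]
    total = energies[a] + energies[b]
    frac = rng.random(n // 2)            # random split of each pair's energy
    energies[a], energies[b] = frac * total, (1.0 - frac) * total

speeds = np.sqrt(2.0 * energies)         # unit mass
print(np.isclose(energies.mean(), 1.0))  # True: the average is an ensemble property
print(speeds.std() > 0.1)                # True: a stable spread emerged
```

No individual molecule "targets" the average speed; it is only defined, and only stable, at the level of the whole ensemble.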


If inflation and demand don't actually exist at the micro level, then the speed limit analogy doesn't make any sense at all. We can't measure our individual "speed", much as an individual molecule doesn't have a temperature. There is nothing to expect! You can't get tickets for going "undefined" ...

[This is not a very good post. BTW I'm writing it in a hotel room down the street from the St Louis Fed with one finger on my iPad, hence the picture above.]

Sunday, November 9, 2014

The information transfer model and the econoblogosphere

Paul Krugman has a post up that criticizes the "neo-Fisherite" view. Oddly, I completely agree with his post, yet I also wrote a post agreeing with John Cochrane's post, which Krugman calls the "highest level" of Keynesian denial. It might be confusing exactly where I or the information transfer model (ITM) stand.

I wrote two posts in the past that illustrate a bit of how the ITM fits in with both the history of macroeconomic thought and the debate around the current crisis. Actually, the ITM can help explain the history of macroeconomic thought. At its heart, the ITM says Paul Krugman is always right, but not necessarily for the right reasons. Anyway, this post fleshes that statement out with a bit more detail in a fun, easy-to-read listicle format.

First, let me say that the ITM has a critical parameter κ (kappa, basically named after the parameter in this paper by Fielitz and Borchardt), called the information transfer index. In short, it represents the relative size of information chunks received by the supply and transmitted by the demand. Anything you say about the information transfer model has a caveat depending on the value of κ. There are two major κ regimes, and they're not very far apart numerically. When κ ~ 0.5, the ITM reduces to the quantity theory of money (and is similar to the AD/AS model with monetary offset). When κ is larger, getting towards κ ~ 0.8 to 1.0, the quantity theory stops being a good approximation to the ITM and the IS-LM model becomes a better approximation. The details are linked at this post.
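For concreteness, here is a stripped-down version of the two regimes (normalization and the NGDP dependence are suppressed, so treat the exponent as illustrative rather than the fitted model): the price level goes roughly as P ~ M^(1/κ − 1).

```python
def price_level(M, kappa, P0=1.0):
    """Toy ITM-style price level P ~ M**(1/kappa - 1)."""
    return P0 * M ** (1.0 / kappa - 1.0)

# kappa ~ 0.5: doubling money doubles prices (quantity theory of money)
print(price_level(2.0, 0.5))   # 2.0
# kappa ~ 1.0: doubling money leaves prices flat (information trap)
print(price_level(2.0, 1.0))   # 1.0
# kappa > 1.0: monetary expansion is deflationary (the Japan case)
print(price_level(2.0, 1.25) < 1.0)  # True
```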

The thing is that κ can change over time and is different for different countries. That ends up muddling things a bit so I end up agreeing with Scott Sumner, Nick Rowe, Paul Krugman, David Glasner, or John Cochrane on various occasions and disagreeing with them on other occasions.

One additional detail is that the ITM says that κ tends to rise as economies get bigger and can only be reset by changing the definition of money or a monetary policy regime change. Hence you can consider old/advanced economies as generally having larger κ while younger emerging economies have lower κ. This is not always true, but can be a good guide. With that bit of background, on with the listicle!

Expectations
I personally think the way expectations are used in macroeconomics makes the field unscientific. They appear to be important in microeconomics (and game theory) -- and I have no particular problem with the way they are used there. However, mainstream macroeconomics does not appear to place any kind of constraints on what form expectations take, and hence allows anything to happen in a model. This reaches an almost absurd level with e.g. Nick Rowe's insistence that if a central bank is credible with its NGDP (or inflation) target, the economy will reach that NGDP (or inflation) target ... without the central bank actually having to do anything (besides 'be credible'). I've encountered many other theories and papers in my short few years of studying economics that effectively assume the conclusion through expectations. One economist called these chameleon models (although the author does not specifically call out expectations as the source ... however the questionable assumptions in economic models are typically about expectations or human behavior).
That aside, in the information transfer model, 'expectations' as such take the specific form of probability distributions over market variables (they parameterize our ignorance of the future). Since these distributions always differ from the actual probability distributions (we do not have perfect foresight), they represent information loss and hence a drag on economic growth (relative to perfect foresight). Additionally, prices are not only lower than they would be if we knew the actual probability distribution of market variables, but frequently lower than if we parameterized our ignorance as maximal (which is what the information transfer model does).
The monetary base
The monetary base is directly related to short term interest rates in the ITM. However, only the currency component of the monetary base (I've called it M0 as they have in the UK in the past) has any impact on inflation and then only when κ is closer to 0.5. Monetary base reserves have little to do with inflation ... except in the sense that movements in reserves can sometimes cause movements in the currency base.
Liquidity trap
The ITM has a lot of similarities with the liquidity trap when κ ~ 1.0 -- I've called it the "information trap". Monetary policy does not have a strong impact -- neither raising nor lowering interest rates, nor expanding nor contracting the currency base. The "information trap" differs from the modern liquidity trap in that it doesn't have to happen at the zero lower bound (ZLB) ... it is more like Hawtrey's credit deadlock or Keynes' original liquidity trap that didn't have to happen at the ZLB.
The ITM is, in a sense, identical to Paul Krugman's mental model (or what seems to be his mental model) if you replace "normal times" with κ ~ 0.5 and "liquidity trap" with κ ~ 1.0.
The Phillips curve
This sounds reasonable, but doesn't appear to have a strong signal in the data using the ITM. The two variables (inflation and unemployment) have a complicated relationship and the ITM doesn't describe the fluctuations leading to unemployment -- unemployment seems to be the result of, for lack of a better set of words, irrational panic that could only be modeled by modeling human behavior.
(New) Keynesianism
Essentially, the ITM is well-approximated by the IS-LM model when κ ~ 1.0, but not when κ ~ 0.5. So the ITM is sometimes Keynesian inasmuch as the IS-LM model is Keynesian. New Keynesianism is based on the expectations-augmented Phillips curve. Given what I've said about expectations and the Phillips curve above, you can guess that the ITM probably doesn't agree with new Keynesian methodology. This isn't to say the models are wrong or won't outperform the ITM against data -- just that methodologically they represent completely different viewpoints.
Also, since in the US κ has been close to 1.0 both today and during the 1920s-30s, the ITM basically says Keynesianism has been the right theory at those times ... as Paul Krugman says, our world today (and Japan in the 90s) represents the return of depression economics.
(Market) Monetarism
If you take out the expectations piece (the "market" in market monetarism) ... and instead of M2 or MB use M0 ... and give a specific form for the velocity of money, the ITM basically agrees with monetarism ... when κ ~ 0.5. That is to say that Milton Friedman was (almost) right about the US during the 1960s and 70s (but wrong about Japan and the Great Depression). Scott Sumner and Nick Rowe are also right about the 1970s. Additionally, κ < 1.0 for Canada, Australia, China, Russia and Sweden (currently), so monetarism gets those right. However monetarists frequently try to appeal to data from these countries to prove their point about the US, Japan or the EU; in the ITM this is comparing apples and oranges.
Neo-Fisherite model
The only two things that the ITM has in common with this idea/model are that lower interest rates run you into low inflation faster than higher interest rates, and that, if κ gets too large, the dependence of the price level on M0 (currency base) becomes an inverse relationship ... i.e. deflationary monetary expansion (as evidenced by Japan). This latter mechanism will lead to even lower interest rates over time.
However! An economy with a constant rate of inflation and a constant interest rate is impossible (unless RGDP grows at an ever-increasing rate), and the mechanism has nothing to do with expectations, but rather is closely related to the liquidity trap. This makes it different from the typical neo-Fisherite view.
Fiscal policy
Debt-financed fiscal policy always boosts NGDP as it represents an independent process from economic growth. It also raises interest rates (aka 'crowding out'). However, when κ ~ 0.5, if the central bank is targeting inflation or NGDP, fiscal policy will fail to produce inflation (or NGDP) due to monetary offset (q.v. Scott Sumner). When κ ~ 1.0, then there is no monetary offset and the impact on interest rates is minimal. Again this view almost perfectly matches up with Paul Krugman's views, except that "liquidity trap conditions" mean κ ~ 1.0.
(My personal politics on this issue say that even if e.g. unemployment insurance negatively impacted NGDP, we should still do it because we are human beings not heartless automatons optimizing economic variables.)
Coordination failures
David Glasner and Nick Rowe have several posts that present the idea that coordination failures are the cause of recessions (Nick Rowe tends to put the onus on monetary policy, while David Glasner does not). The ITM motivates the idea that coordination causes the recession in the first place (i.e. people en masse becoming pessimistic about the economy) and that the economy does not naturally re-coordinate (create the 'inverse coordination' of the original pessimism) in order to undo that loss in NGDP. That re-coordination would require resources (e.g. debt financed fiscal policy) comparable to the original NGDP loss ... basically the idea that government spending should approximately equal the output gap per Keynesian analysis.
Other ideas?


Saturday, November 8, 2014

More goodness from Nick Rowe

It's a beautiful Saturday here in Seattle, so I'll make this quick (and somewhat out of order). Nick Rowe has a new post up where he puts a few ideas very clearly:
Nominal demand can fall, but it can't fall to zero, unless the stock of base money falls to zero too. And central banks can stop it falling to zero, ZLB or not.
This is basically what happens in the information transfer model -- the existence of money means that the economy can't deviate too far from the NGDP-M0 path without some randomly irrationally exuberant people stepping in and cleaning up.
So any Neo-Wicksellian model with a ZLB must have currency in the model implicitly, even if it's not there explicitly.

Something of a side note on this one: I think the ZLB comes from the existence of money -- an asset that pays a 0% nominal interest rate; that's how currency enters implicitly. If you think the ZLB is important, you're implicitly assuming currency. (I think that is the traditional economics answer.)

This next one is a long one:
If the central bank permanently raises the nominal interest rate, will this result in higher or lower inflation? If you tell me what permanently raising the nominal interest rate does to the base money supply growth rate, I can answer your question. If it causes the base money growth rate to increase permanently, like in John Cochrane's model, then inflation will increase. If it causes base money growth to decrease, then inflation will decrease. Tell me how the central bank raises interest rates, and what it is doing with base money growth when it does this. 
So why not just change the question? Ask what happens to inflation and nominal interest rates if the central bank permanently increases the money growth rate? Inflation and nominal interest rates will eventually increase too. What happens to inflation and nominal interest rates immediately is a little more complicated. It depends on whether prices and inflation are sticky, and on how quickly expectations adjust and people learn that the increase is permanent.
This is what I tackled in these two posts:

And yes, the result is complicated, but it doesn't have to do with expectations or the ZLB (I have explicit money in this model, not implicit, so the ZLB is not relevant), but rather the relative size of the monetary base and the size of the economy. There is a way to translate expectations into this measure if you like expectations more.

Friday, November 7, 2014

Assume a (neo-Fisherite) can opener

Nick Rowe tries to explain the paper [pdf] from Schmitt-Grohe and Uribe. I think I have a shorter version: Schmitt-Grohe and Uribe simply assume their result. They invented a new "lack of confidence shock" with different dynamics that leads directly to the result -- 'confidence' is directly proportional to the nominal interest rate so a higher interest rate brings inflation up and a lower interest rate brings inflation down. Basically, if the central bank sets a higher interest rate in a liquidity trap then people think the economy is going to improve, and, voilà, expectations of improvement lead to improvement.

Now I am interested in the neo-Fisherite view where low interest rates lead to lower inflation -- the information transfer model (ITM) gives that result. However the result in the ITM follows from trying to fit data from before the liquidity trap happened -- it doesn't assume different inflation dynamics in a liquidity trap.

In which I agree with John Cochrane (again)

A rise in the nominal interest rate leading to a dip in inflation followed by increased inflation.
When John Cochrane says it -- Noah Smith is all twitterpated and Nick Rowe is moony-eyed; when I say it -- Noah deletes the comment from his blog post [no link, naturally] and Nick calls me warped. So it goes.

Anyway, Cochrane put out a working paper the other day that's set the econoblogosphere alight and Tom Brown asked for a response from me. I'll just work from the most recent post from Cochrane. I'm a little surprised at how much I've been agreeing with him lately.

The subject this time? The neo-Fisherite rebellion. I'm not fully a neo-Fisherite, but I'm sympathetic to many of the ideas and mechanisms.

There are two major ideas Cochrane considers:

1. Raising nominal interest rates can result in inflation.

He allows that it might cause a bit of deflation first and then result in inflation. I pretty much showed that, depending on the path of policy, this is what happens in the information transfer model. I discuss it in this post where I tried to reproduce the results of Stephen Williamson:


One of the graphs from that post is at the top of this post. Of course, as I mention in the post on Williamson, something gets in the way: the fact that there is no stable result with a permanent increase in interest rates. Guess what Cochrane's second consideration is?

2. If the Fed permanently pegs interest rates, is inflation stable or unstable in the long run?

I mentioned in the link on Williamson above that there is no stable path of the economy with a permanent increase in interest rates in the information transfer model. In this older post, I show that in order to have an economy with some constant interest rate and some constant inflation rate, you must have an economy with an increasing RGDP growth rate:


Cochrane also puts together a list of ideas that have been "demolished" ...
... MV = PY. Sorry, we loved you. But when reserves go from $50 billion to $3 trillion and nothing at all happens to inflation -- or at most we're arguing about percentage points -- it has to go out the window.
This is exactly what I said here two days earlier:


There still is an equation of exchange, but the real equation is log P ~ k(Y, M) log M where k is a function.
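A sketch of what that buys you (the specific form of k and the reference scales are my illustration, not the fitted model): if k falls as the base grows large relative to NGDP, huge base expansions stop moving the price level, which is exactly the failure mode Cochrane describes for MV = PY.

```python
import numpy as np

def k_exponent(Y, M, Y_ref=1.0, M_ref=1.0):
    """Illustrative k in log P ~ k(Y, M) log M, built from an
    information-transfer-index-style ratio
    kappa = log(M/M_ref) / log(Y/Y_ref). The reference scales are made up."""
    kappa = np.log(M / M_ref) / np.log(Y / Y_ref)
    return 1.0 / kappa - 1.0

# Base small relative to NGDP: k ~ 1, money growth moves prices (MV = PY-like)
print(round(k_exponent(Y=100.0, M=10.0), 3))   # 1.0
# Base ballooned relative to NGDP: k collapses, money growth barely moves P
print(round(k_exponent(Y=100.0, M=80.0), 3))   # 0.051
```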
... Keynesian deflationary spirals. Just as much as monetarists worried about hyperinflation, Keynesians' forecast of a deflationary spiral just didn't happen.
For this one, I'd say read any post on this blog about inflation. The price level is a function of M and NGDP, and doesn't depend (in the long run) on the expectations involved in the spiral. The only way deflation can happen is through directly affecting M. In Sweden, the central bank decided to take some currency out of circulation -- and got deflation. In Japan there's a different effect: the currency in circulation has expanded so much that the average information content of each Yen is falling, resulting in deflation.
... The Philips curve. Unemployment went to levels not seen since the great depression; the output gap went to 10 percent and ... inflation moved less than one percent.
Unemployment and inflation trends are pretty unrelated, first here:


And in a more recent update here:

Fiscal stimulus... well, we'll take that up another day.
Given Cochrane's priors, I think he'll say it doesn't have a positive effect (at least). But if he comes up with the answer that's right (ha!) rather than the one he likes, let me tell you what that answer should be -- at least according to the information transfer model.

Fiscal policy has an impact when monetary policy doesn't, such as in a liquidity trap (or generally when the monetary base is large relative to the size of the economy). The two "forces" are orthogonal in a liquidity trap and are closer to parallel when the quantity theory of money applies.


It's pretty amazing that all this comes from some simple considerations stemming from the idea of money transferring information.

Tuesday, November 4, 2014

Maybe it should be called the ignorance equilibrium model?

In a brief email exchange about the information transfer model, Robin Hanson -- one of the more original and insightful thinkers around today (I read his blog regularly, though I expect roughly a 50/50 chance of disagreeing with a given post; maybe we disagree on the process of developing our priors [1]) -- was understandably skeptical and asked a very good question. I am paraphrasing (these are my words, not his), but the essence of the question was: economists have been using information theory for 50+ years, so do you really think you have something new?

This post is going to be an open response to that question and hopefully serve as notes for a future conversation. Please add your comments or criticisms!

I can probably sum up the reason I think I have something different (and as I mentioned to Hanson, the difference may be that it is wrong) in a single sentence: traditional economics puts fathomable human minds at the center of the enterprise whereas information transfer economics assumes humans are inscrutable.

Now I was aware of the importance of information in economics -- my second post on this blog mentions information economics in a postscript:
PS -- While it might eventually be related, the work presented here has nothing immediately to do with "information economics" a la Akerlof, Stiglitz, etc. "Information" is being used in the Shannon sense: a price is communicating a signal from the demand to the supply.
I should have said the price is detecting a signal sent from the demand to the supply, but otherwise that postscript stands up pretty well. The Nobel prize-winning authors I cited are the big names in information asymmetry, where either the seller (usually) or the buyer has more information about the quality of the good being sold. Akerlof showed in his The Market for Lemons that information asymmetry typically drives the value of goods down (or may lead to a lack of any sales at all) and I originally thought this had something to do with non-ideal information transfer where the non-ideal price P* ≤ P (the ideal price) that derives from I(S) ≤ I(D) -- the information received by the supply is at best equal to the information transmitted by the demand.

In the information transfer framework, information asymmetry is a form of non-ideal information transfer. However, in that sentence, the word "information" is being used in two distinct ways. In information asymmetry, a large amount of information about the quality of a good selects a more specific economic state (e.g. quality Q, price P) and thus represents a small amount of information (a small number of possible states consistent with the macroscopic parameters) transferred by the market. A market operating where the demand has a complete lack of knowledge about the quality of a good (i.e. a uniform distribution over Q) has the potential to transfer a large amount of information (a large number of possible states consistent with the macroscopic parameters) ... as long as the supply has an equal uniform distribution.

Ideal information transfer in this case is where the expected distribution over Q on the demand side (call it Qd) and the distribution over Q on the supply side (Qs) are equal, Qd = Qs. If they differ, there is information loss -- measured by e.g. the Kullback–Leibler divergence D so that D(Qd || Qs) = D(Qd || Qd) = 0 in the ideal case.

The information asymmetry argument is then that Qs is some tight distribution around the actual quality Q0 while Qd is more diffuse (and may not even cover Q0). In this case D(Qd || Qs) > 0, and, in this toy model, I(S) ~ I(D) – D(Qd || Qs) so that P* ≤ P. In the information transfer picture, however, the value Q0 is not relevant and Qs could be narrow around Q1 ≠ Q0 and the result would be the same. What matters is the difference between Qs and Qd, not the accuracy of either estimate.
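A minimal numeric sketch of this (my own toy numbers, not from any cited paper), using the distributions named above: a uniform demand-side Qd against a supply-side Qs concentrated on one quality state gives a strictly positive KL divergence, while identical distributions give zero.

```python
# Toy illustration (hypothetical numbers): KL divergence between a
# "tight" supply-side distribution Qs and a "diffuse" demand-side
# distribution Qd over quality states, following the text above.
import math

def kl_divergence(p, q):
    """D(p || q) = sum_i p_i * log(p_i / q_i), in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Five quality states; supply is nearly certain the quality is state 2,
# demand is maximally ignorant (uniform over all five states).
qs = [0.02, 0.02, 0.92, 0.02, 0.02]
qd = [0.20, 0.20, 0.20, 0.20, 0.20]

print(kl_divergence(qd, qd))  # ideal case Qd = Qs: 0.0
print(kl_divergence(qd, qs))  # > 0: information loss when Qd != Qs
```

Note that the divergence only measures the mismatch between Qd and Qs; nothing in the calculation references the "true" quality, which is the point made above.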

What is interesting is that in the information transfer model the approximation I(D) ~ I(S) on average -- meaning Qd and Qs are on average effectively [2] the same -- holds really well for macroeconomic variables (here we've changed Q to mean any relevant metric, e.g. expected NGDP).

Information asymmetry is not important in the generic scenario: it is either always present (see footnote [2]) or always absent. It may have relevance to the dynamics of recessions (e.g. it may suddenly change) -- I don't know. However, information loss is not important to first order.

Hanson pointed me to Robert Aumann [1] and John Harsanyi, Nobel prize-winners for their research in game theory, specifically on repeated games and Bayesian games, respectively.

The information transfer model in no way invalidates the many insights into microeconomics from game theory, and in fact has little to do with game theory. How do I know? Game theory is a representation of economic microfoundations; the information transfer framework makes as few assumptions about microfoundations as possible. We are assuming we don't know any game theory.

The following is a rough sketch, but one way to think about the information transfer model is as an ensemble of games with random payoff matrices [3] (zero-sum and not) with players repeating games, switching between games and possessing some correct and/or potentially incorrect information about the payoff matrix (or its probability distribution). The only "constraint" is that all of the realized payoffs sum up to a macroeconomic observable like NGDP [4].

The information transfer approach assumes maximum ignorance (maximum entropy) about the players, payoff matrices and strategies (let those define the microstates): each microstate consistent with the macroeconomic observable (e.g. NGDP) is equally probable.

You could imagine, in such a model, a spreading malaise where investors feel they have insight into the payoff matrices such that the optimal strategy is not to play. Other possible correlated equilibria could appear. A correlation in strategies would represent a fall in entropy (less ignorance about the microstates) and a fall in NGDP (a recession).

This illuminates how the information transfer model differs from game theoretic takes -- regardless of whether they use perfect information, complete information or incomplete information. A correlated equilibrium or Nash equilibrium where there is some optimal strategy that maximizes the payoff (imagine tit-for-tat as applied to the prisoner's dilemma) represents selecting a particular microstate (or set of microstates) which represents a massive loss of ignorance (entropy) of the microstates. That equates to a loss of NGDP relative to players having a random strategy.
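Here is a small illustration of that entropy accounting (hypothetical strategy counts and probabilities, chosen only for the sketch): uniform play over the strategy microstates maximizes Shannon entropy, and a correlated equilibrium that concentrates play on a few strategies lowers it.

```python
# Sketch with made-up numbers: Shannon entropy of a distribution over
# strategy "microstates". Random (uniform) play maximizes entropy; a
# correlated equilibrium concentrating play on a few strategies lowers
# it -- which in the information transfer picture maps to lower NGDP.
import math

def shannon_entropy(p):
    """H(p) = -sum_i p_i log p_i, in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

n = 8  # eight possible strategies (hypothetical)
uniform = [1.0 / n] * n                      # maximum ignorance
correlated = [0.65, 0.25, 0.10] + [0.0] * 5  # play concentrates

print(shannon_entropy(uniform))     # log(8), the maximum possible
print(shannon_entropy(correlated))  # smaller: entropy (ignorance) lost
```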

One of the issues with the word "information" in information theory is that it is sometimes confused with news or facts, but what is being equated on either side of I(D) = I(S) is the ignorance of the microstates. That ignorance is what is being transferred from demand to the supply. I(D) = I(S) then represents an "ignorance equilibrium". In this model, money is about moving ignorance around.

In searching through the internet, the primary association between economics and information theory the way I am using it is through statistical measures. For example, the Theil index uses information in the same way the information transfer model does. But that is a construction of a particular metric, and teasing out the dependence of macroeconomic outcomes on inequality may well be outside the scope of information transfer economics.

The biggest reason I think this is different from previous work in economics -- and maybe some of you out there can help me with this, since Google will skew its results based on who is searching -- is that when I do a Google image search on "information theory" GDP I mostly get back my own graphs. Have a go yourself:


And a search through Google scholar turns up nothing related:


An additional reason for believing I am approaching economics in an entirely different way is found in this quote from the textbook (from 2008) linked here (I chose this quote because it puts the idea so concisely):
It is important to note that any definition of supply and demand functions should satisfy two conditions. It should reflect the concrete decision-making of producers and consumers, and it should be founded empirically.
I think this sums up the mainstream view. However, according to information transfer economics, the first condition is not just unnecessary -- it is totally wrong. You assume complete ignorance of the concrete decision-making of producers and consumers. The fact that you give this up -- a deeply held belief that human decision-making is important -- makes me think few economists, if any, have tried this approach. It's a bit like asking if physicists have ever considered a theory where energy isn't conserved. You don't really have to know the literature to say that such theories probably fall outside the mainstream -- even if one were empirically accurate.

Yet another reason is that it has been nigh impossible to find models of the price level -- there are a couple out there, but none use information theory.

If anyone else out there knows of some work that closely resembles information transfer economics, let me know!


[1] Aumann's agreement theorem is quite useful if you find yourself disagreeing with someone you suspect of being a Bayesian.

[2] Note this may mean that I(D) ~ I(S) + C or a I(D) ~ I(S), where C is some unknown information scale or a is some unknown dimensionless parameter a < 1 that is characteristic of economic processes. We don't necessarily know what the absolute value is for I(D). However, we could just redefine I*(D) = I(D) – C or I*(D) = a I(D) and proceed as if we did know.

[3] Random payoff matrices have been treated in the literature; here are two papers I've found [pdf]:

Neither is related to information transfer economics.

[4] One could put expected payoffs and expected NGDP here as well.

Quantitative Easing: cleanest experiment ever?

I'm going to take off my information transfer economics hat and put on my scientist lab coat for this post. I'm going to put forward the idea that the quantitative easing (QE) undertaken by the Federal Reserve was in fact the cleanest macroeconomic experiment ever conducted.

We have a variable that changes decisively by orders of magnitude at a specific time -- and where we know to a high degree of accuracy the counterfactual path of that variable. This variable is the central bank reserves. In the following graph we have the full monetary base (which includes reserves, we'll call MB, blue) and the currency component of the monetary base (which doesn't include reserves, M0, red):

The onset of QE 1 (the first round of QE) happens in November 2008; if we were in grade school science class, that's the moment we connected the battery to the light bulb. 

What do we observe?

Here are the graphs of the 10-year treasury rate ('long' rate, RL, red) and the 3-month secondary market rate ('short' rate, RS, blue). Alongside, I've graphed the ratios NGDP/MB (blue) and NGDP/M0 (red) [1].

Before QE, NGDP/MB, NGDP/M0, RL and RS were all highly correlated -- so it was actually impossible to figure out whether any of these variables had anything to do with each other, or whether it was just spurious correlation. QE starts and RS falls precipitously. We put the battery leads on the terminals and the blue light lit up.

This happens in several countries by the way (just search on interest rates in the search box on this blog). The experiment was reproducible. I do want to note that Nick Rowe comes close to this measure in this post, except he used M2 instead of M0 or MB.

What else do we observe?

Well, economists have noted for a long time that the price level P and the money supply seem to be correlated, especially during periods of high inflation. Let's plot the Pearson's correlation coefficient of MB (blue) and M0 (red) with P (as well as the correlation of MB and M0, green):

Before QE, all of these are fairly highly correlated -- actually MB and M0 are almost perfectly correlated. This really doesn't tell us very much. NGDP is also highly correlated with the price level. So is population, and in fact any exponentially growing variable.
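The spurious-correlation point is easy to demonstrate with synthetic series (made-up growth rates, not the FRED data in the graphs): two causally unrelated exponential trends are almost perfectly correlated.

```python
# Illustration with synthetic data (not the actual series plotted
# above): two unrelated exponentially growing series are almost
# perfectly correlated, which is why pre-QE correlations between
# exponentially growing macro variables are uninformative.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A "price level" growing at 2% per period and a causally unrelated
# series growing at 3% per period:
t = range(100)
p = [math.exp(0.02 * k) for k in t]
other = [math.exp(0.03 * k) for k in t]

print(pearson_r(p, other))  # close to 1 despite no causal link
```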

With the onset of QE, the correlation between MB and P drops precipitously (as well as the correlation between M0 and MB). We see that the counterfactual path MB without QE would have been more correlated with P (effectively given by the red line). We attached the battery leads to the terminal and the blue light didn't light up (the red light stayed lit).

This means central bank reserves have nothing to do with the price level or inflation. Now this doesn't prove that M0 does have something to do with inflation, just that central bank reserves have nothing to do with inflation.

Now there are two arguments out there against this. The first is: the inflation data has been faked. Maybe it has, but realize that same data is the basis for believing the original correlation in the first place. The second is: the central bank reserves are expected to vanish. To me, this is a bit like saying the blue light didn't light because the light bulb expects the battery to be taken away. My counter-argument is Japan:

Both MB and M0 become uncorrelated with P (in fact, they become anti-correlated), so not only does Japan expect the reserves to be taken away, but, as time goes on, a greater and greater portion of the currency in circulation? For each Yen that is printed, the market thinks the central bank will take not just that Yen out of circulation, but a second one as well? Additionally, MB and M0 never become uncorrelated with each other -- there never were any large (i.e. order of magnitude) changes in MB in Japan until very recently. There is really no evidence from Japan that MB and M0 are any different from each other -- at least in terms of the price level (MB is still directly related to short term interest rates in Japan, as in the pictures above).

This remarkably clean experiment has taught us two things:
  1. Central bank reserves are directly connected to short term interest rates.
  2. Central bank reserves are unrelated to inflation.
Anyone claiming otherwise is making a strongly counterintuitive claim that, given the uninformativeness of macroeconomic data in general, should be considered speculative at best.

[1] In physics, we like to make dimensional analysis arguments (asking 'what else could it be?' about various combinations of scales like masses and lengths) -- if the interest rate, with dimensions (units) of 1/year, is related to the money supply M (with units of dollars), then it will have to be coupled with something with the units of dollars/year -- NGDP fits the bill. Nominal wages and government spending also fit the bill -- however, each of these is highly correlated with NGDP.

Sunday, November 2, 2014

Expectations (rational or otherwise) and information loss

The market expectation for NGDP (B, orange) and the actual distribution (A, blue).

This is going to be a rough sketch of an argument and a derivation. The one sentence summary is that there is always information loss in an economy, and that information loss is measured by a combination of NGDP and inflation.

Let's say that the market has an expectation of future NGDP measured by some distribution B; let's call the actual [1] distribution A. Cartoons of these two distributions are shown in the graph at the top of this post. The difference between these two distributions, measured by e.g. the KL divergence, is:

$$D_{KL} (A || B) = \Delta I$$

This is the information gained by learning the real distribution A given distribution B, or alternatively, the information lost by assuming B. I previously talked about expectations destroying information -- they lead to information loss -- here (there's also a bit on the KL divergence). This information loss means that future NGDP will be lower than the "ideal" NGDP, since information in the source (aggregate demand) was not captured at the destination (aggregate supply). Another way to put this is that economic growth is lower than it could be if we knew the future as well as it could be known (i.e. knowing A).

Let's define this ideal (zero information loss) NGDP; we'll call it $N^{*}$ -- measured (observed) NGDP will be denoted $N$. Now the information loss should be equal to the difference in the "economic entropy" (see here, here) of these two NGDPs (I'll work in "natural" units where the parameter $c_{0} = 1$):

$$\Delta I = \log N^{*} ! - \log N !$$

Information and entropy are essentially the same thing, just with different units. Using Stirling's approximation for large $N$, we can write this as

$$\Delta I = N^{*} \log N^{*} - N \log N$$
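A quick numerical aside of my own (not part of the original derivation): Python's math.lgamma gives the exact $\log N!$, so we can check that the Stirling form becomes an excellent relative approximation for large $N$ (the step above also drops the terms linear in $N$, which only shift $\log N$ by an amount of order one).

```python
# Sanity check of Stirling's approximation log N! ≈ N log N − N.
# math.lgamma(n + 1) returns the exact value of log(n!).
import math

def stirling(n):
    return n * math.log(n) - n

for n in (10, 1_000, 1_000_000):
    exact = math.lgamma(n + 1)              # exact log(n!)
    rel_err = (exact - stirling(n)) / exact
    print(n, rel_err)                       # relative error shrinks with n
```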

Dividing through by the monetary aggregate that defines the unit of account $M$, we get:

$$\frac{\Delta I}{M} = \frac{N^{*}}{M} \log N^{*} - \frac{N}{M} \log N$$

And using the information transfer differential equation $dN/dM \sim (1/\kappa) N/M$, we can say:

$$\frac{\Delta I}{M} = \kappa \frac{dN^{*}}{dM} \log N^{*} - \kappa \frac{dN}{dM} \log N$$

Let's identify the ideal NGDP with the shock-less NGDP that defines inflation (and is the result of numerically integrating that differential equation without shocks) so that we can use $d N^{*}/dM \sim P$ where $P$ is the price level and $N^{*} \sim M^{1/\kappa}$.

$$\frac{\Delta I}{M} = P \log M - \kappa \frac{dN}{dM} \log N$$

Divide through by $\log M$

$$\frac{\Delta I}{M \log M} = P - \kappa \frac{dN}{dM} \frac{\log N}{\log M}$$

Using the definition of the information transfer index $\kappa$, we finally obtain:

$$\text{(1) } \frac{\Delta I}{M \log M} = P - \frac{dN}{dM} = \frac{dN^{*}}{dM} - \frac{dN}{dM}$$

The information loss is proportional to the difference between the growth rates of the ideal NGDP (also known as the price level $P$) and the observed NGDP ($N$) with respect to the monetary aggregate that defines the unit of account ($M$) [2].
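As a toy numerical check of equation (1) -- with everything hypothetical -- I can pick a constant $\kappa = 1/2$ (so $N^{*} \sim M^{2}$) and an observed NGDP shocked 10% below the ideal path; the information loss rate $P - dN/dM$ then comes out positive, as it should:

```python
# Toy check of equation (1); constant kappa = 0.5 and the 10% shock
# are hypothetical choices for illustration, not fits to data.
kappa = 0.5

def n_star(m):
    """Ideal NGDP: N* ~ M^(1/kappa) = M^2 for kappa = 1/2."""
    return m ** (1.0 / kappa)

def n_obs(m):
    """Observed NGDP: the ideal path after a 10% negative shock."""
    return 0.9 * n_star(m)

def deriv(f, m, h=1e-4):
    """Central finite difference df/dm."""
    return (f(m + h) - f(m - h)) / (2 * h)

m = 100.0
p = deriv(n_star, m)             # price level P = dN*/dM = 2M
loss_rate = p - deriv(n_obs, m)  # Delta I / (M log M) from equation (1)
print(p, loss_rate)              # ≈ 200 and ≈ 20: positive information loss
```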

Some observations:
  • Rational expectations is the assumption that the distributions A and B are equal, that the information loss $\Delta I = 0$ and that $dN/dM = P$. This is empirically false (what we really have is $dN^{*}/dM = P$). While rational expectations may be a good first order approximation (over the short run), the market does not accurately know the distribution from which macro random variables are selected.
  • An inflation target or price level target is a target for the first term in equation (1) while a NGDP level or growth rate target is a target for the second term. That is to say, an inflation or price level target is an ideal NGDP target ($N^{*}$) analogous to an NGDP target. Since $\Delta I$ is a priori unknown, setting one target sets the other. Another way to put this same information is that a given inflation target cannot be achieved simultaneously with an NGDP target (such an economy is over-determined).
  • It is interesting that if one adds the assumption that $\Delta I$ is stable (e.g. is a stochastic process with unit root), this comes to the same conclusion as Nick Rowe: "But we could live in a [self-stabilizing economy] ... if we adopted price level path targeting [i.e. $N^{*}$], or NGDP level path targeting [i.e. $N$]." However, there is no reason to assume $\Delta I$ is stable, and the knob the central bank would turn would be the concrete step of adjusting the currency base ("M0"), not setting expectations (i.e. targeting $\Delta I$, which is impossible since the distribution A is fundamentally unknowable -- it requires knowing not only the future, but all possible futures). Nick's post was the inspiration for this one, which I previously mentioned working on here.
  • If rational expectations were true (in the model above), NGDP and the price level would not be independent quantities.

[1] This would be measured using a large number of identical economies and observing the measured NGDP in each economy.

[2] Note this is a growth rate with respect to the monetary aggregate that defines the unit of account, not time.