Tuesday, December 29, 2015

Facebook as microscope

A quick one while I have a moment. I want to write more about this later:

But a simple point I'd like to make is that we should view Facebook and Twitter as measurements of social interaction -- complete with measurement error and bias. Like Heisenberg's microscope, the measurement also affects how people behave. I know that much of my own Facebook feed has collapsed to marriage announcements, baby pictures, food pictures and vacations. People literally say they don't want to hear bad news in their feeds and tend to refrain from posting about negative events in their life (or alternatively, sharing every negative event).

Some of us only have Facebook friends we've met IRL, as we used to say. Others friend anyone who requests.

As such, Facebook is obviously affecting the exact social interactions you wish to measure. Therefore it is likely only a good source of data for the interactions that are unaffected.

An analogy is a telescope with aberration. It's not good for panchromatic imaging, but at a single wavelength it can work just fine.

Friday, December 18, 2015

Holiday/blogging hiatus

I have some time off for the holidays and will be spending time with family; I probably won't have much time to write anything (maybe respond to comments). After the rate hike, not much more exciting economic news will come out until the end of January anyway (with the next Fed Meeting January 26-27, 2016 with 2015 Q4 NGDP and PCE inflation updates for the predictions).

We might have enough bi-weekly monetary base observations to see if this prediction is true.

Maybe we'll finally get that recession (???) ...***

*** Not actually rooting for a recession.

Wednesday, December 16, 2015

ZIRP is over ... let the experiment begin.

So it looks like it was option C: a 25 basis point hike to a range of 0.25% to 0.5%.

Now the experiment begins ...

The graph above shows what will happen to the monetary base if the information equilibrium model is right (0 represents the no change path) and the rate holds for a couple years (it probably won't, I imagine Summer of 2016 will see a second rise).

Also note that the information equilibrium model doesn't tell us about the non-equilibrium adjustment process, so we don't know how fast the base will fall. Keep checking the bi-weekly adjusted monetary base -- it should stop around 2.6 trillion dollars for option C unless there is another rate change intervening.

Tuesday, December 15, 2015

Who said macroeconomics isn't an experimental science?

Well, according to the econoblogosphere echo-chamber, it looks like we're going to get a rate hike tomorrow. That means we get a test of a couple theories ... one pseudo-scientific and two scientific:

  • An interest rate hike will be a contractionary signal leading to lower inflation and lower NGDP. However, interpreting this signal based on the Fed's statements will be like reading tea leaves, allowing most of the establishment macro profession (and even upstart market monetarists) to claim whatever happens is consistent with their model, or with whatever post hoc reading of the Fed statement makes the status quo make sense retroactively.
  • An interest rate hike will cause a neo-Fisherite rise in inflation
  • The interest rate hike will mostly have an effect on the monetary base [from me]

Information theory 101 (information equilibrium edition)

With this blog ostensibly dedicated to the purpose of using information theory to understand economics, it seems only natural to have a short introduction to information theory itself. Or at least as much as is necessary to understand this blog (which isn't much). Nearly everything you'd need is contained in Claude Shannon's 1948 paper:

A mathematical theory of communication [pdf]

In that paper Shannon defined what "communication" was, mathematically speaking. It comes down to reproducing a string of symbols (a message) selected from a distribution of messages at one point (the transmitter, Tx) at another point (the receiver, Rx). Connecting them is what is called a "channel". The famous picture Shannon drew in that paper is here:

A communication channel

Information theory is sometimes portrayed as springing fully formed from Shannon's head like Athena, but it has precursors in Hartley and Nyquist (both also worked at Bell Labs, like Shannon), and Hartley's definition of information (which coincides with Shannon's when all symbols are equally probable) is actually the one we resort to most of the time on this blog.

One improvement Shannon made was to come up with a definition of information that could handle symbols with different probabilities. In their book, Shannon and Weaver are careful to note that we are only talking about the symbols when we talk about information, not their meaning. For example, I could successfully transmit the series of symbols making up the word

billion

but the meaning could be different to a British English speaker (10⁹ or 10¹²) and an American English speaker (10⁹). It would be different again to a French speaker (10¹²). The information would also be slightly different since letter frequencies (i.e. their probabilities of occurring in a message) differ slightly among the languages/dialects.

Shannon came up with the definition of information by looking at its properties:

  • Something that always happens carries no information (a light that is always on isn't communicating anything -- it has to have at least a tiny chance of going out)
  • The information received from transmitting two independent symbols is the sum of the information from each symbol
  • There is no such thing as negative information

You can see these are intimately connected to probability. Our information function I(p) -- with p a probability -- therefore has to have the mathematical properties

  • I(p = 1) = 0
  • I(p₁ p₂) = I(p₁) + I(p₂)
  • I(p) ≥ 0

The second one follows from the probability of two independent events being the product of the two probabilities. It's also the one that dictates that I(p) must be related to the logarithm. Since all probabilities have to obey 1 ≥ p ≥ 0, we have

I(p) = log(1/p)

This is the information entropy of an instance of a random variable with probability p. The Shannon (information) entropy of a random event is the expected value of its information entropy

H(p) = E[I(p)] = Σₓ pₓ I(pₓ) = - Σₓ pₓ log(pₓ)

where the sum is taken over all the states x with probabilities pₓ (where Σₓ pₓ = 1). Also note that we take p log(p) = 0 for p = 0. There's a bit of abuse of notation in writing H(p). More accurately you could write this in terms of a random variable X with probability function P(X):

H(X) = E[I(X)] = E[- log(P(X))]

This form makes it clearer that X is just a dummy variable. The information entropy is actually a property of the distribution P the symbols are drawn from:

H(•) = E[I(•)] = E[- log(P(•))]
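As a quick sanity check on these definitions, the entropy of a small example distribution can be computed directly; this is just an illustrative sketch (the distributions are arbitrary, not anything from the post):

```python
import math

def shannon_entropy(probs):
    """H = -sum p log p in nats; terms with p = 0 contribute nothing."""
    return -sum(p * math.log(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))       # a fair coin: log 2 ≈ 0.693 nats
print(shannon_entropy([1.0]))            # a certain event carries no information: 0
print(shannon_entropy([0.999, 0.001]))   # an almost-always-on light: ≈ 0.008
```

Note how the nearly-certain light bulb carries almost no information, matching the first property in the list above.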

In economics, this becomes the critical point; we say that the information entropy of the distribution P₁ of demand (d) is equal to the information entropy of the distribution P₂ of supply (s):

E[I(d)] = E[I(s)]

E[- log(P₁(d))] = E[- log(P₂(s))]

E[- log(P₁(•))] = E[- log(P₂(•))]

and call it information equilibrium (for a single transaction here). The market can be seen as a system for equalizing the distributions of supply and demand (so that everywhere there is some demand, there is some supply ... at least in an ideal market).

Also in economics (at least on this blog), we frequently take P to be a uniform distribution (over x = 1..σ symbols) so that:

E[I(p)] = - Σₓ pₓ log(pₓ) = - Σₓ (1/σ) log(1/σ) = - (σ/σ) log(1/σ) = log σ

The information in n such events (a string of n symbols from an alphabet of size σ with uniformly distributed symbols) is just

n E[I(p)] = n log σ
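The uniform-distribution result (and the n log σ information in a string) is easy to verify numerically; σ and n here are arbitrary example values:

```python
import math

sigma = 26   # arbitrary example alphabet size
H = -sum((1 / sigma) * math.log(1 / sigma) for _ in range(sigma))
print(H, math.log(sigma))   # both ≈ 3.258

n = 100      # a string of n uniform symbols carries n log(sigma)
print(n * H)                # ≈ 325.8
```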

Or another way using random variable form for multiple transactions with uniform distributions:

E[- log(P₁(•)P₁(•)P₁(•)P₁(•) ... )] = E[- log(P₂(•)P₂(•)P₂(•)P₂(•) ...)]

n₁ E[- log(P₁(•))] = n₂ E[- log(P₂(•))]

n₁ E[- log(1/σ₁)] = n₂ E[- log(1/σ₂)]

n₁ log(σ₁) = n₂ log(σ₂)

Taking n₁, n₂ >> 1 while defining n₁ ≡ D/dD (in another abuse of notation where dD is an infinitesimal unit of demand) and n₂ ≡ S/dS, we can write

D/dD log(σ₁) = S/dS log(σ₂)

so that, rearranging,

dD/dS = k D/S

where k ≡ log(σ₁)/log(σ₂) is the information transfer index. That's the information equilibrium condition.
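The general solution of the information equilibrium condition is D = D₀ (S/S₀)ᵏ, which can be checked by integrating dD/dS = k D/S numerically; the parameter values below are arbitrary illustrations:

```python
# Euler-integrate dD/dS = k D / S and compare to the closed form D = D0 (S / S0)^k
k, S0, D0 = 1.5, 1.0, 1.0   # arbitrary example values
S, D, h = S0, D0, 1e-5
while S < 2.0:
    D += k * (D / S) * h
    S += h

closed_form = D0 * (2.0 / S0) ** k   # = 2^1.5 ≈ 2.828
print(D, closed_form)
```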


PS  In another abuse of notation, on this blog I frequently write:

I(D) = I(S)

Where I should more technically write (in the notation above)

E[n₁ I(P₁(d))] = E[n₂ I(P₂(s))]

where d and s are random variables with distributions P₁ and P₂. Also note that these E's aren't economists' E operators, but rather ordinary expected values.

Monday, December 14, 2015

An empirical validation of k - (k - 1) = 1

The equations for the price level (P) and output (N) in the information equilibrium model have the forms:

N ~ (M/m0)ᵏ
P ~ k (M/m0)ᵏ⁻¹

Which means that

log N ~ k log M
log P ~ (k-1) log M

or, in terms of the empirically measured slopes,

log N ~ k₁ log M
log P ~ k₂ log M

So that the difference in the slopes in log-log space between the first relationship and the second relationship is k₁ - k₂ = k - (k - 1) = 1 (if measured over short time periods -- since k is changing). In the real world there are shocks that come from the changes in the labor supply (E) so that we should actually add in the slope of 

log E ~ s log M

So that 

k₁ + s - k₂ ≈ 1

And guess what: it works! Non-ideal information transfer during recessions means there should be deviations, and in general I(N) > I(M) so P < P* (the ideal price level), so we fall below the ideal value:
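Ignoring the labor-shock term, the slope arithmetic can be sketched with a synthetic fit (the value of k and the offsets are arbitrary examples):

```python
import numpy as np

k = 2.1                            # arbitrary example IT index
logM = np.linspace(0.0, 1.0, 50)   # synthetic log money supply
logN = k * logM + 3.0              # log N ~ k log M (offset arbitrary)
logP = (k - 1) * logM + 1.0        # log P ~ (k - 1) log M

k1 = np.polyfit(logM, logN, 1)[0]  # fitted slope of log N vs log M
k2 = np.polyfit(logM, logP, 1)[0]  # fitted slope of log P vs log M
print(k1 - k2)                     # ≈ 1
```

With real data the labor-supply slope s enters as in the text, and the check becomes k₁ + s - k₂ ≈ 1.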

Abenomics or optical illusion?

David Beckworth highlights a segment of the data and concludes that Abenomics has accelerated growth in Japan:
More generally, Abenomics has been mildly successful at reflating the economy. The figure below shows Japan's nominal GDP with the Abenomics period highlighted in red. It has risen relatively rapidly.

I would make the case that the Keynesian component ("fiscal arrow") explains an acceleration as much as the monetary one ("monetary arrow"). But I think some perspective is lost because Japan has basically zero net NGDP growth over the past twenty years.

Let's artificially add 2% trend growth to Japan's numbers (this will only shift the NGDP growth rate graph up and down) and take a new look:

Well, now it looks like every other economy's gradual return to trend after 2008. And the quantitative easing period looks like trend growth. That is to say the lack of growth exaggerates the return to trend in the graph of Japan's NGDP. (Note the lack of growth also exaggerates the noise in Japan's NGDP numbers because a plot of NGDP doesn't require as large a range versus its domain.)

So maybe the "mean reversion" is over? The IT model predicts lower NGDP going forward.

The key question as always is: What is the counterfactual?

Who has two thumbs and isn't stumped?

 ... this guy.

From the WSJ (via Tim Duy (via Mark Thoma)) we get a nice compilation of central bank forecasts from 2011 to 2015 along with actual inflation. Although the information equilibrium model didn't exist in 2011, I thought I'd run the projections using the data that would have been available at the time (although it does include revisions, but these shouldn't perturb the results too much since they're relatively stable) and overlay the results on the nice WSJ graph:

For reference, the various forecasts are here.

Saturday, December 12, 2015

Switzerland's negative interest rates

Here's the model for Swiss interest rates presented both as an upper bound (representing ideal information transfer so that observed interest rates are less than the ideal interest rate r < r*) and as a fit:

Negative interest rates can be seen in the "bound" form as representing un-modeled complex economic interactions as discussed here using an ideal gas as an analogy. In that post, the PV diagram for a gas is observed to be below the "ideal bound" set by the ideal gas law because of molecular forces and condensation from a gas to a liquid.

In the interest rate model, we'd imagine that because of e.g. program buying by mutual funds or the costs of holding physical currency, information equilibrium would be a bad assumption; since the bound is near zero, negative rates become possible.

The effect of a rate increase coming out of the Fed's Dec 2015 meeting

The emerging consensus (here, or in summary here) is that the Fed will/should "raise rates" after its December meeting. This is probably a bad idea for reasons that fall outside the realm (scope) of the information equilibrium model -- the IEM actually says it probably won't matter until reserves get down to pre-2008 levels.

There is one issue here: since the current rate is actually a ceiling (0.25%, here) and a floor (0%, here -- and my vote for most boring FRED graph), what does "raise rates" mean? Assuming it'll just be a 25 basis point increase [1], we have three options:
A. Raise the ceiling? 
In this case, nothing should happen. Market rates are already below the ceiling, so it shouldn't make a big difference. However, instead of fluctuating between 0 and 0.25%, rates will fluctuate between 0 and 0.5% -- a slightly higher average.
B. Raise the floor to the ceiling? 
This would return us to the pre-2008 situation where the Fed had a single target rate, so I kind of doubt they'd want to give up their newly normalized flexibility (hence option C below is more likely). It would probably be an "effective" 10 basis point increase because of the current effective Fed funds rate.
C. Raise the floor and the ceiling? 
In this case we get a new ceiling of 0.5% and a new floor of 0.25%. As the market seems to oscillate between the current floor and ceiling, I'd imagine it would oscillate between this new floor and ceiling ... resulting in an effective 25 basis point increase.
Now that we have these contingencies, I can make a prediction of what will happen to the monetary base (assuming the path of NGDP remains unchanged). To do the averages (e.g. between 0.25 and 0.5), I took the average of the logarithm (using the lowest observed Fed funds rate of 0.04 as a floor). That means:

No change (0) = EFF of 0.1%
Scenario A = 0.14%
Scenario B = 0.25%
Scenario C = 0.35%

I also added 1% and 2% for reference.
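For concreteness, the scenario values above are just averages of the logarithm (i.e. geometric means) of the relevant floor and ceiling, with the 0.04% observed floor standing in where needed:

```python
import math

def log_average(floor, ceiling):
    """Average of the logarithms, i.e. the geometric mean of the two rates."""
    return math.exp((math.log(floor) + math.log(ceiling)) / 2)

print(log_average(0.04, 0.25))   # no change: ≈ 0.10 (percent)
print(log_average(0.04, 0.50))   # scenario A: ≈ 0.14
print(log_average(0.25, 0.50))   # scenario C: ≈ 0.35
```

Scenario B is simply the single rate 0.25%, so no averaging is needed there.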

Note that if the monetary base goes to the levels corresponding to these rates (barring a sudden change in NGDP), it would represent a pretty remarkable success of the interest rate model functional form:

log(r) = c log(NGDP/MB) + c log(k)

with c = 2.8 and c log(k) = -11.1 (these are the same values as other places I've presented them, just written in a different form).
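Inverting this functional form for the monetary base reproduces the figures above; a sketch assuming NGDP ≈ $18 trillion (my round number, not part of the fit) and r expressed as a decimal fraction:

```python
import math

c, c_log_k = 2.8, -11.1   # fit values quoted in the post

def base_for_rate(r, ngdp=18.0):
    """Invert log(r) = c log(NGDP/MB) + c log(k) for MB.

    r is the effective Fed funds rate as a decimal fraction;
    ngdp is in trillions of dollars (18 is a rough assumed value).
    """
    return ngdp / math.exp((math.log(r) - c_log_k) / c)

print(base_for_rate(0.001))    # EFF of 0.1% -> roughly 4 trillion (today's base)
print(base_for_rate(0.0035))   # scenario C's 0.35% -> roughly 2.6 trillion
```

The scenario C value lands on the ~2.6 trillion dollar figure quoted in the December 16 post above.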


Update 12/13/2015

Calculated Risk says that Option C looks like the analyst consensus.



[1] There is the possibility of raising the ceiling by more than the floor. But I'll ignore that here.

Friday, December 11, 2015

Japan's RGDP growth

I wanted to give this treatment to Japan (at least as far as the RGDP model goes towards the end), but unfortunately I was unable to find good data for the total employed population (for the RGDP fluctuations). So we're just left with trends. Anyway, the price level for Japan last discussed here is below:

I have only occasionally shown the NGDP-M0 path for Japan as well. The thing is that it is such a short segment of data (1994-present) that it's really hard to get an assessment of the model errors. Here's the fit at roughly the same scale I show the US NGDP-M0 path:

Really, we don't have a long NGDP time series. If anyone has a good source out there, let me know. One result of this is the NGDP trend:

The interesting thing is that, like in the price level graph, it appears Abe has benefited from taking office (at the beginning of 2013) at the peak of a negative fluctuation (in both NGDP and inflation). The mean reversion (and fiscal stimulus) has made it look like "monetary Abenomics" has done something. The resulting trend RGDP growth looks like this:

The real story appears to be a negative fluctuation in inflation and NGDP around the time Abe takes office coupled with real growth on the heels of the financial crisis. The upward trend in RGDP starts at least a year before Abe, if not during the trough of the 2008 financial crisis.

In a sense, Abe has benefited from timing in the same way Jeb Bush did, except Abe came in at the bottom of a trough rather than exiting just before a bubble burst. 

Scale invariance and constant returns to scale

As an addendum to this post, I thought I'd talk a bit more about 1) scale invariance and 2) constant returns to scale that may help with some understanding -- both yours and mine! There is not meant to be a point to this post. It's just general information and making some distinctions.

The information equilibrium Solow (Cobb-Douglas) production function model (with output in equilibrium with capital N ⇄ K and labor N ⇄ L) gives us the form

(N/N0) = (K/K0)ᵝ (L/L0)ᵞ

Actually this helps clear up one of the decent criticisms of the Cobb-Douglas production function that comes from Austrian economists. They say that the units of total factor productivity are meaningless unless β = γ = 1. It's true. If you take the function:

N = A Kᵝ Lᵞ

The units of total factor productivity are

[A] = n k⁻ᵝ l⁻ᵞ

Where [N] = n, [K] = k and [L] = l. If β and γ aren't 1, then total factor productivity has units of inverse capital to some fractional power times inverse labor to some fractional power times output. Of course, there are different places to put the A ... sometimes it goes with the L so that:

N = Kᵝ (A L)ᵞ

But that just requires a different kind of weirdness with units to make it work out. The information equilibrium version says that

A = N0/(K0ᵝ L0ᵞ)

and we have

N = N0 (K/K0)ᵝ (L/L0)ᵞ = A Kᵝ Lᵞ

... i.e. A represents a combination of the scale of labor, the scale of capital and the scale of the economy (or possibly A = L0 with some other bits in the other case). The main point is that it isn't its own scale and therefore it's fine if it has weird units.

Normally, economists use the transformation

K → α K
L → α L

to derive the necessary constraint on the exponents to maintain constant returns to scale. However, this transformation leaves the IT model equations invariant:

dN/dK = β (N/K) → (1/α) dN/dK = (1/α) β (N/K) ⇒ dN/dK = β (N/K)

dN/dL = γ (N/L) → (1/α) dN/dL = (1/α) γ (N/L) ⇒ dN/dL = γ (N/L)

As well as the solution:

(N/N0) = (K/K0)ᵝ (L/L0)ᵞ →
(N/N0) = (αK/αK0)ᵝ (αL/αL0)ᵞ = (K/K0)ᵝ (L/L0)ᵞ ⇒
(N/N0) = (K/K0)ᵝ (L/L0)ᵞ

Note that the equations are also invariant under:

N → α N
K → α K
L → α L

Essentially, we are scaling up the measuring rod with these transformations.

In contrast, constant returns to scale take

N → α N
K → α K
L → α L

but don't transform the scales

N0 → N0
K0 → K0
L0 → L0

Which leaves us with a non-trivial condition if we want constant returns to scale:

(N/N0) = (K/K0)ᵝ (L/L0)ᵞ →
(αN/N0) = (αK/K0)ᵝ (αL/L0)ᵞ = αᵝ αᵞ (K/K0)ᵝ (L/L0)ᵞ ⇒
α (N/N0) = αᵝ αᵞ (K/K0)ᵝ (L/L0)ᵞ

This implies that β + γ = 1 so that the α's on both sides cancel.
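This is easy to check numerically: scaling the inputs by α (but not the scales K0, L0) multiplies output by α^(β+γ), which equals α exactly when β + γ = 1. A sketch with arbitrary example parameters:

```python
def output(K, L, K0=2.0, L0=3.0, N0=5.0, beta=0.3, gamma=0.7):
    """Cobb-Douglas with explicit scales: N = N0 (K/K0)^beta (L/L0)^gamma."""
    return N0 * (K / K0) ** beta * (L / L0) ** gamma

alpha = 2.0
N1 = output(1.0, 1.0)
N2 = output(alpha * 1.0, alpha * 1.0)   # scale the inputs, not the scales K0, L0
print(N2 / N1)   # alpha^(beta + gamma) = 2 here, since beta + gamma = 1
```

Changing beta or gamma so they no longer sum to 1 breaks the ratio, illustrating why constant returns to scale is a non-trivial constraint.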

So what is the difference here? Constant returns to scale is an argument about extensivity of economic growth: double inputs means you double output. It's about what measurable changes in your system do to your system measured with scales in your system (N0, L0, K0, etc). Scale invariance is about whether the size of the measuring rod matters. The former is an experiment you can do. The latter is an experiment you can never do (if the theory is correct).

For example, with special relativity, you can never determine an absolute rest frame. No experiment will determine a privileged frame. However, the speed of light is a physical scale and so you can do experiments that test effects relative to that scale (for example, the self-rest-frame lifetime of muons is not long enough for them to make it through the atmosphere from cosmic rays, but they do make it through because of time dilation).

Extensivity is a measurable property, but it doesn't generally apply when the constituents of the system being combined interact. A good example is batteries in a circuit. A bunch of batteries in series has voltage as an extensive property, while a bunch of batteries in parallel has current as an extensive property. Which property adds up depends on how the components are connected.

That is to say constant returns to scale is a major assumption about how labor and capital interact to produce output. In the "double Earth" thought experiment, the two don't interact ... the new Earth is effectively separate from the old earth. So it makes sense that output just doubles and there are constant returns to scale. But that doesn't necessarily apply to doubling the capital and labor supply on a single Earth.


Side note: effective field theory

K → α K

dN/dK = β1 (N/K) + β2 (N/K)² →
(1/α) dN/dK = (1/α) β1 (N/K) + (1/α)² β2 (N/K)²

does not work, but

N → α N
K → α K

dN/dK = β1 (N/K) + β2 (N/K)² →
(α/α) dN/dK = (α/α) β1 (N/K) + (α/α)² β2 (N/K)² ⇒
dN/dK = β1 (N/K) + β2 (N/K)²

does work. Terms like

N d²N/dK², (N/K) dN/dK, and (N²/K) d³N/dK³

also preserve homogeneity of degree zero and should represent higher-order corrections to the IT model.

Wednesday, December 9, 2015


Nick Rowe has an interesting incremental argument asking: when does the cash disappear? It's kind of another Sorites paradox, except it asks what properties must you take away for something to stop being "cash". In one sense, I agree: money seems to be better defined by symmetry properties, so whatever it is, it doesn't stop being money unless you take away those symmetry properties ... which means you've made the economy become a non-ideal information processor and lost lots of output.

But that's the key -- Nick Rowe's properties are mostly extraneous properties of money, not extraneous properties of "cash".

In the information transfer model, the key property that distinguishes what we call "cash" (bits of paper with e.g. Benjamin Franklin on them in the US) from other types of "money" like M1 or MZM is that "cash" (aka M0) is what anchors the core inflation rate as well as the NGDP path. That is to say the inflation rate (the change in the price level P) is set in the market:

P : NGDP ⇄ M0

M1 and M2 don't set the inflation rate (there are large fluctuations), but M1 and M2 are money in the sense that they can be used for transactions.

What is it about M0 that makes it have this property?

I don't know for certain. I think it may be the fact that central banks can't get a hold of your M0 to pay (positive or negative) interest on it. Things that are included in M1 and M2? Yes, definitely. I have some M1 in my bank right now gaining interest.

So when Nick says:
2. Muggers become a problem, so people put their currency in a box at the central bank with their name on it. ...

3. The bank notices it is now administratively easier to pay (positive or negative) interest on currency than it was when people kept their currency in their pockets. So the bank now has a second monetary policy instrument, in addition to Open Market Operations.
I think these steps are a bigger deal than Nick makes them out to be. The information entropy in the distribution of currency would suddenly vanish since the state of every dollar is now known (so that interest could be paid). Basically the bank has just acquired log₂ C(n, k) ~ n H(k/n) ~ 4 billion bits of information (with k = 300 million people in the country and n = 1.3 trillion dollars). That may not sound like a lot in terms of electronic storage, but the entire annual economy (n = 18 trillion dollars) represents only about 5 billion bits. (Assuming indistinguishability.)
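The back-of-the-envelope bit counts can be reproduced from the binomial coefficient via log-gamma; the dollar and population figures are the round numbers from the text:

```python
import math

def log2_binomial(n, k):
    """log2 C(n, k) computed via log-gamma, usable for astronomically large n."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1)
            - math.lgamma(n - k + 1)) / math.log(2)

# ~1.3 trillion dollars of currency labeled among ~300 million people
print(log2_binomial(1.3e12, 3e8) / 1e9)   # ≈ 4 (billion bits)
# ~18 trillion dollars of annual transactions
print(log2_binomial(18e12, 3e8) / 1e9)    # ≈ 5 (billion bits)
```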

In a thermodynamic system, knowing the state of every atom in an ideal gas would allow you to extract useful work out of the system. However, knowing each bit of information about the state of every atom costs kT log 2 in energy, so this is an impossible task.

There is no energy restriction in the economic case, but this still represents a major change in the properties of the macroeconomy.

Basically, labeling those boxes at the central bank is what "cash" does.

Tuesday, December 8, 2015

Mayan targetan on-when NGDP

Imagine we travel back in time to the second quarter of 2009. We stop by the Federal Reserve and reveal to Fed officials that the recession has now bottomed out. We inform them about a non-causal monetary policy first presented by Scott Sumner. 
The Fed agrees with our assessment and decides to spend its political capital on the temporally-independent NGDP target and gets the backing of Congress and the Treasury Department (and the NSF). The great macroeconomic (and physics) experiment begins. 
So what would happen next in this counterfactual history? How would the economy respond to these compound time-travelling events starting with traveling back to mid-2009? No one can answer these questions with certainty, but it is likely that at a minimum there would be temporarily undefined inflation (as inflation would fail to be a function of time). 
The figures below lend support to this understanding. They come from a paper I wioll haven be currently willing worken where I willan on-run a counterfactual retro-forecast of nominal input/output starting in mid-2009. The retro-forecast willan on-be based on four different paths of NGDP willing returnen to its pre/post-crisis trend: a two-year path, a three-year path, a four-year path and a negative two-year path. The first figure shows the four NGDP return paths and the second figure shows the inflation forecasts associated with these paths:


Ok, < /snark >

This is a parody of David Beckworth's post on NGDP targeting (with some Douglas Adams thrown in). The question isn't really (and has never been) what would happen if NGDP targeting worked, but would NGDP targeting work?

If the Fed is unable to meet an inflation target, why can it meet an NGDP target? Why does the liquidity trap argument fail with NGDP targeting? Why does the IT index k suddenly rise with NGDP targeting?

In short, why is NGDP targeting as magical as time travel? 

Friday, December 4, 2015

Limit cycles versus avalanches

I'm not sure I understand the allure of Steve Keen. One of his papers came up in comments (from Steve Roth) here. One of the first comments on my blog was from someone who said I should check Keen out. His big thing is nonlinear differential equations; you get graphs like this in his papers:

One of Keen's claims to fame is that he "predicted" the global financial crisis. Of course, if you think everything happens in cycles, then a financial crisis is always just around the corner. However, the business cycle isn't very cyclic -- in the sense of being periodic (see Noah Smith for a good overview). I have a view that would be more like avalanches or earthquakes. They will be spaced out because it takes time to build up mechanical energy or snow (or in the economics case, too much output), but the exact failure time is unpredictable.

One of the other problems with these limit cycles (as they're called in the biz) is that they aren't always terribly robust to noise. Here's a simple one with and without noise:

Anyway, Keen has a model in the paper Roth linked to that produces this graph of the real growth rate:

Taking the most charitable version (i.e. the best fit) of Keen's model (red) against some real data from the US, we see something like this:

It's not terrible, but it seems like the fit wouldn't really continue to line up into the past or future. And with noise at this level, you'd actually expect those nice cycles at the top of the post to completely evaporate. 

What does the IT model have to say about this? Well the RGDP growth trend (brown) definitely looks less impressive:

The IT model story of the post-war era has been that of gradually falling RGDP growth. This is roughly what we'd expect if those nominal shocks (avalanches) were at their average level over the entire period. If we add the shocks in, we get this:

Note that this model comes from just M0 and the labor supply (equal to nominal shocks). Overall, instead of a story of cyclic booms and busts with one leading to another, we have a story of over-estimating future growth because it slows over time. These over-estimates lead to the Fed being unsustainably expansionary relative to the trend, eventually culminating in a recession. Strong enough recessions are usually accompanied by financial crises (because contracts become unsustainable, and panic leads to non-ideal information transfer and falls in prices).

That is to say recessions happen because of a backward-looking bias in growth expectations (similar to inflation). Previous growth will be higher than future growth, so estimating future growth from previous growth will be biased high -- leading to unsustainable Fed growth targets, business investment, and general over-optimism about the future.

I wouldn't say this is totally inconsistent with Keen's version (there's still over-investment that leads to recession), but here there is also a continuous fall in economic growth as the economy gets larger.

But I still don't get the desire to see the economy as a robust and well-defined cycle. The idea of aperiodic failures of the market due to a combination of an optimism bias and irrational panic is more appealing to my sensibilities. Maybe other people want economics to be as precise as a nonlinear circuit (sometimes even more precise than possible)? 

Supply, demand, stock, flow

Steve Roth of Asymptosis, Angry Bear, and other places stopped by the blog the other day, and over the course of some discussion -- and perusing his links -- I wanted to note a couple things.


Steve says:
Am I foolish to suggest that the central concepts of economics, supply and demand, are embarrassingly un(der)theorized?
I agree! Actually, I think economists might not even understand supply and demand (except Gary Becker). For one, the supposed experiment designed by Vernon Smith (Nobel prize winner for this very kind of experimental economics) actually assumes its result. If you assign numerical utility to people, along with a budget constraint, you get supply and demand. This is how monkeys can end up illustrating supply and demand. Additionally, various classroom experiments don't show what they purport to show. Supply and demand is a property of the state space. Gary Becker showed this way back in 1962.


There was also some discussion of stocks and flows. This is one area where many economists and non-economists get a little too fastidious. It is true that in one case you have an associated time (well, 'velocity') scale (flows) and in the other you don't (stocks), and it is important to understand where this scale comes from.

However, in the information equilibrium model, I frequently put a flow in information equilibrium with a stock -- notably in the relationship NGDP ⇄ M0. Nominal output is dollars per year (or annualized), whereas the money supply is a stock of currency. And there is nothing particularly wrong with this. Why? Because the information in a distribution doesn't really care too much about the domain it's over ... just what the distribution is over the domain. This post has a bit of background (it goes over the rationale behind the blog's favicon and the meaning of the diagram up in the upper right corner on a desktop browser).

Let's say we have a spatial-temporal distribution of an information source distribution (demand, translucent white) and one for the destination (supply, blue):

As you move along the time axis, you have different "stocks" at different instants in time; the change as you move along the time dimension is a "flow". When these are in information equilibrium, they match up:

When they don't match up (demand doesn't meet supply at the right place and right time), you have information loss that can be measured with the KL divergence between the two 2-dimensional distributions:

However, I can also integrate over the time dimension (add up the areas under the blue and white surfaces in one direction) and still see the information loss as a "stock" (an integrated flow):

For a simplified version of this with widgets, you can go to that aforementioned post. Once I've done this integration, I am free to compare it to the information content of a probability distribution of something else:

Since information doesn't really have units per se (bits can be changed to nibbles by multiplying by a constant pure number), it is perfectly fine for the information content in the distribution of a stock to be in equilibrium with the information content in the distribution of a flow.
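Here's a small numerical sketch of that integration step (the distributions are made up). The "flow" comparison is the KL divergence over the full space-time distributions; the "stock" comparison is the KL divergence after integrating out time. The chain rule for KL divergence guarantees the stock view never shows more information loss than the flow view:

```python
import numpy as np

def kl(p, q):
    """KL divergence D(p||q) in bits for discretized distributions."""
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

# Discretized space-time grid (space x time); shapes broadcast to (60, 80)
x = np.linspace(-3, 3, 60)[:, None]   # "space" (where transactions happen)
t = np.linspace(0, 10, 80)[None, :]   # time

# Hypothetical demand and supply distributions (slightly offset Gaussians)
demand = np.exp(-(x - 0.0)**2 - 0.1 * (t - 5.0)**2)
supply = np.exp(-(x - 0.5)**2 - 0.1 * (t - 5.5)**2)
demand /= demand.sum()
supply /= supply.sum()

# "Flow" view: information loss over the full 2D space-time distribution
kl_flow = kl(demand, supply)

# "Stock" view: integrate out time, then compare the marginal distributions
kl_stock = kl(demand.sum(axis=1), supply.sum(axis=1))

# Marginalizing can only destroy information, never create it
print(kl_stock, "<=", kl_flow)
```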

Now maybe you don't buy this. That's fine. But here's a well known equation in physics that says a stock is equal to a flow (plus a change in flow):

ẍ + (1/τ) ẋ + ω² x = 0

This equation for a damped harmonic oscillator is essentially

change in flow + flow + stock = 0

There are just parameters (scales τ and ω) that make the units work out. If I say

stock Y = flow X

There is just an implicit time scale T (parameter, or constant of proportionality) so that:

Y = T X

If you never couple stocks to flows (putting a stock and a flow in the same equation 'couples' them -- makes them depend on each other), then all flows will be independent of all stocks! There will be a stock description of the economy and a flow description of the economy, and the two will have nothing to do with each other.
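The oscillator analogy above can be simulated directly (a sketch; the scales τ and ω are arbitrary illustrative numbers). The point is that the "stock" x and the "flow" ẋ appear in one equation and therefore depend on each other -- the stock decays precisely because it is coupled to the flow:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Damped harmonic oscillator: xdd + (1/tau) xd + omega^2 x = 0
# The scales tau and omega are what make "stock" x and "flow" xd
# dimensionally compatible in one equation.
tau, omega = 2.0, 1.5   # arbitrary illustrative scales

def rhs(t, y):
    x, xd = y   # stock and flow
    return [xd, -(1.0 / tau) * xd - omega**2 * x]

sol = solve_ivp(rhs, (0, 20), [1.0, 0.0], dense_output=True)
x, xd = sol.y

# The initial stock of 1.0 has decayed via its coupling to the flow
print(abs(x[-1]) < abs(x[0]))
```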

Caring a bit too much about stocks and flows is a bit like saying you can't set time equal to a position. And while it's true x = t gets the units wrong, time and position can be in equilibrium -- just add a constant of proportionality (velocity) so that x = v t.

In fact, the great achievement of Einstein was creating the idea of space-time -- space and time are in a sense equivalent. The reason they're equivalent is the existence of a universal constant of proportionality: the speed of light c. Or you could say the equivalence means there must be a universal constant with units of velocity.


Update 06 December 2016

Since I wrote this I found that SFC analysis does couple stocks to flows -- it also forgets those time scales T -- by looking at stock-to-flow ratios.

Thursday, December 3, 2015

More of Draghi's open mouth operations

So the ECB made some comments and the Euro exchange rate jumped. OMG the FT is on it! Here's data from Bloomberg:

Well, actually, the exchange rate is now where it dropped to back in October when it was considered a loosening of monetary policy:

So monetary policy is now as loose as it was on October 22 ...

The thing is that the exchange rate should fall on bad economic news (evaluated relative to the US for the Euro-dollar rate) since bad economic news should mean lower AD (relative to the US continuing along its path). The Euro-dollar exchange rate could also fall if the US was doing better than the EU.

Chalk it up to another case where markets are operating under a mistaken theory.


Here's the IT model using M2 (see the footnote; today's move is in black):

Nice Gaussian residuals ...

Wednesday, December 2, 2015

Please don't help

Economists: teaching economic models for the math, not the economics.

Bill Craighead, a professor at Miami University in Ohio, wrote a post that Mark Thoma linked to (Prof. Craighead, get ready for a few thousand more pageviews) about the value of the undergrad intro to economics series. He (Craighead) was arguing the counterpoint to Noah Smith's attack on the intro series.

Noah writes:
We have a lot of macro theories, but none that really work well unless you pick your data set very carefully and squint very hard. ... And to make it worse, most of the macro theories that economists take halfway seriously are too hard for intro kids, so they end up learning silly stuff like Mundell-Fleming and Keynesian Cross that no one even halfway believes.

One of Craighead's defenses is a bit silly, though:
Second, working with economic models develops thinking and mathematical skills. Smith makes the point that the models we teach in an intro class have their flaws (as do the models we teach in PhD-level classes...), though I still think they're quite useful for thinking about a number of issues. But the act of manipulating a model and working out how assumptions are linked to conclusions helps students become sharper thinkers, and this stays with them long after they've forgotten the specifics of any particular model.

Call me old-fashioned, but mathematical skills are best taught in math class. And if the content of the models doesn't matter (and even economists don't believe them), why not teach models without flaws -- models that are actually pretty accurate and that their practitioners actually believe ... like undergraduate physics. Have one of these people teach how to build and use models!

Why has economics taken it upon itself to teach undergraduates how to use models and derive them from assumptions? No offense, but economics is the last field that comes to mind for this task. Especially since economists tend to sneak political bias in with those assumptions ...

It's like teaching people how to work on car engines using movie props. But students learn motor skills and how to think in 3-dimensions ...

NY Fed DSGE model "update"

I hadn't noticed until today, but the NY Fed has been updating their predictions with their DSGE model. Well, "update" is a strange word for it since they have changed their model and are still projecting to only 2018 (the projection period has gotten shorter by over a year -- now cutting off the projected return to 2% inflation in the graph [1]).

Normally, we'd want to see how a forecast does over its entire period, rather than making new forecasts and sending the previous ones down the memory hole. That is how your local weather station deals with its long term forecasts. The forecast for next Friday changes four times from Monday until Friday arrives. But at least the NWS doesn't change the model in the interim!

The NY Fed should just be projecting 2 quarters if they're going to keep doing this. They should have noted that the May 2015 projection was wrong in Dec 2015 (skirting the 90% contour after changing your model is not success). Ironically, had they stuck with their Sep 2014 projection it actually would have looked better!

Here's the link for the IT model predictions.



[1] It's still in the text:
The change in inflation forecasts relative to those reported last May reflects stronger-than-expected inflation data caused by the temporary increase in energy prices in 2015:Q2. However, this effect is expected to dissipate over the next few months and to be followed by a very gradual return to the 2 percent target.

Some day my long run will come

Brad DeLong asks: At what time scale, if any, does the long run come?

Some other answers besides the 3-6 years discussed by Krugman and DeLong:

Tuesday, December 1, 2015

Money is that which is conserved via a symmetry principle

Not sure where I got the link to this (so H/T to someone on the internet), but it was an entertaining jaunt through questions about "money" with supposedly more to come. I have my own theory about the origin of money that isn't really inconsistent with what Steve Roth (the author) says. He does say something that I'll take issue with, but in a constructive way I hope ...
Now to be fair: a definition of money will never be simple and straightforward. Physicists’ definition of “energy” certainly isn’t. But physicists don’t completely talk past each other when they use the word and its associated concepts.

Energy actually has a really beautiful and simple definition: that which is conserved due to the time-translation symmetry of the universe. Because the laws of physics don't change if you go backwards or forwards a billion years (time translation), there is something that is conserved by Noether's theorem. We call this energy, but once we have time and time-translation symmetry, we have this conjugate quantity and we should call it something. Energy is a fine name. The equivalent conserved quantity for the space translation invariance of the universe is momentum, again ... something that took us a while to figure out was conserved.

Now if we just knew what time was ...

Anyway, to make this a constructive criticism let me posit that money is a unit of measure [and quantity] of that which is conserved because of the homogeneity of degree zero of the economic universe. Homogeneity of degree zero is a bit mathy, but it's a simple concept. If I double the demand for sheep and the supply of sheep, then the price of sheep will stay the same (is conserved) ... and prices are measured in money. That's how the tally marks in Roth's piece can be "money". If you doubled the amount of sheep owed and doubled the amount of tally marks, you haven't done anything accounting-wise to the price of a sheep.

But that's really it -- anything that becomes an intermediary between supply and demand can be money as long as it "works". What do we mean by "works"? Well, if changes in supply and changes in demand still carry the same amount of information with the intermediary in place (they are in information equilibrium), then we can say that intermediary "works" and therefore is money. In general, information equilibrium relationships are the bare minimum required to maintain this capability.

The tally marks and sheep above represent two things that are information equilibrium. If I change the tally marks, it is translated into a change in sheep. If I was to change the number of sheep in some sort of pattern (+1 sheep, -1 sheep, -1 sheep, -1 sheep, + 1 sheep) and I get the exact same pattern of change in the tally marks (+1 tally, -1 tally, -1 tally, -1 tally, + 1 tally), then I could in principle send a message (in Morse code, say). And if I can send a message, information must be flowing through a communication channel. We say the tally marks and sheep are in information equilibrium. I should be able to communicate a signal of changes in supply through the price mechanism, otherwise information is being lost along the way -- basic information theory.
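That signaling argument can be made concrete in a toy script (my own sketch; the numbers are the pattern from the text): the sender varies the sheep count, the tally record tracks it one-for-one, and the receiver recovers the message from the tallies alone -- no information is lost in the channel.

```python
# Toy model of information equilibrium as a communication channel.
message = [+1, -1, -1, -1, +1]   # the pattern from the text

# Sender adjusts the sheep owed; tally marks track it (information equilibrium)
sheep = [10]
for delta in message:
    sheep.append(sheep[-1] + delta)
tallies = list(sheep)   # one tally per sheep owed

# Receiver sees only the tally record and recovers the message by differencing
decoded = [b - a for a, b in zip(tallies, tallies[1:])]
print(decoded == message)   # True: the channel transmitted the signal intact
```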

But the real bonus is that something that can be used for a lot of products works the best to communicate that information without loss, especially if it isn't a product itself (doesn't have intrinsic value) and is completely fungible (high entropy). That's where my definition of money comes in:

Money is a thing that mediates transactions and has high information entropy

Tally marks fit this bill. So does our modern currency. But essentially, what money is doing is helping maintain homogeneity of degree zero, so that prices don't change if you double supply and double demand. And an information equilibrium relationship (linked above) is the most general form of the simplest condition required for this to be true. 


Update 12/6/2015

I thought I should express mathematically (and therefore more precisely) what the previous post means.

Homogeneity of degree zero (aka invariance under scale transformations or conformal symmetry) in the supply (S) and demand (D) defines the relationship (to leading order) 

P = dD/dS = k D/S

where P is the (abstract) price of whatever S is. This is the fundamental information equilibrium relationship and is denoted D ⇄ S. This is invariant under the transformation:

D → α D
S → α S


since

d(αD)/d(αS) = dD/dS
k (αD)/(αS) = k D/S

So P  → P under the symmetry transformation. That means P is a measure of whatever is conserved under the symmetry principle (per Noether's theorem) -- it is invariant under the transformation.

As an aside, if D = D(t), P = P(t) and S = S(t), then the symmetry transformation applies at all times simultaneously D(t) → α D(t), S(t) → α S(t); P(t) → P(t) is invariant. This is really the idea that if you add a couple zeros to every price for all times, you haven't really done anything.
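The invariance is trivial to check numerically (a sketch; the values of k, D and S are arbitrary):

```python
# Numerical check that the abstract price P = k D/S is invariant under the
# scale transformation D -> alpha D, S -> alpha S (homogeneity of degree zero).
k = 1.7            # information transfer index (arbitrary here)
D, S = 100.0, 40.0

def price(D, S, k=k):
    return k * D / S

for alpha in (2.0, 10.0, 0.5):
    assert abs(price(alpha * D, alpha * S) - price(D, S)) < 1e-12

print(price(D, S))   # → 4.25, unchanged by any rescaling
```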

P is the (abstract) price in the information equilibrium model of economics, and it is related to money through the unit of account. But we can go further. Let's introduce something called M (the medium of exchange) and because of the chain rule (and multiplying by 1 = M/M)

dD/dS = k D/S
(dD/dM) (dM/dS) = k (M/M)(D/S)
(dD/dM) (dM/dS) = k (D/M)(M/S)

If D ⇄ M ⇄ S (D is in information equilibrium with M and M with S so that information gets from D to S via M), then we have a relationship

P'' = dD/dM = k'' D/M

And so we can re-write the equation above

(P/P'') P'' = (dD/dM) (dM/dS) = (k/k'') k'' (D/M)(M/S)
P' = dM/dS = k' M/S   with k' ≡ k/k''

Therefore we can capture the relationship P : D ⇄ S entirely with P' : M ⇄ S. Now dM/dS is the exchange rate for an infinitesimal quantity of money for an infinitesimal quantity of supply ...i.e. dM/dS = P' is the money price of a unit of S (as opposed to the abstract price P).

This equation is just another information equilibrium relationship and is invariant under the conformal symmetry transformation 

M → α M
S → α S

which leaves the money price P' invariant.

You can actually do the trick above to come out with P'' : D ⇄ M as the relationship that captures P : D ⇄ S entirely. Let's relax this to an information transfer relationship (non-ideal information transfer) P'' : D → M. Let's say D is an aggregate demand made up of several goods D₁, D₂, D₃, ... Dn. The budget constraint means you can't spend more than M (the total amount of medium of exchange) at a given time, so D₁ + D₂ + D₃ + ... + Dn ≤ M. Since this is a high dimensional system (n >> 1), the most likely (maximum entropy) point saturates the budget constraint: D₁ + D₂ + D₃ + ... + Dn ≈ M.

Therefore at any given time most of the medium of exchange will be allocated against all the investments, goods and services, etc., and so the distribution of the medium of exchange will be approximately equal to the distribution of goods and services. This means the medium of exchange minimizes the difference between the information content of D and M, i.e. minimizes I(D) − I(M), or more usefully

I(D) = β I(M)

for some β where 0 < β ≤ 1. Therefore (returning to the definition of the information equilibrium relationship)

(D/dD) log σd = β (M/dM) log σm

D/dD = (1/k''') M/dM 

dD/dM = k''' D/M

where k''' = (log σd)/(β log σm). I.e. the existence of a high entropy medium of exchange M allows us to write I(D) ≥ I(M) (where the conformal symmetry doesn't hold) as an effective information equilibrium relationship I(D) = I(M) (where the conformal symmetry does hold).
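The budget-constraint saturation step in that argument can be checked numerically (a sketch with made-up dimensions): sample allocations uniformly from the region D₁ + ... + Dn ≤ M and watch the fraction of the budget actually spent approach 1 as the dimension grows.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 1.0

# Uniform samples from the solid simplex {D_i >= 0, sum D_i <= M}:
# n+1 iid exponentials, normalized, with the last coordinate dropped.
def sample_demands(n, size):
    e = rng.exponential(size=(size, n + 1))
    return M * e[:, :n] / e.sum(axis=1, keepdims=True)

# For a uniform draw the expected total spent is M * n / (n + 1),
# so in high dimensions the constraint effectively saturates.
for n in (2, 20, 2000):
    spent = sample_demands(n, 5000).sum(axis=1).mean()
    print(n, spent / M)   # fraction of M allocated; -> 1 as n grows
```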

So to be more precise, the money as unit of account is that which is preserved by a symmetry principle (scale invariance or conformal symmetry). Money as medium of exchange makes the symmetry principle hold.

By magic number, Nick Rowe means scale of the theory

Nick Rowe has a speculative post up about "magic numbers". It's really about scales, but economics doesn't quite know how to use scales yet.

If you start with nothing and want to talk about an inflation rate, the only rate you could sensibly discuss is zero. It's "magic" in Rowe's description.

In physics, we call this, colloquially, a "what else could it be?" argument. Given the scales in your theory, what can you construct as the scale for your answer?

By fiat, we start with nothing. No fundamental scales. Therefore if the inflation rate π that we stick in the exponential factor defines the time scale t0

exp(π t) = exp(t/t0)

... then t0 could only be zero (essential singularity ... aaah!) or infinity (no fundamental scales to anchor t0). If t0 → ∞, then π → 0. There's our "magic" number: the only scale available.

Another way to think about it is that the inflation rate has units of percent per year. We don't have anything with units of years (i.e. t0), so we can't construct an inflation rate. Note that you can think of t0 as proportional to the doubling time of the economy.
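To put numbers on that (the 2% is just the usual illustrative target, not a claim from Nick's post): a rate π = 2% per year defines the time scale t0 = 1/π = 50 years, and the doubling time ln 2/π ≈ 35 years differs from t0 only by the factor ln 2.

```python
import math

pi_rate = 0.02                           # inflation rate, 2% per year
t0 = 1.0 / pi_rate                       # the time scale the rate defines
doubling_time = math.log(2) / pi_rate    # proportional to t0 (factor ln 2)

print(t0, doubling_time)   # t0 = 50 years; doubling time ≈ 34.7 years
```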

As an aside, in the case of the 2-player dictator game, you get the "magic" 50% per player from 100% and the 2 players who set the scale of the division.

Conformal anomalies are a way to get around this need for an explicit scale and generate a scale in a more complex way, but let's leave that alone for now.

Nick's speculation is that since the central bank has set 1/t0 = 2% per year, inflation is now π ~ 1/t0 no matter what economic growth is, destroying the so-called divine coincidence where monetary policy targets inflation and output is magically stabilized.

Note that you can use this idea of a scale to organize more than just Nick's view. In Nick's view, the scale t0 could only come from the mouth of central bankers (as it did with Scott Sumner). In old school monetarism (quantity theory of money), it comes from the growth rate of the money supply μ = 1/tμ with μ ~ π. You could even say that in a liquidity trap, t0 comes from the growth rate of fiscal expansion since monetary policy targets no longer give us relevant scales.

Nick's presentation assumes 1) the central bank has set the scale t0 and 2) the central bank scale is the only possible scale for t0. The question immediately arises: What sets the scale for growth? Why isn't growth zero?

One interesting resolution is that the scale for real growth rate ρ ~ 1/t0 ... i.e. nominal growth rate ν = π + ρ with ρ ~ π so that  ν = 2 π. Note that this is exactly what happens in the IT model when k = 2 (the quantity theory of money) where π = μ and ν = 2 π = 2 μ and ρ = π = μ. There's only one scale μ. This is therefore one way to realize the divine coincidence.

But Nick's (speculative, by his own account) presentation has a scale for inflation π ~ 1/t0, but no scale for growth. Therefore growth should either be: a) 1/t0 (per the previous paragraph, aka the divine coincidence), b) set by its own time scale T, or c) zero. He disallows b) and says a) fails, so growth should be zero. That's the only magic number allowed since the central bank controls ν and π (in his view), i.e. ν = π, ρ = 0.

Basically, failure of the divine coincidence implies that real growth should be zero unless it is set by its own scale ρ = 1/T. But then we get into the problems ...

Let's say we start with an economy where there is some outside scale T (the scale of the concrete steppes); there is a divine coincidence so that 1/T ~ ρ ~ π. Now the central bank decides to start targeting π, setting a scale t0. Do we have

ρ ~ 1/T and π ~ 1/t0

or

ρ ~ 1/t0 and π ~ 1/t0


If the former, there is a scale (magic number) T besides t0 so monetary policy is not the only question about the economy. Growth is set by the real factors that operate on timescale T outside of monetary policy targets.

If the latter, did T just disappear? And in that case, why and how does the divine coincidence fail so that we end up with:

ρ ~ 0 and π ~ 1/t0


If the divine coincidence goes from operating (i.e. ρ ~ 1/t0 and π ~ 1/t0) to failure (i.e. ρ ~ 0 and π ~ 1/t0), what sets the time scale for that to happen? (My guess is T. But you see the need for an additional scale.)

Or does the divine coincidence fail and we have

ρ ~ 1/T and π ~ 1/t0

If that is the case, why didn't that happen in the first place?

What is important to understand here is that in order for Nick's view to make sense there basically must be some other scale (T, the scale of the concrete steppes) in the economy that controls economic growth that is independent of monetary policy targets (in the long run). It's the only thing that allows the divine coincidence to fail and still have non-zero economic growth. It could still be a monetary variable [1], it just has to be concrete.

Or another way: what is the "magic number" for real growth and why is it not zero in the failure of the divine coincidence?

The real magic number is of course 3:


[1] In the IT model, we have three scales: 1/σ (nominal shock/labor growth time scale), 1/μ (money growth time scale) and m0 (economy size). Roughly speaking, if M < m0 (large k) then μ sets ρ and π. If M > m0 (k ~ 1), σ sets ρ, and π ~ 0.