Inflation in the US has not been a monetary phenomenon since 1992

Apologies for the long title. It’s a reference to one of the most famous quotes in economics, due to Milton Friedman in 1970:

“Inflation is always and everywhere a monetary phenomenon in the sense that it is and can be produced only by a more rapid increase in the quantity of money than in output.”

Now, I’m not a blind Friedman hater. In fact, when it comes to monetary policy, I tend to agree with him. The statement above described the data well for most of history, especially during severe episodes of hyperinflation. However, it has not accurately described the experience in the U.S. since at least 1992, and this fact is presented as a puzzle in the textbook I use for my money and banking class.

The statement above is grounded in a theory called the quantity theory of money. I’m going to skip the specifics, but here’s the gist. There is an equation that must hold in every time period because it is a simple accounting identity. That equation is:

M x V = P x Y
where M stands for the money supply, V stands for the velocity of money (i.e. how many times a given dollar bill changes hands over the course of a given period of time), P stands for the price level, and Y stands for real output (i.e. real GDP). Finally, note that an increase in the price level is what we call inflation: if P increases, then there has been inflation.

The quantity theory is a theory about the long run, so it makes a sensible simplifying assumption: over long periods of time, the velocity of money should not change very much – in fact, it should be roughly constant. Imposing this assumption and using a little algebra, the equation above can be rewritten, to a first approximation, as:
p = m – y
where the lowercase letters represent the percent change in each of the variables above. So, p is the inflation rate, m is the growth rate of the money supply, and y is the growth rate of real GDP.
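
For readers who want the skipped algebra, it’s just the equation of exchange written in logs and then in growth rates (a standard textbook step, nothing specific to this post):

```latex
M V = P Y
\;\Longrightarrow\;
\ln M + \ln V = \ln P + \ln Y
\;\Longrightarrow\;
m + v = p + y .
```

Imposing constant velocity (v = 0) then gives the approximation p = m – y used above.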

So, if Milton Friedman and the quantity theory are correct, then over long periods of time, and after adjusting for growth in real output, the rate of inflation should equal the growth rate of money. Inflation should be a monetary phenomenon. But look at the following chart:


Here, I have taken the 10-year moving average of inflation (measured by the CPI) and of the output-adjusted growth rate of the money supply (measured by M2). We can see that the quantity theory describes things very well until 1975, and continues to do a decent job until about 1992. At that point, inflation levels off while the growth rate of money plummets. More recently, the growth rate of money has risen substantially, but inflation has remained steady.
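
If you want to reconstruct something like this chart yourself, here is a rough sketch. The FRED series codes (CPIAUCSL, M2SL, GDPC1) and the use of pandas-datareader are my assumptions, not necessarily how the original chart was built:

```python
import matplotlib.pyplot as plt
import pandas as pd
import pandas_datareader.data as web

# Pull CPI, M2, and real GDP from FRED and average to annual frequency.
start, end = "1959-01-01", "2014-12-31"
cpi = web.DataReader("CPIAUCSL", "fred", start, end).resample("A").mean().squeeze()
m2 = web.DataReader("M2SL", "fred", start, end).resample("A").mean().squeeze()
gdp = web.DataReader("GDPC1", "fred", start, end).resample("A").mean().squeeze()

# Annual growth rates, then 10-year moving averages.
inflation = cpi.pct_change().rolling(10).mean()
money_adj = (m2.pct_change() - gdp.pct_change()).rolling(10).mean()

pd.concat([inflation, money_adj], axis=1,
          keys=["CPI inflation", "M2 growth minus real GDP growth"]).plot()
plt.show()
```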

Here’s another way of looking at the same thing:


This is a scatterplot of the same two variables, with a line of best fit through the 1969–1991 period. According to the quantity theory, this line should lie on the 45-degree line, i.e. it should go through the points (x,y) = (.03,.03), (.04,.04), (.05,.05), etc. It is a little too steep, but the relationship is close, and quite obvious just from eyeballing the chart.

However, for the period 1992–2014, it’s not even close. I didn’t plot the line of best fit, but thanks to the blob of red points at the bottom right, its slope is actually negative! The quantity theory is an abysmal failure at describing the last 20-plus years of economic experience.
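
A quick way to check those slopes, reusing the annual `inflation` and `money_adj` series from the sketch above (again my own construction, not necessarily how the original figure was made):

```python
import numpy as np
import pandas as pd

# Best-fit slope of inflation on output-adjusted money growth in each subperiod.
df = pd.concat([money_adj, inflation], axis=1, keys=["m_adj", "pi"]).dropna()
for label, sub in [("1969-1991", df.loc["1969":"1991"]),
                   ("1992-2014", df.loc["1992":"2014"])]:
    slope, _ = np.polyfit(sub["m_adj"], sub["pi"], 1)
    print(f"{label}: slope = {slope:.2f}  (quantity theory predicts roughly 1)")
```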

I believe that the simplest and most likely explanation for this phenomenon is that it is an example of Goodhart’s law in action, combined with improvements in policymaking at the Federal Reserve. That is, one of two things happened at the Fed in 1992: either they started trying to hit an inflation target for the first time, or they had always had an inflation target but for the first time reacted to the non-monetary causes of inflation – changes in consumer behavior (and therefore velocity). In either case, the improvement in policy at the Fed triggered Goodhart’s law, and we would expect to see a pattern like the red dots rather than the blue dots in the second chart.

To see why, note that if the Fed thought that inflation was going to fall below their target (or that velocity was going to fall), they should increase the growth rate of the money supply. If they do a good job, and increase the growth rate of the money supply by exactly the right amount, then inflation will remain unchanged. In contrast, if they believed that the inflation rate was going to rise above the target, then they should decrease the growth rate of the money supply. Again, if they do this by the right amount, then inflation will remain unchanged. If the Fed is doing a good job at hitting its inflation target, what we would expect to see are fluctuations in the growth rate of the money supply but a steady rate of inflation. Looking at the first chart, that is exactly what we have seen over the past 20+ years.
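
Here is a toy numerical illustration of that logic (entirely my own sketch, not a model from the post): velocity shocks arrive, the Fed offsets them exactly, and the correlation between money growth and inflation disappears even though the accounting identity still holds.

```python
import numpy as np

rng = np.random.default_rng(1)
T, pi_target, y_growth = 50, 0.02, 0.03   # hypothetical inflation target and trend growth

v = rng.normal(0.0, 0.02, size=T)   # shocks to velocity growth
m = pi_target + y_growth - v        # the Fed's offsetting choice of money growth
p = m + v - y_growth                # growth-rate identity: p = m + v - y

print(np.allclose(p, pi_target))    # True: inflation sits exactly at target
print(round(m.std(), 3))            # but money growth fluctuates from period to period
```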

A final note: in practice, the Fed doesn’t directly target the money supply; instead, they target short-term interest rates and let the money supply be whatever it needs to be in order to hit that interest rate target. However, that interest rate target is determined (in part) by what the inflation rate is, so it’s fine to theorize as if the Fed were directly manipulating the money supply to hit an inflation target.


Putting Seattle’s New Minimum Wage into Perspective

Recently, Seattle passed a new minimum wage bill that will increase the minimum wage to $15.00 per hour for all employees by 2021 (some will receive that wage earlier). Much of the commentary around minimum wages cites the 1968 federal minimum wage – worth an inflation-adjusted $10.98 per hour today – as the highest federally mandated minimum wage in the nation’s history. This got me thinking about what policymakers should take into account when deciding on a minimum wage. In my opinion, it depends on what they view the purpose of the minimum wage to be, and many policymakers may want to consider adjusting the minimum wage for both inflation and productivity increases.

The first view of the minimum wage is that it should be a basic living standard for someone who is fully employed.  Under this view, a policymaker might want to index the minimum wage only for inflation – that is, to make sure that the minimum wage will provide the same living standard throughout time.

The second view of the minimum wage is that in addition to providing a base living standard, it also serves to equalize the bargaining power of employees and their employers.  Under this view, the minimum wage should not only increase with inflation, but it should also rise as workers become more productive.  Therefore, a policymaker should index the minimum wage to nominal per capita GDP – this would ensure that someone earning the minimum wage earns the same fraction of per capita GDP throughout time.

With these two views in mind, let’s take a look at history. If we only adjust for inflation, the peak value of the federal minimum wage occurred in 1968, when a minimum-wage worker earned the equivalent of $10.98 per hour in today’s dollars. From this perspective, the $15.00 per hour minimum wage in Seattle looks historically high:

[Chart: the federal minimum wage adjusted for inflation only, peaking at $10.98 per hour in 1968]

If we also adjust for productivity, then the picture changes. A simple way to adjust for both productivity and inflation is to index the minimum wage to the level of nominal per capita GDP. Nominal per capita GDP increases for two reasons. The first is inflation, which increases prices and therefore the dollar value of GDP. The second is increased productivity: as technology improves and each worker can produce more, we produce more goods and services.
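
For concreteness, here is a minimal sketch of the two indexing schemes. The function names are my own, and plugging in actual values (e.g. CPIAUCSL for the CPI and a nominal per-capita GDP series from FRED) is left to the reader:

```python
# Two ways to express a historical minimum wage in today's terms.

def cpi_indexed(wage_then, cpi_then, cpi_now):
    """View #1: adjust for inflation only, preserving purchasing power."""
    return wage_then * cpi_now / cpi_then

def gdp_indexed(wage_then, gdp_pc_then, gdp_pc_now):
    """View #2: index to nominal per capita GDP, preserving the wage's
    share of per-person output (inflation plus productivity)."""
    return wage_then * gdp_pc_now / gdp_pc_then
```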

Taking this second view, the minimum wage peaked at $20.48 in 1956, and didn’t fall below $15.00 per hour until 1971. This perspective suggests that Seattle’s new minimum wage is in line with historical norms, and is not extravagantly high.

[Chart: the federal minimum wage indexed to nominal per capita GDP, peaking at $20.48 per hour in 1956]

In either case, we will have evidence in a few years on the impact of Seattle’s law.  

Two final notes. First, by the time Seattle’s minimum wage law takes full effect in 2021, the real purchasing power of the minimum wage will most likely be about $13.00, due to inflation between now and 2021.  Second, after I had thought about this for a while and written most of this post, I discovered a column that makes a similar argument to mine, and goes into a little more detail.  It can be found here.


Stochastic Volatility Approximations

This post is more geared to economists/econometricians, and will compare two different approximations that are frequently used when estimating stochastic volatility.

Basically, stochastic volatility means that the variance of a process can change over time. This is a commonly observed phenomenon in economic data. For example, if we look at the quarterly growth rate of GDP since 1948, we can see that growth bounced around a lot before 1980, but then settled down until the financial crisis hit in 2008.

[Chart: quarterly growth rate of U.S. GDP, 1948 to present]

It’s a little confusing at first, but the main problem is that once the model is transformed so that the volatility enters linearly, the measurement error follows a log chi-squared distribution rather than a normal one, so you cannot use the Kalman filter to estimate the time-varying volatility directly. You can use a particle filter, but the most common way to undertake the estimation is to use the 7-point mixture-of-normals approximation introduced by Kim, Shephard, and Chib (1998, ReStud). However, a better 10-point approximation to the same distribution was introduced in Omori, Chib, Shephard, and Nakajima (2007, Journal of Econometrics).
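
For readers who haven’t seen the set-up, it is roughly as follows (my summary of the standard approach; the exact mixture constants are tabulated in the two papers cited above):

```latex
y_t = e^{h_t/2}\,\varepsilon_t,\quad \varepsilon_t \sim N(0,1)
\;\Longrightarrow\;
\log y_t^2 = h_t + \log \varepsilon_t^2,
\qquad
\log \varepsilon_t^2 \approx \sum_{j=1}^{K} q_j\, N(m_j, v_j^2).
```

The error term log ε_t² has a log chi-squared(1) distribution; approximating it with a K-component mixture of normals (K = 7 in Kim, Shephard, and Chib; K = 10 in Omori et al.) makes the model conditionally linear and Gaussian, so standard Kalman filtering and smoothing apply once the mixture indicators are drawn.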

Since a large body of research has used the earlier, 7-point distribution, I wanted to see how much better the 10-point distribution performed in practice.  To do the comparison, I generated 100 fake time series, each of length 100 periods, and all with stochastic volatility.  In the standard set-up, the volatility follows a random walk:

h_{t} = h_{t-1} + e_{t}
e_{t} ~ N(0,sig2)

Since the variance of this random walk, sig2, controls how much the volatility can change over time, I repeated the exercise for four different values: 0.01, 0.05, 0.10, and 0.15. To compare the approximations, I performed Bayesian estimation using the Gibbs sampler, with 1,000 burn-in draws and 9,000 posterior draws. Since I generated the data, I knew the true underlying values of both sig2 and the entire time path of volatilities, h_{t} for t = 1:100. Therefore, I could compare the estimates from each approximation to the true values.
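
As a rough sketch of that data-generating process (my own code, not the code used for the post; I also assume the standard observation equation y_t = exp(h_t/2) * eps_t from the papers above):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sv(T=100, sig2=0.05):
    """One series of length T with random-walk log volatility."""
    h = np.cumsum(rng.normal(0.0, np.sqrt(sig2), size=T))   # h_t = h_{t-1} + e_t
    y = np.exp(h / 2.0) * rng.normal(size=T)                 # observed data
    return y, h

# 100 fake series for each of the four values of sig2 used in the comparison.
data = {s: [simulate_sv(sig2=s) for _ in range(100)] for s in (0.01, 0.05, 0.10, 0.15)}
```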

To judge the approximations, I used four criteria: the bias of the average estimated volatility path, the mean squared error (MSE) of the average estimated volatility path, the bias of the sig2 estimate, and the MSE of the sig2 estimate. The results are as follows, with the bolded numbers representing the better performance.

[Table: bias and MSE of the estimated volatility paths and sig2 estimates, 7-point vs. 10-point mixture, for each value of sig2]
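
For reference, bias and MSE here mean the usual quantities; a tiny sketch, with made-up variable names:

```python
import numpy as np

def bias_and_mse(estimate, truth):
    """Bias and mean squared error of an estimate relative to the truth."""
    err = np.asarray(estimate) - np.asarray(truth)
    return err.mean(), (err ** 2).mean()
```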

The results are actually fairly mixed, although it does appear that the mixture of 10 normals performs very slightly better. The differences are not economically meaningful, however.

So what have we learned?  It probably isn’t worth re-estimating previous work that had used the 7-point mixture, since the gains from using the 10-point are so small.  But, for a young economist, it wouldn’t hurt to use the 10-point (it is more accurate, no more difficult to code, and only negligibly increases the run-time of the estimation procedure).