# Recession Probabilities for all 50 States

A couple of weeks ago, I came across an article in The Atlantic titled “What on Earth Is Wrong With Connecticut?” The article examines the condition of Connecticut’s economy and state budget, and it inspired me to revisit two questions I had been thinking about for some time: (1) are any U.S. states currently in recession, and (2) has there been any historical pattern around national recessions regarding which states enter recession earlier than others?

DATA & METHODOLOGY

To try to answer these questions, I used data on the month-over-month (MoM) percentage change in total payroll employment in all 50 states (and Washington, DC) from 1990 through May 2017. While recessions are typically defined as a decline in output, not employment, national employment recessions and output recessions have historically been highly correlated. Additionally, the state employment data goes back further than the state GDP data, at least on FRED, and the employment data is measured at a higher frequency. To give a sense of what the raw data looks like, below is the MoM percentage change in total payroll employment for Minnesota:

To estimate recession probabilities for each state, I use a version of the Markov Switching (MS) model developed in Hamilton (1989). In this model, there are two regimes, or “states of the world”. When Hamilton estimated this model using data on U.S. GNP, it returned two clear regimes – “expansion” and “recession”. Furthermore, as a byproduct of estimation, the model provided estimated probabilities for each regime at each date in the history of the data, and these probabilities matched up very closely with the official recession dates in the U.S.

I decided to estimate an MS model independently for each U.S. state, using the MoM percentage change in total payroll employment as the data. Similar estimation strategies have been undertaken before; see, for example, Owyang, Piger, and Wall (2005). After censoring the data to remove large outliers that greatly influenced estimation in approximately 10 states (such as the massive decline in employment in Louisiana following Hurricane Katrina), I fit the following Markov Switching model to each U.S. state independently:
$y_t = \mu_0 + \mu_1 s_t + \rho(y_{t-1}-\mu_0-\mu_1 s_{t-1}) + \varepsilon_t$
$\varepsilon_t \sim N(0,\sigma^2)$
And $s_t \in \{0,1\}$ evolves according to an exogenous first order Markov process, with transition matrix given by:
$P= \begin{bmatrix} p_{00} & p_{01} \\ p_{10} & p_{11} \end{bmatrix}$
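For fixed parameter values, the filtered regime probabilities in this model can be computed with the Hamilton filter. Below is a minimal NumPy sketch (the parameter values and the simulated data are hypothetical; the actual estimation in this post is Bayesian, and the regime probabilities fall out as a byproduct of that estimation):

```python
import numpy as np

def hamilton_filter(y, mu, rho, sigma, P):
    """Filtered probabilities Pr(s_t = j | y_{1:t}) for the MS-AR(1) above,
    given fixed parameters. mu = (mu_0, mu_0 + mu_1) are the regime means
    and P[i, j] = Pr(s_t = j | s_{t-1} = i)."""
    T = len(y)
    probs = np.zeros((T, 2))
    # initialize at the ergodic distribution of the two-state chain
    p1 = (1 - P[0, 0]) / (2 - P[0, 0] - P[1, 1])
    filt = np.array([1 - p1, p1])
    probs[0] = filt
    for t in range(1, T):
        # conditional mean depends on both s_{t-1} (rows) and s_t (columns)
        m = mu[None, :] + rho * (y[t - 1] - mu[:, None])
        dens = np.exp(-0.5 * ((y[t] - m) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        joint = filt[:, None] * P * dens   # Pr(s_{t-1}=i, s_t=j | y_{1:t})
        joint /= joint.sum()
        filt = joint.sum(axis=0)
        probs[t] = filt
    return probs

# Quick check on simulated monthly growth data (all values hypothetical)
rng = np.random.default_rng(0)
mu, rho, sigma = np.array([0.2, -0.2]), 0.25, 0.15
P = np.array([[0.95, 0.05], [0.15, 0.85]])
s_prev, y = 0, [mu[0]]
for _ in range(299):
    s = s_prev if rng.random() < P[s_prev, s_prev] else 1 - s_prev
    y.append(mu[s] + rho * (y[-1] - mu[s_prev]) + rng.normal(0, sigma))
    s_prev = s
probs = hamilton_filter(np.array(y), mu, rho, sigma, P)
print(probs[-1])  # [Pr(expansion), Pr(recession)] in the final month
```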

I performed Bayesian estimation, with the following priors on the regression coefficients:

• Annual expansion growth rate $\sim N(2.4,0.85)$
• Annual recession growth rate $\sim N(-2.4,0.85)$
• AR(1) term $\sim N(0.25,0.06)$

Note that I am using the annual growth rate here instead of the monthly growth rate, since it is a more intuitive number. These priors imply 99% prior confidence intervals for the unconditional annual growth in expansions and recessions of roughly $[0\%,4.8\%]$ and $[-4.8\%,0\%]$, and a 99% prior confidence interval for the AR(1) of roughly $[-0.4,0.9]$.
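These interval endpoints line up if the second parameter of each prior is read as a variance rather than a standard deviation (my reading; the post does not say explicitly). A quick check using only the Python standard library:

```python
from statistics import NormalDist

def ci99(mean, var):
    # 99% equal-tailed interval of N(mean, var), treating var as a variance
    d = NormalDist(mu=mean, sigma=var ** 0.5)
    return d.inv_cdf(0.005), d.inv_cdf(0.995)

print([round(x, 2) for x in ci99(2.4, 0.85)])   # roughly [0, 4.8], as in the text
print([round(x, 2) for x in ci99(0.25, 0.06)])  # roughly [-0.4, 0.9], as in the text
```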

For the transition probabilities, the prior probability of staying in expansion next month if the state was in expansion this month is set to 0.9, and the prior probability of staying in recession next month if the state was in recession this month is set to 0.8, each with 5 prior observations.
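One common reading of “probability 0.9 with 5 prior observations” is a conjugate Beta prior: Beta(0.9·5, 0.1·5) on $p_{00}$ and Beta(0.8·5, 0.2·5) on $p_{11}$ (again my reading; the post does not spell this out). Under that reading, the posterior update simply adds observed transition counts to the prior counts:

```python
# Conjugate Beta update for a transition probability (hypothetical counts)
a, b = 0.9 * 5, 0.1 * 5          # Beta(4.5, 0.5) prior on p00
stays, switches = 36, 4          # months that stayed in / left expansion
post_mean = (a + stays) / (a + b + stays + switches)
print(round(post_mean, 3))       # 0.9
```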

RESULTS

In regard to the first question – are any states currently in recession? – the answer is probably no. As of May 2017, only Idaho had a recession probability of at least 50% (and it was exactly 50%). However, May was the first month in which Idaho's recession probability exceeded 49%, and based on earlier research on national recessions, the probability typically has to exceed 49% for at least two consecutive months to reliably signal the onset of a recession.

| State | Rec. Prob. |
|-------|------------|
| ID    | 50%        |
| NJ    | 40%        |
| NH    | 39%        |
| OK    | 34%        |
| KS    | 32%        |

As far as Connecticut is concerned, it currently has an estimated recession probability of 0%, but it also has the slowest estimated expansion growth rate among all 50 states. This could be a result of the factors discussed in the article, or simply of out-migration (and disentangling these two causal factors is not something I am able to do).

| State | Exp. Growth Rate |
|-------|------------------|
| NV    | 3.9%             |
| UT    | 3.6%             |
| …     | …                |
| PA    | 0.9%             |
| CT    | 0.8%             |

In regard to the second question – has there been any historical pattern regarding which states enter recession “first,” before the beginning of a national recession? – I don’t find any such pattern. The two images below show monthly employment recession probabilities for all 50 states (plus DC) over time, starting in 1990.

CONCLUSION

I used an MS model with AR(1) dynamics to estimate historical recession probabilities in all 50 U.S. states. For the most recent month for which data is available, May 2017, I found that there were probably no U.S. states in recession, although if payroll employment growth is again negative in Idaho in June, it would likely indicate an employment recession in Idaho. I also found that there does not seem to be a consistent pattern regarding which states enter recessions first, prior to a national recession. In other words, there are no states that have served as reliable “leading indicators” for the national economy over the last three business cycles. Finally, while Connecticut is the wealthiest state in the U.S. in per-capita terms, it has had the slowest rate of increase in employment during expansions over the past 25 years. The current methodology does not allow me to determine any factors that may be causing this slow growth.

As new data is released, I will keep updated graphs and estimates here: http://adamjcheck.com/state_rec.html

# Employment Policies Cannot Solve Poverty

Employment policies, broadly defined here as policies that aim to achieve the maximum amount of employment and/or a living wage for employed individuals, are central planks of both major political parties' platforms.  While the exact policies each party advocates differ, these policies are seen as serving two primary objectives:
1. Bolstering the middle class (either through increasing its size from the bottom up or increasing its income).
2. Helping the poor and alleviating poverty by providing the poor with either more income or more employment opportunities.
These policies may be effective at achieving the first objective. Surely, the supposed effect on the middle class is the motivating political force behind these types of policies. However, given the types of individuals who actually find themselves in poverty, I believe that these employment policies play an outsized role in our political discourse, especially when they are presented as a means to combat poverty.

To fully understand my position requires letting go of many prejudices about the “undeserving” poor.  We are often told that the reason most people lack an adequate income is a “culture of poverty” which pervades low-income areas, or a moral or academic failing of the individual impoverished person – therefore, they are “undeserving” of assistance.  Overcoming this culture, this logic says, requires much work and an abundance of jobs: we should instill discipline in impoverished children through the harsh tactics practiced in many charter schools and ask low-wage workers to work longer hours. After this discipline is achieved, it is vital to make sure that there are enough jobs available, and that they pay a living wage. One strategy, popular in the Republican party, is to bless job creators with tax cuts so that they have the means to provide more jobs. Other strategies, more commonly attributed to the Democratic party, are increases in job training and increases in the minimum wage.

While these measures may be well intentioned, they surely cannot solve, or even come close to solving, the problem of poverty in America.  This is due to one simple fact that the “culture of poverty” types do not like to disseminate – the vast majority of the poor in America are not allowed to hold full-time employment (children and students) or are not capable of holding employment (the elderly, the disabled, and their caretakers).  This can be seen quite easily in the two charts below, created by Matt Bruenig at Demos.  To create these charts, Matt uses Census-level data to break down the percentage of the poor population made up of children, the elderly, the disabled, students 18+, caretakers of disabled relatives, the unemployed, the employed, and “other” – members of a poor household who are not in the labor force. These charts use the “official poverty metric,” which takes into account government transfer payments like Social Security, but does not take into account food-security programs like SNAP or the Earned Income Tax Credit.

If a culture of poverty truly afflicted the majority of these individuals, you would be hard pressed to find evidence of it in this chart.  For example, if poor people were truly lazy, the vast majority of them should lie in the “other” category, meaning that they would be able-bodied, working age, not employed, and not looking for a job.  However, only 7.6% of poor individuals fall into the “other” category.  It should also be noted that “other” does not only include the lazy, but also, for example, poor stay-at-home parents whose partner is in the workforce but who cannot afford child care.

Reading this chart also makes clear that employment policies would do very little to alleviate poverty. Even if we could reduce the unemployment rate among the poor to 0% (we couldn’t), many of these formerly unemployed individuals would remain in poverty if they earned a wage at or near the minimum.  Furthermore, even if they all somehow escaped poverty (say, through an increase in the minimum wage), and all of the fully employed escaped poverty as well, 75% of poverty would still remain (this excludes the children these people have, but even if all of those children were lifted out of poverty, 45% of poverty would still remain).

# Stochastic Volatility Approximations

This post is geared more toward economists and econometricians, and will compare two different approximations that are frequently used when estimating stochastic volatility.

Basically, stochastic volatility means that the variance of a process can change over time. This is a commonly observed phenomenon in economic data.  For example, if we look at the quarterly growth rate of GDP since 1948, we can see that GDP bounced around a lot before 1980, but then it settled down until the financial crisis hit in 2008.

It’s a little confusing at first, but the main problem is that, after linearizing the model by taking logs of the squared observations, the measurement error follows a log chi-squared distribution rather than a normal one, so you cannot use the Kalman filter to estimate the time-varying volatility directly.  You can use a particle filter, but the most common approach is the 7-point mixture-of-normals approximation introduced in Kim, Shephard, and Chib (1998, ReStud).  However, a more accurate 10-point approximation was later introduced in Omori, Chib, Shephard, and Nakajima (2007, Journal of Econometrics).
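To see the problem concretely: for $\varepsilon_t \sim N(0,1)$, the linearized error $\log \varepsilon_t^2$ is log chi-squared with one degree of freedom, which is strongly left-skewed. A quick simulation illustrates this (the quoted moments, mean $\approx -1.27$ and variance $\pi^2/2 \approx 4.93$, are standard properties of this distribution):

```python
import numpy as np

# Draws of log(eps^2) for eps ~ N(0, 1): log chi-squared(1)
rng = np.random.default_rng(1)
x = np.log(rng.standard_normal(200_000) ** 2)
skew = ((x - x.mean()) ** 3).mean() / x.std() ** 3
# mean ~ -1.27, variance ~ 4.93, and clearly skewed left (not close to normal)
print(round(x.mean(), 2), round(x.var(), 2), round(skew, 2))
```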

Since a large body of research has used the earlier, 7-point distribution, I wanted to see how much better the 10-point distribution performed in practice.  To do the comparison, I generated 100 fake time series, each of length 100 periods, and all with stochastic volatility.  In the standard set-up, the volatility follows a random walk:

$h_t = h_{t-1} + e_t$
$e_t \sim N(0,\sigma^2)$

Since the variance of this random walk, $\sigma^2$, controls how much the volatility can change over time, I repeated the exercise for four different values: 0.01, 0.05, 0.10, and 0.15.  To compare the approximations, I performed Bayesian estimation using the Gibbs sampler, with 1,000 burn-in draws and 9,000 posterior draws. Since I generated the data, I knew the true underlying values of both $\sigma^2$ and the entire time path of volatilities, $h_t$ for $t = 1,\dots,100$. Therefore, I could compare the estimates obtained using each approximation to the true values.
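A minimal simulation of one such series is below. The observation equation is not written out above, so I assume the standard SV form $y_t = e^{h_t/2}\varepsilon_t$ with $\varepsilon_t \sim N(0,1)$; the final line forms the linearized measurement that both mixture approximations target:

```python
import numpy as np

rng = np.random.default_rng(7)
T, sig2 = 100, 0.05                    # one of the four variance values tried
# Random-walk log-volatility h_t = h_{t-1} + e_t, starting from h_0 = 0
h = np.cumsum(rng.normal(0.0, np.sqrt(sig2), size=T))
y = np.exp(h / 2) * rng.standard_normal(T)   # assumed observation equation
# Linearization: log y_t^2 = h_t + log eps_t^2, where the log chi-squared
# error is what the 7- and 10-point normal mixtures approximate
y_star = np.log(y ** 2 + 1e-10)        # small offset guards against log(0)
print(y_star.shape)
```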

To judge the approximations, I used four criteria: the bias of the average estimated volatility path, the mean squared error (MSE) of the average estimated volatility path, the bias of the $\sigma^2$ estimate, and the MSE of the $\sigma^2$ estimate. The results are as follows, with the bolded numbers representing the better performance.

The results are actually fairly mixed, although it does appear that the mixture of 10 normals performs very slightly better. The differences are not economically meaningful, however.

So what have we learned?  It probably isn’t worth re-estimating previous work that had used the 7-point mixture, since the gains from using the 10-point are so small.  But, for a young economist, it wouldn’t hurt to use the 10-point (it is more accurate, no more difficult to code, and only negligibly increases the run-time of the estimation procedure).