Stochastic Volatility Approximations

This post is geared more toward economists and econometricians, and will compare two different approximations that are frequently used when estimating stochastic volatility.

Basically, stochastic volatility means that the variance of a process can change over time. This is a commonly observed phenomenon in economic data. For example, if we look at the quarterly growth rate of GDP since 1948, we can see that GDP bounced around a lot before 1980, but then it settled down until the financial crisis hit in 2008.

[Figure: quarterly growth rate of GDP, 1948 onward]

It’s a little confusing at first, but the main problem is this: when you linearize the model by taking the log of the squared observations, the resulting error term has a log chi-squared distribution rather than a normal one, so you cannot use the Kalman filter to estimate the time-varying volatility directly. You can use a particle filter, but the most common approach is to approximate the log chi-squared density with a 7-point mixture of normals, introduced by Kim, Shephard, and Chib (1998, ReStud). A better 10-point approximation was later introduced by Omori, Chib, Shephard, and Nakajima (2007, Journal of Econometrics).
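To see the problem concretely, here is a minimal Python sketch (my own illustration, not from the original post; it assumes the standard SV measurement equation y_t = exp(h_t/2)*eps_t) that checks the moments of the linearized error term against the known log chi-squared values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard SV measurement equation: y_t = exp(h_t / 2) * eps_t, eps_t ~ N(0, 1).
# Squaring and taking logs gives log(y_t^2) = h_t + log(eps_t^2),
# which is linear in h_t -- but log(eps_t^2) is log chi-squared(1), not normal.
eps = rng.standard_normal(1_000_000)
z = np.log(eps**2)

print(f"mean:     {z.mean():+.3f}  (log chi-squared(1) theory: -1.2704)")
print(f"variance: {z.var():.3f}   (theory: pi^2/2 = {np.pi**2 / 2:.3f})")

# The distribution is heavily left-skewed, so a single Gaussian (and hence
# the Kalman filter) is a poor fit; KSC and Omori et al. instead match it
# with a 7- or 10-component mixture of normals.
skew = np.mean(((z - z.mean()) / z.std())**3)
print(f"skewness: {skew:+.3f}  (a normal would be 0)")
```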

Since a large body of research has used the earlier, 7-point distribution, I wanted to see how much better the 10-point distribution performed in practice.  To do the comparison, I generated 100 fake time series, each of length 100 periods, and all with stochastic volatility.  In the standard set-up, the volatility follows a random walk:

h_{t} = h_{t-1} + e_{t}
e_{t} ~ N(0,sig2)

Since the variance of this random walk, sig2, controls how much the volatility can change over time, I repeated the exercise for four different values: 0.01, 0.05, 0.10, and 0.15. To compare the approximations, I performed Bayesian estimation using the Gibbs sampler, with 1,000 burn-in draws and 9,000 posterior draws. Since I generated the data, I knew the true underlying values of both sig2 and the entire time path of volatilities, h_{t} for t = 1:100. Therefore, I could compare the estimates I got from each approximation to the true values.
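For reference, here is a minimal sketch of the data-generating process in Python (my own reconstruction; the post doesn’t give code, and I’m assuming the standard measurement equation and an initial condition h_0 = 0):

```python
import numpy as np

def simulate_sv(T=100, sig2=0.05, seed=None):
    """One fake series with random-walk log-volatility h_t = h_{t-1} + e_t."""
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, np.sqrt(sig2), size=T)   # e_t ~ N(0, sig2)
    h = np.cumsum(e)                             # assumes h_0 = 0
    y = np.exp(h / 2) * rng.standard_normal(T)   # assumed measurement equation
    return y, h

# 100 series of 100 periods for each value of sig2
datasets = {
    sig2: [simulate_sv(T=100, sig2=sig2, seed=i) for i in range(100)]
    for sig2 in (0.01, 0.05, 0.10, 0.15)
}
```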

To judge the approximations, I used four criteria: the bias of the average estimated volatility path, the mean squared error (MSE) of the average estimated volatility path, the bias of the sig2 estimate, and the MSE of the sig2 estimate. The results are as follows, with the bolded numbers representing the better performance.
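In code, the four criteria might look like the following sketch (again my own reconstruction, assuming the posterior means are used as the point estimates):

```python
import numpy as np

def replication_errors(h_draws, sig2_draws, h_true, sig2_true):
    """Point-estimate errors for one simulated series.

    h_draws:    (n_draws, T) posterior draws of the volatility path
    sig2_draws: (n_draws,) posterior draws of sig2
    """
    h_err = h_draws.mean(axis=0) - h_true    # error of average estimated path
    s_err = sig2_draws.mean() - sig2_true    # error of sig2 posterior mean
    return h_err, s_err

# Across the 100 replications, stack the errors and report:
#   bias(h)    = mean of h_err values      MSE(h)    = mean of h_err**2
#   bias(sig2) = mean of s_err values      MSE(sig2) = mean of s_err**2
```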

[Table: bias and MSE of the estimated volatility path and of sig2 for the 7-point and 10-point mixtures, at each value of sig2, with the better performance in bold]

The results are actually fairly mixed, although it does appear that the mixture of 10 normals performs very slightly better. The differences are not economically meaningful, however.

So what have we learned? It probably isn’t worth re-estimating previous work that used the 7-point mixture, since the gains from the 10-point are so small. But for a young economist, it wouldn’t hurt to use the 10-point: it is more accurate, no more difficult to code, and only negligibly increases the run time of the estimation procedure.
