- nastradamus
**Posts:** 13

Remember the squared returns are just a proxy for the realized and unobservable variance. You'll have to define "perform very bad": what criteria are you using? Are you sure your moving average has the same informational content as your GARCH model? That is, that you are not using information at time t to estimate the volatility at time t. What GARCH models are you using? What is the persistence of your GARCH model? For GARCH(1,1), what is a+b, where GARCH(1,1) is

v(t) = c + a*e(t-1)^2 + b*v(t-1)
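The recursion above is a one-liner to run. A minimal sketch (parameter names c, a, b follow the post; the returns series and parameter values here are simulated placeholders, not a fitted model):

```python
import numpy as np

def garch11_variance(returns, c, a, b):
    """One-step-ahead conditional variances from the GARCH(1,1) recursion
    v(t) = c + a*e(t-1)^2 + b*v(t-1), seeded with the sample variance."""
    v = np.empty(len(returns))
    v[0] = np.var(returns)  # common choice for the initial variance
    for t in range(1, len(returns)):
        v[t] = c + a * returns[t - 1] ** 2 + b * v[t - 1]
    return v

rng = np.random.default_rng(0)
r = rng.standard_normal(1000) * 0.01                  # toy daily returns
v = garch11_variance(r, c=1e-6, a=0.05, b=0.90)       # a + b = 0.95: high persistence
```

Note that v(t) uses only information available at t-1, which is exactly the point the post raises about making the moving average comparison fair.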

True, the squared returns are only approximations. I use the parameters from a GARCH model fitted in Splus; a+b mostly lies around 0.9, depending on the stock and the time window. I am not using integrated GARCH. I also use EGARCH, TGARCH and PGARCH. They all give the same result (well, different numbers of course, but the same conclusion): all perform worse than (weighted) moving average models on forecast measures such as MAE (mean absolute error) and RMSE (root mean squared error).

I have also read an article noting this "problem". In that article, they get (just as I do) about 2/3 of the estimates too high, which results in a bad MAE. The moving average does not have this property. The article also says that the GARCH models provide (on average) more explanatory power with respect to the actual volatility. I am not quite sure what is meant by this. Can anybody help me on that point? Thanks
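The MAE/RMSE comparison described here is easy to reproduce. A sketch with simulated placeholder data (with real data you would substitute your fitted GARCH and moving-average variance forecasts for `ma_forecast`):

```python
import numpy as np

def mae(forecast_var, realized_sq):
    """Mean absolute error of variance forecasts against squared returns."""
    return np.mean(np.abs(forecast_var - realized_sq))

def rmse(forecast_var, realized_sq):
    """Root mean squared error of variance forecasts against squared returns."""
    return np.sqrt(np.mean((forecast_var - realized_sq) ** 2))

rng = np.random.default_rng(1)
r = rng.standard_normal(500) * 0.01
realized = r ** 2                    # squared returns as the (noisy) variance proxy

# toy moving-average forecast: mean of the previous 20 squared returns,
# so the forecast for day t uses only data up to t-1
window = 20
ma_forecast = np.array(
    [realized[max(0, t - window):t].mean() if t > 0 else realized.mean()
     for t in range(len(r))])

print("MAE :", mae(ma_forecast, realized))
print("RMSE:", rmse(ma_forecast, realized))
```

Because the squared-return proxy is so noisy, even a good variance forecast scores a large MAE/RMSE, which is part of why these criteria can be misleading here.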

- nastradamus

MAE and RMSE might not be the best criteria to use. The distribution of squared returns is quite non-normal. I suspect, though I don't know for sure, that these forecasting criteria work best when the data is close to normal. One idea is to use the log of squared returns vs. the log of the forecasts; the distribution of log squared returns is much closer to normal. See if that gives you the same conclusion.

There aren't many papers that talk extensively about how to analyze volatility forecasts. I've seen people test the fit of GARCH forecasts by regression: regress squared returns (or log squared returns) on the forecast (or log forecast). Including both the GARCH forecast and the moving-average forecast in the equation is a direct way to test which one has more explanatory power. You might see that the moving average has lower significance, indicating GARCH is better. This, I believe, is what the paper means by "more explanatory power".
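The encompassing regression described here (log squared returns on a constant and both forecasts) can be sketched with plain least squares. Everything below is simulated placeholder data: the "GARCH" forecast is built to track the latent variance and the "MA" forecast is deliberately uninformative, just to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 750
log_true_var = np.log(1e-4) + 0.5 * rng.standard_normal(n)  # latent log variance
r = np.exp(0.5 * log_true_var) * rng.standard_normal(n)     # returns

log_sq = np.log(r ** 2 + 1e-12)                             # log squared returns
garch_fc = log_true_var + 0.05 * rng.standard_normal(n)     # toy "good" forecast
ma_fc = log_true_var.mean() + rng.standard_normal(n)        # toy "uninformative" forecast

# encompassing regression: log squared returns on a constant and both forecasts
X = np.column_stack([np.ones(n), garch_fc, ma_fc])
beta, *_ = np.linalg.lstsq(X, log_sq, rcond=None)
print("coefficients (const, garch, ma):", beta)
```

With real data you would compare the t-statistics of the two forecast coefficients; the one that survives with a significant coefficient is the one with more explanatory power.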

OK, I calculated abs(log(sigma_est^2) - log(sigma_obs^2)) and compared the models. The different GARCH models still perform much worse than the other, simpler models. With the MAE and RMSE I used before, the forecast of tomorrow's vola even gave worse values than the forecast of the vola five days from now (looking at the daily vola only, not the sum over the next five days, i.e. the predicted value for t+5 compared with the observed vola at t+5). The log measure above still has this last property, though the improvement is small.

I am almost drawn to the conclusion that forecasting vola is a pretty difficult thing. Many recent papers come up with all kinds of stochastic vola models, and they all compare forecasting ability against a standard GARCH(1,1). They are often very happy because their models produce better forecasts. I ask whether we should put so much effort into this field, or maybe just use e.g. a weighted moving average and put our energy into other things. Take the day off and head off to the bar...
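The log-scale error measure used here is a one-liner; a small sketch with placeholder arrays (the observed series is a simulated squared-return proxy, the forecast a toy constant):

```python
import numpy as np

def log_mae(var_forecast, var_realized):
    """Mean absolute error on the log-variance scale:
    mean of |log(sigma_est^2) - log(sigma_obs^2)|."""
    return np.mean(np.abs(np.log(var_forecast) - np.log(var_realized)))

rng = np.random.default_rng(3)
obs = rng.chisquare(1, size=250) * 1e-4 + 1e-12   # squared-return proxy, kept positive
fc = np.full(250, 1e-4)                           # toy flat variance forecast
print(log_mae(fc, obs))
```

One practical caveat: squared returns can be zero (or numerically tiny), so the small floor added above, or dropping zero-return days, is needed before taking logs.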

Hi Muzzex,I would be interested to know how many observations are you using to fit your Garch model ?Also does it make a difference if you use 500 point rather then 1000 points ? is there a rule of thumb that every body is using ??ThanksHabib

- nastradamus

Habib, in general you would like to have more observations rather than less. A report Engle and I prepared a while back indicates that parameters estimated from small samples, such as 500 observations, tend to be unstable; 1000 or more would be better. Anyway, it's a good question. I think the best way to measure a volatility forecast is to use the idea that if the forecast is accurate, then the standardized returns should be normally distributed (or distributed as Student t or GED if one uses one of the alternative distributions). In my experience of forecasting S&P500 volatility, GARCH does better than a moving average, using regression analysis as mentioned in my previous post.
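The diagnostic suggested here (if the volatility forecast is right, r_t / sigma_t should look like the assumed distribution) can be sketched by checking the sample skewness and excess kurtosis of the standardized returns. Simulated data again; here the standardization uses the true volatility path, so the standardized series should look normal:

```python
import numpy as np

def skew_kurt(x):
    """Sample skewness and excess kurtosis (both ~0 for normal data)."""
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 3), np.mean(z ** 4) - 3.0

rng = np.random.default_rng(4)
sigma = 0.01 * np.exp(0.2 * rng.standard_normal(1500))  # toy volatility path
r = sigma * rng.standard_normal(1500)                   # returns with that vol

std_ret = r / sigma                 # standardized by the (here, true) volatility
print("raw returns  :", skew_kurt(r))        # fat-tailed mixture
print("standardized :", skew_kurt(std_ret))  # close to (0, 0) if sigma is right
```

With a fitted model you would standardize by the model's one-step-ahead sigma; leftover excess kurtosis then signals that the volatility forecast (or the assumed distribution) is wrong.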

nastradamus, thank you very much for your reply. Is it possible to obtain a copy of the report that you mentioned? Thanks again

Hi again, I have mostly NOT used as many as 1000 observations. I have tested how the parameters change by estimating them over different numbers of past observations. Using windows smaller than 500 makes the parameters change very much. I think this is a bit unfortunate, because I don't want to assume that the time series is stationary over a period as long as 1000 observations, which corresponds to several years of daily data.

I am using the GARCH models to forecast volatility for a single stock (or many different ones), and some stocks have only been on the market for a few years, meaning we do not even have 500 data points in some cases. Do you suggest not using GARCH models at all to forecast the vola for such a stock? Thanks for your answers, Muzzex
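One way to see the window-size effect described here is to refit the model on nested samples and compare the estimates. A minimal maximum-likelihood sketch (Gaussian likelihood, scipy's L-BFGS-B with box bounds; simulated data, not a production estimator, and the bounds do not enforce a+b < 1):

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, r):
    """Negative Gaussian log-likelihood of a GARCH(1,1), up to constants."""
    c, a, b = params
    v = np.empty(len(r))
    v[0] = np.var(r)
    for t in range(1, len(r)):
        v[t] = c + a * r[t - 1] ** 2 + b * v[t - 1]
    return 0.5 * np.sum(np.log(v) + r ** 2 / v)

def fit_garch11(r):
    x0 = np.array([0.1 * np.var(r), 0.05, 0.85])
    bounds = [(1e-12, None), (1e-6, 0.999), (1e-6, 0.999)]
    res = minimize(neg_loglik, x0, args=(r,), method="L-BFGS-B", bounds=bounds)
    return res.x

# simulate a GARCH(1,1) path, then fit on the last 500 vs all 2000 observations
rng = np.random.default_rng(5)
n, c0, a0, b0 = 2000, 1e-6, 0.05, 0.90
r = np.empty(n)
v = c0 / (1 - a0 - b0)               # start at the unconditional variance
for t in range(n):
    r[t] = np.sqrt(v) * rng.standard_normal()
    v = c0 + a0 * r[t] ** 2 + b0 * v

fits = {}
for window in (500, 2000):
    c, a, b = fit_garch11(r[-window:])
    fits[window] = (c, a, b)
    print(f"window={window}: a={a:.3f}  b={b:.3f}  a+b={a + b:.3f}")
```

Repeating this over many rolling windows, rather than two nested ones, is the experiment Muzzex describes; the scatter of the estimates across windows is a direct measure of their instability.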

- nastradamus

Muzzex,

1) Do these packages offer variance targeting? If GARCH's volatility is too high and you do not have a large enough sample, the constant in the volatility equation might be important. Usually people estimate that constant. Another way is to use variance targeting: since the unconditional variance of a GARCH(1,1) is c/(1 - alpha - beta), the constant is set to var0*(1 - alpha - beta), where var0 is the unconditional (sample) variance of the data. I think Splus offers this option. Ding, the writer of Splus' GARCH module, originated this idea.

2) Another idea is to use non-normal distributions. This usually lowers the persistence of the model, which generally lowers the volatility.

3) The problem with a small data set is that there are not enough "features" in the data to pin down the model. This is a particular problem with maximum-likelihood estimators such as GARCH. The report I mentioned before started precisely because a researcher was getting unreliable results when using small data sets. One could also extend the number of observations using proxy stocks.

4) Yes, forecasting volatility is hard. There has not been enough published work on how to do this and how to judge the results.
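Variance targeting from point 1 is a two-line computation. Note the algebra: the GARCH(1,1) unconditional variance is c/(1 - a - b), so targeting fixes c = var0*(1 - a - b), where var0 is the sample variance. A sketch with simulated returns and toy alpha/beta values:

```python
import numpy as np

def variance_targeted_c(returns, a, b):
    """Variance targeting: fix the GARCH(1,1) constant c so the model's
    unconditional variance c / (1 - a - b) equals the sample variance."""
    var0 = np.var(returns)
    return var0 * (1.0 - a - b)

rng = np.random.default_rng(6)
r = rng.standard_normal(400) * 0.015   # short sample, where targeting helps most
a, b = 0.06, 0.90                      # alpha, beta from some hypothetical fit
c = variance_targeted_c(r, a, b)

# check: the implied unconditional variance matches the sample variance
print(c / (1 - a - b), np.var(r))
```

This removes one free parameter from the likelihood, which is exactly why it stabilizes estimation on small samples: only a and b are left to estimate.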

Hi, since you are restricted to so few observations, would including extra variables in the variance equation, e.g. volume of trade, improve your volatility forecast?

Is there any good place to find some kind of GARCH manual??

Start with google and work your way through

The Handbook of Econometrics is available online, and it has a chapter on GARCH.

Last edited by reza on December 6th, 2003, 11:00 pm, edited 1 time in total.

Hey reza: could you please post the link to the Handbook of Econometrics? Thanks

Here it is: www.elsevier.com/hes/books/02/menu02.htm (item 49 is on ARCH).