May 25th, 2004, 1:53 pm
Zed,

That sounds like overkill to me. Someone is trying to model and forecast y, and you suggest starting by modeling var(y) (or y^2). Okay, but then how do they get back to modeling y once the GARCH is done? I'm not sure, but here's a guess: run through the time series, forecast var(y) for each time t, and divide y(t) by the square root of that forecast (the conditional standard deviation), so the standardized series has roughly constant unit variance. Once the variance has been divided out, they can finally get back to modeling y, except that they have already fitted at least one parameter, and more likely several, to the data, reducing the statistical significance of whatever they end up finding for y.

Here's another thought: why not just divide by a sample estimate of var(y) at each t rather than a model forecast? This "actual variance" could be calculated with a moving window, for example using the previous 3 days and the next 3 days for a window width of 7. On the down side, the moving window results in even MORE estimated quantities: with 700 data points, a window of width 7 amounts to estimating on the order of 100 effectively independent variances! (A quick sketch of both schemes is below.)

Zed, can you cite a paper that first estimates a GARCH model in order to forecast the underlying variable itself?

Jean
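A minimal sketch of the two standardization schemes described above, under stated assumptions: the GARCH(1,1) fit uses Kevin Sheppard's `arch` package, and the toy heteroskedastic series `y` is synthetic; neither comes from the thread, they are only there to make the comparison concrete.

```python
# Two ways to divide the variance out of y before modeling its level,
# as described in the post. Assumes numpy, pandas, and the "arch"
# package are installed; y here is a made-up heteroskedastic series.
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(0)
# 700 points whose volatility drifts upward over time
y = pd.Series(rng.standard_normal(700) * np.linspace(0.5, 2.0, 700))

# Scheme 1: fit a GARCH(1,1), then divide y(t) by the fitted conditional
# standard deviation (the square root of the variance forecast).
res = arch_model(y, vol='GARCH', p=1, q=1).fit(disp='off')
y_garch_std = y / res.conditional_volatility

# Scheme 2: divide by a centered rolling standard deviation
# (previous 3 points, the current point, next 3 points: width 7).
rolling_sd = y.rolling(window=7, center=True).std()
y_window_std = y / rolling_sd  # NaN at the 3 edge points on each side

# Both standardized series should have variance roughly equal to 1.
print(y_garch_std.var(), y_window_std.var())
```

Either way, the objection in the post stands: both routes spend degrees of freedom estimating the variance before anyone has modeled the level of y itself.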