Hi there, I am working on GARCH volatility modelling too. So far alpha + beta has always been less than 1 and alpha & beta have always been greater than 1, but recently the betas have turned negative. Can anyone explain why, and how this can happen? Thanks.

Hi Lees, I guess you mean that alpha and beta are always greater than zero, and the sum less than one. As you probably know, the criterion for stationarity is that the sum is less than one. I have also fitted the GARCH model to different stock-return data, and I too sometimes get negative values for beta, especially when I use few past observations when computing the likelihood estimates of the parameters. Does this somehow mean that there is negative correlation between the volatilities? A negative beta implies that high volatility is followed by lower volatility, right?

- nastradamus
**Posts:**13**Joined:**

Lees, see #2, #3 and #5 below for possible reasons for your problem. Reasons for "badly behaved" parameters:

1) Bad starting values (e.g. starting values where the log-likelihood is undefined or nearly undefined, values too far from the optimum, etc.).

2) Sample too small: not enough data points to shape a reasonable model (this is true of most ML problems, especially GARCH). If you have a small sample, during which volatility has been steadily increasing and the data ends on a "high note", you might get an explosive (alpha + beta > 1) process. Aim for at least 500 observations for daily or lower frequencies; 800-1000 or more is better.

3) The data is homoskedastic! Do an ARCH test first to make sure you have ample reason to run a GARCH model. If, judging by robust standard errors, your beta is insignificant, think of using ARCH or drop the volatility portion of your model altogether.

4) Over-parametrization: if the true model is GARCH(1,1), estimating a GARCH(5,5) is a bad idea.

5) Wrong model/misspecification: a) pick your poison (not all are good ideas): ARCH, EGARCH, PARCH, TARCH, SWARCH, NGARCH, IGARCH, FIGARCH, FIEGARCH, CGARCH, etc.; b) if you are estimating an intraday series, you'd better make sure to model or remove intraday seasonalities.

6) Numerical issues: multiply your returns by 1000 or some other number to check whether you have any scaling problems.

7) Bad optimizer.

There are probably more, but these are the most likely.
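To make reasons #1 and #6 concrete, here is a minimal numpy-only sketch of the Gaussian GARCH(1,1) negative log-likelihood that an optimizer would minimize (function and variable names are mine; in practice you would hand this to something like scipy.optimize.minimize with sensible starting values, e.g. alpha around 0.05 and beta around 0.90):

```python
import numpy as np

def garch11_neg_loglik(params, r):
    """Negative Gaussian log-likelihood of GARCH(1,1):
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    omega, alpha, beta = params
    # Reject parameters outside the positivity/stationarity region
    # (reason #1: a bad starting point can land out here).
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return np.inf
    r = np.asarray(r, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()  # common choice: initialize at the sample variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2.0 * np.pi * sigma2) + r ** 2 / sigma2)
```

Reason #6 can then be checked directly: evaluate and optimize this on both r and 1000*r, and if the optimizer behaves well on one scale but not the other, you have a scaling problem.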

I recently saw a very good article by Fleming and Kirby in the Journal of Financial Econometrics about GARCH and SV. I would recommend it to anyone with an interest in GARCH and SV.

Here's a general question for the ARCH gurus: do these models stop at variance, or can they be generalized to handle higher moments, such as time-varying skew and kurtosis of a time series?

- nastradamus

Exotiq, there are two strands of not-so-developed literature out there. One estimates the skewness and kurtosis GARCH-style, in addition to the variance estimation; the kurtosis and skewness are, if memory serves, time-varying. The results are not very satisfactory. The other estimates the GARCH model with a skewed error distribution, which gives you the conditional skewed error distribution; from that estimated distribution you can calculate the (non-time-varying) skewness and kurtosis.

Hopefully the following summarizes in a few simple steps how to fit a GARCH process to a time series; gurus (reza, nastradamus, or anyone else who's fit GARCH before) please confirm or correct this:

1. Let u(i) = (X(i) - X(i-1))/X(i-1) be the series of discrete returns of X.
2. Initialize v(1,w,a,b) = w, i.e. set the first variance to the overall variance.
3. Define v(i,w,a,b) = (1 - a - b)*w + a*[u(i-1)]^2 + b*v(i-1,w,a,b).
4. Define z_i(w,a,b) = u(i)/sqrt(v(i,w,a,b)) as the series of "z-scores" of the returns.
5. Minimize [(Mean(z_i))^2 + (Stdev(z_i) - 1)^2 + (Skew(z_i))^2] w.r.t. <w,a,b> with constraints (a+b) <= 1, w > 0, a >= 0, b >= 0. In other words, fit the parameters <w,a,b> so that the z-scores are as close as possible to standard normal, while disallowing negative values that could cause instability.

Not so much trying to set the record for the shortest GARCH modeling how-to, but hopefully this is both simple and correct!
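The five steps above translate almost line-for-line into code; a minimal numpy sketch (function names are mine), with the constrained minimization over <w,a,b> left to whatever optimizer you prefer:

```python
import numpy as np

def z_scores(X, w, a, b):
    """Steps 1-4: discrete returns, variance recursion, standardized returns."""
    X = np.asarray(X, dtype=float)
    u = np.diff(X) / X[:-1]                       # step 1: u(i)
    v = np.empty_like(u)
    v[0] = w                                      # step 2: v(1) = w
    for i in range(1, len(u)):                    # step 3: variance recursion
        v[i] = (1 - a - b) * w + a * u[i - 1] ** 2 + b * v[i - 1]
    return u / np.sqrt(v)                         # step 4: z-scores

def objective(params, X):
    """Step 5: push z-scores toward mean 0, stdev 1, skew 0."""
    w, a, b = params
    if w <= 0 or a < 0 or b < 0 or a + b > 1:     # step 5 constraints
        return np.inf
    z = z_scores(X, w, a, b)
    skew = np.mean((z - z.mean()) ** 3) / z.std() ** 3
    return z.mean() ** 2 + (z.std() - 1.0) ** 2 + skew ** 2
```

Worth noting that maximizing the Gaussian likelihood, as most packages do, is the more standard estimator than this moment-matching objective, though the recursion in steps 2-3 is identical either way.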

Hi, it is easy to model the mean equation, but when it comes to modelling the GARCH variance equation I get confused about which structure to use. Most literature assumes GARCH(1,1) for simplicity. I would like to know how to select the right GARCH variance equation, with the right number of lags for the innovations and the past variance. Thanks

- nastradamus

Exotiq, if the data is skewed you are going to have a hard time fitting a square peg into a round hole (i.e. a skewed distribution into a normal one). You are assuming that the GARCH parameters will somehow be able to explain away the skewness. They might, but then again they probably will not; I don't see how that can happen without estimating a more complex model. Looking at your setup, as an alternative you might want to estimate a conditionally skewed z_i.

Habib, GARCH(1,1) is general enough for most of what you want; you probably don't need more lags. Remember that GARCH(1,1) is ARCH of infinite order. Formally, the standardized residuals from a GARCH model should be i.i.d. Do an ARCH test on them to decide whether you should add more lags; you can also use a likelihood-ratio test to compare models formally. Using a different model (EGARCH, TARCH, etc.) is another alternative; in that case, use the log-likelihood value to compare the models, and as before you can use a likelihood-ratio test between nested models.
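For the likelihood-ratio comparison between nested models (say GARCH(1,1) inside GARCH(2,2), which restricts two parameters), the statistic is 2*(LL_full - LL_restricted), chi-square with df equal to the number of restrictions under the null. A small sketch (name mine), using the closed-form chi-square tail that exists for even df:

```python
import math

def lr_test_pvalue(ll_restricted, ll_full, df):
    """Likelihood-ratio test for nested GARCH models.
    2*(LL_full - LL_restricted) ~ chi-square(df) under the null that
    the restricted model is adequate; df = number of restrictions.
    Uses the closed-form chi-square tail valid for even df."""
    assert df % 2 == 0, "this closed form needs even df"
    x = 2.0 * (ll_full - ll_restricted)
    # P(chi2_df > x) = exp(-x/2) * sum_{k < df/2} (x/2)^k / k!
    return math.exp(-x / 2.0) * sum((x / 2.0) ** k / math.factorial(k)
                                    for k in range(df // 2))
```

For GARCH(1,1) against GARCH(2,2), df = 2; a p-value below your chosen level argues for keeping the extra lags.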

So what is a better way than my #5 to practically fit a GARCH model/estimate its parameters? All the material I have seen doesn't go into that step. Is there any open C++ or Mathematica code I could look at that estimates GARCH parameters? The packages that do it seem to be black boxes. The reasons I am looking for a simple, open algorithm are that:
1.) I'd like to be able to write a practical GARCH lesson on a postcard.
2.) I'd like to extend GARCH to handle time-varying higher moments, like skew, kurtosis, and co-skew.

Would any GARCH gurus give me a hint? I am curious about the following two properties of historical market data:

1/ In GARCH(1,1), as Netwalker explained long ago in this thread, the autocorrelation of volatility decays exponentially. But empirical SP500 data tells a different story: it is power-law decay instead, and the temporal correlation can survive several months (or even years). In practice, to apply GARCH one could tune the parameters to make the lifetime of the exponential law very large, say 3 months, so that the difference between the power and exponential laws becomes insignificant. Yet the largeness of this timescale puzzles me. In other words, is there a fundamental reason for the emergence of a large timescale in real markets?

2/ It is also found from empirical SP500 data that the return probability (the probability that the index remains unchanged after a time delay t; graphically, the peak of the pdf versus return) decreases as t^(-0.7), as opposed to the t^(-0.5) of a Gaussian distribution. Can any version of GARCH or its extensions account for this fact?

Thanks a lot.
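Point 1 is easy to check directly on data: compute the sample autocorrelation of |returns| (or squared returns) and see whether log(ACF) is linear in the lag (exponential decay, GARCH-like) or linear in log(lag) (power-law decay). A minimal sketch of the ACF itself (name mine):

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation of x at lags 1..max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom
                     for k in range(1, max_lag + 1)])
```

Fitting a straight line to (lag, log ACF) versus (log lag, log ACF) and comparing the fits distinguishes the two decay laws; for GARCH(1,1) the theoretical ACF of squared returns decays geometrically, roughly like (alpha + beta)^lag, which is why a very persistent fit (alpha + beta close to 1) mimics a long memory over moderate horizons.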

- nastradamus

The long-memory property of S&P 500 daily returns has been documented in Ding and Granger (1996). Long-memory GARCH models include PARCH (Ding, Granger and Engle (1993)) and FIGARCH/FIEGARCH (Baillie, Bollerslev and Mikkelsen (1996)). There might be others as well. PARCH seems to be easier to program than FIGARCH; I don't know too much about the comparison between the two models. Hope that helps.

Please let me know if you find open C++ code for GARCH. Thanks.

Quote, originally posted by exotiq: "So what is a better way than my #5 to practically fit a GARCH model/estimate its parameters?"

Hi all, I am a newbie to this group. I am currently using GARCH to estimate the volatility of the NIFTY (India) with intraday data, and I have a few doubts about my work:
1. What is generally a good time horizon to take as a sample when I have data as fine as multiple values per second?
2. Is there any better criterion for evaluating the models apart from RMSE and AIC? I need to compare the models to arrive at the best frequency (I am varying the time gap over 1 min, 3 min, 5 min, 10 min and 15 min).
3. How good or justified is a regression between log returns and log forecasts?
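On question 2: when comparing variance forecasts against a realized-variance proxy, one common alternative to RMSE is the QLIKE loss, which (as discussed by Patton in the volatility-forecast-evaluation literature) is robust to noise in the proxy. A minimal sketch, function name mine:

```python
import numpy as np

def qlike(forecast_var, realized_var):
    """QLIKE loss: mean of log(f) + r/f over the forecast sample.
    Lower is better; minimized in expectation by the true
    conditional variance."""
    f = np.asarray(forecast_var, dtype=float)
    r = np.asarray(realized_var, dtype=float)
    return np.mean(np.log(f) + r / f)
```

Because QLIKE penalizes under-prediction of variance more heavily than over-prediction, it ranks forecasters more reliably than RMSE when the realized-variance proxy itself is noisy, which is exactly the intraday situation you describe.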

Greenleaf: A model that can create large and sudden swings in volatility can create these patterns; they are not always caused by long memory. Lamoureux and Lastrapes have a paper on that issue.

Kyriakos
