July 24th, 2015, 11:01 am
Hello everybody!

I am currently trying to evaluate volatility forecasting models. I want to construct two models: a GARCH(1,1) model and a Black-Scholes at-the-money implied volatility model. I have daily data that looks like this (the excerpt here is out-of-sample): the GARCH column is the output of the EViews GARCH(1,1) forecast, and the "B&S-ATM" column contains the observed (not forecast) Black-Scholes ATM volatilities at the end of each day (constant 1-month maturity). We are looking at EUR/USD.

As suggested in the literature (Yu, Lui & Wang (2010)), I estimate a volatility forecasting model for implied volatility by running a simple regression of the form realizedvolatility = a + b * impliedvolatility on the in-sample data. I then use this fitted model to create my out-of-sample forecasts.

For the 1-period-ahead forecast everything is fine, but now I want to evaluate 1-month / 21-day-ahead volatility forecasts. The GARCH(1,1) forecasts are again fine (I use the n-step-ahead forecasting method suggested by Engle & Bollerslev (1986)), but when I estimate my B&S-ATM model on the in-sample data I find significant autocorrelation in the residuals (Durbin-Watson around 0.12). The RLVOL21 column contains the realized volatility over the next 21 days, shifted upwards so that it sits in the same row as the forecast made on that day.

My solution so far is to use only every 21st day to estimate the B&S-ATM model, which heavily reduces my data set (I have about 3 years in sample and 1.5 years out of sample). Is this correct, and is it the only valid method? I have checked the literature and could not find anyone explaining properly how they tackle this issue.

Thank you all for reading this.
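For concreteness, here is a minimal sketch of what I am doing (Python with pandas/statsmodels rather than EViews; file name, date cut-offs and column names are placeholders matching the description above, and the every-21st-day subsampling is my current workaround, not an established procedure):

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# One row per trading day with columns:
#   'BSATM'   - observed B&S ATM implied volatility (constant 1-month maturity)
#   'RLVOL21' - realized volatility over the next 21 days, shifted so it sits
#               in the same row as the forecast made on that day
df = pd.read_csv("eurusd_vols.csv", parse_dates=["date"], index_col="date")

insample = df.loc[:"2013-12-31"]      # ~3 years in sample (dates are placeholders)
oos = df.loc["2014-01-01":]           # ~1.5 years out of sample

def fit_iv_model(data):
    """OLS of future realized vol on current implied vol:
       RLVOL21 = a + b * BSATM + e"""
    X = sm.add_constant(data["BSATM"])
    return sm.OLS(data["RLVOL21"], X).fit()

# (1) Using every daily observation: the 21-day windows overlap
full = fit_iv_model(insample)
print("daily obs, DW =", durbin_watson(full.resid))       # ~0.12 in my data

# (2) Current workaround: keep only every 21st day (non-overlapping windows)
sub = fit_iv_model(insample.iloc[::21])
print("every 21st day, DW =", durbin_watson(sub.resid))

# Out-of-sample 21-day-ahead forecasts from the in-sample fit
forecast = sub.predict(sm.add_constant(oos["BSATM"]))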
Last edited by jonasre on July 23rd, 2015, 10:00 pm, edited 1 time in total.