Paul and others have previously mentioned on the forums that there are problems with daily recalibration of option pricing model parameters. I've been working on a paper that estimates these models using a long panel of option and return data, and would be interested in any comments or feedback.

In particular, I focus on the Heston stochastic volatility model and a couple of affine jump-diffusions (e.g. Bates 2000). My data consists of 8 years of daily options across multiple strikes and maturities, plus the underlying return.

I estimate the models using a recursive filtration-based approach: I filter the latent states and compute a likelihood function over both the returns and the option prices. This lets me estimate a single set of model parameters over a long panel of more than 65,000 option prices. I then use the filtered state estimates to compute the model-implied option prices each day and look at the RMSE between model and market prices.

The Heston model has an option price RMSE of 11% and Bates has an RMSE of 9%. This is obviously very large, and shows that the models are not able to fit the option surface dynamically over time.

I'd be interested to hear your thoughts on whether this approach has any potential practical applications (other than model comparison).
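For concreteness, the fit metric above can be computed like this (a minimal sketch I've added; the post doesn't say whether the RMSE is in relative price or implied-vol terms, so the relative-error definition here is my assumption):

```python
import numpy as np

def option_rmse(market_prices, model_prices):
    """Relative RMSE between model and market option prices.

    Under this (assumed) definition, an 11% RMSE means the model
    misprices options by roughly 11% on average in relative terms.
    """
    market = np.asarray(market_prices, dtype=float)
    model = np.asarray(model_prices, dtype=float)
    rel_err = (model - market) / market
    return np.sqrt(np.mean(rel_err ** 2))

# toy example: three options, each mispriced by 10%
print(round(option_rmse([10.0, 5.0, 2.0], [11.0, 5.5, 2.2]), 6))  # -> 0.1
```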

Well, "practical" is a loaded word. It could mean any or all of:

1. How can I make some money from all this?
2. How can I advance my career, promote my work?
3. Is it possible to turn what I have done into a trading strategy for myself, or for third parties, say traders or money managers?

What did you have in mind?

Haha. They all sound good.

From a more academic point of view, I'm curious about what the implications might be for dynamic hedging. I think my approach is more dynamically consistent than constant recalibration. Is there a trading strategy that could be used to exploit this?

Agree that it is more dynamically consistent. As for trading, I would investigate a related model (the 3/2 model + jumps) and bring in VIX options (assuming your data is SPX).

Re dynamic hedging: the insurance companies used to be very interested in replication of cliquet-type structures, as they were selling a *lot* of related products. I don't know how much that is still true. Again, I would switch models for that.

Bottom line: a good, well-calibrated SPX model will have a lot of applications, both academic and real-world.

Last edited by Alan on November 3rd, 2012, 11:00 pm, edited 1 time in total.

Thanks for your comments, Alan.

I haven't investigated the 3/2 model, so it would be interesting to see how much better it can fit the data. I suspect it will fit the option surfaces slightly better than the AJD models, although I'd guess it will still be at the 8-9% RMSE level. One of the problems with these models is that they can't cope with the dynamic term structure and slope of the IV surface. Another way of addressing this might be to use additional latent factors.

One thing I find interesting is that regardless of which model is used, when calibrated to a single option surface these models typically have an RMSE of less than 1%, which is much more accurate than when fitted using my method. I would assume that this daily calibration would result in more accurate pricing of exotics at that particular moment in time. However, as Paul has mentioned in the past, this accuracy is somewhat misleading. I can see that a cliquet-type option might be able to exploit the difference between the two estimation methods - thanks for the suggestion.
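The daily-recalibration benchmark being discussed can be sketched as follows. This is an illustration, not the poster's code: I use SciPy's `least_squares`, and purely as a stand-in for a real Heston/Bates pricer I fit a Black-Scholes pricer with a single vol parameter; `surface` is a hypothetical list of (strike, maturity, market price) tuples.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price -- a stand-in for a Heston/Bates pricer."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def calibrate_day(surface, S, r, sigma0=0.2):
    """Refit the model parameter(s) to one day's option surface
    by least squares on relative pricing errors."""
    K = np.array([row[0] for row in surface])
    T = np.array([row[1] for row in surface])
    P = np.array([row[2] for row in surface])

    def resid(theta):
        return (bs_call(S, K, T, r, theta[0]) - P) / P

    fit = least_squares(resid, x0=[sigma0], bounds=([1e-4], [5.0]))
    return fit.x[0]

# toy "day": prices generated at sigma = 0.25, then recovered by calibration
S, r = 100.0, 0.01
surface = [(90.0, 0.5, bs_call(S, 90.0, 0.5, r, 0.25)),
           (100.0, 0.5, bs_call(S, 100.0, 0.5, r, 0.25)),
           (110.0, 1.0, bs_call(S, 110.0, 1.0, r, 0.25))]
print(calibrate_day(surface, S, r))  # recovers ~0.25
```

Repeating this loop every day gives the near-perfect daily fits mentioned above; running one fit over the whole panel gives the consistent (but worse-fitting) estimates.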

Last edited by APablo on November 4th, 2012, 11:00 pm, edited 1 time in total.

APablo,

You do not understand Paul's point, which is simply that essentially all the parameters in, say, the Black-Scholes model are stochastic (interest rates, volatility, dividends etc.) and should be modelled as such, rather than deterministically and recalibrated each day. Taking it to the extreme, one should be modelling the whole implied volatility surface (or some other parameterisation) as a stochastic process. However, the key issue is "how stochastic", and does it affect the option you are pricing.

You are saying the best prediction for tomorrow is some long-term average, whereas the standard approach is to use today's market data. But in both cases you are assuming the parameters stay constant, whereas clearly they change.

I think you are taking these models a bit too seriously. Can you justify the model financially? Why do you think past statistics predict the future? How far in the past, etc.?

There are two kinds of traders: the punters, taking bets, and the market makers (the investment banks, trying to manage a portfolio of bets, and essentially trying to adjust their prices to stay flat). The market makers aim to be vega neutral, so that they are exposed as little as possible to assumptions on the dynamics of implied vol.

So your econometric analysis is more relevant to punters, and the question is whether your method predicts realised volatility better. See e.g. Euan Sinclair's books Volatility Trading and Option Trading. In other words, when you delta hedge a vanilla option to expiry with your approach, how does the distribution of returns compare to delta hedging with (say) Black-Scholes and daily implied vol?
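The suggested test can be sketched as a simulation. This is a minimal illustration under assumed GBM dynamics, not anyone's actual methodology: we delta-hedge a short call daily at a fixed hedge vol and look at the distribution of terminal P&L; in the real exercise the hedge vol would come from each method (daily implied vol vs the long-panel estimate).

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

def bs_delta(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return norm.cdf(d1)

def hedge_pnl(hedge_vol, true_vol=0.2, S0=100.0, K=100.0, T=0.25,
              r=0.0, n_steps=63, n_paths=2000, seed=0):
    """Sell a call at the hedge-vol price, delta-hedge daily to expiry,
    and return the final P&L per simulated path.

    The underlying follows GBM at `true_vol`; the hedger prices and
    computes deltas at `hedge_vol`."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    delta = np.full(n_paths, bs_delta(S0, K, T, r, hedge_vol))
    # cash account: premium received minus cost of the initial hedge
    cash = np.full(n_paths, bs_call(S0, K, T, r, hedge_vol)) - delta * S
    for i in range(1, n_steps + 1):
        z = rng.standard_normal(n_paths)
        S = S * np.exp((r - 0.5 * true_vol**2) * dt + true_vol * np.sqrt(dt) * z)
        cash *= np.exp(r * dt)
        tau = max(T - i * dt, 1e-9)
        new_delta = bs_delta(S, K, tau, r, hedge_vol)
        cash -= (new_delta - delta) * S  # rebalance the hedge
        delta = new_delta
    return cash + delta * S - np.maximum(S - K, 0.0)

for hv in (0.2, 0.3):  # hedge at the true vol vs a mis-set vol
    pnl = hedge_pnl(hv)
    print(f"hedge vol {hv}: mean {pnl.mean():+.3f}, std {pnl.std():.3f}")
```

Comparing the mean and dispersion of these P&L distributions across the two vol inputs is exactly the kind of head-to-head the post is asking for.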

- katastrofa

Some empirical results show that frequent recalibration improves hedging performance: "frequent recalibration to option prices is not even consistent with most stochastic volatility models; yet we find that daily recalibration of the Heston model to option prices clearly improves its hedging performance."

Source: http://www.carolalexander.org/publish/d ... M_2012.pdf

Quote:
"You do not understand Paul's point - which is simply that essentially all the parameters in, say, the Black-Scholes model are stochastic (interest rates, volatility, dividends etc.) and should be modelled as such rather than deterministically and recalibrated each day. Taking it to the extreme one should be modelling the whole implied volatility surface [or some other parameterisation] as a stochastic process. However the key issue is "how stochastic" and does it affect the option you are pricing."

Agreed. This is why I have chosen to look at several models that allow the BS parameters to vary stochastically. My understanding is that these models have been criticised due to the difficulty of estimating their parameters. I believe my estimation methodology overcomes this problem, allowing me to focus on model performance.

Quote:
"You are saying the best prediction for tomorrow is some long term average, whereas the standard approach is to use today's market data - but in both cases you are assuming the parameters stay constant, whereas clearly they change."

I think that if you specify a stochastic model of the underlying and use it to price options, then by definition the parameters shouldn't change over time. The fact that the model parameters have to be re-estimated regularly to get a good fit to the option surface is just evidence that the models are misspecified.

What I find interesting is this: we know that the Heston or Bates model is misspecified. If we estimate the model consistently using a long set of option data, we can fit the option surface with a 10% error. Alternatively, we can re-estimate the model each day, which gives a much better fit. However, when we do this we are just hiding the misspecification. I'm interested in whether or not the hidden misspecification of the daily re-estimated model could be exploited using the more consistent estimation method.

katastrofa: Thanks for the article. I wonder if the results would be different using my estimation method? It doesn't surprise me that daily recalibration leads to more accurate results over a short horizon, since it stays closer to the market prices.

Last edited by APablo on November 9th, 2012, 11:00 pm, edited 1 time in total.

Quote - originally posted by APablo:
"This is why I have chosen to look at several models that allow the BS parameters to vary stochastically. My understanding is that these models have been criticised due to the difficulty in estimating parameters. I believe my estimation methodology overcomes this problem, allowing me to focus on the model performance."

Quote:
"I think that if you specify a stochastic model of the underlying and use it to price options, then by definition the parameters shouldn't change over time. The fact that model parameters have to be reestimated regularly to get a good fit to the option surface is just evidence that the models are misspecified. What I find interesting is this: We know that the Heston or Bates model is misspecified. If we estimate the model consistently using a long set of option data we can fit the option surface with a 10% error. Alternatively we can reestimate the model each day, which gives a much better fit. However, when we do this we are just hiding the misspecification. I'm interested in whether or not the hidden misspecification of the daily reestimated model could be exploited using the more consistent estimation method."

Yes. The problem lies with the fact that volatility - even if stochastic - is not as useful a quantity for describing price behaviour as these models assume. There are better ways of describing price behaviour that give more stable parameters. Once you have a set of reasonably stable parameters, the justification for calibrating the less stable ones is that you need to accommodate market irrationality and inefficiency if you want to stay solvent while you wait for your profits.

APablo,

Paul's point applies equally to any (parametric) stoch vol model: you are locked into a cycle of recalibrating the vol-of-vol parameters etc., precisely because the model is misspecified. Heston's model and the others are criticised because they are misspecified - the parameters are relatively straightforward to calibrate - the point is that they imply unrealistic dynamics (and cannot fit all market data). See Lorenzo Bergomi's papers.

It is your approach that is hiding the misspecification, not the standard approach of daily recalibration. Your approach amounts to saying that the Heston model is correct and the errors are due to observation noise (not misspecification), which we can average out over lots of samples.

Daily calibration, by contrast, is rather like fitting a linear regression locally. We may know that our data is not generated by a linear model (so it is misspecified by a linear model), but we can still make accurate interpolations with a linear model by localising to various regimes. Trying to fit a single global linear regression is bound to fail.

In any case, the best test for your approach is hedging performance on a vanilla option...
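The regression analogy can be made concrete (a small illustration I've added, not from the thread: quadratic data, one global linear fit versus a linear fit restricted to a local window):

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(-3.0, 3.0, 200)
y = x**2 + rng.normal(0.0, 0.1, size=x.size)  # truly nonlinear data

# global linear fit over the whole range (the "consistent" estimate)
b_glob = np.polyfit(x, y, 1)

# local linear fit in a window around x = 2 ("daily recalibration")
mask = np.abs(x - 2.0) < 0.5
b_loc = np.polyfit(x[mask], y[mask], 1)

def rmse(b, xs, ys):
    return np.sqrt(np.mean((np.polyval(b, xs) - ys) ** 2))

# Near x = 2 the local fit wins; over the full range it loses badly.
print("near x=2:   global", rmse(b_glob, x[mask], y[mask]),
      " local", rmse(b_loc, x[mask], y[mask]))
print("full range: global", rmse(b_glob, x, y),
      " local", rmse(b_loc, x, y))
```

The local fit looks excellent inside its window and terrible elsewhere, which is exactly the pattern being claimed for daily recalibration versus long-panel estimation.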

I don't think it's hiding the misspecification - it's quantifying it, in terms of a likelihood score and RMSE. Using your analogy of a linear regression on nonlinear data: around the local points the local regression will fit much better than the global regression, but at most other points it will fit much worse. The question is how best to exploit this. I mentioned earlier that some sort of hedging exercise might be useful. Could you be more specific?

Quote:
"I don't think it's hiding the misspecification. It's quantifying it in terms of a likelihood score and RMSE."

So you conclude that the model you have estimated is not supported by the data. What can you use the estimated model for, then?

Quote:
"Using your analogy of a linear regression on non-linear data, the local regression will fit the data much better than a global regression around the local points, but will fit much worse than the global regression at most other points. The question is, how best to exploit this."

But that is precisely the point - if you only use it for local predictions, its performance at other points is irrelevant. If I calibrate using data from this point last year and hedge today, then performance will be rubbish and your method should be better - so what?

Quote:
"I mentioned earlier that some sort of hedging exercise might be useful. Could you be more specific?"

Replicate the paper katastrofa linked to.

Quote:
"But that is precisely the point - if you only use it for local predictions, its performance at other points is irrelevant. If I calibrate using data from this point last year and hedge today, then performance will be rubbish and your method should be better - so what?"

Yes, this is the key question.

My intuition has been that for daily hedging or pricing exotic derivatives, daily recalibration is probably going to produce superior results. This, I think, is largely due to the persistence of the volatility process (and hence of option prices). What I'm trying to get is a better understanding of the limitations of daily recalibration. For example, will the daily recalibration method struggle with large movements in the underlying? I think that it might.

Last edited by APablo on November 12th, 2012, 11:00 pm, edited 1 time in total.
