This thread is being started to continue a conversation that began in a book thread and to invite others to contribute. The topic is implied volatility implementations that are robust and accurate (after which fast would be helpful).

I'm a retail trader of index options who has been adding Greeks to a spreadsheet. I found several good VBA implementations of implied volatility that use bisection or Newton-Raphson approaches in books by Paul Wilmott and Espen Haug. One of those approaches, or some small variation of them, looks like it should work well for tracking and adjusting theta-positive income trades in a spreadsheet. As time permits I would like to estimate the expectation of a set of trading rules/guidelines. More involved simulations seem to require the calculation to be as robust and accurate as practical.

So, putting my big toe in the water, I'm all ears. Several people were kind enough to have already suggested a few alternatives. I will try to summarize the current state of those conversations in future posts.

> Originally posted by Cuchulainn: I used the fixed-point iteration for this very problem and it worked very well. Maybe for the 3rd edition?
>
> > In order to compute the implied volatility values in Table 1 (article by Wilmott, Lewis, Duffy) we use the Black-Scholes formula for calls and puts, using the option values that we found using ADE. There are many methods to solve this nonlinear equation, and in this case we used the fixed-point method in combination with Aitken's delta-squared process to accelerate convergence. It is worth noting that the fixed-point method converges for any initial guess because the corresponding iteration function is a contraction (Haaser (1991)). This is in contrast to the Newton-Raphson and Bisection methods (for example), where finding good starting values for these iterative methods can sometimes be an issue.
>
> BTW the NR method can give problems when fitting yields for some short-rate models. It is not robust, at least not always.

> Originally posted by Cuchulainn: tthrift, I have some C++ code to calculate implied volatility for BS using fixed point that I can post. Let me know. In VBA it would look similar. Mathematically, you have to choose the parameter m such that the iteration function g(x) is a contraction, i.e. abs(g'(x)) < 1. So f(x) = 0 is the same as x = g(x), where g(x) = x - f(x)/m. Choose m s.t. g is a contraction. AFAIR you bound vega to compute a suitable m. // Espen also uses the Bisection method for iv.

> Originally posted by Cuchulainn: tthrift, what I have done is a self-contained C++11 program to compute iv for calls and puts. The core algo is easy to port to VBA IMO:
>
> 1. Linear fixed-point iteration
> 2. 1 + Aitken acceleration process
>
> Both 1 and 2 converge for the start guess v_0, and it is necessary to provide an estimate for that 'm' (that is just vega). m must be chosen to make it a contraction. Did stress tests as well:
>
> 1. converges in 64 iterations
> 2. converges in [3,5] iterations
>
> Code on its way to you.

> Originally posted by tthrift: I got it running and will test it some more. Neat stuff! Thank you for taking the time to share it.
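The fixed-point-plus-Aitken scheme described above can be sketched as follows. This is my own reconstruction from the description, not Cuchulainn's posted code: the function names, the vega upper bound S*sqrt(T)/sqrt(2*pi) used for m, and the tolerances are all my choices.

```cpp
#include <cmath>

// Standard normal CDF.
static double norm_cdf(double x) {
    return 0.5 * std::erfc(-x / std::sqrt(2.0));
}

// Black-Scholes call price.
static double bs_call(double S, double K, double r, double T, double sigma) {
    double sT = sigma * std::sqrt(T);
    double d1 = (std::log(S / K) + (r + 0.5 * sigma * sigma) * T) / sT;
    return S * norm_cdf(d1) - K * std::exp(-r * T) * norm_cdf(d1 - sT);
}

// Implied vol via the fixed-point map g(v) = v - (f(v) - C)/m, where f is
// the Black-Scholes price. g is a contraction when m bounds vega from above:
// vega = S*sqrt(T)*phi(d1) <= S*sqrt(T)/sqrt(2*pi), so that bound is used.
// Two plain steps are followed by one Aitken delta-squared extrapolation.
// Returns a negative value on non-convergence (e.g. the price admits no vol).
double implied_vol_fixed_point(double S, double K, double r, double T,
                               double C, double v0 = 0.5,
                               double tol = 1e-10, int max_iter = 200) {
    const double m = S * std::sqrt(T) * 0.3989422804014327; // vega upper bound
    double v = v0;
    for (int i = 0; i < max_iter; ++i) {
        double v1 = v - (bs_call(S, K, r, T, v) - C) / m;   // plain step
        double v2 = v1 - (bs_call(S, K, r, T, v1) - C) / m; // plain step
        double denom = v2 - 2.0 * v1 + v;
        double v_acc = std::fabs(denom) > 1e-16             // Aitken step
                           ? v - (v1 - v) * (v1 - v) / denom
                           : v2;
        if (v_acc <= 0.0) v_acc = 0.5 * v1;                 // keep iterate positive
        if (std::fabs(v_acc - v) < tol) return v_acc;
        v = v_acc;
    }
    return -1.0; // did not converge within max_iter
}
```

Porting to VBA is mechanical: `norm_cdf` maps to `Application.WorksheetFunction.Norm_S_Dist`, and the loop body carries over line by line.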

> Originally posted by tthrift:
> > Originally posted by piterbarg: Peter Jaeckel has done a lot of work on efficient implied vol calculations. On his site you can find his reference implementation: http://jaeckel.org/ http://www.pjaeckel.webspace.virginmedi ...
>
> piterbarg: Thanks for pointing out Peter Jaeckel's detailed implementation. I have it running as well and lightly tested. To make sure I knew how to call the code, I had to do some algebra to change the Black pricing notation of Wilmott's book into the form Jaeckel is using. I thought I would work on the Greeks a bit and tried to denormalize his normalized vega. So far, I have not been clever enough to reproduce his normalized vega. It looks like he is approximating it in different regions.
>
> ```cpp
> // my denormalized vega in his form
> // where F = S*exp([r-d]*T), K is Wilmott's E, T is Wilmott's (T-t)
> EXPORT_EXTERN_C double vega(double F, double K, double sigma, double T, double r)
> {
>     const double x = log(F / K);
>     const double s = sigma * sqrt(T);
>     // return (sqrt(F)*sqrt(K))*normalised_vega(x, s);
>     // sqrt(F)*sqrt(K) is clearly the incorrect denormalization
>     // this is the closest I got to casting vega into PJ's form:
>     return (F * sqrt(T) * ONE_OVER_SQRT_TWO_PI * exp(-r * T - square(x / s + s / 2))) / 100;
> }
> ```
>
> Thanks for pointing out the excellent reference implementation he has shared and documented so well.
>
> Oops! I appreciate the grace and mercy that the senior members have been showing me in this thread. This thread is about Dr Haug's options formula book. In my enthusiasm I have overstepped my bounds. I realize that if I want to discuss IV implementations, I should start a separate thread in the beginner's forum.
>
> -Terry-

- Cuchulainn
**Posts:** 62409 · **Location:** Amsterdam

Nice title (3+1 Musketeers). I suppose your goal is robust, accurate and fast iv solvers. Already, a number of candidate methods have been proposed. A data set for stress testing would be nice to have, if you could provide it.

// One devil's advocate remark: the nonlinear equation may _not_ have a solution (e.g. given a ridiculous market price), in which case no numerical method will work. We want to distinguish this case from the one where the parameters are OK but the numerical method breaks down. And ... if there is a solution, is it always unique? e.g.

r = 0.08, T = 0.25, K = 65, S = 60
Market price = 333
Vol = ??
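One cheap way to separate "no solution exists" from "the method broke down" is to test the price against the no-arbitrage bounds before iterating: a European call price must satisfy max(S - K*exp(-r*T), 0) < C < S (the two limits of the Black-Scholes price as vol goes to 0 and to infinity), and the market price of 333 against S = 60 fails that immediately. A minimal sketch of such a pre-filter (illustrative, not anyone's posted code):

```cpp
#include <algorithm>
#include <cmath>

// No-arbitrage bounds for a European call under Black-Scholes:
//   max(S - K*exp(-r*T), 0) < C < S.
// The lower bound is the sigma -> 0 limit, the upper bound the
// sigma -> infinity limit, so a price outside (intrinsic, S) has
// no implied volatility and a root-finder should reject it up front.
bool call_price_has_implied_vol(double S, double K, double r, double T, double C) {
    double intrinsic = std::max(S - K * std::exp(-r * T), 0.0);
    return C > intrinsic && C < S;
}
```

With Cuchulainn's numbers (S = 60, K = 65, r = 0.08, T = 0.25, C = 333) the check fails, so no iteration is needed to conclude "no vol exists".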

Last edited by Cuchulainn on December 1st, 2014, 11:00 pm, edited 1 time in total.

I assumed that this thread was a "dialogue interieur"

knowledge comes, wisdom lingers

> Originally posted by daveangel: I assumed that this thread was a "dialogue interieur"

Hi daveangel: If you are interested, please contribute, lurk, whatever. All perspectives and solvers are welcome.

> Originally posted by tthrift:
> > Originally posted by daveangel: I assumed that this thread was a "dialogue interieur"
>
> Hi daveangel: If you are interested, please contribute, lurk, whatever.

thanks


- Cuchulainn

> Originally posted by daveangel:
> > Originally posted by tthrift: Hi daveangel: If you are interested, please contribute, lurk, whatever.
>
> thanks

All are welcome, Dave.

- Traden4Alpha
**Posts:** 23951

> Originally posted by Cuchulainn: [...] All are welcome, Dave.

This sounds like a Church of Implied Volatility: Greeks Orthodox, perhaps?

- Cuchulainn

tt, I ran the solver with C = 9 and it seems not to converge.

r = 0.20, T = 0.5, K = 100, S = 100
Market price = 9

1st impression is there is _no_ solution!(?)

// Take F = K, T = 1, r = 0; then Black 76 gives

C = F * (N(d) - N(-d))   (1)

where d = sig/2. NOW, the nonlinear eq (1) has a solution for which values of sig? All?

// tt: Dave and T4A are very savvy and Wilmott treasures.

===
edit:

C/F = N(d) - N(-d)   (2)

So the right side <= 1, and intuitively the left side should be as well.

Try F = 100, C = 99.99: OK. F = 100, C = 102: runs forever. BTW C = 100.001 takes 1018479 iterations.
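Equation (1) can actually be inverted in closed form in this ATM case: C/F = N(d) - N(-d) = 2N(d) - 1, so d = N^{-1}((C/F + 1)/2) exists exactly when 0 <= C/F < 1, which is why C = 102 against F = 100 can never converge. A sketch of that inversion (my own illustration; the inverse CDF is done by bisection here for simplicity, since <cmath> has no inverse normal):

```cpp
#include <cmath>

// Standard normal CDF.
static double norm_cdf_atm(double x) {
    return 0.5 * std::erfc(-x / std::sqrt(2.0));
}

// ATM Black-76 with r = 0 and F == K: C = F * (2*N(d) - 1), d = sig*sqrt(T)/2.
// Since 2*N(d) - 1 lies in [0, 1), a solution exists iff 0 <= C/F < 1.
// Returns the implied vol, or -1 if C/F is out of range (no solution).
double atm_black76_implied_vol(double F, double C, double T) {
    double ratio = C / F;
    if (ratio < 0.0 || ratio >= 1.0) return -1.0; // e.g. F = 100, C = 102
    double target = 0.5 * (ratio + 1.0);          // want N(d) = (C/F + 1)/2
    double lo = 0.0, hi = 40.0;                   // N(40) == 1 in doubles
    for (int i = 0; i < 200; ++i) {               // bisect N(d) = target
        double mid = 0.5 * (lo + hi);
        if (norm_cdf_atm(mid) < target) lo = mid; else hi = mid;
    }
    double d = 0.5 * (lo + hi);
    return 2.0 * d / std::sqrt(T);                // sig = 2*d/sqrt(T)
}
```

The range check up front is exactly the "right side <= 1" observation: it rejects an unsolvable price in O(1) instead of iterating forever.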


- Traden4Alpha

> Originally posted by tthrift: The topic is implied volatility implementations that are robust and accurate (after which fast would be helpful).

On a more serious note, are you sure these aren't antithetical requirements -- that the most robust methods will be less accurate and the most accurate methods will be less robust?

Robustness would seem to require ignoring spurious values in calculating implied volatility. But accuracy would seem to require using ALL of the data to get an unbiased and correct implied volatility. Especially in the context of volatility, the inclusion or exclusion of outliers has an extremely large effect on the calculated value.

One thought-tool that I like to use is the Jacobian of the system. It provides a first approximation of the sensitivities of the system to inaccuracies in any measurement. And if those sensitivities vary too much, it suggests the system might be hypersensitive to measurement error. It also provides a first approximation for how statistical dispersion in the measured values translates into statistical dispersion in the outputs. That can be really useful if one knows that some values have higher or lower measurement errors.
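For a single quote, the Jacobian Traden4Alpha mentions collapses to a scalar: sigma = BS^{-1}(C), so a price error dC maps to a vol error of roughly dC / vega to first order. A sketch of that error-propagation check (my own illustration of the idea, not anyone's posted code):

```cpp
#include <cmath>

// Black-Scholes vega: dC/dsigma = S * sqrt(T) * phi(d1).
double bs_vega(double S, double K, double r, double T, double sigma) {
    double sT = sigma * std::sqrt(T);
    double d1 = (std::log(S / K) + (r + 0.5 * sigma * sigma) * T) / sT;
    double phi = std::exp(-0.5 * d1 * d1) / std::sqrt(2.0 * 3.141592653589793);
    return S * std::sqrt(T) * phi;
}

// First-order propagation of a quote error into implied vol:
// d_sigma ~= dC / vega. Where vega is tiny (deep ITM/OTM or very
// short-dated options) the implied vol is hypersensitive to noise
// in the price -- the 1x1 "Jacobian" of the IV calculation.
double iv_error_from_price_error(double S, double K, double r, double T,
                                 double sigma, double price_error) {
    return price_error / bs_vega(S, K, r, T, sigma);
}
```

For an ATM one-year option a one-cent price error barely moves the vol, while for a far-OTM option expiring in days the same cent can swamp the vol entirely, which quantifies the robustness-vs-accuracy tension above.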

- Cuchulainn

> Originally posted by Traden4Alpha: On a more serious note, are you sure these aren't antithetical requirements -- that the most robust methods will be less accurate and the most accurate methods will be less robust? [...]

The scope AFAIK is the 1-factor Black76. No more, no less?

> Originally posted by Cuchulainn: Nice title (3+1 Musketeers). I suppose your goal is robust, accurate and fast iv solvers. [...] A data set for stress testing would be nice to have if you could provide it. // One devil's advocate remark is: the nonlinear equation may _not_ have a solution (e.g. given a ridiculous market price), in which case no numerical method will work. [...]

I like that goal (i.e., robust, accurate and fast iv solvers). Data to stress solvers seems necessary. You have a good point about solvers being fragile with respect to ridiculous inputs. If left to my own devices, the monkey in me might want to generate combinations of inputs that cover the ranges of the individual inputs' data types. This would undoubtedly lead to some ridiculous inputs. An industrial-grade solver might protect itself from combinations of inputs that don't make sense. However, some people's solvers may be called such that little or no protective pre-filtering is needed. From a robustness-to-inputs point of view, perhaps solvers should be stressed at several levels to characterize how they respond.

- Cuchulainn

> Originally posted by tthrift: I like that goal (i.e., robust, accurate and fast iv solvers). Data to stress solvers seems necessary. You have a good point about solvers being fragile with respect to ridiculous inputs. [...] From a robustness-to-inputs point of view, perhaps solvers should be stressed at several levels to characterize how they respond.

"Ridiculous input" leads to no solution. This is really a pure maths problem, as I posted:

C/F = N(d) - N(-d)   (2)

will not always be solvable, depending on C/F (e.g. F = 1, C = 2000: find d -- _never_, IMO, because N(d) - N(-d) <= N(d) <= 1).

// Heuristic: if Aitken does not converge after +-8 iterations, it will never converge and hence there is no solution. (?)


- Traden4Alpha

> Originally posted by Cuchulainn: [...] The scope AFAIK is the 1-factor Black76. No more, no less?

I could be wrong, but it seemed like tthrift was open to other implementations or tweaks to 1-factor Black76. The Jacobian does offer some insights into the robustness of different/tweaked implementations. As for accuracy, there's also the problem of the effects of discrete pricing on the values of IV, which may transcend the choice of implementation.
