
jaesmine
Posts: 4
Joined: April 7th, 2004, 6:46 am

Re: iv for all and all for iv

October 23rd, 2023, 11:41 pm

I hope that my recent paper is helpful for the discussion: Choi et al. (2023), https://arxiv.org/abs/2302.08758.

The paper derives many new IV bounds (both upper and lower) for a given price (this is the main theme of the paper).
Then, it also formulates a new Newton-Raphson method on the log price to handle very small option premiums properly. The iteration formula is quite simple (see Eq. (30) in the paper), and it always converges as long as the initial guess is a lower bound. (The proof is a bit tedious; the key point, visible on the left side of Figure 1, is that the log price is a concave function of sigma.)
So we use a lower bound found in the earlier part of the paper (specifically L3 in Eq (23)). 
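
For readers who want to try this, below is a minimal sketch in Python of Newton-Raphson on the log price for an undiscounted Black call (zero rates). It is a generic illustration of the log-price iteration, not the paper's Eq. (30) verbatim, and the seed of 3% is a hand-picked placeholder rather than the paper's L3 bound. Note that the naive textbook price evaluation used here underflows if the trial sigma is too small for such deep-OTM inputs, which is exactly the regime the paper's bounds (and Jäckel's careful formulations) are designed to handle.

[code]
import math

SQRT_TWO = math.sqrt(2.0)

def Phi(z):
    # Standard normal CDF via erfc, which keeps relative accuracy in the lower tail.
    return 0.5 * math.erfc(-z / SQRT_TWO)

def black_call(S0, K, T, sigma):
    # Undiscounted Black-Scholes call (zero rates), as in the thread's example.
    s = sigma * math.sqrt(T)
    d1 = math.log(S0 / K) / s + 0.5 * s
    return S0 * Phi(d1) - K * Phi(d1 - s)

def vega(S0, K, T, sigma):
    s = sigma * math.sqrt(T)
    d1 = math.log(S0 / K) / s + 0.5 * s
    return S0 * math.exp(-0.5 * d1 * d1) / math.sqrt(2.0 * math.pi) * math.sqrt(T)

def implied_vol_log_newton(price, S0, K, T, sigma0, tol=1e-14, max_iter=20):
    # Newton-Raphson on f(sigma) = log C(sigma) - log(price).
    # Since d(log C)/dsigma = vega / C, the update is
    #     sigma <- sigma - (log C - log price) * C / vega.
    # With log C concave in sigma, starting from a lower bound the iterates
    # increase monotonically towards the root.
    sigma, log_p = sigma0, math.log(price)
    for _ in range(max_iter):
        c = black_call(S0, K, T, sigma)
        if c <= 0.0:
            raise ValueError("price underflowed; the seed is too far below the root")
        step = (math.log(c) - log_p) * c / vega(S0, K, T, sigma)
        sigma -= step
        if abs(step) <= tol * sigma:
            break
    return sigma

# The thread's example: S0 = 1, K = 1.5, T = 1, sigma = 4%.
price = black_call(1.0, 1.5, 1.0, 0.04)          # about 9.01e-27
print(implied_vol_log_newton(price, 1.0, 1.5, 1.0, sigma0=0.03))
[/code]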

I actually tested the famous example in this thread (S0=1, K=1.5, sigma=4%) and reported it in the paper (see Section 3.3, Numerical Example). The new NR method reaches the IV to within 2e-11 after just three iterations.
This looks like NR with some upfront work to find an initial seed (which is always the main challenge/bottleneck).  
Mention was made of iterating a number of times to reach the desired accuracy, which may affect performance.
The upfront work to find a seed is very light in my algorithm: it takes just one evaluation of the inverse normal CDF, so it is no heavier than one NR iteration. (Among the many lower/upper bounds, some take more computation; I picked the one (L3) with the lightest calculation, although it is not the tightest.)
 
pj
Posts: 11
Joined: September 26th, 2001, 3:31 pm

Re: iv for all and all for iv

February 18th, 2024, 3:50 pm

I have put the latest version of "Let's Be Rational" up on www.jaeckel.org. It has nominal revision number 1520 (a nominal unique identifier; it does not mean that I have worked on this 1520 times over the years).

On my i5-12500H laptop, the average calculation time for one implied volatility is 180 nanoseconds. That's about 5.5 million implied volatilities per second. No assembler/vector instruction or GPU shenanigans, no multithreading or any other sort of parallelisation.

The main intention of this latest version was accuracy improvements in the few areas where it was not getting quite close enough to the theoretically attainable accuracy for my liking. Speed improvements were added on.

It comes with a Python wrapper, a gnuplot wrapper, a GNU Octave (the free MATLAB alternative) wrapper, a Visual Studio 2022 solution file, a Makefile for Linux builds, etc.

As always, positive feedback is welcome. Happy to answer sensible questions about the methodology, though, caveat emptor, some of it is a bit intricate - mathematically, not codewise.

PJ
 
pj
Posts: 11
Joined: September 26th, 2001, 3:31 pm

Re: iv for all and all for iv

February 18th, 2024, 3:52 pm

Forgot to mention: it also has a slightly modified inverse cumulative normal function that has the same (i.e., practically perfect) accuracy as AS241 but is 20%-30% faster, which may obviously be of interest in other, wider contexts.
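
As a rough illustration of what "AS241-level accuracy" means in practice, below is a small round-trip check for a forward/inverse normal CDF pair. It uses SciPy's ndtr/ndtri purely as stand-ins, not the implementation shipped at www.jaeckel.org; the residual p - Phi(z) is translated back into an equivalent error in z (one Newton correction), which bounds the combined error of the pair.

[code]
import numpy as np
from scipy.special import ndtr, ndtri  # forward and inverse standard normal CDF

# Log-spaced probabilities covering the lower tail down to near the double range.
p = np.logspace(-300, -1, 2000)
z = ndtri(p)                                   # inverse CDF under test
phi = np.exp(-0.5 * z * z) / np.sqrt(2.0 * np.pi)
# Residual p - Phi(z), converted to an error in z via one Newton correction.
z_err = np.abs(ndtr(z) - p) / phi
print(np.max(z_err / np.abs(z)))               # worst relative error in z over the grid
[/code]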
 
pj
Posts: 11
Joined: September 26th, 2001, 3:31 pm

Re: iv for all and all for iv

February 18th, 2024, 4:02 pm

Without intending to incite a heated discussion: iterating for implied volatility by formulating the objective function on the logarithm of the price was already in "By Implication", published in 2006, and obviously is still part of "Let's Be Rational" to this day. I would never have called the idea of going to logarithmic coordinates a "new Newton-Raphson" method, since non-linear transformations are bread and butter in numerical analysis (and I certainly did not invent the logarithm). The subject of concave versus convex was also already part of "By Implication" (see Section 3).
 
pj
Posts: 11
Joined: September 26th, 2001, 3:31 pm

Re: iv for all and all for iv

February 18th, 2024, 4:13 pm

Regarding the example S=1,K=1.5,T=1,sigma=4% => call price = 9.01002030924271E-27, "Let's Be Rational" returns 4% exactly in IEEE 754 double precision (64 bit, 53 bit mantissa) in its standard implementation (initial guess + exactly two iterations).

It is of course possible to implement faster methods than "Let's Be Rational" when compromising on parameter coverage, input (price) value coverage, or output accuracy. The intention of "Let's Be Rational" was to be universally applicable and to return (nearly) the maximum attainable accuracy [as can be derived by standard error propagation analysis; see the comments in the source code of Let's Be Rational for a derivation], given by DBL_EPSILON·(1+|b(x,s)/(s·vega(x,s))|), where b(x,s) is the normalised Black price, vega is of course the normalised vega, and s = sigma·sqrt(T).

To demonstrate what is intended in "Let's Be Rational", take the example S=1,K=4.45,T=1,sigma=4% => call price = 7.9643474321049E-308, "Let's Be Rational" returns 4% exactly in IEEE 754 double precision (64 bit, 53 bit mantissa) in its standard implementation (initial guess + exactly two iterations).
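
To put numbers on that accuracy bound, here is a small sketch, assuming the usual normalisation b(x,s) = B/sqrt(F·K) with x = ln(F/K), so that b(x,s) = exp(x/2)·Phi(x/s + s/2) - exp(-x/2)·Phi(x/s - s/2) and normalised vega db/ds = exp(x/2)·phi(x/s + s/2); it evaluates DBL_EPSILON·(1+|b/(s·vega)|) for the two examples above. (The naive price evaluation here loses a few digits to cancellation for such deep-OTM inputs, which "Let's Be Rational" itself avoids; the bound is insensitive to that.)

[code]
import math

DBL_EPSILON = 2.0 ** -52        # IEEE 754 double precision machine epsilon

def Phi(z):
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

def phi(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def b(x, s):
    # Normalised Black call: b(x, s) = e^{x/2} Phi(x/s + s/2) - e^{-x/2} Phi(x/s - s/2).
    return math.exp(0.5 * x) * Phi(x / s + 0.5 * s) - math.exp(-0.5 * x) * Phi(x / s - 0.5 * s)

def vega(x, s):
    # Normalised vega: db/ds = e^{x/2} phi(x/s + s/2).
    return math.exp(0.5 * x) * phi(x / s + 0.5 * s)

for S, K in [(1.0, 1.5), (1.0, 4.45)]:
    T, sigma = 1.0, 0.04
    x, s = math.log(S / K), sigma * math.sqrt(T)
    price = math.sqrt(S * K) * b(x, s)          # undiscounted call price
    bound = DBL_EPSILON * (1.0 + abs(b(x, s) / (s * vega(x, s))))
    print(f"K={K}: price ~ {price:.6e}, attainable relative accuracy ~ {bound:.3e}")
[/code]

For both of these inputs the ratio |b/(s·vega)| comes out well below 1, so the bound is essentially DBL_EPSILON itself, consistent with the exact recovery of 4% reported above.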

No one is claiming that such prices are financially relevant. However, we demand that exp() and log() always give correct answers to within the computer's hardware specification, and I intended to develop a truly mathematical implied volatility (inverse Black) function.

PJ