hi, I hope this information is helpful, and even better useful, to people. For some recent work on a risk system, we needed a very fast way to approximate the inverse of the normal distribution CDF with "reasonable" accuracy. I found the following approach, which I couldn't find anywhere in the literature: http://arxiv.org/abs/1002.0567. Definitely appreciate any comments. PV

Very nice paper. Well explained too. I suppose if you want to increase the accuracy you only need to split the domain into more subdomains and do the rational approximation on each of them. This wouldn't slow down the algorithm at all; it would only make it more difficult to implement. Taken to the extreme, this idea produces the fastest algorithm for the inverse of the normal CDF known to man: make a huge look-up table and interpolate between points. This idea has been implemented at some banks. Best, V.
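A minimal sketch of that look-up-table idea (my own illustration, not any bank's implementation), using Python's statistics.NormalDist to build the table. The grid is uniform in p, so the lookup is one multiply to find the cell plus a linear interpolation, with no search and no branch beyond clamping:

```python
from statistics import NormalDist

# Precompute the inverse CDF on a uniform grid of probabilities (done once).
P_LO, P_HI, N = 0.01, 0.99, 10_001
STEP = (P_HI - P_LO) / (N - 1)
GRID = [NormalDist().inv_cdf(P_LO + i * STEP) for i in range(N)]

def inv_cdf_table(p: float) -> float:
    """Linear interpolation in the precomputed table (valid on [P_LO, P_HI])."""
    i = min(int((p - P_LO) / STEP), N - 2)   # index of the left grid point
    frac = (p - (P_LO + i * STEP)) / STEP    # position within the cell
    return GRID[i] + frac * (GRID[i + 1] - GRID[i])
```

The tails still need separate treatment, since the inverse CDF diverges as p goes to 0 or 1; that is where a rational approximation, or a switch to a dedicated tail formula, comes in.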

I am doing a course in Computational Finance with William Shaw of King's College London. He's also into fast methods of approximating the normal (and Student's t) distribution and may well be interested in your paper. If you don't already know him, I'm sure you'd get on well. His quantile approximation page is at http://www.mth.kcl.ac.uk/~shaww/web_pag ... ntiles.htm

Last edited by willsmith on February 4th, 2010, 11:00 pm, edited 1 time in total.

I agree - my guess is that the Shaw & Brickman paper on branchless quantiles will be much better

Two comments (besides the others you received, and the question of why you are 'happy' with abs(error) ~ 1e-4):
- It might be useful to append your Maple code (reporting the CPU time to find solutions) and to state that this error tolerance holds when using double precision (since you initially compute with 60 digits, that is not clear).
- You may save CPU time in Maple if you use not minimax but a (rational) Chebyshev-Pade approximation (as you are experimenting anyway to find out the ranges).
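On the second point, a rough illustration (in Python/NumPy rather than Maple; the degree and subinterval are arbitrary choices of mine) of how a plain least-squares fit in a Chebyshev basis on a subinterval already gets close to minimax accuracy, without running a Remez iteration:

```python
import numpy as np
from statistics import NormalDist

inv = NormalDist().inv_cdf

# Fit a degree-8 polynomial (Chebyshev basis) to the inverse CDF on the
# central subinterval p in [0.1, 0.5], well away from the tail singularity.
p = np.linspace(0.1, 0.5, 2000)
y = np.array([inv(t) for t in p])
coeffs = np.polynomial.chebyshev.chebfit(p, y, deg=8)

# Evaluate and measure the worst-case error on the fitting grid.
max_err = np.max(np.abs(np.polynomial.chebyshev.chebval(p, coeffs) - y))
```

Closer to p = 0 the fit degrades quickly (the quantile function blows up there), which is exactly why domain splitting, or a change of variable such as t = sqrt(ln(1/p^2)), enters the picture.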

hi again guys, thanks for the good comments. It was encouraging and informative to receive them. Getting in touch with William Shaw is definitely on my to-do list. Interestingly, someone else (who knows both of us) also thinks that the two of us would get on. The quick reply to why we are happy with that level of accuracy is that it suffices for the accuracy of the results that are required (that's not meant to sound quite as tautological as I now fear it might): "reasonably" accurate but very fast is the objective. thanks again! PV

oops, and that same person also berates me for underselling myself: there is also a branchless approximation in my paper (Section 3), which should compare well speed-wise with Shaw-Brickman on a GPU. And on a CPU, my branched approximation performs much better than theirs speed-wise. PV

Some remarks. These 'branchless' ('uniform'?) approximations for cdfN(x) = p usually suffer at p = 0 (as in the A&S case, or the complementary p = 1) or at p = 1/2.

In your case (where I think you need the negative side and 0 < p < 1/2), i.e. for the rational approximation in t = sqrt(ln(1/p^2)) as in A&S, it can only solve cdfN(x) = 1/2 exactly if your parameters are *not* chosen independently. For example, c0 in that case is an affine linear combination of the other params, c0 = sqrt(2*ln(2) + ..., and with the other parameters fixed it will lead to absolute errors of ~ 0.00014 (for your choice). Thus you cannot use minimax (as it seems you have done) that way if you want small relative errors and invN(1/2) = 0 at the same time.

The compromise in A&S is: a small absolute error on average, leading to acceptable tails but a relative mess in the very center. The other approaches I am aware of go the other way round: good in the center (including *exactness* for cdfN(x) = 1/2) and needing a tail switch if one desires something better (well, why not: those are the *rare* cases, and if they are the main concern, then speedy computation is the very least problem).

One variant: the error function erf and tanh 'look quite similar' (where arctan is very fast), so one can approximate their difference (which should cover -1, 0 and +1 with reasonable speed and moderate exactness), but I never went into that in depth, as my interests are towards 'exactness'.

I have neither Steinbrecher's article on the recursion (which I would find interesting from a mathematical point of view [I have doubts about its practical use, but it sounds aesthetic]) nor the other one.
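For concreteness, here is the classic A&S formula 26.2.23 being referred to (with its published constants, not the paper's), in a short Python sketch. It shows both properties mentioned above: the absolute error stays below the published 4.5e-4 bound, but cdfN(x) = 1/2 is not solved exactly:

```python
import math

def inv_norm_as(p: float) -> float:
    """A&S 26.2.23: rational approximation in t = sqrt(ln(1/p^2)).
    Valid for 0 < p < 1; absolute error below 4.5e-4.
    Note it is NOT exact at p = 1/2 (returns ~1e-5 instead of 0)."""
    if p > 0.5:                          # symmetry: invN(p) = -invN(1 - p)
        return -inv_norm_as(1.0 - p)
    t = math.sqrt(math.log(1.0 / (p * p)))
    num = 2.515517 + t * (0.802853 + t * 0.010328)
    den = 1.0 + t * (1.432788 + t * (0.189269 + t * 0.001308))
    return -(t - num / den)              # lower-tail quantile is negative
```

The residual of roughly 1e-5 at p = 0.5, rather than an exact zero, is precisely the center-exactness trade-off described above.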

Does anyone have papers related to rational approximation of the normal distribution?

I guess the uniform approximation would have too much error, so the piecewise approximation is better?


- Cuchulainn

Why do you think that?


I have tried many uniform approximation formulas for the normal distribution CDF; none of them is accurate enough.

I guess that by partitioning x into different subintervals it would be much easier to approximate.


The late Graeme West, Wilmott magazine, Fig. 2, quoting an algorithm of Hart (accurate to machine double precision everywhere).

This paper looks like a piecewise rational approximation of the cumulative normal distribution, doesn't it?

- Cuchulainn

Indeed. And it is implemented in QuantLib. I extensively compared Graeme's code with QL in my recent C++ book, and I get exactly the same answers.
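For anyone wanting a machine-precision yardstick without digging out Hart's constants, the standard library's complementary error function gives the cumulative normal to full double precision (a sketch of the textbook identity, not West's code):

```python
import math

def norm_cdf(x: float) -> float:
    """Cumulative normal via erfc: Phi(x) = erfc(-x / sqrt(2)) / 2.
    Using erfc (rather than 0.5 * (1 + erf(...))) preserves relative
    accuracy in the lower tail, where the erf form cancels to zero."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))
```

For example, norm_cdf(-37.0) still returns a positive double on the order of 1e-300, where the erf form would round to exactly 0.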

The original algorithm is due to Genz.

Done and dusted I would say.
