SERVING THE QUANTITATIVE FINANCE COMMUNITY

Cuchulainn
Posts: 62410
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Re: Bivariate Normal Integral paper

March 26th, 2017, 1:07 pm

Instead of a PDE solution we can examine the dual problem and integrate directly, using various versions of Romberg integration in a 2d loop. This works well, which shows that the integrand is well-behaved.

Looking at A&S 26.3.3 we can see that the inner integral can be written directly in terms of C++11 std::erf(x). Then in the other direction we can use Romberg/midpoint integration to get the accuracy we want. In this case we don't need to build a matrix, and C++11 takes care of all the edge conditions in erf(x) and exp(x). Just in case, you can also use Kahan summation.

http://people.math.sfu.ca/~cbm/aands/page_936.htm
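To make the idea concrete, here is a minimal C++11 sketch of this reduction. It is my reconstruction, not the code behind the tables below: the function names are mine, the outer domain is truncated at -8 (where the normal density is negligible), and the inner integral of 26.3.3 is folded into std::erfc. A single Richardson step turns the O(h^2) midpoint rule into an O(h^4) extrapolated variant.

```cpp
#include <cmath>

// Sketch: A&S 26.3.3 reduced to a 1d integral,
// P(X <= a, Y <= b) = Integral_{-inf}^{a} phi(x) * Phi((b - rho*x)/sqrt(1 - rho^2)) dx,
// with the outer integral done by the composite midpoint rule on [-8, a].
double bvnMidpoint(double a, double b, double rho, int n)
{
    const double lo = -8.0;                       // domain truncation: phi(x) ~ 0 below -8
    const double h = (a - lo) / n;
    const double s = std::sqrt(2.0 * (1.0 - rho * rho));
    const double invSqrt2Pi = 0.39894228040143268;
    double sum = 0.0;                             // Kahan summation could be used here
    for (int i = 0; i < n; ++i)
    {
        double x = lo + (i + 0.5) * h;            // midpoint abscissa
        double phi = invSqrt2Pi * std::exp(-0.5 * x * x);
        // inner integral in closed form via the C++11 complementary error function
        double inner = 0.5 * std::erfc(-(b - rho * x) / s);
        sum += phi * inner;
    }
    return sum * h;
}

// One Richardson extrapolation step: midpoint is O(h^2),
// so (4*M(2n) - M(n))/3 is O(h^4)
double bvnExtrap(double a, double b, double rho, int n)
{
    return (4.0 * bvnMidpoint(a, b, rho, 2 * n) - bvnMidpoint(a, b, rho, n)) / 3.0;
}
```

For the first parameter set below, bvnExtrap with a modest n already agrees with the Genz reference value to several digits beyond the plain midpoint rule.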

Some results, starting with a modest N = 10 (the result is accurate to 7 digits without too much effort); N = 50, 100, 400 are stress tests.

So we can get accurate results directly in C++11 without external libraries. It's another option.

a, b, rho: 1.93185,1.78999, -0.411002
*Genz West                  : 0.9366216225051415
*Tanh 2d Extrap/Adaptive    : 0.9366219402518232
*Midpoint 2d                : 0.9366238179854843
*Tanh 2d 100X100            : 0.9367276920000295
*A&S 26.3.3                 : 0.9366704332207069
 
*Genz QuantLib 1.8          : 0.9366216225051415
*A&S 26.3.3 Extrap (N=10)   : 0.9366210429626975
*A&S 26.3.3 Extrap (N=50)   : 0.9366216224739409
*A&S 26.3.3 Extrap (N=100)  : 0.9366216225046546
*A&S 26.3.3 Extrap (N=400)  : 0.9366216225051428
 
a, b, rho: 1.337236842557504,1.264162992646285, 0.486393246573539
*Genz West                  : 0.8363779098801795
*Tanh 2d Extrap/Adaptive    : 0.8363726930947498
*Midpoint 2d                : 0.836377419248175
*Tanh 2d 100X100            : 0.8365230433697367
*A&S 26.3.3                 : 0.8364483519381961
 
*Genz QuantLib 1.8          : 0.8363779098801796
*A&S 26.3.3 Extrap (N=10)   : 0.8363776712097363
*A&S 26.3.3 Extrap (N=50)   : 0.836377909871765
*A&S 26.3.3 Extrap (N=100)  : 0.8363779098800628
*A&S 26.3.3 Extrap (N=400)  : 0.8363779098801919
 
a, b, rho: 1.955091230607987,0.941464704072475, -0.4825543785261079
*Genz West                  : 0.80173352113734
*Tanh 2d Extrap/Adaptive    : 0.8017309528547933
*Midpoint 2d                : 0.8017282455937149
*Tanh 2d 100X100            : 0.8018595575746997
*A&S 26.3.3                 : 0.8017798723119439
 
*Genz QuantLib 1.8          : 0.80173352113734
*A&S 26.3.3 Extrap (N=10)   : 0.8017329230670884
*A&S 26.3.3 Extrap (N=50)   : 0.8017335211054371
*A&S 26.3.3 Extrap (N=100)  : 0.8017335211368406
*A&S 26.3.3 Extrap (N=400)  : 0.8017335211373599
 
a, b, rho: 1.860920640238858,0.3513236324632962, -0.09898276912896131
*Genz West                  : 0.614801617049458
*Tanh 2d Extrap/Adaptive    : 0.6147996574002191
*Midpoint 2d                : 0.6147982757717783
*Tanh 2d 100X100            : 0.6148746733453729
*A&S 26.3.3                 : 0.6148381569839125
 
*Genz QuantLib 1.8          : 0.6148016170494581
*A&S 26.3.3 Extrap (N=10)   : 0.6148012461785963
*A&S 26.3.3 Extrap (N=50)   : 0.6148016170304943
*A&S 26.3.3 Extrap (N=100)  : 0.6148016170491677
 
Cuchulainn

Re: Bivariate Normal Integral paper

March 29th, 2017, 11:29 am

One more thing...

Speed Test
 
a, b, rho, N: 1.19473, 0.10332, -0.666994, 814657
Genz QL: 1.02002
Genz West: 1.19002
26.3.3: 5.40011
 
a, b, rho, N: 0.167764, -1.08659, 0.436802, 301351
Genz QL: 0.450009
Genz West: 0.430009
26.3.3: 1.31303
 
a, b, rho, N: 0.371799, 0.321643, 0.20042, 417296
Genz QL: 0.334011
Genz West: 0.350007
26.3.3: 1.53003
 
a, b, rho, N: -0.386024, -0.825493, 0.544039, 769328
Genz QL: 0.98002
Genz West: 1.11002
26.3.3: 4.56005
 
 
Cuchulainn

Re: Bivariate Normal Integral paper

March 31st, 2017, 1:54 pm

For 26.3.3, using 20-point Gauss-Legendre is very accurate and performs much better than Romberg. It is [1.1, 3] times slower than QuantLib Genz. The performance cannot be improved further since we are using std::erf(x) as a black box; we don't know how it is implemented.
It's always a trade-off: tweak ad infinitum or choose a general, robust method.
Fin.
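A sketch of the Gauss-Legendre variant, under the same assumptions as before (my function names, domain truncated at -8, inner integral via std::erfc). Rather than hard-coding the 20-point nodes, this computes them with the classic Newton iteration on the Legendre three-term recurrence:

```cpp
#include <cmath>
#include <vector>

// Gauss-Legendre nodes and weights on [x1, x2], found by Newton iteration
// on the Legendre recurrence (the standard textbook construction).
void gaussLegendre(double x1, double x2, std::vector<double>& x,
                   std::vector<double>& w, int n)
{
    const double pi = std::acos(-1.0);
    x.assign(n, 0.0); w.assign(n, 0.0);
    int m = (n + 1) / 2;                               // roots come in +/- pairs
    double xm = 0.5 * (x2 + x1), xl = 0.5 * (x2 - x1);
    for (int i = 0; i < m; ++i)
    {
        double z = std::cos(pi * (i + 0.75) / (n + 0.5)); // initial root guess
        double z1, pp;
        do
        {
            double p1 = 1.0, p2 = 0.0;
            for (int j = 0; j < n; ++j)                // Legendre recurrence for P_n(z)
            {
                double p3 = p2; p2 = p1;
                p1 = ((2.0 * j + 1.0) * z * p2 - j * p3) / (j + 1.0);
            }
            pp = n * (z * p1 - p2) / (z * z - 1.0);    // derivative P_n'(z)
            z1 = z; z = z1 - p1 / pp;                  // Newton step
        } while (std::fabs(z - z1) > 1e-14);
        x[i] = xm - xl * z; x[n - 1 - i] = xm + xl * z;
        w[i] = 2.0 * xl / ((1.0 - z * z) * pp * pp); w[n - 1 - i] = w[i];
    }
}

// 26.3.3 outer integral by n-point Gauss-Legendre on the truncated range
double bvnGaussLegendre(double a, double b, double rho, int n)
{
    const double s = std::sqrt(2.0 * (1.0 - rho * rho));
    const double invSqrt2Pi = 0.39894228040143268;
    std::vector<double> x, w;
    gaussLegendre(-8.0, a, x, w, n);
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        sum += w[i] * invSqrt2Pi * std::exp(-0.5 * x[i] * x[i])
                    * 0.5 * std::erfc(-(b - rho * x[i]) / s);
    return sum;
}
```

With n = 20 this already matches the Genz value for the first example to far more digits than the plain midpoint rule, which is consistent with the accuracy reported above.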
 
AVt
Posts: 1074
Joined: December 29th, 2001, 8:23 pm

Re: Bivariate Normal Integral paper

April 2nd, 2017, 7:46 pm

I looked at the example in post #106, a, b, rho: 1.93185, 1.78999, -0.411002 and got 0.936622085501903 (15 decimals).

PS: What time units are meant in the speed test in post #107?
 
Cuchulainn

Re: Bivariate Normal Integral paper

April 3rd, 2017, 8:13 am

I looked at the example in post #106, a, b, rho: 1.93185, 1.78999, -0.411002 and got 0.936622085501903 (15 decimals).

PS: What time units are meant in the speed test in post #107?
I ran the experiment again with these precise (truncated) numbers, plus some extra methods for comparison, and a performance test with 10^7 trials (time is in seconds).

What I particularly like is the 12-digit accuracy for 26.3.3 using 20-point GL and standard C++11 (no extra files and/or libraries needed). I did 10-point GL as well; it has lower accuracy, as expected, but is faster.

a, b, rho: 1.93185,1.78999, -0.411002
*Goursat Classico 200x200   : 0.9366486100496062
*Goursat Extrap             : 0.9366220856907832
*Genz West                  : 0.9366220855019028
*Drezner 1978 Quantlib 1.8  : 0.9366220580788491
*A&S 26.3.3 Extrap (N=10)   : 0.936621505959591
*A&S 26.3.3 Extrap (N=950)  : 0.9366220855018679
*Genz QuantLib 1.8          : 0.9366220855019028
*A&S 26.3.3 Gauss Legendre  : 0.9366220855178885
 
 
Speed Test (in seconds)
 
a, b, rho; NTrials: 1.93185, 1.78999, -0.411002; 10000000
Genz QL: 15.0989
Genz West: 13.9828
26.3.3 GauLeg: 31.6778
26.3.3 Midpt Extrap N = 10: 92.4843
 
a, b, rho; NTrials: 1.93185, 1.78999, -0.411002; 10000000
Genz QL: 17.072
Genz West: 15.8739
26.3.3 GauLeg: 33.3149
26.3.3 Midpt Extrap N = 10: 105.156
 
a, b, rho; NTrials: 1.93185, 1.78999, -0.411002; 10000000
Genz QL: 16.912
Genz West: 15.4709
26.3.3 GauLeg: 38.7462
26.3.3 Midpt Extrap N = 10: 105.854
 
Cuchulainn

Re: Bivariate Normal Integral paper

July 24th, 2017, 8:25 am

Concluding:

. The West and QuantLib implementations of the Genz algorithm give the same results up to machine precision. We used the finite difference method to check them, to ensure that they do not both return the same incorrect value for certain parameter values; we generated a, b, rho randomly and ran 10^10 trials.
. The added value of the finite difference approach is that it can be tuned to suit our accuracy needs. The basic model is second-order accurate and we can use Richardson extrapolation to achieve fourth-order accuracy. Furthermore, these meshes deliver a matrix as part of the algorithm, which saves recomputation for certain classes of applications.
. The finite difference approach can be applied to a wide range of bivariate and trivariate distributions, for example the bivariate t distribution and the trivariate normal distribution. Genz becomes difficult in these cases.
 
. The Drezner 1978 algorithm is less accurate than the other schemes, and it degrades for correlations whose absolute value is near 1.
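The "matrix as part of the algorithm" point can be illustrated with a short sketch (my reconstruction, not the code used above): discretise u_xy = f, where f is the bivariate normal density, with a second-order marching scheme on a truncated domain. The resulting mesh holds CDF values at every node, not just at (a, b).

```cpp
#include <cmath>
#include <vector>

// Second-order FDM for u_xy = f with u = 0 on the truncated "-infinity"
// edges; afterwards u[i][j] ~ P(X <= x_i, Y <= y_j) on the whole mesh.
std::vector<std::vector<double>> bvnCdfMesh(double a, double b, double rho, int n)
{
    const double lo = -8.0;                        // truncated lower boundary
    const double hx = (a - lo) / n, hy = (b - lo) / n;
    const double pi = std::acos(-1.0);
    const double c = 1.0 / (2.0 * pi * std::sqrt(1.0 - rho * rho));
    auto f = [=](double x, double y)               // bivariate normal density
    {
        return c * std::exp(-(x * x - 2.0 * rho * x * y + y * y)
                            / (2.0 * (1.0 - rho * rho)));
    };
    std::vector<std::vector<double>> u(n + 1, std::vector<double>(n + 1, 0.0));
    // march across the mesh; the midpoint update is second-order accurate
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= n; ++j)
            u[i][j] = u[i - 1][j] + u[i][j - 1] - u[i - 1][j - 1]
                    + hx * hy * f(lo + (i - 0.5) * hx, lo + (j - 0.5) * hy);
    return u;                                      // u[n][n] is P(X <= a, Y <= b)
}
```

The corner value u[n][n] can be checked against the Genz figures quoted earlier in the thread; every interior node is a CDF value for free, which is the saving for applications that need the whole surface.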
 
Cuchulainn

Re: Bivariate Normal Integral paper

September 28th, 2017, 6:46 am

Genz has an algorithm for the trivariate normal distribution, and Graeme West has an implementation. The question is whether the PDE approach works in 3d; if so, the PDE would be

[$]\frac{\partial^3 u}{\partial x \partial y \partial z} = f[$]  (BTW does anyone know if this PDE has a name? It's the kind of thing Goursat, Monge and Darboux would do.)

We discretised this as a second-order FDM scheme and tested against Genz. I generated some tests such as

      Values, error: 1/ 3.36056e-25,8.60479e-26,2.50008e-25
      Values, error: 2/ 5.55148e-15,5.55106e-15,4.2039e-19
      Values, error: 3/ 0.998061,0.998062,1.55523e-06
      Values, error: 4/ 0.667054,0.667056,2.38335e-06
      Values, error: 5/ 3.57559e-08,3.57549e-08,9.60296e-13
 
      // …
 
      Values, error: 96/ 7.25962e-15,7.21828e-15,4.13414e-17
      Values, error: 97/ 0.0118934,0.0118931,2.91526e-07
      Values, error: 98/ 6.82096e-17,6.19754e-17,6.23427e-18
      Values, error: 99/ 9.44464e-07,9.44434e-07,2.96727e-11
      Values, error: 100/ -4.16418e-25,1.23604e-73,4.16418e-25
 
      Max, Min error: 6.003088278805357e-06, 1.36752501534285e-45

and tested for rho in the range [-0.5, 0.5] and b1, b2 and b3 in the range [-8, 8]. I used domain truncation, but a more elegant approach is to transform to [-1,1]^n exactly.
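For the zero-correlation case the scheme is easy to sketch and to verify, since the exact answer is then a product of univariate Phi's. This is my reconstruction of the second-order 3d marching scheme, with an inclusion-exclusion update on a truncated box (function name and truncation at -8 are mine):

```cpp
#include <cmath>
#include <vector>

// Second-order FDM for u_xyz = f (trivariate normal density, rho = 0)
// on the truncated box [-8, b1] x [-8, b2] x [-8, b3]; u = 0 on the
// "-infinity" boundary planes. Returns u at the far corner, i.e. the CDF.
double tvnCdfZeroRho(double b1, double b2, double b3, int n)
{
    const double lo = -8.0;
    const double h1 = (b1 - lo) / n, h2 = (b2 - lo) / n, h3 = (b3 - lo) / n;
    const double invSqrt2Pi = 0.39894228040143268;
    auto phi = [=](double t) { return invSqrt2Pi * std::exp(-0.5 * t * t); };
    // flat (n+1)^3 mesh, initialised to zero on the boundary
    std::vector<double> u((n + 1) * (n + 1) * (n + 1), 0.0);
    auto U = [&](int i, int j, int k) -> double&
    { return u[(i * (n + 1) + j) * (n + 1) + k]; };
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= n; ++j)
            for (int k = 1; k <= n; ++k)
            {
                // density at the cell midpoint (product form since rho = 0)
                double fm = phi(lo + (i - 0.5) * h1) * phi(lo + (j - 0.5) * h2)
                          * phi(lo + (k - 0.5) * h3);
                // 3d inclusion-exclusion update, the analogue of the 2d scheme
                U(i, j, k) = U(i - 1, j, k) + U(i, j - 1, k) + U(i, j, k - 1)
                           - U(i - 1, j - 1, k) - U(i - 1, j, k - 1) - U(i, j - 1, k - 1)
                           + U(i - 1, j - 1, k - 1) + h1 * h2 * h3 * fm;
            }
    return U(n, n, n);
}
```

For example, tvnCdfZeroRho(0, 0, 0, n) should converge to Phi(0)^3 = 0.125; the general-rho version only changes the density fm.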

The PDE approach can be applied to any distribution in principle. 

I have also done it for the 4-dimensional integral with zero correlation (I did not feel like inverting a 4x4 covariance matrix by hand...)

[$]\frac{\partial^4 u}{\partial x \partial y \partial z \partial p} = f[$]