How about e^5 ~ 163^(160/163)?

It does seem to give accurate results. Where do the magic numbers come from?

Statistics: Posted by Cuchulainn — March 22nd, 2017, 2:04 pm


Compute [$]e^5[$] from the cdf of the exponential distribution:

#include <boost/math/distributions/exponential.hpp>
#include <iostream>

boost::math::exponential_distribution<double> e(1.0);
double a = boost::math::cdf(e, 5.0);   // 1 - exp(-5)
std::cout << 1.0 / (1 - a) << '\n';
// 148.413

Another one. Where does this list stop?

Btw boost provides complements precisely for this: http://www.boost.org/doc/libs/1_63_0/li ... ments.html

Statistics: Posted by outrun — March 13th, 2017, 12:33 pm


#include <boost/math/distributions/exponential.hpp>
#include <iostream>

boost::math::exponential_distribution<double> e(1.0);
double a = boost::math::cdf(e, 5.0);   // 1 - exp(-5)
std::cout << 1.0 / (1 - a) << '\n';
// 148.413

Statistics: Posted by Cuchulainn — March 13th, 2017, 11:26 am


About that 1e-16: you can change the weights of a quadrature and pick different orthogonal polynomials depending on the type of interval (Legendre for finite, Laguerre for half-infinite, Hermite for infinite intervals): see the Wikipedia article on Gaussian quadrature.

Statistics: Posted by outrun — March 9th, 2017, 2:46 pm


katastrofa wrote:Isn't the fact that the integration limits are no longer rectangular a problem for most methods?

That makes A&S 26.3.3 tricky, in principle.

auto f = [&](double d) { return Z(d)*CdfNormal((y-rho*d)*rho2); };

// Tanh Rule

result = DEIntegrator::Integrate(f, AL, x, 1.0e-16, evals, error);

Statistics: Posted by katastrofa — March 9th, 2017, 2:22 pm


** Bivariate NX, NY: 15,15

Goursat Classico NX: 0.5275646914439414,

Goursat Extrap 2*NX: 0.519400274728015

Goursat Extrap 4*NX: 0.5195176107403655

Genz: 0.5195173418566086

Tanh +(A&S 26.3.3) : 0.5195173411668271

Let's discuss this example since you've posted it.

The 4x extrapolation seems to have 6 significant digits, right (519517)? That means at some point you computed on a 60x60 grid, since you started with a 15x15 grid (NX = 15, and later it says 4*NX). So how long did that computation take compared to the Genz calculation?

Statistics: Posted by outrun — March 8th, 2017, 3:16 pm


outrun wrote:Cuchulainn wrote:The latter is a bit slower than Genz and PDE approach.

If we require to compute something like M(x<=0.4, y<=0.12, rho=0.9) with 7 digits precision, how long will Tanh, Genz and PDE take? The PDE is *a lot* slower than Tanh or Genz in this case. You need to explain how you measure PDE speed and what its precision is, and why my simple example conflicts with that.

I think Genz pretty much does what's stated, translate it to a 1d integral.

Your analysis is incorrect. Your intuition tells you it is slower. I have done all the tests and will post on the other thread.

And this thread is about the "best method"; efficiency is only one of the metrics.

BTW have you ever used these methods? What's your run time for Tanh for example? Is it 10, 20, 30 times slower than Genz?

Your results show me it's slower; there is no intuition. You need a very large grid to get a couple of digits of precision.

Shall we do a defined bet then? I bet that:

1) To compute M(x<=0.4, y<=0.12, rho=0.9) with 7 digits precision we have Tanh and Genz being *much* faster than PDE. Probably at least a factor 100. You claim as quoted that PDE is faster, and so I bet it's slower by at least a factor 10. You are allowed to use Richardson extrapolation but of course need to include that in your timing. The Genz answer to this bet is 5.195173365E-1.

To answer your question: depending on the required precision you need to pick a number of quadrature points for the 2d numerical tanh integral. In my experience both Gauss-Legendre and Tanh quadrature converge very fast (as a function of the number of nodes) for smooth functions like the bivariate Gaussian. A 2d numerical Tanh integral is more generic than the specialized Genz method and I expect Genz to be at least 5 times faster for similar precision.

With the low-precision 7-digit requirement in the bet, the PDE method will also be at least a factor 1000 less accurate. So it's both slower *and* less accurate by orders of magnitude!

Statistics: Posted by outrun — March 8th, 2017, 3:06 pm


Cuchulainn wrote:The latter is a bit slower than Genz and PDE approach.

If we require to compute something like M(x<=0.4, y<=0.12, rho=0.9) with 7 digits precision, how long will Tanh, Genz and PDE take? The PDE is *a lot* slower than Tanh or Genz in this case. You need to explain how you measure PDE speed and what its precision is, and why my simple example conflicts with that.

I think Genz pretty much does what's stated, translate it to a 1d integral.

Your analysis is incorrect. Your intuition tells you it is slower. I have done all the tests and will post on the other thread.

And this thread is about the "best method"; efficiency is only one of the metrics.

BTW have you ever used these methods? What's your run time for Tanh for example? Is it 10, 20, 30 times slower than Genz?


Examples on rough meshes

** Bivariate a,b, rho (one-off scenario): 0.4,0.12,0.9

** Bivariate NX, NY: 15,15

Goursat Classico NX: 0.5275646914439414,

Goursat Extrap 2*NX: 0.519400274728015

Goursat Extrap 4*NX: 0.5195176107403655

Genz: 0.5195173418566086

Tanh +(A&S 26.3.3) : 0.5195173411668271

Tanh fun evals and error : 385,2.842170943040401e-15

Example 2

** Bivariate a,b, rho (one-off scenario): 8,1.1,-0.5

** Bivariate NX, NY: 15,15

Goursat Classico NX: 0.8680812732240816,

Goursat Extrap 2*NX: 0.864315708265091

Goursat Extrap 4*NX: 0.8643339688264521

Genz: 0.8643339390536167

Tanh +(A&S 26.3.3) : 0.8643339395362047

Tanh fun evals and error : 385,9.71445146547012e-17

Statistics: Posted by Cuchulainn — March 8th, 2017, 2:33 pm


The latter is a bit slower than Genz and PDE approach.

If we require to compute something like M(x<=0.4, y<=0.12, rho=0.9) with 7 digits precision, how long will Tanh, Genz and PDE take? The PDE is *a lot* slower than Tanh or Genz in this case. You need to explain how you measure PDE speed and what its precision is, and why my simple example conflicts with that.

I think Genz pretty much does what's stated, translate it to a 1d integral.

Statistics: Posted by outrun — March 8th, 2017, 12:57 pm


Isn't the fact that the integration limits are no longer rectangular a problem for most methods?

That makes A&S 26.3.3 tricky, in principle. Lambda functions are useful:

auto f = [&](double d) { return Z(d)*CdfNormal((y-rho*d)*rho2); };

// Tanh Rule

result = DEIntegrator::Integrate(f, AL, x, 1.0e-16, evals, error);

Statistics: Posted by Cuchulainn — March 8th, 2017, 12:49 pm


Taking what list1 suggested one step further, you can transform this double integral to an ordinary integral, and use your favorite method to compute it: [$]\frac{1}{\sqrt{2\pi}} \int_{-\infty}^b dy\, e^{-y^2/2} N\left(\frac{a-\rho y}{\sqrt{1-\rho^2}}\right)[$] where [$]N[$] is the cumulative normal function. Most likely, there is no closed-form solution to the ordinary integral, except for special values of [$]a[$], [$]\rho[$].

This is certainly the most elegant solution mathematically, IMO; it takes the fewest lines of code, and it is the most accurate if you use the Tanh integration rule. The latter is a bit slower than the Genz and PDE approaches.

Is this formula applicable to trivariate?

Statistics: Posted by Cuchulainn — March 8th, 2017, 10:54 am


Traden4Alpha wrote:10 digits of e is easy-peasy due to the repeating structure: 2.7 1828 1828.

Floating point significands have the structure (1 ± ∆), and (1 ± ∆)^N ≈ 1 ± N*∆. From that we can estimate the least number of bits or digits required to achieve D decimal places of accuracy. But the number of bits or digits required to *guarantee* D decimal places is unbounded in the general case, due to the non-zero chance of a round-up/round-down ambiguity in the deeper digits.

It seems that induced errors (caused by rounding of products a*b) are uniformly distributed in the interval [-u/2, u/2], where u is the machine precision. Is that reasonable?

1. Empirical non-uniformity: Benford's Law would predict a slight bias in the distribution, especially if one is looking at low precision multiplication.

2. Theoretical non-uniformity: The induced errors sometimes occur on [-u,+u/2], [-u/2,+u] for cases where a*b is on the cusp of a change in exponent. The induced errors can be stranger in cases related to underflow and overflow. (Assuming we're talking about IEEE-style floating point with a fixed division of bits between significand and exponent)

3. Dangerous assumptions about inputs: For a & b that are IID and "smoothly" distributed, uniformity might be fine. But if a & b are drawn from the set of powers of two or any set with a preponderance of powers of two, the chance of induced error of zero would be higher than expected. And what if the upstream process that produces a and b actually has b = 1/a? This feels like one of those software contract issues in which the code that does stuff with the induced error on low precision a*b uses some non-trivial assumptions about the nature of a and b that the user of the software needs to understand.

Statistics: Posted by Traden4Alpha — February 24th, 2017, 2:08 pm
