
 
User avatar
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Using Quantlib

December 26th, 2017, 11:50 pm

Statistically observable deviation is a clear mathematical concept at the core of analyzing these two types of issues. In numerical methods you consider finite sets, and there are infinitely many theoretical distributions (generated by algorithms) whose statistical deviation stays below the detectable sample or representation error. That's how you quantify relevance: it's applied statistics and probability theory on finite sample sets.

BM or any other method is, like you said, a theoretical concept that can't be implemented exactly numerically; it can only be approximated with a finite-length number representation and finite-precision operations on it.

Floating point resolution issues have been discussed extensively, no? They can be addressed in the sense that you can refine your representation (more bits etc.) and make certain properties provably deviate by less than some threshold. You do this without having to change the algorithms (ziggurat and BM need no modifications), so it's an orthogonal issue.
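
To make "statistically observable deviation" concrete, here is a minimal sketch (my own illustration in C++, nothing from QuantLib or Boost): draw a finite sample from a candidate generator (here MT19937 plus a textbook Box-Muller, an arbitrary choice) and compute a one-sample Kolmogorov-Smirnov statistic against the standard normal CDF. The sample size, seed and 5% threshold are assumptions.

[code]
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

// Standard normal CDF via the complementary error function.
double normal_cdf(double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }

int main() {
    const std::size_t n = 1000000;        // finite sample under test
    const double two_pi = 6.283185307179586;

    std::mt19937 gen(42);                 // generator under test (illustrative)
    std::uniform_real_distribution<double> u(0.0, 1.0);

    // Textbook Box-Muller on top of the uniform generator.
    std::vector<double> z(n);
    for (std::size_t i = 0; i + 1 < n; i += 2) {
        double u1 = 1.0 - u(gen);         // shift to (0,1] so log() is safe
        double u2 = u(gen);
        double r  = std::sqrt(-2.0 * std::log(u1));
        z[i]     = r * std::cos(two_pi * u2);
        z[i + 1] = r * std::sin(two_pi * u2);
    }
    std::sort(z.begin(), z.end());

    // One-sample Kolmogorov-Smirnov statistic against N(0,1).
    double d = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        double f = normal_cdf(z[i]);
        d = std::max(d, std::max(f - double(i) / n, double(i + 1) / n - f));
    }
    // Asymptotic 5% critical value is roughly 1.36 / sqrt(n).
    std::printf("KS statistic %.3e vs 5%% threshold %.3e\n",
                d, 1.36 / std::sqrt(static_cast<double>(n)));
    return 0;
}
[/code]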
 
User avatar
Traden4Alpha
Posts: 23951
Joined: September 20th, 2002, 8:30 pm

Re: Using Quantlib

December 27th, 2017, 1:15 am

Yes, statistically observable deviation is a great idea. And certainly for the 64-bit version, a statistically observable deviation wouldn't show up in the lower moments of the PRNG. The 32-bit version is a lot more likely to show deviations, but those deviations would be unexpected unless one understands the combined system.

What's interesting is that the issue I'm mentioning isn't the usual round-off and numerical error inside BM or Ziggurat. Instead, it is an interaction between the particular generator of the 0-1 U() values and the particular transformation method that converts U() to N(). I doubt Ziggurat clips the tails in the same way that BM does. Ziggurat's PDF over the set of floating point numbers might be better or worse than BM's. Each of the 11 methods for creating a Gaussian probably has different resolution issues.

More importantly, the effect is modulated by how U() is generated. Plain old MT (or any other integer RNG) rescaled to a float will create these issues. A different kind of U() PRNG that generates a proper uniform random floating point value by creating a suitable random exponent and random mantissa will not have these issues (but may have other ones).

Thus, the RNG and transform pair matters. A true 32-bit float U() PRNG would not have the tail-clipping problems with BM that are seen with a U() PRNG based on simple rescaled uniform 32-bit integers.
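
A minimal sketch of the two constructions being contrasted (my own illustration, not QuantLib's or Boost's code, and the bit-twiddling details are assumptions): (a) the usual rescaling of a 32-bit integer, which can only hit a fixed 2^-32 grid, versus (b) a "dense" U() that draws a random mantissa plus a geometrically distributed exponent, so much smaller values near zero become reachable.

[code]
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <random>

// (a) Rescaled 32-bit integer: every value sits on a fixed grid of width 2^-32.
double u_rescaled(std::mt19937& g) {
    return g() * (1.0 / 4294967296.0);    // 4294967296 = 2^32
}

// (b) Random mantissa plus geometric exponent: each leading zero bit halves the
// scale, so values far below 2^-32 stay reachable (with tiny probability).
// Illustrative only: the low bits are not refilled after the shift, which a
// production implementation would want to fix.
double u_dense(std::mt19937_64& g) {
    int exponent = -64;
    std::uint64_t bits;
    while ((bits = g()) == 0) exponent -= 64;            // skip all-zero blocks (vanishingly rare)
    while (!(bits >> 63)) { bits <<= 1; --exponent; }    // normalise the leading bit
    return std::ldexp(static_cast<double>(bits), exponent);  // value in (0,1)
}

int main() {
    std::mt19937    g32(1);
    std::mt19937_64 g64(1);
    std::printf("rescaled integer U(): %.17g\n", u_rescaled(g32));
    std::printf("dense float U()     : %.17g\n", u_dense(g64));
    return 0;
}
[/code]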
 
User avatar
Cuchulainn
Topic Author
Posts: 59713
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Using Quantlib

December 27th, 2017, 11:54 am

If I can summarise:

1. Ziggurat is fastest for 32-bit serial CPUs.
2. It is supported in Boost and QuantLib. It runs.
3. It works in practice but the mathematical theory is a bit flaky. It doesn't seem universally applicable; can it be applied to the non-central chi^2, for example?
4. For multi-core my hunch is it will not be efficient. Haven't tried it.

That's 4 items more than I knew last week.

Both of you seem to place a lot of emphasis/make heavy weather of the tails (the QL and Boost implementations check for this). Is the issue caused by the fact that the pdf is defined on [$](0,\infty)[$]? Why not try to map this interval to [$](0,1)[$] using [$]y = x/(x+1)[$] (it is a useful trick for the BS PDE)? It might distort the nice statistical properties of the non-transformed case.
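
For what it's worth, the change of variables written out (a sketch; [$]f[$] is whatever density on [$](0,\infty)[$] one wants to sample, e.g. the non-central [$]\chi^2[$]): with [$]y = x/(x+1)[$] we have [$]x = y/(1-y)[$] and [$]dx/dy = 1/(1-y)^2[$], so the mapped density on [$](0,1)[$] is [$]g(y) = \frac{1}{(1-y)^2}\, f\!\left(\frac{y}{1-y}\right)[$]. Note that the tail [$]x \to \infty[$] gets squeezed into [$]y \to 1[$], where the absolute spacing of floating point numbers is at its coarsest, so the transform may relocate the tail-resolution question rather than remove it.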
 
User avatar
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Using Quantlib

December 27th, 2017, 1:56 pm

It's not flaky; why do you say so?
I expect it to work just as well on multi-core.

Distortion is a property of floats, not the rng
 
User avatar
Traden4Alpha
Posts: 23951
Joined: September 20th, 2002, 8:30 pm

Re: Using Quantlib

December 27th, 2017, 2:33 pm

Distortion is a property of floats, not the rng
Actually no!

A true floating point U() PRNG would not induce the clipped tails in Box-Muller that are caused by using a rescaled integer U() PRNG. Nor would a true floating point U() PRNG have gaps in the PDF (stairsteps in the CDF) that are produced by using a rescaled integer U() PRNG.

The choice of U() and the U->N transform are not orthogonal.
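
To put rough numbers on the clipping (a back-of-the-envelope sketch, not QL or Boost behaviour): if u1 comes from rescaling a k-bit integer, its smallest nonzero value is 2^-k, so the Box-Muller radius sqrt(-2 ln u1) can never exceed sqrt(2 k ln 2).

[code]
#include <cmath>
#include <cstdio>

int main() {
    // Largest |z| Box-Muller can emit when u1 is a rescaled k-bit integer:
    // u1 >= 2^-k  implies  |z| <= sqrt(-2 ln 2^-k) = sqrt(2 k ln 2).
    const int bits[] = {32, 52, 64};
    for (int k : bits) {
        double zmax = std::sqrt(2.0 * k * std::log(2.0));
        std::printf("k = %2d bits: tail clipped at about %.2f sigma\n", k, zmax);
    }
    return 0;
}
[/code]

With 32 bits the cut-off is about 6.7 sigma; with the 52-53 bits of a double mantissa it is roughly 8.5 sigma, which is presumably part of why the 64-bit case hides the effect so well.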
 
User avatar
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Using Quantlib

December 27th, 2017, 3:22 pm

Actually no!

A true floating point U() PRNG would not induce the clipped tails in Box-Muller that are caused by using a rescaled integer U() PRNG. Nor would a true floating point U() PRNG have gaps in the PDF (stairsteps in the CDF) that are produced by using a rescaled integer U() PRNG.

The choice of U() and the U->N transform are not orthogonal.
Floats have gaps too... and all these things exist but are not very relevant in practice. That's why we can still do science using the computers you see on everyone's desk.
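
A small sketch of those gaps (illustrative values, standard <cmath> only): the spacing of IEEE single precision is not constant, and near 1.0 it is actually wider than the 2^-32 grid of a rescaled 32-bit integer.

[code]
#include <cmath>
#include <cstdio>

int main() {
    float near_one = 1.0f, near_small = 1e-6f;
    // Gap to the next representable single-precision float, at two scales.
    std::printf("float gap near 1.0 : %.3e\n", std::nextafter(near_one, 2.0f) - near_one);
    std::printf("float gap near 1e-6: %.3e\n", std::nextafter(near_small, 1.0f) - near_small);
    // Grid of a 32-bit integer rescaled to [0,1): the same spacing everywhere.
    std::printf("rescaled 32-bit grid: %.3e\n", 1.0 / 4294967296.0);
    return 0;
}
[/code]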
 
User avatar
Traden4Alpha
Posts: 23951
Joined: September 20th, 2002, 8:30 pm

Re: Using Quantlib

December 27th, 2017, 3:47 pm

Floats have gaps too... and all these things exist but are not very relevant in practice. That's why we can still do science using the computers you see on everyone's desk.
Yes, floats have gaps. But a U() PRNG created from rescaled integers will have much larger gaps than a correctly constructed floating point U() PRNG.

The choice of U() PRNG will affect the gaps and clipped tails coming out of the transform to N().
 
User avatar
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Using Quantlib

December 27th, 2017, 3:51 pm

Yes, floats have gaps. But a U() PRNG created from rescaled integers will have much larger gaps than a correctly constructed floating point U() PRNG.

The choice of U() PRNG will affect the gaps and clipped tails coming out of the transform to N().
Yes, and still all of this is not very relevant. It's very easy to point out issues with computation on a computer involving continuous functions. It's much harder to classify these issues as relevant/irrelevant using statistics in the context of an actual group of applications.

Can we take a European call option price in USD with MC as an example, to put things in perspective?

Q1) What precision are we aiming for?
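
A bare-bones sketch of the setup I have in mind (all parameters, the seed and the path count are illustrative): a plain Monte Carlo European call under GBM, reporting the statistical standard error, since that is the precision budget everything else has to be measured against.

[code]
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    // Illustrative contract and market data (assumptions, not a recommendation).
    const double S0 = 100.0, K = 100.0, r = 0.02, sigma = 0.2, T = 1.0;
    const std::size_t n = 1000000;        // number of Monte Carlo paths

    std::mt19937_64 gen(7);
    std::normal_distribution<double> norm(0.0, 1.0);

    double sum = 0.0, sum2 = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        double z  = norm(gen);
        double ST = S0 * std::exp((r - 0.5 * sigma * sigma) * T + sigma * std::sqrt(T) * z);
        double pv = std::exp(-r * T) * std::max(ST - K, 0.0);   // discounted payoff
        sum  += pv;
        sum2 += pv * pv;
    }
    double mean = sum / n;
    double se   = std::sqrt((sum2 / n - mean * mean) / n);      // standard error of the mean
    std::printf("price ~ %.4f +/- %.4f (1 s.e.)\n", mean, se);
    return 0;
}
[/code]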
 
User avatar
Traden4Alpha
Posts: 23951
Joined: September 20th, 2002, 8:30 pm

Re: Using Quantlib

December 27th, 2017, 4:24 pm

Yes, and still all of this is not very relevant. It's very easy to point out issues with computation on a computer involving continuous functions. It's much harder to classify these issues as relevant/irrelevant using statistics in the context of an actual group of applications.

Can we take a European call option price in USD with MC as an example, to put things in perspective?

Q1) What precision are we aiming for?
With 64-bit RNGs, you are right. Many problems can be solved by adding more bits.

But with 32-bit RNGs, the answer will be much more sensitive to using the wrong U() PRNG. More to the point, the developer of the simulation might think 32-bit is good enough based on the usual round-off and resolution math and not know that the choice of U() PRNG makes a difference.
 
User avatar
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Using Quantlib

December 27th, 2017, 5:14 pm

With 64-bit RNGs, you are right. Many problems can be solved by adding more bits.

But with 32-bit RNGs, the answer will be much more sensitive to using the wrong U() PRNG. More to the point, the developer of the simulation might think 32-bit is good enough based on the usual round-off and resolution math and not know that the choice of U() PRNG makes a difference.
OK, so let's try to put things in perspective: a European binary call with strike 0.3, which is just 1 - CDF(0.3), using 32 bits?
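
To make the proposal concrete, a toy sketch (closed form rather than MC): the 32-bit and 64-bit evaluations of 1 - CDF(0.3) differ only around the seventh decimal, far below the sampling error of any realistic Monte Carlo run.

[code]
#include <cmath>
#include <cstdio>

int main() {
    // 1 - CDF(0.3) for a standard normal, evaluated in single and double precision.
    float  p32 = 0.5f * std::erfc(0.3f / std::sqrt(2.0f));
    double p64 = 0.5  * std::erfc(0.3  / std::sqrt(2.0));
    std::printf("float %.9f, double %.9f, diff %.2e\n",
                p32, p64, std::fabs(static_cast<double>(p32) - p64));
    return 0;
}
[/code]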
 
User avatar
Traden4Alpha
Posts: 23951
Joined: September 20th, 2002, 8:30 pm

Re: Using Quantlib

December 27th, 2017, 5:54 pm

OK, so let's try to put things in perspective: a European binary call with strike 0.3, which is just 1 - CDF(0.3), using 32 bits?
Sorry! I've no clue because I don't price options. But as a guess, I'd say that it would have negligible impact on that particular pricing estimate.

Now if the user of the N() has a function that is very sensitive to either tail samples or samples very near zero, then it's a different story. For example, Cauchy-distributed random numbers (computed as a ratio of two Gaussians) might be sensitive to both the near-zero gaps and the clipped tails. There are other functions and systems that might depend on a high-quality N() PRNG. Note: I don't know how rescaled-integer U() affects Ziggurat or the other 9 methods of creating N().

The point is that the user of N() might make certain assumptions about those random numbers that are false because of the choice of U().
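
A sketch of that Cauchy example (my own construction; the seed, sample count and the max-|c| summary statistic are arbitrary choices):

[code]
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    std::mt19937_64 gen(11);
    std::normal_distribution<double> norm(0.0, 1.0);

    const std::size_t n = 10000000;
    double max_abs = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        double c = norm(gen) / norm(gen);   // standard Cauchy as a ratio of normals
        max_abs = std::max(max_abs, std::fabs(c));
    }
    std::printf("max |Cauchy| over %zu draws: %.3e\n", n, max_abs);
    return 0;
}
[/code]

The largest |c| observed depends directly on how close the denominator can get to zero, i.e. on the resolution of the N() generator around zero, which is exactly where the choice of U() shows up.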
 
User avatar
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Using Quantlib

December 27th, 2017, 6:08 pm

I also expect it to be negligible. If I were to give you two sets of prices, generated by the two methods, you probably wouldn't be able to detect which was which.

I *do* want to see if we can quantify things for the sake of improving these discussions.

So can you give an example of those cases? IMO a core assumption is that people are immensely stupid yet keep their jobs. E.g. they price a binary option that pays a dollar if the price is 22 std up, and they start drawing numbers till the end of the universe. The dependency on an extremely small p value is not going to give a very precise result no matter how good your distribution of floats is, because of the dominant sample noise.
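
Rough numbers for that 22-std example (back-of-the-envelope): the hit probability is [$]p = 1 - \Phi(22) \approx \varphi(22)/22[$], of order [$]10^{-107}[$]. The Monte Carlo estimator of [$]p[$] has standard error [$]\sqrt{p(1-p)/N} \approx \sqrt{p/N}[$], i.e. a relative error of about [$]1/\sqrt{pN}[$], so you would need [$]N \gg 1/p[$] paths before the estimate means anything. No refinement of the floating point representation changes that.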
 
User avatar
Cuchulainn
Topic Author
Posts: 59713
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Using Quantlib

December 27th, 2017, 8:51 pm

Took the advice and did a Google search..

It is widely accepted that if f is decreasing or symmetric the ziggurat algorithm [1] is the fastest of the algorithms that can be implied for sampling from the distribution in terms of generation time. However, the ziggurat algorithm requires generation of tables at the preliminary setup stage which is time-expensive. Due to this drawback ziggurat has never been considered a viable option for most widely applied rv's. We have developed ...

Which was my suspicion regarding those "side effects".
 
User avatar
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Using Quantlib

December 27th, 2017, 9:11 pm

What?? That's a crazy argument!

Static const table??
 
User avatar
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Using Quantlib

December 27th, 2017, 9:17 pm

I can only imagine this mattering for a parametric density for which you can only generate samples to get an idea of the density, and where the density varies unpredictably when you change the parameters. None of the densities in std or Boost fall into that category; you can simply compute a static table with e.g. quadrature using the pdf.
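
A sketch of what I mean by a static table (the table size, grid and trapezoid rule are illustrative, and this is not the actual ziggurat layer construction): a function-local static in C++11 is built exactly once, thread-safely, so the setup cost is a one-off per process rather than something paid per draw.

[code]
#include <array>
#include <cmath>
#include <cstdio>

constexpr int kTableSize = 256;

// Running integral of the standard normal pdf on [0, 8], built once by the
// trapezoid rule. C++11 guarantees a function-local static is initialised
// exactly once, thread-safely, so the setup cost is a one-off.
const std::array<double, kTableSize>& normal_table() {
    static const std::array<double, kTableSize> table =
        []() -> std::array<double, kTableSize> {
            std::array<double, kTableSize> t{};
            const double x_max = 8.0, h = x_max / (kTableSize - 1);
            const double inv_sqrt_2pi = 0.3989422804014327;
            auto pdf = [=](double x) { return inv_sqrt_2pi * std::exp(-0.5 * x * x); };
            t[0] = 0.0;
            for (int i = 1; i < kTableSize; ++i)
                t[i] = t[i - 1] + 0.5 * h * (pdf((i - 1) * h) + pdf(i * h));
            return t;
        }();
    return table;
}

int main() {
    // First call builds the table; later calls just read it.
    std::printf("integral of the pdf over [0,8] ~ %.6f (should be close to 0.5)\n",
                normal_table().back());
    return 0;
}
[/code]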

Do you have the link to the source?