
 
Cuchulainn
Topic Author
Posts: 59713
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Re: Using Quantlib

December 21st, 2017, 4:37 pm

Yes, I understand. So you can time the various MTs, pick the fastest, then time the various normal_distributions, pick the fastest; there is no cross term when you combine them.
Yes, that's the emerging Aha-Erlebnis. We are not forced to use _solely_ Boost or C++11, as long as we stick to C++11-conforming concepts. QL has an MT, right? But can you use it with xxx::normal_distribution?
It seems QL only uses a 32-bit MT, and it is embedded in the Ziggurat code. The interface is not standard C++ in any way.
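A minimal sketch of the mixing, assuming only that Boost.Random is on the include path (seed and parameters are arbitrary, and the QL engine is not shown since its interface is non-standard): the same C++11 engine can feed std::normal_distribution and boost::random::normal_distribution interchangeably, because both only ask the engine for uniform bits.

[code]
// Sketch: one C++11 Mersenne Twister feeding two different normal distributions.
// Assumes Boost.Random is available; nothing here is QuantLib-specific.
#include <iostream>
#include <random>
#include <boost/random/normal_distribution.hpp>

int main()
{
    std::mt19937_64 eng(12345);   // C++11 engine

    std::normal_distribution<double> stdNorm(0.0, 1.0);
    boost::random::normal_distribution<double> boostNorm(0.0, 1.0);

    // Both distributions draw uniform bits from the same engine.
    std::cout << stdNorm(eng) << ' ' << boostNorm(eng) << '\n';
}
[/code]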
 
Cuchulainn
Topic Author
Posts: 59713
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Re: Using Quantlib

December 21st, 2017, 4:37 pm

What we expected, I suppose (various implementations of N(0,1)), yes.

G. C++ MT with C++ Normal
H. C++ MT with Boost Normal
I. C++ MT with QL Ziggurat
Run   Price QL Ziggurat   Price Boost Normal   Price C++ Normal
 1    5.84094             5.85704              5.8386
 2    5.85097             5.85268              5.83889
 3    5.84542             5.8449               5.82566
 4    5.84392             5.83863              5.84976
 5    5.85469             5.84345              5.8412
 6    5.85157             5.84697              5.85354
 7    5.84572             5.84186              5.84783
 8    5.83995             5.84866              5.84148
 9    5.85369             5.85297              5.85381
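For context, a sketch of the kind of harness that could produce a comparison like this; the function name, contract parameters and lambda wiring below are illustrative, not the actual test code. The pricer only needs "one N(0,1) draw per path", so combinations G, H and I differ only in the functor passed in.

[code]
// Sketch: MC European call pricer that is agnostic about where N(0,1) comes from.
#include <algorithm>
#include <cmath>
#include <random>

template <typename GaussianSource>
double mcCallPrice(GaussianSource& gauss, long nPaths,
                   double S0, double K, double r, double sigma, double T)
{
    double sum = 0.0;
    for (long i = 0; i < nPaths; ++i)
    {
        double z  = gauss();                                  // one N(0,1) variate
        double ST = S0 * std::exp((r - 0.5 * sigma * sigma) * T
                                  + sigma * std::sqrt(T) * z);
        sum += std::max(ST - K, 0.0);
    }
    return std::exp(-r * T) * sum / nPaths;
}

// Combination G (C++ MT with C++ Normal), for example:
// std::mt19937_64 eng(42);
// std::normal_distribution<double> norm(0.0, 1.0);
// auto g = [&]() { return norm(eng); };
// double price = mcCallPrice(g, 1000000, 100.0, 100.0, 0.05, 0.2, 1.0);
[/code]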
 
 
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Using Quantlib

December 21st, 2017, 4:40 pm

They should refactor it and decompose the two elements; that would have been the first question to come to mind when designing it: "And what if we want a different distribution? Copy-paste the MT blob?"
 
Traden4Alpha
Posts: 23951
Joined: September 20th, 2002, 8:30 pm

Re: Using Quantlib

December 21st, 2017, 4:40 pm

Why would you need to know the relative time between the two concepts?

You can time engines and distributions individually; there is no dependency in computational effort between the two.
I select the best combination, e.g. Boost MT with C++ Normal, whatever.
?
Yes, I understand. So you can time the various MTs, pick the fastest, then time the various normal_distributions, pick the fastest; there is no cross term when you combine them.
Are you sure there are no cross terms? I can understand that being true for selections from within a given library. But between libraries, might there be glue-code instructions needed to move parameters, inputs, commands, and outputs between the two library systems? I'd think that cross terms would be inevitable if the code used library-specific data types rather than the types built into the language.
 
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Using Quantlib

December 21st, 2017, 4:46 pm

I'm sure: there is no copying of library-specific data types or parameters. Engines and distributions are very thinly linked: an engine has a member function that distributions use to get an integer out. They stick to agreed data types and agreed interfaces.

In C++ a lot of concepts are orthogonal. E.g. to copy a library-specific element you would want to have that library do it for you, via an agreed interface that tells you how.
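A minimal sketch of that agreed interface (what C++11 calls a uniform random number generator: a result_type, min(), max() and operator() returning uniform bits). The toy xorshift engine below is purely illustrative, but std::normal_distribution needs nothing more from it; that is the whole "thin link".

[code]
// Toy engine exposing only the agreed interface; the distribution never sees its internals.
#include <cstdint>
#include <iostream>
#include <random>

struct XorShift64            // illustrative engine, not a recommendation
{
    using result_type = std::uint64_t;
    std::uint64_t state;

    explicit XorShift64(std::uint64_t seed) : state(seed ? seed : 1) {}

    static constexpr result_type min() { return 0; }
    static constexpr result_type max() { return UINT64_MAX; }

    result_type operator()()          // the only call a distribution makes
    {
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        return state;
    }
};

int main()
{
    XorShift64 eng(42);
    std::normal_distribution<double> norm(0.0, 1.0);
    std::cout << norm(eng) << '\n';
}
[/code]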
 
Cuchulainn
Topic Author
Posts: 59713
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Re: Using Quantlib

December 21st, 2017, 6:36 pm

Some issues that may be of importance:

1. The contract between MT and N() (preconditions/postconditions); bad input.
2. Exceptions, algo timeouts.
3. Guaranteed response times and accuracy of Ziggurat etc. (the C++ version is slower), e.g. a new, improved version of the software is slower. Service level agreements (SLAs).
4. Does the algo converge in a (fixed) finite number of steps?
5. Side effects; is the code thread-safe? (See the sketch below.)
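On point 5, a sketch only (not how QL does it): engines and distributions carry mutable state, so the simplest thread-safe pattern is one engine per thread rather than one shared instance. The seeding scheme here is illustrative.

[code]
// Sketch for point 5: per-thread engine/distribution state avoids data races without locks.
#include <atomic>
#include <random>
#include <thread>
#include <vector>

double drawNormal()
{
    static std::atomic<unsigned> nextSeed{12345};
    thread_local std::mt19937_64 eng(nextSeed.fetch_add(1));   // one engine per thread
    thread_local std::normal_distribution<double> norm(0.0, 1.0);
    return norm(eng);
}

int main()
{
    std::vector<std::thread> pool;
    for (int t = 0; t < 4; ++t)
        pool.emplace_back([] { for (int i = 0; i < 100000; ++i) drawNormal(); });
    for (auto& th : pool)
        th.join();
}
[/code]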
 
Cuchulainn
Topic Author
Posts: 59713
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Re: Using Quantlib

December 24th, 2017, 11:28 am

Ziggurat: approximate Lebesgue or Riemann integration? And how many subdivisions are needed? Why does it use acceptance-rejection (so convergence is not deterministic)? A bit rough-and-ready?

 
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Using Quantlib

December 24th, 2017, 11:55 am

Subdivisions depend on both the required accuracy and CPU cache sizes (speed).
 
Traden4Alpha
Posts: 23951
Joined: September 20th, 2002, 8:30 pm

Re: Using Quantlib

December 24th, 2017, 1:35 pm

Ziggurat: approximate Lebesgue or Riemann integration? And how many subdivisions are needed? Why does it use acceptance-rejection (so convergence is not deterministic)? A bit rough-and-ready?

Ziggurat has higher throughput than the alternatives but has nondeterministic latency. With a true RNG there's no guarantee that it will ever produce an output, although the chance of waiting N cycles drops exponentially with N. But with a PRNG that produces every value in the uniform sample space and has a deterministic cycle, Ziggurat is guaranteed to finish.

If you want a lot of N() for the least CPU time, Ziggurat is good. But I'd not use it for a hard real-time system. Also, I get the sense that it's not compatible with GPUs or other SIMD architectures.
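Not the ziggurat itself, but the same structural point in miniature: any rejection-based generator has a random iteration count, so average throughput can be excellent while the worst-case latency is unbounded (with a true RNG). A sketch using Marsaglia's polar method:

[code]
// Marsaglia polar method: keep drawing until the point lands inside the unit disc
// (acceptance ~ pi/4 ~ 78.5%), so the number of loop iterations per pair is geometric.
#include <cmath>
#include <random>
#include <utility>

std::pair<double, double> polarGaussianPair(std::mt19937_64& eng)
{
    std::uniform_real_distribution<double> unif(-1.0, 1.0);
    double u, v, s;
    do {
        u = unif(eng);
        v = unif(eng);
        s = u * u + v * v;
    } while (s >= 1.0 || s == 0.0);             // rejection step: loop count is random

    const double m = std::sqrt(-2.0 * std::log(s) / s);
    return { u * m, v * m };                     // two independent N(0,1) variates
}
[/code]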
 
Cuchulainn
Topic Author
Posts: 59713
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Re: Using Quantlib

December 26th, 2017, 12:08 pm

Yes, I don't think it would be the most portable RNG in the world (it violates points 1 and 5 above).

It is a strange mix of numerical analysis (Lebesgue) and statistics (rejection). Two 'correlated' convergence measures are being used. You can decrease the number of samples needed by taking more vertical strips, but then you have to fit them in memory.

And the 3-4 static arrays will probably create havoc with the L1 cache.

I suppose the algo implicitly assumes a 32-bit single processor (the method is very old). It works in practice, but how does it work in theory and on all the other platforms?

//
A search threw up this:
The fastest method in software is the ziggurat method (Marsaglia 2000). This procedure uses a complicated method to split the Gaussian probability density function into axis-aligned rectangles, and it is designed to minimize the average cost of generating a sample. However, this means that for 2 percent of generated numbers, a more complicated route using further uniform samples and function calls must be made. In software this is acceptable, but in a GPU, the performance of the slow route will apply to all threads in a warp, even if only one thread uses the route. If the warp size is 32 and the probability of taking the slow route is 2 percent, then the probability of any warp taking the slow route is 1 - (1 - 0.02)^32, which is 47 percent! So, because of thread batching, the assumptions designed into the ziggurat are violated, and the performance advantage is destroyed.
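Spelling out the warp arithmetic from that quote (numbers as given there):

[code]
// If one thread takes the slow route with probability 0.02, a warp of 32 is held up
// whenever any of its threads does: 1 - (1 - 0.02)^32 ~ 0.476, i.e. about 47%.
#include <cmath>
#include <iostream>

int main()
{
    const double pSlow = 0.02;
    const int    warp  = 32;
    std::cout << 1.0 - std::pow(1.0 - pSlow, warp) << '\n';
}
[/code]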
 
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Using Quantlib

December 26th, 2017, 2:38 pm

If you're interested in questions like speed, accuracy and relevance, then there is a lot of research you can easily find by googling. E.g. search for "11 Gaussian number generators".
 
Traden4Alpha
Posts: 23951
Joined: September 20th, 2002, 8:30 pm

Re: Using Quantlib

December 26th, 2017, 3:16 pm

Yes, Ziggurat is a strange beast and certainly does have CPU-specific performance levels tied to L1 cache size. But I'd think that if the L1 cache is known, then the strip count can be optimized for best performance against the bathtub curve: poor performance from too few strips (high rejection rates) versus poor performance from too many strips (high cache-miss rates). But then runs of Ziggurat that start with the same seed but run on different CPUs may produce different N() RN streams.
 
Traden4Alpha
Posts: 23951
Joined: September 20th, 2002, 8:30 pm

Re: Using Quantlib

December 26th, 2017, 3:30 pm

Here's an interesting "accuracy" issue: what would you say about an algorithm that fails to produce a large percentage of the values that N() can take on? What if the algorithm entirely fails to ever produce tail values beyond a certain threshold?
 
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Using Quantlib

December 26th, 2017, 5:21 pm

You can quantify that by tying it to the probability of that (not) happening being statistically significant.

E.g. suppose the tail level is such that the probability of a draw from that tail is 1E-9; then you would need to do approx 1E11 draws before you can detect a statistically significant deviation.

For the normal distribution the probability drops fast in the tails, so there is a finite limit you *can* set that's untestable in practice: you should only see a sample < -21 with p = 1E-100.
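A quick sketch of that point: Phi(-z) computed via std::erfc, and the roughly 1/p draws you would need before even one sample from that tail is expected (the thresholds below are illustrative):

[code]
// P(Z < -z) for Z ~ N(0,1) and the expected number of draws before seeing one such sample.
#include <cmath>
#include <cstdio>

int main()
{
    const double zs[] = { 6.0, 12.0, 21.0 };
    for (double z : zs)
    {
        const double p = 0.5 * std::erfc(z / std::sqrt(2.0));   // lower tail probability
        std::printf("z = %4.1f   P(Z < -z) = %.3e   ~draws to expect one: %.1e\n",
                    z, p, 1.0 / p);
    }
}
[/code]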
 
Traden4Alpha
Posts: 23951
Joined: September 20th, 2002, 8:30 pm

Re: Using Quantlib

December 26th, 2017, 6:05 pm

Yes, the missing tails are quite far out there. Box-Muller with 64-bit uniform integer input will create tails out to 9.4 sigma but nothing beyond that (the largest magnitude of sqrt(-2 ln u), with the smallest non-zero uniform u = 2^-64, is sqrt(128 ln 2) ≈ 9.42). The chance of any effect of the missing tails is very low as long as sample sizes are below 2^64. (The 32-bit version is far more dangerous.)

What's weird is that the empirical PDF coming out of Box-Muller will look like a comb in the floating-point number system and the empirical CDF will be a stair-step pattern. Obviously the spacing of the teeth or steps is extremely fine. But it does lead one to wonder if there are users of "random" N() samples that expect perfect continuity of the CDF and no gaps in the PDF. Maybe not, but maybe so?

---

Many of Cuch's comments on this topic seem to concern the difference between practical correctness (good enough to get the job done) and the higher standard of mathematical or logical correctness (provably correct). These kinds of numerical round-off and resolution issues are more the second kind of correctness than the first.