October 19th, 2014, 3:22 pm
Quote Originally posted by: Cuchulainn

    Quote Originally posted by: Traden4Alpha

        Quote Originally posted by: Cuchulainn

            Quote Originally posted by: MiloRambaldi

                Quote Originally posted by: outrun

                    Quote Originally posted by: MiloRambaldi
                    I was looking at QuantLib's implementation and it is closely tied to the MT19937 (Mersenne Twister) generator.

                That's a design flaw: they should have used orthogonal concepts, reused standard concepts, and abstracted away specifics as much as possible. Now they can only eat their own spaghetti food. It's an old project, and it's always easy to judge in retrospect, but e.g. Effective C++ by Scott Meyers has been around for ages.

                Quote
                It only uses 24 bits of the MT output to avoid some correlation issue.

                Do you know if these issues have been resolved in Boost? I cannot find the post you once made here about the ziggurat quality issues.

                The issue seems to be with MT, not with ziggurat. MT is a standardised algorithm (with some constants) which should be *identical* across implementations, so Boost can't 'fix' MT, nor ziggurat; is the combination apparently invalid? It would be interesting if you have some more info about that issue.

                To be fair, QuantLib's polar Box-Muller appears to be well-designed, with appropriate orthogonality and abstraction. Perhaps there is some reason that their Ziggurat hard-codes MT. I plan to look into this further very soon.

        Several possible reasons for putting BM into history:
        1. It is slow: it uses log, sqrt, sin, and cos.
        2. Peter Jaeckel in his book discusses nasty side-effects with this method; it goes haywire with pseudo-random algorithms.
        3. It is claimed that BM is much slower than Ziggurat, especially for large numbers of random numbers.
        Are these good reasons for not using BM?

    Quote
    That's a design flaw, they should have used orthogonal concepts and reuse standard concepts, abstract away specifics as much as possible.

    In fairness, the QL code is shipping now.

    Maybe not. Ziggurat has stochastic latency and consumes a stochastic number of RNs, which may be unacceptable in some cases. The claims about relative speed make assumptions that are not true in practice: for example, BM on a GPU is 20X faster than Ziggurat on a CPU. I also seem to recall some differences between BM and Z in terms of how well they fill the space of floating-point numbers, but I don't remember which one had which flaws. There is no one perfect RNG, and anyone doing important work with them must be aware of the flaws of each and every method.

    3. I agree, no silver bullet. But points 1 and 2 are kind of fundamental and need to be addressed as well. I don't see why we should compare GPUs with CPUs. The main issue was accuracy and not efficiency; well, not completely.

Points 1 and 3 are about efficiency, right? And we seem to be most concerned with RNG performance at extremely large sample sizes, which is also when we would be most likely to think about a GPU solution. Thus, the poor performance of Z on GPUs becomes a major downside of Z for high N.

Point 2 is certainly a major concern. Yet Ziggurat also uses sequences of RNs in a structured way that could make it prone to analogous flaws. The fact that BM has flaws is not evidence that Z lacks flaws.