November 24th, 2004, 9:01 am
Do you mean like drand48() on Linux, a bit better than rand()? No, those are algorithmic; this is real noise turned into reals.

I'll be running the DieHard suite on the latest version; the final test will be on the reals that come out. Currently I generate a sequence of bytes and map them to reals by casting 4 bytes to a 32-bit integer. This is then cast to a double and divided by 2^32, giving numbers in the [0..1] range. The result is then cast back into a single float, to be the same as algorithmic RNGs. (A sketch of this conversion is at the end of this post.)

This is not quite perfect. Floating-point representation has values that never show up, and there is a tiny set of numbers that will occur twice as often as they should. I think I've worked that out as 64 in 4 billion, with a corresponding 64 that can never occur. This applies to double precision as well, though of course the ratios are better. Something vaguely bugs me about this conversion, so if anyone has a better idea I'm interested.

I'm also looking at a generator for the normal distribution, but by "brute force" of summing N numbers. Many algorithms for this generate non-random artifacts; as I recall, Box-Muller has a pronounced step pattern. I'm working out the optimal value for N, and thoughts on this are welcome as well. (A sketch of the summing approach is also below.)

> Do you mean you need computer time to produce random numbers? If you give source code for your prg

I've promised someone 40 GB of random numbers, but I can now do that in a sensible time. The top grade of performance comes from the "head start" cache it can put on your disk. This can work with my "Eater" tool, which takes a mildly random file and ingests the randomness of other files. JPGs, MP3s et al. are "natural processes" but aren't actually all that random. The Eater merges them in and overnight produces a GB of randomness.

You're quite welcome to the source, once it is cleaned up. Actually, you're quite welcome to the source now, if you have a strong stomach and promise not to show it to anyone, since it is rather dodgy.
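
For concreteness, here is a minimal sketch in C of the byte-to-real conversion described above. The function name and the memcpy read of the bytes are my own choices, not taken from the actual code.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Map 4 raw noise bytes to a single float in [0..1]:
       4 bytes -> 32-bit unsigned integer -> double / 2^32 -> float. */
    float bytes_to_real(const uint8_t bytes[4])
    {
        uint32_t n;
        memcpy(&n, bytes, sizeof n);          /* reinterpret the bytes as a 32-bit integer */
        double d = (double)n / 4294967296.0;  /* exact in double: n fits in 53 bits */
        return (float)d;                      /* round to single precision, like algorithmic RNGs */
    }

    int main(void)
    {
        uint8_t noise[4] = { 0x12, 0x34, 0x56, 0x78 };  /* stand-in for hardware noise */
        printf("%f\n", bytes_to_real(noise));
        return 0;
    }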
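On the duplicate/missing values: I won't vouch for the exact 64-in-4-billion figure, but the effect is easy to probe empirically. The sketch below (my own check, not a derivation from the post) scans the top of the 32-bit range and reports how many integers collapse onto each float. Near 1.0 a float ULP is 2^-24 while the inputs are spaced 2^-32 apart, so 256 consecutive integers normally share one output; any run of a different length marks an over- or under-represented value.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t start = (1ULL << 32) - 4096;   /* scan the last 4096 integers */
        float prev = (float)((double)start / 4294967296.0);
        unsigned count = 1;                     /* note: the first run is clipped by the scan start */
        for (uint64_t n = start + 1; n < (1ULL << 32); n++) {
            float f = (float)((double)n / 4294967296.0);
            if (f == prev) {
                count++;
            } else {
                if (count != 256)               /* 256 is the expected run length in this range */
                    printf("%.9g hit by %u integers\n", prev, count);
                prev = f;
                count = 1;
            }
        }
        printf("%.9g hit by %u integers (final run)\n", prev, count);
        return 0;
    }

The final run is the interesting one: the last 128 integers round up to 1.0f, a value the exact division can never produce, so it soaks up probability that belonged to its neighbours.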
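On the summing generator: this is the central-limit-theorem approach, and a minimal sketch looks like the following. rand() stands in for the hardware noise source purely as a placeholder.

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    /* Approximate a standard normal by summing N uniforms (central limit
       theorem). Each uniform has mean 1/2 and variance 1/12, so the sum
       has mean N/2 and variance N/12; shift and scale accordingly. */
    double normal_by_summing(int N)
    {
        double sum = 0.0;
        for (int i = 0; i < N; i++)
            sum += rand() / ((double)RAND_MAX + 1.0);  /* placeholder uniform in [0,1) */
        return (sum - N / 2.0) / sqrt(N / 12.0);
    }

    int main(void)
    {
        for (int i = 0; i < 5; i++)
            printf("%f\n", normal_by_summing(12));  /* N = 12 makes the scale exactly 1 */
        return 0;
    }

One constraint on the "optimal N" question: whatever N you pick, the output is hard-clipped at ±sqrt(3N), since the sum can never leave [0, N]. So N is a trade-off between speed and how far out the tails stay honest; the classic N = 12 gives ±6 and the convenient scale factor of 1.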
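Since the post doesn't spell out how the Eater merges files, here is one hypothetical way such a merge could work: XOR each byte of a donor file into the pool, wrapping around the pool. Everything here (the function name, the XOR mixing, the wrap-around) is my own sketch, not the Eater's actual method; the appeal of XOR is that folding in independent data can't make the pool less random.

    #include <stdio.h>

    /* Hypothetical "Eater"-style merge: XOR the bytes of a donor file
       (JPG, MP3, ...) into an existing mildly random pool file, cycling
       through the pool if the donor is longer. */
    int eat_file(const char *pool_path, const char *donor_path)
    {
        FILE *pool = fopen(pool_path, "r+b");
        FILE *donor = fopen(donor_path, "rb");
        if (!pool || !donor) {
            if (pool) fclose(pool);
            if (donor) fclose(donor);
            return -1;
        }

        long pos = 0;
        int c;
        while ((c = fgetc(donor)) != EOF) {
            int p = fgetc(pool);
            if (p == EOF) {                 /* wrap to the start of the pool */
                rewind(pool);
                pos = 0;
                p = fgetc(pool);
                if (p == EOF) break;        /* empty pool: nothing to merge into */
            }
            fseek(pool, pos, SEEK_SET);     /* reposition before switching read -> write */
            fputc(p ^ c, pool);
            pos++;
            fseek(pool, pos, SEEK_SET);     /* reposition before switching write -> read */
        }
        fclose(donor);
        fclose(pool);
        return 0;
    }

Run something like this over a directory of JPGs and MP3s and the pool accumulates whatever unpredictability each donor carries, even though no single donor is anywhere near uniformly random on its own.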