Hi everyone,

Thanks again for the lively debate; it is very interesting for me to see that this topic interests you.

Quote
This reminds me a bit of IBM/Rational '-ify' products like purify, quantify etc. A non-finite number and a memory leak are not a million miles apart?

I am not sure that what we do could apply here. From my perspective, memory leaks are more of a procedural problem, meaning that the programmer is not cautious enough or the garbage collector is not "smart" enough. Memory is roughly an integer world, where almost no computation is needed to use it.

Quote
I find this text below a bit weird: an underflow is not an error.

I won't quote everyone's response here. I think Traden4Alpha has a very good point about that. Signaling an underflow could be done, of course; some systems already do it (in Fortran, for example). For an underflow that could be interesting; for numerical drift, however, a flag won't be able to do much, as drift is much more difficult to detect.

Quote
It's not an issue of confidence interval or drift but a more fundamental limitation of adding small numbers to large ones with a fixed-length floating point representation. Actually, in single precision floats, the problem starts showing up at much lower sample counts than 2^32, especially for calculations of higher moments.

If you want a simple test of this issue, create a loop that adds 1.0 to a 32-bit float and let it loop 2^26 times. What's the result? It should be 2^26 but it won't be. You'll reach a point where x+1.0 == x.

Sure, absorption occurs at some point, but only if your values all have the same sign, so that your SumX keeps moving toward larger and larger magnitudes.
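The quoted x+1.0 == x experiment is easy to reproduce. Here is a minimal sketch in Python, with the assumption that we emulate float32 by round-tripping through `struct` (Python's native float is double precision); to keep it fast, it starts just below the stall point instead of looping 2^26 times:

```python
import struct

def f32(x):
    """Round a Python float (64-bit) to the nearest IEEE 754 single."""
    return struct.unpack('f', struct.pack('f', x))[0]

# 2**24 is the largest float32 magnitude at which consecutive integers
# are still exactly representable; start just below it.
s = f32(2.0**24 - 10)
for _ in range(100):
    s = f32(s + 1.0)

# After ten additions the sum reaches 2**24; from then on, s + 1.0 lies
# exactly halfway between representable values and round-to-even sends
# it back to s, so the remaining 90 additions change nothing.
print(int(s))  # 16777216, i.e. 2**24 -- not 2**24 + 90
```

So a full loop of 2^26 additions would stall at 2^24 = 16777216, far short of the expected 2^26.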
Is this always the case in a Monte Carlo simulation? When the values have different signs you can get catastrophic cancellation (which is also bad), but it is not the same kind of numerical drift.

Quote
Take a look at the Box Muller method for computing random Normal values and think about what happens when a 32-bit uniform integer is converted into a 32-bit float and then the values are transformed via Box Muller into a Normal distribution. IIRC, this method never produces random values outside of about -6 to +6 sigma (i.e., it truncates the tails of the distribution) no matter how many samples you run, which means it will underestimate all the even moments of the distribution and cause some anomalies in the mean and skew if the MC involves nonlinear payoffs.

Alright, I will try to implement this to see how it behaves, and I will let you know the results.

Quote
What's also interesting about the machine precision issue for MC is that as the number of samples grows, the chance that a sample's value will be ignored because it is less than the machine epsilon for the accumulating variable will grow. At first, only small values will fail to change the sum but as the sum grows, (SumX + X) == SumX will happen more and more often until virtually all values of X fail to change the sum.

Do you think we can build a case like this easily to test it?

On a more open note: have you ever encountered this kind of issue in your jobs?
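To my own question of whether such a case is easy to build: I think yes. A sketch, again emulating float32 with `struct` round-trips, and assuming (for illustration) that the running sum has already reached 2^24 so the effect is total rather than gradual:

```python
import random
import struct

def f32(x):
    """Round a Python float (64-bit) to the nearest IEEE 754 single."""
    return struct.unpack('f', struct.pack('f', x))[0]

random.seed(42)

# At a magnitude of 2**24 the gap between adjacent float32 values is
# 2.0, so any sample x in (0, 1] rounds the sum straight back to where
# it was: (SumX + X) == SumX for every single draw.
total = f32(2.0**24)
absorbed = 0
n = 10_000
for _ in range(n):
    x = f32(random.random())
    if f32(total + x) == total:
        absorbed += 1
    total = f32(total + x)

print(absorbed, n)  # 10000 10000 -- every sample was ignored
print(total)        # 16777216.0 -- the sum never moved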