- Cuchulainn
**Posts:** 62869 **Location:** Amsterdam

Nasty! `pow(x, 2) != pow(x, 2.0)` (4.65e-10 difference). And..

Last edited by Cuchulainn on March 13th, 2013, 11:00 pm, edited 1 time in total.

Step over the gap, not into it. Watch the space between platform and train.

http://www.datasimfinancial.com

http://www.datasim.nl


- Traden4Alpha
**Posts:** 23951

> **Cuchulainn wrote:** ...!! max == 0.0002441

These errors seem unavoidable due to round-off in different algorithms and the large numbers being bandied about, no? exp(sqrt(200))^2 = 1.92e12, and an error of 0.0002441 in 1.92e12 is about 1 part in 2^53, which is about the LSB on a double. Double trouble, what?

Perhaps the more interesting question is which variant is closer to the truth?

- Cuchulainn
**Posts:** 62869 **Location:** Amsterdam

> **Traden4Alpha wrote:** Perhaps the more interesting question is which variant is closer to the truth?

I don't want to tempt fate, but in general t*t is the same as pow(t, 2), while pow(t, 2.0) is out by a minuscule margin.

Step over the gap, not into it. Watch the space between platform and train.

http://www.datasimfinancial.com

http://www.datasim.nl


- Traden4Alpha
**Posts:** 23951

> **Cuchulainn wrote:** I don't want to tempt fate, but in general t*t is the same as pow(t, 2), while pow(t, 2.0) is out by a minuscule margin.

Perhaps an old but interesting man page might still be valid today:

> ERROR (due to roundoff etc.): exp(x), log(x), expm1(x) and log1p(x) are accurate to within an ulp, and log10(x) to within about 2 ulps; an ulp is one Unit in the Last Place. The error in pow(x,y) is below about 2 ulps when its magnitude is moderate, but increases as pow(x,y) approaches the over/underflow thresholds until almost as many bits could be lost as are occupied by the floating-point format's exponent field; 11 bits for double. No such drastic loss has been exposed by testing; the worst errors observed have been below 300 ulps for double. Moderate values of pow are accurate enough that pow(integer, integer) is exact until it is bigger than 2**53 for double.

Who wrote pow() and what compromises did they use to make it "good enough"?

- Cuchulainn
**Posts:** 62869 **Location:** Amsterdam

> **outrun wrote:** I tried to find it in the standard, which is supposed to say "pow should be implemented as exp(y*log(x))".

This would be a super-slow algorithm(?) An exp and a log...

Step over the gap, not into it. Watch the space between platform and train.

http://www.datasimfinancial.com

http://www.datasim.nl


- Traden4Alpha
**Posts:** 23951

I was playing around with a graphing calc app on the iPad (plotting x^2.0 - x*x) and seeing similar sorts of errors. The properties of the error seemed to be:

1. There's a strong bias in the error: x^2.0 ≤ x*x for both positive and negative values of x.
2. The error is always 0 or -1 LSB, so the error doubles as the binary exponent on x^2 increments (e.g., the error in 16 < x^2 < 32 is half that of the error in 32 < x^2 < 64).
3. The density of errors appears uniform in x (no "obvious" pattern) and seems to occur about once in every 20-50 values of x.

Overall, the frequency of the errors surprised me and the bias in the error frightened me. It implies that if one sums a very large number of values of x^2.0, one can get a significantly lower number than if one sums a very large number of values of x*x (with the same {x}).

Last edited by Traden4Alpha on March 15th, 2013, 11:00 pm, edited 1 time in total.

- Cuchulainn
**Posts:** 62869 **Location:** Amsterdam

> **Traden4Alpha wrote:** It implies that if one sums a very large number of values of x^2.0, one can get a significantly lower number than if one sums a very large number of values of x*x (with the same {x}).

Yep. The fun begins when these (embedded) maths things start throwing exceptions into the blue skies.

Last edited by Cuchulainn on March 15th, 2013, 11:00 pm, edited 1 time in total.

http://www.datasimfinancial.com

http://www.datasim.nl

- Cuchulainn
**Posts:** 62869 **Location:** Amsterdam

What about Boost Math Toolkit pow?

http://www.datasimfinancial.com

http://www.datasim.nl

- Traden4Alpha
**Posts:** 23951

P.S. Delving a little deeper shows that these errors are not random in x at all: they have quasiperiodic and log-periodic structure, and grow denser as x^2 -> an integer power of 2.

- Traden4Alpha
**Posts:** 23951

> **outrun wrote:** Interesting, that makes sense: the floating point representation <integer> * 2^<exponent> makes the algorithm for log precision-invariant for powers of two. Nice

Yes. The strange part is that the density of errors is much, much higher for values of x in the neighborhood of (2^N) - ∆ than in the neighborhood of (2^N) + ∆. The density of errors seems to drop by a factor of maybe 6 to 20 across the 2^N boundary.

- Cuchulainn
**Posts:** 62869 **Location:** Amsterdam

> **Traden4Alpha wrote:** The strange part is that the density of errors is much, much higher for values of x in the neighborhood of (2^N) - ∆ than in the neighborhood of (2^N) + ∆. The density of errors seems to drop by a factor of maybe 6 to 20 across the 2^N boundary.

What I see (in an extended case) is that pow(x, 2.0) is wrong for, let's say, x = 3.51, while for x = 3.50 and x = 3.52 the value is OK. Better to use pow(x, 2) no matter what. So the problem points seem to be well-defined and repeatable in that sense.

Last edited by Cuchulainn on March 17th, 2013, 11:00 pm, edited 1 time in total.

http://www.datasimfinancial.com

http://www.datasim.nl
