
### Optimized math functions

Posted: **October 14th, 2010, 7:58 am**

by **Cuchulainn**

> 1) Preprocessing can even be done with pen and paper. It is not preprocessing done during e.g. a library init, but preprocessing before you begin with coding. It is about estimating constants.

Indeed, as with R22, R33, etc. But it ain't always necessarily so in software.

> 2) We get a polynomial (or Padé or something else) approximation of exp in some interval. Inside that interval it will have limited error and predictable behaviour. Outside the interval, we will use the "rescale K" trick to translate it back inside the interval. So robustness is not an issue (if you only work inside a fixed, calibrated/optimized interval), right?

Limited error and predictable behaviour, yes; except for the pathological cases.

### Optimized math functions

Posted: **October 14th, 2010, 6:38 pm**

by **AVt**

Though I always admire bit manipulation and its voodoo, I think you are on the wrong track. Here is my last attempt to correct it.

The recipe is just a few lines and at least 40 years old; one can find it in ancient sources, like in Luke (or netlib/Cephes, directly as code). Intel describes a variant (translating a smaller initial range using tables): exp goes from the additive to the multiplicative group, and that means an additive reduction, while mantissa*2^exponent leads to powers and (large) relative errors.

If you want relative exactness + usual speed you have no chance, and you seriously underestimate the cleverness of those guys. Except if you are willing to pay the price of large relative errors to gain speed. If you only want small absolute errors it is even better ... but you have not answered or specified towards that.

### Optimized math functions

Posted: **October 15th, 2010, 4:20 pm**

by **Cuchulainn**

> no problem with math analysis op 69, that was a good era.

Not only a good era, it was the Golden Age. But the pinnacle was the 19th century.

### Optimized math functions

Posted: **October 15th, 2010, 7:21 pm**

by **AVt**

Find below the principle for getting off the wrong track (it does not matter: floats or doubles). It shows the reduction and its value; there are no 'logs etc' except constants. There are relative errors due to straight multiplications with non-representables like log(2) or sqrt(2); for a way off, 'study' netlib/cephes.

Then you are ready to go for speed: floor is expensive (there are suggestions around on the www); maybe ldexp(a, k) = a*2^k is worth a look as well. Beyond that it shows how to reduce calculating exp(x) to the case "x close to 0" and the most simple case for using a table (with 2 entries: 1 and sqrt(2)).

Edited: on that reduced range, 5 additions + 4-5 multiplications are enough, I think.

### Optimized math functions

Posted: **October 16th, 2010, 7:52 am**

by **Cuchulainn**

> Originally posted by: **outrun**
>
> Thanks. I'm trying to get rid of inefficiencies, and one strange thing is the +1/2 term in xi = x / (log(2.0)/2) + 1/2; the integer division 1/2 results in zero (integer), and thus that term can be removed! Also, if I change it to 0.5f I get wrong results, so it's meant to be + zero.

Maybe it is compiler-dependent, which would be really scary. Does it depend on the expression parser? One double on the RHS ==> everything double (including 1/2)?

A Fortran programmer would ALWAYS write 1./2. A classic example is:

```
REAL*4 myVar
IF (myVAR .EQ. 0) THEN
```

which is always FALSE. BTW, what do you think of all code in UPPER CASE?

### Optimized math functions

Posted: **October 16th, 2010, 4:45 pm**

by **AVt**

I think the C compiler avoids loss of precision and treats float + 1/2 as float + 0.5f, and perhaps that would be clearer. The code is taken from what I said, mentioned or linked before, a kind of 'essential'. Edited: what wrong result do you get using 0.5f instead?