
 
User avatar
Cuchulainn
Posts: 20253
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Optimized math functions

October 12th, 2010, 9:02 am

Quote
Originally posted by: outrun
just for the record (now that you have quoted it) => I made a mistake stating that the error is 3% with exp(z) = (32793 + 31836 z + 21146 z^2)/32768. The formula is correct, and the error is not 3%, but +/-0.00086 on the interval z = [0 .. 1/2] for exp(z).

How were the numbers 327.. etc. arrived at? (step 1 in Remez?)
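A quick way to sanity-check the quoted error bound is to scan the interval and record the worst absolute deviation from std::exp; this is only a sketch (the scan density is my own choice, not from the thread):

Code:
#include <cmath>
#include <cstdio>

// Quoted quadratic approximation of exp(z) on [0, 1/2]:
// exp(z) ~ (32793 + 31836 z + 21146 z^2) / 32768
double exp_approx(double z)
{
    return (32793.0 + z * (31836.0 + z * 21146.0)) / 32768.0;
}

int main()
{
    double maxErr = 0.0, argMax = 0.0;
    for (int i = 0; i <= 500000; ++i)          // dense scan of [0, 1/2]
    {
        double z   = 0.5 * i / 500000.0;
        double err = std::fabs(exp_approx(z) - std::exp(z));
        if (err > maxErr) { maxErr = err; argMax = z; }
    }
    std::printf("max |error| = %.2e at z = %f\n", maxErr, argMax);
    return 0;
}

If the coefficients did come out of a minimax (Remez) fit, the error curve on [0, 1/2] should equioscillate at four points (degree 2 plus 2), which is exactly what the exchange step of Remez enforces.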
 
User avatar
spv205
Posts: 1
Joined: July 14th, 2002, 3:00 am

Optimized math functions

October 12th, 2010, 9:53 am

Outrun, I guess you're not able to be a bit more specific about your overall algorithm/goals? Are there no higher-level optimisations? Curious.
 
User avatar
spv205
Posts: 1
Joined: July 14th, 2002, 3:00 am

Optimized math functions

October 12th, 2010, 1:23 pm

Outrun, I don't know much about the workings of the EM algorithm or your data set, but doesn't the exponential effectively mean that only points within a small radius of the mean (at any one time) have an impact? So can't you just ignore points outside some number of standard deviations and skip the exp calc altogether?
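A minimal sketch of that idea (the function name and the default cutoff k are placeholders, not from the thread): beyond k standard deviations the true density is below exp(-k^2/2), about 4e-6 for k = 5, so returning zero there barely changes anything while skipping the expensive exp.

Code:
#include <cmath>

// Per-point Gaussian density with a hard cutoff at k standard deviations.
double gaussian_density_truncated(double x, double mu, double sigma, double k = 5.0)
{
    const double z = (x - mu) / sigma;
    if (std::fabs(z) > k)
        return 0.0;                                // skip the expensive exp
    const double invSqrt2Pi = 0.3989422804014327;  // 1 / sqrt(2*pi)
    return invSqrt2Pi / sigma * std::exp(-0.5 * z * z);
}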
 
User avatar
renorm
Topic Author
Posts: 1
Joined: February 11th, 2010, 10:20 pm

Optimized math functions

October 12th, 2010, 2:15 pm

What is the verdict on the inverse CDF? We have Acklam for doubles and Shaw-Brickman (ICNDfloat2) for singles. Acklam's formula has a 5% branch (two 2.5% tail branches) which uses sqrt and log. Shaw-Brickman has no branches, but uses log every time. Acklam's formula is more accurate, but loses too much by switching to singles.

One thing that puzzles me about these formulas is why they rely on math functions at all. The math functions themselves are approximated using similar formulas, so why not approximate the final result directly?

Acklam's formula uses math functions only 5% of the time, but both sqrt and log are evaluated there, and we can't use SIMD versions of the math functions inside the branches. Without SIMD, Shaw-Brickman is slower and less accurate than Acklam, at least on the CPU.
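For reference, a scalar sketch of the branch structure under discussion, built around the coefficients commonly reproduced from Acklam's note (treat the exact digits as something to verify against the original source; the point of the sketch is the ~5% sqrt/log branch, not the digits):

Code:
#include <cmath>

// Acklam-style inverse normal CDF: rational approximation in the central
// ~95% region, and sqrt(-2 log p) based tail branches for the outer ~5%.
// Assumes p is strictly inside (0, 1).
double inv_norm_cdf(double p)
{
    static const double a[] = { -3.969683028665376e+01,  2.209460984245205e+02,
                                -2.759285104469687e+02,  1.383577518672690e+02,
                                -3.066479806614716e+01,  2.506628277459239e+00 };
    static const double b[] = { -5.447609879822406e+01,  1.615858368580409e+02,
                                -1.556989798598866e+02,  6.680131188771972e+01,
                                -1.328068155288572e+01 };
    static const double c[] = { -7.784894002430293e-03, -3.223964580411365e-01,
                                -2.400758277161838e+00, -2.549732539343734e+00,
                                 4.374664141464968e+00,  2.938163982698783e+00 };
    static const double d[] = {  7.784695709041462e-03,  3.224671290700398e-01,
                                 2.445134137142996e+00,  3.754408661907416e+00 };
    const double pLow = 0.02425, pHigh = 1.0 - pLow;

    if (p < pLow)                                   // lower tail branch (~2.5%)
    {
        double q = std::sqrt(-2.0 * std::log(p));
        return (((((c[0]*q + c[1])*q + c[2])*q + c[3])*q + c[4])*q + c[5]) /
                ((((d[0]*q + d[1])*q + d[2])*q + d[3])*q + 1.0);
    }
    if (p > pHigh)                                  // upper tail branch (~2.5%)
    {
        double q = std::sqrt(-2.0 * std::log(1.0 - p));
        return -(((((c[0]*q + c[1])*q + c[2])*q + c[3])*q + c[4])*q + c[5]) /
                 ((((d[0]*q + d[1])*q + d[2])*q + d[3])*q + 1.0);
    }
    double q = p - 0.5, r = q * q;                  // central region: no sqrt/log
    return (((((a[0]*r + a[1])*r + a[2])*r + a[3])*r + a[4])*r + a[5])*q /
           (((((b[0]*r + b[1])*r + b[2])*r + b[3])*r + b[4])*r + 1.0);
}

The two tail branches are exactly what gets in the way of a straight SIMD port: lanes that fall into different branches have to be masked or handled separately, whereas Shaw-Brickman's branch-free form simply pays for the log on every lane.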
 
User avatar
spv205
Posts: 1
Joined: July 14th, 2002, 3:00 am

Optimized math functions

October 12th, 2010, 3:40 pm

Outrun, I guess you've gone through all this, but can't you just zero those probabilities (so the probabilities still sum to 1)? In other words, for each point store the most likely class (from the previous iteration), then re-evaluate starting with that class and threshold based on some multiple of that level.

On another note, would you mind clarifying:

Quote
another thing to note is that *I* need exp to evaluate the Gaussian density, so I guess I'm better off approximating the function f(x) = exp(-1/2 x^2) / sqrt(2 pi) directly using Remez

exp(-1/2 sum x^2) is slower than the product of exp(-1/2 x^2) [if you can be bothered!]
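To make that last point concrete, the same (unnormalised) joint density can be written either as one exp of a sum or as a product of per-coordinate factors; which form wins depends on whether exp is a library call or a cheap narrow-range approximation substituted per coordinate. A sketch with made-up names:

Code:
#include <cmath>
#include <vector>

// One exp of the sum: d multiply-adds plus a single (wide-range) exp call.
double joint_density_sum(const std::vector<double>& z)
{
    double s = 0.0;
    for (double zi : z) s += zi * zi;
    return std::exp(-0.5 * s);              // normalisation constant omitted
}

// Product of per-coordinate factors: d (narrow-range) exp evaluations,
// which is where a cheap approximation of exp(-x^2/2) would be dropped in.
double joint_density_product(const std::vector<double>& z)
{
    double p = 1.0;
    for (double zi : z) p *= std::exp(-0.5 * zi * zi);
    return p;
}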
 
User avatar
Cuchulainn
Posts: 20253
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Optimized math functions

October 12th, 2010, 5:44 pm

Quote
Originally posted by: outrun
just found that boost has the Remez algorithm (boost/math/tools/remez.hpp), so that should make finding min-max approximation coefficients very easy! boost doc

Well, you should know, you are in the Boost Maths consortium. See also Mon Oct 11, 10 06:38 PM.
Last edited by Cuchulainn on October 11th, 2010, 10:00 pm, edited 1 time in total.
 
User avatar
Cuchulainn
Posts: 20253
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Optimized math functions

October 12th, 2010, 5:57 pm

Quote
Originally posted by: outrun
Quote
Originally posted by: Cuchulainn
See also Mon Oct 11, 10 06:38 PM
Ah, I didn't notice that the first time around! Nicely spotted! I only wrote a boost math function *once*; the code itself was not much more than "exp(abs(x))", but it ended up being quite a bit of extra work (docs, plots, tables) to get it in! That's why I suggest keeping that for phase 2.

Boost Maths is well documented; Mr. Maddock, Bristow and colleagues are doing an admirable job imo. The code part is the easy part (like a dress rehearsal); it's the plots and docs that are vital.
 
User avatar
AVt
Posts: 90
Joined: December 29th, 2001, 8:23 pm

Optimized math functions

October 12th, 2010, 7:04 pm

Do I read your reply from Tue Oct 12, 10 03:39 PM correctly, that N = pdfN and you do some 'binning' up to ~3 or 4 decimal places? Then this would be far easier than implementing some exp, as it only asks for the *absolute* error: library exp code tries to minimize the relative error at the same time, and that has its troubles towards the tails - while binning would certainly not care there. Especially in that case you only need some 'exp' on the negative axis with absolute error below about 4 decimal places - did I understand it correctly that way?
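If an absolute error of a few units in the fourth decimal really is all that is needed, a plain table with linear interpolation already gets there on the negative axis. A sketch with my own step and cutoff: for x <= 0, |exp''(x)| <= 1, so the interpolation error is bounded by h^2/8, about 3e-5 for h = 1/64; below the cutoff exp(-16) is under 2e-7, so returning 0 there stays within the absolute budget.

Code:
#include <cmath>
#include <vector>

// Table-based exp for x <= 0 with an *absolute* error target (~3e-5 here).
class NegExpTable
{
public:
    explicit NegExpTable(double xMin = -16.0, double h = 1.0 / 64.0)
        : xMin_(xMin), invH_(1.0 / h)
    {
        const int n = static_cast<int>(-xMin / h) + 2;
        table_.resize(n);
        for (int i = 0; i < n; ++i)
            table_[i] = std::exp(xMin + i * h);    // precomputed bin values
    }

    double operator()(double x) const
    {
        if (x <= xMin_) return 0.0;   // exp(-16) ~ 1e-7: below the budget
        if (x >= 0.0)   return 1.0;
        const double t = (x - xMin_) * invH_;
        const int    i = static_cast<int>(t);
        const double f = t - i;
        return table_[i] * (1.0 - f) + table_[i + 1] * f;  // linear interpolation
    }

private:
    double xMin_, invH_;
    std::vector<double> table_;
};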