
 
User avatar
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Looking for hardware recommendations

October 22nd, 2016, 3:34 pm

huh, it seems the standard library supports complex exp out of the box (no need for boost)???

http://en.cppreference.com/w/cpp/numeric/complex/exp
#include <cmath>    // std::acos
#include <complex>
#include <iostream>
 
int main()
{
   const double pi = std::acos(-1);
   const std::complex<double> i(0, 1);
 
   // Euler's identity: exp(i*pi) = -1 (up to rounding in the imaginary part)
   std::cout << std::fixed << " exp(i*pi) = " << std::exp(i * pi) << '\n';
}
 
User avatar
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Looking for hardware recommendations

October 22nd, 2016, 3:41 pm

..but exp is not what Alan needs, no? That's just a special test case of the confluent hypergeometric M(a,b,z). Besides that, he also needs the Bessel I(nu,z)?
 
User avatar
Cuchulainn
Posts: 61483
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Looking for hardware recommendations

October 22nd, 2016, 3:50 pm

..but exp is not what Alan needs, no? That's just a special test case of the confluent hypergeometric M(a,b,z). Besides that, he also needs the Bessel I(nu,z)?
Yes, a special case of 3.2 (a == b).
But the issue is the CHF for all a, b and z in the bespoke paper. And MP (multiprecision) is needed.

And outside the world of complex numbers, there's the much bigger realm of real numbers, with zillions of applications, for which all this MP stuff is needed.
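(For reference, and assuming 3.2 is the standard Kummer series: [$]M(a,b,z) = \sum_{n=0}^{\infty} \frac{(a)_n}{(b)_n} \frac{z^n}{n!}[$], with [$](a)_n[$] the Pochhammer symbol. When a = b the ratio [$](a)_n/(b)_n[$] equals 1 and the sum collapses to [$]e^z[$], which is why exp is just a test case.)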
http://www.datasimfinancial.com
http://www.datasim.nl

Every Time We Teach a Child Something, We Keep Him from Inventing It Himself
Jean Piaget
 
User avatar
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Looking for hardware recommendations

October 22nd, 2016, 3:58 pm

..but exp is not what Alan needs, no? That's just a special test case of the confluent hypergeometric M(a,b,z). Besides that, he also needs the Bessel I(nu,z)?
Yes, a special case of 3.2 (a == b).
But the issue is the CHF for all a, b and z in the bespoke paper. And MP (multiprecision) is needed.
Yes CHF, ..but Bessel too (that's still missing).
I'm not sure multiprecision is a requirement for Alan (it *is* handy for testing). He's using this for Maximum Likelihood estimation AFAIK, and when I use that to find optimal parameter values a couple of digits is all I need (but Alan might be doing something different). It's a relevant question: if Alan needs multiprecision, that reduces the number of libraries that can be used, ..and it also implies that hand-made functions need to cover all the corner cases (he would need not just accuracy but also precision!)
 
User avatar
Cuchulainn
Posts: 61483
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Looking for hardware recommendations

October 22nd, 2016, 4:08 pm

A workaround to compute [$]e^z[$] or even [$]z^n[$]:
1. Write z in polar form [$]r e^{it}[$] (for [$]e^z[$] itself, write [$]z = x + iy[$] so that [$]e^z = e^x e^{iy}[$]).
2. Use Euler's formula [$]e^{it} = \cos(t) + i \sin(t)[$].

3. etc.

You've separated everything into real-number computations.
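A toy sketch of the separation (purely illustrative; std::exp and std::pow already do this for you, and the function names here are my own):

#include <cmath>
#include <complex>
#include <iostream>

// e^z with z = x + iy separated into real computations via Euler's formula:
// e^z = e^x (cos y + i sin y)
std::complex<double> exp_via_euler(std::complex<double> z)
{
    const double ex = std::exp(z.real());
    return { ex * std::cos(z.imag()), ex * std::sin(z.imag()) };
}

// z^n with z = r e^{it} in polar form: z^n = r^n (cos(nt) + i sin(nt)) (de Moivre)
std::complex<double> pow_via_polar(std::complex<double> z, int n)
{
    const double r  = std::abs(z);
    const double t  = std::arg(z);
    const double rn = std::pow(r, n);
    return { rn * std::cos(n * t), rn * std::sin(n * t) };
}

int main()
{
    const std::complex<double> z(0.3, 1.2);
    std::cout << exp_via_euler(z)  << " vs " << std::exp(z)    << '\n';
    std::cout << pow_via_polar(z, 5) << " vs " << std::pow(z, 5) << '\n';
}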
http://www.datasimfinancial.com
http://www.datasim.nl

Every Time We Teach a Child Something, We Keep Him from Inventing It Himself
Jean Piaget
 
User avatar
Cuchulainn
Posts: 61483
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Looking for hardware recommendations

October 22nd, 2016, 4:16 pm

..but exp is not what Alan needs, no? That's just a special test case of the confluent hypergeometric M(a,b,z). Besides that, he also needs the Bessel I(nu,z)?
Yes, a special case of 3.2 (a == b).
But the issue is the CHF for all a, b and z in the bespoke paper. And MP (multiprecision) is needed.
Yes CHF, ..but Bessel too (that's still missing).
I'm not sure multiprecision is a requirement for Alan (it *is* handy for testing). He's using this for Maximum Likelihood estimation AFAIK, and when I use that to find optimal parameter values a couple of digits is all I need (but Alan might be doing something different). It's a relevant question: if Alan needs multiprecision, that reduces the number of libraries that can be used, ..and it also implies that hand-made functions need to cover all the corner cases (he would need not just accuracy but also precision!)
Alan needs to define the requirements, e.g. are a and b complex, etc.?
http://www.datasimfinancial.com
http://www.datasim.nl

Every Time We Teach a Child Something, We Keep Him from Inventing It Himself
Jean Piaget
 
User avatar
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Looking for hardware recommendations

October 22nd, 2016, 4:22 pm

Yes, and the nu in the Bessel, is that an integer?
 
User avatar
Cuchulainn
Posts: 61483
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Looking for hardware recommendations

October 22nd, 2016, 4:33 pm

Yes, and the nu in the Bessel, is that an integer?
Which Bessel function is that, 1st or 2nd? Can you post the formula?
Abramowitz and Stegun, page 509 => Bessel is (just) a special CHF with a = nu + 1/2, b = 2 nu + 1 and argument 2iz. (QED)
It seems the CHF has a yuge number of special cases:
http://www.nit.eu/czasopisma/JTIT/2005/3/112.pdf

No new code needed (IMO) once you have M(a,b,z).
I think worst case ==> nu is complex. Alan?

//One idea: can you 'extract' just the code you need from that Linux library and leave the rest behind?
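To make the 'no new code needed' point concrete, a rough sketch, assuming the page-509 relation takes the form [$]I_\nu(z) = \frac{(z/2)^\nu e^{-z}}{\Gamma(\nu+1)} M(\nu+\tfrac{1}{2}, 2\nu+1, 2z)[$] and using my own naive truncated series for M (real nu, no corner cases, not production code):

#include <cmath>
#include <complex>
#include <iostream>

using cplx = std::complex<double>;

// Naive truncated Kummer series M(a,b,z) = sum_n (a)_n/(b)_n z^n/n!
// No scaling or asymptotics -- only sensible for small-to-moderate |z|.
cplx kummerM(cplx a, cplx b, cplx z, int maxTerms = 200, double tol = 1e-12)
{
    cplx term(1.0), sum(1.0);
    for (int n = 0; n < maxTerms; ++n)
    {
        const double dn = n;
        term *= (a + dn) / (b + dn) * z / (dn + 1.0);
        sum += term;
        if (std::abs(term) < tol * std::abs(sum)) break;
    }
    return sum;
}

// Modified Bessel I_nu(z) via the CHF, real (possibly non-integer) nu, complex z
cplx besselI(double nu, cplx z)
{
    return std::pow(z / 2.0, nu) * std::exp(-z)
         * kummerM(nu + 0.5, 2.0 * nu + 1.0, 2.0 * z)
         / std::tgamma(nu + 1.0);
}

int main()
{
    // I_{1/2}(1) = sqrt(2/pi) * sinh(1) ~ 0.937674
    std::cout << besselI(0.5, cplx(1.0, 0.0)) << '\n';
}

A real implementation still needs large-|z| asymptotics and overflow scaling, but no separate Bessel code.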
http://www.datasimfinancial.com
http://www.datasim.nl

Every Time We Teach a Child Something, We Keep Him from Inventing It Himself
Jean Piaget
 
User avatar
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Looking for hardware recommendations

October 22nd, 2016, 6:29 pm

Yes, extracting would be nice. So far I haven't found a thing (but I've only looked a bit), only C++ wrappers around yet other libraries (Fortran was one).

GitHub has a nice search function: you can search all code for "Bessel" and then filter by programming language.
 
User avatar
Cuchulainn
Posts: 61483
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Looking for hardware recommendations

October 25th, 2016, 9:26 am

Mtest[a_,b_,z_,trials_]:=
Timing[Do[M[a,b,z],{trials-1}];
    M[a,b,z]]
Thanks again for your experiments. I just tested with the above Mathematica code. The result gives the timing (in secs) to do the computation 'trials' times, and then the M value.

Taking 10,000 trials:

Mtest[1.,4.,50 I,10000]
 {1.185608,0.00241259 +0.0599983 I}

Mtest[3.,10.,30+100. I,10000]
{1.747211,-13811.6-953.439 I}

Mtest[15.,20.,200. I,10000]
{1.326008,-2.82866*10^-6-3.27839*10^-6 I}

Mtest[4.,1.,50. I,10000]
{0.046800,-9044.44-18975.1 I}

Mtest[10.,3.,30+100. I,10000]
 {0.078001,-1.8468*10^20+9.77876*10^21 I}
 
Mtest[4000.,4200.,50. I,10000]
{0.249602,-0.868055-0.468886 I}

So, roughly, the results are [0.05 - 1.75] secs to do 10,000 evals. To make a port to C/C++ (or FORTRAN or whatever) worth the effort, I would need a factor of 10x improvement on these times or better, as well as a similar speedup with the Bessel [$]I_{\nu}(z)[$] function. (Also, [$]M(a,b,z)[$] needs to admit complex 'a' and 'z', and [$]I_{\nu}(z)[$] needs complex z). I know you haven't paid attention to speed, but that's where I am on the ballpark requirements.
I did some experiments on C++ performance for the CHF using the series solution based on 3.2b.
In general, C++ is _much_ faster than MM (in fairness, MM has to be robust over a range of values; BTW, does it use GMP internally?)
I have tests:

A. using double
B. using float (a bit faster than A)
C. A + OpenMP

For A, I took many trials; the running times in seconds (N = 10 terms in the series, E-8 accuracy):

trials:   10^5, 10^6, 10^7, 10^8, 10^9, 5*10^9
time (s): 0.03, 0.47, 4.22, 34,   334,  -

Using a Core i5 (4 cores/4 threads, or 2 cores + 4 hyperthreads?) I get:

trials:   10^5, 10^6, 10^7, 10^8, 10^9, 5*10^9
time (s): 0.03, 0.27, 2.27, 12.5, 124,  586

The speedup is noticeable.

Of course, an algo for all values of z is needed, but I cannot see how a GPU offers a better solution. The problem has to be 'big' enough to warrant its use. It's like going to the supermarket to do the shopping in a Ferrari. I parallelized the 3.2b series itself with N = 10 and it was much slower than the sequential code.

// For trials = 10,000 I get a running time of ~0.05 s rather consistently.

My own plan is to get a machine with 10 cores + an onboard double-precision(!) GPU in the short term. I'll write a business plan and send it to Santa. I'm hoping they don't pull the plug when the powers that be see the electricity bill!!!
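Roughly the shape of the test (a sketch only: the 3.2b series is replaced here by a plain 10-term Kummer sum, and the OpenMP pragma parallelises over independent evaluations rather than inside the series; compile with -fopenmp, otherwise the pragma is ignored and you get the sequential A timing):

#include <chrono>
#include <complex>
#include <iostream>
#include <vector>

using cplx = std::complex<double>;

// Stand-in for the 3.2b series: fixed N-term Kummer sum
cplx chf(cplx a, cplx b, cplx z, int N = 10)
{
    cplx term(1.0), sum(1.0);
    for (int n = 0; n < N; ++n)
    {
        const double dn = n;
        term *= (a + dn) / (b + dn) * z / (dn + 1.0);
        sum += term;
    }
    return sum;
}

int main()
{
    const long trials = 10000000L;              // 10^7 evaluations
    std::vector<cplx> out(trials);

    const auto t0 = std::chrono::steady_clock::now();

    // Parallelise over independent evaluations, not inside the series
    #pragma omp parallel for
    for (long i = 0; i < trials; ++i)
        out[i] = chf(1.0, 4.0, cplx(0.0, 50.0 + 1e-9 * i));

    const auto t1 = std::chrono::steady_clock::now();
    std::cout << std::chrono::duration<double>(t1 - t0).count()
              << " s for " << trials << " evaluations\n";
}

The per-evaluation work is tiny, so the win comes entirely from the outer loop; splitting the 10-term sum across threads just adds overhead, as noted above.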
Last edited by Cuchulainn on October 25th, 2016, 9:47 am, edited 4 times in total.
http://www.datasimfinancial.com
http://www.datasim.nl

Every Time We Teach a Child Something, We Keep Him from Inventing It Himself
Jean Piaget
 
User avatar
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Looking for hardware recommendations

October 25th, 2016, 9:42 am

Nice information about the speed! That's exactly what he needs to know.

Alan's problem is comparable to having to compute N(x) for a thousand different values of x (except that it's not N() but something involving the CHF, and the x is not a single parameter but a set of parameters).

It's SIMD: he can give each GPU core an xi, and then all the N(xi) computations will be done in parallel. A GPU core is a bit slower than a CPU core, but I bought two low-end GPUs for a couple of hundred euro and I now have 4000 cores. Linear algebra routines (which are more difficult to parallelize) are 100x faster than on the CPU now.

(At least this is my expectation, based on the fact that he's doing ML estimation and the other things he mentioned.)
 
User avatar
Cuchulainn
Posts: 61483
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Looking for hardware recommendations

October 25th, 2016, 9:50 am

For completeness, I tested the code using the 50-digit data type in Boost Multiprecision. It's kinda slow, but it would have its uses (e.g. accurate precomputation or running something in the background).

trials:   10^4, 10^5, 10^6, 10^7
time (s): 5.64, 60,   600,  3900

Alan's problem is comparable to having to compute N(x) for a thousand different values of x (except that it's not N() but something involving the CHF, and the x is not a single parameter but a set of parameters).

In all probability, yes. That's the way I tested CHF(z). If you try to parallelise the internals of the CHF it just goes pear-shaped. It's like getting 10 builders to build a 5x5 shed: they get in each other's way, and they drink coffee and listen to the radio all day. Getting them back to work is a drain on the project manager.
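The switch is essentially just the value type (a sketch with a real-argument series template of my own, not the actual test code; Boost.Multiprecision's cpp_dec_float_50 drops in as T):

#include <boost/multiprecision/cpp_dec_float.hpp>
#include <cmath>
#include <iomanip>
#include <iostream>
#include <limits>

using mp50 = boost::multiprecision::cpp_dec_float_50;   // 50 decimal digits

// Same truncated Kummer sum, with the floating-point type as a template parameter
template <typename T>
T chf(T a, T b, T z, int maxTerms = 200)
{
    using std::abs;                       // ADL then picks up Boost's abs for mp50
    T term = 1, sum = 1;
    for (int n = 0; n < maxTerms; ++n)
    {
        term *= (a + n) / (b + n) * z / (n + 1);
        sum += term;
        const T at = abs(term), as = abs(sum);
        if (at < as * std::numeric_limits<T>::epsilon()) break;
    }
    return sum;
}

int main()
{
    // M(a,a,z) = e^z, so chf(2,2,1) should reproduce e to ~50 digits
    std::cout << std::setprecision(std::numeric_limits<mp50>::digits10)
              << chf<mp50>(2, 2, 1) << '\n'
              << std::setprecision(std::numeric_limits<double>::digits10)
              << chf<double>(2, 2, 1) << '\n';
}

The only change from the double version is the value type, which is the point; the arithmetic itself is what costs the orders of magnitude above.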
Last edited by Cuchulainn on October 25th, 2016, 9:59 am, edited 4 times in total.
http://www.datasimfinancial.com
http://www.datasim.nl

Every Time We Teach a Child Something, We Keep Him from Inventing It Himself
Jean Piaget
 
User avatar
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Looking for hardware recommendations

October 25th, 2016, 9:52 am

Let the arbitrage begin!
 
User avatar
Alan
Topic Author
Posts: 10015
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

Re: Looking for hardware recommendations

October 25th, 2016, 6:06 pm

Yes, and the nu in the Bessel, is that an integer?
Which Bessel function is that, 1st or 2nd? Can you post the formula?
Abramowitz and Stegun, page 509 => Bessel is (just) a special CHF with a = nu + 1/2, b = 2 nu + 1 and argument 2iz. (QED)
It seems the CHF has a yuge number of special cases:
http://www.nit.eu/czasopisma/JTIT/2005/3/112.pdf

No new code needed (IMO) once you have M(a,b,z).
I think worst case ==> nu is complex. Alan?

//One idea: can you 'extract' just the code you need from that Linux library and leave the rest behind?
For my problem, nu is real, but not an integer.

Re GPU discussion, I don't see how that can work at all here, as there is no native GPU support for [$]I_{\nu}(z)[$] and [$]M(a,b,z)[$].  From what I could see by googling something like "GPU, special functions", only elementary functions like [$]\sin(x)[$], etc. are supported and that's only by loading tables into the GPU and interpolating. That's just not going to work reasonably on special functions with multiple arguments, some complex-valued. And transferring the values from the CPU to the GPU on demand,  I will guess, would destroy any putative GPU speed-up as these functions are called at the very lowest levels of my codes. 
 
User avatar
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Looking for hardware recommendations

October 25th, 2016, 6:26 pm

You can use the CUDA Math Library, which has all the primitive math functions implemented for the GPU:

https://developer.nvidia.com/cuda-math-library