
Alan
Topic Author
Posts: 10013
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

Re: Looking for hardware recommendations

Thanks -- that's a good article. So, I will keep tweaking my Mathematica code on my current machine and monitor the new hardware scene for a few months. I continue to be interested in hardware recommendations, especially something that ships running Windows 10 on Intel chips with 8 or more cores.

outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Looking for hardware recommendations

Cuch, you're using a high-precision library here, no? Which one, and is it easy to port code to it (is the syntax like z = x + y;)?

Cuchulainn
Posts: 61483
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Looking for hardware recommendations

Cuch, you're using a high-precision library here, no? Which one, and is it easy to port code to it (is the syntax like z = x + y;)?
It is quite easy and nice from what I have tested. I am using Boost multiprecision to implement the series solutions to avoid overflow with doubles, so we can use data types with 50- and 100-digit accuracy. Here we test the CHF with exp(1, 0) as a complex number.
(MP even works with special functions.)

std::cout << '\n' << std::setprecision(std::numeric_limits<cpp_dec_float_100>::digits10) << "(3.2)b: " << SN(z, N) <<  '\n';

Approx

(3.2)b: (2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427,0)

(3.2)a: (2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427,0)

(3.2)c: (2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427,0)

exact std: (2.7182818284590450908,0)

real exact std: 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427427466

http://www.datasimfinancial.com
http://www.datasim.nl

Every Time We Teach a Child Something, We Keep Him from Inventing It Himself
Jean Piaget

Cuchulainn
Posts: 61483
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Looking for hardware recommendations

Boost MP has code for exp() etc. Here I call MP::exp(1), which is actually less accurate (it only gives ~40 digits of accuracy) than the algorithm 3.2b I created. (And MP does not support complex.)

MP exact exp: 2.718281828459045235360287471352662497757247093699958846289456201786842248

remarks: the series solution is trivial to parallelise (OMP, PPL, TBB).
Last edited by Cuchulainn on October 22nd, 2016, 1:46 pm, edited 2 times in total.


Cuchulainn
Posts: 61483
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Looking for hardware recommendations

Thanks -- that's a good article. So, I will keep tweaking my Mathematica code on my current machine and monitor the new hardware scene for a few months. I continue to be interested in hardware recommendations, esp. something that ships running Windows 10 on Intel chips with 8 or more cores.
It's a nice article. I have coded 3.2a-c and it's super fast. I'll do a benchmark and document the CHF in the 2nd edition of my C++ book.
I was at a talk on the Xeon Phi (10K), which is like a GPU except that it is programmed in C++11, in contrast to CUDA. It is about the size of a small box of chocolates and uses about 250 euros of electricity per month.
https://en.wikipedia.org/wiki/Xeon_Phi
The speaker said they were going to outsource to the cloud.. it might be a 'user-level' option as well (it saves electricity costs) and upgrades are automagic.

outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Looking for hardware recommendations

Boost MP has code for exp() etc. Here I call MP::exp(1), which is actually less accurate (it only gives ~40 digits of accuracy) than the algorithm 3.2b I created. (And MP does not support complex.)

MP exact exp: 2.718281828459045235360287471352662497757247093699958846289456201786842248

remarks: the series solution is trivial to parallelise (OMP, PPL, TBB).
Thanks, that looks very useful indeed!
I still need (if I ever find the time!) to compute American puts with higher accuracy to investigate that stability issue, and convergence speed in general. Doubles have too few digits to test that algorithm.
About this exp(1) example: did you use a 50-digit floating-point representation of 1, and when fed through exp() the result only had 40 digits of precision? If so, then you should file a bug report. I often use the Casio high-precision calculator when I need high-precision constants (e.g. that's where I got the weights of the Gaussian quadrature used in the American put algorithm).
http://keisan.casio.com/has10/Free.cgi
which gives 2.7182818284590452353602874713526624977572470937
For Alan, I still think it would be a good idea (if he wants to test whether it's worthwhile to re-implement in C++/Fortran) to start a Linux virtual machine and link to one of those mature math libraries, because the functions he needs are a bit difficult to implement; they seem to have many corner cases. If we can do that, then we can at least give some timing feedback: is moving from Mathematica to C++ 2 times faster, 10 times faster, 100 times faster? I wouldn't know; it could be any of these.
I also suspect that this type of problem (evaluating a lot of similar probabilities) is highly parallelisable. Moving from a single core to a GPU can easily speed things up by another factor of 50.

Cuchulainn
Posts: 61483
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Looking for hardware recommendations

Boost MP has code for exp() etc. Here I call MP::exp(1), which is actually less accurate (it only gives ~40 digits of accuracy) than the algorithm 3.2b I created. (And MP does not support complex.)

MP exact exp: 2.718281828459045235360287471352662497757247093699958846289456201786842248

remarks: the series solution is trivial to parallelise (OMP, PPL, TBB).
Bump (Casio vs MP vs Wiki vs 3.2b):
Casio: 2.7182818284590452353602874713526624977572470937
MP:    2.71828182845904523536028747135266249775724709369995 8846289456201786842248  (the digits after the space disagree with the others)
Wiki:  2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427427466
3.2b:  2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427
So, it looks like a bug in MP and Casio.
BTW 3.2b is about 4 lines of code.
Last edited by Cuchulainn on October 22nd, 2016, 2:43 pm, edited 2 times in total.

outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Looking for hardware recommendations

Thanks. But I'm worried about Boost multiprecision losing precision (in general, or just exp?). Can you do something like MP::exp(float_50(1.0)) and MP::exp(float_100(1.0))?

I don't have Boost and I hate installing and compiling it.

outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Looking for hardware recommendations

Cool, I found Coliru, an online C++ compiler that supports Boost:
#include <iostream>
#include <iomanip>
#include <limits>
#include <boost/multiprecision/cpp_dec_float.hpp>
#include <boost/math/constants/constants.hpp>

using boost::multiprecision::cpp_dec_float_50;

int main()
{
    std::cout
        << std::setprecision(std::numeric_limits<cpp_dec_float_50>::digits10)
        << boost::multiprecision::exp(cpp_dec_float_50(1.0))
        << std::endl;
}
gives
2.7182818284590452353602874713526624977572470937
and for the float_100 case
2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427
...so I don't understand that 40-digit remark?

Cuchulainn
Posts: 61483
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Looking for hardware recommendations

Cuchulainn: Boost MP has code for exp() etc. Here I call MP::exp(1), which is actually less accurate (it only gives ~40 digits of accuracy) than the algorithm 3.2b I created. (And MP does not support complex.)

MP exact exp: 2.718281828459045235360287471352662497757247093699958846289456201786842248

remarks: the series solution is trivial to parallelise (OMP, PPL, TBB).
Correction: I was not consistent with 50/100 digits and 1.0.
MP exact exp: 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427
Coliru: 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427
Lesson: don't mix precisions!
CODE
std::cout << '\n'
    << std::setprecision(std::numeric_limits<cpp_dec_float_100>::digits10)
    << "MP exact exp: "
    << boost::multiprecision::exp(cpp_dec_float_100(1.0))
    << '\n';
Last edited by Cuchulainn on October 22nd, 2016, 3:07 pm, edited 1 time in total.

Cuchulainn
Posts: 61483
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Looking for hardware recommendations

Summary: values now look good.
But the functions don't support complex arguments!(?)

outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Looking for hardware recommendations

Indeed, no complex arguments in Boost, which is a shame. Now we either have to implement it ourselves and stay cross-platform, or use something from the Linux libraries.

Cuchulainn
Posts: 61483
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Looking for hardware recommendations

Here's a tricky one: if I shovel the 100-digit type into std::exp() I get a less accurate result.

krazee exact std: (2.718281828459045090795598298427648842334747314453125,0)
mp exact:         2.718281828459045 !!! 235360287471352662497757247093699959574966967627724076630353547594571382178525166427
(the !!! marks where the std::exp result stops agreeing with the MP value)

So there is a workaround for complex arguments (std::complex is supported in C++11), but then we have to be careful.

outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: Looking for hardware recommendations

std::exp only supports the built-in floating-point types, no? So I think the Boost MP value automatically down-casts to double? I'm not sure; I'm surprised it even works.. But I wouldn't expect anything beyond double from std.