Thanks -- that's a good article. So, I will keep tweaking my Mathematica code on my current machine and monitor the new hardware scene for a few months. I continue to be interested in hardware recommendations, esp. something that ships running Windows 10 on Intel chips with 8 or more cores.

Cuch, you're using a high precision library here no? Which one, and is it easy to port code to it ( is the syntax like z=x+y; )?

- Cuchulainn
**Posts:** 61483 · **Location:** Amsterdam

> Cuch, you're using a high precision library here no? Which one, and is it easy to port code to it (is the syntax like z=x+y;)?

It is quite easy and nice from what I have tested. I am using Boost Multiprecision to implement the series solutions to avoid overflow with doubles, so we can use data types with 50- and 100-digit accuracy. Here we test the CHF with exp(1,0) as a complex number.

(MP even works with special functions..)

```
std::cout << '\n' << std::setprecision(std::numeric_limits<cpp_dec_float_100>::digits10)
          << "(3.2)b: " << SN(z, N) << '\n';
```

Approx

(3.2)b: (2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427,0)

(3.2)a: (2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427,0)

(3.2)c: (2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427,0)

exact std: (2.7182818284590450908,0)

real exact std: 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427427466

http://www.datasimfinancial.com

http://www.datasim.nl

Every Time We Teach a Child Something, We Keep Him from Inventing It Himself

Jean Piaget


- Cuchulainn

Boost MP has code for exp() etc. Here I call MP::exp(1), which is actually less accurate (it only gives ~40 digits of accuracy) than the algo 3.2b I created (and MP does not support complex).

MP exact exp: 2.718281828459045235360287471352662497757247093699958846289456201786842248

remarks: the series solution is trivial to parallelise (OMP, PPL, TBB).

Last edited by Cuchulainn on October 22nd, 2016, 1:46 pm, edited 2 times in total.



- Cuchulainn

> Thanks -- that's a good article. So, I will keep tweaking my Mathematica code on my current machine and monitor the new hardware scene for a few months. I continue to be interested in hardware recommendations, esp. something that ships running Windows 10 on Intel chips with 8 or more cores.

It's a nice article. I have coded 3.2a-c and it's super fast. I'll do a benchmark and document the CHF in the 2nd edition of my C++ book.

I was at a talk on the Xeon Phi (10K), which is like a GPU except that it is programmed in C++11 rather than CUDA. It is about the size of a small box of chocolates and uses about 250 euros of electricity per month.

https://en.wikipedia.org/wiki/Xeon_Phi

The guy said they were going to outsource to the Cloud.. it might be a 'user-level' option as well (saves electricity cost ) and upgrades are automagic.


> Boost MP has code for exp() etc. Here I call MP::exp(1) which actually is less accurate (it only gives ~40 digits accuracy) than algo 3.2b I created (and MP does not support complex).
>
> MP exact exp: 2.718281828459045235360287471352662497757247093699958846289456201786842248
>
> remarks: the series solution is trivial to parallelise (OMP, PPL, TBB).

Thanks, that looks indeed very useful!

I still need (if I ever find the time!) to compute American puts with higher accuracy to investigate that stability issue, and convergence speed in general. Doubles have too few digits to test that algorithm.

About this exp(1) example: did you use a 50-digit floating point representation of 1, and when fed through exp() the result only had 40 digits of precision? If so, then you should file a bug report. I often use the Casio high precision calculator when I need high precision constants (e.g. that's where I got the weights of the Gaussian quadrature used in the American put algorithm).

http://keisan.casio.com/has10/Free.cgi

which gives 2.7182818284590452353602874713526624977572470937

For Alan, I still think it would be a good idea (if he wants to test whether it's worthwhile to re-implement in C++/Fortran) to start a Linux virtual machine and link to one of those mature math libs, because the functions he needs are a bit difficult to implement; they seem to have many corner cases. If we can do that, then we can at least give some timing feedback: is moving from Mathematica to C++ 2 times faster, 10 times, 100 times? I wouldn't know; it could be any of these.

I also suspect that this type of problem -- evaluating a lot of similar probabilities -- is highly parallelisable. Moving from a single core to a GPU can easily speed things up by another factor of 50.

- Cuchulainn

bump (casio vs MP vs wiki vs 3.2b)

> Boost MP has code for exp() etc. Here I call MP::exp(1) which actually is less accurate (it only gives ~40 digits accuracy) than algo 3.2b I created (and MP does not support complex).
>
> MP exact exp: 2.718281828459045235360287471352662497757247093699958846289456201786842248
>
> remarks: the series solution is trivial to parallelise (OMP, PPL, TBB).

casio: 2.7182818284590452353602874713526624977572470937

MP: 2.71828182845904523536028747135266249775724709369995 8846289456201786842248

wiki: 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427 427466

3.2b: 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427

So, it looks like a bug in MP and casio.

BTW 3.2b is about 4 lines of code.

Last edited by Cuchulainn on October 22nd, 2016, 2:43 pm, edited 2 times in total.


Thanks. But I'm worried about Boost Multiprecision losing precision (in general, or just exp?). Can you do something like MP::exp(float_50(1.0)) and MP::exp(float_100(1.0))?

I don't have boost and I hate installing and compiling it


cool, I found Coliru, an online C++ compiler that supports boost

Code:

```
#include <iostream>
#include <iomanip>  // std::setprecision
#include <limits>   // std::numeric_limits
#include <boost/multiprecision/cpp_dec_float.hpp>

using boost::multiprecision::cpp_dec_float_50;

int main()
{
    std::cout
        << std::setprecision(std::numeric_limits<cpp_dec_float_50>::digits10)
        << boost::multiprecision::exp(cpp_dec_float_50(1.0))
        << std::endl;
}
```

gives

`2.7182818284590452353602874713526624977572470937`

and for the float_100 case

`2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427`

..so I don't understand that 40 digit remark..

- Cuchulainn

> Boost MP has code for exp() etc. Here I call MP::exp(1) which actually is less accurate (it only gives ~40 digits accuracy) than algo 3.2b I created (and MP does not support complex).
>
> MP exact exp: 2.718281828459045235360287471352662497757247093699958846289456201786842248
>
> remarks: the series solution is trivial to parallelise (OMP, PPL, TBB).

Correction: I was not consistent with 50/100 digits and 1.0.

MP exact exp: 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427

Coliru: 2.718281828459045235360287471352662497757247093699959574966967627724076630353547594571382178525166427

Code:

```
std::cout << '\n'
          << std::setprecision(std::numeric_limits<cpp_dec_float_100>::digits10)
          << "MP exact exp: "
          << boost::multiprecision::exp(cpp_dec_float_100(1.0))
          << '\n';
```

Last edited by Cuchulainn on October 22nd, 2016, 3:07 pm, edited 1 time in total.


- Cuchulainn

Summary: values now look good.

But the functions don't support complex arguments!(?)



Indeed, no complex arguments in Boost, which is a shame. Now we either have to implement it ourselves and stay cross-platform, or use something from the Linux libs.

- Cuchulainn

Here's a tricky one: if I shovel a 100-digit type into std::exp() I get a less accurate result.

krazee exact std: (2.718281828459045090795598298427648842334747314453125,0)

mp exact 2.718281828459045 !!! 235360287471352662497757247093699959574966967627724076630353547594571382178525166427

So, there is a workaround for complex args (support in C++11) but then we have to be careful.



std::exp only supports double, no? So I think Boost MP automatically down-casts to double? I'm not sure; I'm surprised it even works.. But I wouldn't expect anything beyond double from std.
