
### Re: Looking for hardware recommendations

Posted: November 30th, 2017, 9:19 pm
> Next, I transform 10.4 to an ODE that I solve using Boost, which can handle complex coefficients.

I feel NDSolve could do this as well.

### Re: Looking for hardware recommendations

Posted: November 30th, 2017, 10:28 pm
Putting M() in Boost is a good idea, but they will require you to provide double precision (e.g. Genz & quadrature), a sufficient error analysis (what is the maximum error?), unit test cases, and simple documentation. So it will cost you some time to deliver a contribution that matches their standards. Their view is that it needs to be reliable and usable in a wide range of production cases; they don't want to have to fix things later on. I've had many discussions about the last binary digits not always being correct in the first version submitted, which I think is a sensible peer-review process.

So good idea but not trivial to execute!

### Re: Looking for hardware recommendations

Posted: November 30th, 2017, 11:52 pm
There are two overloaded M()'s, at least.

1. The bivariate cumulative normal. I have 3 methods (Genz, A&S, PDE). This is not the 'current' M. It could go into Boost but is too easy. It's in QuantLib already, works very well, and gives exactly the same values as the late Graeme West's code.

2. The 'current' M I mean is the confluent hypergeometric function that scares everyone except Alan:
https://en.wikipedia.org/wiki/Confluent ... c_function

Now that would be a major addition for a number of reasons.

We had a lot of discussion on this very topic on this thread in 2016.

### Re: Looking for hardware recommendations

Posted: December 1st, 2017, 12:19 am
You might also see if you can improve the cooling in the system either through software (changing the fan RPM settings) or changing the internal or external airflow.
The sticks are packed quite closely so they might overheat (I don't have any sensor there), but they felt just nicely warm. Besides, the computer didn't freeze specifically during computations, but at random moments. My friend told me that it sometimes happens on new Intel architectures (she's an overclocking maniac). I will see how it goes.
On Macs you can get the temperature of each stick of RAM so there's probably a way to do that on a PC, too.

I vaguely remember (from 10 years ago when I paid attention to this stuff) that RAM ran hot even under light loads but I may be wrong.
I suppose you won't get a Mac with 128 GB of RAM in the first place, but if the sticks have temperature sensors that's, nomen omen, cool (readings from the processor or motherboard are not reliable). Mine don't have them, so I used my finger.

### Re: Looking for hardware recommendations

Posted: December 1st, 2017, 7:06 pm
> Equation (10.4) looks like a very good example indeed. I have enough to write M(a,b,z) in the most general case (a,b,z are complex). Next, I transform 10.4 to an ODE that I solve using Boost, which can handle complex coefficients. BTW do you solve 10.4 using numerical quadrature?
Yes, the code is in the book, middle of pg 461 -- I just use Mathematica's NIntegrate. There is an example calling routine on the previous page. Note that [$](a,b,c) = (\omega,\theta,\sigma)[$].

### Re: Looking for hardware recommendations

Posted: December 2nd, 2017, 2:42 am
Often spec'ing out a beast of a trading computer is a 'cart before the horse' type activity. You can get away with a refurbished computer a few generations old for 95%+ of the analyst/trading workload that people on this forum do. I detail my algo hardware setup, and my thoughts on the overkill I see in the community, in this writeup: Python Algo Trading Stack: Hardware

### Re: Looking for hardware recommendations

Posted: December 2nd, 2017, 12:19 pm
> Equation (10.4) looks like a very good example indeed. I have enough to write M(a,b,z) in the most general case (a,b,z are complex). Next, I transform 10.4 to an ODE that I solve using Boost, which can handle complex coefficients. BTW do you solve 10.4 using numerical quadrature?
> Yes, the code is in the book, middle of pg 461 -- I just use Mathematica's NIntegrate. There is an example calling routine on the previous page. Note that [$](a,b,c) = (\omega,\theta,\sigma)[$].
CHF looks like a benign integrand, and even if not, I reckon NIntegrate is able to see that fact? (I'm guessing.) The graphs of CHF in A&S look OK.
I see that the complex-valued Fresnel integral can be written in terms of CHF. I have solved the former integral by turning it into an ODE, solving it both as a complex-valued ODE and as a pair of real-valued ODEs; I get the same results, and also match the examples in A&S.
So, at some stage I want to try 10.4 as an ODE and check against Mathematica.

// Nice analysis in chapter 10, Alan.

I would like to call it Hypergeometric1F1 to avoid confusion with [$]M(a,b,\rho)[$] which is a different ball-game.

edit
Q: For 10.4 we don't need the complex case for H1F1 (just yet), and we can start with doubles as a pilot case?

### Re: Looking for hardware recommendations

Posted: December 4th, 2017, 1:11 am
Thanks.

Definitely 10.4 only uses real parameters. But, if you look carefully at my code, you will see I am using some high precision.

Now I frankly can't remember if/why that was necessary.

The likely thing is I was just being lazy. The better approach would be to analytically develop the integrand near s=0, use that expansion when s is very small, and then (guessing) machine precision (doubles) works fine. I was probably just too lazy to do that and so used unnecessarily high precision instead.

### Re: Looking for hardware recommendations

Posted: December 4th, 2017, 2:39 pm
> Definitely 10.4 only uses real parameters. But, if you look carefully at my code, you will see I am using some high precision.

I tried C++ multiprecision last time, and AFAIR it was needed when summing the series.
Q: All this CHF work is very useful and doable, so could it be a nice MSc project?

### Re: Looking for hardware recommendations

Posted: December 6th, 2017, 4:00 pm
Which C++ parallel libraries do you guys (intend to) use? A series summation for Hypergeometric1F1 is embarrassingly parallel, and a 10-core machine + C++ concurrency (or, even better, OpenMP or PPL) will be super fast. Use a reduction variable.

I think it would be like greased lightning.

oh yeah
(Keep talking whoa keep talking)
A fuel injection cutoff and chrome plated rods oh yeah
(I'll get the money I'll kill to get the money)
With a four speed on the floor they'll be waiting at the door
...

### Re: Looking for hardware recommendations

Posted: November 18th, 2018, 9:10 pm
I have an excuse to look at new hardware (following my last question in this subforum, I've decided that I need more RAM for my computations - 128GB is not enough). NVIDIA released new GPUs, GeForce RTX, with ray tracing and deep learning for speeding up the graphics at the same quality level (by interpolating frames to higher resolution). Not many games use it at the moment, but here is a demo:
The technology is quite impressive in itself, but I wonder if there's a risk it will ruin the gaming experience in a similar way to how HFR did with The Hobbit?

No doubt ray tracing will be a great thing.

### Re: Looking for hardware recommendations

Posted: November 19th, 2018, 12:40 pm
> ray tracing and deep learning for speeding up the graphics