SERVING THE QUANTITATIVE FINANCE COMMUNITY

Cuchulainn
Posts: 53050
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

### Re: Looking for hardware recommendations

This is all very interesting. I am thinking of putting the MSc students with a good mathematical background onto this topic in 2017.
Thanks for the explanation of the Linetsky approach. I am just wondering about the pros and cons of the Residue Theorem versus numerical inversion, in particular after applying the Mellin transform as discussed compactly in section 3 here
http://www.ijstr.org/final-print/dec2014/Black-scholes-Partial-Differential-Equation-In-The-Mellin-Transform-Domain.pdf
The last stage is to solve eq. (31), which can be done in the two ways you mention. What is nice here is that (26) has an explicit solution in terms of elementary functions (no need for FDEM nor CHF).

I checked the analysis of that section and the end result is correct. What is appealing is that we recover/brainstorm about the boundary conditions when performing the integration by parts.

BTW, did you experiment with numerical inversion in these cases, as a second opinion as it were?
http://www.datasimfinancial.com

"Science is what you know, Philosophy is what you don't know" Bertrand Russell.

Alan
Topic Author
Posts: 9141
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

### Re: Looking for hardware recommendations

In general, my experience has been that if you need a numerical Laplace inversion of some $F(s)$, the integrand along a Bromwich contour (a vertical line in the s-plane) is typically much more oscillatory and hard to integrate than one along the negative real s-axis. So it is always worth making that contour move (via the Residue Theorem) if you can. But there are other ways to do the inversion numerically, due to Abate and Valko, especially when $F(s)$ can be evaluated to arbitrary precision. In my Vol II, I show/apply their 'Fixed-Talbot' (FT) method on pgs 190-191, and their 'Gaver-Wynn-Rho' (GWR) method on pgs 491-493. As I recall, for the Asian option problem, I tried the Bromwich contour, the Residue Theorem contour (neg. s-axis), and at least one of the Abate-Valko methods. Nowadays, whatever the inversion problem, I almost always try to avoid the Bromwich contour and switch to one of the 3 alternatives.

Re the Mellin transform, I always think of it as just a rewriting of a Fourier transform; I may have never used one directly. For the Black-Scholes model, I do show the Fourier-transform analytic inversion in my Vol. I (Appendix 2.1).
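Concretely, the substitution $x = e^{u}$ turns the Mellin integral into a two-sided transform, and evaluating on a vertical line $s = c + i\omega$ gives an ordinary Fourier transform:

```latex
\mathcal{M}\{f\}(s) = \int_0^{\infty} f(x)\, x^{s-1}\, dx
  \;\xrightarrow{\;x = e^{u}\;}\;
  \int_{-\infty}^{\infty} f(e^{u})\, e^{s u}\, du,
\qquad
\mathcal{M}\{f\}(c + i\omega)
  = \int_{-\infty}^{\infty} \bigl[ f(e^{u})\, e^{c u} \bigr]\, e^{i\omega u}\, du .
```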

Cuchulainn
Posts: 53050
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

### Re: Looking for hardware recommendations

Many of these "Special Functions" will be in C++17. Useful to know. Of course, they are already in Boost.

http://en.cppreference.com/w/cpp/numeric/special_math

Alan
Topic Author
Posts: 9141
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

### Re: Looking for hardware recommendations

How does it work? If a special function is supported in C++17, but not with complex arguments, is the source available somewhere to modify?

outrun
Posts: 3813
Joined: April 29th, 2016, 1:40 pm

### Re: Looking for hardware recommendations

From what I've just read, it's indeed real arguments only. ("Our investigation of the alternative led us to realize that the complex landscape for the special functions is figuratively dotted with land mines")

The open-source compilers have the source code available (since they're open source), but the implementation details are not designed to be copy-pasted or modified. Maybe Windows too, I can't tell. Most libraries have all sorts of abstract helper functions that make it difficult to isolate and fix bits of code.

Another option might be to switch to Python. I don't know the speed (but I can benchmark it); Python's SciPy package supports complex arguments.

Cuchulainn
Posts: 53050
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

### Re: Looking for hardware recommendations

Alan wrote:
Thanks -- that's good to learn about the existence of C++AMP rather than CUDA -- maybe not for this specific project, but for a follow-up one that I have (more Monte Carlo related).

I am loath to attempt to write these special functions myself. For the gurus who understand both unix and windows, how much trouble do you think it would be to get this math library compiled and then linkable to C/C++ code compiled under Visual Studio on Windows?  (It seems to have many dependencies on unix-type stuff).

Why not contact author for easy install for Windows?
fredrik.johansson@gmail.com

Alan
Topic Author
Posts: 9141
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

### Re: Looking for hardware recommendations

My thinking on this project has evolved. First, the calculations that prompted my original post can likely be done *much* more efficiently with smart asymptotics. Having said that, I would still like to test that approach with brute force and a new machine. But the easiest approach there is simply to get a multi-core machine with more cores, upgrade my Mathematica license to enable the extra cores, and run my existing code!

That's why!

dd3
Posts: 246
Joined: June 8th, 2010, 9:02 am

### Re: Looking for hardware recommendations

Why not take a look at renting a machine in the cloud if Mathematica has a command line interface?
https://aws.amazon.com/ec2/instance-types/

I only have experience with AWS.

Alan
Topic Author
Posts: 9141
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

### Re: Looking for hardware recommendations

Because the license does not permit an install into a rented cloud machine. There was one company offering an already installed AWS setup in a special license arrangement with WRI, but I already checked with them and that service is defunct.

Posts: 1
Joined: March 18th, 2017, 2:29 pm

### Re: Looking for hardware recommendations

I don’t have much time to structure this text more.

As an intro: I finish my PhD in September, in the field of programming language theory and somewhat theoretical computer science. As a side project I do research on scaling problems in server systems and on measuring performance.

A few pointers. Amazon AWS was mentioned a few times. Have you checked the speed cost of virtualization? This idea might sound strange at first, but taking my server examples, the virtualization penalty might be at least 80% less performance on virtualized hardware. So on AWS you would need at least 32 to 64 cores to compete against a fast 4-core machine in the web server example. I don't have the overhead figures for Mathematica.

The underlying problem is that from the 70s until now, CPU performance (transistor count, speed) has mostly followed Moore's law, while memory speed lags a factor of 1000 behind. So the question is less one of cores than of memory speed. And if memory speed is the limiting factor, it's main memory, the caches, and the memory bus that matter for inter-CPU communication.

Here is an example that is clear but sounds paradoxical at the same time. Take an application that runs on a 2- or 4-core machine, and run the same application on a 32-core and on a 64-core machine. You may find that the 2- or 4-core machine is the fastest, then the 32-core, and the slowest is the 64-core machine. The clock speed is significantly lower on the 32-core machine, so the 2- or 4-core machine wins if the application doesn't scale well in the number of cores. And the inter-core overhead, interrupts, etc. weigh so heavily on the 64-core machine that the 32-core machine is, of course, much faster than the 64-core one.

So, if you run your Mathematica code with 1 to 8 threads, do you get linear scaling for the first 4 threads? What is the performance gain of turning hyper-threading on or off? If the latter is a 5% to 8% increase, then a 4-core machine with hyper-threading will probably beat an 8-core machine. In that case just order an Intel 7700K chip and run it at 4.7 GHz. With fast RAM that machine costs you 750 USD (or 700 EUR), and you can order it on Amazon (or Newegg).

How big is your data set? I assume quite small, such as 2 to 10 GB? In performance terms, 10 GB is read in about a second, if your data set is that large.

Also, the thread would be clearer if you said what the speed benefits are to you. I have to guess faster development and debugging times, since if you made more money from speed you would not still be using old hardware by now. (Also, what are your current memory speed and exact CPU? Then it would be clearer how much speed benefit you would get from a 7700K chip.)

How much faster is Mathematica on Windows than on Linux? If it is about the same speed, Linux has great benefits in the freedom of what you can do with the kernel and OS configuration (just think of booting Windows from a RAM disk vs. Linux).

It might be beneficial to solve your problem by creating your own custom language / domain-specific language (DSL). Then you compile that code to Mathematica, and if you need speed later, you compile your DSL to a CUDA/GPU DSL (or C code). You just write different compilers for all the options (or better, get someone to write them). This has potential for a number of reasons (you can always test your software on Mathematica and compare results in case your super-optimized code has a bug).

Alan
Topic Author
Posts: 9141
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

### Re: Looking for hardware recommendations

Thanks for your comments. As I mentioned, my current thinking on the problem that inspired this thread is to solve it with a better algorithm. But I am still in the market for a new general purpose desktop, as my current one is almost 4 years old, and your comments inspired a little shopping for systems supporting the i7-7700K chip you mentioned.

Alan
Topic Author
Posts: 9141
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

### Re: Looking for hardware recommendations

Alan wrote:
I am starting to search for a higher performance desktop than my current system. Mainly I need an Intel 8-core system because I am doing some Mathematica stuff that needs the performance boost. (Mathematica's std license will launch up to 8 kernels if you have 8 or more cores. I have just discovered how easy it is to invoke all the cores at once with my particular Mathematica app, but I only have 4 at the moment).

Currently I have a Dell XPS desktop. Shopping at Dell, I learned their only 8 core system is an Alienware Area-51 R2. The rep says he expects 8 core to migrate to their XPS systems pretty soon, but didn't have an announced launch date. I am not buying until Jan, so may wait. The Alienware seems somewhat pricey for what you get, but I would certainly like to hear any experiences with that model.

Any recommendations/experiences with *any* manufacturer using Intel 8 (or more) core systems. If you have one, what is your cooling system?

Let me bump this thread because I found a very interesting possibility in Thinkmate (the apparent hardware dba of Integrated IT Solutions, Inc.).

The things I like:

- Best configurator on the web
- They have been around for a long time
- I can configure a relatively high-performance desktop that I like that is *not* a gaming machine (unlike the Dell Alienware)
- Apparently, they use ASUS and Supermicro motherboards, which seem well-regarded (Supermicro was outrun's recommended solution), but my sense is I will get a lot more hand-holding from an experienced systems integrator, which I like. In fact, the configuration I like best so far from them uses an ASUS motherboard.

The things that worry me:

- They are relatively small, if one can believe the dba link
- They have no online forum (and little in the way of third-party reviews), so one doesn't get to hear about issues people have had, and one doesn't get the benefit of other users for support issues.

Anybody with experience with them?

outrun
Posts: 3813
Joined: April 29th, 2016, 1:40 pm

### Re: Looking for hardware recommendations

I don't know them, but I just wanted to say that you might keep in mind that in the future you'd want to add an Nvidia GPU, ..maybe two.

Besides making sure you have free slots, you should also have enough spare power to feed the GPUs. E.g. a Tesla K80 GPU can use up to 300 watts. I was bitten by this once: I wanted to add GPUs, I had free slots, but my power supply wasn't big enough.

About your worries: most components (in general) are very standard, you can buy and replace them anywhere, buy components yourself, or go to a local repair shop.

Alan
Topic Author
Posts: 9141
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

### Re: Looking for hardware recommendations

Very good points.

The configuration I am thinking about has a single Nvidia Titan Xp (Pascal) card with a 1000-watt power supply. But if I want to add a 2nd GPU in the future, there was another configuration with a 1500-watt supply. I think I might want to look again at that 2nd one. It was one of their models optimally configured for multiple GPUs: here

Also, on the repairs, you're right and that makes me more confident of doing this with them.