SERVING THE QUANTITATIVE FINANCE COMMUNITY

Cuchulainn
Posts: 50458
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Looking for hardware recommendations

November 7th, 2016, 9:06 am

This is all very interesting. I am thinking of putting the MSc students with a good mathematical background onto this topic in 2017.
Thanks for the explanation of the Linetsky approach. I am just wondering about the pros and cons of the Residue Theorem versus numerical inversion, in particular after applying the Mellin transform, as discussed compactly in section 3 here:
http://www.ijstr.org/final-print/dec2014/Black-scholes-Partial-Differential-Equation-In-The-Mellin-Transform-Domain.pdf
The last stage is to solve eq. (31), which can be done in the two ways you mention. What is nice here is that (26) has an explicit solution in terms of elementary functions (no need for FDEM nor CHF).

I checked the analysis of section 3 and the end result is correct. What is appealing is that we recover/brainstorm about the boundary conditions when performing the integration by parts.

BTW, did you experiment with numerical inversion in these cases, as a second opinion as it were?
http://www.datasimfinancial.com

“The two most important days in your life are the day you are born and the day you find out why.”
 
Alan
Topic Author
Posts: 8857
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

Re: Looking for hardware recommendations

November 7th, 2016, 3:33 pm

In general, my experience has been that if you need a numerical Laplace inversion of some [$]F(s)[$], the integrand along a Bromwich contour (a vertical line in the s-plane) is typically much more oscillatory and hard to integrate than one along the negative real s-axis. So it is always worth making that contour move (via the Residue Theorem) if you can. But there are other ways to do the inversion numerically, due to Abate and Valko, especially when [$]F(s)[$] can be evaluated to arbitrary precision. In my Vol. II, I show/apply their 'Fixed-Talbot' (FT) method on pgs. 190-191, and their 'Gaver-Wynn-Rho' (GWR) method on pgs. 491-493. As I recall, for the Asian option problem, I tried the Bromwich contour, the Residue Theorem contour (negative s-axis), and at least one of the Abate-Valko methods. Nowadays, whatever the inversion problem, I almost always try to avoid the Bromwich contour and switch to one of the three alternatives.
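[Editor's note: for readers who want to experiment, here is a minimal sketch of the Abate-Valko Fixed-Talbot rule, in plain double precision (the method's real strength appears when F can be evaluated to arbitrary precision, as noted above); the test transform in the usage line is just a placeholder:

```python
import cmath
import math

def talbot_invert(F, t, M=32):
    """Numerically invert a Laplace transform F(s) at time t > 0
    using the Abate-Valko Fixed-Talbot rule with M nodes."""
    r = 2.0 * M / (5.0 * t)                                   # fixed contour parameter
    total = 0.5 * (F(complex(r, 0.0)) * math.exp(r * t)).real  # theta = 0 node
    for k in range(1, M):
        theta = k * math.pi / M
        cot = math.cos(theta) / math.sin(theta)
        s = r * theta * complex(cot, 1.0)                      # contour point s(theta_k)
        sigma = theta + (theta * cot - 1.0) * cot
        total += (cmath.exp(t * s) * F(s) * complex(1.0, sigma)).real
    return (r / M) * total

# usage: invert F(s) = 1/(s+1), whose inverse is exp(-t)
approx = talbot_invert(lambda s: 1.0 / (s + 1.0), 1.0)
```

In double precision the growth of exp(rt) limits the achievable accuracy, which is why the arbitrary-precision setting discussed above matters for hard transforms.]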

Re the Mellin transform, I always think of it as just a rewriting of a Fourier transform; I may have never used one directly. For the Black-Scholes model, I do show the analytic Fourier-transform inversion in my Vol. I (Appendix 2.1).
 
Cuchulainn
Posts: 50458
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Looking for hardware recommendations

January 26th, 2017, 4:09 pm

Many of these "Special Functions" will be in C++17. Useful to know. Of course, they are already in Boost.

http://en.cppreference.com/w/cpp/numeric/special_math
 
Alan
Topic Author
Posts: 8857
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

Re: Looking for hardware recommendations

January 26th, 2017, 4:22 pm

How does it work? If a special function is supported in C++17, but not with complex arguments, is the source available somewhere to modify?
 
outrun
Posts: 2420
Joined: April 29th, 2016, 1:40 pm

Re: Looking for hardware recommendations

January 26th, 2017, 5:19 pm

From what I've just read, it's indeed just real arguments. ("Our investigation of the alternative led us to realize that the complex landscape for the special functions is figuratively dotted with land mines")

The open-source compilers have the source code available (since it's open source), but the implementation details are not designed to be copy-pasted or modified. Maybe Windows too, I can't tell. Most libraries have all sorts of abstract helper functions that make it difficult to isolate and fix bits of code.

Another option might be to switch to Python. I don't know the speed (but I can benchmark it); Python's SciPy package supports complex arguments.
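[Editor's note: as a sketch of what "modify it yourself" can look like when a library only handles real arguments: the Lanczos approximation gives Gamma(z) for complex z in a few lines of plain Python. The coefficients below are the widely published g = 7, n = 9 set; this is an illustration, not a vetted library:

```python
import cmath
import math

# standard Lanczos coefficients (g = 7, n = 9)
_G = [
    0.99999999999980993, 676.5203681218851, -1259.1392167224028,
    771.32342877765313, -176.61502916214059, 12.507343278686905,
    -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7,
]

def gamma(z):
    """Gamma(z) for complex (or real) z via the Lanczos approximation."""
    if z.real < 0.5:
        # reflection formula: Gamma(z) Gamma(1-z) = pi / sin(pi z)
        return cmath.pi / (cmath.sin(cmath.pi * z) * gamma(1 - z))
    z = z - 1
    x = _G[0]
    for i in range(1, 9):
        x += _G[i] / (z + i)
    t = z + 7.5
    return cmath.sqrt(2 * cmath.pi) * t ** (z + 0.5) * cmath.exp(-t) * x
```

The same recipe translates almost line-for-line to C++, which is one way around a real-only std implementation.]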
 
Cuchulainn
Posts: 50458
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Looking for hardware recommendations

January 26th, 2017, 8:05 pm

Alan wrote:
Thanks -- that's good to learn about the existence of C++AMP rather than CUDA -- maybe not for this specific project, but for a follow-up one that I have (more Monte Carlo related).

I am loath to attempt to write these special functions myself. For the gurus who understand both unix and windows, how much trouble do you think it would be to get this math library compiled and then linkable to C/C++ code compiled under Visual Studio on Windows?  (It seems to have many dependencies on unix-type stuff).

Why not contact the author about an easy install for Windows?
fredrik.johansson@gmail.com
 
Alan
Topic Author
Posts: 8857
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

Re: Looking for hardware recommendations

January 26th, 2017, 8:21 pm

My thinking on this project has evolved. First, the calculations that prompted my original post can likely be done *much* more efficiently with smart asymptotics. Having said that, I would still like to test that approach with brute force and a new machine. But the easiest approach there is simply to get a multi-core machine with more cores, upgrade my Mathematica license to enable the extra cores, and run my existing code! 

That's why! :D
 
dd3
Posts: 245
Joined: June 8th, 2010, 9:02 am

Re: Looking for hardware recommendations

January 28th, 2017, 6:01 pm

Why not take a look at renting a machine in the cloud, if Mathematica has a command-line interface?
https://aws.amazon.com/ec2/instance-types/

I only have experience with AWS.
 
Alan
Topic Author
Posts: 8857
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

Re: Looking for hardware recommendations

January 28th, 2017, 6:14 pm

Because the license does not permit an install on a rented cloud machine. There was one company offering a pre-installed AWS setup under a special license arrangement with WRI, but I already checked with them and that service is defunct.
 
stevethetrader1
Posts: 1
Joined: March 18th, 2017, 2:29 pm

Re: Looking for hardware recommendations

March 18th, 2017, 2:31 pm

I don’t have much time to structure this text more.

As an intro: I finish my PhD in September, in the field of programming language theory and somewhat theoretical computer science. As a side project I do research on scaling problems in server systems and on measuring performance.

A few pointers. Amazon AWS was mentioned a few times. Have you checked what virtualization costs you in speed? The idea might otherwise backfire: taking my server examples, the virtualization penalty can be at least 80% lost performance on virtualized hardware, so on AWS you would need at least 32 to 64 cores to compete against a fast 4-core machine in the web-server example. I don't have the overhead figure for Mathematica.

The underlying problem is that from the 1970s to now, CPU performance (transistor count, clock speed) has mostly followed Moore's law, while memory speed lags a factor of 1000 behind. So the question is less one of cores than of memory speed. And if memory speed is the limiting factor, that means main memory, the caches, and the memory bus carrying the inter-CPU communication.

Just an example that is clear but sounds paradoxical at the same time. Take an application that runs on a 2- or 4-core machine, and run the same application on a 32-core and on a 64-core machine. You may well find that the 2- or 4-core machine is the fastest, then the 32-core, and that the 64-core machine is the slowest. The clock speed is significantly lower on the many-core machines, so the 2- or 4-core machine wins if the application doesn't scale well in the number of cores; and the inter-core overhead (interrupts, etc.) weighs so heavily on the 64-core machine that the 32-core machine is, of course, much faster than the 64-core one.
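[Editor's note: the paradox can be made concrete with a back-of-the-envelope model: Amdahl's law scaled by clock speed, plus a per-core communication cost. All the numbers below (parallel fraction, clocks, overhead coefficient) are invented for illustration, not measurements:

```python
def effective_throughput(cores, clock_ghz, parallel_frac, overhead_per_core=0.002):
    """Relative throughput: Amdahl's law scaled by clock speed,
    with a linear inter-core communication penalty."""
    serial = 1.0 - parallel_frac
    comm = overhead_per_core * (cores - 1)   # overhead grows with core count
    return clock_ghz / (serial + parallel_frac / cores + comm)

# hypothetical machines: a high-clock 4-core vs. lower-clock 32- and 64-core boxes
quad = effective_throughput(4, 4.2, parallel_frac=0.5)
c32 = effective_throughput(32, 2.1, parallel_frac=0.5)
c64 = effective_throughput(64, 2.1, parallel_frac=0.5)
# with these numbers the ordering is quad > c32 > c64: more cores, less speed
```

With a 50% parallel fraction, the serial half plus communication overhead swamps whatever the extra cores contribute.]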

So, if you run your Mathematica code with 1 to 8 threads, do you get a linear speedup over the first 4 threads? What is the performance gain from turning hyperthreading on or off? If the latter is a 5% to 8% increase, then a 4-core machine with hyperthreading will probably beat an 8-core machine. In that case just order an Intel i7-7700K and run it at 4.7 GHz. With fast RAM that machine costs you 750 USD (or 700 EUR), and you can order it on Amazon (or Newegg).

How big is your data set? I assume quite small, say 2 to 10 GB? In performance terms, 10 GB is read in about a second, if your data set is even that large.

The thread would also be clearer if you said what the speed benefits are to you. I have to guess faster development and debugging times, since if you made more money from speed you would not still be using old hardware by now. (Also, what are your current memory speed and exact CPU? Then it would be clearer how much speed benefit you would get from a 7700K.)

How much faster is Mathematica on Windows than on Linux? If it is about the same speed, Linux has a big advantage in the freedom of what you can do with the kernel and OS configuration (just think of booting Windows from a RAM disk vs. Linux).

It might be beneficial to solve your problem by creating your own custom language / domain-specific language (DSL). You then compile that code to Mathematica, and if you need speed later, you compile your DSL to a CUDA/GPU target (or to C code). You just write different compilers for all the options (or better, get someone to write them). This has potential for a number of reasons: you can always run your software in Mathematica and compare results, in case your super-optimized code has a bug.
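[Editor's note: a minimal sketch of the "one DSL, several backends" idea: a tiny expression tree with two emitters, one producing Mathematica input and one producing C. All names and the node encoding here are invented for illustration:

```python
# tiny expression DSL: nodes are tuples ('num', v), ('var', name),
# ('+', left, right), or ('pow', base, exponent)

def emit_mathematica(e):
    """Emit the expression tree as Mathematica input."""
    kind = e[0]
    if kind == 'num': return str(e[1])
    if kind == 'var': return e[1]
    if kind == 'pow': return f"Power[{emit_mathematica(e[1])}, {emit_mathematica(e[2])}]"
    if kind == '+':   return f"Plus[{emit_mathematica(e[1])}, {emit_mathematica(e[2])}]"
    raise ValueError(f"unknown node kind: {kind}")

def emit_c(e):
    """Emit the same tree as a C expression."""
    kind = e[0]
    if kind == 'num': return str(e[1])
    if kind == 'var': return e[1]
    if kind == 'pow': return f"pow({emit_c(e[1])}, {emit_c(e[2])})"
    if kind == '+':   return f"({emit_c(e[1])} + {emit_c(e[2])})"
    raise ValueError(f"unknown node kind: {kind}")

# one source expression, two targets: S^2 + 1
payoff = ('+', ('pow', ('var', 'S'), ('num', 2)), ('num', 1))
```

The Mathematica output can serve as the reference implementation against which the C (or GPU) output is checked, exactly the cross-validation suggested above.]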
 
Alan
Topic Author
Posts: 8857
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

Re: Looking for hardware recommendations

March 20th, 2017, 8:39 pm

Thanks for your comments. As I mentioned, my current thinking on the problem that inspired this thread is to solve it with a better algorithm. But I am still in the market for a new general purpose desktop, as my current one is almost 4 years old, and your comments inspired a little shopping for systems supporting the i7-7700K chip you mentioned.  