SERVING THE QUANTITATIVE FINANCE COMMUNITY

quantmeh
Posts: 5974
Joined: April 6th, 2007, 1:39 pm

### Three years from now: can you predict the Software Landscape in anno 2018?

> **outrun wrote:**
> > **jawabean wrote:**
> > intel was showing off a 48-core CPU. would it compete with GPU?
>
> I don't think so (for numerical jobs). Today's GPU servers (1U) have 2048 cores, 2 Tflop double precision. GPUs are more specialized, CPUs more generic.

can you point to case studies on CPU -> GPU code porting and ROI?

exneratunrisk
Posts: 3559
Joined: April 20th, 2004, 12:25 pm

### Three years from now: can you predict the Software Landscape in anno 2018?

> **outrun wrote:**
> > **jawabean wrote:**
> > can you point to case studies on CPU -> GPU code porting and ROI?
>
> I don't have studies, just some personal observations:
> * a friend at Shell uses GPUs for numerical work, running on hardware + code delivered by another friend at some other company
> * Matlab has introduced GPU extensions, and so has R
> * the new Apple OS X can offload threads to the GPU
> * generic matrix algebra code (like FD) is easily ported to GPU with CBLAS
> * GPUs deliver 1 GHz cores at $5-$10 per core, about 10x cheaper than CPU cores
>
> I haven't done GPU coding myself yet, but I will sometime next year as part of a C++ numerical project.

I totally agree. We have developed on GPU (NVIDIA Tesla) for more than a year and are now optimizing for their latest "personal supercomputer" (CPUs plus GPUs). The C++ environment is still lower level (does this have to do with the weakness of C++ at massive threading?). Even so, I expect all the major PC makers will add GPUs ... price per performance is decreasing drastically. On the other hand, the lack of massive-threading support in plain C++ will limit its further expansion (Java is much better here).

But do we need this speed if we use clever techniques, like principal component analysis or surrogate-model implementations? IMO yes, because it will open a path to much more brute-force algorithms (which are cheaper in development), and to multi-method and multi-strategy systems (overdue in quant finance, IMO).
Last edited by exneratunrisk on December 6th, 2009, 11:00 pm, edited 1 time in total.

zeta
Posts: 1973
Joined: September 27th, 2005, 3:25 pm
Location: Houston, TX
Contact:

### Three years from now: can you predict the Software Landscape in anno 2018?

2-3 years ago I hadn't touched a GPU, and now it provides more work than I can handle. I have to agree with outrun and others: this trend will only continue, particularly now that Fermi is on the way. I can't understand why Intel is doing so poorly in this regard (Larrabee is dead on arrival), but I remember IBM put all their eggs in one basket with mainframes for a while; even big firms make bad wagers, I guess.

Heterogeneous computing in general is the way forward. Currently we have CPU/GPU combinations; I think things will only become more esoteric. We do FPGA + x dedicated machines for specific applications; perhaps a project with outrun is in the works. In the not-too-distant future I think we'll see FPGA-controlled quantum computers as well. I designed a scalable qubit model while a postdoc; the article should be in press soon.

quantmeh
Posts: 5974
Joined: April 6th, 2007, 1:39 pm

### Three years from now: can you predict the Software Landscape in anno 2018?

> **outrun wrote:**
> I don't have studies, just some personal observations: [...] I haven't done GPU coding myself yet, but I will sometime next year as part of a C++ numerical project.

this is great stuff, if only it were in a case study or presentation. i'm working on the business case and need references. i totally believe that GPU is very promising and can be complementary to cloud computing, but i need some numbers and an overview of trends in the field. i'm good with clouds, but haven't done much with GPUs yet; hope to get the project started.

Cuchulainn
Topic Author
Posts: 62132
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

### Three years from now: can you predict the Software Landscape in anno 2018?

Quantnet on CUDA

exneratunrisk
Posts: 3559
Joined: April 20th, 2004, 12:25 pm

### Three years from now: can you predict the Software Landscape in anno 2018?

> **Cuchulainn wrote:**
> Quantnet on CUDA

Binomial option pricing? Making a weak approach faster? I see Monte Carlo and Longstaff-Schwartz, "global" optimization of objective functionals, principal component analysis, SVM and kernel methods, ....

exneratunrisk
Posts: 3559
Joined: April 20th, 2004, 12:25 pm

### Three years from now: can you predict the Software Landscape in anno 2018?

I agree with T4A, there is a risk that niche-technology producers run out of breath. I used a Symbolics Lisp machine once ... and paid for that lesson. So, we implement on GPU ....

Cuchulainn
Topic Author
Posts: 62132
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

### Three years from now: can you predict the Software Landscape in anno 2018?

> **exneratunrisk wrote:**
> Binomial option pricing? Making a weak approach faster? I see Monte Carlo and Longstaff-Schwartz, "global" optimization of objective functionals, principal component analysis, SVM and kernel methods, ....

Large-scale trading simulations, sensitivity analysis, what-if scenarios...

exneratunrisk
Posts: 3559
Joined: April 20th, 2004, 12:25 pm

### Three years from now: can you predict the Software Landscape in anno 2018?

> **Cuchulainn wrote:**
> Large-scale trading simulations, sensitivity analysis, what-if scenarios...

This opens up another challenge for me: coarse- and fine-grained parallelism in CPU+GPU clouds. I can imagine your optimized PDE and PIDE solvers are not so easy to parallelize, but distributed solving of grid variations is easy? (This is why we have given domain-segmented PDE solving on GPUs a low priority, relative to the other things mentioned.)

p.s. Having all this (GPU) power, we have run some challenging tests on Heston calibration (objective-function optimization). It is as Paul always states: with too many parameters, information "evaporates".
Last edited by exneratunrisk on December 8th, 2009, 11:00 pm, edited 1 time in total.

Cuchulainn
Topic Author
Posts: 62132
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

### Three years from now: can you predict the Software Landscape in anno 2018?

I can parallelise P(I)DE at the matrix level, let's say in 2d or 3d, but in general latency means that single-threaded is almost as fast, grosso modo. It's better to focus on other issues. Besides, the current crop of FDM methods is inherently based on sequential thinking. But parallel FDM algorithms (dating from the 19th century) do exist. If you wanted to calculate sensitivities or calibrate with PDE, that's a different story.

> **exneratunrisk wrote:**
> p.s. Having all this (GPU) power, we have run some challenging tests on Heston calibration (objective-function optimization). It is as Paul always states: with too many parameters, information "evaporates".

What's too many?
Last edited by Cuchulainn on December 8th, 2009, 11:00 pm, edited 1 time in total.

exneratunrisk
Posts: 3559
Joined: April 20th, 2004, 12:25 pm

### Three years from now: can you predict the Software Landscape in anno 2018?

> **Cuchulainn wrote:**
> What's too many?

Case-dependent. How easily can you get stuck in one of many local minima, for example (and how do you "know" you have)? It is not evidence, just a feeling: Bates (with jumps) might get you out of local minima better than Heston (BECAUSE of its jumping capability?). But it has other complexity dangers. So: as simple as possible.
Last edited by exneratunrisk on December 8th, 2009, 11:00 pm, edited 1 time in total.

FastExcel
Posts: 50
Joined: December 2nd, 2003, 8:10 am

### Three years from now: can you predict the Software Landscape in anno 2018?

64-bit Excel does support VBA - but not VB6.

> The brain-damaged decision not to support VBA in 64-bit Excel means that very few people will have migrated to it, and those
> few firms that have put it on the trading desk will be getting DefCon 2 grade grief from users.

I can't see much traction for .NET until they solve the Excel-to-.NET performance problem (more than 3 years away, I think).