
 
User avatar
quantmeh
Posts: 5974
Joined: April 6th, 2007, 1:39 pm

Three years from now: can you predict the Software Landscape in anno 2018?

December 6th, 2009, 11:53 pm

Quote (Originally posted by: jawabean): Intel was showing off a 48-core CPU. Would it compete with GPUs?

Quote (Originally posted by: outrun): I don't think so (for numerical jobs). Today's GPU servers (1U) have 2048 cores and 2 Tflops double precision. GPUs are more specialized, CPUs more generic.

Can you point to case studies on CPU -> GPU code porting and ROI?
 
User avatar
exneratunrisk
Posts: 3559
Joined: April 20th, 2004, 12:25 pm

Three years from now: can you predict the Software Landscape in anno 2018?

December 7th, 2009, 7:48 am

Quote (Originally posted by: jawabean): Intel was showing off a 48-core CPU. Would it compete with GPUs?

Quote (Originally posted by: outrun): I don't think so (for numerical jobs). Today's GPU servers (1U) have 2048 cores and 2 Tflops double precision. GPUs are more specialized, CPUs more generic.

Quote (Originally posted by: jawabean): Can you point to case studies on CPU -> GPU code porting and ROI?

Quote (Originally posted by: outrun): I don't have studies, just some personal observations:
* A friend at Shell uses GPUs for numerical work, running on hardware and code delivered by another friend at some other company.
* Matlab has introduced GPU extensions, and so has R.
* The new Apple OS X can offload threads to the GPU.
* Generic matrix algebra code (like FD) is easily ported to the GPU with CBLAS.
* GPUs deliver 1 GHz cores at a cost of $5-$10 per core, about 10x cheaper than CPUs.
I haven't done GPU coding myself yet, but I will sometime next year as part of a C++ numerical project.

I totally agree. We have been developing on GPUs (NVIDIA Tesla) for more than a year and are now optimizing for their latest "personal supercomputer" (CPUs plus GPUs). The C++ environment is still rather low level (does that have to do with the weakness of C++ at massive threading?). However, I expect all of the major PC makers will add GPUs ... price per performance is decreasing drastically. On the other hand, the lack of massive-threading support in plain C++ will limit its further expansion (Java is much better here). But do we need this speed if we use clever techniques, like principal component application or surrogate model implementations? IMO, yes, because it will open a path to much more brute-force algorithms (which are cheaper in development), and to multi-method and multi-strategy systems (overdue in quant finance, IMO).
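To make the "generic matrix algebra ports easily" point concrete, here is a minimal illustrative sketch of swapping a CPU dgemm call for its cuBLAS equivalent. Sizes and values are placeholders, not a benchmark:

#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <vector>
#include <cstdio>

int main() {
    const int n = 1024;                                     // placeholder size
    std::vector<double> A(n * n, 1.0), B(n * n, 2.0), C(n * n, 0.0);

    double *dA, *dB, *dC;
    cudaMalloc(&dA, n * n * sizeof(double));
    cudaMalloc(&dB, n * n * sizeof(double));
    cudaMalloc(&dC, n * n * sizeof(double));
    cudaMemcpy(dA, A.data(), n * n * sizeof(double), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, B.data(), n * n * sizeof(double), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const double alpha = 1.0, beta = 0.0;
    // C = alpha*A*B + beta*C, column-major, same semantics as the CPU dgemm it replaces
    cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, dA, n, dB, n, &beta, dC, n);

    cudaMemcpy(C.data(), dC, n * n * sizeof(double), cudaMemcpyDeviceToHost);
    std::printf("C[0] = %f (expect %f)\n", C[0], 2.0 * n);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}

Compile with nvcc and link against cublas; apart from the device allocations and copies, the call mirrors the BLAS interface the CPU code already uses.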
Last edited by exneratunrisk on December 6th, 2009, 11:00 pm, edited 1 time in total.
 
User avatar
zeta
Posts: 1952
Joined: September 27th, 2005, 3:25 pm

Three years from now: can you predict the Software Landscape in anno 2018?

December 7th, 2009, 12:51 pm

Two or three years ago I hadn't touched a GPU, and now it provides more work than I can handle. I have to agree with outrun and others: this trend will only continue, particularly now that Fermi is on the way. I can't understand why Intel is doing so poorly in this regard (Larrabee is dead on arrival), but I remember IBM put all their eggs in one basket with mainframes for a while; even big firms make bad wagers, I guess.

Heterogeneous computing in general is the way forward. Currently we have CPU/GPU combinations; I think things will only become more esoteric. We do FPGA + x dedicated machines for specific applications; perhaps a project with outrun is in the works. In the not-too-distant future I think we'll see FPGA-controlled quantum computers as well. I designed a scalable qubit model while a postdoc; the article should be in press soon.
 
User avatar
quantmeh
Posts: 5974
Joined: April 6th, 2007, 1:39 pm

Three years from now: can you predict the Software Landscape in anno 2018?

December 7th, 2009, 1:35 pm

Quote (Originally posted by: jawabean): Intel was showing off a 48-core CPU. Would it compete with GPUs?

Quote (Originally posted by: outrun): I don't think so (for numerical jobs). Today's GPU servers (1U) have 2048 cores and 2 Tflops double precision. GPUs are more specialized, CPUs more generic.

Quote (Originally posted by: jawabean): Can you point to case studies on CPU -> GPU code porting and ROI?

Quote (Originally posted by: outrun): I don't have studies, just some personal observations:
* A friend at Shell uses GPUs for numerical work, running on hardware and code delivered by another friend at some other company.
* Matlab has introduced GPU extensions, and so has R.
* The new Apple OS X can offload threads to the GPU.
* Generic matrix algebra code (like FD) is easily ported to the GPU with CBLAS.
* GPUs deliver 1 GHz cores at a cost of $5-$10 per core, about 10x cheaper than CPUs.
I haven't done GPU coding myself yet, but I will sometime next year as part of a C++ numerical project.

This is great stuff, if only it were in a case study or presentation. I'm working on the business case and need references. I totally believe that GPUs are very promising and can be complementary to cloud computing, but I need some numbers and an overview of trends in the field. I'm good with clouds, but I haven't done much with GPUs yet; I hope to get the project started.
 
User avatar
Traden4Alpha
Posts: 23951
Joined: September 20th, 2002, 8:30 pm

Three years from now: can you predict the Software Landscape in anno 2018?

December 7th, 2009, 1:46 pm

Quote (Originally posted by: zeta): I can't understand why Intel is doing so poorly in this regard (Larrabee is dead on arrival), but I remember IBM put all their eggs in one basket with mainframes for a while; even big firms make bad wagers, I guess.

If you study the history of innovation, you can see why large incumbents do exactly what they do when something new pops up. Intel faces the following wager: 1) put more money into its well-understood, high-margin, high-volume, widely adopted CPUs; or 2) risk tons of money on an unproven, niche technology that can't do everything those Intel CPUs can do. If the answer to that wager isn't obvious enough, then Intel executives will probably ask themselves two questions: what percentage of PC users really use GPUs for non-graphics work (answer: a fraction of a percent), and can Windows run on a GPU (answer: no)? The point is that GPUs don't look that promising in the eyes of an experienced, dominant CPU maker. Time will tell whether Intel is still paranoid enough to make the non-obvious choice by taking a big risk and turning its back on its bread-and-butter CPUs for something new like GPUs.

Quote (Originally posted by: zeta): Heterogeneous computing in general is the way forward. Currently we have CPU/GPU combinations; I think things will only become more esoteric. We do FPGA + x dedicated machines for specific applications; perhaps a project with outrun is in the works. In the not-too-distant future I think we'll see FPGA-controlled quantum computers as well. I designed a scalable qubit model while a postdoc; the article should be in press soon.

You are right that things will become more esoteric, but I predict that GPUs will be subsumed into mainstream computing in the same way that math coprocessors went from being esoteric separate chips used in a minority of computers to being integrated functionality in virtually all CPUs. Overall, I think nVidia will have a much easier time adding a few generic x86 cores to its chips than Intel will have trying to add a thousand cores of cutting-edge GPU to its chips.
 
User avatar
Cuchulainn
Topic Author
Posts: 59921
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Three years from now: can you predict the Software Landscape in anno 2018?

December 7th, 2009, 3:54 pm

Quantnet on CUDA
 
User avatar
exneratunrisk
Posts: 3559
Joined: April 20th, 2004, 12:25 pm

Three years from now: can you predict the Software Landscape in anno 2018?

December 7th, 2009, 4:36 pm

Quote (Originally posted by: Cuchulainn): Quantnet on CUDA

Binomial option pricing? Making a weak approach faster? I see Monte Carlo and Longstaff-Schwartz, "global" optimization of objective functionals, principal component application, SVM and kernel methods, ...
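For the Monte Carlo case, a minimal illustrative sketch: one GPU thread per GBM path, each writing a discounted call payoff. All parameters are placeholders, and the Longstaff-Schwartz regression step is left out:

#include <cuda_runtime.h>
#include <curand_kernel.h>
#include <vector>
#include <cstdio>
#include <cmath>

__global__ void mcEuropeanCall(double S0, double K, double r, double sigma,
                               double T, int nSteps, double *payoffs,
                               int nPaths, unsigned long long seed) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nPaths) return;

    curandState state;
    curand_init(seed, i, 0, &state);            // independent stream per path

    double dt = T / nSteps, S = S0;
    for (int s = 0; s < nSteps; ++s) {
        double z = curand_normal_double(&state);
        S *= exp((r - 0.5 * sigma * sigma) * dt + sigma * sqrt(dt) * z);
    }
    payoffs[i] = exp(-r * T) * fmax(S - K, 0.0);
}

int main() {
    const int nPaths = 1 << 20, nSteps = 252;   // placeholder sizes
    double *dPay;
    cudaMalloc(&dPay, nPaths * sizeof(double));

    mcEuropeanCall<<<(nPaths + 255) / 256, 256>>>(100.0, 100.0, 0.05, 0.2, 1.0,
                                                  nSteps, dPay, nPaths, 1234ULL);

    // Average on the host for clarity; a device-side reduction would be faster.
    std::vector<double> pay(nPaths);
    cudaMemcpy(pay.data(), dPay, nPaths * sizeof(double), cudaMemcpyDeviceToHost);
    double sum = 0.0;
    for (int i = 0; i < nPaths; ++i) sum += pay[i];
    std::printf("MC call price: %f\n", sum / nPaths);

    cudaFree(dPay);
    return 0;
}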
 
User avatar
exneratunrisk
Posts: 3559
Joined: April 20th, 2004, 12:25 pm

Three years from now: can you predict the Software Landscape in anno 2018?

December 7th, 2009, 4:41 pm

I agree with T4A: there is a risk that niche technology producers run out of breath. I used a Symbolics Lisp machine once ... and paid to learn that lesson. So, we implement on GPU ...
 
User avatar
Cuchulainn
Topic Author
Posts: 59921
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Three years from now: can you predict the Software Landscape in anno 2018?

December 7th, 2009, 4:42 pm

Quote (Originally posted by: Cuchulainn): Quantnet on CUDA

Quote (Originally posted by: exneratunrisk): Binomial option pricing? Making a weak approach faster? I see Monte Carlo and Longstaff-Schwartz, "global" optimization of objective functionals, principal component application, SVM and kernel methods, ...

Large-scale trading simulations, sensitivity analysis, what-if scenarios...
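As a minimal sketch of the what-if/sensitivity case: one GPU thread per scenario, bump-and-revalue delta under plain Black-Scholes. The spot grid, bump size and parameters are illustrative placeholders:

#include <cuda_runtime.h>
#include <vector>
#include <cstdio>
#include <cmath>

__device__ double normCdf(double x) { return 0.5 * erfc(-x / sqrt(2.0)); }

__device__ double bsCall(double S, double K, double r, double sigma, double T) {
    double d1 = (log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * sqrt(T));
    double d2 = d1 - sigma * sqrt(T);
    return S * normCdf(d1) - K * exp(-r * T) * normCdf(d2);
}

__global__ void scenarioDeltas(const double *spots, double *deltas, int n,
                               double K, double r, double sigma, double T) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    double h = 0.01 * spots[i];                 // 1% bump per scenario
    deltas[i] = (bsCall(spots[i] + h, K, r, sigma, T)
               - bsCall(spots[i] - h, K, r, sigma, T)) / (2.0 * h);
}

int main() {
    const int n = 1024;                         // placeholder scenario count
    std::vector<double> spots(n), deltas(n);
    for (int i = 0; i < n; ++i) spots[i] = 50.0 + 0.1 * i;   // spot scenario grid

    double *dSpots, *dDeltas;
    cudaMalloc(&dSpots, n * sizeof(double));
    cudaMalloc(&dDeltas, n * sizeof(double));
    cudaMemcpy(dSpots, spots.data(), n * sizeof(double), cudaMemcpyHostToDevice);

    scenarioDeltas<<<(n + 255) / 256, 256>>>(dSpots, dDeltas, n, 100.0, 0.05, 0.2, 1.0);

    cudaMemcpy(deltas.data(), dDeltas, n * sizeof(double), cudaMemcpyDeviceToHost);
    std::printf("delta at S=%.1f is %.4f\n", spots[n / 2], deltas[n / 2]);

    cudaFree(dSpots); cudaFree(dDeltas);
    return 0;
}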
 
User avatar
exneratunrisk
Posts: 3559
Joined: April 20th, 2004, 12:25 pm

Three years from now: can you predict the Software Landscape in anno 2018?

December 9th, 2009, 7:39 am

Quote (Originally posted by: Cuchulainn): Quantnet on CUDA

Quote (Originally posted by: exneratunrisk): Binomial option pricing? Making a weak approach faster? I see Monte Carlo and Longstaff-Schwartz, "global" optimization of objective functionals, principal component application, SVM and kernel methods, ...

Quote (Originally posted by: Cuchulainn): Large-scale trading simulations, sensitivity analysis, what-if scenarios...

This opens up another challenge for me: coarse- and fine-grained parallelism in CPU+GPU clouds. I imagine your optimized PDE and PIDE solvers are not so easy to parallelize, but distributed solving of grid variations is easy? (This is why we have given domain-segmented PDE solving on GPUs a low priority, relative to the other things mentioned.)

P.S. Having all this (GPU) power, we have done some challenging tests on Heston calibration (objective function optimization). It is as Paul always states: with too many parameters, information "evaporates".
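A minimal sketch of the easy, coarse-grained case: each GPU thread runs its own small, independent explicit-FD solve for one volatility variation. Grid size, step count and the nearest-node read-off are placeholders chosen for brevity; the step count must respect the explicit stability bound:

#include <cuda_runtime.h>
#include <vector>
#include <cstdio>
#include <cmath>

#define NODES 65                                // small space grid per variation

// One thread = one complete explicit-FD solve for one parameter variation (sigma).
__global__ void priceVariations(const double *sigmas, double *prices, int nVar,
                                double S0, double K, double r, double T,
                                double Smax, int nTimeSteps) {
    int v = blockIdx.x * blockDim.x + threadIdx.x;
    if (v >= nVar) return;

    double sigma = sigmas[v];
    double dS = Smax / (NODES - 1), dt = T / nTimeSteps;
    double V[NODES], Vnew[NODES];

    for (int i = 0; i < NODES; ++i)             // terminal call payoff
        V[i] = fmax(i * dS - K, 0.0);

    for (int t = 0; t < nTimeSteps; ++t) {      // march backwards in time
        for (int i = 1; i < NODES - 1; ++i) {
            double S = i * dS;
            double dVdS   = (V[i + 1] - V[i - 1]) / (2.0 * dS);
            double d2VdS2 = (V[i + 1] - 2.0 * V[i] + V[i - 1]) / (dS * dS);
            Vnew[i] = V[i] + dt * (0.5 * sigma * sigma * S * S * d2VdS2
                                   + r * S * dVdS - r * V[i]);
        }
        Vnew[0] = 0.0;
        Vnew[NODES - 1] = Smax - K * exp(-r * (t + 1) * dt);
        for (int i = 0; i < NODES; ++i) V[i] = Vnew[i];
    }
    prices[v] = V[(int)(S0 / dS)];              // nearest-node read-off at S0
}

int main() {
    const int nVar = 256, nTimeSteps = 4000;    // placeholder sizes
    std::vector<double> sigmas(nVar), prices(nVar);
    for (int v = 0; v < nVar; ++v) sigmas[v] = 0.10 + 0.40 * v / (nVar - 1);

    double *dSig, *dPrice;
    cudaMalloc(&dSig, nVar * sizeof(double));
    cudaMalloc(&dPrice, nVar * sizeof(double));
    cudaMemcpy(dSig, sigmas.data(), nVar * sizeof(double), cudaMemcpyHostToDevice);

    priceVariations<<<(nVar + 63) / 64, 64>>>(dSig, dPrice, nVar,
                                              100.0, 100.0, 0.05, 1.0, 200.0, nTimeSteps);

    cudaMemcpy(prices.data(), dPrice, nVar * sizeof(double), cudaMemcpyDeviceToHost);
    std::printf("sigma=%.2f -> price %.4f\n", sigmas[0], prices[0]);

    cudaFree(dSig); cudaFree(dPrice);
    return 0;
}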
Last edited by exneratunrisk on December 8th, 2009, 11:00 pm, edited 1 time in total.
 
User avatar
Cuchulainn
Topic Author
Posts: 59921
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Three years from now: can you predict the Software Landscape in anno 2018?

December 9th, 2009, 8:27 am

I can parallelise P(I)DE at the matrix level, let's say in 2d or 3d, but in general latency means that single-threaded is almost as fast, grosso modo. It's better to focus on other issues. Besides, the current crop of FDM methods is inherently based on sequential thinking, although parallel FDM algorithms (dating from the 19th century) do exist. If you wanted to calculate sensitivities or calibrate with a PDE, that's a different story.

Quote (Originally posted by: exneratunrisk): P.S. Having all this (GPU) power, we have done some challenging tests on Heston calibration (objective function optimization). It is as Paul always states: with too many parameters, information "evaporates".

What's too many?
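For contrast with the coarse-grained variation-per-thread case, a minimal sketch of the fine-grained, matrix-level version: one thread per space node and one kernel launch per explicit time step, so every step pays a launch latency, which for a small 1-d grid can cancel the parallel gain:

#include <cuda_runtime.h>
#include <vector>
#include <algorithm>
#include <cstdio>
#include <cmath>

// One thread per grid node; global synchronisation happens at launch boundaries.
__global__ void explicitStep(const double *Vold, double *Vnew, int n, double dS,
                             double dt, double r, double sigma, double K,
                             double Smax, double tau) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (i == 0)     { Vnew[i] = 0.0; return; }                      // lower boundary
    if (i == n - 1) { Vnew[i] = Smax - K * exp(-r * tau); return; } // upper boundary

    double S = i * dS;
    double dVdS   = (Vold[i + 1] - Vold[i - 1]) / (2.0 * dS);
    double d2VdS2 = (Vold[i + 1] - 2.0 * Vold[i] + Vold[i - 1]) / (dS * dS);
    Vnew[i] = Vold[i] + dt * (0.5 * sigma * sigma * S * S * d2VdS2
                              + r * S * dVdS - r * Vold[i]);
}

int main() {
    const int n = 257, nSteps = 20000;          // placeholder grid and step count
    const double Smax = 200.0, K = 100.0, r = 0.05, sigma = 0.2, T = 1.0;
    const double dS = Smax / (n - 1), dt = T / nSteps;

    std::vector<double> V(n);
    for (int i = 0; i < n; ++i) V[i] = std::fmax(i * dS - K, 0.0);  // call payoff

    double *dV, *dVnew;
    cudaMalloc(&dV, n * sizeof(double));
    cudaMalloc(&dVnew, n * sizeof(double));
    cudaMemcpy(dV, V.data(), n * sizeof(double), cudaMemcpyHostToDevice);

    for (int t = 0; t < nSteps; ++t) {          // host drives the time loop
        explicitStep<<<(n + 127) / 128, 128>>>(dV, dVnew, n, dS, dt, r, sigma,
                                               K, Smax, (t + 1) * dt);
        std::swap(dV, dVnew);                   // ping-pong buffers between steps
    }

    cudaMemcpy(V.data(), dV, n * sizeof(double), cudaMemcpyDeviceToHost);
    std::printf("V(S=%.1f) = %.4f\n", 0.5 * Smax, V[n / 2]);

    cudaFree(dV); cudaFree(dVnew);
    return 0;
}

The ping-pong buffer swap between launches is what stands in for the global synchronisation a single-threaded loop gets for free; that per-step launch overhead is exactly the latency issue mentioned above.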
Last edited by Cuchulainn on December 8th, 2009, 11:00 pm, edited 1 time in total.
 
User avatar
exneratunrisk
Posts: 3559
Joined: April 20th, 2004, 12:25 pm

Three years from now: can you predict the Software Landscape in anno 2018?

December 9th, 2009, 10:18 am

Quote (Originally posted by: Cuchulainn): I can parallelise P(I)DE at the matrix level, let's say in 2d or 3d, but in general latency means that single-threaded is almost as fast, grosso modo. It's better to focus on other issues. Besides, the current crop of FDM methods is inherently based on sequential thinking, although parallel FDM algorithms (dating from the 19th century) do exist. If you wanted to calculate sensitivities or calibrate with a PDE, that's a different story.

Quote (Originally posted by: exneratunrisk): P.S. Having all this (GPU) power, we have done some challenging tests on Heston calibration (objective function optimization). It is as Paul always states: with too many parameters, information "evaporates".

Quote (Originally posted by: Cuchulainn): What's too many?

Case-dependent. How easily can you get stuck in one of many local minima, for example (and how do you "know")? It is not evidence, just a feeling: Bates (with jumps) might help you out of local minima better than Heston (BECAUSE of its jumping capabilities?). But it has other complexity dangers. So: as simple as possible.
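One cheap way to use GPU parallelism against exactly this getting-stuck risk is a multi-start screen: evaluate the calibration objective at many random points in parameter space in parallel, then seed a local optimizer from the best candidates. In the sketch below, objective() is a toy placeholder standing in for a full Heston repricing of the quote set:

#include <cuda_runtime.h>
#include <curand_kernel.h>
#include <vector>
#include <cstdio>

struct HestonParams { double kappa, theta, sigma, rho, v0; };

// Toy placeholder objective: a real one would reprice the Heston quote set for
// the candidate parameters and return the sum of squared pricing errors.
__device__ double objective(const HestonParams &p) {
    double e1 = p.kappa * p.theta - 0.04, e2 = p.sigma - 0.30,
           e3 = p.rho + 0.70,             e4 = p.v0 - 0.04;
    return e1 * e1 + e2 * e2 + e3 * e3 + e4 * e4;
}

__global__ void screenStarts(HestonParams *params, double *errors, int n,
                             unsigned long long seed) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    curandState s;
    curand_init(seed, i, 0, &s);

    HestonParams p;                              // random draw in plausible ranges
    p.kappa = 0.50 + 5.00 * curand_uniform_double(&s);
    p.theta = 0.01 + 0.20 * curand_uniform_double(&s);
    p.sigma = 0.05 + 0.95 * curand_uniform_double(&s);
    p.rho   = -0.95 + 1.50 * curand_uniform_double(&s);
    p.v0    = 0.01 + 0.20 * curand_uniform_double(&s);

    params[i] = p;
    errors[i] = objective(p);
}

int main() {
    const int n = 1 << 16;                       // placeholder number of starts
    HestonParams *dP; double *dE;
    cudaMalloc(&dP, n * sizeof(HestonParams));
    cudaMalloc(&dE, n * sizeof(double));

    screenStarts<<<(n + 255) / 256, 256>>>(dP, dE, n, 42ULL);

    std::vector<HestonParams> P(n);
    std::vector<double> E(n);
    cudaMemcpy(P.data(), dP, n * sizeof(HestonParams), cudaMemcpyDeviceToHost);
    cudaMemcpy(E.data(), dE, n * sizeof(double), cudaMemcpyDeviceToHost);

    int best = 0;                                // best candidate(s) become starting
    for (int i = 1; i < n; ++i)                  // points for a local optimizer
        if (E[i] < E[best]) best = i;
    std::printf("best error %.6f at kappa=%.3f theta=%.3f\n",
                E[best], P[best].kappa, P[best].theta);

    cudaFree(dP); cudaFree(dE);
    return 0;
}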
Last edited by exneratunrisk on December 8th, 2009, 11:00 pm, edited 1 time in total.
 
User avatar
FastExcel
Posts: 50
Joined: December 2nd, 2003, 8:10 am

Three years from now: can you predict the Software Landscape in anno 2018?

December 10th, 2009, 8:58 pm

64-bit Excel does support VBA - but not VB6.

> The brain-damaged decision not to support VBA in 64-bit Excel means that very few people will have migrated to it, and those few firms that have put it on the trading desk will be getting DefCon 2 grade grief from users.

I can't see too much traction for .NET until they solve the Excel-to-.NET performance problem (more than 3 years away, I think).