SERVING THE QUANTITATIVE FINANCE COMMUNITY

Cuchulainn
Posts: 59665
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Parallel RNG and distributed MC

January 26th, 2012, 8:44 pm

Quote, originally posted by outrun:
"...for the job queue, I'm going to test this thread-safe adaptation of std::queue. It uses boost::mutex and has some nice features, like the threads not having to keep polling the queue when it is (temporarily) empty. http://www.justsoftwaresolutions.co.uk/ ... ables.html"

???? "examples use the types and functions from the upcoming 1.35 release of Boost" -- I would use my thread-safe queue, http://www.quantnet.com/cplusplus-multithreading-boost/ (section 18.11), the PPL concurrent queue, or TPL ditto. They are shipping now. Otherwise you will get bogged down in detail; others and I have done this already in C++ and C#.
Last edited by Cuchulainn on January 25th, 2012, 11:00 pm, edited 1 time in total.
 
Cuchulainn

Parallel RNG and distributed MC

January 26th, 2012, 9:06 pm

Quote, originally posted by outrun (replying to the exchange above):
"That's all MS/Windows specific, and I think it already works... but you're right in general; only, this is so tiny it's not worthwhile to introduce these external dependencies (and I like the condition-variable non-polling thing).
* I got an answer about Random123 with Boost (will post it later).
* I almost have a simple gbm_vanilla_call distribution<> V(...); finished, compatible with boost::random::uniform_int_distribution<> six(1,6);"

My queue is not Windows-specific, but I stick to PPL and TPL for the moment. You will have to test your new queue + sentinel/poison pill. Random123 does look promising, indeed. We need N(0,1) variates, yes? // Boost should have concurrent collections.
Last edited by Cuchulainn on January 25th, 2012, 11:00 pm, edited 1 time in total.
 
Cuchulainn

Parallel RNG and distributed MC

January 27th, 2012, 6:53 am

Quote, originally posted by outrun:
"here is the 'gbm vanilla call sampler'"

Very good. This is standard Boost till now, I presume, and is the same as in MC102. I suppose Random123 is the next engine on the list? BTW, I would make the queue into a concept, because we need to support a number of scalable ones (PPL, TPL, (MKL?)). Question: how do we generate N(0,1) variates based on Random123? Going to sleep now! Don't blocking-wait/spinlock.
Last edited by Cuchulainn on January 26th, 2012, 11:00 pm, edited 1 time in total.
 
Cuchulainn

Parallel RNG and distributed MC

January 27th, 2012, 7:22 am

Quote:
"Regarding the queue, at some point it might not actually be a queue, but an algorithm creating jobs on the fly, so we indeed need to define the interface!"

You can do it hard-coded in V1, as long as we have U(0,1) and N(0,1) variates. We can always make adapters later.
Last edited by Cuchulainn on January 26th, 2012, 11:00 pm, edited 1 time in total.
 
Alan
Posts: 9783
Joined: December 19th, 2001, 4:01 am
Location: California

Parallel RNG and distributed MC

January 27th, 2012, 3:25 pm

Philosophical question. I can see the point of doing MC on GPU cores -- after all, there are a lot of them. But I don't see the point of bothering with (say, Intel) multi-cores. I mean, how many are we talking about: 6-8? Why bother?
 
Cuchulainn

Parallel RNG and distributed MC

January 27th, 2012, 3:51 pm

Quote, originally posted by Alan:
"Philosophical question. I can see the point of doing MC on GPU cores -- after all, there are a lot of them. But I don't see the point of bothering with (say, Intel) multi-cores. I mean, how many are we talking about: 6-8? Why bother?"

Many reasons. Maybe GPU and Intel?
1. GPUs are not suitable for MIMD-type applications, and the compiler is proprietary. A speedup of 200 is possible, but engineers use the efficiency metric (bad).
2. Intel is used by everyone; it is a standard. GPU is niche and restricted in its applicability, AFAIK.
3. I doubt GPU works for PDE and non-SIMD problems. (I even heard speedup < 1.)
4. Intel/AMD and GPU speeds will converge (imo).
5. No time to hobby with h/w cards. I don't like non-ISO languages, especially from an ROI viewpoint.
6. And what if? Then maybe do RNG via an F# interface to Excel (wild guess).
7. MPI.
Last edited by Cuchulainn on January 26th, 2012, 11:00 pm, edited 1 time in total.
 
Cuchulainn

Parallel RNG and distributed MC

January 27th, 2012, 3:56 pm

Quote, originally posted by outrun:
"The Teslas now do MIMD."

Any results on non-MC-type applications?
ABOUT WILMOTT

Wilmott.com has been "Serving the Quantitative Finance Community" since 2001.