
### Parallel RNG and distributed MC

Posted: **August 31st, 2012, 11:09 pm**

by **MiloRambaldi**

> **Originally posted by: Cuchulainn**
> I think this is similar to what I am doing. I think we have similar designs but different syntax. To scope the problem, I propose modelling the concepts/classes:
> 1. SDE
> 2. FDM
> 3. Mesher
> (4. RNG?)
>
> Of course I have a solution in MC1, but I do see that std::transform() with a function is going to save a lot of loops. What do you think? This model is small enough to keep us on track and big enough to see essential difficulties.

I have been looking into MC1. I decided to make a more user-friendly version of TestMC as a way of getting acquainted with MC1. I used boost::program_options for parameters, so it is now command line rather than interactive input. For starters, I allow the user to select which PRNG to use. I assume this would be a more useful program if the user could also select the model and the FDM scheme using command line parameters as well? This will probably be my next step.

I replaced the statistics functions called from MCReporter with calls to the statistics library I mentioned in another thread (there was a typo causing the wrong sd and se to be given, btw). Here is a run of the program with no parameters given. It displays the available command line parameters to let the user know they are available:

test_MC.exe

The code has been checked into qfcl/random because I don't have write access to qfcl/MC1. In any case, there are also dependencies, so it is easier to download everything from qfcl/random.

I still have no idea what RAT is all about or how it is relevant to the MC1 design. Also, I have no idea what MIS Agent is supposed to be. It might help if the references "Duffy 2010" and "Duffy 2004" in the brief documentation were specified. I also don't get what boost::signals is about or how it fits in here. Probably I should look at the boost::signals docs.
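As an aside on the sd/se typo mentioned above: the two quantities are easy to confuse. A minimal generic sketch (this is not the qfcl statistics library's actual interface, just the two textbook formulas):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sample standard deviation (unbiased, divisor n-1) of x_1..x_n.
double sample_sd(const std::vector<double>& x) {
    const std::size_t n = x.size();
    double mean = 0.0;
    for (double v : x) mean += v;
    mean /= static_cast<double>(n);
    double ss = 0.0;
    for (double v : x) ss += (v - mean) * (v - mean);
    return std::sqrt(ss / static_cast<double>(n - 1));
}

// Standard error of the sample mean: se = sd / sqrt(n).
// This is the quantity an MC reporter uses for confidence intervals.
double standard_error(const std::vector<double>& x) {
    return sample_sd(x) / std::sqrt(static_cast<double>(x.size()));
}
```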

### Parallel RNG and distributed MC

Posted: **September 2nd, 2012, 5:31 pm**

by **Cuchulainn**

That's nice! I have assembled some ppt and pdfs on these Domain Architectures, especially MIS and RAT, here. Duffy 2004 == Domain Architecture book (Wiley). The most important part of DA is that initial context/component diagram.

> I have been looking into MC1. I decided to make a more user-friendly version of TestMC as a way of getting acquainted with MC1. I used boost::program_options for parameters, so it is now command line rather than interactive input. For starters, I allow the user to select which PRNG to use. I assume this would be a more useful program if the user could also select the model and the FDM scheme using command line parameters as well? This will probably be my next step.

It is possible to combine command line and interactive options in the same main() by Abstract Factory (GOF) or using Boost Function Factory.

> I replaced the statistics functions called from MCReporter with calls to the statistics library I mentioned in another thread (there was a typo causing the wrong sd and se to be given, btw). Here is a run of the program with no parameters given. It displays the available command line parameters to let the user know they are available:

My MCReporter was very simple. It can be replaced by yours, of course. Does the interface remain stable?

> The code has been checked into qfcl/random because I don't have write access to qfcl/MC1. In any case, there are also dependencies, so it is easier to download everything from qfcl/random.

We can ask Admin to give you write access if you wish.

> I still have no idea what RAT is all about or how it is relevant to the MC1 design. Also, I have no idea what MIS Agent is supposed to be. It might help if the references "Duffy 2010" and "Duffy 2004" in the brief documentation were specified. I also don't get what boost::signals is about or how it fits in here. Probably I should look at the boost::signals docs.

I have 5 categories: MAN, RAT, MIS, PCS and ACS (see my link). Signals (and Signals2) is the Observer pattern on steroids. But it is _not_ OO and is very flexible.
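To make the "Observer pattern on steroids" remark concrete: Boost.Signals2 is, at its core, a list of callbacks invoked on emission, plus connection management, thread safety and return-value combining. A stripped-down toy version (not Signals2's actual API) looks like this:

```cpp
#include <functional>
#include <utility>
#include <vector>

// Bare-bones "signal": a list of std::function slots invoked in order on
// emission. Signals2 adds disconnection, scoped connections, thread
// safety and combiners on top of exactly this idea. Note there is no
// Subject base class or Observer interface -- slots are free-standing
// callables, which is why Cuchulainn calls it "not OO".
class Signal {
public:
    using Slot = std::function<void(double)>;
    void connect(Slot s) { slots_.push_back(std::move(s)); }
    void operator()(double x) const {        // emit: notify every slot
        for (const auto& s : slots_) s(x);
    }
private:
    std::vector<Slot> slots_;
};
```

In an MC setting a reporter could connect a statistics accumulator and a progress printer to the same "path finished" signal without either knowing about the other.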

### Parallel RNG and distributed MC

Posted: **January 29th, 2015, 9:30 am**

by **Cuchulainn**

Let's scope the problem even more; it's a very simple example/question. Let's say we want to generate a (large) matrix of random numbers. It's got to be done in parallel somehow. So, how to do it with a good speedup and no race conditions?

1. No correlations between the numbers in different sequences.
2. Scalable: support many processes/threads, each with its own sequence.
3. Locality: a thread can generate a sequence of random numbers with no inter-thread communication.

Ideally, it should be possible to dynamically create new RN streams.

This seems like a candidate RNG. The documentation does not seem to address points 1 or 3. The method uses leapfrogging, so dynamic stream creation is not supported, it seems.

And a practical question: how to use the library in loop-level parallelism in OpenMP (and C++11)? On an 8-core machine, for example, what is the speedup? Here is SRNG that looks interesting. At first glance it only seems to work on MPI(?)
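The race-condition and locality parts of the question (points 2 and 3) can be sketched with C++11 alone: give each thread its own engine and a disjoint set of matrix rows, so generation needs no inter-thread communication. Note the big caveat for point 1: seeding several mt19937_64 instances with different seeds does *not* guarantee statistically independent streams (that is exactly the problem dcmt solves); this sketch only shows the no-races structure.

```cpp
#include <cstddef>
#include <cstdint>
#include <random>
#include <thread>
#include <vector>

// Fill an nrows x ncols matrix with U(0,1) draws using nthreads threads.
// Each thread owns a private engine and writes a strided, disjoint set of
// rows, so there is no shared mutable state during generation.
std::vector<std::vector<double>>
parallel_uniform_matrix(std::size_t nrows, std::size_t ncols,
                        unsigned nthreads, std::uint64_t base_seed = 42) {
    std::vector<std::vector<double>> m(nrows, std::vector<double>(ncols));
    auto worker = [&](unsigned id) {
        // Per-thread engine: distinct seeds, but NOT provably
        // independent streams (see dcmt for that guarantee).
        std::mt19937_64 eng(base_seed + id);
        std::uniform_real_distribution<double> u(0.0, 1.0);
        for (std::size_t i = id; i < nrows; i += nthreads)   // disjoint rows
            for (std::size_t j = 0; j < ncols; ++j)
                m[i][j] = u(eng);
    };
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < nthreads; ++t) pool.emplace_back(worker, t);
    for (auto& th : pool) th.join();
    return m;
}
```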

### Parallel RNG and distributed MC

Posted: **January 29th, 2015, 2:02 pm**

by **pcaspers**

I didn't read through the whole thread, so sorry in advance if this is not relevant. Here is a ql wrapper for the dynamic creator for Mersenne Twister library dcmt; the homepage for the original library is here.

### Parallel RNG and distributed MC

Posted: **January 29th, 2015, 2:19 pm**

by **Cuchulainn**

> **Originally posted by: pcaspers**
> I didn't read through the whole thread, so sorry in advance if this is not relevant. Here is a ql wrapper for the dynamic creator for Mersenne Twister library dcmt; the homepage for the original library is here.

Thanks, Peter. That looks like what I am looking for. Normally #streams >> #threads (e.g. 100 versus 4), but that should be possible?

> Dynamic Creation (DC) of Mersenne Twister generators. For example, if you want 100 independent different random streams, in 100 parallel machines, then what you need to do is: call DC 100 times, with id numbers 0 -- 99. Then DC returns 100 different parameters for generators. Now you have 100 independent sources.
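The dcmt usage pattern quoted above ("call DC n times with id 0..n-1, get back n generators") can be sketched with the standard library. To be clear about what is and is not shown: dcmt searches for distinct MT *parameter sets* whose characteristic polynomials are relatively prime, which is where the independence guarantee comes from; the sketch below only mimics the one-engine-per-stream-id interface using std::seed_seq, giving distinct but not provably independent streams.

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Create n generators, one per stream id 0..n-1, mimicking the DC calling
// convention. seed_seq mixes the shared key with the id so each stream
// starts from a different, reproducible state. This is NOT dcmt: all
// engines share the same MT parameter set, so independence is heuristic.
std::vector<std::mt19937_64> make_streams(int n, std::uint32_t key = 4172) {
    std::vector<std::mt19937_64> streams;
    streams.reserve(static_cast<std::size_t>(n));
    for (int id = 0; id < n; ++id) {
        std::seed_seq seq{key, static_cast<std::uint32_t>(id)};
        streams.emplace_back(seq);       // engine dedicated to stream `id`
    }
    return streams;
}
```

With dcmt proper, this step would instead be 100 calls to the dynamic creator, each returning its own parameter set.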

### Parallel RNG and distributed MC

Posted: **January 29th, 2015, 2:32 pm**

by **pcaspers**

Yes, sure. The problem with the big MT is that dynamic creation takes a long time, therefore I only precomputed 8 distinct instances. Smaller p's can be computed much faster; maybe you do not even need to precompute them, you can do it just before you start your simulation.

### Parallel RNG and distributed MC

Posted: **January 29th, 2015, 2:36 pm**

by **Cuchulainn**

Great. So I am gonna test it on QL 1.4 with VS2013 and compare with my serial DE solver I want to apply it to.

> The problem with the big MT is that dynamic creation takes a long time, therefore I only precomputed 8 distinct instances.

Can this actual process be parallelized without danger of a race condition?

### Parallel RNG and distributed MC

Posted: **January 29th, 2015, 2:45 pm**

by **pcaspers**

It's not in 1.4; you'd have to pull this branch (or just copy the two files into your project). Both the creation process and the random number generation can be used multithreaded without taking care of anything special.

### Parallel RNG and distributed MC

Posted: **January 29th, 2015, 4:08 pm**

by **Cuchulainn**

> **Originally posted by: pcaspers**
> Here is a ql wrapper for the dynamic creator for Mersenne Twister library dcmt; the homepage for the original library is here.

Silly questions:
1. Can I just use the code as a standalone header in my project? (I think that's answered: 2 files.)
2. How do I get that code?

### Parallel RNG and distributed MC

Posted: **January 29th, 2015, 6:00 pm**

by **pcaspers**

There are dependencies on other QuantLib files, so you should copy the two dynamiccreator.?pp files into ql/experimental/math and update the all.hpp in that folder. Since you are on Windows, add them to the respective VS QuantLib 1.4 project folders (or filters), recompile QuantLib and it should work. I will send you the two files per mail.

### Parallel RNG and distributed MC

Posted: **January 29th, 2015, 7:22 pm**

by **Cuchulainn**

Yep, did all the steps and it was fine.

Nice stuff, Peter. BTW, this is not needed -> //#include <ql/methods/montecarlo/sample.hpp>

### Parallel RNG and distributed MC

Posted: **January 29th, 2015, 7:35 pm**

by **Cuchulainn**

Back to barracks:

> Let's say we want to generate a (large) matrix of random numbers.

The next step I want is to try OMP on my DE solver. I have several choices:

1. Loop-level parallelism (the outer loop).
2. Master-worker pattern.
3. Task decomposition (== 2 in this case??)

Testing? TestU01/Modified Diehard and NIST benchmarks.

Many thanks again, Peter.

// I'm having a look at QL::DE as well.
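Option 1 (loop-level parallelism on the outer loop) is the least invasive of the three. A sketch with a hypothetical per-path payoff (the mean of nsteps uniforms, standing in for the real DE/MC work): each iteration gets its own engine seeded from the loop index, so iterations share no state and the pragma introduces no races. Compiled without OpenMP, the pragma is simply ignored and the loop runs serially with the same result.

```cpp
#include <random>
#include <vector>

// Loop-level parallelism over MC paths. Per-iteration engines (seeded
// from the path index) keep iterations independent; each iteration writes
// only its own slot of path_mean, so the parallel loop is race-free.
double mc_mean(int npaths, int nsteps) {
    std::vector<double> path_mean(static_cast<std::size_t>(npaths));
#ifdef _OPENMP
#pragma omp parallel for schedule(static)
#endif
    for (int p = 0; p < npaths; ++p) {
        std::mt19937_64 eng(1000u + static_cast<unsigned>(p)); // private engine
        std::uniform_real_distribution<double> u(0.0, 1.0);
        double s = 0.0;
        for (int k = 0; k < nsteps; ++k) s += u(eng);   // hypothetical payoff
        path_mean[static_cast<std::size_t>(p)] = s / nsteps;
    }
    double total = 0.0;                 // serial reduction: deterministic
    for (double v : path_mean) total += v;
    return total / npaths;
}
```

The master-worker and task variants would distribute the same per-path work unit; for a plain rectangular loop like this, `schedule(static)` loop parallelism usually suffices.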

### Parallel RNG and distributed MC

Posted: **January 30th, 2015, 1:29 pm**

by **Cuchulainn**

Is this the way to create arrays of RNGs? And just to be clear, your class generates uniform random numbers based on the w parameter, so I need to scale to get U(0,1), U(a,b) etc.?
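If the raw output is a w-bit unsigned integer, the scaling in question is just a division by 2^w followed by an affine rescale. A sketch for w = 32 (the Mersenne Twister word size; whether the QL wrapper already does this internally is the question above):

```cpp
#include <cstdint>

// Map a raw 32-bit generator word to [0,1): x / 2^32.
// (Add 0.5/2^32 before scaling if you need the open interval (0,1),
// e.g. for inverse-CDF Gaussian sampling.)
double to_unit(std::uint32_t x) {
    return x * (1.0 / 4294967296.0);
}

// Affine rescale of the unit uniform to U(a,b).
double to_uniform(std::uint32_t x, double a, double b) {
    return a + (b - a) * to_unit(x);
}
```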