Yes, I saw them too. It seems you need the latest VS2017; I must download it.

Thanks. An interesting read on Clang for MSVC: https://clang.llvm.org/docs/MSVCCompatibility.html

- Cuchulainn
**Posts:** 59944 **Location:** Amsterdam

Very interesting. Well yes, unlikely doesn't mean impossible; I didn't even say highly unlikely. :) Does this need to be revised based on outrun's tests, for example? What were the assumptions? Wow, good tests! Thanks outrun. I am wondering now whether Clang could be faster for other code as well, not just the MT RNG. Any experience?

But a 3-fold difference in performance because of different compiler implementations on something as widely used as an RNG seems unlikely to me.

I am surprised, but then again an RNG may be a special case. Now I would be a lot more surprised if VS turned out to produce 3 or even 2 times slower PDE code. Then I could switch compiler and have my codes run 2-3 times faster. That would be something.

I use VS for PDE as well. But PDE codes do not use an RNG, just + and -. Then we only have to compare compilers, not 'embedded' libraries.
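Since a PDE solver is pure floating-point arithmetic, a tiny explicit finite-difference step makes a clean compiler micro-benchmark. Here is a minimal sketch (hypothetical, not from any of the codes discussed in this thread): one Euler step for the 1-D heat equation u_t = u_xx, with lambda = dt/dx^2 and zero Dirichlet boundaries.

```cpp
#include <cstddef>
#include <vector>

// One explicit Euler step for u_t = u_xx on a uniform grid.
// Pure + , - , * arithmetic, no RNG, no library calls: timing a loop
// of such steps compares compilers rather than 'embedded' libraries.
std::vector<double> heatStep(const std::vector<double>& u, double lambda)
{
    std::vector<double> next(u.size(), 0.0); // boundaries stay at 0 (Dirichlet)
    for (std::size_t i = 1; i + 1 < u.size(); ++i)
        next[i] = u[i] + lambda * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
    return next;
}
```

For stability of the explicit scheme one needs lambda <= 1/2; a benchmark would simply apply `heatStep` a few thousand times to a fixed initial profile and time the loop.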

- Cuchulainn

Now, getting back to outrun's accuracy question, I made some offhand remarks based on this MC on QuantLib. Instead of the Boost RNG you can use others. (p = 5.8462822). In general, I get worse answers with Boost than with C++11 (no idea why).

Code: Select all

```
// OneFactorMC101.cpp
//
// MC for 1-factor put = 5.8462822.
// (We use the GBM SDE.)
// djd
#include <boost/random/variate_generator.hpp>
#include <boost/random/mersenne_twister.hpp>
#include <boost/random.hpp>
#include <boost/random/normal_distribution.hpp>
#include <algorithm>
#include <cmath>
#include <ctime>
#include <iostream>
#include <ql/quantlib.hpp>
#include <ql/processes/geometricbrownianprocess.hpp>
#include "StopWatch.cpp"

double Payoff(double x, double K)
{
    return std::max(K - x, 0.0);
}

int main()
{
    boost::mt19937 mt;
    mt.seed(static_cast<boost::uint32_t>(std::time(0))); // V1
    boost::normal_distribution<> distribution;
    boost::variate_generator<boost::mt19937&, boost::normal_distribution<>>
        normal(mt, distribution);

    StopWatch<> sw;
    sw.Start();

    int numberSimulations = 1'000'000;
    double S0 = 60.0;
    double sigma = 0.3;
    double r = 0.08;
    double T = 0.25;
    double K = 65.0;
    double sum = 0.0;
    std::size_t NT = 500; // number of time steps (was declared double)
    double dt = T / static_cast<double>(NT);
    double Sprev, Snext = S0, x, t;
    double sqrtdt = std::sqrt(dt);
    QuantLib::GeometricBrownianMotionProcess gbm(S0, r, sigma);

    for (int iSim = 0; iSim < numberSimulations; ++iSim)
    {
        Sprev = S0;
        t = 0.0;
        for (std::size_t n = 0; n < NT; ++n)
        {
            x = normal();
            // How to create a path (exact, Euler, "direct"):
            // Snext = Sprev * std::exp((r - 0.5*sigma*sigma)*dt + sigma*sqrtdt*x);
            Snext = Sprev + gbm.drift(t, Sprev)*dt + gbm.diffusion(t, Sprev)*sqrtdt*x;
            // Snext = Sprev + r*Sprev*dt + sigma*Sprev*sqrtdt*x;
            Sprev = Snext;
            t += dt;
        }
        sum += Payoff(Snext, K);
    }

    std::cout << "\nput = " << std::exp(-r*T)*sum / numberSimulations << std::endl;
    sw.Stop();
    std::cout << "\nTime: " << sw.GetTime() << std::endl;
    return 0;
}
```

Strange, I bet it's a bug.

Do you get the exact same results for C++ and Boost when you don't seed with time(0)? That is what should happen, since the C++ and Boost MTs give the *exact* same pseudo-random number sequence (e.g. all my benchmarks print the sum of the random numbers, and they are identical for all MTs on all compilers, meaning they adhere to the standard as expected).

That would exclude differences in outcome due to random sampling noise caused by seeding.

Instead of the Boost-specific variate generator (which, if I remember correctly, keeps internal state, and which is a legacy interface) you can do it like this using the portable C++11 interfaces:


Code: Select all

```
std::mt19937 gen;             // note: "std::mt19937 gen();" declares a function
std::normal_distribution<> d;
...
x = d(gen);
```

- Cuchulainn

Indeed. But C++11 is slower on VS.

edit:

For the above problem, no seed, Boost == 5.83816 and C++11 == 5.83957.

more to come ...


So... we know that the engines give the exact same sequence of random numbers, yet when you run them through QL to transform them into paths and payoffs some other factor creeps in and the end results start to deviate. These transforms are deterministic.

So the slowness is in the engine ("eng"), not in std::normal_distribution?

Even so, you should still (pure boost, or mixed boost/C++11) be able to get rid of variate generator.

I think the variate generator might play a role. From what I remember it has specializations for the normal distribution: Box-Muller every other step, keeping one draw in a cache for the next call. I think the C++11 distributions are stricter: no state, predictable outcome.
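To illustrate the caching idea, here is a minimal sketch of a Box-Muller style normal generator with a cached spare sample (using the Marsaglia polar variant; this is hypothetical illustration, not Boost's or GCC's actual code):

```cpp
#include <cmath>
#include <random>

// Box-Muller (polar form) normal generator that caches the second sample.
// Because of the cached state, the mapping from engine draws to normal
// variates depends on call history -- which is exactly how two libraries
// fed identical engines can still emit different normal sequences.
class CachedNormal {
public:
    template <class Engine>
    double operator()(Engine& eng)
    {
        if (hasSpare_) { hasSpare_ = false; return spare_; }
        std::uniform_real_distribution<double> u(-1.0, 1.0);
        double x, y, r2;
        do {                       // rejection step: point inside unit circle
            x = u(eng);
            y = u(eng);
            r2 = x * x + y * y;
        } while (r2 >= 1.0 || r2 == 0.0);
        double f = std::sqrt(-2.0 * std::log(r2) / r2);
        spare_ = y * f;            // store the second sample for the next call
        hasSpare_ = true;
        return x * f;              // return the first sample now
    }
private:
    bool hasSpare_ = false;
    double spare_ = 0.0;
};
```

Note that each "fresh" call consumes a variable number of uniforms (the rejection loop), another reason two implementations can drift apart in how they consume the engine.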


Code: Select all

```
boost::random::mt19937 gen;             // not "gen()" -- that declares a function
boost::random::normal_distribution<> d;
...
x = d(gen);
```

- Cuchulainn

I am a plumber, not a heart surgeon. I now seed both the Boost MT and the C++ MT at each iteration using std::random_device.

The results coming out of the pipes + QL MC are as below (next w/o QL).

How do we interpret them? BTW I posted the QL code already for all to see. (NSIM = 10^6, NT = 500)

BTW I use QL 1.8 here!

put Boost =5.83741, put C++11 =5.8419

put Boost =5.84773, put C++11 =5.84763

put Boost =5.85105, put C++11 =5.85567

put Boost =5.84643, put C++11 =5.84755

put Boost =5.84868, put C++11 =5.85189

put Boost =5.8528, put C++11 =5.83665

put Boost =5.8524, put C++11 =5.84765

put Boost =5.84634, put C++11 =5.84832

put Boost =5.84765, put C++11 =5.85179

put Boost =5.84696, put C++11 =5.85373


First solve the puzzle below, the case *without* seeding. Seeding all over the place is not going to explain things.

I ran this comparison (on Wandbox) of the random numbers you feed into your code when using either std::mt19937_64 or boost::random::mt19937_64. See my point?

code:


Code: Select all

```
#include <iostream>
#include <random>
#include "boost/random.hpp"

int main(void) {
    std::mt19937_64 std_eng;
    boost::random::mt19937_64 boost_eng;
    for (auto i = 0; i < 4; ++i)
        std::cout << i << ": " << std_eng() << " ==? " << boost_eng() << std::endl;
    return 0;
}
```

Code: Select all

```
0: 14514284786278117030 ==? 14514284786278117030
1: 4620546740167642908 ==? 4620546740167642908
2: 13109570281517897720 ==? 13109570281517897720
3: 17462938647148434322 ==? 17462938647148434322
```

- Cuchulainn

I get the same! Does this prove that the differences are NOT due to the MT?


BTW I use boost::mt19937 etc., not boost::random::mt19937 (Boost 1.65.1??).

Very good.

See! It doesn't matter which random number generator you use; they generate the exact same scenarios, yet you get different option prices.


- Cuchulainn

But we need N(0,1) numbers? I assume all this stuff just uses the inverse transform method?


Yes. The normal_distribution object has a member operator()(uniform_engine& eng) that transforms these uniform 64-bit integers into normally distributed variates. That's the code that runs when you write "x = d(eng);".

I found a link to the code of an old GCC version, and if you skip past all the type-inference boilerplate you see that it uses the Box-Muller transform. Since it's an object it (apparently) *does* have state where it stores samples. The first call consumes two uniform random integers, converts them to two normal samples, returns one and stores the other. The next call returns the stored one.

https://gcc.gnu.org/onlinedocs/gcc-4.6. ... tml#l01649

So if the two engines give identical random integers, then this routine will convert them into identical random N(0,1) numbers; still no possible difference.


This is interesting: boost::normal_distribution<> uses a different algorithm than std::normal_distribution<> to convert the random integers to N(0,1).

Code: Select all

```
#include <iostream>
#include <random>
#include "boost/random.hpp"

int main(void) {
    std::mt19937_64 std_eng;
    boost::random::mt19937_64 boost_eng;
    //std::normal_distribution<> d1;
    boost::normal_distribution<> d1;
    //std::normal_distribution<> d2;
    boost::normal_distribution<> d2;
    for (auto i = 0; i < 4; ++i)
        std::cout << i << ": " << d1(std_eng) << " ==? " << d2(boost_eng) << std::endl;
    return 0;
}
```

I'd be very surprised if either used Box-Muller now. Surely it's the Ziggurat or something equally efficient?
