Quasi Monte Carlo in a basket of underlyings
Posted: January 13th, 2010, 12:39 am
by tkh
Hi,

I am investigating Quasi Monte Carlo with Sobol sequences, using the Joe-Kuo direction numbers. So far so good; I have implemented it in C++. But I have a simple question which I hope someone can answer.

In the context of simulating a single underlying under Monte Carlo simulation, the Sobol numbers generated by my implementation form an N-by-D matrix, where

N = number of simulations
D = dimension = number of time steps of the simulation

Question: what if I want to simulate a correlated basket of underlyings, say 3 underlyings in the basket? I know this requires the usual Cholesky decomposition of the 3x3 correlation matrix to generate correlated random variates, but that is not my question.

Assume N = 1000 and D = 1, i.e. 1000 simulations and only one time step, from inception t0 to maturity T. Is the following correct? I would need to generate N = 3*1000 = 3000 (D = 1) random numbers, where rdnNumbers is a 3000-by-1 matrix generated with the Sobol sequence. Then, for the 1st simulation path:

rdnNumbers[0][0] would be used for the 1st underlying
rdnNumbers[1][0] would be used for the 2nd underlying
rdnNumbers[2][0] would be used for the 3rd underlying

and for the 2nd simulation path:

rdnNumbers[3][0] would be used for the 1st underlying
rdnNumbers[4][0] would be used for the 2nd underlying
rdnNumbers[5][0] would be used for the 3rd underlying

i.e. a "triplets" ordering. OR:

rdnNumbers[0][0] ... rdnNumbers[999][0] would be used for the 1st underlying
rdnNumbers[1000][0] ... rdnNumbers[1999][0] would be used for the 2nd underlying
rdnNumbers[2000][0] ... rdnNumbers[2999][0] would be used for the 3rd underlying

i.e. 3 contiguous blocks.

Is any of this correct? If not, what would be the correct way to generate the random variates for a (correlated) basket of 3 underlyings in the context of QMC?
Quasi Monte Carlo in a basket of underlyings
Posted: January 13th, 2010, 9:51 am
by carlitos
The dimension of your problem is [number of steps] * [number of assets]; in your three-asset, one-step example you'll need to generate N 3-D points of the low-discrepancy sequence. To use the sequence efficiently, you want the lower dimensions to have a bigger effect on your paths than the higher dimensions: you can use the Brownian bridge method to get an independent path for each underlying and introduce the correlation between them later, or you can use the spectral decomposition of the whole covariance matrix (which can be more efficient if you have many time steps and only a few assets).
Quasi Monte Carlo in a basket of underlyings
Posted: January 14th, 2010, 12:25 am
by tkh
Hi Carlitos,

Thank you for your reply ^_^

The literature I have does mention the Brownian bridge for high-dimensional cases with QMC. But I thought this was a 1-D problem since there is only one time step, so no need to reduce the effective dimension. From your reply, the dimension also includes the number of underlyings to simulate.

Three more simple questions, if you don't mind:

1. If I have 1000 simulation paths, 12 time steps and 3 underlyings, this would require 1000 36-D points from the low-discrepancy sequence?

2. In the context of emphasising the lower dimensions to have a greater effect, would this allocation be OK:

D = 1 to 3 for the 1st time step, 3 underlyings
D = 4 to 6 for the 2nd time step, 3 underlyings
...
D = 34 to 36 for the 12th time step, 3 underlyings

or doesn't it matter?

3. Sorry, I don't really get you regarding "introduce correlation later". In the simplest implementation, for the i-th simulation and j-th time step, I have a triplet of random normal variates, then apply the usual Cholesky-decomposed correlation matrix (lower/upper triangular form) etc. First simulating the full path via Brownian bridge, yes, OK, I can do that. But then introduce correlation later? How could I introduce the correlation later, since the random numbers have already been used, i.e. "consumed", to generate the new simulated values? Any info/literature you could point me to with regard to this statement would be much appreciated.
Quasi Monte Carlo in a basket of underlyings
Posted: January 14th, 2010, 8:42 am
by carlitos
Quote:
1. if i have 1000 simulation paths, 12 time steps and 3 underlyings, this would require 1000 36-D points from the low discrepancy sequence?

That's right: for each simulation you need 12*3 = 36 independent numbers, and now you have to generate three 12-step correlated paths using these numbers somehow.

Quote:
2. in the context of emphasising the lower dimensions to have a greater effect, would this allocation be OK ... or doesn't it matter

The point of using low-discrepancy sequences is that they are well distributed over the sampling space, but in fact the lower dimensions behave better. So you want the best numbers to have a bigger effect on your payoff.

Forget the multiple assets for a moment, and imagine you have 4 numbers to generate a 4-step path. By using the incremental construction as you suggest, you're "wasting" the first number, because the first step is not so important for the final payoff; none of the individual steps is very important.

It's much better to use the Brownian bridge construction: use the first number to generate the final point (which usually has a big effect on the payoff), use the second number to generate the middle point conditional on the final point (that starts to define the shape of the path), and use the third and fourth numbers to generate the remaining points (conditional on the points already there). There are in fact still better ways to generate the path, but the Brownian bridge method may be good enough, and it's easy to understand and implement.

You can proceed in a similar way in your case. For N = 1 you get 36 numbers, D = 1, 2, ..., 36. Start by generating three independent paths:

Use D = 1 for the end point of the first path.
Use D = 2 for the end point of the second path.
Use D = 3 for the end point of the third path.

This ensures your first three numbers are used efficiently. Use the next three numbers for the middle points of each of the paths, etc. (After T = 12 and T = 6 you can still take the midpoints of the intervals, T = 3 and T = 9, but afterwards things get a bit messy; it's easier if the number of steps is a power of two.)

So you have three independent Brownian motions, and you already know how to introduce the correlation at each step based on the three normal, independent increments.

Quote:
3. sorry, don't really get you regarding "introduce correlation later" ... any info/literature you could point me to with regards to this statement would be much appreciated

Check section 3.1 in Glasserman's book on Monte Carlo methods:
http://books.google.ch/books?id=e9GWUsQ ... q=&f=false
Quasi Monte Carlo in a basket of underlyings
Posted: January 14th, 2010, 9:38 am
by tkh
Hi again Carlitos,

Thank you for your prompt reply. I get what you're saying. Very enlightening indeed. Looks like I have a lot more investigation/study/implementation to do. Yup, I have Glasserman's yellow book. Again, thank you for pointing me in the right direction. It might take a while, but I'm sure there'll be more questions later on.

rgds,
teh kh
Quasi Monte Carlo in a basket of underlyings
Posted: January 14th, 2010, 6:36 pm
by Lapsilago
Hi!

I have followed the discussion, and I use Sobol for a Libor Market Model. In fact, handling the dimension as Nfactors x NSteps is the way to go. There are some "sorting" methods around, but I apply the one you mentioned. It works fine for a full-factor LMM. The Brownian bridge is certainly a good way to go, and it is efficient.

You two mention other, more efficient ways to generate the path. Could you name the method or point to the literature? This would be much appreciated. I know sequential, bridge, Karhunen-Loeve, and spectral.

Best,
Lapsi
Quasi Monte Carlo in a basket of underlyings
Posted: January 14th, 2010, 10:36 pm
by carlitos
When I mentioned better ways (compared to the simple Brownian bridge method just described) I was thinking of things like
http://dx.doi.org/10.1016/j.jco.2007.06.001 (this may be one of the "sorting methods" you mention) or using the spectral decomposition of the (Nassets x Nsteps)-dimensional covariance matrix (I mentioned that possibility explicitly in my first message).

In my tests (a barrier product with 4 assets, up to 512 steps) the spectral decomposition was the most efficient method, but quite similar to generating independent Brownian bridges and introducing the correlation via the spectral decomposition (results were somewhat worse with the Cholesky decomposition). But the Brownian bridge method was actually more efficient when computation time was taken into account (I used Matlab and R for those tests; in C the trade-offs might be different). The Brownian bridge method can be written in matrix form, faster than an iterative calculation (at least up to a certain size, after which the iterative version is preferable: in Matlab the threshold was around 200 steps; in R the interpreter is so slow that the matrix version was still better beyond 600 steps). For the spectral method the matrices to manipulate are bigger, so the performance degrades more quickly.

I also found it helpful to randomize the quasi Monte Carlo sequence, shifting every point in a sequence by a random vector in the unit hypercube (modulo 1, to stay in the hypercube). By generating ten sequences of a thousand points instead of a single sequence of ten thousand points, keeping the computation time more or less constant, one can produce an unbiased estimator with a confidence interval.
Quasi Monte Carlo in a basket of underlyings
Posted: January 18th, 2010, 7:54 pm
by Lapsilago
Thanks, Carlitos! I observed similar things. Your comment on random shifts is interesting!

Best,
Lapsi
Quasi Monte Carlo in a basket of underlyings
Posted: February 2nd, 2011, 8:20 pm
by bearrito
So I am in a similar situation as tkh was. I am generating a stream of quasi-randoms using
http://web.maths.unsw.edu.au/~fkuo/sobol/joe-kuo-notes.pdf and
http://web.maths.unsw.edu.au/~fkuo/sobo ... o-6.21201

I have some confusion about dimensions and the use of the generated random numbers. Let's say I want to generate 512 steps on 3 correlated underlyings over 1000 paths. I recognize that my dimension is 512*3. Where I am confused is the dimension of the numbers produced by my generator: my stream is essentially a vector of floats, not a vector of vectors of dimension 1*(512*3). So above, when Carlitos says to generate 36-D points, he loses me. Also, what is the dimension of the generator I should be employing to create my stream of numbers?

Edit: Is it the case that for each dimension I also need a primitive polynomial? Let x(i,n) be the i-th point generated by the n-th primitive polynomial. Then x(i) = { x(i,0) ... x(i,512*3) } would be a 512*3-dimensional number. Then x(0,1) .... x(0,512*3) would be the first path over the three underlyings, to which I could apply the Brownian bridge procedure? I can iterate my generators to compute x(1000) = { x(1000,0) ... x(1000,512*3) }.

Also, would someone mind checking the direction numbers generated by the standard recurrence relation using:

degree = 3, a1 = 0, a2 = 1, m1 = 1, m2 = 3, m3 = 7