Is there any desirable property of MC that is not found in QMC? In other words, is MC preferred over QMC in any particular case? In this paper - Hybrid Sequences - it is said that "MC (unlike QMC) is a stochastic method, hence they allow statistical error estimation". Can someone explain the above sentence? I thought QMC was always better than MC.

The sentence means that because MC is a stochastic method, you know (by the CLT) that it converges at rate 1/sqrt(N), and you can estimate the error from the sample variance. For QMC you don't have this kind of error estimate because it is not stochastic; there you only have deterministic error bounds such as the Koksma-Hlawka bound, but these are rather useless in practice. That is why you do QMC with randomization: best of both worlds. I think QMC, when done properly, performs at least as well as MC, though sometimes it is only equally good. In finance it is not always clear how to implement QMC.
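The CLT-based error estimate mentioned above can be sketched like this (the integrand and sample size are just illustrative):

```python
import random
import statistics

def mc_estimate(f, n, seed):
    """Plain Monte Carlo estimate of the integral of f over [0, 1],
    with the CLT-based standard error (sample std dev / sqrt(N))."""
    rng = random.Random(seed)
    samples = [f(rng.random()) for _ in range(n)]
    mean = statistics.fmean(samples)
    stderr = statistics.stdev(samples) / n ** 0.5
    return mean, stderr

# Integrate f(x) = x^2 over [0, 1]; the exact value is 1/3.
mean, stderr = mc_estimate(lambda x: x * x, 100_000, seed=42)
```

This is exactly the statistical error estimate that a deterministic QMC sequence does not give you: the points are not i.i.d., so the sample standard deviation has no CLT interpretation.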

Unfortunately, given any qMC sequence, it is always possible to find an integrand on which it performs worse than MC, at least pre-asymptotically. This is obvious, but the interesting part is that the examples are not necessarily as degenerate as one might think at first. qMC must be handled with care, although it can give huge speedups. Anyway, MC (in theory, not in practice) is easier to parallelize: samples are not dependent and there is no risk of interference effects (as e.g. with lattice rules).

Last edited by quartz on January 20th, 2011, 11:00 pm, edited 1 time in total.

Hi r00kie,

If you want to calculate the error given by an MC integration, you run it several times with different seeds (and the same number of paths) and calculate the standard deviation of the estimates. If it's higher than your tolerance, you increase the number of paths. With QMC you don't have a seed. However, as Douglas01 mentioned, there are randomized quasi-random numbers.

Here's roughly how randomized quasi-random numbers work: in a Halton sequence you start with 1/2, then you take 1/4 and 3/4, and then you take 1/8, 5/8, 3/8, 7/8. Now, for reasons of symmetry, you can take 3/4 first and then 1/4; after that you can take 5/8 and then 1/8, or you can take 3/8 and 7/8. You can get infinitely many sequences that have the same discrepancy at all times. Choosing such a sequence amounts to choosing a random number. But now you have another problem: for this particular sequence, the set of the first 1023 numbers is the same no matter how you pick them. That's only a problem in very low dimension, though.

Quartz, I'm not sure pseudo-random numbers are easier to parallelize. The Halton sequence is extremely easy to parallelize, in any dimension.

Your point about "pathological" integrands is very interesting. I didn't know about that.

Best,
V.
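The multi-seed recipe in the first paragraph can be sketched as follows (the integrand, path count, and number of seeds are placeholders):

```python
import random
import statistics

def mc_run(f, n_paths, seed):
    """One MC integration of f over [0, 1] with a given seed."""
    rng = random.Random(seed)
    return statistics.fmean(f(rng.random()) for _ in range(n_paths))

# Repeat the same integration with different seeds (same number of
# paths); the spread of the estimates is the error measure.
f = lambda x: x * x                      # exact integral over [0, 1]: 1/3
estimates = [mc_run(f, 10_000, seed) for seed in range(20)]
error = statistics.stdev(estimates)      # roughly std(f) / sqrt(n_paths)
```

If `error` exceeds your tolerance, you rerun with more paths per seed; the same scheme carries over to randomized QMC, with one randomization per "seed".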

QuoteOriginally posted by: quartz
"Unfortunately, given any qMC sequence, it is always possible to find an integrand on which it performs worse than MC, at least pre-asymptotically. This is obvious, but the interesting part is that the examples are not necessarily as degenerate as one might think at first. [...]"

Can you provide an example where MC performs better than QMC? I am curious to know, because all my "sources" say that QMC is at least as good as (and usually better than) MC.

QuoteOriginally posted by: Costeanu
"Here's roughly how randomized quasi-random numbers work: in a Halton sequence you start with 1/2, then you take 1/4 and 3/4, and then you take 1/8, 5/8, 3/8, 7/8. Now, for reasons of symmetry, you can take 3/4 first and then 1/4; after that you can take 5/8 and then 1/8, or you can take 3/8 and 7/8. You can get infinitely many sequences that have the same discrepancy at all times. Choosing such a sequence amounts to choosing a random number. [...]"

That is precisely what I am trying to understand - I've a sequence of quasi-random numbers; why does the order matter?

QuoteOriginally posted by: r00kie
"That is precisely what I am trying to understand - I've a sequence of quasi-random numbers; why does the order matter?"

Order matters because you want a low-discrepancy sequence: basically, you want about as many points in all subsets of the same size over [0,1]^d (this statement is a simplification, of course).
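To make "low discrepancy" concrete, here is a small 1-D sketch (the base-2 van der Corput sequence is the first coordinate of Halton; the point count is arbitrary):

```python
def van_der_corput(i, base=2):
    """i-th van der Corput point: reverse the base-b digits of i around
    the radix point (i = 1, 2, 3, ... gives 1/2, 1/4, 3/4, ...)."""
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, digit = divmod(i, base)
        x += digit / denom
    return x

def star_discrepancy_1d(points):
    """Exact 1-D star discrepancy: the worst mismatch between the
    empirical fraction of points in [0, t) and the length t."""
    pts = sorted(points)
    n = len(pts)
    return max(max((k + 1) / n - x, x - k / n) for k, x in enumerate(pts))

lds = [van_der_corput(i) for i in range(1, 257)]
# The discrepancy of this sequence decays like log(N)/N, much faster
# than the ~1/sqrt(N) spread of pseudo-random points.
```

"Low discrepancy at all times" means every prefix of the sequence scores well on this measure, not just the full set.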

Quote
"Here's roughly how randomized quasi-random numbers work: in a Halton sequence you start with 1/2, then you take 1/4 and 3/4, and then you take 1/8, 5/8, 3/8, 7/8. [...] You can get infinitely many sequences that have the same discrepancy at all times. Choosing such a sequence amounts to choosing a random number."

This makes sense. How do you implement it?
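One standard way to implement a randomized quasi-random sequence - not the symmetric reordering Costeanu describes, but the (classical) Cranley-Patterson random shift, shown here as a minimal 1-D sketch:

```python
import random

def van_der_corput(i, base=2):
    """Base-b radical inverse of i (the 1-D Halton sequence)."""
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, digit = divmod(i, base)
        x += digit / denom
    return x

def shifted_sequence(n, seed, base=2):
    """Cranley-Patterson rotation: add one uniform random shift to every
    point, modulo 1. Each seed gives a different sequence of the same
    equidistribution quality, so running several seeds and taking the
    standard deviation of the estimates recovers MC-style error bars."""
    shift = random.Random(seed).random()
    return [(van_der_corput(i, base) + shift) % 1.0 for i in range(1, n + 1)]

pts = shifted_sequence(128, seed=0)
```

Digit scrambling (e.g. Owen scrambling for Sobol' points) is the more sophisticated alternative, but the shift already gives you unbiased estimates and a seed to vary.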

Quote
"Can you provide an example where MC performs better than QMC? I am curious to know, because all my "sources" say that QMC is at least as good as (and usually better than) MC."

Again: take a function that is null "near" the initial LDS points and 1 elsewhere... but that is pathological, sure. Nevertheless, experiment a bit with exotic Americans and SV and you'll soon encounter examples. AFAIR, already with a Sobol' sequence and an Asian it's easy if you use the Brownian bridge but mirror the LDS axes, so that the largest function variation is matched to the "worst" qMC direction (and with other tricks related to the Sobol' sequence's deficiencies). Actually, there are better examples where you can literally see that a Sobol' sequence is severely biased at "reasonable" sample sizes. [More detail - and funky pictures - at the course this autumn in Germany.]

Thanks. But I still need some clarification.

Quote
"Order matters because you want a low-discrepancy sequence: basically, you want about as many points in all subsets of the same size over [0,1]^d (this statement is a simplification, of course)."

Suppose I fix my number of steps to some constant value (like 100,000). Then I generate 100,000 low-discrepancy numbers. How does the order of those numbers matter? I.e., if the sequence is x1, x2, ..., xn, I should get the same result even if I use it in reverse order xn, xn-1, ..., x2, x1. Right?

Quote
"Again: take a function that is null "near" the initial LDS points and 1 elsewhere... but that is pathological, sure. Nevertheless, experiment a bit with exotic Americans and SV and you'll soon encounter examples."

Yes, I agree, but isn't this the case for any PRNG? I mean, I can always choose a function that performs badly for, say, the Mersenne Twister RNG.
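A small experiment (with an illustrative integrand) shows both halves of the answer: for a fixed N the estimate is order-independent, because a sum commutes, but LD sequences are built so that every prefix is well spread, and that is what an arbitrary reordering destroys if you stop early:

```python
def van_der_corput(i, base=2):
    """Base-b radical inverse of i (the 1-D Halton sequence)."""
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, digit = divmod(i, base)
        x += digit / denom
    return x

pts = [van_der_corput(i) for i in range(1, 1025)]
bad_order = sorted(pts)                    # same point set, reordered

f = lambda x: x * x                        # exact integral over [0,1]: 1/3
full_a = sum(f(x) for x in pts) / 1024
full_b = sum(f(x) for x in bad_order) / 1024   # identical: sums commute

# But stop after the first 128 points, as an adaptive scheme might:
prefix_a = sum(f(x) for x in pts[:128]) / 128        # still close to 1/3
prefix_b = sum(f(x) for x in bad_order[:128]) / 128  # badly biased low
```

So for a truly fixed N, r00kie is right; order starts to matter the moment you want to keep adding points until the answer converges.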

r00kie, are you talking 1-D or multi-dimensional? If you are talking about 1-D then you are right. However, LD sequences are generally designed, like Monte Carlo, so that they can generate as many numbers as you need (i.e. you continue adding LD numbers until you reach convergence), and then it makes sense to go 1/2, 1/4, 3/4, ..., i.e. filling in the gaps. The real problem with LD sequences is in multiple dimensions: you do not generate 1,000 10-dimensional LD points from 10,000 1-D LD points (unlike with MC). [Not sure if that's what you were saying; if you meant rearranging the 1,000 vectors, then that's fine, like in 1-D.]
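The multi-dimensional point can be illustrated with a sketch (bases and counts are just for the demo): a d-dimensional Halton point uses a different coprime base per coordinate, whereas chopping one 1-D sequence into consecutive pairs produces badly correlated coordinates.

```python
def van_der_corput(i, base=2):
    """Base-b radical inverse of i (one coordinate of a Halton point)."""
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, digit = divmod(i, base)
        x += digit / denom
    return x

def halton(n, bases=(2, 3)):
    """First n d-dimensional Halton points: one radical inverse per
    coordinate, with pairwise coprime bases (here 2 and 3)."""
    return [tuple(van_der_corput(i, b) for b in bases) for i in range(1, n + 1)]

good = halton(512)
# The wrong way: chop one 1-D base-2 sequence into consecutive pairs.
# Odd indices map to [1/2, 1) and even indices to [0, 1/2), so every
# pair lands in the SAME quadrant of the unit square instead of
# filling it - the "coordinates" are strongly dependent.
vdc = [van_der_corput(i) for i in range(1, 1025)]
bad = list(zip(vdc[0::2], vdc[1::2]))
```

With pseudo-random numbers the chopping is harmless, because i.i.d. draws have no serial structure; that is exactly the asymmetry daveangel describes.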
