
Richardson extrapolation to reduce variance in Monte Carlo

Posted: October 7th, 2004, 3:51 pm
by manatee
Hi Folks:

I apologise if this has been asked before, but my question is: has anybody tried Richardson extrapolation to increase the accuracy of Monte Carlo simulations?

By way of background, Romberg integration comes to mind. The idea is to use a fairly low order method repeatedly and obtain a higher order result by exploiting the form of the error term. So, for example, if you have two first order quadrature rules at different resolutions (step sizes), then Richardson extrapolation can give you a second order result. It works only if the error is (a) smooth and (b) a known function of step size.

In Monte Carlo, (b) is satisfied, since we know the result converges as 1/sqrt(n). I don't know about (a). Since the error is not deterministic, maybe this will not work.

Any thoughts? Thanks for all replies!

S.
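For readers who haven't seen the Romberg idea in the deterministic setting, here is a minimal sketch (Python; the integrand sin on [0,1] is just an assumed example). Two trapezoid estimates at step sizes h and h/2 are combined to cancel the leading h^2 error term, lifting a second order rule to fourth order:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals (error ~ c*h^2)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

# Halving h divides the h^2 error term by 4, so the combination
# (4*T_2n - T_n)/3 cancels it exactly, leaving an O(h^4) error.
f = math.sin
exact = 1.0 - math.cos(1.0)          # integral of sin over [0, 1]
t_n  = trapezoid(f, 0.0, 1.0, 8)
t_2n = trapezoid(f, 0.0, 1.0, 16)
richardson = (4.0 * t_2n - t_n) / 3.0

print(abs(t_2n - exact))        # O(h^2) error
print(abs(richardson - exact))  # much smaller, O(h^4)
```

This works precisely because the error coefficient c in c*h^2 is the same deterministic constant for both step sizes, which is the property at issue in the Monte Carlo case.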

Richardson extrapolation to reduce variance in Monte Carlo

Posted: October 8th, 2004, 2:25 pm
by stampeding
If you Monte Carlo-evaluate the absolutely most trivial case, i.e. the expected value of the normal distribution N(0,1), you will use these estimators:

First 100 samples: sum100/100; the standard deviation will be 0.1.
Add the next 300 samples: (sum100 + sum300)/400; the standard deviation will be 0.05.

And if you use as estimator any other combination of sum100 and sum300 than (sum100 + sum300)/400, the sdev will be greater than 0.05. Applying RE to the two estimates yields (sum300 - sum100)/200, with standard deviation 0.1, i.e. no better than the "only 100 samples" estimator, and worse than 0.05.

This is the intuitive explanation. The more formal explanation, I assume, would be something like that if we write the dominant error component as K/sqrt(n), then K isn't independent of n.
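Carrying the extrapolation through explicitly for these sample counts: eliminating the c/sqrt(n) term between u_100 = sum100/100 and u_400 = (sum100 + sum300)/400 gives 2*u_400 - u_100 = (sum300 - sum100)/200, whose standard deviation is sqrt(300 + 100)/200 = 0.1. A quick simulation (Python/NumPy; sample counts as in the post) checks all three figures:

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 20_000

# Each trial: 400 N(0,1) draws, split into the first 100 and the next 300.
x = rng.standard_normal((trials, 400))
sum100 = x[:, :100].sum(axis=1)
sum300 = x[:, 100:].sum(axis=1)

u_100 = sum100 / 100               # first 100 only: sd 0.1
u_400 = (sum100 + sum300) / 400    # all 400 pooled: sd 0.05
u_re  = 2 * u_400 - u_100          # Richardson combination = (sum300 - sum100)/200

print(np.std(u_100), np.std(u_400), np.std(u_re))  # ~0.10, ~0.05, ~0.10
```

The extrapolated estimator throws away the variance reduction that simple pooling already achieved.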

Richardson extrapolation to reduce variance in Monte Carlo

Posted: October 12th, 2004, 1:06 pm
by manatee
Hi Stampeding:

Thanks for your reply. I don't get what you get, and I don't know who is right. I take your example of estimating the mean of a normal distribution. So let S_n/n denote the computed mean of the samples. From the CLT, we know that S_n/n is distributed as N(0, 1/n), i.e. with standard deviation 1/sqrt(n).

In RE, we write this as (u_n = S_n/n, u_e = "exact value", c1 = some constant):

u_n = u_e + c1/sqrt(n)
u_2n = u_e + c1/sqrt(2n)

Now, the idea is to use these two values to eliminate c1 and obtain u_e. If we do this, we get

u_e = [sqrt(2)*u_2n - u_n]/(sqrt(2) - 1)

Now, E[u_e] = 0.0 and

Var[u_e] = 2*Var[u_2n]/(sqrt(2)-1)^2 - Var[u_n]/(sqrt(2)-1)^2 = 0.0

So in this simple case, RE gets the exact answer. As you mention, it won't work unless c1 is a constant. Is there something else I am missing, or cheating at, in this analysis?

Best regards,
m

Richardson extrapolation to reduce variance in Monte Carlo

Posted: October 12th, 2004, 1:34 pm
by mj
In MC, c_1 is not a constant; it's random, and a different random draw at each n. The variance subtraction above implicitly treats u_n and u_2n as perfectly correlated through a shared constant c_1, which they are not. There's no way this will work.
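This point can be checked numerically. A minimal sketch (Python/NumPy, using the same N(0,1) toy problem as in the thread): per simulation run, back out the c1 implied by the n-sample and 2n-sample estimates, and then measure the spread of the extrapolated estimator from the derivation above:

```python
import numpy as np

rng = np.random.default_rng(1)
trials, n = 50_000, 100

# Toy problem: estimate E[X] = 0 for X ~ N(0,1); the 2n-sample
# estimate reuses the first n draws, as in a sequential simulation.
x = rng.standard_normal((trials, 2 * n))
u_n  = x[:, :n].mean(axis=1)
u_2n = x.mean(axis=1)

# If the error really were c1/sqrt(n) with a fixed c1, the value of c1
# implied by the two estimates would agree in every run. It doesn't.
c1_from_n  = np.sqrt(n) * u_n          # true mean is 0, so the error is u_n
c1_from_2n = np.sqrt(2 * n) * u_2n
print(np.std(c1_from_n - c1_from_2n))  # clearly nonzero (about 0.77)

# Consequently the extrapolated estimator has variance far above zero;
# in fact it is worse than plain u_2n:
s2 = np.sqrt(2.0)
u_re = (s2 * u_2n - u_n) / (s2 - 1.0)
print(np.std(u_re), np.std(u_2n))      # roughly 0.18 vs 0.07
```

The step where Var[u_e] was computed by subtracting variances is exactly where the derivation cheats: variances of correlated random quantities do not cancel that way.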