1.) Never use Cholesky decomposition; use Numerical Recipes' tred2, tqli, and eigsrt, in that order, or any other robust spectral decomposition you like. All textbook Cholesky decompositions (including the one in Numerical Recipes in C) already fail when you have a zero or near-zero but positive eigenvalue, which is perfectly permissible for a correlation matrix. Spectral decompositions will not fail there, and they also allow you to truncate the noisy sub-zero eigenvalues, which are irrelevant anyway.
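A minimal sketch of the spectral approach, assuming NumPy's symmetric eigensolver (numpy.linalg.eigh) in place of tred2/tqli/eigsrt; the function name and the eps cut-off are illustrative choices, not part of any library:

    import numpy as np

    def spectral_pseudo_sqrt(corr, eps=1e-12):
        # Pseudo-square-root A of a correlation matrix, so that A @ A.T
        # reproduces corr up to the truncated noise. Eigenvalues below eps
        # are treated as numerical noise and set to zero; that is exactly
        # the case where a textbook Cholesky factorisation breaks down.
        eigenvalues, eigenvectors = np.linalg.eigh(corr)   # robust symmetric eigensolver
        eigenvalues = np.where(eigenvalues > eps, eigenvalues, 0.0)
        return eigenvectors * np.sqrt(eigenvalues)         # scale each eigenvector column

    # A valid correlation matrix with a zero eigenvalue (two perfectly correlated variables):
    corr = np.array([[1.0, 1.0, 0.5],
                     [1.0, 1.0, 0.5],
                     [0.5, 0.5, 1.0]])
    A = spectral_pseudo_sqrt(corr)
    print(np.allclose(A @ A.T, corr))   # True, while a plain Cholesky of corr fails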
2.) Sobol' numbers, if initialised sensibly, work extremely well in dimensions well above 100; Halton numbers don't. There are number-theoretical reasons for that: the dimensionality coefficient c(d) in the convergence order c(d)·(log N)^d/N grows geometrically with d for many low-discrepancy numbers, but not for Sobol' or Niederreiter-Xing numbers. Niederreiter-Xing numbers require serious number-theoretical calculations to precompute the construction numbers (see http://www.dismat.oeaw.ac.at/pirs/niedxing.html), and I don't know if anyone has precomputed them up to d = 100 or higher. In my experience, they are no better than Sobol' numbers for d > 5, and Sobol' numbers are much simpler to construct, so I always use those. (A minimal usage sketch in d = 100 follows at the end of this note.)

3.) The myth about high dimensionality and low-discrepancy numbers is the main reason I wrote "Monte Carlo Methods in Finance". Please discontinue the myth. It is not true that you cannot use low-discrepancy numbers in high dimensions. Honestly.

Yours truly,

Peter Jaeckel
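The sketch referred to in point 2, assuming Python with SciPy's stats.qmc Sobol' generator; the test integrand and the parameter choices are illustrative only:

    import numpy as np
    from scipy.stats import qmc

    d, m = 100, 14                        # dimension 100, 2**14 = 16384 points
    sobol = qmc.Sobol(d=d, scramble=True, seed=0)
    u = sobol.random_base2(m=m)           # shape (16384, 100), points in [0, 1)^d

    # A separable test integrand whose exact integral over [0, 1]^d is 1.0:
    def f(x):
        return np.prod(1.0 + 0.5 * (x - 0.5), axis=1)

    print("Sobol' estimate       :", f(u).mean())
    rng = np.random.default_rng(0)
    print("pseudo-random estimate:", f(rng.random((2**m, d))).mean())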