June 10th, 2006, 8:04 pm
There's no problem with 10,000 steps. It's true that if you tried to compute 2^(-10,000) directly it would round to zero even in double precision. But you never do that computation and, in any case, a zero wouldn't hurt your result. It's also true that you don't have to bother with anything beyond, say, five standard deviations. If you're getting overflows, you're doing something wrong.

In general, when approximating something continuous, the more steps, the more accuracy. However, once the discretization error is down to a tolerable level, further increases in the number of steps may not help, and can lead to other problems (certainly slow runs, and possibly numerical instabilities). So a good general method for choosing the number of steps is to estimate the discretization error and pick a number of steps that makes it smaller than the other sources of error (for example, errors of less than one basis point of price probably don't matter in most applications). If that number of steps is too large for convenient processing, you might want to use fewer, but it's usually not a good idea to use more, even when the computer time is insignificant.

If you are constrained, there are usually clever things you can do to increase the effective number of steps. Even if you're not constrained, some of those things can be useful.
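As a sketch of both points, here is a plain Cox-Ross-Rubinstein binomial tree for a European call. The parameters (spot 100, strike 100, 5% rate, 20% vol, one year) and the one-basis-point tolerance are my illustrative assumptions, not from the post. Any single 10,000-step path has probability on the order of 2^(-10,000), but backward induction never forms that product, so nothing underflows; and the step-doubling loop at the bottom shows one simple way to stop adding steps once successive prices agree to within a basis point.

```python
import math
import numpy as np

def crr_call(S0, K, r, sigma, T, n):
    """European call priced on an n-step CRR binomial tree.

    Each individual path has probability ~p^n, which would round to
    zero for n = 10,000 -- but we never compute it. Each backward step
    only mixes two adjacent node values with the one-step probabilities,
    so no overflow or underflow ever occurs.
    """
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1.0 / u                           # down factor
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)              # one-step discount factor
    j = np.arange(n + 1)
    values = np.maximum(S0 * u**j * d**(n - j) - K, 0.0)  # terminal payoffs
    for step in range(n, 0, -1):          # roll back one step at a time
        values = disc * (p * values[1:step + 1] + (1 - p) * values[:step])
    return float(values[0])

# Choosing n: double the step count until successive prices agree to
# within one basis point of the price, then stop -- further steps only
# cost time without reducing the error that matters.
n, prev = 250, None
while True:
    cur = crr_call(100.0, 100.0, 0.05, 0.2, 1.0, n)
    if prev is not None and abs(cur - prev) < 1e-4 * cur:
        break
    prev, n = cur, 2 * n
print(n, round(cur, 4))
```

The same idea carries over to any lattice or PDE scheme: compare results at n and 2n steps to estimate the discretization error, and stop refining once that estimate is below the tolerance you actually care about.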