January 23rd, 2003, 12:33 pm
The lognormal distribution has very fat tails. From below, the process is bounded by zero, but not from above. Once in a while you'll get extremely large final values V_T, and that is enough to make E[V_T] = V_0.

I encountered this phenomenon when I simulated a lognormal random walk for CDS rates, which have very high volatility (e.g. 80-100%). Over a 5-year horizon you essentially get either something close to zero (in most cases) or an extremely large value (in the remaining cases). The mean was still at V_0, though, because one large realisation cancelled out all the other realisations close to zero.

Numerical example: substitute T = 4, sigma = 100%, V_0 = 1 into the closed-form solution, writing sigma*W(T) = sigma*sqrt(T)*epsilon = 2*epsilon (where epsilon is N(0,1)-distributed). You get

V_T = V_0 * exp{-0.5*sigma^2*T + sigma*W(T)} = exp{2*(epsilon - 1)}.

The exponent is below zero whenever epsilon < 1, i.e. in Phi(1) ≈ 84% of the cases. So in roughly 5 out of 6 cases you'll see a decrease. But if epsilon takes a large absolute value, things look different: say epsilon = ±3. For epsilon = -3 we get V_T = exp(-8) ≈ 0.0003; for epsilon = +3 we get V_T = exp(4) ≈ 54.6. And 54.6 can balance a lot of near-zero realisations.

Bottom line: the most frequent outcome is not always the expected value.

PS: Of course, with such high volatilities you should simulate using the exact solution over each time step:

V_(t+dt) = V_t * exp{-0.5*sigma^2*dt + sigma*(W_(t+dt) - W_t)}.
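A quick Monte Carlo check of the argument above. This is only a sketch; the seed, path count, and variable names are my own choices, not from the original post:

```python
import numpy as np

# Driftless geometric Brownian motion: sigma = 100%, T = 4 years, V_0 = 1.
rng = np.random.default_rng(42)
n_paths = 1_000_000
sigma, T, V0 = 1.0, 4.0, 1.0

eps = rng.standard_normal(n_paths)                  # epsilon ~ N(0,1)
# Exact terminal value: V_T = V_0 * exp(-0.5*sigma^2*T + sigma*sqrt(T)*eps)
VT = V0 * np.exp(-0.5 * sigma**2 * T + sigma * np.sqrt(T) * eps)

print("mean:  ", np.mean(VT))       # close to V_0 = 1 (martingale property)
print("median:", np.median(VT))     # close to exp(-2), far below the mean
print("P(V_T < V_0):", np.mean(VT < V0))  # close to Phi(1) ~ 0.84
```

The median sits near exp(-2) ≈ 0.135 while the mean stays at 1: a handful of huge realisations in the right tail carries the whole expectation.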
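To illustrate the PS, here is a minimal sketch of stepping a path with the exact one-step solution (which keeps V strictly positive) rather than an Euler update like V += V*sigma*dW; the step count and seed are illustrative choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, T, n_steps = 1.0, 4.0, 48
dt = T / n_steps
V = 1.0  # V_0

for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal()        # W_(t+dt) - W_t
    V *= np.exp(-0.5 * sigma**2 * dt + sigma * dW)  # exact one-step solution

print(V)  # one terminal realisation; strictly positive by construction
```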