November 6th, 2009, 6:28 am
Because you are applying classical calculus to a random variable, which does not really make sense. For a deterministic function, d(f(S)) = (df/dS) dS, so int((df/dS) dS) = f(S), "int" being your favourite integration theory. But you cannot apply Riemann/Lebesgue integration to random variables. You have to use another integration theory (stochastic, i.e. Itô, calculus), whose main result can be written:

df(S) = (df/dS) dS + (1/2) (d²f/dS²) d&lt;S,S&gt;

One way to get the intuition is to think of dW as having the same order of magnitude as sqrt(dt), which is really what the previous equation tells you. So if you do a Taylor expansion of df, you have to keep the second-order term in dW to pick up all the terms of order dt, hence the integration result.

W moves super quickly; you can't control it over dt, which is one of the reasons why the stop-loss hedging strategy fails.
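As a quick sanity check on the dW ~ sqrt(dt) intuition, here is a minimal Monte Carlo sketch (the variable names and step counts are my own, not from the post): it simulates Brownian increments over [0, T] and shows that a single increment shrinks like sqrt(dt) while the sum of squared increments, the quadratic variation, settles at T rather than vanishing.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1.0
for n in (100, 10_000, 1_000_000):            # number of time steps
    dt = T / n
    dW = rng.normal(0.0, np.sqrt(dt), size=n)  # Brownian increments ~ N(0, dt)

    # A typical increment has size ~ sqrt(dt), not dt: E|dW| = sqrt(2*dt/pi).
    print(f"n={n:>9}  sqrt(dt)={np.sqrt(dt):.4f}  mean|dW|={np.abs(dW).mean():.4f}")

    # Quadratic variation: sum of (dW)^2 converges to T as dt -> 0.
    # This is the d<W,W> = dt term behind the 1/2 f'' correction above.
    print(f"             sum(dW^2) = {np.sum(dW**2):.4f}   (should approach T = {T})")
```

The squared increments add up to something of order dt each, n of them, so their sum stays near T no matter how fine the grid; that surviving second-order term is exactly what classical calculus throws away and Itô's formula keeps.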