- Cuchulainn
**Posts:** 62395 · **Location:** Amsterdam

**Quote (Amin):** Daniel, here is the simple math behind the SDE you are simulating. The equation below is expanded just one level further than traditional Monte Carlo; it is the equation right after equation six in the previous post, rewritten here in your notation.

[$]x(t)=x(0)+\mu(x(0))\int_0^t ds+\frac{\partial \mu(x(0))}{\partial x}\mu(x(0))\int_0^t \int_0^s dv\,ds+\frac{\partial \mu(x(0))}{\partial x}\sigma(x(0))\int_0^t \int_0^s dz(v)\,ds+\frac{1}{2}\frac{\partial^2 \mu(x(0))}{\partial x^2}\sigma^2(x(0))\int_0^t \int_0^s dv\,ds[$]

[$]+\sigma(x(0))\int_0^t dz(s)+\frac{\partial \sigma(x(0))}{\partial x}\mu(x(0))\int_0^t \int_0^s dv\,dz(s)+\frac{\partial \sigma(x(0))}{\partial x}\sigma(x(0))\int_0^t \int_0^s dz(v)\,dz(s)+\frac{1}{2}\frac{\partial^2 \sigma(x(0))}{\partial x^2}\sigma^2(x(0))\int_0^t \int_0^s dv\,dz(s)[$]  Eq(1)

For geometric Brownian motion we use

[$]\mu(x(0))=(r-d)\,x(0),\qquad \frac{\partial \mu(x(0))}{\partial x}=(r-d),\qquad \frac{\partial^2 \mu(x(0))}{\partial x^2}=0[$]

[$]\sigma(x(0))=\mathrm{sig}\,x(0),\qquad \frac{\partial \sigma(x(0))}{\partial x}=\mathrm{sig},\qquad \frac{\partial^2 \sigma(x(0))}{\partial x^2}=0[$]

Substituting the above values in Eq(1), we get

[$]x(t)=x(0)+(r-d)\,x(0)\,\Delta t+(r-d)^2 x(0)\frac{\Delta t^2}{2}+(r-d)\,\mathrm{sig}\,x(0)\int_0^t \int_0^s dz(v)\,ds[$]

[$]+\mathrm{sig}\,x(0)\,\mathrm{nor}(0,1)\sqrt{\Delta t}+\mathrm{sig}\,(r-d)\,x(0)\int_0^t \int_0^s dv\,dz(s)+\mathrm{sig}^2\,x(0)\int_0^t \int_0^s dz(v)\,dz(s)[$]

We use the identities

[$]\int_0^t \int_0^s dz(v)\,ds=\int_0^t \int_0^s dv\,dz(s)=\mathrm{nor}(0,1)\frac{\Delta t^{1.5}}{\sqrt{3}}[$]

and

[$]\int_0^t \int_0^s dz(v)\,dz(s)=\left(\mathrm{nor}(0,1)^2-1\right)\frac{\Delta t}{2}[$]

Now we can write the Monte Carlo simulation scheme:

[$]x(t+\Delta t)=x(t)+(r-d)\,x(t)\,\Delta t+(r-d)^2 x(t)\frac{\Delta t^2}{2}+\mathrm{sig}\,x(t)\,\mathrm{nor}(0,1)\sqrt{\Delta t}+2\,\mathrm{sig}\,(r-d)\,x(t)\,\mathrm{nor}(0,1)\frac{\Delta t^{1.5}}{\sqrt{3}}+\mathrm{sig}^2 x(t)\left(\mathrm{nor}(0,1)^2-1\right)\frac{\Delta t}{2}[$]

The above is order [$]\Delta t[$] accurate, with some terms of higher order. Adding one more term from the further expansion takes the order to [$]\Delta t^{1.5}[$]. The new, better scheme would be

[$]x(t+\Delta t)=x(t)+(r-d)\,x(t)\,\Delta t+(r-d)^2 x(t)\frac{\Delta t^2}{2}+\mathrm{sig}\,x(t)\,\mathrm{nor}(0,1)\sqrt{\Delta t}+2\,\mathrm{sig}\,(r-d)\,x(t)\,\mathrm{nor}(0,1)\frac{\Delta t^{1.5}}{\sqrt{3}}[$]

[$]+\mathrm{sig}^2 x(t)\left(\mathrm{nor}(0,1)^2-1\right)\frac{\Delta t}{2}+\mathrm{sig}^3 x(t)\left(\mathrm{nor}(0,1)^3-3\,\mathrm{nor}(0,1)\right)\frac{\Delta t^{1.5}}{6}[$]

Neither scheme is an improvement on Euler in any way. Believe me. I won't be posting numbers because they are all the same. The schemes fall short for MC simulation. Most disconcerting is that convergence is erratic, not monotone.

**Quote (Amin):** I added a higher order term quickly, thinking it might add to accuracy.

And it might not.
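For readers who want to experiment with the two schemes quoted above, here is a minimal Python sketch of one step of Euler versus the order-[$]\Delta t[$] Ito-Taylor scheme. The parameters (r-d = 0.1, sig = 0.3, x0 = 1, T = 1) are illustrative assumptions, not Daniel's benchmark case.

```python
import numpy as np

# Hedged sketch: one big step of Euler vs. the quoted Ito-Taylor scheme for
# GBM, dx = (r-d) x dt + sig x dz. Parameters are illustrative assumptions.
r_minus_d, sig, x0, T = 0.1, 0.3, 1.0, 1.0
NT, paths = 1, 400_000
dt = T / NT
rng = np.random.default_rng(0)

x_euler = np.full(paths, x0)
x_taylor = np.full(paths, x0)
for _ in range(NT):
    z = rng.standard_normal(paths)           # nor(0,1) draws, shared by both schemes
    x_euler = x_euler + r_minus_d * x_euler * dt \
              + sig * x_euler * z * np.sqrt(dt)
    x_taylor = (x_taylor
                + r_minus_d * x_taylor * dt
                + r_minus_d**2 * x_taylor * dt**2 / 2
                + sig * x_taylor * z * np.sqrt(dt)
                + 2 * sig * r_minus_d * x_taylor * z * dt**1.5 / np.sqrt(3)
                + sig**2 * x_taylor * (z**2 - 1) * dt / 2)

exact_mean = x0 * np.exp(r_minus_d * T)      # E[x(T)] for GBM
print(abs(x_euler.mean() - exact_mean))
print(abs(x_taylor.mean() - exact_mean))
```

With a single step, the Taylor scheme's sample mean sits much closer to the exact GBM mean [$]x(0)e^{(r-d)T}[$] than Euler's, because the [$](r-d)^2 \Delta t^2/2[$] drift correction is captured; whether that translates into better weak convergence for a given payoff is exactly what is being argued about in this thread.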

Last edited by Cuchulainn on May 9th, 2016, 10:00 pm, edited 1 time in total.

**Quote (Cuchulainn):** Neither scheme is an improvement on Euler in any way. Believe me. I won't be posting numbers because they are all the same. The schemes fall short for MC simulation.

Please post the numbers, since it would be very helpful to see what they are. They might be off, as you have suggested, but the only numbers you posted were from a bad scheme. I am not asking you to do something worthless.

**Quote (Cuchulainn):** And it might not.

As I already explained, when we go to higher order, all terms have to be taken to higher order, otherwise there are biases. I was not so deeply aware of this when I wrote the scheme. Please post numbers from the good scheme, and I can see what is wrong, how wrong it is, and many other things.

- Cuchulainn

**Quote (Amin):** Please post the numbers, since it would be very helpful to see what they are. They might be off, as you have suggested, but the only numbers you posted were from a bad scheme. I am not asking you to do something worthless. As I already explained, when we go to higher order, all terms have to be taken to higher order, otherwise there are biases. I was not so deeply aware of this when I wrote the scheme. Please post numbers from the good scheme, and I can see what is wrong, how wrong it is, and many other things.

This is not the way to proceed. So, I'm out until you post a correct scheme.

- Cuchulainn

**Quote (amike):** For your example there is also a handy exact integral: [$]\int_0^t z_s\,dz_s=\frac{1}{2}(z_t^2-t)[$], so you may want to explain how/if your result agrees with it...

I agree. This is still an open question here. I think a yes/no answer should be given, to clear up the misunderstandings.
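As a quick numerical sanity check on amike's identity, one can compare an Ito (left-point) Riemann sum against the closed form. A hedged Python sketch, with grid and path counts chosen purely for illustration:

```python
import numpy as np

# Hedged check of the identity int_0^t z dz = (z_t^2 - t)/2 via a left-point
# (Ito convention) Riemann sum on a fine grid. Parameters are illustrative.
rng = np.random.default_rng(1)
t, nsteps, paths = 1.0, 1000, 1000
dt = t / nsteps
dz = rng.standard_normal((paths, nsteps)) * np.sqrt(dt)   # Brownian increments
z = np.cumsum(dz, axis=1)                                  # z(t_i) along each path
z_left = np.hstack([np.zeros((paths, 1)), z[:, :-1]])      # left endpoint values
ito_sum = (z_left * dz).sum(axis=1)                        # discrete int z dz
closed = 0.5 * (z[:, -1]**2 - t)                           # amike's closed form
print(np.abs(ito_sum - closed).mean())                     # small discretization error
```

The pathwise gap equals [$](t-\sum \Delta z_i^2)/2[$], which shrinks as the grid is refined; with a Stratonovich (midpoint) sum the [$]-t/2[$] term would disappear, which is the crux of the convention question in this thread.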

Sorry, Daniel, for the delay in posting my results. I would request that you rejoin the discussion if you like. I tested the right O(dt) scheme and it gives far better results, at least on my computer, for longer step sizes; the difference becomes especially obvious when the step size grows beyond .05 years. I will be posting my results, as you did, within the hour.

I would politely say that what you call an open question in your previous post has been definitively and rightly answered on this thread, and I claim the answer I gave is the only right one. In fact, I am not the first to give this answer: in the case of pure dz-integrals it was well known to many stochastic calculus experts. I do not know whether anybody had previously handled a combination of dt- and dz-integrals as I posted on this thread.

To clear up any misconception, I will post my matlab program for the problem Daniel posted, so people can see for themselves how my method performs. Daniel, I hope you will be able to look at the equations in the matlab program and clone them in C++.

Sorry I was unable to post my programs earlier, as I kept saying I would. I dug up a six-year-old damaged laptop, removed all wifi devices, attached it to an external monitor, and I am back to work.

Last edited by Amin on May 10th, 2016, 10:00 pm, edited 1 time in total.

Here is my table of results, from some quick simulations.

| NT | NSIM | Euler | A.Amin |
|----|------|---------|---------|
| 1 | 10^6 | 3.94247 | 4.0578 |
| 2 | 10^6 | 3.98663 | 4.0715 |
| 5 | 10^6 | 4.04323 | 4.0788 |
| 10 | 10^6 | 4.05666 | 4.07463 |
| 20 | 10^6 | 4.06806 | 4.07659 |
| 50 | 10^6 | 4.07302 | 4.07637 |
| 100 | 10^6 | 4.0691 | 4.0707 |

The exact answer suggested by Daniel is 4.07326. We can see that both methods carry noticeable Monte Carlo variance. My method gets into approximately the right territory by NT=2, while standard Euler takes NT=50; that is just a quick observation from the table. I did not do anything special and used matlab's own random number generator. I am sure that going to a higher level of expansion will be far more accurate, but all the terms have to be taken to the higher level: selectively taking one term from the higher level (as I naively did earlier) will introduce biases, unless special analysis has been done for the SDE.

Could you please explain in words the difference between your new method and the "Strong Taylor Approximations" presented, for example, in slides 6-7 of http://www.damtp.cam.ac.uk/user/na/FoCM ... loeden.pdf ?

- Cuchulainn

**Quote (savr):** Could you please explain in words the difference between your new method and the "Strong Taylor Approximations" presented, for example, in slides 6-7 of http://www.damtp.cam.ac.uk/user/na/FoCM ... loeden.pdf ?

Looking at the two schemes produced by Ahsan, the terms look like a Hermite polynomial chaos expansion. So it must be the relationship between Wiener integrals and Hermite polynomials (due to K. Ito?). I am beginning to see the rationale behind his scheme, slowly.
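The relationship Cuchulainn is recalling can be stated precisely. This is the standard Ito identity linking iterated Wiener integrals to probabilists' Hermite polynomials, stated here for reference rather than taken from the thread:

```latex
% n-fold iterated Ito integral of dz against itself over [0, t]
\int_0^t \int_0^{s_n} \cdots \int_0^{s_2} dz(s_1)\cdots dz(s_{n-1})\,dz(s_n)
  = \frac{t^{n/2}}{n!}\, He_n\!\left(\frac{z(t)}{\sqrt{t}}\right),
\qquad He_2(x) = x^2 - 1,\quad He_3(x) = x^3 - 3x.
```

The n=2 case reproduces the [$]\left(\mathrm{nor}(0,1)^2-1\right)\frac{\Delta t}{2}[$] identity used in Ahsan's derivation, and the n=3 case gives exactly the [$]\left(\mathrm{nor}(0,1)^3-3\,\mathrm{nor}(0,1)\right)\frac{\Delta t^{1.5}}{6}[$] coefficient in his higher order scheme.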

Last edited by Cuchulainn on May 10th, 2016, 10:00 pm, edited 1 time in total.

- Cuchulainn

**Quote (Amin):** Here is my table of results, from some quick simulations. ... The exact answer suggested by Daniel is 4.07326. ... I am sure that going to a higher level of expansion will be far more accurate, but all the terms have to be taken to the higher level: selectively taking one term from the higher level will introduce biases, unless special analysis has been done for the SDE.

I agree with these results in the main. Which RNG did you use for Euler? I used the C++11 Mersenne Twister, which is probably the reason my Euler results are close to yours.

Many people have asked me what exactly the breakthrough is in the understanding and simulation of SDEs. The breakthrough is a new understanding of how the higher derivatives of the drift and volatility terms of an SDE, and their interaction, affect the dynamics of its evolution, both probabilistically and conditionally (once the particular realization of the normal draw is known), when the driving uncertainty is a normal distribution. When we solve an SDE, we need exact values of drift and volatility in continuous time along the path of the SDE variable, which are mostly not known analytically. Of course, we can never predict the future realization of the SDE, since normal uncertainty cannot be predicted. But once we know the particular draw or realization of the normal uncertainty, we can, at least theoretically, tell with great accuracy what the expansions of drift and volatility would be, to any increasing degree of precision, by taking a higher order expansion of the drift and volatility functions and using it in conjunction with Hermite polynomial functions of the realized normal draw that models the driving uncertainty. The Ito change of variable formula tells us how the first two derivatives affect the evolution of functions of a stochastic process described by an SDE. When we apply the Ito formula, we need the exact values of the first-two-derivative-dependent terms in continuous time along the path of the stochastic process, conditional on the particular realization of normal uncertainty, something which is not known analytically for most SDEs. I was able to express the forward values (in continuous time along the path, conditional on the normal driving uncertainty) of drift and variance in terms of first-two-derivative-dependent terms evaluated at time zero, plus higher order terms in forward time, by using the Ito formula.

The higher order forward-time terms can in turn be expressed as a sum of further time-zero terms and further higher order forward-time terms by repeated use of the Ito formula, and this process continues depending on the structure of the drift and volatility functions and their derivatives. Each time we reapply the Ito formula to the forward-time terms of the drift and volatility expansion in this way, the accuracy and precision of the forward value of drift and volatility, as a function of the normal draw, increases.

However, there is sometimes a problem with the use of higher order terms in some nonlinear SDEs, since they are used in conjunction with higher order Hermite polynomials. Current normal random variable sampling techniques often do not sample the variance with very high accuracy. The normal is determined by two moments only, but when its variance increases slightly, the higher moments of the normal increase more drastically. For lower order expansions that use Hermite polynomials up to level two, we are almost always fine, with some very rare exceptions. However, the variance of the higher order Hermite polynomials starts to blow up with a small increase in the variance of the normal draws, since these polynomials use third, fourth, or higher powers of the normal random variable. This is not always the case, but sometimes for nonlinear SDEs there is a loss of robustness due to this blow-up. So we have the theoretical apparatus in place for very high precision Monte Carlo, but we need better normal sampling techniques in which the variance of the higher order Hermite polynomials along the Monte Carlo path does not blow up.

No research has been done on this, since there was no knowledge of higher order Monte Carlo, and in particular the use of Hermite polynomials in Monte Carlo was not known, so nobody cared about better and more precise sampling techniques that would closely match the variance of the higher order Hermite polynomials. I will complete my paper with an explanation of Monte Carlo with second order Hermite polynomials, and also apply it to the generation of transition probabilities with higher order Hermite polynomials. I will also describe the theoretical procedure for expanding the drift, volatility, and other stochastic integrals of an SDE to any level or degree of precision, which is a straightforward extension of what I have described on this thread. However, totally robust and very high precision Monte Carlo methods will only become available after others have done research on precise methods of drawing normal random numbers such that the variances of the high order Hermite polynomials are retrieved.
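The variance-sensitivity claim above can be checked in closed form. For [$]z \sim N(0, s^2)[$] (so [$]s^2 = 1[$] is a perfect sampler), [$]\mathrm{Var}[He_2(z)] = 2s^4[$] and [$]\mathrm{Var}[He_3(z)] = 15s^6 - 18s^4 + 9s^2[$], which equal 2 and 6 at [$]s^2 = 1[$]. A small Python sketch (the closed forms are my own computation from the normal moments, not taken from the thread):

```python
import numpy as np

# Closed-form variances of He2(z)=z^2-1 and He3(z)=z^3-3z for z ~ N(0, s2),
# using E[z^2]=s2, E[z^4]=3*s2^2, E[z^6]=15*s2^3 (both polynomials have mean 0).
def var_he2(s2):
    return 2 * s2**2

def var_he3(s2):
    return 15 * s2**3 - 18 * s2**2 + 9 * s2

s2 = 1.02  # a 2% error in the sampled variance
print(var_he2(s2) / var_he2(1.0) - 1)   # relative inflation of Var[He2]
print(var_he3(s2) / var_he3(1.0) - 1)   # larger relative inflation of Var[He3]
```

A 2% error in the sampled variance inflates Var[He2] by about 4% but Var[He3] by over 6%, and the gap widens with the polynomial order, which is the robustness issue described above.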

- Cuchulainn

**Quote (Amin):** However, totally robust and very high precision Monte Carlo methods will only become available after others have done research on precise methods of drawing normal random numbers such that the variances of the high order Hermite polynomials are retrieved.

Any supporting literature?

Here is a small matlab program that makes a graph comparison of the density of CEV noise from an Euler Monte Carlo and from the Stochastic Calculus of Standard Deviations. I hope many friends will like this program.

```matlab
function [] = SDEXticsInfinitiTechnologiesCEVNoise01_Public()
X0_max = 800;      % Total number of SD fractions.
X0_mid = 400.5;    % Mid of the SD grid. Lies at z0 by definition.
X0_mid0 = 400;
X0_start = 1;
dX0 = .0125;       % Width of SD cells in terms of standard deviations.
T_index = 200;
dt = .005/1.0;
epsilon = 0.750;
gamma = .750;
z0 = 1.0;
dz_integral1(1:X0_max) = z0;
iivector(1:X0_max) = 1:X0_max;
paths = 1000000;   % No. of paths for creation of the Monte Carlo density.
zz4(1:paths) = z0;
C1 = 1/sqrt(2*pi)/2;
C2 = 1/(2*pi)/4;
for nn = 1:T_index   % Loop over time.
    t  = nn*dt;
    t0 = (nn-1)*dt;
    prob  = exp(-.5*((iivector-X0_mid).*dX0).^2)./sqrt(22/7.0*2.0)/(sqrt(t)*epsilon);
    prob2 = exp(-.5*((iivector-X0_mid).*dX0).^2)./sqrt(22/7.0*2.0)*dX0;
    dt = t - t0;
    dz_integral1 = dz_integral1 ...
        + epsilon.*dz_integral1.^gamma.*(iivector-X0_mid).*dX0*(sqrt(t)-sqrt(t0)) ...
        + epsilon.*gamma.*dz_integral1.^(gamma-1).*(epsilon.*dz_integral1.^(gamma).*(t-t0)/2.0.*(((iivector-X0_mid).*dX0).^2.*C2-1)) ...
        + .5*epsilon.*gamma.*(gamma-1).*dz_integral1.^(gamma-2).*epsilon.^2.*dz_integral1.^(2*gamma).*(t.^1.5-t0.^1.5)./sqrt(3).*(((iivector-X0_mid).*dX0)).*C1;
    % Naive brute-force Monte Carlo part follows. Since the steps are extremely
    % short, the Monte Carlo is quite close to the right density for most cases,
    % though it may be off for a very small number of cases; it is a good
    % initial check on our analytic density.
    Random1 = randn(size(zz4));
    RandTemp1 = Random1.*sqrt(dt);
    zz4 = zz4 + zz4.^gamma.*RandTemp1*epsilon;
end
effective_sigma = epsilon;
Jacobian_dz1(1:X0_max) = 0;
for ii = 4:X0_max-3
    Jacobian_dz1(ii) = ((dz_integral1(ii-2)) - 8*(dz_integral1(ii-1)) + 8*(dz_integral1(ii+1)) - (dz_integral1(ii+2)))/(12*dX0*sqrt((T_index-1+1)*dt)*effective_sigma);
    if (Jacobian_dz1(ii) ~= 0.0)
        Jacobian_dz1(ii) = 1/Jacobian_dz1(ii);
    end
end
BinSize = .005;    % Resolution of the Monte Carlo density. If the graph is jagged
                   % and noisy, increase the bin size; if it is made of
                   % straight-line increments, decrease it.
MaxCutOff = 20;
[XDensity, IndexOutX, IndexMaxX] = MakeDensityFromSimulation_Infiniti(zz4, paths, BinSize, MaxCutOff);
str = input('Press any key to see the graph. Please rescale the graph from the Edit menu when the diffusion reaches zero.');
plot(IndexOutX(1:IndexMaxX), XDensity(1:IndexMaxX), 'g', ...
     (dz_integral1(X0_start+2:X0_max-2)), prob(X0_start+2:X0_max-2).*Jacobian_dz1(X0_start+2:X0_max-2), 'k')
str = input('Look at the overlaid comparison of the Monte Carlo density of the SDE (green line) with the density generated from the analytic method (black line) >');
end
```

Here is the helper function that builds a density graph from the Monte Carlo paths; you will need it as a separate function file in matlab. Some friends pointed out that this function was missing from the previous code. I distributed it earlier in this thread, but I am writing it here again.

```matlab
function [XDensity, IndexOut, IndexMax] = MakeDensityFromSimulation_Infiniti(X, Paths, BinSize, MaxCutOff)
% Processes Monte Carlo paths to return a density series XDensity as a
% function of IndexOut. IndexMax is the maximum value of the index.
Xmin = 0;
Xmax = 0;
for p = 1:Paths
    if (X(p) > MaxCutOff)
        X(p) = MaxCutOff;
    end
    if (Xmin > real(X(p)))
        Xmin = real(X(p));
    end
    if (Xmax < real(X(p)))
        Xmax = real(X(p));
    end
end
IndexMax = floor((Xmax - Xmin)/BinSize + .5) + 1;
XDensity(1:IndexMax) = 0.0;
for p = 1:Paths
    index = floor(real(X(p) - Xmin)/BinSize + .5) + 1;
    if (index < 1)
        index = 1;
    end
    if (real(index) > IndexMax)
        index = IndexMax;
    end
    XDensity(index) = XDensity(index) + 1.0/Paths/BinSize;
end
IndexOut(1:IndexMax) = Xmin + (0:(IndexMax-1))*BinSize;
end
```

Last edited by Amin on May 17th, 2016, 10:00 pm, edited 1 time in total.

In the stochastic calculus of standard deviations, we divide the normal density of standard Brownian motion into a large number of subdivisions measured in standard deviations (as differences of standard deviations, which are the units of the normal density). These subdivisions remain constant when measured in standard deviations but expand in absolute terms over time, with increasing variance, such that the probability mass in each subdivision remains exactly the same at every point in the evolution of the density. Each subdivision can be considered a branch of a non-recombining tree, so the number of branches (or subdivisions) remains exactly the same from the origin to expiry at any future time. Each branch extends straight in the direction of the same realization of the normal (measured again in standard deviations), expanding and moving outwards as the variance increases with t. In the stochastic calculus of standard deviations, these branches do not interact, and stochastic calculus is applied on each branch independently. Here we use n as the index over the branches of the tree, each subdivision/branch having equal width [$]\Delta X0[$], so that [$]n\,\Delta X0[$] ranges typically between -5 and 5, the entire range of the normal, with constant increments of [$]\Delta X0[$] between successive subdivisions.

At any time in the evolution of the tree, the centre of the nth branch or subdivision is given by

[$]X_n(t)=n\,\Delta X0\,\sqrt{t}[$]

and the absolute width of each subdivision is [$]\Delta X0\,\sqrt{t}[$]. Since the probability mass in each subdivision remains constant as a function of time, we have

[$]\int_{X_n(t)-.5\,\Delta X0\sqrt{t}}^{X_n(t)+.5\,\Delta X0\sqrt{t}} f(X(t))\,dX(t)=\int_{X_n(s)-.5\,\Delta X0\sqrt{s}}^{X_n(s)+.5\,\Delta X0\sqrt{s}} f(X(s))\,dX(s)[$]

for any two times s and t. For the evaluation of powers of the normal, as required by the Hermite polynomials, we have to evaluate analytically integrals of the kind, for power p,

[$]\int_{X_n(t)-.5\,\Delta X0\sqrt{t}}^{X_n(t)+.5\,\Delta X0\sqrt{t}} (X(t))^p f(X(t))\,dX(t)[$]

and, for the evolution from the density at time s to the density at time t, the differences

[$]\int_{X_n(t)-.5\,\Delta X0\sqrt{t}}^{X_n(t)+.5\,\Delta X0\sqrt{t}} (X(t))^p f(X(t))\,dX(t)-\int_{X_n(s)-.5\,\Delta X0\sqrt{s}}^{X_n(s)+.5\,\Delta X0\sqrt{s}} (X(s))^p f(X(s))\,dX(s)[$]

where

[$]f(X(t))=\frac{1}{\sqrt{2 \pi t}}\exp\left[-\frac{X^2}{2t}\right][$]

These formulas give the constants that multiply the powers of Z in the analysis of the stochastic calculus of standard deviations. The above discussion concerns the stochastic calculus of standard deviations applied to the evolution of the normal density. For a general SDE, we simply follow the earlier discussion in the thread; the subdivisions expand nonlinearly, but we make use of the fact that the evolution of the standard normal density employed in the evolution of a general nonlinear SDE is independent of the evolution of that SDE. The code I gave in a recent post was an initial demonstration, and the constants in that code will change now.
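The mass-conservation bookkeeping described above can be sketched numerically. This is my own illustration (the grid width dX0 = .0125 is borrowed from the earlier matlab code): the probability mass in branch n under the N(0, t) density is independent of t, because both the branch boundaries and the density scale with [$]\sqrt{t}[$].

```python
from math import erf, sqrt

# Branch n of the non-recombining grid has centre X_n(t) = n*dX0*sqrt(t) and
# absolute width dX0*sqrt(t); its probability mass under the N(0, t) density
# is the same at every time t.
def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def branch_mass(n, dX0, t):
    lo = (n - 0.5) * dX0 * sqrt(t)
    hi = (n + 0.5) * dX0 * sqrt(t)
    # Rescale by the N(0, t) standard deviation sqrt(t); the sqrt(t) cancels.
    return Phi(hi / sqrt(t)) - Phi(lo / sqrt(t))

dX0 = 0.0125
print(branch_mass(40, dX0, 0.5))
print(branch_mass(40, dX0, 2.0))   # equal: mass is preserved per branch
```

Summing `branch_mass` over n from -400 to 400 recovers essentially all the probability (the grid covers about [$]\pm 5[$] standard deviations), which is the sense in which the 800-branch grid in the matlab code spans "the entire range of the normal".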

Last edited by Amin on May 19th, 2016, 10:00 pm, edited 1 time in total.
