
 
Amin
Topic Author
Posts: 1948
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

June 7th, 2018, 8:22 am

Here I try to explain the method in more detail. Suppose we want to calculate the density of an SDE given as
[$]dX(t)=\mu (t) X(t)^{\beta} dt + \sigma (t) X(t)^{\gamma} dz(t) [$]  Eq(1)

The second order Ito-Taylor expansion of the above SDE is given as

[$]X(t)=X(t_0) + \mu X(t_0)^{\beta} \int_{t_0}^t ds + \sigma X(t_0)^{\gamma} \int_{t_0}^t dz(s)[$]
[$]+\mu \sigma \beta X(t_0)^{\beta + \gamma -1} \int_{t_0}^t \int_{t_0}^s dz(v) ds [$]
[$]+\mu \sigma \gamma X(t_0)^{\beta + \gamma -1} \int_{t_0}^t \int_{t_0}^s dv dz(s)[$]
[$]+{\sigma}^2 \gamma X(t_0)^{2 {\gamma} -1 } \int_{t_0}^t  \int_{t_0}^s dz(v) dz(s)[$]
[$]+ .5 {\sigma}^3 \gamma (\gamma-1) X(t_0)^{3 \gamma -2} \int_{t_0}^t \int_{t_0}^s dv dz(s)[$] 
[$]+ {\mu}^2 \beta X(t_0)^{2 \beta -1} \int_{t_0}^t \int_{t_0}^s dv ds + .5 \beta (\beta -1) \mu  {\sigma}^2 X(t_0)^{\beta + 2 \gamma -2} \int_{t_0}^t \int_{t_0}^s dv ds[$]    Eq(2)

With [$]\Delta t=t-t_0[$], we can write the above iterated integrals in terms of powers of [$]\Delta t[$] and a standard normal [$]N[$] as

[$]X(t)=X(t_0) + \mu X(t_0)^{\beta}  \Delta t + \sigma X(t_0)^{\gamma} \sqrt{\Delta t} N[$]
[$] +\mu \sigma \beta X(t_0)^{\beta + \gamma -1} \frac{1}{\sqrt{3}} {\Delta t}^{1.5} N[$]
[$]+\mu \sigma \gamma X(t_0)^{\beta + \gamma -1} (1-\frac{1}{\sqrt{3}}) {\Delta t}^{1.5} N[$]
[$]+{\sigma}^2 \gamma X(t_0)^{2 {\gamma} -1 } \frac{\Delta t}{2} (N^2-1) [$] 
[$]+.5 \gamma (\gamma - 1) {\sigma}^3 X(t_0)^{3 \gamma -2} \frac{1}{\sqrt{3}} {\Delta t}^{1.5} N [$]
[$] + {\mu}^2 \beta X(t_0)^{2 {\beta} -1} \frac{{\Delta t}^2}{2}[$]
[$]+ .5 \beta (\beta -1) \mu {\sigma}^2 X(t_0)^{\beta + 2 \gamma -2} \frac{{\Delta t}^2}{2}[$]   Eq(3)



In my program, I simply grouped the terms so that powers of N are lumped together; rearranging the above equation gives

[$]X(t)=X(t_0) + \mu X(t_0)^{\beta}  {\Delta t} [$]
[$]+ {\mu}^2 \beta X(t_0)^{2 {\beta} -1} \frac{{\Delta t}^2}{2}[$]
[$]+ .5 \beta (\beta -1) \mu {\sigma}^2 X(t_0)^{\beta + 2 \gamma -2} \frac{{\Delta t}^2}{2} -{\sigma}^2 \gamma X(t_0)^{2 {\gamma} -1 } \frac{\Delta t}{2}[$]
[$] + \sigma X(t_0)^{\gamma} \sqrt{\Delta t} N + \mu \sigma \beta X(t_0)^{\beta + \gamma -1} \frac{1}{\sqrt{3}} {\Delta t}^{1.5} N[$] 
[$] +.5 \gamma (\gamma -1) {\sigma}^3 X(t_0)^{3 \gamma - 2} \frac{1}{\sqrt{3}} {\Delta t}^{1.5} N[$]
[$] + \mu \sigma \gamma X(t_0)^{\beta + \gamma -1} (1- \frac{1}{\sqrt{3}}) {\Delta t}^{1.5} N [$]
[$]+{\sigma}^2 \gamma X(t_0)^{2 {\gamma} -1 } \frac{\Delta t}{2}  N^2[$]   Eq(4)

So we can see that the computational effort of one evolution step is comparable to the effort of one Monte Carlo step when the Monte Carlo simulation is based on the first two Hermite polynomials, as above.
In my algorithm, I simply used 81 paths and assigned a normal value to each path. The simulation algorithm for the nth path is

[$]X_n(t)=X_n(t_0) + \mu X_n(t_0)^{\beta}  {\Delta t} [$]
[$]+ {\mu}^2 \beta X_n(t_0)^{2 {\beta} -1 } \frac{{\Delta t}^2}{2}[$]
[$]+ .5 \beta (\beta -1) \mu {\sigma}^2 X_n(t_0)^{\beta + 2 \gamma -2} \frac{{\Delta t}^2}{2} -{\sigma}^2 \gamma X_n(t_0)^{2 {\gamma} -1 } \frac{\Delta t}{2}[$]
[$] + \sigma X_n(t_0)^{\gamma} \sqrt{\Delta t} N_n + \mu \sigma \beta X_n(t_0)^{\beta + \gamma -1} \frac{1}{\sqrt{3}} {\Delta t}^{1.5} N_n[$] 
[$] +.5 \gamma (\gamma -1) {\sigma}^3 X_n(t_0)^{3 \gamma - 2} \frac{1}{\sqrt{3}} {\Delta t}^{1.5} N_n[$]
[$] + \mu \sigma \gamma X_n(t_0)^{\beta + \gamma -1} (1- \frac{1}{\sqrt{3}}) {\Delta t}^{1.5} N_n [$]
[$]+{\sigma}^2 \gamma X_n(t_0)^{2 {\gamma} -1 } \frac{\Delta t}{2}  N_n^2[$]   Eq(5)

In my implementation, the normal variable grid [$]N_n[$] ranges from -4 to +4 in increments of .1, so there are only 81 paths/branches, each associated with a specific value of the normal variable.
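For concreteness, here is a minimal NumPy sketch of one evolution step of Eq(5) applied to the 81-branch grid. The function name, the parameter values, and the assumption that [$]\mu[$], [$]\sigma[$], [$]\beta[$], [$]\gamma[$] are constant over the step are illustrative choices, not the original program.

```python
import numpy as np

def ito_taylor_step(X, N, mu, sigma, beta, gamma, dt):
    """One second-order Ito-Taylor step (Eq 5), applied elementwise to the
    branches, each branch n carrying its own normal value N[n]."""
    sdt = np.sqrt(dt)
    return (X
            + mu * X**beta * dt
            + mu**2 * beta * X**(2*beta - 1) * dt**2 / 2
            + 0.5 * beta * (beta - 1) * mu * sigma**2 * X**(beta + 2*gamma - 2) * dt**2 / 2
            - sigma**2 * gamma * X**(2*gamma - 1) * dt / 2
            + sigma * X**gamma * sdt * N
            + mu * sigma * beta * X**(beta + gamma - 1) * dt**1.5 / np.sqrt(3) * N
            + 0.5 * gamma * (gamma - 1) * sigma**3 * X**(3*gamma - 2) * dt**1.5 / np.sqrt(3) * N
            + mu * sigma * gamma * X**(beta + gamma - 1) * (1 - 1/np.sqrt(3)) * dt**1.5 * N
            + sigma**2 * gamma * X**(2*gamma - 1) * dt / 2 * N**2)

# 81-branch normal grid, -4 to +4 in steps of 0.1, as described above
N_grid = np.linspace(-4.0, 4.0, 81)
X = np.full_like(N_grid, 1.0)          # X(t_0) on every branch (illustrative value)
X = ito_taylor_step(X, N_grid, mu=0.05, sigma=0.3, beta=1.0, gamma=1.0, dt=0.01)
```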

You could safely use the above algorithm, but the squared weights on the normal increments would then sum to [$]T[$] if there are T time intervals of equal width, and you would have to map the result using a normal density of variance [$]\frac{1}{T}[$].
In my algorithm, I made a slight modification and weighted every normal value by [$]\sqrt{\frac{1}{T}}[$], so that the squared weights sum to one and the evolution of the SDE at terminal time (after T time intervals) can be mapped to a standard normal. Please note that I use [$]T[$] for the number of time intervals; it is not the absolute value of time anywhere. Here is the algorithm I used.

[$]X_n(t)=X_n(t_0) + \mu X_n(t_0)^{\beta}  {\Delta t} [$]
[$]+ {\mu}^2 \beta X_n(t_0)^{2 {\beta} -1 } \frac{{\Delta t}^2}{2}[$]
[$]+ .5 \beta (\beta -1) \mu {\sigma}^2 X_n(t_0)^{\beta + 2 \gamma -2} \frac{{\Delta t}^2}{2} -{\sigma}^2 \gamma X_n(t_0)^{2 {\gamma} -1 } \frac{\Delta t}{2}[$]
[$] + \sigma X_n(t_0)^{\gamma} \sqrt{\Delta t} (\sqrt{\frac{1}{T}} N_n) + \mu \sigma \beta X_n(t_0)^{\beta + \gamma -1} \frac{1}{\sqrt{3}} {\Delta t}^{1.5} (\sqrt{\frac{1}{T}} N_n)[$] 
[$] +.5 \gamma (\gamma -1) {\sigma}^3 X_n(t_0)^{3 \gamma - 2} \frac{1}{\sqrt{3}} {\Delta t}^{1.5} (\sqrt{\frac{1}{T}} N_n)[$]
[$] + \mu \sigma \gamma X_n(t_0)^{\beta + \gamma -1} (1- \frac{1}{\sqrt{3}}) {\Delta t}^{1.5} (\sqrt{\frac{1}{T}} N_n) [$]
[$]+{\sigma}^2 \gamma X_n(t_0)^{2 {\gamma} -1 } \frac{\Delta t}{2}  (\sqrt{\frac{1}{T}} N_n)^2[$]           Eq(6)

As before, the normal variable grid [$]N_n[$] ranges from -4 to +4 in increments of .1, so there are only 81 paths/branches, each associated with a specific value of the normal variable.
Finally, I calculated the density of the SDE by a change of variables with respect to the standard normal variable, using the central-difference derivative (time index omitted)
[$]\frac{dX_n}{dN_n}=\frac{X_{n+1}- X_{n-1}}{N_{n+1} - N_{n-1} }[$]   Eq(7)
Here [$]X_n[$] is the value of X calculated along the nth branch and [$]N_n[$] is the value of the normal variable associated with the nth branch, or equivalently the nth point on the normal variable grid,
and then used the formula for the density of the SDE:
[$]f(X_n)=\frac{f(N_n)}{dX_n/dN_n}[$]  Eq(8)
In my implementation [$]f(N_n)[$] is the value of the standard normal density at [$]N=N_n[$], the specific value of the normal assigned to the nth branch.
I used a standard normal density in the calculation of the SDE density because I used the weighting [$]\sqrt{\frac{1}{T}}[$]; for different choices of weights, you would use a normal density with a different variance in Eq(8). For example, if I used no weight (unity weight) as in Eq(5), the squared weights summed over T intervals would give T, and I would then have to use a normal density of variance [$]\frac{1}{T}[$] in Eq(8). Again, please note that [$]T[$] is the number of time intervals, not the absolute value of time; sorry for this slightly ambiguous notation.
And there could be many ways of assigning weights to the normal variable in the algorithm, each mapped to a normal density whose variance corresponds to the weights used.
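Putting the pieces together, here is a sketch of the full procedure with the [$]\sqrt{\frac{1}{T}}[$] weighting of Eq(6) followed by the density extraction of Eqs (7)-(8). It reuses the ito_taylor_step function from the sketch above; all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

T = 100                              # number of time intervals (not absolute time)
dt = 1.0 / T                         # illustrative step size
N_grid = np.linspace(-4.0, 4.0, 81)  # 81-point normal grid
N_weighted = np.sqrt(1.0 / T) * N_grid   # Eq(6): squared weights sum to one over T steps

X = np.full_like(N_grid, 1.0)            # X(t_0) on every branch (illustrative)
for _ in range(T):                       # each branch reuses its own weighted normal at every step
    X = ito_taylor_step(X, N_weighted, mu=0.05, sigma=0.3, beta=1.0, gamma=1.0, dt=dt)

# Eq(7): central-difference change-of-variable derivative dX/dN along the grid
dX_dN = (X[2:] - X[:-2]) / (N_grid[2:] - N_grid[:-2])

# Eq(8): density of X obtained from the standard normal density on the grid
density_X = norm.pdf(N_grid[1:-1]) / dX_dN
```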
 
amike
Posts: 74
Joined: October 21st, 2005, 12:57 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

June 7th, 2018, 12:11 pm

Amin wrote:
...
[$]X(t)=X(t_0) + \mu X(t_0)^{\beta} \int_{t_0}^t ds + \sigma X(t_0)^{\gamma} \int_{t_0}^t dz(s)[$]
[$]+\mu \sigma \beta X(t_0)^{\beta + \gamma -1} \int_{t_0}^t \int_{t_0}^s dz(v) ds [$]
...

Everything else aside, I get that you can introduce standard normal drivers to represent the distribution of the integrals of the Brownian driver, but they are not all perfectly correlated, as (I think) you are assuming.
Take the first two you have:
[$]\int_{t_0}^t dz(s) \sim \sqrt{\Delta t} Z_1[$]
[$]\int_{t_0}^t \int_{t_0}^s dz(v) ds \sim \frac{(\Delta t)^{3/2}}{\sqrt{3}} Z_2[$]
where both [$]Z_1,Z_2[$] are standard normal.  The problem is that they are not perfectly correlated, but have correlation [$]\sqrt{3}/2[$], so you can't really just introduce your single [$]N[$] to represent them both...
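The stated scalings and the [$]\sqrt{3}/2[$] correlation are easy to check numerically; here is a minimal Monte Carlo sketch (the substep and path counts are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, m, n_paths = 1.0, 500, 20_000                 # interval length, substeps, sample paths
dz = rng.standard_normal((n_paths, m)) * np.sqrt(dt / m)

I1 = dz.sum(axis=1)                               # int_{t0}^{t} dz(s)               ~ sqrt(dt) Z_1
I2 = (dz.cumsum(axis=1) * (dt / m)).sum(axis=1)   # int_{t0}^{t} int_{t0}^{s} dz(v) ds ~ dt^1.5/sqrt(3) Z_2

print(I1.std(), np.sqrt(dt))                      # both close to 1.0
print(I2.std(), dt**1.5 / np.sqrt(3))             # both close to 0.577
print(np.corrcoef(I1, I2)[0, 1], np.sqrt(3) / 2)  # both close to 0.866
```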

Or is there something more subtle going on?
 
Amin
Topic Author
Posts: 1948
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

June 7th, 2018, 5:05 pm

Amike, no, you are not familiar with Ito-Taylor expansion theory. We use exactly the same normal random number throughout the expansion of a single SDE; even in Monte Carlo simulations we use Hermite polynomials based on exactly the same random value drawn from the normal density.
Yes, we know that in the case of two-dimensional correlated SDEs we will have to be more careful, and problems of the sort you mention would then have to be properly tackled. And yes, that can also be done.
 
amike
Posts: 74
Joined: October 21st, 2005, 12:57 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

June 7th, 2018, 11:10 pm

I am reasonably familiar with Kloeden+Platen, and thought I was simply repeating Exercise 5.2.7 (first edition) or, since you're going beyond first order, what happens in section 10.4/10.5.
Maybe not.
 
Amin
Topic Author
Posts: 1948
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

June 8th, 2018, 3:13 pm

amike, at the moment I do not have Kloeden and Platen at hand; I will try to look at it. It has several simulation schemes and, I believe, some of them do use the same normal random number draw everywhere in a second-order-accurate simulation.
 
Amin
Topic Author
Posts: 1948
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

June 10th, 2018, 3:57 pm

I will have a formal research paper about my work on SDEs ready in the next two weeks.
I have been writing about my research on Wilmott.com for the past few years. Using the same forum, I would like to tell friends that I would absolutely love to work on a reasonably general analytic method for solving different types of partial differential equations. Though I know I can make decent money from my new research on SDE density evolution algorithms, I would prefer to work on the more stimulating and interesting problems related to the solution of partial differential equations. For that I would need support from a research institution or a commercial organization, and I would absolutely love to seriously work on and focus on this problem. I am writing here in the hope that some organization will be interested in supporting this research.
 
Amin
Topic Author
Posts: 1948
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

June 13th, 2018, 8:30 pm

My research opens up many new possibilities: many things in quantitative finance that were previously considered impossible due to computational complexity can now be modelled and evaluated very easily. In addition to the traditional applications in derivatives pricing and risk management, new possibilities open up in other wide-ranging scientific areas such as stochastic filtering, parameter estimation, and the application of artificial intelligence to stochastic processes, because calculating the densities of general SDE-based stochastic processes, which previously required extreme computing power, has now become almost instantaneous. I hope many new methods and algorithms will be developed in Bayesian statistics, since it becomes so easy to calculate the densities of SDE-based stochastic processes, and researchers will want to take advantage of this development. I also foresee a lot of new work on the marriage between artificial intelligence and Bayesian methods. If you would like me to work with you in these scientific disciplines with new emerging possibilities, you can email me at my private email anan2999ATyahoo
 
Amin
Topic Author
Posts: 1948
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

June 16th, 2018, 9:48 pm

Dear friends, I was able to find very interesting applications of my research to non-linear filtering that would make non-linear filtering computationally very simple and efficient. The new method would work equally well for discrete time state space models. 
I will try to explain the idea with Bayesian filtering applied to a very simple set of univariate discrete-time state space equations, giving the observation process equation and the state process equation.

We can apply the method to general discrete-time models of the form given below. I have tried to be reasonably general in writing the model equations; many other variants of the state model could also be handled by the filter.
[$]X_{t}=\mu_1 {Y_{t}}^{\beta_1} + \sigma_1 {Y_{t}}^{\gamma_1} z_{t}[$]
[$]Y_{t}= \mu_2 [{Y_{(t-1)}}]^{\beta_2} + \sigma_2 [{Y_{(t-1)}}]^{\gamma_2} w_{t}[$]

I will give details of the algorithm tomorrow, but with this new algorithm even non-linear filtering becomes simpler than Kalman filtering.
 
Amin
Topic Author
Posts: 1948
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

June 17th, 2018, 5:05 pm

Before we work on a complete algorithm, there are a few principles that we need for that purpose.

1. Mapping an arbitrary density on a normal (or other) density.
We can map any density onto a normal density. Densities have total probability mass one, and the mapping requires that corresponding subdivisions of the two densities carry the same probability mass. Sometimes we would like to map a density onto a normal density so that the subdivisions on the normal density are equally spaced.
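A minimal sketch of this probability-mass matching, assuming the arbitrary density is tabulated on a grid (the function name, grid sizes and example density are illustrative choices):

```python
import numpy as np
from scipy.stats import norm

def map_to_normal_grid(x, pdf_x, z_grid):
    """Map a density tabulated as (x, pdf_x) onto the points of a normal grid
    by matching probability mass: find x-values with the same CDF as each z."""
    cdf_x = np.cumsum(pdf_x * np.gradient(x))     # numerical CDF of the arbitrary density
    cdf_x /= cdf_x[-1]                            # normalize total mass to one
    target_cdf = norm.cdf(z_grid)                 # CDF at each normal grid point
    return np.interp(target_cdf, cdf_x, x)        # x-values carrying the same mass

# example: map a lognormal-shaped density onto an equally spaced normal grid
x = np.linspace(0.01, 5.0, 500)
pdf_x = np.exp(-0.5 * np.log(x)**2) / (x * np.sqrt(2 * np.pi))
z_grid = np.linspace(-4.0, 4.0, 81)
x_on_normal_grid = map_to_normal_grid(x, pdf_x, z_grid)
```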

2. Non-linear transformation of a density
We can easily take a non-linear transformation of a density. This requires scaling the density variable by the appropriate transformation; the density then changes everywhere, and the new density of the transformed variable can be calculated from the pre-transformation density via the change-of-variable derivative.
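A minimal sketch of this change of variables for a monotonic transformation g, using a central/numerical derivative in the spirit of Eq(7) above (the function name is an illustrative choice):

```python
import numpy as np

def transform_density(x, pdf_x, g):
    """Density of Y = g(X) for a monotonic g, tabulated on the image grid g(x)."""
    y = g(x)
    dy_dx = np.gradient(y, x)            # change-of-variable derivative g'(x)
    return y, pdf_x / np.abs(dy_dx)      # f_Y(g(x)) = f_X(x) / |g'(x)|
```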

3. Adding locally non-linear innovations to a density.
Just as we did for SDEs, we can easily add local, possibly non-linear (in the SDE variable), normal-based innovations to a density and calculate the resulting density. For this we first map the density onto a normal density and then add the innovations along the value of the normal variable associated with each particular subdivision of the density.

4. Calculating Transition probabilities from one density to the other density when both densities have been mapped on a normal density.
This is far simpler than people might think. It only requires a Green's function that takes into account the appropriate variance of the innovations, and most local non-linearities can easily be neglected. For example, in the case of SDEs, we can map the two densities of the SDE onto Brownian motion densities (by equating probability mass); the transition probabilities between any two subdivisions then equal the transition probabilities between the corresponding subdivisions of the two Brownian motion densities, which requires only the appropriate elapsed time [$]\Delta t[$]. Any other non-linearities in the SDE evolution can safely be neglected: they show up in the local scaling of the density and can easily be accounted for by a change of variables for densities. I will be explaining it more clearly with equations tomorrow.
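A sketch of this recipe under simple assumptions (the grids, CDF levels and Brownian variances below are illustrative; the per-row normalization is my choice of convention):

```python
import numpy as np
from scipy.stats import norm

def transition_matrix(z1, z2, t1, t2):
    """Transition probabilities between CDF-matched subdivisions of the generating
    Brownian motion at times t1 < t2; z1, z2 are the Brownian points (variances t1, t2)
    carrying the same probability mass as the corresponding SDE subdivisions."""
    dt = t2 - t1
    # Gaussian kernel of the Brownian increment over dt, one row per starting point
    p = norm.pdf(z2[None, :], loc=z1[:, None], scale=np.sqrt(dt))
    return p / p.sum(axis=1, keepdims=True)   # normalize each row to a probability vector

# example: equal probability-mass subdivisions at two times
q = np.linspace(0.005, 0.995, 81)             # common CDF levels
t1, t2 = 0.5, 0.6
z1 = norm.ppf(q, scale=np.sqrt(t1))           # Brownian points carrying mass q at t1
z2 = norm.ppf(q, scale=np.sqrt(t2))           # ... and at t2
P = transition_matrix(z1, z2, t1, t2)
```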

5. Calculation of Filtered density once an Observation has been made.
Suppose we have an updated state density and we possibly take a transformation of it (as dictated by the observation/measurement equation) and then add observation noise (or observation innovation) to it to calculate what I call the observation/measurement density. Calculating the filtered density from this observation/measurement density simply requires calculating all the transition probabilities from the updated state density that would result in the observed point estimate (on the measurement/observation density). In other words, the filtered density is the transition density from all the points on the updated state density to the observed point on the observation/measurement density. This is not, in general, a normalized density; we have to normalize it, and this normalized density is the filtered density.
We then update this filtered density according to the state update equation to calculate the updated state density. This may require a transformation of the filtered density (2), mapping the transformed density onto a normal density (1), and then adding the local update innovations (3) required to calculate the state update density.
We then again map the state update density onto a normal density (1), add observation noise/innovations (3) to calculate the measurement density, and again find the filtered density (5).
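A sketch of one observation cycle of step 5 for the state-space model posted on June 16th, under my reading that the filtered density is the prior mass weighted by the probability of producing the observed point and then normalized (all parameter values, grids and function names are illustrative):

```python
import numpy as np
from scipy.stats import norm

# observation:  X_t = mu1 * Y_t**beta1 + sigma1 * Y_t**gamma1 * z_t
# state update: Y_t = mu2 * Y_{t-1}**beta2 + sigma2 * Y_{t-1}**gamma2 * w_t
mu1, beta1, sigma1, gamma1 = 1.0, 1.0, 0.2, 1.0

def filter_step(y_grid, prior_mass, x_obs):
    """Weight each candidate state value by the probability that it produced
    the observed x_obs, then normalize to obtain the filtered density (as masses)."""
    obs_mean = mu1 * y_grid**beta1
    obs_sd = sigma1 * np.abs(y_grid)**gamma1
    likelihood = norm.pdf(x_obs, loc=obs_mean, scale=obs_sd)
    posterior = prior_mass * likelihood
    return posterior / posterior.sum()

# illustrative grid of candidate state values with a flat prior
y_grid = np.linspace(0.5, 1.5, 81)
prior = np.full(81, 1.0 / 81)
filtered = filter_step(y_grid, prior, x_obs=1.05)
```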

I will explain everything in more detail with equations tomorrow.
 
Amin
Topic Author
Posts: 1948
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

June 18th, 2018, 4:09 pm

Sorry, when I wrote about mapping the density of an SDE onto a normal density, I forgot to mention another important relevant fact: when the density of an SDE is generated with the Ito-Taylor algorithm, which gives the SDE variable X as a function of a standard normal Z, we have
[$]F(X(Z))=F(Z)[$], where [$]F[$] denotes the CDF of the respective density. So when the SDE variable X is found as a function of the standard normal variable Z, the CDF of X is exactly the same as the CDF of the corresponding Z, and X is a local scaling of the standard normal variable.
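For a monotonically increasing map [$]X(Z)[$], this follows directly from the invariance of probability mass under a monotone transformation:
[$]F_X\big(X(z)\big) = P\big(X(Z)\le X(z)\big) = P(Z\le z) = F_Z(z)[$]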

As I mentioned yesterday, we can calculate the transition probabilities of SDEs by mapping the SDE densities onto the relevant generating Brownian motion densities; the transition probability between any two subdivisions of the SDE densities is then exactly the same as the transition probability between the corresponding subdivisions of the two Brownian motion densities. Here is the link to the paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3119980
But I believe the normal density and the densities of SDEs share a far stronger result: when we divide the evolving normal density or SDE density along CDF subdivisions (probability mass subdivisions), the cumulative transition probabilities between the different CDF subdivisions of the two normal densities at different times remain the same. We are used to looking at transition probabilities in terms of absolute grids. Greater variance simply expands the CDF subdivisions, so on a fixed grid the density appears to be expanding, which is of course true with respect to fixed grids. But when considered in terms of CDF subdivisions (fixed probability mass subdivisions, i.e. subdivisions that expand so as to keep their probability mass constant), the local subdivisions also expand in a synchronized fashion, so that the transition probabilities between the expanding subdivisions really remain the same. On a fixed grid one only notices that the normal/SDE density is expanding, while in terms of locally expanding, fixed-probability-mass subdivisions the density remains constant and the transition probability dynamics remain the same.
Last edited by Amin on June 18th, 2018, 5:04 pm
 
MaxwellSheffield
Posts: 62
Joined: December 17th, 2013, 11:08 pm

Re: Breakthrough in the theory of stochastic differential equations and their simulation

June 18th, 2018, 4:32 pm

Too many posts; it is hard to keep track of your work.
 
Amin
Topic Author
Posts: 1948
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

June 18th, 2018, 5:06 pm

Amin: Sorry, when I wrote about mapping the density of an SDE onto a normal density, I forgot to mention another important relevant fact: when the density of an SDE is generated with the Ito-Taylor algorithm, which gives the SDE variable X as a function of a standard normal Z, we have
[$]F(X(Z))=F(Z)[$], where [$]F[$] denotes the CDF of the respective density. So when the SDE variable X is found as a function of the standard normal variable Z, the CDF of X is exactly the same as the CDF of the corresponding Z, and X is a local scaling of the standard normal variable.

As I mentioned yesterday, we can calculate the transition probabilities of SDEs by mapping the SDE densities onto the relevant generating Brownian motion densities; the transition probability between any two subdivisions of the SDE densities is then exactly the same as the transition probability between the corresponding subdivisions of the two Brownian motion densities. Here is the link to the paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3119980
But I believe the normal density and the densities of SDEs share a far stronger result: when we divide the evolving normal density or SDE density along CDF subdivisions (probability mass subdivisions), the cumulative transition probabilities between the different CDF subdivisions of the two normal densities at different times remain the same. We are used to looking at transition probabilities in terms of absolute grids. Greater variance simply expands the CDF subdivisions, so on a fixed grid the density appears to be expanding, which is of course true with respect to fixed grids. But when considered in terms of CDF subdivisions (fixed probability mass subdivisions, i.e. subdivisions that expand so as to keep their probability mass constant), the local subdivisions also expand in a synchronized fashion, so that the transition probabilities between the expanding subdivisions really remain the same. On a fixed grid one only notices that the normal/SDE density is expanding, while in terms of locally expanding, fixed-probability-mass subdivisions the density remains constant and the transition probability dynamics remain the same.


I will be explaining how to calculate the transition probability using this observation with detailed equations in a few hours.  
 
Amin
Topic Author
Posts: 1948
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

June 18th, 2018, 7:49 pm

Post in progress:
Here I made a post on LinkedIn for friends: https://www.linkedin.com/pulse/new-algorithms-non-linear-filtering-applications-my-research-amin/
Here I copy the LinkedIn post.

New Algorithms For Non-linear Filtering: Applications of my Research On SDEs (Part I)

I was able to find very interesting applications of my research to non-linear filtering that would make non-linear filtering computationally very simple and efficient. The new method would work equally well for discrete time state space models. Before we work on a complete algorithm, there are a few principles that we need for that purpose.

1. Mapping an arbitrary density on a normal (or other) density.
We can map any density onto a normal density. Densities have total probability mass one, and the mapping requires that corresponding subdivisions of the two densities carry the same probability mass. Sometimes we would like to map a density onto a normal density so that the subdivisions on the normal density are equally spaced. When the density of an SDE is generated with the Ito-Taylor algorithm, which gives the SDE variable X as a function of a standard normal Z, we have F(X(Z))=F(Z), where F denotes the CDF of the respective density. So when the SDE variable X is found as a function of the standard normal variable Z, the CDF of X is exactly the same as the CDF of the corresponding Z, and X is a local scaling of the standard normal variable.

2. Non-linear transformation of a density
We can easily take a non-linear transformation of a density. This requires scaling the density variable by the appropriate transformation; the density then changes everywhere, and the new density of the transformed variable can be calculated from the pre-transformation density via the change-of-variable derivative.

3. Adding locally non-linear innovations to a density.
Just as we did for SDEs, we can easily add local, possibly non-linear (in the SDE variable), normal-based innovations to a density and calculate the resulting density. For this we first map the density onto a normal density and then add the innovations along the value of the normal variable associated with each particular subdivision of the density.

4. Calculating Transition probabilities from one density to the other density when both densities have been mapped on a normal density.
This is far simpler than people might think. It only requires a Green's function that takes into account the appropriate variance of the innovations, and most local non-linearities can easily be neglected. For example, in the case of SDEs, we can map the two densities of the SDE onto Brownian motion densities (by equating probability mass); the transition probabilities between any two subdivisions then equal the transition probabilities between the corresponding subdivisions of the two Brownian motion densities, which requires only the appropriate elapsed time Δt. Any other non-linearities in the SDE evolution can safely be neglected: they show up in the local scaling of the density and can easily be accounted for by a change of variables for densities. As I mentioned earlier, we can calculate the transition probabilities of SDEs by mapping the SDE densities onto the relevant generating Brownian motion densities; the transition probability between any two subdivisions of the SDE densities is then exactly the same as the transition probability between the corresponding subdivisions of the two Brownian motion densities. Here is the link to the paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3119980
But I believe the normal density and the densities of SDEs based on locally non-linear scaling of normal noise share a far stronger result: when we divide the evolving normal density or SDE density along CDF subdivisions (probability mass subdivisions), the cumulative transition probabilities between the different CDF subdivisions of the two normal densities at different times remain the same. We are used to looking at transition probabilities in terms of absolute grids. Greater variance simply expands the CDF subdivisions, so on a fixed grid the density appears to be expanding, which is of course true with respect to fixed grids. But when considered in terms of CDF subdivisions (fixed probability mass subdivisions, i.e. subdivisions that expand so as to keep their probability mass constant), the local subdivisions also expand in a synchronized fashion, so that the transition probabilities between the expanding subdivisions really remain the same. On a fixed grid one only notices that the normal/SDE density is expanding, while in terms of locally expanding, fixed-probability-mass subdivisions the density remains constant and the transition probability dynamics remain the same.

5. Calculation of Filtered density once an Observation has been made.
Suppose we have an updated state density and we possibly take a transformation of it (as dictated by the observation/measurement equation) and then add observation noise (or observation innovation) to it to calculate what I call the observation/measurement density. Calculating the filtered density from this observation/measurement density simply requires calculating all the transition probabilities from the updated state density that would result in the observed point estimate (on the measurement/observation density). In other words, the filtered density is the transition density from all the points on the updated state density to the observed point on the observation/measurement density. This is not, in general, a normalized density; we have to normalize it, and this normalized density is the filtered density.
We then update this filtered density according to the state update equation to calculate the updated state density. This may require a transformation of the filtered density (2), mapping the transformed density onto a normal density (1), and then adding the local update innovations (3) required to calculate the state update density.
We then again map the state update density onto a normal density (1), add observation noise/innovations (3) to calculate the measurement density, and again find the filtered density (5).

I have written about this in equal detail at post # and post of the Wilmott forum here: viewtopic.php?f=4&t=99702&start=720 and at the NuclearPhynance forum here: http://nuclearphynance.com/Show%20Post.aspx?PostIDKey=180518
 
Amin
Topic Author
Posts: 1948
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

Yesterday, 8:15 am

I am looking for PhD positions in mainland Europe. I want to do research in the area of artificial intelligence with dynamic Bayesian models. Here is my resume:

Ahsan Amin
Email: anan2999@yahoo.com
Phone: +92-336-2602125
Website: https://ahsanamin2999.wordpress.com/
Research Website:https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=435366
Education
New York University, New York, NY, USA.
Master of Arts in Economics with a concentration in Mathematical Finance, September 2002.
Wrote my thesis on the pricing of Bermudan swaptions and other Bermudan structured derivatives within the framework of the Libor Market Model, which was the newly introduced and most modern interest rate model at that time. My thesis was titled "Pricing Bermudan fixed income derivatives in multi-factor extended LIBOR market model" and was cited in several other research papers/theses.

Northwestern University, Evanston, IL, USA.

Bachelor of Science in Electrical Engineering, March 97


Work
CEO, Infiniti Derivatives Technologies, Lahore, Pakistan
Feb 2012 - Present
1. My company website is: http://www.infinitiderivatives.com
 
2. Worked on the analytic solution of ordinary differential equations. I presented methods for the analytic solution of general nth-order ordinary differential equations and of systems of nth-order ordinary differential equations. My research is discussed in the paper "On the General Solution of Initial Value Problems of Ordinary Differential Equations Using the Method of Iterated Integrals", downloadable at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2872598
 
3. I have worked on a new Ito-Taylor based method for the calculation of densities of SDEs and their path integrals. In this method, the SDE or its functionals evolve like a set of autonomous ODEs, which results in an extremely fast algorithm that is several orders of magnitude faster than previous methods. I have not completed the formal research paper yet, but I worked on this method over several years and continued to post major advances in my understanding on the wilmott.com mathematical finance forum as they occurred. Here is the chronicle of my posts about this research over the years: https://forum.wilmott.com/viewtopic.php?f=4&t=99702&start=690
My formal research paper will be ready in about two weeks.
 
4.       My research paper titled, “Calibration, Simulation and Hedging in a Heston Libor Market Model with Stochastic Basis” was published in a Risk book “Interest Rate Modelling after the Financial Crisis.” Here is the link to Risk book web site: http://riskbooks.com/interest-rate-mode ... ial-crisis  
 
 
5.       I also successfully completed another research project which resulted in the research paper titled “Solution of Stochastic Volatility Models Using Variance Transition Probabilities and Path Integrals.” This paper can be downloaded at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2149231
 
 
Quantitative Research Analyst, UP-FRONT Inc., Tokyo, Japan
Feb 2003 – Oct 2010

It was a research and development role and I was based in Lahore, Pakistan. These were the major research projects I completed with Upfront.
1. I developed a stochastic volatility displaced diffusion LIBOR Market Model. Worked on global calibration of the model to more than 2000 swaptions with the forward curve out to sixty years. Priced callable structured derivatives and more exotic deals, as well as their deltas and vegas, in this stochastic volatility model.
 
2. Worked on advanced multi-factor skew extended calibration of the LIBOR Market Model. Very robust smoothing constraints were implemented, which were required for stability of hedges. Worked with calibration of four different versions of skew extended LIBOR Market Models. Successfully dealt with the problem of simultaneous calibration of the whole swaptions matrix.


3. Worked extensively on the pricing of the most complex callable LIBOR exotics. The instruments include Bermudan swaptions, Bermudan callable reverse floaters, Bermudan captions, Bermudan CMS floaters, Bermudan CMS reverse floaters, Bermudan CMS spread options, CMS TARN structures, Bermudan snowballs, snowblades, and Bermudan callable PRDCs.
 
4.       Extended Longstaff and Schwartz method to deal with complex callable fixed income structured derivatives in my research paper, “Multi-Factor Cross Currency Libor Market Models: Implementation, Calibration and Examples.” The paper can be downloaded here: https://papers.ssrn.com/sol3/papers.cfm ... id=1214042  
 
5.       Worked on the pricing of Bermudan Callable and Trigger Power Reverse Dual Currency Note and other FX/IR hybrid derivatives. Developed a multi-factor cross-currency LIBOR Market Model for the pricing of these instruments. Implemented the advanced analytics of the model on Excel using VBA and C++.
 
6. Worked on a three-factor Hull-White cross-currency model in the Cheyette framework for fast pricing of Bermudan trigger PRDCs. This involved analytic calibration of the model to FX volatilities and to swaption volatilities of the local and foreign economies.