Serving the Quantitative Finance Community

 
User avatar
Amin
Topic Author
Posts: 2586
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

February 23rd, 2024, 5:39 pm

Friends, I have come up with a postulated theory (which may be wrong; it is not tested yet) of Hermite polynomials in relation to the normal density, general Hermite-series densities, and diffusions.

We know that a standard normal density can be expanded in a Hermite series, since derivatives of the normal density can be written as Hermite polynomials multiplied by the density itself. 
A random process is basically a random particle moving to and fro along the standard normal axis. This random particle can be divided into several states, and its properties have to be described as a sum/composition of the properties of these different states. These states of the random particle are called Hermite polynomial states, and they vary in relative magnitude as the particle moves to and fro on the standard normal axis. Towards the middle of the density the smaller Hermite polynomial states are dominant, and towards the extremes of the density the larger Hermite polynomial states are dominant. I suppose that at each point on the standard normal axis, the relative magnitude of the Hermite states can be determined by the relative contribution of the different Hermite polynomials to the standardized density at that point.
The concept of Hermite states is important since we can determine their relative magnitude in the density at any point along the standard normal axis as the particle moves to and fro on it. We have not done this with Hermite polynomials earlier, and this is where the idea goes beyond simple Hermite polynomial analytics.
Each Hermite state has its own variance, and the variance of the particle at a certain point on the standard normal axis is the combined weighted sum of the variances of the individual Hermite states.
We can define other processes from the standard normal density simply by assigning different coefficients to the Hermite states, which magnify or diminish the variance of each Hermite state.
Correlations between two stochastic processes act between Hermite states of the same order.
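For concreteness, here is a minimal MATLAB sketch of the orthogonality facts behind this (my own illustration with made-up coefficients, not part of the theory above): for a probabilists' Hermite series X = c0 + sum_k ck*H_k(Z), each Hermite state contributes its own variance ck^2*k!, the total variance is their sum, and two series driven by the same Z only covary through same-order states.

% Minimal sketch (illustrative only; coefficients c and b are hypothetical)
c = [0.0, 0.9, 0.15, 0.05, 0.01, 0.002];       % coefficients c0..c5 of X
b = [0.0, 0.7, 0.25, 0.03, 0.00, 0.000];       % coefficients b0..b5 of a second series
k = 1:(length(c)-1);                           % Hermite orders 1..5
VarX  = sum(c(2:end).^2 .* factorial(k));      % Var(X) = sum_k ck^2 * k!
VarY  = sum(b(2:end).^2 .* factorial(k));      % Var of the second series
CovXY = sum(c(2:end).*b(2:end).*factorial(k)); % only same-order Hermite states covary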

When we develop the theory of transition probabilities of a stochastic process across time, these Hermite states are very helpful. Our marginal density at the second time in the future has coefficients that are expected values. However, the transition density has higher variance than the marginal density, since there are many transitions that have zero expected value but significant variances. 
On top of the expected values of the coefficients of the Hermite series of the marginal density in the future (these expected values are associated with the zeroth Hermite transition; they are non-zero expectations of coefficients), there are transitions from the 1st Hermite state at time one to all of the 1st, 2nd, 3rd, up to 5th Hermite states (in our set-up where the fifth Hermite is the largest) at the second time in the future. Similarly, there are transitions from the 2nd Hermite state at time one to all of the 1st to 5th Hermite states at time two, and so on from the 3rd, 4th and 5th Hermite states at time one to the 1st to 5th Hermite states at time two. All these Hermite-state (at time one) to Hermite-state (at time two) transitions are pure variances and have zero expected values. These pure variances do not contribute to the marginal density at time two, which comes solely from the zeroth-expectation Hermite.
In our set-up, all terms with single Hermite polynomials contribute to the marginal density coefficients. All terms with products of Hermite polynomials are pure variances with zero expected values and do not contribute to the marginal density; they are purely related to the transition portion of the transition densities.
However, I believe that we can actually take all the realizations of the stochastic process at time one and divide each realization into Hermite states from knowledge of the associated value of the standard normal (this is different from only knowing the polynomial value there, and requires knowledge of the relative magnitude of each Hermite state in the density at that point). Similarly, we can take the associated realization of the stochastic process at time two and divide it into Hermite states, again from knowledge of its associated standard normal. We can then study the relative transitions from the various Hermite states at time one to the various Hermite states at time two and calculate their transition coefficients numerically from pairs of historical data.
I am trying to brainstorm and think of other ideas, and I hope to share them with friends as I learn more.
Please note that I have not tested anything in the above theory, so I may be totally wrong.
You think life is a secret, Life is only love of flying, It has seen many ups and downs, But it likes travel more than the destination. Allama Iqbal
 
User avatar
Amin
Topic Author
Posts: 2586
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

February 23rd, 2024, 6:34 pm


When we develop the theory of transition probabilities of a stochastic process across time, these Hermite states are very helpful. Our marginal density at the second time in the future has coefficients that are expected values. However, the transition density has higher variance than the marginal density, since there are many transitions that have zero expected value but significant variances. 
On top of the expected values of the coefficients of the Hermite series of the marginal density in the future (these expected values are associated with the zeroth Hermite transition; they are non-zero expectations of coefficients), there are transitions from the 1st Hermite state at time one to all of the 1st, 2nd, 3rd, up to 5th Hermite states (in our set-up where the fifth Hermite is the largest) at the second time in the future. Similarly, there are transitions from the 2nd Hermite state at time one to all of the 1st to 5th Hermite states at time two, and so on from the 3rd, 4th and 5th Hermite states at time one to the 1st to 5th Hermite states at time two. All these Hermite-state (at time one) to Hermite-state (at time two) transitions are pure variances and have zero expected values. These pure variances do not contribute to the marginal density at time two, which comes solely from the zeroth-expectation Hermite.
 Our marginal density at the second time in the future has coefficients that are expected values.
What I mean is that these are expected values of the coefficients associated with the marginal density; they correspond to expected values of the variances of the Hermites related to the marginal density. This is level one.
These coefficients are themselves systematically random and have their own variances at level two, which are also related to transitions across Hermites.
You think life is a secret, Life is only love of flying, It has seen many ups and downs, But it likes travel more than the destination. Allama Iqbal
 
User avatar
Amin
Topic Author
Posts: 2586
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

February 24th, 2024, 6:59 am

Here are some more thoughts about yesterday's ideas.
1. We know that Hermite polynomials are associated with variances. 
2. When we talk about transitions from Hermites of the first random variable to Hermites of the second random variable, obviously something has to be conserved.

Here we come to our standardized-densities framework, where the variance of every density is conserved at unity, and the relative weights on the Hermites, whose squares (after accounting for the Hermite weights/variances) sum to one, define the properties of the density, its moments, its shape, etc.

Transitions from the Hermites of one density into the Hermites of the second density are such that the total standardized variance is conserved and sums to unity.

I am sure we can easily find the transitions between Hermites by pairing the data and analyzing the flow of standardized variances associated with each combination of the first density's Hermites and the second density's Hermites, since the coefficients of the Hermites are known from our marginal density analysis. We can easily have a regression algorithm that solves for this.
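As a rough sketch of what such a regression could look like (my own illustrative code, not the final algorithm; it assumes we already have standardized paired samples X2 and the associated standard normals Z1 and Z2 of the two densities, all hypothetical names, and uses probabilists' Hermite polynomials only up to order three for brevity):

% Illustrative least-squares estimate of Hermite-to-Hermite transition
% coefficients from paired data (Z1, Z2, X2 assumed given and standardized).
H  = @(Z) [ones(numel(Z),1), Z(:), Z(:).^2-1, Z(:).^3-3*Z(:)];  % H_0..H_3 as columns
H1 = H(Z1);  H2 = H(Z2);                  % Ndata x 4 design pieces
W  = zeros(numel(Z1), 16);
col = 0;
for j = 1:4                               % Hermite order (j-1) of the first density
    for k = 1:4                           % Hermite order (k-1) of the second density
        col = col + 1;
        W(:,col) = H1(:,j).*H2(:,k);      % product regressor H_{j-1}(Z1)*H_{k-1}(Z2)
    end
end
beta    = (W'*W)\(W'*X2(:));              % least-squares transition coefficients
betaMat = reshape(beta, 4, 4)';           % rows: first-density order, cols: second-density order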

When we did the Hermite regression, we had the idea of correlations between same-order Hermites across two densities. But when we work with transition densities, we will have to develop the idea of second-order correlations of transition densities between the Hermites of the first set of densities and the Hermites of the second set of densities (these correlations depend on products of the appropriate Hermites across the first and second sets of densities). So we will have second-order transition correlations corresponding to the same sets of products of two Hermites across the various sets of densities. 
You think life is a secret, Life is only love of flying, It has seen many ups and downs, But it likes travel more than the destination. Allama Iqbal
 
User avatar
Amin
Topic Author
Posts: 2586
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

February 24th, 2024, 7:49 am

Again, in relation to the previous post: once we have calculated transition probabilities across standardized densities, we can easily convert to true transition probabilities by multiplying all columns by the square root of the variance of the first density and all rows by the square root of the variance of the second density. Since we will likely multiply the transition probability matrix (a two-dimensional Hermite series) with the true probabilities (a 1D Hermite series) of the first density (as opposed to the standardized version), we may not need this first de-standardization multiplication of the columns by the square root of the variance of the first density. 
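A tiny sketch of that de-standardization (my own illustration; betaStd is assumed to be the standardized transition coefficient matrix, and VarY, VarX the marginal variances of the first and second density):

% De-standardize: columns pick up the square root of the first density's
% variance, rows the square root of the second density's variance. With
% scalar variances this reduces to an overall scaling of the matrix.
betaTrue = sqrt(VarY) * betaStd * sqrt(VarX);
% If the matrix will be applied directly to the true (non-standardized)
% 1D series of the first density, the sqrt(VarY) factor can be dropped:
betaApplied = betaStd * sqrt(VarX);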
You think life is a secret, Life is only love of flying, It has seen many ups and downs, But it likes travel more than the destination. Allama Iqbal
 
User avatar
Amin
Topic Author
Posts: 2586
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

February 24th, 2024, 6:48 pm

Friends, I will continue to brainstorm for another day and then move on to numerics. I will continue to share my thoughts with friends.
However, I am writing this post to clear up the confusion related to standardization of the density. I want to believe that our 2D Hermite density of the dependent variable X should always have unit variance, even though this does not apparently seem true. 

For clearer exposition, we first consider that we are in a non-correlated framework where correlation between X and Y is zero.

1D hermite series of independent variable Y is given as

[$]Y(Z_y)\,=\, ah_{0} \, + \, ah_{1} \,H_1(Z_y) \,+\, ah_{2} \, H_2(Z_y) \, + \, ah_{3} \,H_3(Z_y)[$]    Eq(1)

while the dependent variable is given as

[$]X(Z_y \, , \, Z_x)\,=\, bh_{0,0} \, + \, bh_{1,0} \,H_1(Z_y) \,+\, bh_{2,0} \, H_2(Z_y) \,\, + \, bh_{3,0} \,H_3(Z_y)\, \\
+\, \Big[ \, bh_{0,1} \, + \, bh_{1,1} \,H_1(Z_y) \,+\, bh_{2,1} \, H_2(Z_y) \, + \, bh_{3,1} \,H_3(Z_y)\, \Big] \, H_1(Z_x) \, \\
+\, \Big[ \, bh_{0,2} \, + \, bh_{1,2} \,H_1(Z_y) \,+\, bh_{2,2} \, H_2(Z_y) \, \, + \, bh_{3,2} \,H_3(Z_y)\, \Big] \, H_2(Z_x) \, \\
+\, \Big[ \, bh_{0,3} \, + \, bh_{1,3} \,H_1(Z_y) \,+\, bh_{2,3} \, H_2(Z_y) \, + \, bh_{3,3} \,H_3(Z_y)\, \Big] \, H_3(Z_x) \, [$]  Eq(2)

For brevity in analysis, we have further used the notation

[$]Bh_0(Z_y)\,=\, bh_{0,0} \, + \, bh_{1,0} \,H_1(Z_y) \,+\, bh_{2,0} \, H_2(Z_y) \,\, + \, bh_{3,0} \,H_3(Z_y)\,[$]
[$]Bh_1(Z_y)\,=\, bh_{0,1} \, + \, bh_{1,1} \,H_1(Z_y) \,+\, bh_{2,1} \, H_2(Z_y) \, + \, bh_{3,1} \,H_3(Z_y)\,[$] 
[$]Bh_2(Z_y)\,=\, bh_{0,2} \, + \, bh_{1,2} \,H_1(Z_y) \,+\, bh_{2,2} \, H_2(Z_y) \, \, + \, bh_{3,2} \,H_3(Z_y)\,[$]
[$]Bh_3(Z_y)\,=\, bh_{0,3} \, + \, bh_{1,3} \,H_1(Z_y) \,+\, bh_{2,3} \, H_2(Z_y) \, + \, bh_{3,3} \,H_3(Z_y)\,[$]   Eqs(3)

and we write X in a brief form as

[$]X(Z_y \, , \, Z_x)\,=\, Bh_0(Z_y)\, +Bh_1(Z_y)\, \,H_1(Z_x) \,+\,Bh_2(Z_y)\,\, H_2(Z_x) \,\, + \, Bh_3(Z_y)\,\,H_3(Z_x)\,[$]  Eq(4)

We want to emphasize that Eq(2) and Eq(4) are the same equation. We see that the coefficients of the Zx Hermite polynomials in the above equation depend on Zy, which means that the equation describes the conditional/transition density of X given any value that our independent variable Y(Zy) takes. This is why we call it a transition density: it completely describes how the density of X changes as Y moves along its domain. 
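As a small illustration of how Eq(4) acts as a conditional/transition series (my own sketch with made-up coefficients bh, not taken from any fitted example), fixing Z_y and sweeping Z_x traces out the conditional values of X, from which the conditional density follows by the usual change of variable against the standard normal density:

% bh(j,k) multiplies H_{j-1}(Zy)*H_{k-1}(Zx), mirroring Eq(2)/Eq(4) up to order 3.
bh = [0.00 0.80 0.10 0.02;    % hypothetical coefficients
      0.30 0.05 0.00 0.00;
      0.05 0.00 0.00 0.00;
      0.01 0.00 0.00 0.00];
H  = @(Z) [ones(size(Z)); Z; Z.^2-1; Z.^3-3*Z];   % H_0..H_3 stacked as rows
Zy = 1.0;                       % condition on one value of the independent variable
Zx = -3:0.05:3;                 % sweep the dependent variable's standard normal
Xcond = (H(Zy)' * bh) * H(Zx);  % X(Zy, Zx) along this conditional slice
% The conditional density of X given this Zy follows by the change of variable:
% (standard normal pdf at Zx) divided by dX/dZx along this curve.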
Up till now this is simply a rehash of things we all know.
We want to explore hermite series in Eq(4) in greater detail now.
In order to obtain the marginal density, which integrates over all transition probabilities coming from all values of Zy, we know that  

[$]X(Z_x)\,=\,  bh_{0,0}\, + \, bh_{0,1}\, \,H_1(Z_x) \,+\, bh_{0,2}\, \, H_2(Z_x) \, + \,  bh_{0,3}\,\,H_3(Z_x)\,[$]  Eq(5)

where [$]bh_{0,0}[$] is the zeroth Hermite coefficient in the Hermite series of [$]Bh_0(Z_y)[$], and similarly for [$]bh_{0,1}[$], [$]bh_{0,2}[$], and [$]bh_{0,3}[$]. Since we are using the zeroth Hermites from the Hermite series of the coefficients, this is the expected value of the density, as the expectation over all other Hermites is zero.
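In the coefficient-matrix notation bnmH used in the programs later in this thread (first index for the Z_y Hermite order, second for the Z_x Hermite order), this marginal corresponds simply to the first row; a one-line sketch:

% Marginal Hermite coefficients of X: only the terms with the zeroth Hermite
% in Z_y survive the integration over Z_y.
margH = bnmH(1, :);   % bh_{0,0}, bh_{0,1}, ..., i.e. the coefficients of Eq(5)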

Now we come to the major question: if the above marginal density is standardized to unit variance, then it seems the variance must be larger than unity once we include all the remaining Hermites in the expression for the Zy-dependent coefficients. 

We are prone to thinking that the variance of the equation including the proper Zy-dependent transition terms (Eq(4), repeated below)

[$]X(Z_y \, , \, Z_x)\,=\, Bh_0(Z_y)\, +Bh_1(Z_y)\, \,H_1(Z_x) \,+\,Bh_2(Z_y)\,\, H_2(Z_x) \,\, + \, Bh_3(Z_y)\,\,H_3(Z_x)\,[$]  Eq(4)

must be larger than the variance of the equation including only the zeroth-Hermite averages (Eq(5), repeated below)

[$]X(Z_x)\,=\,  bh_{0,0}\, + \, bh_{0,1}\, \,H_1(Z_x) \,+\, bh_{0,2}\, \, H_2(Z_x) \, + \,  bh_{0,3}\,\,H_3(Z_x)\,[$]  Eq(5)

This however may not be true.

This is because the coefficients [$]Bh_0(Z_y), Bh_1(Z_y), Bh_2(Z_y), Bh_3(Z_y)[$] are not independent of each other; we cannot assume independence when calculating their variance. These coefficients are constrained since our density is standardized. In some sense, as we move over the density in Zy, the variance is symmetric (in the sense of cancelling) around the average value. 
I will try to come up with a new post tomorrow hoping I will have more clarity.
You think life is a secret, Life is only love of flying, It has seen many ups and downs, But it likes travel more than the destination. Allama Iqbal
 
User avatar
Amin
Topic Author
Posts: 2586
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

February 25th, 2024, 7:04 am

Friends, OK I thought more last night and I have more clarity now about the problem.
First of all, I want to correct my earlier claim that the 2D Hermite series density of X(Zx,Zy) should have unit variance. Obviously the two-dimensional density has variance larger than one. 
I thought of the concept of one-dimensional variance versus two-dimensional variance. The one-dimensional marginal density should indeed have unit variance after standardization. We standardized only in the one dimension of the marginal density, and this is the only dimension where we can keep the variance at unity; we cannot expect unit variance in two dimensions after standardizing in only one dimension.
This led me to think about the two dimensional conditional standardization of the density. 
We can first standardize the density by variance of marginal density (after subtracting the mean). This is what we had done before.
Then we conditionally standardize in the Zy direction. We first calculate the conditional values E(X|Zy) (we did this with Hermite regression; this was the correlated part that was subtracted from the density to get Xdecorr in the 2D regression program) and the conditional variance E[(X-E(X|Zy))^2 | Zy]. Both of these quantities can be calculated from Hermite polynomial regression. 
Then we take the data (which had already been standardized earlier by the variance of the marginal density) and subtract the expression for E(X|Zy) to do the conditional de-meaning part of the standardization. (This part led to the calculation of Xdecorr in the 2D regression program.)
(Thinking of this in the context of our 2D regression on Zx and Zy, I believe the 2D regression should have been done on the decorrelated part Xdecorr only, and we simply needed to add the correlated part, Xcorr(Zy), which earlier came from the Hermite regression, after doing the regression on Xdecorr alone. This will make our 2D regression far more stable and robust, especially for high correlation values. Xcorr has no dependence on Zx, and its dependence on Zy has already been captured by the first Hermite regression used to calculate it, so it simply needs to be added back after the 2D regression on Xdecorr.) 
Coming back to the 2D standardization, we would then divide the de-meaned data (after subtracting E(X|Zy)) by sqrt(E[(X-E(X|Zy))^2 | Zy]). This gives us a two-dimensional standardization that actually remains standardized in two dimensions. 
Again, this is standardization in two dimensions only; the data will not be simultaneously standardized in the one marginal dimension and in two dimensions. In order to recover the marginally standardized data, we will have to multiply by sqrt(E[(X-E(X|Zy))^2 | Zy]) and then add back the expression for E(X|Zy).  
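A minimal sketch of this conditional standardization and its inversion (my own illustration; ExGivenZy and Ex2GivenZy are hypothetical arrays holding E(X|Zy) and E[(X-E(X|Zy))^2|Zy] evaluated at each data point, which in the programs below would come from the Hermite regression):

% Conditional (2D) standardization of data that has already been marginally
% standardized: subtract the conditional mean, divide by the conditional std.
condStd = sqrt(Ex2GivenZy);
X2Dstd  = (X - ExGivenZy) ./ condStd;

% To recover the marginally standardized data: multiply by the conditional
% standard deviation, then add the conditional mean back.
Xback = X2Dstd .* condStd + ExGivenZy;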
You think life is a secret, Life is only love of flying, It has seen many ups and downs, But it likes travel more than the destination. Allama Iqbal
 
User avatar
Amin
Topic Author
Posts: 2586
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

February 25th, 2024, 7:52 am

Friends, I am quickly posting a changed, better version of the 2D regression algorithm in which the 2D regression of X on Y is done only on the decorrelated part of X; the correlated part of X is subtracted before the regression and simply added back to the regression result of the decorrelated part after the regression has been done. Not only should this be more robust, it should also be easier to generalize to more dimensions.

Here is the new function.
.
.
.
function [bnm,aa] = CalculateConditionalProbabilitiesYtoXLinearRegression04IterC02(Yin,Xin)

%Yin argument translates to independent random variable Y
%Xin argument translates to dependent variable X.
%This function takes observations of an independent variable Y and
%dependent variable X. These have to be joint observations in the sense
%that the nth observation of Y, given by Y(n), is taken jointly with the nth
%observation of X, given by X(n). The two arrays of observations have the same
%length and the nth member of the independent variable Y is jointly observed
%with the nth member of the dependent variable X.
%We want to find the detailed relationship between Y and X so we can get
%the conditional density of dependent variable X given a particular value
%of independent variable Y. This will be done by calibrating coefficients
%of our 2D hermite series to data so that a very good fit to cross moments
%between X and Y is retrieved.
%We will first find univariate Z-series/hermite series of both random
%variables independently. For joint dynamics of the dependent variable given 
%the independent variable, we will assume a 2D hermite series form (this has to 
%be two dimensional to be able to capture the dynamics from independent variable Y
%and have its own dynamics) but we do not know the coefficients of 
%this 2D hermite series. In this program, we try to use iterative optimization 
%to find the unknown coefficients of the 2D hermite series of dependent variable X 
%in a way that after taking products with independent variable Y, expectations   
%of these products i.e. cross moments are retrieved as exactly as possible.
%Our iterative optimization procedure perturbs each hermite coefficient in 
%the 2D hermite series by a very slight amount and checks the objective function.
%If the objective function decreases, we accept the perturbation; otherwise we
%reject it and then perturb by a small amount in the opposite direction
%and again check the objective function. If perturbations on both sides 
%are rejected, we do not change the coefficient but slightly decrease the
%perturbation size. 
%We will first standardize the data of both random variables and then
%calculate univariate hermite series of both variables and also their 
%cross moments in standardized form. We do all the calculations of coefficients
%of independent variable Y and 2D dependent variable X
%in the standardized version of variables and moments. After optimization of coefficients on 
%standardized data, at the end, we invert the standardization of both 
%dependent and independent variables appropriately to get the Z-series and
%hermite series of the actual variables in their original coordinates back from
%the standardized coordinates.
%Before optimization, we place two conditions on the 2D dependent variable X so that
%the first row and first column of the 2D hermite coefficients of X are already
%known before the optimization stage and are not changed in optimization. 
%We do not iteratively optimize over this first row and first column,
%which are analytically known from these two conditions.
%Our first condition is that after integration (of the bivariate density or,
%alternatively, the bivariate 2D Z-series/2D Hermite series) over the independent
%variable, we should get the univariate marginal density/Z-series/Hermite series of X
%which has already been calculated out of the data. This fixes the zeroth
%hermite (of independent variable Y) related terms in the Z-series/hermite series. 
%Please look at forum post# 2121 for an explanation of this first condition.
% https://forum.wilmott.com/viewtopic.php?t=99702&start=2115#p879042

%We can analytically solve for first order cross moments of X with higher
%order in Y. From the analytical solution of these moments, we find first
%row of coefficients in the hermite form.


% 2D Hermite Series Form used in this program is given as
% 
% X = bnmH(1,1) + bnmH(2,1) H_1(Z_y) + bnmH(3,1) H_2(Z_y) + bnmH(4,1) H_3(Z_y) + bnmH(5,1) H_4(Z_y) + bnmH(6,1) H_5(Z_y) 
% + [ bnmH(1,2) + bnmH(2,2) H_1(Z_y) + bnmH(3,2) H_2(Z_y) + bnmH(4,2) H_3(Z_y) + bnmH(5,2) H_4(Z_y) + bnmH(6,2) H_5(Z_y)] H_1(Zx) 
% + [ bnmH(1,3) + bnmH(2,3) H_1(Z_y) + bnmH(3,3) H_2(Z_y) + bnmH(4,3) H_3(Z_y) + bnmH(5,3) H_4(Z_y) + bnmH(6,3) H_5(Z_y)] H_2(Zx)
% + [ bnmH(1,4) + bnmH(2,4) H_1(Z_y) + bnmH(3,4) H_2(Z_y) + bnmH(4,4) H_3(Z_y) + bnmH(5,4) H_4(Z_y) + bnmH(6,4) H_5(Z_y)] H_3(Zx)
% + [ bnmH(1,5) + bnmH(2,5) H_1(Z_y) + bnmH(3,5) H_2(Z_y) + bnmH(4,5) H_3(Z_y) + bnmH(5,5) H_4(Z_y) + bnmH(6,5) H_5(Z_y)] H_4(Zx)
% + [ bnmH(1,6) + bnmH(2,6) H_1(Z_y) + bnmH(3,6) H_2(Z_y) + bnmH(4,6) H_3(Z_y) + bnmH(5,6) H_4(Z_y) + bnmH(6,6) H_5(Z_y)] H_5(Zx)
 
% 2D Z-Series Form used in this program is given as
% 
% X = bnm(1,1) + bnm(2,1) Z_y + bnm(3,1) (Z_y)^2 + bnm(4,1) (Z_y)^3 + bnm(5,1) (Z_y)^4 + bnm(6,1) (Z_y)^5 
% + [ bnm(1,2) + bnm(2,2) Z_y + bnm(3,2) (Z_y)^2 + bnm(4,2) (Z_y)^3 + bnm(5,2) (Z_y)^4 + bnm(6,2) (Z_y)^5 ] Zx 
% + [ bnm(1,3) + bnm(2,3) Z_y + bnm(3,3) (Z_y)^2 + bnm(4,3) (Z_y)^3 + bnm(5,3) (Z_y)^4 + bnm(6,3) (Z_y)^5 ] Zx^2 
% + [ bnm(1,4) + bnm(2,4) Z_y + bnm(3,4) (Z_y)^2 + bnm(4,4) (Z_y)^3 + bnm(5,4) (Z_y)^4 + bnm(6,4) (Z_y)^5 ] Zx^3 
% + [ bnm(1,5) + bnm(2,5) Z_y + bnm(3,5) (Z_y)^2 + bnm(4,5) (Z_y)^3 + bnm(5,5) (Z_y)^4 + bnm(6,5) (Z_y)^5 ] Zx^4 
% + [ bnm(1,6) + bnm(2,6) Z_y + bnm(3,6) (Z_y)^2 + bnm(4,6) (Z_y)^3 + bnm(5,6) (Z_y)^4 + bnm(6,6) (Z_y)^5 ] Zx^5




%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%In the code block below, we standardize the independent variable Y and dependent variable X.
%We do all later calculations of various Z-series and cross moments in 
%standardized framework and then at the end of the program, we convert 
%the calculated coefficients of 2D hermite series/Z-series of dependent 
%variable X into original non-standardized framework.

 Ndata=length(Yin);
 
 MeanY=sum(Yin(1:Ndata))/Ndata;
 MeanX=sum(Xin(1:Ndata))/Ndata;
 
 Ym(1:Ndata)=Yin(1:Ndata)-MeanY;
 Xm(1:Ndata)=Xin(1:Ndata)-MeanX;
 
 YmVar=sum(Ym(1:Ndata).^2)/Ndata;
 XmVar=sum(Xm(1:Ndata).^2)/Ndata;
 
 Y(1:Ndata)=Ym(1:Ndata)/sqrt(YmVar);
 X(1:Ndata)=Xm(1:Ndata)/sqrt(XmVar);

 
 sqrt(YmVar)
 sqrt(XmVar)
 
 str=input("Look at standardization constants");
 

 NMomentsY=6;
 NMomentsX=6;
     for nn=1:NMomentsY
        for mm=1:NMomentsX
            CrossMoments11(nn,mm)=0;
            for pp=1:Ndata
               CrossMoments11(nn,mm)=CrossMoments11(nn,mm)+Yin(pp).^nn.*Xin(pp).^mm/Ndata;
               %CrossMoments11(nn,mm)=CrossMoments11(nn,mm)+Y(pp).^nn.*X(pp).^mm/Ndata;
               %CrossMoments11(nn,mm)=CrossMoments11(nn,mm)+Ym(pp).^nn.*Xm(pp).^mm/Ndata;
            end
        end
     end
 
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Code block below uses polynomial regression to find univariate Z-series 
%coefficients of independent variable Y.
%To know more about polynomial regression read the following and associated
%posts:  https://forum.wilmott.com/viewtopic.php?t=99702&start=2010#p877143
 
%You can get the text file MomentMatchParams08.txt from the forum post below
%https://forum.wilmott.com/viewtopic.php?f=4&t=99702&start=2040#p877863

Mtable=readtable('C:\Users\Lenovo\Documents\MATLAB\MATLAB1\MomentMatchParams08.txt');

%ZI is moment matched grid along the normal density with number of grid
%points equal to data elements in Y given by Ndata. The grid is constructed
%so that probability mass in each grid cell is 1/Ndata so this matches the
%observation probability of Ndata data elements. Due to equal probability
%mass in each grid cell, we call it equiprobable grid.

[ZI] = MatchMomentParametersOfIdealZhalf04(Ndata,Mtable);

Ys=sort(Y);
Ymu(1:Ndata,1)=Ys(1:Ndata);

W(1:Ndata,1)=1;
W(1:Ndata,2)=ZI(1:Ndata);
W(1:Ndata,3)=ZI(1:Ndata).^2;
W(1:Ndata,4)=ZI(1:Ndata).^3;
W(1:Ndata,5)=ZI(1:Ndata).^4;
W(1:Ndata,6)=ZI(1:Ndata).^5;
% W(1:Ndata,7)=ZI(1:Ndata).^6;
% W(1:Ndata,8)=ZI(1:Ndata).^7;


coeff=inv(W'*W)*(W'*Ymu);

%Above, we get coefficients of univariate Z-series of independent
%variable Y by least square regression.


a0=coeff(1,1);
a(1:5)=coeff(2:6,1);

%Above, we assign zeroth power Z-series coefficient to a0 and rest of
%the coefficients to array a.
%This was older format but we might need it since many old functions are written 
%in this format.

%Below, we assign same Z-series coefficients to a single array. This is
%newer format that we want to use mostly.
aa(1:6)=coeff(1:6);


%uncomment these lines if you want to see shape of Z-series constructed
%density of Y. We had calculated coefficients of this Z-series by
%least square polynomial regression.
%PlotZSeriesDensity(a0,a,'r')
%str=input("Look at the density of volatility");
    
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5    
%In this code block, we calculate the coefficients of the univariate marginal
%density of dependent variable X again by least square polynomial
%regression as we did in previous block for independent variable Y.
    
Xs=sort(X);
Xmu(1:Ndata,1)=Xs(1:Ndata);

W(1:Ndata,1)=1;
W(1:Ndata,2)=ZI(1:Ndata);
W(1:Ndata,3)=ZI(1:Ndata).^2;
W(1:Ndata,4)=ZI(1:Ndata).^3;
W(1:Ndata,5)=ZI(1:Ndata).^4;
W(1:Ndata,6)=ZI(1:Ndata).^5;

coeff=inv(W'*W)*(W'*Xmu);

c0=coeff(1,1);
c(1:5)=coeff(2:6,1);

cc(1:6)=coeff(1:6,1);

[ccH] = ConvertZSeriesToHermiteSeriesNew(cc,5);

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

[Ys,Iy]=sort(Y);
%ZI=Zy(I);
Zy(Iy)=ZI;

% [Zy] = CalculateZgivenXAndZSeriesC5Improved(Y,a0,a);
 [Zx] = CalculateZgivenXAndZSeriesC5Improved(X,c0,c);
% 
 paths=Ndata;
 [CorrH0] = CalculateCorrelationBivariateHermiteCH(Zx,Zy,paths,5);

Xcorr(1:Ndata)=ccH(2).*CorrH0(1).*Zy(1:Ndata) ...
    +ccH(3).*CorrH0(2).*(Zy(1:Ndata).^2-1) ...
    +ccH(4).*CorrH0(3).*(Zy(1:Ndata).^3-3*Zy(1:Ndata)) ...
    +ccH(5).*CorrH0(4).*(Zy(1:Ndata).^4-6*Zy(1:Ndata).^2+3) ...
    +ccH(6).*CorrH0(5).*(Zy(1:Ndata).^5-10*Zy(1:Ndata).^3+15*Zy(1:Ndata));
Xdcorr(1:Ndata)=X(1:Ndata)-Xcorr(1:Ndata);% ...
%     -ccH(2).*CorrH0(1).*Zy(1:Ndata) ...
%     -ccH(3).*CorrH0(2).*(Zy(1:Ndata).^2-1) ...
%     -ccH(4).*CorrH0(3).*(Zy(1:Ndata).^3-3*Zy(1:Ndata)) ...
%     -ccH(5).*CorrH0(4).*(Zy(1:Ndata).^4-6*Zy(1:Ndata).^2+3) ...
%     -ccH(6).*CorrH0(5).*(Zy(1:Ndata).^5-10*Zy(1:Ndata).^3+15*Zy(1:Ndata));
    

[Xdcorrs,I]=sort(Xdcorr);
%Zys=Zy(I);
Zx=ZI;
%Xs=X(I);

Zys=Zy(I);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


Xmu(1:Ndata,1)=Xdcorrs(1:Ndata);


W=0;

W(1:Ndata,1)=1.0;
W(1:Ndata,2)=Zx(1:Ndata);
W(1:Ndata,3)=(Zx(1:Ndata).^2-1);
W(1:Ndata,4)=(Zx(1:Ndata).^3-3*Zx(1:Ndata));
W(1:Ndata,5)=(Zx(1:Ndata).^4-6*Zx(1:Ndata).^2+3);
W(1:Ndata,6)=(Zx(1:Ndata).^5-10*Zx(1:Ndata).^3+15*Zx(1:Ndata));

W(1:Ndata,7)=1.*Zys(1:Ndata);
W(1:Ndata,8)=Zx(1:Ndata).*Zys(1:Ndata);
W(1:Ndata,9)=(Zx(1:Ndata).^2-1).*Zys(1:Ndata);
W(1:Ndata,10)=(Zx(1:Ndata).^3-3*Zx(1:Ndata)).*Zys(1:Ndata);
W(1:Ndata,11)=(Zx(1:Ndata).^4-6*Zx(1:Ndata).^2+3).*Zys(1:Ndata);
W(1:Ndata,12)=(Zx(1:Ndata).^5-10*Zx(1:Ndata).^3+15*Zx(1:Ndata)).*Zys(1:Ndata);

W(1:Ndata,13)=(Zys(1:Ndata).^2-1);
W(1:Ndata,14)=Zx(1:Ndata).*(Zys(1:Ndata).^2-1);
W(1:Ndata,15)=(Zx(1:Ndata).^2-1).*(Zys(1:Ndata).^2-1);
W(1:Ndata,16)=(Zx(1:Ndata).^3-3*Zx(1:Ndata)).*(Zys(1:Ndata).^2-1);
W(1:Ndata,17)=(Zx(1:Ndata).^4-6*Zx(1:Ndata).^2+3).*(Zys(1:Ndata).^2-1);
W(1:Ndata,18)=(Zx(1:Ndata).^5-10*Zx(1:Ndata).^3+15*Zx(1:Ndata)).*(Zys(1:Ndata).^2-1);

W(1:Ndata,19)=(Zys(1:Ndata).^3-3*Zys(1:Ndata));
W(1:Ndata,20)=Zx(1:Ndata).*(Zys(1:Ndata).^3-3*Zys(1:Ndata));
W(1:Ndata,21)=(Zx(1:Ndata).^2-1).*(Zys(1:Ndata).^3-3*Zys(1:Ndata));
W(1:Ndata,22)=(Zx(1:Ndata).^3-3*Zx(1:Ndata)).*(Zys(1:Ndata).^3-3*Zys(1:Ndata));
W(1:Ndata,23)=(Zx(1:Ndata).^4-6*Zx(1:Ndata).^2+3).*(Zys(1:Ndata).^3-3*Zys(1:Ndata));
W(1:Ndata,24)=(Zx(1:Ndata).^5-10*Zx(1:Ndata).^3+15*Zx(1:Ndata)).*(Zys(1:Ndata).^3-3*Zys(1:Ndata));

W(1:Ndata,25)=(Zys(1:Ndata).^4-6*Zys(1:Ndata).^2+3);
W(1:Ndata,26)=Zx(1:Ndata).*(Zys(1:Ndata).^4-6*Zys(1:Ndata).^2+3);
W(1:Ndata,27)=(Zx(1:Ndata).^2-1).*(Zys(1:Ndata).^4-6*Zys(1:Ndata).^2+3);
W(1:Ndata,28)=(Zx(1:Ndata).^3-3*Zx(1:Ndata)).*(Zys(1:Ndata).^4-6*Zys(1:Ndata).^2+3);
W(1:Ndata,29)=(Zx(1:Ndata).^4-6*Zx(1:Ndata).^2+3).*(Zys(1:Ndata).^4-6*Zys(1:Ndata).^2+3);
W(1:Ndata,30)=(Zx(1:Ndata).^5-10*Zx(1:Ndata).^3+15*Zx(1:Ndata)).*(Zys(1:Ndata).^4-6*Zys(1:Ndata).^2+3);

W(1:Ndata,31)=(Zys(1:Ndata).^5-10*Zys(1:Ndata).^3+15*Zys(1:Ndata));
W(1:Ndata,32)=Zx(1:Ndata).*(Zys(1:Ndata).^5-10*Zys(1:Ndata).^3+15*Zys(1:Ndata));
W(1:Ndata,33)=(Zx(1:Ndata).^2-1).*(Zys(1:Ndata).^5-10*Zys(1:Ndata).^3+15*Zys(1:Ndata));
W(1:Ndata,34)=(Zx(1:Ndata).^3-3*Zx(1:Ndata)).*(Zys(1:Ndata).^5-10*Zys(1:Ndata).^3+15*Zys(1:Ndata));
W(1:Ndata,35)=(Zx(1:Ndata).^4-6*Zx(1:Ndata).^2+3).*(Zys(1:Ndata).^5-10*Zys(1:Ndata).^3+15*Zys(1:Ndata));
W(1:Ndata,36)=(Zx(1:Ndata).^5-10*Zx(1:Ndata).^3+15*Zx(1:Ndata)).*(Zys(1:Ndata).^5-10*Zys(1:Ndata).^3+15*Zys(1:Ndata));

coeff=inv(W'*W)*(W'*Xmu);


%bnmH0(1,1:6)=ccH(1:6);  %Replace first line of X-hermites with Marginal density retrieval condition
bnmH0(1,1:6)=coeff(1:6);
bnmH0(2,1:6)=coeff(7:12);
bnmH0(3,1:6)=coeff(13:18);
bnmH0(4,1:6)=coeff(19:24);
bnmH0(5,1:6)=coeff(25:30);
bnmH0(6,1:6)=coeff(31:36);


bnmH0

str=input("bnmH0 before addition of correlated part");


bnmH0(2,1)=bnmH0(2,1)+ccH(2).*CorrH0(1);
bnmH0(3,1)=bnmH0(3,1)+ccH(3).*CorrH0(2);
bnmH0(4,1)=bnmH0(4,1)+ccH(4).*CorrH0(3);
bnmH0(5,1)=bnmH0(5,1)+ccH(5).*CorrH0(4);
bnmH0(6,1)=bnmH0(6,1)+ccH(6).*CorrH0(5);


bnmH=bnmH0;

[bnmHVar] = CalculateVariance2D(bnmH,6,6);

bnmH
bnmHVar

%bnmH=bnmH/sqrt(bnmHVar);

str=input("Look at variacne 2D");
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% 
% 
% 
% Xmu(1:Ndata,1)=Xs(1:Ndata);
% 
% 
% W=0;
% 
% %W(1:Ndata,1)=1.0;
% W(1:Ndata,1)=Zx(1:Ndata);
% W(1:Ndata,2)=(Zx(1:Ndata).^2-1);
% W(1:Ndata,3)=(Zx(1:Ndata).^3-3*Zx(1:Ndata));
% W(1:Ndata,4)=(Zx(1:Ndata).^4-6*Zx(1:Ndata).^2+3);
% W(1:Ndata,5)=(Zx(1:Ndata).^5-10*Zx(1:Ndata).^3+15*Zx(1:Ndata));
% 
% W(1:Ndata,6)=1.*Zys(1:Ndata);
% W(1:Ndata,7)=Zx(1:Ndata).*Zys(1:Ndata);
% W(1:Ndata,8)=(Zx(1:Ndata).^2-1).*Zys(1:Ndata);
% W(1:Ndata,9)=(Zx(1:Ndata).^3-3*Zx(1:Ndata)).*Zys(1:Ndata);
% W(1:Ndata,10)=(Zx(1:Ndata).^4-6*Zx(1:Ndata).^2+3).*Zys(1:Ndata);
% W(1:Ndata,11)=(Zx(1:Ndata).^5-10*Zx(1:Ndata).^3+15*Zx(1:Ndata)).*Zys(1:Ndata);
% 
% W(1:Ndata,12)=1.*(Zys(1:Ndata).^2-1);
% W(1:Ndata,13)=Zx(1:Ndata).*(Zys(1:Ndata).^2-1);
% W(1:Ndata,14)=(Zx(1:Ndata).^2-1).*(Zys(1:Ndata).^2-1);
% W(1:Ndata,15)=(Zx(1:Ndata).^3-3*Zx(1:Ndata)).*(Zys(1:Ndata).^2-1);
% W(1:Ndata,16)=(Zx(1:Ndata).^4-6*Zx(1:Ndata).^2+3).*(Zys(1:Ndata).^2-1);
% W(1:Ndata,17)=(Zx(1:Ndata).^5-10*Zx(1:Ndata).^3+15*Zx(1:Ndata)).*(Zys(1:Ndata).^2-1);
% 
% 
% 
% 
% %W(1:Ndata,17)=(Zx(1:Ndata).^5-10*Zx(1:Ndata).^3+15*Zx(1:Ndata)).*(Zys(1:Ndata).^2-1);
% 
% W(1:Ndata,18)=(Zys(1:Ndata).^3-3*Zy(1:Ndata));
% W(1:Ndata,19)=Zx(1:Ndata).*(Zys(1:Ndata).^3-3*Zy(1:Ndata));
% W(1:Ndata,20)=(Zx(1:Ndata).^2-1).*(Zys(1:Ndata).^3-3*Zy(1:Ndata));
% W(1:Ndata,21)=(Zx(1:Ndata).^3-3*Zx(1:Ndata)).*(Zys(1:Ndata).^3-3*Zys(1:Ndata));
% W(1:Ndata,22)=(Zx(1:Ndata).^4-6*Zx(1:Ndata).^2+3).*(Zys(1:Ndata).^3-3*Zys(1:Ndata));
% W(1:Ndata,23)=(Zx(1:Ndata).^5-10*Zx(1:Ndata).^3+15*Zx(1:Ndata)).*(Zys(1:Ndata).^3-3*Zys(1:Ndata));
% 
% W(1:Ndata,24)=(Zys(1:Ndata).^4-6*Zys(1:Ndata).^2+3);
% W(1:Ndata,25)=Zx(1:Ndata).*(Zys(1:Ndata).^4-6*Zys(1:Ndata).^2+3);
% W(1:Ndata,26)=(Zx(1:Ndata).^2-1).*(Zys(1:Ndata).^4-6*Zys(1:Ndata).^2+3);
% W(1:Ndata,27)=(Zx(1:Ndata).^3-3*Zx(1:Ndata)).*(Zys(1:Ndata).^4-6*Zys(1:Ndata).^2+3);
% W(1:Ndata,28)=(Zx(1:Ndata).^4-6*Zx(1:Ndata).^2+3).*(Zys(1:Ndata).^4-6*Zys(1:Ndata).^2+3);
% W(1:Ndata,29)=(Zx(1:Ndata).^5-10*Zx(1:Ndata).^3+15*Zx(1:Ndata)).*(Zys(1:Ndata).^4-6*Zys(1:Ndata).^2+3);
% 
% W(1:Ndata,30)=(Zys(1:Ndata).^5-10*Zys(1:Ndata).^3+15*Zys(1:Ndata));
% W(1:Ndata,31)=Zx(1:Ndata).*(Zys(1:Ndata).^5-10*Zys(1:Ndata).^3+15*Zys(1:Ndata));
% W(1:Ndata,32)=(Zx(1:Ndata).^2-1).*(Zys(1:Ndata).^5-10*Zys(1:Ndata).^3+15*Zys(1:Ndata));
% W(1:Ndata,33)=(Zx(1:Ndata).^3-3*Zx(1:Ndata)).*(Zys(1:Ndata).^5-10*Zys(1:Ndata).^3+15*Zys(1:Ndata));
% W(1:Ndata,34)=(Zx(1:Ndata).^4-6*Zx(1:Ndata).^2+3).*(Zys(1:Ndata).^5-10*Zys(1:Ndata).^3+15*Zys(1:Ndata));
% W(1:Ndata,35)=(Zx(1:Ndata).^5-10*Zx(1:Ndata).^3+15*Zx(1:Ndata)).*(Zys(1:Ndata).^5-10*Zys(1:Ndata).^3+15*Zys(1:Ndata));
% 
% coeff=inv(W'*W)*(W'*Xmu);
% 
% 
% %bnmH0(1,1:6)=ccH(1:6);  %Replace first line of X-hermites with Marginal density retrieval condition
% bnmH0(1,1)=0;%coeff(2:5);
% bnmH0(1,2:6)=coeff(1:5);
% bnmH0(2,1:6)=coeff(6:11);
% bnmH0(3,1:6)=coeff(12:17);
% bnmH0(4,1:6)=coeff(18:23);
% bnmH0(5,1:6)=coeff(24:29);
% bnmH0(6,1:6)=coeff(30:35);
% 
% bnmH=bnmH0;
% 


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
SeriesOrder1=6;
NMomentsY=6;
NMomentsX=6;
    for nn=1:NMomentsY
       for mm=1:NMomentsX
           CrossMoments(nn,mm)=0;
           for pp=1:Ndata
              CrossMoments(nn,mm)=CrossMoments(nn,mm)+Y(pp).^nn.*X(pp).^mm/Ndata;
           end
       end
    end



[chh] = ProduceFirstRowofConditioal2DZSeries02_6M_A(aa,SeriesOrder1,CrossMoments);


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

[bnm] = Convert2DHermitesInto2DSeries(bnmH);

SeriesOrder1=6;
SeriesOrder10=6;
SeriesOrder01=6;
NMomentsX=6;
NMomentsY=6;
[Moments2D0] = CalculateCrossMomentsConditional2DNew02(aa,bnm,SeriesOrder1,SeriesOrder10,SeriesOrder01,NMomentsX,NMomentsY);

Moments2D0

CrossMoments

str=input("Look at comparison of moments--first-standardized");

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%In the code block below, we convert 2D hermite series coefficients of 
%dependent variable X from standardized version to original version.
%For that we first multiply the entire 2D Hermite series by square root of
%variance.

 bnmH=bnmH*sqrt(XmVar);

%We convert X from hermite series form to Z-series form.
[bnm] = Convert2DHermitesInto2DSeries(bnmH);
bnm(1,1)=MeanX-bnm(3,1)-3*bnm(5,1)-1*bnm(1,3)-1*3*bnm(1,5)-3*bnm(5,3)-bnm(3,3)-3*bnm(3,5)-9*bnm(5,5);

%Finally, in the above line, we make sure that the mean of the 2D Z-series equals the original mean MeanX.


%Below, we convert Z-series of independent variable Y from standard form to
%original variable form.

aa=aa*sqrt(YmVar);
aa(1)=MeanY-aa(3)-3*aa(5);


 SeriesOrder1=6;
 SeriesOrder10=6;
 SeriesOrder01=6;
 NMomentsX=6;
 NMomentsY=6;
 [Moments2D] = CalculateCrossMomentsConditional2DNew02(aa,bnm,SeriesOrder1,SeriesOrder10,SeriesOrder01,NMomentsX,NMomentsY);
 
 Moments2D
 
 CrossMoments11
 
 str=input("Look at comparison of moments");

end
.
.
.
function [bnmHVar,bnmHVar1D] = CalculateVariance2D(bnmH,SeriesOrder10,SeriesOrder01)


bnmHVar=0;
for nn=1:SeriesOrder10
    for mm=1:SeriesOrder01
        if(nn==1)&&(mm==1)
            ;
        else
            bnmHVar=bnmHVar+bnmH(nn,mm).^2.*factorial(nn-1).*factorial(mm-1);
        end
    end
end


bnmHVar1D=0;
for nn=2:SeriesOrder10
    bnmHVar1D=bnmHVar1D+bnmH(nn,1).^2.*factorial(nn-1);
end
for mm=2:SeriesOrder01
    bnmHVar1D=bnmHVar1D+bnmH(1,mm).^2.*factorial(mm-1);
end


end

You think life is a secret, Life is only love of flying, It has seen many ups and downs, But it likes travel more than the destination. Allama Iqbal
 
User avatar
Amin
Topic Author
Posts: 2586
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

February 25th, 2024, 9:24 am

Friends, when you run the previous function, you will notice that after the regression on the decorrelated part, we get a result that has minuscule values in the correlated column part of the density, so we get a pure transition Hermite series as a result. The correlated part goes directly from the Zy cells to all the Zx cells. Since we removed the correlated part from the regression, I expect that our results for the transition Hermite series will be more robust.
I want to check this against the actual transition Hermite series calculated from Monte Carlo data and see how good the results are.
If the results are not good enough, I want to go back to optimization, but I do not want cross moments in the objective function; I want Zy-dependent conditional moments of X in the objective function. These moments might not always be calculable in analytic form, though I think we could mostly do it with the first four analytic conditional moments. Another possibility is to put the conditional powers of all the data directly in the objective function and compare them with the model's conditional powers of the data using a squared-difference objective function.
I also want to verify whether the two-dimensional standardization I talked about in the earlier post would be better for the calculation of the transition Hermite series.
Unlike before, I am also thinking of constraints on the coefficients that have to be applied when doing the optimization. 
I will soon come up with a program to calculate the first few conditional moments/transition moments of X conditional on Y in analytic form using Hermite regression.
We will keep working until we have fixed everything in our research very well.
Also, I noticed that the new regression algorithm is very good and we get stable 2D densities starting mostly from 200 data points. There can be some spikes in the density for very few data points, but that can be improved by systematically altering the coefficients very slightly so that we get seamless densities. 
You think life is a secret, Life is only love of flying, It has seen many ups and downs, But it likes travel more than the destination. Allama Iqbal
 
User avatar
Amin
Topic Author
Posts: 2586
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

February 25th, 2024, 10:54 am

For those friends who have been following the logic of my research, I wanted to briefly explain how to take the 2D density regression and graphing algorithm to higher dimensions. Suppose we have three variables, V, Y and X, and we want to find the 3D density of X as a function of Zv, Zy and Zx. First we take the first two variables, V and Y, and find their 1D and 2D densities using the two-dimensional algorithm we already know, so that we have complete knowledge of the joint density of V and Y. Then we find the correlations between the Hermite polynomials of Zv and Zy and the marginal density of X. We use these correlations to find a decorrelated version of X that has no correlations with Y and V. We then sort this decorrelated X and assign a value of Zx to each element of the data, and run a regression on Zx (alone, for the decorrelated marginal density) and on Zv and Zy, where Zv and Zy are moved along with the sorted X so that the original data triplets remain together. We will have to calculate the Jacobian with respect to the three standard normals to calculate and graph the density. 
You think life is a secret, Life is only love of flying, It has seen many ups and downs, But it likes travel more than the destination. Allama Iqbal
 
User avatar
Amin
Topic Author
Posts: 2586
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

February 25th, 2024, 11:26 am

Friends, I am going to go out in the city now to get good food and water. There was relative inactivity from the mind control agents until 4-5 days ago, and I had started to dare to take a lot of food more freely. But for the past four days or so, they have extremely increased their attempts to drug my food. I see agents closely approaching food shops and stalls after I go there, in areas randomly picked on the spot by me where I know the food would be better. They are also drugging water on the shelves again. Just two days ago, I bought good water from the largest bakery chain in Lahore and I was very happy that they were selling good water (I had bought this water from another branch a few days ago and it was also good at that time). But the mind control agencies noticed that I had taken note of the water, and within a day they drugged water in the close neighborhoods where I live, and the water I bought from another shop of the same bakery chain in a nearby neighborhood was drugged. This shows the urgency they have to control me, that they are drugging water at places (that cooperate very well) within a single day after I bought good water. I have been able to survive without getting bad food for these days, but there has been a sudden sharp increase in attempts by the mind control agencies to drug my food, and several food places tried to play with me by keeping me waiting when I approached them so that they would have a chance to drug my food (I know these tactics very well by now and was able to avoid bad food). 
When I started working on conditional Z-series in October, I had decided that I really wanted to minimize writing about mind control and to try to concentrate totally on my research. I was not writing anything, but I had to protest when they drugged the city to unprecedented levels, as many friends would have known, and still they were resolute to continue drugging the city despite protests from good people. It was only a few weeks ago that there was some decrease in the drugging of food, and I was slowly opening up to taking food more freely at many places, but very recently they have started drugging food with great urgency again and it is becoming more difficult to get good food. 
I just want to concentrate on my research and have not written about any persecution since the drugging of food decreased, even though I remain under mind control all the time, which hurts in so many ways. But still, whenever I try to do some serious research, the mind control agencies lose their calm and temper and start drugging the city again like there is no tomorrow. 
You think life is a secret, Life is only love of flying, It has seen many ups and downs, But it likes travel more than the destination. Allama Iqbal
 
User avatar
Amin
Topic Author
Posts: 2586
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

February 25th, 2024, 6:46 pm

Friends, I have slightly changed the density calculation and graphing algorithm.
When we just want to calculate the 2D density, we do not need products of Hermites in our expression at all. Those products of Hermites are for transition purposes, and the 2D density can easily be calculated accurately without them. 
So our new 2D algorithm to plot the 2D density of Y and X is:
1. Find the 1D Hermite series of the independent first random variable Y.
2. Use Hermite regression to find the correlated part of X, called Xcorr, for each data value of X and subtract it from the data values to find the uncorrelated part Xdcorr.
3. Sort Xdcorr and regress it on the ideal normal density array. We do not include Hermite products in this regression, and we do not include Zy since its correlated contribution has already been removed. We just regress on six regressors: a constant and the first five Hermites. This results in a very robust regression algorithm since the regression matrix is very small (we regress on only six variables) and its inversion is perfectly stable.
4. As dimensionality grows, our regression matrix size remains the same; this regression is done on six variables only (for the chosen size of Hermite series) for each dimension recursively, and I believe that we can tackle joint densities of hundreds of variables (hundreds of dimensions) in a stable fashion with only a few hundred data points (pairlets in high dimensions).

I did this for our 2D Hermite series example and found that we could obtain a very recognizable density from only 50 data points, as opposed to 200 data points earlier, since decreasing the size of the design matrix and removing redundant variables that are not needed for the 2D density made the inversion of the design matrix very stable.  

Here is the new regression program with this change. In this program, bnmH has only its first row and first column populated and the rest of the matrix entries are zero. I let it stay in this form so we could run it with the rest of the older program.

The size of the Hermite series required to find the density of correlated variables increases linearly with dimensionality. If we are taking a Hermite series with five polynomials, the first correlated variable will be represented by six coefficients of the Hermite series; the second correlated variable will require eleven coefficients of the Hermite series in two dimensions. The Nth correlated variable will have a total of 6 + 5*(N-1) coefficients, with five Hermites in each dimension.
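A trivial sketch of that coefficient count (illustrative only):

% Number of Hermite coefficients for the N-th correlated variable when each
% dimension uses five Hermite polynomials plus a constant: 6 + 5*(N-1).
nHermites = 5;
N = 1:4;
nCoeffs = (nHermites + 1) + nHermites .* (N - 1);   % gives [6 11 16 21]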

Here is the program.
function [bnm,aa] = CalculateConditionalProbabilitiesYtoXLinearRegression04IterC04(Yin,Xin)

%Yin argument translates to independent random variable Y
%Xin argument translates to dependent variable X.
%This function takes observations of an independent variable Y and
%dependent variable X. These have to be joint observations in the sense
%that the nth observation of Y, given by Y(n), is taken jointly with the nth
%observation of X, given by X(n). The two arrays of observations have the same
%length and the nth member of the independent variable Y is jointly observed
%with the nth member of the dependent variable X.
%We want to find the detailed relationship between Y and X so we can get
%the conditional density of dependent variable X given a particular value
%of independent variable Y. This will be done by calibrating coefficients
%of our 2D hermite series to data so that a very good fit to cross moments
%between X and Y is retrieved.
%We will first find univariate Z-series/hermite series of both random
%variables independently. For joint dynamics of the dependent variable given 
%the independent variable, we will assume a 2D hermite series form (this has to 
%be two dimensional to be able to capture the dynamics from independent variable Y
%and have its own dynamics) but we do not know the coefficients of 
%this 2D hermite series. In this program, we try to use iterative optimization 
%to find the unknown coefficients of the 2D hermite series of dependent variable X 
%in a way that after taking products with independent variable Y, expectations   
%of these products i.e. cross moments are retrieved as exactly as possible.
%Our iterative optimization procedure perturbs each hermite coefficient in 
%the 2D hermite series by a very slight amount and checks the objective function.
%If the objective function decreases, we accept the perturbation; otherwise we
%reject it and then perturb by a small amount in the opposite direction
%and again check the objective function. If perturbations on both sides 
%are rejected, we do not change the coefficient but slightly decrease the
%perturbation size. 
%We will first standardize the data of both random variables and then
%calculate univariate hermite series of both variables and also their 
%cross moments in standardized form. We do all the calculations of coefficients
%of independent variable Y and 2D dependent variable X
%in the standardized version of variables and moments. After optimization of coefficients on 
%standardized data, at the end, we invert the standardization of both 
%dependent and independent variables appropriately to get the Z-series and
%hermite series of the actual variables in their original coordinates back from
%the standardized coordinates.
%Before optimization, we place two conditions on the 2D dependent variable X so that
%the first row and first column of the 2D hermite coefficients of X are already
%known before the optimization stage and are not changed in optimization. 
%We do not iteratively optimize over this first row and first column,
%which are analytically known from these two conditions.
%Our first condition is that after integration (of the bivariate density or,
%alternatively, the bivariate 2D Z-series/2D Hermite series) over the independent
%variable, we should get the univariate marginal density/Z-series/Hermite series of X
%which has already been calculated out of the data. This fixes the zeroth
%hermite (of independent variable Y) related terms in the Z-series/hermite series. 
%Please look at forum post# 2121 for an explanation of this first condition.
% https://forum.wilmott.com/viewtopic.php?t=99702&start=2115#p879042

%We can analytically solve for first order cross moments of X with higher
%order in Y. From the analytical solution of these moments, we find first
%row of coefficients in the hermite form.


% 2D Hermite Series Form used in this program is given as
% 
% X = bnmH(1,1) + bnmH(2,1) H_1(Z_y) + bnmH(3,1) H_2(Z_y) + bnmH(4,1) H_3(Z_y) + bnmH(5,1) H_4(Z_y) + bnmH(6,1) H_5(Z_y) 
% + [ bnmH(1,2) + bnmH(2,2) H_1(Z_y) + bnmH(3,2) H_2(Z_y) + bnmH(4,2) H_3(Z_y) + bnmH(5,2) H_4(Z_y) + bnmH(6,2) H_5(Z_y)] H_1(Zx) 
% + [ bnmH(1,3) + bnmH(2,3) H_1(Z_y) + bnmH(3,3) H_2(Z_y) + bnmH(4,3) H_3(Z_y) + bnmH(5,3) H_4(Z_y) + bnmH(6,3) H_5(Z_y)] H_2(Zx)
% + [ bnmH(1,4) + bnmH(2,4) H_1(Z_y) + bnmH(3,4) H_2(Z_y) + bnmH(4,4) H_3(Z_y) + bnmH(5,4) H_4(Z_y) + bnmH(6,4) H_5(Z_y)] H_3(Zx)
% + [ bnmH(1,5) + bnmH(2,5) H_1(Z_y) + bnmH(3,5) H_2(Z_y) + bnmH(4,5) H_3(Z_y) + bnmH(5,5) H_4(Z_y) + bnmH(6,5) H_5(Z_y)] H_4(Zx)
% + [ bnmH(1,6) + bnmH(2,6) H_1(Z_y) + bnmH(3,6) H_2(Z_y) + bnmH(4,6) H_3(Z_y) + bnmH(5,6) H_4(Z_y) + bnmH(6,6) H_5(Z_y)] H_5(Zx)
 
% 2D Z-Series Form used in this program is given as
% 
% X = bnm(1,1) + bnm(2,1) Z_y + bnm(3,1) (Z_y)^2 + bnm(4,1) (Z_y)^3 + bnm(5,1) (Z_y)^4 + bnm(6,1) (Z_y)^5 
% + [ bnm(1,2) + bnm(2,2) Z_y + bnm(3,2) (Z_y)^2 + bnm(4,2) (Z_y)^3 + bnm(5,2) (Z_y)^4 + bnm(6,2) (Z_y)^5 ] Zx 
% + [ bnm(1,3) + bnm(2,3) Z_y + bnm(3,3) (Z_y)^2 + bnm(4,3) (Z_y)^3 + bnm(5,3) (Z_y)^4 + bnm(6,3) (Z_y)^5 ] Zx^2 
% + [ bnm(1,4) + bnm(2,4) Z_y + bnm(3,4) (Z_y)^2 + bnm(4,4) (Z_y)^3 + bnm(5,4) (Z_y)^4 + bnm(6,4) (Z_y)^5 ] Zx^3 
% + [ bnm(1,5) + bnm(2,5) Z_y + bnm(3,5) (Z_y)^2 + bnm(4,5) (Z_y)^3 + bnm(5,5) (Z_y)^4 + bnm(6,5) (Z_y)^5 ] Zx^4 
% + [ bnm(1,6) + bnm(2,6) Z_y + bnm(3,6) (Z_y)^2 + bnm(4,6) (Z_y)^3 + bnm(5,6) (Z_y)^4 + bnm(6,6) (Z_y)^5 ] Zx^5




%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%In the code block below, we standardize the independent variable Y and dependent variable X.
%We do all later calculations of various Z-series and cross moments in 
%standardized framework and then at the end of the program, we convert 
%the calculated coefficients of 2D hermite series/Z-series of dependent 
%variable X into original non-standardized framework.

 Ndata=length(Yin);
 
 MeanY=sum(Yin(1:Ndata))/Ndata;
 MeanX=sum(Xin(1:Ndata))/Ndata;
 
 Ym(1:Ndata)=Yin(1:Ndata)-MeanY;
 Xm(1:Ndata)=Xin(1:Ndata)-MeanX;
 
 YmVar=sum(Ym(1:Ndata).^2)/Ndata;
 XmVar=sum(Xm(1:Ndata).^2)/Ndata;
 
 Y(1:Ndata)=Ym(1:Ndata)/sqrt(YmVar);
 X(1:Ndata)=Xm(1:Ndata)/sqrt(XmVar);

 
 sqrt(YmVar)
 sqrt(XmVar)
 
 str=input("Look at standardization constants");
 

 NMomentsY=6;
 NMomentsX=6;
     for nn=1:NMomentsY
        for mm=1:NMomentsX
            CrossMoments11(nn,mm)=0;
            for pp=1:Ndata
               CrossMoments11(nn,mm)=CrossMoments11(nn,mm)+Yin(pp).^nn.*Xin(pp).^mm/Ndata;
               %CrossMoments11(nn,mm)=CrossMoments11(nn,mm)+Y(pp).^nn.*X(pp).^mm/Ndata;
               %CrossMoments11(nn,mm)=CrossMoments11(nn,mm)+Ym(pp).^nn.*Xm(pp).^mm/Ndata;
            end
        end
     end
 
 
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%Code block below uses polynomial regression to find univariate Z-series 
%coefficients of independent variable Y.
%To know more about polynomial regression read the following and associated
%posts:  https://forum.wilmott.com/viewtopic.php?t=99702&start=2010#p877143
 
%You can get the text file MomentMatchParams08.txt from the forum post below
%https://forum.wilmott.com/viewtopic.php?f=4&t=99702&start=2040#p877863

Mtable=readtable('C:\Users\Lenovo\Documents\MATLAB\MATLAB1\MomentMatchParams08.txt');

%ZI is moment matched grid along the normal density with number of grid
%points equal to data elements in Y given by Ndata. The grid is constructed
%so that probability mass in each grid cell is 1/Ndata so this matches the
%observation probability of Ndata data elements. Due to equal probability
%mass in each grid cell, we call it equiprobable grid.

[ZI] = MatchMomentParametersOfIdealZhalf04(Ndata,Mtable);

Ys=sort(Y);
Ymu(1:Ndata,1)=Ys(1:Ndata);

W(1:Ndata,1)=1;
W(1:Ndata,2)=ZI(1:Ndata);
W(1:Ndata,3)=ZI(1:Ndata).^2;
W(1:Ndata,4)=ZI(1:Ndata).^3;
W(1:Ndata,5)=ZI(1:Ndata).^4;
W(1:Ndata,6)=ZI(1:Ndata).^5;
% W(1:Ndata,7)=ZI(1:Ndata).^6;
% W(1:Ndata,8)=ZI(1:Ndata).^7;


coeff=inv(W'*W)*(W'*Ymu);

%Above, we get coefficients of univariate Z-series of independent
%variable Y by least square regression.


a0=coeff(1,1);
a(1:5)=coeff(2:6,1);

%Above, we assign zeroth power Z-series coefficient to a0 and rest of
%the coefficients to array a.
%This was older format but we might need it since many old functions are written 
%in this format.

%Below, we assign same Z-series coefficients to a single array. This is
%newer format that we want to use mostly.
aa(1:6)=coeff(1:6);


%uncomment these lines if you want to see shape of Z-series constructed
%density of Y. We had calculated coefficients of this Z-series by
%least square polynomial regression.
%PlotZSeriesDensity(a0,a,'r')
%str=input("Look at the density of volatility");
    
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5    
%In this code block, we calculate the coefficients of the univariate marginal
%density of dependent variable X again by least square polynomial
%regression as we did in previous block for independent variable Y.
    
Xs=sort(X);
Xmu(1:Ndata,1)=Xs(1:Ndata);

W(1:Ndata,1)=1;
W(1:Ndata,2)=ZI(1:Ndata);
W(1:Ndata,3)=ZI(1:Ndata).^2;
W(1:Ndata,4)=ZI(1:Ndata).^3;
W(1:Ndata,5)=ZI(1:Ndata).^4;
W(1:Ndata,6)=ZI(1:Ndata).^5;

coeff=inv(W'*W)*(W'*Xmu);

c0=coeff(1,1);
c(1:5)=coeff(2:6,1);

cc(1:6)=coeff(1:6,1);

[ccH] = ConvertZSeriesToHermiteSeriesNew(cc,5);

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

[Ys,Iy]=sort(Y);
%ZI=Zy(I);
Zy(Iy)=ZI;

% [Zy] = CalculateZgivenXAndZSeriesC5Improved(Y,a0,a);
 [Zx] = CalculateZgivenXAndZSeriesC5Improved(X,c0,c);
% 
 paths=Ndata;
 [CorrH0] = CalculateCorrelationBivariateHermiteCH(Zx,Zy,paths,5);

Xcorr(1:Ndata)=ccH(2).*CorrH0(1).*Zy(1:Ndata) ...
    +ccH(3).*CorrH0(2).*(Zy(1:Ndata).^2-1) ...
    +ccH(4).*CorrH0(3).*(Zy(1:Ndata).^3-3*Zy(1:Ndata)) ...
    +ccH(5).*CorrH0(4).*(Zy(1:Ndata).^4-6*Zy(1:Ndata).^2+3) ...
    +ccH(6).*CorrH0(5).*(Zy(1:Ndata).^5-10*Zy(1:Ndata).^3+15*Zy(1:Ndata));
Xdcorr(1:Ndata)=X(1:Ndata)-Xcorr(1:Ndata);% ...
%     -ccH(2).*CorrH0(1).*Zy(1:Ndata) ...
%     -ccH(3).*CorrH0(2).*(Zy(1:Ndata).^2-1) ...
%     -ccH(4).*CorrH0(3).*(Zy(1:Ndata).^3-3*Zy(1:Ndata)) ...
%     -ccH(5).*CorrH0(4).*(Zy(1:Ndata).^4-6*Zy(1:Ndata).^2+3) ...
%     -ccH(6).*CorrH0(5).*(Zy(1:Ndata).^5-10*Zy(1:Ndata).^3+15*Zy(1:Ndata));
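
%Optional sanity check (an illustrative assumption about what the decomposition
%aims for): after subtracting the correlated part, Xdcorr should have only a
%small sample covariance with each Hermite state of Zy.
% HyChk = [Zy(:), Zy(:).^2-1, Zy(:).^3-3*Zy(:), Zy(:).^4-6*Zy(:).^2+3, ...
%          Zy(:).^5-10*Zy(:).^3+15*Zy(:)];
% disp(mean(HyChk.*(Xdcorr(:)-mean(Xdcorr)),1));   %entries should be small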
    

[Xdcorrs,I]=sort(Xdcorr);
%Zys=Zy(I);
Zx=ZI;
%Xs=X(I);

Zys=Zy(I);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


Xmu(1:Ndata,1)=Xdcorrs(1:Ndata);


W=0;

W(1:Ndata,1)=1.0;
W(1:Ndata,2)=Zx(1:Ndata);
W(1:Ndata,3)=(Zx(1:Ndata).^2-1);
W(1:Ndata,4)=(Zx(1:Ndata).^3-3*Zx(1:Ndata));
W(1:Ndata,5)=(Zx(1:Ndata).^4-6*Zx(1:Ndata).^2+3);
W(1:Ndata,6)=(Zx(1:Ndata).^5-10*Zx(1:Ndata).^3+15*Zx(1:Ndata));
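
%For reference, the Hermite columns above can also be generated with the
%recursion He_n(z) = z*He_(n-1)(z) - (n-1)*He_(n-2)(z), which avoids hard-coding
%each polynomial. A small commented sketch (not one of the helper functions used
%elsewhere in this program):
% Hrec = zeros(Ndata,6);
% Hrec(:,1) = 1;          %He_0
% Hrec(:,2) = Zx(:);      %He_1
% for n = 2:5
%     Hrec(:,n+1) = Zx(:).*Hrec(:,n) - (n-1)*Hrec(:,n-1);   %He_n
% end
% %Hrec reproduces the columns of W built above.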


coeff=(W'*W)\(W'*Xmu);   %least-squares via backslash, as above


%bnmH0(1,1:6)=ccH(1:6);  %Replace first line of X-hermites with Marginal density retrieval condition
bnmH0(1,1:6)=coeff(1:6);

bnmH0(2,1)=ccH(2).*CorrH0(1);
bnmH0(3,1)=ccH(3).*CorrH0(2);
bnmH0(4,1)=ccH(4).*CorrH0(3);
bnmH0(5,1)=ccH(5).*CorrH0(4);
bnmH0(6,1)=ccH(6).*CorrH0(5);
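
%Assembly of the 2D Hermite coefficient matrix bnmH0: the first row takes the
%coefficients just regressed on the Hermite states of Zx (the decorrelated part
%of X), while the first column takes the correlated part ccH(n).*CorrH0(n-1)
%built from the same-order Hermite states of Zy.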



bnmH0

%str=input("bnmH0 before addition of correlated part");
% 
% 
% bnmH0(2,1)=bnmH0(2,1)+ccH(2).*CorrH0(1);
% bnmH0(3,1)=bnmH0(3,1)+ccH(3).*CorrH0(2);
% bnmH0(4,1)=bnmH0(4,1)+ccH(4).*CorrH0(3);
% bnmH0(5,1)=bnmH0(5,1)+ccH(5).*CorrH0(4);
% bnmH0(6,1)=bnmH0(6,1)+ccH(6).*CorrH0(5);
% 

bnmH=bnmH0;

[bnmHVar] = CalculateVariance2D(bnmH,6,6);
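
%If the two driving normals are independent then, by Hermite orthogonality, the
%variance of a 2D Hermite series is the sum over (n,m) ~= (1,1) of
%bnmH(n,m)^2*(n-1)!*(m-1)!. The commented sketch below is an assumption about
%what CalculateVariance2D computes.
% [Nn,Mm] = ndgrid(0:5,0:5);
% VarChk = sum(sum((bnmH.^2).*factorial(Nn).*factorial(Mm))) - bnmH(1,1)^2;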

bnmH
bnmHVar

%bnmH=bnmH/sqrt(bnmHVar);

str=input("Look at variacne 2D");
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
SeriesOrder1=6;
NMomentsY=6;
NMomentsX=6;
    for nn=1:NMomentsY
       for mm=1:NMomentsX
           CrossMoments(nn,mm)=0;
           for pp=1:Ndata
              CrossMoments(nn,mm)=CrossMoments(nn,mm)+Y(pp).^nn.*X(pp).^mm/Ndata;
           end
       end
    end
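
%The loop above can equivalently be written in vectorized form (same result,
%just faster for large Ndata; uses implicit expansion, MATLAB R2016b or later):
% CrossMoments = (Y(:).^(1:NMomentsY))' * (X(:).^(1:NMomentsX)) / Ndata;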



[chh] = ProduceFirstRowofConditioal2DZSeries02_6M_A(aa,SeriesOrder1,CrossMoments);


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

[bnm] = Convert2DHermitesInto2DSeries(bnmH);

SeriesOrder1=6;
SeriesOrder10=6;
SeriesOrder01=6;
NMomentsX=6;
NMomentsY=6;
[Moments2D0] = CalculateCrossMomentsConditional2DNew02(aa,bnm,SeriesOrder1,SeriesOrder10,SeriesOrder01,NMomentsX,NMomentsY);

Moments2D0

CrossMoments

str=input("Look at comparison of moments--first-standardized");

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%In the code block below, we convert the 2D Hermite-series coefficients of the
%dependent variable X from their standardized version back to the original
%version. For that, we first multiply the entire 2D Hermite series by the
%square root of the variance XmVar.

 bnmH=bnmH*sqrt(XmVar);

%We convert X from hermite series form to Z-series form.
[bnm] = Convert2DHermitesInto2DSeries(bnmH);
bnm(1,1)=MeanX-bnm(3,1)-3*bnm(5,1)-1*bnm(1,3)-1*3*bnm(1,5)-3*bnm(5,3)-bnm(3,3)-3*bnm(3,5)-9*bnm(5,5);

%In the line above, we set bnm(1,1) so that the mean of the 2D Z-series equals
%MeanX: for independent standard normals, E[Z]=E[Z^3]=E[Z^5]=0 while E[Z^2]=1 and
%E[Z^4]=3, so the even-power coefficients contribute
%bnm(3,1)+3*bnm(5,1)+bnm(1,3)+3*bnm(1,5)+bnm(3,3)+3*bnm(3,5)+3*bnm(5,3)+9*bnm(5,5)
%to the mean, and this contribution is subtracted from MeanX.


%Below, we convert the Z-series of the independent variable Y from standardized
%form back to the scale of the original variable, and reset aa(1) so that the
%mean of the series equals MeanY.

aa=aa*sqrt(YmVar);
aa(1)=MeanY-aa(3)-3*aa(5);
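
%Optional check (illustration only): simulate standard normals and confirm that
%the sample mean of the rescaled univariate Z-series of Y is close to MeanY.
% Zchk = randn(100000,1);
% Ychk = aa(1) + aa(2)*Zchk + aa(3)*Zchk.^2 + aa(4)*Zchk.^3 ...
%      + aa(5)*Zchk.^4 + aa(6)*Zchk.^5;
% disp([mean(Ychk) MeanY]);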


 SeriesOrder1=6;
 SeriesOrder10=6;
 SeriesOrder01=6;
 NMomentsX=6;
 NMomentsY=6;
 [Moments2D] = CalculateCrossMomentsConditional2DNew02(aa,bnm,SeriesOrder1,SeriesOrder10,SeriesOrder01,NMomentsX,NMomentsY);
 
 Moments2D
 
 CrossMoments11
 
 str=input("Look at comparison of moments");

end
You think life is a secret, Life is only love of flying, It has seen many ups and downs, But it likes travel more than the destination. Allama Iqbal
 
User avatar
Amin
Topic Author
Posts: 2586
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

February 26th, 2024, 6:44 pm

Friends, I am very thankful to the people who have been following my posts on this thread. We have come a very long way from when I started this thread on the analytic solution of ODEs and SDEs. Little by little, year after year, we have been able to make something out of it, and the thread has started to become more meaningful now, eight years after it was started. I always believe that, for us humans, it takes hard work and persistent effort on the order of decades to make something reasonably significant (while it can take only minutes to destroy it).
I am extremely thankful to the many good people whose support made it possible for me to continue my research. Even though things were still challenging, many good people tried to provide me a better environment, with far less coercion, in which I was able to work reasonably well. My first four years during the life of this thread were very difficult, but the later four years have been much better, and it is during this time that I was able to do more creative work. I do not have the words to tell all the good people how indebted I am for their support, which helped me make something positive out of my life. I still recall that, during the earlier years before 2016, I would be driving to my office (I had a small office for a few years where I would work) with tears in my eyes, thinking that I am not a bad human being at all and that I wanted to do something good in my life, but I was under so much medication that it was hard to be creative and consistent in my research.
I really want to thank Paul Wilmott for allowing me to write on his forum despite the fact that I could be extremely "rude and annoying" to some people who cruelly and totally remorselessly force mind control on other good and intelligent human beings to turn their lives into stories of torture, coercion and tragedy. I may disagree with Paul's views on many issues, but I want to tell him that he is a man of his word who stands staunchly by his avowed support for freedom of speech. On freedom of speech, I agree that even though it can sometimes have a negative side, it is such a sacred concept that all of us should be ready to tolerate that negative side for its huge and monumental positive benefits to society.
Regarding future plans, I am fully committed to continuing my research in the field of probability theory and stochastics, and I hope to continue in the same format: trying to find solutions to problems and sharing my attempts with friends, one issue at a time, as the research advances. However, I think I will have to find some financial support to continue my research, since I want to keep sharing everything freely with friends. I want to try approaching some good foreign universities about accepting me as a paid adjunct researcher based in Pakistan. Obviously, being associated with a good university would also give me the opportunity to get to know good people in my field at other universities, and I believe it would be very interesting to share ideas with other researchers in probability theory and stochastics.
Again, I am totally determined to continue my research and hope that, little by little over the decades, maybe we can say that we made a positive impact on our field of mathematics and science. I really believe that it is only through the discovery of scientific knowledge that human beings have been able to support and advance modern human civilization and start to tame nature in so many unprecedented ways.
You think life is a secret, Life is only love of flying, It has seen many ups and downs, But it likes travel more than the destination. Allama Iqbal
 
User avatar
Amin
Topic Author
Posts: 2586
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

February 27th, 2024, 3:58 am

Friends, I remained away from work yesterday. In a day or two, I hope to post an automated program that finds the hermite series representation of joint densities in arbitrary dimensions using orthogonal coordinates, and that also graphs any two desired dimensions together in 2D.
Going directly to high-dimensional data will give friends in finance an opportunity to see how different financial instruments move together and to build trading strategies with a better understanding of co-movement dynamics. It will also help research on the risk-reward frontier.
I want to go out to buy food around noon today and will start working after I come back. I hope they do not try to drug my food like rabid animals today.
You think life is a secret, Life is only love of flying, It has seen many ups and downs, But it likes travel more than the destination. Allama Iqbal
 
User avatar
Amin
Topic Author
Posts: 2586
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

February 27th, 2024, 7:46 pm

Friends, I am working on a new MATLAB program to find the hermite series representation of multidimensional data in arbitrary dimensions. You will be able to specify any arbitrary size of the hermite series (since everything is calculated with algorithmic generation of hermite polynomials, and I have not hard-coded any hermite polynomial anywhere, even in sub-functions), though the size, once specified, has to be the same for every dimension. Obviously, choosing the right size of the hermite series will require user discretion. It will be a very small and concise program, maybe half or one third of the size of the program for 2D regression that I last posted. I am trying to make it a good program, and I hope to be able to post it tomorrow night or the day after tomorrow.
You think life is a secret, Life is only love of flying, It has seen many ups and downs, But it likes travel more than the destination. Allama Iqbal
 
User avatar
Amin
Topic Author
Posts: 2586
Joined: July 14th, 2002, 3:00 am

Re: Breakthrough in the theory of stochastic differential equations and their simulation

February 29th, 2024, 4:48 pm

Friends, I have done most of the work for the calculation of high-dimensional hermite series of correlated variables. I am now working on a multidimensional jacobian that would give us the density after a multidimensional change of variables to the underlying orthogonal normal densities.
If we want to graph any two dimensions against each other, that may not be difficult and might only require a two-dimensional algorithm.
But the complexity of the densities quickly increases, since we have to calculate the jacobian on grids over multidimensional hypercubes. A better way to calculate the jacobian might be possible, and hopefully that will be an interesting research topic for many of us, but for this first program I am sticking to the hypercube version so I can complete it in another day. So it will be difficult for friends to go beyond 7-8 dimensions on average computers.
I hope to complete the program in another day and look forward to posting it on the thread.
Again, with the new program, you will be able to find analytic hermite series of stochastic variables in arbitrarily high dimensions, but density graphing on a hypercube grid will be limited to a few dimensions. I want to release a good working version first and then try to improve the graphing capabilities in other ways. Also, my experience with multidimensional graphing is very limited.
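As a rough illustration of the hypercube cost mentioned above (the 25 grid points per dimension below is just an assumed number for scale, not taken from the program): with Ngrid points in each dimension, a d-dimensional grid needs Ngrid^d cells.

Ngrid = 25;   %assumed points per dimension, for illustration only
for d = 2:8
    fprintf('d=%d : %.3g cells, about %.3g GB as doubles\n', d, Ngrid^d, Ngrid^d*8/2^30);
end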
You think life is a secret, Life is only love of flying, It has seen many ups and downs, But it likes travel more than the destination. Allama Iqbal