I have posted the research paper on SSRN here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3119980

The title of the paper is: Calculation of Transition Probabilities of SDEs only From the Knowledge of Marginal Probabilities using the CDF-Equivalent Brownian Motion Method.

The abstract is given as: In this article, we show how to calculate the conditional and transition probabilities of any SDE between two different points across time only from the knowledge of its marginal probabilities on these two time grids. Briefly, we construct CDF-equivalent standard Brownian motion grids from the knowledge of the marginal probabilities of the SDE. The conditional and transition probabilities of the SDE are then found from the transition probabilities between the corresponding nodes of the CDF-equivalent standard Brownian motion grids, which can be easily calculated. Most interestingly, we do not need to know the SDE dynamics to calculate conditional probabilities; the method requires only the knowledge of marginal probabilities. We can, in fact, use this method to determine the SDE that is best suited as a data-generating process.

I believe that the method given in the paper can be used for very fast simulation of SV and SVJD models where marginal probabilities are given by transform formulas. If somebody wants me to do this work for them, please email me at anan2999(at)yahoo(dot)com

Statistics: Posted by Amin — Today, 7:57 pm


We use cubic spline interpolation for the function [$]\theta(t)[$].

Also, what is the ZCB price in the special case [$]\theta = 0.05[$], as a sanity check against possible spline inaccuracies?

Code:

double T = 3.0;
double P1 = 1.0;
double P2 = 1.0;
double sigma1 = 0.0168;
double sigma2 = 0.00462;
double rho = -0.117;
double a = 0.5;
double b = 0.03;

std::vector<double> Tenor = { 0.25, 0.50, 1.0, 2.0, 3.0 };
std::vector<double> Yield = { 0.0146, 0.0165, 0.0182, 0.0204, 0.0210 };

// Spot
double r_t = 0.1;
double u_t = 0.05;

Statistics: Posted by Cuchulainn — Today, 1:37 pm


I am trying to calculate the IM (initial margin) for an ABS or CDO, and to figure out how the mechanics work over the whole trade life cycle.

Could anybody please point me to some references with formulas?

Thanks!

Statistics: Posted by mattdd — Today, 1:14 pm


Statistics: Posted by Cuchulainn — Today, 11:27 am


No doubt "settlement finality" can be improved, although non-PoW approaches seem prone to other risks of fraud, flooding, gaming, etc. And it would seem very hard, if not impossible, to have both a distributed system and hard real-time performance guarantees.

Capacity bounds and transaction-cost issues seem a bit more intractable to me. If you can find a trustworthy central marketplace provider that's running one copy of the ledger on one CPU, it will always be faster and more energy-efficient than some amorphous network of N copies of the ledger running on N CPUs, with all the added overhead of network synchronization between them. Bitcoin's initial promise of disintermediating traditional financial-services firms and offering lower transaction costs was based on Bitcoin not offering the same services, such as dispute resolution and repudiation of transactions. Otherwise Bitcoin would always be more expensive.

Obviously, the key word is "trustworthy", and that's no small hurdle. There will be competition between centralized and decentralized systems, with centralized touting speed and consistent performance, and decentralized touting trust assurance plus access to a broader array of untrustworthy counterparties.

Statistics: Posted by Traden4Alpha — February 23rd, 2018, 5:02 pm


What do you mean by "but change the capital structures of the deals"?

For your 2nd point, the "settlement finality" problem can be removed in ledgers with other consensus algorithms (non-PoW, etc.).

For your 3rd point, I would guess a better blockchain/DLT solution in the financial world should not be capacity-bounded and should not have high transaction fees, as in the Bitcoin case right now.

Statistics: Posted by mattdd — February 23rd, 2018, 4:15 pm


In that case, a first-order integral comprising only one term has its iterated-integral expansion, using Ito, given as

[$]\int_0^t X(s)^{\gamma}\,dz(s)=\int_0^t X(0)^{\gamma}\,dz(s)[$]

[$]+\int_0^t\int_0^s \gamma X(v)^{\gamma-1}\,\mu(X(v))\,dv\,dz(s)[$]

[$]+\int_0^t\int_0^s \gamma X(v)^{\gamma-1}\,\sigma(X(v))\,dz(v)\,dz(s)[$]

[$]+\frac{1}{2}\int_0^t\int_0^s \gamma(\gamma-1)X(v)^{\gamma-2}\,(\sigma(X(v)))^2\,dv\,dz(s)[$]
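For reference, the four terms come from applying Ito's lemma to [$]f(X)=X^{\gamma}[$], assuming dynamics [$]dX=\mu(X)\,dt+\sigma(X)\,dz[$] as in the integrals:

[$]dX(v)^{\gamma}=\gamma X(v)^{\gamma-1}\mu(X(v))\,dv+\gamma X(v)^{\gamma-1}\sigma(X(v))\,dz(v)+\frac{1}{2}\gamma(\gamma-1)X(v)^{\gamma-2}(\sigma(X(v)))^{2}\,dv[$]

so that [$]X(s)^{\gamma}=X(0)^{\gamma}+\int_0^s dX(v)^{\gamma}[$]; substituting this inside [$]\int_0^t X(s)^{\gamma}\,dz(s)[$] produces the four terms.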

Also, the first-order integral comprising a product of two functions has its iterated-integral expansion given as

[$]\int_0^t X(s)^{\gamma}X(s)^{\beta}\,dz(s)=\int_0^t X(0)^{\gamma}X(0)^{\beta}\,dz(s)[$] Term (1)

[$]+\int_0^t\int_0^s X(v)^{\beta}\,\gamma X(v)^{\gamma-1}\,\mu(X(v))\,dv\,dz(s)[$] Term (2)

[$]+\int_0^t\int_0^s X(v)^{\beta}\,\gamma X(v)^{\gamma-1}\,\sigma(X(v))\,dz(v)\,dz(s)[$] Term (3)

[$]+\frac{1}{2}\int_0^t\int_0^s X(v)^{\beta}\,\gamma(\gamma-1)X(v)^{\gamma-2}\,(\sigma(X(v)))^2\,dv\,dz(s)[$] Term (4)

[$]+\int_0^t\int_0^s X(v)^{\gamma}\,\beta X(v)^{\beta-1}\,\mu(X(v))\,dv\,dz(s)[$] Term (5)

[$]+\int_0^t\int_0^s X(v)^{\gamma}\,\beta X(v)^{\beta-1}\,\sigma(X(v))\,dz(v)\,dz(s)[$] Term (6)

[$]+\frac{1}{2}\int_0^t\int_0^s X(v)^{\gamma}\,\beta(\beta-1)X(v)^{\beta-2}\,(\sigma(X(v)))^2\,dv\,dz(s)[$] Term (7)

[$]+\int_0^t\int_0^s \gamma X(v)^{\gamma-1}\,\beta X(v)^{\beta-1}\,(\sigma(X(v)))^2\,dv\,dz(s)[$] Term (8)
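Terms (2) through (8) follow from the Ito product rule applied to [$]X^{\gamma}X^{\beta}[$]:

[$]d(X^{\gamma}X^{\beta})=X^{\beta}\,dX^{\gamma}+X^{\gamma}\,dX^{\beta}+d\langle X^{\gamma},X^{\beta}\rangle[$]

where the cross-variation [$]d\langle X^{\gamma},X^{\beta}\rangle_v=\gamma X(v)^{\gamma-1}\,\beta X(v)^{\beta-1}\,(\sigma(X(v)))^{2}\,dv[$] is what produces Term (8).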

Let me present the material that I want to post in the next few hours.

Statistics: Posted by Amin — February 23rd, 2018, 1:46 pm


As Gatheral says repeatedly in the chapter (bottom of pg 32, for example) he is trying to derive an approximation valid close-to-money. So, it's meant to be just the first two terms of an approximate Taylor series in the moneyness parameter about the at-the-money point. Perhaps you should compute both of those terms numerically and see how good the approximation is (at various T). Also, exact analytic results are known (for the atm smile level and slope) at both large and small T, so you could compare with those.

Of course, it's possible you have a different book by a different author.

Thanks Alan. Got the idea now. I was confused because, in the later part of the chapter, the author (well, Jim Gatheral; I misspelled his name in the OP, but yes, we are talking about the same book...) demonstrates a whole surface, and I thought it was generated from the methodology introduced before. I learned several other ways to calibrate Heston and tried to reproduce the one in this book as a practice. Seems it was the wrong direction...

Statistics: Posted by mrravioli — February 22nd, 2018, 9:27 pm


Settlement time/reliability risk: If the experiences with Bitcoin are any indication, some of these systems offer no set time of settlement and even some chance of trade-settlement failure. Another issue is the extent to which the parties can manipulate the chance of settlement ex post.

Transaction-cost dynamics: If the DLT is capacity-bounded, the cost to enact a trade can grow without bound, especially if the parties want to avoid a failed trade. That will surely affect the use of dynamic hedging.

Statistics: Posted by Traden4Alpha — February 22nd, 2018, 5:17 pm


Of course, it's possible you have a different book by a different author.

Statistics: Posted by Alan — February 22nd, 2018, 3:02 pm


How would blockchain or DLT impact derivative pricing (e.g. models, hedging, risk management), especially for credit derivatives?

Let's discuss!

Statistics: Posted by mattdd — February 22nd, 2018, 2:50 pm


The basic idea of the chapter is:

1. derive implied vol in terms of local vol

2. derive local vol expression for a certain SVM (Heston in this case)

3. with 1 & 2, derive implied vol expression from the SVM

4. with 3 and market data, calibrate parameters in the original SVM

When applying the process to Heston, with some approximations and an ansatz, Gatheral gets (3.17) in the attached pic. However, with this form, for a given time to expiration, implied variance is linear in x (log-strike), which is obviously not true. In the later part of the chapter, Gatheral shows the whole fitted surface, and the skew and curvature are obviously there.

Did I miss or misunderstand something here? Help appreciated.

Thanks!

Statistics: Posted by mrravioli — February 22nd, 2018, 5:07 am
