Wikipedia has Barbalat's lemma and stability of time-varying systems, for the case [$]\frac{dy}{dt}=f(t)[$] rather than [$]\frac{dy}{dt}=f(t,y(t))[$]

Statistics: Posted by ppauper — December 15th, 2018, 7:23 am


Consider a hypothetical parabolic PDE problem with a stationary solution, approached exponentially at large [$]t[$] with, let's say, a 'half-life' of 50 (but you don't know this).

You give this parabolic problem to NDSolve with [$]T=1000[$]. The solver takes, let's say, 50 time steps to do the whole computation. Many of those steps will fall in, say, [$]t < 10[$], and the last several, sure enough, will use [$]\Delta t = 100[$], the maximum allowed. I have seen this pattern many times with NDSolve.

For your scheme, in effect you have a cutoff at [$]T \approx 1/\epsilon[$]. I suspect you would generally also have to adopt adaptive time stepping (in your case, [$]\tau[$]-stepping) to get any sort of decent run-time performance. If so, the pattern of the [$]t_i[$]'s actually visited (under a good adaptive scheme) might be similar regardless of the time coordinate change: many at the start and few at the end.

In other words, my point is that, in the end, if you adopt adaptive time-stepping to get decent performance, the choice of the time coordinate may not matter much. Just speculating.
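As a rough illustration of the step pattern described above (my own sketch in Python/SciPy, not an NDSolve run; the test problem, the half-life of 50, and the step cap of 100 are all assumptions for the example):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Test problem: y' = -(ln 2 / 50) (y - 1), y(0) = 0.
# The solution approaches the equilibrium y = 1 with half-life 50.
lam = np.log(2.0) / 50.0

def rhs(t, y):
    return -lam * (y - 1.0)

# Adaptive RK45 with a step-size cap of 100 (mimicking "the max allowed").
sol = solve_ivp(rhs, (0.0, 1000.0), [0.0], method="RK45", max_step=100.0)

steps = np.diff(sol.t)
print(f"{len(sol.t)} accepted points, "
      f"smallest step {steps.min():.3g}, largest step {steps.max():.3g}")
```

The accepted steps start tiny where the solution moves fastest and grow until they sit at the cap once the solution has flattened out, which is exactly the "many at the start and few at the end" pattern.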

Statistics: Posted by Alan — December 14th, 2018, 3:16 pm


Statistics: Posted by Cuchulainn — December 14th, 2018, 12:35 pm


Statistics: Posted by BuffaloFan32 — December 13th, 2018, 2:27 pm


[$]y'(t) = f(t, y(t))[$]

In particular, we want to find equilibrium points, where [$]y'(t) = 0[$]. We can do this in different ways. But now we take the new variable [$]\tau = t/(1+t)[$], so that [$]t = \tau/(1-\tau)[$] and [$]dt/d\tau = 1/(1-\tau)^2[$], to get a new ODE

[$](1 - \tau)^2 \, dy(\tau)/d\tau = F(\tau, y(\tau))[$] on the interval [$](0,1)[$], where [$]F(\tau, y) = f(\tau/(1-\tau), y)[$]

Similar to transforming a PDE to the unit interval, we have an ODE on [$](0,1)[$]. So, for "big" [$]t[$] we approximate the solution at [$]\tau = 1 - \varepsilon[$].

I have tried this with a number of simple and extended (nasty) scalar and system ODEs, and the results look OK. At least we know that the world ends at [$]\tau = 1[$] ([$]\tau > 1[$] is the empty quarter; Picard iterates will probably blow up there).
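A minimal sketch of the idea in Python/SciPy, under assumptions not in the post: the test ODE [$]y' = 1 - y[$] (equilibrium [$]y = 1[$]), an implicit solver (Radau) because the factor [$]1/(1-\tau)^2[$] makes the transformed ODE stiff near [$]\tau = 1[$], and a cutoff [$]\varepsilon = 10^{-3}[$], i.e. [$]t = (1-\varepsilon)/\varepsilon \approx 999[$]:

```python
from scipy.integrate import solve_ivp

# Original ODE: dy/dt = f(t, y) = 1 - y, with equilibrium y = 1.
# Substituting tau = t/(1+t) (so t = tau/(1-tau), dt/dtau = 1/(1-tau)^2)
# gives dy/dtau = f(tau/(1-tau), y) / (1-tau)^2 on (0, 1).
def rhs(tau, y):
    return (1.0 - y) / (1.0 - tau) ** 2

eps = 1.0e-3  # cutoff: tau = 1 - eps corresponds to t = (1-eps)/eps ~ 999

# Radau (implicit) copes with the stiffness that the 1/(1-tau)^2 factor
# introduces as tau -> 1.
sol = solve_ivp(rhs, (0.0, 1.0 - eps), [0.0], method="Radau",
                rtol=1e-10, atol=1e-12)

print("y at tau = 1 - eps:", sol.y[0, -1])  # ~1, the equilibrium
```

Note the built-in trade-off: an explicit solver would be forced into very small [$]\tau[$]-steps near the cutoff, which is one concrete form of the adaptive-stepping point raised above.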

How does this approach work in theory? What are the risks, e.g. do invariants get messed up etc.?

Statistics: Posted by Cuchulainn — December 13th, 2018, 9:46 am


We can use grade school arithmetic to demonstrate that price approximations can still produce accurate P&L calculations. Say we want to compute the P&L between T-1 and T. If we are using a pricing approximation, our price estimates will have some amount of error, presumably a reasonably ‘small’ error (otherwise you would not use the approximation).

That is, Price(T,estimate) = Price(T,true) + Error(T)

And similarly Price(T-1,estimate) = Price(T-1,true) + Error(T-1)

By definition P&L(true) = Price(T,true) - Price(T-1,true). Hold that thought for a moment.

Our P&L estimate is by definition P&L(estimate) = Price(T,estimate) - Price(T-1,estimate)

Substituting the definitions of Price(T,estimate) and Price(T-1,estimate)

P&L(estimate) = Price(T,true) + Error(T) - [Price(T-1,true) + Error(T-1)]

Rearranging the terms,

P&L(estimate) = Price(T,true) - Price(T-1,true) + Error(T) - Error(T-1)

Now, unless the errors in the price estimates are systematic, Error(T) - Error(T-1) more or less cancels out (the difference will be 'very small') and hence

P&L(estimate) ≈ Price(T,true) - Price(T-1,true) = P&L(true)

That pricing approximation errors cancel out when differenced is the rationale for using approximations in P&L and risk systems.
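A toy numerical version of the argument (all numbers are made up for illustration): each price estimate carries a small error, and differencing cancels most of it, provided the errors at T-1 and T are of similar size and sign:

```python
# Hypothetical true prices and small, non-systematic pricing errors.
price_true = {"T-1": 101.50, "T": 103.00}
error = {"T-1": 0.019, "T": 0.021}

price_est = {k: price_true[k] + error[k] for k in price_true}

pnl_true = price_true["T"] - price_true["T-1"]  # 1.50
pnl_est = price_est["T"] - price_est["T-1"]

# The P&L error is Error(T) - Error(T-1), not Error(T) itself.
pnl_error = pnl_est - pnl_true
print(f"P&L(true) = {pnl_true:.4f}, P&L(estimate) = {pnl_est:.4f}, "
      f"error = {pnl_error:+.4f}")
```

Here the P&L error (0.002) is an order of magnitude smaller than either price error; a systematic bias (errors of opposite sign, or drifting over time) is the case where the cancellation fails.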

Statistics: Posted by DavidJN — November 25th, 2018, 4:17 pm



Statistics: Posted by bearish — November 24th, 2018, 1:10 pm


As part of my master's dissertation I have to implement the hybrid Heston-Hull-White (HHW) model. There is no closed-form solution for the characteristic function of this model. In https://staff.fnwi.uva.nl/p.j.c.spreij/ ... 0Rates.pdf , Lech A. Grzelak and Cornelis W. Oosterlee provide a couple of methods for approximating the HHW model so that a closed-form expression for the characteristic function can be found.

On pages 273 and 274 of the paper (Table 1, attached), the authors provide a pricing comparison of these approximated models against the full model, the latter obtained via Monte Carlo simulation. The comparison is done in terms of implied volatilities. It is not immediately clear to me how one would compute Black-Scholes implied volatilities when the interest rate is stochastic.

Does anyone have any thoughts on this?
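One common convention (an assumption on my part, not something stated in the question or confirmed from the paper) is to quote implied vols via Black's formula on the forward, discounting with the zero-coupon bond price [$]P(0,T)[$]: the stochastic rate then enters only through [$]P(0,T)[$] and the forward [$]F = S_0/P(0,T)[$], and the implied vol is whatever [$]\sigma[$] reproduces the (e.g. Monte Carlo) price. A sketch of the inversion:

```python
import math
from scipy.optimize import brentq

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_call(F, K, T, P0T, sigma):
    """Black (1976) call on the forward F, discounted by the bond price P0T."""
    s = sigma * math.sqrt(T)
    d1 = (math.log(F / K) + 0.5 * s * s) / s
    return P0T * (F * norm_cdf(d1) - K * norm_cdf(d1 - s))

def implied_vol(price, F, K, T, P0T):
    # Root-find the Black vol that reproduces the given (e.g. MC) price.
    return brentq(lambda sig: black_call(F, K, T, P0T, sig) - price,
                  1e-8, 5.0)

# Round trip with made-up numbers: S0 = 100, flat 3% curve, T = 5, K = 110.
P0T = math.exp(-0.03 * 5.0)
F = 100.0 / P0T
price = black_call(F, 110.0, 5.0, P0T, 0.20)
print("implied vol:", implied_vol(price, F, 110.0, 5.0, P0T))  # ~0.20
```

Whether Grzelak and Oosterlee use exactly this convention (e.g. which curve supplies [$]P(0,T)[$]) is something to check against the paper itself.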

Statistics: Posted by riazp94 — November 20th, 2018, 5:13 pm


Indeed, this is standardised mathematical jargon. A la carte definitions abound in AI and physics.

Maybe the OP means the calculus of finite differences:

https://en.wikipedia.org/wiki/Finite_difference

FDM = the calculus of finite differences applied to PDEs.
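For illustration (my example, not the OP's): the basic object of that calculus is a divided difference standing in for a derivative, e.g. the second-order central difference, whose error shrinks like [$]h^2[$]:

```python
import math

def central_diff(f, x, h):
    """Second-order central difference approximation to f'(x); O(h^2) error."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Approximate d/dx sin(x) at x = 1; the exact value is cos(1).
h = 1e-4
approx = central_diff(math.sin, 1.0, h)
print(approx, math.cos(1.0))
```

FDM for PDEs is the same idea applied on a grid in each coordinate direction.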

BTW are CDSs liquid these days? In 2006 no one wanted them (except Paulson)??

Statistics: Posted by Cuchulainn — November 20th, 2018, 1:34 pm


// Once you get beyond 3 underlyings, FDM is out of the question.

Statistics: Posted by Cuchulainn — November 20th, 2018, 10:52 am


To reword the question:

We have two systems, one for trading and another for pricing. We are trying to integrate the pricing system into our trading system so that we have one source for pricing and risk.

Our trading system uses the finite difference method to calculate the scenarios for risk and P&L, whereas the pricing system uses a closed-form approximation. We would like to integrate the closed form into the trading system and wanted to assess whether that is possible. Hence I wanted more information on the differences between the two methods so that further analysis can be done.

Statistics: Posted by reemashah — November 20th, 2018, 10:35 am
