You'll need boundary info coming in from the edges of the domain.

- Cuchulainn
**Posts:** 63371 **Location:** Amsterdam

Fair enough. This heuristic is applied directly to the SDE, not to the corresponding PDE. It's a classical "drift freezing" method. Obviously, some properties are broken by this approximation, otherwise we wouldn't misprice a swap (the approximation breaks the fitting of the initial yield curve, because by construction in the HJM framework, y must be equal to its expression under the qG model).

The justification is that empirically, it provides good enough results to calibrate on swaptions (even in the linear local vol + stoch vol case).

I understand why one would adopt such a tactic, as it makes life easier. But numerical success with swaptions does not mean the trick will work on the current problems, or does it? In general, it feels like we are losing information by removing the y dimension and replacing it with an approximation that has no boundaries.

The BGM book mentions freezing in 5 lines, and the authors seem to equate it with the Euler method. Is that a fair assessment? Or am I missing something crucial? Even the authors say it's 'very simple'. At first glance, the predictor-corrector method would be worth a try as well?

My C++ Boost code gives

262537412640768743.999999999999250072597198185688879353856337336990862707537410378210647910118607313

http://www.datasimfinancial.com

http://www.datasim.nl


- Cuchulainn

In both x and y, or just x? You'll need boundary info coming in from the edges of the domain.


Of course we are losing information, but the variance of y is negligible compared with the variance of x, so the error on prices won't be that big for not-too-long maturities / negative mean reversions. Furthermore, if you price vanilla derivatives (in the sense of hedgeable by swaptions, such as CMS caps/floors/swaps, cancellable swaps, ...), a small error on swaptions implies a small error on your price, as it is an integral over swaption prices.


With some delay, I tried it on a 20y, 0.5% annual vs E6M swap, with the parameters we agreed on, for different mean reversions.

Theoretical price: 921bps.

[$]
\begin{array}{|c|c|}
\hline
\chi & \text{price (bps)}\\
\hline
10\% & 921\\
5\% & 921\\
1\% & 918\\
0\% & 919\\
-1\% & 919\\
-5\% & 927\\
-6\% & 933\\
-7\% & 944\\
-8\% & 963\\
-9\% & 999\\
-10\% & 1060\\
\hline
\end{array}
[$]

The error is < 10bps as long as the mean reversion is >= -5%.
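The blow-up of the error for negative mean reversion is visible in the frozen-y variance itself: [$]\bar y(t) = \alpha^2 \frac{1 - e^{-2\chi t}}{2\chi}[$] (the formula quoted further down the thread) grows exponentially in [$]t[$] once [$]\chi < 0[$]. A minimal Python sketch; the [$]\alpha[$] value is illustrative, not the one used in the test above:

```python
import math

def y_bar(t, alpha, chi):
    """Frozen-y variance: solves y'(t) = alpha^2 - 2*chi*y(t), y(0) = 0."""
    if chi == 0.0:
        return alpha**2 * t          # limit as chi -> 0
    return alpha**2 * (1.0 - math.exp(-2.0 * chi * t)) / (2.0 * chi)

alpha = 0.01                         # illustrative vol, not the value used above
for chi in [0.10, 0.05, 0.0, -0.05, -0.10]:
    print(f"chi = {chi:+.2f}: y_bar(20y) = {y_bar(20.0, alpha, chi):.3e}")
```

At t = 20y, chi = -10% gives a frozen variance roughly 55 times the chi = +10% value, which is consistent with the deterioration in the table.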


Thank you very much for posting the results! They look pretty good. Just to make sure: the approximation you use for [$]\bar{y}_t[$] is the variance of [$]x_t[$] (the one where the local vol is killed), right?

Thx again!


You're welcome!

I use the approximation proposed in Andersen and Piterbarg's book, i.e. I compute [$]y[$] as if [$]\forall t,\; x(t) = 0[$]. Then [$]\bar y'(t) = \alpha^2 - 2\chi \bar y(t),\ \bar y(0) = 0 \Rightarrow \bar y(t) = \alpha^2 \frac{1 - e^{-2\chi t}}{2\chi}[$] (i.e. the same as you).
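As a sanity check, the closed form can be compared against a direct Euler integration of the frozen ODE (Python sketch with illustrative parameters, not the calibrated ones):

```python
import math

alpha, chi, T = 0.20, 0.03, 10.0     # illustrative parameters

def y_bar_exact(t):
    """Closed form y_bar(t) = alpha^2 * (1 - exp(-2*chi*t)) / (2*chi)."""
    return alpha**2 * (1.0 - math.exp(-2.0 * chi * t)) / (2.0 * chi)

# Euler-integrate y'(t) = alpha^2 - 2*chi*y(t), y(0) = 0
n = 100_000
dt, y = T / n, 0.0
for _ in range(n):
    y += dt * (alpha**2 - 2.0 * chi * y)

print(f"Euler: {y:.6f}   closed form: {y_bar_exact(T):.6f}")
```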


- Cuchulainn

I have had a look at the maths and C++ code in the article by Alan, Paul and myself on the anchoring PDE (which looks like the classic Asian PDE and Cheyette). Some facts:

1. I use ADE (not ADI) in x, and upwinding/downwinding **at time level n+1** (NB) for y. This means I don't use/need any BC in y, and I get the same results (Table 2) as Alan's MOL in NDSolve (I suspect Alan does not use a BC for y either?), certainly not a 5-point linearity BC. I used to do FEM, and there too no BC is given. This was the subtle hint from Paul.

2. Do a semi-discretisation in x and y to get an ODE system [$]dU/dt = AU[$]. Which schemes lead to an **M-matrix** [$]A[$]? Which do not?

3. Both Alan and I use a domain transformation (x,y) -> [0,1]^2. Much better than mucking around with truncation, and the PDE becomes amenable to Fichera analysis. You might end up with a Feller-like constraint as in CIR.

These are working techniques. Next, the following is possible:

A. Apply C++ MOL to the anchoring PDE. Use Bulirsch-Stoer, as with NDSolve.

B. Apply ADE and MOL to the Asian PDE and Cheyette.

In any case, ADI is not making life easier. A wee bit of overthinking in the BC department as well?

Almost everyone I know has had a trauma with the Asian PDE.

The article

http://onlinelibrary.wiley.com/doi/10.1 ... 6/abstract



- Cuchulainn

Question: what is the typical wall clock time for this problem using ADI? And for the modified a factor problem?


- Cuchulainn

I would like to come back to this discussion (is anyone out there?). The 2-factor PDE is a special case of a class of PDEs such as Asian, Cheyette, and PDEs in which one factor is deterministic. More recently, my work with Alan and Paul on the anchor PDE has sharpened my insights into these problems somewhat.

A shortlist of answers to the stability problems raised here (I've been there as well, by using the wrong method):

1. Craig-Sneyd is the wrong method for this problem. The PDE(x,y) in question is really 1 1/2 factor, so CS is a sledgehammer (BTW I and others prefer Soviet splitting).

2. The methods have not been mathematically justified.

3. A 5-point pentagonal scheme is horrendous here. Imagine using a band matrix to solve a 1st-order wave equation.

4. Ghost points and linearity BC, why?

5. For x, exponentially fitted centred differences work irrespective of the sign of the convection term, as an alternative to low-order upwinding.

6. I am pretty sure that your scheme demands a lot of expensive start-up time to get off the ground?

7. In the anchor PDE we map y to z in (0,1) by z = y/(1+y). This makes life easier at the boundaries!

8. A 1st-order hyperbolic PDE!!

In short, a candidate solution is offered without a decent understanding of the PDE problem. Yet the solution does seem to be in widespread use.
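The z = y/(1+y) map above is easy to sanity-check: it sends [$][0,\infty)[$] into [$][0,1)[$], and the chain-rule factor [$]dz/dy = 1/(1+y)^2 = (1-z)^2[$] vanishes at the far boundary, which is presumably what makes life easier at z = 1. A small illustrative sketch:

```python
def to_z(y):
    """Forward map y -> z = y / (1 + y), taking [0, inf) into [0, 1)."""
    return y / (1.0 + y)

def to_y(z):
    """Inverse map z -> y = z / (1 - z)."""
    return z / (1.0 - z)

# Large y piles up near the boundary z = 1.
for y in [0.0, 1.0, 10.0, 1000.0]:
    print(f"y = {y:8.1f} -> z = {to_z(y):.6f}")

# Chain-rule factor dz/dy = 1/(1+y)^2 = (1-z)^2, checked by central differences.
h, y0 = 1e-6, 3.0
fd = (to_z(y0 + h) - to_z(y0 - h)) / (2.0 * h)
exact = (1.0 - to_z(y0))**2
print(f"dz/dy at y = 3: FD {fd:.8f} vs (1-z)^2 = {exact:.8f}")
```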



- Cuchulainn

Another numerical remark concerning the ADI splitting

[$]\frac{\partial V}{\partial t} + A_{1}(t)V = F_{1}[$]

[$]\frac{\partial V}{\partial t} + A_{2}(t)V = F_{2}[$]

is that, in general, the splitting process itself is 2nd-order accurate only if the operators [$]A_{1}[$] and [$]A_{2}[$] commute, i.e. [$]A_{1}A_{2} = A_{2}A_{1}[$]. Otherwise it is 1st-order accurate.

And it seems the Cheyette coefficients are time-dependent, so the split operators will in general not commute, and one should expect 1st-order accuracy.
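The commutator condition is easy to verify numerically. A sketch with arbitrary non-commuting 2x2 matrices (nothing Cheyette-specific): the plain Lie split [$]e^{hA_1}e^{hA_2}[$] has local error [$]O(h^2)[$], while the symmetrised (Strang) split recovers [$]O(h^3)[$] locally, i.e. 2nd order overall:

```python
import numpy as np

def expm(M, terms=30):
    """Taylor-series matrix exponential (adequate for small-norm matrices)."""
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A1 = np.array([[0.0, 1.0], [0.0, 0.0]])
A2 = np.array([[0.0, 0.0], [1.0, 0.0]])      # A1 @ A2 != A2 @ A1

lie_errs, strang_errs = [], []
for h in [0.1, 0.05, 0.025]:
    exact = expm(h * (A1 + A2))
    lie = expm(h * A1) @ expm(h * A2)
    strang = expm(0.5 * h * A1) @ expm(h * A2) @ expm(0.5 * h * A1)
    lie_errs.append(np.abs(lie - exact).max())
    strang_errs.append(np.abs(strang - exact).max())
    print(f"h = {h:5.3f}: Lie err {lie_errs[-1]:.2e}, "
          f"Strang err {strang_errs[-1]:.2e}")
```

Halving h divides the Lie error by ~4 (2nd order locally, so 1st order per unit time) and the Strang error by ~8.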



- Cuchulainn

For the anchor PDE, 2-point upwinding is better but only 1st-order accurate. This can be rectified by taking Towler-Yang or Roberts-Weiss for the convection term. He does say it's ADI Craig-Sneyd. On the time discretisation, you don't mention whether it's implicit, CN, etc.

Ferdelo, I'm not against 5-point stencils in general (or even ghost points, for that matter), but this 5-point stencil is still a central discretisation with 2 points downwind for a convective term, so I agree with Cuchulainn: that's obviously asking for trouble. The real question is why simple 2-point upwinding doesn't work for you. Have you by any chance messed up the downwind/upwind sign? I always do a stupid check like that just to make sure. Your pdf does seem to indicate that, but I may be wrong.

If that's not it, then I'd plot the results and have a visual look. Are there oscillations? Does the solution really have enough space to reach zero gamma at the boundaries, as your boundary conditions assume, etc.? And by the way, don't any of the references (books/papers) you have on this mention the boundary conditions they used?

And convection dominance can occur.
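The oscillation symptom is easy to reproduce on a 1D model problem (a sketch, not the Cheyette PDE itself): solve eps*u'' - a*u' = 0, u(0)=0, u(1)=1, on a grid coarse enough that the cell Peclet number a*h/(2*eps) exceeds 1. Central differencing of the convection term then produces a non-monotone, oscillating solution, while first-order upwinding stays monotone:

```python
import numpy as np

eps, a, N = 0.01, 1.0, 20            # h = 1/N, cell Peclet = a*h/(2*eps) = 2.5
h = 1.0 / N

def solve(scheme):
    """Solve eps*u'' - a*u' = 0 with u(0)=0, u(1)=1 on the interior nodes."""
    M = np.zeros((N - 1, N - 1))
    rhs = np.zeros(N - 1)
    for i in range(N - 1):
        diag, lo, hi = -2.0 * eps / h**2, eps / h**2, eps / h**2
        if scheme == "central":
            lo += a / (2.0 * h)
            hi -= a / (2.0 * h)
        else:                         # upwind for a > 0: backward difference
            diag -= a / h
            lo += a / h
        M[i, i] = diag
        if i > 0:
            M[i, i - 1] = lo
        if i < N - 2:
            M[i, i + 1] = hi
        else:
            rhs[i] -= hi * 1.0        # fold in boundary value u(1) = 1
    return np.linalg.solve(M, rhs)

for scheme in ["central", "upwind"]:
    u = solve(scheme)
    print(f"{scheme:7s}: min u = {u.min():+.4f}, "
          f"monotone = {bool(np.all(np.diff(u) >= -1e-12))}")
```

Refining the grid until a*h/(2*eps) < 1 cures the central scheme, which is exactly why coarse-grid convection dominance is the thing to check first.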


- Cuchulainn

A useful method in this context is Strang-Marchuk (and Lie-Trotter) operator splitting, so you solve diffusion and convection as independent steps:

https://en.wikipedia.org/wiki/Strang_splitting

https://en.wikipedia.org/wiki/Lie_product_formula

As an off-topic example, you can split the Schroedinger PDE into kinetic and potential energy terms.

// ADI is dimensional splitting, with convection-diffusion issues, and it can be a Pandora's box. It is a bit clumsy TBH.
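The Schroedinger split mentioned above is the classic split-step Fourier method: the kinetic term is applied exactly in Fourier space and the potential term in real space, in Strang order (half potential, full kinetic, half potential). A minimal sketch with hbar = m = 1 and an illustrative harmonic potential; since every factor is unitary, the discrete norm is preserved to rounding error:

```python
import numpy as np

N, L, dt, steps = 256, 20.0, 0.01, 200
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
V = 0.5 * x**2                        # illustrative harmonic potential

psi = np.exp(-(x - 1.0) ** 2).astype(complex)   # displaced Gaussian packet
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / N))

half_V = np.exp(-0.5j * V * dt)       # half-step potential phase
kin = np.exp(-0.5j * k**2 * dt)       # full-step kinetic phase, k^2/2 operator
for _ in range(steps):
    psi = half_V * psi
    psi = np.fft.ifft(kin * np.fft.fft(psi))
    psi = half_V * psi

norm = np.sum(np.abs(psi) ** 2) * (L / N)
print(f"norm after {steps} steps: {norm:.12f}")
```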


