- Cuchulainn
**Posts:** 53050 **Location:** Amsterdam

I presume section 9.4?

http://www.datasimfinancial.com

"Science is what you know, Philosophy is what you don't know" Bertrand Russell.


Right -- plus Sec 9.8 has an example (Stefan problem). Also I answered a Mathematica Stack Exchange question on the latter. The Stefan problem is a nice test case because there is an exact solution for what amounts to the critical exercise boundary.

I did some more tests with this and here are a few more points. I used the Bulirsch-Stoer stepper with dt=1.e-12 (using dt=1.e-1 doesn't change the timings). What I found was:

| NX | Price | CPU time |
| --- | --- | --- |
| 300 | 6.298500 | 6 s |
| 600 | 6.299607 | 17 s |
| 1200 | 6.299675 | 43 s |
| 2400 | 6.299611 | 3 m 35 s |
| 4800 | 6.299586 | 18 m 10 s |

1) Execution time does not increase linearly with NX. I haven't looked into this method, so I don't know if that's expected, but it's not good.

2) Convergence in space is not monotone either. That assumes dt=1.e-12 all but eliminates the temporal error, but even if it doesn't, this means we cannot do, say, Richardson extrapolation in space.

3) Last but most important, all this is (and I don't like to disappoint) terribly slow. Simple C-N with Brennan-Schwartz gives perfectly monotone convergence in space, and for the last case of NX=4800 (where MOL needed 18+ min) it takes 265 ms to give the same result. That's with NX=4800 and NT=3000 -- about 4000 times faster. So...
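For reference, the Brennan-Schwartz algorithm mentioned above amounts to a Thomas tridiagonal solve with the early-exercise constraint u >= payoff applied during the backward-substitution sweep. A minimal sketch (the function name and the plain std::vector interface are mine, not taken from any of the codes being timed):

```cpp
#include <vector>
#include <algorithm>

// Solve the tridiagonal system (a = sub-, b = main, c = super-diagonal)
// with the American constraint u >= payoff applied node by node on the
// backward sweep -- the Brennan-Schwartz trick.
std::vector<double> brennanSchwartz(std::vector<double> a, std::vector<double> b,
                                    std::vector<double> c, std::vector<double> d,
                                    const std::vector<double>& payoff)
{
    const int n = static_cast<int>(b.size());
    // Forward elimination: standard Thomas algorithm.
    for (int i = 1; i < n; ++i) {
        const double m = a[i] / b[i - 1];
        b[i] -= m * c[i - 1];
        d[i] -= m * d[i - 1];
    }
    std::vector<double> u(n);
    u[n - 1] = std::max(d[n - 1] / b[n - 1], payoff[n - 1]);
    // Backward substitution with the early-exercise projection.
    for (int i = n - 2; i >= 0; --i)
        u[i] = std::max((d[i] - c[i] * u[i + 1]) / b[i], payoff[i]);
    return u;
}
```

Inside a C-N time loop this replaces the plain tridiagonal solve. It is an approximation to the full LCP solution, which is part of why a penalty formulation can be more accurate.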


On a side note, Alan has only 141 reputation points on https://mathematica.stackexchange.com !!!

Yeah, for better or for worse, I hang out here and not there.

- Cuchulainn

tagoma wrote:On a side note, Alan has only 141 reputation points on https://mathematica.stackexchange.com !!!

You make it all sound like an X-Factor show..


- Cuchulainn

*If you want some challenge areas for the new book, discuss how to do fast, efficient solutions to the Fokker-Planck pde for transition probability densities for popular two-factor models. Not so easy to use NDSolve (or perhaps whatever scheme you have in mind) with some of those. There are applications in calibration and maximum likelihood parameter estimation.*

This would be a good project and has been on the to-do list. One could use the usual suspects like Soviet splitting, but ADE and MOL might be good enough?

Is this FPE pde specified somewhere in excruciatingly painful detail so that it can be mapped to FDM?

One niggling open question is approximating delta functions or am I worrying too much?



The basic FPE is here, with Dirac initial condition for the applications I mentioned.

If a boundary is reachable and the associated parameter is mean-reverting, many finance models will use a zero-flux boundary condition. For example, see my Vol. II Sec 7.9 for discussion with the Heston model -- although that discussion is applied to getting an analytic soln, not fdm.

In general, boundary issues are model-specific, so a comprehensive discussion would be hard to find.

Another example. For the lognormal SABR model, the parameter/coordinate [$]\sigma = 0[$] is not reachable. In that model (pretty sure), numerical absorption at all spatial boundaries will probably work well.

You're right to worry about the Dirac i.c. Personally, if doing the FPE for [$]p(t,\vec{x}_t | \vec{x}_0)[$], I will generally try to place the starting hotspot of the particle [$]\vec{x}_0[$] exactly on a node and then take the i.c. to be a constant there and zero at all other nodes. Then, the constant is an appropriate function of the (local) lattice spacing to imitate the n-dimensional Dirac delta on the fdm lattice.
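In the uniform one-factor case this on-node recipe is a one-liner: amplitude 1/Δx at the node carrying [$]x_0[$], zero elsewhere, so the lattice mass is exactly 1. A sketch (the function names are mine, for illustration):

```cpp
#include <vector>
#include <cmath>

// On-node lattice Dirac delta: a spike of amplitude 1/dx at the node
// nearest x0 (x0 assumed to sit exactly on a node), zero elsewhere.
std::vector<double> latticeDelta(int NX, double dx, double x0)
{
    std::vector<double> p(NX, 0.0);
    const int j0 = static_cast<int>(std::lround(x0 / dx));
    p[j0] = 1.0 / dx;
    return p;
}

// Trapezoidal mass of a lattice density -- should be 1 for the delta above.
double trapMass(const std::vector<double>& p, double dx)
{
    double m = 0.0;
    for (std::size_t j = 0; j + 1 < p.size(); ++j)
        m += 0.5 * (p[j] + p[j + 1]) * dx;
    return m;
}
```

The amplitude adjusts automatically with the mesh, so there is no extraneous [$]\epsilon[$] limit to take.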

The challenge is to get an efficient, apparently convergent, norm-preserving, positivity-preserving solution -- not so easy for many standard 2D models -- IMO.


- Cuchulainn

Good tip!

Boost ODE also supports WhenEvents in the form of Observers, and that's what I have done (Brennan-Schwartz), as well as penalties.

The WhenEvent solution is mucho faster than penalty, by a factor of [100, 300] at least. I'll post more details later. In fairness, penalty is more accurate than Brennan-Schwartz.

So, for plain options penalty is no use; the next question is whether Brennan-Schwartz + CN cuts the mustard for American barrier options. I do remember issues, e.g. see Tavella's book.
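The Observer/WhenEvent projection can be sketched without odeint: advance the semi-discrete system one step, then let a hook mutate the state, e.g. projecting it onto the payoff. The sketch below is a stand-in (plain explicit Euler on u_t = u_xx, my own function names), not odeint's actual API:

```cpp
#include <vector>
#include <algorithm>
#include <functional>

using State = std::vector<double>;

// One explicit Euler step of the semi-discrete heat equation u_t = u_xx
// (a stand-in for the real spatial operator), followed by an observer hook
// that may mutate the state -- the WhenEvent-style projection goes there.
void stepAndObserve(State& u, double dt, double dx,
                    const std::function<void(State&)>& observer)
{
    State un = u;
    for (std::size_t j = 1; j + 1 < u.size(); ++j)
        un[j] = u[j] + dt * (u[j - 1] - 2.0 * u[j] + u[j + 1]) / (dx * dx);
    u = un;
    observer(u); // e.g. u[j] = max(u[j], payoff[j]) for early exercise
}
```

The observer would be a lambda capturing the payoff, e.g. `[&](State& s){ for (std::size_t j = 0; j < s.size(); ++j) s[j] = std::max(s[j], payoff[j]); }`.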

Things become interesting when we solve real nonlinear PDEs like UVM. What are the best methods? It is documented that ADE is 40% faster than the fastest Crank-Nicolson in the West for this problem (2014).

Last edited by Cuchulainn on September 15th, 2017, 6:12 pm


- Cuchulainn

Billy7 wrote:I did some more tests with this and here are a few more points. I used the Bulirsch-Stoer stepper with dt=1.e-12 (using dt=1.e-1 doesn't change the timings). What I found was:

| NX | Price | CPU time |
| --- | --- | --- |
| 300 | 6.298500 | 6 s |
| 600 | 6.299607 | 17 s |
| 1200 | 6.299675 | 43 s |
| 2400 | 6.299611 | 3 m 35 s |
| 4800 | 6.299586 | 18 m 10 s |

1) Execution time does not increase linearly with NX. I haven't looked into this method, so I don't know if that's expected, but it's not good.

2) Convergence in space is not monotone either. That assumes dt=1.e-12 all but eliminates the temporal error, but even if it doesn't, this means we cannot do, say, Richardson extrapolation in space.

3) Last but most important, all this is (and I don't like to disappoint) terribly slow. Simple C-N with Brennan-Schwartz gives perfectly monotone convergence in space, and for the last case of NX=4800 (where MOL needed 18+ min) it takes 265 ms to give the same result. That's with NX=4800 and NT=3000 -- about 4000 times faster. So...

This is a subtle problem because it is not just Boost ODE versus hand-crafted code, although the latter wins on speed (it also costs more in developer man-hours).

There are three bottlenecks:

Model <-> Numerics <-> C++

The baseline was my original code. The bottom line is that Brennan-Schwartz using WhenEvent AND hand-crafted loop unrolling is [250, 2000] times faster than the baseline, depending on the type of problem. Next would be compiler vectorisation.
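For what "hand-crafted loop unrolling" means here, a toy sketch (my own example, not the solver code): process four nodes per iteration to cut loop overhead, with a remainder loop for the tail.

```cpp
#include <vector>
#include <cstddef>

// 4-way unrolled update u += a*v; the unrolled body reduces branch overhead
// and exposes instruction-level parallelism to the compiler.
void axpyUnrolled(std::vector<double>& u, const std::vector<double>& v, double a)
{
    std::size_t n = u.size(), j = 0;
    for (; j + 4 <= n; j += 4) {
        u[j]     += a * v[j];
        u[j + 1] += a * v[j + 1];
        u[j + 2] += a * v[j + 2];
        u[j + 3] += a * v[j + 3];
    }
    for (; j < n; ++j) u[j] += a * v[j]; // remainder
}
```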

E.g. up-and-out American calls seem to be computationally more demanding than up-and-out American puts.

IMO, making the linear BS PDE into a semilinear PDE via penalty is not optimal, but you do see it a lot.


- Cuchulainn

Alan wrote:The basic FPE is here, with Dirac initial condition for the applications I mentioned.

...

You're right to worry about the Dirac i.c. Personally, if doing the FPE for [$]p(t,\vec{x}_t | \vec{x}_0)[$], I will generally try to place the starting hotspot of the particle [$]\vec{x}_0[$] exactly on a node and then take the i.c. to be a constant there and zero at all other nodes. Then, the constant is an appropriate function of the (local) lattice spacing to imitate the n-dimensional Dirac delta on the fdm lattice.

The challenge is to get an efficient, apparently convergent, norm-preserving, positivity-preserving solution -- not so easy for many standard 2D models -- IMO.

First shot across the bows. To do FPE in FDM we need a "good" approximation to delta functions; this is the major challenge IMO. What do you think of these? (1 factor)

[$]f_\varepsilon(x) = \frac{1}{\pi} \frac{\varepsilon}{x^2 + \varepsilon^2}[$] as [$]\varepsilon \rightarrow 0[$]

or closer to your idea

[$]f_\varepsilon(x) = \exp(-x^2/\varepsilon) / \sqrt\pi \sqrt\varepsilon [$] as [$]\varepsilon \rightarrow 0[$]

Open question is how to choose [$]\varepsilon[$].
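One numerical sanity check on either mollifier: its discrete mass on the grid should stay close to 1, and it degrades once [$]\varepsilon[$] is small relative to the mesh (the spike falls between nodes), which is exactly why [$]\varepsilon[$] must be tied to [$]\Delta x[$]. A sketch (normalised Cauchy and Gaussian kernels; the function name is mine):

```cpp
#include <cmath>

// Rectangle-rule mass of a mollifier on a uniform grid over [-L, L].
// cauchy == true:  f(x) = (1/pi) * eps / (x^2 + eps^2)
// cauchy == false: f(x) = exp(-x^2/eps) / sqrt(pi*eps)
double discreteMass(double eps, double dx, double L, bool cauchy)
{
    const double pi = std::acos(-1.0);
    double sum = 0.0;
    for (double x = -L; x <= L; x += dx) {
        const double f = cauchy ? eps / (pi * (x * x + eps * eps))
                                : std::exp(-x * x / eps) / std::sqrt(pi * eps);
        sum += f * dx;
    }
    return sum;
}
```

Note that the Cauchy kernel also loses mass through its fat tails on a truncated domain, one more reason to prefer the Gaussian or the on-node delta.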


I am not a fan of spreading out the delta. I like concentrating it on a node. The main reason is: you don't want to introduce an extraneous limit (besides taking the lattice spacing to zero). If you concentrate it on a node, it adjusts 'automatically' with the lattice spacing. For example, with uniform spacing of a one-factor problem, my method has a lattice Dirac delta with amplitude [$]\frac{1}{\Delta x}[$]. Easy!

But you will likely have to cook up some extraneous rule: [$]\epsilon = \epsilon(\Delta x)[$] to see convergence to the exact transition density without having to do a pain-in-the-neck double limit. Things will just get messier in multiple dimensions.

Another reason is you want the (numerical) transition density to be norm-preserving to a good approximation. If it isn't, and there are boundaries, you will have to worry that it might be because your [$]\epsilon[$] was too large.

Try it various ways on some simple one-factor problem, say [$]p_t = \frac{1}{2} p_{xx}[$] on half-line with reflection at 0 -- you'll likely see what I mean. What if the particle starts at a very small [$]x_0[$]?
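For that test problem the exact transition density is given by the method of images: a free-space heat kernel at [$]x_0[$] plus its mirror at [$]-x_0[$], which satisfies the zero-flux condition at 0. A sketch of this reference solution (the function name is mine):

```cpp
#include <cmath>

// Exact density for p_t = (1/2) p_xx on x >= 0 with reflection at x = 0:
// method of images, p = g(x - x0) + g(x + x0), g a Gaussian of variance t.
double reflectedDensity(double t, double x, double x0)
{
    const double pi = std::acos(-1.0);
    auto g = [&](double y) {
        return std::exp(-y * y / (2.0 * t)) / std::sqrt(2.0 * pi * t);
    };
    return g(x - x0) + g(x + x0);
}
```

For small [$]x_0[$], the spike and its image merge almost immediately, which is exactly what a spread-out i.c. with a fixed [$]\epsilon[$] can miss.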


- Cuchulainn

xxxxx

*EDIT: this is the wrong post, it should have been the other one..*

Last edited by Cuchulainn on September 21st, 2017, 4:43 pm


Cuchulainn wrote:Boost ODE also supports WhenEvents in the form of Observers, and that's what I have done (Brennan-Schwartz), as well as penalties.

I'm playing around with odeint at the moment. Just a quick question at this point: Observers are not available in the implementation of the operator, which is where we calculate the derivative -- and also the place to implement any such condition, isn't it? The Observer only gives us the solution once the system is integrated. But maybe I'm wrong (just starting with odeint).

Cuchulainn wrote:Things become interesting when we solve real nonlinear PDEs like UVM. What are the best methods? It is documented that ADE is 40% faster than the fastest Crank-Nicolson in the West for this problem (2014).

There is an article by Prof. Gunter Meyer; it can be found here: http://people.math.gatech.edu/~meyer/ He does BSB with a time-discrete MOL. He promises that the gamma becomes smooth, unlike with CN or space discretisation.


- Cuchulainn

I agree that this is a good place for this.

I think you can use one of the integrate functions to choose where to call observer events (?)

That 40% figure is from Pealat and Duffy 2011.

ADE is very easy for nonlinear. Whether it is a universal panacea for all NL problems is another open question.

HMOL (odeint is VMOL)

I have the book but have not implemented it. The method is a bit esoteric, even more so than VMOL.

/// Brainstorm idea: use ADE for time discretisation in combination with Meyer's HMOL in the S direction. Why not?
