SERVING THE QUANTITATIVE FINANCE COMMUNITY

 
User avatar
Polter
Posts: 2526
Joined: April 29th, 2008, 4:55 pm

PDE/PIDE (FDM. FEM) methods thread

November 11th, 2011, 9:21 pm

QuoteOriginally posted by: outrun
I copied the convention used in boost math. The way they solve it is to have a long name, typically with the "type-of-thing" appended to the name for the template version, and a typedef to double with a compact name. E.g.:

template <class RealType>
class normal_distribution { ... };
typedef normal_distribution<double> normal;

We could do the same, adding _contract to spread_call, adding _process to brownian_motion. I'm currently using that, e.g., with the hitting time distribution. I will also need a PolicyType template argument.

OK, I guess that also answers some of the questions/feature suggestions by Alan regarding the working-precision settings; some useful references:

Policies
Policies Have Sensible Defaults
Internal Floating-point Promotion Policies -- how accuracy is controlled by internal promotion to use more precise types.
Precision Policies -- what working precision should be used to calculate results.
Iteration Limits Policies -- how many iterations a special function is permitted to perform in a series evaluation or root-finding algorithm before it gives up and raises an evaluation_error: http://www.boost.org/doc/libs/release/l ... r
Use with User-Defined Floating-Point Types
Conceptual Requirements for Real Number Types -- "The functions, and statistical distributions in this library can be used with any type RealType that meets the conceptual requirements given below. All the built-in floating-point types will meet these requirements. User-defined types that meet the requirements can also be used. For example, with a thin wrapper class one of the types provided with NTL (RR) can be used. Submissions of bindings to other extended-precision types would also be most welcome! The guiding principle behind these requirements is that a RealType behaves just like a built-in floating-point type."
Conceptual Archetypes for Reals and Distributions
 
User avatar
Polter
Posts: 2526
Joined: April 29th, 2008, 4:55 pm

PDE/PIDE (FDM. FEM) methods thread

November 11th, 2011, 9:34 pm

QuoteOriginally posted by: Cuchulainn
Quote: Finally, is "Real" a good name that we want to have -- Alan mentioned complex coefficients and multi-precision features of Mathematica; in general we only have different kinds of floating-point numbers and integers available out of the box. Coefficients can belong to any Ring, including complex, rational and Interval (uncertain coefficients), yes?

Yes, that's what I was generally thinking about! Although a few caveats apply that we might think about/brainstorm; see below.

QuoteOriginally posted by: Cuchulainn
In C++ a templated infinity could be used and it would be in line with the underlying type (e.g. float, double). Polter, is this a good idea? numeric_limits

Yeah, this might also come in handy: Floating-Point Classification: Infinities and NaNs.

QuoteOriginally posted by: Cuchulainn
Quote: The Ring is also a good idea.
'Ring' could be a concept describing the interface +, *, for mathematical rings (including 0<T> and 1<T> if you get my drift)?? Then real classes implement the concept.

outrun & Cuchulainn -- see also:
Conceptual Requirements for Real Number Types
Conceptual Archetypes for Reals and Distributions

Now, regarding the caveats and ideas. First, the caveats -- when thinking about how a type models a concept, there are further distinctions to be made, which is especially relevant when we're talking about floating-point types and, say, a RealType concept. Two further levels of distinction are partial models (this already kicks in for integers) and approximate models.

The points below are from "Notes on Programming" by Alexander Stepanov, Lecture 10, Generic algorithms: http://www.stepanovpapers.com/notes.pdf
outrun & Cuchulainn: he introduces the relationships between concepts and the relevant algebraic structures, like Monoids, Semigroups, etc. -- I think his comments convention might be quite nice there, so I included a few code snippets (there's plenty more in the Notes); what do you think?

Quote: "When we dealt with max, min and other order selection operations we observed that they are defined on types that satisfy certain requirements: namely, they provide a strict weak ordering or, in cases when we use operator<, a strict total ordering. A collection of types that satisfy a common set of requirements is called a concept. A common example of a concept is the concept of Integral: a collection of all integer types in C. A type in the collection associated with a particular concept is said to model this concept or, using a noun instead of a verb, is a model of this concept. int is a model of Integral. An algorithm that is defined on all the types in a concept is called a generic algorithm."

Quote: "The third objection to the code is that, strictly speaking, no type can model a totally ordered group. (Or, being precise, no type with a non-zero element can be such a model.) Indeed, from the axioms we can derive that in such a group there are no elements of finite order and therefore the group is infinite. As we all know, even with memory being cheap, computers cannot hold infinitely many different values. A type such as int is not strictly speaking a group under addition, since addition is not defined when we add two values whose sum is greater than MAX_INT. Moreover, C does not seem to guarantee that -MIN_INT is defined. There could be more negative numbers than positive numbers and that will prevent our abs from being a total function. We need to drop the requirement that the model implements all the operations defined in the concepts as total functions. Operations can be implemented as partial functions, and the axioms have to hold only when all the operations are defined. Our models are partial models."

Quote: "Sometimes even using a partial model is not good enough. Sometimes even when the basic operations are defined, the axioms do not hold. In particular, when we deal with doubles all the equational axioms are suspect. Even such a basic law as associativity of addition does not hold. We need to develop a notion of an approximate model but that clearly is outside of the scope of this course, since it belongs to a course on Generic Numerical Methods which I plan to teach in the year 2016."
 
User avatar
Cuchulainn
Topic Author
Posts: 59715
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

PDE/PIDE (FDM. FEM) methods thread

November 12th, 2011, 9:05 am

In the V1 version of the 2d FDM solver (using boost::function) I intend to provide ADE and Explicit Euler code, which do not entail any matrix inversion, splitting and suchlike. Then we can focus on the essential issues for the moment: 1) generic interfaces; 2) feedback on PDE requirements for V2. For EE we need a small time step, but it is useful for a number of reasons. For ADE we don't have this restriction.

For the big discussion on BC: since I have separate pde and domain classes, I can define a (multi-thread) solver, let's say a 2-asset pde and two domain types, using 1) Fichera BC or 2) the payoff as 'surrogate' BC. Then the post-processing error analysis library kicks in when I compare the max-norm difference of 1) versus 2).

What do you think?
Last edited by Cuchulainn on November 11th, 2011, 11:00 pm, edited 1 time in total.
 
User avatar
Alan
Posts: 9788
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

PDE/PIDE (FDM. FEM) methods thread

November 12th, 2011, 9:52 pm

How does it all work for some basic non-GBM model like Heston?
 
User avatar
Alan
Posts: 9788
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

PDE/PIDE (FDM. FEM) methods thread

November 13th, 2011, 2:47 pm

Here is one of my typical pde solver calls in Mathematica. In spite of the remark at the top of the code, I was actually using this to solve for a CIR bond price B(t,r). The pde and boundary conditions in interest rate space (r-space) are

B_t = r B_rr + (a - b r) B_r - r B,   for r in (0, infty),
B(t=0, r) = 1,   B_r(t, infty) = 0.

By hand algebra, I transform coordinates to y = r/(r+r0) in (0,1), set B(t,r) = h(t,y), and then invoke the posted code. As you can see, I create a simple grid on (0, 1 - dy) and basically just call NDSolve, which is Mathematica's built-in. The coefs a2[y] and so on are defined elsewhere in my Mathematica file. That's essentially the pattern for any pde problem. When you call NDSolve, Mathematica takes the pde, discretizes it spatially -- the default is 4th order -- and recasts the problem as the matrix system dh/dt = A h. It then solves the latter via an ode system method with an adaptive time step. This is the MethodOfLines method that you can see in the code.

For the GARCH diffusion model, I make this same type of 1-factor solver call for H(t,y;z), the 'Fundamental Transform' from my book. In that case, the pde coefs are complex-valued functions with a parameter z, so the result from the pde call is a complex-valued function. I pass those results to an integration routine over z and, using my book formulas, get option values. For the Heston model, H(t,y,z) is already known analytically, so there is no need for the pde (for euro-styles); you get option values from the integration.

This generally works fine. However, I want to calibrate the GARCH diffusion and need a faster running time, as per earlier discussion. I am hoping to be able to call the QFCL 1-factor solver around 0.5 million times in 1 hour.
Last edited by Alan on November 12th, 2011, 11:00 pm, edited 1 time in total.
 
User avatar
Cuchulainn
Topic Author
Posts: 59715
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

PDE/PIDE (FDM. FEM) methods thread

November 13th, 2011, 6:42 pm

QuoteOriginally posted by: Alan
How does it all work for some basic non-GBM model like Heston?

If we are talking about a 2-factor model, then according to the current model you give PDE coefficients and the Dirichlet BC. Having said that, we know that these degenerate PDEs have to do with Fichera/Feller theory, and formally the BC can be any of:
1. 1-factor BS (has an exact solution)
2. 1- or 2-factor 1st-order hyperbolic PDE
3. Other

These may have an analytical solution, in which case we are back in Dirichlet land. If not, we will need to adopt Plan B (solve numerically), but this will demand some interface changes (new classes), which is doable. It's actually a clean concept: composing PDE classes from smaller ones. The interface is very easy ==> V(S, t) etc. In fact, since I use Boost functions, these other PDEs could be solved using the binomial method, MC, whatever. The master PDE just sees an interface. And we can use parallel threads by firing them up from the master.

Quote: I am hoping to be able to call the QFCL 1-factor solver around 0.5 million times in 1 hour.

I remember testing this; 55 minutes on a laptop, with ADE and not bothering with any optimisation.
Last edited by Cuchulainn on November 12th, 2011, 11:00 pm, edited 1 time in total.
 
User avatar
Alan
Posts: 9788
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

PDE/PIDE (FDM. FEM) methods thread

November 13th, 2011, 7:59 pm

QuoteOriginally posted by: Cuchulainn
...formally the BC can be any of: 1. 1-factor BS (has an exact solution); 2. 1- or 2-factor 1st-order hyperbolic PDE; 3. Other... I remember testing this; 55 minutes on a laptop, with ADE and not bothering with any optimisation.

Yes, so boundary condition #2 is what is critical for my applications: CIR, sqrt model, etc. I hope these "Plan B interface changes" are not going to be a performance drag. It sounds kinda messy. I don't know your algorithm, but are you sure you can't just treat y=0 (using the notation from my posted code) more or less like any other grid point, except that the diffusion coef a2[y] vanishes there? [Also, that degenerate boundary pde is deeply coupled to the interior solution. Again, I don't know your algorithm, but it can't really be solved 'separately' as your remarks suggest.]
Last edited by Alan on November 12th, 2011, 11:00 pm, edited 1 time in total.
 
User avatar
Cuchulainn
Topic Author
Posts: 59715
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

PDE/PIDE (FDM. FEM) methods thread

November 14th, 2011, 4:51 am

QuoteOriginally posted by: Alan
Yes, so boundary condition #2 is what is critical for my applications: CIR, sqrt model, etc. I hope these "Plan B interface changes" are not going to be a performance drag. It sounds kinda messy. I don't know your algorithm, but are you sure you can't just treat y=0 (using the notation from my posted code) more or less like any other grid point, except that the diffusion coef a2[y] vanishes there? [Also, that degenerate boundary pde is deeply coupled to the interior solution. Again, I don't know your algorithm, but it can't really be solved 'separately' as your remarks suggest.]

From a numerical perspective, this is no problem. For CIR, I have done it by solving u_t + a u_r = 0 on r = 0 by one-step methods. Then we get 'essentially' Dirichlet BC. In 2d Heston, Roelof has done it using Soviet Splitting, but we get LU systems as before, which reduces performance by ~40%. With ADE, I get two equations in two unknowns at v = 0, and this kicks off the standard ADE scheme in the interior of the domain. From a C++ perspective, I will need to change the interface somewhat.

edit: there are different degrees of coupling, from NONE (using explicit Euler) to ADI/Splitting (high).
Last edited by Cuchulainn on November 13th, 2011, 11:00 pm, edited 1 time in total.
 
User avatar
Alan
Posts: 9788
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

PDE/PIDE (FDM. FEM) methods thread

November 14th, 2011, 2:19 pm

OK, well some of these schemes may work for r=0 and others may not. For example, "solving u_t + a u_r = 0 on r = 0 by one-step methods" sounds suspicious to me. The reason is that the exact CIR bond solution B(t,r) at r=0 depends on the volatility parameter, but u_t + a u_r = 0 doesn't. That's what I meant by 'deeply coupled' and 'it can't really be solved separately'. What am I missing?
Last edited by Alan on November 13th, 2011, 11:00 pm, edited 1 time in total.
 
User avatar
Cuchulainn
Topic Author
Posts: 59715
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

PDE/PIDE (FDM. FEM) methods thread

November 14th, 2011, 4:36 pm

QuoteOriginally posted by: Alan
OK, well some of these schemes may work for r=0 and others may not. For example, "solving u_t + a u_r = 0 on r = 0 by one-step methods" sounds suspicious to me. The reason is that the exact CIR bond solution B(t,r) at r=0 depends on the volatility parameter, but u_t + a u_r = 0 doesn't. That's what I meant by 'deeply coupled' and 'it can't really be solved separately'. What am I missing?

In general, we have 2 unknown values at mesh points j = 0 and j = 1, from both the bdy PDE (upwinding) and the full PDE at j = 1. Then we solve for both U(n+1, 0) and U(n+1, 1) -- sigma will be in there -- and then iterate up to j = J for the up sweep. The same idea as eq. 5.28 in Sheppard's thesis.

Another solution I did take was indeed u_t + a u_y = 0 and then the BC at y = 1 (y = r/(r+a)): u = 0. This latter BC uncouples everything, so you are saying that this approach is not good? When r = 0, the exact CIR solution is indeed == A. With coupling we would get it back at j = 0. A related issue here is choice II. I have the code for the uncoupled case, which is easy to adapt to the coupled case.
Last edited by Cuchulainn on November 13th, 2011, 11:00 pm, edited 1 time in total.
 
User avatar
Alan
Posts: 9788
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

PDE/PIDE (FDM. FEM) methods thread

November 14th, 2011, 5:11 pm

The exact CIR soln is indeed A = A(t,sigma, other parms) at r=0. I think it's reasonable to ask that any finite difference approximation converges to this exact value at r=0 as the mesh becomes fine. I can't see how any 'uncoupled' BC method can achieve that -- so, yes: not good.
Last edited by Alan on November 13th, 2011, 11:00 pm, edited 1 time in total.
 
User avatar
Cuchulainn
Topic Author
Posts: 59715
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

PDE/PIDE (FDM. FEM) methods thread

November 15th, 2011, 8:46 am

QuoteOriginally posted by: Alan
The exact CIR soln is indeed A = A(t, sigma, other parms) at r=0. I think it's reasonable to ask that any finite difference approximation converges to this exact value at r=0 as the mesh becomes fine. I can't see how any 'uncoupled' BC method can achieve that -- so, yes: not good.

You are right; it is a coupled system, and solving it as a hyperbolic PDE along r = 0 is wrong. I have done the maths again: it is indeed two equations in two unknowns. I will work this out, compare with the exact solution, and post in V1. Included will be some post-analysis in the different 'regions'. On the PDE interface level, I need to see how I can incorporate this new kind of BC into the classes.
Last edited by Cuchulainn on November 14th, 2011, 11:00 pm, edited 1 time in total.
 
User avatar
Cuchulainn
Topic Author
Posts: 59715
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

PDE/PIDE (FDM. FEM) methods thread

December 20th, 2011, 2:28 pm

In sections 18.10 and onwards we describe the design of the FDM application. I have the code written, but I need to get it ready so that you can use it. Requirements R1,...,R11 need to be fleshed out more. KC, any new ones?
Attachments
FDMDesign.zip
(784.13 KiB) Downloaded 18 times
Last edited by Cuchulainn on December 19th, 2011, 11:00 pm, edited 1 time in total.