Hello everyone. You may be interested in the following on FEM. It comes from the academic world, from French authors (who are maybe not well known): Finite Element Methods for Option Pricing. FreeFem++ is an implementation of a language dedicated to the finite element method. Cf. chapter 4 on FEM in: Computational Methods for Option Pricing.

Last edited by tagoma on January 9th, 2011, 11:00 pm, edited 1 time in total.

- Cuchulainn
**Posts:** 63232 · **Location:** Amsterdam

Adding to the list: the FEM book on QF by Juergen Topper (Wiley). @Costeanu, have you investigated the ADE method?

Last edited by Cuchulainn on January 9th, 2011, 11:00 pm, edited 1 time in total.

Step over the gap, not into it. Watch the space between platform and train.

http://www.datasimfinancial.com

http://www.datasim.nl


- Cuchulainn

Re "Cf. chapter 4 on FEM": the FEM uses domain truncation. Now, if you transform to (0,1) you get a PDE in conservative form immediately, and then integration by parts gives all the BCs for free (see "FDM quiz" on this channel). The semi-discrete scheme MU_t + KU = F can be solved using (explicit) ADE sweeps. It could be a nice project for an MSc student. A good exercise is to explicitly work out the tridiagonal matrices M and K using hat functions and to solve using ADE as a test. It is not much work, just simple integration.
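To make the exercise concrete, here is a minimal sketch of assembling the tridiagonal M and K from hat functions. Assumptions not in the thread: the model problem is u_t = u_xx on (0,1) with homogeneous Dirichlet BCs on a uniform mesh, and a single implicit Euler step stands in for the ADE sweeps.

```python
import numpy as np

def assemble_fem_1d(n):
    """Assemble the tridiagonal mass (M) and stiffness (K) matrices for
    piecewise-linear hat functions on a uniform mesh of (0,1) with n
    interior nodes and homogeneous Dirichlet BCs."""
    h = 1.0 / (n + 1)
    # Standard hat-function integrals: M = (h/6) * tridiag(1, 4, 1),
    # K = (1/h) * tridiag(-1, 2, -1)   (for the operator -u'')
    M = (h / 6.0) * (4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
    K = (1.0 / h) * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    return M, K

def implicit_euler_step(M, K, U, dt):
    """One implicit Euler step for M U_t + K U = 0: (M + dt K) U_new = M U."""
    return np.linalg.solve(M + dt * K, M @ U)

if __name__ == "__main__":
    n, dt = 50, 0.01
    M, K = assemble_fem_1d(n)
    x = np.linspace(0.0, 1.0, n + 2)[1:-1]
    U = np.sin(np.pi * x)          # lowest eigenmode of -u''
    U = implicit_euler_step(M, K, U, dt)
    print(U.max())                 # decays roughly like 1/(1 + dt*pi^2)
```

Replacing the single solve with explicit ADE sweeps on (M + dt K) is then exactly the MSc-sized exercise described above.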

Last edited by Cuchulainn on January 9th, 2011, 11:00 pm, edited 1 time in total.

Step over the gap, not into it. Watch the space between platform and train.

http://www.datasimfinancial.com

http://www.datasim.nl


Hi Daniel,

In general, I don't think boundary conditions are that important. If you go 5 sigma away from the mean, what's the probability of hitting the boundary during the life of some trade? Quite small. If you put zeros everywhere on the boundary, it's not going to affect the result. On the other hand, you may then need to sample from a space that is too large, and so the calculation may be too slow; in that case you may need to think about boundary conditions.

The curse of dimensionality, in my mind, affects primarily interpolation, and then it has ripple effects in integration, PDEs, etc. Instead of interpolation I should probably say function approximation. For example, if I have a function in 1 dimension, I may approximate it using its values at a discrete number of points (Lagrange interpolation, or splines), or its values together with the values of its derivatives (Hermite interpolation), or as a sum of basis functions (Fourier series, wavelets, kernel regression). No matter how I do it, when I increase the number of dimensions I need more points, more basis functions, or generally speaking more computer memory. And more here means exponentially more.

Now, all numerical schemes for evolution PDEs involve approximating the solution at various time points somehow. How we go from one time point to another is quite irrelevant: even if we had a perfect crystal ball we'd still need to store the approximant, and this requires that exponential amount of memory; and to do the calculation we need an exponential amount of time. Monte Carlo gets around this by not calculating an approximant at each point in time, only at one point in time and space (i.e. now and at the current market price).

Regarding your question whether I looked at ADE: yes, I did. Not sure why, but my results showed it's not as good as Crank-Nicolson; then again, I didn't spend too much time on it. I also tried to generalize ADE to use higher derivatives, and unfortunately the only thing I could come up with was some unstable schemes.

Best,
V.
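The exponential growth in memory is easy to make concrete. A toy calculation, assuming a tensor-product grid with 100 points per axis and 8-byte doubles (both numbers are illustrative, not from the thread):

```python
def grid_nodes(n_per_axis, dims):
    """Number of values needed to store a tensor-product grid approximant:
    n points per axis in d dimensions means n**d values."""
    return n_per_axis ** dims

if __name__ == "__main__":
    for d in (1, 2, 3, 5, 10):
        nodes = grid_nodes(100, d)
        # 8 bytes per double, converted to GiB
        print(d, nodes, nodes * 8 / 2**30, "GiB")
```

Already at 5 dimensions the grid holds 10^10 values, roughly 75 GiB, which is the memory wall the post describes.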

- Cuchulainn

Quote: In general, I don't think boundary conditions are that important. If you go 5 sigma away from the mean, what's the probability to hit the boundary during the life of some trade? Quite small. If you put zeros everywhere on the boundary, it's not going to affect the result.

This is the far-field BC you are referring to, I presume? At the near field (all S == 0) we then need to solve a lower-order PDE, yes? For example, a 3-factor PDE has a BC on 6 rectangles, and these are non-trivial. It would be great to be able to simplify the BCs, but I think it has not been done yet.

Quote: Regarding your question if I looked at ADE, yes I did. Not sure why, but my results showed it's not as good as Crank-Nicolson, but then I didn't spend too much time with this. I tried however to generalize ADE to use higher derivatives, and unfortunately the only thing I could come up with was some unstable schemes.

ADE is unconditionally stable, always. In general, it is preferable to do a domain transformation, and this helps with the BCs. The convection term is also an issue. If you would like to post your scheme, we could have a look at it. http://www.math.ust.hk/~masyleung/Reprints/leuosh05.pdf
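For reference, a minimal sketch of ADE sweeps for the model problem u_t = u_xx (the Barakat-Clark variant: one left-to-right Saulyev sweep, one right-to-left, then average). The grid size, the lam value and the sine initial condition are illustrative choices, not from the thread; note lam = dt/h^2 = 10 is far beyond the explicit-FD stability limit of 0.5, yet the sweeps stay bounded:

```python
import numpy as np

def ade_barakat_clark(u, lam, steps):
    """Barakat-Clark ADE for u_t = u_xx on a uniform grid with homogeneous
    Dirichlet BCs; lam = dt/h^2, with no stability restriction on lam."""
    n = len(u)
    for _ in range(steps):
        up = u.copy()                  # upward (left-to-right) Saulyev sweep
        for j in range(1, n - 1):
            up[j] = (u[j] + lam * (u[j + 1] - u[j] + up[j - 1])) / (1.0 + lam)
        dn = u.copy()                  # downward (right-to-left) Saulyev sweep
        for j in range(n - 2, 0, -1):
            dn[j] = (u[j] + lam * (u[j - 1] - u[j] + dn[j + 1])) / (1.0 + lam)
        u = 0.5 * (up + dn)            # average the two sweeps
    return u

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 101)
    u = ade_barakat_clark(np.sin(np.pi * x), lam=10.0, steps=50)
    print(np.abs(u).max())             # bounded: no blow-up despite lam >> 0.5
```

Each sweep is explicit (each new value uses one already-updated neighbour), which is why no tridiagonal solve is needed.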

Last edited by Cuchulainn on January 10th, 2011, 11:00 pm, edited 1 time in total.

Step over the gap, not into it. Watch the space between platform and train.

http://www.datasimfinancial.com

http://www.datasim.nl


I prefer to think in terms of standard deviations away from the mean; in these units, 0 is as far away as +infinity. So I guess when you change your domain, you can choose to make the new domain [0, 1]^n, or R^n. In the first case you need boundary conditions; in the second you don't. However, when you have a diffusion process whose coefficients are such that you can hit the boundary, then you certainly need to think about boundary conditions. But then you also need to think about whether the process is appropriate for the model.

As for ADE being unconditionally stable, that's indeed the case, at least in theory. In practice, I was able to find combinations of parameters for which the scheme is no longer stable, so in general you can't always blindly trust the theory.

Best,
V.

- Cuchulainn

Quote: As for ADE being unconditionally stable, that's indeed the case, at least in theory. In practice, I was able to find combinations of parameters so that the scheme is not stable anymore, so in general you can't always trust blindly the theory.

For which kind of PDE? And what kinds of parameters? I would be interested in your example. I think the way you approach the BCs, and the lack of a domain transformation, might be the issue, but a full description would pinpoint the problem. Are you using convexity at the far field by any chance?

edit: forgot to say there are 2 kinds of instability: 1) dynamic (due to delta_t versus h) and 2) static (convection dominance). I think 2) rather than 1).

Last edited by Cuchulainn on January 10th, 2011, 11:00 pm, edited 1 time in total.

http://www.datasimfinancial.com

http://www.datasim.nl

- Cuchulainn

No?

http://www.datasimfinancial.com

http://www.datasim.nl

Daniel,

I suppose your question is addressed to me. I don't have my experiments with ADE at hand, and besides, the topic of the discussion was the meshless method, not ADE. For what it's worth, I don't find any particular FD method in current use very exciting. CN with ADI works OK in 3D+time. The trinomial tree is again fine, and it works even in 4D+time. I haven't heard of 5D+time or higher dimensions, but there are certainly derivatives that would require that.

In my opinion, all PDE schemes suffer from the curse of dimensionality; thus we can only hope to conquer one dimension at a time. If somebody comes up with a scheme that can handle 5D+time or 6D+time, that's quite an achievement, but I'm not very hopeful this can be achieved without an adaptive scheme. So, again only in my opinion, it's much more important to focus on adaptive schemes than on particular types of non-adaptive ones. For my taste, a fully explicit adaptive scheme is just fine; even if it's only conditionally stable, that's OK as long as you know what you need to do to keep it stable. Unfortunately I didn't find any adaptive FD schemes in the literature (I mean adaptive in space, not in time). Actually, I saw one article about a trinomial tree onto which you can graft subtrees, but I didn't quite get whether it was truly adaptive, or you decide to graft subtrees based on some heuristics.

Best,
V.

- Cuchulainn

Hi Costeanu,

Yes, the question was hanging in mid-air, so I asked it again. If you get a chance to dig out the experimental results, it would be interesting to see them. Thanks. The essence of ADE is how it handles time, so it would not be out of place here: use RBF in space, ADE in time. Of course, there is an ADE thread.

Quote: In my opinion all PDE schemes suffer from the curse of dimensionality. Thus we can only hope to conquer one dimension at a time. If somebody comes up with a scheme that can handle 5D+time, or 6D+time, that's quite an achievement. But I'm not very hopeful this can be achieved without an adaptive scheme.

I don't see that conquering dimensionality has a direct relationship with adaptive meshes; the latter are for other reasons? It's mainly a memory problem? With a 64-bit machine, 5D will be possible.

edit: interesting observation: using an adaptive mesh in Larkin's ADE destroys unconditional stability, whereas it is unconditionally stable on a uniform mesh.

Last edited by Cuchulainn on January 19th, 2011, 11:00 pm, edited 1 time in total.

http://www.datasimfinancial.com

http://www.datasim.nl

Hi Daniel,

Memory problems translate into time problems. If you have 100 billion nodes per time point, the scheme will be at least as slow as a Monte Carlo with 100 billion paths. As for the ADE that I have played with, I did a straight implementation of Saulyev's scheme. I didn't know that there are better versions, such as Barakat and Clark, and Larkin. If I have time, at some point I'll take a look at those schemes as well.

Best,
V.

- Cuchulainn

Quote: As for the ADE that I have played with, I did a straight implementation of Saulyev's scheme. I didn't know that there are better versions such as Barakat and Clark and Larkin. If I have time, at some time I'll take a look at those schemes as well.

Costeanu,

Ah, classic Saul'yev is what is called conditionally consistent: k/h must tend to 0 in a uniform way, a bit like Dufort-Frankel. That's probably the reason for the erratic behaviour; 99% of FDM schemes are unconditionally consistent. Barakat-Clark is much better and more amenable to truncation-error analysis. All the examples in my article use it.

regards
Daniel

Last edited by Cuchulainn on January 19th, 2011, 11:00 pm, edited 1 time in total.

http://www.datasimfinancial.com

http://www.datasim.nl

Hi Costeanu, Cuchulainn,

I'm currently implementing the RBF method, and I may have a slightly better experience with it than what you mentioned, though I admit it is not straightforward and I'm not sure yet whether I can consider it viable or not. I'm just beginning, but here is what I have observed about the points you made:

1) It's not really clear how many nodes we need, but we don't necessarily need a large number of them. On the contrary, the few papers in the finance literature on the subject tend to say that we don't need much more than 100 or 200 nodes; they say that, due to its spectral nature, the convergence should be fast. I can confirm by experience that 100 or 200 nodes gives quite an accurate result.

2) Yes, it tends to become more unstable as we use more nodes (and also much slower, due to the larger amount of time needed to solve the dense matrix). Some people say this can be improved a lot by using preconditioning techniques; I haven't tried that yet.

3) People may not use them with more than 1000 basis functions, but if the accuracy reached with 100 nodes is already very good, then this is not a worry. By the way, I read the chapter on meshfree methods in Daniel's book as a starting point for my implementation. I don't really see where he says that meshfree with 121 basis functions is compared with FDM with 121 mesh points. I see two places where he talks about performance comparisons. One is a comparison within the meshfree method, between different types of radial basis functions, with 121 mesh points. The other one really is a comparison with FDM, but with the FDM having 500 nodes. I also wouldn't think it makes much sense to compare the two methods with the same number of nodes, since they have quite different convergence behaviours and the meaning of a "node" is different. We need to compare at a fixed accuracy, or at a fixed runtime.

4) Same thing for a comparison with Monte Carlo: I don't see why we should compare them for the same number of nodes/simulations.

As to the dimension problem, I have implemented the RBF method in 1 and 2 dimensions. I must say I find the method quite elegant to implement, as the extension to many dimensions is particularly convenient. In 2 dimensions I seem to need a few more nodes than in 1 dimension, but far from an exponential increase. So, at a given accuracy, although my RBF tends to lose against my FDM in 1 dimension, in 2 dimensions it starts being pretty efficient by comparison. Which brings one question, maybe more for Daniel: the nice runtime gain that you mention in your chapter on meshfree methods, is it with the matrix inversion at every time step or not? Because in a general situation, I think the matrix will be different at each time, so that it has to be inverted many times for one pricing; then it seems slower than FDM to me. But if we take constant coefficients and time steps, then the inversion can be done only once and the RBF can win. That does not look like a realistic and interesting test case to me, though, as most of the time the interest rates, volatility and time steps will not be constant.

5) The choice of the collocation points does indeed seem to be important, and I'm not sure how to do it. I have tried several distributions: a regular lattice, a quasi-random draw from a Sobol sequence, and a random draw following the true log-normal distribution. For a reason I do not understand, the Sobol points give me the best results. As for the interpolation problem you mention, I think your solution to it seems reasonable. In order to keep a meaningful collocation distribution through time, we may want to re-interpolate to new collocation points a few times for a long-term pricing with intermediate cash flows.

6) I am also wondering about the unit issue you mention for higher dimensions. Until now, my 2 spot dimensions were in the same "unit", as I considered them to be 2 spot equities with roughly the same order of magnitude. In the example of a hybrid model, where the 2 factors have completely different meanings, we might have a problem with the definition of the radial functions, although solving this might not be so difficult. We could imagine always working in reduced variables that have the same order of magnitude, such as an interest rate and the relative return of a spot equity. Or one could also think of having different shape coefficients in each space direction.

To share some more results and opinions, I would say that:

1) The main problem, from my point of view, is the inversion of ill-conditioned dense matrices. First of all, this is a very slow process; second, it's unstable. Hopefully the preconditioning fixes that.

2) Even if the RBF method were made to work in a stable and runtime-efficient way in practical situations, I still wouldn't see it as "getting the best of FD and Monte Carlo". Indeed, you would still have to work out these tedious boundary-condition issues, which are so easy to handle with Monte Carlo; and especially for path-dependent options, I guess Monte Carlo will remain much more convenient.
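The dense, ill-conditioned system at the heart of the method can be illustrated with a small Gaussian-RBF interpolation in 1D; the node count, shape parameter and test function below are illustrative choices, not from the thread:

```python
import numpy as np

def rbf_interpolate(x_nodes, f_vals, x_eval, eps):
    """Gaussian RBF interpolation: solve the dense system A w = f with
    A_ij = phi(|x_i - x_j|), phi(r) = exp(-(eps*r)^2), then evaluate
    s(x) = sum_j w_j phi(|x - x_j|). Returns (values, cond(A))."""
    A = np.exp(-(eps * (x_nodes[:, None] - x_nodes[None, :]))**2)
    w = np.linalg.solve(A, f_vals)      # dense; ill-conditioned for small eps
    B = np.exp(-(eps * (x_eval[:, None] - x_nodes[None, :]))**2)
    return B @ w, np.linalg.cond(A)

if __name__ == "__main__":
    nodes = np.linspace(0.0, 1.0, 30)
    xe = np.linspace(0.0, 1.0, 200)
    approx, cond = rbf_interpolate(nodes, np.sin(2 * np.pi * nodes), xe, eps=20.0)
    err = np.abs(approx - np.sin(2 * np.pi * xe)).max()
    print(err, cond)
```

Decreasing eps (flatter basis functions) improves accuracy for smooth targets but makes cond(A) explode, which is exactly the accuracy-versus-conditioning trade-off described in points 1) and 2).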

Hi sebgur,

Thanks for sharing your experience. I hope you'll keep getting nice results as you try higher dimensions and new option types; please keep us posted. Regarding your remark that the RBF method should converge quickly due to its spectral nature: well, this spectral thing can be a blessing but also a curse. What I have in mind is the Gibbs phenomenon, with its close but more frightening cousin, the Runge phenomenon. When the function to be approximated is smooth, spectral methods are beautiful. When you have some discontinuities, you generally get oscillatory phenomena. Since many options contain either early-maturity triggers (so they are cousins of barrier options) or some type of embedded binary option, discontinuous payoffs are a fact of life. I suppose figuring out how to deal with these discontinuities in the context of RBF would be a very interesting long-term project. Unfortunately nobody has undertaken it so far, or even mentioned that it's a problem at all.

Best,
V.
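The Runge phenomenon mentioned here is easy to reproduce with equispaced polynomial interpolation of the classic example f(x) = 1/(1 + 25x^2) on [-1, 1]; the node counts below are illustrative:

```python
import numpy as np

def runge_max_error(n):
    """Max error of the degree-n polynomial interpolating the Runge function
    1/(1+25x^2) at n+1 equispaced nodes on [-1, 1]."""
    nodes = np.linspace(-1.0, 1.0, n + 1)
    coeffs = np.polyfit(nodes, 1.0 / (1.0 + 25.0 * nodes**2), n)
    x = np.linspace(-1.0, 1.0, 2001)
    return np.abs(np.polyval(coeffs, x) - 1.0 / (1.0 + 25.0 * x**2)).max()

if __name__ == "__main__":
    for n in (5, 10, 15, 20):
        print(n, runge_max_error(n))   # error grows with n near the endpoints
```

Raising the degree makes the fit worse, not better, near the endpoints; this is the same edge-oscillation risk that discontinuous payoffs pose for spectral-type approximations.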

Hi,

For my engineering school, we have a project on radial basis functions. The teacher gave us the article "Option Pricing with Radial Basis Functions: A Tutorial", a Wilmott Magazine article by Alonso Peña. The project is to price options using radial basis functions. I code in C#; with XLW, I can export the C# functions to Excel, and I use dnAnalytics for the matrix operations. For the moment, I'm trying to price a European call. The problem I'm facing is that I get wrong results. I've taken these parameters:

N = 30, T = 1, M = 30, a = 0, b = 3.912023005, c = 0.53958938, E = 15, r = 0.05, sigma = 0.3, theta = 0.5, S = 15

and I got a call price of -1.524E+189. The problem is that all the students got bad results. The teacher took the time to look at my code and said it was correct. Here's the C# code:
