Just thought I'd share the benefits of some C++ coding I did with the forum. Before I worked in finance, I worked for a few years in a CFD (computational fluid dynamics) company. I took one of the basic finite difference discretisation schemes (the MacCormack scheme), used in CFD for around 15 years, and implemented a numerical solution to Black-Scholes. The scheme is fully explicit, so it lends itself easily to American-style exercise, yet is second-order accurate in time and space, just like Crank-Nicolson. I used two forms: the non-conservative (NC) form, which most people are used to seeing, and the conservative formulation (C), which should help to reduce some of the error due to discretisation in S.

For a call option with K = 80.0, sigma = 0.3, r (risk-free rate) = 0.07, S = 100 and T = 0.5, I get the following analytic result:

Analytic: Value = 23.758, Delta = 0.907077, Gamma = 0.0078387, Theta = -8.21389

and the following results for the non-conservative formulation:

Numerical (NC) - dS = 0.1, solution time 103.819s: Value = 23.758, Delta = 0.907076, Gamma = 0.00783868, Theta = -8.21388
Numerical (NC) - dS = 1.0, solution time 0.032349s: Value = 23.7583, Delta = 0.907054, Gamma = 0.00783643, Theta = -8.21291
Numerical (NC) - dS = 5.0, solution time 0.000827s: Value = 23.7357, Delta = 0.906996, Gamma = 0.00777608, Theta = -8.19161

What can be seen is that the result is pretty much spot on for dS = 0.1, but still quite accurate for a coarse spacing such as dS = 1.0 with a sub-second solution time, and even for dS = 5.0 with a solution time under a millisecond.
The conservative results are shown below:

Numerical (C) - dS = 0.1, solution time 195.699s: Value = 23.758, Delta = 0.907077, Gamma = 0.00783868, Theta = -8.21388
Numerical (C) - dS = 1.0, solution time 0.090287s: Value = 23.7582, Delta = 0.907057, Gamma = 0.00783633, Theta = -8.2128
Numerical (C) - dS = 5.0, solution time 0.001376s: Value = 23.7343, Delta = 0.907071, Gamma = 0.00777471, Theta = -8.18929

The results are a little better in most cases, especially in the delta, but it seems the extra accuracy doesn't warrant the extra solution time. This formulation should, however, fare a lot better than the non-conservative form when there is variable volatility, sigma = f(S). Finally, these can be compared to the results I got from my Crank-Nicolson solver:

Numerical (CN) - dS = 0.1, solution time 14.0876s: Value = 23.7546, Delta = 0.907130, Gamma = 0.007839, Theta = -8.21444
Numerical (CN) - dS = 1.0, solution time 1.18434s: Value = 24.6647, Delta = 0.914685, Gamma = 0.007283, Theta = -8.08372
Numerical (CN) - dS = 5.0, solution time 0.235688s: Value = 28.3642, Delta = 0.939682, Gamma = 0.005295, Theta = -7.549203

I have attached the C++ code for both schemes; it should compile on either Windows or Linux. I develop on Linux/Unix but have attempted to keep the code portable. Simply change the parameters in the sections "Common" and "finite difference" to adjust the option parameters and space discretisation; the time step is calculated automatically. You can also change the option type (call, put, etc.) in a section further down. Then compile and get the results. The code was pretty much done over two days, so bear in mind it is a bit rough around the edges, but I would be interested to see what you think of the scheme and how it fares against some of the FD solvers you guys are using.

Thanks, Blade
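For readers unfamiliar with the distinction, here is a sketch of what the two formulations look like. This is my own derivation from the standard Black-Scholes PDE written in time-to-maturity $\tau$, not necessarily the exact form used in the attached code:

$$
\frac{\partial C}{\partial \tau}
= \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 C}{\partial S^2}
+ r S \frac{\partial C}{\partial S} - r C
\qquad \text{(non-conservative)}
$$

$$
\frac{\partial C}{\partial \tau}
= \frac{\partial}{\partial S}\!\left[\tfrac{1}{2}\sigma^2 S^2 \frac{\partial C}{\partial S}
+ (r - \sigma^2)\, S\, C\right] - (2r - \sigma^2)\, C
\qquad \text{(conservative)}
$$

For constant $\sigma$ the two are algebraically identical (expanding the flux derivative recovers the first equation term by term). When $\sigma = \sigma(S)$, discretising the flux form keeps the discrete fluxes balanced between neighbouring cells, which is where the reduced discretisation error in S comes from.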

- Cuchulainn

Hi Blade,

Are performance and accuracy the two criteria by which your results should be judged? If your code is proof-of-concept then fine. Remembering that all ye young fellas out there cost a bucket of money to maintain, speaking as a software manager, some other QUALITY features are important:

- Maintainability (easy to change the code)?
- Portability (Linux and Win)
- Scalability (it works for Heston, jumps, n-factor)
- etc.

It's always possible to get things working; the real challenge is when we go into production. Example: does it work in 15 dimensions? It would be nice if you could display your results in Excel and gnuplot. Maybe a discussion for after the break. Happy new year.

"Compatibility means deliberately repeating other people's mistakes."

David Wheeler

http://www.datasimfinancial.com

http://www.datasim.nl


- Cuchulainn

Blade, a suggestion: design your code around a design pattern for future flexibility?

"Compatibility means deliberately repeating other people's mistakes."

David Wheeler

http://www.datasimfinancial.com

http://www.datasim.nl


Cuchulainn, of course the code is not production code; it is more proof of concept. No one would put everything in one main function for production code, but it is far easier for people to compile the code cross-platform this way, so I've left it like this (portability). The key concept is the discretisation scheme, which, once understood, can easily be integrated into any FD solver. And yes, it is fine for multiple dimensions - it has been used in CFD (equivalent to three-factor) for around 15 years - but I've yet to see any FD solver do 15 dimensions. I thought that was always left to Monte Carlo methods.

- Cuchulainn

> I've yet to see any FD solver do 15 dimensions

It was a bit of a joke; someone once said it was possible!! Of course not, but people do say the strangest things. Do you know the Van Leer scheme (finite volume)? It's a very good one. Does MacCormack handle mixed derivatives (correlation) well?

"Compatibility means deliberately repeating other people's mistakes."

David Wheeler

http://www.datasimfinancial.com

http://www.datasim.nl


The Van Leer scheme is actually not a discretisation scheme but a flux-limiting scheme. It can be applied to any discretisation scheme that uses a flux formulation, a form which is rare in finance. The reason for using this approach is to ensure that the result is total variation diminishing (TVD), i.e. that the total variation, defined by

TV(u) = Sum_n | u(n+1) - u(n) |,

does not increase in time: TV(t+1) <= TV(t). This ensures that you get a monotone scheme and don't get the spurious oscillations often associated with sharp gradients in a finite difference solution.

MacCormack does handle second derivatives and mixed derivatives pretty well. The gist of the scheme is:

a) Predictor step - the values at time t are used to form a value for dC/dt. A provisional value for the call price at t+1 is then found as C + dt * dC/dt. Any mixed derivatives are evaluated at time t.

b) Corrector step - the provisional values are used to find dC/dt at time t+1. Any mixed derivatives are again evaluated at t+1. The new value for C is then the original price C at time t plus the time step times the average of dC/dt from both steps.

Any good CFD site or book should describe this method in detail, as it has been around for a while now.

- Cuchulainn

Blade, sounds good. I have found that predictor-corrector methods are very useful in all kinds of problems, including cross derivatives, PIDEs, etc. Can you recommend a good book on your methods please?

David Wheeler

http://www.datasimfinancial.com

http://www.datasim.nl

Two good books are "An Introduction to Computational Fluid Dynamics: The Finite Volume Method" by Versteeg and Malalasekera, and "Computational Fluid Dynamics: The Basics with Applications" by Anderson. I got the method above from the second book, but both should have it. You can get some more books from http://www.cfd-online.com/Books/browse_ ... Unfortunately, some of the best basic notes I have are in hardcopy from my Computational Fluid Dynamics course at Imperial. I did Aeronautical Engineering and took this as one of my electives. At the time it seemed tough and, to be honest, a bit pointless, but if I saw my lecturers now I'd definitely be buying them a drink, because the material has come in very useful.

- Cuchulainn

> buying them a drink because they have come in very useful.

Absolutely, you are on a winner, and one of a select few if I may say so!

David Wheeler

http://www.datasimfinancial.com

http://www.datasim.nl

Just a few comments from my PhD time, when I used to solve heat, mass and momentum transfer equations as well as stress/strain equations. I noted that the discretisation and the numerical scheme used to solve the equations were interconnected; i.e. change the discretisation and your numerical scheme might fall over (even though the stability criteria were observed). I never had the patience to look into it in more detail, but I suspect that the material properties (and their dependence on temperature etc.) were critical.

In a financial context this could mean that if one starts to extend models to include, say, a dependence of volatility on interest rates, the numerical solution might not give correct results: errors might be introduced but go unnoticed. As a matter of practice, I now use a standard Runge-Kutta method to get a second solution. Runge-Kutta is crude, but seems to get you there in most cases.

One of the biggest problems when people use any sort of numerical model, including finite differences, is that they instantly believe the answer. It takes time to learn where certain solutions may go wrong and how to avoid those pitfalls. So far with the work I've done, I've got code for analytic, binomial, FD and Monte Carlo solutions, and for European options I can get them all to converge to the same price and Greek values, which gives me some confidence in the results I've obtained. For American options, I can compare my binomial and FD methods, which again converge pretty much to the same answer (to 3-4 dp anyway). The Runge-Kutta method does sound like another good check to add to my arsenal, as I'm sure I'll soon start running into difficulties with closed-form solutions once things get more complex. Thanks for the tip!!

Try this link: http://www.ae.gatech.edu/~sruffin/courses/ae60 ... ine.htm

The notes are handwritten, so they will need some patience.

- Cuchulainn

> I now use a standard Runge-Kutta method

Which one do you use? There are many, including adaptive meshing. What are the reasons that RK is 'crude'? (I've seen cruder!!)

David Wheeler

http://www.datasimfinancial.com

http://www.datasim.nl


I used 4th-order Runge-Kutta with adaptive step-size control. The idea is that the error is kept within pre-defined limits: the step size is decreased when the error gets too big, and increased when the error gets smaller and smaller, thereby improving computational efficiency.

The method is "crude" in the sense that convergence can be slow (hence it can take time to get a solution); "crude" might be the wrong word here - the solutions can be as good as those obtained by other methods. One key point I like about Runge-Kutta is that it is comparatively simple to implement, and when you're not quite sure how the equations behave, Runge-Kutta can "detect" instabilities: if your step size suddenly drops, it suggests that something is happening and I can have a closer look, using either analytical or numerical methods.