March 30th, 2016, 6:36 pm
Hello everybody,

I like stressing numerical methods to their accuracy limits and have been playing with my old pricer recently, which is how I found you guys here. I guess I'm a little late to the party, but surely outrun used some kind of witchery to get that many digits, so I think his results don't count and he should be burned at the stake :-) But seriously, as far as standard methods go, I see that a textbook PDE method has not been represented yet. That's a pity, because it gets me much closer to outrun's benchmarks, and faster, than the other methods seem to manage. So, just for reference, here's what one can do with Crank-Nicolson / Brennan-Schwartz.

Using NS = 25,000 and NT = 100,000 I get:

Alan case: 6.299596935
MJ case: 5.929804895

For both cases the grid goes up to 4 x Strike. That's 7-8 significant digits correct. Execution time is about 45 secs on a budget laptop, implementation in C++.

To squeeze out the maximum accuracy possible I applied Richardson extrapolation, using NT >> NS so that the time-discretization error is negligible compared to the spatial one and the spatial convergence can be observed on its own. NS was kept below 30,000, where convergence was smooth and perfectly quadratic. For higher NS things get worse and convergence eventually stops being monotonic. I'm assuming this is due to round-off error contamination, presumably creeping in during the LU substitutions; at some point I might test this assumption with a higher-precision library.

So for "fast" (100 secs per run) high precision I tried the following:

Alan case:
NS = 5,000, NT = 1 Mil: 6.299595083
NS = 6,000, NT = 1 Mil: 6.299595700
RE result: 6.299597102 (abs error: 4e-9)

MJ case:
NS = 5,000, NT = 1 Mil: 5.929805765
NS = 6,000, NT = 1 Mil: 5.929805495
RE result: 5.929804881 (abs error: 5e-9)

For maximum accuracy I used NT = 5 Mil and an NS range from 10,000 to 32,000.
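For anyone wanting to check the extrapolation arithmetic: with quadratic spatial convergence the error behaves like C/NS^2, so two runs on different grids can be combined in the standard way. This little helper (not my pricer, just the extrapolation formula) reproduces the RE numbers quoted above from the two raw values in each case:

```cpp
#include <cassert>
#include <cmath>

// Richardson extrapolation of two runs whose leading error is O(1/NS^2),
// i.e. V(NS) ~ V* + C/NS^2 for the quadratic spatial convergence observed
// below NS ~ 30,000. Eliminating C from the two grids gives
//   V* = (NS2^2 * V2 - NS1^2 * V1) / (NS2^2 - NS1^2).
double richardson(double V1, int NS1, double V2, int NS2) {
    double w1 = static_cast<double>(NS1) * NS1;  // NS1^2
    double w2 = static_cast<double>(NS2) * NS2;  // NS2^2
    return (w2 * V2 - w1 * V1) / (w2 - w1);
}
```

Feeding in the Alan-case pair (6.299595083 at NS = 5,000 and 6.299595700 at NS = 6,000) gives 6.299597102..., and the MJ-case pair gives 5.929804881..., matching the quoted RE results.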
The extrapolations averaged to:

Alan case, RE result: 6.29959710634
MJ case, RE result: 5.92980487655

Finally, tackling Cuchulainn's more extreme case, I get 0.164996416 in 0.5 secs (apparently more than 100 times faster than Tian's binomial, though that obviously depends on implementation and processor speed), then 0.1650017 in 1 sec, 0.1650031 in 2 secs, 0.1650044 in 20 secs, and finally, in about 10 min, 0.16500487. The time-discretization error here is much lower than in the high-vol cases, so I used far fewer time steps. I didn't try RE here.

If you want me to run another case just to double check, let me know. I could also do American/Bermudan barriers or Asians if someone's interested.
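In case it helps anyone reproduce the approach, here is a bare-bones sketch of the scheme: Crank-Nicolson in time, Brennan-Schwartz projected tridiagonal solve in space, for an American put on a grid up to 4 x Strike. This is not my pricer, and the parameters in the comments are made up for illustration, not the Alan/MJ/Cuchulainn cases:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Crank-Nicolson / Brennan-Schwartz sketch for an American put under
// Black-Scholes. Parameters and grid sizes are illustrative only.
double americanPutCN(double S0, double K, double r, double sigma, double T,
                     int NS, int NT) {
    const double Smax = 4.0 * K;           // grid goes up to 4 x Strike
    const double dS = Smax / NS, dt = T / NT;

    std::vector<double> V(NS + 1), payoff(NS + 1);
    for (int i = 0; i <= NS; ++i) {
        payoff[i] = std::max(K - i * dS, 0.0);
        V[i] = payoff[i];                  // terminal condition
    }

    // Coefficients of the spatial operator L at interior node i
    std::vector<double> lo(NS + 1), di(NS + 1), up(NS + 1);
    for (int i = 1; i < NS; ++i) {
        double s = i * dS;
        double A = 0.5 * sigma * sigma * s * s / (dS * dS);
        double B = 0.5 * r * s / dS;
        lo[i] = A - B;
        di[i] = -2.0 * A - r;
        up[i] = A + B;
    }

    std::vector<double> a(NS + 1), b(NS + 1), c(NS + 1), d(NS + 1);
    for (int n = 0; n < NT; ++n) {
        // Crank-Nicolson: (I - dt/2 L) V_new = (I + dt/2 L) V_old
        for (int i = 1; i < NS; ++i) {
            a[i] = -0.5 * dt * lo[i];
            b[i] = 1.0 - 0.5 * dt * di[i];
            c[i] = -0.5 * dt * up[i];
            d[i] = V[i] + 0.5 * dt *
                   (lo[i] * V[i - 1] + di[i] * V[i] + up[i] * V[i + 1]);
        }
        // Dirichlet boundaries for the put: V = K at S = 0, V = 0 at Smax
        d[1] -= a[1] * K;

        // Brennan-Schwartz: eliminate the superdiagonal from the top down...
        for (int i = NS - 2; i >= 1; --i) {
            double m = c[i] / b[i + 1];
            b[i] -= m * a[i + 1];
            d[i] -= m * d[i + 1];
        }
        // ...then substitute upward, projecting onto the payoff as we go,
        // so the sweep passes out of the (low-S) early-exercise region.
        V[1] = std::max(d[1] / b[1], payoff[1]);
        for (int i = 2; i < NS; ++i)
            V[i] = std::max((d[i] - a[i] * V[i - 1]) / b[i], payoff[i]);
        V[0] = K;
        V[NS] = 0.0;
    }

    // Linear interpolation of the grid solution to S0
    int i = std::min(static_cast<int>(S0 / dS), NS - 1);
    double w = (S0 - i * dS) / dS;
    return (1.0 - w) * V[i] + w * V[i + 1];
}
```

With NS and NT in the thousands-to-millions range as above, plus the Richardson step, one gets the convergence behaviour I described; the sketch keeps only the structure, not the tuning.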
Last edited by Billy7 on March 30th, 2016, 10:00 pm, edited 1 time in total.