February 19th, 2005, 4:27 pm
Could it be that the problem arises from the fact that in that procedure we end up taking the inverse cumulative normal of numbers in (0,1) and then again the cumulative normal of numbers nearby (or vice versa), thus losing resolution for Gaussians that are significantly positive? This is a common problem raising its head in many areas, so maybe it is what gives rise to the instabilities you observe.

The workaround I have been using in this context for years is as follows: whenever I need to do something like Phi^-1( Prob(z<x) ) for given x and known distribution of z, I branch switch. If P = Prob(z<x) < 1/2, I compute Phi^-1(P) straightaway; otherwise, I compute Q = Prob(z>x) directly from the law of z and then evaluate -Phi^-1(Q). The key is to compute the complementary term Q directly, without ever subtracting two numbers that are both near one from each other. I have not yet seen an application where this approach was not applicable. One must, of course, not simply call a library function for all the required terms, though: you have to step through and ensure that the algorithms involved at no point use complementary expressions such as Q = 1 - P. In other words, in terms of standard option theory, you must never use put-call parity as an aid behind the scenes of your calculation. By the way: this is why I would never advocate using put-call parity in an implementation; sooner or later the loss of accuracy in the wings will come back to haunt you ;-(, especially during calibration to implied volatilities in the far-away tails.

Does this possibly help for your common-factor-copula-CDO-delta calculations?

Best regards,
pj
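
A minimal sketch of the branch-switching idea described above, using SciPy's normal-distribution routines. The exponential distribution chosen for z is purely an assumed example; the essential point is that the complementary probability Q is obtained directly from the survival function sf(), never as 1 - cdf().

```python
import numpy as np
from scipy.stats import norm, expon

def gaussian_map(x, dist=expon):
    """Compute Phi^-1( Prob(z < x) ) without losing resolution in the upper tail."""
    p = dist.cdf(x)            # P = Prob(z < x)
    if p < 0.5:
        return norm.ppf(p)     # lower branch: Phi^-1(P) is accurate here
    q = dist.sf(x)             # Q = Prob(z > x), computed directly, NOT as 1 - P
    return -norm.ppf(q)        # upper branch: -Phi^-1(Q) preserves tail accuracy

def gaussian_map_naive(x, dist=expon):
    """Naive version for comparison: resolution degrades as cdf(x) approaches 1."""
    return norm.ppf(dist.cdf(x))

if __name__ == "__main__":
    for x in (1.0, 10.0, 40.0):
        print(x, gaussian_map(x), gaussian_map_naive(x))
    # For x = 40, expon.cdf(x) rounds to 1.0 in double precision, so the naive
    # version returns inf, while the branch-switched version still returns a
    # finite, accurate value from Q = exp(-40).
```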