September 20th, 2006, 1:19 am
I used to vet. In general it's not possible to test every state that could give a model trouble. Think of an option price as a map from R^n to R^1: naturally you can't test all possible inputs. A good vetting report will include a market-assumptions section (with the ranges employed). Outside the tested combinations, you assume things are still OK. It proves nothing more than "I looked here, and it was OK". For inspiration, think of the Pentium's FDIV bug. I bet you didn't know that every new processor ships with a 40-page list of errata nowadays. And those are just the known problems, on release date.

David's comments about testing corner cases are a good metric. I did this when I vetted and usually found tons of problems, and generally the line/the vendor never appreciated the feedback. If you've got the barrier-option formulas from Hull's book in your library, try typing in 1% or 0.1% as a vol and see what happens. (It's the "exp" statement that causes the grief, btw: exp(+738) overflows to inf in IEEE double arithmetic, since anything past about exp(709) does. There's a sketch of the failure mode at the bottom of this post.) Who's to blame? The coder? The user?

I wrote up a "numerical analysis for dummies" list at a prior workplace. Every "/" and "exp" statement is potentially dangerous. Even subtraction can be dangerous, given the potential for catastrophic cancellation (second sketch below). Defensive code is ugly code, but if you want to write production code, you have to program defensively.

BTW, I am firmly in the school that a program should complain like hell and/or blow up when given illegal inputs, rather than force some "default behavior". Banks are crufty places, and silently broken things have a habit of staying broken for years on end. In my own code I usually have the "rotating knives / no safety" layer of library code, and overlay that with a whack of validation code when interfacing to, say, Excel (third sketch below).
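To make the barrier-formula blowup concrete, here's a minimal sketch of the failure mode (my own numbers and variable names, not a transcription of Hull). The formulas involve terms like (H/S)^(2*lambda) with lambda = (r + sigma^2/2)/sigma^2, and lambda explodes as the vol goes to zero:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    double S = 100.0, H = 120.0;   // spot below an up-barrier
    double r = 0.05;
    double sigma = 0.001;          // the "0.1% vol" input

    // lambda ~ r / sigma^2 for tiny vol: about 5e4 here
    double lambda = (r + 0.5 * sigma * sigma) / (sigma * sigma);

    // pow() is effectively exp(2*lambda*log(H/S)); the exponent is
    // roughly 2 * 5e4 * log(1.2) = 1.8e4, far past the ~709 where
    // exp() overflows an IEEE double.
    double term = std::pow(H / S, 2.0 * lambda);

    std::printf("lambda = %g, (H/S)^(2*lambda) = %g\n", lambda, term); // inf
    return 0;
}
```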
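On the subtraction point, the classic illustration (not from my old list, just the standard one) is the textbook quadratic formula, where subtracting two nearly equal numbers destroys every significant digit. The rearranged form is algebraically identical but has no subtraction:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Roots of x^2 + b*x + c with b = 1e9, c = 1: near -1e9 and -1e-9.
    double b = 1e9, c = 1.0;
    double disc = std::sqrt(b * b - 4.0 * c);   // rounds to exactly 1e9

    double naive  = (-b + disc) / 2.0;      // catastrophic cancellation
    double stable = -2.0 * c / (b + disc);  // same root, no subtraction

    std::printf("naive  = %.17g\n", naive);   // 0: every digit lost
    std::printf("stable = %.17g\n", stable);  // -1e-09, correct
    return 0;
}
```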
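And the two-layer setup looks roughly like this (a sketch with illustrative names, using plain Black-Scholes as a stand-in): the inner routine runs with no safety at all, and the checking layer sits at the spreadsheet boundary and throws rather than inventing a default:

```cpp
#include <cmath>
#include <stdexcept>

// Inner "rotating knives" layer: assumes inputs were validated upstream.
double bs_call_raw(double S, double K, double T, double r, double sigma) {
    auto N = [](double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); };
    double d1 = (std::log(S / K) + (r + 0.5 * sigma * sigma) * T)
                / (sigma * std::sqrt(T));
    double d2 = d1 - sigma * std::sqrt(T);
    return S * N(d1) - K * std::exp(-r * T) * N(d2);
}

// Outer layer for the Excel boundary: complain like hell on garbage.
// Note !(x > 0) rather than (x <= 0): the former also catches NaN.
double bs_call_checked(double S, double K, double T, double r, double sigma) {
    if (!(S > 0.0) || !(K > 0.0))
        throw std::invalid_argument("spot and strike must be positive");
    if (!(T > 0.0))
        throw std::invalid_argument("expiry must be positive");
    if (!(sigma > 0.0) || !std::isfinite(sigma))
        throw std::invalid_argument("vol must be positive and finite");
    if (!std::isfinite(r))
        throw std::invalid_argument("rate must be finite");
    return bs_call_raw(S, K, T, r, sigma);
}
```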