April 4th, 2009, 9:12 am
I'm glad you like the "best enemy" approach. A true war story: my first trading system was tested with stuff I'd learned working as a hired gun for IBM, finding bugs in Microsoft's O/S code. This included the RoboTraders, running on a collection of machines, which submitted random trades at random high-frequency intervals, and which one day not long before the live date showed a deep hole... (You'd be better advised to use VMs these days, maybe VMware?)

The logging database went into a complete spasm that took days to diagnose and fix. Everything stopped, it was horrible, and it was my fault. I'd economised on the costs of this component subtree because it was supposed to be wholly decoupled from the real-time elements. Which it had been, until someone "optimised" something and it went synchronous: locks cut in, disks filled, everything stopped. The simulation crashed after 8 months of the expected volume of trading, pretty much the worst possible time.

But it made me look just so cool. It allowed me to say to my management, and my management to say to their management, that the system had been subjected to the most cruel and unusual stress. Thus my cost/time-saving gamble had only a tiny negative effect when it went down.

A lesson to be drawn from this is that a well-tested system is more efficient in both execution time and development cost. I model T/S development with the same tools as you would use for a portfolio. There are risks to be taken, but you need hedges and, most importantly, some control over the distribution of outcomes.

Consider a dilemma facing people wanting to use GPUs for high-speed number crunching. Almost by instinct we code in doubles, even when it is not strictly speaking necessary.
That's a risk/return decision based on the fact that double precision is typically not that much slower than single. But the cost of a calc that is subtly, yet significantly, wrong because we used SP is pretty large, both in financial terms and in how dumb it makes us look if it goes wrong that way. And in the world of GPUs, DP may not be available, or it may be 20 times slower. A really vicious test suite will allow you to experiment with non-obvious coding ploys when you need every millisecond (or hour) of performance, and be confident that you are hedged. That's not just GPUs, but anything that you think will make things better.

Also, test suites do not have to be fast. If you have a model there is absolutely no reason why you can't test it practically exhaustively. Most QF parameters are bounded within an order or two of magnitude. Create a regression harness that calculates "known good" results, to a few significant figures, across some vast range of combinations of the parameters. Every time someone makes any change to any component, run the new version against the old over a weekend; better still, do this every weekend. Any unexpected change is a cause for worry: either the original "good" code is wrong, or a subtle nasty has leaked in. Either fact is a precious thing to find, and it is cheap.

It's the sort of thing you give a newbie to do. Good learning experience, and not only are his screwups easily found, they don't do much harm. He can then be let loose on more critical stuff, because you have better safety barriers.

Of course, what counts as a "different" result is a question. Obviously a change in something like how you ingest market data should produce identical results down to the least significant bit. But a better/faster calculation will produce similar numbers, not similar enough that == is a useful operator. Thus you need to define an acceptable bound on the correctness of a result, and of course you did that already, right?
I do a bit of this in the CQF C++. It is quite tricky, but it is worth doing as an exercise to make sure you really understand what it is you want, and that you know it when you get it. Also, it goes without saying that as well as a deeply nested loop, which may take days to run (or several machines for longer), you should pepper it with random numbers.

Note I say two orders of magnitude for variables in this space. Although I am a pitiful headhunter, I can predict with absolute certainty that within the next 18 months the $/Euro rate will change by a factor of 10, as in 15 $ to the Euro, or vice versa. Or that the price of oil will hit $400 or $4, or that inflation will hit 20% or minus 2%, that vol will be an extreme number that if I write it now you will just laugh, or some other mad shit. I can guarantee mad shit will hit the fan; I just can't tell you from which direction it will be thrown.

This also reduces "key man" risk, where you only have one person who is trusted to touch the "important" stuff lest he screw it all up. In some ways it is good to be "key", but it can also be a pain, and as a manager you don't want anyone in your team to be critical but you (and yes, I do the politics of programming in the CQF as well).