
 
User avatar
endian675
Topic Author
Posts: 0
Joined: March 11th, 2008, 7:09 pm

Automated trading system - recommended book

March 27th, 2009, 5:54 pm

I am going to be building an automated trading and hedging system over the next few months. I was wondering if anyone knows of good texts on the mechanics of doing that: general pitfalls, design patterns, etc. I'm not so much interested in exchange connectivity or other specifics, more in ways to avoid "runaway trader" problems and things like that. Any help greatly appreciated.
 
User avatar
DominicConnor
Posts: 41
Joined: July 14th, 2002, 3:00 am

Automated trading system - recommended book

March 27th, 2009, 8:23 pm

I am reasonably confident that no single book exists on this. I fear you need to break this down into books on the various components, and at the risk of sounding self-important, the only stuff I've ever read on avoiding "runaway trader" and other pitfalls is the stuff I've put up on this forum, which is of course pathetically inadequate.

I will, however, share one critical architectural principle and one engineering discipline.

Do not design and build the system by what it does, but by what you can see. This class of system is never finished, let alone shrink-wrapped. You should be designing it as a collection of displays around which you hang functionality. That way you develop an intuition for "correct" internal behaviour, you can firefight problems as well as set up internal safety barriers, and, most importantly of all, you build up an understanding of what the safety barriers should be.

Next, you need to hire an enemy: someone who will take delight in finding obscure and demented ways in which your system could (in theory) go wrong. To misquote Tom Clancy, "You can only be betrayed by those you trust; everything else is just business." He will think up scenarios that you find ridiculous, like the FIX interface deciding to throw away randomly chosen messages and change their order, or the OS deciding to run your process at 1% of its normal speed, on top of using up all your memory. He will also build a test frame that runs 24/7 on random price data, trying to find holes by brute force.

The key to that relationship, and to your profitability and survival, is that he understands that the more he screws with your system, the bigger his bonus.
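A minimal sketch of the kind of hostile message layer such an enemy might bolt onto the test rig. Message, HostileFeed, and all the probabilities here are illustrative assumptions, not taken from any real FIX stack:

```cpp
// Sketch of a hostile message layer for the test rig: it randomly drops,
// duplicates, reorders, and stalls messages before the engine sees them.
// Message, HostileFeed, and the probabilities are illustrative only.
#include <cstddef>
#include <cstdio>
#include <deque>
#include <optional>
#include <random>
#include <string>

struct Message {
    long        seq;     // sequence number as received
    std::string payload; // raw message body
};

class HostileFeed {
public:
    explicit HostileFeed(unsigned seed) : rng_(seed) {}

    // Feed a real message in; it may be dropped, duplicated, or reordered.
    void push(Message m) {
        const double roll = uniform_(rng_);
        if (roll < 0.01) return;                 // ~1%: silently drop it
        if (roll < 0.03) buffer_.push_back(m);   // ~2%: deliver it twice
        if (roll < 0.10 && !buffer_.empty()) {   // ~7%: scramble the order
            std::uniform_int_distribution<std::size_t> pos(0, buffer_.size());
            buffer_.insert(buffer_.begin() + pos(rng_), std::move(m));
        } else {
            buffer_.push_back(std::move(m));
        }
    }

    // The engine pulls from here instead of from the real feed.
    std::optional<Message> pop() {
        // ~5%: pretend the line went quiet and deliver nothing this cycle.
        if (buffer_.empty() || uniform_(rng_) < 0.05) return std::nullopt;
        Message m = std::move(buffer_.front());
        buffer_.pop_front();
        return m;
    }

private:
    std::mt19937 rng_;
    std::uniform_real_distribution<double> uniform_{0.0, 1.0};
    std::deque<Message> buffer_;
};

int main() {
    HostileFeed feed(1234);
    for (long i = 0; i < 100; ++i) feed.push({i, "tick"});
    long delivered = 0;
    while (auto m = feed.pop()) ++delivered;  // a real engine consumes here
    std::printf("delivered %ld of 100 messages\n", delivered);
}
```

Wiring the engine to pop() from something like this, running 24/7 on random price data, is exactly the brute-force abuse described above: any state the engine cannot recover from is a hole found in the lab rather than in production.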
 
User avatar
endian675
Topic Author
Posts: 0
Joined: March 11th, 2008, 7:09 pm

Automated trading system - recommended book

March 30th, 2009, 9:41 am

Dominic, thanks for those helpful insights. Your idea about building the system around a series of displays is a very good one. I had considered first building a "semi-auto" system, which generates trades but requires my or another user's input to release them, and which could be made fully automatic once it had proven itself. I'll have to ponder who I can find to be my best "enemy"; I'm sure there's someone on our desk who will be suitable :-D
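A minimal sketch of what that semi-auto gate might look like. Order, ReleaseGate, and sendToMarket are hypothetical names; a real gate would also want per-order rejection, size limits, and an audit trail:

```cpp
// Sketch of a "semi-auto" release gate: generated orders queue up until a
// human releases them; once the system has proven itself, the gate can be
// flipped to auto-release. Order and sendToMarket are illustrative stubs.
#include <deque>
#include <mutex>
#include <string>

struct Order {
    std::string symbol;
    double      qty;
    double      limit;
};

class ReleaseGate {
public:
    // Strategy code submits here; nothing reaches the market uninspected
    // unless auto-release has been explicitly enabled.
    void submit(Order o) {
        std::lock_guard<std::mutex> lock(mu_);
        pending_.push_back(std::move(o));
        if (autoRelease_) releaseAllLocked();
    }

    // Called from the operator's screen when a human approves the queue.
    void approveAll() {
        std::lock_guard<std::mutex> lock(mu_);
        releaseAllLocked();
    }

    // Flip only once the system has earned trust; off by default.
    void setAutoRelease(bool on) {
        std::lock_guard<std::mutex> lock(mu_);
        autoRelease_ = on;
    }

private:
    void releaseAllLocked() {
        while (!pending_.empty()) {
            sendToMarket(pending_.front()); // stub: wire to real execution
            pending_.pop_front();
        }
    }
    static void sendToMarket(const Order&) { /* stub for the sketch */ }

    std::mutex mu_;
    std::deque<Order> pending_;
    bool autoRelease_ = false;
};

int main() {
    ReleaseGate gate;
    gate.submit({"EURUSD", 1e6, 1.3250}); // queued, waiting for a human
    gate.approveAll();                    // operator releases the queue
    gate.setAutoRelease(true);            // later, once trust is earned
    gate.submit({"EURUSD", 1e6, 1.3260}); // now goes straight out
}
```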
 
User avatar
yeswici
Posts: 0
Joined: March 28th, 2009, 10:21 pm

Automated trading system - recommended book

March 30th, 2009, 12:15 pm

I have been trying to build an auto-trading system for a while, using a platform (www.yeswici.com) for modelling and computing. I believe we need to (1) craft a sound strategy from research; (2) integrate with a trading platform such as IB. Please keep us posted.
Last edited by yeswici on March 29th, 2009, 10:00 pm, edited 1 time in total.
 
User avatar
DominicConnor
Posts: 41
Joined: July 14th, 2002, 3:00 am

Automated trading system - recommended book

April 4th, 2009, 9:12 am

I'm glad you like the "best enemy" approach. A true war story: my first trading system was tested with stuff I'd learned working as a hired gun for IBM, finding bugs in Microsoft's O/S code. That included the RoboTraders, running on a collection of machines (today you'd be better advised to use VMs, maybe VMware?), which submitted random trades at random high-frequency intervals, and which one day, not long before the live date, exposed a deep hole...

The logging database went into a complete spasm that took days to diagnose and fix. Everything stopped, it was horrible, and it was my fault. I'd economised on this component subtree because it was supposed to be wholly decoupled from the real-time elements. Which it had been, until someone "optimised" something and it went synchronous: locks cut in, disks filled, everything stopped. The simulation crashed after eight months of the expected volume of trading, pretty much the worst possible time. But it made me look just so cool. It allowed me to say to my management, and my management to say to their management, that the system had been subjected to the most cruel and unusual stress. Thus my cost/time-saving gamble had only a tiny negative effect when it went down.

A lesson to be drawn from this is that a well-tested system is more efficient in both execution time and development cost. I model trading-system development with the same tools as you would use for a portfolio: there are risks to be taken, but you need hedges and, most importantly, some control over the distribution of outcomes.

Consider a dilemma facing people wanting to use GPUs for high-speed number crunching. Almost by instinct we code in doubles, even when it is not strictly speaking necessary. That's a risk/return decision, based on the fact that double precision is typically not that much slower than single. But the cost of a calc that is subtly yet significantly wrong because we used SP is pretty large, both in financial terms and in how dumb it makes us look if it goes wrong that way. And in the world of GPUs, DP may not be available, or it may be 20 times slower. A really vicious test suite will allow you to experiment with non-obvious coding ploys when you need every millisecond (or hour) of performance, and be confident that you are hedged. That's not just GPUs, but anything you think will make things better.

Also, test suites do not have to be fast... If you have a model, there is absolutely no reason why you can't test it practically exhaustively. Most QF parameters are bounded within an order or two of magnitude. Create a regression harness that calculates "known good" results across some vast range of combinations of the parameters, to a few significant figures. Every time someone makes any change to any component, run the new version against the old over a weekend; better still, do this every weekend. Any unexpected change is a cause for worry: either the original "good" code is wrong, or a subtle nasty has leaked in. Either fact is a precious thing to find, and it is cheap to find it this way.

It's the sort of thing you give a newbie to do. It's a good learning experience, and not only are his screwups easily found, they don't do much harm. He can then be let loose on more critical stuff, because you have better safety barriers.
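A minimal sketch of such a weekend harness. priceOld and priceNew are hypothetical stand-ins for the known-good build and the changed build, and the grid and six-figure tolerance are illustrative:

```cpp
// Sketch of a weekend regression harness: sweep a coarse grid over the
// model parameters, compare the changed build against the known-good one
// to a few significant figures, and flag every unexpected change.
#include <algorithm>
#include <cmath>
#include <cstdio>

// Hypothetical stand-ins for the "known good" build and the changed build;
// replace these with the real models under test.
double priceOld(double spot, double vol, double rate, double tau) {
    return spot * vol * std::exp(-rate * tau);   // toy placeholder formula
}
double priceNew(double spot, double vol, double rate, double tau) {
    return spot * vol * std::exp(-rate * tau);   // toy placeholder formula
}

// Agreement "to a few significant figures": here roughly six digits.
bool closeEnough(double a, double b, double relTol = 1e-6) {
    const double scale = std::max(std::fabs(a), std::fabs(b));
    return std::fabs(a - b) <= relTol * std::max(scale, 1.0);
}

int main() {
    int failures = 0;
    // Most QF parameters live within an order of magnitude or two, so a
    // coarse multiplicative grid covers the plausible range cheaply.
    for (double spot = 1.0;       spot <= 1e4;  spot *= 10.0)
    for (double vol  = 0.01;      vol  <= 4.0;  vol  *= 2.0)
    for (double rate = -0.02;     rate <= 0.20; rate += 0.02)
    for (double tau  = 1.0/365.0; tau  <= 30.0; tau  *= 3.0) {
        const double was = priceOld(spot, vol, rate, tau);
        const double now = priceNew(spot, vol, rate, tau);
        if (!closeEnough(was, now)) {
            ++failures;
            std::printf("MISMATCH spot=%g vol=%g rate=%g tau=%g: %g vs %g\n",
                        spot, vol, rate, tau, was, now);
        }
    }
    // Any mismatch means the old code was wrong or a nasty leaked in.
    return failures == 0 ? 0 : 1;
}
```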
Of course, what counts as a "different" result is itself a question. A change in something like how you ingest market data should obviously produce results identical down to the least significant bit. But a better / faster calculation will produce similar numbers, just not similar enough that == is a useful operator. Thus you need to define an acceptable bound on the correctness of a result, and of course you did that already, right? I do a bit of this in the CQF C++. It is quite tricky, but it is worth doing as an exercise, to make sure you really understand what it is you want and that you will know it when you get it.

Also, it goes without saying that, as well as a deeply nested loop that may take days to run (or several machines for longer), you should pepper the harness with random numbers.

Note that I say two orders of magnitude for variables in this space. Although I am a pitiful headhunter, I can predict with absolute certainty that within the next 18 months the $/Euro rate will change by a factor of 10, as in 15 $ to the Euro or vice versa; or the price of oil will hit $400 or $4; or inflation will hit 20% or minus 2%; or vol will be an extreme number that, if I wrote it now, you would just laugh at; or some other mad shit. I can guarantee mad shit will hit the fan, I just can't tell you from which direction it will be thrown.

This approach also reduces "key man" risk, where only one person is trusted to touch the "important" stuff lest he screw it all up. In some ways it is good to be "key", but it can also be a pain, and as a manager you don't want anyone in your team to be critical but you (and yes, I do the politics of programming in the CQF as well).
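To make the tolerance bound concrete, and tie it back to the SP/DP trade-off above, here is a minimal sketch; discountD / discountF are toy stand-ins, and both the parameter ranges and the bound are illustrative assumptions:

```cpp
// Sketch of an "acceptable bound" test applied to the SP-vs-DP trade-off:
// run a float variant against the double reference over random inputs and
// check the worst relative error stays inside a bound chosen in advance.
// discountD / discountF are toy stand-ins for the real calculation.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>

double discountD(double r, double t) { return std::exp(-r * t); }
float  discountF(float r, float t)   { return std::exp(-r * t); }

int main() {
    std::mt19937 rng(42); // fixed seed so a failing run can be reproduced
    std::uniform_real_distribution<double> rate(-0.02, 0.20);
    std::uniform_real_distribution<double> tenor(1.0 / 365.0, 30.0);

    const double bound = 1e-5; // decided before the run, never after it
    double worst = 0.0;

    for (int i = 0; i < 10000000; ++i) {
        const double r = rate(rng), t = tenor(rng);
        const double ref = discountD(r, t);
        const double sp  = discountF(static_cast<float>(r),
                                     static_cast<float>(t));
        const double rel =
            std::fabs(sp - ref) / std::max(std::fabs(ref), 1e-12);
        worst = std::max(worst, rel);
    }
    std::printf("worst relative error: %.3g (bound %.3g)\n", worst, bound);
    return worst <= bound ? 0 : 1; // nonzero exit: the SP ploy is not hedged
}
```

If the worst error stays inside the bound over tens of millions of random inputs, the single-precision ploy is hedged in exactly the sense described above; if not, the test has told you so before the market did.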