
 
molecool

Real Time Support Vector Machine

August 21st, 2014, 9:52 am

Hey guys - first post here, so be gentle ;-)

I have been immersing myself in ANNs and SVMs recently - it's like a whole new world has opened up to me. So I'm trying to put together a basic module, and I came across this idea: basically someone has done what I always imagined doing - wiring an SVM directly into live data and training/predicting on the fly. Have any of you guys ever tried that? Any input would be welcome.

FYI - I'm a pretty mad coder but not a math genius, at least compared to you quant guys. Thanks in advance.
Last edited by molecool on August 20th, 2014, 10:00 pm, edited 1 time in total.
 
taylan

Real Time Support Vector Machine

August 21st, 2014, 2:06 pm

What you are describing has been quite a hot topic in the machine learning community; the term used for it is 'online learning'. There are online implementations of many machine learning algorithms, and SVMs are no exception. Once you dive into this, however, you need to be really careful about over-fitting. The learning period becomes an important factor affecting performance, and local minima might become an issue. After all, what you want is to generalize from past data, and if your learning period is too short your algorithm might become unstable. There are, however, heuristics that can help optimize this.

Having said that, I think you're on the right track.
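To make the 'learning period' trade-off concrete, here is a very rough sketch of the rolling-window approach - in Python with scikit-learn's SVR (a wrapper around LIBSVM) rather than the Encog/Java stack discussed here, and with a made-up lagged-return feature set, so treat every name and number below as an assumption rather than a recommendation:

Code:
    import numpy as np
    from sklearn.svm import SVR

    WINDOW = 500   # "learning period": number of recent observations to train on (assumption)
    N_LAGS = 5     # lagged log-returns used as features (purely illustrative)

    def make_dataset(prices):
        """Lagged log-returns as features, next-step return as the target."""
        rets = np.diff(np.log(prices))
        X = np.array([rets[t - N_LAGS:t] for t in range(N_LAGS, len(rets))])
        y = rets[N_LAGS:]
        return X, y, rets

    def rolling_predict(prices):
        """Refit on the most recent WINDOW rows only, then predict the next return."""
        X, y, rets = make_dataset(prices)
        X_train, y_train = X[-WINDOW:], y[-WINDOW:]
        # Normalize with training-window statistics only, to avoid look-ahead bias.
        mu, sd = X_train.mean(axis=0), X_train.std(axis=0) + 1e-12
        model = SVR(kernel="rbf", C=1.0, gamma=0.5, epsilon=1e-4)
        model.fit((X_train - mu) / sd, y_train)
        x_next = (rets[-N_LAGS:] - mu) / sd     # the latest N_LAGS returns
        return float(model.predict(x_next.reshape(1, -1))[0])

Shrink WINDOW and the model adapts faster but is more prone to fitting noise; grow it and it is steadier but slower to react - exactly the stability trade-off described above.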
Last edited by taylan on August 20th, 2014, 10:00 pm, edited 1 time in total.
 
molecool

Real Time Support Vector Machine

August 21st, 2014, 3:17 pm

Thanks for the response, taylan. Let me preface the following with an obligatory disclaimer - until a month ago I didn't know anything about ANNs, so chances are I'm still a bit out of my depth ;-)

That said, after having absorbed a ton of white papers on the subject, my understanding is that SVMs are very resistant to over-fitting. Since they are based on the structural risk minimization principle, SVMs attempt to find an optimal model complexity for a finite sample. In my mind they follow a hyperbolic-geometry approach - which is something I had actually pondered well before learning about SVMs. That is why I was rather elated when I came across this: I always suspected that market prediction was a non-linear problem but didn't know how to solve it (and my math skills are weak).

I cut my teeth on some basic ANN demos and realized their inherent limitations rather quickly. ANNs require nonlinear optimization, with the danger of getting stuck at local minima. That is why I have completely abandoned them in favor of SVMs, as I believe the latter offer a far superior approach. But then again, I may be jumping to conclusions (see disclaimer above).

Anyway, this is what I'm up to: I'm currently hacking on the latest version of Encog, which unfortunately has some inherent limitations. Mainly, the SVM engine's gamma and C appear to be hardcoded to 1.0/0.5 no matter which parameters are fed in. That is easily fixed since I have the source, but I wonder if I should perhaps switch to LIBSVM directly - Encog uses LIBSVM internally, but an older version. I have no idea how much faster/better/more optimized the recent versions of LIBSVM are. If anyone can provide guidance on that, it would be appreciated.

Per your comment regarding online learning with SVMs - I'm still a bit soft on how to feed live data into an SVM and have it predict at the same time. With an ANN it's normalize -> train -> predict. I understand that a hardcoded gamma/C setting offers an almost real-time SVM, but I don't know how to plug it all together. Does it make sense to produce a hybrid training solution that has a basic SVM with an alternative engine that gets trained on the fly? I know some of you guys are doing this and I don't expect to be handed any secrets, but a few hints on how to architect it and what pitfalls to avoid would help tremendously. Many thanks in advance.
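One rough way to picture the 'hybrid' arrangement asked about above - keep predicting with the last fitted model while a replacement is refit on the newest window - again sketched in Python with scikit-learn's SVR (which wraps LIBSVM) rather than Encog; the window size, retrain interval, gamma and C values are placeholders, not recommendations:

Code:
    import threading
    import numpy as np
    from sklearn.svm import SVR

    class OnlineSVM:
        """Predict with the current model; periodically refit a replacement on the
        newest window of observations without blocking the prediction path."""

        def __init__(self, window=500, retrain_every=50, C=10.0, gamma=0.1):
            self.window, self.retrain_every = window, retrain_every
            self.C, self.gamma = C, gamma            # passed in explicitly, not hardcoded
            self.X, self.y = [], []                  # rolling history of (features, target)
            self.model, self.scaler = None, None     # scaler = (mu, sd) of the training window
            self._lock = threading.Lock()
            self._since_fit = 0

        def _fit(self, X, y):
            # Normalize with training-window statistics only, then fit and swap in atomically.
            mu, sd = X.mean(axis=0), X.std(axis=0) + 1e-12
            m = SVR(kernel="rbf", C=self.C, gamma=self.gamma)
            m.fit((X - mu) / sd, y)
            with self._lock:
                self.model, self.scaler = m, (mu, sd)

        def update(self, x, y):
            """Feed one new (feature vector, target) pair from the live stream."""
            self.X.append(x); self.y.append(y)
            self.X, self.y = self.X[-self.window:], self.y[-self.window:]
            self._since_fit += 1
            if self._since_fit >= self.retrain_every and len(self.y) >= 50:   # 50 = warm-up minimum
                self._since_fit = 0
                threading.Thread(target=self._fit,
                                 args=(np.array(self.X), np.array(self.y)),
                                 daemon=True).start()

        def predict(self, x):
            with self._lock:
                model, scaler = self.model, self.scaler
            if model is None:
                return None                          # still warming up
            mu, sd = scaler
            return float(model.predict(((np.asarray(x) - mu) / sd).reshape(1, -1))[0])

The point of swapping the model under a lock is that the live prediction path never waits on training; if a refit is slow, the previous model simply stays in service a little longer.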
Last edited by molecool on August 20th, 2014, 10:00 pm, edited 1 time in total.