January 16th, 2009, 11:56 pm
Regarding an earlier poster: I think it is quite often the case that people blindly throw one of a selection of "exciting" ML algorithms at a problem. For example, I have seen a lot of excitement recently on a couple of programming forums where people have "rediscovered" evolutionary algorithms and are now busy applying them to all sorts of problems, some of which are totally unsuited to the evolutionary approach.

However, the formal foundations of machine learning are those of statistical learning theory, which in turn relies heavily on measure theory and probability. Machine learning has been described by one notable statistician as "statistics minus model checking or validation", and you will find that a lot of standard statistical techniques are the same, just "rebranded" for the ML arena (look at the lasso or ridge regression, for example). So the foundations of ML are rigorous, but its application rarely is.
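To make the "rebranding" point concrete, here is a minimal sketch (assuming NumPy is available) of ridge regression fitted directly from its classical statistical form: ordinary least squares with an L2 penalty, solved in closed form via the penalized normal equations. Nothing about it is specific to "machine learning"; the same estimator appears in the older statistics literature. The data and the lam value below are purely illustrative.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge coefficients: solve (X'X + lam*I) beta = X'y."""
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)  # penalized Gram matrix
    b = X.T @ y
    return np.linalg.solve(A, b)

# Illustrative usage with synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_beta = np.array([1.5, 0.0, -2.0, 0.5, 0.0])
y = X @ true_beta + rng.normal(scale=0.1, size=100)

beta_hat = ridge_fit(X, y, lam=0.5)
print(beta_hat)  # coefficients shrunk towards zero relative to plain least squares
```

The lasso is the same idea with an L1 penalty instead of L2; it just needs an iterative solver rather than a closed form.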