August 1st, 2024, 9:40 pm
The post-1969 AI winter that followed the publication of "Perceptrons"? It was ludicrous, and it says a lot about the field - it should serve as a warning to its practitioners. Minsky and Papert set out to analyse the limitations of single-layer perceptrons (most famously, the XOR problem) with mathematical rigour, "in the name of science". In the very same book, they explicitly mention that a multi-layer version could overcome these limitations. But the community was completely out of tune with their scrupulous research style and wielded the book blindly against the idea of neural networks altogether. The real open problem, though, was how to train those multi-layer networks...
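To make the XOR point concrete, here's a minimal sketch (in Python/NumPy; the iteration count and details are my own illustrative choices, not from the book) of Rosenblatt's perceptron learning rule on the XOR truth table. The two classes are not linearly separable, so the updates cycle forever and the perceptron never gets all four cases right:

```python
import numpy as np

# XOR truth table: not linearly separable, so no single-layer
# perceptron can represent this function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# Rosenblatt's learning rule: nudge the weights after every mistake.
w, b = np.zeros(2), 0.0
for _ in range(1000):  # vastly more passes than AND/OR would need
    for xi, yi in zip(X, y):
        pred = float(w @ xi + b > 0)
        w += (yi - pred) * xi
        b += yi - pred

print([float(w @ xi + b > 0) for xi in X])  # never matches [0, 1, 1, 0]
```

One hidden layer makes XOR representable, but in 1969 there was no accepted procedure for training those hidden weights.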
This brings up a slightly less domain-specific paradox: the backpropagation algorithm had already been described in the literature at the time, long before the seminal 1986 paper (Rumelhart, Hinton, and Williams, "Learning representations by back-propagating errors") that popularized it and helped earn its authors many honours, incl. the Turing Award - the "Nobel of computing":
Paul Werbos (1974): "Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences", Ph.D. dissertation, Harvard University
Bryson and Ho (1969): "Applied Optimal Control: Optimization, Estimation, and Control" (book)
Kelley (1960): "Gradient Theory of Optimal Flight Paths", ARS Journal
(and probably many others thought of it independently, since at its core backpropagation is just the chain rule, which goes all the way back to Leibniz!)
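To see why "just the chain rule" is a fair description, here's a minimal backprop sketch (same XOR task as above; the two-layer sigmoid architecture, learning rate, and seed are arbitrary choices of mine, not from any of the papers listed). The backward pass is nothing but the chain rule applied layer by layer:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# Two-layer network: 2 inputs -> 4 hidden sigmoid units -> 1 output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=4), 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for _ in range(10_000):
    # forward pass
    h = sigmoid(X @ W1 + b1)                 # hidden activations, shape (4, 4)
    out = sigmoid(h @ W2 + b2)               # predictions, shape (4,)
    # backward pass: chain rule through squared-error loss and sigmoids
    d_out = (out - y) * out * (1 - out)      # dL/d(output pre-activation)
    d_h = np.outer(d_out, W2) * h * (1 - h)  # dL/d(hidden pre-activation)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum()
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 3))  # approaches [0, 1, 1, 0]; an unlucky seed can stall
```

Everything in the backward pass follows mechanically from the forward pass - which is what makes it so striking that the idea sat in the optimal-control literature for decades.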
IMHO (and I think most would agree), the true reason for the slow progress in AI research at the time was the lack of computational power.