Found a nice booklist
by Michael Jordan: "not only do I think that you should eventually read all of these books (or some similar list that reflects your own view of foundations), but I think that you should read all of them three times—the first time you barely understand, the second time you start to get it, and the third time it all seems obvious."
I think what's currently going on is very interesting and profound. There is a big research boom (see plot below), *and* extremely short release cycles from new research tools to production. It seems almost *any* narrow human domain skill can now be surpassed by some ML tool. The other day some kid taught a neural network to lip-read from video; things like that are apparently easy now.
What puzzles me is the "major disasters". Nobody is giving examples. I can imagine that a flawed self-driving car could cause disasters, or a flawed HF trading bot, but the disasters would be caused not by the technology itself but by a lack of testing and safeguards *). Abuse of these powers is another potential disaster.
*) I was once talking to a clearing party about getting direct access to the exchange matching engines for my algo startup. They said that their access was very fast because they had stripped out all checks like price limits, volume limits, and margin limits. I would need to do some simple tests to show robustness.
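For what it's worth, the "simple tests" I have in mind are just the kind of pre-trade sanity checks the clearing party had stripped out. A minimal sketch in Python, where all names and limit values are invented for illustration and not any real exchange or clearing API:

```python
# Hypothetical pre-trade risk checks: price band, volume limit, margin limit.
# Limit values and field names are made up for illustration.

def check_order(price, qty, limits, margin_available):
    """Return (ok, reason) for a single order against basic risk limits."""
    if not (limits["min_price"] <= price <= limits["max_price"]):
        return False, "price outside allowed band"
    if qty <= 0 or qty > limits["max_qty"]:
        return False, "volume limit exceeded"
    if price * qty > margin_available:
        return False, "insufficient margin"
    return True, "ok"

limits = {"min_price": 1.0, "max_price": 1000.0, "max_qty": 10_000}

print(check_order(price=50.0, qty=100, limits=limits, margin_available=10_000.0))
# → (True, 'ok')
print(check_order(price=50.0, qty=100, limits=limits, margin_available=1_000.0))
# → (False, 'insufficient margin')
```

A few lines like this in front of the order gateway are cheap insurance; removing them buys microseconds at the cost of exactly the kind of disaster scenario above.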