November 11th, 2017, 1:10 pm
Agreed! Your sentiment is the basis for all of science and engineering. Yes, we certainly should try to determine how things work so that we can predict which operating conditions lead to good performance and which lead to failure.
The question is how. Can we use deductive methods to logically prove the system properties or must we use inductive methods to empirically assess performance and then use interpolation (sometimes safe) or extrapolation (often dangerous) to predict performance under new conditions?
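To make the interpolation/extrapolation contrast concrete, here's a minimal toy sketch (mine, not something from the thread): fit a cubic polynomial to noisy sine data observed only on [0, 3], then query it inside and far outside that range. The in-range prediction is reasonable; the out-of-range prediction is unconstrained by the data and can be wildly wrong.

```python
import numpy as np

# Toy example: observe noisy sin(x) on [0, 3] only.
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 3.0, 30)
y_train = np.sin(x_train) + rng.normal(scale=0.05, size=x_train.shape)

# Fit a cubic polynomial -- a purely empirical model with no knowledge of sine.
model = np.poly1d(np.polyfit(x_train, y_train, deg=3))

for x in (1.5, 6.0):   # 1.5 is interpolation, 6.0 is extrapolation
    print(f"x={x:4.1f}  predicted={model(x):8.3f}  actual={np.sin(x):6.3f}")

# Inside the training range the fit tracks sin(x) closely; at x = 6 the
# polynomial just keeps heading wherever its coefficients point, and the
# error can be enormous -- nothing in the data constrains it out there.
```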
For simple math and simple code, deductive analysis can mathematically prove the system's properties. For more complex things (human brains and neural networks), deduction may be intractable. (It may even be true that any deductive system capable of proving the properties of a complex system would, itself, be so complex that we'd not know how that deductive system works and thus not trust that deductive system's assessment of the complex system.)
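As a toy illustration of that split (again my own example, not anything from above): the little function below has a property we can prove deductively with a one-line loop invariant, whereas for a trained neural network the best we can usually do is test it on samples and hope those samples generalize.

```python
def sum_first_n(n: int) -> int:
    """Return 0 + 1 + ... + n by explicit accumulation."""
    total = 0
    for i in range(n + 1):
        total += i
    return total

# Deductive route: the loop maintains the invariant total == i*(i+1)//2 after
# processing i, so for every n >= 0 we can PROVE sum_first_n(n) == n*(n+1)//2
# without running anything.
#
# Inductive route (what we're stuck with for complex systems): sample inputs,
# check outputs, and extrapolate from the cases we happened to test.
assert all(sum_first_n(n) == n * (n + 1) // 2 for n in range(1_000))
```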
Your example of the panda, the gibbon, and the dark alley illustrates this nicely. Technically, we really don't know how pandas, gibbons, and dark alleys work in any mathematical sense -- it's all empirical knowledge. Moreover, we know the empirical properties of pandas, gibbons, and dark alleys independently of each other -- we have no data on those specific animals in those specific locations. Predicting whether a gibbon in a dark alley or a panda in a dark alley is actually more dangerous therefore calls for extrapolation. Extrapolation is dangerous, and yet we do it because there's no alternative.
The stickier issue is: are there methods that perform as well as back-propagation on learning tasks and for which we do know how they work? If we don't use back-propagation, what do we use that is provably better? (BTW: human brains are demonstrably worse on both the deductive and the inductive dimension -- we don't know how they work, and they have worse empirical performance.)