April 3rd, 2015, 12:54 pm
Interesting! But this approach contains a serious hidden vulnerability. The neural net model assumes the system has some kind of invariant physics (as in volleyball or chess) which the net can learn to operate effectively within. In reality, though, the counterparties in the market create that physics (unlike volleyball or chess), and if they know there is a neural net on the other side that will learn it, they can present one physics for the net to learn and then change that physics at a later date to take all the neural net's money. What's interesting is that this strategy of manipulation to mislead others can arise spontaneously, for example in collections of neural nets or genetic algorithms playing the Prisoner's Dilemma game with each other.

Of course, if the first neural net knew it faced a counterparty neural net that was going to fake the physics, then the first net could pretend to learn that physics while actually waiting for the counterparty to switch, and be ready when it did. Whichever neural net goes deeper on the "I know that you know that I know that you know..." recursion of social knowledge will win.

But here's the final kicker: the collection of counterparties is itself a distributed adaptive learning system, with each counterparty potentially being a neuron in that broader network. To truly succeed, the first neural net must be larger and deeper than the sum total of all the neural nets it faces.
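The "teach one physics, then switch it" exploit can be sketched with a toy simulation. Everything here is invented for illustration (the epsilon-greedy learner, the buy/sell actions, the payoff values, the phase lengths); the point is only that a learner which assumes stationary physics keeps playing its learned move long after the counterparty flips the payoffs, and bleeds money in the second phase:

```python
import random

class GreedyLearner:
    """Tracks the average payoff of each action and (mostly) picks the best one,
    i.e. it assumes the payoff 'physics' of the market is invariant."""
    def __init__(self, actions, epsilon=0.1):
        self.actions = actions
        self.totals = {a: 0.0 for a in actions}
        self.counts = {a: 0 for a in actions}
        self.epsilon = epsilon

    def act(self):
        # Explore occasionally; otherwise exploit the best average so far.
        if random.random() < self.epsilon or not any(self.counts.values()):
            return random.choice(self.actions)
        return max(self.actions,
                   key=lambda a: self.totals[a] / max(self.counts[a], 1))

    def update(self, action, payoff):
        self.totals[action] += payoff
        self.counts[action] += 1

random.seed(0)
learner = GreedyLearner(["buy", "sell"])

# Phase 1: the counterparty creates a physics in which buying pays off,
# and lets the learner absorb it.
bank_phase1 = 0.0
for _ in range(200):
    a = learner.act()
    payoff = 1.0 if a == "buy" else -1.0
    learner.update(a, payoff)
    bank_phase1 += payoff

# Phase 2: the counterparty flips the physics. The learner's accumulated
# averages still say "buy", so it keeps buying into losses.
bank_phase2 = 0.0
for _ in range(200):
    a = learner.act()
    payoff = -1.0 if a == "buy" else 1.0
    learner.update(a, payoff)
    bank_phase2 += payoff

print(bank_phase1, bank_phase2)  # phase-1 profit, phase-2 loss
```

The learner ends phase 1 well in profit and phase 2 well in loss, because its slowly decaying averages are exactly the invariant-physics assumption the counterparty is exploiting. A learner modelling the counterparty's incentive to switch (the next level of the "I know that you know" recursion) would discount old evidence or detect the regime change.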