I am using 3 methods to test whether the data is random or not, and as you might guess, most series of stock prices come out as most likely random according to all of the tests!
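For reference, here is a quick sketch of one standard randomness check, the Wald-Wolfowitz runs test on the signs of the returns. I'm guessing at which three tests you use, so treat this as illustrative rather than what your tool actually does:

```python
import math

def runs_test_z(returns):
    """Wald-Wolfowitz runs test on the signs of a return series.
    Returns a z-score; |z| > 1.96 suggests non-randomness at ~5%."""
    signs = [r > 0 for r in returns if r != 0]
    n1 = sum(signs)               # count of positive returns
    n2 = len(signs) - n1          # count of negative returns
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    mean = 2 * n1 * n2 / (n1 + n2) + 1
    var = (mean - 1) * (mean - 2) / (n1 + n2 - 1)
    return (runs - mean) / math.sqrt(var)

# A strictly alternating series has far more runs than chance allows,
# so the z-score comes out large and positive.
z = runs_test_z([0.01, -0.01] * 20)
```

A genuinely random series would give |z| well under 1.96 most of the time; trending series give large negative z (too few runs).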
When they aren't, I use the NN, the genetic algorithm, and Levenberg-Marquardt to see what can and can't be predicted. Better yet, I just put them through the volatility test tool to see what the real percentage change is on a month-to-month basis, so that I have an idea whether I can buy cheap OTM puts/calls.
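If by "real percentage change on a month-to-month basis" you mean realized percent moves between month-end closes, that computation is just the following (the function name is mine, not from any tool):

```python
def monthly_pct_changes(month_end_prices):
    """Month-to-month percentage change from a list of month-end closes."""
    return [100.0 * (b - a) / a
            for a, b in zip(month_end_prices, month_end_prices[1:])]

# 100 -> 105 is a 5% rise; 105 -> 94.5 is a 10% fall.
changes = monthly_pct_changes([100.0, 105.0, 94.5])
```

The sample standard deviation of that list is then the realized monthly volatility you'd compare against what the option price implies.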
I am interested in the rationale (and the inner workings) behind choosing these three methods. They are a bit of a motley crew: they use a mix of heuristics and have very different properties.
The output from them feels like a description rather than an explanation.
1. What is the NN in this case? The usual GD/SGD setup with backpropagation and learning rates?
2. What's the advantage of a GA compared to, say, Differential Evolution, which IMO is more versatile? I thought GAs were a bit passé, but maybe they're on their way back.
3. LevMar is an Opel Kadett: it works on most days, but it is not very robust.
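On question 1, the "usual" setup I'd expect is something like this miniature SGD loop. It's a single linear neuron, so "backprop" collapses to one application of the chain rule, but the forward/backward/update structure is the same as in a full network (a sketch of the textbook method, not of whatever the tool implements):

```python
import random

def sgd_fit(xs, ys, lr=0.05, epochs=500, seed=0):
    """Fit y = w*x + b by stochastic gradient descent on squared error."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    data = list(zip(xs, ys))
    for _ in range(epochs):
        rng.shuffle(data)                # stochastic: random sample order
        for x, y in data:
            err = (w * x + b) - y        # forward pass + loss gradient
            w -= lr * err * x            # backward: dL/dw = err * x
            b -= lr * err                # backward: dL/db = err
    return w, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]                # noiseless data from y = 2x + 1
w, b = sgd_fit(xs, ys)
```

The learning rate is exactly the knob that makes or breaks this: too large and the updates diverge, too small and you never leave the neighborhood of the initialization.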
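On question 2, for comparison, the classic DE/rand/1/bin variant is small enough to write out in full. Here is a minimal sketch on a deliberately multimodal test function; the parameter choices (F=0.8, CR=0.9, population 20) are my own conventional defaults, not anything from the tool:

```python
import math
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=200, seed=1):
    """Minimal DE/rand/1/bin: mutate with a scaled difference vector,
    binomial crossover, greedy one-to-one selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)   # guarantee >= 1 mutated coordinate
            trial = [min(max(pop[a][j] + F * (pop[b][j] - pop[c][j]),
                             bounds[j][0]), bounds[j][1])
                     if (rng.random() < CR or j == jrand) else pop[i][j]
                     for j in range(dim)]
            ft = f(trial)
            if ft <= fit[i]:             # greedy selection keeps the better
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Rastrigin in 2D: a grid of local minima, global minimum 0 at the origin.
def rastrigin(x):
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v)
                             for v in x)

x_best, f_best = differential_evolution(rastrigin, [(-5.12, 5.12)] * 2)
```

The point of running it on Rastrigin is that any purely local method started at a random point usually gets stuck in one of the minima with value around 1 or higher, while the population-based search escapes them.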
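On question 3, the LevMar idea itself is simple: blend Gauss-Newton with gradient descent via a damping factor that grows when a step fails and shrinks when it succeeds. Here is a hand-rolled sketch on a two-parameter exponential fit, where the damped normal equations reduce to a 2x2 solve (illustrative only; the robustness complaint is exactly that the undamped Gauss-Newton step can explode, which you can watch the rejection branch absorbing here):

```python
import math

def lm_fit(xs, ys, a=1.0, b=0.0, lam=1e-3, iters=100):
    """Levenberg-Marquardt for the model y = a * exp(b * x)."""
    def residuals(a, b):
        return [a * math.exp(b * x) - y for x, y in zip(xs, ys)]
    cost = sum(r * r for r in residuals(a, b))
    for _ in range(iters):
        r = residuals(a, b)
        # Jacobian columns: dr/da = exp(b x), dr/db = a * x * exp(b x)
        Ja = [math.exp(b * x) for x in xs]
        Jb = [a * x * math.exp(b * x) for x in xs]
        # Damped normal equations: (JtJ + lam * diag(JtJ)) delta = -Jt r
        Aaa = sum(j * j for j in Ja)
        Abb = sum(j * j for j in Jb)
        Aab = sum(p * q for p, q in zip(Ja, Jb))
        ga = -sum(j * ri for j, ri in zip(Ja, r))
        gb = -sum(j * ri for j, ri in zip(Jb, r))
        Maa, Mbb = Aaa * (1 + lam), Abb * (1 + lam)
        det = Maa * Mbb - Aab * Aab
        da = (ga * Mbb - gb * Aab) / det
        db = (gb * Maa - ga * Aab) / det
        new_cost = sum(ri * ri for ri in residuals(a + da, b + db))
        if new_cost < cost:              # step helped: trust Gauss-Newton more
            a, b, cost, lam = a + da, b + db, new_cost, lam / 10
        else:                            # step hurt: damp toward gradient descent
            lam *= 10
    return a, b

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.7 * x) for x in xs]   # exact data from a=2, b=0.7
a, b = lm_fit(xs, ys)
```

This also illustrates the point in the next paragraph: LM, like gradient-trained NNs, only ever converges to the local minimum whose basin contains the starting guess.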
1 and 3 only produce local minima at best; I can't remember whether this is also true of the GA.
Do you have reports on how these methods compare to each other across a range of datasets?