> Is there an equivalent of local and global minima in the training of actual neurons? I mean the ones in your head!

I don't think so: there is no objective function! A NN isn't really trying to mimic a real brain. There is one cool project that tries to simulate the nervous system of a real worm, including the worm's environment. I bet it's a very different type of NN.
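For what "local vs. global minimum" means in the NN sense, here is a toy 1-D sketch (a made-up function and plain gradient descent, nothing to do with real neurons or the worm simulation):

```python
# Toy sketch: gradient descent on a 1-D non-convex function. Depending on the
# starting point it settles in either the global minimum or a shallower local one.
def f(x):
    return x**4 - 4 * x**2 + x

def grad(x):
    return 4 * x**3 - 8 * x + 1

def descend(x, lr=0.01, steps=500):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

print(descend(-2.0))  # ends near x ~ -1.47, the global minimum
print(descend(+2.0))  # ends near x ~ +1.35, a local minimum with higher f(x)
```

Start on the left and you land in the deeper basin; start on the right and you settle for the shallower one. That is the worry people have about local minima in training.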
* An accurate representation of the ion channels and their distributions in each neuron has not yet been attempted. Work on a cell model from C. elegans with ion channels can be found here.
* An accurate representation of the synapses between the neurons has not yet been attempted. Only simplistic synapses are used for the moment.
More accurate models of conductance-based neurons and more realistic synapses will be incorporated into c302 first, and then the neuroConstruct model will be updated.
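For anyone unfamiliar with the terminology, here is a minimal, purely illustrative sketch of what "conductance-based" means compared with a simplistic synapse. This is not code from c302 or neuroConstruct, and the parameter values are made up: a single leaky neuron driven by one synapse whose current depends on a decaying conductance times the driving force, rather than a fixed injected current.

```python
# Hypothetical sketch: one leaky integrate-style neuron with a single
# conductance-based synapse (exponentially decaying conductance g_syn,
# current proportional to the driving force E_syn - V).
import numpy as np

dt = 0.1          # time step, ms
T = 100.0         # total simulated time, ms
steps = int(T / dt)

# Neuron parameters (illustrative values, not fitted to C. elegans data)
C_m = 1.0         # membrane capacitance, nF
g_L = 0.05        # leak conductance, uS
E_L = -65.0       # leak reversal potential, mV

# Synapse parameters
E_syn = 0.0       # excitatory reversal potential, mV
tau_syn = 5.0     # synaptic decay time constant, ms
w_syn = 0.02      # conductance increment per presynaptic spike, uS
pre_spike_times = [10.0, 20.0, 30.0, 40.0]   # ms

V = E_L
g_syn = 0.0
trace = []
for i in range(steps):
    t = i * dt
    if any(abs(t - ts) < dt / 2 for ts in pre_spike_times):
        g_syn += w_syn                  # presynaptic spike arrives
    g_syn -= dt * g_syn / tau_syn       # conductance decays exponentially
    I_syn = g_syn * (E_syn - V)         # current depends on the driving force
    I_leak = g_L * (E_L - V)
    V += dt * (I_leak + I_syn) / C_m
    trace.append(V)

print(f"peak depolarisation: {max(trace):.1f} mV")
```

A "simplistic" synapse would just add a fixed current or weight; the conductance-based one saturates as V approaches E_syn, which is part of what the more realistic models capture.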
>> I think it's impossible to construct a NN that has a loss surface like that.
> Actually, the properties of NN loss surfaces depend quite a lot on the dataset: see this paper.

What's the academic background of the author? CS, maths, physics, economics?
"It is widely believed that training of deep models using gradient methods works so well because the error surface either has no local minima, or if they exist they need to be close in value to the global minimum. It is known that such results hold under very strong assumptions which are not satisfied by real models. In this paper we present examples showing that for such theorem to be true additional assumptions on the data, initialization schemes and/or the model classes have to be made. We look at the particular case of finite size datasets. We demonstrate that in this scenario one can construct counter-examples (datasets or initialization schemes) when the network does become susceptible to bad local minima over the weight space."
Note that their results also cast some doubt on the results of the paper I linked to today in my previous post. Other nice results were likewise proven under optimistic and, as the author admits, somewhat unrealistic assumptions.
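As a toy illustration of the abstract's point that the dataset and the initialization together decide whether gradient descent gets stuck: this is not the paper's construction, just a hypothetical single-ReLU regression with hand-written gradients.

```python
# Toy sketch: fit y = x with a single ReLU unit yhat = relu(w*x + b).
# With one initialization plain gradient descent reaches the global minimum;
# with another, the ReLU is "dead" on every data point, the gradient is
# exactly zero, and the run never leaves a bad flat region.
import numpy as np

x = np.array([1.0, 2.0, 3.0])   # all inputs positive
y = x.copy()                    # targets: the global minimum is w=1, b=0

def train(w, b, lr=0.01, steps=2000):
    for _ in range(steps):
        z = w * x + b
        yhat = np.maximum(z, 0.0)        # ReLU
        active = (z > 0).astype(float)   # ReLU gradient mask
        grad_w = np.mean(2.0 * (yhat - y) * active * x)
        grad_b = np.mean(2.0 * (yhat - y) * active)
        w -= lr * grad_w
        b -= lr * grad_b
    loss = np.mean((np.maximum(w * x + b, 0.0) - y) ** 2)
    return w, b, loss

print(train(0.5, 0.1))    # ends near (1, 0) with loss close to 0
print(train(-1.0, -1.0))  # stuck: zero gradient everywhere, loss stays ~4.67
```

Strictly speaking the bad region here is a flat plateau from a dead ReLU rather than a strict local minimum, but it shows how the data and the initialization scheme interact, which is the kind of dependence the abstract is talking about.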
> Is there an equivalent of local and global minima in the training of actual neurons? I mean the ones in your head!

I seem to recall that babies go through something like four different stages with different motion patterns as they learn to crawl -- the inefficient pre-crawling motion patterns would be local minima.
> Yes! Good old Fosbury had to show them how to flop. But the flop could also be a local minimum.

There was the AI winter, but now we have gradient again!
I'd also say the human brain's current understanding of NN is beset by local minima, no?
>> What's the academic background of the author? CS, maths, physics, economics?
> I don't know. Does it matter? What can you say about the paper itself?

Yes, it matters. I have not read the paper, but I have read others (e.g. DL for 100-dimensional PDEs...).