I am afraid that they have already fooled a lot of people out there... Theorem 1.1 is GOBBLEDYGOOK.
This article is the weirdest I have ever seen.
Who are they trying to fool?

Gobbledygook - language that is meaningless or is made unintelligible by excessive use of technical terms.
Who are they trying to fool (again)?

www.youtube.com/watch?v=HykF5KX4STA
I wish I had your faith. I am not convinced. But what do I know.

@Cuchulainn, we might intervene in this conference, which I severely criticized recently for its positioning on artificial intelligence. Might the quantitative community be starting to open up to criticism?
Yes, gradient descent with back-propagation is the most widely used method when training a neural network with supervised learning.
Later, more discussion material!

Why? It's an awful method.

Can you please tell why you think it is awful?

@Cuchulainn, for me, Gradient Descent is a Swiss-army-knife method: it always produces results, but it can get stuck in local minima.
(Apologies if you already did earlier in this thread, but I find it (this thread) especially difficult to follow.)
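To make the "stuck in local minima" point concrete, here is a minimal sketch (my own toy example, not from any poster) of plain gradient descent on a 1-D non-convex function; the function, starting points, and step size are arbitrary choices for illustration.

```python
# Toy illustration: plain gradient descent on f(x) = x^4 - 3x^2 + x,
# which has a local minimum near x ~ 1.13 and a lower, global minimum
# near x ~ -1.30.

def grad_f(x):
    # f'(x) for f(x) = x**4 - 3*x**2 + x
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x0, lr=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

# Same method, two starting points, two different answers:
print(gradient_descent(2.0))    # ~  1.13  (trapped in the local minimum)
print(gradient_descent(-2.0))   # ~ -1.30  (finds the global minimum)
```

It "always produces results" in exactly this sense: the iteration converges either way, and nothing in the method tells you which answer you got.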
JohLeM, I also wrote to AJ and colleagues a few years ago about his FEM papers ... pure fantasy.
https://arxiv.org/pdf/1706.04702.pdf
For the record, I spent 4 years at uni doing FEM research with profs at Paris / IRIA. But "deep Galerkin methods" don't exist, so they don't.
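For readers who haven't met the term: what the literature calls the "deep Galerkin method" (after Sirignano and Spiliopoulos, 2018) is arguably closer in spirit to least-squares collocation with a neural-network ansatz than to a classical Galerkin projection, which may be part of the objection. A sketch of the objective for a PDE $\partial_t u + \mathcal{L}u = 0$ on $[0,T] \times \Omega$, with $f_\theta$ the network (my paraphrase, not a formula from this thread):

```latex
% Sketch of a "deep Galerkin"-style loss: squared PDE residual plus data
% mismatch, estimated on randomly sampled points and minimized by SGD.
J(\theta) =
    \mathbb{E}_{(t,x) \sim [0,T] \times \Omega}
        \Big[ \big( \partial_t f_\theta(t,x) + \mathcal{L} f_\theta(t,x) \big)^2 \Big]
  + \mathbb{E}_{(t,y) \sim [0,T] \times \partial\Omega}
        \Big[ \big( f_\theta(t,y) - g(t,y) \big)^2 \Big]
  + \mathbb{E}_{x \sim \Omega}
        \Big[ \big( f_\theta(0,x) - u_0(x) \big)^2 \Big]
```

Here $g$ is the boundary data and $u_0$ the initial condition; whether minimizing this by SGD deserves the name "Galerkin" is exactly the kind of question being raised here.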
"The current view of deep learning is more on a higher level. The network is a computational graph, and the choices you make -topological, activation function- should be seen in the light of "gradient management". "
This is scary, and the reason I don't go to seminars.
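Scary or not, "gradient management" has concrete content that is easy to demonstrate. A minimal sketch (my own, not from the quoted post): back-propagation multiplies one derivative factor per layer, so with sigmoid activations (derivative at most 0.25) the gradient shrinks exponentially with depth, while ReLU passes a factor of 1 on active paths. The scalar chain, depths, and inputs below are arbitrary illustration choices.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def chain_gradient(act, act_prime, depth, x0=0.5, w=1.0):
    # Scalar "network" x_{k+1} = act(w * x_k). By the chain rule the
    # gradient d x_depth / d x_0 is the product of act'(z_k) * w over the
    # layers, which is exactly the product back-propagation computes.
    x, grad = x0, 1.0
    for _ in range(depth):
        z = w * x
        grad *= act_prime(z) * w
        x = act(z)
    return grad

sigmoid_prime = lambda z: sigmoid(z) * (1.0 - sigmoid(z))   # <= 0.25 always
relu = lambda z: max(z, 0.0)
relu_prime = lambda z: 1.0 if z > 0 else 0.0

for depth in (5, 20, 50):
    print(depth,
          chain_gradient(sigmoid, sigmoid_prime, depth),   # shrinks ~ 0.22^depth
          chain_gradient(relu, relu_prime, depth))         # stays 1.0
```

In this reading, the quote is less mystical than it sounds: topology and activation choices are there to keep that per-layer product from vanishing or exploding.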