
Re: Universal Approximation theorem

Posted: February 28th, 2021, 11:27 am
by Cuchulainn
Theorem 1.1 is GOBBLEDYGOOK.
  1. language that is meaningless or is made unintelligible by excessive use of technical terms.
This article is the weirdest I have ever seen.

Who are they trying to fool (again)?

Re: Universal Approximation theorem

Posted: February 28th, 2021, 11:29 am
by JohnLeM
Theorem 1.1 is GOBBLEDYGOOK.
This article is the weirdest I have ever seen.

Who are they trying to fool?
I am afraid that they already fooled a lot of people around there...

Re: Universal Approximation theorem

Posted: February 28th, 2021, 11:51 am
by Cuchulainn
"Das ist nicht nur nicht richtig; es ist nicht einmal falsch!" 

Re: Universal Approximation theorem

Posted: February 28th, 2021, 11:55 am
by Cuchulainn
Theorem 1.1 is GOBBLEDYGOOK.
This article is the weirdest I have ever seen.

Who are they trying to fool?
I am afraid that they already fooled a lot of people around there...
www.youtube.com/watch?v=HykF5KX4STA

Re: Universal Approximation theorem

Posted: March 5th, 2021, 8:19 am
by JohnLeM
@Cuchulainn we might intervene in this conference, which I severely criticized recently for its positioning on artificial intelligence. Might the quantitative community be starting to open up to criticism?

Re: Universal Approximation theorem

Posted: March 5th, 2021, 11:55 am
by Cuchulainn
@Cuchulainn we might intervene in this conference, which I severely criticized recently for its positioning on artificial intelligence. Might the quantitative community be starting to open up to criticism?
I wish I had your faith. I am not convinced. But what do I know.

"a real quant is someone who blows up a hedge fund in greenwich connecticut in 1996 or revolutionizes the field by creating a “gaussian copula”

Re: Universal Approximation theorem

Posted: March 5th, 2021, 11:55 am
by Cuchulainn
"It has become clear that kernel methods provide a framework for tackling some rather profound issues in machine learning theory. At the same time, successful applications have demonstrated that SVMs not only have a more solid foundation than artificial neural networks, but are able to serve as a replacement for neural networks that perform as well or better, in a wide variety of fields."

Schölkopf and Smola (2002).

Dat's 20 years ago..
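
To make the kernel claim concrete, here is a minimal sketch of kernel ridge regression with a Gaussian kernel; the toy 1-D data, the length scale and the regularisation parameter are purely illustrative assumptions. The point of contrast with gradient-descent training of a network is that the whole fit is one linear solve: no learning rate, no initial guess, no iteration.

```python
import numpy as np

def gaussian_kernel(X, Y, length_scale=1.0):
    """Gram matrix K[i, j] = exp(-|x_i - y_j|^2 / (2 * length_scale^2))."""
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * length_scale**2))

# Toy 1-D regression data (illustrative only)
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)

# Kernel ridge regression: solve (K + lambda * I) alpha = y -- a single linear solve.
lam = 1e-3
K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

# Predict at new points: f(x) = sum_i alpha_i k(x, x_i)
X_test = np.linspace(-3.0, 3.0, 5).reshape(-1, 1)
print(gaussian_kernel(X_test, X) @ alpha)   # should track sin(x) closely
```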
 

Re: Universal Approximation theorem

Posted: March 5th, 2021, 12:26 pm
by Cuchulainn
Yes, gradient descent with back-propagation is the most widely used method when training a neural network with supervised learning.

later more discussion material!..
Why? It's an awful method.

Re: Universal Approximation theorem

Posted: March 6th, 2021, 12:16 am
by tags
Yes, gradient descent with back-propagation is the most widely used method when training a neural network with supervised learning.

later more discussion material!..
Why? It's an awful method.
Can you please tell us why you think it is awful?
(Apologies if you already did earlier in this thread, but I find it (this thread) especially difficult to follow.)

Re: Universal Approximation theorem

Posted: March 6th, 2021, 1:55 pm
by Cuchulainn
Yes, gradient descent with back-propagation is the most widely used method when training a neural network with supervised learning.

later more discussion material!..
Why? It's an awful method.
Can you please tell us why you think it is awful?
(Apologies if you already did earlier in this thread, but I find it (this thread) especially difficult to follow.)
@Cuchulainn, for me, Gradient Descent is a Swiss-army-knife method. It always produces results, but it can get stuck in local minima.

Local minima, if it is lucky. That's the least of your worries. GD has a whole lot of issues; off the top of my head (a minimal sketch illustrating a few of them follows the list):

0. Inside GD lurks a nasty Euler method.
1. Initial guess must be close to the real solution (Analyse Numérique 101).
2. No guarantee that GD is applicable in the first place (it assumes the cost function is smooth).
3. "Vanishing gradient syndrome"
https://en.wikipedia.org/wiki/Vanishing ... nt_problem
4. Learning rate parameter... so many to choose from (ad hoc/trial and error process).
5. Use Armijo and Wolfe to improve convergence.
6. Modify algorithm by adding momentum.
7. And you have to compute the gradient: 1) exact, 2) FDM, 3) AD, 4) complex-step method.
8. Convergence to local minimum.
9. The method is iterative, so no true, reliable quality of service (QoS).
10. It's not very robust (cf. adversarial examples). Try regularization.

There might be some more.
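
Here is the sketch: a minimal plain-GD loop in Python illustrating points 0, 1, 4 and 8. The toy cost function, the starting points and the step sizes are illustrative assumptions, nothing more.

```python
# A toy non-convex cost with two minima:
# global minimum near x = -1.30, local minimum near x = +1.13.
def cost(x):
    return x**4 - 3.0 * x**2 + x

def grad(x):
    # Exact gradient (option 1 of point 7); FDM, AD or complex-step are the alternatives.
    return 4.0 * x**3 - 6.0 * x + 1.0

def gradient_descent(x0, lr, n_iter=200):
    x = x0
    for _ in range(n_iter):
        x = x - lr * grad(x)   # the explicit Euler step lurking inside GD (point 0)
    return x

# Points 1 and 8: the initial guess decides which minimum you end up in.
print(gradient_descent(x0=-2.0, lr=0.01))  # converges to the global minimum, ~ -1.30
print(gradient_descent(x0=+2.0, lr=0.01))  # gets stuck at the local minimum, ~ +1.13

# Point 4: the learning rate is an ad hoc choice; too large and the Euler step
# explodes (only a few iterations shown, to keep the numbers finite).
print(gradient_descent(x0=+2.0, lr=0.5, n_iter=5))  # diverges: |x| grows without bound
```

Armijo/Wolfe line searches (point 5) and momentum (point 6) are exactly patches bolted onto this basic loop.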

// Maybe I'm hallucinating but I thought I already posted this (but it was before my first koffee).

Re: Universal Approximation theorem

Posted: March 6th, 2021, 2:02 pm
by Cuchulainn
still here?

Re: Universal Approximation theorem

Posted: March 6th, 2021, 2:26 pm
by tags
Your comments are back Cuchulainn. Many thanks.

Re: Universal Approximation theorem

Posted: March 6th, 2021, 2:47 pm
by Cuchulainn
From Luenberger 1973

"It can be shown that after a (possibly infinite) number of steps, gradient descent will converge."

More GPUs? LOL

I saw the quote from the nice book on kernels by Schölkopf and Smola (2002).

Re: Universal Approximation theorem

Posted: March 21st, 2021, 3:35 pm
by Cuchulainn
I also wrote to AJ and colleagues a few years ago about his FEM papers ... pure fantasy.
https://arxiv.org/pdf/1706.04702.pdf
For the record, I spent 4 years at uni doing FEM research with profs from Paris / IRIA. But "deep Galerkin methods" don't exist, so they don't.

"The current view of deep learning is more on a higher level. The network is a computational graph, and the choices you make -topological, activation function- should be seen in the light of "gradient management". "

This is scary, and the reason I don't go to seminars.
JohnLeM

I wrote again a few weeks ago. No answer. This is a joke.

Re: Universal Approximation theorem

Posted: May 11th, 2021, 4:20 pm
by katastrofa
Cuchulainn, perhaps something for you: https://arxiv.org/abs/1711.10561
modulo the physics part! (-: