User avatar
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Universal Approximation theorem

February 28th, 2021, 11:27 am

Theorem 1.1 is GOBBLEDYGOOK.
  1. language that is meaningless or is made unintelligible by excessive use of technical terms.
This article is the weirdest I have ever seen.

Who are they trying to fool (again)?
Last edited by Cuchulainn on February 28th, 2021, 11:30 am, edited 2 times in total.
 
User avatar
JohnLeM
Posts: 379
Joined: September 16th, 2008, 7:15 pm

Re: Universal Approximation theorem

February 28th, 2021, 11:29 am

Theorem 1.1 is GOBBLEDYGOOK.
This article is the weirdest I have ever seen.

Who are they trying to fool?
I am afraid that they have already fooled a lot of people out there...
 
User avatar
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Universal Approximation theorem

February 28th, 2021, 11:51 am

"Das ist nicht nur nicht richtig; es ist nicht einmal falsch!" 
 
User avatar
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Universal Approximation theorem

February 28th, 2021, 11:55 am

Theorem 1.1 is GOBBLEDYGOOK.
This article is the weirdest I have ever seen.

Who are they trying to fool?
I am afraid that they have already fooled a lot of people out there...
www.youtube.com/watch?v=HykF5KX4STA
 
User avatar
JohnLeM
Posts: 379
Joined: September 16th, 2008, 7:15 pm

Re: Universal Approximation theorem

March 5th, 2021, 8:19 am

@Cuchulainn, we might intervene in this conference, which I criticized severely recently over its positioning on artificial intelligence. Might the quantitative community be starting to open up to criticism?
 
User avatar
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Universal Approximation theorem

March 5th, 2021, 11:55 am

@Cuchulainn, we might intervene in this conference, which I criticized severely recently over its positioning on artificial intelligence. Might the quantitative community be starting to open up to criticism?
I wish I had your faith. I am not convinced. But what do I know.

"a real quant is someone who blows up a hedge fund in greenwich connecticut in 1996 or revolutionizes the field by creating a “gaussian copula”
Last edited by Cuchulainn on March 5th, 2021, 12:04 pm, edited 1 time in total.
 
User avatar
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Universal Approximation theorem

March 5th, 2021, 11:55 am

"It has become clear that kernel methods provide a framework for tackling some rather profound issues in machine learning theory. At the same time, successful applications have demonstrated that SVMs not only have a more solid foundation than artificial neural networks, but are able to serve as a replacement for neural networks that perform as well or better, in a wide variety of fields."

Schölkopf and Smola (2002).

Dat's 20 years ago..
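To make that concrete, a minimal Python sketch (assuming scikit-learn is available; the dataset and hyperparameters are my own, purely illustrative), fitting the same noisy sine with an RBF-kernel SVM and a small neural net:

Code: Select all

# Kernel method vs. small neural net on the same toy regression problem.
# Everything here (data, C, gamma, layer sizes) is illustrative.
import numpy as np
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

svr = SVR(kernel="rbf", C=10.0, gamma=0.5).fit(X, y)   # kernel machine
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                   random_state=0).fit(X, y)           # small neural net

X_test = np.linspace(-3, 3, 50).reshape(-1, 1)
truth = np.sin(X_test).ravel()
print("SVR test MSE:", np.mean((svr.predict(X_test) - truth) ** 2))
print("MLP test MSE:", np.mean((mlp.predict(X_test) - truth) ** 2))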
 
 
User avatar
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Universal Approximation theorem

March 5th, 2021, 12:26 pm

Yes, gradient descent with back-propagation is the most widely used method when training a neural network with supervised learning.

More discussion material later!..
Why? It's an awful method.
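For concreteness, here is what that widely used recipe amounts to, as a minimal numpy sketch (architecture, data and learning rate are all illustrative):

Code: Select all

# Minimal sketch: one hidden layer, tanh activation, mean-squared error,
# hand-coded back-propagation, plain gradient descent on the weights.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(100, 1))
y = X**2                                  # toy target function

W1, b1 = rng.standard_normal((1, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)), np.zeros(1)
lr = 0.1                                  # the infamous learning rate

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)              # forward pass
    y_hat = h @ W2 + b2
    g = 2.0 * (y_hat - y) / len(X)        # backward pass: dL/dy_hat
    dW2, db2 = h.T @ g, g.sum(axis=0)
    gh = (g @ W2.T) * (1.0 - h**2)        # chain rule through tanh
    dW1, db1 = X.T @ gh, gh.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1        # gradient-descent update
    W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", float(np.mean((y_hat - y)**2)))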
 
User avatar
tags
Posts: 3162
Joined: February 21st, 2010, 12:58 pm

Re: Universal Approximation theorem

March 6th, 2021, 12:16 am

Yes, gradient descent with back-propagation is the most widely used method when training a neural network with supervised learning.

More discussion material later!..
Why? It's an awful method.
Can you please tell us why you think it is awful?
(Apologies if you already did so earlier in this thread, but I find this thread especially difficult to follow.)
 
User avatar
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Universal Approximation theorem

March 6th, 2021, 1:55 pm

Yes, gradient descent with back-propagation is the most widely used method when training a neural network with supervised learning.

More discussion material later!..
Why? It's an awful method.
Can you please tell us why you think it is awful?
(Apologies if you already did so earlier in this thread, but I find this thread especially difficult to follow.)
@Cuchulainn, for me, Gradient Descent is a Swiss-army-knife method: it always produces results, but it can get stuck in local minima.

Local minima, if it is lucky. That's the least of your worries. GD has a whole lot of issues; off the top of my head:

0. Inside GD lurks a nasty Euler method (the GD update is exactly the explicit Euler discretisation of the gradient-flow ODE).
1. The initial guess must be close to the real solution (Analyse Numérique 101).
2. No guarantee that GD is applicable in the first place (it assumes the cost function is smooth).
3. "Vanishing gradient syndrome"
https://en.wikipedia.org/wiki/Vanishing_gradient_problem
4. The learning rate parameter... so many to choose from (an ad hoc, trial-and-error process).
5. You need Armijo and Wolfe conditions to improve convergence.
6. You have to modify the algorithm by adding momentum.
7. And you have to compute the gradient: 1) exactly, 2) by FDM, 3) by AD, 4) by the complex-step method.
8. Convergence is only to a local minimum.
9. The method is iterative, so there is no true, reliable quality of service (QoS).
10. It's not very robust (cf. adversarial examples). Try regularisation.

There might be some more; a small sketch below illustrates points 0, 4 and 8.

// Maybe I'm hallucinating but I thought I already posted this (but it was before my first koffee).
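The promised sketch: on a double-well cost, the GD update x -= lr * f'(x) is literally one explicit Euler step of the gradient flow x'(t) = -f'(x(t)); the iterate lands in whichever well the initial guess favours, and a too-large learning rate blows up. The cost function and learning rates here are made up for illustration.

Code: Select all

# Double-well cost: f(x) = (x^2 - 1)^2 + 0.3x, two minima near x = -1 and x = +1.
f  = lambda x: (x**2 - 1)**2 + 0.3 * x
df = lambda x: 4 * x * (x**2 - 1) + 0.3

def gd(x0, lr, steps=200):
    """Plain gradient descent; each update is one explicit Euler step of x' = -f'(x)."""
    x = x0
    for _ in range(steps):
        x -= lr * df(x)
        if abs(x) > 1e6:          # stop early once the iteration has blown up
            return float("inf")
    return x

print(gd(-1.5, 0.05))   # lands in the (global) left well,  x ~ -1.04  (points 0, 8)
print(gd(+1.5, 0.05))   # lands in the (local) right well,  x ~ +0.96  (point 8)
print(gd(-1.5, 0.60))   # too-large learning rate: the iterates diverge (point 4)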
 
User avatar
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Universal Approximation theorem

March 6th, 2021, 2:02 pm

Still here?
 
User avatar
tags
Posts: 3162
Joined: February 21st, 2010, 12:58 pm

Re: Universal Approximation theorem

March 6th, 2021, 2:26 pm

Your comments are back, Cuchulainn. Many thanks.
 
User avatar
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Universal Approximation theorem

March 6th, 2021, 2:47 pm

From Luenberger (1973):

"It can be shown that after a (possibly infinite) number of steps, gradient descent will converge."

More GPUs? LOL

I found the quote in the nice book on kernels by Schölkopf and Smola (2002)..
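Luenberger's point is easy to reproduce: on the quadratic f(x) = 0.5 * x^T diag(1, kappa) x, steepest descent contracts the error by about (kappa - 1)/(kappa + 1) per step, so the iteration count blows up with the condition number. A minimal sketch (the tolerance and kappa values are illustrative):

Code: Select all

# Steepest descent on f(x) = 0.5 * x^T diag(1, kappa) x with the optimal
# fixed step 2/(1 + kappa): the error contracts by (kappa - 1)/(kappa + 1)
# per iteration, so the step count explodes with the condition number.
import numpy as np

def steps_to_converge(kappa, tol=1e-8):
    a = np.array([1.0, kappa])           # Hessian eigenvalues
    x = np.array([1.0, 1.0])             # starting point
    lr = 2.0 / (1.0 + kappa)             # best possible fixed learning rate
    n = 0
    while np.linalg.norm(a * x) > tol:   # grad f(x) = a * x (componentwise)
        x -= lr * a * x
        n += 1
    return n

for kappa in (10.0, 100.0, 1000.0):
    print(int(kappa), "->", steps_to_converge(kappa))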
 
User avatar
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Universal Approximation theorem

March 21st, 2021, 3:35 pm

I also wrote to AJ and colleagues a few years ago about his FEM papers ... pure fantasy.
https://arxiv.org/pdf/1706.04702.pdf
For the record, I spent 4 years at uni doing FEM research under profs from Paris / IRIA. But "deep Galerkin methods" don't exist, so they don't.

"The current view of deep learning is more on a higher level. The network is a computational graph, and the choices you make -topological, activation function- should be seen in the light of "gradient management". "

This is scary, and the reason I don't go to seminars.
JohnLeM

I wrote again a few weeks ago. No answer. This is a joke.
 
User avatar
katastrofa
Posts: 7440
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: Universal Approximation theorem

May 11th, 2021, 4:20 pm

Cuchulainn, perhaps something for you: https://arxiv.org/abs/1711.10561
modulo the physics part! (-:
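That is the Raissi/Perdikaris/Karniadakis physics-informed paper. The core of the idea, as a minimal PyTorch sketch on a toy ODE (u' + u = 0, u(0) = 1, exact solution exp(-x)); the network, optimiser and training budget are my own illustrative choices, not the paper's setup:

Code: Select all

# Physics-informed training on a toy ODE: minimise the mean-squared ODE
# residual u'(x) + u(x) over collocation points plus the boundary penalty
# (u(0) - 1)^2. Exact solution is exp(-x).
import math
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

x = torch.linspace(0.0, 2.0, 64).reshape(-1, 1).requires_grad_(True)
x0 = torch.zeros(1, 1)                    # boundary point x = 0

for _ in range(2000):
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    loss = ((du + u) ** 2).mean() + ((net(x0) - 1.0) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print("u(1) =", net(torch.tensor([[1.0]])).item(), "exact:", math.exp(-1.0))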