Serving the Quantitative Finance Community

 
User avatar
JohnLeM
Posts: 379
Joined: September 16th, 2008, 7:15 pm

Re: Universal Approximation theorem

February 2nd, 2021, 3:46 pm

Christoph Schwab was a PhD student of Ivo Babuska, tsar of the FEM method. You can believe that anything he (CS) writes on Approximation Theory will be rigorous.

A major problem with AI is that it is not rigorous and not up to the job of proving things that need to be proved. It is a finite-dimensional matrix (aka graph) world, grosso modo. Results are achieved a posteriori (mucho experimentation).
I share the same opinion of Christoph Schwab. To me CS is an excellent mathematician; he himself worked on the curse of dimensionality and proposed a sparse wavelet approach that I read carefully. I have no doubt that the maths in his paper are correct, even if I should give it a more careful read. But look, his paper arrived yesterday on Arxiv, probably under pressure to try to legitimize deep learning methods just before the big kudo party (CS is, or was, the boss of Arnulf Jentzen at ETH. Remember, Jentzen is the guy who uses AI to publish papers, so ETH's reputation is at stake now). And it is clear that CS is embarrassed here: his main Theorem 1 just says that "if deep learning methods are convergent, then they converge". He is honest; he can't write more than that. I wrote to these two guys two years ago to point out this convergence problem to them, as many others did... I wrote to CS+AJ again today... Did we enter a bi-annual time loop?
 
User avatar
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Universal Approximation theorem

February 2nd, 2021, 6:34 pm

I also wrote to AJ and colleagues a few years ago about his FEM papers ... pure fantasy.
https://arxiv.org/pdf/1706.04702.pdf
For the record, I spent 4 years at uni doing FEM research under profs from Paris / IRIA. But "deep Galerkin methods" don't exist, so they don't.

"The current view of deep learning is more on a higher level. The network is a computational graph, and the choices you make -topological, activation function- should be seen in the light of "gradient management". "

This is scary, and the reason I don't go to seminars.
Last edited by Cuchulainn on February 2nd, 2021, 7:26 pm, edited 2 times in total.
 
User avatar
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Universal Approximation theorem

February 2nd, 2021, 6:43 pm

From Ruf and Wang 2020

The Stone-Weierstrass theorem asserts that any continuous function on a compact set can be approximated
by polynomials. Similarly, the universal approximation theorems ensure that ANNs approximate
continuous functions in a suitable way. In particular, ANNs are able to capture nonlinear dependencies
between input and output.
With this understanding, an ANN can be used for many applications related to option pricing and hedging.
In the most common form, an ANN learns the price of an option as a function of the underlying
price, strike price, and possibly other relevant option characteristics. Similarly, ANNs might also be trained
to learn implied [...]

Sure; it is not even wrong.
This is the most crass I have seen; 300-degree polynomials ... something really, really wrong here. You couldn't make it up.
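The Weierstrass half of that quote is doing a lot of work. A throwaway numerical sketch (mine, not Ruf and Wang's): the theorem says a good polynomial *exists*; it says nothing about how you find it. Already at degree 20, interpolating Runge's function at equispaced nodes blows up, while the same degree at Chebyshev nodes is fine — and a 300-degree fit to equispaced market data sits firmly in the first regime.

```python
# Toy illustration (not from the paper): degree-20 interpolation of Runge's
# function 1/(1+25x^2) diverges at equispaced nodes but converges at
# Chebyshev nodes, at the very same polynomial degree.
import numpy as np
from numpy.polynomial import chebyshev as C

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

deg = 20
x_test = np.linspace(-1.0, 1.0, 2001)

x_equi = np.linspace(-1.0, 1.0, deg + 1)                                  # equispaced nodes
x_cheb = np.cos(np.pi * (2 * np.arange(deg + 1) + 1) / (2 * (deg + 1)))  # Chebyshev nodes

errs = {}
for name, nodes in (("equispaced", x_equi), ("chebyshev", x_cheb)):
    coef = C.chebfit(nodes, runge(nodes), deg)   # square system: exact interpolation
    errs[name] = float(np.max(np.abs(C.chebval(x_test, coef) - runge(x_test))))
    print(name, errs[name])
```

The equispaced error is orders of magnitude above 1; the Chebyshev-node error is small. Same degree, same function, wildly different outcome — existence theorems don't fit data for you.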

Image
 
User avatar
JohnLeM
Posts: 379
Joined: September 16th, 2008, 7:15 pm

Re: Universal Approximation theorem

February 2nd, 2021, 8:45 pm

I also wrote to AJ and colleagues a few years ago about his FEM papers ... pure fantasy.
https://arxiv.org/pdf/1706.04702.pdf
For the record, I spent 4 years at uni doing FEM research under profs from Paris / IRIA. But "deep Galerkin methods" don't exist, so they don't.

"The current view of deep learning is more on a higher level. The network is a computational graph, and the choices you make -topological, activation function- should be seen in the light of "gradient management". "

This is scary, and the reason I don't go to seminars.
We both wrote to CS+AJ on the basis of the same paper, and they did not answer :) Well, I really think that the maths are not the problem here. Some of the guys I talked to, currently involved in the development of deep learning methods for finance, knew perfectly well that it does not work, since they quoted me other references showing that these methods do not work. I really think today that the point is an investment one: a researcher won't question the tools he is using if he is paid and asked to use those tools. OK, too late, money has been wasted producing intellectual garbage. Now we should start to draw a cartography of all the potential problems they might leave behind. That's what we are trying to do in France with this link (it is in French; Google Translate?).
 
User avatar
JohnLeM
Posts: 379
Joined: September 16th, 2008, 7:15 pm

Re: Universal Approximation theorem

February 2nd, 2021, 8:58 pm

From Ruf and Wang 2020

The Stone-Weierstrass theorem asserts that any continuous function on a compact set can be approximated
by polynomials. Similarly, the universal approximation theorems ensure that ANNs approximate
continuous functions in a suitable way. In particular, ANNs are able to capture nonlinear dependencies
between input and output.
With this understanding, an ANN can be used for many applications related to option pricing and hedging.
In the most common form, an ANN learns the price of an option as a function of the underlying
price, strike price, and possibly other relevant option characteristics. Similarly, ANNs might also be trained
to learn implied [...]

Sure; it is not even wrong.
This is the most crass I have seen; 300-degree polynomials ... something really, really wrong here. You couldn't make it up.
I think that mathematicians are outdated: with faith you can throw holy grenades at psychopath rabbits. I can't :/
 
User avatar
katastrofa
Posts: 7440
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: Universal Approximation theorem

February 4th, 2021, 3:32 pm

Dude, that made absolutely no sense (to begin with, psi is an activation function)... Ego te absolvo a peccatis tuis in Intelligentia Artificialis.
No one born after 1975 knows Latin.

Pity you don't have published work in this area. Is it better than sniping?
You are not adding any value.
@Latin: They taught us the Catholic liturgy in Latin at school. I got special treatment. Ora pro nobis pecatrollibus nunc et i hora mortis nuestre.

Everybody publishes in "this area" these days. Even you and JLM ;-)
 
User avatar
JohnLeM
Posts: 379
Joined: September 16th, 2008, 7:15 pm

Re: Universal Approximation theorem

February 4th, 2021, 3:55 pm

Dude, that made absolutely no sense (to begin with, psi is an activation function)... Ego te absolvo a peccatis tuis in Intelligentia Artificialis.
No one born after 1975 knows Latin.

Pity you don't have published work in this area. Is it better than sniping?
You are not adding any value.
@Latin: They taught us the Catholic liturgy in Latin at school. I got special treatment. Ora pro nobis pecatrollibus nunc et i hora mortis nuestre.

Everybody publishes in "this area" these days. Even you and JLM ;-)
True. We do it to kill this absurd, newspeak, Orwellian, obscurantist artificial intelligence trend. Quantum computing might be next. We do it to try to avoid an industrial mess, even if it is probably too late, and also because listening to oracles presented as science is a real moral pain for mathematicians.
 
User avatar
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Universal Approximation theorem

February 4th, 2021, 5:34 pm

Dude, that made absolutely no sense (to begin with, psi is an activation function)... Ego te absolvo a peccatis tuis in Intelligentia Artificialis.
No one born after 1975 knows Latin.

Pity you don't have published work in this area. Is it better than sniping?
You are not adding any value.
@Latin: They taught us the Catholic liturgy in Latin at school. I got special treatment. Ora pro nobis pecatrollibus nunc et i hora mortis nuestre.

Everybody publishes in "this area" these days. Even you and JLM ;-)
So, you learned Latin but had no clue what you were saying.
Think: auditors.

ML is just an application of Functional Analysis. Seriously.
 
User avatar
JohnLeM
Posts: 379
Joined: September 16th, 2008, 7:15 pm

Re: Universal Approximation theorem

February 4th, 2021, 5:46 pm

@Latin: They taught us the Catholic liturgy in Latin at school. I got special treatment. Ora pro nobis pecatrollibus nunc et i hora mortis nuestre.
I have very few Latin memories, but isn't it "Ora pro nobis peccatoribus nunc et in hora mortis nostrae"?
 
User avatar
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Universal Approximation theorem

February 4th, 2021, 6:02 pm

@Latin: They taught us the Catholic liturgy in Latin at school. I got special treatment. Ora pro nobis pecatrollibus nunc et i hora mortis nuestre.
I have very few Latin memories, but isn't it "Ora pro nobis peccatoribus nunc et in hora mortis nostrae"?
My Latin memories feel just like yesterday. In stone.

Pecunia non olet
 
User avatar
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Universal Approximation theorem

February 4th, 2021, 7:15 pm

The origins of kernels. 100 years old.

Image
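Those 100-year-old kernels still earn their keep with ten lines of linear algebra. A minimal sketch (my own toy example — the kernel, data, and parameters are made up for illustration) of Gaussian-kernel ridge regression, the basic RKHS workhorse:

```python
# Minimal Gaussian-kernel ridge regression: fit f(x) = sin(3x) from 15
# samples by solving (K + lam*I) alpha = y, then predict with K_test @ alpha.
import numpy as np

def gauss_gram(x, y, scale=0.3):
    # Gram matrix K[i, j] = exp(-(x_i - y_j)^2 / (2*scale^2))
    return np.exp(-((x[:, None] - y[None, :]) ** 2) / (2.0 * scale**2))

x_train = np.linspace(-1.0, 1.0, 15)
y_train = np.sin(3.0 * x_train)

lam = 1e-8                                    # small ridge term for stability
K = gauss_gram(x_train, x_train)
alpha = np.linalg.solve(K + lam * np.eye(len(x_train)), y_train)

x_test = np.linspace(-1.0, 1.0, 200)
err = float(np.max(np.abs(gauss_gram(x_test, x_train) @ alpha - np.sin(3.0 * x_test))))
print(err)
```

One linear solve, a reproducing-kernel representation of the fit, and a worst-case error you can actually measure — no gradient descent, no "gradient management".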
 
User avatar
katastrofa
Posts: 7440
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: Universal Approximation theorem

February 4th, 2021, 9:57 pm

@Latin: They taught us the Catholic liturgy in Latin at school. I got special treatment. Ora pro nobis pecatrollibus nunc et i hora mortis nuestre.
I have very few Latin memories, but isn't it "Ora pro nobis peccatoribus nunc et in hora mortis nostrae"?
Mea culpa! (But troll was frigidus sanguis.)
 
User avatar
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Universal Approximation theorem

February 5th, 2021, 6:44 pm

Demonstratio nova theorematis omnem functionem algebraicam rationalem integram unius variabilis in factores reales primi vel secundi gradus resolvi posse

Quam pro obtinendis summis in philosophia honoribus inclito philosophorum ordini Academiae Iuliae Carolinae / exhibuit Carolus Fridericus Gauss

(Gauss's 1799 doctoral thesis: "A new proof of the theorem that every integral rational algebraic function of one variable can be resolved into real factors of the first or second degree", presented to the faculty of the Academia Julia Carolina.)
 
User avatar
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Universal Approximation theorem

February 17th, 2021, 4:44 pm

Talk by Jean-Marc Mercier and Daniel J. Duffy on Hilbert Space methods for Machine Learning

At QuantUniversity


Image
Hilbert Space Kernel Methods for Machine Learning: Background and Foundations
 
User avatar
JohnLeM
Posts: 379
Joined: September 16th, 2008, 7:15 pm

Re: Universal Approximation theorem

February 28th, 2021, 9:09 am

For historical purposes, we should all remember what happens when you build a house on rickety foundations, as the Universal Approximation Theorem is for numerical applications. The cohorts of researchers and practitioners using neural networks today should try to read this article, and should be fascinated by its main Theorem 1.1 as well as by some choice phrases such as
"Although many ANN training algorithms have demonstrated great success in practice, the reasons for this are generally not known, as no mathematically rigorous analysis exists for most algorithms"
or (doing my best to summarize)
"to lower the numerical error of an SGD batch gradient descent starting from an initial error \epsilon, it is enough to consider a neural network of size 1/\epsilon^{4}"

Let me recall that NN methods are used for industrial applications, and that these methods have benefited from massive private and public investment, since at least 2015 in finance. Nice job!
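Back-of-the-envelope, to see what that bound costs in practice (my arithmetic, not the article's): a size of 1/\epsilon^{4} means every extra decimal digit of accuracy multiplies the network size by 10,000.

```python
# How a 1/eps^4 size bound scales as the target error shrinks: each extra
# digit of accuracy costs a factor of 10^4 in network size.
for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    print(f"eps = {eps:g}  ->  size ~ {eps ** -4:.0e}")
```

Four digits of accuracy already asks for a network of size ~10^16 — which is why such bounds, even when correct, say little about the networks people actually train.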