
 
User avatar
ISayMoo
Posts: 2143
Joined: September 30th, 2015, 8:30 pm

Re: Universal Approximation theorem

October 22nd, 2019, 12:10 pm

It starts to sound like a married couple's argument. I'm outta here.
 
User avatar
JohnLeM
Posts: 362
Joined: September 16th, 2008, 7:15 pm

Re: Universal Approximation theorem

October 22nd, 2019, 12:34 pm

It starts to sound like a married couple's argument. I'm outta here.
@Cuchulainn, it seems that you put your finger exactly where it hurts ...
 
User avatar
Cuchulainn
Topic Author
Posts: 60256
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Universal Approximation theorem

October 22nd, 2019, 1:38 pm

It starts to sound like a married couple's argument. I'm outta here.
I have no experience in this area. Sorry. Life is too short for that, and it wastes good drinking time. Or whatever.
 
User avatar
katastrofa
Posts: 8376
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: Universal Approximation theorem

October 24th, 2019, 3:34 am

I see. Meanwhile, I hear more and more voices warning that billions of public and private money are being wasted on a technology without any foundation, and thus inefficient. The last time this legitimacy problem popped up, the artificial intelligence community crossed a 15-year desert as nice as your picture...
Not sure what you mean, but the last AI winter set in when the perceptron couldn't learn XOR, and passed with the application of "backpropagation" (calculating a gradient by the chain rule) in NN training. No Nobel Prize for that yet? (it was owing to the rapidly improving computer technology at the time, but scientific NPs go to lousy academics)
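For the curious, a minimal NumPy sketch of what "calculating a gradient by the chain rule" looks like for a toy one-hidden-layer net trained on XOR; the layer width, learning rate, seed and step count below are arbitrary illustrative choices, nothing canonical:

import numpy as np

# Toy data: XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)     # hidden activations
    p = sigmoid(h @ W2 + b2)     # network output

    # backward pass: the chain rule, applied layer by layer
    dp  = 2 * (p - y) / len(X)   # d(mean squared error)/dp
    dz2 = dp * p * (1 - p)       # through the output sigmoid
    dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
    dh  = dz2 @ W2.T             # back through the output weights
    dz1 = dh * h * (1 - h)       # through the hidden sigmoid
    dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)

    # plain gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p, 2))  # usually ends up near [[0], [1], [1], [0]]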
Last edited by katastrofa on October 24th, 2019, 8:43 am, edited 1 time in total.
 
User avatar
JohnLeM
Posts: 362
Joined: September 16th, 2008, 7:15 pm

Re: Universal Approximation theorem

October 24th, 2019, 8:21 am

I see. Meanwhile, I hear more and more voices warning that billions of public and private money are being wasted on a technology without any foundation, and thus inefficient. The last time this legitimacy problem popped up, the artificial intelligence community crossed a 15-year desert as nice as your picture...
Not sure what you mean, but the last AI winter set in when the perceptron couldn't learn XOR, and passed with the application of "backpropagation" (calculating a gradient by the chain rule) in NN training. No Nobel Prize for that yet? (it was owing to the rapidly improving computer technology at the time, but scientific NPs go to lousy academics)
I was rereading this thread. Setting aside the old grouchy slings, there are a bunch of very interesting references. I will try to gather them all.
 
User avatar
Cuchulainn
Topic Author
Posts: 60256
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Universal Approximation theorem

October 24th, 2019, 9:02 am

The difference between mathematics and engineering is the difference between starting again and tweaking ad nauseam (like the zillion learning-rate schedules).
How many times have we heard about vanishing gradients? 

Actually, this is a great thread because it exposes all our implicit assumptions and incorrect conceptions. It is an opportunity.

(BTW Paul Halmos wrote one of the best books on Measure Theory. I once attended a lecture of his. Brilliant exposition, a rare gift.).

 
User avatar
Cuchulainn
Topic Author
Posts: 60256
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Universal Approximation theorem

October 24th, 2019, 9:39 am

but the last AI winter set in when the perceptron couldn't learn XOR, and passed with the application of "backpropagation" (calculating a gradient by the chain rule) in NN training.

Probably very embarrassing for MIT at the time. These are maths undergraduate exercises in optimisation.

Even simple 2-d geometric reasoning on an x-y graph (dis)proves it.
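For completeness, the 2-d reasoning written out (the notation is mine). A perceptron output \(\mathbf{1}[w_1 x_1 + w_2 x_2 > b]\) would have to satisfy, for XOR,

\[
w_1\cdot 0 + w_2\cdot 0 \le b, \qquad
w_1\cdot 1 + w_2\cdot 1 \le b, \qquad
w_1\cdot 1 + w_2\cdot 0 > b, \qquad
w_1\cdot 0 + w_2\cdot 1 > b,
\]

and adding the first two gives \(w_1 + w_2 \le 2b\) while adding the last two gives \(w_1 + w_2 > 2b\): a contradiction, so no single line separates \(\{(0,0),(1,1)\}\) from \(\{(0,1),(1,0)\}\).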

 "Das ist nicht nur nicht richtig; es ist nicht einmal falsch!".
 
User avatar
Cuchulainn
Topic Author
Posts: 60256
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Universal Approximation theorem

October 28th, 2019, 12:04 pm

Navel-gazing time...

the key point IMO is that you need a neural network of at least 2 layers for universal approximation, this was proved in 1991 by Hornik, the sigmoid is not what makes it work, it's the >1 layers.

I don't think this is true anymore.

In the 60s DARPA funded a lot of research into the perceptron (which is a single-layer NN) as a generic learning machine. However, in 1969 Marvin Minsky proved in a classic paper that the perceptron can't learn the XOR function (and hence there is no universal learning) https://www.quora.com/Why-cant-the-XOR- ... perceptron

Actually, it was Minsky and Papert who proved this result in their book.

The real issue IMHO was that the perceptron is a linear classifier and will not classify correctly if the training set is not linearly separable. It's a basic mathematical problem. BTW the perceptron was invented in 1958, DARPA invests $$$ and in 1969 a counterexample is produced!
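A small sketch of that point, using the classic Rosenblatt update rule (the data, the zero initialisation and the epoch cap are my own illustrative choices): the rule converges on AND, which is linearly separable, but never settles on XOR:

import numpy as np

def train_perceptron(X, y, epochs=1000):
    """Mistake-driven perceptron rule; returns (w, b, converged)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            if pred != yi:               # update only on a mistake
                w += (yi - pred) * xi
                b += (yi - pred)
                mistakes += 1
        if mistakes == 0:                # a full clean pass: converged
            return w, b, True
    return w, b, False                   # never separated the data

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
print(train_perceptron(X, np.array([0, 0, 0, 1]))[2])  # AND -> True
print(train_perceptron(X, np.array([0, 1, 1, 0]))[2])  # XOR -> False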

Here is a sociological post-mortem review (The press loved Frank Rosenblatt).

https://pdfs.semanticscholar.org/f3b6/e ... 434277.pdf
 
User avatar
ISayMoo
Posts: 2143
Joined: September 30th, 2015, 8:30 pm

Re: Universal Approximation theorem

November 4th, 2019, 4:27 pm

A loss surface of a neural network can approximate anything. Including cows.
 
User avatar
katastrofa
Posts: 8376
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: Universal Approximation theorem

November 4th, 2019, 5:45 pm

But will milk from the NN cows be vegan?
 
User avatar
ISayMoo
Posts: 2143
Joined: September 30th, 2015, 8:30 pm

Re: Universal Approximation theorem

November 4th, 2019, 11:14 pm

Everything is possible in a 23,000,000-dimensional space.
 
User avatar
Cuchulainn
Topic Author
Posts: 60256
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Universal Approximation theorem

November 5th, 2019, 10:30 am

Everything is possible in a 23,000,000-dimensional space.
Use condensed milk?
 
User avatar
ISayMoo
Posts: 2143
Joined: September 30th, 2015, 8:30 pm

Re: Universal Approximation theorem

November 5th, 2019, 1:09 pm

Yes, almost all the milk will condense on the border of the sphere.
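Quantitatively: the fraction of a d-dimensional ball's volume lying within a relative distance eps of the boundary is 1 - (1 - eps)^d, which is essentially 1 for d = 23,000,000. A quick check (the eps values below are arbitrary):

# Volume fraction of a d-ball lying in a thin shell of relative thickness eps.
for d in (2, 100, 23_000_000):
    for eps in (0.1, 1e-6):
        frac = 1 - (1 - eps) ** d
        print(f"d={d:>10}, eps={eps:>8}: {frac:.10f} of the volume near the boundary")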
 
User avatar
JohnLeM
Posts: 362
Joined: September 16th, 2008, 7:15 pm

Re: Universal Approximation theorem

November 6th, 2019, 5:52 pm

A loss surface of a neural network can approximate anything. Including cows.
This paper seems quite interesting, but I unsuccessfully spent two hours trying to understand it. Nice cows indeed!
 
User avatar
Cuchulainn
Topic Author
Posts: 60256
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Universal Approximation theorem

November 6th, 2019, 7:53 pm

A loss surface of a neural network can approximate anything. Including cows.
Another extremely annoying article to try reading.

Too many 

pictures
text
references
forward references

Not enough 

motivating examples
maths 
special cases
algorithms

https://en.wikipedia.org/wiki/How_to_Solve_It