SERVING THE QUANTITATIVE FINANCE COMMUNITY

ISayMoo
Posts: 2314
Joined: September 30th, 2015, 8:30 pm

### Re: Universal Approximation theorem

It starts to sound like a married couple's argument. I'm outta here.

JohnLeM
Posts: 380
Joined: September 16th, 2008, 7:15 pm

### Re: Universal Approximation theorem

It starts to sound like a married couple's argument. I'm outta here.
@Cuchullain, it seems that you put your finger exactly where it hurts ...

Cuchulainn
Topic Author
Posts: 62391
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

### Re: Universal Approximation theorem

It starts to sound like a married couple's argument. I'm outta here.
I have no experience in this area. Sorry. Life is too short for that, and it wastes good drinking time. Or whatever.

katastrofa
Posts: 9327
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

### Re: Universal Approximation theorem

I see. Meanwhile, I hear more and more voices warning that billions in public and private money are being wasted on a technology without any foundations, and thus inefficient. The last time this legitimacy problem popped up, the artificial intelligence community crossed a 15-year desert as nice as your picture...
Not sure what you mean, but the last AI winter set in when the perceptron couldn't do XOR, and passed with the application of "backpropagation" (calculating a gradient by the chain rule) to NN training. No Nobel Prize for that yet? (it was thanks to the rapidly improving computer technology of the time, but scientific NPs go to lousy academics)
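For the curious: "backpropagation is just the chain rule" fits in a few lines. A minimal sketch, assuming NumPy and a tiny two-layer sigmoid net trained on XOR; the variable names and network shape are my own illustration, not from any particular library:

```python
import numpy as np

# XOR: the function the bare perceptron famously cannot learn
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 1.0, (2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10_000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: the chain rule, layer by layer,
    # for the squared loss L = 0.5 * (out - y)^2
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# when training succeeds, out approaches [0, 1, 1, 0]
print(out.ravel())
```

That's the entire trick: each `d_*` line is one application of the chain rule, and the rest is bookkeeping.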
Last edited by katastrofa on October 24th, 2019, 8:43 am, edited 1 time in total.

JohnLeM
Posts: 380
Joined: September 16th, 2008, 7:15 pm

### Re: Universal Approximation theorem

I see. Meanwhile, I hear more and more voices warning that billions in public and private money are being wasted on a technology without any foundations, and thus inefficient. The last time this legitimacy problem popped up, the artificial intelligence community crossed a 15-year desert as nice as your picture...
Not sure what you mean, but the last AI winter set in when the perceptron couldn't do XOR, and passed with the application of "backpropagation" (calculating a gradient by the chain rule) to NN training. No Nobel Prize for that yet? (it was thanks to the rapidly improving computer technology of the time, but scientific NPs go to lousy academics)
I was rereading this thread. Setting aside the old grouchy slings, there is a bunch of very interesting references. I will try to gather them all.

Cuchulainn
Topic Author
Posts: 62391
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

### Re: Universal Approximation theorem

The difference between mathematics and engineering is the difference between starting again and tweaking ad nauseam (like the zillion learning-rate schedules).

Actually, this is a great thread because it exposes all our implicit assumptions and incorrect conceptions. It is an opportunity.

(BTW Paul Halmos wrote one of the best books on Measure Theory. I once attended a lecture of his. Brilliant exposition, a rare gift.).

Cuchulainn
Topic Author
Posts: 62391
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

### Re: Universal Approximation theorem

but the last AI winter set in when the perceptron couldn't do XOR, and passed with the application of "backpropagation" (calculating a gradient by the chain rule) to NN training.

Probably very embarrassing for MIT at the time. These are maths undergraduate exercises in optimisation.

Even simple 2d geometric reasoning on an x-y graph (dis)proves it.

"Das ist nicht nur nicht richtig; es ist nicht einmal falsch!" ("That is not only not right; it is not even wrong!")
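Here is that 2d reasoning in code, for anyone who wants it: exhaustively try linear thresholds w1*x1 + w2*x2 + b > 0 on a coarse grid and count how many XOR points each gets right. (Just an illustration with NumPy, not a proof; the real proof is two lines of algebra, since the four inequalities force w1 + w2 + 2b to be both positive and non-positive.)

```python
import itertools
import numpy as np

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]                     # XOR targets

best = 0
grid = np.linspace(-2.0, 2.0, 41)    # coarse grid of weights and bias
for w1, w2, b in itertools.product(grid, repeat=3):
    preds = [1 if w1 * x1 + w2 * x2 + b > 0 else 0 for x1, x2 in X]
    best = max(best, sum(p == t for p, t in zip(preds, y)))

print(best)  # 3 -- no linear threshold classifies all four points
```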

Cuchulainn
Topic Author
Posts: 62391
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

### Re: Universal Approximation theorem

Navel-gazing time..

the key point IMO is that you need a neural network of at least 2 layers for universal approximation; this was proved in 1991 by Hornik. The sigmoid is not what makes it work, it's the >1 layers.

I don't think this is true anymore.

In the 60s DARPA funded a lot of research into the perceptron (which is a single-layer NN) as a generic learning machine. However, in 1969 Marvin Minsky proved in a classic paper that the perceptron can't learn the XOR function (and hence is no universal learner) https://www.quora.com/Why-cant-the-XOR- ... perceptron

Actually, it was Minsky and Papert who proved this result in their book.

The real issue IMHO was that the perceptron is a linear classifier and will not classify correctly if the training set is not linearly separable. It's a basic mathematical problem. BTW the perceptron was invented in 1958, DARPA invests \$, and in 1969 a counterexample is produced!
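That linear-separability point is easy to demonstrate. A sketch of Rosenblatt's learning rule (helper name `train_perceptron` is my own, NumPy used for brevity): it converges on AND, which is linearly separable, and never settles on XOR.

```python
import numpy as np

def train_perceptron(X, y, epochs=100):
    """Rosenblatt's rule: w += (target - prediction) * x."""
    w = np.zeros(X.shape[1] + 1)               # last entry is the bias
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a constant input
    for _ in range(epochs):
        errors = 0
        for xi, t in zip(Xb, y):
            pred = 1 if w @ xi > 0 else 0
            w += (t - pred) * xi
            errors += int(pred != t)
        # a zero-error epoch means the fixed weights classify the whole
        # training set -- impossible for XOR with any linear classifier
        if errors == 0:
            return w, True
    return w, False

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
and_y = [0, 0, 0, 1]   # linearly separable
xor_y = [0, 1, 1, 0]   # not linearly separable

print(train_perceptron(X, and_y)[1])  # True
print(train_perceptron(X, xor_y)[1])  # False
```

The perceptron convergence theorem guarantees the first case terminates; the second cycles forever, which is exactly Minsky and Papert's complaint.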

Here is a sociological post-mortem review (the press loved Frank Rosenblatt).

https://pdfs.semanticscholar.org/f3b6/e ... 434277.pdf

ISayMoo
Posts: 2314
Joined: September 30th, 2015, 8:30 pm

### Re: Universal Approximation theorem

A loss surface of a neural network can approximate anything. Including cows.

katastrofa
Posts: 9327
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

### Re: Universal Approximation theorem

But will milk from the NN cows be vegan?

ISayMoo
Posts: 2314
Joined: September 30th, 2015, 8:30 pm

### Re: Universal Approximation theorem

Everything is possible in a 23,000,000-dimensional space.

Cuchulainn
Topic Author
Posts: 62391
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

### Re: Universal Approximation theorem

Everything is possible in a 23,000,000-dimensional space.
Use condensed milk?

ISayMoo
Posts: 2314
Joined: September 30th, 2015, 8:30 pm

### Re: Universal Approximation theorem

Yes, almost all the milk will condense on the border of the sphere.
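Joking aside, that is exactly concentration of measure: the fraction of a d-dimensional unit ball's volume lying within distance eps of its surface is 1 - (1 - eps)^d, which rushes to 1 as d grows. A two-line check (the helper name is my own):

```python
def shell_fraction(d, eps=0.05):
    """Fraction of a d-dim unit ball's volume within eps of the surface."""
    return 1.0 - (1.0 - eps) ** d

for d in (1, 3, 100, 1000):
    print(d, shell_fraction(d))
# by d = 100, over 99% of the volume already sits in the outer 5% shell
```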

JohnLeM
Posts: 380
Joined: September 16th, 2008, 7:15 pm

### Re: Universal Approximation theorem

A loss surface of a neural network can approximate anything. Including cows.
This paper seems quite interesting, but I spent two hours trying to understand it, without success. But nice cows indeed!

Cuchulainn
Topic Author
Posts: 62391
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

### Re: Universal Approximation theorem

A loss surface of a neural network can approximate anything. Including cows.
Another extremely annoying article to try to read.

Too many:

- pictures
- text
- references
- forward references

Not enough:

- motivating examples
- maths
- special cases
- algorithms

https://en.wikipedia.org/wiki/How_to_Solve_It