
- Yesterday, 2:19 pm
- Forum: Numerical Methods Forum
- Topic: 100 millions time faster than ODE methods
- Replies: **13**
- Views: **245**

Mc Ghee is outrun: NNs today compute 100 million times faster than ODE methods!

https://arxiv.org/pdf/1910.07291.pdf

- November 12th, 2019, 11:44 am
- Forum: Numerical Methods Forum
- Topic: Universal Approximation theorem
- Replies: **251**
- Views: **22235**

// BTW, kernels can be characterised as being universal, characteristic, translation-invariant, or strictly positive-definite. What's that? Strictly positive-definite kernels on [$]\Omega[$] are functions [$]k(x,y)[$] such that [$](k(x^i,x^j))_{i,j \le N}[$] is a symmetric positive-definite matrix for any set of distinct points [$]x^i \...
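As an illustration of that definition (not from the thread, and assuming the standard Gaussian RBF kernel, which is known to be strictly positive-definite), a minimal sketch: build the Gram matrix on a few distinct points and check that its eigenvalues are all strictly positive.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    """Gaussian RBF kernel k(x, y) = exp(-|x - y|^2 / (2 sigma^2))."""
    return np.exp(-np.subtract.outer(x, y) ** 2 / (2 * sigma**2))

# Gram matrix (k(x^i, x^j))_{i,j} on distinct 1-D points
x = np.array([0.0, 0.5, 1.3, 2.7])
K = rbf_kernel(x, x)

# Strict positive-definiteness: every eigenvalue is strictly positive
eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() > 0)  # True
```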

- November 12th, 2019, 10:22 am
- Forum: Numerical Methods Forum
- Topic: Universal Approximation theorem
- Replies: **251**
- Views: **22235**

For me it's perfectly clear what the article is saying. After reading one of their included references, it is now even clearer; I was unable to understand their log-entropy functional (3) without this reading. Frankly, they could have developed it a little more to make it understandable, or at leas...

- November 12th, 2019, 9:41 am
- Forum: Numerical Methods Forum
- Topic: Are Artificial Intelligence methods (AKA Neural Networks) for PDEs about to rediscover the wheel ?
- Replies: **329**
- Views: **16069**

JohnLeM, it looks like those kernels (and RKHS) that you mention have many applications. Can we say that kernels allow us to define metrics and norms on probability measures? Then you can bring the artillery of Functional Analysis to bear, as I attempted to introduce before it was shot down. People like th...
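One standard way kernels induce a metric on probability measures is the maximum mean discrepancy (MMD): the RKHS distance between the mean embeddings of two distributions. A minimal sketch (not from the thread; the biased V-statistic estimator with a Gaussian kernel on 1-D samples is my own choice for illustration):

```python
import numpy as np

def mmd2(x, y, sigma=1.0):
    """Biased estimate of the squared maximum mean discrepancy between
    empirical samples x and y under a Gaussian kernel -- a kernel-induced
    (squared) distance between the two underlying probability measures."""
    def k(a, b):
        return np.exp(-np.subtract.outer(a, b) ** 2 / (2 * sigma**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, 500), rng.normal(0, 1, 500))  # same law
diff = mmd2(rng.normal(0, 1, 500), rng.normal(2, 1, 500))  # shifted law
print(same < diff)  # True: samples from different laws are farther apart
```

For this metric to separate all probability measures, the kernel must be characteristic, which ties back to the classification above.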

- November 7th, 2019, 9:38 am
- Forum: Numerical Methods Forum
- Topic: Universal Approximation theorem
- Replies: **251**
- Views: **22235**

I tried again for an hour this morning, but it is really unclear. However, one of the included references seems more detailed and clearer; I'll fall back to it to understand their work.

- November 7th, 2019, 7:09 am
- Forum: Numerical Methods Forum
- Topic: Universal Approximation theorem
- Replies: **251**
- Views: **22235**

A loss surface of a neural network can approximate anything. Including cows. This paper seems quite interesting, but I spent two hours trying to understand it, unsuccessfully. But nice cows indeed! Is the experience cathartic? I am not sure whether it is cathartic or not. I will try again to see if I can...

- November 6th, 2019, 5:52 pm
- Forum: Numerical Methods Forum
- Topic: Universal Approximation theorem
- Replies: **251**
- Views: **22235**

This paper seems quite interesting, but I spent two hours trying to understand it, unsuccessfully. But nice cows indeed! A loss surface of a neural network can approximate anything. Including cows.

- October 25th, 2019, 4:13 pm
- Forum: Technical Forum
- Topic: DL and PDEs
- Replies: **168**
- Views: **21810**

Another one ... bites the dust?

Mmm... I really feel some embarrassment when I look at all these DL papers that are going to revolutionize PDE methods.

- October 24th, 2019, 8:21 am
- Forum: Numerical Methods Forum
- Topic: Universal Approximation theorem
- Replies: **251**
- Views: **22235**

I see. Meanwhile, I hear more and more voices warning that billions of public and private money are being wasted on a technology without any foundation, and thus inefficient. The last time this legitimacy problem popped up, the artificial intelligence community crossed a 15-year desert as nice as your p...

- October 22nd, 2019, 12:34 pm
- Forum: Numerical Methods Forum
- Topic: Universal Approximation theorem
- Replies: **251**
- Views: **22235**

@Cuchullain, it seems that you put your finger exactly where it hurts... It starts to sound like a married couple's argument. I'm outta here.

- October 22nd, 2019, 11:28 am
- Forum: Numerical Methods Forum
- Topic: Universal Approximation theorem
- Replies: **251**
- Views: **22235**

Correct. Too bad; maybe a little goodwill could help? For me the topic is good, if I understood it well: "dig into Cybenko's theorem to turn it into a practical tool". This could really help the AI community understand their tools. Or, equally fairly, it could help the numerical optimisati...
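To make "Cybenko's theorem as a practical tool" concrete: the theorem says a single hidden layer of sigmoids can approximate any continuous function on a compact set. A minimal sketch of this (my own illustration, not anyone's post: random sigmoid hidden units, output weights fitted by least squares) already gets a small uniform error on a smooth target:

```python
import numpy as np

# One-hidden-layer sigmoid network, as in Cybenko's theorem, with random
# hidden weights and output weights fitted by least squares.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)
target = np.sin(3 * x)

n_hidden = 50
w = rng.normal(0, 5, n_hidden)   # hidden weights
b = rng.normal(0, 5, n_hidden)   # hidden biases
H = 1 / (1 + np.exp(-(np.outer(x, w) + b)))  # sigmoid features, (200, 50)

c, *_ = np.linalg.lstsq(H, target, rcond=None)  # output weights
err = np.max(np.abs(H @ c - target))
print(err)  # small uniform approximation error
```

The theorem guarantees existence of such weights; how to find them efficiently, and how many units are needed, is exactly the practical question the thread is circling.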

- October 22nd, 2019, 6:34 am
- Forum: Numerical Methods Forum
- Topic: Universal Approximation theorem
- Replies: **251**
- Views: **22235**

I see. Meanwhile, I hear more and more voices warning that billions of public and private money are being wasted on a technology without any foundation, and thus inefficient. The last time this legitimacy problem popped up, the artificial intelligence community crossed a 15-year desert as nice as your pi...

- October 21st, 2019, 8:17 pm
- Forum: Numerical Methods Forum
- Topic: Universal Approximation theorem
- Replies: **251**
- Views: **22235**

Correct. Too bad; maybe a little goodwill could help? For me the topic is good, if I understood it well: "dig into Cybenko's theorem to turn it into a practical tool". This could really help the AI community understand their tools. I agree. This was always, and still is, the goal. There's ...

- October 21st, 2019, 7:44 pm
- Forum: Numerical Methods Forum
- Topic: Universal Approximation theorem
- Replies: **251**
- Views: **22235**

People trying to communicate but not speaking the same language... This thread is going nowhere. Correct. Too bad; maybe a little goodwill could help? For me the topic is good, if I understood it well: "dig into Cybenko's theorem to turn it into a practical tool". This could really help the...

- October 21st, 2019, 2:20 pm
- Forum: Numerical Methods Forum
- Topic: Universal Approximation theorem
- Replies: **251**
- Views: **22235**

There is a difference between numerical optimisation and statistical learning algorithms, which I think you and Cuch are missing. The goal of the first is, given f(x), to find x0 = argmin_x f(x). The goal of the latter is, given a training set, to find model parameters which lead to the expected error rat...
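The distinction drawn above can be sketched in a few lines (my own illustration, with a toy quadratic and a toy noisy line): in optimisation the objective f is known and we search for its minimiser; in learning only samples are given and we fit parameters meant to generalise.

```python
import numpy as np

# Numerical optimisation: f is known; find x0 = argmin_x f(x).
f = lambda x: (x - 3) ** 2 + 1
x = 0.0
for _ in range(200):             # plain gradient descent, grad f = 2(x - 3)
    x -= 0.1 * 2 * (x - 3)
print(round(x, 3))  # -> 3.0

# Statistical learning: the true function is unknown; only noisy samples
# (x_i, y_i) are given, and we fit parameters from that training set.
rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, 100)
ys = 2 * xs + 0.5 + rng.normal(0, 0.1, 100)       # noisy line
A = np.stack([xs, np.ones_like(xs)], axis=1)
(slope, intercept), *_ = np.linalg.lstsq(A, ys, rcond=None)
print(slope, intercept)  # close to the true 2 and 0.5
```

The second problem is still solved *via* an optimisation (least squares here), but its success is judged by the error on unseen data, not by how well the training objective was minimised.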
