100 million times faster than ODE methods

Posted: November 14th, 2019, 2:19 pm
by JohnLeM
McGhee is outrun: NNs now compute 100 million times faster than ODE methods!!!
https://arxiv.org/pdf/1910.07291.pdf

Re: 100 million times faster than ODE methods

Posted: November 14th, 2019, 3:51 pm
by Alan
McGhee is outrun: NNs now compute 100 million times faster than ODE methods!!!
https://arxiv.org/pdf/1910.07291.pdf
"Generating these [training] data required over 10 days of computer time."

Re: 100 million times faster than ODE methods

Posted: November 14th, 2019, 4:42 pm
by Paul
It's like learning to play the violin. Ten thousand hours of practice, but then you can rattle through all four seasons in minutes.

But 'ukulele is still better.

Re: 100 million times faster than ODE methods

Posted: November 14th, 2019, 4:48 pm
by Cuchulainn
McGhee is outrun: NNs now compute 100 million times faster than ODE methods!!!
https://arxiv.org/pdf/1910.07291.pdf
"Generating these [training] data required over 10 days of computer time."
Lies, damned lies and neural networks. No way, Jose.

For a serious attempt at a SABR application in finance, see

https://www.datasim.nl/blogs/26/msc-the ... al-finance

Re: 100 million times faster than ODE methods

Posted: November 14th, 2019, 4:58 pm
by Cuchulainn
McGhee is outrun: NNs now compute 100 million times faster than ODE methods!!!
https://arxiv.org/pdf/1910.07291.pdf
As Wittgenstein would say, it is a description and not an explanation. Not a single formula. Reads like a piece of marketing.

Re: 100 million times faster than ODE methods

Posted: November 14th, 2019, 5:00 pm
by Alan
It's like learning to play the violin. Ten thousand hours of practice, but then you can rattle through all four seasons in minutes.

But 'ukulele is still better.
That analogy may be better than you think. When your ten thousand hours do not help me -- and I still have to practice/train the damn thing -- maybe that's a good criterion for a 'useless' machine learning application.

On the other hand, when the app can be trained by a third party and then I can use it  --  I am thinking Waymo self-driving cars -- that's a different story!    

Re: 100 million times faster than ODE methods

Posted: November 14th, 2019, 5:03 pm
by Cuchulainn
12 days of training adds to the destruction of the planet.

Re: 100 million times faster than ODE methods

Posted: November 14th, 2019, 5:06 pm
by Paul
That analogy may be better than you think. When your ten thousand hours do not help me -- and I still have to practice/train the damn thing -- maybe that's a good criterion for a 'useless' machine learning application.
Cheeky!  I always think my analogies are spot on!

Re: 100 million times faster than ODE methods

Posted: November 14th, 2019, 5:29 pm
by Cuchulainn
Cheeky!  I always think my analogies are spot on!
Did you check for isomorphism?

https://en.wikipedia.org/wiki/Analogy

Re: 100 million times faster than ODE methods

Posted: November 14th, 2019, 10:39 pm
by Alan
Speaking of Waymo, which I was, they are right now doing an AMA over at reddit: link

oops -- it's over already and my question did not get answered! 

Re: 100 million times faster than ODE methods

Posted: November 15th, 2019, 7:05 am
by katastrofa
That analogy may be better than you think. When your ten thousand hours do not help me -- and I still have to practice/train the damn thing -- maybe that's a good criterion for a 'useless' machine learning application.

On the other hand, when the app can be trained by a third party and then I can use it  --  I am thinking Waymo self-driving cars -- that's a different story!    
And, correct me if I'm wrong, they didn't test different hyperparameter values (the whole tuning covered only two pairs of "tuning" parameters). That's revolting! I did this kind of tuning, very scantily, in my paper, appendix A.5: https://arxiv.org/pdf/1802.09427.pdf. Doing it properly would take them much longer.

Re: 100 million times faster than ODE methods

Posted: November 15th, 2019, 10:52 am
by Cuchulainn
And, correct me if I'm wrong, they didn't test different hyperparameter values (the whole tuning covered only two pairs of "tuning" parameters). That's revolting! I did this kind of tuning, very scantily, in my paper, appendix A.5: https://arxiv.org/pdf/1802.09427.pdf. Doing it properly would take them much longer.
Does A.5 have to do with cross-validation, e.g. 5-fold, and confusion-matrix use cases?

Re: 100 million times faster than ODE methods

Posted: November 15th, 2019, 3:18 pm
by katastrofa
Sort of. Yes.
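
Concretely, the sort of loop I mean -- a small grid over hyperparameters, each candidate scored by 5-fold cross-validation, then a confusion matrix for the winner -- looks roughly like this in scikit-learn (toy data and a made-up grid, nothing from either paper):

from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder data; in practice this would be the study's own training set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# A (still small) hyperparameter grid -- already far more than two pairs.
grid = {
    "hidden_layer_sizes": [(32,), (64,), (64, 64)],
    "alpha": [1e-4, 1e-3, 1e-2],          # L2 penalty
    "learning_rate_init": [1e-3, 1e-2],
}

# Each of the 3 x 3 x 2 = 18 candidates is fitted and scored 5 times.
search = GridSearchCV(MLPClassifier(max_iter=1000, random_state=0),
                      grid, cv=5, n_jobs=-1)
search.fit(X_tr, y_tr)

print("best:", search.best_params_)
print(confusion_matrix(y_te, search.best_estimator_.predict(X_te)))

Eighteen candidates times five folds is already 90 fits; with a multi-day training run per fit, you can see why they skipped it.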

Re: 100 million times faster than ODE methods

Posted: December 3rd, 2019, 6:50 pm
by Cuchulainn
In December 2015, Google announced that the D-Wave 2X outperforms both simulated annealing and Quantum Monte Carlo by up to a factor of 100,000,000 on a set of hard optimization problems.