Mc Ghee has been outrun: NNs today compute 100 million times faster than ODE methods!!!
https://arxiv.org/pdf/1910.07291.pdf
"Generating these (training) data required over 10 days of computer time".Mc Ghee is outrunned, NNs compute today 100 millions times faster than ODE methods !!!
https://arxiv.org/pdf/1910.07291.pdf
Lies, damned lies and neural networks. No way, Jose.
As Wittgenstein would say, it is a description, not an explanation. Not a single formula. Reads like a piece of marketing.
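To make the point both sides are circling concrete: the paper's claim is an amortization argument. All the cost sits in generating training data and training the network (the "10 days"); once trained, each new query is a cheap forward pass. Here is a toy sketch of that trade-off, using a pendulum instead of the three-body problem -- the setup, sample count, and network size are illustrative assumptions, nothing from the paper:

    # Toy illustration of the amortization argument in arXiv:1910.07291:
    # data generation and training are expensive (offline), but a trained
    # network answers each new query far faster than re-integrating the ODE.
    # Pendulum stand-in, NOT the paper's three-body setup; the network will
    # be crude -- the point here is the timing asymmetry, not accuracy.
    import time
    import numpy as np
    from scipy.integrate import solve_ivp
    from sklearn.neural_network import MLPRegressor

    def pendulum(t, y):
        # y = [theta, omega]; simple nonlinear pendulum
        return [y[1], -np.sin(y[0])]

    def solve(theta0, t_end):
        sol = solve_ivp(pendulum, (0.0, t_end), [theta0, 0.0],
                        rtol=1e-9, atol=1e-9)
        return sol.y[0, -1]  # theta at t_end

    # Offline phase: generate training data by brute-force integration.
    rng = np.random.default_rng(0)
    X = rng.uniform([0.1, 0.1], [2.0, 10.0], size=(2000, 2))  # (theta0, t)
    y = np.array([solve(th, t) for th, t in X])               # the slow part

    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                       random_state=0)
    net.fit(X, y)

    # Online phase: one query, integrator vs trained net.
    q = np.array([[1.0, 5.0]])
    t0 = time.perf_counter(); ref = solve(1.0, 5.0); t1 = time.perf_counter()
    pred = net.predict(q);                          t2 = time.perf_counter()
    print(f"ODE: {ref:.4f} in {t1 - t0:.2e} s")
    print(f"net: {pred[0]:.4f} in {t2 - t1:.2e} s")

Whether the amortized speedup ever pays for itself then depends entirely on how many queries you have to spread the 10 days over.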
It's like learning to play the violin. Ten thousand hours of practice, but then you can rattle through all four seasons in minutes. But 'ukulele is still better.

That analogy may be better than you think. When your ten thousand hours does not help me -- and I still have to practice/train the damn thing -- maybe that's a good criterion for a 'useless' machine learning application.
Cheeky! I always think my analogies are spot on!
Did you check for isomorphism?
And, correct me if I'm wrong, they didn't test different values of the hyperparameters (the whole tuning was only for two different pairs of "tuning" parameters). That's revolting! Even the tuning I did in my own paper, appendix A.5, I considered very scanty: https://arxiv.org/pdf/1802.09427.pdf Doing this properly would take them much longer.
On the other hand, when the app can be trained by a third party and then I can use it -- I am thinking of Waymo self-driving cars -- that's a different story!
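For what "doing this properly" might look like -- a systematic search over several hyperparameters, each candidate scored by cross-validation, rather than two hand-picked pairs -- here is a minimal sketch. The model, grid values, and dataset are placeholders, not taken from either paper:

    # Hedged sketch of a fuller tuning protocol: exhaustive grid search with
    # 5-fold cross-validated scoring. The grid here is illustrative only.
    from sklearn.datasets import make_regression
    from sklearn.model_selection import GridSearchCV
    from sklearn.neural_network import MLPRegressor

    X, y = make_regression(n_samples=500, n_features=4, noise=0.1,
                           random_state=0)
    grid = {
        "hidden_layer_sizes": [(32,), (64, 64), (128, 128, 128)],
        "alpha": [1e-5, 1e-3, 1e-1],          # L2 penalty
        "learning_rate_init": [1e-3, 1e-2],
    }
    search = GridSearchCV(MLPRegressor(max_iter=2000, random_state=0),
                          grid, cv=5, scoring="neg_mean_squared_error")
    search.fit(X, y)
    print(search.best_params_, search.best_score_)

Even this small grid is 18 configurations times 5 folds, which is exactly why a thorough job would have taken them much longer.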
Does A.5 have to do with cross-validation, 5-fold, and confusion-matrix use cases?
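For the mechanics being asked about -- 5-fold cross-validation producing out-of-fold predictions, summarized in a confusion matrix -- a minimal sketch follows. Whether appendix A.5 of arXiv:1802.09427 does exactly this, the thread does not establish; the dataset and classifier here are stand-ins:

    # Minimal sketch: 5-fold CV where each sample is predicted by a model
    # that never saw it during training, then a confusion matrix over the
    # pooled out-of-fold predictions.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import cross_val_predict

    X, y = load_iris(return_X_y=True)
    clf = LogisticRegression(max_iter=1000)
    y_pred = cross_val_predict(clf, X, y, cv=5)
    print(confusion_matrix(y, y_pred))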