Re: Universal Approximation theorem
Posted: October 7th, 2019, 3:06 pm
Serving the Quantitative Finance Community
https://forum.wilmott.com/
> So it's an advert for a Swiss AI research-paper bot.

Leave us alone!
> Cuch, you complain that there's not enough maths in ML. Is this paper sufficiently mathy for you? https://arxiv.org/pdf/1908.10828.pdf

Please, don't hit me with them negative waves so early in the morning. You know the answer already.
> Cuch, you complain that there's not enough maths in ML. Is this paper sufficiently mathy for you? https://arxiv.org/pdf/1908.10828.pdf

Thinking about this more, how is it possible to write reams and reams of mathematical results and publish so many articles in such a short space of time? Even worse, who benefits?
That's the sanity clause; it's in every AI paper:

"In the past few years deep artificial neural networks (DNNs) have been successfully employed in a large number of computational problems including, e.g., language processing, image recognition, fraud detection, and computational advertisement."
> I have some simple questions about the Itkin paper. IMHO the experimental procedure is not described well enough:
> 1. What was the distribution of the sampled vectors [S, K, T, r, q, sigma]?
> 2. How did they split the complete sample into training and test sets?
> 3. Which optimiser was used for the results presented in the first sections of the paper: RMSProp or Adam?
> 4. What were the optimiser parameters, e.g. the learning rate? How were they chosen? Did they do a parameter sweep and select the best ones?
> 5. How many times was the test set used?
> 6. What were the values of the no-arbitrage penalty constants lambda and m? How were they chosen?
> 7. How much accuracy was lost for out-of-distribution inputs?

I agree; there are a number of important issues that are not addressed. Maybe I am making a mountain out of a molehill.
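The questions above all concern unstated experimental choices. As a minimal numpy sketch of the kind of detail that would answer them — the sampling ranges, the 80/20 split ratio, and the penalty constant `lam` are all assumptions for illustration, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Question 1: the paper does not state the sampling distribution, so uniform
# draws over plausible option-parameter ranges are assumed here.
n = 10_000
samples = np.column_stack([
    rng.uniform(50, 150, n),    # S: spot
    rng.uniform(50, 150, n),    # K: strike
    rng.uniform(0.1, 2.0, n),   # T: maturity in years
    rng.uniform(0.0, 0.05, n),  # r: risk-free rate
    rng.uniform(0.0, 0.03, n),  # q: dividend yield
    rng.uniform(0.1, 0.5, n),   # sigma: volatility
])

# Question 2: an explicit, reproducible train/test split (80/20 assumed).
idx = rng.permutation(n)
split = int(0.8 * n)
train, test = samples[idx[:split]], samples[idx[split:]]

# Question 6: a no-arbitrage penalty added to the data loss. `lam` is the
# penalty constant the questions ask about; its value here is arbitrary.
def penalized_loss(pred, target, arbitrage_violation, lam=10.0):
    mse = np.mean((pred - target) ** 2)
    penalty = lam * np.mean(np.maximum(arbitrage_violation, 0.0) ** 2)
    return mse + penalty
```

A paper that printed even this much — distributions, split, penalty constants — would let readers reproduce or challenge the reported errors.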
> Someone (hint hint) should totally write a review of machine learning methods used for PDEs to separate the good from the bad and the ugly, and submit it to JMLR. I've been told they do a good job with reviewing, albeit slowly.

A review? A rant about the universal approximation theorem being useless because it doesn't tell you anything about the rate of convergence to a function - oh wait, a function you don't even know? We all know that it doesn't make sense. It's yet another chaotic, inconclusive debate made for show. Right?
The UAT is based on measurable functions, which is too broad a class for numerical analysis. However, the consequences of relying on the UAT magic wand deserve attention.
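To make the point concrete, here is a small numpy sketch (an illustration, not anyone's published method): the UAT guarantees that some one-hidden-layer network approximates a continuous target arbitrarily well, but it gives no rate. Fitting random-feature ReLU networks of increasing width to sin(x) by least squares, we can only *observe* the error shrinking; the theorem itself predicts nothing about how fast.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-np.pi, np.pi, 400)
target = np.sin(x)

def fit_error(width):
    # Random hidden layer: ReLU(w*x + b) with fixed random w and b;
    # only the output weights are solved for, by linear least squares.
    w = rng.normal(size=width)
    b = rng.uniform(-np.pi, np.pi, size=width)
    hidden = np.maximum(np.outer(x, w) + b, 0.0)
    coef, *_ = np.linalg.lstsq(hidden, target, rcond=None)
    # Sup-norm error of the fitted network on the grid.
    return np.max(np.abs(hidden @ coef - target))

# Error at widths 4, 16, 64, 256: it falls, but the UAT never said how fast.
errors = [fit_error(k) for k in (4, 16, 64, 256)]
```

The decreasing errors are an empirical observation for this one target; a numerical analyst wants exactly the rate statement the UAT does not supply.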
> Cuch, you complain that there's not enough maths in ML. Is this paper sufficiently mathy for you? https://arxiv.org/pdf/1908.10828.pdf

I don't buy this paper! Try to read it and you will understand what I mean.