For historical purposes, we should all remember what happens when you build a house on rickety foundations, which is what the Universal Approximation Theorem is for numerical applications. The cohorts of researchers and practitioners using neural networks today should read this article, and should be fascinated by its main Theorem 1.1 as well as by chosen phrases such as
"Although many ANN training algorithms have demonstrated great success in practice, the reasons for this are generally not known, as no mathematically rigorous analysis exists for most algorithms"
or (doing my best to summarize)
"to lower the numerical error on a SGD batch gradient descent, starting from an intitial error \epsilon, it is enough to consider a Neural network of size 1/\epsilon^{4}"
Let me recall that neural-network methods are used in industrial applications, and that these methods have benefited from massive private and public investment, in finance since at least 2015. Nice job!