Serving the Quantitative Finance Community

 
Traden4Alpha
Posts: 3300
Joined: September 20th, 2002, 8:30 pm

Re: If you are bored with Deep Networks

November 17th, 2017, 5:36 pm

Let's try another example: exponentially fitted methods for the Black-Scholes PDE and other linear convection-diffusion-reaction PDEs are stable for any values of drift and diffusion. Standard FDM fails when convection dominance kicks in. We can prove this without having to write a single line of code.
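To see what that claim buys, consider the steady model problem -eps*u'' + a*u' = 0 on (0,1) with u(0)=0, u(1)=1. Here is a minimal numpy sketch (my own, with illustrative parameter values) comparing plain central differences against the Il'in exponentially fitted scheme, which replaces eps by (a*h/2)*coth(a*h/(2*eps)):

```python
import numpy as np

def solve_cd(n, eps, a, fitted=False):
    """Central-difference FDM for -eps*u'' + a*u' = 0 on (0,1),
    u(0) = 0, u(1) = 1; optionally with Il'in exponential fitting."""
    h = 1.0 / n
    if fitted:
        # Fitting factor: replace eps by (a*h/2)*coth(a*h/(2*eps))
        rho = a * h / (2.0 * eps)
        eps = eps * rho / np.tanh(rho)
    # Tridiagonal system over interior nodes 1..n-1
    lower = -eps / h**2 - a / (2 * h)
    diag = 2 * eps / h**2
    upper = -eps / h**2 + a / (2 * h)
    A = (np.diag(np.full(n - 1, diag))
         + np.diag(np.full(n - 2, lower), -1)
         + np.diag(np.full(n - 2, upper), 1))
    b = np.zeros(n - 1)
    b[-1] = -upper  # right boundary value u(1) = 1
    return np.linalg.solve(A, b)

# Convection-dominated case: mesh Peclet number a*h/(2*eps) = 25
u_std = solve_cd(20, eps=1e-3, a=1.0)
u_fit = solve_cd(20, eps=1e-3, a=1.0, fitted=True)
print(min(u_std))  # negative: the standard scheme oscillates
print(min(u_fit))  # >= 0: the fitted scheme stays monotone
```

When the mesh Peclet number exceeds 1 the standard scheme oscillates and goes negative, while the fitted scheme stays monotone for any drift/diffusion ratio, exactly as the theory predicts in advance.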

Now, has AI classified input types/categories so that, having done that, you know what to expect and which methods work? The bespoke arXiv papers seem to suggest not. DL is only a few years old, so miracles take longer. Be careful with hype.

BTW here is a great example of constructivist mathematics

https://en.wikipedia.org/wiki/Banach_fi ... nt_theorem

Maybe this is an eye-opener
https://en.wikipedia.org/wiki/Construct ... thematics)

Why beat one's head against a mathematical wall trying to find a proof of a solution if said proof does not exist?
It's the future of humanity that's at stake here, Jim: Newton did it, Stokes did it, so AI should do it.
We are 100% agreed on the value of math for proving stuff. It's the same power that's found in the true engineering fields -- one can design a product and know exactly how it will perform without spending a penny making stuff.

We are agreed that AI does not have that. And we all agree that it would be really, really good if it did. I think the show-stopper disagreement is over whether it's possible and tractable.

Note: there's another failure mode for math that is potentially latent in this problem. Even if one finds a proof that predicts which input systems are learnable by which AI methods, there's no guarantee that the proof encodes a simple calculation of existence or robustness. In seeking a proof of existence of a solution, we assume both that the proof exists and that the phase space of well-behaved and ill-behaved systems is simple and easily calculated. Calculating whether a given AI method will work in a given context may well be harder than just running the AI method (Wolfram's point about computational irreducibility).

As much as I truly love the deductive power of math and engineering, I also know that sometimes trial-and-error is the best and cheapest method.
 
outrun
Posts: 4573
Joined: January 1st, 1970, 12:00 am

Re: If you are bored with Deep Networks

November 17th, 2017, 5:44 pm

As much as I truly love the deductive power of math and engineering, I also know that sometimes trial-and-error is the best and cheapest method.
To add to that: finding the global minimum of a two-layer NN is NP-complete.

... but the quantum computers are coming!

Edit: it's a good exercise to try to prove that it's NP-complete.
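For reference, the classic result in this direction is Blum and Rivest (1992), who showed that training even a 3-node network is NP-complete. The toy sketch below (mine, purely illustrative and in no way a proof) shows the practical symptom: multi-start optimization of a tiny 2-2-1 tanh network on XOR typically scatters across several distinct basins, so no single descent run certifies a global minimum.

```python
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])  # XOR targets

def loss(w):
    """MSE of a 2-2-1 tanh network; w packs all 9 parameters."""
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    hidden = np.tanh(X @ W1.T + b1)
    return np.mean((hidden @ W2 + b2 - y) ** 2)

# Multi-start local search: restarts land in different basins
# (hidden-unit permutation/sign symmetries plus genuinely bad
# local solutions), illustrating the non-convex landscape.
rng = np.random.default_rng(0)
finals = sorted(
    minimize(loss, rng.normal(size=9), method="Nelder-Mead",
             options={"maxiter": 5000}).fun
    for _ in range(30)
)
print(finals[:3], finals[-3:])  # spread of final losses across restarts
```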
 
Cuchulainn
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

November 17th, 2017, 7:08 pm

... but the quantum computers are coming!
2018, you said.
 
Cuchulainn
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

November 17th, 2017, 7:10 pm

As much as I truly love the deductive power of math 
That's a cliché. Here's what Halmos says.
  • Mathematics is not a deductive science — that's a cliché. When you try to prove a theorem, you don't just list the hypotheses, and then start to reason. What you do is trial and error, experimentation, guesswork. You want to find out what the facts are, and what you do is in that respect similar to what a laboratory technician does. Possibly philosophers would look on us mathematicians the same way as we look on the technicians, if they dared.
And Hadamard says this

The roots of creativity for Hadamard lie not in consciousness, but in the long unconscious work of incubation, and in the unconscious aesthetic selection of ideas that thereby pass into consciousness. His discussion of this process comprises a wide range of topics, including the use of mental images or symbols, visualized or auditory words, "meaningless" words, logic, and intuition. Among the important documents collected is a letter from Albert Einstein analyzing his own mechanism of thought.

A clever graduate student could teach Fourier something new, but surely no one claims that he could teach Archimedes to reason better.
 
Traden4Alpha
Posts: 3300
Joined: September 20th, 2002, 8:30 pm

Re: If you are bored with Deep Networks

November 17th, 2017, 9:12 pm

That's a cliché. Here's what Halmos says.
  • Mathematics is not a deductive science — that's a cliché. [...] What you do is trial and error, experimentation, guesswork.
Whoever said that deductive science doesn't involve trial and error, experimentation, guesswork? That's a strawman argument.

But now that you've mentioned it, the fact that math is trial-and-error too makes it a non-robust solution-finding process, just like the trial-and-error DL methods. Math has no guarantee of finding a solution, although it surely does have the major advantage that if it does find a solution, it's a good one.

Worse, unless math can create a universal proof that all systems and loss functions are unconditionally solvable by some DL method, there's the nasty issue that only empirical trial-and-error can resolve whether a given natural system obeys the prerequisites of a conditional existence proof.

Thus math is, at best, the perfect solution to only half the problem because the question of whether a natural system is modeled correctly by a given mathematical system cannot be proven with math.
 
Cuchulainn
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

November 17th, 2017, 9:25 pm

Whoever said that deductive science doesn't involve trial and error, experimentation, guesswork?
You did.
 
Traden4Alpha
Posts: 3300
Joined: September 20th, 2002, 8:30 pm

Re: If you are bored with Deep Networks

November 18th, 2017, 1:26 am

Whoever said that deductive science doesn't involve trial and error, experimentation, guesswork?
You did.
Hmmm... I can't seem to find where I said that, but I can find your call for less trial-and-error. Yet then you add the Halmos quote, which says math has the very property you are trying to avoid.
 
Cuchulainn
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

November 19th, 2017, 11:54 am

Hmmm... I can't seem to find where I said that, but I can find your call for less trial-and-error. Yet then you add the Halmos quote, which says math has the very property you are trying to avoid.
When buying a car, it's nice to compare different models. Ditto for CS models. We can compare and contrast.

“...the source of all great mathematics is the special case, the concrete example. It is frequent in mathematics that every instance of a concept of seemingly great generality is, in essence, the same as a small and concrete special case.”
 -- Paul Halmos

Specific cases are better here too, at least in the short term. It is very difficult to give specific answers to general posts.
 
Cuchulainn
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

November 22nd, 2017, 12:19 pm

For all these apps we'll need beta testers such as Jeremy Clarkson:

https://www.unilad.co.uk/technology/jer ... -mistakes/
 
ISayMoo
Topic Author
Posts: 2332
Joined: September 30th, 2015, 8:30 pm

Re: If you are bored with Deep Networks

November 22nd, 2017, 11:04 pm

The AI said it's sorry, it will try harder next time.
 
Traden4Alpha
Posts: 3300
Joined: September 20th, 2002, 8:30 pm

Re: If you are bored with Deep Networks

December 4th, 2017, 1:28 pm

Maybe we should let the machines figure out all this deep learning stuff: "Google's AI Built Its Own AI That Outperforms Any Made by Humans" and "Using NNs to design NNs".
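For what it's worth, the engine behind such results is neural architecture search. A toy sketch of the idea (mine, not Google's actual system): random search over a tiny space of sklearn MLPs, which is the baseline any learned controller must beat.

```python
import random
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Random search over a tiny architecture space. The NAS idea replaces
# `sample` with a learned policy trained on past (arch, score) pairs.
X, y = make_moons(n_samples=500, noise=0.25, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

SPACE = {"hidden": [(16,), (64,), (16, 16), (64, 64)],
         "activation": ["relu", "tanh"],
         "alpha": [1e-4, 1e-2]}

def sample():
    return {k: random.choice(v) for k, v in SPACE.items()}

best, best_acc = None, -1.0
for _ in range(20):
    arch = sample()
    model = MLPClassifier(hidden_layer_sizes=arch["hidden"],
                          activation=arch["activation"],
                          alpha=arch["alpha"], max_iter=2000,
                          random_state=0).fit(X_tr, y_tr)
    acc = model.score(X_va, y_va)  # validation accuracy is the reward
    if acc > best_acc:
        best, best_acc = arch, acc
print(best, best_acc)
```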
 
Cuchulainn
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

December 24th, 2017, 2:58 pm

Someone wrote this
  1. reinforcement learning will not replace option pricing models. If you say Black-Scholes because you believe that is what traders and quants actually use, then you are an idiot and have not understood how option pricing works since... 1987. machine learning will have its uses but it will not replace the existing structure of how quants work. Quants work in Q-world - risk neutral measure world - where the drift is given to you (risk free rate, collateral rate, whatever, etc) - and you calibrate vol to ensure E[f(S_T)] matches the market. Machine learning is used in P-world - physical measure world - drift and vol are unknown - and you must estimate them using statistical methods. using machine learning methods in Q-world is stupid, it is not needed.
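To make the Q-world half of that concrete: in the Black-Scholes setting, "calibrate vol to ensure E[f(S_T)] matches the market" is just implied-volatility inversion. A minimal sketch (the market quote below is a hypothetical textbook number, not data):

```python
from math import exp, log, sqrt
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call under Q: the drift is the risk-free rate r,
    not an estimated real-world (P) drift."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    """Calibrate sigma so the model reproduces the market quote,
    i.e. so E_Q[e^{-rT} max(S_T - K, 0)] matches the observed price."""
    return brentq(lambda s: bs_call(S, K, T, r, s) - price, 1e-6, 5.0)

# Hypothetical quote: a 1y ATM call on S = 100 trading at 10.45
print(implied_vol(10.45, S=100, K=100, T=1.0, r=0.05))  # ~0.20
```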
 
Traden4Alpha
Posts: 3300
Joined: September 20th, 2002, 8:30 pm

Re: If you are bored with Deep Networks

December 24th, 2017, 3:23 pm

Someone wrote this
  1. reinforcement learning will not replace option pricing models. [...] using machine learning methods in Q-world is stupid, it is not needed.
Interesting!

Quibbles:

1. A Q-world with time-varying drift would seem to become a P-world, no?

2. Can't the inputs and training process for ML be structured to impose or approximate Q-world as a constraint? (One way is sketched after this list.)

3. not(needed) != not(useful). ML may not replace Q-world methods, but it may supplement them.
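On quibble 2, here is one way to read it (a sketch of my own construction, not an established recipe): impose Q at the data level by simulating training paths with the risk-neutral drift r, so any regressor fitted to discounted payoffs inherits the martingale structure by construction rather than estimating it from P-world data.

```python
import numpy as np

rng = np.random.default_rng(42)
S0, r, sigma, T, K = 100.0, 0.05, 0.2, 1.0, 100.0

# Simulate GBM terminal values under Q: the drift is r by construction,
# so the "training set" already satisfies the risk-neutral constraint.
Z = rng.standard_normal(200_000)
ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * Z)

# Discounted payoff expectation under Q = the pricing target
price_mc = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()
print(price_mc)  # ~10.45, the Black-Scholes value

# Martingale sanity check: E_Q[e^{-rT} S_T] should equal S0
print(np.exp(-r * T) * ST.mean(), "vs", S0)
```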
 
outrun
Posts: 4573
Joined: January 1st, 1970, 12:00 am

Re: If you are bored with Deep Networks

December 24th, 2017, 5:03 pm

Reinforcement learning? Was this list1? That's like saying "apples won't replace Black and Scholes".
 
outrun
Posts: 4573
Joined: January 1st, 1970, 12:00 am

Re: If you are bored with Deep Networks

December 24th, 2017, 5:18 pm

The P/Q bit is right IMO.