 
User avatar
berndL
Posts: 164
Joined: August 22nd, 2007, 3:46 pm

Re: If you are bored with Deep Networks

December 24th, 2017, 5:29 pm

Cuchulainn wrote:
Someone wrote this

  1. Reinforcement learning will not replace option pricing models. If you say Black-Scholes because you believe that is what traders and quants actually use, then you are an idiot and have not understood how option pricing works since... 1987. Machine learning will have its uses, but it will not replace the existing structure of how quants work. Quants work in Q-world - the risk-neutral measure world - where the drift is given to you (risk-free rate, collateral rate, whatever, etc.) and you calibrate vol to ensure E[f(S_T)] matches the market. Machine learning is used in P-world - the physical measure world - where drift and vol are unknown and you must estimate them using statistical methods. Using machine learning methods in Q-world is stupid; it is not needed.


ML appears to me more like simulating intelligence.
Once the network can beat some humans at some task (for example playing Go or chess), it is considered to have achieved its goal. But does this mean it is performing well? It appears intelligent because in a very narrow domain it can outperform humans. But maybe humans do not play chess and Go very well.
And in these domains (Go and chess) there is no Q measure. The best move is unknown in general. Google (DeepMind) has now published Go and chess programs based on self-learning with no prior human knowledge. So what does this NN calibrate to (or train on)? Where is the Q-measure?

By the way, you can find an abstract of the paper on learning from scratch here:
Learning from scratch
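
A minimal sketch (standard textbook material, my own illustration) of the "calibrate vol so that E[f(S_T)] matches the market" step described in the quote above: back out the Black-Scholes implied vol from a quoted call price by root-finding. The quote and parameters below are made up, and scipy is assumed to be available.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    # Black-Scholes call price, i.e. the discounted Q-measure expectation of max(S_T - K, 0)
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(quote, S, K, T, r):
    # find the sigma for which the model price reproduces the market quote
    return brentq(lambda s: bs_call(S, K, T, r, s) - quote, 1e-6, 5.0)

market_quote = 10.45   # made-up market price of a 1y at-the-money call
print(implied_vol(market_quote, S=100.0, K=100.0, T=1.0, r=0.01))
```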
 
User avatar
Paul
Posts: 8589
Joined: July 20th, 2001, 3:28 pm

Re: If you are bored with Deep Networks

December 24th, 2017, 5:42 pm

If you believe in complete markets, that implied vol is a perfect predictor of the future, and that you know the correct model, then you need no stats or forecasting. Sorry to be the Bad Magus at Xmas, bearing bad news: complete markets don't exist; implied vol contains some info, mostly about supply and demand; no one knows the perfect model.

But you have many options for using forecast drift and vol:

a) Find drift and vol in P world. Then throw drift away and just use vol for Q world. Vol arb, hedging with forecast vol.
b) Find drift and vol in P world. Vol arb, hedging with implied vol.
c) Find drift and vol in P world and use one of the many incomplete market models that need both.
d) Speculate to accumulate!
e) ...

And so on.

I always feel sorry for people who refer to P and Q worlds as if that's all there is! It shows a great lack of both knowledge and imagination. (Einstein would be very disappointed.)
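
A rough Monte Carlo sketch of the idea behind options (a) and (b): buy a call quoted at an implied vol below your forecast ("actual") vol and delta-hedge it, once using the forecast vol and once using the implied vol. All numbers are made up and this is only an illustration of the effect (hedging with the forecast vol gives a P&L that is close to deterministic up to discrete-hedging noise, hedging with the implied vol a positive but path-dependent one), not anyone's production model. Assumes numpy and scipy.

```python
import numpy as np
from scipy.stats import norm

S0, K, T, r = 100.0, 100.0, 1.0, 0.0
sigma_actual, sigma_implied = 0.30, 0.20      # forecast vol vs quoted implied vol
steps, paths = 250, 2000
dt = T / steps
rng = np.random.default_rng(1)

def bs_call(S, K, tau, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d1 - sigma * np.sqrt(tau))

def bs_delta(S, K, tau, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    return norm.cdf(d1)

def hedged_pnl(hedge_vol):
    S = np.full(paths, S0)
    cash = -bs_call(S0, K, T, r, sigma_implied)            # pay the implied-vol price
    delta = bs_delta(S, K, T, r, hedge_vol)
    cash += delta * S                                       # short delta shares against the option
    for i in range(1, steps + 1):
        # the real world evolves with the forecast (actual) vol
        S = S * np.exp((r - 0.5 * sigma_actual**2) * dt
                       + sigma_actual * np.sqrt(dt) * rng.standard_normal(paths))
        cash *= np.exp(r * dt)
        if i < steps:                                       # rebalance the hedge before expiry only
            new_delta = bs_delta(S, K, T - i * dt, r, hedge_vol)
            cash += (new_delta - delta) * S
            delta = new_delta
    return np.maximum(S - K, 0.0) - delta * S + cash        # option payoff plus unwound hedge

for vol, label in [(sigma_actual, "hedging with forecast vol"),
                   (sigma_implied, "hedging with implied vol")]:
    pnl = hedged_pnl(vol)
    print(f"{label}: mean P&L {pnl.mean():.2f}, std dev {pnl.std():.2f}")
```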
 
User avatar
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: If you are bored with Deep Networks

December 24th, 2017, 5:46 pm

It learns from self-play. Playing against a younger self allows it to learn the values of moves and positions. That in turn is used to evaluate the positions it examines with Monte Carlo exploration.

The first version relied on a database of human gameplay; AlphaGo Zero doesn't rely on that anymore, it doesn't need to be taught by human teachers.

Reinforcement learning is very useful when you need to optimize something where the value of your action is not directly visible (the moves you make in a game of chess, the orders a trading bot places, the steering of a self-driving car).

The aim of Google's DeepMind is to come up with generic algorithms and techniques to solve these types of problems. Go and chess are just (very impressive) demos of their algorithms. They now also use it, for example, to speed up DNA sequencing.
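
A minimal toy (my own, nothing to do with DeepMind's actual code) of the point that reinforcement learning handles actions whose value is not directly visible: tabular Q-learning on a 1-D chain where the only reward arrives at the final state, so the value of early moves has to be learned by bootstrapping.

```python
import random

N = 10                 # states 0..N-1; reaching state N-1 pays 1, every other step pays 0
ACTIONS = (-1, +1)     # step left or step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(5000):
    s = 0
    while s != N - 1:
        # epsilon-greedy: mostly exploit the current value estimates, sometimes explore
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: Q[(s, b)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0
        # bootstrap: the value of (s, a) is the reward plus the best value of the next state
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# the learned policy walks right even though every intermediate reward was zero
print([max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N - 1)])
```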
 
User avatar
Traden4Alpha
Posts: 23951
Joined: September 20th, 2002, 8:30 pm

Re: If you are bored with Deep Networks

December 24th, 2017, 5:51 pm

Perhaps everyone has been brainwashed into only minding their P's and Q's?

What fascinates me is the potential for a widely used model of the market to become self-reinforcing and quasi-stable until some serious imbalance knocks the system out of its self-created equilibrium. It's a mathematical variant of Keynes's beauty contest, in which everyone must use the same wrong model because everyone is using the same wrong model of contestant rating.
 
User avatar
Paul
Posts: 8589
Joined: July 20th, 2001, 3:28 pm

Re: If you are bored with Deep Networks

December 24th, 2017, 6:42 pm

Traden4Alpha wrote:
What fascinates me is the potential for a widely used model of the market to become self-reinforcing and quasi-stable until some serious imbalance knocks the system out of its self-created equilibrium. It's a mathematical variant of Keynes's beauty contest, in which everyone must use the same wrong model because everyone is using the same wrong model of contestant rating.

It depends on whether the feedback from using the model is positive or negative. I would say that it is negative. At a guess, most things in life involve negative (or neutral/zero) feedback. 
If you change a model then the downside is getting fired. 
Buy your wife chocolates on a non-anniversary date...you must be having an affair.
If there were positive feedback you'd expect something like a one-factor model being followed by two, three, four, ..., twenty-seven, ... That doesn't (quite) happen.
Of course, this has nothing to do with whether a model is right or successful.
 
User avatar
tagoma
Posts: 18233
Joined: February 21st, 2010, 12:58 pm

Re: If you are bored with Deep Networks

December 24th, 2017, 9:09 pm

It depends on whether the feedback from using the model is positive or negative


And that is where (anti)fragility steps in.
 
User avatar
Cuchulainn
Posts: 56926
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: If you are bored with Deep Networks

December 24th, 2017, 10:03 pm

Any white papers on all this interesting stuff? 
 
User avatar
Cuchulainn
Posts: 56926
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: If you are bored with Deep Networks

January 9th, 2018, 4:51 pm

What is the relationship between the Universal Approximation Theorem (Cybenko, Hornik) and SGD, or more generally NN training?

In particular, what assumptions (implicit or explicit) are being used?

Or is that the wrong question?
 
User avatar
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: If you are bored with Deep Networks

January 9th, 2018, 7:02 pm

SGD is a method for searching for minima; there is nothing specific to NNs about it, and you can also use SGD outside NNs.

NN parameters can be trained/configured with SGD to minimize a loss function; there are also other methods to train a NN.

Minimizing a loss function is somewhat related to approximation: e.g. if you minimize the difference between the NN output and some target output, then you can say that the NN will approximate the input -> target-output function at the sampled input points.

The Universal Approximation Theorem says something about the function space of a NN, the types of functions it can express as its size goes to infinity. It doesn't say how accurate a finite NN will be, nor does it dictate that you should use SGD to find that approximation.

In practice NNs are finite. Datasets (function sample points) are finite too.

The functions are often very complicated, nothing like y=sin(x), but more like "x = a 256x256 8bit RGB image, y = boolean, is there a cat in this picture?". You won't be able to find approximations to this cat image function in Abramowitz & Stegun (but I'm sure for sin(x) you will).
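
To make the separation concrete, a minimal sketch (my own illustration, plain numpy, nothing else assumed): a finite one-hidden-layer NN fitted by SGD to a finite sample of a known function. The Universal Approximation Theorem only says such a network can represent the target arbitrarily well as its width grows; SGD is merely one way of searching for good parameters of this particular finite one.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=(256, 1))    # finite dataset of sample points
y = np.sin(x)                                     # target function (a known one here)

H = 32                                            # hidden width: finite, not "size -> infinity"
W1, b1 = rng.normal(0, 1, (1, H)), np.zeros(H)
W2, b2 = rng.normal(0, 1, (H, 1)), np.zeros(1)

lr, batch = 0.05, 32
for step in range(20000):
    idx = rng.integers(0, len(x), batch)          # "stochastic": a random mini-batch
    xb, yb = x[idx], y[idx]

    h = np.tanh(xb @ W1 + b1)                     # forward pass: tanh hidden layer, linear output
    err = h @ W2 + b2 - yb                        # residual of the mean-squared-error loss

    dW2 = h.T @ err / batch                       # backward pass: gradients w.r.t. the parameters
    db2 = err.mean(axis=0)
    dh = err @ W2.T * (1 - h**2)
    dW1 = xb.T @ dh / batch
    db1 = dh.mean(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2                # SGD update: step against the gradient
    W1 -= lr * dW1; b1 -= lr * db1

print("final MSE:", float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2)))
```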
 
User avatar
Cuchulainn
Posts: 56926
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: If you are bored with Deep Networks

January 9th, 2018, 8:00 pm

outrun wrote:
The functions are often very complicated, nothing like y=sin(x), but more like "x = a 256x256 8bit RGB image, y = boolean, is there a cat in this picture?". 

That's the part I need to delve into, to see how the parts fit together.
 
User avatar
outrun
Posts: 4573
Joined: April 29th, 2016, 1:40 pm

Re: If you are bored with Deep Networks

January 9th, 2018, 8:36 pm

Cuchulainn wrote:
The functions are often very complicated, nothing like y=sin(x), but more like "x = a 256x256 8bit RGB image, y = boolean, is there a cat in this picture?". 

That's the part I need to delve into, to see how the parts fit together.

That's why it's called machine *learning*: algorithms and data structures that learn functions (too complex to code explicitly) by processing and modelling example data from that function.
 
User avatar
Cuchulainn
Posts: 56926
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: If you are bored with Deep Networks

January 10th, 2018, 12:32 pm

That's clear. But the role of the Universal Approximator of Antioch has not been addressed.

This approximates Borel measurable functions, which are 'nearly continuous' but not quite. Or does it really matter in practice?
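
As a small, self-contained probe of that question (my own toy, not from Cybenko or Hornik): fit a one-hidden-layer tanh net, by plain gradient descent, to a discontinuous Borel measurable target, sign(x). Typically the fit is fine away from the jump, and the error concentrates next to the discontinuity, which any continuous approximator has to smooth over.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, (512, 1))
y = np.sign(x)                                   # Borel measurable, discontinuous at 0

H, lr = 16, 0.1
W1, b1 = rng.normal(0, 1, (1, H)), np.zeros(H)
W2, b2 = rng.normal(0, 1, (H, 1)), np.zeros(1)

for step in range(20000):
    h = np.tanh(x @ W1 + b1)                     # forward pass
    err = h @ W2 + b2 - y                        # full-batch gradient descent on the MSE loss
    dh = err @ W2.T * (1 - h**2)
    W2 -= lr * h.T @ err / len(x); b2 -= lr * err.mean(axis=0)
    W1 -= lr * x.T @ dh / len(x); b1 -= lr * dh.mean(axis=0)

# mean absolute error near the jump vs away from it
for lo_, hi_ in [(-1.0, -0.1), (-0.1, 0.1), (0.1, 1.0)]:
    m = (x[:, 0] > lo_) & (x[:, 0] < hi_)
    pred = np.tanh(x[m] @ W1 + b1) @ W2 + b2
    print(f"x in ({lo_:+.1f}, {hi_:+.1f}): mean |error| = {np.abs(pred - y[m]).mean():.3f}")
```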
 
User avatar
ISayMoo
Topic Author
Posts: 1050
Joined: September 30th, 2015, 8:30 pm

Re: If you are bored with Deep Networks

January 10th, 2018, 1:10 pm

outrun wrote:
Cuchulainn wrote:
The functions are often very complicated, nothing like y=sin(x), but more like "x = a 256x256 8bit RGB image, y = boolean, is there a cat in this picture?". 

That's the part I need to delve into, to see how the parts fit together.

That's why it's called machine *learning*: algorithms and data structures that learn functions (too complex to code explicitly) by processing and modelling example data from that function.

Or just memorising the inputs.
 
User avatar
Cuchulainn
Posts: 56926
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: If you are bored with Deep Networks

January 12th, 2018, 7:41 pm

I received a bunch of NN books from the publisher a while back. This one looks like a good cookbook, so one could write one's own '101' NN to demystify the data flow and terminology.

https://www.amazon.com/Neural-Networks- ... 0201513765

Do you need a graph or would a simpler container suffice?

BTW, according to the authors, SD == GD.

Goodfellow et al. is a good overview; it has the ingredients but it doesn't tell you how to cook.
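
On the graph-vs-container question, a minimal sketch (my own, not from the book linked above): for a plain feed-forward '101' NN, a Python list of (weight, bias) pairs is a sufficient container; an explicit computation graph only starts to pay off for more general architectures or automatic differentiation.

```python
import numpy as np

def init_mlp(sizes, rng):
    """Plain Python list of (weight matrix, bias vector) pairs -- no graph object needed."""
    return [(rng.normal(0, 0.5, (m, n)), np.zeros(n)) for m, n in zip(sizes, sizes[1:])]

def forward(layers, x):
    """Forward pass: tanh on the hidden layers, linear output on the last layer."""
    *hidden, last = layers
    for W, b in hidden:
        x = np.tanh(x @ W + b)
    W, b = last
    return x @ W + b

net = init_mlp([2, 8, 8, 1], np.random.default_rng(0))   # 2 inputs -> 8 -> 8 -> 1 output
print(forward(net, np.array([[0.5, -1.0]])))
```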
 
User avatar
Cuchulainn
Posts: 56926
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: If you are bored with Deep Networks

January 28th, 2018, 12:02 pm

The fundamental working hypothesis of AI is that intelligent behavior can be precisely described as symbol manipulation and can be modeled with the symbol processing capabilities of the computer.

??