Cuchulainn
Posts: 59014
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Re: Machine Learning and the physical sciences

May 23rd, 2019, 3:52 pm

maestro,
I took your advice and found this

http://www.physics.ox.ac.uk/phystat05/p ... stat05.pdf

Initial results are promising ... 10 hours for 10,000 samples.

Do you have an opinion yourself? Enlighten us. What's new, apart from the cute name?

I bet you won't give a technical answer.
 
katastrofa
Posts: 7653
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: Machine Learning and the physical sciences

May 23rd, 2019, 4:43 pm

Small dogs bark the loudest. No wonder everyone ignores you. Why don't you say something righteous and hopeful for a change?
Old crazy dogs try to bite the air. With no teeth.
 
katastrofa
Posts: 7653
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: Machine Learning and the physical sciences

May 23rd, 2019, 7:09 pm

Cuchulainn wrote:
maestro,
I took your advice and found this

http://www.physics.ox.ac.uk/phystat05/p ... stat05.pdf

Initial results are promising ... 10 hours for 10,000 samples.

Do you have an opinion yourself? Enlighten us. What's new, apart from the cute name?

I bet you won't give a technical answer.
Since you both seem to know virtually nothing about this stuff:
In Bayesian inference you can estimate not only the parameters P of your hypothesised model M, but also perform model comparison (selection). The full Bayes' rule is

[$]p(P | data, M) = p(data | P, M) * p(P | M) / p(data | M)[$]

where [$]p(data | M)[$] is called the "model evidence". You solve the inverse problem

[$]p(M_i | data) = p(data | M_i) / \sum_j p(data | M_j)[$]

(taking equal prior probabilities for the models) to find the best model, namely the one with the highest [$]p(M_i | data)[$], which translates to the highest evidence given the data.
Bayesian *neural* networks implement the above procedure in addition to the standard parameter fitting.
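
To make the selection step concrete, here's a minimal sketch in Python. The log-evidence numbers are made up; in practice they'd come from nested sampling, a Laplace approximation, or similar:

import numpy as np

# Hypothetical log-evidences log p(data | M_j) for three candidate models.
log_evidence = np.array([-1042.3, -1038.7, -1051.9])

# Posterior model probabilities under equal model priors:
#   p(M_i | data) = p(data | M_i) / sum_j p(data | M_j).
# Work in log space (log-sum-exp) to avoid underflow.
posterior = np.exp(log_evidence - np.logaddexp.reduce(log_evidence))

print(posterior)           # highest posterior wins
print(posterior.argmax())  # here: the model with index 1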

xx
 
Cuchulainn
Posts: 59014
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Re: Machine Learning and the physical sciences

May 24th, 2019, 1:55 pm

You seem to be using the usual Bayes' theorem for learning. I recall

[$]p(P | data, M) = p(data | P, M) * p(P|M) / p(data | M)[$]

Bhat and Prosper have a slightly different rule (see their equation) wrt prior

[$]p(P | data, M) = p(data | P, M) * p(P) / p(data | M)[$]

Does this trick warrant coining yet another name? It feels like NN++-style overloading.
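
If I recall correctly, the denominator is the same marginal likelihood in both versions, obtained by integrating the parameters out against the prior:

[$]p(data | M) = \int p(data | P, M) * p(P | M) \, dP[$]

so when the prior does not depend on the model, [$]p(P | M) = p(P)[$] and the two rules coincide.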


katastrofa wrote:
Bayesian *neural* networks implement the above procedure in addition to the standard parameter fitting.
Who uses this and where? I would be interested in an explanation in addition to this general description. 

Very few articles describe the 'how to' step-by-step process/algorithm from input to output.
 
katastrofa
Posts: 7653
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: Machine Learning and the physical sciences

May 25th, 2019, 4:03 pm

If the prior is the same in all models, then you can use the formula from those guys' work, obvsly.
I can't answer the question about the ML nomenclature. It amazes me too.

The AI craze seems to use BNNs mainly to calculate the uncertainty of the estimated weights. I use them for model selection/testing strategies/etc., as stated above. I've used ML methods twice in the last two years. I'm not sure whether what I'm doing isn't called ML now, though.
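
For the weight-uncertainty part, a minimal sketch in Python (my own toy setup, nothing from the literature), using the one case where the weight posterior is exact, Bayesian linear regression; a BNN approximates the same object for a nonlinear model:

import numpy as np

# Toy data: y = 0.5 - 2x + Gaussian noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=50)
Phi = np.column_stack([np.ones_like(x), x])   # design matrix [1, x]
y = Phi @ np.array([0.5, -2.0]) + 0.1 * rng.standard_normal(50)

alpha, beta = 1.0, 100.0   # prior precision on weights, noise precision (1/0.1^2)

# Conjugate Gaussian posterior over the weights:
S_N = np.linalg.inv(alpha * np.eye(2) + beta * Phi.T @ Phi)  # covariance
m_N = beta * S_N @ Phi.T @ y                                 # mean

print("posterior weight means:   ", m_N)
print("posterior weight std devs:", np.sqrt(np.diag(S_N)))

The std devs are exactly the "uncertainty of the estimated weights"; variational BNNs (Bayes by Backprop and friends) fit a Gaussian of this kind to every weight of the network.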

I'll name the longest maturing cheese in my fridge in your honour - Sniffy Irish...