SERVING THE QUANTITATIVE FINANCE COMMUNITY

 
User avatar
Cuchulainn
Posts: 63371
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: If you are bored with Deep Networks

August 31st, 2019, 9:29 am

Understanding ML requires a solid knowledge of mathematics. The theory behind the RBF kernels, for example, involves functional analysis and infinite-dimensional Hilbert spaces.
I loved functional analysis and its applications to probability and finite elements (à la the French style; Farid and Jean-Marc know) at university, especially Banach and Sobolev spaces.
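To make the RBF remark concrete, here is a minimal sketch (a toy example, plain NumPy, not taken from any reference in this thread) of the Gaussian RBF kernel; its reproducing kernel Hilbert space is infinite-dimensional, which is where the functional analysis enters:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

# The kernel matrix of any finite point set is symmetric positive
# semi-definite (Mercer's condition) -- the doorway to the RKHS theory.
X = np.array([[0.0], [1.0], [2.0]])
K = np.array([[rbf_kernel(a, b) for b in X] for a in X])
```

Positive semi-definiteness of every such kernel matrix is exactly what guarantees the (infinite-dimensional) RKHS exists.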
 
User avatar
Cuchulainn
Posts: 63371
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: If you are bored with Deep Networks

August 31st, 2019, 9:32 am

Understanding ML requires a solid knowledge of mathematics. The theory behind the RBF kernels, for example, involves functional analysis and infinite-dimensional Hilbert spaces.
And that's just for Deep Learning. Can you tell how many concepts from different disciplines one must know to understand Reinforcement Learning?
A lot, I would say.

1. What % of the population knows RL?
2. How far can you get in ML with a vanilla CS degree?
 
User avatar
katastrofa
Posts: 9702
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: If you are bored with Deep Networks

August 31st, 2019, 12:25 pm

In RL, not far, imho. A Frenchie CS guy came to our conference this year (the regular participants are statisticians, government advisers, business people of all breeds). He proposed a new revolutionary framework for modelling socio-ecological systems based on RL. After the presentation the only comment was: dude, that's called Schelling diagrams; it's been known and widely applied since the 1960s, and Schelling got the Nobel Prize in economics for it. He didn't know what that meant, but the virtue of being a French male CS scientist is that nothing can shake your self-esteem.
And that's more or less my reflection on almost every paper in RL I have read so far (being not half as enlightened as my colleagues). I can forgive the CS people in ML for reinventing the wheel, though. The bigger problem is that they don't see that the realistic roads go through steep mountains and seas; that calls for specialist solutions.
Sorry for the rant :-D
 
User avatar
ISayMoo
Topic Author
Posts: 2368
Joined: September 30th, 2015, 8:30 pm

Re: If you are bored with Deep Networks

August 31st, 2019, 3:37 pm

Understanding ML requires a solid knowledge of mathematics. The theory behind the RBF kernels, for example, involves functional analysis and infinite-dimensional Hilbert spaces.
And that's just for Deep Learning. Can you tell how many concepts from different disciplines one must know to understand Reinforcement Learning?
Add dynamic programming to the list.
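As a sketch of what "dynamic programming under the hood" means in RL, here is toy value iteration on a hypothetical two-state, two-action MDP (the transition probabilities and rewards are invented purely for illustration):

```python
import numpy as np

# Hypothetical MDP: P[s, a, s'] transition probabilities, R[s, a] rewards.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor

# Value iteration: repeatedly apply the Bellman optimality backup
#   V(s) <- max_a [ R(s, a) + gamma * sum_s' P(s'|s, a) V(s') ]
V = np.zeros(2)
for _ in range(500):
    Q = R + gamma * P @ V          # shape: (states, actions)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new
```

The backup is a gamma-contraction, so the iteration converges to the unique optimal value function; Q-learning and friends are stochastic, sampled versions of this same operator.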
 
User avatar
katastrofa
Posts: 9702
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: If you are bored with Deep Networks

August 31st, 2019, 5:19 pm

For millions of interacting agents? Where can I read up on it, if it really is possible? My knowledge of this field is obviously minuscule compared to yours, but something tells me you'll soon be doing what I currently do: Bayesian optimisation :-)
 
User avatar
ISayMoo
Topic Author
Posts: 2368
Joined: September 30th, 2015, 8:30 pm

Re: If you are bored with Deep Networks

August 31st, 2019, 8:13 pm

 
User avatar
katastrofa
Posts: 9702
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: If you are bored with Deep Networks

August 31st, 2019, 8:51 pm

Thanks. I can't see from the abstract what it has to do with dynamic programming, but I'll try to read the whole text.
 
User avatar
ISayMoo
Topic Author
Posts: 2368
Joined: September 30th, 2015, 8:30 pm

Re: If you are bored with Deep Networks

August 31st, 2019, 10:53 pm

Sorry, I meant that this is something one can use when training a large number of agents who compete with each other. That's what DeepMind used for training StarCraft II agents. And since we're talking about games, game theory is also important: https://arxiv.org/abs/1802.05642 and https://arxiv.org/abs/1901.08106.

Dynamic programming is still there under the hood, as a theoretical underpinning of the RL algorithms. But it turns out that importance sampling is also important to allow off-policy learning and increase data efficiency.

RL algorithm development oscillates between improving theory and adapting the algorithms to scale better on more powerful hardware.
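A minimal sketch of the importance-sampling point, on a hypothetical one-step, two-action bandit (all numbers invented for illustration): data collected under one policy can be re-weighted to evaluate a different policy, which is the basic mechanism behind off-policy learning:

```python
import numpy as np

rng = np.random.default_rng(0)

true_means = np.array([1.0, 3.0])   # true mean reward of each action
behaviour = np.array([0.8, 0.2])    # policy that generated the data
target = np.array([0.1, 0.9])       # policy we want to evaluate

# Simulate a log of experience collected under the behaviour policy.
actions = rng.choice(2, size=100_000, p=behaviour)
rewards = true_means[actions] + rng.normal(0.0, 0.1, size=actions.size)

# Importance weights rho = pi_target(a) / pi_behaviour(a) correct for the
# mismatch between the data-collecting and the evaluated policy.
rho = target[actions] / behaviour[actions]
v_hat = np.mean(rho * rewards)

# True value under the target policy: 0.1 * 1.0 + 0.9 * 3.0 = 2.8
```

Without the weights, the naive average estimates the behaviour policy's value (1.4 here) instead; the price of the correction is the higher variance of the weighted estimator, which is why data efficiency and variance reduction go together in off-policy RL.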
 
User avatar
Cuchulainn
Posts: 63371
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: If you are bored with Deep Networks

September 1st, 2019, 7:53 am

For millions of interacting agents? Where can I read up on it, if it really is possible? My knowledge of this field is obviously minuscule compared to yours, but something tells me you'll soon be doing what I currently do: Bayesian optimisation :-)
Can you put a BN in an SGD?
 
User avatar
ponpoko
Posts: 23
Joined: May 20th, 2008, 1:07 am
Contact:

Re: If you are bored with Deep Networks

October 14th, 2019, 4:02 am

Completely disagree with it.
katastrofa:
Quantum annealers (e.g. D-Wave) find global minima. What deep networks may do, unlike classical algorithms, is find correlations between the positions of the minima (more complex than e.g. momentum methods) - that's a quantum algorithm's capability. Introducing memory might possibly help too...
This topic looks like it's going on and on. I believe that, from a Wilmott perspective, an NN can be used as a super-quick price-reference engine by training it on pre-calculated points. Besides, it can be used for prop trading via reinforcement learning. And more importantly, if you get bored, let's play with it sometime. I made some web apps everyone can enjoy just by opening the pages below.

https://randomwalkjapan.blogspot.com/2019/10/mobilenetusing-mobilenet-inside-blogspot.html

https://randomwalkjapan.blogspot.com/2019/10/yoloai-lets-play-on-yoloai-scene.html
 
User avatar
ponpoko
Posts: 23
Joined: May 20th, 2008, 1:07 am
Contact:

Re: If you are bored with Deep Networks

October 14th, 2019, 4:05 am

I forgot to add my views on NNs, below. Please take a look. The 2nd one should be a great hint for everyone, I bet!

https://randomwalkjapan.blogspot.com/2019/05/ai-reality-of-booming-ai.html

https://randomwalkjapan.blogspot.com/2019/06/play-on-neural-net-playground.html
 
User avatar
Cuchulainn
Posts: 63371
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: If you are bored with Deep Networks

October 14th, 2019, 1:44 pm

Bots ..
 
User avatar
Cuchulainn
Posts: 63371
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: If you are bored with Deep Networks

November 27th, 2019, 3:15 pm

Speaking of boredom, I don't get bored so often. However, having studied discrete maths optimisation methods (GD and its many variants and use cases), it seems that they all break down unless you start tweaking. It feels like premature optimisation, like buying the windows and doors first when building a house.
And as I have said a few times: inside GD is hiding a (nasty) Euler scheme for an otherwise unspecified ODE. Why? Maybe because of ML's roots in linear algebra. Maybe it's better to start with the ODE leading to GD (the mathematical approach) rather than jumping head-first into GD.

Standing on the yuge shoulders of Lyapunov and Poincaré leads to Gradient Flows
https://icml.cc/media/Slides/icml/2019/hallb(12-16-00)-12-17-05-5119-deep_generative.pdf

Looks promising but I bow to the ML experts here.
I have done several small experiments (POCs) based on the ODE approximation (SGD as an ODE, the optimisation ODE, noisy data). It is much easier and more elegant (for me), A to Z, than the many algos in Nocedal and Wright. At the least, it keeps the grey cells ticking, like in ye olde days here.
https://forum.wilmott.com/viewtopic.php?f=34&t=101662
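The "GD is a hidden Euler scheme" claim can be checked in a few lines: on a quadratic, plain gradient descent with learning rate h reproduces, step for step, the explicit Euler discretisation of the gradient flow x'(t) = -grad f(x(t)) with dt = h (a toy sketch, with a made-up SPD matrix A):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [0.0, 1.0]])  # SPD, so f(x) = 0.5 * x.T A x is convex

def grad_f(x):
    # Gradient of f(x) = 0.5 * x^T A x.
    return A @ x

x0 = np.array([1.0, 1.0])
h = 0.1  # learning rate == Euler time step

# Plain gradient descent ...
x_gd = x0.copy()
for _ in range(100):
    x_gd = x_gd - h * grad_f(x_gd)

# ... is exactly explicit Euler applied to the gradient flow x' = -grad f(x).
x_euler = x0.copy()
for _ in range(100):
    x_euler = x_euler + h * (-grad_f(x_euler))
```

Seen this way, the learning rate is a time step, and the usual restriction on h is just the explicit Euler stability condition for the stiffest eigenvalue of A; implicit or higher-order schemes for the same flow are then natural alternatives to tweak-heavy GD variants.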
 
User avatar
Cuchulainn
Posts: 63371
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: If you are bored with Deep Networks

December 5th, 2019, 1:43 pm

Education is elitism?

I will use this catchphrase the next time you complain about lack of rigor in deep learning. It's elitism!

Let's try to act as grown-ups here and address the underlying mathematical issues.
The thesis is that dynamical systems are more robust (or are at least worth investigating) than the somewhat flaky, ad-hoc gradient descent method and its myriad variants.
 
User avatar
katastrofa
Posts: 9702
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: If you are bored with Deep Networks

December 6th, 2019, 4:19 pm

In what sense is GD more ad hoc than, say, Newton?
What do you mean by "dynamical systems"? (Sorry, I've worked in too many fields to be clear about it.) And the next question would probably be about the justification of your thesis...
Last edited by katastrofa on January 3rd, 2020, 10:20 pm, edited 1 time in total.