SERVING THE QUANTITATIVE FINANCE COMMUNITY

 
User avatar
katastrofa
Posts: 7588
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: Hacking ML for fun & profit

March 13th, 2019, 11:26 am

I thought it was because it was not the problem and model motivating the use of a technique (NNs), but the other way around - the technique being accidentally used to solve the problem. (I'm not criticising that approach, if someone has resources and enough domain knowledge not to kill anyone with such blind shots.) I've never understood why NNs are said to imitate a human brain, though - cognitive processes are continuous, AFAIK.
I don't think anyone sane is saying that NNs imitate a human brain. They were inspired by some aspects of how biological brains work. So was reinforcement learning. But that's not "imitation".
The name of the technique suggests that it's supposed to imitate biological neural networks, and I've been under the impression that that's the storyline. Every information-processing procedure was inspired by the brain's cognition, but only NNs make direct references to it.
ISayMoo, data may come in ordinal-indexed sequences, but their (real-valued) timestamp domain is continuous. What am I not getting?
What real-valued timestamps do you have when feeding a sentence into Google Translate? It's just a string of letters.
When processing digital recordings, timestamps are regular. You can discretise them.
The pace at which someone writes, speaks, plays an instrument, ... is not regular or constant. Things happen in real-valued time.
 
User avatar
Cuchulainn
Posts: 58987
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Hacking ML for fun & profit

March 13th, 2019, 12:47 pm

OK, maybe I'm too stupid to get this.
Can't be good at everything.
 
User avatar
ISayMoo
Topic Author
Posts: 1653
Joined: September 30th, 2015, 8:30 pm

Re: Hacking ML for fun & profit

March 13th, 2019, 4:23 pm

I thought it was because it was not the problem and model motivating the use of a technique (NNs), but the other way around - the technique being accidentally used to solve the problem. (I'm not criticising that approach, if someone has resources and enough domain knowledge not to kill anyone with such blind shots.) I've never understood why NNs are said to imitate a human brain, though - cognitive processes are continuous, AFAIK.
I don't think anyone sane is saying that NNs imitate a human brain. They were inspired by some aspects of how biological brains work. So was reinforcement learning. But that's not "imitation".
The name of the technique suggests that it's supposed to imitate biological neural networks, and I've been under the impression that that's the storyline. Every information-processing procedure was inspired by the brain's cognition, but only NNs make direct references to it.
ISayMoo, data may come in ordinal-indexed sequences, but their (real-valued) timestamp domain is continuous. What am I not getting?
What real-valued timestamps do you have when feeding a sentence into Google Translate? It's just a string of letters.
When processing digital recordings, timestamps are regular. You can discretise them.
The pace at which someone writes, speaks, plays an instrument, ... is not regular or constant. Things happen in real-valued time.
1. AI makes direct references to neuroscience because it tries to solve a similar problem: describe (and then build, which is another story) a cognitive system. A biological brain is the only working example of such a system, so it's no wonder people relate to it.

2. Yes, things happen at different speeds, but if you sample frequently enough, you'll capture enough of the detail. And LSTM networks can handle data with different memory lengths.
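To make the "different memory lengths" point concrete, here is a single LSTM cell step written out in plain NumPy (a sketch of the standard textbook equations, with made-up sizes and random weights, not anyone's production model): the forget gate controls how quickly the cell state decays, which is what lets one network mix short and long memories over a discretely sampled sequence.

```python
import numpy as np

def lstm_step(x_t, h, c, W, U, b):
    """One LSTM step: gates computed from input x_t and previous hidden state h."""
    z = W @ x_t + U @ h + b            # stacked pre-activations, shape (4H,)
    H = h.shape[0]
    i = 1 / (1 + np.exp(-z[:H]))       # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))    # forget gate: learnable memory decay
    o = 1 / (1 + np.exp(-z[2*H:3*H]))  # output gate
    g = np.tanh(z[3*H:])               # candidate cell state
    c = f * c + i * g                  # cell state carries long-range memory
    h = o * np.tanh(c)                 # hidden state emitted at this step
    return h, c

rng = np.random.default_rng(0)
H, D, T = 8, 1, 50                     # hidden size, input size, sequence length
W = rng.normal(scale=0.1, size=(4*H, D))
U = rng.normal(scale=0.1, size=(4*H, H))
b = np.zeros(4*H)

h, c = np.zeros(H), np.zeros(H)
for t in range(T):
    x_t = rng.normal(size=D)           # one discretely sampled observation
    h, c = lstm_step(x_t, h, c, W, U, b)
print(h.shape)
```

Because f is learned per coordinate, different components of c can forget at different rates, approximating signals with different characteristic time scales even though the input is a plain discrete sequence.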
 
User avatar
katastrofa
Posts: 7588
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: Hacking ML for fun & profit

March 13th, 2019, 4:56 pm

Reproducing natural cognitive systems - OK, I can see that discrete time suffices (speech recognition works as you wrote), thanks for clarifying it. However, AFAIK, the whole idea of AI is to go beyond that and make such a "brain" teach itself new things through interacting with other "brains" and some environment, with some receptors connected to its central unit or something... Now discrete time doesn't seem to be a natural choice. I'm just wondering (after Cuchulainn) why AI/ML techniques use it if it isn't so complicated to use continuous time, I believe (can't you just integrate a continuous input impulse? it could excite consecutive modes in a vector of neurons, the summed values of which would be inputs to some activation functions - I'm thinking it up as I'm writing :-D I'm sure if it were so simple, someone would have already done it).
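For what it's worth, "integrate a continuous input impulse" is roughly what leaky-integrator neuron models from computational neuroscience do. A toy sketch of one such neuron, with the ODE du/dt = (-u + I(t))/tau discretised by explicit Euler (the time step, tau, and the sine input are arbitrary choices of mine, just to illustrate the idea):

```python
import numpy as np

def leaky_integrator(I, tau=0.1, dt=1e-3):
    """Integrate a continuous input current I(t) into a neuron state u(t):
        du/dt = (-u + I(t)) / tau
    solved with explicit Euler; an activation is applied to the state."""
    t = np.arange(0.0, 1.0, dt)
    u = np.zeros_like(t)
    for k in range(1, len(t)):
        u[k] = u[k-1] + dt * (-u[k-1] + I(t[k-1])) / tau
    return t, np.tanh(u)               # activation of the integrated state

# Excite the neuron with a continuous 1 Hz impulse.
t, y = leaky_integrator(lambda s: np.sin(2*np.pi*s))
print(y.shape)
```

Note that even this "continuous-time" model ends up discretised the moment it hits a numerical solver, which is part of the answer to why ML practice settles for discrete time.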
Last edited by katastrofa on March 13th, 2019, 6:51 pm, edited 1 time in total.
 
User avatar
ISayMoo
Topic Author
Posts: 1653
Joined: September 30th, 2015, 8:30 pm

Re: Hacking ML for fun & profit

March 13th, 2019, 8:36 pm

If you google for "continuous time reinforcement learning", you'll find some papers. The reasons why so much of recent AI/ML work doesn't use continuous time are two-fold:

1. they solve problems which have no temporal dimension, e.g. image classification;
2. they either work on time-discretised data (as I wrote before) or interact with simulated environments (video games, artificial worlds like Mujoco or Unity sims) which live in discrete time. 

E.g. StarCraft looks like a continuous time environment, but I'm pretty sure there's an event loop in the code of the game which discretises the time. Same for old Atari 2600 console games. Board games like Go or Chess work in discrete time as well.
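That event loop is usually a fixed-timestep loop that converts irregular wall-clock frame durations into regular simulation ticks, so the agent only ever sees discrete time. A toy sketch (the 20 ms tick and the frame durations are made-up numbers, not from any particular engine):

```python
DT_MS = 20                       # fixed tick length in milliseconds (50 Hz)

def run(frame_times_ms):
    """frame_times_ms: durations of successive rendering frames, in ms.
    Accumulate wall-clock time and advance the world in fixed DT_MS ticks."""
    accumulator = 0
    ticks = 0
    for frame in frame_times_ms:
        accumulator += frame
        while accumulator >= DT_MS:  # catch up in fixed steps
            ticks += 1               # one discrete world update happens here
            accumulator -= DT_MS
    return ticks

print(run([16, 34, 16, 50]))     # irregular frames, regular ticks
```

However jittery the rendering is, the physics and game logic advance in identical discrete steps, which is exactly the clock an RL agent interacts with.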

I'll add that when I worked in HFT, we wrote our models in a programming language which understood time (real-time algorithms) and handled inputs on the scale of microseconds, but we pretended that time was discrete with a high clock rate. It just makes some things easier to reason about.
 
User avatar
katastrofa
Posts: 7588
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: Hacking ML for fun & profit

March 13th, 2019, 8:37 pm

I must be different then (as a former physicist, I'm more accustomed to continuous-time modelling) :-)

Crazy question: looking at the general structure of NNs, if you discretise the input (of some continuous signal), how do you know/check that the discretisation doesn't introduce artefacts into your "results"?
 
User avatar
ISayMoo
Topic Author
Posts: 1653
Joined: September 30th, 2015, 8:30 pm

Re: Hacking ML for fun & profit

March 13th, 2019, 9:25 pm

It depends. Taking voice recognition as an example, if discretisation allows you to classify the speaker accurately, you're good.

I mean, how do you check that the discretisation of music from analog tape to digital CD doesn't introduce artefacts? By listening to both and comparing.
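The signal-processing version of "listening and comparing" is the Nyquist criterion: sample at more than twice the highest frequency present and the tone survives discretisation; sample below that and it aliases to a spurious lower frequency. A small NumPy sketch (the 5 Hz tone and the two sampling rates are arbitrary illustration values):

```python
import numpy as np

def dominant_freq(signal, fs):
    """Return the dominant frequency (Hz) of a real signal sampled at fs."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1/fs)
    return freqs[np.argmax(spectrum)]

f = 5.0                                       # a 5 Hz tone
def tone(fs, seconds=2.0):
    t = np.arange(0, seconds, 1/fs)
    return np.sin(2*np.pi*f*t)

print(dominant_freq(tone(fs=100), fs=100))    # fs > 2f: peak at the true 5 Hz
print(dominant_freq(tone(fs=8), fs=8))        # fs < 2f: aliased peak at 8 - 5 = 3 Hz
```

So one concrete artefact check is: compare the spectrum of the discretised input against what you know (or measure) about the bandwidth of the underlying continuous signal.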
 
User avatar
katastrofa
Posts: 7588
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: Hacking ML for fun & profit

March 13th, 2019, 9:46 pm

Is that how ML people test their models?
 
User avatar
Cuchulainn
Posts: 58987
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Hacking ML for fun & profit

March 14th, 2019, 12:02 am

Not only are neurons continuous-time, they are also asynchronous. BPN feels like deadwood in comparison.

A random search ==>
Attachments
CNTimeNN.pdf
(6.05 MiB) Downloaded 12 times
 
User avatar
Cuchulainn
Posts: 58987
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Hacking ML for fun & profit

March 14th, 2019, 12:45 pm

Reproducing natural cognitive systems - OK, I can see that discrete time suffices (speech recognition works as you wrote), thanks for clarifying it. However, AFAIK, the whole idea of AI is to go beyond that and make such a "brain" learn new things to itself through interacting with other "brains" and some environment with some receptors connected to its central unit or something... Now the discrete time doesn't seem to be a natural choice. I'm just wondering (after Cuchulainn) why AI/ML techniques use it if it isn't so complicated to use continuous time, I believe (can't you just integrate a continuous input impulse? it could excite consecutive modes in a vector of neurons, the summed value of which would be an input to some activation functions - I'm thinking it up as I'm writing :-D I'm sure if it was so simple, someone would have already done it).
I am not in this space really, but it seems to me that most mainstream AI centres around linear algebra (nothing wrong with LA, BTW, it does have its moments). I think professional neuroscientists use more advanced models, as in the Handbook of Brain Theory and Neural Networks, where ODEs and Control Theory take centre stage. Maybe people are happy with the current methods, and maybe they have (greedily) converged to a local minimum and are stuck in a valley.

It's not necessarily so that ODE solvers are slow. This urban myth is not a reason for not using them. In true engineering spirit, it does no harm to test them against other candidate solutions, like how they test cars on Top Gear.
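In that spirit, a quick Top Gear-style test is easy to run: explicit Euler vs classical RK4 on the simplest leaky dynamics du/dt = -u, where the exact solution e^(-t) is known. A toy benchmark of my own (not a claim about any particular neural model): with the same number of steps, the higher-order solver is enormously more accurate.

```python
import numpy as np

def euler(f, u0, t):
    """First-order explicit Euler on the grid t."""
    u = [u0]
    for k in range(len(t) - 1):
        h = t[k+1] - t[k]
        u.append(u[-1] + h * f(t[k], u[-1]))
    return np.array(u)

def rk4(f, u0, t):
    """Classical fourth-order Runge-Kutta on the grid t."""
    u = [u0]
    for k in range(len(t) - 1):
        h = t[k+1] - t[k]
        tk, uk = t[k], u[-1]
        k1 = f(tk, uk)
        k2 = f(tk + h/2, uk + h*k1/2)
        k3 = f(tk + h/2, uk + h*k2/2)
        k4 = f(tk + h, uk + h*k3)
        u.append(uk + h*(k1 + 2*k2 + 2*k3 + k4)/6)
    return np.array(u)

f = lambda t, u: -u                    # simplest leaky dynamics, exact: e^{-t}
t = np.linspace(0, 5, 51)              # only 50 steps
exact = np.exp(-t)
e_err = np.max(np.abs(euler(f, 1.0, t) - exact))
r_err = np.max(np.abs(rk4(f, 1.0, t) - exact))
print(e_err, r_err)                    # RK4 error is orders of magnitude smaller
```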
 
User avatar
Cuchulainn
Posts: 58987
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Hacking ML for fun & profit

March 14th, 2019, 9:00 pm

OK, maybe 
Can't be good at everything.
Maybe a better example of artificial time is diffusions  for global optimisation, annealing etc. It's only a model.
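A minimal sketch of that: simulated annealing on a 1-D Rastrigin-like function, where the iteration index plays the role of artificial time driving the cooling schedule (the T0/(1+k) schedule, the step size, and the test function are all arbitrary choices of mine, just to show the mechanism):

```python
import math
import random

def anneal(f, x0, steps=20000, T0=1.0, seed=42):
    """Simulated annealing: the step index k is the 'artificial time'
    that drives the cooling schedule T(k) = T0 / (1 + k)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(steps):
        T = T0 / (1 + k)                    # artificial-time cooling
        y = x + rng.gauss(0, 0.5)           # random proposal
        fy = f(y)
        # Always accept downhill moves; accept uphill with Boltzmann probability.
        if fy < fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# A 1-D multimodal test function with global minimum at x = 0.
f = lambda x: x*x + 10*(1 - math.cos(2*math.pi*x))
x, fx = anneal(f, x0=4.3)
print(x, fx)
```

The "time" here has nothing to do with any physical clock; it's purely a bookkeeping device for how greedy the search is allowed to become.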
 
User avatar
ISayMoo
Topic Author
Posts: 1653
Joined: September 30th, 2015, 8:30 pm

Re: Hacking ML for fun & profit

March 16th, 2019, 11:59 am

Learning rate schedule is a kind of "artificial time", too.
 
User avatar
Cuchulainn
Posts: 58987
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Hacking ML for fun & profit

March 16th, 2019, 5:47 pm

Learning rate schedule is a kind of "artificial time", too.
Indeed, just realised that. And there are several schedules.

Follow-on 
1. are learning-rate heuristics just simpler versions of Wolfe-style line searches?
2. are they essentially redundant?
3. do we need momentum when (assuming) Wolfe does the job?

Simulated annealing also uses artificial time.
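For reference, two common schedules written explicitly as functions of the iteration index, i.e. the "artificial time" (the constants here are arbitrary illustration values; whether such schedules are redundant given a Wolfe line search is exactly the open question above):

```python
import math

def step_decay(k, lr0=0.1, drop=0.5, every=30):
    """Piecewise-constant schedule: multiply the rate by `drop` every `every` epochs."""
    return lr0 * drop ** (k // every)

def cosine(k, lr0=0.1, total=100):
    """Cosine annealing from lr0 down to 0 over `total` epochs."""
    return 0.5 * lr0 * (1 + math.cos(math.pi * min(k, total) / total))

for k in (0, 30, 60, 99):
    print(k, step_decay(k), cosine(k))
```

Unlike a line search, which measures the objective along the step direction at every iteration, these prescribe the step size in advance as a pure function of k; that is what makes them "artificial time" rather than adaptive control.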
 
User avatar
katastrofa
Posts: 7588
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: Hacking ML for fun & profit

March 17th, 2019, 1:21 am

Stupid question: what numerical importance-sampling methods don't use artificial time (or "artificial time")?
 
User avatar
Cuchulainn
Posts: 58987
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Re: Hacking ML for fun & profit

March 17th, 2019, 11:32 am

Stupid question: what numerical importance-sampling methods don't use artificial time (or "artificial time")?
Western civilisation and science seem to be founded on forward linear time (except the binomial method, which goes backwards). I suppose even iterative methods can be seen as discrete time. Even physicists know what t = 0 is. Time is a fabrication of the human brain.
One of the first things you learn is 1, 2, 3, 4, ... and mathematical induction.

https://en.wikipedia.org/wiki/Peano_axioms

Giambattista Vico and James Joyce claimed that time is circular.

// All this probabilistic numerical analysis takes some getting used to.