Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Machine Learning: Frequently Asked Questions

October 8th, 2020, 11:18 am

And a folder for DKE "deep kernel engineering". Only a matter of time.
 
JohnLeM
Posts: 379
Joined: September 16th, 2008, 7:15 pm

Re: Machine Learning: Frequently Asked Questions

October 8th, 2020, 12:14 pm

And a folder for DKE "deep kernel engineering". Only a matter of time.
I thought about it and created one. But in the end this "deep kernel engineering" folder was moved and merged into the "kernel engineering" folder :) The deep neural networks told me that they feel very comfortable inside!
 
katastrofa
Posts: 7440
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: Machine Learning: Frequently Asked Questions

October 8th, 2020, 2:56 pm

I can see some superficial formal analogies between SVMs and NNs, if I forget about all the conditions the SVM's kernels need to meet to produce trustworthy results (see the Mercer conditions).
SVMs are based on the trick of changing the metric of the data space in such a way that datapoints which aren't linearly separable in the original metric become linearly separable: the so-called kernel trick. Here is a very nice pictorial explanation.
Those kernel functions are simply the scalar products of the vectors (representing datapoints) in that space. For instance, let's say a polynomial kernel is a good candidate to fit our training d@ta: [$]K(x_i, x_j) = a x_i^T x_j + b[$]. You can call it an "activation function" if you like. You can also see that the number of model parameters grows with the number of datapoints. That's why SVMs were replaced with NNs, which scale better with the dataset size (at the cost of mathematical rigour). NNs don't have this problem, because the actual activation function is [$]w_i^T x_j + b_i[$], where [$]w[$] and [$]b[$] are parameters of the NN units. Hence the dimension of the problem is controlled by your chosen size of the NN layer, not by the data.
Summarising, I definitely wouldn't say that SVMs are NNs.

("d@ta" because Wordpress blocks me for "data"! - "A potentially unsafe operation has been detected in your request to this site")
kernel functions are simply the scalar products of the vectors
These are scalar-product kernels, and thus kernels for SVMs. But a kernel is ANY symmetric function [$]k(x,y)[$] (more precisely, any admissible kernel). For instance, [$]k(x,y) = \max(\langle x,y \rangle, 0)[$] is the ReLU network of TensorFlow. OK, you are speaking about scalar-product kernels. What's wrong with them?

And no again to "You can call it 'activation function' if you like. You can also see that the number of the model parameters grows with the number of datapoints".
I can work with whatever computational resources you want, or at a given precision, while taking into account all your data. This of course also holds for linear kernels, but these are not really interesting: they amount to linear regression. I also have an algorithm that is quite similar to deep learning: same methods, but more general, and theoretically bullet-proof.

And more interestingly, I can now explain why and when a Neural Network fails, and propose a patch to fix this mess when it occurs. This is the "Huff". But I can also "Puff" if I wish to...
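Taking the ReLU-kernel remark above at face value, here is a minimal sketch (assuming numpy and scikit-learn; the data and parameter choices are made up for illustration) of plugging [$]k(x,y) = \max(\langle x,y \rangle, 0)[$] into kernel ridge regression via a precomputed Gram matrix. Note that this [$]k[$] is symmetric but not obviously positive semi-definite, so the ridge penalty is doing real work here.

[code]
# Sketch: the quoted "ReLU kernel" k(x,y) = max(<x,y>, 0) used in
# kernel ridge regression through a precomputed Gram matrix.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def relu_kernel(A, B):
    """Gram matrix K[i, j] = max(<A[i], B[j]>, 0)."""
    return np.maximum(A @ B.T, 0.0)

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 3))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=200)
X_test = rng.normal(size=(20, 3))

model = KernelRidge(alpha=1e-2, kernel="precomputed")
model.fit(relu_kernel(X_train, X_train), y_train)
y_pred = model.predict(relu_kernel(X_test, X_train))  # cross-Gram vs training set
print(y_pred[:5])
[/code]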
Too many generals is indeed a problem. I have no idea what you're talking about, I don't understand a word of your LinkedIn notes, and you could equally well speak Mandarin to me. I had the same problem with one or two other people here. Good luck to you out there in the blue, riding on a smile and a shoeshine! :-)
 
JohnLeM
Posts: 379
Joined: September 16th, 2008, 7:15 pm

Re: Machine Learning: Frequently Asked Questions

October 8th, 2020, 3:27 pm

No problem, it is a public forum, and I am sorry if my posts are too technical, conflictual, or hard to understand. Obviously I am at your disposal, or at the disposal of anyone who wishes to discuss these matters further; they could have a real impact on those who are investing in Neural Network technologies.

But meanwhile, let's stick to lolcats!

[lolcat image]
 
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Machine Learning: Frequently Asked Questions

October 8th, 2020, 4:56 pm

More coherence is a good idea. I reckon kats can handle the technical level.

I hate blogs. LI is FB 2.0, it's junky. Use SSRN or arxiv.
 
JohnLeM
Posts: 379
Joined: September 16th, 2008, 7:15 pm

Re: Machine Learning: Frequently Asked Questions

October 8th, 2020, 5:16 pm

More coherence is a good idea. I reckon kats can handle the technical level.

I hate blogs. LI is FB 2.0, it's junky. Use SSRN or arxiv.
I am pushing pre-prints to arxiv or SSRN, together with submissions to peer-reviewed journals. In these posts I don't want to go that far: a simple scientific message, ideas, or fun numerical experiments, nothing more, 10 minutes' reading.
 
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Machine Learning: Frequently Asked Questions

October 8th, 2020, 5:55 pm

or fun numerical experiments,

Eh? These are not supposed to be fun! What's your secret?
 
JohnLeM
Posts: 379
Joined: September 16th, 2008, 7:15 pm

Re: Machine Learning: Frequently Asked Questions

October 8th, 2020, 7:30 pm

or fun numerical experiments,

Eh? These are not supposed to be fun! What's your secret?
Unfortunately, I find my job fun and exciting. Am I sick? I toy a lot with math and tech: a tribute to the Sex Pistols, a universal digital bill of rights (actually an information system following it), a model for economic crashes, etc., etc. Could you imagine pushing these experiments to arxiv? Nonetheless, I spent time building and toying with them; these small works on big ideas could be useful to others.
 
JohnLeM
Posts: 379
Joined: September 16th, 2008, 7:15 pm

Re: Machine Learning: Frequently Asked Questions

October 9th, 2020, 10:02 am

I was thinking about what Katastrofa said:
"You can also see that the number of the model parameters grows with the number of datapoints". The complexity of a Support Vector Machine is LINEAR in the number of datapoints, as it is for NNs: same methods, but one is understood and mastered and the other is not. And the first one can do things that the second can't. I think most people in the AI community do not know what an SVM really is.

The real point behind all this mess concerning Neural Networks is that it is going to be a real pain for the individuals and institutions that invested in these technologies, as witnessed by Katastrofa's reaction.
 
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Machine Learning: Frequently Asked Questions

October 9th, 2020, 2:46 pm

or fun numerical experiments,

Eh? These are not supposed to be fun! What's your secret?
Unfortunately, I find my job fun and exciting. Am I sick? I toy a lot with math and tech: a tribute to the Sex Pistols, a universal digital bill of rights (actually an information system following it), a model for economic crashes, etc., etc. Could you imagine pushing these experiments to arxiv? Nonetheless, I spent time building and toying with them; these small works on big ideas could be useful to others.
Numerical accuracy is also important.

www.youtube.com/watch?v=DXI1byuVixA&lis ... dex=7&t=0s
 
katastrofa
Posts: 7440
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: Machine Learning: Frequently Asked Questions

October 9th, 2020, 4:17 pm

More coherence is a good idea. I reckon kats can handle the technical level.

I hate blogs. LI is FB 2.0, it's junky. Use SSRN or arxiv.
Coherence comes after knowledge and understanding of problems. It's not enough to put technical terms together, like my AI text generators do. BTW, JohnLeMAI says (trained on the last 1500 posts):

Your own generalized and big deep AI and the future. AI guys know thus a exact mathematical finance pde boundary conditions. I can understand their pde engine. They could exhibit alan a quick of the people. They for the shows not hesitate to a little bit eigenvalues.
Theoretically to the question having mathematical convergence with the heston, dont think that we blame convergence had to get economical crisis?
Understood the sabr process now with the constant and define theoretically for part of s in the matrix. I can compute applications.
BTW is the analysis of the quantlib are not non tell. No problem is the i see one being [yup, JohnLeMAI is a bidirectional LSTM :-/]. More a sampling method at the point I can test the other real times now i found. Theoretically why i read this your paper from all the sabr. You define such an transition. You test this problem to be a totally more general purpose scheme that this finite difference AI can compute order with.

We read those sentences involuntarily trying to attribute some sense to them. There's none, just some familiar words and buzzwords.
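For the curious, a plausible toy version of such a generator (a guess in tf.keras, not the actual JohnLeMAI code; the corpus here is a stand-in for the 1500 posts), with the bidirectional LSTM from the aside above:

[code]
# Guess at a toy next-character generator; a bidirectional LSTM is an odd
# choice for generation (it peeks both ways), hence the ":-/" above.
import numpy as np
import tensorflow as tf

corpus = "theoretically the sabr pde converges to heston " * 50  # stand-in corpus
chars = sorted(set(corpus))
to_id = {c: i for i, c in enumerate(chars)}

win = 20
X = np.array([[to_id[c] for c in corpus[i:i + win]]
              for i in range(len(corpus) - win)])
y = np.array([to_id[corpus[i + win]] for i in range(len(corpus) - win)])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 16),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=1, verbose=0)  # one epoch is plenty for gibberish
[/code]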
 
JohnLeM
Posts: 379
Joined: September 16th, 2008, 7:15 pm

Re: Machine Learning: Frequently Asked Questions

October 9th, 2020, 5:31 pm

More coherence is a good idea. I reckon kats can handle the technical level.

I hate blogs. LI is FB 2.0, it's junky. Use SSRN or arxiv.
Coherence comes after knowledge and understanding of problems. It's not enough to put technical terms together, like my AI text generators do. BTW, JohnLeMAI says (trained on the last 1500 posts):

Your own generalized and big deep AI and the future. AI guys know thus a exact mathematical finance pde boundary conditions. I can understand their pde engine. They could exhibit alan a quick of the people. They for the shows not hesitate to a little bit eigenvalues.
Theoretically to the question having mathematical convergence with the heston, dont think that we blame convergence had to get economical crisis?
Understood the sabr process now with the constant and define theoretically for part of s in the matrix. I can compute applications.
BTW is the analysis of the quantlib are not non tell. No problem is the i see one being [yup, JohnLeMAI is a bidirectional LSTM :-/]. More a sampling method at the point I can test the other real times now i found. Theoretically why i read this your paper from all the sabr. You define such an transition. You test this problem to be a totally more general purpose scheme that this finite difference AI can compute order with.

We read those sentences involuntarily trying to attribute some sense to them. There's none, just some familiar words and buzzwords.
:) That's a nice and fun mockery :)
In any case, I am a mathematician, and I can argue with precise definitions/statements if needed.
Have a nice evening.
 
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Machine Learning: Frequently Asked Questions

October 10th, 2020, 12:26 pm

Why all the hullabaloo in ML circles about SVMs? It is just a special finite-dimensional case of a specific (hyperplane!) separation theorem in (geometric) Functional Analysis. It has been used in many fields for a long time, e.g. geometric modelling.

https://en.wikipedia.org/wiki/Hyperplan ... on_theorem

The logical result is the Hahn-Banach theorem in (infinite-dimensional) topological vector spaces.

Of course, this is all Greek to yer average data scientist...
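A minimal sketch of that correspondence (assuming numpy and scikit-learn; the point clouds are made up for illustration): a linear SVM is exactly a computed separating hyperplane [$]w^T x + b = 0[$] between two point sets.

[code]
# Sketch: a linear SVM = a maximum-margin separating hyperplane, i.e. the
# finite-dimensional separation theorem made computational.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
A = rng.normal(loc=(-2.0, -2.0), scale=0.5, size=(100, 2))  # one cloud
B = rng.normal(loc=(+2.0, +2.0), scale=0.5, size=(100, 2))  # the other
X = np.vstack([A, B])
y = np.array([0] * 100 + [1] * 100)

clf = SVC(kernel="linear", C=1.0).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]
print("hyperplane: w =", w, " b =", b)        # w.x + b = 0 separates the clouds
print("margin width:", 2 / np.linalg.norm(w))
[/code]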
 
JohnLeM
Posts: 379
Joined: September 16th, 2008, 7:15 pm

Re: Machine Learning: Frequently Asked Questions

October 10th, 2020, 2:40 pm

Why all the hullabaloo in ML circles about SVMs? It is just a special finite-dimensional case of a specific (hyperplane!) separation theorem in (geometric) Functional Analysis. It has been used in many fields for a long time, e.g. geometric modelling.

https://en.wikipedia.org/wiki/Hyperplan ... on_theorem

The logical result is the Hahn-Banach theorem in (infinite-dimensional) topological vector spaces.

Of course, this is all Greek to yer average data scientist...
Well, to me, the real scandal is not even the math behind all this. What is driving me mad is that the AI community is heavily relying on, and promoting, tools based mainly on frameworks such as TensorFlow or PyTorch, supported by billions in investment. We know that these tools work for toy problems. But nobody can tell whether they work for production purposes on specific critical applications: TensorFlow/PyTorch are basically not reliable, because there is no reliable theory behind neural networks, except kernel engineering, the SVM-based technology, which is a different framework from TensorFlow.

Today we are witnessing big institutions among our clients that started investing heavily in solutions based on TensorFlow/PyTorch. Guess who is now speaking with them, telling them that TensorFlow might not do the job, because it might not provide convergent methods in some situations? Guys like me, who already tried to warn years ago that a problem could occur. I should thank this community, however, because there is now a big market just for cleaning out these incompetency problems, and I think that investors/shareholders should also take it as a funny joke.

Can you imagine the waste of time and money for professional teams, researchers, etc.? To reassure our clients, we have to say that, in any case, if TensorFlow does not work, any simple home-made SVM tool can save the situation, because such tools are better, faster, and more reliable. This situation is surrealistic!
Last edited by JohnLeM on October 10th, 2020, 3:27 pm, edited 1 time in total.
 
Cuchulainn
Topic Author
Posts: 20254
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Machine Learning: Frequently Asked Questions

October 10th, 2020, 3:10 pm

Can you imagine the waste of time and money for professional teams, researchers, etc.? To reassure our clients, we have to say that, in any case, if TensorFlow does not work, our home-made tools can save the situation, because they are better, faster, and more reliable tools. This situation is surrealistic!

Why not just keep the marketing/sales spin out of your posts?

I agree with you, nonetheless.