I thought about it and created one. But finally this "deep kernel engineering" folder was moved and merged into the folder "kernel engineering": the deep neural networks told me that they feel very comfortable inside! And a folder for DKE, "deep kernel engineering"? Only a matter of time.
Too many generals is indeed a problem. I have no idea what you're talking about, I don't understand a word of your LinkedIn notes, and you could equally well speak Mandarin to me. I had the same problem with one or two other people here. Good luck to you out there in the blue, riding on a smile and a shoeshine!

"kernel functions are simply the scalar products of the vectors"

These are scalar products of vectors, thus they are kernels for SVMs. A kernel is ANY symmetric function [$]k(x,y)[$] (more precisely, any admissible kernel). For instance [$]k(x,y) = \max(\langle x,y\rangle, 0)[$] is the ReLU network of TensorFlow. OK, you are speaking about scalar-product kernels. What's wrong with them?

I can see some superficial formal analogies between SVMs and NNs if I forget about all the conditions the SVM's kernels need to meet to produce trustworthy results (see the Mercer conditions).
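To make the "a kernel is any symmetric function" point concrete, here is a minimal numpy sketch (the function name and the sample points are mine) that builds the Gram matrix for the [$]k(x,y) = \max(\langle x,y\rangle, 0)[$] example. Note that symmetry alone is not enough for Mercer admissibility, which also requires positive semi-definiteness of the Gram matrix for every choice of points:

```python
import numpy as np

def relu_kernel(x, y):
    # k(x, y) = max(<x, y>, 0), the "ReLU-like" kernel mentioned above
    return max(float(np.dot(x, y)), 0.0)

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))  # six random 3-d points

# Gram matrix K[i, j] = k(x_i, x_j)
K = np.array([[relu_kernel(xi, xj) for xj in X] for xi in X])

# Symmetry holds by construction: k(x, y) = k(y, x)
assert np.allclose(K, K.T)

# Mercer's condition additionally requires positive semi-definiteness;
# here we only inspect the spectrum of this one sample.
eigvals = np.linalg.eigvalsh(K)
print("min eigenvalue:", eigvals.min())
```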
SVMs are based on the trick of changing the metric of the data space in such a way that datapoints which aren't linearly separable in the original metric become linearly separable: the so-called kernel trick. Here is a very nice pictorial explanation.
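For readers without the picture at hand, a minimal sketch of the same idea (the data and the feature map are my own choice, not taken from the linked explanation): 1-d points that no single threshold can separate become separable after mapping into a higher-dimensional feature space.

```python
import numpy as np

# 1-d data: class +1 lives far from the origin, class -1 near it.
# No threshold on x alone separates them...
x_pos = np.array([-2.0, -1.5, 1.5, 2.0])   # label +1
x_neg = np.array([-0.3, 0.0, 0.4])         # label -1

# ...but after the feature map phi(x) = (x, x**2) the classes are
# separated by the horizontal line  x**2 = 1  in the new space.
phi = lambda x: np.stack([x, x**2], axis=-1)

assert np.all(phi(x_pos)[:, 1] > 1.0)   # +1 class above the line
assert np.all(phi(x_neg)[:, 1] < 1.0)   # -1 class below the line

# The corresponding kernel is just the scalar product in feature space:
# k(u, v) = <phi(u), phi(v)> = u*v + u**2 * v**2
```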
Those kernel functions are simply the scalar products of the vectors (representing datapoints) in that space. For instance, let's say a polynomial kernel is a good candidate to fit our training d@ta: [$]K(x_i, x_j) = (a\, x_i^T x_j + b)^d[$]. You can call it an "activation function" if you like. You can also see that the number of model parameters grows with the number of datapoints. That's why SVMs were replaced with NNs, which scale better with the dataset size (at the cost of mathematical rigour). NNs don't have this problem, because the actual pre-activation is [$]w_i^T x_j + b_i[$], where [$]w[$] and [$]b[$] are parameters of the NN units. Hence the dimension of the problem is controlled by the size you choose for the NN layer (not by the data).
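A small numpy sketch of this scaling argument (the kernel parameters and layer width are illustrative choices of mine): the kernel machine's Gram matrix, and hence its set of dual coefficients, grows with the number of training points, while an NN layer's parameter count is fixed by its width.

```python
import numpy as np

def poly_kernel(X, Y, a=1.0, b=1.0, d=2):
    # polynomial kernel K(x_i, x_j) = (a * x_i^T x_j + b)^d
    return (a * X @ Y.T + b) ** d

rng = np.random.default_rng(1)
for n in (10, 100, 1000):            # growing training set
    X = rng.normal(size=(n, 5))
    K = poly_kernel(X, X)
    # A kernel machine carries one dual coefficient alpha_i per
    # training point, so its "size" grows with n:
    assert K.shape == (n, n)

# An NN layer's parameter count is fixed by the chosen layer width,
# independent of n: weights W (width x dim) and biases b (width,).
width, dim = 32, 5
W = rng.normal(size=(width, dim))
b = rng.normal(size=(width,))
n_params = W.size + b.size
print("NN layer params:", n_params)   # 32*5 + 32 = 192, whatever n is
```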
Summarising, I definitely wouldn't say that SVMs are NNs.
("d@ta" because Wordpress blocks me for "data"! - "A potentially unsafe operation has been detected in your request to this site")
And no again to: "You can call it an 'activation function' if you like. You can also see that the number of model parameters grows with the number of datapoints."
I can work within whatever computational resources you want, or at a given precision, taking into account all your data. This of course also holds for linear kernels, but these are not really interesting: they amount to linear regression. I also have an algorithm that is quite similar to learning. Same methods, but more general, and theoretically bullet-proof.
And more interestingly, I can now explain why and when a Neural Network fails, and propose a patch to fix this mess when it occurs. This is the "Huff". But I can also "Puff" if I wish to...
I am pushing arxiv or SSRN pre-prints, together with submissions to peer-reviewed journals. In these posts I don't want to go that far: a simple scientific message, ideas, or fun numerical experiments, nothing more, 10 mins' reading.

More coherence is a good idea. I reckon kats can handle the technical level.
I hate blogs. LI is FB 2.0, it's junky. Use SSRN or arxiv.
Unfortunately, I find my job fun and exciting. Am I sick? I toy a lot with math or tech: a tribute to the Sex Pistols, a universal digital bill of rights (actually an information system following it), a model for an economic crash, etc. etc. Could you imagine pushing these experiments to arxiv? Nonetheless, I spent time building and toying with them; these small works on big ideas could be useful to others.

"...or fun numerical experiments..."
Eh? These are not supposed to be fun! What's your secret?
Numerical accuracy is also important.
Coherence comes after knowledge and understanding of problems. It's not enough to put technical terms together, like my AI text generators do. BTW, JohnLeMAI says (trained on the last 1500 posts):
That's a nice and fun mocking!
Your own generalized and big deep AI and the future. AI guys know thus a exact mathematical finance pde boundary conditions. I can understand their pde engine. They could exhibit alan a quick of the people. They for the shows not hesitate to a little bit eigenvalues.
Theoretically to the question having mathematical convergence with the heston, dont think that we blame convergence had to get economical crisis?
Understood the sabr process now with the constant and define theoretically for part of s in the matrix. I can compute applications.
BTW is the analysis of the quantlib are not non tell. No problem is the i see one being [yup, JohnLeMAI is a bidirectional LSTM :-/]. More a sampling method at the point I can test the other real times now i found. Theoretically why i read this your paper from all the sabr. You define such an transition. You test this problem to be a totally more general purpose scheme that this finite difference AI can compute order with.
We read those sentences involuntarily trying to attribute some sense to them. There is none: just some familiar words and buzzwords.
Well, to me, the real scandal is not even the math behind all this. What is driving me mad is that the AI community heavily relies on and promotes tools based mainly on frameworks such as TensorFlow or PyTorch, supported by billions in investment. We know that these tools work for toy problems. But nobody can tell whether they work for production purposes on specific critical applications: TensorFlow / PyTorch are basically not reliable applications, because there is no reliable theory behind neural networks, except kernel engineering, the SVM-based technology, which is a different framework from TensorFlow.

Why all the hullabaloo in ML circles about SVMs? It is just a special finite-dimensional case of a specific (hyperplane!) separation theorem in (geometric) Functional Analysis. It has been used in many fields for a long time, e.g. geometric modelling etc.
https://en.wikipedia.org/wiki/Hyperplan ... on_theorem
The underlying result is the Hahn-Banach theorem in (infinite-dimensional) topological vector spaces.
Of course, this is all Greek to yer average data scientist.
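For what it's worth, the finite-dimensional separation statement fits in a few lines. The toy sketch below (the point clouds and the use of the perceptron rule are my own choices, not anyone's production method) finds a separating hyperplane [$]w^T x + b = 0[$] between two disjoint clouds:

```python
import numpy as np

# Two disjoint point clouds in the plane; the hyperplane separation
# theorem guarantees a hyperplane w.x + b = 0 between them, and the
# perceptron rule below finds one in finitely many updates.
rng = np.random.default_rng(2)
A = rng.normal(loc=[+3.0, +3.0], scale=0.5, size=(20, 2))  # label +1
B = rng.normal(loc=[-3.0, -3.0], scale=0.5, size=(20, 2))  # label -1
X = np.vstack([A, B])
y = np.r_[np.ones(20), -np.ones(20)]

w, b = np.zeros(2), 0.0
for _ in range(100):                  # epochs, far more than needed here
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:    # misclassified point -> update
            w, b = w + yi * xi, b + yi
            errors += 1
    if errors == 0:                   # every point on its correct side
        break

assert np.all(y * (X @ w + b) > 0)    # a separating hyperplane found
```

SVMs refine this by picking, among all such hyperplanes, the one with maximal margin.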