What kind of research group is it? From my side, I started contacting some teams; do not hesitate to point me to alternative ones! Any suggestions?

Yes

- Cuchulainn
**Posts:** 58401 · **Location:** Amsterdam

What is your goal?

I think you need to define what you want to achieve.

*My thesis is that Artificial Intelligence (aka neural networks) applications to the numerical analysis of Partial Differential Equations are about to rediscover meshfree methods.*

Then you need to prove this, at the very least. Unfortunately, this thread has revolved around notation and missing links. My modest proposal is to reduce the scope initially, because the abstract (and sometimes idiosyncratic) notation is confounding things; at least that's how it seems to me.

To be very clear, the point is the following: we developed these numerical methods years ago to solve industrial problems in mathematical finance. Now I am witnessing a wide community of young researchers trying to tackle the same problems. This might be a waste of time for everybody, especially for them.

Hence we are thinking about releasing our PDE framework for research purposes, so that anybody could toy with it. Obviously, we need to partner with a research group. I am therefore starting to contact some of them to discuss this topic, so as to best fit their needs and our constraints and kick off this project.

- Cuchulainn

Do you have open sources, GitHub, etc.? What about QuantLib, etc.?

But your premise, which is (or seems to be) that your work is the same as AI and/or can replace it, is not obvious. I would uncouple PDE from AI in your posts. But that's just my take.

*Now I am witnessing a wide community of young researchers trying to tackle the same problems. This might be a waste of time for everybody, especially for them.*

That's a crazy remark. I personally welcome the next generation, whom I train in FDM and C++. These are skills.

It's like saying that you don't need to learn French because we have translators.

All products have a finite shelf life (5-10 years?) and I don't think yours is an exception. How many maintenance programmers?

Cuchulainn, you misunderstood my point, or I expressed myself poorly. We probably share the same view of science and math. I welcome the new generation, since I am proposing to give them access to our numerical methods. I am simply saying: our methods perform [$]O(\epsilon^{-3/2})[$] operations to compute a solution to Fokker-Planck / Kolmogorov equations (and many others) with accuracy [$]\epsilon[$], whatever the dimension is. I welcome any new generation of methods that breaks this bound.
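
For concreteness, here is a back-of-the-envelope sketch of what the two bounds mean side by side, a naive Monte-Carlo method at [$]O(\epsilon^{-2})[$] versus the claimed [$]O(\epsilon^{-3/2})[$]. Setting all constants to 1 is my assumption for illustration; real methods have very different constants.

```python
# Illustrative comparison of the operation counts implied by the two
# convergence rates above. All constants are set to 1 (an assumption):
# only the asymptotic ratio eps^{-1/2} is meaningful here.

def mc_ops(eps):
    """Naive Monte-Carlo: accuracy eps costs on the order of eps^-2 samples."""
    return eps ** -2

def meshfree_ops(eps):
    """The O(eps^-3/2) operation count claimed for the meshfree solver."""
    return eps ** -1.5

for eps in (1e-1, 1e-2, 1e-3):
    ratio = mc_ops(eps) / meshfree_ops(eps)
    print(f"eps={eps:g}: MC ~ {mc_ops(eps):.0f} ops, "
          f"meshfree ~ {meshfree_ops(eps):.0f} ops, ratio ~ {ratio:.0f}x")
```

At [$]\epsilon = 10^{-3}[$] the asymptotic gap is a factor of [$]\epsilon^{-1/2} \approx 32[$], and under the stated bound this holds whatever the dimension is.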

Is there any error estimation available for AI methods for PDEs? The only estimation I know of is https://arxiv.org/abs/1901.10854: as far as I understood (good luck to the readers...), this estimation is worse than that of a naive Monte-Carlo method.

[addendum] This reference (6/02/19, one week ago...) is far more readable and understandable. I read it quickly: their Theorem 4.1 seems to confirm that the convergence rate of their methods is precisely that of a naive Monte-Carlo method.
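
The [$]O(N^{-1/2})[$] Monte-Carlo rate referred to above is easy to check empirically. The sketch below is a generic illustration (estimating the mean of a uniform variable), not the experiment from the paper.

```python
# Empirical check of the naive Monte-Carlo rate: the RMS error of an
# n-sample mean estimate should shrink like n^{-1/2}, i.e. quadrupling
# n should roughly halve the error.
import random

random.seed(42)

TRUE_MEAN = 0.5  # mean of Uniform(0, 1)

def mc_rms_error(n, trials=200):
    """RMS error over `trials` independent n-sample Monte-Carlo estimates."""
    sq = 0.0
    for _ in range(trials):
        est = sum(random.random() for _ in range(n)) / n
        sq += (est - TRUE_MEAN) ** 2
    return (sq / trials) ** 0.5

for n in (100, 400, 1600):
    print(n, mc_rms_error(n))
```

Each 4x increase in sample count buys roughly one halving of the error, which is why accuracy [$]\epsilon[$] costs [$]O(\epsilon^{-2})[$] samples.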

- Cuchulainn

I had a look a while back and sent feedback to the authors. It is basically a bunch of maths without the detail.

On another topic, where do scientists learn how to write articles? What constitutes a good article?

IMO it is 3 phases:

- analysis model (e.g. PDE)

- numeric/design (e.g. FDM, meshless)

- output + code

- Cuchulainn

If you are considering https://arxiv.org/abs/1901.10854, I agree. We burnt our eyes for several hours before giving up, even though I sent a (quite) polite feedback to the authors. However, this reference (6/02/19) seems actually good and readable.

Well, I totally agree with you. I confess however that this exercise is harder when you do private R&D as I do. For the curse of dimensionality, we decided to invest our own funds 12 years ago, because public research did not want, or did not succeed, to treat this important topic. We always kept a link with public research, we are still keeping it tight, and our work will probably go public quite soon. However, now that the battle has seemed over for years (to my feeling; though that is no longer a feeling, it is a four-year-old theorem with a proof), we are witnessing public research teams starting to pretend they solved the curse of dimensionality, even though they are still cursed... Maybe you can understand that I can sometimes be slightly upset about this topic. I recognize however that this is a communication problem: that is exactly why I am writing today on the Wilmott site and trying to interact with some teams.

- Cuchulainn

I understand, and I don't believe any of the AI-DL-ML/PDE articles I have seen to date. As you say, so many unfounded wild claims.

My fav is **Deep Galerkin Method**... meshuggah.

Meshuggah?? :) We shall all be deeper soon: I am eagerly waiting for deep Crank-Nicolson and deep entropic schemes, relying on deep meshes and deep additions!

Deep salutations!

- Cuchulainn

Meshuggah is a Yiddish word we Amsterdammers use when we want to scream. And it is a metal group. BTW do you know the French group ghusa? My son is the drummer in that group!

"DEEP CN", the last frontier.

Interesting. I ended up finding the Yiddish meaning on Google: "Indeed, prophets are referred to as meshuga in..."

[addendum] Concerning the Deep Galerkin Method, here is an arXiv ref on the topic. It is quite clear. It also seems quite clear to me, after a quick reading, that they are reinventing meshfree methods, except that they don't know how to use them: no error analysis, and that's fortunate for them, because they would be cursed too.
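
For readers who have not seen one, here is a minimal sketch of the kind of meshfree method being rediscovered: radial basis function (Kansa-type) collocation for a 1D Poisson problem. The Gaussian kernel, the shape parameter, and the point count are illustrative choices of mine, not the tuned methods discussed in this thread.

```python
# Meshfree (RBF collocation) sketch for u''(x) = f(x) on [0, 1] with
# u(0) = u(1) = 0, f(x) = -pi^2 sin(pi x), exact solution u(x) = sin(pi x).
# No mesh: the solution is a weighted sum of Gaussians centred at
# scattered points, with the weights fixed by collocation.
import math

def solve(A, b):
    """Dense linear solve by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

EPS = 7.0  # Gaussian shape parameter (an illustrative, untuned choice)
phi = lambda x, c: math.exp(-(EPS * (x - c)) ** 2)
phi_xx = lambda x, c: (4 * EPS**4 * (x - c)**2 - 2 * EPS**2) * phi(x, c)

n = 15
pts = [i / (n - 1) for i in range(n)]  # collocation points = RBF centres

# Boundary rows enforce u = 0; interior rows enforce u'' = f.
A, b = [], []
for x in pts:
    if x in (0.0, 1.0):
        A.append([phi(x, c) for c in pts])
        b.append(0.0)
    else:
        A.append([phi_xx(x, c) for c in pts])
        b.append(-math.pi ** 2 * math.sin(math.pi * x))

w = solve(A, b)
u = lambda x: sum(wj * phi(x, c) for wj, c in zip(w, pts))
print("u(0.5) =", u(0.5), "(exact: 1.0)")
```

Nothing above needs a grid, which is why such methods extend to scattered points in any dimension; the hard and interesting part, missing from the DGM paper, is precisely the error analysis.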

- Cuchulainn

The article on DGM looks good. Will try to decipher it soon.

Problems in 666 dimensions are overkill if you wish to prove the thesis. It is too abstract.

I would benchmark (compare and contrast) DGM in 3 dimensions against this article

https://papers.ssrn.com/sol3/papers.cfm ... id=3288882

This is a real concrete test case.

Cuchulainn, that's a very nice suggestion, as I am currently integrating the SABR model in CoDeFi for a client. Which table in this paper, precisely, would you like me to compete with? Obviously I will not be using DGM, but DES (deep entropic schemes).

[addendum] I had a first look. They are asserting that their scheme is 10,000 times faster than a finite difference Craig-Sneyd scheme?

Last edited by JohnLeM on February 19th, 2019, 12:13 pm, edited 5 times in total.

I wrote to the authors to tell them that the work is very good and readable, but that these machine learning methods still lack error estimations and are probably still cursed, even if they should find a way to fix this up quite soon. I also added that we can provide these error estimations if they need them. I cc-ed some AI authors, found through arXiv, who work on the curse of dimensionality problem.

[addendum] They kindly answered me that this work is part of this Brazilian initiative. Indeed, these guys are (quite good) students, testing the deep methods of the authors I cc-ed!
