
 
FaridMoussaoui
Posts: 356
Joined: June 20th, 2008, 10:05 am

Re: Are Artificial Intelligence methods (AKA Neural Networks) for PDEs about to rediscover the wheel ?

February 12th, 2019, 9:01 pm

[$] \circ [$] is the standard composition operator.

The reviewer was asleep for the Hessian. The subscript [$]\#[$] follows Villani's notation.
Last edited by FaridMoussaoui on February 13th, 2019, 9:07 am, edited 1 time in total.
 
JohnLeM
Topic Author
Posts: 204
Joined: September 16th, 2008, 7:15 pm

Re: Are Artificial Intelligence methods (AKA Neural Networks) for PDEs about to rediscover the wheel ?

February 13th, 2019, 8:32 am

I think there's a typo on page 3; you say [$]\nabla^2[$] is the Jacobian but it is the Laplacian operator (or the Hessian, which can become singular/negative definite).

BTW is reference [6] available yet?

I don't get what [$]A \circ S[$] does/is: it seems to be used a number of times without a sharp definition.

// Haven't got my head around article [1] yet and how to pull back a measure. It uses a superscript [$]\#[$] while the article uses a subscript [$]\#[$]. Are they the same [$]\#[$]?
[$]\nabla^2 h = \nabla S[$]: that is a Jacobian, seen as the Hessian of a convex function. This is important when one wants to define [$]S[$] as a one-to-one map.
[6] is written; we should have released it years ago, but we are all busy :/ I'll try to give a talk about it ASAP.
[$]A \circ S[$] is composition: [$](A \circ S)(x) = A(S(x))[$].
Article [1] is magical. It tells you that any map [$]S[$] can be decomposed as [$]S = (\nabla h) \circ T[$], with [$]h[$] convex and [$]T[$] Lebesgue-measure preserving. It is the analogue, for maps, of the polar decomposition of a matrix [$]M = AU[$], with [$]A[$] symmetric positive and [$]U[$] unitary.
Not really, but quite close: if [$]S_\# \mu = \nu[$], then [$]S^\# \nu = \mu[$]; that is nothing but the change of variables [$]\int \varphi \circ S \, d\mu = \int \varphi \, d\nu[$] (AFAIR).
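For concreteness, here is a minimal Python sketch of that change of variables in 1-D. The choices are illustrative, not from the paper: [$]\mu[$] is the Lebesgue measure on (0,1), [$]S[$] the standard normal quantile (so [$]S_\# \mu = N(0,1)[$]), and [$]\varphi(x) = x^2[$] an arbitrary test function.
[code]
# Monte Carlo check of S_# mu = nu  <=>  int phi(S) dmu = int phi dnu, in 1-D.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
u = rng.uniform(size=1_000_000)                    # samples from mu = U(0,1)
S = norm.ppf                                       # quantile map: S_# mu = N(0,1)
phi = lambda x: x**2                               # arbitrary test function

lhs = phi(S(u)).mean()                             # int phi(S(x)) dmu(x)
rhs = phi(rng.standard_normal(1_000_000)).mean()   # int phi dnu
print(lhs, rhs)                                    # both ~ 1.0, the variance of N(0,1)
[/code]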
 
Cuchulainn
Posts: 58122
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Re: Are Artificial Intelligence methods (AKA Neural Networks) for PDEs about to rediscover the wheel ?

February 13th, 2019, 10:03 am

I'm kind of allergic to magic :D It's very measure-theoretic, which is computationally not obvious?

BTW if you write [$](\nabla S)[$] then it is the gradient (a vector), not the Jacobian (a matrix)??

Regarding the mapping from n-d space to the unit cube [$](0,1)^n[$], I'm assuming we are talking about the independent variable?
Last edited by Cuchulainn on February 13th, 2019, 10:17 am, edited 8 times in total.
 
JohnLeM
Topic Author
Posts: 204
Joined: September 16th, 2008, 7:15 pm

Re: Are Artificial Intelligence methods (AKA Neural Networks) for PDEs about to rediscover the wheel ?

February 13th, 2019, 10:10 am

I'm kind of allergic to magic :D

BTW if you write [$](\nabla S)[$] then it is the gradient (a vector), not the Jacobian (a matrix)??
:) I just wanted to express my deep admiration for this work of Brenier.
[$](\nabla S)[$] is a field of vectors if [$]S[$] is a scalar function. It is a Jacobian (i.e. a field of matrices) if [$]S : R^D \mapsto R^D[$] is a map?
 
Cuchulainn
Posts: 58122
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Re: Are Artificial Intelligence methods (AKA Neural Networks) for PDEs about to rediscover the wheel ?

February 13th, 2019, 10:21 am

I don't agree with the notation! I do agree with the conclusion.

Gradient is
https://nl.wikipedia.org/wiki/Gradi%C3%ABnt_(wiskunde)

Jacobian

https://nl.wikipedia.org/wiki/Jacobi-matrix

Or am I missing something?
Call it [$]J(S)[$], otherwise everyone thinks it's [$]\mathrm{grad}[$]. It confuses me no end.
Last edited by Cuchulainn on February 13th, 2019, 10:30 am, edited 1 time in total.
 
JohnLeM
Topic Author
Posts: 204
Joined: September 16th, 2008, 7:15 pm

Re: Are Artificial Intelligence methods (AKA Neural Networks) for PDEs about to rediscover the wheel ?

February 13th, 2019, 10:27 am

I don't agree with the notation. 

Gradient is
https://nl.wikipedia.org/wiki/Gradi%C3%ABnt_(wiskunde)
Well, this is the same notation: [$]\nabla f := (\partial_1 f, \cdots, \partial_D f)[$] for a scalar function [$]f : R^D \mapsto R[$], and [$]\nabla S := (\partial_i S_j)_{i,j=1,\cdots,D}[$] for the Jacobian of the map [$]S := (S_1, \ldots, S_D)[$]. It seems correct: it is the gradient applied to each component. For instance, I can think of the Hessian operator as [$]\nabla^2 = \nabla \nabla^T[$], and the Laplacian as [$]\Delta = \nabla^T \nabla[$] (with a slight abuse of notation).
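As a quick sanity check of this notation, here is a small finite-difference sketch in Python (the function names and the example h are mine, purely illustrative):
[code]
# nabla S as the Jacobian (the gradient applied to each component), and, for
# S = nabla h with h convex, nabla S = nabla^2 h, the Hessian of h.
# Example: h(x) = |x|^2 / 2, so nabla h is the identity map and nabla^2 h = I.
import numpy as np

def jacobian(S, x, eps=1e-6):
    """Central-difference Jacobian (dS_j/dx_i) of a map S: R^D -> R^D."""
    x = np.asarray(x, dtype=float)
    D = x.size
    J = np.zeros((D, D))
    for i in range(D):
        e = np.zeros(D); e[i] = eps
        J[i] = (S(x + e) - S(x - e)) / (2 * eps)
    return J

grad_h = lambda x: x                # S := nabla h for h(x) = |x|^2 / 2
x0 = np.array([0.3, 0.7])
print(jacobian(grad_h, x0))         # ~ identity matrix: the Hessian of h
[/code]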
Last edited by JohnLeM on February 13th, 2019, 10:46 am, edited 1 time in total.
 
Cuchulainn
Posts: 58122
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Re: Are Artificial Intelligence methods (AKA Neural Networks) for PDEs about to rediscover the wheel ?

February 13th, 2019, 10:45 am

OK, but then define it in the article! IMO you are still using notation I have never seen in this context.

The gradient is a special case of the Jacobian, and writing the latter using the former's notation is scary IMO.
 
JohnLeM
Topic Author
Posts: 204
Joined: September 16th, 2008, 7:15 pm

Re: Are Artificial Intelligence methods (AKA Neural Networks) for PDEs about to rediscover the wheel ?

February 13th, 2019, 11:03 am

Regarding the mapping from n-d space to the unit cube [$](0,1)^n[$], I'm assuming we are talking about the independent variable?
I am not sure I understand your question. Yes, everything is mapped into a fixed set in this paper, for instance [$](0,1)^D[$]; that is a localization principle. In other words, instead of solving the Fokker-Planck equation (1), describing a probability density [$]\mu(t,x), x \in R^D[$], we equivalently solve the equation satisfied by [$]S(t,\cdot) : (0,1)^D \mapsto R^D[$], transporting the Lebesgue measure of the unit cube (denoted [$]dx[$]) onto [$]\mu(t,\cdot)[$], i.e. [$]S(t,\cdot)_\# dx = d\mu(t,\cdot)[$].
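To make this concrete in 1-D (an illustrative special case, not the paper's equations): for the heat equation with [$]u(0,\cdot) = \delta_{x_0}[$], the density [$]\mu(t,\cdot)[$] is [$]N(x_0, 2t)[$] and [$]S(t,\cdot)[$] is just its quantile function.
[code]
# 1-D illustration of S(t,.)_# dx = mu(t,.): for u_t = u_xx, u(0,.) = delta_{x0},
# mu(t,.) = N(x0, 2t) and S(t,.) is its quantile function. Values are illustrative.
import numpy as np
from scipy.stats import norm

x0, t = 5.0, 0.25
S = lambda y: x0 + np.sqrt(2 * t) * norm.ppf(y)   # S(t,.): (0,1) -> R

y = np.linspace(0.01, 0.99, 99)    # fixed grid on the unit interval (0,1)
x = S(y)                           # image grid follows the density
print(x.min(), x.max())            # points concentrated around x0 = 5
[/code]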
 
Cuchulainn
Posts: 58122
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Re: Are Artificial Intelligence methods (AKA Neural Networks) for PDEs about to rediscover the wheel ?

February 13th, 2019, 12:23 pm

Regarding the mapping from n-d space to the unit cube [$](0,1)^n[$], I'm assuming we are talking about the independent variable?
I am not sure I understand your question. Yes, everything is mapped into a fixed set in this paper, for instance [$](0,1)^D[$]; that is a localization principle. In other words, instead of solving the Fokker-Planck equation (1), describing a probability density [$]\mu(t,x), x \in R^D[$], we equivalently solve the equation satisfied by [$]S(t,\cdot) : (0,1)^D \mapsto R^D[$], transporting the Lebesgue measure of the unit cube (denoted [$]dx[$]) onto [$]\mu(t,\cdot)[$], i.e. [$]S(t,\cdot)_\# dx = d\mu(t,\cdot)[$].
I mean e.g. solving the heat equation u_t = u_xx on the half line x > 0. Define a new independent variable y = x/(1+x) to get the PDE u_t = a(y)(a(y)u_y)_y on (0,1), where a(y) = (1-y)^2.
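A quick symbolic check of that transformed equation (just the chain rule, assuming the domain x > 0 as above):
[code]
# Change of variable y = x/(1+x) on x > 0: dy/dx = (1-y)^2, d2y/dx2 = -2(1-y)^3,
# so u_xx = (dy/dx)^2 u_yy + (d2y/dx2) u_y should equal a(a u_y)_y, a(y) = (1-y)^2.
import sympy as sp

y = sp.symbols('y')
u = sp.Function('u')(y)
a = (1 - y)**2

u_xx   = a**2 * sp.diff(u, y, 2) - 2 * (1 - y)**3 * sp.diff(u, y)  # chain rule
target = a * sp.diff(a * sp.diff(u, y), y)                         # a (a u_y)_y
print(sp.simplify(u_xx - target))   # prints 0
[/code]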

I am not familiar with the term "localization principle". Maybe that's my confusion. Is it a term from physics?

For motivation, is it possible to give an example (even 1d/2d) to show how equations (5) and (6) work?
 
JohnLeM
Topic Author
Posts: 204
Joined: September 16th, 2008, 7:15 pm

Re: Are Artificial Intelligence methods (AKA Neural Networks) for PDEs about to rediscover the wheel ?

February 13th, 2019, 12:59 pm

I mean e.g. solving the heat equation u_t = u_xx on the half line x > 0. Define a new independent variable y = x/(1+x) to get the PDE u_t = a(y)(a(y)u_y)_y on (0,1), where a(y) = (1-y)^2.
If you consider a fixed-in-time change of variable, you are stuck: this is precisely an error that the AI community makes today. Recall that Fokker-Planck equations are considered with Dirac measures as initial conditions. Try solving the one-dimensional heat equation [$]u_t = u_{xx}[$], [$]u(0,x) = \delta_{x_0}(x)[$], where [$]x_0[$] can be anywhere, numerically with your change of variable [$]y = x/(1+x)[$] to see this.
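A small numerical illustration of the problem (the grid size and [$]x_0[$] are arbitrary choices, purely for illustration):
[code]
# Why a fixed-in-time change of variable struggles with Dirac initial data:
# on a uniform grid in y = x/(1+x), the heat kernel centered at a large x0
# falls entirely between grid points.
import numpy as np

t, x0 = 0.25, 60.0
y = np.linspace(0.01, 0.99, 99)     # fixed uniform grid in y
x = y / (1 - y)                     # back to x on (0, inf)

kernel = np.exp(-(x - x0)**2 / (4 * t)) / np.sqrt(4 * np.pi * t)
print(np.sum(kernel > 1e-8))        # prints 0: no grid point resolves the peak
[/code]
Compare with the moving quantile grid in the sketch above, which follows the density wherever [$]x_0[$] sits.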
 
FaridMoussaoui
Posts: 356
Joined: June 20th, 2008, 10:05 am

Re: Are Artificial Intelligence methods (AKA Neural Networks) for PDEs about to rediscover the wheel ?

February 13th, 2019, 1:05 pm

The change of variable is for the probability (transition) density (as he is solving the Fokker-Planck equation).
For example, for the GBM process [$]dX = X r \, dt + X \sigma \, dW[$], we have [$]X \in R_{+}[$].
We look for a map to localise from [$]R_{+}[$] to [$](0,1)[$].
The map is defined in terms of the cumulative probability function, which takes values in [$](0,1)[$].

PS: sorry, this duplicates Jean-Marc's answer. I was typing when he posted his.
 
JohnLeM
Topic Author
Posts: 204
Joined: September 16th, 2008, 7:15 pm

Re: Are Artificial Intelligence methods (AKA Neural Networks) for PDEs about to rediscover the wheel ?

February 13th, 2019, 1:08 pm

I am not familiar with the term "localization principle". Maybe that's my confusion. Is it a term from physics?

For motivation, is it possible to give an example (even 1d/2d) to show how equations (5) and (6) work?
I am not familiar with this term either, but it describes quite well what we wanted to do.

In one dimension, there are plenty of examples in my old paper.
In two dimensions, the CRAS paper contains a numerical example for Heston. Here is another example for a lognormal process.
 
JohnLeM
Topic Author
Posts: 204
Joined: September 16th, 2008, 7:15 pm

Re: Are Artificial Intelligence methods (AKA Neural Networks) for PDEs about to rediscover the wheel ?

February 13th, 2019, 1:11 pm

The change of variable is for the probability (transition) density (as he is solving the Fokker-Planck equation).
For example, for the GBM process [$]dX = X r \, dt + X \sigma \, dW[$], we have [$]X \in R_{+}[$].
We look for a map to localise from [$]R_{+}[$] to [$](0,1)[$].
The map is defined in terms of the cumulative probability function, which takes values in [$](0,1)[$].

PS: sorry, this duplicates Jean-Marc's answer. I was typing when he posted his.
No problem Farid, it is better to have several points of view. It is almost what you are saying: in one dimension, the map is a quantile function, and its inverse is a cumulative distribution function. In several dimensions, the correct generalization of a quantile is a map [$]S = \nabla h : (0,1)^D \mapsto R^D[$], with [$]h[$] convex (Brenier again).
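In the 1-D GBM case this map is explicit; a minimal sketch (the parameter values are mine, purely illustrative):
[code]
# The 1-D localizing map for GBM dX = X r dt + X sigma dW: X_t is lognormal,
# so the quantile map S: (0,1) -> R_+ and its inverse (the CDF) are explicit.
import numpy as np
from scipy.stats import norm

r, sigma, X0, t = 0.05, 0.2, 1.0, 1.0
m = np.log(X0) + (r - 0.5 * sigma**2) * t     # mean of log X_t
s = sigma * np.sqrt(t)                        # std dev of log X_t

S   = lambda y: np.exp(m + s * norm.ppf(y))   # quantile map: S_# dy = law of X_t
cdf = lambda x: norm.cdf((np.log(x) - m) / s) # the cumulative, S's inverse
print(S(0.5), cdf(S(0.5)))                    # round trip: second value = 0.5
[/code]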
 
JohnLeM
Topic Author
Posts: 204
Joined: September 16th, 2008, 7:15 pm

Re: Are Artificial Intelligence methods (AKA Neural Networks) for PDEs about to rediscover the wheel ?

February 13th, 2019, 6:40 pm

The article alone is not enough to implement the algorithm (in my opinion). As usual, you have to look at the references.
Farid, as I wrote, I think this is going to be really difficult to implement alone. A suggestion could be: find a public research group that could help release a version for academic purposes, so that anybody could toy with it. Any suggestions?
 
Cuchulainn
Posts: 58122
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Re: Are Artificial Intelligence methods (AKA Neural Networks) for PDEs about to rediscover the wheel ?

February 13th, 2019, 8:31 pm

Any suggestions?
Yes :D