
Re: DL and PDEs

Posted: May 25th, 2019, 7:15 pm
by Cuchulainn
ISayMoo
You chose not to answer the most important question:

"But ... what's the problem, i.e. the question that is trying to get out?"

So, next time I will write

These questions are not even wrong. (JUST KIDDING)
But ... SERIOUSLY what's the problem, i.e. the question that is trying to get out?

Make sure you don't act like a 6-year-old.

Re: DL and PDEs

Posted: May 25th, 2019, 7:34 pm
by Cuchulainn
Instead of applying NNs to ODEs, apply ODEs to NNs: https://arxiv.org/abs/1806.07366
Another thing: it's a 10-page article with more than 100 references! Something really odd there. Maybe it's ML style?
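Roughly, the idea of the paper in a minimal sketch of my own (plain NumPy and explicit Euler; the paper itself uses adaptive solvers and the adjoint method for training, so take this only as an illustration):

```python
import numpy as np

# Instead of stacking discrete layers h_{k+1} = h_k + f(h_k), treat the hidden
# state as a continuous trajectory dh/dt = f(h, t; theta) and hand it to an
# ODE solver. Here f is a tiny random MLP and the solver is explicit Euler.

rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.1, size=(16, 4)), np.zeros(16)
W2, b2 = rng.normal(scale=0.1, size=(4, 16)), np.zeros(4)

def f(h, t):
    # the "layer" is now a vector field parameterised by the network weights
    return W2 @ np.tanh(W1 @ h + b1) + b2

def odeint_euler(h0, t0=0.0, t1=1.0, steps=100):
    h, dt = h0.copy(), (t1 - t0) / steps
    for k in range(steps):
        h = h + dt * f(h, t0 + k * dt)   # h_{k+1} = h_k + dt * f(h_k, t_k)
    return h

h0 = rng.normal(size=4)          # input features
print(odeint_euler(h0))          # the network "output" is the ODE state at t1
```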

Re: DL and PDEs

Posted: May 25th, 2019, 8:01 pm
by ISayMoo
I think it's nice when people are conscious about giving credit for previous work.

Re: DL and PDEs

Posted: May 25th, 2019, 8:04 pm
by ISayMoo
ISayMoo
You chose not to answer the most important question:

"But ... what's the problem, i.e. the question that is trying to get out?"

So, next time I will write

These questions are not even wrong. (JUST KIDDING)
But ... SERIOUSLY what's the problem, i.e. the question that is trying to get out?

Make sure you don't act like a 6-year-old.
Seriously: the first Section explains what their goal is.

Re: DL and PDEs

Posted: August 13th, 2019, 10:14 am
by Cuchulainn
Question on FDM and ML

On model validation, an idea might be to adapt current ML techniques to the PDE setting, e.g. holdout sets, cross-validation and variations thereof (see e.g. VanderPlas, Python Data Science Handbook, pages 361-370).
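Something like this, as a rough sketch (X and y are random placeholders standing in for sampled model parameters and the corresponding FDM/PDE prices, and scikit-learn's MLPRegressor stands in for whatever network is actually used):

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPRegressor

# Placeholder data: in practice X = sampled model parameters, y = FDM/PDE prices.
X = np.random.rand(2000, 4)
y = np.random.rand(2000)

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500)

# 5-fold cross-validation of the surrogate, scored by mean squared error.
scores = cross_val_score(net, X, y,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0),
                         scoring="neg_mean_squared_error")
print("per-fold MSE:", -scores)
```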

Feedback from august members welcome.

Re: DL and PDEs

Posted: August 13th, 2019, 11:29 am
by ISayMoo
I would look into methods used to test interpolation algorithms (which is what pricing models really are).
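A toy version of such a test, just to fix ideas (my own illustration: fit on a coarse grid, then measure the error at off-grid points, exactly as one would test a pricing surrogate against the reference model):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def true_surface(k, t):
    # stand-in for "exact" model prices / implied vols
    return np.exp(-k**2) * (1.0 + 0.1 * t)

# fit the interpolant on a coarse (moneyness, maturity) grid
k_grid = np.linspace(-1.0, 1.0, 11)
t_grid = np.linspace(0.1, 2.0, 9)
interp = RectBivariateSpline(k_grid, t_grid,
                             true_surface(k_grid[:, None], t_grid[None, :]))

# evaluate at random off-grid points and report the worst-case error
rng = np.random.default_rng(1)
k_test = rng.uniform(-1.0, 1.0, 1000)
t_test = rng.uniform(0.1, 2.0, 1000)
err = interp.ev(k_test, t_test) - true_surface(k_test, t_test)
print("max abs interpolation error:", np.abs(err).max())
```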

Re: DL and PDEs

Posted: August 29th, 2019, 7:28 pm
by Cuchulainn
I would look into methods used to test interpolation algorithms (which is what pricing models really are).
Somewhat related to this is the fact that first-generation 'traditional DL-PDE' approaches fail to approximate the greeks properly (non-C^2 activation functions) and, second, that they are not arbitrage-free.
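To illustrate the non-C^2 point (my own sketch, not from any of the papers): a ReLU network is piecewise linear in its inputs, so its second derivative, i.e. the gamma of a price approximation, vanishes almost everywhere, whereas a tanh network has a smooth, non-trivial one.

```python
import numpy as np

# One-hidden-layer net f(x) = sum_i c_i * act(w_i * x + b_i).
rng = np.random.default_rng(0)
w, b, c = rng.normal(size=20), rng.normal(size=20), rng.normal(size=20)

def net(x, act):
    return np.sum(c * act(w * x + b))

relu = lambda z: np.maximum(z, 0.0)

def second_derivative(f, x, h=1e-3):
    # central finite difference for f''(x)
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

x0 = 0.37
print("ReLU net f''(x0):", second_derivative(lambda x: net(x, relu), x0))     # ~0 a.e.
print("tanh net f''(x0):", second_derivative(lambda x: net(x, np.tanh), x0))  # smooth, nonzero
```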
BTW what do you think of image-based implicit learning (Horvath)?

Hopefully we can tell more soon, in the context of the Heston and SABR models.

Re: DL and PDEs

Posted: August 29th, 2019, 11:43 pm
by ISayMoo
Link?

Re: DL and PDEs

Posted: August 30th, 2019, 3:21 pm
by Cuchulainn

Re: DL and PDEs

Posted: August 30th, 2019, 4:58 pm
by ISayMoo
Looked at the abstract. Looks like a neat idea. Of course the devil is in the details.

Re: DL and PDEs

Posted: August 31st, 2019, 9:34 am
by Cuchulainn
Looked at the abstract. Looks like a neat idea. Of course the devil is in the details.
I don't want to jump the gun just yet, but robust and accurate results for SABR et al. are looking good. And very well explained; even I understand the flow.

Corollary. Here, all activation units are equal, but some are more equal than others.

Re: DL and PDEs

Posted: September 1st, 2019, 8:30 pm
by ISayMoo
If they calibrated directly to market quotes, they would run into the problem of overfitting to a limited dataset. But since they calibrate to model quotes, they can generate as many training and test samples as they like! I like that.
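A minimal sketch of that generate-your-own-data setup, with a Black-Scholes call as my stand-in for the actual model quotes (so the pricing function and parameter ranges here are placeholders, not the ones from the paper):

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    # closed-form "model quote" used as the training label
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(42)
n = 100_000                               # as many samples as we like
S     = rng.uniform(50.0, 150.0, n)
K     = rng.uniform(50.0, 150.0, n)
T     = rng.uniform(0.1, 2.0, n)
r     = rng.uniform(0.0, 0.05, n)
sigma = rng.uniform(0.05, 0.6, n)

X = np.column_stack([S, K, T, r, sigma])  # network inputs
y = bs_call(S, K, T, r, sigma)            # labels: model (not market) prices
# X, y can feed any regression network; a test set is generated the same way.
```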

Re: DL and PDEs

Posted: September 2nd, 2019, 11:13 am
by Cuchulainn
If they calibrated directly to market quotes, they would run into the problem of overfitting to a limited dataset. But since they calibrate to model quotes, they can generate as many training and test samples as they like! I like that.
Yes, this approach feels good. In the current case they took 85% of the data for training and 15% for testing to evaluate network performance; of the 85%, 65% (of the total) went to explicit training and 20% to validation. At my request they performed 5-fold cross-validation. (I also requested a confusion matrix to see how the classification works, but maybe time is up.) 5-fold cross-validation errors: MSE ~ 9 x 1.0e-8, MAE ~ 0.00020 (Thesis #1).
The synthetic SABR data are based on 3'000'00 uniformly generated random samples for each model parameter.
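As a sketch of that split and the 5-fold run (placeholder data and network; in practice X, y are the synthetic SABR samples described above):

```python
import numpy as np
from sklearn.model_selection import train_test_split, KFold
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error

X = np.random.rand(10_000, 4)             # placeholder SABR parameters
y = np.random.rand(10_000)                # placeholder quotes/vols

# 85% train+validation, 15% test; the 85% is split again into ~65%/20% of the total.
X_tv, X_test, y_tv, y_test = train_test_split(X, y, test_size=0.15, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_tv, y_tv, test_size=0.20 / 0.85, random_state=0)

# 5-fold cross-validation on the train+validation block, reporting MSE and MAE per fold.
for fold, (tr, va) in enumerate(KFold(n_splits=5, shuffle=True, random_state=0).split(X_tv)):
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=300)
    net.fit(X_tv[tr], y_tv[tr])
    y_hat = net.predict(X_tv[va])
    print(fold, mean_squared_error(y_tv[va], y_hat), mean_absolute_error(y_tv[va], y_hat))
```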

For the Heston model, the idea is to generate the data 1) analytically and 2) by Soviet splitting (Yanenko); in the "West", Craig-Sneyd (ADI) is typically used. It will be interesting to see what comes out.
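For reference, the first-order Yanenko (locally one-dimensional) splitting for a two-dimensional problem u_t = (A_1 + A_2)u, written in my notation (the thesis may well use a different variant):

```latex
(I - \Delta t\, A_1)\, u^{n+1/2} = u^{n}, \qquad
(I - \Delta t\, A_2)\, u^{n+1}   = u^{n+1/2}
```

Each fractional step is a one-dimensional implicit solve; ADI schemes of Craig-Sneyd type add an explicit predictor and a correction stage to handle the mixed-derivative (correlation) term in Heston.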

How often does the network need to be retrained on new data: every 6 months, ..., every few seconds?

Re: DL and PDEs

Posted: September 2nd, 2019, 8:14 pm
by ISayMoo
IMHO the NN doesn't need to be recalibrated. The model itself... ask Paul ;-)

Re: DL and PDEs

Posted: September 3rd, 2019, 5:47 pm
by Cuchulainn
Like the difference between Pure and Empirical Knowledge?