
 
User avatar
FinancialAlex
Topic Author
Posts: 1
Joined: April 11th, 2005, 10:34 pm

Adjoint and Automatic Differentiation (AAD) in computational finance

July 16th, 2011, 5:19 am

As described in various papers and conference presentations, the AAD approach can potentially reduce the computational cost of sensitivities by several orders of magnitude, with no approximation error. It can be used either for computing Greeks or for computing the exact (up to machine precision) gradient (or even the Hessian matrix), the latter being very useful with a gradient-based local optimizer. A framework based on AD can also be developed for the automatic computation of Greeks/sensitivities from existing code, similar to work done over the last 15-20 years in areas such as fluid dynamics, meteorology and data assimilation. An overview of the approach and of the relevant literature is presented in http://papers.ssrn.com/sol3/papers.cfm? ... =1828503

If you know of any other relevant references, please mention them in this thread. If you have used any AD software recently, please share your experience with it.

Thank you
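To make the cost argument concrete, here is a minimal hand-coded sketch (a toy example of my own, not taken from the paper): the Black-Scholes call price written one operation per line, followed by a single reverse (adjoint) sweep that produces delta, vega and rho together, at roughly the cost of one extra valuation. The analytic Greeks are printed alongside as a check.

// Hand-coded adjoint (reverse-mode) sketch: Black-Scholes call price with
// delta, vega and rho obtained from a single backward sweep.
#include <cmath>
#include <cstdio>

static const double PI = 3.14159265358979323846;
static double cnd(double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }          // normal CDF
static double pdf(double x) { return std::exp(-0.5 * x * x) / std::sqrt(2.0 * PI); }  // normal PDF

int main() {
    const double S = 100.0, K = 105.0, T = 1.0, r = 0.03, sigma = 0.2;

    // Forward sweep: the pricing code, one operation per line.
    const double sqrtT    = std::sqrt(T);
    const double sigSqrtT = sigma * sqrtT;
    const double d1       = (std::log(S / K) + (r + 0.5 * sigma * sigma) * T) / sigSqrtT;
    const double d2       = d1 - sigSqrtT;
    const double df       = std::exp(-r * T);
    const double price    = S * cnd(d1) - K * df * cnd(d2);

    // Reverse (adjoint) sweep: statements visited in reverse order, seeded
    // with price_bar = 1; all requested sensitivities fall out of one sweep.
    double S_bar = 0.0, r_bar = 0.0, sigma_bar = 0.0;
    double d1_bar = 0.0, d2_bar = 0.0, df_bar = 0.0, sigSqrtT_bar = 0.0;
    const double price_bar = 1.0;

    // price = S*cnd(d1) - K*df*cnd(d2)
    S_bar  += cnd(d1) * price_bar;
    d1_bar += S * pdf(d1) * price_bar;
    df_bar += -K * cnd(d2) * price_bar;
    d2_bar += -K * df * pdf(d2) * price_bar;
    // df = exp(-r*T)
    r_bar  += -T * df * df_bar;
    // d2 = d1 - sigSqrtT
    d1_bar       += d2_bar;
    sigSqrtT_bar += -d2_bar;
    // d1 = (log(S/K) + (r + 0.5*sigma^2)*T) / sigSqrtT
    S_bar        += d1_bar / (S * sigSqrtT);
    r_bar        += d1_bar * T / sigSqrtT;
    sigma_bar    += d1_bar * sigma * T / sigSqrtT;
    sigSqrtT_bar += -d1_bar * d1 / sigSqrtT;
    // sigSqrtT = sigma * sqrtT
    sigma_bar    += sqrtT * sigSqrtT_bar;

    std::printf("price          = %.6f\n", price);
    std::printf("delta: adjoint = %.6f, analytic = %.6f\n", S_bar,     cnd(d1));
    std::printf("vega : adjoint = %.6f, analytic = %.6f\n", sigma_bar, S * pdf(d1) * sqrtT);
    std::printf("rho  : adjoint = %.6f, analytic = %.6f\n", r_bar,     K * T * df * cnd(d2));
    return 0;
}

A bump-and-revalue run needs one repricing per sensitivity, while the reverse sweep above touches each forward statement once; that is where the orders-of-magnitude saving comes from when the number of inputs is large.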
Last edited by FinancialAlex on July 15th, 2011, 10:00 pm, edited 1 time in total.
 
User avatar
dkkhirec
Posts: 0
Joined: September 7th, 2011, 4:12 pm

Adjoint and Automatic Differentiation (AAD) in computational finance

September 8th, 2011, 11:30 am

Hi there,

I have some very rudimentary code here: http://www.quantcode.com/modules/docman ... t_dir=19

It is about Greeks computations for vanilla (BS) and path-dependent (Asian) options.

I am interested as well, in case anyone has something ...
 
User avatar
mathmarc
Posts: 2
Joined: March 18th, 2003, 6:50 am

Adjoint and Automatic Differentiation (AAD) in computational finance

September 9th, 2011, 11:38 am

Quote, originally posted by FinancialAlex: "If you know of any other relevant references, please mention them in this thread. If you have used any AD software recently, please share your experience with it."

As indicated in the introduction of your paper, two of the most important areas in computational finance are Greeks and calibration. I have written a note on how to combine them.

The price of an exotic instrument is often related to a specific basket of vanilla instruments. The prices of those vanilla options are computed in a given base model. The complex model parameters are calibrated to fit the vanilla option prices from the base model. This step is usually done through a generic numerical equation solver, and the calibration can be a significant part of the computation time. The exotic instrument is then priced with the calibrated complex model.

In the algorithmic differentiation process, we suppose that the pricing algorithm and its derivatives are implemented. We want to differentiate the exotic price in the complex model with respect to the parameters of the base model. In the bump-and-recompute approach, this corresponds to computing the risks with model recalibration. The note describes how adjoint algorithmic differentiation can be used together with financial model calibration. Thanks to the implicit function theorem, differentiating the calibration process itself is not required in order to differentiate the full pricing process.

The note is available on SSRN: Adjoint Algorithmic Differentiation: Calibration and Implicit Function Theorem.
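To sketch the implicit-function-theorem step here (the notation below is simplified compared to the note; V denotes the exotic price and \lambda an adjoint vector): write the calibration as f(\Theta, \Phi) = 0, with as many equations as complex-model parameters \Phi, so that \Phi = \Phi(\Theta) and

D_\Theta \Phi = - (D_\Phi f)^{-1} (D_\Theta f).

For the exotic price V(\Phi(\Theta)), the sensitivities to the base-model parameters are

D_\Theta V = (D_\Phi V) (D_\Theta \Phi) = - (D_\Phi V) (D_\Phi f)^{-1} (D_\Theta f).

In adjoint mode one solves the single small linear system \lambda^T (D_\Phi f) = D_\Phi V and then forms D_\Theta V = - \lambda^T (D_\Theta f), so the calibration routine itself is never differentiated.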
 
User avatar
FinancialAlex
Topic Author
Posts: 1
Joined: April 11th, 2005, 10:34 pm

Adjoint and Automatic Differentiation (AAD) in computational finance

September 10th, 2011, 12:33 pm

I came across the paper just two days before you mentioned it, and I have found it quite interesting. I had already posted, on September 5, a revised version of the AAD overview paper to clarify some points on how to use AAD, and I will include your paper in my next revision. I have one question, please. Unless I am mistaken, your method is applicable in cases where the calibration reduces to solving a nonlinear equation, rather than a minimization problem. Is that a fair description of its applicability? I need to look deeper into it, but I can definitely see its potential.
 
User avatar
mathmarc
Posts: 2
Joined: March 18th, 2003, 6:50 am

Adjoint and Automatic Differentiation (AAD) in computational finance

September 12th, 2011, 12:23 pm

Quote, originally posted by FinancialAlex: "I have one question, please. Unless I am mistaken, your method is applicable in cases where the calibration reduces to solving a nonlinear equation, rather than a minimization problem. Is that a fair description of its applicability?"

The way the examples are presented, it is fair to say that the calibration is done by solving a nonlinear equation (if you allow the equation to be multidimensional; if not, it should be nonlinear equations), with as many instruments as parameters.

Nevertheless, you can read the theoretical part in a way that allows an (unconstrained) minimization problem. If you use least squares with h_i(C, \Theta, \Phi) = NPV(\Theta, C) - NPV(\Phi, C) (market value minus model value) and minimize |h(\Phi)|^2, the minimization implies that (D_\Phi h).h = 0 (minimum => first-order derivative = 0). Now you can apply the same approach using f(C, \Theta, \Phi) = (D_\Phi h).h = 0. But then you need the second-order derivative (Hessian) of h to compute the first-order derivatives of f. This can maybe be simplified by using a Hessian approximation in the least-squares approach (see Numerical Recipes in C, section 14.4, Nonlinear Models).

The theoretical Calibration Technique description is in the general framework but presents only the nonlinear equation solving case as an example. I have not done the least-squares / Hessian (with or without approximation) implementation yet, so I don't know whether it works well, and I have not included it explicitly in the note.
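To spell the least-squares case out in the same notation (again only a sketch, not something I have implemented): the first-order condition of minimizing |h(\Phi)|^2 is

f(C, \Theta, \Phi) := (D_\Phi h)^T h = 0,

and applying the implicit function theorem to this f needs

D_\Phi f = (D_\Phi h)^T (D_\Phi h) + \sum_i h_i D^2_\Phi h_i,

so second derivatives of h appear. The Gauss-Newton simplification drops the \sum_i h_i D^2_\Phi h_i term, which is reasonable when the residuals h_i are small at the calibrated point; that is the Hessian approximation used in the least-squares literature referred to above.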
 
User avatar
FinancialAlex
Topic Author
Posts: 1
Joined: April 11th, 2005, 10:34 pm

Adjoint and Automatic Differentiation (AAD) in computational finance

September 13th, 2011, 10:11 am

I have posted a small revision to the AAD paper. In particular, there was a typo in the last line of the adjoint code given as an example. I have also referenced your paper. I will take a closer look at how your approach would be applicable in the case of minimization.
 
User avatar
Cuchulainn
Posts: 20203
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Adjoint and Automatic Differentiation (AAD) in computational finance

September 18th, 2011, 9:26 am

I am having difficulties getting a grip on this method, not sure why.

1. The method is formal; what are the underlying assumptions and scope? Is AAD mathematically rigorous?
2. Are there any other more concrete examples, such as taking the exact BS equation and using it as input?
3. The heat PDE in section 4 is clear (btw, typo in the 1st formula, the term c is incorrect), but the notation in section 4.1 is not clear: what is u_dot in eq. 4.1, and how is the differentiation justified? There seems to be a free mix of discrete and continuous variables. Eqs. 4.2 and 4.3 are also unclear to me. Am I missing something vital?

What about case 101: the function u(x; a, b) = exp(-ax/b). How does AAD work in this case?
Last edited by Cuchulainn on September 17th, 2011, 10:00 pm, edited 1 time in total.
 
User avatar
FinancialAlex
Topic Author
Posts: 1
Joined: April 11th, 2005, 10:34 pm

Adjoint and Automatic Differentiation (AAD) in computational finance

September 24th, 2011, 10:08 pm

1. Yes, AAD is mathematically rigorous (see the 2008 book by Griewank and Walther). Regarding underlying assumptions, it depends on which approach is used: "discretize-and-differentiate" or "differentiate-then-discretize". If one uses the first approach, "discretize-and-differentiate", then no additional assumptions are needed.

2. I do not yet have other examples of AAD in quant finance, but I will have them soon. You may also take a look at the examples in the papers by Giles and Capriotti. AAD has been employed in other areas, such as computational fluid dynamics and meteorology, and examples in those areas are presented in various papers at www.autodiff.org

3. u_dot corresponds to the "tangent linear" variable. Essentially it comes from differentiating the original code line by line. In particular, equation 4.1 is obtained by differentiating the middle equation in the PDE discretization. Differentiating the first and third equations implies u_dot(k+1,1) = 0 and u_dot(k+1,N) = 0, but I have not put them in Eq. 4.1 (these equations are only included in the matrix representation given by Eq. 4.2). It would be better to also include them in Eq. 4.1 for additional clarity, and the next revision will contain this. Eq. 4.2 is the matrix representation of Eq. 4.1. Eq. 4.3 is the matrix representation of the equation obtained by differentiating the line containing the definition of the cost functional, F = sum_j (u(M,j) - Y(j))^2. Differentiating it gives F_dot = 2 * sum_j (u(M,j) - Y(j)) * u_dot(M,j), which can be written in matrix form as shown in Eq. 4.3.

Here is how it works for u(x; a, b) = exp(-a*x/b). Let's say that we want to compute derivatives with respect to a and b.

The tangent linear code is:
u_tglin = exp(-a*x/b) * (-x/b) * a_tglin + exp(-a*x/b) * (a*x/b^2) * b_tglin

The adjoint code is:
a_adj = exp(-a*x/b) * (-x/b) * u_adj
b_adj = exp(-a*x/b) * (a*x/b^2) * u_adj
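As a self-contained check of these formulas, here is a small sketch (my own; the names u_tglin, a_adj, etc. are chosen to match the lines above) comparing the tangent-linear and adjoint results against the analytic partial derivatives:

// Tangent-linear (forward) and adjoint (reverse) differentiation of
// u(x; a, b) = exp(-a*x/b), checked against the analytic partial derivatives.
#include <cmath>
#include <cstdio>

int main() {
    const double x = 1.5, a = 0.8, b = 2.0;

    // Original code
    const double u = std::exp(-a * x / b);

    // Tangent-linear code: the seed (a_tglin, b_tglin) selects the direction.
    // Seeding (1, 0) yields du/da; seeding (0, 1) would yield du/db.
    const double a_tglin = 1.0, b_tglin = 0.0;
    const double u_tglin = std::exp(-a * x / b) * (-x / b) * a_tglin
                         + std::exp(-a * x / b) * (a * x / (b * b)) * b_tglin;

    // Adjoint code: seed u_adj = 1 and both du/da and du/db come out of a
    // single reverse sweep.
    const double u_adj = 1.0;
    const double a_adj = std::exp(-a * x / b) * (-x / b) * u_adj;
    const double b_adj = std::exp(-a * x / b) * (a * x / (b * b)) * u_adj;

    // Analytic derivatives for comparison
    const double du_da = -(x / b) * u;
    const double du_db = (a * x / (b * b)) * u;

    std::printf("u = %.10f\n", u);
    std::printf("du/da: tangent = %.10f, adjoint = %.10f, analytic = %.10f\n",
                u_tglin, a_adj, du_da);
    std::printf("du/db: adjoint = %.10f, analytic = %.10f\n", b_adj, du_db);
    return 0;
}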
 
User avatar
mathmarc
Posts: 2
Joined: March 18th, 2003, 6:50 am

Adjoint and Automatic Differentiation (AAD) in computational finance

December 9th, 2012, 12:47 pm

For those interested in Algorithmic Differentiation in Finance, there will be a (free) webinar with that title on Wednesday, December 12th, 2012:

Webinar - Algorithmic Differentiation in Finance: Fast Greeks and Beyond (by OpenGamma)

Disclaimer: I have a personal interest in the webinar.

Link should be fixed now ;-)
Last edited by mathmarc on December 8th, 2012, 11:00 pm, edited 1 time in total.
 
User avatar
spv205
Posts: 1
Joined: July 14th, 2002, 3:00 am

Adjoint and Automatic Differentiation (AAD) in computational finance

December 9th, 2012, 6:44 pm

do you want to fix that link https: ?
 
User avatar
FaridMoussaoui
Posts: 327
Joined: June 20th, 2008, 10:05 am
Location: Genève, Genf, Ginevra, Geneva

Adjoint and Automatic Differentiation (AAD) in computational finance

December 12th, 2012, 11:23 pm

Is the webinar presentation available?
Last edited by FaridMoussaoui on December 12th, 2012, 11:00 pm, edited 1 time in total.
 
User avatar
Cuchulainn
Posts: 20203
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Adjoint and Automatic Differentiation (AAD) in computational finance

December 13th, 2012, 10:28 am

Is there a Boost Library for AD?
 
User avatar
mathmarc
Posts: 2
Joined: March 18th, 2003, 6:50 am

Adjoint and Automatic Differentiation (AAD) in computational finance

December 13th, 2012, 4:47 pm

Quote, originally posted by FaridMoussaoui: "Is the webinar presentation available?"

Yes it is!

The slides are available at http://docs.opengamma.com/display/DOC/Presentations.

The Webinar recording itself is available at http://www.opengamma.com/downloads/Algo ... 2-2012.mov.
 
User avatar
Cuchulainn
Posts: 20203
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: Adjoint and Automatic Differentiation (AAD) in computational finance

April 23rd, 2018, 8:52 am

F# Automatic Differentiation (and applications); it can be called from C# and C++/CLI via its DLL:

http://diffsharp.github.io/DiffSharp/
 
User avatar
ISayMoo
Posts: 2332
Joined: September 30th, 2015, 8:30 pm

Re: Adjoint and Automatic Differentiation (AAD) in computational finance

April 25th, 2018, 9:28 pm

What do you people think about this paper? https://arxiv.org/abs/1802.05098

I'm sceptical whether their math really works.