
### Adjoint and Automatic Differentiation (AAD) in computational finance

Posted: **July 16th, 2011, 5:19 am**

by **FinancialAlex**

As described in various papers and conference presentations, the AAD approach can potentially reduce the computational cost of sensitivities by several orders of magnitude while introducing no approximation error. It can be used either for computing Greeks or for computing the exact (up to machine precision) gradient (or even the Hessian matrix), the latter being very useful with a gradient-based local optimizer. A framework based on AD can also be developed for automatic computation of Greeks/sensitivities from existing code, similar to work done over the last 15-20 years in areas such as fluid dynamics, meteorology, and data assimilation. An overview of the approach and of the relevant literature is presented in

http://papers.ssrn.com/sol3/papers.cfm? ... =1828503

If you know any other relevant references, please mention them in this thread. If you have recently used any AD software, please share your experience with it. Thank you.
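
To illustrate the key property behind the cost claim, here is a minimal hand-written reverse-mode (adjoint) sketch in Python; the function and all names are a toy example of my own, not code from the paper:

```python
import math

def f_and_grad(x, y, z):
    # Forward sweep of f(x, y, z) = exp(x*y) + z*x, recording intermediates.
    w1 = x * y
    w2 = math.exp(w1)
    f = w2 + z * x
    # Reverse (adjoint) sweep: ONE backward pass yields all three partials --
    # the cost does not grow with the number of inputs.
    f_adj = 1.0
    w2_adj = f_adj                  # from f = w2 + z*x
    z_adj = f_adj * x               # from f = w2 + z*x
    x_adj = f_adj * z               # from f = w2 + z*x
    w1_adj = w2_adj * w2            # from w2 = exp(w1); d(exp)/dw1 = exp(w1)
    x_adj += w1_adj * y             # from w1 = x*y
    y_adj = w1_adj * x              # from w1 = x*y
    return f, (x_adj, y_adj, z_adj)
```

One backward sweep returns every partial derivative at a small constant multiple of the cost of one function evaluation, whereas bump-and-revalue needs one extra evaluation per input, which is where the orders-of-magnitude savings for many-parameter sensitivities come from.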

### Adjoint and Automatic Differentiation (AAD) in computational finance

Posted: **September 8th, 2011, 11:30 am**

by **dkkhirec**

Hi there, I have some very rudimentary code here:

http://www.quantcode.com/modules/docman ... t_dir=19

It is about Greeks computations for vanilla (BS) and path-dependent (Asian) options. I am interested as well in case anyone has something ...

### Adjoint and Automatic Differentiation (AAD) in computational finance

Posted: **September 9th, 2011, 11:38 am**

by **mathmarc**

> Originally posted by **FinancialAlex**: If you know any other relevant references, please mention them in this thread. In case you have used recently any AD software, if possible please share your experience with it.

As indicated in the introduction of your paper, two of the most important areas in computational finance are Greeks and calibration. I have written a note on how to combine them.

The price of an exotic instrument is often related to a specific basket of vanilla instruments. The prices of those vanilla options are computed in a given base model. The complex model parameters are calibrated to fit the vanilla prices from the base model. This step is usually done through a generic numerical equation solver, and the calibration can be a significant part of the computation time. The exotic instrument is then priced with the calibrated complex model.

In the algorithmic differentiation process, we suppose that the pricing algorithm and its derivatives are implemented. We want to differentiate the exotic price in the complex model with respect to the parameters of the base model. In the bump-and-recompute approach, this corresponds to computing the risks with model recalibration. The note describes how adjoint algorithmic differentiation can be used together with financial model calibration. Thanks to the implicit function theorem, the differentiation of the calibration process itself is not required to differentiate the full pricing process.

The note is available on SSRN: Adjoint Algorithmic Differentiation: Calibration and Implicit Function Theorem.
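
A toy sketch of the idea, with a hypothetical one-parameter model of my own invention (V is the calibration-instrument price, P the exotic price); it only illustrates the implicit-function-theorem shortcut, not the note's actual models:

```python
# Toy stand-ins (all hypothetical): V is the vanilla price in the complex
# model as a function of its parameter theta, P the exotic price.
V  = lambda theta: theta ** 2    # calibration instrument price
dV = lambda theta: 2.0 * theta   # dV/dtheta (in practice supplied by AAD)
P  = lambda theta: theta ** 3    # exotic price in the calibrated model
dP = lambda theta: 3.0 * theta ** 2

def calibrate(m, theta=1.0, tol=1e-14):
    # Newton solve of V(theta) = m; note the solver is never differentiated.
    for _ in range(100):
        step = (V(theta) - m) / dV(theta)
        theta -= step
        if abs(step) < tol:
            break
    return theta

m = 2.0                          # base-model ("market") vanilla price
theta = calibrate(m)
# Implicit function theorem: V(theta(m)) = m  =>  dtheta/dm = 1 / V'(theta),
# so the risk with recalibration is dP/dm = P'(theta) / V'(theta).
risk_ift = dP(theta) / dV(theta)

# Check against bump-and-recalibrate (finite difference)
eps = 1e-6
risk_fd = (P(calibrate(m + eps)) - P(calibrate(m - eps))) / (2.0 * eps)
```

The implicit-function-theorem risk needs only derivatives of the pricing functions at the calibrated point; the bump-and-recalibrate check re-runs the whole Newton solve per bump.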

### Adjoint and Automatic Differentiation (AAD) in computational finance

Posted: **September 10th, 2011, 12:33 pm**

by **FinancialAlex**

I came across the paper just two days before you mentioned it, and I found it quite interesting. I had already posted, on September 5, a revised version of the AAD overview paper to clarify some points on how to use AAD, and I will include your paper in my next revision. One question, please: unless I am mistaken, your method is applicable in cases where the calibration reduces to solving a nonlinear equation, rather than to solving a minimization problem. Is that a fair description of its applicability? I need to look deeper into it, but I can definitely see its potential.

### Adjoint and Automatic Differentiation (AAD) in computational finance

Posted: **September 12th, 2011, 12:23 pm**

by **mathmarc**

> Originally posted by **FinancialAlex**: I have one question, please. Unless I am mistaken, your method is applicable in the cases where the calibration is reduced to solving a nonlinear equation, instead of solving a minimization problem. Is that a fair description of its applicability?

The way the examples are presented, it is fair to say that the calibration is done by solving a nonlinear equation (multidimensional, so strictly speaking a system of nonlinear equations), with as many instruments as parameters.

Nevertheless, the theoretical part can be read in a way that allows an (unconstrained) minimization problem. If you use least squares with h_i(C, \Theta, \Phi) = NPV(\Theta, C) - NPV(\Phi, C) (market value minus model value) and minimize |h(\Phi)|^2, the minimization implies that (D_\Phi h).h = 0 (a minimum forces the first-order derivative to zero). You can then apply the same approach to f(C, \Theta, \Phi) = (D_\Phi h).h = 0. But then you need the second-order derivative (Hessian) of h to compute the first-order derivatives of f. This can perhaps be simplified by using the Hessian approximation of the least-squares approach (see Numerical Recipes in C, section 14.4, Nonlinear Models).

The theoretical calibration technique description is in the general framework but presents only the nonlinear-equation-solving case as an example. I have not done the least-squares / Hessian (with or without approximation) implementation yet, so I don't know whether it works well, and I have not included it explicitly in the note.
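
To make the least-squares variant concrete, here is a small numerical sketch with a made-up two-quote, one-parameter model h: the calibrated parameter satisfies the first-order condition (D_\Phi h).h = 0, and differentiating that condition via the implicit function theorem brings in the second derivative of h, exactly as described above. All names and the model are my own toy assumptions:

```python
import numpy as np

# Hypothetical toy: two market quotes m, one model parameter theta,
# model values h(theta) = (theta, theta**2), residual r = h(theta) - m.
def h(theta):
    return np.array([theta, theta ** 2])

def J(theta):
    # Jacobian D_theta h, shape (2, 1)
    return np.array([[1.0], [2.0 * theta]])

def calibrate(m, theta=1.0):
    # Gauss-Newton minimization of |h(theta) - m|^2;
    # at convergence the first-order condition J^T r = 0 holds.
    for _ in range(100):
        r = h(theta) - m
        step = np.linalg.lstsq(J(theta), -r, rcond=None)[0][0]
        theta += step
        if abs(step) < 1e-14:
            break
    return theta

m = np.array([1.1, 1.3])
theta = calibrate(m)
Jt = J(theta)
r = h(theta) - m
# Differentiating the condition f(theta, m) = J^T (h(theta) - m) = 0 needs
# the Hessian D_theta f = J^T J + sum_i r_i * d2h_i/dtheta^2; dropping the
# r-term would give the least-squares (Gauss-Newton) Hessian approximation.
H = Jt.T @ Jt + np.array([[2.0 * r[1]]])   # d2h/dtheta2 = (0, 2) for this toy
dtheta_dm = np.linalg.solve(H, Jt.T)       # sensitivity of theta to the quotes
```

With the exact Hessian term included, dtheta_dm reproduces the bump-and-recalibrate sensitivities; with the Gauss-Newton approximation it would differ by a term proportional to the residual at the optimum.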

### Adjoint and Automatic Differentiation (AAD) in computational finance

Posted: **September 13th, 2011, 10:11 am**

by **FinancialAlex**

I have posted a small revision to the AAD paper. In particular, there was a typo in the last line of the adjoint code given as an example. I have also referenced your paper. I will take a closer look at how your approach would apply in the minimization case.

### Adjoint and Automatic Differentiation (AAD) in computational finance

Posted: **September 18th, 2011, 9:26 am**

by **Cuchulainn**

I am having difficulties getting a grip on this method, not sure why.

1. The method is formal; what are the underlying assumptions and scope? Is AAD mathematically rigorous?
2. Are there any other, more concrete examples, such as taking the exact BS equation and using it as input?
3. The heat PDE in section 4 is clear (btw, typo in the first formula: the term c is incorrect), but the notation in section 4.1 is not: what is u_dot in eq. 4.1, and how is the differentiation justified? There seems to be a free mix of discrete and continuous variables. Eqs. 4.2 and 4.3 are also unclear to me. Am I missing something vital?

What about case 101: the function u(x; a, b) = exp(-ax/b)? How does AAD work in this case?

### Adjoint and Automatic Differentiation (AAD) in computational finance

Posted: **September 24th, 2011, 10:08 pm**

by **FinancialAlex**

1. Yes, AAD is mathematically rigorous (see the 2008 book by Griewank and Walther). Regarding underlying assumptions, it depends on which approach is used: "discretize-then-differentiate" or "differentiate-then-discretize". If one uses the first approach, "discretize-then-differentiate", then no additional assumptions are needed.

2. I do not yet have other examples of AAD in quant finance, but I will have them soon. You may also take a look at the examples in the papers by Giles and Capriotti. AAD has been employed in other areas, such as computational fluid dynamics and meteorology, and examples from those areas are presented in various papers at www.autodiff.org.

3. u_dot corresponds to the "tangent linear" variable. Essentially it comes from differentiating the original code line by line. In particular, equation 4.1 is obtained by differentiating the middle equation in the PDE discretization. Differentiating the first and third equations implies u_dot(k+1,1) = 0 and u_dot(k+1,N) = 0, but I have not put them in Eq. 4.1 (they are only included in the matrix representation given by Eq. 4.2). For additional clarity it is better to make them part of Eq. 4.1 as well, and the next revision will contain this. Eq. (4.2) is the matrix representation of Eq. (4.1). Eq. (4.3) is the matrix representation of the equation obtained by differentiating the line containing the definition of the cost functional, F = sum_j (u(M,j) - Y(j))^2. Differentiating it gives F_dot = 2 * sum_j (u(M,j) - Y(j)) * u_dot(M,j), which can be written in matrix form as shown in Eq. (4.3).

Here is how it works for u(x; a, b) = exp(-ax/b). Say we want to compute derivatives with respect to a and b. The tangent linear code is

u_tglin = exp(-a*x/b) * (-x/b) * a_tglin + exp(-a*x/b) * (a*x/b^2) * b_tglin

The adjoint code is

a_adj = exp(-a*x/b) * (-x/b) * u_adj
b_adj = exp(-a*x/b) * (a*x/b^2) * u_adj
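
The tangent-linear and adjoint lines for u(x; a, b) = exp(-ax/b) translate directly into runnable code; a minimal Python version (function names are mine), which can be checked against finite differences:

```python
import math

def u_tangent(x, a, b, a_dot, b_dot):
    # Tangent-linear (forward) mode for u = exp(-a*x/b):
    # one pass gives the directional derivative along (a_dot, b_dot).
    u = math.exp(-a * x / b)
    u_dot = u * (-x / b) * a_dot + u * (a * x / b ** 2) * b_dot
    return u, u_dot

def u_adjoint(x, a, b, u_adj):
    # Adjoint (reverse) mode: one pass gives BOTH sensitivities a_adj, b_adj.
    u = math.exp(-a * x / b)
    a_adj = u * (-x / b) * u_adj
    b_adj = u * (a * x / b ** 2) * u_adj
    return a_adj, b_adj
```

Seeding the tangent code with (a_dot, b_dot) = (1, 0) recovers du/da, one direction per forward pass, while a single adjoint pass seeded with u_adj = 1 returns both du/da and du/db at once.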

### Adjoint and Automatic Differentiation (AAD) in computational finance

Posted: **December 9th, 2012, 12:47 pm**

by **mathmarc**

For those interested in Algorithmic Differentiation in Finance, there will be a (free) webinar with that title on Wednesday, December 12th, 2012: Webinar - Algorithmic Differentiation in Finance: Fast Greeks and Beyond (by OpenGamma). Disclaimer: I have a personal interest in the webinar. Link should be fixed now.

### Adjoint and Automatic Differentiation (AAD) in computational finance

Posted: **December 9th, 2012, 6:44 pm**

by **spv205**

Do you want to fix that link ("https:")?

### Adjoint and Automatic Differentiation (AAD) in computational finance

Posted: **December 12th, 2012, 11:23 pm**

by **FaridMoussaoui**

Is the webinar presentation available?

### Adjoint and Automatic Differentiation (AAD) in computational finance

Posted: **December 13th, 2012, 10:28 am**

by **Cuchulainn**

Is there a Boost Library for AD?

### Adjoint and Automatic Differentiation (AAD) in computational finance

Posted: **December 13th, 2012, 4:47 pm**

by **mathmarc**

> Originally posted by **FaridMoussaoui**: Is the webinar presentation available?

Yes it is! The slides are available at

http://docs.opengamma.com/display/DOC/Presentations

The webinar recording itself is available at

http://www.opengamma.com/downloads/Algo ... 2-2012.mov

### Re: Adjoint and Automatic Differentiation (AAD) in computational finance

Posted: **April 23rd, 2018, 8:52 am**

by **Cuchulainn**

F# Automatic Differentiation (and applications); it can be called from C# and C++/CLI via its DLL:

http://diffsharp.github.io/DiffSharp/

### Re: Adjoint and Automatic Differentiation (AAD) in computational finance

Posted: **April 25th, 2018, 9:28 pm**

by **ISayMoo**

What do you people think about this paper?

https://arxiv.org/abs/1802.05098
I'm sceptical whether their math really works.