
 
mj
Topic Author
Posts: 12
Joined: December 20th, 2001, 12:32 pm

CVA

January 25th, 2016, 9:52 pm

We have a new way of computing CVA. I'd be interested in any comments: http://ssrn.com/abstract=2717250
 
cenperro

CVA

January 26th, 2016, 3:01 pm

While the approach is nice (we did something like this for some exotic deals), I think it doesn't get along well with collateralization, does it?
 
mj
Topic Author
Posts: 12
Joined: December 20th, 2001, 12:32 pm

CVA

January 26th, 2016, 6:50 pm

As long as you model the early exercise decision and all cash-flows correctly, I don't see how collateral is a problem.
 
cenperro

CVA

January 26th, 2016, 8:50 pm

My apologies for not using LaTeX; hope you get the point (there is a sketch in the PS below). As collateral is, in general, a non-linear function of the value of the portfolio at t - DeltaT (t being your exposure date), I would say the approach stops being so nice:

1) It is no longer the case that the indicator function is just "the value of the deal > 0"; you need something along the lines of "the value of the deal minus the value of the collateral > 0", so the regression has to be valid over more than just a neighbourhood of zero. Anyway, that is just a detail, and I do not think it is the main issue. I am more concerned about 2) below.

2) After removing the positive part (by using the 'trick' of the indicator function with the regression), we are left with something along the lines of E[IndicatorFunction x ("value of the portfolio" - "value of the collateral")]. The value of the portfolio is easy, just following the approach in the paper, as we continue to simulate the cashflows. The value of the collateral, however... I might be missing something, but it is not that easy any more (think of thresholds and MTAs: how do you get that value?). You cannot follow the same approach, since we are no longer talking about a linear function of the price. You could apply the non-linear collateral function to the "one path" estimator, but I would not feel comfortable doing that (it is like saying f(E[.]) = E[f(.)]). I think this can be 'solved', but the solution is not straightforward.

Am I missing something? Let me say that I find the approach really nice (in fact I love it :-) ) and it is very useful to some extent (for certain deals/portfolios you can save an incredible amount of time and computation), so it would be great if I am wrong and the collateral side can be handled satisfactorily in a straightforward fashion. Let me know if I am missing some detail.

Regards,
cenperro :-)
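PS: roughly, in LaTeX (notation mine, not from the paper: V_t the portfolio value, C_t the collateral, H a threshold):

```latex
% Collateral is set from the lagged portfolio value via a non-linear rule f,
% e.g. a threshold H with the MTA on top:
C_t \;=\; f\!\big(V_{t-\Delta t}\big), \qquad
f(v) \;=\; \max(v - H,\, 0)\ \text{(subject to the MTA)}.
% The collateralized exposure and the quantity in point 2):
\big(V_t - C_t\big)^+ \;=\; \mathbf{1}\{V_t - C_t > 0\}\,\big(V_t - C_t\big),
\qquad
\mathbb{E}\big[\mathbf{1}\{V_t - C_t > 0\}\,\big(V_t - C_t\big)\big].
% The worry about applying f to the one-path estimator: in general
f\big(\mathbb{E}[V]\big) \;\neq\; \mathbb{E}\big[f(V)\big].
```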
Last edited by cenperro on January 26th, 2016, 11:00 pm, edited 1 time in total.
 
mj
Topic Author
Posts: 12
Joined: December 20th, 2001, 12:32 pm

CVA

January 27th, 2016, 11:02 pm

OK, I can see that that case would be trickier. I think there might be some ways around it, however. Can you point to, or send me, a spec of what you would like a model to be able to do?
 
virtualphoton
Posts: 0
Joined: September 4th, 2006, 2:33 pm

CVA

January 28th, 2016, 2:59 pm

Quick question: if regressions of higher order than quadratic are used, would the accuracy gap between eq. (5) and eq. (6) be narrowed? Thanks.
 
cenperro

CVA

January 28th, 2016, 3:30 pm

I do not think you are going to be especially sensitive to the order, and a higher order will not necessarily mean higher accuracy, but let the authors confirm. The standard way to improve the accuracy of (5) is to use a more sophisticated regression: not a global regression but something local, with different regressions in different zones, so that you have a more accurate representation of the conditional expectation across the whole range of your regressors (bundling and the like).

@mj: I will try to put together something sensible.
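Edit: a toy numpy sketch of what I mean by a local regression (bundle paths by regressor value, fit a separate low-order polynomial in each bucket); all names are illustrative, nothing here is from the paper:

```python
import numpy as np

def local_regression(x, y, n_buckets=10, degree=2):
    """Piecewise regression: sort paths into equal-size buckets by the
    regressor value and fit a separate polynomial inside each bucket,
    giving a more local estimate of the conditional expectation E[y | x]."""
    order = np.argsort(x)
    fitted = np.empty_like(y)
    for idx in np.array_split(order, n_buckets):
        coeffs = np.polyfit(x[idx], y[idx], degree)   # local quadratic fit
        fitted[idx] = np.polyval(coeffs, x[idx])
    return fitted

# Toy usage: noisy "continuation value" regressed on the state at t.
rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = np.maximum(x, 0.0) + rng.normal(scale=0.3, size=x.size)
cond_exp = local_regression(x, y)
```

The two-regression ITM/OTM split mentioned below is just the two-bucket case of the same idea.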
 
oaks
Posts: 0
Joined: March 5th, 2008, 9:00 pm

CVA

January 28th, 2016, 9:47 pm

You will most likely get better accuracy for (5) by using higher-order polynomials and by fitting to all paths. However, in doing so you will probably lose some accuracy in determining the condition for positive exposure, while not gaining anything in terms of knowing the direction of the bias. The point is that even with a simple quadratic regression, (6) produces quite good results.

As @cenperro mentioned, local fitting methods like the stochastic bundling method referenced in the paper will do a better job, but they may be tricky and possibly time-consuming to implement for high-dimensional models. I tried two local regressions, one for "ITM" and another for "OTM" paths, and for the examples in the paper (5) did quite well. But again, things will be harder for higher-dimensional models. In any case, better regression methods will also improve the results for (6).
 
mj
Topic Author
Posts: 12
Joined: December 20th, 2001, 12:32 pm

CVA

January 28th, 2016, 9:56 pm

Well, regression methods are convergent in the sense that, as the number of basis functions and paths goes to infinity, the regression converges to the true value. However, just increasing the number of basis functions whilst keeping the number of paths fixed is often unwise, in that it increases instability. As has already been said, using a more sophisticated regression methodology is generally smarter than just increasing the number of basis functions. The point of our paper is to greatly decrease the role of the regression functions.
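A quick toy illustration of the instability point (made-up data, nothing to do with the paper's examples): fit a low-order and a high-order polynomial to the same small set of paths and compare them out of sample.

```python
import numpy as np

rng = np.random.default_rng(1)
x_train = rng.normal(size=100)                        # few paths, held fixed
y_train = np.maximum(x_train, 0.0) + rng.normal(scale=0.3, size=100)
x_test = np.linspace(-3.0, 3.0, 500)                  # out-of-sample states
truth = np.maximum(x_test, 0.0)                       # true conditional mean

for degree in (2, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    rmse = np.sqrt(np.mean((np.polyval(coeffs, x_test) - truth) ** 2))
    print(f"degree {degree:2d}: out-of-sample RMSE {rmse:.3f}")

# The high-degree fit typically oscillates badly in the tails: more basis
# functions with a fixed number of paths buys instability, not accuracy.
```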
 
cenperro

CVA

February 3rd, 2016, 1:53 pm

I've been thinking about this (not much, these days have been busy). I need to do some tests, but at least mathematically I think we can tackle the non-linear collateral (MTA and thresholds) just by adding more indicators.

We lose part of the advantage in that, as I said, you now depend on the regression function across the full range of the regressors, but its usage is still limited to establishing whether a certain condition is met by the future value. So, while we do not have the 'full' advantage, we keep one that is, for me, crucial: perfect netting. If we have two identical deals that net against each other, we will get exactly zero exposure. This is not guaranteed when you use regressions, unless both regression functions come from the same calculation/system/model. Now, you might not have perfect regressions, and it could be the case that they do not net and the indicator is not 0; but, as you continue evolving your cashflows, they will cancel out.

We can always find more accurate methods on a per-deal basis, but the advantage of this technique is, in my view, the possibility of tackling something more generic in an automated way.
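Edit: to be concrete, this is the kind of non-linear collateral rule (threshold plus MTA) I have in mind; toy numpy, parameter names illustrative:

```python
import numpy as np

def collateral(v_lagged, posted, threshold=1.0, mta=0.25):
    """Collateral after a margin call driven by the lagged portfolio
    value: post against the excess over the threshold, but only move
    collateral when the call exceeds the minimum transfer amount."""
    target = np.maximum(v_lagged - threshold, 0.0)
    call = target - posted
    move = np.where(np.abs(call) >= mta, call, 0.0)   # small calls are skipped
    return posted + move

# Exposure at t uses V_t, but the collateral is set from V_{t - dt}:
v_lag = np.array([0.8, 1.5, 3.0])
v_now = np.array([1.0, 1.2, 3.4])
c = collateral(v_lag, posted=np.zeros(3))
exposure = np.where(v_now - c > 0.0, v_now - c, 0.0)
```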
 
mj
Topic Author
Posts: 12
Joined: December 20th, 2001, 12:32 pm

CVA

February 3rd, 2016, 8:24 pm

Yes, that is what we were thinking: use the regression to determine the region you are in, but evaluate the cash-flows directly.
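Schematically, my reading of that (illustrative numpy, not code from the paper): the regression proxy only decides the region, and the exposure itself is averaged from the pathwise discounted cashflows.

```python
import numpy as np

def exposure_estimate(state, pathwise_value, degree=2):
    """Use a cheap regression proxy only to locate the positive-exposure
    region; average the unbiased pathwise (one-path) values over it."""
    proxy = np.polyval(np.polyfit(state, pathwise_value, degree), state)
    in_region = proxy > 0.0                       # regression picks the region...
    return np.mean(in_region * pathwise_value)    # ...cashflows valued directly

# Toy usage with a made-up state/payoff pair:
rng = np.random.default_rng(0)
state = rng.normal(size=50_000)
pathwise = np.maximum(state, 0.0) + rng.normal(scale=0.5, size=state.size)
print(exposure_estimate(state, pathwise))
```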
 
cenperro

CVA

February 3rd, 2016, 10:36 pm

I didn't realize there is still a caveat: the collateral is to be calculated at tau - DeltaT, while the indicator function requires the value at tau, so the indicator is not measurable in the tau - DeltaT filtration. Reviewing some slides (Global Derivatives last year): Andreasen used this technique, and he proposed branching to tackle this last 'issue'... makes sense.
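Edit: for what it is worth, my rough reading of the branching idea as a toy sketch (this is my reconstruction under made-up Gaussian dynamics, not Andreasen's code): from each path's state at tau - DeltaT, spawn sub-paths over the margin period so the indicator at tau can be averaged conditionally on the tau - DeltaT information.

```python
import numpy as np

def branched_indicator(v_lag, coll, n_branches=8, vol=0.2, dt=10.0 / 365.0):
    """Spawn sub-paths from each value at tau - dt to tau and average the
    exposure indicator over the branches, so the collateral (known at
    tau - dt) and the indicator (needing tau) can be combined."""
    rng = np.random.default_rng(2)
    shocks = rng.normal(scale=vol * np.sqrt(dt), size=(v_lag.size, n_branches))
    v_tau = v_lag[:, None] + shocks           # toy dynamics over the margin period
    return np.mean(v_tau - coll[:, None] > 0.0, axis=1)

v_lag = np.array([0.5, 1.0, 2.0])
coll = np.maximum(v_lag - 1.0, 0.0)           # simple threshold collateral
prob_pos = branched_indicator(v_lag, coll)    # P(exposure > 0 | info at tau - dt)
```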