Serving the Quantitative Finance Community

 
EarlGrey
Posts: 4
Joined: April 5th, 2006, 4:55 pm

Re: Local Stochastic Volatility - Lorenzo Bergomi

March 31st, 2018, 1:07 pm

Hi Alan,

Yes, the fit is good; VIX smiles are upward sloping, so they can be captured by a mixture of 2 exponentials. The nice thing is that if you set one of the vols to zero, you get a floor on the VIX future: VIX implied vols go to zero for strikes below the floor - this can come in handy at times. If VIX smiles instead decrease past a certain strike - which I've witnessed at times - then you have to map your mixture of OUs into VIX futures with something other than a mixture of 2 exponentials - but there's no technical difficulty. It's explained in the presentation/book.
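For readers following along, the floor mechanism can be illustrated with a toy version of the mapping (the weights and vols below are made up for illustration, not a calibrated mapping from the book): write the VIX future as a mixture of two exponentials of a normal driver; setting the second vol to zero turns that term into a constant, so the future can never fall below it, and puts struck at or below the floor are worthless.

```python
import numpy as np

def vix_future_samples(beta1, omega1, beta2, omega2, n=100_000, seed=0):
    """Toy map of a normal driver x into a VIX future via a mixture of
    two exponentials (made-up parameters, for illustration only)."""
    x = np.random.default_rng(seed).standard_normal(n)
    # Each exponential term has unit expectation: E[exp(w*x - w^2/2)] = 1,
    # so E[F] = beta1 + beta2 regardless of the vols.
    return (beta1 * np.exp(omega1 * x - 0.5 * omega1**2)
            + beta2 * np.exp(omega2 * x - 0.5 * omega2**2))

# Second vol set to zero: beta2 becomes a hard floor on the future.
F = vix_future_samples(beta1=0.10, omega1=0.8, beta2=0.12, omega2=0.0)
put_below_floor = np.maximum(0.11 - F, 0.0).mean()  # strike below the floor
```

Since F = 0.12 + (positive lognormal term), every put struck below 0.12 has zero value, which is exactly why implied vols collapse to zero below the floor.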

The most important thing for VIX exotics, however, is to be able to control the correlations between VIX futures + the distribution of their vol throughout their lifetime. There's a table of historical levels of VIX/VIX & VIX/SPX correlations in slide 16 of the presentation mentioned in my previous post.
In slide 15 there's a chart that shows how you can control the distribution of their vol, while still keeping the same calibration to VIX smiles.

Regards,
      Lorenzo


PS 1: Incidentally, take a look at the chart in slide 6 of the vol-of-vol in the 2-factor model, as compared to a power-law benchmark. If I showed you both curves without their labels you would have a hard time telling which is which. You would need to look at the behaviour at very short maturities to see which one diverges.

For practical trading purposes, both are equivalent. Yet:
- one is generated by a Markovian model (the 2-factor model) driven by 2 easily simulable OU processes, that also gives you a handle on the correlation between implied vols of different maturities. You also get real-time vanilla smiles.
- the other is generated by a non-Markovian model (the rough vol model) where just getting the vanilla smile is an ordeal, and moreover gets you zero leverage on vol/vol correlations.
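The comparison can be sketched numerically. The formula below is a crude simplification of my own (flat initial forward-variance curve, lognormal vol of the T-maturity VS variance equal to ω times the quadratic average of the time-averaged factor loadings, then halved to convert vol-of-variance into vol-of-vol), using illustrative parameter levels of the order quoted elsewhere in this thread - not the book's exact formula:

```python
import numpy as np

def two_factor_vol_of_vol(T, omega, theta, k1, k2, rho_xy):
    """Approximate lognormal vol of the T-maturity VS vol in the 2-factor
    model, assuming a flat initial forward-variance curve (a simplification
    for illustration, not the book's exact expression)."""
    a = (1 - theta) * (1 - np.exp(-k1 * T)) / (k1 * T)   # factor-1 average loading
    b = theta * (1 - np.exp(-k2 * T)) / (k2 * T)         # factor-2 average loading
    vol_of_variance = omega * np.sqrt(a**2 + 2 * rho_xy * a * b + b**2)
    return 0.5 * vol_of_variance   # vol of variance ~ 2 x vol of vol

T = np.array([0.25, 1.0, 5.0])
nu_model = two_factor_vol_of_vol(T, omega=2.56, theta=0.178,
                                 k1=8.64, k2=0.68, rho_xy=-0.35)
nu_power = nu_model[0] * (T / 0.25) ** -0.4   # power-law benchmark pinned at 3m
```

Away from very short maturities the two decaying curves sit close together, which is the point of the slide: for practical purposes the Markovian model and the power-law benchmark are equivalent.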

PS 2: Thank you for the tips for posting pictures. It's just too time-consuming.
 
Quantuplet
Topic Author
Posts: 8
Joined: July 12th, 2014, 9:14 am

Re: Local Stochastic Volatility - Lorenzo Bergomi

April 3rd, 2018, 9:20 am

Hi Lorenzo,

Thank you for joining the thread and sharing your experience.
But most of all, thank you for this book, which I find simply amazing. 

Should you find some time to answer these follow-up questions, it would really help.
> Consider first a pure stoch vol model, calibrated either to the TS of VS vols, or ATMF vols, or vols of whichever moneyness you like.
> Let's pick a short maturity (say 3m), a long maturity (say 5y) and say that our target levels are, for example:
> - vol of the 3m ATMF vol = 100% + decay of the vol of ATMF vols with exponent 0.6
> - correlation between 3m and 5Y ATMF vols = 50%
> - correlation between the spot and the 3m ATMF vol = -80%
> - correlation between the spot and the 5Y ATMF vol = -40%
>
> Here at SG traders use a code of mine that generates the corresponding model parameters in real time. For the values I've just mentioned, with the Eurostoxx50 term structure of VS vols of 20 Jan 17, I get sigma = 256%, theta = 17.8%, k1 = 8.64, k2 = 0.68, rho_XY = -35%, rho_SX = -70.3%, rho_SY = -11%. Sorry, I would have preferred to post a snapshot of the spreadsheet, but it's not possible to post images here.
> The beauty of the 2-factor model is that it is not overparameterized with respect to our specs (2 implied vols of different maturities, their vols, correlations and correlations with the spot) and parameter generation is instantaneous.
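For reference, these parameters slot into the usual exponential-OU form of the 2-factor forward variance model (sketched here from the standard presentation of the model, with sigma above playing the role of $\omega$ below):

```latex
\xi_t^T = \xi_0^T \exp\!\Big(\omega\,\alpha_\theta
    \big[(1-\theta)\,e^{-k_1(T-t)}X_t + \theta\,e^{-k_2(T-t)}Y_t\big]
    \;-\; \text{(lognormal drift correction)}\Big),
\qquad
\alpha_\theta = \big[(1-\theta)^2 + \theta^2
    + 2\rho_{XY}\,\theta(1-\theta)\big]^{-1/2},
```

with $dX_t = -k_1 X_t\,dt + dW^X_t$, $dY_t = -k_2 Y_t\,dt + dW^Y_t$, $\langle dW^X dW^Y\rangle = \rho_{XY}\,dt$, and $\rho_{SX}$, $\rho_{SY}$ the correlations of the spot Brownian with $W^X$, $W^Y$.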
I guess that when considering N > 2 implied vols of different maturities and their associated (co)variances, the model ends up being overparameterised - i.e. the optimisation problem to be solved becomes overconstrained. It would also be my guess that, since the 2-factor forward variance model is a market model for a 1-dimensional set of instruments, it will have a hard time fitting the whole vanilla market (a 2-dimensional set of instruments), such that if I introduce target ATMF skew and curvature levels I will experience the same kind of overdetermination - which pushes us towards LSV as well.
> Now let's move to the same 2-factor model in its LSV version. The constraint of calibration to the market smile places restrictions on the levels you're able to get for your physical levels, but you can still code the real-time optimization that generates them using the approximate formulae that I give in chapter XII of my book. I'm presently coding this up. What do you do once you have model parameters?
These are indeed the formulae I've used. Yet here too I would like to work with a full term structure of (i) (lognormal) vol of ATMF vol, (ii) spot/ATMF vol correlations, (iii) skew-stickiness ratios - so as to "correctly" embed the sizeable volga and vanna risks in my exotics prices - along with (iv) the whole vanilla volatility surface, so that the prices of my hedge instruments are directly "built into" the model as well.

I extract the target levels for these physical quantities from historical time series, using the usual statistical estimators plus the one you describe in your book for the skew-stickiness ratio. However, this seems to lead to an overconstrained optimisation problem: it is straightforward to show that once the vol of ATMF vols, their correlations with the spot price and the prevailing ATMF skew TS are fixed, the skew-stickiness ratio TS is unequivocally determined - which may partly explain why.
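A back-of-the-envelope way to see the redundancy (my own sketch, using the book's definition of the SSR, and writing $\hat\sigma_T$ for the ATMF vol of maturity $T$, $\nu_T$ for its lognormal vol, $\rho_T$ for its correlation with the spot and $\mathcal{S}_T$ for the ATMF skew):

```latex
R_T \;=\; \frac{1}{\mathcal{S}_T}\,
  \frac{\langle d\hat\sigma_T \, d\ln S\rangle}{\langle (d\ln S)^2 \rangle}
\;=\; \frac{1}{\mathcal{S}_T}\,
  \frac{\hat\sigma_T\,\nu_T\,\rho_T\,\sigma\,dt}{\sigma^2\,dt}
\;\approx\; \frac{\nu_T\,\rho_T}{\mathcal{S}_T}\,,
```

taking the instantaneous spot vol $\sigma \approx \hat\sigma_T$ in the last step: once the vol-of-vol, spot/vol correlation and skew term structures are fixed, the SSR term structure follows.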

Maybe I was too optimistic and should drop some of these constraints and/or replace some with others, to arrive at a better-determined optimisation problem and hence a more robust calibration step. Alternatively, maybe I should work with 2 implied vols, as you suggest, and trust the model to generate the dynamics and consistently "fill the gaps" in between?
> In the SV model, levels of vols/correlations of physical quantities have little dependence on the TS of vols that the model takes as input. Thus, you're set.

> In the LSV model, on the other hand, with fixed model parameters, as the market smile changes your break-even levels for physical quantities change. So, keeping model parameters fixed, you have to feed the LSV model various smiles from past history and check that model-generated break-even levels hover around your desired target levels. Otherwise, generate model parameters using a different market smile.
When you say a different market smile, do you mean some kind of synthetic smile which does not evolve through time? In that case, wouldn't the model no longer fit the vanilla market? Sticking with fixed model parameters so as to obtain a well-defined delta-vega hedging P&L over the instrument's life, what is your take on the frequency at which the SV parameters need to be recalibrated (the local volatility component being recalibrated every day)?

I guess what I am naively looking for is your recommendation on how to perform the calibration of a 2-factor mixed Bergomi model in practice.

Thanks a lot for your time and help and sorry for the profuse questions,

Kind regards
 
EarlGrey

Re: Local Stochastic Volatility - Lorenzo Bergomi

April 4th, 2018, 8:35 pm

Hi Quantuplet,

Thank you for your kind comments on my book - writing it entailed a huge amount of work & sleep deprivation, so any token of appreciation is gratefully acknowledged.

Now regarding your questions.
First by overparameterized, you probably mean underparameterized, i.e. there are not enough parameters for the model to be able to comply with all of your requirements.

Second: it seems to me very ambitious to be as specific as you'd like to be. Let's just consider the choice of TS of vols of vols. If you look at historical data, you'll see that their dependence on the maturity of the vol is roughly a power law with an exponent around 0.4/0.6. Then you need to set one point on this curve. Let's pick the 3m maturity - typically its realized vol is around 60%.

These two numbers - 0.4 and 60% - are more than enough to define the breakeven levels you'd be comfortable with.
Maybe one day we'll be pricing vol of vol down to a fraction of a %. For now, +/- 10% relative difference with respect to our target level is more than adequate.
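In other words, the two numbers pin down the whole curve - a minimal sketch (the 0.4 exponent and the 60% 3m anchor are the levels quoted above):

```python
def vol_of_atmf_vol(T, anchor_vol=0.60, anchor_T=0.25, alpha=0.4):
    """Power-law term structure of the lognormal vol of ATMF vol,
    anchored at 60% for the 3-month maturity (levels from the thread)."""
    return anchor_vol * (T / anchor_T) ** -alpha

band = 0.10  # +/- 10% relative tolerance around the target, as above
```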

You have to realize that most houses (a) do not set for themselves target break-even levels for vol of vol / spot-vol correls / vol-vol correls, and (b) do not even have an idea of the break-even levels that their models generate - starting with LV.

So let's start modestly. You could then ask: why not use just one factor? One factor is really too rustic - in particular you have 100% correlation between all vols, which is kind of extreme. With two factors you get a very significant improvement in modeling capabilities and you're still only simulating two OU processes: you can easily simulate the vol process exactly (unlike, say, in Heston, where you're sweating just to simulate the vol process - and it's barely a 1-factor model!)
Same thing for spot/vol and vol/vol correlations - as long as you're able to choose levels for 2 maturities and the interpolation is smooth, that's more than enough.
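The exact simulation alluded to is one line per factor: the OU transition density is Gaussian, so there is no discretization error at any step size. A generic sketch (not code from the book; the mean reversions and correlation are the illustrative levels quoted earlier in the thread):

```python
import numpy as np

def ou_exact_step(x, k, dt, eps):
    """Exact one-step transition of dX = -k X dt + dW:
    X(t+dt) | X(t) is Gaussian with mean X(t) * e^{-k dt}
    and variance (1 - e^{-2 k dt}) / (2 k)."""
    decay = np.exp(-k * dt)
    std = np.sqrt((1.0 - decay**2) / (2.0 * k))
    return x * decay + std * eps

def correlated_normals(rho, n, seed=0):
    """Pairs of standard normals with correlation rho (for the X/Y drivers)."""
    rng = np.random.default_rng(seed)
    e1 = rng.standard_normal(n)
    e2 = rho * e1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    return e1, e2

# Two drivers with the time scales quoted earlier in the thread
e1, e2 = correlated_normals(rho=-0.35, n=10)
x = y = 0.0
for a, b in zip(e1, e2):
    x = ou_exact_step(x, k=8.64, dt=1 / 252, eps=a)
    y = ou_exact_step(y, k=0.68, dt=1 / 252, eps=b)
```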

Now, regarding your last question. Imagine I've chosen the parameters of my LSV model such that, when calibrated to today's market smile, the model generates OK break-even levels. I want to make sure these levels will not become, say, too low, when the market smile steepens/flattens; so I check this by sticking in different market smiles - either actual past smiles or "artificial" smiles that express my view of what's likely to happen in the future.

There may be - and there will be - times when the market smile really changes regime (one example: the skew of the Nikkei has almost vanished for long maturities (> 5y), due to vega-hedging of autocalls). Then you have no choice but to change your LSV model parameters, take the MtM hit, and then risk-manage your position with reasonable break-even levels.

Hope this helps, and I'm sorry for the modesty of the goals - no haute couture here, just damage control, but with some tools.
Lorenzo
 
Quantuplet

Re: Local Stochastic Volatility - Lorenzo Bergomi

April 5th, 2018, 8:32 am

Hi Lorenzo,

I can only imagine the effort it took. An incredible number of "practical knowledge nuggets" are scattered throughout. Your take on these topics is really insightful and profound: I've learned a lot already, and each time I delve into it again I find something new. It's by far the best book I own, so again, thank you. I even have N=2 copies: one at work, one at home. N>2 would have made me some kind of creepy fan, wouldn't it?

Underparametrised is absolutely what I meant, sorry for that. I get your point: I guess the excitement of being able to specify these levels took me a little too far.
> You have to realize that most houses (a) do not set for themselves target break-even levels for vol of vol / spot-vol correls / vol-vol correls, and (b) do not even have an idea of the break-even levels that their models generate - starting with LV.
I see. One of the biggest realisations I had when reading your book for the first time concerned the calibration problem you allude to on several occasions (e.g. in the epilogue).

In your opinion, would it be right to say:
  • By recalibrating a naked SV model daily to match the spot prices of some hedge instruments (instead of implicitly embedding them through the LV layer of a mixed model), model parameters effectively turn into state variables. This introduces additional terms, with no controllable break-even levels, into the P&L equation of the hedged portfolio.
  • This is a bit like "using an SV model in a LV fashion". You show that, in order to be a genuine market model, the LV model needs to be recalibrated daily. But doing so with an SV model defeats the purpose. Especially since at most desks naked SV models are calibrated to vanilla prices, so you actually get the worst possible scenario: (i) you know that, due to the limited number of parameters, you won't be able to recover your vanillas; (ii) you know that daily recalibration will make the management of books tricky (uncontrollable break-even levels).
Thank you very much for these clarifications, it really helps. And please don't apologise - this is a huge step from where I was before reading your book. I think I can now see the "damage". Now time to modestly control it.

In that regard, you then propose (as described in your previous posts):
  • To work with 2 ATMF volatilities (1 short term, 1 long term) along with their (lognormal) volatilities, correlations, and correlation wrt spot estimated from historical time series. 
  • We then calibrate the parameters of the SV layer using the expressions derived in Chapter 12.
  • As time passes, keeping these parameters fixed, we check the break-even levels implied by the model as the market evolves.
  • If these drift too far from our target levels, we recalibrate, take the MtM hit, and continue to manage the product with "comfortable" break-even levels.
Kind regards
 
EarlGrey

Re: Local Stochastic Volatility - Lorenzo Bergomi

April 12th, 2018, 2:21 am

Hi Quantuplet,
Regarding your first bullet point:
Yes. If you calibrate your naked SV model to, say, the SPX smile every day, and by calibration you mean changing the values not just of the state variables - the forward variances - but also of the model parameters, then yes, you're going to generate unexplained P&L. If you don't want this to happen, you should calibrate only the state variables and leave model parameters constant.
If you use a forward variance model, this means you can at most calibrate a term structure of VS vols or, say, a term structure of ATM vols.
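Concretely, calibrating the state variables to a VS term structure just means differentiating total variance, xi_0^T = d/dT [T * sigma_VS(T)^2]. A finite-difference sketch (the piecewise-constant discretization is mine):

```python
import numpy as np

def forward_variances(maturities, vs_vols):
    """Initial forward-variance curve xi_0^T from discrete VS vols,
    via finite differences of total variance w(T) = T * sigma_VS(T)^2.
    Returns a piecewise-constant xi on each interval (0 or T_i, T_{i+1}]."""
    T = np.asarray(maturities, dtype=float)
    w = T * np.asarray(vs_vols, dtype=float) ** 2
    return np.diff(w, prepend=0.0) / np.diff(T, prepend=0.0)

# Flat 20% VS vols => flat forward variance of 0.04
xi = forward_variances([0.25, 0.5, 1.0, 2.0], [0.20] * 4)
```

With a flat 20% VS curve every interval returns the same forward variance, 0.04, as it should.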

And regarding your last bullet points: yes, that's what I meant.
Thank you for this interesting exchange.

Lorenzo
 
 
Quantuplet

Re: Local Stochastic Volatility - Lorenzo Bergomi

April 12th, 2018, 1:16 pm

Hi Lorenzo.

Thank you for kindly sharing your knowledge.

Best of luck in your future endeavours.