SERVING THE QUANTITATIVE FINANCE COMMUNITY

anky09
Topic Author
Posts: 3
Joined: July 5th, 2017, 12:33 pm

### VaR horizon

Hello Risk gurus; I am contemplating the ideal risk settings to compute my monthly historical VaR.

So the two choices I have are: i) use 20-day returns (formed from 1-day returns) and compute a 20-day VaR from them, or ii) compute a 1-day VaR and scale it by the square root of 20.

I would like to know the pros and cons of both options to help my decision making. I use a 99% confidence level, a 2-year look-back, and no lambda (i.e. no exponential weighting).
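A minimal sketch of the two options under comparison, assuming a series of 1-day log returns (the simulated data and variable names are illustrative, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
daily = rng.normal(0.0, 0.01, 504)  # roughly 2 years of 1-day log returns

# Option (ii): 1-day historical VaR at 99%, scaled by sqrt(20)
var_1d = -np.quantile(daily, 0.01)
var_20d_scaled = var_1d * np.sqrt(20)

# Option (i): 20-day returns built from the same daily series via
# overlapping rolling windows (log returns of consecutive days simply sum)
rolling_20d = np.convolve(daily, np.ones(20), mode="valid")  # length 504 - 19
var_20d_direct = -np.quantile(rolling_20d, 0.01)

print(var_20d_scaled, var_20d_direct)
```

Under i.i.d. normal returns the two numbers should be close; with fat tails or autocorrelation in the data they will diverge, which is exactly the trade-off being asked about.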

staassis
Posts: 30
Joined: April 12th, 2014, 5:10 pm

### Re: VaR horizon

None of the above. You should use 20-day returns calculated off a rolling 20-day window. The resulting data will be correlated, because neighboring windows overlap by 19 days. This is not a problem, though: even for correlated observations, barrier frequencies are unbiased estimates of the true barrier probabilities.

Note that the number of 20-day returns produced will be the number of 1-day returns minus 19, so almost the same sample size. Our discussion is based on the assumption that this sample size will be sufficient. If not, you will have to use parametric methods to extrapolate data into the tails. For example, to estimate the tails of a single variable you can use extreme value theory. To quickly postulate how various pieces move together, you can use copulas with non-zero tail dependence. I am not saying that you should model the various constituents of your portfolio separately and throw them into a copula. I am saying that you may notice that your portfolio is sensitive to, say, oil, and that there has not been enough variation in oil over your trading history. Then you model oil separately and enforce some dependency structure between oil and your portfolio.
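The extreme-value-theory idea above can be sketched with a peaks-over-threshold fit; this assumes scipy is available, and the simulated losses and the 95th-percentile threshold choice are purely illustrative:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
losses = -rng.standard_t(df=4, size=2000) * 0.01  # heavy-tailed daily losses

u = np.quantile(losses, 0.95)        # threshold: 95th percentile of losses
excesses = losses[losses > u] - u    # exceedances over the threshold

# Fit a generalized Pareto distribution to the tail (location fixed at 0)
shape, loc, scale = genpareto.fit(excesses, floc=0)

# 99% VaR via the standard POT formula:
# VaR_p = u + (scale/shape) * ((n/n_u * (1-p))**(-shape) - 1)
n, n_u, p = len(losses), len(excesses), 0.99
var_evt = u + (scale / shape) * ((n / n_u * (1 - p)) ** (-shape) - 1)
empirical = np.quantile(losses, p)
print(var_evt, empirical)
```

The point of the parametric fit is that it lets you read off quantiles beyond the range where the empirical distribution runs out of observations.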

Samsaveel
Posts: 436
Joined: April 20th, 2008, 5:47 am

### Re: VaR horizon

One method tests the other. Compute a VaR based on 20-day overlapping time periods. In addition, compute the VaR using the daily observations with the square root of time as an approximation; this lets you test the mathematical integrity of the model and reinforces the validation of the square-root rule.

As regards time and efficiency: if your PnL vectors have significant non-linearity, then a full-revaluation VaR is required for historical simulation, based on the 1-day and 20-day perturbations. This will have to be done at the risk-factor level (the model choice is risk-factor dependent, either additive or multiplicative). On the other hand, if your PnL is linear, then approximate your PnL with a first-order Taylor expansion and compute the VaR based on that.

In relation to data (very important): if this is an exercise to gain insight into the mechanics of VaR, then the quantitative and qualitative criteria for how reliable the data is are what matter. However, if you are doing this exercise in the real world, then your data must be robust and reliable; otherwise your VaR can potentially have many onerous interpretations. If your data is not sufficiently liquid, then you need a separate framework to assess the materiality of the underlying risk factor and quantify its impact using indirect risk metrics such as sensitivities, scenarios and stress testing. You then need to think of a long-term solution, for example a separate add-on on top of VaR, or an add-on to the market risk capital formula (this is part of your RNIV & NMRF framework), etc.
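The full-revaluation versus first-order contrast above can be illustrated with a toy nonlinear position; the Black-Scholes pricer, the parameters, and the simulated return history are all illustrative assumptions, not anything from the thread:

```python
import numpy as np
from math import log, sqrt, exp
from statistics import NormalDist

def bs_call(S, K=100.0, T=0.25, r=0.0, vol=0.2):
    """Black-Scholes price of a European call (toy pricer for illustration)."""
    d1 = (log(S / K) + (r + 0.5 * vol**2) * T) / (vol * sqrt(T))
    d2 = d1 - vol * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2)

rng = np.random.default_rng(2)
S0 = 100.0
shocks = rng.normal(0.0, 0.01, 500)  # historical 1-day returns of the underlying

# Full revaluation: reprice the option under each perturbed spot
pnl_full = np.array([bs_call(S0 * (1 + s)) - bs_call(S0) for s in shocks])

# First-order (delta) approximation, with delta from a finite difference
h = 1e-4
delta = (bs_call(S0 + h) - bs_call(S0 - h)) / (2 * h)
pnl_delta = delta * S0 * shocks

var_full = -np.quantile(pnl_full, 0.01)
var_delta = -np.quantile(pnl_delta, 0.01)
print(var_full, var_delta)
```

For a long call the positive gamma cushions downside moves, so the delta-only VaR overstates the loss relative to full revaluation; for linear PnL the two coincide, which is why the linear approximation is acceptable there.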

Wilmott.com has been "Serving the Quantitative Finance Community" since 2001.
