March 11th, 2005, 3:28 pm
I tried this, and it worked better than I expected. I generated a multivariate Normal random set of data for 10 stocks (2% volatility of return, 0.3 pairwise correlations) for 500 days. I selected 10 random portfolios (independent uniformly distributed weight for each stock between -0.5 and +0.5) and computed the 99% 500-day historical VaR for each portfolio.

Then I deleted 200 days of data for one stock and 300 for another, filled in the data using the PC method you described, and recomputed the 99% 500-day historical VaR. The top row below shows the recomputed numbers for each of the ten portfolios; the bottom row shows the original estimates.

Recomputed: -3.87% -2.05% -2.06% -5.35% -7.42% -3.01% -2.33% -3.55% -5.65% -4.07%
Original:   -3.23% -2.41% -2.18% -6.17% -7.22% -2.92% -2.33% -2.66% -5.89% -3.76%

Losing 10% of your data does make a significant difference to the estimate in some cases: in the eighth portfolio, for example, VaR increases from -2.66% to -3.55%. But the method clearly distinguishes between the high-risk and low-risk portfolios, it does not seem to be biased, and the error in VaR is not large compared to all the other uncertainties in this kind of calculation. In practice, with non-Normal data, complex correlation structure, and highly offset portfolios, the error from filling in data this way is probably negligible.

I still don't like it on general principles; I prefer simpler methods with fewer parameters. But it seems to work okay in your example.
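For anyone who wants to reproduce the experiment, here is a sketch in Python/numpy. The setup (10 stocks, 500 days, 2% vol, 0.3 correlation, uniform weights, 99% historical VaR, 200 and 300 deleted days) follows the description above. The exact PC fill-in recipe isn't restated here, so the gap-filling step below is one plausible version: regress each gappy stock on the leading principal components of the complete stocks and predict the missing days. The seed and the choice of 3 components are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Setup from the post: 10 stocks, 500 days, 2% daily vol, 0.3 pairwise correlation.
n_stocks, n_days, vol, rho = 10, 500, 0.02, 0.3
cov = vol**2 * (rho * np.ones((n_stocks, n_stocks)) + (1 - rho) * np.eye(n_stocks))
returns = rng.multivariate_normal(np.zeros(n_stocks), cov, size=n_days)

# 10 random portfolios, independent uniform weights on [-0.5, +0.5].
weights = rng.uniform(-0.5, 0.5, size=(10, n_stocks))

def hist_var_99(rets, w):
    """99% historical VaR: 1st percentile of the portfolio return series."""
    return np.percentile(rets @ w, 1)

full_var = np.array([hist_var_99(returns, w) for w in weights])

# Delete 200 days for one stock and 300 days for another (10% of all data).
data = returns.copy()
data[rng.choice(n_days, 200, replace=False), 0] = np.nan
data[rng.choice(n_days, 300, replace=False), 1] = np.nan

# Hypothetical PC fill-in: take PCs of the stocks with no gaps (estimated on
# complete days), regress each gappy stock on the first 3 PCs, fill the gaps
# with the regression prediction.
complete = ~np.isnan(data).any(axis=1)
others = data[:, 2:]                       # stocks with no missing data
centered = others - others.mean(axis=0)
_, _, vt = np.linalg.svd(centered[complete], full_matrices=False)
pcs = centered @ vt[:3].T                  # scores on first 3 PCs, all days

for j in (0, 1):
    obs = ~np.isnan(data[:, j])
    beta, *_ = np.linalg.lstsq(pcs[obs], data[obs, j], rcond=None)
    data[~obs, j] = pcs[~obs] @ beta       # fill gaps with the fitted values

filled_var = np.array([hist_var_99(data, w) for w in weights])
print(np.round(full_var, 4))
print(np.round(filled_var, 4))
```

With a different seed or a different fill-in rule the individual numbers will differ, but the qualitative picture (rankings preserved, errors modest) should match the table above.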
Last edited by Aaron on March 10th, 2005, 11:00 pm, edited 1 time in total.