August 27th, 2013, 3:13 pm
OK, here is the issue I alluded to, and I believe it's on-topic for this thread. I finished reading Kealhofer's "Quantifying Credit Risk II: Debt Valuation", in which he asserts (in 2003):

Quote: "The KMV version of the Merton model, which has been extended over the years, has become a de facto standard for default-risk measurement in the world of credit risk."

As I understand it, the model produces, among other things, an estimate of the real-world default probability (real-world PD) over various time horizons. However, it does so without directly using either bond spreads or credit default swap spreads. It is apparently based mostly upon the equity behavior of the bond issuer.

Now this seems to me to be an estimate that would likely be dominated by another estimator that did incorporate such market information (bond spreads and CDS spreads). I am just guessing about that, based on my knowledge of a similar issue with volatility estimators. For example, GARCH-style estimators for SPX volatility will be dominated by estimators that include both GARCH predictions and the current VIX. In general, it's always suboptimal to ignore relevant market data when making predictions, real-world or otherwise. The point is nicely explained in this blog post, in which the author coins the term "R probabilities" for such suboptimal estimates.

My questions/confusions are:

1. Why do/did clients pay for such suboptimal estimates, estimates which deliberately eschew market information, to use the language of the blogger?
2. How can such suboptimal estimates become the "de facto standard"?
3. Where are things today re the best real-world PD estimators?

Again -- apologies if this is nonsense since it's not my area -- I am just extrapolating from the volatility case, where I think I understand the issue of failing to include relevant market data. Thoughts?
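P.S. For anyone unfamiliar with the mechanics, here is a minimal sketch of how an equity-driven PD arises, using the plain textbook Merton model rather than KMV's proprietary EDF mapping (the normal-CDF step at the end, the function name, and the placeholder inputs are my own illustrative assumptions, not Kealhofer's actual method): back out the unobserved asset value and asset volatility from the equity value and equity volatility, then map a distance-to-default under the physical drift to a PD.

Code:
# Textbook Merton sketch of an equity-implied real-world PD (illustrative only).
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import norm

def merton_pd(E, sigma_E, D, r, mu, T=1.0):
    # E: market value of equity, sigma_E: equity volatility,
    # D: face value of debt (default point), r: risk-free rate,
    # mu: real-world (physical) drift of assets, T: horizon in years
    def equations(x):
        V, sigma_V = x
        d1 = (np.log(V / D) + (r + 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
        d2 = d1 - sigma_V * np.sqrt(T)
        # Equity is a call on the assets; the second equation is the vol relation
        eq1 = V * norm.cdf(d1) - D * np.exp(-r * T) * norm.cdf(d2) - E
        eq2 = V * sigma_V * norm.cdf(d1) - sigma_E * E
        return [eq1, eq2]

    V, sigma_V = fsolve(equations, x0=[E + D, sigma_E * E / (E + D)])

    # Distance to default under the physical measure (note mu, not r).
    # Textbook mapping is PD = N(-DD); KMV instead maps DD to an empirical EDF.
    dd = (np.log(V / D) + (mu - 0.5 * sigma_V**2) * T) / (sigma_V * np.sqrt(T))
    return norm.cdf(-dd)

# Illustrative numbers only: $4bn equity, 60% equity vol, $3bn default point
print(merton_pd(E=4.0, sigma_E=0.60, D=3.0, r=0.03, mu=0.08, T=1.0))

Note that no bond or CDS spread enters anywhere in that calculation, which is exactly what puzzles me.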
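And here is the volatility analogy I have in mind, as a rough sketch: an encompassing regression of subsequently realized vol on the GARCH forecast and the contemporaneous VIX (the input arrays and the function name are placeholders I made up; you would supply aligned series at the same frequency). If the combined forecast did not beat GARCH alone in this kind of comparison, I'd be surprised.

Code:
# Sketch of a combination (encompassing) forecast: realized ~ a + b*garch + c*vix
import numpy as np

def combined_vol_forecast(garch_fcst, vix, realized):
    # Stack a constant, the GARCH forecast, and VIX as regressors
    X = np.column_stack([np.ones_like(garch_fcst), garch_fcst, vix])
    coefs, *_ = np.linalg.lstsq(X, realized, rcond=None)
    fitted = X @ coefs
    # Compare in-sample errors of GARCH-only vs the combined forecast
    mse_garch_only = np.mean((realized - garch_fcst) ** 2)
    mse_combined = np.mean((realized - fitted) ** 2)
    return coefs, mse_garch_only, mse_combined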