April 13th, 2014, 5:12 pm
The precision of financial risk calculations may run to several decimal places, but what about the accuracy? Can we really say that the discrepancy between, for example, implied volatility and realized volatility lies somewhere deep in the Nth decimal place? How accurate are the estimated default rates or recovery rates? Most people, and especially managers, investors, and regulators, want certainty. And all these mathematical models and advanced algorithms seem designed to appear to satisfy the itch for another decimal place even when they fail in practice.

There are other ways to look at risk. In the Enterprise Risk Management world mentioned by quartz, some methodologies simply skip the estimation of likelihood and impact because they are virtually incalculable (at best, they might use a subjective ranking of a few of the "major" risks). The worst case is ALWAYS a total loss, and the range of likelihood estimates can vary by orders of magnitude (i.e., the number of decimal places of accuracy is effectively negative). For corporate risks, there are simply too many potential causes (obscure dependencies), too many power-law tails (earthquakes), too many novel cascades of failures (the Iceland volcano), and too many ongoing changes in the risk landscape to accurately manage it all. Surveys of risk perceptions and maps of risk intensities show how poor the estimates are: a depressingly large percentage of crisis events fail to follow the locations and timings predicted by expected-likelihood calculations.

Instead of nailing down another decimal place on the expected value of risk, some ERM approaches emphasize reusable preparations for responding to events. That is, they emphasize business continuity plans, some amount of redundancy, and some amount of training/drills so that people know what to do when fecal-turbine collisions occur.
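To put a number on the precision-vs-accuracy point: even under the friendliest possible assumption (i.i.d. normal daily returns, which real markets don't honor), the standard error of a realized-volatility estimate from a quarter of data already swamps the second decimal place. A minimal sketch; the true volatility and sample size are illustrative assumptions, not from any real instrument:

```python
import numpy as np

rng = np.random.default_rng(42)

TRUE_ANNUAL_VOL = 0.20   # assumed "true" volatility of the process
TRADING_DAYS = 252
N_DAYS = 63              # one quarter of daily observations

# Simulate daily log returns under the assumed true volatility.
daily_vol = TRUE_ANNUAL_VOL / np.sqrt(TRADING_DAYS)
returns = rng.normal(0.0, daily_vol, size=N_DAYS)

# Point estimate of annualized realized volatility.
realized_vol = returns.std(ddof=1) * np.sqrt(TRADING_DAYS)

# For i.i.d. normal returns the sample standard deviation has
# standard error roughly sigma / sqrt(2 * (n - 1)).
std_err = realized_vol / np.sqrt(2 * (N_DAYS - 1))

print(f"realized vol estimate:  {realized_vol:.4f}")
print(f"approx. standard error: {std_err:.4f}")
# Typical output: an estimate near 0.20 with a standard error near 0.018,
# i.e. the second decimal place is already noise -- before model error,
# non-normality, or regime changes are even considered.
```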
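And on the "orders of magnitude" claim for power-law tails: when losses follow a Pareto law with a tail index near 1, the variance is infinite and the sample mean barely converges, so two equally long loss histories can yield expected-loss estimates that differ by an order of magnitude. A minimal sketch; the tail index and sample sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

ALPHA = 1.1        # tail index; variance is infinite for alpha <= 2
N_EVENTS = 1_000   # loss events observed per "history"
N_HISTORIES = 10   # independent re-runs of the same estimation

sample_means = []
for _ in range(N_HISTORIES):
    # numpy's pareto() draws from the Lomax form; add 1 for classic Pareto
    # with minimum loss 1.
    losses = rng.pareto(ALPHA, size=N_EVENTS) + 1.0
    sample_means.append(losses.mean())

print("true mean:", ALPHA / (ALPHA - 1.0))   # = 11.0 for alpha = 1.1
print("estimated means across histories:")
for m in sample_means:
    print(f"  {m:.2f}")
# The estimates typically scatter widely around 11 -- a single extreme
# event can double one history's estimate -- so quoting expected loss to
# several decimal places is precision without accuracy.
```

This is the quantitative reason some ERM methodologies skip likelihood/impact estimation altogether and spend the budget on continuity plans and drills instead.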