SERVING THE QUANTITATIVE FINANCE COMMUNITY

 
User avatar
frenchX
Topic Author
Posts: 5911
Joined: March 29th, 2010, 6:54 pm

Risk measure when you have very few historical data

August 20th, 2013, 5:21 am

Here's a simple question. I'm trying to apply coherent risk measures used in quant finance (such as CVaR) in a different context (in my case, industrial project management). My point is: in finance you usually have tons of historical data to calibrate your CVaR, but what if you only have a few points (say between 10 and 30)? How do you produce a statistically significant risk measure in this case?

Best and cheers,
Ben
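For context, the usual empirical CVaR (expected shortfall) estimator the question refers to can be sketched as below — a minimal sketch with made-up numbers, showing why 10-30 points is a problem: at the 90% level, the tail beyond the quantile may contain only one or two observations, so the estimate is dominated by a single data point.

```python
import numpy as np

def empirical_cvar(losses, alpha=0.95):
    """Mean of the losses at or beyond the alpha-quantile (losses as positive numbers)."""
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, alpha)   # empirical VaR at level alpha
    tail = losses[losses >= var]       # observations in the tail
    return tail.mean()

# hypothetical sample of 10 losses, one outlier
losses = np.array([3., 1., 4., 1., 5., 9., 2., 6., 5., 35.])
print(empirical_cvar(losses, alpha=0.9))  # the tail holds a single point: 35.0
```

With 10 points the 90% tail is literally the largest observation, so the "estimate" is just whatever the worst historical point happened to be.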
 
User avatar
katastrofa
Posts: 9578
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Risk measure when you have very few historical data

August 20th, 2013, 8:26 am

I'd ignore VaR and focus on stress testing and scenario analysis ("what if X happens?").
 
User avatar
frenchX
Topic Author
Posts: 5911
Joined: March 29th, 2010, 6:54 pm

Risk measure when you have very few historical data

August 20th, 2013, 8:50 am

That's indeed a safe and smart way. The question is: how do you put a quantified probability on your operational risk? Which is worse, a loss of $1,000 with a probability of 10% or a loss of $1M with a probability of 0.01%? That's why I'd like to have an idea of the probability of occurrence of a risk scenario. Your comment is indeed the smartest approach so far, but it lacks numbers. What is good about VaR is that it gives a kind of "confidence interval". Stress testing and scenario analysis are good if the scenarios are well selected (for example, I select the scenarios which in my opinion have a likelihood above 5% of happening), but then it's highly subjective.
 
User avatar
MHill
Posts: 489
Joined: February 26th, 2010, 11:32 pm

Risk measure when you have very few historical data

August 20th, 2013, 10:27 am

I thought you'd been quiet recently - congrats, it sounds like you have a job! With that number of data points, you're going to struggle with statistical significance. How about a Chebyshev-based number? What's the shape of your distribution? Can you assume normal, or assume Paretian tails?
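The Chebyshev-based number suggested above can be sketched with the one-sided (Cantelli) inequality, which gives a distribution-free upper bound on a loss quantile from only the mean and standard deviation. A minimal sketch, with hypothetical numbers; note that with 10-30 points the sample mean and standard deviation are themselves noisy, and the bound is deliberately very conservative.

```python
import math

def cantelli_var_bound(mean, std, confidence=0.95):
    """Distribution-free upper bound on the loss quantile at the given confidence:
    Cantelli's inequality gives P(X >= mean + k*std) <= 1/(1 + k^2),
    so setting 1/(1 + k^2) = 1 - confidence yields k = sqrt(confidence/(1-confidence))."""
    k = math.sqrt(confidence / (1.0 - confidence))
    return mean + k * std

# hypothetical sample estimates: mean cost 100, std 20
print(cantelli_var_bound(100.0, 20.0, 0.95))  # 100 + sqrt(19)*20, about 187.2
```

Compare with the normal assumption, where the 95% quantile would be roughly mean + 1.645*std ≈ 132.9: the distribution-free bound is much larger, which is the price of making no shape assumption.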
 
User avatar
frenchX
Topic Author
Posts: 5911
Joined: March 29th, 2010, 6:54 pm

Risk measure when you have very few historical data

August 20th, 2013, 10:53 am

Thanks! Indeed I have a job now, not in finance but in aerospace and defense.

What I'd like to do, for applied R&D projects (i.e. new product development), is to examine historical overall costs in order to budget safely for a new one. But before doing a statistical analysis you need a model. I was thinking of something like Project Value = Time to Completion x Overall Costs x Impact Factor, while Project Budget = Overall Costs + Residual Linked to Risks. My goal is, for a "similar product", to estimate a realistic expected time to completion, realistic expected overall costs, and (the most complicated part, in my opinion) a risk reserve.

It's the classical budgeting problem: ask for too large a risk reserve and top management will just say no; ask for too little and you run a big risk that your project will end over budget (in time and costs). The question is: "Given a historical panel of 10 R&D projects quite similar in goals (although not similar in budgets), how do I propose a budget for a similar new product development of which I can say that I'm confident at X% that the project will finish under budget?"

PS: to answer your question, I suspect a strongly non-normal distribution. Usually most projects finish under budget, but for those which don't, the cost excess is usually pretty big, so I expect a fat-tailed distribution.
Last edited by frenchX on August 19th, 2013, 10:00 pm, edited 1 time in total.
 
User avatar
rmax
Posts: 6080
Joined: December 8th, 2005, 9:31 am

Risk measure when you have very few historical data

August 20th, 2013, 11:10 am

Quote, originally posted by frenchX: "PS: to answer your question, I suspect a strongly non-normal distribution. [...] I expect a fat-tailed distribution."

Indeed this is the case. There is a paper written by a couple of people at Harvard where they explore this (I do not have the reference). What is interesting is that the tail is both fat and very long, so there are projects that overrun their budget by thousands of percent and bring down the company that was implementing them. If the project is software engineering, I know that Steve McConnell was also looking at this kind of thing, and he used to have a database of project sizes (and overruns?). Might be worth checking out.
 
User avatar
frenchX
Topic Author
Posts: 5911
Joined: March 29th, 2010, 6:54 pm

Risk measure when you have very few historical data

August 20th, 2013, 12:05 pm

Interesting! I have also heard of failed projects which completely screwed the firm. Top management should put in place a kind of "anti-sunk-cost kill switch", such as: "if the cost overrun exceeds the initial budget (i.e. more than 100% additional cost), the project is closed". Usually the risk reserve is between 10% and 20% of the initial budget, which seems to me very optimistic (especially for challenging high-tech R&D projects in very demanding environments such as space or the military).
 
User avatar
rmax
Posts: 6080
Joined: December 8th, 2005, 9:31 am

Risk measure when you have very few historical data

August 20th, 2013, 12:49 pm

Quote, originally posted by frenchX: "I have also heard of failed projects which completely screwed the firm. [...]"

I also don't think you can assess it purely on cost overrun. For example, the Manhattan Project's original budget was about USD 100 MM, but it overran to a cost more like USD 2bn (approx. 25bn in today's terms). The cost can be rationalised by what the project's success meant, although I do not know whether there were naysayers within the project arguing that the overrun was not worth it and that 100 aircraft carriers would have been a better purchase.

20/20 hindsight is key; however, sometimes thinking on a project gets bogged down in the detail and does not see the wider picture. If the project is overrunning by 20% and there is no real end in sight, you need to look at what you are doing. If you can show you are close to completion, then you should probably continue; if you are not, then you need to reconsider. At 100% you need an internal investigation (although of course it depends on the drivers to deliver).
 
User avatar
And2
Posts: 103
Joined: January 29th, 2007, 5:24 pm

Risk measure when you have very few historical data

August 20th, 2013, 1:39 pm

A couple of things come to mind:
1. Bootstrapping - to increase the effective sample size.
2. Using some overarching concept like entropy - to come up with the least biased distribution.
Statisticians should have quite a bit of stuff on the subject.
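The bootstrapping suggestion can be sketched as follows: resample the few observations with replacement many times and compute the risk measure on each resample, which gives a confidence interval for it rather than a single point. A minimal sketch with made-up cost/budget ratios; with only 10 points the interval will be very wide, which is itself useful information about how little the data pin down.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_cvar_ci(losses, alpha=0.9, n_boot=5000, ci=0.90):
    """Percentile-bootstrap confidence interval for the empirical CVaR at level alpha."""
    losses = np.asarray(losses, dtype=float)
    stats = []
    for _ in range(n_boot):
        resample = rng.choice(losses, size=len(losses), replace=True)
        var = np.quantile(resample, alpha)
        stats.append(resample[resample >= var].mean())
    lo, hi = np.quantile(stats, [(1.0 - ci) / 2.0, 1.0 - (1.0 - ci) / 2.0])
    return lo, hi

# hypothetical final-cost / initial-budget ratios for 10 projects
costs = np.array([0.9, 1.0, 0.95, 1.1, 0.8, 1.05, 0.9, 1.0, 2.5, 1.2])
lo, hi = bootstrap_cvar_ci(costs)
print(f"90% bootstrap CI for CVaR(90%): [{lo:.2f}, {hi:.2f}]")
```

One caveat worth keeping in mind: the bootstrap can only recombine the points you already have, so it cannot conjure up tail events worse than the worst observed project.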
 
User avatar
yugmorf2
Posts: 87
Joined: November 21st, 2010, 5:18 pm

Risk measure when you have very few historical data

August 21st, 2013, 2:37 am

One possibility might be to find an alternative data set that is highly correlated with your sparsely populated one, and then calculate your risk measures based on that. Possible?
 
User avatar
frenchX
Topic Author
Posts: 5911
Joined: March 29th, 2010, 6:54 pm

Risk measure when you have very few historical data

August 21st, 2013, 5:21 am

@rmax: Yes, I agree that the cost overrun should be balanced against the expected payoff at completion. It's funny, because I "feel" two kinds of risk management: a very quantitative one, based on scenario modelling, probability modelling for decisions and statistical inference, and a very intuitive one, based on the personal experience of the project manager, the collective intuition of the team and the corporate culture. In finance I think the proportion for risk management is 90% quantitative / 10% intuitive, while for R&D projects in industry it's the inverse, 10% quantitative / 90% intuitive. My goal would be a 50/50 one.

@yugmorf2: That's a good idea but a very hard one, because I suspect the statistics depend strongly on:
- the area: project management in France is not the same as in China, I think;
- the sector: a project in space R&D is not the same as one in software engineering;
- the type of firm: a project in a small family-owned business is not the same as one in a big global publicly traded firm;
- the organization: a project is not handled the same way in a very open matrix organization with strong internal network effects as in a very vertical silo firm organized by strict functional areas;
- the vision and culture: give the same R&D project to a firm whose culture is specialized in strategic marketing and collaborative innovation management, and to a firm specialized in operations management for industrial production with a focus on lean management and cost efficiency, and I bet you will end up with two totally different products.

@And2: Bootstrapping may be a good idea. The entropic point of view could be interesting, but with only 10 points I suspect it would not be very efficient.
 
User avatar
Traden4Alpha
Posts: 23951
Joined: September 20th, 2002, 8:30 pm

Risk measure when you have very few historical data

August 21st, 2013, 10:43 am

What an interesting problem, frenchX!

I would guess that your data both under-estimates and over-estimates the variance of outcomes due to scope modulation and cancellation effects. That is, many projects show lower overrun levels than actual because the company cut the scope of the system mid-project, while some projects show horrible overruns because of scope creep. The data also under-estimates the cost-overrun risks due to survivor effects: your data lacks all the events with 5000% overruns that were cancelled (which suggests that a coherent risk measure must include cancellation risks).

The other issue is that most of the variance of outcomes probably has two causes. The first is the intrinsic structure of technical feasibility risks in the project: how many "tough engineering/scientific problems" are embedded in the project, and how they are coupled to each other. The more feasibility risks, the worse the overrun, the overrun being the max overrun across risks. And the greater the coupling between the risks, the worse the overrun, because a change in one part of the system forces a redesign of another part of the system. If a project calls for several different new technologies, all tightly coupled to each other, then the likelihood of significant overruns would be quite high.

The second cause is the management culture of the company, and whether it promotes transparency and honesty with regard to go/no-go or scope decisions. To the extent that project teams can and do hide problems with the project, maintain over-optimistic forecasts of the marginal completion date or cost (i.e., a perpetual 80%-complete state), or tend to say "yes" to every scope increase, the chance of overruns is much higher. This issue would cause heteroskedasticity in your data if the data come from a number of firms that is significantly smaller than the sample size (e.g., data on 10 to 30 projects from 3 firms really has a sample size of 3).

But to answer your original question, I'd think about doing a meta-analysis of the literature, starting with this fount of wisdom. A quick glance shows that many of these papers have decent sample sizes. That might give you a good idea of the family of distributions to use; you could then estimate the parameters of that distribution from your data, including the meta-risk that the distribution of overruns may be worse than the maximum-likelihood value due to your small sample size.
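The approach sketched above (pick a distribution family from the literature, then fit only its parameters to the small sample) might look like the following. The lognormal family and the overrun figures are purely illustrative assumptions, not a recommendation from the thread; the point is that fitting two parameters to 10 points is far better conditioned than estimating a tail quantile directly from them.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_lognormal_cvar(overruns, alpha=0.95, n_sim=100_000):
    """Fit a lognormal by MLE (the MLE for a lognormal is the mean and std of the
    log-data), then estimate CVaR at level alpha from the fitted model by simulation,
    rather than from the raw 10-30 points."""
    logs = np.log(np.asarray(overruns, dtype=float))
    mu, sigma = logs.mean(), logs.std(ddof=1)
    sims = rng.lognormal(mu, sigma, size=n_sim)
    var = np.quantile(sims, alpha)
    return sims[sims >= var].mean()

# hypothetical cost-overrun factors (final cost / initial budget) for 10 projects
overruns = [0.9, 1.0, 1.1, 0.95, 1.3, 1.05, 0.85, 1.6, 1.0, 2.8]
cvar95 = fit_lognormal_cvar(overruns)
print(f"fitted-model CVaR(95%) of the overrun factor: {cvar95:.2f}")
```

The fitted model extrapolates beyond the worst observed project, which addresses the survivor-effect concern above, but only to the extent that the chosen family (here, lognormal) is actually right: a Paretian tail would give a much larger number from the same data.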
 
User avatar
Cuchulainn
Posts: 62898
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Risk measure when you have very few historical data

August 21st, 2013, 11:51 am

My 2 cents, devil's advocate: I think you might be trying to compare apples with coconuts, as this form of analogical reasoning has no guarantee of being valid. And VaR history is much shorter than experience with industrial projects. Industrial projects tend to proceed in steps, starting with pilot projects etc.

Quote: "The management question, therefore, is not whether to build a pilot system and throw it away. You will do that. [...] Hence plan to throw one away; you will, anyhow." - Fred Brooks
Last edited by Cuchulainn on August 20th, 2013, 10:00 pm, edited 1 time in total.
Step over the gap, not into it. Watch the space between platform and train.
http://www.datasimfinancial.com
http://www.datasim.nl
 
User avatar
frenchX
Topic Author
Posts: 5911
Joined: March 29th, 2010, 6:54 pm

Risk measure when you have very few historical data

August 21st, 2013, 12:16 pm

I agree that industrial project management is far older than the use of VaR in quant finance, but what I'm trying to find is a better way of budgeting a risk reserve for an R&D project, more quantitatively than the raw gut-feeling approach. I agree with you, though, that the naive view of applying the classic VaR approach to a project outcome distribution is probably very wrong.
 
User avatar
Cuchulainn
Posts: 62898
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

Risk measure when you have very few historical data

August 21st, 2013, 12:36 pm

What about QFD, Six Sigma, PERT, GERT, etc.?
Wilmott.com has been "Serving the Quantitative Finance Community" since 2001.