April 21st, 2002, 9:45 pm
<FONT face="Times New Roman">To 1 and 2: No, what I'm saying is that it is usually the case that only crude properties of historical price action and/or strategy performance are likely to persist. If you don't have a simple and convincing reason why a strategy should have worked well in the past, then it probably won't work well in the future, and your simulated historical results are probably just an exercise in curve-fitting.

I believe that allocators should spend their time understanding 1) what inefficiency gives a given manager his "edge", 2) whether their managers seem to be smart and reliable individuals with a sound infrastructure, 3) which component funds/strategies are <I>logically</I> distinct and therefore likely <I>a priori</I> to generate uncorrelated returns, and 4) whether they are likely to <I>become</I> (anti)correlated in the extreme events most allocators fear. For example, most relative value strategies can be expected to lose money in market dislocations even if they're trading totally different risks, simply because the "contagion" phenomenon can link markets and risks that are usually largely independent of one another (this is an unproved assertion, obviously). At any rate, these are all qualitative rather than quantitative issues...

As for your other points/questions: If by "quantitative basis" you mean a claim that, say, a 22% allocation to a given manager can reliably be expected to generate better results than a 25% allocation, then yes, I think that's ridiculous and likely to be a sales tool or just plain misguided. But I do think that a quantitative framework in terms of correlations, Sharpes, etc. gives an allocator a useful framework in which to organize his thoughts and make rational (if imprecise) allocations... I don't think the author in question expects <I>everyone</I> to lose money in the long run (if he does, it's certainly not in his fund's prospectus!) -Alex</FONT>
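A minimal sketch of the kind of quantitative framework described above: compute annualized Sharpe ratios and a pairwise correlation matrix across managers, then form coarse (not falsely precise) weights. The return series and all names here are illustrative assumptions, not anything from the discussion.

```python
import numpy as np

def sharpe(returns, rf=0.0, periods=12):
    """Annualized Sharpe ratio from periodic (e.g. monthly) returns."""
    excess = np.asarray(returns) - rf / periods
    return np.sqrt(periods) * excess.mean() / excess.std(ddof=1)

# Hypothetical monthly returns for three managers (illustrative only)
rng = np.random.default_rng(0)
managers = rng.normal(0.008, 0.02, size=(36, 3))

sharpes = np.array([sharpe(managers[:, i]) for i in range(3)])
corr = np.corrcoef(managers, rowvar=False)  # pairwise correlation matrix

# Coarse allocation: weight positively-Sharpe managers in proportion to
# Sharpe. The point is organization of thought, not precision -- a 22%
# vs. 25% split is well within estimation error anyway.
weights = np.maximum(sharpes, 0.0)
weights = weights / weights.sum()

print("correlations:\n", corr.round(2))
print("weights:", weights.round(2))
```

The correlation matrix is where point 3) shows up quantitatively: strategies that are logically distinct should exhibit low off-diagonal entries in normal times, though (per point 4) those entries can spike toward 1 in a dislocation.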