Serving the Quantitative Finance Community

 
Kurtosis
Topic Author
Posts: 0
Joined: December 4th, 2001, 5:55 pm

Comments welcome (version 1)

May 6th, 2011, 2:06 pm

If my mapping is right, albeit simplified and limited to a known source of randomness, then the second step is evident: a measure of fragility in terms of concavity to a parameter.
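A minimal sketch of what a concavity-based fragility measure could look like, assuming we probe a payoff function with a central finite difference in the stressed parameter. The names (`concavity`, `payoff`, `fragile`, `antifragile`) and the toy quadratic exposures are hypothetical illustrations, not anything proposed in this thread.

```python
# Hedged sketch: fragility read off as local concavity of a payoff
# with respect to a stress parameter, via a central finite difference.

def concavity(payoff, s, ds=0.01):
    """Second finite difference of payoff at parameter value s.
    Negative -> locally concave (fragile in the sense above);
    positive -> locally convex (antifragile)."""
    return (payoff(s + ds) - 2.0 * payoff(s) + payoff(s - ds)) / ds**2

# A short-option-like exposure: collects a premium, loses on large moves.
fragile = lambda s: 1.0 - s**2
# A long-option-like exposure: gains from variability in the parameter.
antifragile = lambda s: s**2 - 1.0

print(concavity(fragile, 0.5))      # negative: concave, fragile
print(concavity(antifragile, 0.5))  # positive: convex, antifragile
```

For these quadratics the finite difference recovers the second derivative exactly (-2 and +2); for a real book one would probe the P&L function the same way at the current parameter value.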
 
Traden4Alpha
Posts: 3300
Joined: September 20th, 2002, 8:30 pm

Comments welcome (version 1)

May 6th, 2011, 7:54 pm

Quote (originally posted by MCarreira): "I don't think one-dimensional skewness/asymmetry is the best description for the fragile/robust/antifragile triangle ... the cost of becoming antifragile (generating a steep gain function in one dimension) may be paid by generating a steep loss function in an unrelated dimension. The evolution of the brain and of cooperation combine to create an individual that will be considerably stronger as part of a group, but taken alone the individual's fragilities will be apparent. The Hydra picture would be helpful here, describing the multiple dimensions of fragility/antifragility in the same individual/object. Even in physical objects, creating a strong structure can lead to a resonant frequency, or to a specific direction along which cutting is easier. Basically, what I'm trying to say is that the antifragile will be fragile along the dimension of the cost paid to raise the step (too heavy a cost and it'll die, too strong and it'll be brittle, ...)."

Exactly! There are serious multidimensional issues here, and they are one root cause of the unintended effects of regulation. If regulators (or managers) define a specific metric and a specific threshold for acceptability, then clever financial innovators will create things that meet the defined spec but contain hideous fragility on some new, unmonitored, unregulated dimension.

Negative-skew payoffs are epiphenomena: they arise from something structural. If we are to achieve antifragility, we must understand the root causes of negative-skew payoffs.

I think the deeper issue is that for most objects, relatively few combinations of conditions create high performance and far more combinations create low performance -- there are few ways to be great and many ways to fail. Fragility is a measure of this cardinality ratio: the steepness with which small changes create large degradations in performance, and the probability (or temporal speed) with which small excursions leading to large losses can occur.

Also, I'd suggest that fragility must be more than a passive concept, although maybe Nassim wants a different word for objects that are antifragile through active adaptation. I'm thinking of the aerospace-engineering concept of "control authority", which reflects how much the pilot can deflect the control surfaces to generate forces that recover the aircraft after a perturbation. Aircraft with low control authority seem fragile -- a minor perturbation leads to uncontrolled flight and a catastrophic triumph for gravity.
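A toy Monte Carlo, assuming nothing from the thread beyond the structural point: a payoff that collects a small premium most of the time and takes a rare large loss is negatively skewed by construction. The function name and the parameter values (`p_loss`, `premium`, `loss`) are illustrative assumptions.

```python
# Hedged sketch: negative skew as an epiphenomenon of a structural
# asymmetry -- many small wins, rare large losses.
import random

random.seed(0)

def premium_selling_payoff(p_loss=0.02, premium=1.0, loss=-40.0):
    """Small gain with probability 1 - p_loss, rare large loss otherwise."""
    return loss if random.random() < p_loss else premium

payoffs = [premium_selling_payoff() for _ in range(100_000)]
mean = sum(payoffs) / len(payoffs)
var = sum((x - mean) ** 2 for x in payoffs) / len(payoffs)
skew = sum((x - mean) ** 3 for x in payoffs) / len(payoffs) / var ** 1.5
print(f"mean {mean:.3f}, skewness {skew:.2f}")  # skewness is strongly negative
```

The positive mean makes the strategy look attractive on most days; the skewness statistic exposes the structural asymmetry that a threshold on the mean alone would miss.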
Last edited by Traden4Alpha on May 5th, 2011, 10:00 pm, edited 1 time in total.
 
Traden4Alpha
Posts: 3300
Joined: September 20th, 2002, 8:30 pm

Comments welcome (version 1)

May 6th, 2011, 7:55 pm

Quote (originally posted by Kurtosis): "If my mapping is right, albeit simplified and limited to a known source of randomness, then the second step is evident: a measure of fragility in terms of concavity to a parameter."

I need to think some more, although it sounds OK. But I am still concerned about the time element of this, and the difference between rapidly-applied and slowly-applied perturbations of the object.
Last edited by Traden4Alpha on May 5th, 2011, 10:00 pm, edited 1 time in total.
 
MCarreira
Posts: 64
Joined: January 1st, 1970, 12:00 am

Comments welcome (version 1)

May 6th, 2011, 8:29 pm

Quote (originally posted by Traden4Alpha): "I need to think some more, although it sounds OK. But I am still concerned about the time element of this and the difference between rapidly-applied and slowly-applied perturbations of the object."

Yes, and this time dimension is linked to the cost of keeping yourself robust or antifragile over longer time horizons. Keeping oneself ready for the worst is hard; doing so without stress is even harder.
 
Traden4Alpha
Posts: 3300
Joined: September 20th, 2002, 8:30 pm

Comments welcome (version 1)

May 6th, 2011, 9:35 pm

Quote (originally posted by MCarreira): "Yes, and this time dimension is linked to the cost of keeping yourself robust or antifragile over longer time horizons. Keeping oneself ready for the worst is hard; doing so without stress is even harder."

Yes, long-term solvency antifragility does require deeper reserves (or bounded liabilities). But long-term liquidity antifragility may require only minor changes in contracts -- clauses that allow more time on redemptions, or that allow the entity to deliver assets in lieu of cash. Perhaps time buffers are easier to create and maintain than cash buffers. Liquidity crises have a way of becoming solvency crises when the afflicted entity is forced to dump illiquid long-term assets to meet a short-term liability.
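A toy model of the time-buffer point, under stated assumptions: the entity can sell a fixed fraction of its book per day at fair value during a redemption notice period, and anything still owed at the deadline must be dumped at a fire-sale discount. All names and numbers (`survives`, `daily_share`, `discount`) are hypothetical illustrations, not from the thread.

```python
# Hedged sketch: a notice period (time buffer) can keep a liquidity
# squeeze from turning into a solvency event.

def survives(assets, redemption, notice_days, daily_share=0.10, discount=0.40):
    """True if the redemption can be met without wiping out the entity."""
    # Fraction of the book sellable at fair value during the notice period.
    orderly_fraction = min(1.0, notice_days * daily_share)
    orderly_cash = orderly_fraction * assets
    if orderly_cash >= redemption:
        return True
    # The rest must be dumped immediately at a fire-sale discount.
    remaining = assets - orderly_cash
    return orderly_cash + remaining * (1 - discount) >= redemption

# Solvent on paper: 100 of assets against an 80 redemption.
print(survives(100, 80, notice_days=0))   # prints False: fire sale destroys value
print(survives(100, 80, notice_days=10))  # prints True: orderly sales over time
```

With no notice period the fire sale realizes only 60 against the 80 owed -- a solvency failure despite assets exceeding liabilities at fair value; ten days of orderly selling meets the redemption in full.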