May 4th, 2011, 1:18 pm
Quote (Kurtosis): "One point: if you take 1 + a(1)(1 + a(2)) etc. it would be another calibration of a(n), but then you would need to increase a(2), then a(3), etc."

Very true! Yet, unless a(i) = a(j), we need a higher-order calibration to estimate the higher-order vol of vol of vol of ...

I think the difference between your figure 2 and my 1 + a(1)(1 + a(2)) is the difference between progression dispersion (e.g., vol of vol in markets in which volatility progressively evolves over time) and regression dispersion (i.e., the true sigma never was the prior/estimated sigma, due to errors in models of errors in models of ... of errors in models, as with Fukushima's reliability never being its assumed a priori reliability). Perhaps we are speaking of nearly orthogonal phenomena: vol of vol may be more about how the world changes over time, while error on error may be more about how the world never was what we thought it was. I realize these two categories of dispersion may be hard to separate in financial markets, where a change in price conflates both a change in the world (e.g., a company announces new revenue and profit numbers) and a change in our model of the world (e.g., we realize that constant-IV models don't work). What might solidify your paper is a sharper distinction between the time-rate change in error and the regressed discrepancy between true and estimated values.

Quote (Kurtosis): "It seems that the end result would not change (under another calibration), but the math would be messy."

Whenever I hear "but the math would be messy," my mind fills with red flags and sirens. Making an assumption just to make the math easy seems to be what got us into trouble in the first place. That said, I can appreciate the necessity of forcing a natural physical/economic system into an artificial mathematical framework, because some answers are better than no answers.
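To make the error-on-error intuition concrete, here is a minimal Monte Carlo sketch (my construction, not from the paper): the true sigma is the estimated sigma multiplied through layers (1 + a(1)X(1))(1 + a(2)X(2))..., where the X(i) are assumed fair ±1 coin flips and the flat calibration a(i) = 0.25 is a hypothetical choice purely for illustration. The excess kurtosis of the resulting returns grows with the number of error layers:

```python
import random
import statistics

# Sketch: sigma_true = sigma_est * (1 + a1*X1)(1 + a2*X2)...,
# with X_i = +/-1 fair coin flips. Each extra layer of estimation
# error fattens the tails of the returns.

def compounded_sigma(sigma0, a, rng):
    """One draw of sigma after applying every layer of estimation error."""
    s = sigma0
    for a_i in a:
        s *= 1 + a_i * (1 if rng.random() < 0.5 else -1)
    return s

def excess_kurtosis(xs):
    m = statistics.fmean(xs)
    v = statistics.fmean([(x - m) ** 2 for x in xs])
    return statistics.fmean([(x - m) ** 4 for x in xs]) / v**2 - 3

rng = random.Random(7)
kurt = {}
for n_layers in (0, 2, 4):
    a = [0.25] * n_layers  # hypothetical flat calibration a(i) = 0.25
    xs = [rng.gauss(0.0, compounded_sigma(1.0, a, rng)) for _ in range(100_000)]
    kurt[n_layers] = excess_kurtosis(xs)
    print(n_layers, "layers -> excess kurtosis ~", round(kurt[n_layers], 2))
```

With zero layers the returns are plain Gaussian (excess kurtosis near 0); each additional layer of uncertainty about sigma pushes the kurtosis up, which is the sense in which suspended skepticism about one's own error estimates manufactures fat tails.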
This situation also highlights the difference between epistemological limits driven by lack of data (which may be resolved by future events, samples, and experiments) and epistemological limits driven by lack of machinery (which may be resolved by future theories, theorems, and algorithms).

Perhaps what would help is some justification for using a time-progressing vol-of-vol framework versus a model-regressing errors-on-errors framework. For example, what is the physical meaning of a(1), or of a(i) vs. a(j)?

Quote (Kurtosis): "INDEPENDENCE: in a sampling framework, it matters. Here you can play with calibrations of a(n): the a(n) are the opinions."

I was wondering about either ∂a(i)/∂a(j) ≠ 0 or CORR(a(i), a(j)) ≠ 0 for realized values of a(i) and a(j). Some exploration of the signs of those terms might make a nice sequel to this paper.

Quote (Kurtosis): "In the end, what I am doing is setting out the assumptions that allow for thin tails, etc. In other words, what are the conditions that allow us to suspend skepticism, and at what point."

Exactly! Your paper provides an interesting model for the time-rate of change of error, and for the bounds on error progressions that are necessary (but not sufficient) to ensure a non-fragile future. I look forward to draft 2.
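As a toy probe of the independence question (again my construction, not the paper's): keep the layer magnitudes fixed at a hypothetical a = 0.25 and let the randomness live in the sign shocks X(i), so "correlated layers" means one shared coin flip drives every layer while "independent layers" means a fresh flip per layer. Both regimes produce fat tails, so the non-independence caveat changes the calibration rather than rescuing thin tails:

```python
import random
import statistics

# Toy probe of CORR across error layers: compare independent sign
# shocks per layer vs. one shared sign shock driving all layers.
# Layer magnitude a = 0.25 and 4 layers are hypothetical choices.

def excess_kurtosis(xs):
    m = statistics.fmean(xs)
    v = statistics.fmean([(x - m) ** 2 for x in xs])
    return statistics.fmean([(x - m) ** 4 for x in xs]) / v**2 - 3

def draw_return(n_layers, a, correlated, rng):
    shared = 1 if rng.random() < 0.5 else -1  # used only when correlated
    s = 1.0
    for _ in range(n_layers):
        x = shared if correlated else (1 if rng.random() < 0.5 else -1)
        s *= 1 + a * x
    return rng.gauss(0.0, s)

rng = random.Random(11)
kurt = {}
for label, correlated in (("independent", False), ("correlated", True)):
    xs = [draw_return(4, 0.25, correlated, rng) for _ in range(100_000)]
    kurt[label] = excess_kurtosis(xs)
    print(label, "-> excess kurtosis ~", round(kurt[label], 2))
```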