March 10th, 2017, 12:08 pm
Does anyone know where to find guidance on using something other than the square root of time to scale volatilities (and, by extension, correlations) in the context of, say, a mean-variance optimization? Beyond making sure the resulting matrix ends up positive semidefinite, so that the estimated parameters hang together when computing a frontier, can anyone point out potential pitfalls of this approach? I know there are resources on estimating the Hurst exponent and scaling individual time series (Mandelbrot, Peters, et al.), but I'm not aware of much on applying this across multiple series for the purpose of optimization.
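To make the PSD pitfall concrete, here is a minimal numpy sketch with hypothetical numbers. The assumption is that you scale the covariance matrix entry-wise by T**(2H), where the diagonal exponents come from per-series Hurst estimates and the off-diagonal exponents from some pairwise (cross-series) estimate. Scaling each asset's vol by its own T**H_i and keeping the correlations fixed always preserves PSD, but independently estimated pairwise exponents generally do not, as the example shows:

```python
import numpy as np

# Hypothetical daily covariance for three assets
# (vols 1.0%, 1.5%, 1.2%; correlations 0.3, 0.2, 0.4).
daily_cov = np.array([[1.00e-4, 0.45e-4, 0.24e-4],
                      [0.45e-4, 2.25e-4, 0.72e-4],
                      [0.24e-4, 0.72e-4, 1.44e-4]])

# Hypothetical scaling exponents: per-series Hurst estimates on the
# diagonal, independently estimated pairwise exponents off it.
H = np.array([[0.55, 0.65, 0.40],
              [0.65, 0.45, 0.60],
              [0.40, 0.60, 0.50]])

T = 21  # horizon in trading days

# Entry-wise scaling: variance/covariance scales as T**(2H) instead
# of the Brownian T**1 (i.e. vol scales as T**H instead of T**0.5).
scaled = daily_cov * T ** (2 * H)

# Pitfall: entry-wise scaling with heterogeneous pairwise exponents
# need not preserve positive semidefiniteness, so check the spectrum.
w, V = np.linalg.eigh(scaled)
print("smallest eigenvalue before repair:", w.min())

# Simple repair if indefinite: clip negative eigenvalues to zero
# (nearest-PSD in the eigenvalue-clipping sense).
if w.min() < 0:
    scaled = V @ np.diag(np.clip(w, 0.0, None)) @ V.T
```

With these particular numbers the scaled matrix comes out indefinite, so the clipping branch fires; any such repair changes the entries, which is itself a pitfall to keep in mind, since the "repaired" matrix no longer matches the individually estimated scalings that went into it.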