April 13th, 2009, 12:57 pm
Let me explain a little better.
A paper I am reading notes that an unbiased estimator of standard deviation is |R| * sqrt(pi/2), and references Kotz and Johnson (1970), who note that when returns are normally distributed, an unbiased estimator of standard deviation is the mean absolute deviation (the sample mean of |R - r|) multiplied by bn = sqrt((pi/2) * n/(n-1)), where R is the individual stock return and r is the mean portfolio return. When n is large, the n/(n-1) factor tends to 1, leaving the estimator |R - r| * sqrt(pi/2). To get |R| * sqrt(pi/2), you have to assume r = 0. My question is: under what conditions can that assumption be made, and why is it made? Thanks again.

Quote
Originally posted by: Aaron

I don't understand what you are getting at. The assumption that the mean is zero is a financial one; it has nothing to do with the distribution you assume. And since the Normal distribution has only one scale parameter, any measure of scale is an unbiased estimator of standard deviation if multiplied by the correct constant.

The simple answer is that for many risky securities, like equities, over short time intervals, like one day, the standard deviation is so much greater than the expected return that the return minus the risk-free rate can be assumed to have zero mean for most practical purposes. For less volatile securities over short periods, you can sometimes assume the price minus the forward price has close to zero mean.
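A quick simulation makes the point concrete. This is a minimal sketch, assuming normally distributed returns; the volatility and drift values are made up for illustration. With zero mean, the sample mean of |R| times sqrt(pi/2) recovers the true standard deviation; with a drift that is large relative to the volatility, the same estimator is biased upward, which is why the r = 0 assumption only works when the expected return is negligible next to the standard deviation.

```python
import math
import random
import statistics

random.seed(0)
sigma = 0.02   # assumed true daily volatility, for illustration
n = 200_000    # large sample, so the n/(n-1) correction is negligible

scale = math.sqrt(math.pi / 2)

# Zero-mean case: returns ~ N(0, sigma^2).
# mean(|R|) * sqrt(pi/2) should come out close to sigma.
returns = [random.gauss(0.0, sigma) for _ in range(n)]
est_zero_mean = statistics.fmean(abs(x) for x in returns) * scale

# Nonzero-mean case: an (unrealistically large) daily drift of 5%.
# Now |R| mostly reflects the drift, not the dispersion, so the
# estimator overshoots sigma badly.
mu = 0.05
returns_drift = [random.gauss(mu, sigma) for _ in range(n)]
est_with_drift = statistics.fmean(abs(x) for x in returns_drift) * scale

print(f"true sigma:            {sigma:.4f}")
print(f"estimate, zero mean:   {est_zero_mean:.4f}")
print(f"estimate, with drift:  {est_with_drift:.4f}")
```

For realistic daily equity parameters (drift on the order of 0.05% per day versus volatility of 1-2% per day) the bias is tiny, which is the practical justification Aaron gives for setting r = 0 over short horizons.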