September 1st, 2010, 1:33 pm
If you want to test the significance of your regression, you should use the F-statistic, which is related to the adjusted R-squared. R^2 is not the right statistic to use because it doesn't adjust for the degrees of freedom you have used up by adding independent variables.

Of course, if you want to know whether your R-squared is "big enough", that does depend on your application. Suppose you have trading profitability regressed on some set of predictive variables. 1-R^2 is your residual variance. Whether that's too much will depend on how big the mean of your profit is. If the mean is small, even a relatively large R^2 could still give you a tragically bad Sharpe ratio. On the other hand, if your predicted mean is high enough, even quite a small R^2 can be useful, so long as your relationship is stable.

The problem with the Fisher transform is that, if your variables are not bivariate normal, it gives you very bizarre results. The same is true of any confidence intervals/analysis of second or higher order moments. Particularly with financial time series, your second moment is going to be unstable or non-finite. Any confidence interval for risk or correlation is likely to be garbage unless you are very, very careful indeed.
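For concreteness, here is a minimal sketch of the two quantities being discussed: the overall F-statistic computed from R^2, and the Fisher transform of a correlation. Function names and the example numbers are mine; n is the number of observations and k the number of regressors excluding the intercept.

```python
import math

def regression_f_stat(r2, n, k):
    """Overall F-statistic for an OLS regression with k regressors
    (excluding the intercept) and n observations.
    Tests H0: all slope coefficients are zero."""
    return (r2 / k) / ((1.0 - r2) / (n - k - 1))

def fisher_z(r):
    """Fisher transform of a sample correlation r.
    Approximately normal with s.d. 1/sqrt(n-3) ONLY under
    bivariate normality -- the caveat raised above."""
    return 0.5 * math.log((1.0 + r) / (1.0 - r))

# Example: a seemingly small R^2 of 0.10 with 250 daily
# observations and 5 regressors is still highly significant.
print(round(regression_f_stat(0.10, 250, 5), 2))   # F ~ 5.42
print(round(fisher_z(0.5), 4))                     # ~ 0.5493
```

Note that the F-statistic penalizes extra regressors through the k and n - k - 1 terms, which is exactly the degrees-of-freedom adjustment plain R^2 lacks; a fat-tailed return series violates the normality assumption behind fisher_z, which is why its confidence intervals can be garbage for financial data.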