Hi, we know that in a simple multivariate linear regression, the estimated betas have a multivariate normal distribution, with mean equal to the population beta and covariance proportional to the inverse of the matrix X'X multiplied by the variance of the error term. My question: when testing the statistical significance of a given beta, is it best to test that beta alone, i.e. analyze the marginal distribution of that beta, or is it best to incorporate the effect of the other betas, i.e. analyze the joint multivariate distribution? These are two distinct approaches. In the first, my t-stat for, say, beta-1 is just beta-1 divided by its standard error, se(beta-1). In the second, I can build a test of the hypothesis that both beta-1 and beta-2 are zero, and use the multivariate distribution to build the right statistic; I think it'll involve the chi-square distribution in some form. What are the best practices around this? Thanks, and please let me know if I wasn't clear.
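A minimal sketch of the two approaches the post contrasts (my own illustration, not from the thread), using only numpy/scipy; the data, seed, and coefficient values are made up:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
true_beta = np.array([1.0, 0.5, 0.0])          # beta_2 is truly zero here
y = X @ true_beta + rng.normal(scale=1.0, size=n)

# OLS and the covariance of beta_hat: Var(beta_hat) = s^2 (X'X)^{-1}
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
s2 = resid @ resid / (n - k)                   # error-variance estimate
cov_beta = s2 * XtX_inv

# Approach 1: marginal t-statistic for beta_1 (divide by the standard
# error, i.e. sqrt of the variance, not the variance itself)
t1 = beta_hat[1] / np.sqrt(cov_beta[1, 1])
p1 = 2 * stats.t.sf(abs(t1), df=n - k)

# Approach 2: joint Wald test of H0: beta_1 = beta_2 = 0, built from the
# relevant 2x2 block of the covariance matrix; asymptotically chi-square(2)
b = beta_hat[1:3]
V = cov_beta[1:3, 1:3]
W = b @ np.linalg.solve(V, b)
p_joint = stats.chi2.sf(W, df=2)

print(t1, p1, W, p_joint)
```

The joint test answers a different question than the two marginal tests, which is exactly the distinction raised below.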

Just a casual reaction. First, the fact that you are testing the significance of your betas probably puts you ahead of 95% of casual regression users. Second, that you have to actually "build the right statistic", as opposed to simply looking it up in some econometrics book or reading it out from some software, suggests you have moved beyond best practices to 'very advanced practices'.

Trying, Alan, but miles to go before I sleep! Statistics is fun but I never had formal training; I'm trying to teach myself. I have always been a great fan of your insight and knowledge, though, and your willingness to always help. One of these days, I want to be able to work/collaborate with you! I'm sure you have some ideas around this one. Basically, I think a combined hypothesis lets you answer the question: what is the chance that beta-1 and beta-2 are both zero? You build the corresponding statistic under the null hypothesis, and so on and so forth for more betas considered together. This is distinctly different from testing a single coefficient beta-1 or beta-2.
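The equation this post alludes to appears not to have survived; a standard form of the joint statistic (my reconstruction, not necessarily what was originally posted) is:

```latex
% Joint test of H0: beta_1 = beta_2 = 0. Let \hat\beta_{(12)} collect the two
% estimates and \widehat{V} the matching 2x2 block of s^2 (X'X)^{-1}. Then
\[
W = \hat\beta_{(12)}^{\top}\,\widehat{V}^{-1}\,\hat\beta_{(12)}
\;\sim\; \chi^2_2 \quad\text{(asymptotically)},
\qquad
F = W/2 \;\sim\; F_{2,\,n-k} \quad\text{(exact under normal errors)}.
\]
```

For q restrictions the chi-square has q degrees of freedom and the F statistic is W/q with (q, n-k) degrees of freedom.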

Last edited by deskquant on November 19th, 2011, 11:00 pm, edited 1 time in total.

Yes, multivariate testing also makes more sense to me as far as multivariate hypotheses are concerned. That also goes for concepts that are intricately tied to testing, like confidence intervals -- recall the multiple-comparisons problem and the implications of the Bonferroni inequality (inequalities):
http://mathworld.wolfram.com/Bonferroni ... 473.htm
See also:
http://en.wikipedia.org/wiki/T-test#Mul ... testing
For your particular problem, see "2.1 Usage of the F-test" / "2.1.1 Test of joint significance" here:
http://www.mattblackwell.org/files/teaching/ftests.pdf
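A sketch of the "test of joint significance" F-test in the notes linked above, done the textbook way by comparing restricted and unrestricted residual sums of squares (the function name, data, and seed are my own illustration):

```python
import numpy as np
from scipy import stats

def joint_f_test(y, X, keep_cols):
    """F-test that the coefficients on the columns NOT in keep_cols are jointly zero."""
    n, k = X.shape
    q = k - len(keep_cols)                     # number of restrictions
    def rss(Z):
        b, *_ = np.linalg.lstsq(Z, y, rcond=None)
        r = y - Z @ b
        return r @ r
    rss_u = rss(X)                             # unrestricted fit
    rss_r = rss(X[:, keep_cols])               # restricted fit (null imposed)
    F = ((rss_r - rss_u) / q) / (rss_u / (n - k))
    return F, stats.f.sf(F, q, n - k)

# Example: two genuinely nonzero slopes should show up as jointly significant
rng = np.random.default_rng(1)
n = 150
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 0.8, 0.6]) + rng.normal(size=n)
F, p = joint_f_test(y, X, keep_cols=[0])       # H0: beta_1 = beta_2 = 0
print(F, p)
```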

deskquant, very kind, thank you. I know very little econometrics, but had the pleasure of sitting in on a class given by the late Jack Johnston, at the suggestion of my boss Sheen Kassouf. So, I tend to look things up in his book. Indeed, I see he has a discussion starting on page 138 of `Econometric Methods' (second edition). It begins "To obtain a joint test for several or all beta_i ...". Apparently the trick is to first transform the hat{beta}'s to a set of hat{beta*}'s which are normally and independently distributed about the true beta*'s. See the book for details. (Cross-posted with Polter, so maybe his links have the same.)

Last edited by Alan on November 19th, 2011, 11:00 pm, edited 1 time in total.

Thanks all, will have a look!
