Serving the Quantitative Finance Community

 
User avatar
drona
Topic Author
Posts: 0
Joined: February 10th, 2002, 1:34 pm

Methodology to compute Beta and why ?

May 23rd, 2002, 11:43 pm

Why would one prefer to compute betas from weekly data vs. daily data? What is being captured, and what could be the advantage/disadvantage of one method over the other? Apart from the frequency of the data used, what horizon should be used, i.e. yearly, 2-yearly, etc.? Greatly appreciate any ideas.
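To make the question concrete, here is a minimal sketch comparing a daily-return beta with a weekly-return beta on synthetic data. The data, the "true" beta of 1.3, and the noise levels are all invented for illustration; the point is only the mechanics (beta = Cov(stock, market) / Var(market), with weekly returns formed by summing 5-day blocks of log returns).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily log returns for a market index and a stock (hypothetical data).
n_days = 252 * 2  # two years
mkt = rng.normal(0.0003, 0.01, n_days)
stock = 1.3 * mkt + rng.normal(0, 0.015, n_days)  # "true" beta of 1.3 plus noise

def beta(stock_ret, mkt_ret):
    """OLS beta: Cov(stock, market) / Var(market)."""
    cov = np.cov(stock_ret, mkt_ret)
    return cov[0, 1] / cov[1, 1]

# Beta from daily returns: many observations, small sampling error,
# but in real data more exposed to non-simultaneous-quote problems.
beta_daily = beta(stock, mkt)

# Beta from weekly returns: aggregate log returns into 5-day blocks.
# Fewer observations, so more sampling error.
n_weeks = n_days // 5
mkt_w = mkt[:n_weeks * 5].reshape(n_weeks, 5).sum(axis=1)
stock_w = stock[:n_weeks * 5].reshape(n_weeks, 5).sum(axis=1)
beta_weekly = beta(stock_w, mkt_w)

print(f"daily beta:  {beta_daily:.3f}")
print(f"weekly beta: {beta_weekly:.3f}")
```

On clean synthetic data like this, both estimates cluster around the true beta; the weekly estimate is simply noisier because it uses roughly a fifth of the observations. The trade-offs discussed in the replies below concern what happens when real microstructure effects enter.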
 
User avatar
reza
Posts: 6
Joined: August 30th, 2001, 3:40 pm

Methodology to compute Beta and why ?

May 23rd, 2002, 11:56 pm

One simplistic idea is that weekly sampling is smoother than daily. This is not specific to betas and is true, for example, for vols as well. As for horizon, it depends on how you are going to use it, I guess ... what time horizon are you trying to forecast?
Last edited by reza on May 23rd, 2002, 10:00 pm, edited 1 time in total.
 
User avatar
jungle
Posts: 4
Joined: September 24th, 2001, 1:50 pm

Methodology to compute Beta and why ?

May 24th, 2002, 5:38 am

Bear in mind any major changes to the company's operations (I think especially with operating leverage) over the measurement period - e.g. GEC changing to Marconi would mean a shift in the beta.
 
User avatar
drona
Topic Author
Posts: 0
Joined: February 10th, 2002, 1:34 pm

Methodology to compute Beta and why ?

May 24th, 2002, 11:16 am

You mean depending on the horizon of the trade we could use different betas (i.e. long-term trades vs. short-term trades)? Also, what do raw and adjusted beta really mean, what does the practice of squeezing betas toward 1 really mean, and why do it? Thanks.
 
User avatar
Aaron
Posts: 4
Joined: July 23rd, 2001, 3:46 pm

Methodology to compute Beta and why ?

May 28th, 2002, 4:42 pm

The main error in estimating Beta is usually nonsimultaneous quotes. The more frequently you measure, the more serious this problem. On the other hand, using a longer measurement interval throws away information that could be used to reduce sampling error. In this case you can get the best of both worlds by estimating using overlapping weekly (or any other interval) periods. Sometimes people use an autocorrelation adjustment.

"Adjusted" Beta can mean adjusted for many different things: capital structure, autocorrelation, events and others.

Shrinkage is a general statistical technique for improving estimates. It is usually better to shrink Betas to an industry mean than to 1, although you can use 1 because you know it is the average Beta of stocks in the index, weighted by index weight. The idea, in simple terms, is that your highest and lowest Betas are likely to have positive and negative measurement errors, respectively.

For example, suppose you want to predict a baseball player's batting average in the second half of the season based on his first-half average (this was the example in Stein's famous paper). You might simply use the first-half average as your prediction. But it turns out that if you shrink the estimates toward the population mean, you get smaller errors. If you shrink toward the position average, you do even better.
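The shrinkage idea can be sketched in a few lines. The raw betas below are hypothetical, and the 2/3 weight is the well-known Blume/Bloomberg-style convention, used here purely as an illustration; shrinking toward the industry mean works the same way with a different target.

```python
import numpy as np

# Hypothetical raw OLS betas for five stocks in one industry.
raw = np.array([0.55, 0.90, 1.05, 1.20, 1.80])

def shrink(betas, target, weight):
    """Linear shrinkage: keep `weight` of the raw estimate and pull the
    remaining (1 - weight) toward the target."""
    return weight * betas + (1 - weight) * target

# Squeeze toward 1 (the index-weighted average beta of the market).
toward_one = shrink(raw, 1.0, 2 / 3)

# Usually better: squeeze toward the industry mean.
toward_industry = shrink(raw, raw.mean(), 2 / 3)

print(toward_one)
print(toward_industry)
```

Note how the extreme betas move the most: the 1.80 estimate, which is the most likely to carry positive measurement error, is pulled down hardest, and the 0.55 estimate is pulled up, exactly the intuition in the post above.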
 
User avatar
Buckaroo
Posts: 0
Joined: May 17th, 2002, 3:48 am

Methodology to compute Beta and why ?

May 29th, 2002, 11:10 am

Aaron & colleagues,

Would you suggest that a Markowitz model would have more predictive power than the CAPM evaluation technique, given that it has an error term which would explain some of the problems you describe? Would it not then be possible to identify an arbitrage position based upon the error term achieved over the data, and thereby identify oversold or overbought positions?
 
User avatar
Aaron
Posts: 4
Joined: July 23rd, 2001, 3:46 pm

Methodology to compute Beta and why ?

May 29th, 2002, 4:03 pm

Aaron & colleagues, would you suggest that a Markowitz model would have more predictive power than the CAPM evaluation technique, given that it has an error term which would explain some of the problems you describe? Would it not then be possible to identify an arbitrage position based upon the error term achieved over the data, and thereby identify oversold or overbought positions? >>

It's certainly true that a full covariance matrix prediction will have a smaller error than a one-factor model. More important, you can use some of the extra information to correct for estimation problems.

A lot of people try to arbitrage this. A simple approach is to analyze stocks within a small industry; something like 20 mid-cap stocks not included in major indices works best. You use an estimation technique that explicitly allows for bid/ask spreads, quotation errors and non-simultaneous quotes to come up with a theoretical price for each stock based on current and lagged prices of the other 19. You design a trading system to buy the cheap stocks and short the expensive ones, based on the model, in a way that gives net zero market exposure. This is sometimes called "proxy arbitrage" because you buy the stock and short the tradable portion of the proxy, or vice versa.
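A toy sketch of the "theoretical price from peers" step, not the full estimation technique described above: everything here (the factor structure, noise levels, 2-sigma threshold) is an invented simplification. One stock is regressed on the current and one-day-lagged returns of its peer group, the lag crudely standing in for the non-simultaneous-quote adjustment, and large residuals flag the stock as rich or cheap versus its peers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: one target stock and 19 peers, all driven by a common industry
# factor plus idiosyncratic noise (hypothetical data).
n, n_peers = 500, 19
factor = rng.normal(0, 0.01, n)
peers = factor[:, None] + rng.normal(0, 0.005, (n, n_peers))
target = factor + rng.normal(0, 0.005, n)

# Design matrix: intercept, current peer returns, and one-day-lagged
# peer returns (the lag loosely allowing for non-simultaneous quotes).
X = np.column_stack([np.ones(n - 1), peers[1:], peers[:-1]])
y = target[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Theoretical" return implied by the peer group, and the residual.
fitted = X @ coef
residual = y - fitted

# Signal sketch: a large positive residual suggests the stock is rich
# versus its peers (short, -1); a large negative one that it is cheap (buy, +1).
z = (residual - residual.mean()) / residual.std()
signal = np.where(z > 2, -1, np.where(z < -2, 1, 0))
print("days flagged rich/cheap:", int((signal != 0).sum()))
```

In a real implementation you would trade the flagged stock against the tradable portion of the peer proxy so that net market exposure is zero, as the post describes, and the estimation would explicitly model bid/ask spreads and quotation errors rather than relying on a single lag.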