July 2nd, 2007, 9:40 am
I am reviewing an equity factor model for a friend and need a little help. It is an equity selection model. The model is pretty basic: a wide range of common stock and fundamental data is downloaded into a spreadsheet (e.g., P/E, trading volume, dividend yield, earnings estimate changes, etc.). The data is normalized and a weight is created for each factor. For my own models, I have regressed normalized data from individual stocks against recent stock performance to create the weights, and I have also run that regression separately for the stocks in each sector. The model I am helping evaluate does a few things I have not seen before.

The factors are calculated for each sector as an average over the stocks in that sector. For example, the sector P/E is the arithmetic average of the P/E of every stock in that sector. In effect, the model creates one "synthetic" company per sector from the average of each factor (first sketch below). What do people think about the idea of using an "averaged" company?

The average factors for each sector and the most recent monthly return are used to calculate the weights. The Excel Solver finds the weights by maximizing the correlation between the sector returns and the sector scores, where a sector score is the sum over factors of the weight (varied by the Solver) times the factor value. Each weight is constrained to lie between negative one and positive one. The Solver starts each weight from an initial guess of -1.0 and is run 20 times, with the initial guess varied from -1.0 to +1.0 in 0.1 increments (second sketch below). What do people think about this method of finding weights?

The last step is billed as a Monte Carlo simulation: 1000 sets of factor weights are randomly created, and the performance of each sector is calculated and saved (third sketch below). Is this useful and mainstream?

Thanks in advance for the thoughts,
JDF
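P.S. In case it helps the discussion, here are rough Python sketches of how I read each step. They are my own reconstructions, not the actual spreadsheet, and the column and function names are made up.

First, the data prep as I understand it: z-score each factor across the universe, then average the normalized factors within each sector to form the "synthetic" company.

import pandas as pd

def sector_averages(df: pd.DataFrame) -> pd.DataFrame:
    # df has one row per stock, a 'sector' column, and factor columns such as
    # 'pe', 'div_yield', 'volume', 'est_change' (hypothetical names)
    factor_cols = [c for c in df.columns if c != "sector"]
    # normalize each factor across the whole universe (z-score)
    normalized = (df[factor_cols] - df[factor_cols].mean()) / df[factor_cols].std()
    normalized["sector"] = df["sector"].values
    # arithmetic average of each normalized factor within a sector
    return normalized.groupby("sector").mean()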
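Second, the Solver step as I read it, using scipy in place of the Excel Solver (so only an approximation of what the workbook does): choose weights in [-1, 1] that maximize the correlation between the sector scores and last month's sector returns, restarting from a grid of initial guesses.

import numpy as np
from scipy.optimize import minimize

def fit_weights(sector_factors: np.ndarray, sector_returns: np.ndarray) -> np.ndarray:
    # sector_factors: (n_sectors, n_factors) average factors; sector_returns: (n_sectors,)
    n_factors = sector_factors.shape[1]

    def neg_correlation(w):
        scores = sector_factors @ w
        if np.std(scores) == 0:
            return 0.0  # zero-variance scores have an undefined correlation; treat as no fit
        return -np.corrcoef(scores, sector_returns)[0, 1]

    best_w, best_val = None, np.inf
    # restart from initial guesses between -1.0 and +1.0 in 0.1 steps
    # (the spreadsheet reportedly does 20 such runs)
    for guess in np.arange(-1.0, 1.0 + 1e-9, 0.1):
        res = minimize(neg_correlation, x0=np.full(n_factors, guess),
                       bounds=[(-1.0, 1.0)] * n_factors)
        if res.fun < best_val:
            best_w, best_val = res.x, res.fun
    return best_w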
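Third, the "Monte Carlo" step as described: draw 1000 random weight vectors, score the sectors with each, and record how well each random weighting lines up with the realized sector returns.

import numpy as np

def random_weight_simulation(sector_factors: np.ndarray,
                             sector_returns: np.ndarray,
                             n_trials: int = 1000,
                             seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    n_factors = sector_factors.shape[1]
    correlations = np.empty(n_trials)
    for i in range(n_trials):
        w = rng.uniform(-1.0, 1.0, size=n_factors)  # random weights in [-1, 1]
        scores = sector_factors @ w
        correlations[i] = np.corrcoef(scores, sector_returns)[0, 1]
    # distribution of outcomes to compare the optimized weights against
    return correlations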