- challenged
**Posts:** 55 **Joined:**

There is a VBA example for use in Excel at http://www.geocities.com/WallStreet/9245/vba11.htm

Searching "Cholesky decomposition" in Google led me to this page. I have this topic this quarter, and my instructor's equation is X = chol(cov())*randn() + Mu(). I have no idea why it is the product with chol rather than directly with the covariance. Does the Cholesky decomposition retain the properties of the original covariance? And what is the second term in your folks' description (Sqrt(1 - rho^2) Z2)? Is it the same as mine (Mu)? I also wonder whether adding it carries the mean over to X. I appreciate all your inputs. Hai

- nastradamus
**Posts:** 13 **Joined:**

QuoteOriginally posted by: bluesea "... my instructor's equation is X = chol(cov())*randn() + Mu(). I have no idea why it is the product with chol, not directly with the covariance ..."

You can do the math: if x = sqrt(var)*randn(), then x ~ N(0, var). Likewise, if X = chol(cov())*randn(), then X ~ N(0, cov()). You can think of chol as a square-root function for a matrix.
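A minimal sketch of the point above in Python with NumPy (the mean vector and covariance matrix are made-up illustrative values, not from the thread): multiply independent standard normals by the Cholesky factor, add the mean, and the samples follow the target N(mu, cov).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target mean and covariance (hypothetical numbers).
mu = np.array([0.01, 0.02])
cov = np.array([[0.040, 0.018],
                [0.018, 0.090]])

L = np.linalg.cholesky(cov)      # cov = L @ L.T: the "matrix square root"

n = 100_000
Z = rng.standard_normal((n, 2))  # independent N(0,1) draws
X = Z @ L.T + mu                 # X ~ N(mu, cov): chol scales, Mu shifts

# Sample moments should be close to the targets.
print(X.mean(axis=0))
print(np.cov(X, rowvar=False))
```

This also answers the question about the mean: adding Mu() after the chol(cov())*randn() product shifts the distribution without disturbing the covariance.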

A more challenging question is how to generate correlated random numbers. From the discussion it is quite simple, at least theoretically, to generate correlated N(0,1) random numbers; Cholesky suffices. In my work, we need to generate correlated random numbers that are also correlated with a given set of random numbers. Specifically, we have a set of random numbers x1, x2, x3 (say just 3 to keep it simple), which could come from history. We cannot, or do not want to, change them. The question is to find y1, y2, y3, y4 (4 need not equal 3) such that x1, x2, x3, y1, y2, y3, y4 satisfy a given variance-covariance matrix, say A.

Let's analyze it a little more. A must be a 7-by-7 positive definite matrix. Since x1, x2, x3 are fixed and cannot be regenerated, the upper-left 3-by-3 submatrix of A is exactly the variance-covariance matrix of x1, x2, x3. Decompose A into blocks, A = (A11, A12; A21, A22): A22 is the variance-covariance matrix of y1, y2, y3, y4, while A12 holds the covariances between the X's and the Y's. So far so good. Now the problems are:

1. Design an algorithm to solve for and simulate y1, y2, y3, y4, and prove it theoretically correct.
2. A practical issue: I have not seen any algorithm or program that gives me truly independent random numbers. Not sure if anybody has one; if you do, please share.
3. As the sizes increase, that is, not 3 or 4 but N and M, it quickly becomes a challenge to see how many simulations we have to do in order to truly get what we expect.

Any good ideas? Thanks, XYZ

I have used the VBA code mentioned earlier and found here: http://www.geocities.com/WallStreet/9245/vba11.htm It worked very well and can be modified to increase the number of variables. I would not try to generate too many RVs, though, as I believe there may be problems with matrices that are too large for a typical PC. I was using ten and the results looked very good. Mike

The Cholesky method is probably the most literally correct, and it assumes you have a correct correlation matrix. The problem in reality is that correlation matrices give you far more parameters than you can ever hope to estimate with any fair degree of reliability and stability.

An alternative, along the lines of CAPM and APT, that I have been trying to adopt in practice is to select just a few factors that explain most of the correlation between the variables, so that any residuals should be, and are, statistically uncorrelated. These are much easier to simulate: instead of an NxN Cholesky-decomposed matrix (which you can't fudge), you have only a few coefficients per factor (which can be modelled so they are independent, or correlated via a much smaller, more stable matrix), and you can generate each variable, and incremental variables, with far greater ease and flexibility.
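A rough sketch of the factor approach described above, with hypothetical loadings and residual volatilities (all numbers invented for illustration): each variable is a linear combination of a few independent factors plus independent noise, so no NxN Cholesky factor is ever formed.

```python
import numpy as np

rng = np.random.default_rng(1)

n_assets, n_factors, n_draws = 50, 3, 200_000

# Hypothetical factor loadings and idiosyncratic vols (made-up numbers).
B = rng.uniform(-0.5, 0.5, size=(n_assets, n_factors))    # factor loadings
eps_vol = rng.uniform(0.05, 0.15, size=n_assets)          # residual vols

F = rng.standard_normal((n_draws, n_factors))             # independent factors
eps = rng.standard_normal((n_draws, n_assets)) * eps_vol  # independent residuals
X = F @ B.T + eps                                         # factor-model draws

# The implied covariance is B B' + diag(eps_vol^2); the full NxN matrix
# is a byproduct of a few parameters, never estimated or decomposed.
implied = B @ B.T + np.diag(eps_vol**2)
print(np.abs(np.cov(X, rowvar=False) - implied).max())
```

With 3 factors the model carries 50*3 + 50 parameters instead of 50*49/2 free correlations, which is the stability gain the post is pointing at.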

If the multivariate probability density of (dependent) random variables is a finite mixture of tensor products of arbitrary univariate densities, then it is very easy to generate (draw) points from it: first draw the parcel, and then, for each variable, draw a univariate point from its univariate density. A bivariate example is given in this thesis, page 24.

- rainingday
**Posts:** 1 **Joined:**

Hi, I have the same question as you. Do you have any solution now? I am eager to know your answer soon. Thanks!

QuoteOriginally posted by: xyzyang "A more challenging question is how to generate correlated random numbers ... Any good ideas? Thanks, XYZ"

A newbie question, so apologies if already covered: what is the deal with simulating random variables if the distributions are known to be NOT normal or lognormal, some stable Levy type, say (a, b, g, d)? Does Cholesky require a normal distribution? Thanks in advance.

Quote "Hi, I have the same question as you. Do you have any solution now? I am eager to know your answer soon. Thanks!"

Theoretically, the approach is to compute the conditional distribution of the unknown random variables conditioned on the known ones. In the normal case, the analytic form of the conditional distribution can be used to simulate such rv's.
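For the normal case, the conditional distribution in the earlier 7-by-7 setup has a closed form: with A = (A11, A12; A21, A22) and zero means assumed, y | x ~ N(A21 A11^{-1} x, A22 - A21 A11^{-1} A12), the Schur complement. A sketch (the matrix A here is a random positive definite example, not data from the thread):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 7x7 covariance A, built to be positive definite.
M = rng.standard_normal((7, 7))
A = M @ M.T + 7 * np.eye(7)

A11, A12 = A[:3, :3], A[:3, 3:]
A21, A22 = A[3:, :3], A[3:, 3:]

x = rng.standard_normal(3)                  # the fixed, "historical" x1..x3

# Conditional law of (y1..y4) given x, zero means assumed:
#   mean = A21 A11^{-1} x,  cov = A22 - A21 A11^{-1} A12
K = A21 @ np.linalg.inv(A11)
cond_mean = K @ x
cond_cov = A22 - K @ A12                    # Schur complement of A11 in A

L = np.linalg.cholesky(cond_cov)
y = cond_mean + L @ rng.standard_normal(4)  # one conditional draw of y1..y4
print(y)
```

Each simulation draws fresh y's around the x-dependent conditional mean, so the joint sample (x, y) reproduces A without ever regenerating the fixed x's.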

Last edited by erain on November 13th, 2007, 11:00 pm, edited 1 time in total.

1. Correlation and dependence are different. Correlation only captures linear dependence, or first-order dependence. Dependence is described by the joint distribution and all cross moments (from 1 to infinity).
2. There are two ways to generate correlated samples based solely on marginal distributions and a correlation matrix: 1) the copula method: the key is to find a copula dependence function; there is usually no closed-form solution for the final joint distribution, so discretization is needed; 2) a method called Normal To Anything (NORTA), which uses a multivariate normal to approximate the target joint distribution up to the first order.
3. Another way is to "fit" a full joint distribution by moment matching. In theory this can converge to the true one, given all marginal moments and cross moments. However, it is hard to do, since you need to solve large systems of nonlinear equations.
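A simplified NORTA-style sketch, assuming SciPy is available and with invented margins (exponential and lognormal) and correlation: correlate standard normals via Cholesky, map them to uniforms with the normal CDF, then push each uniform through the target marginal's inverse CDF. Note this feeds the target correlation straight into the latent normals; full NORTA solves for the latent correlation that yields the desired output correlation, so here the realized Pearson correlation is only approximately the input rho.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Illustrative target: two variables, correlation 0.6, non-normal margins.
rho = 0.6
corr = np.array([[1.0, rho],
                 [rho, 1.0]])

L = np.linalg.cholesky(corr)
Z = rng.standard_normal((100_000, 2)) @ L.T  # correlated N(0,1)
U = stats.norm.cdf(Z)                        # correlated uniforms (Gaussian copula)

x1 = stats.expon.ppf(U[:, 0], scale=2.0)     # exponential margin, mean 2
x2 = stats.lognorm.ppf(U[:, 1], s=0.5)       # lognormal margin

print(np.corrcoef(x1, x2)[0, 1])             # near, but not exactly, 0.6
```

This also addresses the earlier stable-Levy question: Cholesky itself only needs the latent normals; the margins are imposed afterwards through the inverse CDFs.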

QuoteOriginally posted by: exotiq "The Cholesky method is probably the most literally correct, and assumes you have a correct correlation matrix ... you can generate each variable and incremental variables with far greater ease and flexibility."

I agree with you. Regression (maybe nonlinear regression) does a decent job of capturing dependence, of course mainly from computational considerations. Principal component analysis may work fine as well.

A common way to simulate correlated values is to: 1) simulate standalone deviates from the margins, and 2) rearrange the simulated deviates to obtain the desired correlation between them. The rearrangement step can be performed with heuristic optimization approaches, such as simulated annealing. With this approach, the moments of the marginal distributions are preserved exactly.
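A sketch of the two steps above with made-up margins: instead of annealing, the rearrangement here uses an Iman-Conover-style rank reordering against a correlated Gaussian reference sample. Only the pairing of the deviates changes, so the margins (and hence their moments) are untouched.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000

# Step 1: simulate standalone deviates from the desired margins
# (an exponential and a uniform margin, as an illustration).
a = rng.exponential(scale=1.0, size=n)
b = rng.uniform(0.0, 1.0, size=n)

# Step 2: rearrange them so their ranks match those of a correlated
# Gaussian reference sample (the annealing variant in the post would
# search for the ordering instead of taking it from a reference).
rho = 0.7
L = np.linalg.cholesky(np.array([[1.0, rho],
                                 [rho, 1.0]]))
ref = rng.standard_normal((n, 2)) @ L.T

ranks0 = np.argsort(np.argsort(ref[:, 0]))   # rank of each reference point
ranks1 = np.argsort(np.argsort(ref[:, 1]))
a_re = np.sort(a)[ranks0]                    # same values, new pairing
b_re = np.sort(b)[ranks1]

print(np.corrcoef(a_re, b_re)[0, 1])         # rank-induced correlation
```

Because `a_re` is a permutation of `a`, every marginal moment is preserved exactly, which is the property the post highlights.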
