There are a few ways you can go about this. If you know the variance-covariance matrix then, of course, you know the correlation matrix, so I won't say much more about correlations directly. All of these methods enforce positive-definiteness of the variance-covariance matrix.

1) If your data observations are multivariate Gaussian, N(m, V), and you have no missing observations, then, if you can express your prior on the inverse variance-covariance matrix, V^{-1}, as a Wishart distribution, you are OK: the Bayesian posterior distribution of V^{-1} is then also Wishart. Have a look at http://en.wikipedia.org/wiki/Wishart_distribution, and also read about conjugate priors in Bayesian statistics. A sketch of the conjugate update follows below.
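Here is a minimal sketch of that conjugate update, assuming the mean m is known (if it isn't, the normal-inverse-Wishart setup is the usual extension); all the names and the prior values are illustrative:

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)

# Illustrative setup: known mean m and some simulated 3-d Gaussian data.
m = np.zeros(3)
V_true = np.array([[1.0, 0.5, 0.2],
                   [0.5, 1.0, 0.3],
                   [0.2, 0.3, 1.0]])
X = rng.multivariate_normal(m, V_true, size=200)

# Wishart prior on the precision matrix V^{-1}: df nu0, scale S0 (assumed).
nu0 = 5.0
S0 = np.eye(3)

# Conjugate update with known mean: the posterior is again Wishart, with
# df = nu0 + n and scale = (S0^{-1} + sum_i (x_i - m)(x_i - m)^T)^{-1}.
n = X.shape[0]
scatter = (X - m).T @ (X - m)
nu_post = nu0 + n
S_post = np.linalg.inv(np.linalg.inv(S0) + scatter)

# Posterior mean of V^{-1} is nu_post * S_post; invert it for a point
# estimate of V, or draw posterior samples (each draw is automatically
# positive definite) to quantify uncertainty.
V_est = np.linalg.inv(nu_post * S_post)
precision_draws = wishart(df=nu_post, scale=S_post).rvs(size=1000,
                                                        random_state=rng)
```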
2) If you know (or assume) the likelihood of your data, you can estimate the variance-covariance matrix by maximum likelihood. This way, you can impose whichever restrictions you like, quite explicitly. The method is not particularly good for imposing vague restrictions, but if you want, say, to fix \rho_{12} at a particular value, you can do that easily and, furthermore, can use likelihood ratio tests to tell how much your restriction affects things. (See the first sketch below.)

3) As above, you have to know the likelihood. You can then assume completely general priors on V or, if you like, on P, the correlation matrix. Since the posterior is proportional to prior × likelihood, you can use Markov chain Monte Carlo (MCMC) simulation to estimate the posterior distribution of your parameters in almost complete generality. (See the second sketch below.)
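A sketch of 2), assuming zero-mean bivariate Gaussian data; the tanh reparametrization is just one convenient way to keep |\rho| < 1 and hence V positive definite, and the value 0.5 for the restriction is arbitrary:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2, multivariate_normal

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 2.0]], size=300)

def neg_loglik(theta, rho=None):
    # theta = (log s1, log s2[, z]); rho = tanh(z) keeps |rho| < 1, which,
    # together with positive variances, forces V to be positive definite.
    s1, s2 = np.exp(theta[0]), np.exp(theta[1])
    r = np.tanh(theta[2]) if rho is None else rho
    V = np.array([[s1**2, r * s1 * s2], [r * s1 * s2, s2**2]])
    return -multivariate_normal(mean=[0, 0], cov=V).logpdf(X).sum()

# Unrestricted fit versus a fit with rho_12 fixed at, say, 0.5.
full = minimize(neg_loglik, x0=[0.0, 0.0, 0.0])
restr = minimize(neg_loglik, x0=[0.0, 0.0], args=(0.5,))

# Likelihood ratio test: 2 * (l_full - l_restr) ~ chi2(1) under the
# restriction, since one parameter has been pinned down.
lr = 2.0 * (restr.fun - full.fun)
p_value = chi2.sf(lr, df=1)
```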
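And a sketch of 3): random-walk Metropolis on the same unconstrained parametrization, with an assumed weak N(0, 10^2) prior on each transformed parameter (any prior on V or P would do, as long as you can evaluate it):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(2)
X = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 2.0]], size=300)

def log_post(theta):
    # Unnormalized log posterior: Gaussian likelihood plus a weak
    # N(0, 10^2) prior on each unconstrained parameter (an assumption).
    s1, s2, r = np.exp(theta[0]), np.exp(theta[1]), np.tanh(theta[2])
    V = np.array([[s1**2, r * s1 * s2], [r * s1 * s2, s2**2]])
    return (multivariate_normal(mean=[0, 0], cov=V).logpdf(X).sum()
            + norm(0, 10).logpdf(theta).sum())

# Random-walk Metropolis over theta = (log s1, log s2, atanh(rho)); the
# parametrization again guarantees every visited V is positive definite.
theta = np.zeros(3)
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + 0.05 * rng.standard_normal(3)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

# Posterior draws of rho_12, after discarding burn-in.
rho_draws = np.tanh(np.array(samples[5000:])[:, 2])
```

In practice you would use an off-the-shelf sampler rather than hand-rolled Metropolis, but the point stands: once you can evaluate prior × likelihood, the posterior of V (or P) is available in almost complete generality.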