acos() is not sacred, but it is extremely useful because acos(rho_ij) is the angle between the data vectors i and j that produced the covariance and correlation matrices. It lets you check (and potentially correct) the elements of a matrix of rho values for consistency with the geometry of the data. For example, given rho_ij and rho_jk, the constraint on rho_ik is |acos(rho_ij) - acos(rho_jk)| <= acos(rho_ik) <= acos(rho_ij) + acos(rho_jk), where the bounds need adjusting to handle the wrapping of angles (acos maps into [0, π], and the three angles of a valid triple sum to at most 2π, so the upper bound is really min(theta_ij + theta_jk, 2π - theta_ij - theta_jk)). It's the triangle inequality in spherical coordinates. One can develop similar constraints on longer circuits of rho, but that involves more combinatoric logic about the comparisons and interval-wrapping effects.
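A minimal sketch of that per-triple check (function and parameter names are mine, not from the thread):

```python
import numpy as np

def triple_consistent(rho_ij, rho_jk, rho_ik, tol=1e-9):
    """Check the angle triangle inequality for one triple of correlations.

    acos maps each rho into an angle in [0, pi]; for three unit vectors,
    theta_ik must satisfy
        |theta_ij - theta_jk| <= theta_ik
                              <= min(theta_ij + theta_jk,
                                     2*pi - (theta_ij + theta_jk)),
    where the second upper bound handles the wrap past pi.
    """
    t_ij, t_jk, t_ik = np.arccos(np.clip([rho_ij, rho_jk, rho_ik], -1.0, 1.0))
    lo = abs(t_ij - t_jk)
    hi = min(t_ij + t_jk, 2 * np.pi - (t_ij + t_jk))
    return lo - tol <= t_ik <= hi + tol
```

For instance, three mutual correlations of 0.9 pass, while rho_ij = rho_jk = 0.9 with rho_ik = -0.9 fails: two small angles can't bracket one near π.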
What's nice is that these angle constraints are somewhat "local": it may be quite easy to conclude that a particular rho is the likely cause of the trouble (the percentage of violated inequalities is higher for a particular i,j), which can direct better estimation effort at that particular rho.
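One way to turn that "local" observation into a diagnostic is to tally, for each pair (i, j), the fraction of triples containing it that violate the inequality. A sketch (`violation_rates` is a hypothetical helper, not something from the thread):

```python
import itertools
import numpy as np

def violation_rates(R, tol=1e-9):
    """Fraction of angle triangle inequalities violated per pair (i, j).

    A rate much higher for one pair than the others points at that
    rho as the likely culprit."""
    n = R.shape[0]
    theta = np.arccos(np.clip(R, -1.0, 1.0))
    viol = np.zeros((n, n))
    total = np.zeros((n, n))
    for i, j, k in itertools.combinations(range(n), 3):
        a, b, c = theta[i, j], theta[j, k], theta[i, k]
        # every side must satisfy the (wrapped) triangle inequality
        ok = all(
            abs(x - y) - tol <= z <= min(x + y, 2 * np.pi - (x + y)) + tol
            for x, y, z in ((a, b, c), (a, c, b), (b, c, a))
        )
        for p, q in ((i, j), (j, k), (i, k)):
            total[p, q] += 1
            viol[p, q] += not ok
    rate = np.divide(viol, total, out=np.zeros_like(viol), where=total > 0)
    return rate + rate.T  # symmetric, zero diagonal
```

Corrupting one entry of an otherwise valid correlation matrix makes that pair's rate jump to the top, which is the "induce better estimation efforts for that particular rho" step.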
It's just a lot of crank-turning, although it scales poorly to large N (the number of triples grows as O(N^3)). I wouldn't be surprised if checking via the determinant of a covariance or correlation matrix is a bit easier thanks to symmetry, but I've never looked into it.
negative e-values -> 0
Yep! It's just what outrun said.
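A minimal sketch of that "negative e-values -> 0" repair, with one extra step the comment doesn't mention but that is usually wanted for a correlation matrix (rescaling the diagonal back to 1 after clipping):

```python
import numpy as np

def clip_to_psd(R):
    """Make a symmetric matrix positive semidefinite by eigenvalue clipping.

    Eigendecompose, zero out the negative eigenvalues, rebuild, then
    rescale so the diagonal is 1 again (the clip alone shrinks it)."""
    w, V = np.linalg.eigh((R + R.T) / 2)   # symmetrize for numerical safety
    w = np.clip(w, 0.0, None)              # negative e-values -> 0
    R_psd = V @ np.diag(w) @ V.T
    d = np.sqrt(np.diag(R_psd))
    R_psd = R_psd / np.outer(d, d)         # restore unit diagonal
    np.fill_diagonal(R_psd, 1.0)
    return R_psd
```

Note the result is a nearby valid correlation matrix, not the nearest one in any formal norm; Higham-style alternating projections do that, at more cost.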
cos() is a wiggly function going to eternity
True, but in this case the domain is restricted: these are angles between data vectors, and acos returns values in [0, π], so there's no periodicity to worry about beyond the wrap handled in the inequalities.