I discovered an area where this approach might be helpful: the multivariate normal distribution, which is important in machine learning and data analysis.
Most people are familiar with the univariate Gaussian e^(-x^2 / (2 s^2)), where s^2 is the variance or "width" of the bell-shaped curve. If you want it to be a probability distribution you normalize it so its integral adds up to 1, and you can shift the mean by making the x term (x - u)^2. In the univariate case the normalizing factor is usually written as 1 / (s * sqrt(2π)), which just says the curve gets taller as it gets narrower.
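To make the pieces concrete, here is a minimal sketch of that density in Python (the function name `univariate_normal_pdf` is mine, not from the OP):

```python
import math

def univariate_normal_pdf(x, mean=0.0, var=1.0):
    """Bell curve e^(-(x - u)^2 / (2 s^2)) times the normalizing
    factor 1 / (s * sqrt(2*pi)), so it integrates to 1."""
    s = math.sqrt(var)
    return math.exp(-((x - mean) ** 2) / (2 * var)) / (s * math.sqrt(2 * math.pi))
```

At the mean the density equals the normalizing factor itself, 1 / (s * sqrt(2π)), which is why a narrower curve (smaller s) is taller.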
In the multivariate case the normalizing factor involves the determinant of the covariance matrix. It is written as

1 / sqrt( (2π)^n * det(v) )

where v is the covariance matrix and n is the dimensionality (v is n x n, so it is square by definition). Since a covariance matrix is positive definite, det(v) > 0 and no absolute value is needed.
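A small sketch of that normalizing factor, assuming NumPy (the helper name `mvn_normalizer` is mine):

```python
import numpy as np

def mvn_normalizer(cov):
    """Normalizing factor 1 / sqrt((2*pi)^n * det(v)) for an
    n x n covariance matrix v."""
    cov = np.asarray(cov, dtype=float)
    n = cov.shape[0]
    return 1.0 / np.sqrt((2 * np.pi) ** n * np.linalg.det(cov))
```

With n = 1 and v = [[s^2]] this reduces to the univariate factor 1 / (s * sqrt(2π)), which is a quick sanity check.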
There's a plethora of statistical methods involving "dimensionality reduction". Principal component analysis is one popular example; it works by manipulating the covariance matrix (specifically, its eigendecomposition), which might be amenable to the method in the OP.
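For reference, a minimal sketch of PCA via the covariance matrix's eigendecomposition (function name `pca_reduce` is mine; this is the textbook recipe, not necessarily what the OP's method would do):

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto the top-k principal components,
    found as eigenvectors of the sample covariance matrix."""
    Xc = X - X.mean(axis=0)              # center the data
    cov = np.cov(Xc, rowvar=False)       # n x n covariance matrix
    vals, vecs = np.linalg.eigh(cov)     # eigh: covariance is symmetric
    order = np.argsort(vals)[::-1]       # sort by descending variance
    W = vecs[:, order[:k]]               # top-k eigenvectors as columns
    return Xc @ W
```

The covariance matrix here is the same object whose determinant appears in the density's normalizing factor above.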