On 11/25/2012 10:34 AM, Veit Elser wrote:
You can make life a lot simpler by using normal distributions on the matrix elements instead of uniform ones.
First off, suppose you are interested in (the different problem of) generating random vectors [x y] with a distribution that is rotationally symmetric. You would choose independent normal distributions on x and y (rather than uniform) because, well, e^(-x^2) e^(-y^2) = e^(-x^2-y^2), which depends only on x^2+y^2 and is therefore the same in every direction.
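Here's a minimal numerical illustration of that point in Python (my own sketch, nothing in it beyond the claim above): the angle of [x y] comes out uniform when x and y are independent normals, and visibly non-uniform when they are independent uniforms.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Independent standard normals: the joint density
# e^(-x^2) e^(-y^2) = e^(-(x^2+y^2)) depends only on the radius,
# so the angle of [x y] should be uniform.
xy_normal = rng.normal(size=(n, 2))
angles_normal = np.arctan2(xy_normal[:, 1], xy_normal[:, 0])

# Independent uniforms on [-1, 1] are not rotationally symmetric:
# the angle distribution piles up toward the corners of the square.
xy_uniform = rng.uniform(-1, 1, size=(n, 2))
angles_uniform = np.arctan2(xy_uniform[:, 1], xy_uniform[:, 0])

hist_n, _ = np.histogram(angles_normal, bins=36, density=True)
hist_u, _ = np.histogram(angles_uniform, bins=36, density=True)
print("spread of angle histogram, normal :", hist_n.std())  # ~ 0 (flat)
print("spread of angle histogram, uniform:", hist_u.std())  # clearly > 0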
The most general probability distribution on 2x2 Hermitian matrices that is symmetric with respect to arbitrary unitary transformations is (in your notation),
dP = f(H) dp dq dr ds
where the function f is invariant, i.e. f(UHU^(-1)) = f(H) for arbitrary unitary U. A "natural" choice is f(H) = e^(-Tr H^2) = e^(-p^2-q^2-r^2-s^2). This is also Wigner's choice, being the maximum-entropy distribution among all distributions with a given expectation value of Tr H^2.
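If you want to draw from that distribution numerically, the following Python sketch does it (my own parameterization and variable names, not the p,q,r,s above; the variances are chosen so that the density of the matrix entries is proportional to e^(-Tr H^2)).

import numpy as np

def sample_hermitian_2x2(rng):
    # Write H = [[a, b + i c], [b - i c, d]] with a, b, c, d real.
    # Then Tr H^2 = a^2 + d^2 + 2 b^2 + 2 c^2, so a density proportional
    # to e^(-Tr H^2) means variance 1/2 for the diagonal entries and
    # variance 1/4 for the real and imaginary parts of the off-diagonal.
    a, d = rng.normal(scale=np.sqrt(0.5), size=2)
    b, c = rng.normal(scale=np.sqrt(0.25), size=2)
    return np.array([[a, b + 1j * c],
                     [b - 1j * c, d]])

rng = np.random.default_rng(1)
H = sample_hermitian_2x2(rng)
print(np.linalg.eigvalsh(H))   # the two (real) eigenvalues, ascending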
If you are interested in eigenvalue distributions you would prefer a different set of coordinates from p,q,r,s. Two coordinates can be the eigenvalues e1 and e2, leaving two additional "angular" coordinates. It's a relatively straightforward exercise to compute the Jacobian of the transformation, integrate out the angular coordinates, and arrive at (up to a normalization constant):
dP' = f(H) (e1-e2)^2 de1 de2
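A quick Monte Carlo check of that formula (again my own sketch): sample matrices as above, histogram the eigenvalue spacing, and compare with the spacing density you get after a change of variables not spelled out above, namely s^2 e^(-s^2/2) up to normalization, for s = e2 - e1 >= 0.

import numpy as np

rng = np.random.default_rng(2)

def sample_hermitian_2x2(rng):
    # Same sampler as before: entry density proportional to e^(-Tr H^2).
    a, d = rng.normal(scale=np.sqrt(0.5), size=2)
    b, c = rng.normal(scale=np.sqrt(0.25), size=2)
    return np.array([[a, b + 1j * c], [b - 1j * c, d]])

n = 200_000
spacings = np.empty(n)
for i in range(n):
    e1, e2 = np.linalg.eigvalsh(sample_hermitian_2x2(rng))  # e1 <= e2
    spacings[i] = e2 - e1

# With s = e2 - e1 and u = e1 + e2 we have de1 de2 = (1/2) ds du and
# e1^2 + e2^2 = (s^2 + u^2)/2, so integrating out u leaves a spacing
# density proportional to s^2 e^(-s^2/2) on s >= 0 (normalized below).
hist, edges = np.histogram(spacings, bins=60, density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
predicted = np.sqrt(2 / np.pi) * mids**2 * np.exp(-mids**2 / 2)
print("max |histogram - predicted| =", np.abs(hist - predicted).max())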
The (e1-e2)^2 is the famous "level repulsion" factor, which explains the statistics of energy level spacings and apparently also the spacing of zeroes of the zeta function. Since f(H)=e^(-e1^2-e2^2), the problem of calculating the probability that both e1 and e2 are positive is just a matter of doing some Gaussian integrals ...
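For what it's worth, here is a sketch of those Gaussian integrals done numerically (scipy quadrature of the unnormalized eigenvalue density above); the closed-form value 1/4 - 1/(2 pi) in the comment is my own evaluation, quoted only as a cross-check.

import numpy as np
from scipy.integrate import dblquad

def weight(e1, e2):
    # Unnormalized joint eigenvalue density from dP' above (symmetric in
    # its arguments, so dblquad's argument ordering doesn't matter here).
    return (e1 - e2) ** 2 * np.exp(-e1 ** 2 - e2 ** 2)

# Numerator: both eigenvalues positive.  Denominator: whole plane.
# The integrand decays fast enough that +/- 10 stands in for infinity.
num, _ = dblquad(weight, 0, 10, 0, 10)
den, _ = dblquad(weight, -10, 10, -10, 10)
print("P(both eigenvalues > 0) =", num / den)

# Doing the integrals by hand (my evaluation, not stated in the post)
# gives 1/4 - 1/(2 pi), roughly 0.091 -- noticeably less than the 1/4
# you would get without the (e1-e2)^2 repulsion factor.
print("1/4 - 1/(2 pi)          =", 0.25 - 1 / (2 * np.pi))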
But why not just take e1 and e2 to be independently ~N(0,1)?

Brent