You can make life a lot simpler by using normal distributions on the matrix elements instead of uniform ones.

First off, suppose you are interested in (the different problem of) generating random vectors [x y] with a rotationally symmetric distribution. You would choose independent normal distributions on x and y (rather than uniform) because, well, e^(-x^2) e^(-y^2) = e^(-(x^2+y^2)) depends only on the length of the vector.

The most general probability distribution on 2x2 Hermitian matrices that is symmetric with respect to arbitrary unitary transformations is (in your notation)

   dP = f(H) dp dq dr ds

where the function f is invariant, i.e. f(U H U^(-1)) = f(H) for arbitrary unitary U. A "natural" choice is f(H) = e^(-Tr H^2) = e^(-r^2 - s^2 - 2p^2 - 2q^2). This is also Wigner's choice: it is the distribution of maximum entropy among all distributions with a given expectation value of Tr H^2.

If you are interested in eigenvalue distributions you would prefer a different set of coordinates from p, q, r, s. Two of the coordinates can be the eigenvalues e1 and e2, leaving two additional "angular" coordinates. It's a relatively straightforward exercise to compute the Jacobian of the transformation, integrate out the angular coordinates, and arrive at (up to a normalization constant)

   dP' = f(H) (e1-e2)^2 de1 de2

The (e1-e2)^2 is the famous "level repulsion" factor, which explains the statistics of energy level spacings and apparently also the spacing of zeros of the zeta function.

Since f(H) = e^(-e1^2-e2^2) in these coordinates, calculating the probability that both e1 and e2 are positive is just a matter of doing some Gaussian integrals ...

-Veit

On Nov 25, 2012, at 11:21 AM, "W. Edwin Clark" <wclark@mail.usf.edu> wrote:
What we are asking is: when is a Hermitian matrix positive definite? By Sylvester's criterion <http://en.wikipedia.org/wiki/Sylvester%27s_criterion> a Hermitian matrix is positive definite iff its leading principal minors are all positive. So for the 2x2 matrix

H = [r,      p + qi]
    [p - qi, s     ]

this becomes simply r > 0 and rs - (p^2 + q^2) > 0. It should be possible to calculate the exact probability. Since I am lazy I will only do it for q = 0, which is itself interesting.
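[A quick numerical check of the criterion against direct eigenvalue computation; an editor's sketch, not part of the original message:]

```python
import numpy as np

rng = np.random.default_rng(8)
disagreements = 0
for _ in range(1000):
    p, q, r, s = rng.uniform(-1.0, 1.0, 4)
    H = np.array([[r, p + 1j * q],
                  [p - 1j * q, s]])
    # Sylvester: r > 0 and det = rs - (p^2 + q^2) > 0
    sylvester = (r > 0) and (r * s - (p**2 + q**2) > 0)
    positive_definite = bool(np.all(np.linalg.eigvalsh(H) > 0))
    if sylvester != positive_definite:
        disagreements += 1
print(disagreements)  # 0
```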
In the case q = 0, H is real symmetric and the conditions become r > 0 and rs > p^2. The region is then the interior of a surface in r,s,p space with r > 0, s > 0, |p| < sqrt(rs). If we consider the part of this solid lying in the box |r| <= n, |s| <= n, |p| <= n then, if my calculations are correct, its volume is 8*n^3/9, while the volume of the whole box is (2*n)^3 = 8*n^3; the quotient is 1/9. Note that this holds no matter how large n is, so it is natural to take 1/9 as the probability that a random 2x2 real symmetric matrix has two positive eigenvalues. Note that 1/9 < 1/4, so perhaps it is not surprising that this probability is also < 1/4 for 2x2 Hermitian matrices and that Veit is right.
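[The 1/9 is easy to check by simulation; an editor's sketch, not part of the original message. By the scaling argument the box size does not matter, so take n = 1:]

```python
import numpy as np

rng = np.random.default_rng(2)
num = 1_000_000

# r, s, p uniform in the box [-1,1]^3
r, s, p = rng.uniform(-1.0, 1.0, (3, num))

# Two positive eigenvalues <=> r > 0 and rs > p^2
frac = ((r > 0) & (r * s > p**2)).mean()
print(frac)  # close to 1/9 = 0.111...
```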
--Edwin
On Sat, Nov 24, 2012 at 9:32 PM, Pacher Christoph < Christoph.Pacher@ait.ac.at> wrote:
It seems that the question of whether both eigenvalues are positive can be reduced to "when is lambda2 positive?": if p and/or q are too large, the square-root term gets too large and lambda2 becomes negative.
So, maybe we should try taking |p+qi| uniformly distributed in [0,1] rather than p and q separately? Note that 4p^2 + 4q^2 = 4|p+iq|^2 depends on p and q only through this modulus.
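[Trying this variant numerically; an editor's sketch, not part of the original message. With the modulus m = |p+qi| uniform on [0,1], both eigenvalues are positive iff r + s > 0 and rs > m^2, and the frequency comes out near (1/4)*E[sqrt(rs)] = (1/4)*(2/3)^2 = 1/9 rather than the ~0.049 Edwin reports below for p, q uniform on the square:]

```python
import numpy as np

rng = np.random.default_rng(3)
num = 1_000_000

r = rng.uniform(-1.0, 1.0, num)
s = rng.uniform(-1.0, 1.0, num)
m = rng.uniform(0.0, 1.0, num)   # m = |p + qi|, sampled directly

# Both eigenvalues positive <=> trace > 0 and det > 0
frac = ((r + s > 0) & (r * s > m**2)).mean()
print(frac)  # close to 1/9, not 0.049
```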
Christoph
________________________________________
From: math-fun-bounces@mailman.xmission.com [math-fun-bounces@mailman.xmission.com] on behalf of W. Edwin Clark [wclark@mail.usf.edu]
Sent: Saturday, November 24, 2012 9:21 PM
To: math-fun
Subject: Re: [math-fun] complex hermitian matrix
Consider the case N = 2. A general Hermitian 2 x 2 matrix may be written as

H = [r,      p + qi]
    [p - qi, s     ]

where p,q,r,s are arbitrary real numbers.
Let's experiment. Take p,q,r,s to be random real numbers. Since multiplication by a positive scalar c gives cH eigenvalues with the same signs, we can take the random reals to lie in the interval [-1,1].
The eigenvalues of H are
lambda1 = (r + s)/2 + (1/2)*((s - r)^2 + 4*p^2 + 4*q^2)^(1/2)
lambda2 = (r + s)/2 - (1/2)*((s - r)^2 + 4*p^2 + 4*q^2)^(1/2)
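[An editor's sanity check, not part of the original message: the closed-form eigenvalues agree with numpy's Hermitian eigenvalue solver.]

```python
import numpy as np

rng = np.random.default_rng(4)
mismatches = 0
for _ in range(1000):
    p, q, r, s = rng.uniform(-1.0, 1.0, 4)
    H = np.array([[r, p + 1j * q],
                  [p - 1j * q, s]])
    root = np.sqrt((s - r)**2 + 4 * p**2 + 4 * q**2)
    lam1 = (r + s) / 2 + root / 2            # larger eigenvalue
    lam2 = (r + s) / 2 - root / 2            # smaller eigenvalue
    # eigvalsh returns eigenvalues in ascending order
    if not np.allclose(np.linalg.eigvalsh(H), [lam2, lam1]):
        mismatches += 1
print(mismatches)  # 0
```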
After a million random choices for p,q,r,s in [-1,1] -- using the Maple command rand(-10^10..10^10)()/10.0^10 to generate the random p,q,r,s -- I get the following frequencies:
two positive eigenvalues:  0.0489030
one positive eigenvalue:   0.9021480
zero positive eigenvalues: 0.0489490
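[In fact the exact value for this experiment can be worked out; an editor's note, not part of the original message. Both eigenvalues are positive iff r + s > 0 and rs > p^2 + q^2, which forces r, s > 0; given r, s in (0,1), the disk p^2 + q^2 < rs has area pi*rs and lies inside the square [-1,1]^2 of area 4, so the probability is (1/4) * (pi/4) * E[rs] = pi/64 ≈ 0.04909, matching the frequency above. A Monte Carlo restatement:]

```python
import numpy as np

rng = np.random.default_rng(5)
num = 1_000_000

p, q, r, s = rng.uniform(-1.0, 1.0, (4, num))

# Both eigenvalues positive <=> trace > 0 and det > 0
two_pos = ((r + s > 0) & (r * s > p**2 + q**2)).mean()
print(two_pos, np.pi / 64)  # ~0.0491 for both
```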
So either I'm doing something wrong, or Maple is, or the claim prob(all eigenvalues > 0) = 2^(-N) is wrong.
---Edwin
On Thu, Nov 22, 2012 at 11:41 AM, Warren Smith <warren.wds@gmail.com> wrote:
--I'm not buying Veit's claims:
The definition of "random Hermitian matrix" is key. If I take it to mean "pick the eigenvalues from a symmetric real probability distribution, then conjugate the resulting diagonal matrix D by a random unitary U (Haar measure) to get the Hermitian matrix H = U^(-1) D U" -- which is a perfectly good probability distribution -- then the proposed answer prob(all eigenvalues > 0) = 2^(-N) is right, regardless of what Veit says.
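[Warren's construction can be sketched directly; editor's code, not from the thread. The Haar-random unitary comes from the QR decomposition of a complex Gaussian matrix with the standard phase correction. Since conjugation leaves the eigenvalues untouched, P(all eigenvalues > 0) is just the probability that all N i.i.d. symmetric draws are positive, i.e. 2^(-N):]

```python
import numpy as np

rng = np.random.default_rng(6)
N, trials = 2, 50_000
hits = 0
for _ in range(trials):
    eigs = rng.normal(size=N)              # i.i.d. draws, symmetric about 0
    # Haar-random unitary: QR of a complex Gaussian, with phase fix
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    Q, R = np.linalg.qr(A)
    Q = Q * (np.diagonal(R) / np.abs(np.diagonal(R)))
    H = Q.conj().T @ np.diag(eigs) @ Q     # H = U^(-1) D U, Hermitian
    if np.all(np.linalg.eigvalsh(H) > 0):
        hits += 1
est = hits / trials
print(est)  # close to 2**-N = 0.25
```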
--Veit: The claim 2^(-N) assumes much more: that the eigenvalues are independently distributed. In fact, they are very far from independent, making the actual probability much smaller: c^(-N^2). This result is asymptotic, for large N (the limit of interest in statistical mechanics). The number c is well known, but it is not 2.
As Andy pointed out, the probability measure should be invariant under arbitrary unitary transformations, i.e. M -> U M U^(-1). But the Hermitian matrices live in an N^2-dimensional space, while the unitary conjugations move M through only N(N-1) of those dimensions (U(N) itself has dimension N^2, but the stabilizer of a generic M, a maximal torus, has dimension N). The extra N dimensions correspond to the eigenvalues of M.
Wigner had the idea of using the maximum entropy probability distribution, constrained by just two properties: the expectation values of Tr M and Tr M^2. If we want the expectation value of Tr M to be zero, then our probability distribution is simply the Gaussian e^(-Tr M^2) times the unitary-invariant measure.
--that would yield 2^(-N) as above!
If you marginalize this distribution on just the eigenvalues (i.e. integrate out the unitary transformations) you get, say in the case of N=3 (unnormalized)
dP = e^(-E1^2-E2^2-E3^2) (E1-E2)^2 (E2-E3)^2 (E3-E1)^2 dE1 dE2 dE3.
It's the product over all eigenvalue pairs -- their differences squared -- that ruins the independence of the eigenvalue distribution. BTW, this very same distribution seems to perfectly model the distribution of Riemann zeta function zeros, but nobody understands why!
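[A numerical sketch of both claims; editor's code, not from the thread. We sample with matrix density proportional to e^(-Tr H^2), i.e. diagonal entries drawn from e^(-d^2) and real/imaginary off-diagonal parts from e^(-2x^2). For N = 2 the Gaussian integrals give P(both eigenvalues positive) = 1/4 - 1/(2 pi) ≈ 0.0908, a short exercise with the dP' formula; for N = 3 the estimate falls far below 2^(-3) = 0.125, illustrating how the repulsion factor kills independence:]

```python
import numpy as np

def frac_positive_definite(N, trials, rng):
    """Monte Carlo estimate of P(all eigenvalues > 0) for Hermitian
    matrices drawn with density proportional to e^(-Tr H^2)."""
    iu = np.triu_indices(N, k=1)
    k = len(iu[0])  # number of independent off-diagonal entries
    # diagonal ~ e^(-d^2) -> std sqrt(1/2);
    # off-diagonal parts ~ e^(-2x^2) -> std 1/2
    d = rng.normal(0.0, np.sqrt(0.5), (trials, N))
    z = (rng.normal(0.0, 0.5, (trials, k))
         + 1j * rng.normal(0.0, 0.5, (trials, k)))
    H = np.zeros((trials, N, N), dtype=complex)
    H[:, iu[0], iu[1]] = z
    H += H.conj().transpose(0, 2, 1)
    H[:, np.arange(N), np.arange(N)] = d
    eigs = np.linalg.eigvalsh(H)        # batched, ascending order
    return (eigs[:, 0] > 0).mean()      # all positive <=> smallest > 0

rng = np.random.default_rng(0)
p2 = frac_positive_definite(2, 200_000, rng)
p3 = frac_positive_definite(3, 200_000, rng)
print(p2, 0.25 - 1 / (2 * np.pi))  # ~0.0908 for both
print(p3, 2.0**-3)                 # p3 is far below 0.125
```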
_______________________________________________
math-fun mailing list
math-fun@mailman.xmission.com
http://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun