--I'm not buying Veit's claims: The definition of "random Hermitian matrix" is key. Suppose I take it to mean: pick the eigenvalues independently from a symmetric real probability distribution, form the diagonal matrix D, and conjugate by a random unitary U (Haar measure) to get the Hermitian matrix H = U^(-1) D U. That is a perfectly good probability distribution, and for it the proposed solution prob(all eigenvalues > 0) = 2^(-N) is right, regardless of what Veit says.

--Veit: The claim 2^(-N) assumes much more, namely that the eigenvalues are independently distributed. In fact they are very far from independent, which makes the actual probability much smaller: c^(-N^2). This result is asymptotic, for large N (the limit of interest in statistical mechanics). The number c is well known, but it is not 2. As Andy pointed out, the probability measure should be invariant under arbitrary unitary transformations, i.e. M -> U M U^(-1). But the Hermitian matrices live in an N^2-dimensional space, while the unitary transformations that act nontrivially on them (unitaries modulo the N diagonal phases) span only N(N-1) dimensions. The extra N dimensions correspond to the eigenvalues of M. Wigner had the idea of using the maximum-entropy probability distribution constrained by just two properties: the expectation values of Tr M and Tr M^2. If we want the expectation value of Tr M to be zero, then the distribution is simply the Gaussian e^(-Tr M^2) times the unitary-invariant measure.

--That would yield 2^(-N), as above!

--Veit: No: if you marginalize this distribution onto just the eigenvalues (i.e. integrate out the unitary transformations), you get, in the case of N = 3 (unnormalized),

  dP = e^(-E1^2 - E2^2 - E3^2) (E1-E2)^2 (E2-E3)^2 (E3-E1)^2 dE1 dE2 dE3.

It is the product over all eigenvalue pairs -- their differences squared -- that ruins the independence of the eigenvalue distribution. BTW, this very same distribution seems to perfectly model the local statistics of the nontrivial zeros of the Riemann zeta function, but nobody understands why!
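The first construction in the thread is easy to check numerically. Here is a minimal numpy sketch (function names are mine, not from the thread): draw N eigenvalues i.i.d. from a symmetric distribution, conjugate by a Haar-random unitary, and estimate the probability that the resulting Hermitian matrix is positive definite. Since conjugation preserves the spectrum, the estimate comes out close to 2^(-N).

```python
import numpy as np

def haar_unitary(n, rng):
    # QR decomposition of a complex Ginibre matrix, with the phases of
    # R's diagonal absorbed into Q so that Q is Haar-distributed on U(n).
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def prob_all_positive_independent(n, trials=50_000, seed=0):
    # H = U D U^+ with D's entries drawn i.i.d. from a symmetric
    # distribution (standard normal here). The eigenvalues of H are
    # exactly the entries of D, so each is positive with probability
    # 1/2, independently: prob(all > 0) = 2^(-n).
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        d = rng.standard_normal(n)
        u = haar_unitary(n, rng)
        h = u @ np.diag(d) @ u.conj().T
        hits += np.all(np.linalg.eigvalsh(h) > 0)
    return hits / trials
```

For N = 3 this returns an estimate near 2^(-3) = 0.125, confirming that for this ensemble the 2^(-N) answer is correct.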
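For contrast, here is the same experiment for Wigner's maximum-entropy ensemble (again a numpy sketch of mine, not from the thread): sampling H = (A + A^+)/2 with A a complex Gaussian matrix gives a unitary-invariant Gaussian measure on Hermitian matrices, whose eigenvalue marginal carries the squared-difference factors. For N = 2 that marginal can be integrated by hand over the positive quadrant, giving prob(all eigenvalues > 0) = 1/4 - 1/(2*pi) ~ 0.091, already well below the independence value 2^(-2) = 0.25. (The positivity probability is scale invariant, so the variance convention does not matter.)

```python
import numpy as np

def prob_all_positive_gue(n, trials=100_000, seed=0):
    # Sample Hermitian H = (A + A^+)/2 with A a complex Gaussian
    # (Ginibre) matrix; the induced measure is proportional to
    # e^(-Tr H^2 / 2) times the flat measure, invariant under
    # H -> U H U^(-1). Eigenvalue repulsion (the squared Vandermonde
    # factor in the marginal) suppresses all-positive spectra.
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((trials, n, n)) + 1j * rng.standard_normal((trials, n, n))
    h = (a + np.transpose(a.conj(), (0, 2, 1))) / 2
    eigs = np.linalg.eigvalsh(h)  # batched: shape (trials, n)
    return np.mean(np.all(eigs > 0, axis=1))
```

Running this for N = 2 and N = 3 shows the estimate falling below the corresponding 2^(-N) much faster than independence would allow, consistent with the c^(-N^2) decay claimed above.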