With 160 statistics, we should expect the smallest p-value to be around 3 * 10^-3, I suppose? I think there's a correction called Bonferroni that we can use to figure out whether a p-value as small as this one, in one out of 160 tests, should actually make us suspicious. Or run the test again and see what we get.

This one looks a bit unusually small, but not really surprisingly small when so many tests are being done. One crude calculation would be to take 1 - (1 - 3.1 * 10^-4)^160, which is how likely it is to get at least one p-value this small or smaller, and that calculation comes out right around 5%: slightly surprising, but not very surprising, if the numbers generated were truly random.

--Joshua Zucker

On Tue, Feb 7, 2012 at 12:07 PM, Gareth McCaughan <gareth.mccaughan@pobox.com> wrote:
Number of statistics:  160
Total CPU time:   06:19:48.03

The following tests gave p-values outside [0.001, 0.9990]:
(eps  means a value < 1.0e-300):
(eps1 means a value < 1.0e-15):

       Test                          p-value
 ----------------------------------------------
  7  CollisionOver, t = 7           3.1e-4
 ----------------------------------------------
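As an aside, the crude calculation in the reply above can be sketched in a few lines of Python, assuming the 160 test statistics are independent and their p-values are uniform on (0, 1) under the null:

```python
# How surprising is one p-value of 3.1e-4 among 160 independent tests?
n = 160
p_min = 3.1e-4

# Probability that at least one of n uniform p-values is <= p_min.
prob_at_least_one = 1 - (1 - p_min) ** n
print(f"P(at least one p <= {p_min:g}) = {prob_at_least_one:.3f}")  # ~0.048

# Bonferroni: to keep the family-wise error rate at alpha = 0.05,
# flag only p-values below alpha / n.
alpha = 0.05
print(f"Bonferroni threshold: {alpha / n:.2e}")
```

So the observed 3.1e-4 sits almost exactly at the Bonferroni cutoff of 0.05/160, which matches the "slightly surprising but not very surprising" reading.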