Perhaps (I am not optimistic) the following will make things clearer. This is the best coin-testing protocol currently known to me (although its validity is not currently proven, it appears valid in some tests):

  COINTEST(K){  // unfairness test with confidence >= 1-K of correctness
    assert(0<K and K<1);
    A=7.1; B=2.7; C=1.02;
    N=1;
    LoopForever{
      TossCoin and let D=T-H, where T=#tails and H=#heads so far and T+H=N;
      X = (D*D)/(2*N) - ln(A+ln(B+N));
      if( X > C*ln(1.0/K) ){ return( COIN_SEEMS_UNFAIR ); }
      N = N+1;
    }
  }

This procedure does not give a damn whether your coin is biased to 0.51, or 0.5001, or 0.500001, or any other number. No matter what the bias is, if it is not exactly 0.5, this procedure will eventually (with probability=1) return "UNFAIR". If the coin is fair, i.e. the bias is exactly 0.5, then this procedure will either never terminate (with probability >= 1-K) or will terminate, wrongly reporting UNFAIR (with probability <= K).

The question is whether this procedure really meets those guarantees, and what the "best" such procedure is. Here "best" has a precise meaning, e.g. least expected runtime before termination if the coin's bias is uniform on [0,1].

-- Warren D. Smith
http://RangeVoting.org <-- add your endorsement (by clicking "endorse" as 1st step)
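For anyone who wants to experiment, here is a minimal sketch of the protocol in Python. The function and variable names, the simulated-coin setup, and the toss cap are my own additions (the original loops forever on a fair coin, so a cap is needed to make a simulation terminate):

```python
import math
import random

def cointest(K, toss, max_tosses=10**6):
    """COINTEST as described above: return "UNFAIR" once the
    threshold is crossed, or None if max_tosses is reached
    without a verdict (the original protocol has no cap)."""
    assert 0 < K < 1
    A, B, C = 7.1, 2.7, 1.02
    D = 0  # running tails-minus-heads count
    for N in range(1, max_tosses + 1):
        D += 1 if toss() else -1  # toss() returns True for tails
        X = (D * D) / (2 * N) - math.log(A + math.log(B + N))
        if X > C * math.log(1.0 / K):
            return "UNFAIR"
    return None

# Demo with a hypothetical coin biased toward tails (p = 0.6):
rng = random.Random(0)
biased_toss = lambda: rng.random() < 0.6
print(cointest(0.05, biased_toss))  # "UNFAIR", with overwhelming probability
```

With bias 0.6, D grows roughly like 0.2*N, so D*D/(2*N) grows linearly while the threshold grows only doubly-logarithmically in N, and the test fires after a few hundred tosses in practice.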