=Richard Schroeppel
 For finite groups, I can check isomorphism in time roughly N^(log2(N)).

How's that? If you're given the groups in some reasonable form, how can it take more than the N^2 required to compare them completely, element by element? Perhaps this time includes making the inputs presentable?

By "reasonable form" I mean something like, say, labeling the elements / permuting the rows & columns so that some linear scan of the multiplication table (for example row-major 1,1 ... 1,N ... N,N) is lexicographically minimized. For instance the two groups

  1 2 3 4        1 2 3 4
  2 3 4 1        2 1 4 3
  3 4 1 2   and  3 4 1 2
  4 1 2 3        4 3 2 1

(Z4 and the Klein four-group Z2 x Z2) might become

  1 2 3 4        1 2 3 4
  2 3 4 1        2 1 4 3
  3 4 1 2   and  3 4 1 2
  4 1 2 3        4 3 2 1
                   ^ first differing, at the 2,2 = 6th element

Actually, of course, the first row and column belong to the identity element, so here the first difference must occur at position N+2 or later, and the constraints of the group structure certainly require it to occur sometime before N^2. (This raises the question of what the minimum and maximum are for a given N, and what scan order (row-major? anti-diagonal?) optimizes them.)

In any case, if you can characterize a set of specific elements sufficient to distinguish groups pairwise, then you could certainly construct such a detector out of element-wise differences. Perhaps, like an error-detecting code's checksum, it might even be a fairly efficient one.

Admittedly, compared to the "pretty fancy concepts" this may be an ugly, concrete way to do the job (but then isn't one of your mottos "Nice algorithms finish last!"? <;-)
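For what it's worth, here is a quick brute-force sketch in Python of the canonicalization I have in mind: relabel the elements every possible way, keep the labeling whose row-major flattening of the table is lexicographically smallest, and compare element-wise. (The names canonical_table and isomorphic are just mine, and trying all N! relabelings is only sensible for tiny groups; this is not your N^(log2 N) method, only an illustration of the idea.)

  from itertools import permutations

  def canonical_table(table):
      """Lexicographically minimal row-major flattening of a Cayley table,
      taken over all relabelings of the N elements.  Brute force, O(N! * N^2);
      elements are assumed to be 0..N-1."""
      n = len(table)
      best = None
      for relabel in permutations(range(n)):      # relabel[old] = new
          old_of = [0] * n
          for old, new in enumerate(relabel):
              old_of[new] = old                   # inverse permutation
          # Rebuild the table under the new labeling, flattened row-major.
          flat = tuple(relabel[table[old_of[i]][old_of[j]]]
                       for i in range(n) for j in range(n))
          if best is None or flat < best:
              best = flat
      return best

  def isomorphic(a, b):
      """Two finite groups are isomorphic iff their canonical tables match."""
      return len(a) == len(b) and canonical_table(a) == canonical_table(b)

  # The two order-4 groups from the example, with elements 0..3 (identity 0):
  # the cyclic group Z4 and the Klein four-group.
  Z4 = [[0, 1, 2, 3],
        [1, 2, 3, 0],
        [2, 3, 0, 1],
        [3, 0, 1, 2]]
  V4 = [[0, 1, 2, 3],
        [1, 0, 3, 2],
        [2, 3, 0, 1],
        [3, 2, 1, 0]]

  print(isomorphic(Z4, Z4))   # True
  print(isomorphic(Z4, V4))   # False: the canonical tables disagree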
How's that? If you're given the groups in some reasonable form how can it take more than the N^2 required to completely compare them element-wise? Perhaps this time includes making the inputs presentable? By reasonable form I mean something like, say, labeling the elements / permuting the rows & columns so that some linear scan (for example row-major 1,1 ... 1,N ... N,N) is lexicographically minimized. For instance the two groups 1 2 3 4 1 2 3 4 2 3 4 1 and 2 1 4 3 3 4 1 2 3 4 1 2 4 1 2 3 4 3 2 1 (Z4 and A4?) might become 1 2 3 4 2 3 4 1 3 4 1 2 4 1 2 3 and ^ 1 2 3 4 2 1 4 3 3 4 1 2 4 3 2 1 ^ first differing ^ at the 2,2 = 6th element. Actually of course the first row and column belong to the identity element, so here the first difference must occur at N+2 or later, and the constraints of the group structure certainly requires it to occur sometime before N^2. (This raises the question of what the minimum and maximum are for a given N, and what scan order (row-major? anti-diagonal?) optimizes them.) In any case, if you can characterize a set of specific elements which are sufficient to distinguish groups pair-wise then you could certainly construct such a detector out of element-wise differences. Perhaps, like an error-detecting code checksum, it might even be a fairly efficient one. Admittedly compared to the "pretty fancy concepts" this may be an ugly concrete way to do the job (but then isn't one of your mottos "Nice algorithms finish last!"?<;-)