From the four dimensional "Gauss" system in pathiart.pdf, the N matrix becomes

                 [ (n + i) (n + j)    ]
                 [ ---------------  1 ]
N(i, j, k, n) := [ (n + 1) (n + k)    ]
                 [                    ]
                 [        0         1 ]

after the transformation k -> k-i-j and a "crosstie" by 1/n. But shouldn't this be the five dimensional

                    [ (n + i) (n + j)    ]
                    [ ---------------  1 ]
N(h, i, j, k, n) := [ (n + h) (n + k)    ]
                    [                    ]
                    [        0         1 ]

? I thought not, because h is redundant--it can simply be an arbitrary constant, shifted away by substituting n-h+1 for n, k+h-1 for k, j+h-1 for j, and i+h-1 for i, purely within the 4D system. (Merely shifting variables by arbitrary constants preserves path invariance.)

Without h, it is still possible to coordinate transform any of the four factors in N to the form (x n + y k + z j + w i + a), where a is arbitrary and x, y, z, and w are arbitrary integers, by a sequence like n -> p n - q k ... + b, k -> r k + u j + ..., j -> s k + t j + ... . But it's not always obvious or convenient. For example, pathiart.pdf shows how to coordinate-transform and specialize

                 [ (n + i) (n + j)    ]
                 [ ---------------  1 ]
N(i, j, k, n) := [ (n + 1) (n + k)    ]
                 [                    ]
                 [        0         1 ]

to a Ϛ(2) system, with

           [               2              ]
           [        (n + k)               ]
N(k, n) := [ -------------------------  1 ]
           [ (n + k + 1) (n + 2 k + 1)    ]
           [                              ]
           [             0              1 ]

and K(k,1) giving a 3F2[1/4]. This is easy: k->k+1, j->0, i->0, n->n+k.

Suppose instead we want to see what sort of K matrix we get from specializing N to

           [       2        ]
           [      n         ]
           [   --------   1 ]
N(k, n) := [          2     ]
           [   (n + k)      ]
           [                ]
           [      0       1 ]

This makes a nice puzzle: (Ignoring the other matrices) How do we get here from N(i, j, k, n)? Try it! The legal moves are

  n -> n + IntegerLinearCombination(i,j,k)
  n -> -n, which also inverts the n matrix!
  i, j, k -> IntegerLinearCombinations(i,j,k)
  i or j -> constant.

Here's one solution: n->n-1, i->i+1, j->j+1, k->k+1, j->j+i, k->k+i, n->n-i, i->i-k, i->0, j->0. Not too obvious. Incidentally, this gives a K(k,1) essentially identical to the former one, but substantially simpler matrices for K(k,n) with n=2 (for Ϛ(2)-1), n=1/2 (odd terms of Ϛ(2)), n=1/4 (Catalan's constant), etc.

But suppose we had the redundant system with h:

                    [ (n + i) (n + j)    ]
                    [ ---------------  1 ]
N(h, i, j, k, n) := [ (n + h) (n + k)    ]
                    [                    ]
                    [        0         1 ]

All we'd need is h->k, i->0, j->0. So how do we get h? I always shifted it into N(i,j,k,n) as an arbitrary constant: N(i-h+1, j-h+1, k-h+1, n+h-1), and likewise for I, J, and K. Then, noting that the whole system was symmetric in h and k, I redefined K(h,i,j,k,n) := K(i,j,k,n) etc., and manually grafted in H(h,i,j,k,n) := K(k,i,j,h,n).

But today's first surprise is a much simpler way to do this: Simply define H(h,i,j,k,n) := the 2x2 identity matrix, and K(h,i,j,k,n) := K(i,j,k,n) etc., even though there is no h! Now use the formal coordinate transformer to do n->n+h, i->i-h, j->j-h, k->k-h, and H is magically the right thing! The transformation machinery can boost the dimension of the grid.

The same trick works to add the index f to the Rosetta system: Graft on F(f,g,h,i,j,k,n) := the 3x3 identity, then g->g+1-f, h->h+1-f, i->i+1-f, j->j+1-f, k->k+1-f, n->n+f-1 is almost right, but changes the 1,3 element of N from n to n+f-1, which we fix with a crosstie by

1 f-1 0
0  1  0
0  0  1

Now the final surprise: Without h, transformations like k -> 2k are legal, but irreversible, at least without a fancy Abramov-style recurrence solver. But with h, if we have

                    [  (n + i) (n + j)     ]
                    [ -----------------  1 ]
N(h, i, j, k, n) := [ (n + h) (n + 2 k)    ]
                    [                      ]
                    [         0          1 ]

we can change the 2k back to k: n->n-h, i->i+h, j->j+h, h->h+k, n->n+h, i->i-h, j->j-h, and we've (reversibly) undone the irreversible! Equally easy would be 69k -> k. (Except the K matrix would already be intractable.)
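As a sanity check on that solution (my own sympy transcription, independent of the formal coordinate transformer, and checking only the N entry, ignoring the other matrices), the stated sequence of moves really does carry the 1,1 element of N(i,j,k,n) to n^2/(n+k)^2:

from sympy import symbols, simplify

i, j, k, n = symbols('i j k n')

# 1,1 element of N(i, j, k, n)
ratio = (n + i)*(n + j) / ((n + 1)*(n + k))

# the moves, applied in order:
# n->n-1, i->i+1, j->j+1, k->k+1, j->j+i, k->k+i, n->n-i, i->i-k, i->0, j->0
moves = [{n: n - 1}, {i: i + 1}, {j: j + 1}, {k: k + 1}, {j: j + i},
         {k: k + i}, {n: n - i}, {i: i - k}, {i: 0}, {j: 0}]
for m in moves:
    ratio = ratio.subs(m)

print(simplify(ratio))                        # n**2/(n + k)**2
assert simplify(ratio - n**2/(n + k)**2) == 0

The same ten substitutions would, of course, be applied to the other matrices of the system as well.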
--rwg

(Resending "staggering" predecessor to remedy a crucial omission:)

On Tue, Jan 24, 2012 at 1:51 PM, Bill Gosper <billgosper@gmail.com> wrote:
(Over the last fortnight, we have discovered or rediscovered three path-invariance surprises which I hadn't reported, thinking no one would care.)
Products of 3x3 matrices don't typically compute sums unless they're upper-triangular:
r a b
0 s c
0 0 t
(where we usually scale out the t), producing ordinary sums in elements 1,2 and 2,3, and a triangular double sum in 1,3. If a path-invariant system is upper triangular, we can discard any (same) number of top rows and left columns, and still be path-invariant. Furthermore, by "cross-tying" (as in pathiart.pdf) a diagonal matrix and an auxiliary matrix of the form
1 d 0
0 1 0
0 0 1
it is often possible to annihilate the 1,2 elements in a 3x3 system,
r 0 b
0 s c
0 0 t
giving you a second 2x2 system by discarding the middle row and column. Also, if one can annihilate this 1,2 element in a non-triangular system, getting
r 0 b
a s c
0 0 t
then you get upper-triangular simply by cross-tying with
0 1 0
1 0 0 ,
0 0 1
interchanging the left two columns and top two rows.
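A crude one-dimensional stand-in for that last step (mine, not from pathiart.pdf): conjugating by the constant, self-inverse permutation matrix swaps the top two rows and left two columns, so every cross-tied factor is upper triangular, and the tied product is just the conjugate of the original product.

import numpy as np

P = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 1]])
rng = np.random.default_rng(0)

def M():
    # random matrix of the shape   r 0 b / a s c / 0 0 t
    r, a, s, b, c, t = rng.random(6)
    return np.array([[r, 0, b],
                     [a, s, c],
                     [0, 0, t]])

Ms = [M() for _ in range(5)]

# each conjugate P M P is upper triangular ...
assert all(np.allclose(np.tril(P @ m @ P, -1), 0) for m in Ms)

# ... and since P is its own inverse, the product of the conjugates
# is P (M0 M1 M2 M3 M4) P
assert np.allclose(np.linalg.multi_dot([P @ m @ P for m in Ms]),
                   P @ np.linalg.multi_dot(Ms) @ P)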
Since, without specialization, only one of the six matrices in the 3F2 Rosetta system is upper triangular, the identities you get from closing contours relate recurrences usually inexpressible as traditional sums and products, although they provide valuable accelerations for the 3F2s computed by the (triangular) N matrix. (And can relate continued fractions.)
Before venturing into 3x3s, I spent years puzzling over why the N matrix for my 2x2 Dixon system was N(...,k,n) :=
[          (n - h) (n - i) (n - j)             ]
[ -----------------------------------------  1 ]
[ (n + 2 k + h) (n + 2 k + i) (n + 2 k + j)    ]
[                                              ]
[                     0                      1 ]
and not
[        (n - h) (n - i) (n - j)         ]
[ -----------------------------------  1 ]
[ (n + k + h) (n + k + i) (n + k + j)    ]
[                                        ]
[                  0                   1 ]
From the latter, I could get the (predictably messy) former simply by redefining K(k) := K(2k).K(2k+1), pairwise grouping the K matrices. I yearned for the (presumably more elegant) latter, for making neater identities.
But the ***K(k) product of the*** former computes pFp-1[-1/27]. This would require the latter to compute qFq-1[i/sqrt(27)], clearly incompatible with the N matrix!
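In miniature (with random 2x2 stand-ins, not the actual Dixon matrices): pairwise grouping is just re-association, so it cannot change the product, and the grouped matrix's 1,1 entry is the product of two consecutive single-step ratios. A grouped ratio tending to -1/27 would therefore force each single step toward a ratio of +-i/sqrt(27), which the real, rational entries coming from the N matrix cannot supply.

import numpy as np

rng = np.random.default_rng(1)

def Kmat(u, v):                      # generic stand-in of the shape  u v / 0 1
    return np.array([[u, v],
                     [0.0, 1.0]])

Ks = [Kmat(*rng.random(2)) for _ in range(10)]        # K(0) .. K(9)
KK = [Ks[2*k] @ Ks[2*k + 1] for k in range(5)]        # K(k) := K(2k).K(2k+1)

# re-association leaves the product alone ...
assert np.allclose(np.linalg.multi_dot(Ks), np.linalg.multi_dot(KK))
# ... and each grouped ratio is the product of two single-step ratios
assert all(np.isclose(KK[k][0, 0], Ks[2*k][0, 0] * Ks[2*k + 1][0, 0])
           for k in range(5))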
But the 3x3 Rosetta permits *any* pattern of integer coefficients in any of the six numerator and denominator factors, e.g. (n+6g-9h...).../(n-2j+8k)..., so let's insist on (n-h).../(n+k+h)..., and see why K(k) only computes a sum when grouped pairwise. It's of the form
0 r a
s 0 b
0 0 1
with r and s quite dissimilar. Setting h = i = j = 0 for displayability, K(k) :=
[                                3 (n - 1) (2 n + 5 k - 2)    2    ]
[               5             5 (------------------------- + k )   ]
[              k                            10                     ]
[ 0   - -------------------   ----------------------------------   ]
[               2  3                          2                    ]
[       18 (k - -)  (n + k)            9 (k - -)                   ]
[               3                             3                    ]
[                                                                  ]
[                                       n - 1                      ]
[            3                       2 (----- + k)                 ]
[         2 k                             2                        ]
[  -------------------      0        -------------                 ]
[         1  3                              1                      ]
[  3 (k - -)  (n + k)                3 (k - -)                     ]
[         3                                 3                      ]
[                                                                  ]
[  0                        0              1                       ]
Notice -1/27 = (-1/18)(2/3).
(if idiot GMail ruined this as usual, just delete the linebreaks that don't immediately precede [.)
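Assuming I've read the (possibly GMail-mangled) display correctly, so that r(k) = -k^5/(18 (k - 2/3)^3 (n + k)) and s(k) = 2 k^3/(3 (k - 1/3)^3 (n + k)), here is a quick sympy check that two consecutive steps contribute the advertised (-1/18)(2/3):

from sympy import Rational, limit, oo, symbols

k, n = symbols('k n', positive=True)

r = -k**5 / (18*(k - Rational(2, 3))**3 * (n + k))
s = 2*k**3 / (3*(k - Rational(1, 3))**3 * (n + k))

# two consecutive steps contribute r*s, whose leading constant is (-1/18)(2/3)
assert limit(r*s, k, oo) == Rational(-1, 27)
print(limit(r*s, k, oo))             # -1/27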
So instead of elements 1,3 and 2,3 of the product being
a(0) + r(0) a(1) + r(0) r(1) a(2) + r(0) r(1) r(2) a(3) + ...
and
b(0) + s(0) b(1) + s(0) s(1) b(2) + s(0) s(1) s(2) b(3) + ...
we get
a(0) + r(0) b(1) + r(0) s(1) a(2) + r(0) s(1) r(2) b(3) + ...
and
b(0) + s(0) a(1) + s(0) r(1) b(2) + s(0) r(1) s(2) a(3) + ...
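A numeric check (mine) that a product of matrices of that  0 r a / s 0 b / 0 0 1  shape really does interleave the a's and b's this way:

import numpy as np

rng = np.random.default_rng(3)
N = 7                                   # K(0) .. K(7)
r, s, a, b = rng.random((4, N + 1))

def K(k):
    return np.array([[0.0,  r[k], a[k]],
                     [s[k], 0.0,  b[k]],
                     [0.0,  0.0,  1.0 ]])

P = np.linalg.multi_dot([K(k) for k in range(N + 1)])

def interleaved(first, second, lead, trail):
    # first(0) + lead(0) second(1) + lead(0) trail(1) first(2) + ...
    total, weight = 0.0, 1.0
    for m in range(N + 1):
        total += weight * (first[m] if m % 2 == 0 else second[m])
        weight *= lead[m] if m % 2 == 0 else trail[m]
    return total

assert np.isclose(P[0, 2], interleaved(a, b, r, s))   # the 1,3 element
assert np.isclose(P[1, 2], interleaved(b, a, s, r))   # the 2,3 element
print("interleaved sums match")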
Forcing the above K(k) into conventional sum notation requires K(2k) K(2k+1), doubling the size of the expressions in the left two columns, and putting a LARGE irreducible quintic and sextic in the right column.
Well, it staggered ME, anyway.  --rwg

Two other little surprises pending.