"(Dear me! I am not certain quite
That even now I've got it right!)"
- "Eletelephony", by Laura Elizabeth Richards
Dan has pointed out that I still didn't pose my problem correctly,
so let me see if I can finally get it straight:
Let's say a stochastic matrix A is posipotent if some power of A
has all its entries positive. (This is Dan's nice coinage; I had
used the standard but bulkier phrase "irreducible and aperiodic".)
For each positive integer n, what is the maximum value of N
for which there exists an n-by-n posipotent stochastic matrix A
and a probability vector v such that the vector A^N v has all its
entries positive but the vectors
v , A v , A^2 v , ... , A^(N-1) v
do not?
This can be shown to be equivalent to the following question:
For each positive integer n, what is the maximum value of N
for which there exists an n-by-n posipotent stochastic matrix A
such that the matrix A^N has all its entries positive but the
matrices
I , A , A^2 , ... , A^(N-1)
do not?
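For small cases this matrix version is easy to check by machine. Here is a
minimal Python sketch (the function name is mine), applied to the 4-state
matrix that appears further down in this post; since only the positivity of
the entries matters, boolean arithmetic suffices:

```python
def first_all_positive_power(A, cap=10_000):
    """Smallest N >= 1 such that every entry of A^N is positive,
    or None if no N up to `cap` works.  Only the positions of the
    positive entries of A matter, so work with boolean matrices."""
    n = len(A)
    B = [[A[i][j] > 0 for j in range(n)] for i in range(n)]
    P, N = B, 1                      # P is the positivity pattern of A^N
    while not all(all(row) for row in P):
        if N >= cap:
            return None
        P = [[any(P[i][k] and B[k][j] for k in range(n))
              for j in range(n)] for i in range(n)]
        N += 1
    return N

# The 4-state stochastic matrix from the example below:
A = [[0,   1,   0, 0],
     [0,   0,   1, 0],
     [0,   0,   0, 1],
     [0.5, 0.5, 0, 0]]
print(first_all_positive_power(A))   # -> 10
```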
This can also be recast as a combinatorial question. Suppose
we restrict attention to finite directed graphs with n vertices
with the property that for every ordered pair of vertices (v,w)
there is a way to get from v to w in exactly M moves. Define
the "inconvenience" of such a digraph as the smallest M that
works for all v,w. How large can the inconvenience be? (Note
that we are limiting ourselves to directed graphs for which
there exists a universal M such that for every ordered pair
of vertices (v,w) you can get from v to w in M moves. Of
course, if M is a number with this property, so is M+1.)
To clarify what I mean by "moves": When I say "There is a way to
get from v to w in exactly K moves" I mean "There exists a sequence
of not necessarily distinct vertices v = v_0, v_1, ..., v_K = w
such that for each j in {1,2,...,K} there is a directed edge
from v_{j-1} to v_j."
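This notion of "exactly K moves" is easy to compute. A short Python sketch
(names are mine), with a digraph given as a map from each vertex to its set
of out-neighbours:

```python
def reachable_in_exactly(edges, start, k):
    """Set of vertices w such that there is a walk of exactly k
    moves from `start` to w (vertices may repeat along the walk)."""
    current = {start}
    for _ in range(k):
        current = {w for v in current for w in edges[v]}
    return current

# The 4-vertex digraph of the example that follows:
edges = {"A": {"B"}, "B": {"C"}, "C": {"D"}, "D": {"A", "B"}}
print(sorted(reachable_in_exactly(edges, "A", 4)))   # -> ['A', 'B']
```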
Example: Consider the directed graph on 4 vertices {A,B,C,D} with
edges A->B, B->C, C->D, D->A, and D->B. Then, starting from A,
the vertices that are reachable in a specified number of moves
are as follows:
In 1 move: B
In 2 moves: C
In 3 moves: D
In 4 moves: A,B
In 5 moves: B,C
In 6 moves: C,D
In 7 moves: A,B,D
In 8 moves: A,B,C
In 9 moves: B,C,D
In 10 moves: A,B,C,D
So we have shown that the inconvenience of this digraph is
at least 10.
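Running the analogous computation from the other three starting vertices
shows that 10 moves also suffice there, so the inconvenience of this digraph
is in fact exactly 10. A small Python check (the function name is mine):

```python
def inconvenience(edges, vertices, cap=10_000):
    """Smallest M such that every ordered pair (v, w) admits a walk
    of exactly M moves from v to w, or None if no M up to `cap` works."""
    reach = {v: {v} for v in vertices}          # walks of 0 moves
    for m in range(1, cap + 1):
        reach = {v: {w for u in reach[v] for w in edges[u]}
                 for v in vertices}
        if all(reach[v] == set(vertices) for v in vertices):
            return m
    return None

edges = {"A": {"B"}, "B": {"C"}, "C": {"D"}, "D": {"A", "B"}}
print(inconvenience(edges, ["A", "B", "C", "D"]))   # -> 10
```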
Correspondingly, consider the matrix and vector
( 0 1 0 0 ) ( 1 )
( ) ( )
( 0 0 1 0 ) ( 0 )
A = ( ) , v = ( ) ;
( 0 0 0 1 ) ( 0 )
( ) ( )
(1/2 1/2 0 0 ) ( 0 )
A^10 v has all its entries positive, but A^9 v does not.
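This is easy to verify by direct computation, e.g. in Python with exact
rational arithmetic (the helper name is mine):

```python
from fractions import Fraction

def mat_vec(A, v):
    """Product of matrix A (a list of rows) with column vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

h = Fraction(1, 2)
A = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1],
     [h, h, 0, 0]]
v = [Fraction(1), Fraction(0), Fraction(0), Fraction(0)]

w9 = v
for _ in range(9):
    w9 = mat_vec(A, w9)           # w9  = A^9  v
w10 = mat_vec(A, w9)              # w10 = A^10 v

print(all(x > 0 for x in w9))     # -> False (the first entry is 0)
print(all(x > 0 for x in w10))    # -> True
```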
Here is a proof that the inconvenience can be at most 2^n-2:
Each probability vector can be represented by an n-tuple of
0's and +'s, where 0 stands for 0 and + stands for any positive
number. Call this the signature of the vector. It is easy to see
that the signature of a vector v determines the signature of its
image A v. If the 2^n - 1 vectors v , A v , A^2 v , ... ,
A^(2^n-2) v all have signatures different from (0,0,...,0) and
(+,+,...,+), then by the pigeonhole principle two of the vectors
must have the same signature. But then the sequence of signatures
must cycle forever and never hit (+,+,...,+), contradicting the
assumption that A is posipotent. So either (0,0,...,0) must occur
(which is impossible, since all our vectors are probability vectors)
or (+,+,...,+) must occur (in which case that signature recurs
forever after). Hence, for every v, A^(2^n-2) v has all
its entries positive.
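The signature dynamics used in this argument can be simulated directly. A
minimal Python sketch (the encoding of a signature as a tuple of booleans is
my choice), run on the 4-state example above:

```python
def next_signature(sig, A):
    """Signature of A v, given the signature of v: entry i of A v is
    positive iff A[i][j] > 0 for some j with v[j] > 0 (no cancellation
    can occur, since all entries involved are nonnegative)."""
    n = len(A)
    return tuple(any(A[i][j] > 0 and sig[j] for j in range(n))
                 for i in range(n))

A = [[0,   1,   0, 0],
     [0,   0,   1, 0],
     [0,   0,   0, 1],
     [0.5, 0.5, 0, 0]]
sig = (True, False, False, False)      # signature of v = e_1
steps = 0
while not all(sig):
    sig = next_signature(sig, A)
    steps += 1
print(steps)                           # -> 10, below the bound 2^4 - 2 = 14
```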
Clearly one can do better than the upper bound N(n) <= 2^n - 2.
My question is, how much better can one do?
Jim Propp