Re: [math-fun] Hilbert curve applications
From: rcs@xmission.com
Steve's calendar lights up a feature that's so disconcerting about space-filling curves: our definition of "continuous" is a high-falutin' way of saying "keeps nearby things nearby". What if we adopted another requirement, either in addition to or instead of the Nearby rule: how about "keeps far-apart things far-apart"? This would address the anomaly in Steve's calendar that Jan 6 and Aug 7 are adjacent. What sort of "continuity" would we get?
From: Steve Witham

Ah! Gray codes also "keep near things near." I've played with what I call "error-minimizing codes" that keep far things far. Not in two dimensions, just with Hamming distance among bit strings. (As I understand it, the more dimensions you have, the more Euclidean distance acts like Hamming distance.)

Others have worked on this, too:

  D.J. Smith, P.H. Stanford, "A Random Walk in Hamming Space"
  P.H. Stanford, D.J. Smith, "The Multidimensional Scatter Code"
  http://www.santafe.edu/~dsmith/publications.html

In scatter codes, you flip some fixed number of randomly-picked bits between adjacent values. This counts on the random walk's sqrt property (I'm not sure how it translates into Hamming distance).
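As a sketch of that construction (Python; the word width and step size here are placeholders of mine, not Smith and Stanford's actual parameters):

  import random

  def scatter_code(n_values, width=64, flips_per_step=4, seed=1):
      # Random-walk construction: each successive codeword flips a
      # fixed number of randomly chosen bits of the previous one.
      rng = random.Random(seed)
      word = 0
      code = [word]
      for _ in range(n_values - 1):
          for bit in rng.sample(range(width), flips_per_step):
              word ^= 1 << bit
          code.append(word)
      return code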
I wanted values that are far apart to be guaranteed at least ~log2(value distance) apart Hamming-wise, because I wanted the result of damaging n bits to be <= 2^n. Here's an excerpt from something I wrote:

An error-minimizing code is like an error-correcting code, except that instead of ensuring perfect recovery of the intended value in the face of some maximum number of bit errors, it ensures that the difference between the original value and the recovered value is limited to a function of the number of bit errors.
In my programs so far, the rule is:

  0 errors => the exact original value
  1 error  => within the original value +/- 1
  2 errors => original +/- 2
  3 errors => original +/- 4
  4 errors => original +/- 8
  ...
  n > 0 errors => original +/- 2^(n-1)
(Note that if you take real values and round them to integers, the "original" integer is the real number +/- 2^-1. Seen this way, the above rule is almost consistent.)
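In code, the requirement on a code table reads like this (a minimal Python sketch; code[v] is the codeword for value v):

  def hamming(a, b):
      # number of bit positions where a and b differ
      return bin(a ^ b).count("1")

  def is_error_minimizing(code):
      # If two codewords are only h bits apart, then h bit errors can
      # turn one into the other, so the rule above demands that their
      # values differ by at most 2^(h-1).
      for i in range(len(code)):
          for j in range(i + 1, len(code)):
              h = hamming(code[i], code[j])
              if j - i > 2 ** (h - 1):
                  return False
      return True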
Some people, when I first explain this, ask whether this isn't what Gray codes do. What's true for Gray codes is that for a given difference between values, the Hamming distance between their code words is limited:

  Hamming distance <= log2( value difference + 1 )   (roughly)
but for error-minimizing codes, for a given Hamming distance, the value difference is limited:

  value difference <= 2^( (Hamming distance) - 1 )

or, turning that around,

  Hamming distance >= log2( value difference ) + 1
So the inequality is reversed. Gray codes keep similar values together, but it's okay if some pairs of dissimilar values are also close. Error-minimizing codes keep dissimilar values apart, and it's okay if some pairs of similar values are also far apart.
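For instance, with the ordinary binary-reflected Gray code (a quick Python check):

  def gray(n):
      return n ^ (n >> 1)

  def hamming(a, b):
      return bin(a ^ b).count("1")

  w = 8
  # Similar values stay together: adjacent values always differ in one bit.
  print(all(hamming(gray(v), gray(v + 1)) == 1 for v in range(2**w - 1)))  # True
  # But some dissimilar values are also close: values 0 and 255 get
  # codewords 00000000 and 10000000, Hamming distance 1.
  print(hamming(gray(0), gray(2**w - 1)))  # 1

The second print shows values 0 and 255 landing one bit apart, which an error-minimizing code forbids: it would require Hamming distance >= log2(255) + 1, i.e. at least 9 bits, which is also why 8 bits can't error-minimizingly encode all 256 values.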
I haven't found a systematic way to generate anti-Gray codes. I wrote various greedy search programs; for example:

  Width (bits)   Length (# of values encoded)
  ------------   ----------------------------
  11             17     (my highest exhaustive search)
  ...            ...
  31             228
  32             258    (narrowest code I found for >= 256 values)
  ...            ...
  64             9237

There's an information-theoretic lower bound on the width of the code, buried in my notes somewhere.
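The greedy idea, as a minimal sketch (Python; a reconstruction of the approach, not any of my actual programs): repeatedly append the smallest word that keeps every pairwise constraint satisfied, and report the length reached when no word fits.

  def hamming(a, b):
      return bin(a ^ b).count("1")

  def greedy_code(width):
      # Value v gets codeword code[v].  A candidate is legal if, against
      # every earlier codeword, value difference <= 2^(Hamming distance - 1).
      code = [0]
      used = {0}
      while True:
          for cand in range(1 << width):
              if cand in used:
                  continue
              v = len(code)  # the value this candidate would encode
              if all(v - i <= 2 ** (hamming(cw, cand) - 1)
                     for i, cw in enumerate(code)):
                  code.append(cand)
                  used.add(cand)
                  break
          else:
              return code  # no candidate fits: that's the code's length

(A fixed smallest-first candidate order like this is just one variation; it won't necessarily reproduce the lengths in the table above.)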
--Steve