Ok, so here's a complete algorithm for representing *any*
complex number z=x+yi=L*e^(i*theta) using digits "0" and "1"
in radix exp(i)=e^i, to any precision you please.
We first pick an integer k such that e^(i*k) ~ e^(i*theta) to the
desired degree of precision.
We then approximate L ~ sum(2*cos(j)), for j suitably chosen,
as described below.
Then z = L*e^(i*theta) = sum(2*cos(j),j in suitable set)*e^(i*k).
I don't believe there are any "collisions", i.e., cases where the
product produces a bit position requiring a digit other than "0"
or "1"; but if there were, one could always choose a different
set of j's to avoid them.
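As a concrete sketch, here is the recipe in Python (the function names, the greedy choice of j's, and the search bounds are all mine; the text above leaves the j-selection open):

```python
import cmath
import math

def pick_k(theta, tol=math.radians(1)):
    """Smallest positive integer k with k ~ theta (mod 2*pi) within tol."""
    k = 0
    while True:
        k += 1
        if abs((k - theta + math.pi) % (2*math.pi) - math.pi) < tol:
            return k

def pick_js(L, j_max=100_000, max_terms=12, tol=1e-4):
    """Greedily pick distinct integers j with sum(2*cos(j)) ~ L.
    Works because r^j + r^(-j) = 2*cos(j) for r = e^i, and the
    values 2*cos(j), j integer, are dense in [-2, 2]."""
    chosen, residual = [], L
    for _ in range(max_terms):
        if abs(residual) < tol:
            break
        j = min((j for j in range(1, j_max) if j not in chosen),
                key=lambda j: abs(residual - 2*math.cos(j)))
        chosen.append(j)
        residual -= 2*math.cos(j)
    return chosen, residual

# Example: z = -pi = pi * e^(i*pi).  The "1"-bit positions are
# {j + k : j chosen} union {k - j : j chosen}.
theta, L = math.pi, math.pi
k = pick_k(theta)          # 22, as in the worked example below
js, err = pick_js(L)
exponents = [j + k for j in js] + [k - j for j in js]
z = sum(cmath.exp(1j*n) for n in exponents)
# z lands within a few hundredths of -pi, limited mainly by the
# 1-degree angular tolerance in pick_k
```

The greedy step here is just one plausible way to choose the j's; any selection whose 2*cos(j)'s sum to L works equally well.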
As an example, let's approximate -pi = pi*exp(i*pi).
We first approximate exp(i*pi) to within 1 angular degree.
k=22 works, so exp(22*i) ~ exp(pi*i).
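That k comes from the familiar approximation pi ~ 22/7, so 22 rad is pi plus three full turns, give or take. A one-line check of the angular error:

```python
import math

# 22 ~ 7*pi = pi + 3*(2*pi), because pi ~ 22/7
diff = abs(22 - 7*math.pi)
print(math.degrees(diff))   # ~ 0.51 degrees, inside the 1-degree budget
```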
We now approximate pi using a cosine series.
pi = sum(2*cos(j), suitable j), possibly plus a lone "1" digit at
position 0 (that leading 1 below is r^0), so
pi ~ 1 + 2*cos(6) + 2*cos(11) + 2*cos(14) + 2*cos(77) ~ 3.1407.
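A quick numerical check of that series:

```python
import math

# the cosine series above, evaluated numerically
approx = 1 + 2*math.cos(6) + 2*math.cos(11) + 2*math.cos(14) + 2*math.cos(77)
print(approx)                  # 3.1407..., matching Maxima's (%o5) below
print(abs(approx - math.pi))   # error ~ 9e-4
```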
Finally, we have
pi*e^(pi*i) ~
(r^0+r^6+r^-6+r^11+r^-11+r^14+r^-14+r^77+r^-77)*r^22
= r^-55+r^8+r^11+r^16+r^22+r^28+r^33+r^36+r^99
Check using Maxima:
(%i1) r^-55+r^8+r^11+r^16+r^22+r^28+r^33+r^36+r^99;
(%o1) r^99 + r^36 + r^33 + r^28 + r^22 + r^16 + r^11 + r^8 + 1/r^55
(%i2) %,r=exp(%i),rectform;
(%o2) %i (sin(99) - sin(55) + sin(36) + sin(33) + sin(28) + sin(22) + sin(16)
+ sin(11) + sin(8)) + cos(99) + cos(55) + cos(36) + cos(33) + cos(28)
+ cos(22) + cos(16) + cos(11) + cos(8)
(%i3) bfloat(%);
(%o3) - 2.77994517385043b-2 %i - 3.140593309047521b0
(%i4) log(%);
(%o4) 1.144450908369626b0 - 3.132741228718346b0 %i
(%i5) exp(realpart(%));
(%o5) 3.140716342230068b0
(%i6)
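The same check redone in Python, for anyone without Maxima at hand (the exponent list is the one computed above):

```python
import cmath

# r = e^i; sum r^n over the bit positions of the representation
exponents = [-55, 8, 11, 16, 22, 28, 33, 36, 99]
z = sum(cmath.exp(1j*n) for n in exponents)
print(z)        # ~ -3.140593 - 0.027799i, matching (%o3)
print(abs(z))   # ~ 3.140716, matching (%o5)
```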
At 08:17 AM 5/5/2018, Henry Baker wrote:
>I've been looking at expressing numbers in radix-r notation,
>for |r|=1, e.g., r=exp(i)=e^i.
>
>Real r with |r|=1 (just r=+1 or r=-1) is too boring, so I looked at
>complex r, |r|=1.
>
>r=exp(i)=e^i is "representative" of this class, and is easy
>to deal with in a computer algebra system.
>
>We first notice that "most significant bit" and "least
>significant bit" have no meaning, because |r^n|=1 for
>all real n.
>
>We're obviously not in Kansas, anymore, Toto!
>
>Zero = "0", as usual.
>
>One = "1", as usual.
>
>Since r^k+r^(-k) = 2*cos(k), we can represent real numbers
>in the range [-2,2] to any degree of accuracy we wish with
>only 2 "1" bits using an appropriate k.
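The "appropriate k" always exists because k mod 2*pi is equidistributed (1/(2*pi) being irrational). A small search (bounds mine) makes that concrete:

```python
import math

# For arbitrary targets in [-2, 2], find an integer k with 2*cos(k)
# nearby; widening the search range makes the fit arbitrarily good.
def best_k(target, k_max=100_000):
    return min(range(1, k_max), key=lambda k: abs(target - 2*math.cos(k)))

for target in (-1.5, 0.25, 2.0):
    k = best_k(target)
    print(target, k, abs(target - 2*math.cos(k)))
```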
>
>Every real number outside this range can be expressed
>(very inefficiently) using a "greedy" algorithm that subtracts
>2*cos(k) for various k's. "Inefficient" here means that
>expressing n takes O(n) 1 bits, and most likely O(n) bits in
>total.
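One way to render that greedy algorithm (the search bound, tolerance, and term cap are mine). Since each term contributes at most 2, magnitude n needs at least n/2 terms, which is the O(n) "1"-bit cost:

```python
import math

def greedy_terms(x, k_max=100_000, tol=1e-3, max_terms=64):
    """Greedily subtract 2*cos(k), distinct k, until x is nearly exhausted."""
    ks, residual = [], x
    for _ in range(max_terms):
        if abs(residual) < tol:
            break
        k = min((k for k in range(1, k_max) if k not in ks),
                key=lambda k: abs(residual - 2*math.cos(k)))
        ks.append(k)
        residual -= 2*math.cos(k)
    return ks, residual

for n in (4, 10, 16):
    ks, err = greedy_terms(float(n))
    print(n, len(ks), err)   # term count grows roughly like n/2
```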
>
>I also plotted all of the complex numbers I could construct
>using m bits on both sides of 1 -- i.e., binary numbers from
>2^(-m) up to 2^m. These form pretty & mostly symmetrical
>patterns with blotches that eventually merge as m grows in
>size. (I don't have any way to show you these plots.)
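Those point sets are easy to regenerate: with r = e^i, the m-bit patterns are exactly the subset sums of {e^(i*j) : -m <= j <= m}. A sketch of the enumeration (names mine):

```python
import cmath
from itertools import product

# All complex numbers representable with 0/1 digits at positions -m..m
# in radix r = e^i, i.e. every subset sum of {e^(i*j) : -m <= j <= m}.
m = 3
positions = range(-m, m + 1)
points = [sum(b * cmath.exp(1j*j) for b, j in zip(bits, positions))
          for bits in product((0, 1), repeat=2*m + 1)]
print(len(points))                   # 2**(2*m+1) = 128 patterns
print(max(abs(p) for p in points))   # modulus never exceeds 2*m+1
```

Scatter-plotting the real and imaginary parts of `points` for growing m should reproduce the merging blotch patterns described above.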