Well, there are actually two different questions here:

1. Which complex rational do you choose?
2. How do you represent the answer?

I'll answer #2 first. You have several choices for how to represent complex rationals:

a) a pair of rationals, each in reduced (n/d) form;
b) a pair of integers over a common natural-number denominator;
c) a ratio of Gaussian integers (N/D) in reduced form (no common Gaussian integer factors).

(a) is often used in Common Lisp implementations, and may 'save space' if the real & imaginary parts are incommensurable. However, the moment you try to do very much calculation, you have to put the real & imaginary parts over a common denominator, so (b) might seem preferable. (c) is an interesting representation suggested in

http://home.pipeline.com/~hbaker1/Gaussian.html

I would tend to recommend (b) for most calculational purposes, even though many/most integers will overflow into 'bignums'.

Regarding question #1: I hate solving polynomial equations and getting answers like x = 1.000 + 5.3e-15 I, where I know darn well that x is actually real. If we use a continued fraction algorithm on the whole complex number, rather than on the real and imaginary parts separately, we can get rid of pesky terms like the '5.3e-15 I' above by producing a complex rational which is the closest _in the complex domain_.

The following is from an email dated 7/11/2012:

---------------------------------------

I've hated those pesky imaginary parts ever since Fortran has had complex numbers. I think I finally know how to get rid of them!

------

Here's a much better explanation of what I was trying to do:

I've been hacking a new version of the Common Lisp & Maxima "rationalize" function, which produces the worst rational approximation to a complex number that still falls within epsilon of that complex number. In the case of Maxima, the epsilon is set by the global variable "ratepsilon".

The traditional Euclidean algorithm for this process focuses on the number itself (which I call c), and only later worries about the epsilon and the radius of convergence. My algorithm starts with the _circle_ whose center is the given complex number c and whose real radius is r; we can denote this circle by (c,r). Since translation and inversion in the complex plane both take circles to circles (more accurately, "generalized circles" to "generalized circles"), we can perform a Euclidean procedure on the _circle_ (c,r) itself. At each stage we keep track of the transformations (translations & inversions), so that we can recover the original circle, if necessary.

We first look at the given circle (c,r). If there is a (Gaussian) integer point within this circle, then we are done, because such a point is rational, so it is our answer. If we compute cr = round(c) (= round(realpart(c)) + i*round(imagpart(c))), then this number cr is (one of) the closest integer points to c. If cr is "inside" the circle (c,r), then we are done. Note that if r > sqrt(2)/2, then cr _has_ to be inside the circle (c,r).

So we now consider the case where cr is not inside (c,r). After translating the origin to cr, we know that the new center is within sqrt(2)/2 of the origin -- i.e., inside the unit circle. We then invert the given circle in the unit circle, which will put the center of that new circle outside the unit circle. Furthermore, the radius of that new circle will grow bigger. We can now compute the closest integral point to the center of that new circle, and so forth.
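As an illustration only, a single step of this procedure, written directly in terms of the plain (center, radius) pair rather than the matrix representation used in the actual code below, might look something like the following sketch. The names cinvert and cstep are ad hoc, and no bookkeeping of the accumulated transformations is done here (the real code keeps them in the matrix m):

/* Sketch only.  cround matches the definition used later; cinvert and
   cstep are hypothetical names.                                          */
cround(z) := round(realpart(z)) + %i*round(imagpart(z));

/* Invert the circle (c,r) in the unit circle; assumes the circle does
   not pass directly through the origin, i.e. cabs(c) # r.                */
cinvert(c,r) := block([d : cabs(c)^2 - r^2],
  [rectform(conjugate(c)/d), r/abs(d)]);

/* One Euclidean step: if the nearest Gaussian integer lies inside the
   circle, return it (we are done); otherwise translate that integer to
   the origin and invert, returning the new [center, radius].             */
cstep(c,r) := block([q : cround(c)],
  if cabs(c - q) <= r then q else cinvert(c - q, r));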
At each of these Euclidean steps, the circle radius grows larger, so the circle must eventually enclose an integral point, at which time we invert the sequence of transformations to compute the final rational answer. This answer will be rational, because the integral point was transformed only by inversions and by translations by (Gaussian) integers. This discussion is not a _proof_, but shows how the algorithm was conceived.

Here's the algorithm in Maxima:

euclid(zc,m):=
  block([q,iqmat],
    /* zc is the input circle in Schwerdtfeger's representation */
    /* m accumulates transforms; m starts out as the 2x2 identity matrix */
    /* Original circle is always transpose(m^^-1).zc.conjugate(m^^-1) */
    q:cround(center(zc)),
    if is(inside(q,zc)) then m.matrix([q],[1])
    else (iqmat:matrix([1,q],[0,1]),
          euclid(transpose(iqmat.imat).zc.conjugate(iqmat.imat),
                 m.iqmat.imat)));

imat:matrix([0,1],[1,0]);

cround(z):=round(realpart(z))+%i*round(imagpart(z));

This produces a column vector matrix([num],[denom]); the result is num/denom.

euclid(gcircle(%pi,1.0e-7),ident(2)) produces

[ - 122453 ]
[          ]
[ - 38978  ]

= 122453/38978, which is within 10^-7 of pi:

(%i13) abs((-122453)/(-38978)-%pi),numer;
(%o13) 3.9724384226502707E-8

We use Hans Schwerdtfeger's representation for generalized circles as 2x2 Hermitian matrices. See this Wikipedia article for more details:

http://en.wikipedia.org/wiki/Generalised_circle

/* We test for "inside" by substituting the point into the circle's equation. */
inside(w,c):=is(equation(c,w)<=0);

Although very elegant, there is a problem with Schwerdtfeger's representation for circles: the squared radius of the circle is computed as the difference of two large quantities, so when we approach the limit of machine precision, we may inadvertently end up with a negative number, and hence an _imaginary_ radius! This example shows the problem in the lower right-hand corner:

(%i1) gcircle(%pi,1.0e-7);
      [   1               - %pi                 ]
(%o1) [                                         ]
      [           2                             ]
      [ - %pi  %pi  - 9.9999999999999984E-15    ]

Perhaps this problem can be ameliorated with a slightly different representation for circles.

Using the closest integer ("round" = least absolute remainder) isn't new. I don't have my Knuth Vol. I handy, but I think that using "round" instead of "floor" is several hundred years old. Rounding is also necessary for convergence in the complex plane -- e.g., for Gaussian integer gcd's. See this discussion:

http://home.pipeline.com/~hbaker1/Gaussian.html

BTW, the inverse of a circle (C,r) is the circle (C',r)/(CC'-r^2), and as you can imagine, things get very hairy when r^2 = CC' -- i.e., when the circle passes directly through the origin. If the circle to be inverted encloses the origin, then its inverse will be "inside out" -- i.e., it will enclose infinity (oo) but exclude the origin.

For small r, one could approximate 1/(CC'-r^2) with a Taylor expansion:

1/(CC'-r^2) ~ (1/(CC'))(1 + r^2/(CC') + ...)

So if we choose a slightly smaller radius to account for this approximation of the center point, then we might be able to use the Taylor expansion until r^2 > 1/2, which is all we will ever need.

----

Note that the sequence of partial quotients is slightly different from that of the classical algorithm. In particular, my algorithm might be "too" good, in the sense that it might not return the _worst_ approximation that fits within the original circle, but one that is "one click" better. If you are really looking for the worst approximation -- i.e., the one with the smallest denominator -- my algorithm may not produce it.
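The code above also calls gcircle, center, and equation, whose definitions didn't make it into this message. The following are plausible stand-ins, reconstructed from the Schwerdtfeger matrix shown in (%o1) above; they may differ in detail from the definitions actually used to produce these transcripts:

/* Stand-in definitions (reconstructed, not the originals).  A generalized
   circle is the Hermitian matrix  matrix([A, B], [conjugate(B), D])
   representing A*z*conjugate(z) + B*conjugate(z) + conjugate(B)*z + D = 0. */
gcircle(c,r) := matrix([1, -c], [-conjugate(c), c*conjugate(c) - r^2]);

center(zc) := rectform(-zc[1,2]/zc[1,1]);            /* center = -B/A */

/* squared radius = (B*conjugate(B) - A*D)/A^2; note the cancellation
   problem mentioned above when this difference underflows                */
sqradius(zc) := rectform((zc[1,2]*zc[2,1] - zc[1,1]*zc[2,2])/zc[1,1]^2);

/* value of the circle's equation at the point w; <= 0 is the "inside"
   test used by inside(w,c) above                                         */
equation(zc,w) := rectform(zc[1,1]*w*conjugate(w) + zc[1,2]*conjugate(w)
                           + zc[2,1]*w + zc[2,2]);

With these stand-ins, gcircle(%pi,1.0e-7) prints a matrix of the shape shown in (%o1), and inside(122453/38978, gcircle(float(%pi),1.0e-7)) should come out true, since 122453/38978 lies about 3.97e-8 from %pi.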
----

It should be possible to get an estimate of the roundoff error involved in these procedures by carrying out two parallel computations: the traditional one using floating-point numbers, and mine using rational numbers; we just have to agree on the partial quotients. Since the Möbius transform is rational and hence exact, if we ever arrive at a situation where the point to be approximated no longer lies "inside" its own circle, then we have clear evidence of roundoff error, and we must stop -- this is the best approximation we're going to be able to get within the limits of machine precision.

------

Hope this is more understandable.

At 12:11 PM 8/14/2013, James Cloos wrote:
"HB" == Henry Baker <hbaker1@pipeline.com> writes:
HB> I guess this is a continuation of my rant of one year ago re
HB> rationalizing a complex number instead of rationalizing its real &
HB> imaginary parts separately.
Is that to say that, for a library offering "complex rationals", you'd prefer ratios of Gaussian integers to Cartesian or polar tuples of Q?
Or did I parse backwards?
-JimC
--
James Cloos <cloos@jhcloos.com>         OpenPGP: 1024D/ED7DAEA6