Here's a favorite paradox of mine concerning analytic continuation. Suppose you have massive computational power at your fingertips. I'll give you as many terms, to as much accuracy as you request, of the Taylor series at 1 of a certain function that can be analytically continued in C \ {0}. Your task: analytically continue it once around the origin. The function happens to be sqrt(z), but you don't know that. So, perhaps you calculate the first 10^10 terms to high accuracy, then go 1/12 of the way around the circle, then (thinking you are being very conservative) compute 10^10 terms at the new point to high accuracy, and so on until you've returned to the start. You get the identical value you started with! Re-expanding a degree-10^10 polynomial about a new center and keeping all of its terms reproduces that polynomial exactly, so you've just recomputed the same polynomial 12 times and ended up back where you were. This fails because the error you get from truncating the power series affects the later terms very severely.

If you throw away a large fraction of the terms, say 90%, at each step of the analytic continuation, it works better. If you have information about the Riemann surface on which the analytic continuation is defined, you can make rigorous estimates. One particular method: if you know a neighborhood along which the function is defined and bounded by some particular constant, then you can find a Riemann mapping that makes this neighborhood contained within a disk, and compose it with the power series to get a new power series that converges at a known rate.

There are other techniques known to numerical analysts that are much better than truncating the series for getting a good approximation --- Conway once explained them to me, but I don't have a solid grasp. The basic idea is to use a weighting function, exponentially decreasing I think, that damps down the influence of later terms gradually, thus avoiding the artifacts caused by truncation.

Even with good computational strategies, there is a basic limit to analytic continuation techniques. You can consider all analytic functions bounded, say, by 1 in some region, and ask to approximate them on a disk of radius 1 - epsilon to within some bound, say 0.1. The number of functions distinguishable to within 0.1 in a disk of radius R in the Poincaré metric of the domain grows quickly --- in proportion to the area of the disk in the Poincaré metric, that is, exponentially in R. There are a lot of analytic functions to distinguish --- and the Taylor series is usually inefficient at distinguishing them: proportionally tiny changes in the coefficients can make huge changes in the values of the function.
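To make the paradox concrete, here is a minimal sketch in Python (using mpmath for extended precision; the 80 terms, 200 digits, and 12 steps are just illustrative stand-ins for the 10^10 terms of the thought experiment). It truncates the Taylor series of sqrt at z = 1, re-expands it about 12 successive centers around the unit circle keeping every term, and reads off the constant term at the end:

from math import comb                    # exact integer binomial coefficients
from mpmath import mp, mpf, mpc, exp, pi

mp.dps = 200     # plenty of guard digits: intermediate coefficients get huge
N = 80           # number of Taylor terms kept (standing in for 10^10)
STEPS = 12       # go around the origin in 12 equal steps, as in the paradox

# Taylor coefficients of sqrt(z) about z = 1:  c_k = binomial(1/2, k)
coeffs = [mpf(1)]
for k in range(1, N + 1):
    coeffs.append(coeffs[-1] * (mpf(1)/2 - (k - 1)) / k)

def recenter(c, delta):
    """Rewrite the polynomial sum_k c[k]*w^k in the variable v = w - delta.
    For a polynomial this re-expansion is exact: nothing is gained or lost."""
    n = len(c) - 1
    return [sum(c[k] * comb(k, j) * delta**(k - j) for k in range(j, n + 1))
            for j in range(n + 1)]

centers = [exp(2 * pi * mpc(0, 1) * i / STEPS) for i in range(STEPS + 1)]
for i in range(STEPS):
    coeffs = recenter(coeffs, centers[i + 1] - centers[i])   # keep ALL the terms

# The constant term is the "continued" value at z = 1.  It comes out as +1:
# we have only re-expanded the same degree-N polynomial 12 times, so the branch
# never changes, whereas honest continuation of sqrt once around 0 gives -1.
print(coeffs[0])

Keeping only a small fraction of the re-expanded terms at each step (the 90% remark above) is what lets the low-order coefficients start following the continued branch, at the cost of needing an enormous number of terms to begin with.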
Bill

On Jun 9, 2008, at 3:17 AM, Dan Asimov wrote:

Gareth wrote:
<< On Monday 09 June 2008, Andy Latto wrote:
On Sun, Jun 8, 2008 at 5:05 PM, Dan Asimov <dasimov@earthlink.net> wrote:
Speaking of math software, can anyone tell me if there exists analytic continuation software? ... and all this to a pre-specified level of numerical accuracy.
I don't think this is possible. In finite time the software can only . . .
The same argument proves that numerical integration and differentiation are impossible. And yet, if you pick a function you're interested in and ask Maple or Mathematica or whatever to integrate it for you over a given range to a given level of accuracy, they can generally do it pretty well.
Yes, though usually Maple or Mathematica has more than just the ability to evaluate the function at a given point: it also has an algorithm for doing so, whose details must go into determining the accuracy of the calculation of the function's derivative or integral.
In fact, I can provide an algorithm for the initial function (for what I need) . . . but it would be somewhat tricky for the software to work backward through a sequence of n power series to be sure that the final answer is of a given accuracy.
--Dan
_____________________________________________________________________
"It don't mean a thing if it ain't got that certain je ne sais quoi." --Peter Schickele