"Henry Baker" <hbaker1@pipeline.com>
You are correct about the Taylor series approximation.
The cool thing is that it works with complex numbers, getting the direction right (or wrong, as the case may be!).
Yes. Again, what is often overlooked is that, since the classical Newton transform is an invertible map from functions to functions, iterating pretty much ANY function, no matter how wretchedly it converges, is the "Newton's method" way to compute SOME other function. Moreover, *in general* the Newton transform can just as easily make convergence WORSE as improve it. It isn't any sort of magic bullet; it just happens to be good for functions that are "sufficiently-well approximated" by the first few terms of their Taylor series. Not every function meets this criterion, nor does it rule out better approximations produced by truncating somebody else's flavor of expansion.
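To make the invertibility concrete, here is a minimal Python sketch (my own notation and function names, nothing from the original discussion): the Newton transform sends f to N_f(x) = x - f(x)/f'(x), whose fixed points are exactly the zeros of f; running it backwards, even the humble halving map g(x) = x/2 turns out to be "Newton's method" for f(x) = x^2, since x - x^2/(2x) = x/2.

```python
def newton_transform(f, df):
    """Return the iteration map N_f(x) = x - f(x)/f'(x).

    The fixed points of N_f are exactly the zeros of f."""
    return lambda x: x - f(x) / df(x)

# Forward direction: iterating N_f for f(x) = x^2 - 2 converges to sqrt(2).
step = newton_transform(lambda x: x * x - 2, lambda x: 2 * x)
x = 1.0
for _ in range(6):
    x = step(x)

# Inverse direction, in closed form: the halving map g(x) = x/2 is the
# Newton iteration for f(x) = x^2 (double-check: x - x^2/(2x) = x/2).
halve = newton_transform(lambda x: x * x, lambda x: 2 * x)
```

In general the inverse transform recovers f from a given iteration g via f(x) = exp(integral of dx/(x - g(x))), which is why "pretty much ANY" iteration is Newton for something.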
However, since it is easy to overshoot, you need to be very close to the thing you're looking for.
Note that this statement is true of ALL functional iterations and has nothing specifically more to do with Newton's method than any other. You want to cozy into the "basin of attraction" of any fixed point you seek.
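A toy illustration of those basins of attraction (my own example, not part of the exchange): Newton's iteration for z^3 - 1 over the complex numbers has three attracting fixed points, the cube roots of unity, and which one you land on depends entirely on whose basin you start in.

```python
def newton_cbrt1(z, iters=60):
    """Newton iteration for f(z) = z^3 - 1, applied to a complex start z."""
    for _ in range(iters):
        z = z - (z ** 3 - 1) / (3 * z ** 2)
    return z

near_real = newton_cbrt1(0.9 + 0.1j)    # starts in the basin of the root 1
near_upper = newton_cbrt1(-0.4 + 0.9j)  # starts near exp(2*pi*i/3)
```

(The boundaries between the three basins are the famous Newton fractal, which is exactly why "cozy into the basin" is the operative advice.)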
Having had lots of experience with Newton in the neighborhood of multiple roots: it sucks! You need to divide out a multiple root, either explicitly or implicitly.
Tsk tsk, such language. My theme is to advocate expressing things in terms of fixed points and getting away from all this talk of roots and zeros--try it!
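To see just how badly Newton behaves at a multiple root, and what dividing out the multiplicity buys you, here is a sketch (my own example, with the multiplicity m assumed known; the implicit route is to run Newton on u = f/f', which has only simple roots):

```python
def plain_newton(x, steps):
    """Newton for f(x) = (x - 1)^2, a DOUBLE root at 1.

    At a double root the error only halves each step: linear convergence."""
    for _ in range(steps):
        x = x - (x - 1) ** 2 / (2 * (x - 1))
    return x

def multiplicity_newton(x, steps, m=2):
    """Modified Newton x <- x - m*f/f', with the multiplicity m divided out."""
    for _ in range(steps):
        x = x - m * (x - 1) ** 2 / (2 * (x - 1))
    return x

slow = plain_newton(2.0, 20)          # still about 2^-20 away from the root
fast = multiplicity_newton(2.0, 1)    # exact in one step for this f
```

Twenty plain steps still leave an error near 10^-6, while one multiplicity-corrected step lands on the fixed point exactly.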
Many people (including me) have tried making an analogy with some sort of gravitational force field. I've never been completely successful with this analogy. Perhaps someone else has?
Sure, this can be done. To make a "flow" field (as I think RCS has called them) you "just" have to be able to express the n-th iterate f^[n] for fractional n, and an "infinitesimal generator" analogous to the derivative operator. (Strangely, once one can express non-integer iterates, it is often possible to plug virtually anything into n, for instance a complex number, but the "meaning" of this construction defies easy interpretation, at least for me.)
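A degenerate but concrete sketch (my own toy example, not RCS's construction): for a linear map g(x) = a*x the n-th iterate has the closed form g^[n](x) = a^n * x, and indeed nothing stops you from plugging a fractional, or even complex, n into it.

```python
def iterate_linear(a, n, x):
    """The n-th iterate of g(x) = a*x; n may be fractional or complex."""
    return (a ** n) * x

# "Half an application" of the halving map...
half_step = iterate_linear(0.5, 0.5, 8.0)
# ...composed with itself gives exactly one full application.
full_step = iterate_linear(0.5, 0.5, half_step)
```

The hard part of the general problem is that arbitrary g has no such closed form, so one must conjugate g to something linear-like near a fixed point before fractional powers make sense.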
Once you have the concept of a Newton iteration, you can extend it to all sorts of other things: power series, matrices, etc.
Right. Moreover, once you have the concept of deriving computational processes that preserve fixpoints you can go far beyond just extending the domain and co-domain. Standard Newton's method is a very special instance of a broad class of computations, and may be generalized along many facets. What's usually taught as "Newton's method" might be more precisely called something like the "singly-iterated first-order Newton-Taylor fixpoint-preserving transform". So enough with the rooting for zeros already. That's just needlessly application-specific language. Fine in itself, but beside the core ideas.
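One concrete instance over matrices (my choice of example, a standard one): applying Newton's method to f(X) = X^{-1} - A yields the Newton-Schulz iteration X <- X(2I - AX), whose fixed point is A^{-1}. No zeros-of-polynomials language required; it is just a fixpoint-preserving transform on a different domain.

```python
import numpy as np

def newton_schulz_inverse(A, steps=30):
    """Newton-Schulz iteration X <- X(2I - AX) converging to inv(A).

    The scaled start X0 = A^T / (||A||_1 * ||A||_inf) is a standard
    choice that guarantees the initial residual has spectral radius < 1."""
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(steps):
        X = X @ (2 * I - A @ X)
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
Ainv = newton_schulz_inverse(A)
```

Convergence is quadratic, just as in the scalar case: each step squares the residual I - AX.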