I've been looking at animating the motion of some (rather nonstandard) robots: this involves interpreting the kinematic constraints as a bunch of simultaneous polynomial equations in several (7) variables with nullity 1, then tracing the motion path via continuous solution of the equations as a function of the time parameter.
If possible, I want to solve these in real time --- i.e. fast enough to keep pace with the graphical display (say 30 frames per second).
The classical Adams-Bashforth-Moulton (predictor-corrector) method for numerical solution of a differential equation seems a suitable vehicle for the task: the third-order version costs only 2 iterations for acceptable accuracy. [For some reason I had thought first of Runge-Kutta-Merson, which however requires the derivatives to be available as an explicit function of time.] To cast the motion equations as an ODE system, they must be differentiated and the resulting linear equations for the derivatives solved --- in practice numerically, at each step.
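For concreteness, here is a rough Python/NumPy sketch of the scheme. The constraint below is a toy stand-in of my own (a unit circle driven by x0 = cos t), not the actual robot equations; only the shape of the computation matters:

    import numpy as np

    # Toy stand-in for the constraints: one polynomial equation in two
    # variables (nullity 1), augmented by a driving condition x0 = cos(t).
    def jac(x):                       # Jacobian of the augmented system G(x, t)
        return np.array([[2*x[0], 2*x[1]],
                         [1.0,    0.0 ]])

    def g_t(x, t):                    # partial derivative of G w.r.t. t
        return np.array([0.0, np.sin(t)])

    def velocity(x, t):
        # differentiating G(x, t) = 0 gives the linear system  J(x) x' = -G_t
        return np.linalg.solve(jac(x), -g_t(x, t))

    def abm3_step(x, fs, t, h):
        # fs holds the last three derivatives, oldest first
        # predictor: 3rd-order Adams-Bashforth
        xp = x + h/12.0 * (23*fs[2] - 16*fs[1] + 5*fs[0])
        # corrector: one Adams-Moulton pass (a second pass gives the "2 iterations")
        return x + h/12.0 * (5*velocity(xp, t + h) + 8*fs[2] - fs[1])

    h = 1.0 / 30                      # one frame at 30 fps
    ts = [0.5 + k*h for k in range(3)]
    xs = [np.array([np.cos(t), np.sin(t)]) for t in ts]   # exact start-up values
    fs = [velocity(x, t) for x, t in zip(xs, ts)]
    x, t = xs[-1], ts[-1]
    for _ in range(50):
        x = abm3_step(x, fs, t, h)
        t += h
        fs = fs[1:] + [velocity(x, t)]
    print(x, [np.cos(t), np.sin(t)])  # tracked point vs. exact path

(The toy drive degenerates where x1 = 0; there a real tracker would switch the driven variable.)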
Now, we already have to hand an approximate solution to these linear equations --- the derivatives at the previous step --- so an iterative algorithm looks worth trying. The textbook method is Gauss-Seidel, which (with luck, and a diagonally dominant matrix) converges geometrically.
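(For reference, a bare-bones sweep looks like so; in the application x would arrive holding the previous step's derivatives:)

    import numpy as np

    def gauss_seidel(A, b, x, sweeps=2):
        # sweep through the unknowns, using each new value as soon as it is
        # computed; converges geometrically if A is e.g. diagonally dominant
        for _ in range(sweeps):
            for i in range(len(b)):
                x[i] += (b[i] - A[i] @ x) / A[i, i]
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(gauss_seidel(A, b, np.zeros(2), sweeps=10))   # -> approx [1/11, 7/11]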
I have to admit I was already prejudiced; and after spending an evening establishing how cumbersome and ineffective this idea proves to be [at any rate, in this application], in desperation I cooked up a home-grown alternative.
It's a matrix variation of an old trick assembly-coders used in the days before hardware division: starting from an approximation b to 1/a, iterate b := 2 b - a b^2. In the matrix context this becomes simply

B := 2 B - B A B.
It's easy to show that if I - A B = D, then for the next iterate B' we have I - A B' = D^2: expanding, I - A(2 B - B A B) = I - 2 A B + (A B)^2 = (I - A B)^2. So provided the spectral radius (maximum eigenvalue modulus) of D is d < 1, the algorithm converges and is second-order. If A is rectangular or singular, it converges instead to the Moore-Penrose pseudo-inverse (given a suitable start, e.g. a small positive multiple of the transpose of A).
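In code the whole scheme is one line per pass. A minimal NumPy sketch, with a random test matrix of my own to exhibit the residual squaring itself:

    import numpy as np

    def refine_inverse(A, B, iterations=2):
        # each pass maps the residual D = I - A B to D^2,
        # so a good starting guess improves quadratically
        for _ in range(iterations):
            B = 2*B - B @ A @ B
        return B

    rng = np.random.default_rng(0)
    A = rng.standard_normal((7, 7)) + 7*np.eye(7)               # comfortably nonsingular
    B = np.linalg.inv(A) + 1e-3 * rng.standard_normal((7, 7))   # perturbed inverse
    for k in range(4):
        print(k, np.linalg.norm(np.eye(7) - A @ B))             # norms roughly square each pass
        B = refine_inverse(A, B, iterations=1)

In the application, the previous step's B plays the role of the perturbed starting guess.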
This is dead simple to program, and fast --- 2 iterations suffice for the application. Admittedly it is still slightly slower (maybe 25%) than solving the equations directly; though if matrix multiplication were hardwired --- as already in some graphics accelerators --- it might well be faster. Yet I don't recall ever seeing it mentioned in print --- can anybody out there enlighten me?
To use it for matrix inversion in general, we'd need to construct an initial approximation B for which d < 1. This unfortunately looks nontrivial. [In the original real-number context, it was worked around using the fact that binary representation gives an easy approximate base-2 log; and for reduced a ~ 1 we can just choose b = 1.]
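The scalar workaround, at any rate, is easy to spell out; a Python sketch (positive a only):

    import math

    def reciprocal(a, iterations=6):
        # binary representation gives the approximate base-2 log for free:
        # a = m * 2**e with 0.5 <= m < 1, so 1/a = (1/m) * 2**(-e)
        m, e = math.frexp(a)
        b = 1.0                 # reduced m ~ 1, so b = 1 gives d = 1 - m <= 1/2
        for _ in range(iterations):
            b = 2*b - m*b*b     # the pre-hardware-division iteration
        return math.ldexp(b, -e)

    print(reciprocal(7.0), 1/7.0)   # agree to machine precision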
Fred Lunnon