It might be worth mentioning that, given any way to do this factoring, say M = A x B, then for any invertible k x k matrix G with inverse H we also have M = (A x G) x (H x B).

Also, which ways to find an approximate factorization are best will probably depend on what kind of measure of error is used. If the error used is perhaps the most obvious one, namely Err(M; A, B) = L_2(M - A x B), where L_2(N) is defined as the sum of the squared entries of N (or the sum of squared absolute values if over the complexes), then probably just a garden-variety optimization algorithm will work well. (E.g., a conjugate-gradient method from "Practical Optimization" by Gill, Murray, and Wright — a book I highly recommend.)

—Dan
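As a minimal sketch of the garden-variety approach Dan describes, here is plain gradient descent on A and B to minimize L_2(M - A x B), using numpy. (This uses ordinary gradient steps rather than the conjugate-gradient method he recommends; the initialization scale, step size, and iteration count are arbitrary choices for illustration.)

```python
import numpy as np

def factor(M, k, steps=5000, lr=0.02, seed=0):
    """Approximately factor M (m x n) as A (m x k) times B (k x n)
    by gradient descent on Err(M; A, B) = sum of squared entries of M - A @ B."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    # Small random initialization (arbitrary choice, breaks the symmetry at A = B = 0).
    A = 0.1 * rng.standard_normal((m, k))
    B = 0.1 * rng.standard_normal((k, n))
    for _ in range(steps):
        R = M - A @ B          # residual
        # dErr/dA = -2 R B^T and dErr/dB = -2 A^T R; the factor 2 is folded into lr.
        gA = R @ B.T
        gB = A.T @ R
        A += lr * gA
        B += lr * gB
    return A, B
```

Note that, per the observation above, the (A, B) this converges to is far from unique: (A x G, H x B) achieves exactly the same error for any invertible G with inverse H.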
On Feb 24, 2016, at 10:43 AM, Mike Stay <metaweta@gmail.com> wrote:
Say I've got an m x n matrix that I'd like to factor into an m x k matrix and a k x n matrix. The factorization doesn't have to be exact; I'd just like to minimize some measure of error.
What are your favorite algorithms to do this? What assumptions about the matrix and the error measure do they make?
There's one blurb in Wikipedia about some nonnegative matrix factorization algorithms:
There are several ways in which the W and H may be found: Lee and Seung's multiplicative update rule [10] has been a popular method due to the simplicity of implementation. Since then, a few other algorithmic approaches have been developed.
Some successful algorithms are based on alternating non-negative least squares: in each step of such an algorithm, first H is fixed and W found by a non-negative least squares solver, then W is fixed and H is found analogously. The procedures used to solve for W and H may be the same[22] or different, as some NMF variants regularize one of W and H.[17] Specific approaches include the projected gradient descent methods,[22][23] the active set method,[4][24] and the block principal pivoting method[25] among several others.
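For concreteness, here is a minimal numpy sketch of the Lee and Seung multiplicative update rule mentioned in the quote, in its Frobenius-norm form. Each iteration alternately rescales H with W fixed and then W with H fixed; the random positive initialization, step count, and the small eps guarding against division by zero are arbitrary choices, not from the quoted text.

```python
import numpy as np

def nmf_multiplicative(V, k, steps=2000, seed=0, eps=1e-12):
    """Approximate V (m x n, nonnegative) as W @ H with nonnegative
    W (m x k) and H (k x n), minimizing ||V - W H||_F^2 via
    Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 0.1   # positive init keeps updates well-defined
    H = rng.random((k, n)) + 0.1
    for _ in range(steps):
        # Multiplicative factors are ratios of nonnegative terms,
        # so W and H stay entrywise nonnegative throughout.
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H
```

The appeal noted in the quote is clear here: each update is two matrix products and an elementwise divide, with no projection step needed to maintain nonnegativity.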