Non-negative Least Squares (NNLS) algorithm


A little while ago, I looked at this group and the web for an
implementation of the NNLS algorithm, but all I found was a post in
this group from 1997 from somebody also looking for an implementation,
so I had to do it myself.

The problem: Given a matrix A (usually with more rows than columns) and a
vector f, find a vector x such that:
1) all elements of x are non-negative;
2) subject to constraint (1), the Euclidean norm (2-norm) of the vector
(A.x-f) is minimized.

(Note that the unconstrained problem - find x to minimize the norm of
(A.x-f) - is a simple application of QR decomposition.)
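
As a pure-Python illustration of that unconstrained case (the function name is mine, and classical Gram-Schmidt stands in for a production QR, which would use Householder reflections):

```python
def qr_least_squares(A, f):
    """Minimize ||A.x - f|| for a tall matrix A via A = Q.R."""
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    Q, R = [], [[0.0] * n for _ in range(n)]
    for j in range(n):                       # classical Gram-Schmidt
        v = cols[j][:]
        for k in range(len(Q)):
            R[k][j] = sum(Q[k][i] * cols[j][i] for i in range(m))
            v = [v[i] - R[k][j] * Q[k][i] for i in range(m)]
        R[j][j] = sum(t * t for t in v) ** 0.5
        Q.append([t / R[j][j] for t in v])
    # Back-substitute R.x = Q^T.f (R is upper triangular)
    b = [sum(Q[k][i] * f[i] for i in range(m)) for k in range(n)]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(R[i][j] * x[j] for j in range(i + 1, n))) / R[i][i]
    return x

# Line fit through (0,6), (1,0), (2,0): more rows than columns
print(qr_least_squares([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]],
                       [6.0, 0.0, 0.0]))
```

The answer agrees with the normal-equations solution (A^T.A).x = A^T.f, but QR avoids forming A^T.A, which squares the condition number.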

The NNLS algorithm is published in chapter 23 of Lawson and Hanson,
"Solving Least Squares Problems" (Prentice-Hall, 1974; republished by
SIAM, 1995).

Some preliminary comments on the code:
1) It hasn't been thoroughly tested.
2) I only have a few months' experience programming Mathematica (but
lots of programming experience in general), so there are probably style
issues. I am happy to receive constructive criticism.
3) There is some ugliness where a chunk of code is repeated so that I
can use a simple 'while' loop rather than one that breaks out in the
middle. Arguably, breaking out of the middle of the loop would be
better than repeating the code.
4) I've left in a bunch of debugging statements.
5) I've cut-and-pasted this from my notebook, so brackets may appear as
"\[LeftDoubleBracket]" rather than "[[", etc.
6) I haven't paid much attention to efficiency - e.g. I multiply by
"Inverse[R]" rather than doing a back-substitution (R is a triangular
matrix).

====================================================================

(* Coded by Michael Woodhams, from the algorithm by Lawson and Hanson, *)
(* "Solving Least Squares Problems", 1974 and 1995. *)
bitsToIndices[v_] := Select[Table[i, {i, Length[v]}], v[[#]] == 1 &];
NNLS[A_, f_] := Module[
    {x, zeroed, w, t, Ap, z, q, \[Alpha], i, zeroedSet, positiveSet,
      toBeZeroed, compressedZ, Q, R},
    (* Use delayed evaluation so that these are recalculated on the fly
       as needed: *)
    zeroedSet := bitsToIndices[zeroed];
    positiveSet := bitsToIndices[1 - zeroed];
    (* Init x to a vector of zeros, same length as a row of A *)
    debug["A=", MatrixForm[A]];
    x = 0 A[[1]];
    debug["x=", x];
    (* Init zeroed to a vector of ones, same length as x *)
    zeroed = 1 - x;
    debug["zeroed=", zeroed];
    w = Transpose[A].(f - A.x);
    debug["w=", w];
    While[zeroedSet != {} && Max[w[[zeroedSet]]] > 0,
      debug["Outer loop starts."];
      (* The index t of the largest element of w, *)
      (* subject to the constraint that t is zeroed *)
      t = Position[w zeroed, Max[w zeroed], 1, 1][[1, 1]];
      debug["t=", t];
      zeroed[[t]] = 0;
      debug["zeroed=", zeroed];
      (* Ap = the columns of A indexed by positiveSet *)
      Ap = Transpose[Transpose[A][[positiveSet]]];
      debug["Ap=", MatrixForm[Ap]];
      (* Minimize (Ap . compressedZ - f) by QR decomposition *)
      {Q, R} = QRDecomposition[Ap];
      compressedZ = Inverse[R].Q.f;
      (* Create vector z with 0 in zeroed indices and the compressedZ
         entries elsewhere *)
      z = 0 x;
      z[[positiveSet]] = compressedZ;
      debug["z=", z];
      While[Min[z] < 0,
        (* There is a wart here: x can have zeros, giving infinities or
           indeterminates. They don't matter, as we ignore those elements
           (not in positiveSet), but they will produce warnings. *)
        debug["Inner loop start"];
        (* Find the smallest x[[q]]/(x[[q]] - z[[q]]) *)
        (* such that q is not zeroed and z[[q]] < 0 *)
        \[Alpha] = Infinity;
        For[q = 1, q <= Length[x], q++,
          If[zeroed[[q]] == 0 && z[[q]] < 0,
            \[Alpha] = Min[\[Alpha], x[[q]]/(x[[q]] - z[[q]])];
            debug["After trying index q=", q, " \[Alpha]=", \[Alpha]];
            ]; (* if *)
          ]; (* for *)
        debug["\[Alpha]=", \[Alpha]];
        x = x + \[Alpha] (z - x);
        debug["x=", x];
        toBeZeroed = Select[positiveSet, Abs[x[[#]]] < 10^-13 &];
        debug["toBeZeroed=", toBeZeroed];
        zeroed[[toBeZeroed]] = 1;
        x[[toBeZeroed]] = 0;
        (* Duplicated from above *)
        (* Ap = the columns of A indexed by positiveSet *)
        Ap = Transpose[Transpose[A][[positiveSet]]];
        debug["Ap=", MatrixForm[Ap]];
        (* Minimize (Ap . compressedZ - f) by QR decomposition *)
        {Q, R} = QRDecomposition[Ap];
        compressedZ = Inverse[R].Q.f;
        (* Create vector z with 0 in zeroed indices and the compressedZ
           entries elsewhere *)
        z = 0 x;
        z[[positiveSet]] = compressedZ;
        debug["z=", z];
        ]; (* end inner while loop *)
      x = z;
      debug["x=", x];
      w = Transpose[A].(f - A.x);
      debug["w=", w];
      ]; (* end outer while loop *)
    Return[x];
    ]; (* end module *)
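
For readers without Mathematica, the same active-set iteration can be sketched in pure Python (my own port, for illustration only; helper names are mine, and solve_ls uses normal equations with Gaussian elimination where the original uses QR):

```python
def solve_ls(A, f, cols):
    """Least squares over the columns of A listed in cols (normal equations)."""
    k, m = len(cols), len(A)
    G = [[sum(A[i][p] * A[i][q] for i in range(m)) for q in cols] for p in cols]
    b = [sum(A[i][p] * f[i] for i in range(m)) for p in cols]
    for c in range(k):                       # forward elimination, no pivoting
        for r in range(c + 1, k):
            fac = G[r][c] / G[c][c]
            G[r] = [G[r][j] - fac * G[c][j] for j in range(k)]
            b[r] -= fac * b[c]
    z = [0.0] * k
    for r in range(k - 1, -1, -1):           # back-substitution
        z[r] = (b[r] - sum(G[r][j] * z[j] for j in range(r + 1, k))) / G[r][r]
    return z

def nnls(A, f, tol=1e-13):
    n = len(A[0])
    x, zeroed = [0.0] * n, [True] * n
    w = [sum(A[i][j] * (f[i] - sum(A[i][k] * x[k] for k in range(n)))
             for i in range(len(A))) for j in range(n)]
    while any(zeroed) and max(w[j] for j in range(n) if zeroed[j]) > tol:
        # free the zeroed index with the largest gradient component
        t = max((j for j in range(n) if zeroed[j]), key=lambda j: w[j])
        zeroed[t] = False
        while True:
            pos = [j for j in range(n) if not zeroed[j]]
            zc = solve_ls(A, f, pos)
            z = [0.0] * n
            for j, v in zip(pos, zc):
                z[j] = v
            if min(z[j] for j in pos) >= 0:
                break
            # step toward z as far as non-negativity allows
            alpha = min(x[j] / (x[j] - z[j]) for j in pos if z[j] < 0)
            x = [x[j] + alpha * (z[j] - x[j]) for j in range(n)]
            for j in pos:                    # re-zero coordinates that hit 0
                if abs(x[j]) < tol:
                    zeroed[j], x[j] = True, 0.0
        x = z
        w = [sum(A[i][j] * (f[i] - sum(A[i][k] * x[k] for k in range(n)))
                 for i in range(len(A))) for j in range(n)]
    return x
```

On a small test such as A = {{1, 0}, {0, 1}, {1, 1}}, f = {2, -1, 1}, the unconstrained solution {2, -1} has a negative entry, and the sketch returns {1.5, 0}.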

====================================================================

And some final comments:

Don't 'reply' to the e-mail address in the header of this post - it is
old and defunct, and not updated because of spammers. I am at
massey.ac.nz, and my account name is m.d.woodhams. (3 year post-doc
appointment, so by the time you read this, that address might also be
out of date.)

I put this code in the public domain - but I'd appreciate it if you
acknowledge my authorship if you use it. I will be writing a
bioinformatics paper about a "modified closest tree" algorithm that uses
this, and I might (I haven't decided) provide a link to the Mathematica
code, which will include the above. If this happens, you could cite
that paper.

This software comes with no warranty etc etc. Your warranty is that
you have every opportunity to test the code yourself before using it.

If you are reading this long after it was posted, I may have an
improved version by then. Feel free to enquire by e-mail.

Reply mdw1 (2) 10/3/2003 6:53:03 AM

mdw@ccu1.auckland.ac.nz (Michael Woodhams) wrote in message news:<blj6cf$1j6$1@smc.vnet.net>...
> A little while ago, I looked at this group and the web for an
> implementation of the NNLS algorithm, but all I found was a post in
> this group from 1997 from somebody also looking for an implementation,
> so I had to do it myself.
> 
> The problem: Given a matrix A (usually more rows than columns) and
> vector f, find vector x such that:
> 1) All elements of x are non-negative
> 2) subject to constraint (1), the Euclidean norm (2-norm) of vector
> (A.x-f) is minimized.
[...]

Michael,

Congratulations on your NNLS code!
I had a naive 10-line solution (see below) using FindMinimum.
It seems to work most of the time, but your solution is more accurate
and... at least sixty times faster!

NNLSFindMinimum[A_, f_] :=
  Module[{nbx = Length[First[A]], xi, x, axf, xinit, fm},
    xi = Array[x, nbx];
    (* Substituting xi^2 for x makes the non-negativity constraint implicit *)
    axf = A.xi^2 - f;
    xinit = PseudoInverse[A].f;
    If[And @@ (# >= 0 & /@ xinit),
      xinit,
      fm = FindMinimum[Evaluate[axf.axf],
          Evaluate[Sequence @@ Transpose[{xi, xinit}]]];
      xi^2 /. fm[[2]]
      ]
    ];
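
The trick in NNLSFindMinimum is the substitution x = xi^2, which makes the non-negativity constraint implicit. A pure-Python sketch of the same idea, with plain gradient descent standing in for FindMinimum (the step size and iteration count are my own arbitrary choices):

```python
# Minimize ||A.(y^2) - f||^2 over unconstrained y; x = y^2 is then
# non-negative by construction.
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
f = [2.0, -1.0, 1.0]

y = [1.0, 1.0]                         # arbitrary positive starting point
for _ in range(20000):
    x = [v * v for v in y]
    r = [sum(A[i][j] * x[j] for j in range(2)) - f[i] for i in range(3)]
    # chain rule: d/dy_j ||r||^2 = 2 (A^T.r)_j * 2 y_j
    g = [2.0 * sum(A[i][j] * r[i] for i in range(3)) * 2.0 * y[j]
         for j in range(2)]
    y = [y[j] - 0.01 * g[j] for j in range(2)]

x = [v * v for v in y]                 # the non-negative solution
```

With this A and f the unconstrained least-squares solution is [2, -1], so the constrained optimum must zero the second coordinate; the descent converges to approximately [1.5, 0].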

(* a small test : *)
a = Table[Random[],{i,10},{j,20}];
f = Table[Random[],{i,10}];
x = NNLSFindMinimum[a,f]//Timing
% // Chop[#, 10.^-5] &
{1.001 Second,{0.0580067,0.0246345,0,0.0227949,0,0,0,0.386312,0,0,0,0,0,0,0,0,
0,0,0.245039,0.215131}}

x = NNLS[a, f] // Timing
{0.016 Second,{0.058114,0.02338271,0,0.0227961,0,0,0,0.386253,0,0,0,0,0,0,0,0,
0,0,0.246019,0.215755}}
---
jcp

Reply poujadej (31) 10/4/2003 6:15:38 AM
comp.soft-sys.math.mathematica 28738 articles. 7 followers.
