Non-negative Least Squares (NNLS) algorithm

A little while ago, I looked at this group and the web for an
implementation of the NNLS algorithm, but all I found was a post in
this group from 1997 from somebody also looking for an implementation,
so I had to do it myself.

The problem: Given a matrix A (usually with more rows than columns) and
a vector f, find a vector x such that:
1) all elements of x are non-negative;
2) subject to constraint (1), the Euclidean norm (2-norm) of the vector
(A.x - f) is minimized.

(Note that the unconstrained problem - finding x to minimize the 2-norm
of (A.x - f) - is a simple application of QR decomposition.)
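To make that remark concrete, here is a tiny illustration (my own, in Python rather than Mathematica): for a small well-conditioned system the normal equations give the same answer a QR solve would, and are easy to check by hand. QR remains the numerically preferred route for real problems.

```python
# Unconstrained least squares for A.x ~ f, minimizing ||A.x - f||_2.
# For this tiny 3x2 demo we solve the normal equations
# (A^T A) x = (A^T f) by Cramer's rule; QR is preferable numerically.
A = [[1, 1], [1, 2], [1, 3]]
f = [6, 5, 7]

# Build A^T A (2x2) and A^T f (length 2).
ata = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
atf = [sum(A[k][i] * f[k] for k in range(3)) for i in range(2)]

det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
x = [(atf[0] * ata[1][1] - ata[0][1] * atf[1]) / det,
     (ata[0][0] * atf[1] - atf[0] * ata[1][0]) / det]
print(x)  # the best-fit line 5 + 0.5 t through (1,6), (2,5), (3,7)
```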

The NNLS algorithm is published in chapter 23 of Lawson and Hanson,
"Solving Least Squares Problems" (Prentice-Hall, 1974; republished by
SIAM, 1995).

Some preliminary comments on the code:
1) It hasn't been thoroughly tested. 
2) I only have a few months experience programming Mathematica (but
lots of programming experience in general) so there are probably style
issues. I am happy to receive constructive criticism.
3) There is some ugliness where a chunk of code appears twice so that I
can use a simple 'while' loop rather than one that breaks out in the
middle. Arguably, breaking out of the middle of the loop would be
better than repeating the code.
4) I've left in a bunch of debugging statements. 
5) I've cut-and-pasted this from my notebook, which produced some
ugliness such as "\[LeftDoubleBracket]" in place of "[[", etc.
6) I haven't paid much attention to efficiency - e.g., I multiply by
"Inverse[R]" rather than doing a back-substitution (R is a
triangular matrix).

====================================================================

(* Coded by Michael Woodhams, from the algorithm by Lawson and Hanson, *)
(* "Solving Least Squares Problems", 1974 and 1995. *)

(* Define debug as a no-op so the debugging statements left in below
   are inert; set debug = Print to see the trace. *)
debug[args___] := Null;

bitsToIndices[v_] := Select[Table[i, {i, Length[v]}], v[[#]] == 1 &];

NNLS[A_, f_] := Module[
    {x, zeroed, w, t, Ap, z, q, alpha, zeroedSet, positiveSet,
      toBeZeroed, compressedZ, Q, R},
    (* Use delayed evaluation so that these are recalculated on the
       fly as needed: *)
    zeroedSet := bitsToIndices[zeroed];
    positiveSet := bitsToIndices[1 - zeroed];
    (* Init x to a vector of zeros, the same length as a row of A *)
    debug["A=", MatrixForm[A]];
    x = 0 A[[1]];
    debug["x=", x];
    (* Init zeroed to a vector of ones, the same length as x *)
    zeroed = 1 - x;
    debug["zeroed=", zeroed];
    w = Transpose[A].(f - A.x);
    debug["w=", w];
    While[zeroedSet != {} && Max[w[[zeroedSet]]] > 0,
      debug["Outer loop starts."];
      (* t = the index of the largest element of w, *)
      (* subject to the constraint that t is zeroed *)
      t = Position[w zeroed, Max[w zeroed], 1, 1][[1, 1]];
      debug["t=", t];
      zeroed[[t]] = 0;
      debug["zeroed=", zeroed];
      (* Ap = the columns of A indexed by positiveSet *)
      Ap = Transpose[Transpose[A][[positiveSet]]];
      debug["Ap=", MatrixForm[Ap]];
      (* Minimize (Ap . compressedZ - f) by QR decomposition *)
      {Q, R} = QRDecomposition[Ap];
      compressedZ = Inverse[R].Q.f;
      (* Create vector z with 0 in zeroed indices and compressedZ
         entries elsewhere *)
      z = 0 x;
      z[[positiveSet]] = compressedZ;
      debug["z=", z];
      While[Min[z] < 0,
        (* There is a wart here: x can have zeros, giving infinities
           or indeterminates. They don't matter, as we ignore those
           elements (not in positiveSet), but it will produce
           warnings. *)
        debug["Inner loop start"];
        (* Find the smallest x[[q]]/(x[[q]] - z[[q]]) *)
        (* such that q is not zeroed and z[[q]] < 0 *)
        alpha = Infinity;
        For[q = 1, q <= Length[x], q++,
          If[zeroed[[q]] == 0 && z[[q]] < 0,
            alpha = Min[alpha, x[[q]]/(x[[q]] - z[[q]])];
            debug["After trying index q=", q, " alpha=", alpha];
            ]; (* if *)
          ]; (* for *)
        debug["alpha=", alpha];
        x = x + alpha (z - x);
        debug["x=", x];
        toBeZeroed = Select[positiveSet, Abs[x[[#]]] < 10^-13 &];
        debug["toBeZeroed=", toBeZeroed];
        zeroed[[toBeZeroed]] = 1;
        x[[toBeZeroed]] = 0;
        (* Duplicated from above *)
        (* Ap = the columns of A indexed by positiveSet *)
        Ap = Transpose[Transpose[A][[positiveSet]]];
        debug["Ap=", MatrixForm[Ap]];
        (* Minimize (Ap . compressedZ - f) by QR decomposition *)
        {Q, R} = QRDecomposition[Ap];
        compressedZ = Inverse[R].Q.f;
        (* Create vector z with 0 in zeroed indices and compressedZ
           entries elsewhere *)
        z = 0 x;
        z[[positiveSet]] = compressedZ;
        debug["z=", z];
        ]; (* end inner while loop *)
      x = z;
      debug["x=", x];
      w = Transpose[A].(f - A.x);
      debug["w=", w];
      ]; (* end outer while loop *)
    Return[x];
    ]; (* end module *)

====================================================================
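For readers who want to experiment without Mathematica, the algorithm above can be sketched in pure Python. This is my own rough translation of the Lawson-Hanson active-set iteration, not the author's code; it solves the least-squares subproblem via the normal equations (a QR solve, as in the Mathematica version, would be numerically preferable), and the names nnls/lstsq are my own.

```python
# A rough pure-Python sketch of the Lawson-Hanson NNLS active-set loop.
# The least-squares subproblem is solved via the normal equations with
# Gauss-Jordan elimination -- fine for small demos, not production use.

def lstsq(A, b):
    """Solve min ||A.x - b||_2 via the normal equations (A^T A) x = A^T b."""
    m, n = len(A), len(A[0])
    # Augmented matrix [A^T A | A^T b], reduced by Gauss-Jordan with
    # partial pivoting.
    M = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
         + [sum(A[k][i] * b[k] for k in range(m))] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                fac = M[r][col] / M[col][col]
                M[r] = [a - fac * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def nnls(A, f, tol=1e-12, max_iter=200):
    """Minimize ||A.x - f||_2 subject to x >= 0 (Lawson-Hanson, ch. 23)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    passive = set()  # indices currently allowed to be positive
    for _ in range(max_iter):
        resid = [f[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        w = [sum(A[i][j] * resid[i] for i in range(m)) for j in range(n)]
        cand = [j for j in range(n) if j not in passive and w[j] > tol]
        if not cand:
            break  # optimality conditions satisfied
        passive.add(max(cand, key=lambda j: w[j]))
        while True:
            P = sorted(passive)
            Ap = [[A[i][j] for j in P] for i in range(m)]
            z = [0.0] * n
            for j, v in zip(P, lstsq(Ap, f)):
                z[j] = v
            if min(z[j] for j in P) > tol:
                x = z
                break
            # Step from x toward z, stopping at the first zero crossing,
            # then move the newly zeroed indices back to the zeroed set.
            alpha = min(x[j] / (x[j] - z[j]) for j in P if z[j] <= tol)
            x = [x[j] + alpha * (z[j] - x[j]) for j in range(n)]
            for j in P:
                if x[j] <= tol:
                    passive.discard(j)
                    x[j] = 0.0
    return x
```

For example, `nnls([[1, 0], [0, 1], [0, 0]], [1, -1, 0])` clamps the negative component to zero and returns `[1.0, 0.0]`, while a problem whose unconstrained solution is already non-negative is returned unchanged.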

And some final comments:

Don't 'reply' to the e-mail address in the header of this post - it is
old and defunct, and not updated because of spammers. I am at
massey.ac.nz, and my account name is m.d.woodhams. (3 year post-doc
appointment, so by the time you read this, that address might also be
out of date.)

I put this code in the public domain - but I'd appreciate it if you
acknowledge my authorship if you use it. I will be writing a
bioinformatics paper about a "modified closest tree" algorithm that
uses this code, and I might (I haven't decided) include a link to the
Mathematica code, which would include the above. If that happens, you
could cite that paper.

This software comes with no warranty etc. etc. Your warranty is that
you have every opportunity to test the code yourself before using it.

If you are reading this long after it was posted, I may have an
improved version by then. Feel free to enquire by e-mail.

Reply mdw1 (2) 10/3/2003 6:53:03 AM


mdw@ccu1.auckland.ac.nz (Michael Woodhams) wrote in message news:<blj6cf$1j6$1@smc.vnet.net>...
> [...]

Michael,

Congratulations on your NNLS code!
I had a naive 10-liner solution (see below) using FindMinimum.
It seems to work most of the time, but your solution is more accurate
and... at least sixty times faster!

NNLSFindMinimum[A_, f_] :=
  Module[{nbx = Length[First[A]], xi, x, axf, xinit, fm},
    xi = Array[x, nbx];
    axf = A.xi^2 - f;
    xinit = PseudoInverse[A].f;
    If[And @@ (# >= 0 & /@ xinit),
      xinit,
      fm = FindMinimum[Evaluate[axf.axf],
          Evaluate[Sequence @@ Transpose[{xi, xinit}]]];
      xi^2 /. fm[[2]]
      ]
    ];

(* a small test : *)
a = Table[Random[],{i,10},{j,20}];
f = Table[Random[],{i,10}];
x = NNLSFindMinimum[a,f]//Timing
% // Chop[#, 10.^-5] &
{1.001 Second,{0.0580067,0.0246345,0,0.0227949,0,0,0,0.386312,0,0,0,0,0,0,0,0,
0,0,0.245039,0.215131}}

x = NNLS[a, f] // Timing
{0.016 Second,{0.058114,0.02338271,0,0.0227961,0,0,0,0.386253,0,0,0,0,0,0,0,0,
0,0,0.246019,0.215755}}
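The trick in NNLSFindMinimum is a reparameterization: optimize over xi unconstrained and report xi^2, so non-negativity is automatic. A toy sketch of the same idea in Python (my own illustration; plain gradient descent stands in for FindMinimum, and the 2x2 example is made up):

```python
# The squared-variable trick: minimize ||A.(y*y) - f||^2 over
# unconstrained y, then report x = y*y >= 0. Gradient descent here is
# a toy stand-in for FindMinimum; the chain rule gives
#   d/dy_j ||A.x - f||^2 = 4 * y_j * (A^T.(A.x - f))_j  with x = y*y.
A = [[1.0, 0.0], [0.0, 1.0]]
f = [1.0, -1.0]
y = [0.5, 0.5]  # arbitrary nonzero starting point
for _ in range(500):
    x = [yj * yj for yj in y]
    r = [sum(A[i][j] * x[j] for j in range(2)) - f[i] for i in range(2)]
    g = [4 * y[j] * sum(A[i][j] * r[i] for i in range(2)) for j in range(2)]
    y = [y[j] - 0.05 * g[j] for j in range(2)]
x = [yj * yj for yj in y]
print(x)  # approaches the NNLS answer [1, 0]
```

(Note that the Mathematica version starts FindMinimum's xi variables at xinit, which are values of x rather than of its square roots; that may be part of why it only works most of the time.)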
---
jcp

Reply poujadej (31) 10/4/2003 6:15:38 AM
comp.soft-sys.math.mathematica

Hello! I have a question concerning a non-linear least square optimization problem using lsqnonlin. Unfortunately, I am not able to optimize integers, although rational number are not a problem. Is there a way to constrain optimization parameters as integers? Example: Consider the optimization criterium as: min sum {FUN(X).^2} a b Then it can be implemented in Matlab using lsqnonlin as: a0 = 0.3; % Rational integer b0 = 20; % INTEGER!!! % Boundary conditions: LB = [0.3; 1]; % lower bound UB = [0.9; 25]; % upper bound param = lsqnonlin(@FUN,[a0;b0],LB,UB,X,...); a = param...