
Nonnegative Least Squares (NNLS) algorithm
A little while ago, I looked through this group and the web for an
implementation of the NNLS algorithm, but all I found was a 1997 post
in this group from somebody else also looking for an implementation,
so I had to do it myself.
The problem: Given a matrix A (usually more rows than columns) and a
vector f, find a vector x such that:
1) All elements of x are nonnegative
2) Subject to constraint (1), the Euclidean norm (2-norm) of the vector
(A.x - f) is minimized.
(Note that the unconstrained problem, finding x to minimize (A.x - f),
is a simple application of QR decomposition.)
The NNLS algorithm is published in chapter 23 of Lawson and Hanson,
"Solving Least Squares Problems" (Prentice-Hall, 1974; republished
SIAM, 1995).
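(For anyone wanting an independent implementation to test against: SciPy ships this same Lawson-Hanson active-set algorithm as scipy.optimize.nnls. A minimal sketch, assuming only NumPy and SciPy, contrasting the unconstrained QR-style solve with the nonnegative-constrained one:)

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(42)
A = rng.standard_normal((10, 4))   # more rows than columns, as above
f = rng.standard_normal(10)

# Unconstrained least squares: entries may be negative.
x_free, *_ = np.linalg.lstsq(A, f, rcond=None)

# NNLS: all entries nonnegative, residual 2-norm minimized subject to that.
x_nnls, rnorm = nnls(A, f)

assert np.all(x_nnls >= 0)
# The constrained residual can never beat the unconstrained one.
assert rnorm >= np.linalg.norm(A @ x_free - f) - 1e-12
```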
Some preliminary comments on the code:
1) It hasn't been thoroughly tested.
2) I only have a few months' experience programming Mathematica (but
lots of programming experience in general), so there are probably style
issues. I am happy to receive constructive criticism.
3) There is some ugliness where a chunk of code appears twice so that I
can use a simple 'while' loop, rather than one that breaks out in the
middle. Arguably breaking out of the middle of the loop would be
better than repeating the code.
4) I've left in a bunch of debugging statements.
5) I've cut-and-pasted this from my notebook, which can paste verbose
forms like "\[LeftDoubleBracket]" instead of "[[" etc.
6) I haven't paid much attention to efficiency: e.g. I multiply by
"Inverse[R]" rather than doing a back-substitution (R is a
triangular matrix).
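(On point 6: since R is triangular, the system R.z = Q.f can be solved by back-substitution instead of forming Inverse[R]; in Mathematica, LinearSolve[R, Q.f] would do this. A quick sketch of the equivalence, shown in Python with SciPy purely for illustration:)

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

rng = np.random.default_rng(0)
Ap = rng.standard_normal((8, 3))
f = rng.standard_normal(8)

# Thin QR: Ap = Q @ R with R upper triangular (3 x 3 here).
Q, R = qr(Ap, mode="economic")

z_inverse = np.linalg.inv(R) @ (Q.T @ f)   # what the post does
z_backsub = solve_triangular(R, Q.T @ f)   # back-substitution: cheaper, more stable

assert np.allclose(z_inverse, z_backsub)
```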
====================================================================
(* Coded by Michael Woodhams, from the algorithm by Lawson and Hanson, *)
(* "Solving Least Squares Problems", 1974 (republished SIAM, 1995). *)

(* No-op stand-in for the debugging statements; set debug = Print to trace. *)
debug[args___] := Null;

bitsToIndices[v_] := Select[Table[i, {i, Length[v]}], v[[#]] == 1 &];

NNLS[A_, f_] := Module[
   {x, zeroed, w, t, Ap, z, q, \[Alpha], zeroedSet, positiveSet,
    toBeZeroed, compressedZ, Q, R},
   (* Use delayed evaluation so that these are recalculated on the fly
      as needed: *)
   zeroedSet := bitsToIndices[zeroed];
   positiveSet := bitsToIndices[1 - zeroed];
   (* Init x to a vector of zeros, same length as a row of A *)
   debug["A=", MatrixForm[A]];
   x = 0 A[[1]];
   debug["x=", x];
   (* Init zeroed to a vector of ones, same length as x *)
   zeroed = 1 - x;
   debug["zeroed=", zeroed];
   w = Transpose[A].(f - A.x);
   debug["w=", w];
   While[zeroedSet != {} && Max[w[[zeroedSet]]] > 0,
    debug["Outer loop starts."];
    (* t = the index of the largest element of w, *)
    (* subject to the constraint that t is zeroed *)
    t = Position[w zeroed, Max[w zeroed], 1, 1][[1, 1]];
    debug["t=", t];
    zeroed[[t]] = 0;
    debug["zeroed=", zeroed];
    (* Ap = the columns of A indexed by positiveSet *)
    Ap = Transpose[Transpose[A][[positiveSet]]];
    debug["Ap=", MatrixForm[Ap]];
    (* Minimize (Ap . compressedZ - f) by QR decomposition *)
    {Q, R} = QRDecomposition[Ap];
    compressedZ = Inverse[R].Q.f;
    (* Create vector z with 0 in zeroed indices and compressedZ
       entries elsewhere *)
    z = 0 x;
    z[[positiveSet]] = compressedZ;
    debug["z=", z];
    While[Min[z] < 0,
     (* There is a wart here: x can have zeros, giving infinities or
        indeterminates. They don't matter, as we ignore those elements
        (not in positiveSet), but they will produce warnings. *)
     debug["Inner loop start"];
     (* Find the smallest x[[q]]/(x[[q]] - z[[q]]) *)
     (* such that: q is not zeroed, z[[q]] < 0 *)
     \[Alpha] = Infinity;
     For[q = 1, q <= Length[x], q++,
      If[zeroed[[q]] == 0 && z[[q]] < 0,
       \[Alpha] = Min[\[Alpha], x[[q]]/(x[[q]] - z[[q]])];
       debug["After trying index q=", q, " \[Alpha]=", \[Alpha]];
       ]; (* if *)
      ]; (* for *)
     debug["\[Alpha]=", \[Alpha]];
     x = x + \[Alpha] (z - x);
     debug["x=", x];
     toBeZeroed = Select[positiveSet, Abs[x[[#]]] < 10^-13 &];
     debug["toBeZeroed=", toBeZeroed];
     zeroed[[toBeZeroed]] = 1;
     x[[toBeZeroed]] = 0;
     (* Duplicated from above *)
     (* Ap = the columns of A indexed by positiveSet *)
     Ap = Transpose[Transpose[A][[positiveSet]]];
     debug["Ap=", MatrixForm[Ap]];
     (* Minimize (Ap . compressedZ - f) by QR decomposition *)
     {Q, R} = QRDecomposition[Ap];
     compressedZ = Inverse[R].Q.f;
     (* Create vector z with 0 in zeroed indices and compressedZ
        entries elsewhere *)
     z = 0 x;
     z[[positiveSet]] = compressedZ;
     debug["z=", z];
     ]; (* end inner while loop *)
    x = z;
    debug["x=", x];
    w = Transpose[A].(f - A.x);
    debug["w=", w];
    ]; (* end outer while loop *)
   Return[x];
   ]; (* end module *)
====================================================================
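(The outer-loop test above is exactly the optimality condition for NNLS: at the solution, w = Transpose[A].(f - A.x) must vanish on the positive components of x and be nonpositive on the zeroed ones. Any candidate answer can be checked against that condition independently of how it was computed; a sketch in Python, using SciPy's nnls just to produce a solution to check:)

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
A = rng.random((12, 5))
f = rng.random(12)

x, _ = nnls(A, f)
w = A.T @ (f - A @ x)  # the dual vector the outer loop inspects

# KKT conditions for nonnegative least squares:
assert np.all(x >= 0)                       # primal feasibility
assert np.all(w <= 1e-8)                    # no zeroed index wants to grow
assert np.all(np.abs(w[x > 1e-12]) < 1e-8)  # w vanishes on the support
```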
And some final comments:
Don't 'reply' to the email address in the header of this post: it is
old and defunct, and not updated because of spammers. I am at
massey.ac.nz, and my account name is m.d.woodhams. (3-year postdoc
appointment, so by the time you read this, that address might also be
out of date.)
I put this code in the public domain, but I'd appreciate it if you
acknowledge my authorship if you use it. I will be writing a
bioinformatics paper about a "modified closest tree" algorithm that uses
this, and I might (not decided about this) give a link to Mathematica
code, which will include the above. If this happens, you could cite
that paper.
This software comes with no warranty etc. etc. Your warranty is that
you have every opportunity to test the code yourself before using it.
If you are reading this long after it was posted, I may have an
improved version by then. Feel free to enquire by email.


mdw1 (2)
10/3/2003 6:53:03 AM
mdw@ccu1.auckland.ac.nz (Michael Woodhams) wrote in message news:<blj6cf$1j6$1@smc.vnet.net>...
> A little while ago, I looked at this group and the web for an
> implementation of the NNLS algorithm, but all I found was a post in
> this group from 1997 from somebody also looking for an implementation,
> so I had to do it myself.
>
> The problem: Given a matrix A (usually more rows than columns) and
> vector f, find vector x such that:
> 1) All elements of x are nonnegative
> 2) subject to constraint (1), the Euclidean norm (2-norm) of vector
> (A.x - f) is minimized.
[...]
Michael,
Congratulations on your NNLS code!
I had a naive 10-line solution (see below) using FindMinimum.
It seems to work most of the time, but your solution is more accurate
and... at least sixty times faster!
NNLSFindMinimum[A_, f_] :=
  Module[{nbx = Length[First[A]], xi, x, axf, xinit, fm},
   xi = Array[x, nbx];
   axf = A.(xi^2) - f;
   xinit = PseudoInverse[A].f;
   If[And @@ (# >= 0 & /@ xinit),
    xinit,
    fm = FindMinimum[Evaluate[axf.axf],
      Evaluate[Sequence @@ Transpose[{xi, xinit}]]];
    xi^2 /. fm[[2]]
    ]
   ];
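(The trick here is the substitution x = xi^2: nonnegativity holds by construction, so the constrained problem becomes an unconstrained smooth minimization, at some cost in accuracy and speed. The same idea sketched in Python, with scipy.optimize.minimize standing in for FindMinimum; the sqrt starting point is my choice, not part of the original code:)

```python
import numpy as np
from scipy.optimize import minimize, nnls

rng = np.random.default_rng(7)
A = rng.random((10, 4))
f = rng.random(10)

def objective(t):
    # Residual of A.(t^2) - f; t is unconstrained, x = t^2 is nonnegative.
    r = A @ t**2 - f
    return r @ r

# Start from the pseudoinverse solution, clipped so the square root is real.
t0 = np.sqrt(np.maximum(np.linalg.pinv(A) @ f, 1e-6))
x_sub = minimize(objective, t0).x ** 2

x_ref, _ = nnls(A, f)
# The substitution method should land near the true NNLS residual.
assert np.all(x_sub >= 0)
assert np.linalg.norm(A @ x_sub - f) <= np.linalg.norm(A @ x_ref - f) + 1e-3
```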
(* a small test: *)
a = Table[Random[], {i, 10}, {j, 20}];
f = Table[Random[], {i, 10}];
x = NNLSFindMinimum[a, f] // Timing
% // Chop[#, 10.^-5] &
{1.001 Second, {0.0580067, 0.0246345, 0, 0.0227949, 0, 0, 0, 0.386312, 0, 0,
  0, 0, 0, 0, 0, 0, 0, 0, 0.245039, 0.215131}}
x = NNLS[a, f] // Timing
{0.016 Second, {0.058114, 0.02338271, 0, 0.0227961, 0, 0, 0, 0.386253, 0, 0,
  0, 0, 0, 0, 0, 0, 0, 0, 0.246019, 0.215755}}

jcp


poujadej (31)
10/4/2003 6:15:38 AM