
#### Least-squares fitting a square pulse

I am trying to fit a square pulse to data in the least-squares sense in Matlab. The parameters that are being optimized are the height, start, and end - x(1), x(2) and x(3) in the program, respectively. However, the least-squares program is only varying the height, for some reason. Why is this occurring? How else could I fit a square pulse using least-squares?
Although in this example the pulse would be easy to fit by inspection, that is not an option for my project.

Code:
%FITTING
xdata=0:0.1:10;
ydata=-.05+.1*rand(1,numel(xdata))+heaviside1(xdata-1)-heaviside1(xdata-4);
x0=[.8 0.9 4];
[x,resnorm,residual,exitflag]=lsqcurvefit(@fit_pulse,x0,xdata,ydata)
fit=x(1).*(heaviside1(xdata-x(2))-heaviside1(xdata-x(3)));
plot(xdata,ydata,'or',xdata,fit,'b')

The program is calling the functions heaviside1 and fit_pulse:
%HEAVISIDE1 FUNCTION
function Y=heaviside1(x)
Y=zeros(size(x));
Y(x>0)=1;
end

%FIT_PULSE FUNCTION
function F=fit_pulse(x,xdata)
F=x(1).*(heaviside1(xdata-x(2))-heaviside1(xdata-x(3)));
end

Thanks

1/30/2012 3:01:11 PM
comp.soft-sys.matlab

"Riku" wrote in message <jg6bbn$d4s$1@newscl01ah.mathworks.com>...
> I am trying to fit a square pulse to data in the least-squares sense in Matlab. The parameters that are being optimized are the height, start, and end - x(1), x(2) and x(3) in the program, respectively. However, the least-squares program is only varying the height, for some reason. Why is this occurring?
============

Probably because the gradient of the fitting function is zero with respect to
x(2) and x(3). If your current start and end lie in between your xdata sampling locations, tweaking x(2) and x(3) by small amounts produces no change in the value of fit_pulse.
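
A quick way to see this (my sketch, using an anonymous-function version of the heaviside1 from the post, on noise-free data for clarity):

```matlab
H = @(x) double(x > 0);                 % same behaviour as heaviside1
xdata = 0:0.1:10;
ydata = H(xdata-1) - H(xdata-4);        % noise-free pulse, for clarity
f = @(p) p(1).*(H(xdata-p(2)) - H(xdata-p(3)));
r1 = norm(f([1 0.93 4]) - ydata);
r2 = norm(f([1 0.97 4]) - ydata);       % x(2) moved by 0.04: no sample crossed
isequal(r1, r2)                         % true -- the residual did not change
```

Since lsqcurvefit estimates the Jacobian by finite differences, a perturbation of x(2) or x(3) that crosses no sample produces an all-zero column, so those parameters never move.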

>How else could I fit a square pulse using least-squares?
============

Why does it have to be least squares? Can't you use some educated thresholding approach like:

threshold=max(ydata(:))/2;
idx=ydata>=threshold;
height=mean(ydata(idx));
area=sum(ydata)*(xdata(2)-xdata(1));
width=area/height;
centroid=dot(xdata,ydata)/sum(ydata);

x(1)=height;
x(2)=centroid-width/2;
x(3)=centroid+width/2;

mattjacREMOVE (3196)
1/30/2012 4:11:10 PM
 > Probably because the gradient of the fitting function is zero with respect to
> x(2) and x(3). If your current start and end lie in between your xdata sampling locations, tweaking x(2) and x(3) by small amounts produces no change in the value of fit_pulse.

That makes sense. I looked in Help for lsqcurvefit, but couldn't find anything about changing the step size of the parameters of the fitted equation.

> Why does it have to be least squares? Can't you use some educated thresholding approach like,
>
> threshold=max(ydata(:))/2;
> idx=ydata>=threshold;
> height=mean(ydata(idx));
> area=sum(ydata)*(xdata(2)-xdata(1));
> width=area/height;
> centroid=dot(xdata,ydata)/sum(ydata);
>
> x(1)=height;
> x(2)=centroid-width/2;
> x(3)=centroid+width/2;

That works very well. The reasons I can think of why it needs to be least squares: I've been instructed to use that approach (university project); it may be more accurate for noisier data; and it should work with the more complicated fitted functions that I will eventually need to use.
Thanks

1/30/2012 5:49:09 PM
"Riku" wrote in message <jg6l6l$isa$1@newscl01ah.mathworks.com>...
>  > Probably because the gradient of the fitting function is zero with respect to
> > x(2) and x(3). If your current start and end lie in between your xdata sampling locations, tweaking x(2) and x(3) by small amounts produces no change in the value of fit_pulse.
>
> That makes sense. I looked in Help for lsqcurvefit, but couldn't find anything about changing the step size of the parameters of the fitted equation.
================

That wouldn't be a good approach, anyway. LSQCURVEFIT just isn't designed to handle discontinuous/non-differentiable functions.
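
If it really must be least squares, one gradient-free route (my sketch, not something suggested in this thread) exploits the fact that the model only changes when an edge crosses a sample: enumerate edge pairs on the sample grid and solve for the height in closed form (the mean of the samples inside the pulse). It assumes xdata and ydata as defined in the original post:

```matlab
% Exhaustive least-squares fit over edges restricted to the sample grid.
best = inf;
n = numel(xdata);
for i = 1:n-1
    for k = i+1:n
        in = false(1,n);
        in(i+1:k) = true;                 % model is 1 on (xdata(i), xdata(k)]
        h = mean(ydata(in));              % closed-form optimal height
        r = sum((ydata - h*in).^2);       % residual sum of squares
        if r < best
            best = r;
            x = [h, xdata(i), xdata(k)];  % [height, start, end]
        end
    end
end
```

This is the exact discrete least-squares solution over that grid of edge candidates, at the cost of an O(n^3)-ish brute-force loop, which is negligible for a few hundred samples.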

If the solution I already gave you is insufficient, you should replace your definition of the heaviside function with something that is continuous and differentiable, so that your fitting function will likewise be continuous and differentiable. Perhaps the following (note the strict inequality, so the two terms don't both contribute at x=.1):

function Y=heaviside1(x)

Y=(x>.1) + (x>=0 & x<=.1).*spline([0,.1],[0 0 1 0],x);
end

mattjacREMOVE (3196)
1/30/2012 7:23:09 PM
"Matt J" wrote in message <jg6qmt$62c$1@newscl01ah.mathworks.com>...
>
> If the solution I already gave you is insufficient, you should replace your definition of the heaviside function with something that is continuous and differentiable, so that your fitting function will likewise be continuous and differentiable. Perhaps the following
>
>
> function Y=heaviside1(x)
>
> Y=(x>=.1) + (x>=0 & x<=.1).*spline([0,.1],[0 0 1 0],x);

Forget it... That won't do much.
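
For the record, the likely reason it won't do much: a 0.1-wide ramp spans at most one point of the 0.1-spaced grid, so the gradient is still zero almost everywhere. A variant with a transition spanning several samples (the logistic step and the width w below are my assumptions, not from the thread) does give lsqcurvefit a usable gradient:

```matlab
S = @(x,w) 1./(1 + exp(-x./w));        % smooth step; tends to a unit step as w -> 0
w = 0.3;                               % ~3 sample spacings (assumed)
fit_smooth = @(p,xd) p(1).*(S(xd-p(2),w) - S(xd-p(3),w));
% x = lsqcurvefit(fit_smooth, [.8 0.9 4], xdata, ydata);  % edges now move
```

One could also shrink w over a few refits to sharpen the pulse once the edges are roughly located.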

mattjacREMOVE (3196)
1/30/2012 7:29:10 PM
