AR, Burg methods worse resolution than FFT

Hi, I've been told that autoregressive (AR) modeling offers better frequency resolution than classic periodogram methods.

So I took an example from the MATLAB manual and compared it with the periodogram. The results showed that the periodogram had better frequency resolution than AR via the Burg method.

I also tried pyulear, pcov, and pmcov; their spectra look almost identical. I have two questions.
1. Why does my periodogram have better frequency resolution than the AR-based methods?
2. Have you seen such very similar spectra in your own data analysis?

Thank you all,
Cheers
----------Code----------------------
randn('state',0)
fs = 1000;           % Sampling frequency
t = (0:fs)/fs;       % One second worth of samples
A = [1 2];           % Sinusoid amplitudes
f = [150;140];       % Sinusoid frequencies
xn = A*sin(2*pi*f*t) + 0.1*randn(size(t));
% [P1,f] = pwelch(xn,hamming(256),128,1024,fs);
[P1,f] = periodogram(xn,rectwin(length(xn)),1024,fs);
[P2,f] = pburg(xn,14,1024,fs);
plot(f,10*log10(P1),'r-',f,10*log10(P2),'b--'); grid
legend('Periodogram','AR-Burg')
ylabel('PSD Estimates (dB/Hz)');
xlabel('Frequency (Hz)');
Dean
8/7/2010 10:56:03 PM


On Aug 7, 3:56 pm, "Dean " <jiangwei0...@gmail.com> wrote:
> Hi, I've been told Autoregressive Process (AR) offers better frequency resolution than classic Periodogram methods.
> ...

Perhaps you have rediscovered a quaint historical artifact: a number of
algorithms commonly labeled "high resolution" should have been labeled
"high SNR only".

Dale B. Dalrymple
dbd
8/8/2010 12:37:13 AM
dbd <dbd@ieee.org> wrote in message <c855b29e-c1e0-447d-a6d4-8a6ac14dbffe@m35g2000prn.googlegroups.com>...
> On Aug 7, 3:56 pm, "Dean " <jiangwei0...@gmail.com> wrote:
> > Hi, I've been told Autoregressive Process (AR) offers better frequency resolution than classic Periodogram methods.
> > ...
> 
> Perhaps you have rediscovered a quaint historical artifact that a
> number of algorithms commonly labeled "high resolution" should have
> been labeled "high SNR only".
> 
> Dale B. Dalrymple

Hi Dale,

   I also found that these AR methods (Burg, covariance, Yule-Walker, modified covariance) all produce almost visually identical spectra, even with noise added. Is that something the textbooks won't tell me?

Cheers,

Dean
Dean
8/8/2010 3:14:06 PM
"Dean " <jiangwei0515@gmail.com> wrote in message <i3koa2$2nv$1@fred.mathworks.com>...
> Hi, I've been told Autoregressive Process (AR) offers better frequency resolution than classic Periodogram methods.
> ...

Hi Dean. Typically, resolution improvements with AR methods for sinusoids are seen with short data records. So, for example:

reset(RandStream.getDefaultStream);
fs = 1000;               
t = (0:fs/15)/fs;            
A = [2 1];               
f = [140;150];            
xn = A*sin(2*pi*f*t) + 0.1*randn(size(t));
hburg = spectrum.burg(24);
psd(hburg,xn,'Fs',fs);
hper = spectrum.periodogram;
figure;
psd(hper,xn,'Fs',fs);

Note how the AR estimate resolves the two peaks, but the periodogram smears them together, because they are closer than the 14.92 Hz resolution of the periodogram in this case.
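If you want to see the mechanics outside the toolbox, here is a rough NumPy sketch of the same experiment: a hand-written Burg recursion (illustrative only, not what pburg does internally) fit to a 67-sample record, with the AR spectrum evaluated from the estimated polynomial:

```python
import numpy as np

def burg_ar(x, order):
    """Burg's method: fit AR coefficients a (a[0] = 1) and prediction-error power e."""
    x = np.asarray(x, float)
    a = np.ones(1)
    f, b = x[1:].copy(), x[:-1].copy()    # forward/backward prediction errors
    e = np.dot(x, x) / len(x)
    for _ in range(order):
        k = -2.0 * np.dot(b, f) / (np.dot(f, f) + np.dot(b, b))  # reflection coefficient
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]               # Levinson order update
        f, b = (f + k * b)[1:], (b + k * f)[:-1]
        e *= 1.0 - k * k
    return a, e

def ar_psd(a, e, fgrid, fs):
    """Evaluate the AR spectrum e / |A(exp(j*2*pi*f/fs))|^2 on fgrid."""
    z = np.exp(-2j * np.pi * np.outer(fgrid / fs, np.arange(len(a))))
    return e / np.abs(z @ a) ** 2

rng = np.random.default_rng(0)
fs, n = 1000.0, 67                        # short record: Fourier resolution ~ 15 Hz
t = np.arange(n) / fs
xn = 2*np.sin(2*np.pi*140*t) + np.sin(2*np.pi*150*t) + 0.1*rng.standard_normal(n)

a, e = burg_ar(xn, 24)
fgrid = np.linspace(0.0, 500.0, 2001)
P = ar_psd(a, e, fgrid, fs)
# largest AR-spectrum peak on each side of 145 Hz
lo = fgrid[np.argmax(np.where((fgrid >= 125) & (fgrid < 145), P, 0.0))]
hi = fgrid[np.argmax(np.where((fgrid >= 145) & (fgrid <= 165), P, 0.0))]
```

The two AR peaks land near 140 and 150 Hz even though the tones are closer than the record's Fourier resolution; that is the resolution gain in a nutshell.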

Don't get me wrong, I'm a big fan of nonparametric methods and have used them much more frequently than parametric methods (which have their own issues), but the resolution superiority is usually brought up in the context of short data records. Of course, if you overmodel the data using parametric methods, you can get line splitting, where one true sinusoid gets split into two separate ones. Kinda the opposite problem :)

Also, there are the so-called superresolution methods, which begin with a signal model that is a superposition of complex exponentials plus noise. These methods decompose the autocorrelation matrix into a signal subspace and a noise subspace. They don't yield a PSD estimate, so they're used when you want to estimate frequencies; see spectrum.music and spectrum.eigenvector for examples in MATLAB.
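To make the subspace idea concrete, here is a minimal NumPy sketch of a MUSIC-style pseudospectrum (illustrative only, not what spectrum.music does internally; the snapshot length M is an arbitrary choice, and I've used two comfortably separated tones):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, N, M = 1000.0, 67, 20
t = np.arange(N) / fs
x = 2*np.sin(2*np.pi*100*t) + np.sin(2*np.pi*220*t) + 0.1*rng.standard_normal(N)

# sample correlation matrix from overlapping length-M snapshots
snaps = np.array([x[i:i + M] for i in range(N - M + 1)])
R = snaps.T @ snaps / len(snaps)

w, V = np.linalg.eigh(R)            # eigenvalues in ascending order
En = V[:, :M - 4]                   # noise subspace: 2 real tones = 4 exponentials

fgrid = np.linspace(0.0, 500.0, 2001)
steer = np.exp(2j * np.pi * np.outer(fgrid / fs, np.arange(M)))  # steering vectors a(f)
P_music = 1.0 / np.sum(np.abs(steer.conj() @ En) ** 2, axis=1)   # pseudospectrum

f1 = fgrid[np.argmax(np.where((fgrid >= 80) & (fgrid <= 120), P_music, 0.0))]
f2 = fgrid[np.argmax(np.where((fgrid >= 200) & (fgrid <= 240), P_music, 0.0))]
```

The pseudospectrum peaks mark the tone frequencies; the peak heights are not a PSD, which is exactly why these methods are used for frequency estimation rather than power estimation.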

Wayne
Wayne
8/8/2010 3:32:07 PM
On 8 Aug, 00:56, "Dean " <jiangwei0...@gmail.com> wrote:
> Hi, I've been told Autoregressive Process (AR) offers better frequency resolution than classic Periodogram methods.

Don't believe what people tell you. There are plenty of people around
who will tell you all sorts of stuff that, when challenged, they turn
out to know nothing whatsoever about.

The AR methods don't provide frequency resolution. They characterize
the covariance structure of the signal in terms of an AR model.
The good part about this is that you can get good characterizations
with small amounts of data (which a novice or amateur might interpret
as 'better frequency resolution'). The downside is that the
characteristics might not be easily interpreted. The really bad side
is that if the data do not comply with the assumed model (AR in
this case), the computed parameters might not make sense at all.

Rune
Rune
8/8/2010 5:29:37 PM
Hi All, 

Thank you for all your comments; really useful and encouraging.

I feel I also like the classic non-parametric methods better than the AR methods.

To my understanding, the model order P should be selected as 2 * (the number of sinusoids) in the ideal case. However, I get the error message shown below.

The signal is synthesized without noise added. I don't understand what makes MATLAB think I should use a smaller P. Of course, I will never see three peaks with P = 4.

Cheers,

Dean

----------Error Message ---------------------------------
??? Error using ==> arspectra at 87
The variance estimate of the white noise input to the AR model is negative. Try to lower the order of the AR model.
----------END of Error Message ----------------------



-------------Code-------------------------------------------
reset(RandStream.getDefaultStream);
fs = 1000; 
t = (0:fs/15)/fs; 
A = [2 1 1]; 
f = [140;150;155]; 
xn = A*sin(2*pi*f*t);
[Pxx_cov,f]=pcov(xn,6,2048,fs);
Dean
8/8/2010 5:50:08 PM
"Dean " <jiangwei0515@gmail.com> wrote in message <i3mqog$j8q$1@fred.mathworks.com>...
> Hi All, 
> ...

Hi Dean, fitting an AR(p) model explicitly models the output, y(n), as
y(n) - a(1)y(n-1) - a(2)y(n-2) - ... - a(p)y(n-p) = e(n)
where e(n) is white noise. There should be a white-noise input.
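That is also why your noiseless example dies. I can't speak for exactly what arspectra computes at line 87, but the degeneracy is easy to reproduce: with no noise, an exact order-2K predictor exists, so the prediction-error variance is numerically zero, and a correlation-based variance estimate can then land slightly negative from roundoff. A NumPy sketch of the least-squares "covariance method" step (cov_ar here is my own stand-in for illustration, not MATLAB's pcov):

```python
import numpy as np

def cov_ar(x, p):
    """Least-squares forward predictor of order p (the 'covariance method').
    Returns the AR polynomial [1, -coef...] and the prediction-error variance."""
    x = np.asarray(x, float)
    n = len(x)
    X = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])  # lags 1..p
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return np.concatenate([[1.0], -coef]), np.dot(resid, resid) / len(y)

fs = 1000.0
t = np.arange(67) / fs
clean = 2*np.sin(2*np.pi*140*t) + np.sin(2*np.pi*150*t) + np.sin(2*np.pi*155*t)

_, var_clean = cov_ar(clean, 6)      # exact order-6 predictor exists: variance ~ 0
rng = np.random.default_rng(0)
noisy = clean + 1e-3 * rng.standard_normal(len(t))
_, var_noisy = cov_ar(noisy, 6)      # variance ~ noise power, well behaved
```

Adding even a tiny amount of noise moves the variance safely away from zero, which is the usual fix for that error message.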

To be fair to AR spectral methods, they do have a lot of important uses. There are applications where the physical nature of the signal-generation mechanism makes an AR model a good a priori choice. If you look at human speech analysis, you find a number of applications where AR models are quite useful. It just so happens that for the things I've used spectral analysis for, nonparametric methods have been more appropriate.

Another thing you might find interesting about AR processes, given your other posts: AR processes have nice closed-form expressions for their PSDs. Therefore, you can generate a realization of an AR process, determine its true PSD, and then test spectral estimators against it. Note in the following example the use of an AR(4) process to illustrate the problem of bias in the periodogram:


reset(RandStream.getDefaultStream);
x = randn(1024,1);
%AR(4) Process
A = [1 -2.7670 3.8106 -2.65335 0.9238];
y = filter(1,A,x);
sigma = 1;
f = 0:(1/length(x)):0.5;
tmp = zeros(length(2:length(A)),length(f));
for k = 2:length(A)
    tmp(k-1,:) = A(k)*exp(-1j*(k-1)*2*pi*f);
end
tmp = sum(tmp);
denom = abs(1+tmp).^2;
psdtrue = sigma^2./denom;
h = spectrum.periodogram;
psdest = psd(h,y,'Fs',1,'NFFT',length(x));
figure;
plot(f,10*log10(psdest.Data)); hold on;
plot(f,10*log10(psdtrue),'r','linewidth',2);

See how the periodogram estimate blows up above about 0.2 Hz? That is a demonstration of the known bias effect in the periodogram. Note how the use of a Hamming window alleviates that problem:

hwin = spectrum.periodogram('Hamming');
psdwin = psd(hwin,y,'Fs',1,'NFFT',length(x));
figure;
plot(f,10*log10(psdtrue),'r','linewidth',2);
hold on;
plot(f,10*log10(psdwin.Data));


Wayne
Wayne
8/8/2010 8:07:07 PM
On 8 Aug, 19:50, "Dean " <jiangwei0...@gmail.com> wrote:
> Hi All,
>
> Thank you for all your comments, really useful and encouraging.
>
> I feel I also like classic non-parametric ones better than AR methods.

Agreed. Better than that: there exists a scientific argument to
support that subjective feeling. See below.

> To my understanding, the model order P should be selected equivalent to 2*number of sinusoidal assuming in an ideal case.

Yep indeed!

Congratulations! You are one of the extremely few who are able
to figure that one out on their own. Seriously, for once none of
my irony or sarcasm; merely sheer joy and happiness that somebody
is actually able to figure this one out.

Yes, the model order is the Catch-22 of DSP. The feasible range of
model orders depends on the number of data points available (the
exact details depend on the method in use). For simplicity, say
P = N/2, where N is the number of data points and P is the maximum
allowable order. Denote the *true* number of sinusoids as D.
For the AR-type predictors to work, D < P. If one uses the Levinson
recursion, one can estimate D on the fly, using some sort of
order estimator.

If one contemplates this, one finds that a necessary criterion
for AR-type methods to work is that the relation

D < P = N/2

holds.

In other words, the designer/analyst who configures the AR-type
methods cannot choose N arbitrarily, but must know in advance what
scenario he is working in. Feed the method too few data points and
one ends up with

D >= N/2

and the method unconditionally breaks down.

Just try it. Use N = 12 data points, no noise, and feed the AR
method first data from an AR(1) process, then an AR(2), then an
AR(3), and so on. With the 'usual' model, you will be able to
recover the AR parameters (possibly with errors due to estimator
bias) for models up to and including AR(5). From AR(6) and beyond,
the computations need not make any sense at all.

All this forms the argument in support of non-parametric methods:
they might not produce the best results, but they have no such
Achilles heel either. The non-parametric methods work where the
parametric ones break down.
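The counting argument is easy to check numerically. A NumPy sketch of the covariance-method bookkeeping (the helper cov_matrix is mine, purely for illustration): with N = 12 noiseless samples and D = 2 real sinusoids, order p = 4 <= N/2 gives a well-posed fit whose predictor roots recover the frequencies, while p = 8 leaves only N - p = 4 equations for 8 unknowns:

```python
import numpy as np

fs, N = 1000.0, 12
t = np.arange(N) / fs
x2 = np.sin(2*np.pi*100*t) + 0.5*np.sin(2*np.pi*220*t)   # D = 2 sinusoids, no noise

def cov_matrix(x, p):
    """Forward-prediction design matrix (lags 1..p) and targets, covariance method."""
    n = len(x)
    return np.column_stack([x[p - k:n - k] for k in range(1, p + 1)]), x[p:]

# D = 2 needs p >= 2D = 4, and 4 <= N/2 = 6, so the fit is well posed:
X, y = cov_matrix(x2, 4)                   # 8 equations, 4 unknowns
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
a = np.concatenate([[1.0], -coef])
# predictor roots sit at the tone frequencies (each as a conjugate pair)
freqs = np.sort(np.abs(np.angle(np.roots(a))) * fs / (2 * np.pi))

# D = 4 would need p >= 8 > N/2 = 6: only N - p = 4 equations for 8 unknowns
X8, y8 = cov_matrix(x2, 8)                 # rank-deficient; any 'fit' is meaningless
```

The order-4 fit pins the roots at 100 and 220 Hz; the order-8 system is underdetermined before you even start, which is the breakdown described above.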

Rune
Rune
8/9/2010 8:07:01 AM
On Aug 8, 10:50 am, "Dean " <jiangwei0...@gmail.com> wrote:
> Hi All,
>
> Thank you for all your comments, really useful and encouraging.
>
> I feel I also like classic non-parametric ones better than AR methods.
>
> ...
> Dean

The classical non-parametric method for tones is really a two-part
process: first pick the peaks of the periodogram, then apply an
interpolation method to the DFT coefficients about the peaks. See:

D. C. Rife and R. R. Boorstyn, "Single tone parameter estimation
from discrete-time observations," IEEE Trans. Inform. Theory, vol.
IT-20, pp. 591-598, 1974.

D. C. Rife and R. R. Boorstyn, "Multiple tone parameter estimation
from discrete-time observations," Bell Syst. Tech. J., vol. 55, pp.
1389-1410, 1976.
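As a toy version of the two-part process (not the Rife-Boorstyn estimator itself, just the common quadratic-interpolation shortcut): pick the peak periodogram bin, then fit a parabola through the log magnitudes of the three bins around it:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, N = 1000.0, 256
t = np.arange(N) / fs
f0 = 123.4                                  # true tone frequency, off-bin on purpose
x = np.sin(2*np.pi*f0*t) + 0.01*rng.standard_normal(N)

X = np.abs(np.fft.rfft(x * np.hanning(N)))  # windowed DFT magnitudes
k = int(np.argmax(X))                       # part 1: coarse estimate, the peak bin
f_coarse = k * fs / N                       # quantized to the fs/N ~ 3.9 Hz grid

# part 2: parabolic interpolation on the log magnitudes of the three bins at the peak
la, lb, lc = np.log(X[k - 1]), np.log(X[k]), np.log(X[k + 1])
delta = 0.5 * (la - lc) / (la - 2*lb + lc)  # fractional-bin offset in (-0.5, 0.5)
f_est = (k + delta) * fs / N
```

The interpolated estimate lands far closer to the true frequency than the raw bin spacing allows, which is the whole point of the second stage.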

The stationary tone case has been frequently presented in the IEEE
Trans on Instrumentation and Measurement:

http://wwwtw.vub.ac.be/elec/Papers%20on%20web/Papers/JohanSchoukens/IM92Schoukens-The%20Interpolated.pdf

For the non-stationary case, one of the approaches can be found under
the label "frequency reassignment", for example:

http://perso.ens-lyon.fr/patrick.flandrin/0065-Ch05.pdf

Anyone who limits their comparison of non-parametric techniques to
peak-picking the periodogram is either uninformed or intentionally
misleading. You can find both in the literature.

Dale B. Dalrymple
dbd
8/9/2010 6:00:17 PM