
Windowed sinc


```  When making a filter for a time domain interpolation of a signal using
a windowed sinc kernel, what should be the right way of aligning the
window towards sinc?

I can think of three possibilities:

1) The center of the window could be set at 0.
2) The center of the window could be set to the value of the time
domain shift of the interpolator.
  3) The center of the window could be aligned to the "center of mass"
of the set of sinc coefficients.

What is the best option ?

DSP and Mixed Signal Design Consultant
http://www.abvolt.com

```
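The three alignment options can be made concrete with a short sketch. Nothing below is from the thread itself: the Hann window, the 8-tap length, and every name are assumptions for illustration.

```python
# Hypothetical sketch of the three window/sinc alignments (window choice,
# tap count, and names are assumptions, not from the thread).
import math

def sinc(x):
    # Normalized sinc: sin(pi x) / (pi x), with sinc(0) = 1.
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def hann(x, half_width):
    # Hann window centered at 0; zero for |x| >= half_width.
    if abs(x) >= half_width:
        return 0.0
    return 0.5 + 0.5 * math.cos(math.pi * x / half_width)

def taps(d, n=8, center=None):
    """Windowed-sinc taps for a fractional shift d in [0, 1).

    center = 0.0      -> option (1): window fixed at the time origin
    center = None (d) -> option (2): window follows the interpolator's shift
    center = c.o.m.   -> option (3): window at the sinc coefficients' center of mass
    """
    ks = range(-(n // 2 - 1), n // 2 + 1)   # n/2 input samples on each side of d
    c = d if center is None else center
    return [sinc(k - d) * hann(k - c, n / 2.0) for k in ks]

def center_of_mass(d, n=8):
    # Option (3): center of mass of the magnitudes of the raw sinc coefficients.
    ks = range(-(n // 2 - 1), n // 2 + 1)
    w = [abs(sinc(k - d)) for k in ks]
    return sum(k * wk for k, wk in zip(ks, w)) / sum(w)

print(taps(0.3, center=0.0))                   # option (1)
print(taps(0.3))                               # option (2)
print(taps(0.3, center=center_of_mass(0.3)))   # option (3)
```

Note that at d = 0 all three centerings coincide and the taps collapse to a unit impulse; the options only diverge for fractional shifts.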

```On Sep 2, 7:55 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
>   When making a filter for a time domain interpolation of a signal using
> a windowed sinc kernel, what should be the right way of aligning the
> window towards sinc?
>
> I can think of three possibilities:
>
>   1) The center of the window could be set at 0.
>   2) The center of the window could be set to the value of the time
> domain shift of the interpolator.
>   3) The center of the window could be aligned to the "center of mass"
> of the set of sinc coefficients.
>
> What is the best option ?
>
> DSP and Mixed Signal Design Consultant
> http://www.abvolt.com

I would align the center of the window with the center of the sinc
then sample as necessary.

Dale B. Dalrymple
```
Reply dbd1 (1034) 9/3/2010 3:19:55 AM

```On 9/2/2010 7:55 PM, Vladimir Vassilevsky wrote:
>
> When making a filter for a time domain interpolation of a signal using a
> windowed sinc kernel, what should be the right way of aligning the
> window towards sinc?
>
> I can think of three possibilities:
>
> 1) The center of the window could be set at 0.
> 2) The center of the window could be set to the value of the time domain
> shift of the interpolator.
> 3) The center of the window could be aligned to the "center of mass" of
> the set of sinc coefficients.
>
> What is the best option ?
>
> DSP and Mixed Signal Design Consultant
> http://www.abvolt.com
>
>

Not sure I understand the question - although I probably do well enough.

I get that you want to use a windowed sinc to interpolate a signal in time.

What I don't get is "aligning the window towards sinc"... ??

Also, I presume that you have a windowed sinc that is sampled at a rate
higher than the time signal sequence.  Thus it behaves as an
interpolator when convolved.

The original "window" is a perfect lowpass filter ("brick wall") and,
thus, the windowing function serves to smooth the edges of the brick
wall, maybe a lot.  So the window and the lowpass filter functions are
centered at f=0.  I don't think it's an option.

Help me understand better and I'll try to do better as well.

Fred
```

```On Thu, 2 Sep 2010 20:19:55 -0700 (PDT), dbd <dbd@ieee.org> wrote:

>On Sep 2, 7:55 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
>>   When making a filter for a time domain interpolation of a signal using
>> a windowed sinc kernel, what should be the right way of aligning the
>> window towards sinc?
>>
>> I can think of three possibilities:
>>
>>   1) The center of the window could be set at 0.
>>   2) The center of the window could be set to the value of the time
>> domain shift of the interpolator.
>>   3) The center of the window could be aligned to the "center of mass"
>> of the set of sinc coefficients.
>>
>> What is the best option ?
>>
>> DSP and Mixed Signal Design Consultant
>> http://www.abvolt.com
>
>I would align the center of the window with the center of the sinc
>then sample as necessary.
>
>Dale B. Dalrymple

I think I agree with Dale, but there's an easy experiment to sort it
out.

Since the effect of the multiplication of the window in the time
domain is convolution in the frequency domain, in order to assess the
effect of the positions of the window just take the FFT of the
combinations you're interested in.   Intuitively from this perspective
the window can easily be applied centered over the "window" of
non-zero sinc samples.

It's then easy to move it around and see what the effects are in the
frequency domain.

Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.abineau.com
```
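Eric's experiment can be sketched directly; for a handful of taps a naive DTFT does the job of the FFT. The kernel below (a 16-tap windowed sinc for a 0.3-sample shift, Hann window) and every name in it are assumptions for illustration.

```python
# Sketch of the experiment Eric suggests: compare magnitude responses of a
# windowed-sinc fractional-delay kernel with the window centered on the sinc
# peak vs. displaced by one sample. All parameters are assumptions.
import cmath
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def hann(x, half_width):
    if abs(x) >= half_width:
        return 0.0
    return 0.5 + 0.5 * math.cos(math.pi * x / half_width)

def dtft_mag(h, f):
    # |H(e^{j 2 pi f})| for taps h, with f in cycles per sample.
    return abs(sum(c * cmath.exp(-2j * math.pi * f * k) for k, c in enumerate(h)))

d = 0.3                                  # fractional shift of the interpolator
ks = range(-7, 9)                        # 16 taps, 8 on each side of d
on_peak = [sinc(k - d) * hann(k - d, 8.0) for k in ks]          # window at sinc peak
displaced = [sinc(k - d) * hann(k - d - 1.0, 8.0) for k in ks]  # window off by 1

# Passband flatness: how far |H(f)| strays from 1 below the band edge.
band = [i / 50.0 for i in range(21)]     # f = 0.00 .. 0.40 cycles/sample
err = lambda h: max(abs(dtft_mag(h, f) - 1.0) for f in band)
print("window on sinc peak:", err(on_peak))
print("window displaced:   ", err(displaced))
```

Moving the window around and re-running the comparison is then a one-line change, which is the point of the experiment.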
Reply eric.jacobsen (2391) 9/3/2010 3:55:17 AM

```On Sep 2, 10:55 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
>   When making a filter for a time domain interpolation of a signal using
> a windowed sinc kernel, what should be the right way of aligning the
> window towards sinc?
>
> I can think of three possibilities:
>
>   1) The center of the window could be set at 0.
>   2) The center of the window could be set to the value of the time
> domain shift of the interpolator.
>   3) The center of the window could be aligned to the "center of mass"
> of the set of sinc coefficients.
>
> What is the best option ?

i'm sorta with Fred.

Vlad, is it reasonable to expect that interpolating the time-reversed
signal should result in the same as what you get when interpolating
the original signal, except time reversed?

i don't get why any interpolation wouldn't be symmetrical and constant
delay (w.r.t. frequency), unless maybe you wanted less delay for some
frequencies.  but i don't really get the unsymmetrical interpolater
response.

r b-j
```

```
dbd wrote:
> On Sep 2, 7:55 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
>
>>  When making a filter for a time domain interpolation of a signal using
>>a windowed sinc kernel, what should be the right way of aligning the
>>window towards sinc?
>>
>>I can think of three possibilities:
>>
>>  1) The center of the window could be set at 0.
>>  2) The center of the window could be set to the value of the time
>>domain shift of the interpolator.
>>  3) The center of the window could be aligned to the "center of mass"
>>of the set of sinc coefficients.
>>
>>What is the best option ?
>>
> I would align the center of the window with the center of the sinc
> then sample as necessary.
>
> Dale B. Dalrymple

I.e. option (2). That was my thought, too; however there is a
subtlety. Let's say we have a signal sampled at the points -3, -2, -1, 0,
1, 2, 3 and we need to get the interpolated value at the point 0.5. So
we calculate the coefficients as Window(x) sinc(x) centered at 0.5. We
have 3 filter coefficients to the right of 0.5, and 4 coefficients to
the left of 0.5. I can either make the window of length 6 and drop the
extra coefficient on the left, or make the window of length 7, which is
not going to be symmetric. When working with windows of length ~ 10,
the +/- one coefficient could make a substantial difference.

VLV
```
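The subtlety above is easy to check numerically. The Hann window in this sketch is an assumed choice, not from the thread; it happens to dissolve the dilemma, because a window that is zero at its edge kills the disputed seventh tap anyway.

```python
# Numeric check of VLV's 6-vs-7 tap dilemma: samples at -3..3, interpolation
# point 0.5. The Hann window is an assumed choice, not from the thread.
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def hann(x, half_width):
    if abs(x) >= half_width:
        return 0.0
    return 0.5 + 0.5 * math.cos(math.pi * x / half_width)

samples = list(range(-3, 4))            # -3, -2, -1, 0, 1, 2, 3
t = 0.5                                  # interpolation point
left = [k for k in samples if k < t]
right = [k for k in samples if k > t]
print(len(left), "left,", len(right), "right")   # the 4-vs-3 imbalance

# A length-7 window centered at t spans t - 3.5 .. t + 3.5, so the farthest
# sample (k = -3, at distance 3.5) sits exactly on the window's zero edge:
h = {k: sinc(k - t) * hann(k - t, 3.5) for k in samples}
print({k: round(v, 4) for k, v in h.items()})
```

With this (assumed) zero-edged window the length-6 and length-7 choices coincide, since the disputed tap is zero; a window that is nonzero at its endpoints, such as a Kaiser, would leave the dilemma intact.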

```Fred Marshall  <fmarshall_xremove_the_xs@xacm.org> wrote:

>On 9/2/2010 7:55 PM, Vladimir Vassilevsky wrote:

>> When making a filter for a time domain interpolation of a signal using a
>> windowed sinc kernel, what should be the right way of aligning the
>> window towards sinc?

>> I can think of three possibilities:

>> 1) The center of the window could be set at 0.
>> 2) The center of the window could be set to the value of the time domain
>> shift of the interpolator.
>> 3) The center of the window could be aligned to the "center of mass" of
>> the set of sinc coefficients.
>>
>> What is the best option ?

>Not sure I understand the question - although I probably do well

I am opposite; I understand the question, but do not know the answer
and would have to simulate it.

To me, (1) only makes sense if the effect of the windowing is to
interpolate from an odd number of points.

(2) is how I would initially choose to do it, and (3) is interesting
and I would want to see the results.

Steve
```

```On Sep 2, 9:18 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
> ...
> there is a
> subtlety. Let's say we have signal sampled at the points -3, -2, -1, 0,
> 1, 2, 3 and we need to get the interpolated value at the point 0.5. So
> we calculate the coefficients as Window(x) sinc(x) centered at 0.5. We
> have 3 filter coefficients at the right from 0.5, and 4 coefficients at
> the left from 0.5. I can either make the window of the length = 6 and
> drop the extra coefficient from the left, or make the window of the
> length = 7, which is not going to be symmetric. When working with the
> windows of the length ~ 10, the +/- one coefficient could make
> substantial difference.
>
> VLV

The continuous weighted sinc designed waveform is symmetric. Your
interpolator samples can only be symmetric if you are calculating the
output value at the time of an input sample or at a time half way
between input sample times.

Dale B. Dalrymple
```

```On Sep 3, 1:29 am, dbd <d...@ieee.org> wrote:
> On Sep 2, 9:18 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
>
> > ...
> > there is a
> > subtlety. Let's say we have signal sampled at the points -3, -2, -1, 0,
> > 1, 2, 3 and we need to get the interpolated value at the point 0.5. So
> > we calculate the coefficients as Window(x) sinc(x) centered at 0.5. We
> > have 3 filter coefficients at the right from 0.5, and 4 coefficients at
> > the left from 0.5. I can either make the window of the length = 6 and
> > drop the extra coefficient from the left, or make the window of the
> > length = 7, which is not going to be symmetric. When working with the
> > windows of the length ~ 10, the +/- one coefficient could make
> > substantial difference.
>
> The continuous weighted sinc designed waveform is symmetric.

at least it *should* be.  if you want to have the property of "reverse
invariance".

> Your
> interpolator samples can only be symmetric if you are calculating the
> output value at the time of an input sample or at a time half way
> between input sample times.

not if you're exactly half way *and* you are using 4 coefficients on
the left and 3 coefficients on the right.

r b-j
```

```
robert bristow-johnson wrote:

> On Sep 3, 1:29 am, dbd <d...@ieee.org> wrote:
>
>>On Sep 2, 9:18 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
>>
>>
>>>...
>>>there is a
>>>subtlety. Let's say we have signal sampled at the points -3, -2, -1, 0,
>>>1, 2, 3 and we need to get the interpolated value at the point 0.5. So
>>>we calculate the coefficients as Window(x) sinc(x) centered at 0.5. We
>>>have 3 filter coefficients at the right from 0.5, and 4 coefficients at
>>>the left from 0.5. I can either make the window of the length = 6 and
>>>drop the extra coefficient from the left, or make the window of the
>>>length = 7, which is not going to be symmetric. When working with the
>>>windows of the length ~ 10, the +/- one coefficient could make
>>>substantial difference.
>>
>>The continuous weighted sinc designed waveform is symmetric.

The continuous waveform has to be sampled to make a filter, hence the
coefficients are generally not going to be symmetric.

> at least it *should* be.  if you want to have the property of "reverse

"Vlad, is it reasonable to expect that interpolating the time-reversed
signal should result in the same as what you get when interpolating
the original signal, except time reversed?"

If the filter is symmetric, yes. Otherwise the filter has to be time
reversed also. What is the point?

>>Your
>>interpolator samples can only be symmetric if you are calculating the
>>output value at the time of an input sample or at a time half way
>>between input sample times.
>
> not if you're exactly half way *and* you are using 4 coefficients on
> the left and 3 coefficients on the right.

VLV

```

```On 09/02/2010 09:18 PM, Vladimir Vassilevsky wrote:
>
>
> dbd wrote:
>> On Sep 2, 7:55 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
>>
>>> When making a filter for a time domain interpolation of a signal using
>>> a windowed sinc kernel, what should be the right way of aligning the
>>> window towards sinc?
>>>
>>> I can think of three possibilities:
>>>
>>> 1) The center of the window could be set at 0.
>>> 2) The center of the window could be set to the value of the time
>>> domain shift of the interpolator.
>>> 3) The center of the window could be aligned to the "center of mass"
>>> of the set of sinc coefficients.
>>>
>>> What is the best option ?
>>>
>> I would align the center of the window with the center of the sinc
>> then sample as necessary.
>>
>> Dale B. Dalrymple
>
> I.e. the option (2). That was my thought, too; however there is a
> subtlety. Let's say we have signal sampled at the points -3, -2, -1, 0,
> 1, 2, 3 and we need to get the interpolated value at the point 0.5. So
> we calculate the coefficients as Window(x) sinc(x) centered at 0.5. We
> have 3 filter coefficients at the right from 0.5, and 4 coefficients at
> the left from 0.5. I can either make the window of the length = 6 and
> drop the extra coefficient from the left, or make the window of the
> length = 7, which is not going to be symmetric. When working with the
> windows of the length ~ 10, the +/- one coefficient could make
> substantial difference.

Perhaps in that case the smart thing to do would be to truncate the
window by one sample on one side.

No matter what you do, if the point at which you want to interpolate is
anything other than an integer or an integer plus 1/2, you can't make
your window symmetric -- so start figuring out what's best in general,
and see what drops out for the shift = 0.5 and shift = 0 cases.

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html
```
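Tim's suggestion of working out the general case and watching the special shifts drop out can be sketched as a sweep. The Hann window, 8-tap length, and option-(2) centering are assumptions for illustration.

```python
# Sweep the interpolation shift and measure how far the tap set is from
# mirror symmetry. Window, length, and names are assumptions.
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def hann(x, half_width):
    if abs(x) >= half_width:
        return 0.0
    return 0.5 + 0.5 * math.cos(math.pi * x / half_width)

def taps(d, n=8):
    # Windowed sinc for shift d, window centered on the shift.
    ks = range(-(n // 2 - 1), n // 2 + 1)
    return [sinc(k - d) * hann(k - d, n / 2.0) for k in ks]

def asymmetry(h):
    # Largest difference between the tap set and its mirror image.
    return max(abs(a - b) for a, b in zip(h, reversed(h)))

for d in (0.0, 0.1, 0.25, 0.5):
    print(d, asymmetry(taps(d)))
```

The shift = 0.5 case drops out as symmetric; shift = 0 collapses to a unit impulse (exact, but trivially one-sided in an even-length array); every other shift is measurably asymmetric, which matches the discussion above.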

```On Sep 3, 12:00 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
> robert bristow-johnson wrote:
....
> > at least it *should* be.  if you want to have the property of "reverse
>
> > "Vlad, is it reasonable to expect that interpolating the time-reversed
> > signal should result in the same as what you get when interpolating
> > the original signal, except time reversed?"
>
> If the filter is symmetric, yes. Otherwise the filter has to be time
> reversed also. What is the point?

the point is that the coefficients can't be reversed.  yet, it surely
seems to me, that if the input data is reversed, you should get
exactly the same output, except reversed.  that means, if your
"breakpoints" are at the sample times, the number of samples to the
left and to the right have to be equal and there are an even number of
coefficients (and these coefficients depend solely on the fractional
part of the interpolated time).

it is possible to have an odd number of coefficients, but then the
"breakpoint" occurs midway between the sample times.  but in that
case, let's say you're slowly increasing the fractional time from 0 to
1/2 sample, you have 4 coefficients to the left and 3 to the right.
we'll call those coefficients {h[3], h[2], h[1], h[0], h[-1], h[-2],
h[-3]}.  as you get closer to the midway position between x[0] and
x[1], the coefficient h[3] (which gets attached to x[-3]) goes to zero
and becomes exactly zero when you get to x(1/2).  then as the
fractional time gets slightly beyond 1/2 the coefficient set becomes
{h[2], h[1], h[0], h[-1], h[-2], h[-3], h[-4]} but they would be
renamed {h[3] ... h[-3]} and the set of samples would go from x[-2] up
to x[4].

but i wouldn't do it that way.  i would always insist on an even
number of coefficients and the breakpoints would be at the sample
times.  then it is also conceptually simple; for the continuous-time
index, the integer part solely determines what set of samples are to
be used for the interpolation, and the fractional part solely
determines how that set of samples shall be combined (what the
coefficients are) to yield an interpolated result.

and i would always insist upon time-reverse invariability.  for an
even number of coefficients, that means an equal number on the left
and right of the interpolated point.  for an odd number, that means
one more sample to the left if the fractional part of the time index
is between 0 and 1/2 or one more sample to the right if the fractional
part of the time index is between 1/2 and 1.

r b-j
```
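The scheme r b-j describes (even tap count, integer part picks the samples, fractional part picks the coefficients) might look like the sketch below. The Hann window and the DC normalization are my assumptions, and edge handling (indices running off the ends of x) is omitted.

```python
# Sketch of r b-j's scheme: even tap count, breakpoints at the sample times.
# Window choice and DC normalization are assumptions; no edge handling.
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def hann(x, half_width):
    if abs(x) >= half_width:
        return 0.0
    return 0.5 + 0.5 * math.cos(math.pi * x / half_width)

def interp(x, t, n=8):
    """Interpolate the list x at real-valued index t.

    The integer part of t picks which n samples to use; since the sample
    set shifts with the integer part, the coefficients depend only on the
    fractional part, exactly as described above."""
    i = math.floor(t)
    ks = range(i - n // 2 + 1, i + n // 2 + 1)   # n/2 samples on each side of t
    h = [sinc(k - t) * hann(k - t, n / 2.0) for k in ks]
    g = sum(h)                                    # normalize DC gain to 1
    return sum(hk * x[k] for hk, k in zip(h, ks)) / g
```

The time-reverse property r b-j insists on holds here: interpolating the reversed sequence at the mirrored index gives the mirrored result, because the windowed-sinc kernel is even.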

```robert bristow-johnson  <rbj@audioimagination.com> wrote:

>but i wouldn't do it that way.  i would always insist on an even
>number of coefficients and the breakpoints would be at the sample
>times.  then it is also conceptually simple; for the continuous-time
>index, the integer part solely determines what set of samples are to
>be used for the interpolation, and the fractional part solely
>determines how that set of samples shall be combined (what the
>coefficients are) to yield an interpolated result.

>and i would always insist upon time-reverse invariability.  for an
>even number of coefficients, that means an equal number on the left
>and right of the interpolated point.  for an odd number, that means
>one more sample to the left if the fractional part of the time index
>is between 0 and 1/2 or one more sample to the right if the fractional
>part of the time index is between 1/2 and 1.

The mathematical convention is a little different.  If you're
interpolating from an odd number of sample points, the result is
defined for an abscissa between -1 and +1; whereas if you're interpolating
from an even number of sample points, the result is defined for an abscissa
between 0 and +1.  Thus it both remains symmetrical, and you do
not have sample points "falling off" when the abscissa goes
through a fractional value.

Steve
```

```On Fri, 03 Sep 2010 11:00:33 -0500, Vladimir Vassilevsky
<nospam@nowhere.com> wrote:

>
>
>robert bristow-johnson wrote:
>
>> On Sep 3, 1:29 am, dbd <d...@ieee.org> wrote:
>>
>>>On Sep 2, 9:18 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
>>>
>>>
>>>>...
>>>>there is a
>>>>subtlety. Let's say we have signal sampled at the points -3, -2, -1, 0,
>>>>1, 2, 3 and we need to get the interpolated value at the point 0.5. So
>>>>we calculate the coefficients as Window(x) sinc(x) centered at 0.5. We
>>>>have 3 filter coefficients at the right from 0.5, and 4 coefficients at
>>>>the left from 0.5. I can either make the window of the length = 6 and
>>>>drop the extra coefficient from the left, or make the window of the
>>>>length = 7, which is not going to be symmetric. When working with the
>>>>windows of the length ~ 10, the +/- one coefficient could make
>>>>substantial difference.
>>>
>>>The continuous weighted sinc designed waveform is symmetric.
>
>Continuous has to be sampled to make a filter, hence the coefficients
>are generally not going to be symmetric.
>
>> at least it *should* be.  if you want to have the property of "reverse
>
>"Vlad, is it reasonable to expect that interpolating the time-reversed
>signal should result in the same as what you get when interpolating
>the original signal, except time reversed?"
>
>If the filter is symmetric, yes. Otherwise the filter has to be time
>reversed also. What is the point?
>
>>>Your
>>>interpolator samples can only be symmetric if you are calculating the
>>>output value at the time of an input sample or at a time half way
>>>between input sample times.
>>
>> not if you're exactly half way *and* you are using 4 coefficients on
>> the left and 3 coefficients on the right.
>
>VLV
>
>

A highly oversampled polyphase FIR filter that potentially
interpolates N phases between input samples works just fine as long as
the aggregate FIR impulse response is constructed properly.   In other
words, if the aggregate Nx oversampled coefficient set is symmetric
and well behaved, then all of the asymmetric subfilters will have the
same response.   You can try this yourself and see.   So I don't think
it matters much and it makes the argument that the window be centered
over the center of the IPR peak for a symmetric sinc.

Arbitrarily lopping off a sample on either end of the window to make
it fit takes the original window function and multiplies it by a
length N-1 rectangular window.   The IPR main lobe will spread and the
sidelobes will come up as a result.

As previously mentioned, this sort of thing is easily demonstrated in
simulation without a ton of effort.

Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.abineau.com
```
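Eric's point about the aggregate prototype versus its phase subfilters can be sketched as follows; the 8x oversampling, Hann window, and lengths are all assumptions.

```python
# Sketch of Eric's polyphase observation: design one symmetric oversampled
# prototype, then peel off the per-phase subfilters. Parameters are assumptions.
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

L = 8                      # number of phases (oversampling factor)
T = 4                      # taps per side of center, per subfilter
N = 2 * T * L + 1          # odd length makes the prototype exactly symmetric
mid = T * L

def window(m):             # Hann over the full prototype span
    return 0.5 + 0.5 * math.cos(math.pi * m / (T * L + 1))

proto = [sinc((m - mid) / L) * window(m - mid) for m in range(N)]
subs = [proto[p::L] for p in range(L)]      # subfilter p = taps for phase p/L

sym = lambda h: max(abs(a - b) for a, b in zip(h, reversed(h)))
print("aggregate asymmetry:", sym(proto))   # zero up to rounding
for p in range(L):
    print("phase", p, "asymmetry:", round(sym(subs[p]), 4),
          "DC gain:", round(sum(subs[p]), 4))
```

The aggregate is symmetric while most subfilters are not, yet their DC gains come out essentially identical; phases 0 and L/2 are additionally symmetric, echoing dbd's earlier remark about sample times and the halfway point.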

```On Sep 3, 7:53 am, robert bristow-johnson <r...@audioimagination.com>
wrote:

> ...
> not if you're exactly half way *and* you are using 4 coefficients on
> the left and 3 coefficients on the right.
>
> r b-j

If you are generating an output at the point halfway between inputs
and there are only three valid samples to the right in the window,
there are only three valid samples in the window to the left and they
are symmetric with those on the right. An implementation that
calculates 4 points on one side is broken. If the implementation
calculates a 4th coefficient anyway, then that coefficient should be
set to zero so that it does no harm.

This seems to be the case where the change from 7 coefficients to 6 is
disturbing to VLV's intuition. I think this is a case where the
theoretical/continuous/infinite based intuition of the mixed-signal
designer needs to adapt to the nature of the discrete sampled versions
of symmetry and finite impulse response.

Dale B. Dalrymple
```

```
Eric Jacobsen wrote:

> A highly oversampled polyphase FIR filter that potentially
> interpolates N phases between input samples works just fine as long as
> the aggregate FIR impulse response is constructed properly.   In other
> words, if the aggregate Nx oversamples coefficient set is symmetric
> and well behaved, then all of the asymmetric subfilters will have the
> same response.

This is not so.

The responses of the subfilters are going to be quite different from
each other and much worse than that of the original filter. Several
people from this NG ran into exactly this problem. The rule of thumb is:
if you are designing a prototype filter which is going to be decimated
by a factor of N, the passband flatness and stopband attenuation
requirements for this filter should be increased by N times.

> You can try this yourself and see.

Try this for yourself and see.

> So I don't think
> it matters much and it makes the argument that the window be centered
> over the center of the IPR peak for a symmetric sinc.

It does matter, especially for the small filter lengths.

> Arbitrarily lopping off a sample on either end of the window to make
> it fit takes the original window function and multiplies it by a
> length N-1 rectangular window.   The IPR main lobe will spread and the
> sidelobes will come up as a result.
>
> As previously mentioned, this sort of thing is easily demonstrated in
> simulation without a ton of effort.

The actual problem is a bit more involved: I need to minimize the
difference between interpolator responses computed for an arbitrary
time shift.

VLV

```

```On Sep 3, 3:24 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
>...
>
> The responses of the subfilters are going to be quite different from
> each other and much worse then that of the original filter. Several
> people from this NG run into exactly this problem. The rule of thumb is
> if you are designing a prototype filter which is going to be decimated
> by a factor of N, the passband flatness and stopband attenuation
> requirements to this filter should be increased by N times.
> ...

> VLV

Are you still using Remez exchange equiripple designed prototype
filters?

The (unoptimized) subsets of the optimized prototype will not match
the performance of the prototype and will not match each other. A
degradation by N may be a good rule of thumb for the loss. I've never
seen a reference to a hard limit. Do you have one?

A conventional window design prototype filter such as Kaiser-Bessel
will not perform as well as the equiripple prototype of the same
length, but the subsets will have the same performance as the
prototype.

Dale B. Dalrymple
```

```On Fri, 03 Sep 2010 17:24:08 -0500, Vladimir Vassilevsky
<nospam@nowhere.com> wrote:

>
>
>Eric Jacobsen wrote:
>
>
>> A highly oversampled polyphase FIR filter that potentially
>> interpolates N phases between input samples works just fine as long as
>> the aggregate FIR impulse response is constructed properly.   In other
>> words, if the aggregate Nx oversamples coefficient set is symmetric
>> and well behaved, then all of the asymmetric subfilters will have the
>> same response.
>
>This is not so.

It is, for a fixed decimation rate and only shifting of the phases.
i.e., don't compare across decimation rates, which is not something
you mentioned before.

Try it and see.

>The responses of the subfilters are going to be quite different from
>each other and much worse then that of the original filter. Several
>people from this NG run into exactly this problem. The rule of thumb is
>if you are designing a prototype filter which is going to be decimated
>by a factor of N, the passband flatness and stopband attenuation
>requirements to this filter should be increased by N times.

You've added a new constraint of constancy over decimation rate, when
you hadn't even said it was a decimating filter previously.    As I
mentioned, it holds for the case of only changing the sampling phase.

>> You can try this yourself and see.
>
>Try this for yourself and see.

I have many times, and as I've mentioned before in other threads, for
a comm system where you can quantifiably measure performance within
about 0.1dB or so, it does not matter.

Been there, done that, have thousands of products in the field to
prove it.

>> So I don't think
>> it matters much and it makes the argument that the window be centered
>> over the center of the IPR peak for a symmetric sinc.
>
>It does matter, especially for the small filter lengths.

It helps to state all the constraints up front rather than change the
target later.

You're extremely harsh on other people who do that.   Should I call
you a name?

>> Arbitrarily lopping off a sample on either end of the window to make
>> it fit takes the original window function and multiplies it by a
>> length N-1 rectangular window.   The IPR main lobe will spread and the
>> sidelobes will come up as a result.
>>
>> As previously mentioned, this sort of thing is easily demonstrated in
>> simulation without a ton of effort.
>
>The actual problem is a bit more involved: I need to minimize the
>difference between interpolator responses computed to the arbitrary time
>shift.
>
>VLV

Which is exactly the case I'm describing, i.e., a polyphase FIR
resampling filter that can adjust output sample timing phase by
selecting a coefficient subset on the fly.    If the decimation rate
is constant, the filter response across the possible filter subsets is
not distinguishable (in BER) down to about 0.1dB (or more, I've just
not been able to measure more finely) in the many systems I've built
and verified this way.

Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.abineau.com
```

```
Eric Jacobsen wrote:
> On Fri, 03 Sep 2010 17:24:08 -0500, Vladimir Vassilevsky
> <nospam@nowhere.com> wrote:
>
>
>>
>>Eric Jacobsen wrote:
>>
>>
>>
>>>A highly oversampled polyphase FIR filter that potentially
>>>interpolates N phases between input samples works just fine as long as
>>>the aggregate FIR impulse response is constructed properly.   In other
>>>words, if the aggregate Nx oversamples coefficient set is symmetric
>>>and well behaved, then all of the asymmetric subfilters will have the
>>>same response.
>>
>>This is not so.
>
>
> It is, for a fixed decimation rate and only shifting of the phases.
> i.e., don't compare across decimation rates, which is not something
> you mentioned before.
>
> Try it and see.
>
>>The responses of the subfilters are going to be quite different from
>>each other and much worse then that of the original filter. Several
>>people from this NG run into exactly this problem. The rule of thumb is
>>if you are designing a prototype filter which is going to be decimated
>>by a factor of N, the passband flatness and stopband attenuation
>>requirements to this filter should be increased by N times.
>
>
> You've added a new constraint of constancy over decimation rate, when
> you hadn't even said it was a decimating filter previously.    As I
> mentioned, for the case of only changing the sampling phase, which is

>>>You can try this yourself and see.
>>Try this for yourself and see.

http://www.abvolt.com/misc/filters_comparison.xls

Surprisingly, the Kaiser-Bessel windowed sinc showed more variation
between subfilters than the Parks-McClellan design.

> I have many times, and as I've mentioned before in other threads, for
> a comm system where you can quantifiably measure performance within
> about 0.1dB or so, it does not matter.

The 0.1dB could be terrible for some other cases, as this corresponds
to a mismatch error of ~1%.

> Been there, done that, have thousands of products in the field to
> prove it.

Sure. That's a solid argument revealing the confidence.

>>>So I don't think
>>>it matters much and it makes the argument that the window be centered
>>>over the center of the IPR peak for a symmetric sinc.
>>
>>It does matter, especially for the small filter lengths.
>
> constraints up front rather than change the target later.

> You're extremely harsh on other people who do that.   Should I call
> you a name?

Yes, please, call a name and tell me what to do.

>>>Arbitrarily lopping off a sample on either end of the window to make
>>>it fit takes the original window function and multiplies it by a
>>>length N-1 rectangular window.   The IPR main lobe will spread and the
>>>sidelobes will come up as a result.
>>>
>>>As previously mentioned, this sort of thing is easily demonstrated in
>>>simulation without a ton of effort.
>>
>>The actual problem is a bit more involved: I need to minimize the
>>difference between interpolator responses computed to the arbitrary time
>>shift.

> Which is exactly the case I'm describing, i.e., a polyphase FIR
> resampling filter that can adjust output sample timing phase by
> selecting a coefficient subset on the fly.    If the decimation rate
> is constant, the filter response across the possible filter subsets is
> not distinguishable (in BER) down to about 0.1dB (or more, I've just
> not been able to measure more finely) in the many systems I've built
> and verified this way.

The 0.1 dB is simple when you are not very close to Nyquist...

VLV

```

```
dbd wrote:

> On Sep 3, 3:24 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
>
>>...
>>
>>The responses of the subfilters are going to be quite different from
>>each other and much worse then that of the original filter. Several
>>people from this NG run into exactly this problem. The rule of thumb is
>>if you are designing a prototype filter which is going to be decimated
>>by a factor of N, the passband flatness and stopband attenuation
>>requirements to this filter should be increased by N times.
>>...
>
> Are you still using Remez exchange equiripple designed prototype
> filters?
>
> The (unoptimized) subsets of the optimized prototype will not match
> the performance of the prototype and will not match each other. A
> degradation by N may be a good rule of thumb for the loss. I've never
> seen a reference to a hard limit. Do you have one?

I haven't seen any. This is just my observations.

> A conventional window design prototype filter such as Kaiser-Bessel
> will not perform as well as the equiripple prototype of the same
> length, but the subsets will have the same performance as the
> prototype.

It depends. Window designs are affected by decimation also; how much
they are affected wrt similar Parks-McClellan design - depends on the
particulars.

I am thinking of sharpening a time domain interpolation filter by
brute-force optimization.

VLV

```

```On Sep 3, 1:21 pm, spop...@speedymail.org (Steve Pope) wrote:
> robert bristow-johnson  <r...@audioimagination.com> wrote:
>
> >but i wouldn't do it that way.  i would always insist on an even
> >number of coefficients and the breakpoints would be at the sample
> >times.  then it is also conceptually simple; for the continuous-time
> >index, the integer part solely determines what set of samples are to
> >be used for the interpolation, and the fractional part solely
> >determines how that set of samples shall be combined (what the
> >coefficients are) to yield an interpolated result.
> >and i would always insist upon time-reverse invariability.  for an
> >even number of coefficients, that means an equal number on the left
> >and right of the interpolated point.  for an odd number, that means
> >one more sample to the left if the fractional part of the time index
> >is between 0 and 1/2 or one more sample to the right if the fractional
> >part of the time index is between 1/2 and 1.
>
> The mathematical convention is a little different.  If you're
> interpolating from an odd number of sample points, the result is
> defined for an abscissa between -1 and +1;

i think it's between -1/2 and +1/2.  we don't need overlapping
segments.

with this odd number of coefficients (let's call that number "N"), you
might have to add 1/2 before conceptually applying the floor()
function, but it's the same thing, the integer part of the precision
time (or abscissa) defines which N samples to use and the fractional
part defines the set of coefficients used to combine them.  it should
be well defined which precludes overlapping segments.

> whereas if you're interpolating
> from an even number of sample points, the result is defined for an abscis=
sa
> between 0 and +1.

agreed.

> =A0Thus it both remains symmetrical, and you do
> not have sample points "falling off" when the abscissa goes
> through a fractional value.

well, sorta you do.

let's say that N=3D7 (an odd number).  if you're interpolating at some
precision time between n-1/2 and n+1/2, then the samples x[n-3],
x[n-2], ... x[n+3]  are used.  when t crosses from just below n+1/2 to
just above it, then x[n-3] falls off the edge on the left and x[n+4]
slides into the picture from the right.  but then, instead of saying
that you're just above n+1/2, you should increment n by 1 and say
you're just above n-1/2 and it's the same thing as before.

in both the N even or N odd cases, the interpolation kernel (the
windowed sinc function) goes to zero at +/- N/2.  the difference
between N=3Deven and N=3Dodd cases is that the latter is midway between
samples, which makes one sample "disappear" (have coefficient that
goes to zero) while other samples do not.  in the N=3Deven case, all
samples (other than the obvious one) have coefficients approaching
zero only as you approach an integer sample time.
```
 0
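The even/odd tap-selection rule r b-j describes can be sketched in a few lines (a minimal illustration of the convention, not code from the thread; the function name is mine): for even N the integer part of the continuous time picks the sample set, and for odd N you add 1/2 before flooring so the taps straddle the nearest sample.

```python
import math

def tap_range(t, N):
    """Indices of the N input samples used to interpolate at continuous
    time t.  Even N: breakpoints at the sample times (plain floor).
    Odd N: breakpoints midway between samples (add 1/2 before flooring)."""
    if N % 2 == 0:
        n = math.floor(t)                  # integer part picks the set
        return list(range(n - N // 2 + 1, n + N // 2 + 1))
    n = math.floor(t + 0.5)                # nearest sample for odd N
    return list(range(n - N // 2, n + N // 2 + 1))
```

For N=7, interpolating just below t = 4.5 uses x[1]..x[7]; just above, x[1] falls off the left edge and x[8] slides in from the right — exactly the hand-off described in the post above.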

```robert bristow-johnson  <rbj@audioimagination.com> wrote:

>On Sep 3, 1:21 pm, spop...@speedymail.org (Steve Pope) wrote:

>> robert bristow-johnson <r...@audioimagination.com> wrote:

>> >but i wouldn't do it that way.  i would always insist on an even
>> >number of coefficients and the breakpoints would be at the sample
>> >times.  then it is also conceptually simple; for the continuous-time
>> >index, the integer part solely determines what set of samples are to
>> >be used for the interpolation, and the fractional part solely
>> >determines how that set of samples shall be combined (what the
>> >coefficients are) to yield an interpolated result.
>> >and i would always insist upon time-reverse invariability.  for an
>> >even number of coefficients, that means an equal number on the left
>> >and right of the interpolated point.  for an odd number, that means
>> >one more sample to the left if the fractional part of the time index
>> >is between 0 and 1/2 or one more sample to the right if the fractional
>> >part of the time index is between 1/2 and 1.

>> The mathematical convention is a little different.  If you're
>> interpolating from an odd number of sample points, the result is
>> defined for an abscissa between -1 and +1;

>i think it's between -1/2 and +1/2.  we don't need overlapping
>segments.

I agree that is more logical.

Since I wrote the above, I looked up the section on Lagrange
interpolators in my copy of Abramowitz and Stegun, and it's
even stranger than I thought -- they actually define interpolators
for almost any abscissa.  e.g. if the sample points are at
-2, -1, 0, 1 and 2, you can still compute an interpolation value
for an abscissa of -1.8 -- why you would want to, not sure, but
the formula is there.

>with this odd number of coefficients (let's call that number "N"), you
>might have to add 1/2 before conceptually applying the floor()
>function, but it's the same thing, the integer part of the precision
>time (or abscissa) defines which N samples to use and the fractional
>part defines the set of coefficients used to combine them.  it should
>be well defined which precludes overlapping segments.

I was thinking that an odd interpolator could give you values
for the interval from -1 to +1, and then you'd hop forward by
two, so there is no overlap.

>> whereas if you're interpolating
>> from an even number of sample points, the result is defined for an abscissa
>> between 0 and +1.
>
>agreed.
>
>>  Thus it both remains symmetrical, and you do
>> not have sample points "falling off" when the abscissa goes
>> through a fractional value.

>well, sorta you do.

>let's say that N=7 (an odd number).  if you're interpolating at some
>precision time between n-1/2 and n+1/2, then the samples x[n-3],
>x[n-2], ... x[n+3]  are used.  when t crosses from just below n+1/2 to
>just above it, then x[n-3] falls off the edge on the left and x[n+4]
>slides into the picture from the right.  but then, instead of saying
>that you're just above n+1/2, you should increment n by 1 and say
>you're just above n-1/2 and it's the same thing as before.

>in both the N even or N odd cases, the interpolation kernel (the
>windowed sinc function) goes to zero at +/- N/2.  the difference
>between N=even and N=odd cases is that the latter is midway between
>samples, which makes one sample "disappear" (have coefficient that
>goes to zero) while other samples do not.  in the N=even case, all
>samples (other than the obvious one) have coefficients approaching
>zero only as you approach an integer sample time.

Yes, thanks.  The behavior you just described is there in the
Lagrange formulae in Abramowitz and Stegun.  (Which I just checked
on Alibris, there are plenty of copies of this tome available
for not too much.)

Steve
```
 0
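The generic product form of the Lagrange formula Steve refers to is short enough to sketch (this is the textbook form, not a transcription from Abramowitz & Stegun): nothing in it restricts the abscissa to the central interval, so with sample points at -2..2 it happily returns a value at x = -1.8, just as he found.

```python
def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange polynomial through (xs[i], ys[i]) at abscissa x.
    The formula is defined for (almost) any abscissa, not just the
    central interval between the middle sample points."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0                       # i-th Lagrange basis polynomial at x
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total
```

Five equally spaced points through a cubic reproduce that cubic exactly, at x = -1.8 or anywhere else.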

```On Sep 3, 10:08 pm, spop...@speedymail.org (Steve Pope) wrote:
>  ...
>
> Yes, thanks.  The behavior you just described is there in the
> Lagrange formulae in Abramowitz and Stegun.  (Which I just checked
> on Alibris, there are plenty of copies of this tome available
> for not too much.)
>
> Steve

Dale B. Dalrymple
```
 0

```On Fri, 03 Sep 2010 23:30:26 -0500, Vladimir Vassilevsky
<nospam@nowhere.com> wrote:

>
>
>Eric Jacobsen wrote:
>> On Fri, 03 Sep 2010 17:24:08 -0500, Vladimir Vassilevsky
>> <nospam@nowhere.com> wrote:
>>
>>
>>>
>>>Eric Jacobsen wrote:
>>>
>>>
>>>
>>>>A highly oversampled polyphase FIR filter that potentially
>>>>interpolates N phases between input samples works just fine as long as
>>>>the aggregate FIR impulse response is constructed properly.   In other
>>>>words, if the aggregate Nx oversamples coefficient set is symmetric
>>>>and well behaved, then all of the asymmetric subfilters will have the
>>>>same response.
>>>
>>>This is not so.
>>
>>
>> It is, for a fixed decimation rate and only shifting of the phases.
>> i.e., don't compare across decimation rates, which is not something
>> you mentioned before.
>>
>> Try it and see.
>>
>>>The responses of the subfilters are going to be quite different from
>>>each other and much worse than that of the original filter. Several
>>>people from this NG have run into exactly this problem. The rule of thumb is
>>>if you are designing a prototype filter which is going to be decimated
>>>by a factor of N, the passband flatness and stopband attenuation
>>>requirements for this filter should be increased by N times.
>>
>>
>> You've added a new constraint of constancy over decimation rate, when
>> you hadn't even said it was a decimating filter previously.    As I
>> mentioned, for the case of only changing the sampling phase, which is
>
>
>>>>You can try this yourself and see.
>>>Try this for yourself and see.
>
>http://www.abvolt.com/misc/filters_comparison.xls

I've no way to tell what the columns and comparisons are in that
spreadsheet.   They seem to show the same passband response for all
cases, though.

>Surprisingly, Kaiser-Bessel windowed sinc showed more variation between
>subfilters than a Parks-McClellan design.
>
>> I have many times, and as I've mentioned before in other threads, for
>> a comm system where you can quantifiably measure performance within
>> about 0.1dB or so, it does not matter.
>
>The 0.1 dB could be terrible in some other cases, as it corresponds to
>a mismatch error of ~1%.

I don't know about every application, but in systems I've worked on
0.1dB difference in performance qualifies as essentially
indistinguishable.

>> Been there, done that, have thousands of products in the field to
>> prove it.
>
>Sure. That's a solid argument revealing the confidence.
>
>>>>So I don't think
>>>>it matters much and it makes the argument that the window be centered
>>>>over the center of the IPR peak for a symmetric sinc.
>>>
>>>It does matter, especially for the small filter lengths.
>>
>> constraints up front rather than change the target later.
>
>
>> You're extremely harsh on other people who do that.   Should I call
>> you a name?
>
>Yes, please, call a name and tell me what to do.
>
>>>>Arbitrarily lopping off a sample on either end of the window to make
>>>>it fit takes the original window function and multiplies it by a
>>>>length N-1 rectangular window.   The IPR main lobe will spread and the
>>>>sidelobes will come up as a result.
>>>>
>>>>As previously mentioned, this sort of thing is easily demonstrated in
>>>>simulation without a ton of effort.
>>>
>>>The actual problem is a bit more involved: I need to minimize the
>>>difference between interpolator responses computed to the arbitrary time
>>>shift.
>
>> Which is exactly the case I'm describing, i.e., a polyphase FIR
>> resampling filter that can adjust output sample timing phase by
>> selecting a coefficient subset on the fly.    If the decimation rate
>> is constant, the filter response across the possible filter subsets is
>> not distinguishable (in BER) down to about 0.1dB (or more, I've just
>> not been able to measure more finely) in the many systems I've built
>> and verified this way.
>
>The 0.1 dB not very close to Nyquist is simple...

If you need the stopbands to be identical, I'll assert that's an
unusual application.

Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.abineau.com
```
 0

```On 9/2/2010 9:08 PM, robert bristow-johnson wrote:
> On Sep 2, 10:55 pm, Vladimir Vassilevsky<nos...@nowhere.com>  wrote:
>>    When making a filter for a time domain interpolation of a signal using
>> a windowed sinc kernel, what should be the right way of aligning the
>> window towards sinc?
>>
>> I can think of three possibilities:
>>
>>    1) The center of the window could be set at 0.
>>    2) The center of the window could be set to the value of the time
>> domain shift of the interpolator.
>>    3) The center if the window could be aligned to the "center of mass"
>> of the set of sinc coefficients.
>>
>> What is the best option ?
>
> i'm sorta with Fred.
>
> Vlad, is it reasonable to expect that interpolating the time-reversed
> signal should result in the same as what you get when interpolating
> the original signal, except time reversed?
>
> i don't get why any interpolation wouldn't be symmetrical and constant
> delay (w.r.t. frequency), unless maybe you wanted less delay for some
> frequencies.  but i don't really get the unsymmetrical interpolater
> response.
>
> r b-j

It appears that this isn't a matter of interpolating in a streaming
sense.  OK - that helps.  So, we have a fixed sequence and we want to
interpolate it, right?

It seems still that a lot of confusion has to do with the actual
operations needed and what a "windowed sinc" really means.  It could mean:
- a tapered sinc
or, as the usual context:
- a gate function (whose FT is a sinc) that is *windowed* .. resulting
in a generally faster-decaying and slightly fatter main lobe sinc-like
function.
It's the sinc-like function that is the kernel for the interpolating
convolution step.  Do we agree?

Now, what if one does this:
Start with a continuous sinc centered at zero but with finite length.
[The FT of this is a gate convolved with a sinc because of the finite
temporal length.]
So, we can say that there's a rectangular window in time and that there
is a semi-rectangular window in frequency plus the effects of convolving
in freq with that short sinc.
The idea is that this sinc is going to be used to convolve in time in
order to interpolate (in this case "reconstruct" but we can sample the
sinc later...).

Interpolation is a matter of increasing the sample rate if we're going
to keep things discrete.  So, we want to start out by adding zero-valued
samples where we want the interpolated samples to appear.  This takes
care of increasing the sample rate.
Next, we want to remove all the frequency content around the *new* fs/2.
So that looks like a lowpass filter.
The lowpass filter looks something like a sinc in time.
So, multiply in freq or convolve in time is just a choice we make.

If we want to avoid things  like Gibbs phenomenon in time then we change
the shape of the lowpass filter in frequency with a "window function"
which, in turn, changes the nature of the sinc in time that we'll use
for interpolating.

There are bunches of decent discrete interpolators:
[0.5  1.0  0.5] which is sampled at 2fs, keeps the original samples
intact and puts an interpolated sample which is the mean of the two
adjacent samples (i.e. straight line interpolation).

Anyway, are you raising the question:
"What happens if you take a finite sinc and multiply it by a window
function?"
In that case, the lowpass filter is convolved with the FT of the window
function.  And, doing that seems a bit unusual to me and I've not
pondered that question - at least not knowingly....

I believe the original question was about where/how to apply the
"window".  But, as you can see, I'm unclear as to which domain the
window would be applied.  In any case I don't see that it matters
because, in the end, one is going to do the equivalent of a time domain
convolution and the kernel of that convolution applies for all values of
time - it isn't "centered" anywhere in that context.  Well, unless one
is determined to maintain temporal registration.  In that case, and in
all the things I've mentioned above, it's best to use a symmetrical
kernel and the temporal registration is with the center of it.

That is, if the window is in time (unusual case) then it should be
centered at the center of the underlying sinc.
If the window function is in frequency then it should begin and end at
the band edges of the LPF (-fc to +fc).
And it has nothing to do with the data except for matching up sample rates.

I hope this helps.... ?

Fred

```
 0
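Fred's [0.5 1.0 0.5] example worked through (a minimal sketch; the function name is mine): zero-stuff to 2fs, then convolve with that 3-tap kernel, which passes the original samples through unchanged and makes each new sample the mean of its two neighbors — straight-line interpolation.

```python
def upsample2_linear(x):
    """Interpolate by 2: zero-stuff, then convolve with [0.5, 1.0, 0.5]."""
    h = [0.5, 1.0, 0.5]
    z = []
    for s in x:                    # zero-stuffing doubles the sample rate
        z.extend([s, 0.0])
    y = []
    for n in range(len(z)):
        acc = 0.0
        for k, hk in enumerate(h):
            i = n - k + 1          # advance by one tap to center the kernel
            if 0 <= i < len(z):
                acc += hk * z[i]
        y.append(acc)
    return y
```

Even output samples reproduce the input; odd ones are the neighbor averages (the last output only sees one neighbor, as with any FIR edge).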

```On Sep 4, 1:08 am, spop...@speedymail.org (Steve Pope) wrote:
> robert bristow-johnson <r...@audioimagination.com> wrote:
>
> [... full post quoted above, snipped ...]
>
> Yes, thanks.  The behavior you just described is there in the
> Lagrange formulae in Abramowitz and Stegun.  (Which I just checked
> on Alibris, there are plenty of copies of this tome available
> for not too much.)
>
> Steve

In fact if you take the Lagrange formula and look at it in the limit
of a very high order polynomial and equally spaced samples, then each
of the canonical products in the Lagrange formula becomes a form of
sin(x)/x. Being able to let your abscissa be any value such as 1/8
allows you to resample the signal at a different phase. Imagine doing
delay equalization and you need to delay a signal by 0.03 samples. You
will need some sort of interpolator to do this. A common approach is
to design a polyphase filter and then decimate the resulting impulse
response. But there are lots of ways to do this. You may even apply a
simple Taylor or Maclaurin series to do the trick. E.g.,

y(x+dx) = y(x) + dx*y'(x) + (dx^2)*y''(x)/2 + ...

Clay

```
 0
Reply clay (735) 9/5/2010 1:43:40 AM

```
Clay wrote:

> In fact if you take the Lagrange formula and look at it in the limit
> of a very high order polynomial and equally spaced samples, then each
> of the cononical products in the Lagrange formula becomes a form of
> sin(x)/x. Being able to let your abscissa be any value such as 1/8
> allows you resample the signal at a different phase. Imagine doing
> delay equalization and you need to delay a signal by 0,03 samples. You
> wil need some sort of interpolator to do this. Commonly done is desig
> a polyphase filter and then decimate the resulting impulse response.

This requires care, as the decimated subsets of a practical original
filter of finite length differ somewhat from each other in amplitude
and phase response.

> But there are lot's of ways to do this. You may even apply a simple
> Taylor's or MacLauren series to do the trick. E.g.,
>
> y(x+dx) = y(x) + dx*y'(x) + (dx^2)*y''(x)/2 +...

The derivatives for the Taylor formula could be calculated directly, by
means of FIR differentiators. This approach is commonly referred to as
Farrow interpolation. It would be interesting to see the comparison of
accuracy vs computing needs for the optimized implementations of direct
interpolation vs Farrow's.

DSP and Mixed Signal Design Consultant
http://www.abvolt.com
```
 0
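For concreteness, here is a sketch of a Farrow structure (a cubic-Lagrange variant; the branch matrix below is my own expansion of the four Lagrange basis polynomials in the fractional delay mu, not something given in the thread): a few fixed FIR "branch" filters run once per output sample, and their outputs are combined by a Horner polynomial in mu, so mu can take any value with no coefficient table at all.

```python
# Branch filters of a cubic-Lagrange Farrow interpolator.  Row k holds the
# fixed FIR taps producing c_k, applied to [x[n-1], x[n], x[n+1], x[n+2]].
FARROW = [
    [ 0.0,        1.0,  0.0,  0.0      ],   # c0: constant term
    [-1.0 / 3.0, -0.5,  1.0, -1.0 / 6.0],   # c1: mu term
    [ 0.5,       -1.0,  0.5,  0.0      ],   # c2: mu^2 term
    [-1.0 / 6.0,  0.5, -0.5,  1.0 / 6.0],   # c3: mu^3 term
]

def farrow_interp(x, n, mu):
    """y(n + mu), 0 <= mu < 1: run the four fixed branch FIRs on
    x[n-1..n+2], then combine the branch outputs by Horner's rule in mu."""
    taps = x[n - 1:n + 3]
    c = [sum(h * s for h, s in zip(row, taps)) for row in FARROW]
    return ((c[3] * mu + c[2]) * mu + c[1]) * mu + c[0]
```

Since cubic Lagrange is exact on polynomials through degree 3, feeding it x[i] = i^2 should return (n + mu)^2 for any mu — a handy self-check of the branch matrix.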

```On Sep 3, 9:30 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
> ...

>
> I just corrected your assertions.

> >>>You can try this yourself and see.
> >>Try this for yourself and see.
>
> http://www.abvolt.com/misc/filters_comparison.xls
>
> Surprisingly, Kaiser-Bessel windowed sinc showed more variation between
> subfilters than a Parks-McClellan design.
> ...

> VLV

Remez
line:73,  level:1dB,  range:0.33dB
line:122, level:2dB,  range:0.07dB
line:268, level:-6dB, range:0.06dB
K-B
line:73,  level:0.00xdB, range:0.0038dB
line:122, level:-.02dB,  range:0.0011dB
line:268, level:-.37dB,  range:0.01dB

The K-B subfilters have greater relative range (they twiddle more least
significant digits in the floating point format), but the variation in
values is smaller in K-B than in Remez by a factor of 5 or more.

Your example is an interesting one. The filter transition band is so
wide and the passband tolerance so great that even the prototype
response has no region with multiple equal ripples. The individual
phases have enough coefficients to define the extrema in the response.
They perform quite well. In the cases I usually encounter, the
application involves decimation and the filter has large regions with
equal ripple behavior and it can be difficult to find any phase from a
Remez prototype that meets the filter spec.

Dale B. Dalrymple
```
 0

```On Sun, 5 Sep 2010 11:04:20 -0700 (PDT), dbd <dbd@ieee.org> wrote:

>On Sep 3, 9:30 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
>> ...
>
>>
>> I just corrected your assertions.
>
>> >>>You can try this yourself and see.
>> >>Try this for yourself and see.
>>
>> http://www.abvolt.com/misc/filters_comparison.xls
>>
>> Surprisingly, Kaiser-Bessel windowed sinc showed more variation between
>> subfilters than a Parks-McClellan design.
>> ...
>
>> VLV
>
>Remez
>line:73,  level:1dB,  range:0.33dB
>line:122, level:2dB,  range:0.07dB
>line:268, level:-6dB, range:0.06dB
>K-B
>line:73,  level:0.00xdB, range:0.0038dB
>line:122, level:-.02dB,  range:0.0011dB
>line:268, level:-.37dB,  range:0.01dB
>
>The K-B have greater relative range (they twiddle more least
>significant digits in the floating point format), but smaller values
>variation in K-B than in Remez by a factor of 5 or more.
>
>Your example is an interesting one. The filter transition band is so
>wide and the passband tolerance so great that even the prototype
>response has no region with multiple equal ripples. The individual
>phases have enough coefficients to define the extrema in the response.
>They perform quite well. In the cases I usually encounter, the
>application involves decimation and the filter has large regions with
>equal ripple behavior and it can be difficult to find any phase from a
>Remez prototype that meets the filter spec.
>
>Dale B. Dalrymple

How tight of a spec are you trying to meet?

As mentioned, in my case the main metric is BER for a matched filter
in a receiver, which is easily quantifiable and a good measure (in
AWGN) of how well the filters match at the sample points.   I've
always used P-M (with Remez) to design these filters, and haven't
found a case yet where the subfilters had different performance from
the aggregate prototype filter.

And, like your case, sometimes the decimation rate is high.

Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.abineau.com
```
 0

```On Sep 5, 11:42 am, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
> Clay wrote:
> > In fact if you take the Lagrange formula and look at it in the limit
> > of a very high order polynomial and equally spaced samples, then each
> > of the canonical products in the Lagrange formula becomes a form of
> > sin(x)/x. Being able to let your abscissa be any value such as 1/8
> > allows you to resample the signal at a different phase. Imagine doing
> > delay equalization and you need to delay a signal by 0.03 samples. You
> > will need some sort of interpolator to do this. A common approach is
> > to design a polyphase filter and then decimate the resulting impulse response.
>
> This requires care as the practical decimated sets of the original
> filter of finite length are somewhat different in the amplitude and the
> phase response.
>
> > But there are lots of ways to do this. You may even apply a simple
> > Taylor or Maclaurin series to do the trick. E.g.,
>
> > y(x+dx) = y(x) + dx*y'(x) + (dx^2)*y''(x)/2 + ...
>
> The derivatives for the Taylor formula could be calculated directly, by
> means of FIR differentiators. This approach is commonly referred to as
> Farrow interpolation. It would be interesting to see the comparison of
> accuracy vs computing needs for the optimized implementations of direct
> interpolation vs Farrow's.

can someone explain to me essentially what good is the Farrow
structure?  as best as i can tell, it appears pretty expensive.  you
need to maintain N FIR filters simultaneously and "mix" the outputs of
those filters according to the fractional delay parameter.  what
happens if the fractional delay advances past a sample boundary (goes
from 0.99 to 1.02 sample)?  then, it seems that all of the FIR filters
must be offset by one sample and 1 is subtracted from that fractional
delay so that it stays in between 0 and 1.

but, whatever the number of taps on the polyphase FIR, if you have a
time-varying delay, why not just use the integer part to select which
N adjacent samples and the fractional part to select the N
coefficients for combining those N samples.  that's it.  you have to
increment the precision time to whatever its value is, separate the
integer and fractional part, use the integer part to point to your N
samples and use the fractional part (scaled to be an offset) to point
to the N coefficients.  then do N MAC instructions.

how is Farrow better or more efficient than that?

r b-j

```
 0
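The table-driven alternative r b-j describes can be sketched as follows (naming and the Hann-windowed sinc are my own choices, not from the thread): precompute L phase rows offline; at run time the integer part of the precision time picks the N samples, the fractional part (rounded to the nearest stored phase) picks the row, and then it's N MACs.

```python
import math

def make_table(L=64, N=8):
    """L rows of N windowed-sinc coefficients; row m interpolates at a
    fractional offset of m/L past sample n.  Hann window centered on the
    interpolation instant, reaching zero at |t| = N/2."""
    table = []
    for m in range(L):
        frac = m / L
        row = []
        for k in range(-N // 2 + 1, N // 2 + 1):
            t = k - frac              # tap position vs. interpolation point
            sinc = 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)
            w = 0.5 + 0.5 * math.cos(2 * math.pi * t / N)
            row.append(sinc * w)
        table.append(row)
    return table

def interp_at(x, t, table, N=8):
    """Integer part of t -> which N samples; fractional part -> which row
    of coefficients (nearest stored phase); then N multiply-accumulates."""
    L = len(table)
    n = math.floor(t)
    m = round((t - n) * L)
    if m == L:                        # fractional part rounded up to 1
        n, m = n + 1, 0
    return sum(c * x[n + k]
               for k, c in zip(range(-N // 2 + 1, N // 2 + 1), table[m]))
```

Row 0 degenerates to a unit impulse, so integer times return the input samples; the phase quantization error shrinks as L grows, which is exactly the storage-versus-arithmetic trade against Farrow discussed below.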

```On Sep 5, 3:03 pm, robert bristow-johnson <r...@audioimagination.com>
wrote:

>
> can someone explain to me essentially what good is the Farrow
> structure?
> ...

>
> how is Farrow better or more efficient than that?
> ...

The point of the Farrow architecture is to support interpolation that
is continuously variable and of arbitrary fractional resolution with
fixed coefficient storage and some number of calculations instead of
simpler logic and an arbitrarily large coefficient set. Which is more
efficient will depend on the costs of adds, multiplies, logic
implementation and coefficient storage.

Dale B. Dalrymple

```
 0

```On Sep 5, 3:03 pm, robert bristow-johnson <r...@audioimagination.com>
wrote:
> can someone explain to me essentially what good is the Farrow
> structure?  as best as i can tell, it appears pretty expensive.  you
> need to maintain N FIR filters simultaneously and "mix" the outputs of
> those filters according to the fractional delay parameter.  what
> happens if the fractional delay advances past a sample boundary (goes
> from 0.99 to 1.02 sample)?  then, it seems that all of the FIR filters
> must be offset by one sample and 1 is subtracted from that fractional
> delay so that it stays in between 0 and 1.
>
> but, whatever the number of taps on the polyphase FIR, if you have a
> time-varying delay, why not just use the integer part to select which
> N adjacent samples and the fractional part to select the N
> coefficients for combining those N samples.  that's it.  you have to
> increment the precision time to whatever its value is, separate the
> integer and fractional part, use the integer part to point to your N
> samples and use the fractional part (scaled to be an offset) to point
> to the N coefficients.  then do N MAC instructions.
>
> how is Farrow better or more efficient than that?

Trade-off for when you have a lot more arithmetic than constant
storage (tables of fractional coefficients) available.  Or possibly
for when the arithmetic pipeline can clock faster than the table
memory access time, or the table doesn't even fit in a tiny data
cache, etc.  Sometimes seen in certain ASIC processes and some
FPGA architectures.

IMHO. YMMV.
--
rhn A.T nicholson d.0.t C-o-M
http://www.nicholson.com/rhn/dsp.html
```
 0

```On Sep 5, 2:07 pm, eric.jacob...@ieee.org (Eric Jacobsen) wrote:
> ...
>
> As mentioned, in my case the main metric is BER for a matched filter
> in a receiver, which is easily quantifiable and a good measure (in
> AWGN) of how well the filters match at the sample points.   I've
> always used P-M (with Remez) to design these filters, and haven't
> found a case yet where the subfilters had different performance from
> the aggregate prototype filter.
>
> And, like your case, sometimes the decimation rate is high.
>
> Eric Jacobsen
> Minister of Algorithms
> Abineau Communications
> http://www.abineau.com

My typical application would be a channelizer in a spectrum analyzer
where the sell-off criteria include a direct measurement of the filter
frequency response, in-band and out, of each phase. Artifacts that
could fail the system on those criteria might have no measurable
effect on system parameters such as minimum discernible signal (MDS),
where MDS is defined as the SNR required to achieve a high enough
probability of detection and a small enough probability of false alarm.

Dale B. Dalrymple
```
 0

```On Sun, 5 Sep 2010 16:06:22 -0700 (PDT), dbd <dbd@ieee.org> wrote:

>On Sep 5, 3:03 pm, robert bristow-johnson <r...@audioimagination.com>
>wrote:
>
>>
>> can someone explain to me essentially what good is the Farrow
>> structure?
>> ...
>
>>
>> how is Farrow better or more efficient than that?
>> ...
>
>The point of the Farrow architecture is to support interpolation that
>is continuously variable and of arbitrary fractional resolution with
>fixed coefficient storage and some number of calculations instead of
>simpler logic and an arbitrarily large coefficient set. Which is more
>efficient will depend on the costs of adds, multiplies, logic
>implementation and coefficient storage.
>
>Dale B. Dalrymple
>

Exactly.   I've seen them employed occasionally in ASICs where the
real-estate occupied by the processing logic was less than what would
be required for coefficient storage.   This depends, of course, on how
finely one needs to discern the interpolation phases and how
efficiently the processing logic can be implemented.


Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.abineau.com
```
 0

```On Sun, 5 Sep 2010 15:03:56 -0700 (PDT), robert bristow-johnson
<rbj@audioimagination.com> wrote:
....
>samples and use the fractional part (scaled to be an offest) to point
>to the N coefficients.  then do N MAC instructions.
>
>how is Farrow better or more efficient than that?

As others have said, Farrow is useful in systems where there is no
memory and no MAC instructions ;-) i.e. ASICs and FPGAs (mostly as
prototyping platforms for the former) where you can implement it in
hard-wired form as a much faster, lower-power structure using exactly
sized multipliers/adders, power-of-two numbers etc.
--
Muzaffer Kal

DSPIA INC.
ASIC/FPGA Design Services

http://www.dspia.com
```
 0

```On 9/4/2010 2:01 PM, Fred Marshall wrote:

>
> I hope this helps.... ?
>
> Fred
>

```
 0

35 Replies
401 Views
