COMPGROUPS.NET

### Conventional wisdom: how to upsample very large arrays accurately?


Hello all,

I'd like to know what methods are available, if any, for upsampling a
large array. I have an application where I need to sinc-interpolate, by 4x,
an array of 1E9 (a billion) double-precision data points output by an ADC.
The array captures a clock waveform centered at zero volts and is
oversampled (meets Nyquist). I don't know if it matters, but my goal is
simply to find the zero crossings very accurately. After upsampling by a
factor of 4, I'll use linear interpolation to get the final
number. The point is, I really only need very accurate data during the
rise/fall times (not sure if this opens a new degree of freedom). Sample
spacing is periodic. I can throw out the initial and final 5% or so of the
array if needed to eliminate artifacts, but I need the majority of data in
the middle.

My current system takes 10 hours to compute a 1E9-point FFT (actually, the
size is 2^30, or 1.074G points). The CPU runs at only ~1%, while virtual
memory use is 57GB. Not sure I could stuff 3G pts into the FFT then IFFT it.
I'd like to get it down to as short as possible (10 minutes?).

I see the Overlap-Add method in Chapter 18 of dspguide.com handles large
arrays, but I'm not sure how to use it for upsampling.

Rick Lyons' excellent blog on Computing Large DFTs Using Small FFTs
(http://www.dsprelated.com/showarticle/63.php) shows how to get a DFT
using smaller FFTs, but I'm not sure how to do the upsampling.

Could someone help me understand what the conventional wisdom is for doing
this? I'm not a DSP expert, although I'm a fast study; if you could
reference literature I can access (though no research papers where someone
created an algorithm no one uses), I'd be very much appreciative.


all4dsp | 1/6/2010 12:36:03 PM

all4dsp <all4dsp@comcast.net> wrote:
(snip)

> My current system takes 10 hours to computer a 1E9 FFT (actually, size is
> 2^30, or 1.074G points). The CPU only works ~1%, while virtual memory is
> 57GB. Not sure I could stuff 3G pts into the FFT then IFFT it. Would like
> to get it down to as short as possible (10 minutes?).

I am not sure FFT is the best way, but you should be able to do
it faster.  Remember that a big FFT is made up of smaller FFTs.
You should be able to group most of them such that they fit in
real memory.   That is, don't do it in the usual order.

-- glen


On Jan 6, 7:36 am, "all4dsp" <all4...@comcast.net> wrote:
(snip)

You can use OLA for 4X upsampling. Insert 3 replicas of the spectrum
in the middle of the 1X fwd FFT, do a 4X inverse FFT, then overlap-add
on the time-domain output. You can also use OLS. I would expect either
to run much faster than your huge FFT.

John



all4dsp wrote:
(snip)

1. If you have to interpolate by 4, use a conventional time-domain
polyphase filter. 1e9 points should be done in a minute or so.

2. There is no point in interpolating the whole array if your goal is
only to find zero crossings.

3. If you are not DSP expert, hire a consultant.

DSP and Mixed Signal Design Consultant
http://www.abvolt.com
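A minimal numpy sketch of point 1 (the filter length and Hamming window here are arbitrary illustrative choices, not a tuned production design):

```python
import numpy as np

def polyphase_upsample4(x, taps_per_phase=16):
    # Design one anti-imaging lowpass (Hamming-windowed sinc), then split it
    # into 4 phases; each output phase is one short FIR run over the input,
    # so we never actually filter a zero-stuffed 4x-rate stream.
    L = 4
    n = L * taps_per_phase
    t = np.arange(n) - (n - 1) / 2.0
    h = np.sinc(t / L) * np.hamming(n)   # cutoff at the input Nyquist rate
    h *= L / h.sum()                     # unity gain through the upsampler
    y = np.empty(L * len(x))
    for p in range(L):
        y[p::L] = np.convolve(x, h[p::L], mode="same")
    return y

x = np.ones(64)                  # a constant input is the easiest sanity check
y = polyphase_upsample4(x)
```

This per-phase loop is exactly what a production polyphase resampler (e.g. scipy.signal.resample_poly) streamlines; the point is that each of the 4e9 output samples costs only taps_per_phase multiplies, with no FFT of the whole array.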



Thanks Glen,

>I am not sure FFT is the best way, but you should be able to do
>it faster.  Remember that a big FFT is made up of smaller FFTs.
>You should be able to group most of them such that they fit in
>real memory.   That is, don't do it in the usual order.

I'm not sure if the FFT is best either. I also thought about using simple
polynomial interpolation or another type of interpolation instead of
sinc-based (more accurate than linear interpolation) to find zero crossings
directly from the ADC array, but there are so many ways to interpolate a
curve through a few points near the zero-crossing, I don't have a good
enough grasp of the theory to guide me as to which method would work in
this situation. At least the sinc interpolation relies on the frequency
content of the entire waveform to decide where the new points should be
located, so I see it as the gold standard for this sort of thing (although,
unfortunately I don't need 99% of the interpolated points this method
returns).

I believe what you are referring to about a big FFT made up of smaller
FFTs is documented here: http://www.dsprelated.com/showarticle/63.php .
While this helps, it doesn't take me all the way.

Thanks John,

>You can use OLA for 4X upsampling. Insert 3 replicas of the spectrum
>in the middle of the 1X fwd FFT, do a 4X inverse FFT, then overlap-add
>on the time-domain output.

I assume by OLA you mean overlap-add. When you say "insert 3 replicas of
the spectrum in the middle of the 1x fwd FFT" do you mean to insert 3x
zero-padding in the middle of the original 1x FFT spectrum, as discussed
here?

If so, I will end up with two arrays (one real, one imaginary) that are
4G pts (32 GB) long. Maybe it's possible. What do I do after that? I can't do
an IFFT on this array (it's too big). Is there an overlap-add method for
doing the IFFT? I haven't seen one; could you provide a reference if so? Could
you explain in more detail what you mean by "4X inverse FFT, then
overlap-add on the time-domain output"?
I Googled OLS and came up with overlap-save. I will have to research what
this is. Any pros/cons?  Thanks again in advance!



On Jan 6, 10:50 am, "all4dsp" <all4...@comcast.net> wrote:
(snip)

First of all, I would consider VLV's points 1 and 2. If you decide that
full 4X interpolation of the array is required and want to speed it
up, OLA or OLS may be worth considering if the interpolation filter is
long. These techniques can improve performance when you have 100% CPU,
but more relevant to your bottleneck is that they consume the input in
small sequential blocks on the order of the filter length. They do not
require the whole array to be in memory at once. The performance
difference between OLA and OLS is small enough that I wouldn't worry
about it right now. Just find a good tutorial and pick the one that
looks simplest to you.

John
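The block-wise idea can be sketched in Python (an overlap-save flavor; the block and guard sizes are arbitrary illustrative choices, and the per-block interpolator assumes negligible energy near Fs/2, so the Nyquist bin needs no special handling):

```python
import numpy as np

def upsample4_block(x):
    # 4x interpolation of one block: zero-pad the middle of its spectrum.
    n = len(x)
    X = np.fft.fft(x)
    Y = np.concatenate([X[:n // 2], np.zeros(3 * n, complex), X[n // 2:]])
    return 4.0 * np.fft.ifft(Y).real

def upsample4_ols(x, block=128, guard=16):
    # Overlap-save flavor: interpolate overlapping blocks and keep only the
    # center of each output, discarding `guard` input samples at both edges
    # where the circular-FFT artifacts live.
    keep = block - 2 * guard
    out = []
    for start in range(0, len(x) - block + 1, keep):
        seg = upsample4_block(x[start:start + block])
        out.append(seg[4 * guard:4 * (guard + keep)])
    return np.concatenate(out)

t = np.arange(1024)
x = np.sin(2 * np.pi * t / 64)     # 16 samples per cycle: safely oversampled
y = upsample4_ols(x)
```

Only one small block is ever in flight, which is the point John makes above: the memory footprint is set by the block size, not by the 1E9-point array.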



Thanks Vladimir, John,

Are Vladimir's points 1 and 2 related? That is, does polyphase filtering
not interpolate the whole array? Otherwise, I'm not sure where to go with
point 2 (can you explain)?

Does the polyphase method return data as accurate as sinc interpolation
(e.g., zero-padding the middle of the 1x FFT spectrum in the frequency
domain and then doing an IFFT to get the interpolated points, thereby
avoiding the non-ideal filter characteristics of a conventional
time-domain low-pass filter), or does the error increase (depending on
the filter size, etc.)? Where do the interpolation errors come from? Best
regards


On Jan 6, 11:34 am, "all4dsp" <all4...@comcast.net> wrote:
(snip)

If done properly, an FFT-based filter should produce identical results
to a time domain filter. In both cases you will not be able to realize
an ideal sinc because those go on forever.

John


I mean, polyphase completes on the order of a minute!? Nature doesn't
generally give you something for nothing, and accuracy is extremely
important here, so what is the downside?


On Jan 6, 7:36 am, "all4dsp" <all4...@comcast.net> wrote:
(snip)

Here is a way.

http://www.claysturner.com/dsp/DSPinterpolation.pdf

This can be coded up and runs reasonably well. If the "spectral
occupancy" is a fair bit less than Nyquist then the sinc kernel can be
replaced by a quicker-decaying interpolation function.

Clay



all4dsp <all4dsp@comcast.net> wrote:
(snip)

> I'm not sure if FFT is best either. I also thought about using simple
> polynomial interpolation or other type of interpolation instead of
> sinc-based (more accurate than linear interpolation) to find zero crossings
> directly from the ADC array, but there are so many ways to interpolate a
> curve through a few points near the zero-crossing, I don't have a good
> enough grasp of the theory to guide me as to which method would work in
> this situation. At least the sinc interpolation relies on the frequency
> content of the entire waveform to decide where the new points should be
> located, so I see it as the gold standard for this sort of thing (although,
> unfortunately I don't need 99% of the interpolated points this method
> returns).

The sinc interpolation does use the frequency content, but then you
do linear interpolation, which probably loses some of that.
Many people would do cubic spline interpolation.  If you need to
do better, then a higher odd-order polynomial could be used.
The Wikipedia entry for cubic splines has a good explanation.
There is also Numerical Recipes for explanation and source code.

-- glen
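The cubic idea can be sketched in Python/numpy (a local cubic fit through the four samples that bracket a crossing; the test signal and its crossing location are made-up illustrative choices):

```python
import numpy as np

def refine_crossing_cubic(x, i):
    # Fit a cubic through the four samples around a crossing bracketed by
    # samples i and i+1, then take the polynomial's real root in [i, i+1].
    idx = np.arange(i - 1, i + 3)
    coeffs = np.polyfit(idx, x[idx], 3)
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real[(real >= i) & (real <= i + 1)][0]

t = np.arange(64)
x = np.sin(2 * np.pi * (t - 32.25) / 64)      # rising crossing at t = 32.25
i = int(np.where((x[:-1] < 0) & (x[1:] > 0))[0][0])
est = refine_crossing_cubic(x, i)
```

For a well-oversampled waveform the cubic's root lands within a tiny fraction of a sample of the true crossing, at a cost of one 4-point fit per crossing instead of interpolating the whole array.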


On Wed, 06 Jan 2010 06:36:03 -0600, "all4dsp" <all4dsp@comcast.net> wrote:
(snip)

Hello all4dsp,
If your input signal (the signal that's to be
interpolated) is a "squarewave-like" signal, then maybe
the linear interpolation method in:

[1] R. Losada and R. Lyons, "Reducing CIC Filter Complexity",
IEEE Signal Processing Magazine, July 2006, pp. 124-126

would be of some help to you.  That article describes a
linear time-domain interpolation scheme that requires no
multiplications regardless of the interpolation factor.

A slightly expanded version of the material in the above
article can be found in Chapter 6 in:

[2] R. Lyons, Editor, "Streamlining Digital Signal Processing,
A Tricks of the Trade Guidebook", IEEE-Wiley Interscience
Publishing, 2007.

all4dsp, if you're unable to gain access to the above article or
the above book chapter, then let me know.

Good Luck,
[-Rick-]



On Wed, 06 Jan 2010 06:36:03 -0600, "all4dsp" <all4dsp@comcast.net>
wrote:

[Snipped by Lyons]
>
>I see the Overlap-Add method in Chapter 18 of dspguide.com handles large
>arrays, but I'm not sure how to use it for upsampling.
>
>Rick Lyons excellent blog on Computing Large DFTs Using Small FFTs
>(http://www.dsprelated.com/showarticle/63.php) shows how to get an DFT
>using smaller FFTs, but I'm not sure how to do the upsampling.

Hi all4dsp,
If your input data block length (and FFT size) is N,
then you can interpolate by four by:

(1) Perform your initial N-point FFT to compute the X(m)
freq samples.  (Freq index m goes from 0 to N-1.)

(2) Create a new freq-domain sequence comprising:

the first N/2 X(m) samples, followed by 3*N
zero-valued samples, followed by the last
N/2 X(m) samples.

This step merely inserts 3*N zero-valued samples
right in the middle of your original FFT's X(m) samples.

(3) Perform a 4N-point inverse FFT on this "expanded"
freq-domain sequence to obtain your interpolated-by-four
time-domain sequence.

I'm assuming that the initial FFT samples, X(m), are *VERY*
low in magnitude in the vicinity of Fs/2 Hz (Fs is the
original sample rate in Hz).

Of course the above scheme's gonna take a super long time
to execute because you're working with such unbelievably
humongous (is that a word?) data block lengths.
Your problem gives a whole new meaning to the phrase
"number crunching".

I'm not sure the above freq-domain interpolation is the most
computationally-efficient thing to do.  Time-domain
interpolation requires digital filtering, and if the
number of filter taps is not too large maybe time-domain
interpolation is the "way to go".  If so, then polyphase
filtering methods should be used.

all4dsp, ya' know what I'd do if I were you?  I'd carefully
double-check the need (the necessity) for working with
such large-sized blocks of data.

And another thing, as Veronica said in the movie "The Fly",
I'd "Be afraid.  Be very afraid" of numerical errors in
performing such large-sized FFTs.  In those FFTs you're
expecting the software to accurately estimate the sine and
cosine of angles in the range of 360/2^30 degrees!!!
Are sine and cosine approximation algorithms (in the FFT
software) accurate enough for such super-small angles???

Good Luck,
[-Rick-]
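Rick's three steps can be sketched for one toy block in Python (N = 16 is an arbitrary choice for display; the sketch assumes, as stated above, negligible energy near Fs/2, so the Nyquist bin needs no special splitting):

```python
import numpy as np

# One small block: a single low-frequency cycle, far below Fs/2.
N = 16
t = np.arange(N)
x = np.sin(2 * np.pi * t / N)

X = np.fft.fft(x)                              # step 1: N-point FFT
Y = np.concatenate([X[:N // 2],                # step 2: first N/2 bins,
                    np.zeros(3 * N, complex),  #   3*N zeros in the middle,
                    X[N // 2:]])               #   then the last N/2 bins
y = 4.0 * np.fft.ifft(Y).real                  # step 3: 4N-point inverse FFT
                                               # (scaled by 4 to restore amplitude)
```

The factor of 4 compensates for numpy's 1/M inverse-FFT normalization; with it, the 64 output samples sit exactly on the original waveform at 4x the sample rate.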


On Jan 6, 4:36 am, "all4dsp" <all4...@comcast.net> wrote:
(snip)

What is the density of zero crossings to samples?  If it's low enough,
you can do an initial linear or quadratic interpolation, then a sinc
interpolation (polyphase, etc.) only surrounding these first estimate
points.  Interpolate again to get a second estimate (and possibly
rinse-and-repeat until you converge sufficiently, if needed).

YMMV.
--
rhn A.T nicholson d.0.t C-o-M
http://www.nicholson.com/dsp.html
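The coarse-then-refine idea might be sketched like this (Python; the local windowed-sinc kernel, its length, and the bisection refinement are illustrative choices, not from the post above):

```python
import numpy as np

def sinc_interp_local(x, t, half=8):
    # Evaluate a Hamming-tapered sinc interpolant of x at fractional time t,
    # using only 2*half samples around t (a local kernel, not the full array).
    k = np.arange(int(t) - half + 1, int(t) + half + 1)
    d = k - t
    w = 0.54 + 0.46 * np.cos(np.pi * d / half)
    return float(np.dot(x[k], np.sinc(d) * w))

def refine_crossing(x, i, iters=50):
    # Bisect on the local sinc interpolant between samples i and i+1,
    # where a rising crossing has already been bracketed coarsely.
    a, b = float(i), float(i + 1)
    for _ in range(iters):
        m = 0.5 * (a + b)
        if sinc_interp_local(x, m) < 0.0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

t = np.arange(64)
x = np.sin(2 * np.pi * (t - 32.25) / 64)     # rising crossing at t = 32.25
i = int(np.where((x[:-1] < 0) & (x[1:] > 0))[0][0])
est = refine_crossing(x, i)
```

Because only the samples around each crossing are touched, the cost scales with the number of crossings rather than the 1E9 array length, which is the point being made above.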


On 6 Jan., 18:40, Clay <c...@claysturner.com> wrote:
(snip)
> Here is a way.
>
> http://www.claysturner.com/dsp/DSPinterpolation.pdf
>
> This can be coded up and runs reasonably well. If the "spectral
> occupancy" is a fair bit less than Nyquist then the sinc kernel can be
> replaced by a quicker-decaying interpolation function.

Hi Clay

The problem with your method is finding the zero-crossings in the
sampled data. It is quite possible that zero-crossings can't be found
by multiplying two consecutive samples and checking the sign of the
result. This effect does not depend on the "spectral occupancy". To
see this, consider sampling

x(t) = sin(pi f t) + sin(pi (1-f) t)

for arbitrary f (the effect is especially pronounced for f close to 0 or
1) with sampling period T = 1. Your algorithm will not find any zero-
crossing. An easy modification to your searching criterion can be
circumvented by adding a small offset to x (in this case, only about
half of the zero crossings will be found).

Regards,
Andor
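Andor's construction can be checked numerically (f = 0.05 is an arbitrary choice; the closed form below equals the direct formula exactly at integer n, which keeps the even-index samples exactly zero rather than at floating-point noise level):

```python
import numpy as np

# Sample x(t) = sin(pi f t) + sin(pi (1 - f) t) with T = 1. At integer n,
# sin(pi (1 - f) n) = -(-1)^n * sin(pi f n), so the samples reduce exactly to
# x[n] = (1 - (-1)^n) * sin(pi f n): every even-index sample is zero.
f = 0.05
n = np.arange(200)
x = (1.0 - (-1.0) ** n) * np.sin(np.pi * f * n)

# The usual detector multiplies consecutive samples and checks the sign:
detected = int(np.sum(x[:-1] * x[1:] < 0))
both_signs = bool(np.any(x > 0) and np.any(x < 0))
```

The samples do take both signs, yet a zero sample sits between every sign change, so the strict product-negative test never fires, exactly as described above.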


On Jan 7, 3:31 am, Andor <andor.bari...@gmail.com> wrote:
(snip)

Certainly true. I did this for an application years ago where the
location was used for ranging lightning. If I had to do it today, I
would use better methods involving optimized interpolation kernels.

Thanks for the input,

Clay



Thanks everyone for your replies, you've given me a lot to think about.
There are obviously a lot of tradeoffs in selecting a method.

>I'm assuming that the initial FFT samples, X(m), are *VERY*
>low in magnitude in the vicinity of Fs/2 Hz (Fs is the
>original sample rate in Hz).

This is true.

>In those FFTs you're expecting the software to accurately
>estimate the sine and cosine of angles in the range
>of 360/2^30 degrees

Wow, good point! It sounds like polyphase filtering might be the best
choice.



On 7 Jan., 16:42, Clay <c...@claysturner.com> wrote:
(snip)
> Certainly true, I did this for an application years ago where the
> location is used for ranging lightning. If I had to do this today, I
> would use better methods involving optimized interpolation kernels.

I don't think the interpolation kernels are the problem (although they
certainly can be).

I think the main problem is to find the rough location of the zero-
crossing in the sampled data. As my example shows, a zero-crossing of
the underlying sampled function will not necessarily show up as a
"zero-crossing" in the samples. I would opt for Ron's idea to use
quick polynomial interpolation with order >=3D 3 (can be implemented
with very small kernel FIR filters) to oversample the data by 4 or so
to find "zero-crossings" in the samples and then use the method of
sinc-interpolation outlined in your paper to find the exact location.

This reminds me of the old comp.dsp debate on the maximum number of
zero-crossings that a bandlimited function may have on any given
interval. As it turned out (I think Matt Timmermans came up with the
example of the prolate spheroidal wave functions), the number of zero-
crossings is not limited ...

>
> Thanks for the input,

Thanks for the paper :-).

Regards,
Andor


On 7 Jan., 16:48, "all4dsp" <all4...@comcast.net> wrote:
> Thanks everyone for your replies, you've given me a lot to think about.
> There are obviously a lot of tradeoffs in selecting a method.
>
> >I'm assuming that the initial FFT samples, X(m), are *VERY*
> >low in magnitude in the vicinity of Fs/2 Hz (Fs is the
> >original sample rate in Hz).
>
> This is true.
>
> >In those FFTs you're expecting the software to accurately
> >estimate the sine and cosine of angles in the range
> >of 360/2^30 degrees
>
> Wow, good point! It sounds like polyphase filtering might be the best
> choice.

Since you are upsampling by 4 (an integer factor) you don't even need
to implement a general polyphase resampler. All you need is a filterbank
with three filters. As I wrote in another post, I would suggest a
two-step procedure:

1. Interpolate by 4 with polynomial FIR filters (very small kernels,
e.g. 4 taps for third-order polynomial interpolation):

x(n) ---+-----------------> x(n)
        |
        |   +---------+
        |-->|  H1(z)  |---> x1(n)
        |   +---------+
        |
        |   +---------+
        |-->|  H2(z)  |---> x2(n)
        |   +---------+
        |
        |   +---------+
        +-->|  H3(z)  |---> x3(n)
            +---------+

H1, H2 and H3 are the interpolation filters (polynomial filters are
simple to design from scratch), and your upsampled sequence will be:

(... x(n), x1(n), x2(n), x3(n), x(n+1), x1(n+1), x2(n+1), x3(n+1), ...)

Using a third-order polynomial filter requires 3*4*n MACs to resample
by 4. For n = 1e9 you have 12e9 MACs, which should take a couple of
seconds on a normal PC.

2. Use a simple method to find zero-crossings in the upsampled data
and then "polish" the zero-crossing location with more accurate
interpolation - this is outlined here: http://www.claysturner.com/dsp/DSPinterpolation.pdf

Regards,
Andor
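[Editor's note: Andor's step 1 is easy to sketch in code. The following is a minimal Python illustration, not code from the thread; the names `lagrange_taps` and `upsample4`, the Lagrange fractional-delay design, the window centring, and the edge handling are all my own assumptions.]

```python
def lagrange_taps(d, order=3):
    # FIR taps of an order-`order` Lagrange fractional-delay
    # interpolator; d is the delay (in samples) measured from the
    # first of the order+1 input samples in the window.
    n = order + 1
    taps = []
    for k in range(n):
        h = 1.0
        for m in range(n):
            if m != k:
                h *= (d - m) / (k - m)
        taps.append(h)
    return taps

def upsample4(x):
    # Interpolate by 4: pass x(n) through, and compute x1, x2, x3 at
    # offsets 1/4, 1/2, 3/4 with three 4-tap filters (H1, H2, H3 in
    # the diagram above). The 4-tap window x[n-1..n+2] straddles the
    # interval, so the delay from the first tap is 1 + offset.
    banks = [lagrange_taps(1.0 + p / 4.0) for p in (1, 2, 3)]
    y = []
    for n in range(1, len(x) - 2):  # edges skipped for simplicity
        y.append(x[n])
        window = x[n - 1:n + 3]
        for taps in banks:
            y.append(sum(c * s for c, s in zip(taps, window)))
    return y
```

A quick sanity check: cubic Lagrange interpolation reproduces any polynomial of degree <= 3 exactly, so feeding it samples of t^2 should return exact values at the quarter-sample points.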


On 7 Jan., 17:18, Andor <andor.bari...@gmail.com> wrote:
> [two-step proposal snipped]
>
> 2. Use a simple method to find zero-crossings in the upsampled data
> and then "polish" the zero-crossing location with more accurate
> interpolation - this is outlined here: http://www.claysturner.com/dsp/DSPinterpolation.pdf

Hmm. For fun I implemented this scheme and applied it to the samples
of the signal

x(t) = sin(pi f t) + sin(pi (1-f) t) + eps s((1-f)/2 t)

for 0.5 < f < 1.0, eps > 0 and s(t) the square-wave function with
period 1, sampled with sampling interval T = 1. The sample sequence
x[n] = x(n) has the property that all samples from the time interval

0 <= t < 1/(1-f)

are positive, and all samples from the interval

1/(1-f) <= t < 2/(1-f)

are negative. For small eps, the number of zero-crossings of x(t) in
both intervals is about 1/(1-f)/f.

Unfortunately, upsampling by a factor of 4 with a third-order polynomial
FIR did not change the fact that the first half of the samples are all
positive and the second half are all negative (meaning we can't find
the zero-crossings from the sampled signal using the "multiply-two-
consecutive-samples-and-check-the-sign" trick).

So I think the OP might not get around using high-order
sinc-interpolation filters for the factor-4 upsampling on the whole
data set to find the zero-crossings.

Regards,
Andor
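[Editor's note: Andor's counterexample is easy to reproduce. The sketch below is my own construction of his x(t) (helper names `square` and `x` are assumptions), sampled with f = 0.98 and eps = 0.02; the consecutive-sample sign test sees only a single crossing in the first 100 samples.]

```python
import math

def square(t):
    # square wave with period 1: +1 on [0, 0.5), -1 on [0.5, 1)
    return 1.0 if (t % 1.0) < 0.5 else -1.0

def x(t, f=0.98, eps=0.02):
    # Andor's test signal: two tones plus a small square-wave offset.
    return (math.sin(math.pi * f * t)
            + math.sin(math.pi * (1.0 - f) * t)
            + eps * square((1.0 - f) / 2.0 * t))

samples = [x(n) for n in range(100)]

# Every sample with 0 <= n < 1/(1-f) = 50 is positive and every sample
# with 50 <= n < 100 is negative, so multiplying consecutive samples
# and checking the sign reports exactly one "crossing" in this window,
# even though x(t) itself crosses zero dozens of times here.
sign_changes = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
```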


On Jan 8, 4:01 am, Andor <andor.bari...@gmail.com> wrote:
> [quoted text snipped]
>
> So I think the OP might not get around using high-order
> sinc-interpolation filters for the factor-4 upsampling on the whole
> data set to find the zero-crossings.

A question about pathological signals is whether they represent a
common or rare occurrence for what the OP is looking for.

Certainly things like this have to be tested.

Clay



On 8 Jan., 22:08, Clay <c...@claysturner.com> wrote:
> On Jan 8, 4:01 am, Andor <andor.bari...@gmail.com> wrote:
> > [quoted text snipped]
>
> A question about pathological signals is whether they represent a
> common or rare occurrence for what the OP is looking for.

Clearly. I was just trying to counter the widely accepted notion that
zero-crossings of sampled signals can easily be found from the
(non-up-)sampled data.

> Certainly things like this have to be tested.

In the end I had to use 7th-order polynomial interpolation FIR filters
on my test function to actually catch the zero-crossings using
upsampling by factor 4 (5th order was not enough). This can still be
implemented efficiently (8*3*1e9 = 24e9 MACs should easily be
computable in under a minute on a normal PC).

Regards,
Andor
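[Editor's note: the 7th-order (8-tap) interpolators Andor ends up with are just as easy to generate. This is my own sketch (function name and window centring assumed), again using Lagrange fractional-delay design.]

```python
import math

def lagrange_taps(d, order=7):
    # Taps of an order-`order` Lagrange fractional-delay interpolator;
    # d is the delay in samples from the first tap of the window.
    n = order + 1
    return [
        math.prod((d - m) / (k - m) for m in range(n) if m != k)
        for k in range(n)
    ]

# Three phases (1/4, 1/2, 3/4) for 4x upsampling. With an 8-tap window
# x[n-3..n+4], sample x[n] sits at window position 3, so the delays are
# 3 + 1/4, 3 + 1/2, 3 + 3/4. That is 8 taps * 3 filters = 24 MACs per
# input sample, matching the 8*3*1e9 = 24e9 MACs quoted above.
banks = [lagrange_taps(3.0 + p / 4.0) for p in (1, 2, 3)]
```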


Andor wrote:

...

> In the end I had to use 7th order polynomial interpolation FIR filters
> on my test function to actually catch the zero-crossings using
> upsampling by factor 4 (5th order was not enough). This can still be
> implemented efficiently (8*3*1e9 = 24e9 MACS should easily be
> computable in under a minute on a normal PC).

I've been puzzling over this for a few days now. Set aside the search
for actual locations of the zero crossings. If there are at least two
samples in the period of the highest frequency present in the sampled
signal, how can any zero crossing fail to be among those counted?

Jerry
--
Engineering is the art of making what you want from things you can get.

On Jan 7, 9:59 am, Andor <andor.bari...@gmail.com> wrote:

> I think the main problem is to find the rough location of the zero-
> crossing in the sampled data. As my example shows, a zero-crossing of
> the underlying sampled function will not necessarily show up as a
> "zero-crossing" in the samples. I would opt for Ron's idea to use
> quick polynomial interpolation with order >= 3 (can be implemented
> with very small kernel FIR filters) to oversample the data by 4 or so
> to find "zero-crossings" in the samples and then use the method of
> sinc-interpolation outlined in your paper to find the exact location.
>
> This reminds me of the old comp.dsp debate on the maximum number of
> zero-crossings that a bandlimited function may have on any given
> interval. As it turned out (I think Matt Timmermans came up with the
> example of the prolate spheroidal wave functions), the number of zero-
> crossings is not limited ...

But if there could be infinitely many zero-crossings, then no matter
how much we oversample, we could never be sure that we have found them
all, could we?

If a (low-pass) signal is truly band-limited to W Hz, meaning that
X(f) = 0 for |f| > W, then the magnitude of its derivative is bounded
by 2\pi W max |x(t)| (cf. Temes, Barcilon, and Marshall, "The
optimization of bandlimited systems," Proc. IEEE, Feb. 1973). So,
depending on the sampling rate and the values of two successive
positive samples, we can be sure in *some* cases that there is no
zero-crossing between the two samples: the bound on how quickly x(t)
can change does not allow for the possibility that the signal could
dip down below zero in between the two positive samples.

--Dilip Sarwate


dvsarwate wrote:
> On Jan 7, 9:59 am, Andor <andor.bari...@gmail.com> wrote:
> > [quoted text snipped]
>
> But if there could be infinitely many zero-crossings, then no matter
> how much we oversample, we could never be sure that we have found them
> all, could we?
>
> If a (low-pass) signal is truly band-limited to W Hz, meaning that
> X(f) = 0 for |f| > W, then the magnitude of its derivative is bounded
> by 2\pi W max |x(t)| (cf. Temes, Barcilon, and Marshall, "The
> optimization of bandlimited systems," Proc. IEEE, Feb. 1973). So,
> depending on the sampling rate and the values of two successive
> positive samples, we can be sure in *some* cases that there is no
> zero-crossing between the two samples: the bound on how quickly x(t)
> can change does not allow for the possibility that the signal could
> dip down below zero in between the two positive samples.

But doesn't that imply some frequency well above half the sample rate?

Jerry
--
Engineering is the art of making what you want from things you can get.


On Jan 11, 12:37 pm, Jerry Avins <j...@ieee.org> wrote:
> dvsarwate wrote:
> > [quoted text snipped]
>
> But doesn't that imply some frequency well above half the sample rate?

I think Jerry and I are talking at cross-purposes. Suppose we have two
samples spaced T seconds apart from a signal x(t) that is strictly
band-limited to bandwidth W; X(f) = 0 for |f| > W. Suppose also (for
simplicity) that |x(t)| < 1, so that the magnitude of the derivative
is bounded by 2\pi W. If x(0) = a > 0, then the signal cannot decrease
to 0 earlier than t = a/(2\pi W) because the rate of change of x(t) is
bounded. Similarly, if x(T) = b > 0 is the next sample (note that we
are not assuming any relation between T and W), then the signal must
be above zero during the time interval (T - b/(2\pi W), T), since
otherwise it cannot change fast enough to reach b at t = T. Thus, if
a/(2\pi W) > T - b/(2\pi W), then there is no zero-crossing between 0
and T. Note that the stated criterion is equivalent to a + b > 2\pi WT,
and of course, since a + b = x(0) + x(T) < 2, the criterion will never
be satisfied if 2\pi WT > 2. But, assuming that \pi WT < 1, it is
possible that we can deduce for *some* adjacent positive samples that
x(t) does not cross zero between the samples. If T is very small (some
upsampling may be necessary to get this), then maybe we can say that
for *most* positive adjacent sample values a and b, there is no
zero-crossing between the samples.

Hope this helps

--Dilip Sarwate
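[Editor's note: Dilip's criterion is simple enough to state as code. A sketch (the function name is mine) assuming, as in his post, |x(t)| < 1 and bandwidth W Hz.]

```python
import math

def no_crossing_between(a, b, W, T):
    # For x(t) band-limited to W Hz with |x(t)| < 1, |x'(t)| <= 2*pi*W.
    # Two positive samples a = x(0), b = x(T) provably bracket no
    # zero-crossing when a/(2*pi*W) > T - b/(2*pi*W), i.e. when
    # a + b > 2*pi*W*T.
    return a > 0 and b > 0 and (a + b) > 2.0 * math.pi * W * T

# At the Nyquist rate (T = 1/(2W)) the threshold is pi > 2, so the
# guarantee is never available; oversampling shrinks the threshold.
```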


On 11 Jan., 16:16, Jerry Avins <j...@ieee.org> wrote:
> Andor wrote:
>
>    ...
>
> > In the end I had to use 7th-order polynomial interpolation FIR filters
> > on my test function to actually catch the zero-crossings using
> > upsampling by factor 4 (5th order was not enough). This can still be
> > implemented efficiently (8*3*1e9 = 24e9 MACs should easily be
> > computable in under a minute on a normal PC).
>
> I've been puzzling over this for a few days now. Set aside the search
> for actual locations of the zero crossings. If there are at least two
> samples in the period of the highest frequency present in the sampled
> signal, how can any zero crossing fail to be among those counted?

Like this:

http://www.zhaw.ch/~bara/files/zero-crossings_example.png

Plot of the function described above with f=0.98. The function has 80
zero-crossings in the displayed time window; the samples of the
function have exactly one zero-crossing.

Regards,
Andor

Andor wrote:
> On 11 Jan., 16:16, Jerry Avins <j...@ieee.org> wrote:
>> Andor wrote:
>>
>>    ...
>>
>>> In the end I had to use 7th order polynomial interpolation FIR filters
>>> on my test function to actually catch the zero-crossings using
>>> upsampling by factor 4 (5th order was not enough). This can still be
>>> implemented efficiently (8*3*1e9 = 24e9 MACS should easily be
>>> computable in under a minute on a normal PC).
>> I've been puzzling over this for a few days now. Set aside the search
>> for actual locations of the zero crossings. If there are at least two
>> samples in the period of the highest frequency present in the sampled
>> signal, how can any zero crossing fail to be among those counted?
>
> Like this:
>
> http://www.zhaw.ch/~bara/files/zero-crossings_example.png
>
> Plot of the function described above with f=0.98. The function has 80
> zero-crossings in the displayed time window, the samples of the
> function have exactly one zero-crossing.

I see now, thanks. Very nice!

What procedures fail if we count zero touchings as zero crossings? Does
it come down to the difference between open and closed intervals?

Maybe I don't see after all. Are the samples close to but not actually
on the axis? I see only a graph, no equation. How does 0.98 come in?

Jerry
--
Engineering is the art of making what you want from things you can get.

On 12 Jan., 19:48, Jerry Avins <j...@ieee.org> wrote:
> Andor wrote:
> > [quoted text snipped]
> >
> > Plot of the function described above with f=0.98. The function has 80
> > zero-crossings in the displayed time window; the samples of the
> > function have exactly one zero-crossing.
>
> I see now, thanks. Very nice!
>
> What procedures fail if we count zero touchings as zero crossings? Does
> it come down to the difference between open and closed intervals?
>
> Maybe I don't see after all. Are the samples close to but not actually
> on the axis? I see only a graph, no equation. How does 0.98 come in?

Yup, the samples don't touch the axis. The equation for this function
can be found higher up in the thread. For convenience:

x(t) = sin(pi f t) + sin(pi (1-f) t) + eps s((1-f)/2 t)

where s(t) is the square-wave function with period 1 (replace it with a
truncated Fourier series to make x(t) bandlimited). x(t) is sampled
with sampling period T=1. The samples close to the axis have value eps
(in this example, eps = 0.02) and f=0.98.

Regards,
Andor

Reply andor.bariska (1307) 1/13/2010 7:59:09 AM

On 11 Jan., 19:17, dvsarwate <dvsarw...@gmail.com> wrote:
> On Jan 7, 9:59 am, Andor <andor.bari...@gmail.com> wrote:
> > [quoted text snipped]
>
> [...] So, depending on the sampling rate and the values of two
> successive positive samples, we can be sure in *some* cases that
> there is no zero-crossing between the two samples: the bound on how
> quickly x(t) can change does not allow for the possibility that the
> signal could dip down below zero in between the two positive samples.

Yes, nice Popper-esque argument :-).

Regards,
Andor

Reply andor.bariska (1307) 1/13/2010 10:45:09 AM

dvsarwate wrote:
> On Jan 7, 9:59 am, Andor <andor.bari...@gmail.com> wrote:
> > [quoted text snipped]
>
> If a (low-pass) signal is truly band-limited to W Hz, meaning that
> X(f) = 0 for |f| > W, then the magnitude of its derivative is bounded
> by 2\pi W max |x(t)| (cf. Temes, Barcilon, and Marshall, "The
> optimization of bandlimited systems," Proc. IEEE, Feb. 1973). [...]

But the derivative grows with both frequency and amplitude. Can a bound
be assigned to the derivative without knowing the pre-sampling amplitude?

Jerry
--
Engineering is the art of making what you want from things you can get.

Reply jya (12872) 1/13/2010 5:25:42 PM

Andor wrote:
> On 12 Jan., 19:48, Jerry Avins <j...@ieee.org> wrote:
>> Andor wrote:
>>> On 11 Jan., 16:16, Jerry Avins <j...@ieee.org> wrote:
>>>> Andor wrote:
>>>>    ...
>>>>> In the end I had to use 7th order polynomial interpolation FIR filters
>>>>> on my test function to actually catch the zero-crossings using
>>>>> upsampling by factor 4 (5th order was not enough). This can still be
>>>>> implemented efficiently (8*3*1e9 = 24e9 MACS should easily be
>>>>> computable in under a minute on a normal PC).
>>>> I've been puzzling over this for a few days now. Set aside the search
>>>> for actual locations of the zero crossings. If there are at least two
>>>> samples in the period of the highest frequency present in the sampled
>>>> signal, how can any zero crossing fail to be among those counted?
>>> Like this:
>>> http://www.zhaw.ch/~bara/files/zero-crossings_example.png
>>> Plot of the function described above with f=0.98. The function has 80
>>> zero-crossings in the displayed time window, the samples of the
>>> function have exactly one zero-crossing.
>> I see now, thanks. Very nice!
>>
>> What procedures fail if we count zero touchings as zero crossings? Does
>> it come down to the difference between open and closed intervals?
>>
>> Maybe I don't see after all. are the samples close to but not actually
>> on the axis? I see only a grapg, no equation. How does .98 come in?
>
> Yup, the samples don't touch the axis. The equation for this function
> can be found higher up in the thread. For convenience:
>
> x(t) = sin(pi f t) + sin(pi (1-f) t) + eps s((1-f)/2 t)
>
> where s(t) is the square-wave function with period 1 (replace with
> truncated Fourier series to make x(t) bandlimited). x(t) is sampled
> with sampling period T=1. The samples close to the axis have value eps
> (in this example, eps = 0.02) and f=0.98.

Aside from the possibility of Gibbs's phenomenon with the truncated s,
it's clear now. Thanks again.

Jerry
--
Engineering is the art of making what you want from things you can get.

Reply jya (12872) 1/13/2010 5:54:41 PM

31 Replies
573 Views
