COMPGROUPS.NET

### Time-Frequency Resolution, Uncertainty Principle


Hi

I am a little bit confused about the issue of time-frequency resolution
and the uncertainty principle. The latter states that the product
DT*Dw, where DT and Dw denote respectively the time and frequency
resolution, is bounded: if the time resolution *increases* then the
frequency resolution *decreases*.

However, I am confused about the case where we interpolate a signal:
its time resolution then *increases*. Since we get a larger number of
samples, the frequency resolution *increases* as well. The Fourier
transform will contain a larger number of samples spanning the interval
[-pi, pi]. Therefore the frequency resolution has increased even though
the time resolution has increased as well, which goes against the
uncertainty principle.

I can't see what's wrong with my analysis. Any help?

--
Posted via Mailgate.ORG Server - http://www.Mailgate.ORG


"Hristo Stevic" <hristostev@yahoo.com> wrote in message
news:d32b134353ea65fb7da3e1a13028a28c.52609@mygate.mailgate.org...
> Hi
>
> I am little bit confused about the issue of time-frequency resolution
> and the principle of uncertainty. The latter states that product
> DT*Dw, where DT, Dw denotes respectively the time and frequency
> resolution is bounded. If the time resolution *increases* then the
> frequency resolution *decreases*
>
> However i am confused in the case where we interpolate a signal, then
> its time resolution *increases*. Since we get larger number of samples,
> the frequency resolution *increases* as well. The fourier transform will
> contain larger number of samples spanning the interval [-pi,pi].
> Therefore the frequency resolution has increased even that the time
> resolution has increased as well which goes against the uncertainty
> principle.
>
>
> I can't see what's wrong with my analysis? any help .

Hristo Stevic,

With interpolation, the number of points increases but the resolution
doesn't.  This means that a given resolution will remain.
Example:
- two sinusoids are 1Hz apart
- they are sampled for .2 seconds giving around 5Hz resolution.
Spectral analysis of the .2 second record won't resolve the two sinusoids
because the resolution is 5Hz.
If the frequency domain is interpolated, you still won't be able to resolve
the two sinusoids because there's still only 0.2 seconds of input data.  The
sinusoids will still be smushed together (that's the technical term for it).

Same thing if we interpolate in time.  Adding zeros to the spectrum is at
least one way to do it.  There's still only the data that we originally
had in frequency.
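A numerical sketch of this example (the sample rate and tone frequencies
are made-up, chosen only to match the 0.2 second / 1 Hz setup above):

```python
import numpy as np

# Two sinusoids 1 Hz apart, observed for only 0.2 s, so the Fourier
# resolution is 1/0.2 s = 5 Hz and the tones cannot be resolved.
fs = 1000.0
T = 0.2
t = np.arange(0.0, T, 1.0 / fs)
x = np.sin(2 * np.pi * 100.0 * t) + np.sin(2 * np.pi * 101.0 * t)

# Zero-padding (spectral interpolation) gives many more bins...
Xp = np.abs(np.fft.rfft(x, n=10 * len(x)))
freqs = np.fft.rfftfreq(10 * len(x), 1.0 / fs)

# ...but the two tones are still smushed into a single peak near
# 100.5 Hz, because there is still only 0.2 s of data.
peak_f = freqs[np.argmax(Xp)]
```

However finely the spectrum is interpolated, the result is one merged
lobe between 100 and 101 Hz, not two peaks.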

That's a quickie answer at least.

Fred



Hristo Stevic wrote:

> Hi
>
> I am little bit confused about the issue of time-frequency resolution
> and the principle of uncertainty. The latter states that product
> DT*Dw, where DT, Dw denotes respectively the time and frequency
> resolution is bounded. If the time resolution *increases* then the
> frequency resolution *decreases*
>
> However i am confused in the case where we interpolate a signal, then
> its time resolution *increases*.

Interpolation doesn't really add new samples.  Interpolation lets you
find peaks between the grid points.
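One common sketch of "finding peaks between the grid points" is
parabolic interpolation on the log-magnitude spectrum. The frequencies,
window, and sizes below are made-up for illustration:

```python
import numpy as np

fs, N = 1000.0, 256
f_true = 123.7                  # deliberately off the fs/N = 3.9 Hz grid
t = np.arange(N) / fs
x = np.cos(2 * np.pi * f_true * t) * np.hanning(N)

X = np.abs(np.fft.rfft(x))
k = int(np.argmax(X))           # nearest grid point to the true peak
# Fit a parabola through the log-magnitudes of the peak bin and its
# two neighbours; the vertex gives a fractional-bin offset.
a, b, c = np.log(X[k - 1]), np.log(X[k]), np.log(X[k + 1])
delta = 0.5 * (a - c) / (a - 2 * b + c)     # |delta| <= 0.5
f_est = (k + delta) * fs / N
```

The estimate lands far closer to 123.7 Hz than the 3.9 Hz bin spacing —
but no new information was created; the refinement relies on knowing the
signal is a single windowed tone.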

You really need to be careful about "uncertainty principles" in classical
macroscopic systems.

In most sorts of audio to RF signals you can measure both the amplitude and
phase simultaneously at the output of a DFT bin fairly easily. The
precision of the measurement is bounded by the signal to noise ratio.  In
some cases, at a high enough SNR, you can resolve differences that the
"uncertainty principle" would tell you that you can't.

I'll give you a real easy example: we have 2 signals, A sin(wt) and
(-A) sin(wt).

If you are constrained to look at the magnitude squared of a DFT bin
centered at radian frequency w, you can't tell the difference between
the two, but these 2 signals are the basic examples in binary
communications.  They have exactly the same frequency, and the ability
to distinguish between them is a function of a local clock synchronized
to the transmitter's clock.
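That example can be checked directly (the numbers here are hypothetical;
the tone is placed exactly on a DFT bin so there is no leakage):

```python
import numpy as np

fs, f0, N = 1000, 50, 200           # sample rate, tone frequency, length
t = np.arange(N) / fs
A = 1.0
x1 = A * np.sin(2 * np.pi * f0 * t)     # A sin(wt)
x2 = -A * np.sin(2 * np.pi * f0 * t)    # (-A) sin(wt)

k = f0 * N // fs                    # the DFT bin centered on f0
X1 = np.fft.fft(x1)[k]
X2 = np.fft.fft(x2)[k]
# The squared magnitudes are identical, so a magnitude-only measurement
# cannot separate the two signals; their phases differ by pi, which is
# exactly what a BPSK receiver with a synchronized clock exploits.
```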

In a quantum mechanical system, the measurement has to be real.  There is a
significant difference between a classical and quantum mechanical
measurement.

> Since we get larger number of samples,
> the frequency resolution *increases* as well. The fourier transform will
> contain larger number of samples spanning the interval [-pi,pi].
> Therefore the frequency resolution has increased even that the time
> resolution has increased as well which goes against the uncertainty
> principle.
>
> I can't see what's wrong with my analysis? any help .
>
> --
> Posted via Mailgate.ORG Server - http://www.Mailgate.ORG



Thanks to both of you; however, I am still confused....

I was speaking about the Fourier transform of a discrete signal.
My point is that if we upsample the signal, we'll get more samples in
the frequency domain, thus higher frequency resolution. You can see
that the time resolution has not changed, either. However, the
uncertainty principle claims that these two types of resolution are
inversely proportional :-(

So what am I missing?

You can see from the DFT expression that the frequency resolution is
defined by 2pi/N, where N is the number of input signal samples.
Thus if N increases, the time resolution increases, and so does the
frequency resolution.

With decimation, time resolution decreases and the frequency resolution
decreases as well [the number of samples in both domains diminishes].

With interpolation, time resolution increases and the frequency
resolution increases as well [the number of samples in both domains
increases].

These two cases contradict the uncertainty principle according to my
analysis.
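The 2pi/N point can be checked with concrete numbers (illustrative
values): upsampling shrinks the bin spacing in radians per sample, but
not in absolute Hz, because the record duration is unchanged.

```python
import numpy as np

fs = 100.0                  # original sample rate, Hz (made-up)
N = 50                      # original record length: 0.5 s of signal
df_original = fs / N        # DFT bin spacing in Hz

L = 4                       # upsampling factor
# After ideal upsampling: L*N samples at rate L*fs, same 0.5 s of data.
df_upsampled = (L * fs) / (L * N)

# The spacing in *radians per sample*, 2*pi/N, does shrink to
# 2*pi/(L*N) -- but only because a "sample" now covers less time.
# In Hz, the resolution is set by the record duration N/fs, unchanged.
```

So 2pi/N and "frequency resolution" are not the same thing: the former
is per sample, and upsampling changes what a sample means.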

--
Posted via Mailgate.ORG Server - http://www.Mailgate.ORG


"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:hCoMa.2112$Jk5.1132401@feed2.centurytel.net...
(snip)
> With interpolation, the number of points increases but the resolution
> doesn't.  This means that a given resolution will remain.
(snip)
> Same thing if we interpolate in time.  We add zeros to the spectrum -
> is at least one way.  There's still only the data that we originally
> had in frequency.
>
> That's a quickie answer at least.

One that isn't completely true.  Consider that you might have 0.2s
samples of such sinusoids.
Maybe sampled at 1MHz with a 30 bit A/D converter, and that there is no
(measurable) noise in the sinusoids.  I believe, then, that you will
have plenty enough information to separate the sinusoids.  (I overdid
the numbers slightly to emphasize the point.)

It is only when you can't measure the signal itself that the
uncertainty principle comes in.

There was a story about someone designing a complicated RADAR system
who, believing in the uncertainty principle, decided that it couldn't
be built.  It was something like using a radar to measure the speed and
distance of an airplane.  You are, of course, limited in the
measurement of the airplane.  It is that, and not the measurement of
the phase and amplitude of the reflected electromagnetic wave, that is
limiting.

OK, consider this one, maybe applicable to DSP.  The amplitude of the
vibration of your eardrum for the softest sounds you can hear is much
less than the diameter of an atom.  With the uncertainty principle you
wouldn't be able to measure such an atom.  But the vibration averaged
over the many atoms of your eardrum is measurable.

In the RADAR case, it is not one photon that is being measured, and
would be limited by the uncertainty principle, but an average over many
photons, and the fact that you can directly measure the electromagnetic
field.

--
glen

Reply  Glen  7/2/2003 4:21:21 PM

"Hristo Stevic" <hristostev@yahoo.com> wrote in message
news:<d32b134353ea65fb7da3e1a13028a28c.52609@mygate.mailgate.org>...
> Hi
>
> I am little bit confused about the issue of time-frequency resolution
> and the principle of uncertainty. The latter states that product
> DT*Dw, where DT, Dw denotes respectively the time and frequency
> resolution is bounded.

I cannot find any such statement in "Modern Physics" by Serway et al.
Can you please provide a very precise and specific reference?
Serway talks of uncertainty in energy and time, and in momentum and
position (which I believe is the standard form of Heisenberg's
uncertainty principle), but not between time and frequency.

> If the time resolution *increases* then the
> frequency resolution *decreases*

I submit that the frequency of a perfect noiseless complex sinusoid can
be measured perfectly in two samples: measure the phase at sample 1 and
the phase at sample 2 and then use 2*pi*f = (phase 2 - phase 1)/T,
where the phases are in radians and T is the sample period in seconds.

> However i am confused in the case where we interpolate a signal, then
> its time resolution *increases*.
(snip)
> I can't see what's wrong with my analysis? any help .

The bin width of the DFT decreases as the number of samples increases.
This is due to the process involved in the DFT and not some inherent
limitation in frequency measurement, as the gedanken I suggested above
shows.  I think your basic problem (and many others have made the same
mistake) is thinking that the uncertainty that crops up in physics has
some equivalent in signal processing.  I assert that it does not.

Reply  yates  7/2/2003 6:17:02 PM
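The two-sample gedanken above can be sketched numerically (the
frequency and sample period are arbitrary made-up values; this assumes
no noise and |f| below the 1/(2T) aliasing limit):

```python
import numpy as np

f_true = 123.4          # Hz, arbitrary test frequency
T = 1e-3                # sample period in seconds
# Two consecutive samples of a noiseless complex sinusoid exp(j*2*pi*f*t)
x1 = np.exp(2j * np.pi * f_true * 0.0)
x2 = np.exp(2j * np.pi * f_true * T)
# 2*pi*f = (phase 2 - phase 1)/T
f_est = np.angle(x2 / x1) / (2 * np.pi * T)
```

Two samples suffice exactly because the signal model (one noiseless
complex tone) is known a priori; no DFT bin width is involved.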
Hristo Stevic wrote:

> Thanks for both of you, however i am still confused....
>
> I was speaking about the fourier transform of a discrete signal.

If the discrete signal is equivalent to one that was sampled under the
Nyquist criterion then it should also have an equivalence to its
reconstructed analog signal.

> My point is that if we upsample the signal, we ll get more samples at
> the frequency.
> Thus higher frequency resolution. you can see that the time
> resolution has not changed also.

A Hz is a Hz.  All you're changing is the number of bins that span a
Hz.  Sampling is not magic.  Isn't an analog signal the case of
infinite up-sampling?

> However, the uncertainty principle claims that
> these two types of resolution are inversely proportional :-(
(snip)

Reply  Stan  7/2/2003 7:32:46 PM

yates@ieee.org (Randy Yates) wrote in message
news:<567ce618.0307021017.4d898645@posting.google.com>...
> "Hristo Stevic" <hristostev@yahoo.com> wrote in message
> news:<d32b134353ea65fb7da3e1a13028a28c.52609@mygate.mailgate.org>...
(snip)
> I cannot find any such statement in "Modern Physics" by Serway et al.
> Can you please provide a very precise and specific reference? Serway
> talks of uncertainty in energy and time, and in momentum and position
> (which I believe is the standard form of Heisenberg's uncertainty
> principle), but not between time and frequency.
> > If the time resolution *increases* then the
> > frequency resolution *decreases*
>
> I submit that the frequency of a perfect noiseless complex sinusoid
> can be measured perfectly in two samples: measure the phase at sample
> 1 and the phase at sample 2 and then use 2*pi*f = (phase 2 - phase
> 1)/T, where the phases are in radians and T is the sample period in
> seconds.

Some pedantic remarks: The uncertainty principle as I know it from DSP
is related to the ability to resolve *two* complex sinusoids in a data
set.  There is another, equally important but different, question of
how to localize *one* sinusoid with the greatest accuracy.

Having said that, you're right.  There is nothing inherent in the
time/frequency formalism that states that it is impossible to resolve
two sines closer than the Fourier limit.  All the high resolution
frequency/DoA estimators use some sort of trick to circumvent the
Fourier limit.

> However i am confused in the case where we interpolate a signal, then
> its time resolution *increases*.
(snip)
>
> The bin width of the DFT decreases as the number of samples
> increases. This is due to the process involved in the DFT and
> not some inherent limitation in frequency measurement, as the
> gedanken I suggested above shows.

Agreed, under the condition that by "samples" you mean "measured data
points".

> I think your basic problem
> (and many others have made the same mistake) is thinking that
> the uncertainty that crops up in physics has some equivalent
> in signal processing. I assert that it does not.
I believe the term "Heisenberg's inequality" is used for these types of
limitations in physics.  Dym & McKean's "Fourier Series and Integrals"
(Academic Press, 1972) includes a section on the inequality.  I also
believe Papoulis mentioned Heisenberg's inequality by name in his book
on Fourier integrals.  This "recycling" of terms that mean slightly
different things in slightly different contexts could be part of the
problem.

Rune

Reply  allnor  7/2/2003 9:18:17 PM

"Randy Yates" <yates@ieee.org> wrote in message
news:567ce618.0307021017.4d898645@posting.google.com...
(snip)
> I cannot find any such statement in "Modern Physics" by Serway et al.
> Can you please provide a very precise and specific reference? Serway
> talks of uncertainty in energy and time, and in momentum and position
> (which I believe is the standard form of Heisenberg's uncertainty
> principle), but not between time and frequency.

The ones people usually learn first are Dp*Dx >= h/4pi and
DE*Dt >= h/4pi.  (D is Delta, p is momentum, E is energy, t is time,
x is distance.)

But E = (h/2pi)w = hf and p = (h/2pi)k (why don't they put hbar keys on
keyboards?), where w is omega = 2pi f, and k is the wave number (or
wave vector) 2pi/wavelength.  So Dw*Dt >= 1/2 and Dk*Dx >= 1/2 are
equivalent uncertainty principles.  h and hbar are unit conversions, so
this would be the dimensionless form.

Because of this, physics likes to do Fourier transforms between t and
w, or between x and k, where the 2pi is included in the w and k.  No
extra 2pi lying around.  Just exp(i w t) and exp(i k x).  In three
dimensions, k and x are vectors, and it is k dot x.
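In LaTeX form, the chain of substitutions above reads as follows (note
these are lower bounds, so the inequalities run with $\ge$):

```latex
\Delta p\,\Delta x \;\ge\; \frac{h}{4\pi} = \frac{\hbar}{2},
\qquad
\Delta E\,\Delta t \;\ge\; \frac{h}{4\pi} = \frac{\hbar}{2},
\quad\text{and with } E=\hbar\omega,\ p=\hbar k:\quad
\Delta\omega\,\Delta t \;\ge\; \tfrac{1}{2},
\qquad
\Delta k\,\Delta x \;\ge\; \tfrac{1}{2}.
```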
--
glen

Reply  Glen  7/2/2003 9:22:01 PM

"Glen Herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message
news:5EDMa.100$Ix2.16@rwcrnsc54...
>
> "Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
> news:hCoMa.2112$Jk5.1132401@feed2.centurytel.net...
(snip)
> > With interpolation, the number of points increases but the
> > resolution doesn't. This means that a given resolution will remain.
(snip)
> > That's a quickie answer at least.
>
> One that isn't completely true.
> Consider that you might have 0.2s samples
> of such sinusoids.  Maybe sampled at 1MHz with a 30 bit A/D
> converter, and that there is no (measurable) noise in the sinusoids.
> I believe, then, that you will have plenty enough information to
> separate the sinusoids.  (I overdid the numbers slightly to emphasize
> the point.)

Glen,

You changed "0.2sec sample" to "0.2sec samples".  Is that significant?

Whatever, I've missed your point here.  Are you suggesting that you
know the frequencies of the sinusoids a priori?  That would be a
different question.

The question I was dealing with was, more precisely: "starting with a
signal that is the sum of two sinusoids of unknown frequency and
amplitude, given a single temporal segment or epoch of that signal, can
you tell that there are two sinusoids present in it and readily
estimate their frequencies and amplitudes if the frequency separation
between the sinusoids is much less than the reciprocal of the length of
the temporal sample (epoch) that was provided to you?"  I still think
the answer is no.

The question wasn't about solving simultaneous equations based on
having more information.  So, as long as the SNR is reasonable, there's
no worry about the signal being noiseless unless you want to get into
the accuracy of the estimates.  This was more about simply being able
to "resolve" than about being able to estimate.

Fred

Reply  Fred  7/2/2003 9:26:43 PM

"Randy Yates" <yates@ieee.org> wrote in message
news:567ce618.0307021017.4d898645@posting.google.com...
(snip)
> I cannot find any such statement in "Modern Physics" by Serway et al.
> Can you please provide a very precise and specific reference? Serway
> talks of uncertainty in energy and time, and in momentum and position
> (which I believe is the standard form of Heisenberg's uncertainty
> principle), but not between time and frequency.

Yeah, they're not the same thing - but there is just a similarity.  So
using "uncertainty principle" in this case is probably unfortunate.
Yet, many know what's meant.

> The bin width of the DFT decreases as the number of samples
> increases. This is due to the process involved in the DFT and
> not some inherent limitation in frequency measurement, as the
> gedanken I suggested above shows. I think your basic problem
> (and many others have made the same mistake) is thinking that
> the uncertainty that crops up in physics has some equivalent
> in signal processing. I assert that it does not.

Hmmmm.... even so, I don't see how the DFT has anything to do with this
question, although this question and its answer may shed light on how
one approaches doing FTs, DFTs, FFTs according to their purpose.  There
*is* an inherent limitation in frequency resolution that is roughly
1/T, where T is the length of the temporal epoch being pondered /
analyzed.  Is there not?

I'm rather surprised that by now no one has brought up time-bandwidth
product and ambiguity.  If one carries this discussion one step further
and asks: how well can I determine time and (Doppler) frequency of a
radar return or sonar echo assuming a point target?

If one is using a constant frequency sinusoidal burst, the
time-bandwidth product is 1.0 or thereabouts and the same limits we're
talking about above apply.  It takes a longer pulse to estimate echo
frequency more finely, i.e. frequency resolution.  It takes a shorter
pulse to estimate echo range more finely.  So, there is a trade between
time resolution and frequency resolution.

However, if a pulse with a larger time-bandwidth product is used, such
as an FM'd sinusoidal burst, then there can theoretically be better
resolution in time and better resolution in frequency because the TW
product is higher.  However, doing this leads to ambiguity - there is a
continuum of times and frequencies that coincide, rather than one value
of each.  So, other high bandwidth waveforms might be designed.  Note
that using multiple pulses is one minor method of increasing the TW
product and that these remarks are limited to single-pulse estimates.

Fred

Reply  Fred  7/2/2003 9:45:09 PM

Fred Marshall wrote:
(snip)
>> The bin width of the DFT decreases as the number of samples
>> increases. This is due to the process involved in the DFT and
>> not some inherent limitation in frequency measurement, as the
>> gedanken I suggested above shows.
>> I think your basic problem
>> (and many others have made the same mistake) is thinking that
>> the uncertainty that crops up in physics has some equivalent
>> in signal processing. I assert that it does not.
>
> Hmmmm.... even so, I don't see how the DFT has anything to do with
> this question
(snip)
> I'm rather surprised that by now no one has brought up time-bandwidth
> product and ambiguity.  If one carries this discussion one step
> further and asks: how well can I determine time and (Doppler)
> frequency of a radar return or sonar echo assuming a point target?

My understanding of the ambiguity function is that it describes the
output of the replica (of the transmitted signal) correlator under
conditions of mismatch to delay and frequency shift.

One can design a waveform that has a sharp response to either the delay
(range) or the Doppler (range rate) shift, but not both.

A perfect ML processor, on the other hand, in a sense uses an infinite
set of filters for all possible ranges and velocities.  I believe that
there are circumstances, such as very high SNR, where there is a unique
replica with maximum response at the correct range and velocity.

Most active people I know tend to point out that reverberation makes
the theoretical point moot.  My understanding is that people tend to
design waveforms more on the basis of rejecting reverberation.

Hopefully someone who knows more than I do can comment.

(snip)

Reply  Stan  7/2/2003 10:40:00 PM

"Stan Pawlukiewicz" <stanp@nospam_mitre.org> wrote in message
news:bdvn00$4n8$1@newslocal.mitre.org...
> Fred Marshall wrote:
(snip)
> My understanding of the ambiguity function is that it describes the
> output of the replica (of the transmitted signal) correlator under
> conditions of mismatch to delay and frequency shift.
>
> One can design a waveform that has a sharp response to either the
> delay (range) or the Doppler (range rate) shift, but not both.
>
> Most active people I know tend to point out that reverberation makes
> the theoretical point moot.  My understanding is that people tend to
> design waveforms more on the basis of rejecting reverberation.

Stan,

Yes, I'm sure that's the case.
There are limits in what can be done so "designing waveforms" is a bit generous. After years of effort windowed cw is still pretty good. I recall one attempt to use a "thumbtack" ambiguity function based on pseudorandom chip sequences. The problem with it was that the "noise floor" of the thumbtack was too high relative to the peak output and real reverberation caused too much uncertainty in "which peak is it?". There are two issues: coherence time and processing in the presence of reverberation. Costas, long after RAKE, proposed a sonar version of the same idea in order to deal with shorter coherence times. You might call that "extended waveform design"! Radar designers have the advantage of being able to deal with lots of pulses - so the problem is made abit easier. I suppose that's why you said "reverberation" instead of "clutter"??? Back to the ambiguity function. For an FM burst, I don't think it's usually possible to distinguish between range and Doppler because one range and one Doppler will map identically to another range and another Doppler. The resulting range Doppler map has a diagonal ridge which is displaced in time and frequency. If increasing range is up and increasing Doppler is to the right, a particular ridge center can be due to a multiplicity of valid ranges and Dopplers, no? So, even in a noiseless case, the result is ambiguous. The guys knew what they were talking about when they named it an ambiguity function. At least that's my understanding. Fred   0 Reply Fred 7/3/2003 12:44:55 AM "Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message news:Z_HMa.2132$Jk5.1187975@feed2.centurytel.net...
>
> "Glen Herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message
> news:5EDMa.100$Ix2.16@rwcrnsc54...
(snip)
> > One that isn't completely true.  Consider that you might have 0.2s samples
> > of such sinusoids.  Maybe sampled at 1MHz with a 30 bit A/D converter, and
> > that there is no (measurable) noise in the sinusoids.  I believe, then,
> > that you will have plenty enough information to separate the sinusoids.  (I
> > overdid the numbers slightly to emphasize the point.)

> You changed "0.2sec sample" to "0.2sec samples".  Is that significant?

It means that you have samples somewhere in a time period that lasts 0.2s
long.

> Whatever, I've missed your point here.  Are you suggesting that you know the
> frequencies of the sinusoids a priori?  That would be a different question.
>
> The question I was dealing with was, more precisely:
> "starting with a signal that is the sum of two sinusoids of unknown
> frequency and amplitude,
> given a single temporal segment or epoch of that signal,
> can you tell that there are two sinusoids present in it
> and readily estimate their frequencies and amplitudes
> if the frequency separation between the sinusoids is much less than the
> reciprocal of the length of the temporal sample (epoch) that was provided to
> you?"
> I still think the answer is no.
>
> The question wasn't about solving simultaneous equations based on having
> more information.  So, as long as the SNR is reasonable, there's no worry
> about the signal being noiseless unless you want to get into the accuracy of
> the estimates.  This was more about simply being able to "resolve" more than
> about being able to estimate.

OK, I just posted this in response to a different question.

| If you have samples over 0.1s, and you don't do a Fourier transform, you can
| ask different questions.  If you know that the signal is the sum of 1Hz,
| 2Hz, and 3Hz components only, six points will determine the amplitude and
| phase of those components.
|
| Consider a signal, f, sampled over time T, from t=0 to t=T.  Assume that
| f(0)=f(T)=0 for now.  All the components must be sines with periods that
| are 2T divided by an integer (frequencies that are multiples of 1/(2T)).
| If it is known to have a maximum frequency component < Fn then the number
| of possible frequency components is 2 T Fn.  A system with 2 T Fn unknowns
| needs 2 T Fn equations, so 2 T Fn sampling points.  2 T Fn sampling points
| uniformly distributed over time T are 1/(2 Fn) apart.

So, if you have a signal over any time period that is known to be the sum
of N sinusoids, 2 N sample points will determine the amplitude and phase of
those sinusoids.  You are free to pick (almost) any N frequencies you want.
You can also pick (almost) any 2 N sample points you want, though some
choices work better than others.

Nyquist comes in because 2 T Fn sample points are enough over a time period
T, with the value fixed on the end points, to uniquely determine the
function over that time.  If the end points are not 0, you just add two
more sampling points, so 2 T Fn + 2 are required.  Assuming that T >> 1/Fn,
or N >> 1, it doesn't really matter much.  (Nyquist pretty much assumes the
limit as N --> infinity.)

Does that help any?

-- glen

Reply Glen 7/3/2003 1:03:06 AM

Glen Herrmannsfeldt wrote:
>
> "Randy Yates" <yates@ieee.org> wrote in message
> news:567ce618.0307021017.4d898645@posting.google.com...
> > "Hristo Stevic" <hristostev@yahoo.com> wrote in message
> > news:<d32b134353ea65fb7da3e1a13028a28c.52609@mygate.mailgate.org>...
> > > Hi
> > >
> > > I am little bit confused about the issue of time-frequency resolution
> > > and the principle of uncertainty. The latter states that product
> > > DT*Dw, where DT, Dw denotes respectively the time and frequency
> > > resolution is bounded.
> >
> > I cannot find any such statement in "Modern Physics" by Serway et al.
> > Can you please provide a very precise and specific reference?
> > Serway
> > talks of uncertainty in energy and time, and in momentum and position
> > (which I believe is the standard form of Heisenberg's uncertainty
> > principle), but not between time and frequency.
>
> The ones people usually learn first are Dp*Dx >= h/4pi and DE*Dt >= h/4pi.
>
> (D is Delta, p is momentum, E is energy, t is time, x is distance).
>
> But E=(h/2pi)w = hf and p=(h/2pi)k (why don't they put hbar keys on
> keyboards?)
>
> w is omega = 2pi f, and k is wave number (or wave vector) 2pi/wavelength.
>
> So Dw*Dt >= 1/2 and Dk*Dx >= 1/2 are equivalent uncertainty principles.

Excellent point, Glen.  I agree.  But I would still say that the whole
concept of the HUP applies to frequencies and times of matter, not of a
signal in a computer.  Do you get what I mean?
--
%  Randy Yates                  % "...the answer lies within your soul
%% Fuquay-Varina, NC            %  'cause no one knows which side
%%% 919-577-9882                %  the coin will fall."
%%%% <yates@ieee.org>           % 'Big Wheels', *Out of the Blue*, ELO
http://home.earthlink.net/~yatescr

Reply Randy 7/3/2003 8:25:01 AM

Jerry Avins wrote:
>
> Randy Yates wrote:
> >
> > ...
> >
> > I submit that the frequency of a perfect noiseless complex sinusoid can
> > be measured perfectly in two samples: measure the phase at sample 1 and
> > the phase at sample 2 and then use 2*pi*f = (phase 2 - phase 1)/T, where
> > the phases are in radians and T is the sample period in seconds.
> >
>   ...
>
> Isn't the amplitude also needed?  A*sin(Bt) has two unknowns.  Perhaps you
> mean two complex samples?

Hello Jerry,

Yes, I meant two complex samples.  How can you sample a complex sinusoid
otherwise?
--
%  Randy Yates                  % "...the answer lies within your soul
%% Fuquay-Varina, NC            %  'cause no one knows which side
%%% 919-577-9882                %  the coin will fall."
%%%% <yates@ieee.org>           % 'Big Wheels', *Out of the Blue*, ELO
http://home.earthlink.net/~yatescr

Reply Randy 7/3/2003 8:45:47 AM

"Randy Yates" <yates@ieee.org> wrote in message
news:3F03E905.56839BD5@ieee.org...
> Glen Herrmannsfeldt wrote:
> >
> > "Randy Yates" <yates@ieee.org> wrote in message
> > news:567ce618.0307021017.4d898645@posting.google.com...
> > > "Hristo Stevic" <hristostev@yahoo.com> wrote in message
> > > news:<d32b134353ea65fb7da3e1a13028a28c.52609@mygate.mailgate.org>...
> > > > Hi
> > > >
> > > > I am little bit confused about the issue of time-frequency resolution
> > > > and the principle of uncertainty. The latter states that product
> > > > DT*Dw, where DT, Dw denotes respectively the time and frequency
> > > > resolution is bounded.
> > >
> > > I cannot find any such statement in "Modern Physics" by Serway et al.
> > > Can you please provide a very precise and specific reference?  Serway
> > > talks of uncertainty in energy and time, and in momentum and position
> > > (which I believe is the standard form of Heisenberg's uncertainty
> > > principle), but not between time and frequency.
> >
> > The ones people usually learn first are Dp*Dx >= h/4pi and DE*Dt >= h/4pi.
> >
> > (D is Delta, p is momentum, E is energy, t is time, x is distance).
> >
> > But E=(h/2pi)w = hf and p=(h/2pi)k (why don't they put hbar keys on
> > keyboards?)
> >
> > w is omega = 2pi f, and k is wave number (or wave vector) 2pi/wavelength.
> >
> > So Dw*Dt >= 1/2 and Dk*Dx >= 1/2 are equivalent uncertainty principles.
>
> Excellent point, Glen.  I agree.  But I would still say that the whole
> concept of the HUP applies to frequencies and times of matter,
> not of a signal in a computer.  Do you get what I mean?

Yes.  I posted a description of that somewhere else, with a story about
misdesigning a RADAR system by misunderstanding the uncertainty principle.

Though one could be computing the properties of matter on a computer, in
which case it would apply.

One of the quantum mechanics classes that I had some years ago explained
the uncertainty principle through Fourier transforms.  If you consider the
distribution of a particle in (t, x, E, p) space, its distribution in
(E, p, t, x) space, respectively, is the Fourier transform with a factor
of hbar included.  You can then find that a gaussian shape minimizes the
product of the width of the real and transform space distributions, where
the width is the uncertainty in that variable.

Also, using w or k, the pi that would otherwise occur is included, which
makes the transform expression look nicer.

-- glen

Reply Glen 7/3/2003 10:39:34 AM

keith2d@yahoo.com (Keith) wrote in message
news:<3c786308.0307021556.1f0f53bc@posting.google.com>...
> I remember seeing a paper by Leon Cohen on Time-Frequency and
> uncertainty principles. I believe he delves into it in his book also
> but I haven't read it. And yes it's the same uncertainty principle as
> in quantum mechanics. The position and momentum measurement operators
> for a wave don't commute just like time and frequency operators don't,
> to use QM lingo.

I don't know anything about quantum mechanics (I only have college-level
background in general physics), so I don't know what the QM uncertainty
principle is based on.  To the little extent I have thought about it, I
always believed position s and velocity v (momentum, if mass is constant)
"do not commute" because v=ds/dt.  If you want to do non-analytical
differentiation or integration, one needs to observe over some time
interval.  I have never thought of the Fourier limit in those terms.
After having read your post, I agree that there is a similar relation
between frequency w and time t, namely w = d phi/dt where phi is the
argument of the complex exponential in the Fourier transform.  I'll have
to think of it a bit further.

Anyway, the Fourier resolution problem has always appeared to me as a
problem with the transform, not the physical quantities to be processed.
Parametric methods are alternative ways of distilling similar information
from the data, that sometimes can be used and sometimes can not.  More on
that below.

> I'm not so sure about your version of the uncertainty principle there.
> Time resolution comes from a larger bandwidth, not from a more finely
> sampled version of the same spectrum.

Sampling at higher frequencies provides that larger bandwidth.  Increase
Df in the Heisenberg inequality, and get more information from a given
observation window Dt.

> Same for spectrum resolution,
> you have to sample over a longer time.

Duality.  Sampling over a longer period provides finer, true spectrum
resolution within the same bandwidth.  Increase Dt in the Heisenberg
inequality and get more information from a given bandwidth Df.

> Anyway this is the most common
> uncertainty principle everyone usually means. There are plenty of
> other versions of course but I don't remember anything about the
> product of the resolutions in both domains.
>
> As for interpolation or other processing after the fact and trying to
> cheat the uncertainty principle. I look at it this way, the
> uncertainty refers to nonparametric descriptions of the signal.

I think this is a very important point.  I don't completely agree with
the word "cheat", though.  In quite a few of the problems I have worked
with, it is safe to assume that at least some types of spectra can be
viewed as line spectra, i.e. that the "useful" parts of the spectra can
be modeled as Dirac delta functions.  Under these conditions one can
impose parametric sum-of-sines models on the data and go on with the
analysis.

I like to view these types of models as "more efficient" versions of the
DFT, in that one abandons the demands of orthogonality and completeness
of the Fourier basis vectors, and tries to use as few linearly independent
vectors of the same type as possible, to do the same job.  With a good
parametric model (and data generated by a process that fits the model) I
can resolve spectrum lines that are way closer than the Fourier limit.
Again, to me the problem has always been with the transform, not the
quantities to be processed.

Of course, if the sum-of-sines model doesn't fit the process, one finds
oneself in big trouble.  That's *the* main problem with parametric
methods: to establish what type of model to use.  One has exchanged one
set of problems for another, possibly more difficult one.

> It's
> uncertainty in the choice out of all possible signals fitting those
> data points. If you cheat and know beforehand that the signal fits
> some parametric model, then of course you can resolve everything
> perfectly in all domains. When you choose some method to interpolate,
> you are making these a priori assumptions. In reality, you still could
> have gotten one out of an infinite number of alternative signals--all
> within the range the uncertainty principle specifies--which all
> might've been sampled by you to get those very same samples you got.
> And when you do more sampling, you rule out more and more of them. By
> the time you have sampled for an infinite time... you have ruled out
> all but one possible spectrum that could yield your samples.

I agree with you on this.

> btw in image processing I believe they talk about the resolution which
> depends on the camera and lenses and all, as opposed to the pixel
> spacing which you can interpolate to get to whatever you want.

Rune

Reply allnor 7/3/2003 11:56:17 AM

"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:<VSKMa.2240$Jk5.1196768@feed2.centurytel.net>...
> Yes, I'm sure that's the case.  There are limits in what can be done so
> "designing waveforms" is a bit generous.  After years of effort windowed cw
> is still pretty good.  I recall one attempt to use a "thumbtack" ambiguity
> function based on pseudorandom chip sequences.  The problem with it was that
> the "noise floor" of the thumbtack was too high relative to the peak output
> and real reverberation caused too much uncertainty in "which peak is it?".

Heh, these days that kind of behaviour is taken as a very robust basis
for processing. The current philosophy in data analysis is that this
signal, with reverberation, bad and possibly dynamic coherence, and
what have you, is so complex that only one parametric deterministic model
will be able to reproduce that set of data. Needless to say, the parameters
in this wonder-model can be determined with arbitrary precision, so
anything at all can be seen in the data, be it in the water column, in
the sea floor, or even the chemical composition.

As for wishful thinking, I wish I could say "only kidding!", but I can't.
This is indeed what "renowned researchers" are thinking these days. Check
out the paper

Soares, Siderius and Jesus, "Source localization in a time-varying
ocean waveguide," J. Acoust. Soc. Am., vol. 112, no. 5, p. 1879, 2002

for an example.

I think I'll go find a cup of coffee before I get started with another
of my rants...

Rune


Jerry Avins wrote:

> With a complex DAC? :-)

But not a "complicated" one...

bye,

--
Piergiorgio Sartor



Keith wrote:
> I remember seeing a paper by Leon Cohen on Time-Frequency and
> uncertainty principles. I believe he delves into it in his book also
> but I haven't read it. And yes it's the same uncertainty principle as
> in quantum mechanics. The position and momentum measurement operators
> for a wave don't commute just like time and frequency operators don't,
> to use QM lingo.

QM measurement isn't the same as classical measurement.
Look at the IEEE Signal Processing Mag article on Quantum Signal
Processing that came out last year.

>
> I'm not so sure about your version of the uncertainty principle there.
> Time resolution comes from a larger bandwidth, not from a more finely
> sampled version of the same spectrum. Same for spectrum resolution,
> you have to sample over a longer time. Anyway this is the most common
> uncertainty principle everyone usually means. There are plenty of
> other versions of course but I don't remember anything about the
> product of the resolutions in both domains.
>
> As for interpolation or other processing after the fact and trying to
> cheat the uncertainty principle. I look at it this way, the
> uncertainty refers to nonparametric descriptions of the signal. It's
> uncertainty in the choice out of all possible signals fitting those
> data points. If you cheat and know beforehand that the signal fits
> some parametric model, then of course you can resolve everything
> perfectly in all domains. When you choose some method to interpolate,
> you are making these a priori assumptions. In reality, you still could
> have gotten one out of an infinite number of alternative signals--all
> within the range the uncertainty principle specifies--which all
> might've been sampled by you to get those very same samples you got.
> And when you do more sampling, you rule out more and more of them. By
> the time you have sampled for an infinite time... you have ruled out
> all but one possible spectrum that could yield your samples.
>
> btw in image processing I believe they talk about the resolution which
> depends on the camera and lenses and all, as opposed to the pixel
> spacing which you can interpolate to get to whatever you want.
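Keith's point about interpolation is easy to check numerically: zero-padding a DFT (i.e. interpolating the spectrum) adds bins but no resolving power, so two tones spaced closer than 1/T stay merged no matter how finely the spectrum is sampled. The sketch below is illustrative only; the sample rate, record length and tone spacings are my own choices, not numbers from the thread:

```python
import numpy as np

fs, N = 128.0, 128                      # 1 s of data, so the Fourier limit is 1/T = 1 Hz
t = np.arange(N) / fs

def two_tones(f1, f2):
    """Sum of two unit-amplitude complex exponentials at f1 and f2 Hz."""
    return np.exp(2j*np.pi*f1*t) + np.exp(2j*np.pi*f2*t)

def n_peaks(x, pad=4096):
    """Count local maxima above half the peak in a heavily zero-padded spectrum."""
    mag = np.abs(np.fft.fft(x, pad))[:pad//2]     # positive frequencies only
    thr = 0.5 * mag.max()                         # ignore window sidelobes
    return sum(1 for i in range(1, len(mag)-1)
               if mag[i] > thr and mag[i] > mag[i-1] and mag[i] >= mag[i+1])

close = n_peaks(two_tones(32.0, 32.3))  # spacing 0.3/T: one merged lobe
apart = n_peaks(two_tones(32.0, 35.0))  # spacing 3/T: two distinct peaks
```

Despite the 32x zero-padding, the closely spaced pair still shows a single peak; only a longer observation time, or prior knowledge of the signal model, separates them.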



Randy Yates wrote:
>
> Jerry Avins <jya@ieee.org> wrote in message news:<3F043B19.77262AA8@ieee.org>...
> > Randy Yates wrote:
> > >
> > > Jerry Avins wrote:
> > > >
> > > > Randy Yates wrote:
> > > > >
> >  ...
> > > > >
> > > > > I submit that the frequency of a perfect noiseless complex sinusoid can
> > > > > be measured perfectly in two samples: measure the phase at sample 1 and
> > > > > the phase at sample 2 and then use 2*pi*f = (phase 2 - phase 1)/T, where
> > > > > the phases are in radians and T is the sample period in seconds.
> > > > >
> > > >   ...
> > > >
> > > > Isn't the amplitude also needed? A*sin(Bt) has two unknowns. Perhaps you
> > > > mean two complex samples?
> > >
> > > Hello Jerry,
> > >
> > > Yes, I meant two complex samples. How can you sample a complex sinusoid
> > > otherwise?
> > > --
> > With a complex DAC? :-)
>
> Jerry,
>
> This seems to be a recurring problem between us. Let me attempt to
> make myself absolutely clear.

I see not a problem, but different valid viewpoints.
>
> The mathematical process of taking a sample involves selecting a
> single element from a set S. In the case of real sampling, S is the
> set of real numbers. In the case of complex sampling, S is the set of
> complex numbers. In either case, a sample is a single element from the
> set S.

The electronic process of taking a sample consists of locking a signal
on a wire with a sample-and-hold, then measuring it. The only samples
available in that case are from the set of reals.
>
> Anytime I use the concept of complex sampling on this newsgroup, I
> will mean sampling as defined above unless I explicitly state
> otherwise. Thus when I speak of a single, complex sample, I will mean
> a single element from the set of complex numbers.

That's fine. Note that in the statement I asked about, you weren't
explicit. See below for what I believe is the common assumption. There
are discussions in which your position might produce confusion. The
number of bytes to store a sample might be one. I think it's best to be
explicit whenever we utter generalities.
>
> The problem of constructing an A/D converter that can actually form
> complex samples is a practical one that, in my opinion, has no place

Agreed. Nevertheless, practical measurements are usually the subject of
threads like this. How should we proceed?
>
> I don't mean to be mean-spirited here, Jerry, but I do believe that
> you have often confused and confounded the issue of complex sampling
> with the question of whether it involves two real samples or a single
> complex sample. I submit that, speaking from a purely set-theoretic
> point of view, a complex sample is a single sample, i.e., a single
> element from the set of complex numbers. The question of how one
> represents that complex element is in some cases (such as this one)
> irrelevant.

The question of how many samples are needed to make a particular
determination depends critically on the kind of samples taken. I feel
certain that when you write that two samples are required, many here
interpret that to mean two readings of an A/D converter. Complex samples are fine
abstractions, but keep in mind that when the complex sampling rate is
Fs, a bandwidth of Fs is possible. That the bandwidth limit is commonly
said to be Fs/2 reflects the common notion that samples are real unless
explicitly stated otherwise.

Jerry
--
Engineering is the art of making what you want from things you can get.


"Stan Pawlukiewicz" <stanp@nospam_mitre.org> wrote in message
news:be1v5a$92v$1@newslocal.mitre.org...
> Fred Marshall wrote:
> > "Stan Pawlukiewicz" <stanp@nospam_mitre.org> wrote in message
> > news:bdvn00$4n8$1@newslocal.mitre.org...
> >
> >>Fred Marshall wrote:
> >>
> >>>"Randy Yates" <yates@ieee.org> wrote in message
> >>>
> >>>
> >>>>"Hristo Stevic" <hristostev@yahoo.com> wrote in message
> >>>
> >>>news:<d32b134353ea65fb7da3e1a13028a28c.52609@mygate.mailgate.org>...
> >>>
> >>>
> >>>>>Hi
> >>>>>
> >>>>>I am little bit confused about the issue of time-frequency resolution
> >>>>>and the principle of uncertainty. The latter states that product
> >>>>>DT*Dw, where DT, Dw denotes respectively the time and frequency
> >>>>>resolution is bounded.
> >>>>
> >>>>I cannot find any such statement in "Modern Physics" by Serway et al.
> >>>>Can you please provide a very precise and specific reference? Serway
> >>>>talks of uncertainty in energy and time, and in momentum and position
> >>>>(which I believe is the standard form of Heisenberg's uncertainty
> >>>>principle), but not between time and frequency.
> >>>>
> >>>
> >>>
> >>>Yeah, they're not the same thing - but there is just a similarity.  So using
> >>>"uncertainty principle" in this case is probably unfortunate.  Yet, many
> >>>know what's meant.
> >>>.................
> >>>
> >>>
> >>>
> >>>>The bin width of the DFT decreases as the number of samples
> >>>>increases. This is due to the process involved in the DFT and
> >>>>not some inherent limitation in frequency measurement, as the
> >>>>gedanken I suggested above shows. I think your basic problem
> >>>>(and many others have made the same mistake) is thinking that
> >>>>the uncertainty that crops up in physics has some equivalent
> >>>>in signal processing. I assert that it does not.
> >>>
> >>>
> >>>Hmmmm.... even so, I don't see how the DFT has anything to do with this
> >>>question although this question and its answer may shed light on how one
> >>>approaches doing FTs, DFTs, FFTs according to their purpose.  There *is* an
> >>>inherent limitation in frequency resolution that is roughly 1/T where T is
> >>>the length of the temporal epoch being pondered / analyzed. Is there not?
> >>>
> >>>I'm rather surprised that by now no one has brought up time-bandwidth product
> >>>and ambiguity.  If one carries this discussion one step further and asks:
> >>>how well can I determine time and (Doppler) frequency of a radar return or
> >>>sonar echo assuming a point target?
> >>
> >>
> >>My understanding of the ambiguity function is that it describes the
> >>output of the replica (of the transmitted signal) correlator under
> >>conditions of mismatch to delay and frequency shift.
> >>
> >>One can design a waveform that has a sharp response to either the delay
> >>(range) or the Doppler (range rate) shift but not both.
> >>
> >>A perfect ML processor on the other hand, in a sense uses an infinite
> >>set of filters for all possible ranges and velocities.  I believe that
> >>there are circumstances, such as very high SNR, where there is a unique
> >>replica with maximum response at the correct range and velocity.
> >>
> >>
> >>Most active people I know tend to point out that reverberation makes the
> >>   theoretical point moot.  My understanding is that people tend to
> >>design waveforms more on the basis of rejecting reverberation.
> >>
> >
> >
> > Stan,
> >
> > Yes, I'm sure that's the case.  There are limits in what can be done so
> > "designing waveforms" is a bit generous.  After years of effort windowed cw
> > is still pretty good.  I recall one attempt to use a "thumbtack" ambiguity
> > function based on pseudorandom chip sequences.  The problem with it was that
> > the "noise floor" of the thumbtack was too high relative to the peak output
> > and real reverberation caused too much uncertainty in "which peak is it?".
> >
> > There are two issues: coherence time and processing in the presence of
> > reverberation.
> >
> > Costas, long after RAKE, proposed a sonar version of the same idea in order
> > to deal with shorter coherence times.  You might call that "extended
> > waveform design"!
> >
> > Radar designers have the advantage of being able to deal with lots of
> > pulses - so the problem is made a bit easier.  I suppose that's why you said
> > "reverberation" instead of "clutter"???
> >
> > Back to the ambiguity function.  For an FM burst, I don't think it's usually
> > possible to distinguish between range and Doppler because one range and one
> > Doppler will map identically to another range and another Doppler.  The
> > resulting range-Doppler map has a diagonal ridge which is displaced in time
> > and frequency.  If increasing range is up and increasing Doppler is to the
> > right, a particular ridge center can be due to a multiplicity of valid
> > ranges and Dopplers, no?  So, even in a noiseless case, the result is
> > ambiguous.  The guys knew what they were talking about when they named it an
> > ambiguity function.  At least that's my understanding.
>
> It looks like I'm going to be hunting in my basement for my copy of Cook
> and Bernfeld over the holiday.  I dimly recall that your point about the FM
> pulse and the ridge is correct.  The thing that I need to look at is whether
> the result was based on modeling the Doppler as a simple frequency
> shift, or was based on pulse compression/dilation.  For an ideal point
> reflector in the absence of clutter it may make a difference.

It does make a difference.  The pulse length will be modified at the same
ratio that the wavelength or frequency is modified.  As a practical matter I
think ambiguity holds for the real thing - modeling aside.  If all one used
in modeling was time (range) and frequency shift, then it would be truly
ambiguous, no?  However, even if the modeling were more precise and included
pulse compression / dilation you would have to be able to detect second
order effects to know the difference would you not?  Real SNR's and receiver
bandwidths probably don't allow that precise a measurement.  Is there
theoretical ambiguity at infinite SNR and bandwidth?  Perhaps not.....  But
I've never seen a sensor system designed that tried to determine Doppler by
examining the pulse length!

Let's look at a real case:

15kHz sonar, T=50msec cw pulse, 20 knots of positive Doppler, roughly 34 fps.
Speed of sound in water roughly 5000 fps.
Doppler shift is 15kHz*2*34/5000 = 204Hz.
temporal resolution around 4msec == 20 feet.
Pulse compression due to Doppler is T*2*34/5000 = 0.68msec == 3.4 feet = 10
wavelengths.
So, pulse compression can't be resolved in a reasonable bandwidth needed for
getting adequate SNR and Doppler coverage.
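For what it's worth, that back-of-the-envelope arithmetic can be checked in a few lines (using 1.688 ft/s per knot, so 20 knots is about 34 ft/s; this is just a restatement of the calculation above, not an independent model):

```python
# Sanity check of the 15 kHz sonar example.
f0 = 15e3                  # carrier frequency, Hz
T = 50e-3                  # cw pulse length, s
v = 20 * 1.688             # 20 knots in ft/s (~33.8 ft/s)
c = 5000.0                 # speed of sound in water, ft/s

doppler_shift = f0 * 2 * v / c            # two-way Doppler shift, Hz
pulse_compression = T * 2 * v / c         # change in pulse length, s
compression_ft = pulse_compression * c    # same change expressed in feet
wavelengths = compression_ft / (c / f0)   # ... and in carrier wavelengths
```

The compression, well under a millisecond and on the order of ten wavelengths, is far below what a receiver bandwidth sized for SNR and Doppler coverage could resolve, which supports the conclusion above.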

Fred



allnor@tele.ntnu.no (Rune Allnor) wrote in message news:<f56893ae.0307021318.30c370e1@posting.google.com>...
> yates@ieee.org (Randy Yates) wrote in message news:<567ce618.0307021017.4d898645@posting.google.com>...
> > "Hristo Stevic" <hristostev@yahoo.com> wrote in message news:<d32b134353ea65fb7da3e1a13028a28c.52609@mygate.mailgate.org>...
> > > Hi
> > >
> > > I am little bit confused about the issue of time-frequency resolution
> > > and the principle of uncertainty. The latter states that product
> > > DT*Dw, where DT, Dw denotes respectively the time and frequency
> > > resolution is bounded.
> >
> > I cannot find any such statement in "Modern Physics" by Serway et al.
> > Can you please provide a very precise and specific reference? Serway
> > talks of uncertainty in energy and time, and in momentum and position
> > (which I believe is the standard form of Heisenberg's uncertainty
> > principle), but not between time and frequency.
> >
> > > If the time resolution *increases* then the
> > > frequency resolution *decreases*
> >
> > I submit that the frequency of a perfect noiseless complex sinusoid can
> > be measured perfectly in two samples: measure the phase at sample 1 and
> > the phase at sample 2 and then use 2*pi*f = (phase 2 - phase 1)/T, where
> > the phases are in radians and T is the sample period in seconds.
>
> Some pedantic remarks: The uncertainty principle as I know it from
> DSP is related to the ability to resolve *two* complex sinusoids
> in a data set. There is another, equally important but different,
> question on how to localize *one* sinusoid with the greatest
> accuracy.
>
> Having said that, you're right. There is nothing inherent in the
> time/frequency formalism that states that it is impossible to
> resolve two sines closer than the Fourier limit. All the high
> resolution frequency/DoA estimators use some sort of trick to
> circumvent the Fourier limit.
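Randy's two-complex-sample claim, which Rune concedes above, is easy to sketch numerically. The numbers below (fs, f_true, A) are illustrative choices of mine; the estimate is exact only for a noiseless complex sinusoid with |f| < fs/2:

```python
import numpy as np

fs = 100.0        # sample rate, Hz
f_true = 13.7     # |f_true| < fs/2, so the phase step stays unwrapped
A = 2.0           # amplitude

n = np.array([0, 1])
x = A * np.exp(2j * np.pi * f_true * n / fs)   # two complex samples

# 2*pi*f = (phase 2 - phase 1)/T, exactly as quoted above
dphi = np.angle(x[1] * np.conj(x[0]))          # phase advance in (-pi, pi]
f_est = dphi * fs / (2 * np.pi)
amp_est = abs(x[0])
```

With any noise at all, or with two tones instead of one, this breaks down, which is exactly Rune's distinction between localizing one sinusoid and resolving two.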

The problem is that everything I have read so far in this discussion seems
to centre around the notion of a signal whose frequency does not change
with time. If you can assume that, then why use JTFA? And then of course
the "uncertainty principle" does not apply. But JTFA is for the case
when you might have a signal like,

v(t) = A sin (w(t)t)

where w(t), the radian frequency, changes with time. In this case I
don't see how you could possibly be able to resolve the signal exactly
with only 2 sample points, complex or otherwise. You have to keep in
mind that JTFA assumes nonstationary frequency content, otherwise
what's the point of having time in it?

Paavo Jumppanen,
Author of AtSpec : A 2 channel PC based FFT spectrum analyzer
http://www.taquis.com
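Paavo's point can be illustrated with a short-time spectrum of a linear chirp: the dominant frequency of an early analysis frame differs from that of a late frame, which is what JTFA is for. A minimal NumPy sketch (the chirp parameters and frame positions are illustrative assumptions, not anything from the thread):

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1.0, 1.0/fs)
# linear chirp: instantaneous frequency sweeps from 50 Hz to 200 Hz over 1 s
phase = 2*np.pi*(50.0*t + 0.5*150.0*t**2)
v = np.sin(phase)

def frame_peak_freq(x, start, nwin=128):
    """Peak frequency (Hz) of one Hann-windowed analysis frame."""
    seg = x[start:start+nwin] * np.hanning(nwin)
    spec = np.abs(np.fft.rfft(seg))
    return np.fft.rfftfreq(nwin, 1.0/fs)[np.argmax(spec)]

early = frame_peak_freq(v, 0)     # frame near t = 0
late = frame_peak_freq(v, 800)    # frame near t = 0.8 s
```

Each frame trades frequency resolution (fs/nwin, about 8 Hz here) for time localization; no two samples, complex or otherwise, can summarize a signal whose frequency moves.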


Paavo Jumppanen wrote:
> allnor@tele.ntnu.no (Rune Allnor) wrote in message news:<f56893ae.0307021318.30c370e1@posting.google.com>...
>
>>yates@ieee.org (Randy Yates) wrote in message news:<567ce618.0307021017.4d898645@posting.google.com>...
>>
>>>"Hristo Stevic" <hristostev@yahoo.com> wrote in message news:<d32b134353ea65fb7da3e1a13028a28c.52609@mygate.mailgate.org>...
>>>
>>>>Hi
>>>>
>>>>I am little bit confused about the issue of time-frequency resolution
>>>>and the principle of uncertainty. The latter states that product
>>>>DT*Dw, where DT, Dw denotes respectively the time and frequency
>>>>resolution is bounded.
>>>
>>>I cannot find any such statement in "Modern Physics" by Serway et al.
>>>Can you please provide a very precise and specific reference? Serway
>>>talks of uncertainty in energy and time, and in momentum and position
>>>(which I believe is the standard form of Heisenberg's uncertainty
>>>principle), but not between time and frequency.
>>>
>>>
>>>>If the time resolution *increases* then the
>>>>frequency resolution *decreases*
>>>
>>>I submit that the frequency of a perfect noiseless complex sinusoid can
>>>be measured perfectly in two samples: measure the phase at sample 1 and
>>>the phase at sample 2 and then use 2*pi*f = (phase 2 - phase 1)/T, where
>>>the phases are in radians and T is the sample period in seconds.
>>
>>Some pedantic remarks: The uncertainty principle as I know it from
>>DSP is related to the ability to resolve *two* complex sinusoidals
>>in a data set. There is another, equally important but different,
>>question on how to localize *one* sinusoidal with the greatest
>>accuracy.
>>
>>Having said that, you're right. There is nothing inherent in the
>>time/frequency formalism that states that it is impossible to
>>resolve two sines closer that the Fourier limit. All the high
>>resolution frequency/DoA estimators use some sort of trick to
>>circumvent the Fourier limit.
>
>
> The problem is everything I read so far in this discussion seems to
> centre around the notion of a signal whose frequency does not change
> with time. If you can assume that then why use JTFA and then of course
> the "uncertainty principle" does not apply. But JTFA is for the case
> when you might have a signal like,
>
> v(t) = A sin (w(t)t)

Our own Peter K published a paper in IEEE trans on SP where w(t) was
quadratic using an EKF. If I recall correctly, the uncertainty principle
didn't encumber his results.

>
> where w(t), the radian frequency, changes with time. In this case I
> don't see how you could possibly be able to resolve the signal exactly
> with only 2 sample points, complex or otherwise. You have to keep in
> mind that JTFA assumes non stationary frequency content otherwise
> what's the point of having time in it?
>
> Paavo Jumppanen,
> Author of AtSpec : A 2 channel PC based FFT spectrum analyzer
> http://www.taquis.com



Stan Pawlukiewicz <stanp@nospam_mitre.org> writes:

> Paavo Jumppanen wrote:
> >
> > The problem is everything I read so far in this discussion seems to
> > centre around the notion of a signal whose frequency does not change
> > with time. If you can assume that then why use JTFA and then of course
> > the "uncertainty principle" does not apply. But JTFA is for the case
> > when you might have a signal like,
> >
> > v(t) = A sin (w(t)t)
>
> Our own Peter K published a paper in IEEE trans on SP where w(t) was
> quadratic using an EKF. If I recall correctly, the uncertainty principle
> didn't encumber his results.

Well, it wasn't explicitly referred to at all.

The EKF paper [1] actually requires that you know the signal over a
> longish interval.  It makes the further assumption that, on that
interval, the phase is polynomial in nature.  Exactly what this says
about the uncertainty principle, I'm not sure.

Another thing I'm not sure of is how the Cramer-Rao Lower bound fits
into this idea of uncertainty.  As Randy (I think) pointed out
elsewhere, if you know you have a cisoid and you know it's noiseless,
then you can estimate the frequency from just two adjacent samples.
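That two-sample estimate is easy to sketch (a toy example with an arbitrary test frequency and sample period, and no noise, as stipulated):

```python
import numpy as np

f_true, T = 123.4, 1e-3                      # arbitrary frequency (Hz) and sample period (s)
x = np.exp(2j*np.pi*f_true*np.arange(2)*T)   # two samples of a noiseless complex sinusoid

dphi = np.angle(x[1] * np.conj(x[0]))        # phase difference, wrapped to (-pi, pi]
f_est = dphi / (2*np.pi*T)                   # 2*pi*f = (phase 2 - phase 1)/T
```

Note the catch: the wrapped phase difference only pins down f when |f| < 1/(2T) (500 Hz here); beyond that the estimate aliases, which is one reason two samples don't suffice in general.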

> > where w(t), the radian frequency, changes with time. In this case I
> > don't see how you could possibly be able to resolve the signal exactly
> > with only 2 sample points, complex or otherwise.

Nor do I, at least not without one or more simplifying assumptions.

> > You have to keep in
> > mind that JTFA assumes non stationary frequency content otherwise
> > what's the point of having time in it?

I tend to distinguish between "non-stationary" frequency changes and
"deterministic" frequency changes.  I realise this is pretty
arbitrary, but "non-stationary" changes would be those that are
stochastic in nature and have a component that is unpredictable;
"deterministic" frequency changes can be known precisely if you know
something about the manner of deterministic variation.

Ciao,

Peter K.

[1] P.J. Kootsookos and J.M. Spanjaard, "An Extended Kalman Filter for
Demodulating Polynomial Phase Signals," IEEE Signal Processing
Letters, vol. 5(3), March 1998, pp. 69-70.

Available from UQ's eprint server for personal use at:

http://eprint.uq.edu.au/archive/00000069/01/PPEKF_IEEE_SPL_Kootsookos_Spanjaard.pdf

--
Peter J. Kootsookos

"Na, na na na na na na, na na na na"
- 'Hey Jude', Lennon/McCartney


p.kootsookos@remove.ieee.org (Peter J. Kootsookos) wrote in message news:<s68smpmkd7e.fsf@mango.itee.uq.edu.au>...
> PaavoJumppanen@iname.com (Paavo Jumppanen) writes:
>
> > Is w(t) quadratic because you know it to be quadratic? If so you are
> > using apriori information.
>
> Yes, that was one of the points I just made in a parallel post.
>
> > The point I'm trying to make is what can you expect when you don't
> > know the functional form of what you are trying to quantify other
> > than it has a non-stationary frequency composition? IMHO that is the
> > case for which the whole notion of the "uncertainty principle"
> > applies.
>
> Yes, having w(t) stochastic says something about uncertainty, but I'm
> not sure if this is the same thing as what people refer to as the
> uncertainty principle.

My impression (and the way I use the term) is that the uncertainty
principle relates to recognizing two different components (narrow-band
spectrum lines in frequency domain, pulses in time domain) as two small/
narrow features and not one broad feature.

A related issue is to use the phase of the spectrum to establish at what
precise frequency one spectrum line is. I have seen ways of doing this
based on the non-parametric spectrum. However, after the parametric issue
was brought into this thread, I'm not sure if those accurate estimates
are also based on the parametric assumption of a single, narrow spectrum
line. I suspect that's the case, which would mean that the uncertainty
principle actually limits the ability to see any details in a
non-parametric spectrum analysis.

Rune

> Ciao,
>
> Peter K.


Randy Yates wrote:
>
> Jerry Avins wrote:
> > [...]
> > Note that in the statement I asked about, you weren't
> > explicit.
>
> and
>
> > I feel
> > certain that when you write that two samples are required, many here
> > interpret that to mean two readings of a DAC. Complex samples are fine
> > abstractions, but keep in mind that when the complex sampling rate is
> > Fs, a bandwidth of Fs is possible. That the bandwidth limit is commonly
> > said to be Fs/2 reflects the common notion that samples are real unless
> > explicitly stated otherwise.
>
> Jerry,
>
> I could have been more explicit, but what I said was actually
> unambiguous if you know what is meant by a "complex sinusoid."
> By complex sinusoid, I mean a signal of the form
> x(t) = r*e^{j*2*pi*f*t}. This is inherently a complex signal that
> MUST be sampled using complex sampling in order to be unambiguously
> sampled.
>
> I'll grant that this was perhaps excessive verbal brevity.
> --
You did indeed write "I submit that the frequency of a perfect noiseless
complex sinusoid can be measured perfectly in two samples: ..." I missed
that you had changed the basis of the discussion. I ought to have read
more carefully.

Jerry
--
Engineering is the art of making what you want from things you can get.


On Sat, 05 Jul 2003 01:06:11 GMT, Randy Yates <yates@ieee.org> wrote:

(snipped)
>
>Hey Jerry, if that's all I had to eat for crow supper around here, I'd
>be elated. Your accountability speaks volumes.
>--

Ha Ha!  If crow were fattening, I'd have gained 30 pounds in the last
couple of years!!

[-Rick-]



Paavo Jumppanen wrote:
> Stan Pawlukiewicz <stanp@nospam_mitre.org> wrote in message news:<be2ho7$p11$1@newslocal.mitre.org>...
>
>>Paavo Jumppanen wrote:
>>
>>>allnor@tele.ntnu.no (Rune Allnor) wrote in message news:<f56893ae.0307021318.30c370e1@posting.google.com>...
>>>
>>>
>>>>yates@ieee.org (Randy Yates) wrote in message news:<567ce618.0307021017.4d898645@posting.google.com>...
>>>>
>>>>
>>>>>"Hristo Stevic" <hristostev@yahoo.com> wrote in message news:<d32b134353ea65fb7da3e1a13028a28c.52609@mygate.mailgate.org>...
>>>>>
>>>>>
>>>>>>Hi
>>>>>>
>>>>>>I am little bit confused about the issue of time-frequency resolution
>>>>>>and the principle of uncertainty. The latter states that product
>>>>>>DT*Dw, where DT, Dw denotes respectively the time and frequency
>>>>>>resolution is bounded.
>>>>>
>>>>>I cannot find any such statement in "Modern Physics" by Serway et al.
>>>>>Can you please provide a very precise and specific reference? Serway
>>>>>talks of uncertainty in energy and time, and in momentum and position
>>>>>(which I believe is the standard form of Heisenberg's uncertainty
>>>>>principle), but not between time and frequency.
>>>>>
>>>>>
>>>>>
>>>>>>If the time resolution *increases* then the
>>>>>>frequency resolution *decreases*
>>>>>
>>>>>I submit that the frequency of a perfect noiseless complex sinusoid can
>>>>>be measured perfectly in two samples: measure the phase at sample 1 and
>>>>>the phase at sample 2 and then use 2*pi*f = (phase 2 - phase 1)/T, where
>>>>>the phases are in radians and T is the sample period in seconds.
>>>>
>>>>Some pedantic remarks: The uncertainty principle as I know it from
>>>>DSP is related to the ability to resolve *two* complex sinusoidals
>>>>in a data set. There is another, equally important but different,
>>>>question on how to localize *one* sinusoidal with the greatest
>>>>accuracy.
>>>>
>>>>Having said that, you're right. There is nothing inherent in the
>>>>time/frequency formalism that states that it is impossible to
>>>>resolve two sines closer that the Fourier limit. All the high
>>>>resolution frequency/DoA estimators use some sort of trick to
>>>>circumvent the Fourier limit.
>>>
>>>
>>>The problem is everything I read so far in this discussion seems to
>>>centre around the notion of a signal whose frequency does not change
>>>with time. If you can assume that then why use JTFA and then of course
>>>the "uncertainty principle" does not apply. But JTFA is for the case
>>>when you might have a signal like,
>>>
>>>v(t) = A sin (w(t)t)
>>
>>Our own Peter K published a paper in IEEE trans on SP where w(t) was
>>quadratic using an EKF. If I recall correctly, the uncertainty principle
>>didn't encumber his results.
>>
>
>
> Is w(t) quadratic because you know it to be quadratic? If so you are
> using apriori information. The point I'm trying to make is what can
> you expect when you don't know the functional form of what you are
> trying to quantify other than it has a non-stationary frequency
> composition? IMHO that is the case for which the whole notion of the
> "uncertainty principle" applies.
>
> Paavo.

Heisenberg's principle is a physical bound; the concept of prior
information is meaningless there (although it would be neat to see a
counterexample). Many people insist that time-frequency analysis and
quantum mechanical measurement are directly equivalent. If you look at
the typical "uncertainty principle" derivation there is inevitably a
point where a complex quantity is multiplied by its conjugate.  In QM
measurement the physics dictates that; in time-frequency analysis they
just sort of a priori decide that throwing away phase information is
necessary.

All the bounds that I've seen depend on inequalities. Inequalities
are inherently mapped on to the real line.  Matrix inequalities are an
example where the ordering is an ordering of real determinants. Complex
numbers are typically ordered by their real magnitude.  I kinda think
that throwing away information isn't really the same as using prior
information.  To put it another way, the universe doesn't require a
dumb ass measurement.  I think the notion of an uncertainty principle is
misused.



Peter J. Kootsookos wrote:
> Stan Pawlukiewicz <stanp@nospam_mitre.org> writes:
>
>
>>Paavo Jumppanen wrote:
>>
>>>The problem is everything I read so far in this discussion seems to
>>>centre around the notion of a signal whose frequency does not change
>>>with time. If you can assume that then why use JTFA and then of course
>>>the "uncertainty principle" does not apply. But JTFA is for the case
>>>when you might have a signal like,
>>>
>>>v(t) = A sin (w(t)t)
>>
>>Our own Peter K published a paper in IEEE trans on SP where w(t) was
>>quadratic using an EKF. If I recall correctly, the uncertainty principle
>>didn't encumber his results.
>
>
> Well, it wasn't explicitly referred to at all.
>
> The EKF paper [1] actually requires that you know the signal over a
> longish interval.  It makes the further assumption that, on that
> interval, the phase is polynomial in nature.  Exactly what this says
> about the uncertainty principle, I'm not sure.
>
> Another thing I'm not sure of is how the Cramer-Rao Lower bound fits
> into this idea of uncertainty.  As Randy (I think) pointed out
> elsewhere, if you know you have a cisoid and you know it's noiseless,
> then you can estimate the frequency from just two adjacent samples.
>
>
>>>where w(t), the radian frequency, changes with time. In this case I
>>>don't see how you could possibly be able to resolve the signal exactly
>>>with only 2 sample points, complex or otherwise.
>
>
> Nor do I, at least not without one or more simplifying assumptions.
>
>
>>>You have to keep in
>>>mind that JTFA assumes non stationary frequency content otherwise
>>>what's the point of having time in it?
>
>
> I tend to distinguish between "non-stationary" frequency changes and
> "deterministic" frequency changes.  I realise this is pretty
> arbitrary, but "non-stationary" changes would be those that are
> stochastic in nature and have a component that is unpredictable;
> "deterministic" frequency changes can be known precisely if you know
> something about the manner of deterministic variation.

You must have a sign at the gate of your ivory tower that says "No Chaos
Allowed." ;)

>
> Ciao,
>
> Peter K.
>
> [1] P.J. Kootsookos and J.M. Spanjaard, "An Extended Kalman Filter for
>     Demodulating Polynomial Phase Signals," IEEE Signal Processing
>     Letters, vol. 5(3), March 1998, pp. 69-70.
>
>     Available from UQ's eprint server for personal use at:
>
> http://eprint.uq.edu.au/archive/00000069/01/PPEKF_IEEE_SPL_Kootsookos_Spanjaard.pdf
>



Paavo Jumppanen wrote:
> [...]
> Remember that the uncertainty principle (as applied to signal
> processing) refers to joint time frequency analysis only.

Hello Paavo,

This is part of my confusion - where exactly is such an uncertainty
principle defined? Got a reference other than Clay's "The Transforms
and Applications Handbook" edited by Alexander Poularikas?
--
%% Fuquay-Varina, NC            %       'cause no one knows which side
%%% 919-577-9882                %                   the coin will fall."
%%%% <yates@ieee.org>           %  'Big Wheels', *Out of the Blue*, ELO


Randy Yates wrote:
>
...
>
> Excellent point, Glen. I agree. But I would still say that the whole
> concept of the HUP applies to frequencies and times of matter,
> not of a signal in a computer. Do you get what I mean?
> --
I hear Heisenberg's Uncertainty Principle cited as an analog for the
time/frequency limitation, then abused by equating the two. To measure
the frequency of a pure 10 MHz signal (WWV's carrier, say) accurate to 0.1
Hz (which is consistent with what is transmitted), you need about 10
seconds of observation. Let the signal drift a bit, and it becomes
meaningless to ask what the frequency is precisely at time T exactly.
The product of the frequency and time uncertainties is a constant
analogous to h-bar. The analogy can be pushed too far, extending it to
areas where it doesn't apply without any flags being raised.

Jerry
--
Engineering is the art of making what you want from things you can get.


Stan Pawlukiewicz <stanp@nospam_mitre.org> writes:

> You must have a sign at the gate of your ivory tower that says "No Chaos
> Allowed." ;)

Oh, OK.  Chaos is allowed, but not KAOS.

Ciao,

Peter K.

--
Peter J. Kootsookos

"Na, na na na na na na, na na na na"
- 'Hey Jude', Lennon/McCartney


Randy Yates <yates@ieee.org> writes:

> This is part of my confusion - where exactly is such an uncertainty
> principle defined? Got a reference other than Clay's "The Transforms
> and Applications Handbook" edited by Alexander Poularikas?

In Leon Cohen's book [1], he devotes an entire chapter (chapter 3) to
the uncertainty principle and its proof.

His statement of it is:

Define the signal's duration, T, and bandwidth, B, implicitly as:

T^2 = \integral (t - <t>)^2 |s(t)|^2 dt

B^2 = \integral ( \omega - <\omega> )^2 |S(\omega)|^2 d\omega

Then

TB > 1/2

"Therefore, one cannot have or construct a signal for which both T and
B are arbitrarily small."
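A quick numerical check of this bound (a sketch, with an arbitrary Gaussian width; both densities are normalized to unit energy as Cohen's definitions require): a Gaussian pulse attains TB = 1/2 with equality.

```python
import numpy as np

def trap(y, x):
    # plain trapezoidal rule (sidesteps the np.trapz / np.trapezoid naming split)
    return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(x)))

sigma = 0.7                        # arbitrary pulse width for the sketch
t = np.linspace(-30.0, 30.0, 200001)
st2 = np.exp(-t**2/sigma**2)       # |s(t)|^2 for s(t) = exp(-t^2/(2*sigma^2))
pt = st2 / trap(st2, t)            # normalize to unit energy
T = np.sqrt(trap(t**2 * pt, t))    # duration: sigma/sqrt(2)

w = np.linspace(-30.0, 30.0, 200001)
Sw2 = np.exp(-sigma**2 * w**2)     # |S(w)|^2: a Gaussian transforms to a Gaussian
pw = Sw2 / trap(Sw2, w)
B = np.sqrt(trap(w**2 * pw, w))    # bandwidth: 1/(sigma*sqrt(2))

# T*B comes out at 1/2: the Gaussian meets the bound with equality
```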

Ciao,

Peter K.

[1] L. Cohen, "Time-Frequency Analysis", Prentice-Hall, 1995.

--
Peter J. Kootsookos

"Na, na na na na na na, na na na na"
- 'Hey Jude', Lennon/McCartney


"Peter J. Kootsookos" wrote:
>
> Randy Yates <yates@ieee.org> writes:
>
> > This is part of my confusion - where exactly is such an uncertainty
> > principle defined? Got a reference other than Clay's "The Transforms
> > and Applications Handbook" edited by Alexander Poularikas?
>
> In Leon Cohen's book [1], he devotes an entire chapter (chapter 3) to
> the uncertainty principle and its proof.
>
> His statement of it is:
>
> Define the signal's duration, T, and bandwidth, B, implicitly as:
>
> T^2 = \integral (t - <t>)^2 |s(t)|^2 dt
>
> B^2 = \integral ( \omega - <\omega> )^2 |S(\omega)|^2 d\omega

Peter,

Thank you for the reference and the explicit definition. What does
the notation "<t>" and "<\omega>" mean?
--
%% Fuquay-Varina, NC            %       'cause no one knows which side
%%% 919-577-9882                %                   the coin will fall."
%%%% <yates@ieee.org>           %  'Big Wheels', *Out of the Blue*, ELO


On 07 Jul 2003 16:59:30 +1000, p.kootsookos@remove.ieee.org (Peter J.
Kootsookos) wrote:

>eric.jacobsen@ieee.org (Eric Jacobsen) writes:
>
>>
>> So just where exactly did you catch up on so much 60s American TV?
>>
>
>70s Australia.
>
>I blame it on North American* cultural imperialism.
>
>Ciao,
>
>Peter K.
>

Hi Dr. K,
Let me get this straight.  Are you actually implying that
Get Smart was NOT every bit as good as the "The Prisoner"
television show?

[-Rick-]
Check out: http://www.wouldyoubelieve.com/



"Peter J. Kootsookos" <p.kootsookos@remove.ieee.org> wrote in message
news:s68he5zjik5.fsf@mango.itee.uq.edu.au...

> In Leon Cohen's book [1], he devotes an entire chapter (chapter 3) to
> the uncertainty principle and its proof.
>
> His statement of it is:
>
> Define the signal's duration, T, and bandwidth, B, implicitly as:
>
> T^2 = \integral (t - <t>)^2 |s(t)|^2 dt
>
> B^2 = \integral ( \omega - <\omega> )^2 |S(\omega)|^2 d\omega

Hello Peter et. al.,

I think I should mention here that the functions s(t) and S(omega) need to
be normalized to have unity energy.

The normalization just means that  integral |s(t)|^2 dt = 1
and integral |S(omega)|^2 d omega = 1

The normalization is done by simply dividing out the energy which is just

E = integral |s(t)|^2 dt

which by Bessel's form of Parseval's Identity

E = (1/(2pi)) integral |S(omega)|^2 d omega

Randy,

Basically what the uncertainty theorem says is that the product of the
standard deviations (one from the time domain and the other from the
frequency domain) is greater than or equal to 1/2 (finite energy case).
Equality occurs with Gaussian functions.

The standard deviation is a measure of the spread of the data. Exactly the
same is done in statistics except that the function in the Fourier case
needs to be magnitude squared. In statistics the p.d.f. is already a
normalized nonnegative real valued function.

The similarity with quantum mechanics stems from the mathematical
presentation. The difference stems from measurement and application.

Stan has mentioned that in QM, the act of measuring (although exactly what
constitutes a measurement is the topic of much debate) actually alters the
data itself. As weird as QM is, so far all conducted experiments agree with
theory. I can provide many references if you are interested in this.

To give an explicit idea of the difference: in QM the mathematical
representation of a measurement is performed by an operator. This operator
can look for position, momentum, energy, etc. Each operator will have
associated with it certain allowed states. The concept of measuring forces
the real world signal to be changed to be one of the allowed states for the
operator. No further measurements after the 1st measurement can reveal the
original state.

When we talk about a real world signal being sampled by an A/D, we will not
have the QM problem unless we are talking about incredibly weak signals.
So when I collect my data and analyze it for when the signal occurred, and
then later analyze the same signal for its frequency content, I won't have
the QM problem. Hence the time analysis doesn't change my signal, so my freq.
analysis is unaffected by the time measurement.

With the Fourier stuff, you can measure most reasonable signals without
affecting them in any significant way. Certainly if you have very small
signals down in the quantum domain, then quantum physics applies. So
this is our 1st difference. Second, when you go to measure frequency, you
won't just take the FT of some data and compute the mean and standard
deviation of the power spectrum to provide an estimate of the frequency.
However as Jerry mentioned, if you are collecting sample from a sine wave
that wanders a little, then you can apply this theorem to get an estimate
for the average frequency.
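That average-frequency estimate can be sketched as follows (a toy example; the tone frequency, sample rate and Hann window below are arbitrary choices). The first moment of the normalized power spectrum recovers the tone's frequency; the square root of the second central moment would likewise give the B of the uncertainty theorem.

```python
import numpy as np

fs, N = 1000.0, 4096
t = np.arange(N) / fs
x = np.hanning(N) * np.cos(2*np.pi*80.3*t)   # window to tame spectral leakage

P = np.abs(np.fft.rfft(x))**2                # one-sided power spectrum
f = np.fft.rfftfreq(N, 1.0/fs)
f_mean = np.sum(f * P) / np.sum(P)           # first spectral moment, in Hz
```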

In Peter's paper and in many schemes a model is proposed and then analysis
is applied. Look at Barry Quinn's book for many examples. I think you even
mentioned how a few samples of a sinusoid can be used to find the frequency
accurately. The major hurdle here is dealing with noise.

I think a good basis for understanding these ideas covers two areas of math.
The 1st is the theory of moments. A lot can be found in most statistics
books. Quantum and Classical Mechanics make much use of moments as well.

The second area has to do with statistical estimators, especially unbiased
ones where the Cramer-Rao bound applies.

I hope this clears some things up.

Clay

Example:

Let f(t) = exp(-a|t|)   a>0

Then F(w) = 2a/(a^2 + w^2)

The energy is 1/a

and both functions have zero means (makes the integrals easier)

so  T = sqrt(2)/(2a)

and B = a

thus

T*B = sqrt(2)/2

which is a little bigger than 1/2
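The example checks out numerically (a sketch with a = 1, an arbitrary choice; the normalization constants, including the 1/(2pi) from Parseval, divide out when each density is normalized to unit area):

```python
import numpy as np

def trap(y, x):
    # plain trapezoidal rule (sidesteps the np.trapz / np.trapezoid naming split)
    return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(x)))

a = 1.0
t = np.linspace(-40.0, 40.0, 400001)
ft2 = np.exp(-2*a*np.abs(t))            # |f(t)|^2 for f(t) = exp(-a|t|)
pt = ft2 / trap(ft2, t)                 # dividing by the energy, 1/a
T = np.sqrt(trap(t**2 * pt, t))         # -> sqrt(2)/(2a)

w = np.linspace(-500.0, 500.0, 1000001)
Fw2 = (2*a / (a**2 + w**2))**2          # |F(w)|^2
pw = Fw2 / trap(Fw2, w)
B = np.sqrt(trap(w**2 * pw, w))         # -> a (the 1/w^2 tail costs a little accuracy)

# T*B -> sqrt(2)/2, a little bigger than the 1/2 bound
```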



"Clay S. Turner" <physicsNOOOOSPPPPAMMMM@bellsouth.net> writes:

> I think I should mention here that the functions s(t) and S(omega) need to
> be normalized to have unity energy.

Yup!

Ciao,

Peter K.

--
Peter J. Kootsookos

"Na, na na na na na na, na na na na"
- 'Hey Jude', Lennon/McCartney


Stan Pawlukiewicz <stanp@nospam_mitre.org> writes:

> Does Leon Cohen say you can't tell the difference between
>    s1(t) = A sin( w t)  and
>    s2(t) = -A sin( w t)

As far as their time-bandwidth product is concerned, they are
identical, aren't they? (Given you haven't specified the duration of
either).

> Many estimation problems, particularly in parametric estimation, involve
> <t>  and <omega>, i.e. the location of the peaks, not the widths.

Yup.

> The  other oddity is that |s(t)|^2 and |S(w)|^2 are probabilistic
> abominations, they don't define meaningfull marginal or joint
> probability densities in time and frequency.

:-) Yes, although as Clay pointed out they need to be normalised to 1
so that they make more sense as densities.

Ciao,

Peter K.

--
Peter J. Kootsookos

"Na, na na na na na na, na na na na"
- 'Hey Jude', Lennon/McCartney


Stan Pawlukiewicz <stanp@nospam_mitre.org> writes:

> So  according to the uncertainty principle, antipodal modulation would
> have a problem.

As far as I can see, the uncertainty principle says nothing (directly)
about antipodal modulation.

> Would you expect t and f to form 2 independent variables?

Isn't that what the uncertainty principle is saying: that t and f are
NOT independent.

Ciao,

Peter K.

--
Peter J. Kootsookos

"Na, na na na na na na, na na na na"
- 'Hey Jude', Lennon/McCartney


Peter J. Kootsookos wrote:
> Stan Pawlukiewicz <stanp@nospam_mitre.org> writes:
>
>
>>So  according to the uncertainty principle, antipodal modulation would
>>have a problem.
>
>
> As far as I can see, the uncertainty principle says nothing (directly)
> about antipodal modulation.
>
>
>>Would you expect t and f to form 2 independent variables?
>
>
> Isn't that what the uncertainty principle is saying: that t and f are
> NOT independent.

No argument there,  but then a probability density of the form p(t,f)
would have some odd properties compared to say, a 2d iid probability
density. As an example p(t,f) may not exist, or be meaningful for all
t and f.  i.e.  all t,f,  -infinity < t < infinity and -infinity < f <
infinity.

Oddball probability densities like this exist in estimation theory as
well, such as the pdf of a sample covariance matrix where the number of
snap shots is smaller than the actual rank of the covariance.
Fortunately the multivariate characteristic function exists.

>
> Ciao,
>
> Peter K.
>



Stan Pawlukiewicz <stanp@nospam_mitre.org> writes:

> No argument there,  but then a probability density of the form p(t,f)
> would have some odd properties compared to say, a 2d iid probability
> density.

Well an iid pdf would just be

p(t,f) = p1(t)p2(f)

wouldn't it?

> As an example p(t,f) may not exist, or be meaningful for all t and
> f.  i.e.  all t,f, -infinity < t < infinity and -infinity < f <
> infinity.

All the uncertainty principle says in this context is that the two
"variances" (second order mean-corrected moments) of p(t,f) can't be
arbitrarily small if you're using the fourier transform to relate
them.

From the equations I wrote a while ago, I think this means

\integral over all t p(t,f) dt = |S(f)|^2
\integral over all f p(t,f) df = |s(t)|^2

and I suspect that several time-frequency representations exist for
which this is true. It was a while ago, but my Masters thesis tells me
that Born-Jordan-Cohen, Choi-Williams, Rihaczek, Page and Wigner-Ville
representations have this property (or something like it).

> Oddball probability densities like this exist in estimation theory as
> well, such as the pdf of a sample covariance matrix where the number of
> snap shots is smaller than the actual rank of the covariance.
> Fortunately the multivariate characteristic function exists.

Fortunately, I gave up such time-frequency representations as a bad
joke 1.4 decades ago.  Actually, if you look at the bottom left-hand
corner of page 1977 in

http://eprint.uq.edu.au/archive/00000152/01/00149998.pdf

you'll notice that I (and my co-authors) refer to the Wigner-Ville
representation as a W_a(n,k).

Ciao,

Peter K.

--
Peter J. Kootsookos

"Na, na na na na na na, na na na na"
- 'Hey Jude', Lennon/McCartney


PaavoJumppanen@iname.com (Paavo Jumppanen) wrote in message news:<1a72ed76.0307031513.35d46aeb@posting.google.com>...

> The problem is everything I read so far in this discussion seems to
> centre around the notion of a signal whos frequency does not change
> with time. If you can assume that then why use JTFA and then of course
> the "uncertainty principle" does not apply. But JTFA is for the case
> when you might have a signal like,
>
> v(t) = A sin (w(t)t)
>
> where w(t), the radian frequency, changes with time. In this case I
> don't see how you could possibly be able to resovle the signal exactly
> with only 2 sample points, complex or otherwise. You have to keep in
> mind that JTFA assumes non stationary frequency content otherwise
> what's the point of having time in it?

If you have a sine wave whose frequency changes with time, isn't the
correct way to express it v(t) = A sin (phi(t)), where phi(t) is
the running integral of w(t)?  I think there was a published paper by a
well-known professor at a US university in which wrong results were
obtained because he and his student used sin (w(t) t) instead of
sin(phi(t)).  I think there is a paper published in the mid-forties by
van der Pol that points out the error in using sin(w(t) t).


On 11 Jul 2003 01:55:24 -0700, vanamali@netzero.net (Vanamali) wrote:

>PaavoJumppanen@iname.com (Paavo Jumppanen) wrote in message news:<1a72ed76.0307031513.35d46aeb@posting.google.com>...
>
>> The problem is everything I read so far in this discussion seems to
>> centre around the notion of a signal whos frequency does not change
>> with time. If you can assume that then why use JTFA and then of course
>> the "uncertainty principle" does not apply. But JTFA is for the case
>> when you might have a signal like,
>>
>> v(t) = A sin (w(t)t)
>>
>> where w(t), the radian frequency, changes with time. In this case I
>> don't see how you could possibly be able to resovle the signal exactly
>> with only 2 sample points, complex or otherwise. You have to keep in
>> mind that JTFA assumes non stationary frequency content otherwise
>> what's the point of having time in it?
>
>If you have a sine wave whose frequency changes with time, isn't the
>correct way to express it v(t) = A sin (phi(t)), where phi(t) is
>the running integral of w(t)?  I think there was a published paper by a
>well-known professor at a US university in which wrong results were
>obtained because he and his student used sin (w(t) t) instead of
>sin(phi(t)).  I think there is a paper published in the mid-forties by
>van der Pol that points out the error in using sin(w(t) t).

You're correct. The "instantaneous frequency" of a signal is d(phi)/dt
where phi is the phase. I was surprised the first time I saw this (in
connection with linear-frequency chirps) but convinced myself that
this was the correct quantity to interpret as frequency.

- Randy
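A linear chirp makes the distinction concrete (the chirp parameters below are arbitrary): differentiating the running-integral phase recovers w(t) exactly, whereas differentiating w(t)*t would not.

```python
import numpy as np

w0, c = 2*np.pi*5.0, 2*np.pi*3.0    # arbitrary linear chirp: w(t) = w0 + c*t rad/s
t = np.linspace(0.0, 2.0, 200001)

phi = w0*t + 0.5*c*t**2             # phi(t) = running integral of w(t)
inst_w = np.gradient(phi, t, edge_order=2)   # d(phi)/dt recovers w(t) = w0 + c*t

# Differentiating w(t)*t instead would give w0 + 2*c*t: a chirp sweeping
# twice as fast, which is exactly the error in the sin(w(t)t) form.
```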



45 Replies
200 Views
