
### Energy in a signal as a function of sampling


```Hello again gurus,
assume we sample an analog signal: gi=g(i*dt).

The real energy of the signal is given by the integral of the square
of the signal:
E=integral[ |g(t)|^2*dt ]

However when we use n samples we approximate this integral with a
finite sum:
En=sum[ |gi|^2*dt ]

This sum should approach the integral as n increases:
lim n->infinity { En } = E

Since En != E there is an "error" associated with sampling a signal.

Assume we find the frequency spectrum of the signal:
G(f) = DFT[ g(t) ]

1.
How will this "error" appear in the Fourier space?

2.
We all know that when sampling we should follow Nyquist to avoid
aliasing:
dt<=1/(2*fmax)
Should we also study how the energy resulting from our sampling, En,
approaches E as a function of n,
and then choose n so large that En/E ~= 1?

Andreas Werner Paulsen
```
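The gap between E and En that the question describes is easy to see numerically. The sketch below is not from the thread; the Gaussian test signal and the [-8, 8] window are my assumptions, chosen so the exact energy is known in closed form:

```python
import numpy as np

# Hypothetical example: g(t) = exp(-t^2), whose exact energy is
# E = integral |g(t)|^2 dt = sqrt(pi/2).
E_exact = np.sqrt(np.pi / 2)

def riemann_energy(n, t_max=8.0):
    """Approximate E with n equally spaced samples of g on [-t_max, t_max]."""
    t, dt = np.linspace(-t_max, t_max, n, retstep=True)
    g = np.exp(-t ** 2)
    return np.sum(np.abs(g) ** 2) * dt

errors = [abs(riemann_energy(n) - E_exact) for n in (16, 64, 256)]
print(errors)  # the error shrinks as the grid gets finer
```

How fast the sum converges is governed by how well the grid resolves the signal, which is exactly the point the replies below argue about.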

```
Andy365 wrote:

> Hello again gurus,

No gurus.
DSP = Dipshits, Stupidents and Posers (c) Jacobsen.

> assume we sample an analog signal: gi=g(i*dt).
>
> The real energy of the signal is given by the integral of the square
> of the signal:
> E=integral[ |g(t)|^2*dt ]
>
> However when we use n samples we approximate this integral with a
> finite sum:
> En=sum[ |gi|^2*dt ]
>
> This sum should approach the integral as n is increasing:
> lim n->infinity { En } = E
>
> Since En != E there is an "error" associated with sampling a signal.

This is what the aliasing error is about.
Finite n -> finite length -> infinite spectrum -> aliasing.

> Assume we find the frequency spectrum of the signal:
> G(f) = DFT[ g(t) ]
>
> 1.
> How will this "error" appear in the Fourier space?

As the classic set of the Fourier artifacts due to the finite length of
the transform window.

> 2.
> We all know that when sampling we should follow Nyquist to avoid
> aliasing:
> dt<=1/(2*fmax)

I don't know what you all know.

> Should we also study how the energy resulting from our sampling: En
> approaches E as a function of n,
> and then choose a sampling so large that En/E ~= 1?

It depends on what you have, what you need, and what you are
trying to do.

>
> Andreas Werner Paulsen

Stupident.

VLV

```

```On Mar 24, 11:14 am, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
> Andy365 wrote:
> > Hello again gurus,
>
> No gurus.
> DSP = Dipshits, Stupidents and Posers (c) Jacobsen.
>
....
>
> Stupident.

i guess i'm a poser.

r b-j
```

```On Mar 24, 10:45 am, Andy365 <andreas_w_paul...@yahoo.com> wrote:
> Hello again gurus,

"Ooooommmm..."

> assume we sample an analog signal: gi=g(i*dt).
>
> The real energy of the signal is given by the integral of the square
> of the signal:
> E=integral[ |g(t)|^2*dt ]
>
> However when we use n samples we approximate this integral with a
> finite sum:
> En=sum[ |gi|^2*dt ]
>
> This sum should approach the integral as n is increasing:
> lim n->infinity { En } = E

one thing that you have to think about a little is dimensional
analysis.  you must compare apples to apples and your En is in terms
of volts^2 (assuming g_i or g(t) is in volts) and E is in volt^2 *
time.  not the same species of animal.

> Since En != E there is an "error" associated with sampling a signal.
>
> Assume we find the frequency spectrum of the signal:
> G(f) = DFT[ g(t) ]
>

i don't think that the *discrete* FT operates on the continuous-time
g(t).

> 1.
> How will this "error" appear in the Fourier space?
>
> 2.
> We all know that when sampling we should follow Nyquist to avoid
> aliasing:
> dt<=1/(2*fmax)
> Should we also study how the energy resulting from our sampling: En
> approaches E as a function of n,
> and then choose a sampling so large that En/E ~= 1?
>
> Andreas Werner Paulsen

```

```On Mar 24, 4:14 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
....
> This is what the aliasing error is about.
> Finite n -> finite length -> infinite spectrum -> aliasing.
....
> It depends on what you have, what you need, and what you are
> trying to do.

So you are saying that the difference in energy between the continuous
and the discrete case is the same effect as aliasing?
A practical problem with Nyquist is that we need to know fmax
beforehand (e.g. maybe the signal was lowpass filtered to fmax by an
analog process).
If I have an analog signal with an unknown fmax, could I not then use
a plot of En(n) to determine an appropriate n?
As n -> infinity, En would approach E. From such a plot I may e.g.
conclude that n=1024 samples would be sufficient, since E1024 seemed to
be very close to the asymptotic E.

Andreas W. P.

```

```On Mar 24, 4:28 pm, robert bristow-johnson <r...@audioimagination.com>
wrote:
....
> one thing that you have to think about a little is dimensional
> analysis.  you must compare apples to apples and your En is in terms
> of volts^2 (assuming g_i or g(t) is in volts) and E is in volt^2 *
> time.  not the same species of animal.
....
> i don't think that the *discrete* FT operates on the continuous-time
> g(t).

Sorry Robert,
should be
Gi=G(fi) = DFT[ gi=g(ti) ]

When it comes to dimension:
no, both the integral and the sum have the dimension: Volt^2*time.

A.W.P.
```

```On Mar 24, 9:45 am, Andy365 <andreas_w_paul...@yahoo.com> wrote:

> The real energy of the signal is given by the integral of the square
> of the signal:
> E=integral[ |g(t)|^2*dt ]

OK.  The signal is rect(t) which has value 1 for |t| < 0.5
and value 0 for |t| > 0.5.  Thus, it is easy to compute
that E = 1.

>
> However when we use n samples we approximate this integral with a
> finite sum:
> En=sum[ |gi|^2*dt ]
>
> This sum should approach the integral as n is increasing:
> lim n->infinity { En } = E

OK.  Let's sample rect(t) at n points spaced 0.3 apart, i.e.
at t = ..., -0.6, -0.3, 0, 0.3, 0.6, ... etc.  So, En = 3
as n increases and so the limit does not approach E = 1; the
limit is 3.

Oh, you mean the n points are spaced closer and closer together
as n increases?  Well, in that case, more and more points fall
in the interval (-0.5,0.5) and so En is an increasing function
of n that does not converge to a limit.

So, how exactly are you doing the sampling, and in what sense
does sum[ |gi|^2*dt ], which I assume means sum[ |g(t_i)|^2 ],
converge to E?  [Paying attention to the details of the Delta's
might help here.]

--Dilip Sarwate

```
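Dilip's rect(t) example is easy to reproduce. The sketch below is mine, not from the post; it keeps a fixed sample spacing dt, includes the dt factor in the sum, and shows that the answer depends on exactly where the grid points fall relative to the edges of the pulse:

```python
import numpy as np

def rect_energy(dt, t_lim=2.0):
    """Riemann-style energy of rect(t) sampled on the grid t = k*dt.

    rect(t) = 1 for |t| < 0.5, else 0 (the convention Dilip uses).
    """
    k = np.arange(-int(t_lim / dt), int(t_lim / dt) + 1)
    g = (np.abs(k * dt) < 0.5).astype(float)
    return np.sum(g ** 2) * dt

# Grid placement matters: all three spacings are "fine", yet the sums
# straddle the true energy E = 1 differently.
print(rect_energy(0.3), rect_energy(0.2), rect_energy(0.15))
```

With dt = 0.3 only three grid points land inside the pulse, so the sum undershoots; with dt = 0.15 seven points land inside and it overshoots. This is the detail-of-the-sampling question Dilip is pressing on.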

```On Mar 24, 5:12 pm, dvsarwate <dvsarw...@yahoo.com> wrote:
....
> So, how exactly are you doing the sampling, and in what sense
> does sum[ |gi|^2*dt ], which I assume means sum[ |g(t_i)|^2 ],
> converge to E?
>
> --Dilip Sarwate

Hello Dilip,
in the case of rect(t) the sum goes from t=-0.5 to 0.5 over n samples,
so: dt=(0.5-(-0.5))/n = 1/n.
Therefore:
En = sum[ |1|^2*1/n ] = 1 for every n.

A.W.P.

```

```On Mar 24, 12:38 pm, Andy365 <andreas_w_paul...@yahoo.com> wrote:

> Hello Dilip,
> in the case of rect(t) the sum goes from t=-0.5 to 0.5 over n samples,
> so: dt=(0.5-(-0.5))/n = 1/n.
> Therefore:
> En = sum[ |1|^2*1/n ] = 1 for every n.
>
> A.W.P.

But n samples have n-1 spaces between them, so dt = 1/(n-1)
and so En = n/(n-1) and converges to 1 from above. You
really need to pay attention to the details if you are
going to be asking questions like the ones you raised in
your original post, where you didn't bother to define dt,
seemed unaware of the difference between the DFT and the
continuous-time Fourier transform, and so on.

--Dilip Sarwate

```
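Dilip's counting argument can be checked directly. The helper below is hypothetical (not from the post), but it implements exactly his bookkeeping: n samples of rect(t) on [-0.5, 0.5] leave n-1 gaps, so dt = 1/(n-1):

```python
# Sketch of Dilip's counting argument: n equally spaced samples on
# [-0.5, 0.5] leave n-1 gaps, so dt = 1/(n-1) and En = n * dt.
def En(n):
    dt = 1.0 / (n - 1)
    samples = [1.0] * n          # rect(t) = 1 across the whole interval
    return sum(s ** 2 for s in samples) * dt

print(En(101))   # ~1.01: En = n/(n-1) converges to E = 1 from above
```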

```On Mar 24, 7:00 pm, dvsarwate <dvsarw...@yahoo.com> wrote:
....
> But n samples have n-1 spaces between them, so dt = 1/(n-1)
> and so En = n/(n-1) and converges to 1 from above.
....
> --Dilip Sarwate

Hello Dilip,
yes, you are right.
So from the above, E101 = 101/100 = 1.01.
This is 1% more energy than the continuous case (E=1).
Since this is a very small "error", can we conclude that choosing n =
101 samples would suffice?
How does this "error" relate to aliasing?
To me it seems that Vladimir advocates that these are the same thing.
I do not understand how this can be, since Nyquist specifies a sampling
rate above which there will be no aliasing,
whereas the "energy error" (En/E - 1) will never fully vanish.

Andreas Werner Paulsen

```

```On Mar 24, 11:48 am, Andy365 <andreas_w_paul...@yahoo.com> wrote:
....
> Sorry Robert,
> should be
> Gi=G(fi) = DFT[ gi=g(ti) ]
>
> When it comes to dimension:
> no, both the integral and the sum have the dimension: Volt^2*time.

but you have done nothing about the dimensional difference.

hint:  somewhere in the mix you will need the sampling period (which
is the reciprocal of the sampling frequency) which has dimension of
time.

another hint, g(t) need not have dimension of voltage, i used that for
illustration only.  but, i think (unless you toss in the V_ref of the
A/D converter), g(t) and g[i] have the same dimension (although their
arguments do not).

r b-j

```

```On 03/24/2011 07:45 AM, Andy365 wrote:
> Hello again gurus,

Who?

> assume we sample an analog signal: gi=g(i*dt).
>
> The real energy of the signal is given by the integral of the square
> of the signal:
> E=integral[ |g(t)|^2*dt ]

Actually, the real energy of the signal is unknown.  If the signal is an
amplitude, and if it is being applied to a load that acts as a pure
impedance (i.e., if the signal is volts being applied to a known
resistance) then we can compute the _real_ energy ("real" in this case
meaning "actual", not "the non-imaginary part of a complex number").

But that's often the case, so we can fudge it and call 'E' the energy of
the signal.

> However when we use n samples we approximate this integral with a
> finite sum:
> En=sum[ |gi|^2*dt ]

Taking 'dt' as 'delta t', or the sampling interval, which I'll call 'Ts'
-- yes.  But the term 'dt' from calculus has no meaning here.

So I would recast this as

gi(n) = g(Ts * n),
En = sum {gi^2 * Ts}

> This sum should approach the integral as n is increasing:
> lim n->infinity { En } = E

Not without putting restrictions on n that affect Ts.  I would say
rather that the sum should approach the integral as the sampling
interval Ts decreases:

lim Ts -> 0 of sum over n {gi(n)^2 * Ts} = E

> Since En != E there is an "error" associated with sampling a signal.

Under what conditions is En != E?  Is this necessarily so for all Ts and
all g(t)?

> Assume we find the frequency spectrum of the signal:
> G(f) = DFT[ g(t) ]
>
> 1.
> How will this "error" appear in the Fourier space?
>
> 2.
> We all know that when sampling we should follow Nyquist to avoid
> aliasing:
> dt<=1/(2*fmax)
> Should we also study how the energy resulting from our sampling: En
> approaches E as a function of n,
> and then choose a sampling so large that En/E ~= 1?

Here we make the jump that beginning students of DSP make: we forget
that there are four different flavors of Fourier transform, and we use
the one that gets stressed in practical DSP, to the exclusion of all others.

g(t) is a continuous function of time, and therefore cannot have its
DFT taken.  Further, since you haven't put restrictions on it, _I'm_
going to assume that g(t) is defined over infinite time.  Thus, we can
take its _Fourier transform_:

G(omega) = F{g(t)}

gi(n) is not a continuous function of time -- it's a function of the
discrete index n.  Thus, we can't take its (continuous time) Fourier
transform.  We can, instead, take its discrete-time Fourier transform.
For lack of better notation, I'll also define this as F{gi}:

Gi(w) = F{gi(n)}

Here we run into some notational difficulty: omega and w are different
critters, existing in different spaces and having different units.  The
frequency variable omega has units radians/sec, and is defined over the
entire real line.  Conversely, the frequency variable w has units of
radians, and can be taken as being defined over any segment of the real
line that is 2*pi radians in extent, but after that we find that Gi(w)
is periodic.

But, _if_ G(omega) is zero for all |omega| >= omega_0,
and _if_ omega_0 <= pi/Ts, then we find (possibly with some scaling
differences) that Gi(omega * Ts) = G(omega).

Then I note without proof (it'll be in your books) that Parseval's
theorem works for the discrete-time Fourier transform just as it does
for the continuous-time Fourier transform.  Thus (again with scaling
difficulties, because I can never remember where to put my factors of pi
without reference to books, and I'm lazy):

int from -infinity to infinity (G(omega)^2 domega) =
K int from -infinity to infinity (g(t)^2 dt)

and

int from -pi to pi (Gi(w)^2 dw) =
K sum from n = -infinity to infinity {gi(n)^2}

Now you're just a factor of Ts, and maybe the odd 2*pi, from finding out
that _for a bandlimited signal_, _sampled above Nyquist_, _properly
scaled_, E = En.

Unless I got my math wrong, and I've done enough free work for one day!

--

Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html
```
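Tim's closing Parseval claim can be sanity-checked with the DFT, which samples the DTFT he is using; for a finite sequence the discrete Parseval relation is exact. The example code is mine, not Tim's:

```python
import numpy as np

# A checkable stand-in for Tim's DTFT statement: for a finite sequence
# the DFT obeys Parseval's relation exactly,
#   sum_n |x[n]|^2 == (1/N) * sum_k |X[k]|^2,
# so the time- and frequency-domain energies agree with no error beyond
# round-off (the "factor of K" Tim mentions is the 1/N here).
rng = np.random.default_rng(0)
x = rng.standard_normal(256)

X = np.fft.fft(x)
time_energy = np.sum(np.abs(x) ** 2)
freq_energy = np.sum(np.abs(X) ** 2) / len(x)
print(np.isclose(time_energy, freq_energy))  # True
```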

```On Mar 24, 9:46 pm, Tim Wescott <t...@seemywebsite.com> wrote:
....
> Now you're just a factor of Ts, and maybe the odd 2*pi, from finding out
> that _for a bandlimited signal_, _sampled above Nyquist_, _properly
> scaled_, E = En.

Thank you Tim for a very well written answer!
(I wish I had been as precise in my initial post.)

Andreas
```

```You have ignored a basic requirement for a sampled signal. To wit: the
sample rate must exceed twice the highest frequency present in the sampled
signal. If that requirement is met, there is no loss of information. All
that can be learned about the original signal can be learned from the
samples.

To be sure, there are circumstances that don't need complete information.
Servo systems can gain more from eliminating the delay of a proper
anti-alias filter than they lose from the aliasing incurred. The questions
you ask aren't appropriate to those circumstances.

Jerry
--
Engineering is the art of making what you want from things you can get.
```

```On Mar 24, 4:46 pm, Tim Wescott <t...@seemywebsite.com> wrote:
....
> Then I note without proof (it'll be in your books) that Parseval's
> theorem works for the discrete-time Fourier transform just as it does
> for the continuous-time Fourier transform.  Thus (again with scaling
> difficulties, because I can never remember where to put my factors of pi
> without reference to books, and I'm lazy):

the solution to the problem of remembering when and where to put the
factors of pi or 2*pi is to use the "unitary, ordinary frequency"
convention of the FT:

http://en.wikipedia.org/wiki/Fourier_transform#Other_conventions

you need to remember the 2*pi that goes with "f".  use df and dt.  the
inverse FT is just like the forward FT (the -j or +j is of no
consequence).  there are no external factors for either the
convolution theorem or Parseval's.  and using "duality" (like what is
the FT of the sinc function?) is trivial.
```
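rbj's "ordinary frequency" convention can be demonstrated numerically: approximate G(f) = integral g(t) e^{-j 2 pi f t} dt by dt times the DFT, and Parseval balances with no stray factors of 2*pi. The signal and grid below are my assumptions for illustration:

```python
import numpy as np

# With the unitary, ordinary-frequency convention, the DFT approximates
# the continuous spectrum as G(f_k) ~= dt * X[k], with bin width
# df = 1/(N*dt). Then sum|g|^2*dt should equal sum|G|^2*df exactly.
dt, N = 0.01, 1024
t = np.arange(N) * dt
g = np.exp(-((t - 5.0) ** 2))        # arbitrary well-contained pulse

E_time = np.sum(np.abs(g) ** 2) * dt

G = dt * np.fft.fft(g)               # samples of G(f) on the f_k grid
df = 1.0 / (N * dt)
E_freq = np.sum(np.abs(G) ** 2) * df

print(np.isclose(E_time, E_freq))    # True: no external 2*pi anywhere
```

The balance is exact (it reduces to the DFT's own Parseval relation), which is rbj's point: with "f" and dt/df carried consistently, the bookkeeping takes care of itself.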

```On Mar 24, 10:45 am, Andy365 <andreas_w_paul...@yahoo.com> wrote:
....
> Should we also study how the energy resulting from our sampling: En
> approaches E as a function of n,
> and then choose a sampling so large that En/E ~= 1?
>
> Andreas Werner Paulsen

Hello Andy,

There are several ways of looking at this. If you assume your signal
was bandlimited before sampling, then you have all of the data needed
to find your integral exactly. Think of your signal as being a linear
combination of time-shifted sin(x)/x functions. Then write down your
integral of this composite continuous function in terms of the
individual sin(x)/x functions, and your integral now becomes a simple
sum. The errors you see stem from the sin(x)/x functions extending to
infinity in both directions on the x axis, so if you are finding a
definite integral then your sin(x)/x functions are truncated. Well,
these integrals may be precomputed, and thus in terms of your sum
1,1,1,1,1,1, you end up with the weights near the center of your
interval being nearly equal to one and the ones near the edge being
close to 0.5.

IHTH,

Clay

```
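Clay's first observation — that for a bandlimited signal the energy sum is not an approximation at all, only the truncated tails cost anything — can be illustrated numerically. The test signal below is my choice, not Clay's: g(t) = sinc(t)^2 is bandlimited and has exact energy E = integral sinc(t)^4 dt = 2/3:

```python
import numpy as np

# Sketch of Clay's point: for a bandlimited g(t), sampled densely
# enough, sum |g(n*dt)|^2 * dt equals the energy integral exactly;
# the only error left is from truncating the infinite tails.
# Assumed example: g(t) = sinc(t)^2 (np.sinc is the normalized sinc),
# whose exact energy is 2/3.
dt = 0.25
t = np.arange(-400, 400) * dt          # wide window to tame the tails
g = np.sinc(t) ** 2
En = np.sum(g ** 2) * dt

print(abs(En - 2.0 / 3.0) < 1e-4)      # True: only tiny truncation error
```

The residual error here is the truncation Clay describes, decaying like the tail of 1/t^4; widening the window shrinks it further, while refining dt alone gains nothing once the Nyquist condition is met.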

```
robert bristow-johnson wrote:

> On Mar 24, 11:14 am, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
>
>>DSP = Dipshits, Stupidents and Posers (c) Jacobsen.
>
> i guess i'm a poser.
>
> which one are you, Vlad?

Don't flatter yourself, Robert.

VLV
```

```On 3/24/2011 7:45 AM, Andy365 wrote:
....
> Since En != E there is an "error" associated with sampling a signal.
....
> 1.
> How will this "error" appear in the Fourier space?
....
> Andreas Werner Paulsen

Andreas,

Great question!  I don't have a "pat" answer but I'll try anyway:

Vlad makes a point re: aliasing but I think the issue is much greater
than that:

Think of the function as continuous in time and continuous in
frequency.  Then multiply by a regular infinite sequence of
unit Diracs (impulse functions) in time.  Depending on the weighting of
the Diracs, the integral can be held constant before and after.  How you
get the weights is an exercise for the student I guess (meaning that I
don't know).  The convention is to weigh them according to the value of
the function being multiplied.

Now assume the function is bandlimited (i.e. absolutely lowpassed) in
continuous frequency.  Then multiply by a regular infinite sequence of
Diracs (impulse functions) in time - which makes the frequency go
periodic.  Shannon helped us figure out that the temporal sequence can
be perfectly interpolated with sincs centered on the samples... which
is the same as convolving with a sinc and is the same as applying a
perfect brick-wall lowpass filter.
As each sinc is weighted by the aligned sample, and because of the
perfect reconstruction, the result is the same as the original - so
energy is preserved.  Then you can figure out the relationship between
the integral of a sinc and this result.

Then (I hope not) one can argue about Diracs, and unit samples, etc. etc.

N doesn't have to increase to go through these steps because it's
already infinite in this construction.

A separate and important question is:

"How did you set the limits of integration in your first equation?"
If the limits of integration were infinite then the result will be
infinite in most cases.  Reducing the limits of integration reduces the
"window" in which power is summed.  That doesn't make it wrong at all -
it's just defined.  So, in the sampled case, increasing N doesn't make
the energy measure "more correct".  But, it may make the "average power"
measure "more correct" if you define the objective accordingly.

Fred

```

```On Mar 24, 4:45 pm, Andy365 <andreas_w_paul...@yahoo.com> wrote:
....
> Assume we find the frequency spectrum of the signal:
> G(f) = DFT[ g(t) ]
>
> 1.
> How will this "error" appear in the Fourier space?

It doesn't. The DFT is exact, to within the usual issues regarding
numerical round-off, etc.

> 2.
> We all know that when sampling we should follow Nyquist to avoid
> aliasing:

No, we don't. Nyquist told us how to *reduce* aliasing.
One can never remove it completely.

> dt<=1/(2*fmax)
> Should we also study how the energy resulting from our sampling: En
> approaches E as a function of n,
> and then choose a sampling so large that En/E ~= 1?

No. One should choose a sampling frequency that

1) Allows one to do whatever one likes to do, to the signal
2) Is economically and practically feasible

Prioritized in that order.

Rune
```


Interrupt signal sampling (Level or edge?)
Hi, What is the fair criteria for sampling the interrupt signal? For high priority critical interrupts and low priority interrupts, when should i decide to configure them to be level triggered or edge triggered. Thanks Ashish It depends on the hardware that generates the interrupts, and nothing else. If the interrupt is held until the service routine removes it, use level. If the interrupt "just happens", and the device removes it automatically, use edge. Priority is a different issue. Thanks. This question came from other perspective, there are chances that noise might ca...

s-function: multiple I/O signals
Hi. Just start writing some s-functions in simulink. A Q: how to manage multiple input signals: MUX-block? DEMUX -block for output signals? Or are there any other technique. - BG -- Sven-Erik Tiberg Lulea Sweden ...

DR: Sampling and Reconstruction of Non-Bandlimited Signals
Part 2: Exponentially Decaying Sinusoids In Noise Hello everyone, a recent post of Jerry's has inspired a new DSP riddle (DR). Here goes: Consider a signal f(t) defined by f(t) = sum_{k=0}^{N-1} A_k exp(-mu_k t) sin(w_k t + p_k), t >= 0, (1) ie. a finite sum of weighted, exponentially decaying sinusoids. We know that f has this form, and that w_k <= w for all k, where w is some known value. We don't know N nor the vectors A, mu, w and p. We can sample this function using any sampling period T > 0 of our choice, but we only get a noisy measurement. This means that we only...

folding function calls which may signal errors
Hi, Is it OK to compile (defun foo (a b) (sbit a b) t) into code equivalent to (defun foo0 (a b) t) as long as the safety setting is not 3? foo and foo0 are semantically equivalent as long as sbit does not signal errors (which it probably would if called with bad arguments). sbit is not documented to have any "Exceptional Situations". 1.4.4.3 says that "Except as explicitly specified otherwise, the consequences are undefined if these type restrictions [i.e., those stemming from arg type descriptions] are violated." thus the answer appears to be "yes". Note that...

Max Sample Rate for Signal Tap in Altera Quartus?
Hello, does anyone know what is the max Sample Rate used in the Signal Tap utility under Altera's Quartus? Thanks, joe Hi Joe, Since SignalTap is just like any other circuit in your FPGA, the answer would be the same as for, "How fast can a circuit run in my device?" It entirely depends on what else is part of the FPGA design, on what embedded memory SignalTap uses for buffers, how many signals you are watching, the device itself, and so on. SignalTap can slow down, or be slowed down by, the circuit it is inspecting. Practically, I haven't had much problem with SignalTa...

s-func: Getting Input signal into Initialisation functions
Hi all, I've written a S-function block that works fine. Currently I have a parametre that is used in the Initisation functions to set the sizes of vectors etc. This being set in the window that opens when you click on the block. I would prefer to be able to set this parametre as an input instead. Basically because the same parameter (number) is used in alot of different blocks in the same model. If I could link them to the same constant input then I just have to change the number once instead of having to change it in all the blocks when I run a new simulation. Does that make sense? Is ...

about the function "spgrambw" to calculate spectrogram of speech signals
Hi, I am doing the research using function "spgrambw" to do the speech signal processing, it is from the speech signal processing toolbox, but there is no discription about the parameters, would you tell me what is BW and FMAX?? ---------------------------- [tt,f,b]=spgrambw(data,fs,bw,fmax) ---------------------------- Thanks so much: ) delinda he wrote: > Hi, I am doing the research using function "spgrambw" to do the speech signal processing, > it is from the speech signal processing toolbox, but there is no discription about the > parameters, would you tell...

Max Sample Rate using Signal Tap in Quartus 6.0?
Hello, I have a question concerning the maximum sample rate using Quartus' Signal Tap, I've heard that the maximum rate is 270Mhz, is that correct? Also, if you wanted to use a higher sample clock would you simply use a PLL to generate the clock and then route the clock to an output pin, which I think is a requirement of Signal Tap? If anyone has some experience using Signal Tap, could you share your thoughts on the matter. Thanks, joe ...

How can I use a digital signal to trigger a linear encoder to start and stop sampling?
I have a 6601 PCI card and am recieving a digital signal from the z-index on a rotary encoder and a count edges signal from a linear encoder.&nbsp; I want to use the digital signal to trigger the linear encoder to start and stop sampling from the linear encoder.&nbsp; I want to use the DAQmx read counter mutliple samples with a specified number of samples per channel and the DAQmx timing (sample clock) function to specify the sampling rate.&nbsp; I am having trouble how to use the DAQmx trigger function to start and stop the DAQmx read function.&nbsp; If you have any suggestion...

Another sample, Another possible "component" or "user defined function"
For other reasons, I was looking at the presentation foils for a SHARE (IBM user group) presentation "Got COBOL? Bet you didn't know you could do THIS in COBOL" at: http://www-01.ibm.com/support/docview.wss?uid=swg27009580&aid=1 In it (among other things) is a sample of how to convert an input string to "printable Hex". I decided to take that code and come up with a possible "component" to answer how to do this. I know that we have been asked how to do this in CLC on several occasions. I hadn't stored any of the sample, so I just went ...

function Object,function statement,function operator
I am so confused with these three concept,who can explained it?thanks so much? e.g. var f= new Function("x", "y", "return x * y"); function f(x,y){ return x*y } var f=function(x,y){ return x*y; } alex schreef: > I am so confused with these three concept,who can explained it?thanks > so much? > e.g. > var f= new Function("x", "y", "return x * y"); > > function f(x,y){ > return x*y > } > > var f=function(x,y){ > return x*y; > } > The new Function syntax has a bad performance and shouldn'...

Function return function???
function F(e) { return function(){P(e)} } Can anybody tell me what the code is doing? If return another function all in a function I would do function F(e) { return P(e) } Is it defining another function? I don't recall other languages can define another function inside one function. Why would the programmer do this? "strout" <zjjiang@hotmail.com> wrote in message news:1109273953.202236.117440@o13g2000cwo.googlegroups.com... > > If return another function all in a function I would do > function F(e) > { > return P(e) > } That returns the *result* of ...

Function stopping a function
Hi All, We all know that a function can launch the execution of another function. How can a function stop the execution of another function? For instance, lenghty_function() executes, when an external event triggers cancel(), which is supposed to abruptly stop lengthy_function(), reset some variables and exit immediately. Thanks for your advice SxN ____________________________________________________________________________________ Be a better sports nut! Let your teams follow you with Yahoo Mobile. Try it now. http://mobile.yahoo.com/sports;_ylt=At9_qDKvtAbMuh1G1SQtBI7ntAcJ On ...

(new Function) as function.
How can I create a function nF such that for example var sum = nF('a','b','return a+b'); and var sum = new Function('a','b','return a+b'); do the same? function nF(){ return new Function(arguments); } // won't work. because Function expects first argument to be a string, not an agruments list. function nF(){ return (new Function).apply(null,arguments); } // won't work. (new Function) is not a function. var nF = Function; // wont' work. HopfZ wrote: > How can I create a function nF such that for example > var sum = nF(...