COMPGROUPS.NET

### Q: Approximating signals by shift and summation of other signals?


```Dear folks,

Given a function g(x) and a set of functions f_1(x),...,f_n(x), my
goal is to approximate g(x) by a (non-weighted) sum of shifted f's.
Formally:

min  int( (g(x) - sum_i f_i(x+t_i))^2 )

where int is the integral with respect to x from -infinity to
+infinity, and the minimization is over the shift variables
t_1,...,t_n.

All functions are real valued and have compact support (i.e. are zero
valued beyond some bounded interval).

1. I believe this problem should arise a lot in signal processing.
Does it have a special name? Are there representative references
discussing it?

2. What is the best way to compute the t_i's? A naive way, of course,
is gradient descent, but I was wondering if there is a smarter way
that comes with some "performance guarantee". I suspect gradient
descent will do poorly, as the cost function seems to have many local
minima.

Regards

Golabi
```
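The objective above can be written down as a minimal discrete sketch, assuming all signals are sampled on one uniform grid and that the compact supports (plus shifts) fit inside the grid, so a circular shift can stand in for a true shift; the function name `objective` and the integer-sample shifts are illustrative, not from the thread:

```python
import numpy as np

def objective(g, fs, shifts):
    """Discrete stand-in for  int( (g(x) - sum_i f_i(x + t_i))^2 ) dx.

    g      : 1-D array sampling g(x) on a uniform grid
    fs     : list of 1-D arrays sampling each f_i on the same grid
    shifts : integer sample shifts t_i; f_i(x + t_i) becomes roll(f_i, -t_i)
    """
    total = np.zeros(len(g))
    for f, t in zip(fs, shifts):
        # circular shift; valid while the support stays inside the grid
        total += np.roll(f, -t)
    return float(np.sum((g - total) ** 2))
```

Note the sign convention: with `np.roll(f, -t)` a positive t moves f to the left, matching f(x + t) in the post.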
Reply golabidoon (10) 8/29/2009 7:19:36 AM


```Golabi Doon wrote:

> Given a function g(x) and a set of functions f_1(x),...,f_n(x),
> my goal is to approximate g(x) by a (non-weighted) sum of shifted
> f's.

Look into Matching Pursuit.

Martin

--
Quidquid latine scriptum est, altum videtur.
```
Reply martin.eisenberg (676) 8/29/2009 10:59:23 AM

On Aug 29, 5:59 am, Martin Eisenberg <martin.eisenb...@udo.edu> wrote:
> Look into Matching Pursuit.
>
> Martin

Martin,

Thank you for the response. Matching pursuit, however, deals only with
the summation part; there is no "shifting" of functions involved. I
mean that it does not try to find the optimal shift of each function.
So I need something more capable than matching pursuit.

Thanks

Golabi
```
Reply golabidoon (10) 8/29/2009 11:19:06 AM

```Golabi Doon wrote:

> Thank you for the response. Matching pursuit, however, deals
> only with the summation part; there is no "shifting" of
> functions involved. I mean that it does not try to find the
> optimal shift of each function. So I need something more
> capable than matching pursuit.

You can put all possible shifts in the MP dictionary, and with
careful code design only one instance of the base data needs to take
up storage.

Martin

--
Quidquid latine scriptum est, altum videtur.
```
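Martin's suggestion can be sketched concretely: the inner product of the residual with every circular shift of f is just a cross-correlation, so a greedy matching-pursuit-style pass can pick the best shift per atom without ever materializing the shift-augmented dictionary. A sketch under those assumptions, with unit (non-weighted) coefficients per the original question; `best_shift` and `greedy_shifts` are made-up names:

```python
import numpy as np

def best_shift(residual, f):
    """All circular shifts of f form the implicit dictionary; since
    ||roll(f, s)||^2 is the same for every s, maximizing the inner
    product <residual, roll(f, s)> minimizes ||residual - roll(f, s)||^2."""
    n = len(residual)
    # O(n^2) correlation for clarity; an FFT would do this in O(n log n)
    scores = [np.dot(residual, np.roll(f, s)) for s in range(n)]
    return int(np.argmax(scores))

def greedy_shifts(g, fs):
    """One greedy pass: place each f_i at the shift that best matches
    the current residual, then subtract it with unit weight."""
    residual = g.astype(float)
    shifts = []
    for f in fs:
        s = best_shift(residual, f)
        shifts.append(s)
        residual = residual - np.roll(f, s)
    return shifts, residual
```

Like plain matching pursuit, this is greedy and carries no global optimality guarantee; it recovers the shifts exactly when the shifted atoms do not overlap.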
Reply martin.eisenberg (676) 8/29/2009 9:07:37 PM

```
Golabi Doon wrote:
> 1. I believe this problem should arise a lot in signal processing.
> Does it have a special name? Are there representative references
> discussing it?

What you describe is a typical problem in audio and video
compression.

> 2. What is the best way to compute the t_i's? A naive way, of
> course, is gradient descent, but I was wondering if there is a
> smarter way that comes with some "performance guarantee". I suspect
> gradient descent will do poorly, as the cost function seems to have
> many local minima.

Unless the f[i] can be made orthogonal, this is an NP-complete
problem, so you have to do a full search over all of the shift
variables. The typical sub-optimal approaches try to speed up the
search using features of g(x) and the f[i].
You might want to look into VSELP and G.729.A for the details.

DSP and Mixed Signal Design Consultant
http://www.abvolt.com
```
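The "full search over all shift variables" mentioned above can be written down directly; the point of this sketch is the N^n combinatorial blow-up for n functions on an N-sample grid (circular shifts again stand in for true shifts, and the names are illustrative):

```python
import itertools
import numpy as np

def exhaustive_search(g, fs):
    """Try every combination of integer shifts: O(N^len(fs)) work,
    which is exactly why this only scales to toy sizes."""
    n = len(g)
    best_cost, best_shifts = float("inf"), None
    for shifts in itertools.product(range(n), repeat=len(fs)):
        approx = np.zeros(n)
        for f, s in zip(fs, shifts):
            approx += np.roll(f, s)  # candidate sum of shifted f's
        cost = float(np.sum((g - approx) ** 2))
        if cost < best_cost:
            best_cost, best_shifts = cost, shifts
    return best_shifts, best_cost
```

With two atoms on an 8-sample grid this is only 64 candidates, but each extra f multiplies the count by N.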
Reply nospam (2805) 8/29/2009 9:31:48 PM

```Thank you very much for the hints, Martin and Vladimir.

I am aware that the problem is NP-complete in general, but my question
was whether there exists an algorithm with a "performance guarantee"
(at least under some conditions, but conditions more general than
orthogonal f_i's). I think matching pursuit offers no performance
guarantee unless the f_i's are orthogonal; please correct me if I am
wrong.

I will look into the VSELP and G.729.A coding schemes, but if they
have no performance guarantee either, then my question remains open.

Regards

Golabi

On Aug 29, 4:31 pm, Vladimir Vassilevsky <nos...@nowhere.com> wrote:
> [quoted text snipped]
```
Reply golabidoon (10) 8/30/2009 4:00:23 AM

```On Aug 29, 3:19 am, Golabi Doon <golabid...@gmail.com> wrote:
> [original question snipped]

Is this what you are looking for?

http://lcavwww.epfl.ch/research/topics/sampling_FRI.html

In some circles this is known as sampling signals with a finite rate
of innovation, the signals being defined as sums of shifts of known
or unknown pulse shapes.

Have fun,
Julius
```
Reply juliusk (652) 8/30/2009 1:13:15 PM

```On 30 Aug, 06:00, Golabi Doon <golabid...@gmail.com> wrote:
> Thank you very much for the hints, Martin and Vladimir.
>
> I am aware that the problem is NP-complete in general, but my question
> was if there exists an algorithm with "performance guarantee" (at
> least under some conditions, but conditions more general than
> orthogonal f_i's).

The definition of "NP-complete" problems is that the best known
worst-case guarantee is ridiculously bad. I don't remember off the
top of my head whether the bound is exponential (O(c^N)) or factorial
(O(N!)), but that doesn't really matter: NP-complete problems cannot
be solved in practical time for non-trivial N.

Rune
```
Reply allnor (8506) 8/30/2009 1:31:01 PM

```On Aug 30, 8:31 am, Rune Allnor <all...@tele.ntnu.no> wrote:
> The definition of "NP complete" problems is that the
> best-case performance guarantee is ridiculously bad.
>
> Rune

NP-hard problems are hard in the general case, but special instances
possessing some structure are sometimes easy. For example, integer
programming is hard in general, but certain structured binary
programs are easy. Also, minimizing bilinear functions is hard, but
if the variables are nonnegative and have nonnegative coefficients,
the problem can be reformulated as a geometric program, which is
convex and efficiently solvable.

In a nutshell, I was asking whether there has been any similar work
on the problem I posed, in the sense of a performance guarantee, even
under some conditions on the f_i's and g.

Golabi
```
Reply golabidoon (10) 8/31/2009 12:44:44 AM
