
FFT complexity

```
hi,
If the number of input samples to my FFT block is N = m + k, where m samples
are nonzero and k samples are zero, what is the order of computational
complexity of my FFT block? As you know, if all inputs were nonzero it would
be O((N/2)*log N), but what if I know that k of these N inputs are zero?
Regards
```


```
On Aug 16, 10:37 am, "ahmadagha23" <ahmadagha23@n_o_s_p_a_m.yahoo.com>
wrote:
> hi,
> If the number of input samples to my FFT block is N = m + k, where m samples
> are nonzero and k samples are zero, what is the order of computational
> complexity of my FFT block? As you know, if all inputs were nonzero it would
> be O((N/2)*log N), but what if I know that k of these N inputs are zero?
> Regards

i don't have my O&S handy at the moment.  i can't remember if there
are radix-2 forms (Decimation-in-Time or Frequency) that begin with
"little" butterflies (that use their neighboring bins) and do not
bit-reverse the data going in, but if there are such forms, you would
be able to write code that does not compute those butterflies (because
they just have zeros going in and will have zeros coming out).

but that code will be messy and i would bet the result would still be
O(N*logN).

also, the application of that (on the data, not an impulse response)
appears to be Overlap-Add, whereas i might suggest you try
Overlap-Save instead.  it's a little cheaper (you don't have to add
the tails) and there is no zero-padding of the data.

r b-j

```
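The pruning idea described above can be sketched as a recursive decimation-in-time FFT that short-circuits any sub-transform whose input is all zeros (zeros in, zeros out). This is an illustrative Python sketch, not anyone's production code: the runtime all-zeros test stands in for what a real pruned FFT would do, namely exploit the *known* positions of the zeros at code-generation time. Note too that DIT decimation interleaves a zero tail across both sub-transforms, which is part of why the result is still expected to be O(N log N):

```python
import cmath

def fft_pruned(x):
    """Toy radix-2 decimation-in-time FFT that prunes sub-transforms
    whose inputs are all zero: zeros going in give zeros coming out,
    so the butterflies feeding on them need not be computed."""
    n = len(x)
    if n == 1:
        return list(x)
    if all(v == 0 for v in x):         # pruning: zeros in -> zeros out
        return [0j] * n
    even = fft_pruned(x[0::2])         # even-indexed samples
    odd = fft_pruned(x[1::2])          # odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

For example, `fft_pruned([1, 0, 0, 0])` skips the all-zero odd branch entirely and returns the flat spectrum `[1, 1, 1, 1]`.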
Reply robert 8/16/2010 4:17:04 PM
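The Overlap-Save suggestion in the reply above can be sketched as follows. This is a toy Python illustration with a hand-rolled radix-2 FFT; the helper names, the default block length `n=8`, and the power-of-two restriction are all choices made for the example, and a real implementation would use an optimized FFT library:

```python
import cmath

def fft(x):
    # radix-2 decimation-in-time FFT; len(x) must be a power of two
    n = len(x)
    if n == 1:
        return list(x)
    e, o = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * o[k]
        out[k], out[k + n // 2] = e[k] + t, e[k] - t
    return out

def ifft(x):
    n = len(x)
    return [v.conjugate() / n for v in fft([v.conjugate() for v in x])]

def overlap_save(x, h, n=8):
    """Convolve x with h by length-n FFT blocks (overlap-save):
    successive blocks overlap by len(h)-1 input samples, and the
    first len(h)-1 outputs of each block (the circular wrap-around)
    are thrown away, instead of tails being added as in overlap-add."""
    m = len(h)
    hop = n - (m - 1)                   # new samples consumed per block
    H = fft(list(h) + [0.0] * (n - m))  # filter spectrum, computed once
    xp = [0.0] * (m - 1) + list(x)      # prime the history with zeros
    y = []
    for start in range(0, len(x) + m - 1, hop):
        block = xp[start:start + n]
        block += [0.0] * (n - len(block))
        Y = [a * b for a, b in zip(fft(block), H)]
        y.extend(v.real for v in ifft(Y)[m - 1:])  # discard wrapped part
    return y[:len(x) + m - 1]
```

As a check, `overlap_save([1, 2, 3, 4, 5], [1, 1])` matches the direct linear convolution `[1, 3, 5, 7, 9, 5]`.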

```robert bristow-johnson <rbj@audioimagination.com> wrote:
> On Aug 16, 10:37 am, "ahmadagha23" <ahmadagha23@n_o_s_p_a_m.yahoo.com>

>> If the number of input data for my FFT block is N=m+k, m data are non zero
>> and k data are zero, how much is the order of computational complexity of
>> my FFT block? As you know if all inputs were non zero it will be
>> O(N/2*logN) but how about if I know that k of these N inputs are zero?
>> Regards

> i don't have my O&S handy at the moment.  i can't remember if there
> are radix-2 forms (Decimation-in-Time or Frequency) that begin with
> "little" butterflys (that use their neighboring bins) and do not bit-
> reverse the data going in, but if there are such forms, you would be
> able to write code that does not compute those butterflies (because
> they just have zeros going in and will have zeros coming out).

> but that code will be messy and i would bet the result would still be
> O(N*logN).

With Big-Oh notation, the assumption is always large N (formally, the
limit as N goes to infinity).  Nice for doing the math, but not always
so useful in real problems.  Now, if k is fixed as m increases, such
that N goes to infinity, then pretty obviously it is still O(NlogN).

If k is (approximately) a constant fraction of N, then maybe not.
Remember, though, that Big-Oh also ignores constant factors that don't
scale with N: O(2N) is the same thing as O(N), for example.

Even more interesting, you could make m=(N/2)+1.  Now it is
sort of obvious that O(NlogN) is also O(m log m).

-- glen
```
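The m = (N/2)+1 observation above is easy to check numerically: the ratio of N log N to m log m stays bounded by a constant (a little over 2), which is exactly why the two orders coincide. A quick sketch, where the specific N is an arbitrary choice:

```python
import math

N = 1 << 20          # FFT length (arbitrary power of two)
m = N // 2 + 1       # nonzero samples, as in the m = (N/2)+1 example

# N*log2(N) vs m*log2(m): halving N only subtracts ~1 from the log,
# so the ratio approaches 2*log2(N)/(log2(N)-1) ~ 2, a constant.
ratio = (N * math.log2(N)) / (m * math.log2(m))
print(round(ratio, 3))
```

Since the ratio is bounded by a constant for all N, O(N log N) and O(m log m) describe the same growth rate.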
Reply glen 8/16/2010 5:33:23 PM

