O(log n / log log n)

I saw an algorithm that has O(log n / log log n) lookup hops.  I can't
figure out how that can happen.  I can't even understand why big-O can
have "log log n" in the denominator.

Thx

josephso
2/7/2006 9:45:32 AM

copernic wrote:
> I saw an algorithm that has O(log n / log log n) lookup hops.  I can't
> figure out how that can happen.  I can't even understand why big-O can
> have "log log n" in the denominator.


Possibly a fusion tree.  Basically a B-tree-like structure with a
branching factor of (log(n))**(1/5).  A tree with branching factor b
has height log(n)/log(b), and here log(b) = (1/5)*log(log(n)), so the
height is approximately 5*log(n)/log(log(n)) - hence the big-O.  The
trick is to convert the search key into a sequence of bits that
identifies which branch in each tree node you want to select.  In some
sense you're building a hash function to do an O(1) lookup in each
node.

I can't do the details justice, but if you have an ACM membership there
are several papers on the subject.
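
To make the height arithmetic concrete, here's a small numeric sketch
in C (my own illustration, not fusion-tree code; only the branching
factor formula comes from the description above):

#include <math.h>
#include <stdio.h>

/* Numeric sketch: a tree over n keys with branching factor b has
   height about log(n)/log(b).  With b = (lg n)^(1/5), that works
   out to 5*lg(n)/lg(lg(n)) = O(log n / log log n). */
int main(void)
{
    double n;
    for (n = 1e6; n <= 1e18; n *= 1e4) {
        double lg = log2(n);              /* lg n              */
        double b  = pow(lg, 0.2);         /* branching factor  */
        printf("n=%.0e  lg n=%6.1f  b=%4.1f  height=%6.1f\n",
               n, lg, b, lg / log2(b));   /* height = log_b n  */
    }
    return 0;
}

(The constant 5 is why the gain over a plain lg n height only becomes
visible for very large n: 5*lg(n)/lg(lg(n)) < lg(n) only once
lg(lg(n)) > 5, i.e. n > 2^32.)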

robertwessel2
2/7/2006 10:23:30 AM
robertwessel2@yahoo.com wrote:
> copernic wrote:
> 
>>I saw an algorithm that has O(log n / log log n) lookup hops.  I can't
>>figure out how that can happen.  I can't even understand why big-O can
>>have "log log n" in the denominator.
> 
> 
> 
> Possibly a fusion tree.  Basically a B-tree-like structure with a
> branching factor of (log(n))**(1/5).  A tree with branching factor b
> has height log(n)/log(b), and here log(b) = (1/5)*log(log(n)), so the
> height is approximately 5*log(n)/log(log(n)) - hence the big-O.  The
> trick is to convert the search key into a sequence of bits that
> identifies which branch in each tree node you want to select.  In some
> sense you're building a hash function to do an O(1) lookup in each
> node.
> 
> I can't do the details justice, but if you have an ACM membership there
> are several papers on the subject.

I *think* the following is correct.  If not, I'm sure someone will
correct me...

Any algorithm whose work can be partitioned in a way that can be
described logarithmically, and have THAT work again be partitioned
logarithmically, will have its big-O divided by lg(lg N).

A search process is a good illustration.  You can reduce the depth of
the search space to lg N by partitioning the search into a (balanced)
binary tree.  If you can binary-partition those subtrees yet again, the
total search work is divided by O(lg(lg N)).

An illustration: you have 64 elements, and lg(64) is 6, so the depth of
a binary tree with 64 leaves is 6.  For example, if you merged the 64
elements using 2^5 = 32 parallel processes, this could be done in 6
concurrent binary merge steps.  Therefore, with 32 processors, the time
needed to merge the 64 elements can be divided by 6, or O(64/6), or
O(N/lg N).
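
A quick sketch of that round count in C (my own illustration; it just
counts the pairwise-merge rounds rather than doing any merging):

#include <stdio.h>

/* Each round pairwise-merges runs, halving their number, so 64
   runs collapse to one in lg(64) = 6 concurrent rounds no matter
   how many of the merges within a round execute in parallel. */
int main(void)
{
    int runs = 64, rounds = 0;
    while (runs > 1) {
        printf("round %d: %2d parallel merges\n", rounds + 1, runs / 2);
        runs /= 2;
        rounds++;
    }
    printf("total rounds: %d (= lg 64)\n", rounds);
    return 0;
}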

If each process can diminish its work again by lg N, you're there.
Parallel search algorithms do this naturally, since you're often looking
for a needle in a haystack, and not all of the haystack needs to be
searched.  If *any* of the processes finds the needle, then all
processes can stop early.  That diminishes the search space not by the
typical factor of 2; in the case of an exponential search (a binary
tree), the search space can be reduced exponentially (by lg N).  By
stopping early, the largest part of the tree need not be searched, AND
the work was already partitioned exponentially by using lg N processes.
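
Here's a minimal sketch of the stop-early effect (my own illustration,
with simulated lock-step "processes" rather than real threads, and a
made-up slice layout):

#include <stdio.h>

#define N 64            /* haystack size       */
#define P 8             /* simulated processes */

/* Each "process" scans its own N/P-element slice one element per
   step; once any of them hits the needle, all stop after that step.
   The count shows how little of the haystack gets examined. */
int main(void)
{
    int hay[N] = {0};
    hay[5] = 1;                         /* the needle */
    int chunk = N / P, examined = 0, found = 0;
    for (int step = 0; step < chunk && !found; step++)
        for (int p = 0; p < P; p++) {   /* one step of each process */
            if (hay[p * chunk + step])
                found = 1;
            examined++;
        }
    printf("examined %d of %d elements\n", examined, N);
    return 0;
}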

This is also the phenomenon that sometimes arises when you decompose a
task using parallel processes and the amount of data belonging to each
process suddenly shrinks enough to fit in L1/L2 cache, which is
exponentially faster than main memory.  Voila, you get a superlinear
speedup using only N processes: O(N/lg(lg N)).

    Randy

-- 
Randy Crawford   http://www.ruf.rice.edu/~rand   rand AT rice DOT edu

"Dice play god with the world." -- Anon
joe304
2/7/2006 11:18:54 PM
Randy wrote:
....
> This is also the phenomenon that sometimes arises when you decompose a
> task using parallel processes and the amount of data belonging to each
> process suddenly shrinks enough to fit in L1/L2 cache, which is
> exponentially faster than main memory.  Voila, you get a superlinear
> speedup using only N processes: O(N/lg(lg N)).

Oops.  That probably should be O(g(N)/(N lg N)) -- a linear speedup of
task g (via parallelism) combined with an exponential speedup (via
cache).

> 
>     Randy
> 

-- 
Randy Crawford   http://www.ruf.rice.edu/~rand   rand AT rice DOT edu

"Dice play god with the world." -- Anon
joe304
2/8/2006 12:03:44 AM