
Dominance Tree Sorts
"Dominance tree sorts" is a neologism. If anyone recognizes this
class of sorts I would take it kindly if they would point to the
literature (web preferred) where this class of sorts is described
and named. The idea doesn't seem to be explored in Knuth, though
something may be hidden in the exercises. Dominance tree sorts
are selection sorts but not tournament sorts.
Suppose we have a set S of elements and an ordering relationship
R defined over S. For example, S might be an array of integers,
and R might be >, the greater than relationship. Let x and y be
two elements of S. Element x is said to dominate y if (x R y).
Again from the example, let x be 8 and y be 5. Then x dominates
y.
A dominance tree T over S is a tree such that (a) every element
of S appears exactly once in T, and (b) for every element (except
the root) the element's parent dominates it. Here are some
examples of dominance trees over the set {1,2,3,4,5}:

      A              B              C

      5              5              5
    / | | \         / \            |
   1  2 3  4       4   2           4
                  / \              |
                 1   3             3
                                   |
                                   2
                                   |
                                   1
Heaps are dominance trees, but not all dominance trees are heaps.
Example B is a heap, example C is a sorted list, and in example A
all we know is that '5' is maximal.
A set K of comparisons (xRy) is a minimum comparison set for
a dominance tree T if it consists exclusively of the comparisons
in the child/parent linkages. Thus, in our example, the minimum
comparison sets are:
K(A): 5>1, 5>2, 5>3, 5>4
K(B): 5>4, 5>2, 4>1, 4>3
K(C): 5>4, 4>3, 3>2, 2>1
Note that a minimum comparison set always has n-1 members, where n
is the cardinality of S.
An algorithm is an efficient dominance tree construction (EDTC)
algorithm if it specifies a minimum comparison sequence that in
turn specifies a dominance tree.
One of the important features of EDTC algorithms is that if an
element 'loses' in a comparison it cannot appear in a later
comparison in the comparison sequence. Another is that there
is no EDTC algorithm that can guarantee that the root element has
fewer than log2(n) children. In particular there are no
EDTC algorithms for creating heaps.
The general plan for dominance tree sorts runs as follows: For a
given set S we first construct a dominance tree T. We then emit
the root of the tree. The children of the root are compared
until a new root is established. In each comparison the 'loser'
becomes a new child of the 'winner'. EDTC algorithms are used in
both the initial construction phase and in each emission step.
There are quite a number of variations depending on which EDTC
algorithms are used.
There are two major EDTC algorithms that occur to me. The first
is tournament pairing, also used in the tournament sort and in
heapsort. In the tournament pairing algorithm the set is divided
into pairs that are compared. The winners go on to the second
round where they are again divided into pairs that are compared.
This is continued until a final winner is determined. This
requires n-1 comparisons, ceil(log2(n)) rounds, and the winner, the
root of the tree, will have up to ceil(log2(n)) children.
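As an illustration (the code and names below are my own sketch, not from
any published source), here is tournament pairing in C, counting the
children the champion accumulates; with n a power of two the champion
plays in every round and so ends up with exactly log2(n) children:

```c
#include <assert.h>
#include <stdlib.h>

/* Tournament pairing over a[0..n-1], with "smaller dominates".
 * Returns the number of children the champion ends up with. */
static int tourney_root_children(const int *a, int n)
{
    int *act = malloc(n * sizeof *act);   /* indices still in the running */
    int *kids = calloc(n, sizeof *kids);  /* child counts, all zero */
    int i, k, m;

    for (i = 0; i < n; i++) act[i] = i;
    for (m = n; m > 1; m = k) {
        for (k = 0, i = 0; i + 1 < m; i += 2) {
            int w = (a[act[i]] < a[act[i + 1]]) ? i : i + 1;
            kids[act[w]]++;               /* loser becomes winner's child */
            act[k++] = act[w];            /* winner advances a round */
        }
        if (i < m) act[k++] = act[i];     /* odd element gets a bye */
    }
    i = kids[act[0]];
    free(act);
    free(kids);
    return i;
}
```

With byes (n not a power of two) the champion may play fewer rounds, which
is why ceil(log2(n)) is a bound rather than an exact count.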
The second is sequential comparison. The first element in a
list is compared with the second. The winner is compared with
the third, and so on. This also requires n-1 comparisons. The
number of children of the root depends on the location of the
root element in the list. If the ordering of the list elements
is random, the average number of children will be n/2. On the other
hand, the number of children will be 1 if the list is ordered.
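A sketch of this behaviour (my own illustration; I use "smaller
dominates", so here a descending list is the favorably ordered case that
gives the root a single child, and an ascending list is the worst case):

```c
#include <assert.h>
#include <stdlib.h>

/* Sequential-comparison EDTC over a[0..n-1], "smaller dominates".
 * Returns the number of children the final root ends up with. */
static int seq_root_children(const int *a, int n)
{
    int *kids = calloc(n, sizeof *kids);  /* child counts, all zero */
    int w = 0, i, k;

    for (i = 1; i < n; i++) {
        if (a[w] < a[i])
            kids[w]++;           /* running winner adopts the newcomer */
        else {
            kids[i]++;           /* newcomer adopts the old winner ... */
            w = i;               /* ... and becomes the running winner */
        }
    }
    k = (n > 0) ? kids[w] : 0;
    free(kids);
    return k;
}
```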
It turns out that we get a very efficient (as measured by the
number of comparisons used) sort if we use tournament
pairing to construct the initial dominance tree and sequential
comparison in the update after emission of the root. Each
element has a list associated with it of elements that it
dominates. During construction when one element dominates
another, the losing element is appended to the winning element's
list. Here is a very simple example. Let the set to be sorted
be {2,4,3,1}. The actions are:
Construction Phase
compare (2,4); append 4 to 2's list
compare (3,1); append 3 to 1's list
compare (2,1); append 2 to 1's list; 1 is the root
Emission Phase
emit (1); compare (3,2); append 3 to 2's list; 2 is root
emit (2); compare (3,4); append 4 to 3's list; 3 is root
emit (3); 4 is root
emit (4); done
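For concreteness, here is a minimal C sketch of the whole sort. The
names (domsort, play, adopt) and all of the code are mine, not from the
article, and it favors clarity over speed; "smaller dominates", so the
output is ascending:

```c
#include <assert.h>
#include <stdlib.h>

/* One node per input element.  Each node keeps a linked list of the
 * elements it has dominated (its children), in the order they lost. */
struct node {
    int value;
    struct node *kids;   /* head of the child list */
    struct node *tail;   /* tail of the child list, for O(1) append */
    struct node *next;   /* sibling link inside a child list */
};

static long ncmp;        /* comparison counter */

/* Dominance relation: smaller dominates, so the sort is ascending. */
static int dominates(const struct node *a, const struct node *b)
{
    ncmp++;
    return a->value < b->value;
}

/* Append the loser to the winner's child list. */
static void adopt(struct node *w, struct node *l)
{
    l->next = NULL;
    if (w->tail) w->tail->next = l; else w->kids = l;
    w->tail = l;
}

/* Compare a and b; the loser becomes a child of the winner. */
static struct node *play(struct node *a, struct node *b)
{
    if (dominates(a, b)) { adopt(a, b); return a; }
    adopt(b, a);
    return b;
}

/* Sort a[0..n-1] ascending; returns the number of comparisons used. */
static long domsort(int *a, int n)
{
    struct node *nodes = malloc(n * sizeof *nodes);
    struct node **act = malloc(n * sizeof *act);
    struct node *root;
    int i, k, m;

    ncmp = 0;
    for (i = 0; i < n; i++) {
        nodes[i].value = a[i];
        nodes[i].kids = nodes[i].tail = nodes[i].next = NULL;
        act[i] = &nodes[i];
    }

    /* Construction phase: tournament pairing. */
    for (m = n; m > 1; m = k) {
        for (k = 0, i = 0; i + 1 < m; i += 2)
            act[k++] = play(act[i], act[i + 1]);
        if (i < m) act[k++] = act[i];    /* odd element gets a bye */
    }

    /* Emission phase: emit the root, then sequentially reduce its
     * child list to establish the next root. */
    root = (n > 0) ? act[0] : NULL;
    for (i = 0; i < n; i++) {
        struct node *c, *nxt, *w = NULL;
        a[i] = root->value;
        for (c = root->kids; c; c = nxt) {
            nxt = c->next;               /* adopt() rewrites c->next */
            w = w ? play(w, c) : c;
        }
        root = w;                        /* NULL once everything is out */
    }
    free(nodes);
    free(act);
    return ncmp;
}
```

On the example {2,4,3,1} this performs exactly the five comparisons
traced above.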
How efficient is this sort? The tests that I ran showed it to be
about equivalent to merge-insert (see Knuth, Vol 3, 5.3.1). For
example, sorting 32 items requires a minimum of 118 comparisons
on average (log2(32!)). Merge-insert needs 121. The number
needed by the above algorithm depends on the precise permutation.
The average number needed is about 121.5. For large n the number
of comparisons required was ~ n*(log2(n) - 1.27) in the tests I
performed; this is not far off from the best possible, which is
~ n*(log2(n) - log2(e)) = n*(log2(n) - 1.44).
Why is this sort efficient? It has to do with an order
preservation property in the construction and updating of the
dominance lists. In the construction phase a dominance list has
two special properties: (a) its length is longer than the lengths
of the lists of the elements in its list, and (b) the elements in
the list are in order of the length of their lists. These
properties are almost always preserved by the above algorithm.
In turn this implies that the lengths of the dominance lists are
slightly less than O(log2(n)) on average.
Can this be improved upon? Possibly. The only
reference I can find to dominance tree sorts is a web article by
someone called Supercat. In the web article he in effect forces
property (b) to hold. (He calls it a tournament sort, which is a
bit of a misnomer.) He gives the following results for a test
case sorting one million items.
Number of comparisons by a perfect sort (lg(n!)) : 18,488,865
Avg # of comparisons for Tournament Sort: 18,646,425
(100.85% of perfect)
Avg # of comparisons for Borland C's qsort: 22,439,507
(121.37% of perfect)
This gives (n = 1000000) n*(log2(n) - 1.284), which is a tad better
than the simpler algorithm given here. On the other hand
Supercat's version requires additional comparisons of the element
weights (integers rather than keys).
Is this a practical sort? Possibly not. It requires extra space
O(n) for the lists and O(n) for the list headers. Each
comparison involves adding an item to a linked list, so the
cost of the inner loop operations will be higher than it is in
simpler sorts. On the other hand memory is often plentiful and
the inner loop costs should be no worse than those for an
ordinary tree sort.
References:
URL: http://www.halfbakery.com/idea/Tournament_20Sort
Title: Tournament sort
Knuth, The Art of Computer Programming, Vol 3, Sorting
and Searching.
Richard Harter, cri@tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
Tragedy is living to an old age
and not having any enemies to outlive.


cri (1432)
8/17/2004 9:08:50 PM
On Tue, 17 Aug 2004 21:08:50 GMT, cri@tiac.net (Richard Harter) wrote:
>
>Dominance Tree Sorts
Oops, should have been comp.theory. Mea Culpa.


cri
8/17/2004 9:11:34 PM
On Tue, 17 Aug 2004 21:08:50 GMT, cri@tiac.net (Richard Harter) wrote:
[snip original]
In the original text it is stated that a dominance tree is not a heap,
even though a heap is a dominance tree. There are a couple of
corrections to make here. The first is that 'heap' should have been
'binary heap'. The second is that dominance trees have the heap
property and accordingly are heaps in the most general sense of
'heap'.
No mention is made in the original of Fibonacci heaps and Fibonacci
heap sorts. There is an intimate relationship between Fibonacci heap
sorts and the dominance tree sorts mentioned in my article. However
they are different animals.
"Dominance tree sorts" may not be entirely appropriate since a
dominance tree is a heap. However the appropriate names have all been
appropriated.
It should be possible to create a heap type that corresponds to the
heap structure used in dominance tree sorts. I will look into this
and report my findings.


cri (1432)
8/23/2004 6:28:45 PM
In article <41227327.365327442@news.sbtc.net>, cri@tiac.net (Richard Harter) writes:
>
> Each
> comparison involves adding an item to a linked list, so the
> cost of the inner loop operations will be higher than it is in
> simpler sorts. On the other hand memory is often plentiful and
> the inner loop costs should be no worse than those for an
> ordinary tree sort.
On the other other hand, all that linked-list traversal likely means
poor locality of reference, which means bad cache effects. Which,
come to think of it, is generally true of heap sorts. If you keep
the children as an array rather than a linked list (overflows can be
handled in various ways) that should help.
Interesting idea and nice writeup, though. I'm still thinking about
it. On casual inspection it does look like it might be quite
amenable to being parallelized, and with SMP systems becoming ever
more common that's certainly useful. (Of course, mergesorts are
inherently parallel and very easy to code.)

Michael Wojcik michael.wojcik@microfocus.com
Auden often writes like Disney. Like Disney, he knows the shape of beasts 
(& incidently he, too, might have a company of artists producing his lines) 
unlike Lawrence, he does not know what shapes or motivates these beasts.
 Dylan Thomas


mwojcik (1879)
8/26/2004 7:57:18 PM
mwojcik@newsguy.com (Michael Wojcik) wrote in message news:<cglfau0kjv@news1.newsguy.com>...
> In article <41227327.365327442@news.sbtc.net>, cri@tiac.net (Richard Harter) writes:
> >
> > Each
> > comparison involves adding an item to a linked list, so the
> > cost of the inner loop operations will be higher than it is in
> > simpler sorts. On the other hand memory is often plentiful and
> > the inner loop costs should be no worse than those for an
> > ordinary tree sort.
>
> On the other other hand, all that linked-list traversal likely means
> poor locality of reference, which means bad cache effects. Which,
> come to think of it, is generally true of heap sorts. If you keep
> the children as an array rather than a linked list (overflows can be
> handled in various ways) that should help.
Performance problems with heapsort are often blamed on cache effects.
No doubt this is true; however the standard heapsort has further problems
that degrade performance. An obvious one is the complexity of the inner
loops.
A more subtle issue is that the heapify process throws information
away. For example suppose we have three elements x,y,z at locations
a[n], a[2*n], a[2*n+1] and we heapify them. We first compare y and z
and pick the dominant one, z for the sake of argument. We then compare
z with x, the top point in our little heap; if it dominates x we swap,
moving z up and x down. We now have a[n] > {a[2*n],a[2*n+1]} and that is
all we have. However if x was the dominant one we lose the information that
a[2*n+1] > a[2*n]. That is, the data structure doesn't distinguish between
x>z>y and {x>z, x>y}; in the former case it records x>z and x>y (the
latter true by transitivity), but not z>y. In effect 1/3 of a decision
is lost. This information
loss occurs at several points in the standard heap sort. For this reason
the required number of comparisons for heapsort is n*[a*log2(n)+b] where a>1.
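The three-element case is easy to check directly. Here is a small
sketch (mine, following the passage's x, y, z layout with the top at
a[0]) showing that the two comparisons leave no trace of the
child-versus-child result when x is dominant:

```c
#include <assert.h>

/* One heapify step on three elements, max on top, as described above:
 * compare the two children, then compare the winner with the top. */
static void sift3(int a[3])
{
    int big = (a[1] > a[2]) ? 1 : 2;   /* first comparison: y vs z */
    if (a[big] > a[0]) {               /* second: winner vs top */
        int t = a[0]; a[0] = a[big]; a[big] = t;
    }
}
```

When the top wins the second comparison, nothing moves, so the known
ordering of the two children is not recorded anywhere in the structure.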
This kind of information loss does not occur in either quicksort or
mergesort. Using a simplified analysis of the mergesort cost function
we get n*(log2(n) - 1.0), which is pretty close to the optimal
n*(log2(n) - log2(e)). Although quicksort doesn't throw information
away, it doesn't use it quite as efficiently (partitions are not
equi-sized) so its 'a' is greater than 1.
As noted the dominance tree sort (my version anyway) has a comparison cost
function of ~ n*(log2(n) - 1.27) or better (on random data), which is better
than mergesort but not by much.
I'm not sure that putting the linked lists in arrays helps particularly with
the cache problem although it is easy enough to put them in arrays. The
catch is that the references in the linked lists are scattered. There may
be something clever that one can do with moving the data so that cache misses
are reduced but I haven't thought it out.
> Interesting idea and nice writeup, though.
Thanks.
>I'm still thinking about
> it. On casual inspection it does look like it might be quite
> amenable to being parallelized, and with SMP systems becoming ever
> more common that's certainly useful. (Of course, mergesorts are
> inherently parallel and very easy to code.)
Offhand it seems quite straightforward to parallelize it.
As a final note both mergesort and quicksort are intrinsically cache
friendly as long as there are at least two caches.


cri (1432)
8/28/2004 3:14:34 PM
On Tue, 17 Aug 2004 21:08:50 GMT, cri@tiac.net (Richard Harter) wrote:
>
>Dominance Tree Sorts
>
[snip original article]
I have looked into this matter in some detail. In particular I have
gone through the relevant sections of Knuth, vol 3, Sorting and
Searching. One problem here is that my copy is the 1973 edition (I
don't know what changes, if any, are in later editions.) In
consequence there is a lot of more modern material, e.g., fibonacci
heaps, that is not included. That said:
(1) A dominance tree is a heap (using the most general definition of
heap) with the additional requirement that each element loses in a
comparison no more than once. There doesn't seem to be any need for
"dominance trees" as a description of a data structure. On the other
hand the term "dominance tree sorts" is useful since the term
"heap sorts", which ought to refer to the class of sorts based on
heaps, has instead been preempted as the label for a particular kind
of binary heap sort.
(2) The category of dominance tree sorts is broader than that of
tournament sorts; however the standard tournament sort is isomorphic
to a dominance tree sort in which the tree is constructed using
tournament pairing and then in the selection phase the child lists of
each selection is processed sequentially.
(3) Tournament sorts (and the equivalent dominance tree sorts) are
highly efficient in terms of the average number of comparisons and the
minimax number of comparisons required. Both quicksort and heapsort
require substantially more; mergesort is comparable but is slightly
more costly. Merge-insert (Ford-Johnson) sorts are slightly more
efficient but the gain is minimal.
(4) Tournament sorts have the same comparison cost function as binary
insertion sorts. However binary insertion sorts are expensive in
terms of data move costs. Tournament sorts are better in terms of
data move costs but have (IMHO) messy data structures. The equivalent
dominance tree sorts have "nicer" data structures.


cri (1432)
9/21/2004 6:28:18 AM
