Dominance Tree Sorts

"Dominance tree sorts" is a neologism.  If anyone recognizes this 
class of sorts I would take it kindly if they would point to the 
literature (web preferred) where this class of sorts is described 
and named.  The idea doesn't seem to be explored in Knuth, though 
something may be hidden in the exercises.  Dominance tree sorts 
are selection sorts but not tournament sorts.

Suppose we have a set S of elements and an ordering relationship 
R defined over S.  For example, S might be an array of integers, 
and R might be >, the greater than relationship.  Let x and y be 
two elements of S.  Element x is said to dominate y if (x R y).  
Again from the example, let x be 8 and y be 5.  Then x dominates 
y.

A dominance tree T over S is a tree such that (a) every element 
of S appears exactly once in T, and (b) for every element (except 
the root) the element's parent dominates it.  Here are some 
examples of dominance trees over the set {1,2,3,4,5}:

          A          B        C

          5          5        5
          |          |        |
       -------     -----      4
       | | | |     |   |      |
       1 2 3 4     4   2      3
                   |          |
                  ---         2
                  | |         |
                  1 3         1
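For concreteness, property (b) can be checked mechanically.  The sketch below is my own (the article only draws the trees); it assumes a dictionary-of-child-lists representation and a caller-supplied `dominates` predicate, and takes property (a), that every element appears exactly once, for granted.

```python
def is_dominance_tree(root, children, dominates):
    """Check property (b): every parent dominates each of its children.

    `children` maps each element to a list of its children; elements
    with no entry are leaves.  Property (a) -- each element appears
    exactly once -- is assumed to hold already.
    """
    stack = [root]
    while stack:
        x = stack.pop()
        for c in children.get(x, []):
            if not dominates(x, c):
                return False
            stack.append(c)
    return True
```

With `dominates = lambda x, y: x > y`, all three example trees pass: A is `{5: [1, 2, 3, 4]}`, B is `{5: [4, 2], 4: [1, 3]}`, and C is `{5: [4], 4: [3], 3: [2], 2: [1]}`.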
                  
Heaps are dominance trees, but not all dominance trees are heaps.  
Example B is a heap, example C is a sorted list, and in example A 
all we know is that '5' is maximal.  

A set K of comparisons (xRy) is a minimum comparison set for 
a dominance tree T if it consists exclusively of the comparisons
in the child/parent linkages.  Thus, in our examples, the minimum 
comparison sets are:

K(A):  5>1, 5>2, 5>3, 5>4
K(B):  5>4, 5>2, 4>1, 4>3
K(C):  5>4, 4>3, 3>2, 2>1

Note that a minimum comparison set always has n-1 members where n 
is the cardinality of S.

An algorithm is an efficient dominance tree construction (EDTC) 
algorithm if it specifies a minimum comparison sequence that in 
turn specifies a dominance tree.

One of the important features of EDTC algorithms is that if an 
element 'loses' in a comparison it cannot appear in a later 
comparison in the comparison sequence.  Another is that there 
is no EDTC algorithm that can guarantee that the root element has 
less than log2(n) children.  In particular there are no 
EDTC algorithms for creating heaps.

The general plan for dominance tree sorts runs as follows:  For a 
given set S we first construct a dominance tree T.  We then emit 
the root of the tree.  The children of the root are compared 
until a new root is established. In each comparison the 'loser' 
becomes a new child of the 'winner'.  EDTC algorithms are used in 
both the initial construction phase and in each emission step.  
There are quite a number of variations depending on which EDTC 
algorithms are used.

There are two major EDTC algorithms that occur to me.  The first 
is tournament pairing, also used in the tournament sort and in 
heapsort.  In the tournament pairing algorithm the set is divided
into pairs that are compared.  The winners go on to the second 
round where they are again divided into pairs that are compared.  
This is continued until a final winner is determined.  This 
requires n-1 comparisons, log2(n) rounds, and the winner, the 
root of the tree, will have log2(n) children.
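A minimal sketch of tournament pairing, with the loser of each comparison appended to the winner's child list.  The representation and names here are my own; '>' is the dominance relation and the elements are assumed distinct:

```python
def tournament_tree(items):
    """Build a dominance tree by tournament pairing ('>' dominates).

    Returns (root, children, comparisons); `children` maps each
    element to the list of elements it has beaten.  Assumes the
    elements are distinct.
    """
    children = {x: [] for x in items}
    comparisons = 0
    round_ = list(items)
    while len(round_) > 1:
        winners = []
        for i in range(0, len(round_) - 1, 2):
            a, b = round_[i], round_[i + 1]
            w, l = (a, b) if a > b else (b, a)
            children[w].append(l)       # loser becomes a child of the winner
            comparisons += 1
            winners.append(w)
        if len(round_) % 2:
            winners.append(round_[-1])  # odd element gets a bye
        round_ = winners
    return round_[0], children, comparisons
```

For 16 distinct elements this makes exactly 15 comparisons and leaves the root with log2(16) = 4 children.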

The second is sequential comparison.  The first element in a 
list is compared with the second.  The winner is compared with 
the third, and so on.  This also requires n-1 comparisons.  The 
number of children of the root depends on the location of the 
root element in the list.  If the ordering of the list elements 
is random, the average number of children will be n/2.  On the other 
hand, the number of children will be 1 if the list is ordered.
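A sketch of sequential comparison in the same representation (again my own names; '>' is the dominance relation and the elements are assumed distinct):

```python
def sequential_tree(items):
    """Build a dominance tree by sequential comparison ('>' dominates).

    Returns (root, children, comparisons); assumes distinct elements
    and a non-empty list.
    """
    children = {x: [] for x in items}
    comparisons = 0
    root = items[0]
    for x in items[1:]:
        comparisons += 1
        if root > x:
            children[root].append(x)    # root wins; x joins its list
        else:
            children[x].append(root)    # x wins and becomes the new root
            root = x
    return root, children, comparisons
```

An already ordered list leaves the root with a single child; a reversed list leaves it with all n-1.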

It turns out that we get a very efficient (as measured by the 
number of comparisons used) sort if we use tournament 
pairing to construct the initial dominance tree and sequential 
comparison in the update after emission of the root.  Each 
element has a list associated with it of elements that it 
dominates.  During construction when one element dominates 
another the losing element is appended to the winning elements 
list.  Here is a very simple example.  Let the set to be sorted 
be {2,4,3,1}.  The actions are: 

    Construction Phase
        compare (2,4); append 4 to 2's list
        compare (3,1); append 3 to 1's list
        compare (2,1); append 2 to 1's list; 1 is the root
    Emission Phase
        emit (1); compare (3,2); append 3 to 2's list; 2 is root
        emit (2); compare (3,4); append 4 to 3's list; 3 is root
        emit (3); 4 is root
        emit (4); done 
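Putting the two phases together, here is a minimal sketch of the whole sort.  The naming is my own; 'dominates' is taken to mean '<' so that the output comes out ascending, matching the {2,4,3,1} trace above, and elements are tracked by index so duplicates are handled:

```python
def dominance_tree_sort(items):
    """Dominance tree sort: tournament pairing builds the tree,
    sequential comparison updates it after each emission.
    Returns (sorted_list, comparison_count); x 'dominates' y when x < y.
    """
    if not items:
        return [], 0
    children = [[] for _ in items]      # child lists, indexed like items
    comparisons = 0

    def beats(i, j):
        nonlocal comparisons
        comparisons += 1
        return items[i] < items[j]

    # Construction phase: tournament pairing, n-1 comparisons.
    round_ = list(range(len(items)))
    while len(round_) > 1:
        winners = []
        for k in range(0, len(round_) - 1, 2):
            i, j = round_[k], round_[k + 1]
            w, l = (i, j) if beats(i, j) else (j, i)
            children[w].append(l)       # loser joins the winner's list
            winners.append(w)
        if len(round_) % 2:
            winners.append(round_[-1])  # bye for the odd element
        round_ = winners
    root = round_[0]

    # Emission phase: emit the root, then scan its children sequentially;
    # each loser along the way joins the current winner's list.
    out = []
    while root is not None:
        out.append(items[root])
        kids = children[root]
        if not kids:
            root = None
        else:
            cur = kids[0]
            for other in kids[1:]:
                w, l = (cur, other) if beats(cur, other) else (other, cur)
                children[w].append(l)
                cur = w
            root = cur
    return out, comparisons
```

On {2,4,3,1} this performs the same actions listed above: 3 construction comparisons and 2 more during emission, 5 in total.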

How efficient is this sort?  The tests that I ran showed it to be 
about equivalent to merge-insert (see Knuth, Vol 3, 5.3.1).  For 
example, sorting 32 items requires a minimum of 118 comparisons 
on average (log2(32!)).  Merge-insert needs 121.  The number 
needed by the above algorithm depends on the precise permutation.  
The average number needed is about 121.5.  For large n the number 
of comparisons required was ~ n*(log2(n) - 1.27) in the tests I 
performed; this is not far off from the best possible, which is
~ n*(log2(n) - log2(e)) = n*(log2(n) - 1.44).
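The figures above are easy to check; log2(32!) and the constant log2(e) can be computed directly:

```python
import math

# Information-theoretic lower bound: sorting n distinct items needs at
# least log2(n!) comparisons on average.
lb32 = math.log2(math.factorial(32))
print(round(lb32, 2))                # about 117.66, hence "a minimum of 118"

# The constant in the asymptotic form log2(n!) ~ n*(log2(n) - log2(e)).
print(round(math.log2(math.e), 2))   # about 1.44
```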

Why is this sort efficient?  It has to do with an order 
preservation property in the construction and updating of the 
dominance lists.  In the construction phase a dominance list has 
two special properties: (a) its length is longer than the lengths 
of the lists of the elements in its list, and (b) the elements in 
the list are in order of the length of their lists.  These 
properties are almost always preserved by the above algorithm.  
In turn this implies that the lengths of the dominance lists are 
slightly less than O(log2(n)) on average.

Can this be improved upon?  Possibly.  The only 
reference I can find to dominance tree sorts is a web article by 
someone called Supercat.  In the web article he in effect forces 
property (b) to hold.  (He calls it a tournament sort which is a 
bit of a misnomer.)  He gives the following results for a test 
case sorting one million items.

Number of comparisons by a perfect sort (lg(n!)) : 18,488,865
Avg # of comparisons for Tournament Sort: 18,646,425 
   (100.85% of perfect)
Avg # of comparisons for Borland C's qsort: 22,439,507 
   (121.37% of perfect)

This gives (n = 1000000) n*(log2(n) - 1.284), which is a tad better 
than the simpler algorithm given here.  On the other hand 
Supercat's version requires additional comparisons of the element 
weights (integers rather than keys).

Is this a practical sort?  Possibly not.  It requires extra space 
O(n) for the lists and O(n) for the list headers.  Each 
comparison involves adding an item to a linked list, so the 
cost of the inner loop operations will be higher than it is in 
simpler sorts.  On the other hand memory is often plentiful and 
the inner loop costs should be no worse than those for an 
ordinary tree sort.


References:
   URL:   http://www.halfbakery.com/idea/Tournament_20Sort
   Title: Tournament sort
   
   Knuth, The Art of Computer Programming, Vol 3, Sorting
   and Searching.

Richard Harter, cri@tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
Tragedy is living to an old age 
and not having any enemies to outlive.
Posted to comp.programming, 8/17/2004 9:08:50 PM
On Tue, 17 Aug 2004 21:08:50 GMT, cri@tiac.net (Richard Harter) wrote:

>
>Dominance Tree Sorts
Oops, should have been comp.theory.  Mea Culpa.


cri, 8/17/2004 9:11:34 PM
On Tue, 17 Aug 2004 21:08:50 GMT, cri@tiac.net (Richard Harter) wrote:


[snip original]

In the original text it is stated that a dominance tree is not a heap,
even though a heap is a dominance tree.  There are a couple of
corrections to make here.  The first is that 'heap' should have been
'binary heap'.  The second is that dominance trees have the heap
property and accordingly are heaps in the most general sense of
'heap'.

No mention is made in the original of Fibonacci heaps and Fibonacci
heap sorts.  There is an intimate relationship between Fibonacci heap
sorts and the dominance tree sorts mentioned in my article.  However
they are different animals.

"Dominance tree sorts" may not be entirely appropriate since a
dominance tree is a heap.  However the appropriate names have all been
appropriated.

It should be possible to create a heap type that corresponds to the
heap structure used in dominance tree sorts.  I will look into this
and report my findings.




cri, 8/23/2004 6:28:45 PM
In article <41227327.365327442@news.sbtc.net>, cri@tiac.net (Richard Harter) writes:
> 
> Each 
> comparison involves adding an item to a linked list, so the 
> cost of the inner loop operations will be higher than it is in 
> simpler sorts.  On the other hand memory is often plentiful and 
> the inner loop costs should be no worse than those for an 
> ordinary tree sort.

On the other other hand, all that linked-list traversal likely means
poor locality of reference, which means bad cache effects.  Which,
come to think of it, is generally true of heap sorts.  If you keep
the children as an array rather than a linked list (overflows can be
handled in various ways) that should help.

Interesting idea and nice writeup, though.  I'm still thinking about
it.  On casual inspection it does look like it might be quite
amenable to being parallelized, and with SMP systems becoming ever
more common that's certainly useful.  (Of course, mergesorts are
inherently parallel and very easy to code.)

-- 
Michael Wojcik                  michael.wojcik@microfocus.com

Auden often writes like Disney.  Like Disney, he knows the shape of beasts --
(& incidently he, too, might have a company of artists producing his lines) --
unlike Lawrence, he does not know what shapes or motivates these beasts.
   -- Dylan Thomas
mwojcik, 8/26/2004 7:57:18 PM
mwojcik@newsguy.com (Michael Wojcik) wrote in message news:<cglfau0kjv@news1.newsguy.com>...
> In article <41227327.365327442@news.sbtc.net>, cri@tiac.net (Richard Harter) writes:
> > 
> > Each 
> > comparison involves adding an item to a linked list, so the 
> > cost of the inner loop operations will be higher than it is in 
> > simpler sorts.  On the other hand memory is often plentiful and 
> > the inner loop costs should be no worse than those for an 
> > ordinary tree sort.
> 
> On the other other hand, all that linked-list traversal likely means
> poor locality of reference, which means bad cache effects.  Which,
> come to think of it, is generally true of heap sorts.  If you keep
> the children as an array rather than a linked list (overflows can be
> handled in various ways) that should help.

Performance problems with heapsort are often blamed on cache effects.
No doubt this is true; however the standard heapsort has further problems
that degrade performance.  An obvious one is the complexity of the inner
loops.  

A more subtle issue is that the heapify process throws information
away. For example suppose we have three elements x,y,z at locations
a[n], a[2*n], a[2*n+1] and we heapify them.  We first compare y and z
and pick the dominant one, z for the sake of argument.  We then compare 
z with x, the top point in our little heap; if it dominates x we swap, 
moving z up and x down.  We now have a[n] > {a[2*n],a[2*n+1]} and that is
all we have.  However if x was the dominant one we lose the information that
a[2*n+1] > a[2*n].  That is, the data structure doesn't distinguish between
x>z>y and z>x,z>y; in the former case it records x>z and x>y (true by
transitivity), but not z>y.  In effect 1/3 of a decision is lost.  This information
loss occurs at several points in the standard heap sort.  For this reason
the required number of comparisons for heapsort is n*[a*log2(n)+b] where a>1.
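One way to see that extra cost empirically is to instrument a textbook heapsort (a generic sketch of my own, not the dominance tree sort) and compare its comparison count against n*log2(n):

```python
def heapsort_comparisons(a):
    """Standard array heapsort with a comparison counter.

    Returns (sorted_list, comparisons).  On random data the count comes
    out well above n*log2(n), illustrating a > 1 in n*[a*log2(n)+b].
    """
    a = list(a)
    n = len(a)
    count = 0

    def sift_down(start, end):
        nonlocal count
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end:        # pick the dominant child...
                count += 1
                if a[child] < a[child + 1]:
                    child += 1
            count += 1                  # ...then compare it with the parent
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    for start in range(n // 2 - 1, -1, -1):   # heapify
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):           # repeatedly extract the max
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)
    return a, count
```

On random data of size 1024 the count lands noticeably above n*log2(n) = 10240, consistent with a > 1.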

This kind of information loss does not occur in either quicksort or mergesort.
Using a simplified analysis of the mergesort cost function we get 
n*(log2(n)-1.0), which is pretty close to the optimal n*(log2(n)-log2(e)).
Although quicksort doesn't throw information away, it doesn't use it
quite as efficiently (partitions are not equisized), so its 'a' is greater than 1.
As noted the dominance tree sort (my version anyway) has a comparison cost
function of ~ n*(log2(n)-1.27) or better (on random data), which is better 
than mergesort but not by much.  

I'm not sure that putting the linked lists in arrays helps particularly with
the cache problem although it is easy enough to put them in arrays.  The
catch is that the references in the linked lists are scattered.  There may
be something clever that one can do with moving the data so that cache misses
are reduced, but I haven't thought it out.


> Interesting idea and nice writeup, though.  

Thanks.

>I'm still thinking about
> it.  On casual inspection it does look like it might be quite
> amenable to being parallelized, and with SMP systems becoming ever
> more common that's certainly useful.  (Of course, mergesorts are
> inherently parallel and very easy to code.)

Offhand it seems quite straightforward to parallelize it.

As a final note both mergesort and quicksort are intrinsically cache
friendly as long as there are at least two caches.
cri, 8/28/2004 3:14:34 PM
On Tue, 17 Aug 2004 21:08:50 GMT, cri@tiac.net (Richard Harter) wrote:

>
>Dominance Tree Sorts
>


[snip original article]

I have looked into this matter in some detail.  In particular I have
gone through the relevant sections of Knuth, vol 3, Sorting and
Searching.  One problem here is that my copy is the 1973 edition (I
don't know what changes, if any, are in later editions.)  In
consequence there is a lot of more modern material, e.g., Fibonacci
heaps, that is not included.  That said:

(1) A dominance tree is a heap (using the most general definition of
heap) with the additional requirement that each element loses in a
comparison no more than once.  There doesn't seem to be any need for
"dominance trees" as a description of a data structure.  On the other
hand the term "dominance tree sorts" is useful since the term
"heap sorts", which ought to refer to the class of sorts based on heaps,
has instead been preempted as the label for a particular kind of
binary heap sort. 

(2) The category of dominance tree sorts is broader than that of
tournament sorts; however the standard tournament sort is isomorphic
to a dominance tree sort in which the tree is constructed using
tournament pairing and then in the selection phase the child lists of
each selection is processed sequentially.

(3) Tournament sorts (and the equivalent dominance tree sorts) are
highly efficient in terms of the average number of comparisons and the
minimax number of comparisons required.  Both quicksort and heapsort
require substantially more; mergesort is comparable but is slightly
more costly.  Merge-insert (Ford-Johnson) sorts are slightly more
efficient but the gain is minimal.  

(4) Tournament sorts have the same comparison cost function as binary
insertion sorts.  However binary insertion sorts are expensive in
terms of data move costs.  Tournament sorts are better in terms of
data move costs but have (IMHO) messy data structures.  The equivalent
dominance tree sorts have "nicer" data structures.



 
cri, 9/21/2004 6:28:18 AM