STL implementation performance

Hello,

We use standard containers in our code, except in task-specific parts
where we write our own containers that can handle huge data and/or are
optimized for specific algorithms. But there are places where standard
containers are used and the implementation could be faster. Correct me
if I'm wrong, but the Dinkumware and STLport implementations of
rb_tree::equal_range are both similar to:

pair<iterator, iterator> equal_range(const key_type& x)
{
    return pair<iterator, iterator>(lower_bound(x), upper_bound(x));
}

which does twice the necessary work for set/map and is not optimal for
multiset/multimap either. Sure, for set/map the call to equal_range can
be replaced by lower_bound, but shipping a standard function with an
obvious flaw is not good practice.

What do you think?

Regards,
    Armen


      [ See http://www.gotw.ca/resources/clcm.htm for info about ]
      [ comp.lang.c++.moderated.    First time posters: Do this! ]

10/19/2005 8:48:40 AM

"Nemra�" <armen_grigoryan@hotmail.com> wrote in message
news:1129707345.606089.294840@o13g2000cwo.googlegroups.com...

> We use standard containers in our code, except in task-specific parts
> where we write our own containers that can handle huge data and/or are
> optimized for specific algorithms. But there are places where standard
> containers are used and the implementation could be faster. Correct me
> if I'm wrong, but the Dinkumware and STLport implementations of
> rb_tree::equal_range are both similar to:
>
> pair<iterator, iterator> equal_range(const key_type& x)
> {
>     return pair<iterator, iterator>(lower_bound(x), upper_bound(x));
> }
>
> which does twice the necessary work for set/map and is not optimal for
> multiset/multimap either. Sure, for set/map the call to equal_range can
> be replaced by lower_bound, but shipping a standard function with an
> obvious flaw is not good practice.
>
> What do you think?

I think you need to find a realistic case where the difference
amounts to a hill of beans. If you do, you already know how
to do it faster, don't you?

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com



10/20/2005 11:25:43 AM
Nemra� wrote:
> Hello,
>
> We use standard containers in our code, except in task-specific parts
> where we write our own containers that can handle huge data and/or are
> optimized for specific algorithms. But there are places where standard
> containers are used and the implementation could be faster. Correct me
> if I'm wrong, but the Dinkumware and STLport implementations of
> rb_tree::equal_range are both similar to:
>
> pair<iterator, iterator> equal_range(const key_type& x)
> {
>     return pair<iterator, iterator>(lower_bound(x), upper_bound(x));
> }
>
> which does twice the necessary work for set/map and is not optimal for
> multiset/multimap either. Sure, for set/map the call to equal_range can
> be replaced by lower_bound, but shipping a standard function with an
> obvious flaw is not good practice.
>
> What do you think?

I think you could call the corresponding member function of set/map
directly, provide your own my::equal_range that does the right thing
for you, or hack the library and overload std::equal_range() for
std::set/map.


Maxim
10/20/2005 12:53:24 PM
Nemra� wrote:
> Hello,
>
> We use standard containers in our code, except in task-specific parts
> where we write our own containers that can handle huge data and/or are
> optimized for specific algorithms. But there are places where standard
> containers are used and the implementation could be faster. Correct me
> if I'm wrong, but the Dinkumware and STLport implementations of
> rb_tree::equal_range are both similar to:
>
> pair<iterator, iterator> equal_range(const key_type& x)
> {
>     return pair<iterator, iterator>(lower_bound(x), upper_bound(x));
> }
>
> which does twice the necessary work for set/map and is not optimal for
> multiset/multimap either. Sure, for set/map the call to equal_range can
> be replaced by lower_bound, but shipping a standard function with an
> obvious flaw is not good practice.
>
> What do you think?

I think that describing the equal_range implementation above as
"obviously flawed" is not justified. On the contrary, the
implementation is obviously correct, combining as it does its two
logical sub-operations. And correctness has tremendous value,
especially in a library.

I would be disappointed, though, with any routine that is the logical
combination of other routines yet fails to take advantage of the
obvious optimizations that the combination makes possible.  To keep the
discussion abstract and free of reference to any existing library, I
will choose a different routine as an example, say
std::string::replace.  Now, calling string::replace logically deletes
and then inserts characters in a std::string.  But I would not expect
an implementation of std::string simply to call erase() and insert().
For one, an obvious opportunity for optimizing this operation would
have been lost, and worse, there is no further opportunity left to the
client to optimize this operation outside of std::string itself. In
other words, unless the optimization is made in the library, it
cannot be made at all.

But there is a complication. Even though the library author is often in
the best position to make optimizations, it is the application author
who will know best which optimizations are needed for a particular
program. So there has to be some information flow back to the library
author from the library clients to ensure that useful optimizations
can be made in the appropriate places.

So as a client of the library I would point out (or even supply code
with the suggested improvement) to the library author where a useful
optimization could be made. I would be careful to do so in a
constructive manner, especially when the existing implementation is
obviously correct, if not optimally efficient.

Greg


Greg
10/21/2005 9:12:50 AM
Greg Herlihy wrote:

    [...]
> I would be disappointed, though, with any routine that is the
> logical combination of other routines yet fails to take
> advantage of the obvious optimizations that the combination
> makes possible.  To keep the discussion abstract and free of
> reference to any existing library, I will choose a different
> routine as an example, say std::string::replace.  Now,
> calling string::replace logically deletes and then inserts
> characters in a std::string.  But I would not expect an
> implementation of std::string simply to call erase() and
> insert().  For one, an obvious opportunity for optimizing this
> operation would have been lost, and worse, there is no further
> opportunity left to the client to optimize this operation
> outside of std::string itself.  In other words, unless the
> optimization is made in the library, it cannot be made at all.

Not really relevant to the current discussion (unless to say
that you chose a bad example), but it would seem more reasonable
for erase and insert to be implemented in terms of replace.
After all, erase() is just replace() with an empty string, and
insert() replace a substring of length 0.

I don't know about current implementations, but that's the way
my pre-standard string was implemented.

(For the rest of what you said, concerning library correctness
and ways of communicating with library implementors, I can only
concur.)

--
James Kanze                                           GABI Software
Conseils en informatique orientée objet/
                   Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34


kanze
10/21/2005 4:15:59 PM
Even if multiset is not the most used container, nobody here can make
the search faster without hacking the library code.  So its use in
"heavy" cases will lead to a performance problem with no easy solution.
As I already wrote, we replaced equal_range with lower_bound because we
know we are using std::set, but not every user knows that the two can
differ in time by a factor of two with some libraries.  Besides, we had
to replace this simple code:

std::pair<SetIt, SetIt> range = someSet.equal_range(someVal);
if (range.first != range.second)
{
     range.first = someSet.insert(range.first, someVal);
}
// use range.first which points to the required value

With the following:

SetIt it = someSet.lower_bound(someVal);
if (it == someSet.end() || someSet.key_comp(*it, someVal) ||
someSet.key_comp(someVal, *it))
{
     it = someSet.insert(it, someVal);
}
// use it which points to the required value

I'd prefer the first one...

Regards,
     Armen


10/22/2005 1:49:19 PM
"Greg Herlihy" <greghe@pacbell.net> wrote in message 
news:1129850301.991311.94290@g14g2000cwa.googlegroups.com...

> But there is a complication. Even though the library author is often in
> the best position to make optimizations, it is the application author
> who will know best which optimizations are needed for a particular
> program. So there has to be some information flow back to the library
> author from the library clients to ensure that useful optimizations
> can be made in the appropriate places.
>
> So as a client of the library I would point out (or even supply code
> with the suggested improvement) to the library author where a useful
> optimization could be made. I would be careful to do so in a
> constructive manner, especially when the existing implementation is
> obviously correct, if not optimally efficient.

And the library author might politely decline to make the
"obvious" improvement, in some cases, particularly  if there
is any chance that improving one use of the library might
deoptimize another use. Library authors must continually
guess, on the basis of incomplete information, what are
the best tradeoffs to make.

In the case originally cited in this thread, the OP presumed
that equal_range is suboptimal if it merely calls lower_bound
and upper_bound. But that is often the best approach, even
for a set or map with unique elements. If a user has reason
to believe that a *particular* set or map can find an
equal range faster -- say by calling lower_bound and testing
the element it designates (if any) -- and if the user has
reason to believe that the difference in performance *warrants*
writing, debugging, and maintaining a bespoke function,
then the user is of course free to do so. But there are
enough "if"s here to dissuade most library writers from
tinkering much with equal_range.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com



10/22/2005 1:56:52 PM
On 22 Oct 2005 09:49:19 -0400, "Nemra�" <armen_grigoryan@hotmail.com>
wrote:

> As I already wrote, we replaced equal_range with lower_bound because we
> know we are using std::set, but not every user knows that the two can
> differ in time by a factor of two with some libraries.  Besides, we had
> to replace this simple code:

> std::pair<SetIt, SetIt> range = someSet.equal_range(someVal);
> if (range.first != range.second)
> {
>      range.first = someSet.insert(range.first, someVal);
> }
> // use range.first which points to the required value

Since it does nothing, it is a good thing to replace it.  I'm sure the
real code must have == rather than !=.

> With the following:

> SetIt it = someSet.lower_bound(someVal);
> if (it == someSet.end() || someSet.key_comp(*it, someVal) ||
> someSet.key_comp(someVal, *it))

Some knowledge of lower_bound would help.  It returns end, something
larger, or the target.  Something less is impossible because that would
not be a lower bound.  Remove the redundant middle test.

> {
>      it = someSet.insert(it, someVal);
> }
> // use it which points to the required value

> I'd prefer using first one...

I can't imagine why.  The logic of the code makes no sense for a
multiset because it prevents insertion of duplicates.  The best
possible code for equal_range in a set would be to determine lower_bound
and then make the above test to determine whether upper_bound is the
same or the next item.  Now your code adds an extra test on top of that,
not to mention returning two iterators when one was enough.  Using
equal_range on a set makes no sense.

John

John
10/24/2005 12:16:09 AM
John, thanks for pointing out mistakes!
My statement was that equal_range should either be optimized or be
well documented, because it is easy for a user who doesn't know the
library implementation well to run into a performance problem.
Concerning multiset/multimap: it might be hard to imagine a use of
equal_range that would be hard to replace with different processing,
but again we come to the same conclusion: if equal_range has to be
replaced for performance, why do we need it at all?

Regards,
    Armen


10/24/2005 9:01:21 AM
In article <1129707345.606089.294840@o13g2000cwo.googlegroups.com>,
 "Nemra�" <armen_grigoryan@hotmail.com> wrote:

> Hello,
> 
> We use standard containers in our code, except in task-specific parts
> where we write our own containers that can handle huge data and/or are
> optimized for specific algorithms. But there are places where standard
> containers are used and the implementation could be faster. Correct me
> if I'm wrong, but the Dinkumware and STLport implementations of
> rb_tree::equal_range are both similar to:
>
> pair<iterator, iterator> equal_range(const key_type& x)
> {
>     return pair<iterator, iterator>(lower_bound(x), upper_bound(x));
> }
>
> which does twice the necessary work for set/map and is not optimal for
> multiset/multimap either. Sure, for set/map the call to equal_range can
> be replaced by lower_bound, but shipping a standard function with an
> obvious flaw is not good practice.
> 
> What do you think?

Here's a test case I'm running:

#include <iostream>
#include <vector>
#include <numeric>
#include <algorithm>
#include <time.h>

template <class F>
float
test(F f)
{
    std::vector<float> t;
    int i;
    for (i = 0; i < 10; ++i)
        t.push_back(f());
    double total_time = std::accumulate(t.begin(), t.end(), 0.0);
    while (i < 1000 && total_time < 1.0)
    {
        t.push_back(f());
        total_time += t.back();
        ++i;
    }
    std::sort(t.begin(), t.end());
    t.resize(t.size() * 9 / 10);
    return (float)(std::accumulate(t.begin(), t.end(), 0.0) / t.size());
}

const int M = 5;
const int N = 1000000;
const int F = N-1;

#include <set>

std::multiset<int> s;

void load(std::multiset<int>& s)
{
    for (int j = 0; j < N; ++j)
        for (int i = 0; i < M; ++i)
            s.insert(j);
}

float
time_multiset_equal_range()
{
    clock_t t = clock();
    {
    std::pair<std::multiset<int>::iterator,
              std::multiset<int>::iterator> p = s.equal_range(F);
    }
    t = clock() - t;
    return (float)(t/(double)CLOCKS_PER_SEC);
}

float
time_multiset_lower_upper()
{
    clock_t t = clock();
    {
    std::pair<std::multiset<int>::iterator,
              std::multiset<int>::iterator> p;
    p.first = s.lower_bound(F);
    p.second = s.upper_bound(F);
    }
    t = clock() - t;
    return (float)(t/(double)CLOCKS_PER_SEC);
}

int main()
{
    load(s);
    float t1 = test(time_multiset_equal_range);
    float t2 = test(time_multiset_lower_upper);
    std::cout << t2/t1 << '\n';
}

For me it prints out (approximately):

1.24239

YMMV.

The point of the test is to compare equal_range to calls to 
lower/upper_bound.  In the implementation I'm running, something has 
obviously been done to consistently increase the speed of equal_range 
over calls to lower/upper_bound by about 24%.  Maybe that is significant 
for you, maybe it isn't.  Obviously the number you get is going to vary 
with the tweaking parameters M, N and F.  But if you start consistently 
getting a number very close to 1 (and you can define "very close" 
yourself), perhaps you and your vendor should have a little talk.  If 
the number I've reported here doesn't amount to a hill of beans for you, 
then no action at all is warranted.

-Howard

Howard
10/25/2005 12:45:34 PM
On 24 Oct 2005 05:01:21 -0400, "Nemra�" <armen_grigoryan@hotmail.com>
wrote:

> My statement was that equal_range should either be optimized or be
> well documented, because it is easy for a user who doesn't know the
> library implementation well to run into a performance problem.
> Concerning multiset/multimap: it might be hard to imagine a use of
> equal_range that would be hard to replace with different processing,
> but again we come to the same conclusion: if equal_range has to be
> replaced for performance, why do we need it at all?

Well, we need it when it makes sense.  It might make sense for a
multiset/multimap or another sorted container with duplicates.  Your
real question is about the associative containers not using a more
efficient algorithm.  My implementation has an efficient <algorithm>
and the less than efficient <tree> you describe.  Since many libraries
are derived from the original HP/SGI library, yours may be the same.

Here is a little test program which compares the <algorithm> version to
the <tree> member.  The <algorithm> version is very inefficient here,
but it is used to show the number of compares, which is the unit of
cost for a search.

#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <iterator>
#include <set>
using namespace std;
template <class N>
struct LinearProgression {
    N val;
    N inc;
    LinearProgression (N const& val = 0, N const& inc = 1)
: val(val - inc), inc(inc) {
        }
    N operator() () {
        return val += inc;
        }
    };
struct I {
    int i;
    I (int i = 0) : i(i) {
        }
    static int compares;
    };
int I::compares;
bool operator< (I lhs, I rhs) {
    ++ I::compares;
    return lhs.i < rhs.i;
    }
struct Test {
    multiset<I> const& s;
    int ca, cs;
    Test (multiset<I> const& s) : s(s), ca(0), cs(0) {
        }
    void operator() (I t) {
        I::compares = 0;
        equal_range(s.begin(), s.end(), t);
        ca += I::compares;
        I::compares = 0;
        s.equal_range(t);
        cs += I::compares;
        }
    ostream& print (ostream& os) const {
        return os << "algo " << ca << " member " << cs << endl;
        }
    };
int main (int, char** argv) {
    int max(atoi(argv[1]));
    max |= 1;
    multiset<I> data;
    for (int x(1); x != max; x += 2)
        for (int y = x; y & 1; y /= 2)
            data.insert(x);
    set<I> test;
    generate_n(inserter(test, test.end()), max / 2,
            LinearProgression<int>(1, 2));
    for_each(test.begin(), test.end(),
            Test(data)).print(cout << "Hit  ");
    test.clear();
    generate_n(inserter(test, test.end()), max / 2 + 1,
            LinearProgression<int>(0, 2));
    for_each(test.begin(), test.end(),
            Test(data)).print(cout << "Miss ");
    test.clear();
    generate_n(inserter(test, test.end()), max,
            LinearProgression<int>(0, 1));
    for_each(test.begin(), test.end(),
            Test(data)).print(cout << "All  ");
    }

My results for 1000 were:

Hit  algo 8183 member 10362
Miss algo 7526 member 10388
All  algo 15709 member 20750

The efficient algorithm is only 25% better than the inefficient member.
Is the difference worth changing the <tree> algorithm?  I wonder why the
two algorithms are not the same in a library.  History?

Maybe Code Warrior beats this with an efficient <tree>?

Maybe nobody cares?

John

John
10/25/2005 1:21:58 PM
On 25 Oct 2005 08:45:34 -0400, Howard Hinnant <howard.hinnant@gmail.com>
wrote:

> Here's a test case I'm running:

[snip]

> const int M = 5;
> const int N = 1000000;
> const int F = N-1;

[snip]

> float
> time_multiset_equal_range()
> {
>     clock_t t = clock();
>     {
>     std::pair<std::set<int>::iterator, std::set<int>::iterator> p =
> s.equal_range(F);

Is your implementation that slow or your clock that fast?

>     }
>     t = clock() - t;
>     return (float)(t/(double)CLOCKS_PER_SEC);
> }

[snip]

> int main()
> {
>     load(s);
>     float t1 = test(time_multiset_equal_range);
>     float t2 = test(time_multiset_lower_upper);
>     std::cout << t2/t1 << '\n';
> }

> For me it prints out (approximately):

> 1.24239

Knowing that it is your library, I will assume that you remember or
checked that equal_range in the tree really does do something other than
call lower and upper like most libraries.  However, I wonder what you
really measured with this test.

For me it prints NaNq because nothing is ever measured.  Adding a for
loop between the two calls of clock and executing 100 times gives
results.  I kicked it up to 100000 to get meaningful results.

Now I get 0.99 to 1.01 as expected for the gcc implementation.

> The point of the test is to compare equal_range to calls to
> lower/upper_bound.  In the implementation I'm running, something has
> obviously been done to consistently increase the speed of equal_range
> over calls to lower/upper_bound by about 24%.  Maybe that is significant
> for you, maybe it isn't.  Obviously the number you get is going to vary
> with the tweaking parameters M, N and F.  But if you start consistently
> getting a number very close to 1 (and you can define "very close"
> yourself), perhaps you and your vendor should have a little talk.  If
> the number I've reported here doesn't amount to a hill of beans for you,
> then no action at all is warranted.

Replacing the multiset with vector and using the general algorithms
where I know that equal_range is done well gives nice results.  Kicked
the loop up to 1000000 to get meaningful results.  Considering that you
used a search for N-1 which should go to a leaf before a match, I would
expect equal_range to be about twice as fast as the two calls.

I get 1.62 to 1.68.

If that 1.24 is really measuring something other than random clock
ticks, maybe you should have a talk with your vendor.  :)

John

John
10/26/2005 1:07:27 PM
On 25 Oct 2005 08:45:34 -0400, Howard Hinnant <howard.hinnant@gmail.com>
wrote:

> For me it prints out (approximately):

> 1.24239

I modified the gcc library to do a "smart" equal_range.  Using your
tests with the loops to get meaningful results, I got about 2.  This is
a best case.  Changing F to 0 which searches down the other side making
two compares at each step, I got about 1.3.  This is a worst case.  On
average, we should expect about 1.5.

With the builtin < for int, turning on optimization gives further
improvement because the two compares are reduced to one.  The worst
case changed to about 1.7.  The unoptimized results above are realistic
for a complex compare.

John

John
10/26/2005 1:16:57 PM
In article <sJx7f.113$2y.40@newsread2.news.atl.earthlink.net>,
 John Potter <jpotter@lhup.edu> wrote:

> Is your implementation that slow or your clock that fast?

The clock is that fast.

> > For me it prints out (approximately):
> 
> > 1.24239
> 
> Knowing that it is your library, I will assume that you remember or
> checked that equal_range in the tree really does do something other than
> call lower and upper like most libraries.

<nod>

> However, I wonder what you
> really measured with this test.
> 
> For me it prints NaNq because nothing is ever measured.  Adding a for
> loop between the two calls of clock and executing 100 times gives
> results.  I kicked it up to 100000 to get meaningful results.

I should have mentioned the fast clock.

> Replacing the multiset with vector and using the general algorithms
> where I know that equal_range is done well gives nice results.  Kicked
> the loop up to 1000000 to get meaningful results.  Considering that you
> used a search for N-1 which should go to a leaf before a match, I would
> expect equal_range to be about twice as fast as the two calls.
> 
> I get 1.62 to 1.68.
> 
> If that 1.24 is really measuring something other than random clock tics,
> maybe you should have a talk with your vendor.  :)

To tell you the truth I was expecting something closer to what you got 
too.  But ...

In article <N4B7f.224$2y.141@newsread2.news.atl.earthlink.net>,
 John Potter <jpotter@lhup.edu> wrote:

> I modified the gcc library to do a "smart" equal_range.  Using your
> tests with the loops to get meaningful results, I got about 2.  This is
> a best case.  Changing F to 0 which searches down the other side making
> two compares at each step, I got about 1.3.  This is a worst case.  On
> average, we should expect about 1.5.
> 
> With the builtin < for int, turning on optimization gives more
> improvements because the two compares are reduced to one.  The worst
> case changed to about 1.7.  The unoptomized results above are realistic
> for a complex compare.

I'll just trust you to check that in, and thanks. :-)

-Howard

Howard
10/27/2005 9:29:25 AM
Hello,

Thanks for benchmarking! As you can see, the difference can be big;
even 20% is a big number when it sits inside a long-running algorithm.
Anyway, it's up to the library implementors to weigh an efficient
solution against an easy-to-maintain one, so let's hope they will do us
a favour...

Regards,
     Armen


10/27/2005 10:36:30 AM
Nemra� wrote:
> Hello,
> 
> Thanks for benchmarking! As you can see, the difference can be big;
> even 20% is a big number when it sits inside a long-running algorithm.
> Anyway, it's up to the library implementors to weigh an efficient
> solution against an easy-to-maintain one, so let's hope they will do us
> a favour...
> 

Both STLport and libstdc++-v3 (the implementation of the C++ standard
library in g++) are open source. I know libstdc++-v3 is always happy to
consider patches which improve the library, and I'm sure STLport is
similar. Remember, a reasonable number of the "library implementors"
are just C++ users who decided to start hacking at whatever
implementation of the standard library they happened to have lying
around :)

Chris

chris
10/27/2005 4:40:43 PM