
### Why is recursion useful?


Hi;

I was reading a digg article the other day.  It was by some uber geek
who has been interviewing people for jobs and who has started to feel
as if programmers don't have the level of skill that they used to
have.

One of his examples was that many applicants didn't know what
recursion was or how to solve a problem with it.

I guess I am in the middle between the "real programmers" of
yesteryear and the glorified web developers applying for entry level
positions.

Why is recursion useful, and why do you need it instead of using an
ordinary loop inside of a function?

Thanks much in advance for any thoughts.

Reply tinker123 (180) 3/15/2007 12:11:53 AM


Steve wrote:
> Why is recursion useful,  why do you need it instead of using an
> ordinary loop inside of a function?

Recursion is useful for two reasons:
(1) it often makes the code much easier to understand, and
(2) not all cases are as simple as linearly iterating through a
list of things; sometimes when you see one thing, that means
you need to iterate over several things.

An example of #2 is traversing a directory tree.  You have, say,
this structure:

foo/
    1/
        a
        b
    2/
        c
    3/
        d
        e
        f

When you're done looking at foo, you need to look at 1 and all its
children, then when you're done with that, you need to look at 2
and all its children, and so on.  This type of thing is just a
zillion times easier and clearer when expressed recursively.  Here's
some example pseudo code:

traverse_directory(file f) {
    if (f.is_dir()) {
        print "I am in directory " + f;

        foreach subdir in f.list_dir() {
            traverse_directory(subdir);
        }
    }

    if (f.is_plain_file()) {
        print "I see file " + f;
    }
}

It is certainly not impossible to do this iteratively, but it's
not easy, especially when compared to the above.

- Logan

Steve wrote:

> Why is recursion useful,  why do you need it instead of using an
> ordinary loop inside of a function?

Recurse when you need a LIFO stack, and borrowing the system stack makes
that easy.

A Last In First Out stack works like one of those springy plate dispensers
in a cafeteria.  (The dispenser is springy, not the plates.)  You put a plate on
top, and it goes down; you take a plate off the top, and the stack comes up.
You can't easily get to the plates at the bottom when the dispenser is full.

This is exactly how variables in methods work.  The more deeply you call
methods, the deeper the stack of their local variables grows; each
return pops a layer back off the top.

So here's a little tree:

        <
    <
  /     <
<
  \     <
    <
        <

If you have a Composite Pattern in memory, you can "walk" the object model
by calling your own walk function twice, once for the left node and again
for the right node. You could do that without recursion, by storing the list
of parent nodes between your current point and the root. Recursion
encapsulates that list, making your code cleaner.

Now bone up on Design Patterns, to become a Real Software Engineer!

--
Phlip
http://www.greencheese.us/ZeekLand <-- NOT a blog!!!


"Steve" <tinker123@gmail.com> wrote:
> [...]
> Why is recursion useful,  why do you need it instead of using an
> ordinary loop inside of a function?

There are several reasons why recursion is an extremely useful tool for
constructing programs.  One is that recursive program structures echo
the elegant simplicity of inductive definitions, and that makes it much
easier to write programs and reason about their behaviour.  A second is
that recursion is a more general control mechanism than a loop, in that
recursion semantics can model loop semantics without additional
structure -- this permits a very powerful language to be constructed
without a large number of primitive abstractions.

As a consequence, recursion provides a very expressive and rather
beautiful basic tool -- and clearly one that nearly every modern
programming language supports to some degree.  And I contend that even
if you need a more practical reason to make an effort to understand the
subtleties of recursion, its ubiquity alone ought to provide sufficient
argument.

That said, I think it would be a mistake to treat any single
abstraction, even one as powerful as recursion, as a silver bullet.  In
the words of the late Alan Perlis, "beware the Turing tar pit, in which
everything is possible, but nothing of interest is easy."  Well, with
recursion, many interesting things ARE easy, but of course it helps to
have a few other program structuring tools as well, as part of a
complete language.

Cheers,
-M

--
Michael J. Fromberger             | Lecturer, Dept. of Computer Science
http://www.dartmouth.edu/~sting/  | Dartmouth College, Hanover, NH, USA

On Mar 14, 5:11 pm, "Steve" <tinker...@gmail.com> wrote:
> [...]
> Why is recursion useful,  why do you need it instead of using an
> ordinary loop inside of a function?

There are lots of problems for which a 'divide and conquer' strategy
is useful.
For these problems, recursion is an elegant and natural fit.

Often, the teaching examples (e.g. 'use recursion to calculate the
factorial') are actually the wrong way to solve it.  In these
instances, iteration is really superior.  But you may have a problem
which will have a much better solution implemented recursively.

Mathematical recurrence relations are a simple example of something
that can be solved with recursion and the solution will be easily
understandable.  For example, many orthogonal polynomial sets are
defined by recurrence relations.

The quicksort algorithm uses the divide and conquer strategy.  We
recursively divide the data set into partitions (<, =, > pivot) until
the partitions are of size zero.

Sometimes it is worthwhile to transform a recursive solution into an
iterative one, and sometimes it is more bother than it is worth (e.g.
the transformed solution may not be much faster and may be far more
complex to maintain).  That's the real art of it: knowing when
transformation to iteration is a good idea.


user923005 wrote:

> The quicksort algorithm uses the divide and conquor strategy.  We
> recursively divide the data set into partitions (<,=,> pivot) until
> the partitions are of size zero.
>
> Sometimes it is worthwhile to transform a recursive solution into an
> iterative one, and sometimes it is more bother than it is worth (e.g
> the transformed solution may not be much faster and may be far more
> complex to maintain).  That's the real art of it (knowing when
> transformation to iteration is a good idea or not).

I've never seen or heard of an iterative version of mergesort.
I've thought about attempting it a couple of times
but I drew a blank.

--
pete

On Mar 15, 6:24 pm, pete <pfil...@mindspring.com> wrote:
> user923005 wrote:
> > The quicksort algorithm uses the divide and conquor strategy.  We
> > recursively divide the data set into partitions (<,=,> pivot) until
> > the partitions are of size zero.
>
> > Sometimes it is worthwhile to transform a recursive solution into an
> > iterative one, and sometimes it is more bother than it is worth (e.g
> > the transformed solution may not be much faster and may be far more
> > complex to maintain).  That's the real art of it (knowing when
> > transformation to iteration is a good idea or not).
>
> I've never seen or heard of an iterative version of mergesort.
> I've thought about attempting it a couple of times
> but I drew a blank.
>
> --
> pete

http://penguin.ewu.edu/~trolfe/NaturalMerge/index.html
http://ds4beginners.wordpress.com/2006/09/22/merge-sort-iteration/
http://p2p.wrox.com/TopicIndex/17558.htm


pete <pfiland@mindspring.com> writes:
> I've never seen or heard of an iterative version of mergesort.

Really?  All of my linked-list merge sorts are iterative.
--
Ben Pfaff
blp@cs.stanford.edu
http://benpfaff.org

pete wrote in message

> I've never seen or heard of an iterative version of mergesort.
> I've thought about attempting it a couple of times
> but I drew a blank.

That's what I meant by saying recursion borrows the system stack for a data stack.
If you write a quicksort with an explicit data stack, then your function
might technically not call itself, yet your algorithm is still recursive -
defined in terms of itself.

--
Phlip
http://www.greencheese.us/ZeekLand <-- NOT a blog!!!


pete wrote:
>
.... snip ...
>
> I've never seen or heard of an iterative version of mergesort.
> I've thought about attempting it a couple of times
> but I drew a blank.

ANY recursive algorithm can be converted into iteration by use of
an auxiliary stack.  Very straightforward process.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>

--
Posted via a free Usenet account from http://www.teranews.com


pete wrote:
> I've never seen or heard of an iterative version of mergesort.
> I've thought about attempting it a couple of times
> but I drew a blank.

The how and why of an iterative mergesort dawned on me one time
when I was considering how one would use mergesort to sort data
using only tapes (i.e. when there is not enough RAM to store the
whole data set in memory).

My first thought was that you would want to write a single 1-element
list to each of two tapes.  Then sequentially read both tapes and
merge those two onto a third.  Then you'd have a tape with a sorted
2-element list.  If you then had another tape with another sorted
2-element list, you could merge those two 2-element lists onto another
tape and get a 4-element list.

Then it occurred to me that in order not to have an arbitrarily
large number of tapes (one for each item in the list, since you
are starting with trivially-sorted 1-element lists), you could
write a series of lists to each tape, putting some kind of marker
between the lists.  (Many tape drives support such markers in
hardware.)  Thus, the sorting process would consist of reading
the initial unsorted items off a single tape and writing half
of them to one tape ('tape A') and half to another ('tape B').
Now, if you had N items to start, tape A doesn't have a single
list with N/2 items in it; instead, it has N/2 sorted lists of
length 1.  Same thing for tape B.  Now you do N/2 merge operations.
You merge the first list from tape A with the first list from
tape B and write the result to tape C.  Then you merge the next
list from tape A and the next list from tape B and write the
result to tape D.  Then you merge another two and write to C,
and you keep alternating between C and D.

Now tapes C and D each have N/4 sorted lists of size 2.  You
repeat the process but bounce it back the other direction
so that you're reading from C and D and writing to A and B.
When you're done, A and B have N/8 sorted lists of size 4.
You can see where it's going.  And it's definitely not recursive.
It's not even random access, for crying out loud!

- Logan

On Fri, 16 Mar 2007 01:24:07 GMT, pete <pfiland@mindspring.com> wrote:

>user923005 wrote:
>
>> The quicksort algorithm uses the divide and conquor strategy.  We
>> recursively divide the data set into partitions (<,=,> pivot) until
>> the partitions are of size zero.
>>
>> Sometimes it is worthwhile to transform a recursive solution into an
>> iterative one, and sometimes it is more bother than it is worth (e.g
>> the transformed solution may not be much faster and may be far more
>> complex to maintain).  That's the real art of it (knowing when
>> transformation to iteration is a good idea or not).
>
>I've never seen or heard of an iterative version of mergesort.
>I've thought about attempting it a couple of times
>but I drew a blank.

The iterative version of mergesort does the merging explicitly bottom
up.  You have an outer loop that goes from n/2 to 1 dividing by 2 in
each step, and an inner merge loop of length (n/2)/outer_index. You have
to do some fiddles if n is not a power of two.  Details are left to the
student.


Logan Shaw wrote:
>
.... snip ...
>
> Now tapes C and D each have N/4 sorted lists of size 2.  You
> repeat the process but bounce it back the other direction
> so that you're reading from C and D and writing to A and B.
> When you're done, A and B have N/8 sorted lists of size 4.
> You can see where it's going.  And it's definitely not recursive.
> It's not even random access, for crying out loud!

Look up polyphase sort.  Wirth has a good description in
"Algorithms + Data Structures = Programs".

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>

--
Posted via a free Usenet account from http://www.teranews.com


On Mar 16, 1:36 pm, c...@tiac.net (Richard Harter) wrote:
[snip]
> Looking up polyphase sort is always a good thing to do.  However it is
> only tangentially relevent to the issue at hand.

Though polyphase sort/merge is used all over the place [e.g.
PostgreSQL], it is obsolete.  So is replacement selection (for
that matter).

There is no need to perform log2(subfiles) passes over the data, as it
is a simple matter to merge the entire sequence of subfiles in a
single pass.  Simply place the heads of the subfiles in a priority
queue that allows change of priority and empty all of the lists.

Consider 1024 subfiles.  That would require 10 + 1 = 11 read/write
passes using polyphase sort/merge.
Using a priority queue cuts it down to 1 + 1 = 2 read/write passes.
Consider 2 subfiles.  Polyphase needs 3 passes, and priority queue
requires 2.  This is the minimum advantage for priority queue based
merge.

So looking up polyphase sort/merge is useful and interesting for
historical purposes.  Just don't implement it in modern software
unless you have a striking and powerful reason to do so.


On Fri, 16 Mar 2007 02:29:44 -0500, CBFalconer <cbfalconer@yahoo.com>
wrote:

>Logan Shaw wrote:
>>
>... snip ...
>>
>> Now tapes C and D each have N/4 sorted lists of size 2.  You
>> repeat the process but bounce it back the other direction
>> so that you're reading from C and D and writing to A and B.
>> When you're done, A and B have N/8 sorted lists of size 4.
>> You can see where it's going.  And it's definitely not recursive.
>> It's not even random access, for crying out loud!
>
>Look up polyphase sort.  Wirth has a good description in
>"Algorithms + Data Structures = Programs".

Looking up polyphase sort is always a good thing to do.  However it is
only tangentially relevent to the issue at hand.


user923005 wrote:
>
> Often, the teaching examples (e.g. 'use recursion to calculate the
> factorial') are actually the wrong way to solve it.  In these
> instances, iteration is really superior.  But you may have a problem
> which will have a much better solution implemented recursively.
>

I struggled mightily with recursion in freshman CS.  Part of it was my
own stupidity, part was the teaching method, and part was the tools
available.  CS was in its infancy; K&R C hadn't even been published.

Years later, when I taught CS, I used natural examples and fractals
(patterns in a leaf, doodles on a sheet of paper, and so on) and my
students caught on very quickly.

Recursion is very natural - so natural that it is counterintuitive.
Most people think it is something esoteric, so they don't want to
comprehend just how easy it really is.

Richard Harter wrote:
> CBFalconer <cbfalconer@yahoo.com> wrote:
>> Logan Shaw wrote:
>>
>>... snip ...
>>>
>>> Now tapes C and D each have N/4 sorted lists of size 2.  You
>>> repeat the process but bounce it back the other direction
>>> so that you're reading from C and D and writing to A and B.
>>> When you're done, A and B have N/8 sorted lists of size 4.
>>> You can see where it's going.  And it's definitely not recursive.
>>> It's not even random access, for crying out loud!
>>
>> Look up polyphase sort.  Wirth has a good description in
>> "Algorithms + Data Structures = Programs".
>
> Looking up polyphase sort is always a good thing to do.  However
> it is only tangentially relevent to the issue at hand.

Why do you say that?  It is a mergesort optimized to minimize i/o.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>

--
Posted via a free Usenet account from http://www.teranews.com


"user923005" writes:

>> Looking up polyphase sort is always a good thing to do.  However it is
>> only tangentially relevent to the issue at hand.
>
> Though it (polyphase sort/merge) is used all over the place [e.g
> PostgreSQL], polyphase sort/merge is obsolete.  So is replacement-
> selection (for that matter).
>
> There is no need to perform log2(subfiles) passes over the data, as it
> is a simple matter to merge the entire sequence of subfiles in a
> single pass.  Simply place the heads of the subfiles in a priority
> queue that allows change of priority and empty all of the lists.
>
> Consider 1024 subfiles.  That would require 10 + 1 = 11 read/write
> passes using polyphase sort/merge.
> Using a priority queue cuts it down to 1 + 1 = 2 read/write passes.
> Consider 2 subfiles.  Polyphase needs 3 passes, and priority queue
> requires 2.  This is the minimum advantage for priority queue based
> merge.
>
> So looking up polyphase sort/merge is useful and interesting for
> historical purposes.  Just don't implement it in modern software
> unless you have a striking and powerful reason to do so.

Most sites didn't have 1024 tape units.


On Mar 16, 6:13 pm, "osmium" <r124c4u...@comcast.net> wrote:
[snip]
> Most sites didn't have 1024 tape units.

Yes, but 1024 subfiles is possible on many modern disk subsystems.
Generally, sorting is done on disk nowadays, even in mainframe shops.
Disk is cheaper and faster; tape drives today are only used for
backup.  Look at what a terabyte of disk costs today:
http://www.nextag.com/LaCie-Big-Disk-Extreme-77082608/prices-html


On Mar 16, 3:20 pm, CBFalconer <cbfalco...@yahoo.com> wrote:
> Richard Harter wrote:
> > CBFalconer <cbfalco...@yahoo.com> wrote:
> >> Logan Shaw wrote:
>
> >>... snip ...
>
> >>> Now tapes C and D each have N/4 sorted lists of size 2.  You
> >>> repeat the process but bounce it back the other direction
> >>> so that you're reading from C and D and writing to A and B.
> >>> When you're done, A and B have N/8 sorted lists of size 4.
> >>> You can see where it's going.  And it's definitely not recursive.
> >>> It's not even random access, for crying out loud!
>
> >> Look up polyphase sort.  Wirth has a good description in
> >> "Algorithms + Data Structures = Programs".
>
> > Looking up polyphase sort is always a good thing to do.  However
> > it is only tangentially relevent to the issue at hand.
>
> Why do you say that?  It is a mergesort optimized to minimize i/o.

However, it fails to accomplish that goal.  There are clearly better
ways.
It is (nevertheless) quite an interesting study.


On Mar 16, 3:29 pm, "user923005" <dcor...@connx.com> wrote:
> There is no need to perform log2(subfiles) passes over the data, as it
> is a simple matter to merge the entire sequence of subfiles in a
> single pass.  Simply place the heads of the subfiles in a priority
> queue that allows change of priority and empty all of the lists.
>
> Consider 1024 subfiles.  That would require 10 + 1 = 11 read/write
> passes using polyphase sort/merge.
> Using a priority queue cuts it down to 1 + 1 = 2 read/write passes.
> Consider 2 subfiles.  Polyphase needs 3 passes, and priority queue
> requires 2.  This is the minimum advantage for priority queue based
> merge.

Treating disk like a random access device for sorting purposes is a
classic beginner mistake which leads to horrible sort performance.
The number of subfiles you can usefully merge at once is limited by
the amount of memory you have available to do large sequential reads
(large enough to minimize the seek penalty) of each subfile.
Obviously, reducing the merge order increases the number of merge
passes.

Somewhat more concretely, doing a 32-way merge instead of 1024-way
merge allows you 32 times bigger input buffers.  If those 32 fold
bigger input buffers allow you to double or triple the subfile read
throughput, you'll sort faster by doing two 32-way merges rather than
a single 1024 merge.  If, for example, you move from 4K buffers to
128K buffers (4-8MB total input buffer space), you'll probably get a
15-20x improvement in throughput.  OTOH, moving from 1MB to 32MB
buffers (1-2GB total) will have only a modest impact.

Of course all that assumes that you've taken some care in disk layout
as regards your sort work areas.


On Mar 16, 11:31 pm, "robertwess...@yahoo.com"
<robertwess...@yahoo.com> wrote:
> On Mar 16, 3:29 pm, "user923005" <dcor...@connx.com> wrote:
>
> > There is no need to perform log2(subfiles) passes over the data, as it
> > is a simple matter to merge the entire sequence of subfiles in a
> > single pass.  Simply place the heads of the subfiles in a priority
> > queue that allows change of priority and empty all of the lists.
>
> > Consider 1024 subfiles.  That would require 10 + 1 = 11 read/write
> > passes using polyphase sort/merge.
> > Using a priority queue cuts it down to 1 + 1 = 2 read/write passes.
> > Consider 2 subfiles.  Polyphase needs 3 passes, and priority queue
> > requires 2.  This is the minimum advantage for priority queue based
> > merge.
>
> Treating disk like a random access device for sorting purposes is a
> classic beginner mistake which leads to horrible sort performance.
> The number of subfiles you can usefully merge at once is limited by
> the amount of memory you have available to do large sequential reads
> (large enough to minimize the seek penalty) of each subfile.
> Obviously reducing the sort order increases the number of merge
> passes.
>
> Somewhat more concretely, doing a 32-way merge instead of 1024-way
> merge allows you 32 times bigger input buffers.  If those 32 fold
> bigger input buffers allow you to double or triple the subfile read
> throughput, you'll sort faster by doing two 32-way merges rather than
> a single 1024 merge.  If, for example, you move from 4K buffers to
> 128K buffers (4-8MB total input buffer space), you'll probably get a
> 15-20x improvement in throughput.  OTOH, moving from 1MB to 32MB
> buffers (1-2GB total) will have only a modest impact.
>
> Of course all that assumes that you've taken some care in disk layout
> as regards your sort work areas.

1024 buffers of 1MB each is 1 GB.  That's about $100.
http://www.dealtime.com/xPO-Kingston-1GB-PC2-4300U-C4-N-ECC-2-BANK-1GB-PC2-4300U-C4-N-ECC-2-BANK

I have 2GB on this machine, and could easily perform a sort with 1024
megabyte sized buffers.

Now that we have 64 bit CPUs going mainstream, and soon to follow 64
bit operating systems, it's time to rethink some fundamental ways of
doing things.


On 17 Mar 2007 00:17:47 -0700
"user923005" <dcorbit@connx.com> wrote:

> Now that we have 64 bit CPUs going mainstream, and soon to follow 64
> bit operating systems,

Huh?  There have been 64 bit operating systems for many years now,
even for commodity (PC) hardware.

--
C:>WIN                                      |   Directable Mirror Arrays
The computer obeys and wins.                | A better way to focus the sun
You lose and Bill collects.                 |    licences available see
|    http://www.sohara.org/

Steve O'Hara-Smith wrote:
> On 17 Mar 2007 00:17:47 -0700
> "user923005" <dcorbit@connx.com> wrote:
>
>> Now that we have 64 bit CPUs going mainstream, and soon to follow 64
>> bit operating systems,
>
> 	Huh there have been 64 bit operating systems for many years now,
> even for commodity (PC) hardware.

Now that we have 64 bit CPUs going mainstream, and soon to
follow 64 bit operating systems also going mainstream, . . .

- Logan

robertwessel2@yahoo.com wrote:

> Treating disk like a random access device for sorting purposes is a
> classic beginner mistake which leads to horrible sort performance.

I've been waiting for someone to point that out.  I have seen a couple
of orders of magnitude difference (hours vs. minutes) made by
designing/choosing algorithms on the basis that seeks were likely to
dominate execution time.  Not at all a trivial or even just secondary
issue.

-- chris

user923005 wrote:
>
> On Mar 15, 6:24 pm, pete <pfil...@mindspring.com> wrote:

> > I've never seen or heard of an iterative version of mergesort.
> > I've thought about attempting it a couple of times
> > but I drew a blank.

> http://penguin.ewu.edu/~trolfe/NaturalMerge/index.html
> http://ds4beginners.wordpress.com/2006/09/22/merge-sort-iteration/
> http://p2p.wrox.com/TopicIndex/17558.htm

Thank you.

--
pete

On Sat, 17 Mar 2007 10:35:01 -0500
Logan Shaw <lshaw-usenet@austin.rr.com> wrote:

> Steve O'Hara-Smith wrote:
> > On 17 Mar 2007 00:17:47 -0700
> > "user923005" <dcorbit@connx.com> wrote:
> >
> >> Now that we have 64 bit CPUs going mainstream, and soon to follow 64
> >> bit operating systems,
> >
> > 	Huh there have been 64 bit operating systems for many years now,
> > even for commodity (PC) hardware.
>
>
> 	Now that we have 64 bit CPUs going mainstream, and soon to

I thought 64 bit CPUs had been mainstream since the AMD64 started
mass marketing. I've been seeing them in supermarkets for some time now.

--
C:>WIN                                      |   Directable Mirror Arrays
The computer obeys and wins.                | A better way to focus the sun
You lose and Bill collects.                 |    licences available see
|    http://www.sohara.org/

On Mar 15, 12:11 am, "Steve" <tinker...@gmail.com> wrote:
> [...]
> Why is recursion useful,  why do you need it instead of using an
> ordinary loop inside of a function?

The answer is strongly related to the answer to "why is induction
useful?". A Google on <induction recursion> will give examples.

Jon C.


Steve O'Hara-Smith wrote:
> On Sat, 17 Mar 2007 10:35:01 -0500
> Logan Shaw <lshaw-usenet@austin.rr.com> wrote:
>
>> Steve O'Hara-Smith wrote:
>>> On 17 Mar 2007 00:17:47 -0700
>>> "user923005" <dcorbit@connx.com> wrote:
>>>
>>>> Now that we have 64 bit CPUs going mainstream, and soon to follow 64
>>>> bit operating systems,
>>> 	Huh there have been 64 bit operating systems for many years now,
>>> even for commodity (PC) hardware.
>>
>> 	Now that we have 64 bit CPUs going mainstream, and soon to
>
> 	I thought 64 bit CPUs had been mainstream since the AMD64 started
> mass marketing. I've been seeing them in supermarkets for some time now.

Well, I guess it depends on what you mean by mainstream.  If you mean
"you can buy them in any regular store", then 64-bit CPUs have been
mainstream for a few years now.  But if you mean "most people have a 64-bit
CPU in their machine, and you can assume the process is 64-bit when you
write your software", they are only just starting to be mainstream.

- Logan

On Mar 17, 2:17 am, "user923005" <dcor...@connx.com> wrote:
> On Mar 16, 11:31 pm, "robertwess...@yahoo.com"
> > Treating disk like a random access device for sorting purposes is a
> > classic beginner mistake which leads to horrible sort performance.
> > The number of subfiles you can usefully merge at once is limited by
> > the amount of memory you have available to do large sequential reads
> > (large enough to minimize the seek penalty) of each subfile.
> > Obviously reducing the sort order increases the number of merge
> > passes.
>
> > Somewhat more concretely, doing a 32-way merge instead of 1024-way
> > merge allows you 32 times bigger input buffers.  If those 32 fold
> > bigger input buffers allow you to double or triple the subfile read
> > throughput, you'll sort faster by doing two 32-way merges rather than
> > a single 1024 merge.  If, for example, you move from 4K buffers to
> > 128K buffers (4-8MB total input buffer space), you'll probably get a
> > 15-20x improvement in throughput.  OTOH, moving from 1MB to 32MB
> > buffers (1-2GB total) will have only a modest impact.
>
> 1024 buffers of 1MB each is 1 GB.  That's about $100.
> http://www.dealtime.com/xPO-Kingston-1GB-PC2-4300U-C4-N-ECC-2-BANK-1G...
>
> I have 2GB on this machine, and could easily perform a sort with 1024
> megabyte sized buffers.

The numbers I provided were intended to illustrate the effect of
excessively small buffers.  Trivially, if you have enough memory to
have a large number of large buffers it would be silly to reduce your
merge order.   That's been understood since the early sixties.  You're
also assuming that just because a machine has multiple GB of memory
available you'll be able to dedicate that much to your sort process.

Or just pick a bigger dataset to sort.  Back in the sixties, a one GB
dataset was a really, really big external sort (which would have
required something on the order of 50 2314 disk drives just for sort
work space), now I'd often expect that to be an internal sort.  Things
change, datasets get bigger.  Saying that all merges should be order N
for an N-run sort is as silly now as it would have been in 1965.
Obviously increasing main memory sizes allow for bigger runs and
higher merge orders, but it certainly doesn't eliminate the issue
entirely.


pete wrote:
>
> user923005 wrote:
> >
> > On Mar 15, 6:24 pm, pete <pfil...@mindspring.com> wrote:
>
> > > I've never seen or heard of an iterative version of mergesort.
> > > I've thought about attempting it a couple of times
> > > but I drew a blank.

> Thank you.

I translated imerge.c to an e_type interface.
It's about 2.5 times slower on arrays of integers
than the recursive mergesort that I already have,
but I'm not done rewriting it yet.
I think it might have potential.

void imsort(e_type *array, size_t nmemb)
{
    size_t i, j, k, n_1, k2;
    e_type *b;

    if (nmemb == 0) {
        return;
    }
    b = malloc(nmemb * sizeof *b);
    if (b == NULL) {
        exit(EXIT_FAILURE);
    }
    n_1 = nmemb - 1;
    for (k = 1; n_1 > k; k = k2) {
        k2 = k * 2;
        for (i = 0; nmemb > i + k; i += k2) {
            j = i + k2;
            if (j > nmemb) {
                j = nmemb;
            }
            i_merge(array, b, i, i + k, j);
        }
    }
    free(b);
}

static void
i_merge(e_type *a, e_type *b, size_t L, size_t r, size_t u)
{
    size_t i, j, k;

    k = i = L;
    j = r;
    while (r > i && u > j) {
        b[k++] = a[(*(GT(a + i, a + j) ? &j : &i))++];
    }
    while (r > i) {
        b[k++] = a[i++];
    }
    while (u > j) {
        b[k++] = a[j++];
    }
    for (k = L; u > k; ++k) {
        a[k] = b[k];
    }
}

--
pete

pete wrote:
>
> pete wrote:
> >
> > user923005 wrote:
> > >
> > > On Mar 15, 6:24 pm, pete <pfil...@mindspring.com> wrote:
> >
> > > > I've never seen or heard of an iterative version of mergesort.
> > > > I've thought about attempting it a couple of times
> > > > but I drew a blank.
>
>
> > Thank you.
>
> I translated imerge.c to an e_type interface.
> It's about 2.5 times as slow on arrays of integers
> as the recursive mergesort that I already have ,
> but I'm not done rewriting it yet.
> I think it might have potential.
>
> void imsort(e_type *array, size_t nmemb)
> {
>     size_t i, j, k, n_1, k2;
>     e_type *b;
>
>     if (nmemb == 0) {
>         return;
>     }
>     b = malloc(nmemb * sizeof *b);
>     if (b == NULL) {
>         exit(EXIT_FAILURE);
>     }
>     n_1 = nmemb - 1;
>     for (k = 1; n_1 > k; k = k2) {
>         k2 = k * 2;
>         for (i = 0; nmemb > i + k; i += k2) {
>             j = i + k2;
>             if (j > nmemb) {
>                 j = nmemb;
>             }
>             i_merge(array, b, i, i + k, j);
>         }
>     }
>     free(b);
> }

The n_1 variable should be removed from the function, and

    for (k = 1; n_1 > k; k = k2) {

should be

    for (k = 1; nmemb > k; k = k2) {

--
pete

But humans are not supposed to understand
the code!

A few good programmers code about
a megabyte of kernel, and then

no human ever uses it at low level;

they become computer operators,
they never code ....

They will never need to know recursion,
but a picture of it in their head would be useful,
so they can use the s/w ..

Why make EVERYONE a programmer?

Why not make people users?

If you use C++, you will force the user.

That's not productive.

Let's program once, then make everyone
into users .....

This requires putting down bloaters.

Every time you see/hear a person
who wants to program, old fashioned,

SHOUT THEM DOWN!
They are taking your taxes!
They are violating the public trust.

They only want to belabor programming
for job security ..
C++ is the most time-consuming method
to create low-level code.

Low-level code must be created ONCE.
