Functional programming in mainstream languages

I'm curious to what extent a solid understanding of functional
programming would help me if I still had to do most real work in
mainstream imperative languages.

Mainstream languages, almost by definition, are more likely to be used
in mainstream commercial projects. So if I'm involved in such a
project (and I do get involved in lots of them), what benefits would
functional programming expertise give me, and how could I achieve
those benefits?

I'm working through SICP with Scheme. I've discovered that I'm going
through interesting stages: first discovering how to use Scheme to
solve certain example problems (using recursion, for ex.), then
finding that approach generalizing into my brain as just a way to
*see* the problem independent of Scheme (which, of course, is the
point of SICP), then discovering that I can transcribe that way of
seeing the problem into mainstream languages that I've been using for
years, even if those languages make it a little harder.

Even though I've used recursion occasionally for years, for example, I
never really had a "sense" of it until I used it for a while in a
language really designed for it. Now, I'm wondering how much else I
could learn from Scheme, Haskell, or Ocaml, what additional "senses" I
could develop, how much of it is applicable even in mainstream
languages, and how much of it simply isn't of practical use unless you
actually use an FP (or FP-oriented) language.

And when using mainstream languages, are there ways to get more out of
them -- such as special libraries with FP-style datatypes and
functions on them -- or are such things simply not realistic?

Thanks.
tuanglen
2/11/2004 9:25:21 PM
comp.lang.functional

On 11 Feb 2004 13:25:21 -0800
tuanglen@hotmail.com (Tuang) wrote:

> I'm curious to what extent a solid understanding of functional
> programming would help me if I still had to do most real work in
> mainstream imperative languages.
> 
> Mainstream languages, almost by definition, are more likely to be used
> in mainstream commercial projects. So if I'm involved in such a
> project (and I do get involved in lots of them), what benefits would
> functional programming expertise give me, and how could I achieve
> those benefits?
> 
> I'm working through SICP with Scheme. I've discovered that I'm going
> through interesting stages: first discovering how to use Scheme to
> solve certain example problems (using recursion, for ex.), then
> finding that approach generalizing into my brain as just a way to
> *see* the problem independent of Scheme (which, of course, is the
> point of SICP), then discovering that I can transcribe that way of
> seeing the problem into mainstream languages that I've been using for
> years, even if those languages make it a little harder.
> 
> Even though I've used recursion occasionally for years, for example, I
> never really had a "sense" of it until I used it for a while in a
> language really designed for it. Now, I'm wondering how much else I
> could learn from Scheme, Haskell, or Ocaml, what additional "senses" I
> could develop, how much of it is applicable even in mainstream
> languages, and how much of it simply isn't of practical use unless you
> actually use an FP (or FP-oriented) language.

Here's a page that describes a bunch of "patterns" inspired by
functional programming.  This should give you an idea of the kinds of
techniques more prevalent in functional programming and how they can
benefit even imperative programs:
http://www.c2.com/cgi/wiki?FunctionalPatternSystemForObjectOrientedDesign

In general, functional programming encourages a more bottom-up style of
programming which can lead to entirely different ways of approaching
problems.

> And when using mainstream languages, are there ways to get more out of
> them -- such as special libraries with FP-style datatypes and
> functions on them -- or are such things simply not realistic?

There are libraries that attempt to minimize the pain of doing some
functional techniques in languages that don't really have the right
support, e.g. FC++.
ddarius (161)
2/11/2004 9:44:31 PM
tuanglen@hotmail.com (Tuang) writes:

> I'm curious to what extent a solid understanding of functional
> programming would help me if I still had to do most real work in
> mainstream imperative languages.

[snip]

> I'm working through SICP with Scheme. I've discovered that I'm going
> through interesting stages: first discovering how to use Scheme to
> solve certain example problems (using recursion, for ex.), then
> finding that approach generalizing into my brain as just a way to
> *see* the problem independent of Scheme (which, of course, is the
> point of SICP), then discovering that I can transcribe that way of
> seeing the problem into mainstream languages that I've been using for
> years, even if those languages make it a little harder.

I think you just answered your own question!


-- 
~jrm
2/12/2004 12:59:09 AM
tuanglen@hotmail.com (Tuang) wrote in message news:<df045d93.0402111325.73b437c3@posting.google.com>...

> I'm working through SICP with Scheme. I've discovered that I'm going
> through interesting stages: first discovering how to use Scheme to
> solve certain example problems (using recursion, for ex.), then
> finding that approach generalizing into my brain as just a way to
> *see* the problem independent of Scheme (which, of course, is the
> point of SICP), then discovering that I can transcribe that way of
> seeing the problem into mainstream languages that I've been using for
> years, even if those languages make it a little harder.

Same for me. Just an anecdotal example: I recently wrote a state 
machine in Scheme which (quite naturally) came out full of closures.
Then I translated it into Python and discovered (to my surprise) that I
could keep the functional structure essentially unchanged. Nevertheless,
I would never have used that style if I had coded in Python from the 
beginning (I would have used classes instead). So, certainly the 
functional paradigm is independent of the language. Still, I do think 
it is bad practice to code Python with a Scheme mindset, just as it is
bad practice to code Scheme with a Python mindset. I personally make a
personality split and code in different styles and paradigms according 
to the target language, unless for some reason I need to translate literally
from one to the other (as in that case). OTOH, it is by no means guaranteed 
that a paradigm which is superior in language X is also superior in language 
Y, so my advice is to use the paradigm the language supports best.
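To give a flavor of what I mean, here is a minimal Python sketch in that closure style (illustrative only, with made-up state names; not my original code):

```python
# A tiny traffic-light state machine built from closures rather than a
# class. Each state is a function that consumes an event and returns
# the next state function.
def make_machine(initial):
    state = [initial]            # boxed so the inner closures can rebind it
    def step(event):
        state[0] = state[0](event)
    def current():
        return state[0].__name__
    return step, current

def green(event):
    return yellow if event == "timer" else green

def yellow(event):
    return red if event == "timer" else yellow

def red(event):
    return green if event == "timer" else red

step, current = make_machine(green)
step("timer")                    # green -> yellow
step("timer")                    # yellow -> red
```

This is exactly the kind of thing I would normally write with a class in Python, yet the closure version translates over almost unchanged.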

      Michele Simionato

P.S. about my other postings on Python-like "for" loops in Scheme: I want
to make clear that I want to implement them just for educational purposes, 
NOT to use them in real code. Actually, my Scheme code is full of named lets, 
my favorite looping construct, and I will continue to use them over the 
Python-like "for" loops, if for no other reason than that I want my
code to be readable to others besides myself. I am not interested in 
making Scheme similar to Python or coding Python in Scheme, otherwise
the purpose of learning a new language would be completely lost. Still,
it is quite fun to see how much you can stretch a language and I enjoy
"stretchable" languages ;-)
2/12/2004 7:40:25 AM
I've been interviewing candidates for two Senior Developer positions
over the past few weeks, testing their knowledge of SQL, a mainstream
language.  I go through a series of four problems.  Hardly anyone gets
the fourth one, but I watch how they attack the problem to gauge their
problem-solving abilities.  The best solution to the problem involves
a construct that is an expression in SQL, but a statement in most
imperative languages.  People who think of it as a statement won't
likely arrive at the solution that puts it inside an expression.
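A small illustration of that expression/statement split (in Python, and not the actual interview problem): the conditional is an expression in SQL (CASE) but a statement in most imperative languages, and only the expression form composes inside other expressions.

```python
# Conditional as a statement: needs a temporary and several lines.
def sign_stmt(x):
    if x < 0:
        s = -1
    elif x > 0:
        s = 1
    else:
        s = 0
    return s

# Conditional as an expression: usable directly inside a larger
# expression, the way SQL's CASE composes inside a SELECT.
def sign_expr(x):
    return -1 if x < 0 else (1 if x > 0 else 0)

totals = sum(sign_expr(x) for x in [-5, 3, 0, 7])
```

Someone who only ever thinks of the conditional as a statement will not reach for the second form.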


-- 
(for-each (lambda (str)
            (display (string-append
                      (make-string (- 40 (quotient (string-length str) 2))
                                   #\space)
                      str))
            (newline))
          '("" "Bruce Lewis" "MIT 1990" "http://brl.codesimply.net/"))
brlspam (219)
2/12/2004 2:01:08 PM
Michele Simionato wrote:


 > P.S. about my other postings on Python-like "for" loops in Scheme: I want
 > to make clear that I want to implement them just for educational purpose,
 > NOT to use them in real code. Actually, my Scheme code is full of named let,
 > my favorite looping construct and I will continue to use it over the
 > Python-like "for" loops, if not for other reasons, because I want my
 > code to be readable to others than myself. I am not interested in
 > making Scheme similar to Python or coding Python in Scheme, otherwise
 > the purpose of learning a new language would be completely lost. Still,
 > it is quite fun to see how much you can stretch a language and I enjoy
 > "stretchable" languages ;-)

You may want to start thinking about functional loops:

(module range mzscheme

   (require (lib "etc.ss") (lib "contract.ss"))

   (provide/contract
    (range
     (case-> (natural-number? . -> . (listof natural-number?))
             (natural-number? natural-number?  . -> . (listof natural-number?))
             (natural-number? natural-number? natural-number? . -> .
                              (listof natural-number?))))

    #| ----------------------------------------------------------------------------
    This is Python's range function, which separates Algol's notion of stepping
    in a for-loop from the actual looping construct.

    (range n)     = (list 0 ... n-1)
    (range n m)   = (list n ... m-1),
                    if n < m
    (range n m k) = (list n n+k n+2k ... n+(l-1)k)
                    if n < m, k > 0, where l = ceiling((m-n)/k)
    |#
    )

   (define range
     (case-lambda
       [(n) (build-list n identity)]
       [(n m)
        (if (< n m)
            (build-list (- m n) (lambda (i) (+ n i)))
            (error 'range (format range1 n m)))]
       [(n m k)
        (if (and (< n m) (> k 0))
            (build-list (ceiling (/ (- m n) k)) (lambda (i) (+ n (* i k))))
            (error 'range (format range2 n m k)))]))

   (define range1 "(range n m) for n, m in N, and n < m expected; given ~e ~e")
   (define range2
     "(range n m k) with n, m, k in N, and n < m, k > 0 expected; given: ~e ~e ~e")

   )

-- Matthias


2/12/2004 2:24:45 PM
michele.simionato@poste.it (Michele Simionato) writes:

> Still, I do think it is bad practice to code Python with a Scheme
> mindset, just as it is bad practice to code Scheme with a Python
> mindset.

I don't.

A friend of mine once remarked that even my C code looked like
Scheme.  He meant it as a compliment and I took it as one.
jrm (1310)
2/12/2004 3:36:06 PM
Joe Marshall <prunesquallor@comcast.net> wrote in message news:<7jyt6rl2.fsf@comcast.net>...
> tuanglen@hotmail.com (Tuang) writes:
> 
> > I'm curious to what extent a solid understanding of functional
> > programming would help me if I still had to do most real work in
> > mainstream imperative languages.
> 
> [snip]
> 
> > I'm working through SICP with Scheme. I've discovered that I'm going
> > through interesting stages: first discovering how to use Scheme to
> > solve certain example problems (using recursion, for ex.), then
> > finding that approach generalizing into my brain as just a way to
> > *see* the problem independent of Scheme (which, of course, is the
> > point of SICP), then discovering that I can transcribe that way of
> > seeing the problem into mainstream languages that I've been using for
> > years, even if those languages make it a little harder.
> 
> I think you just answered your own question!

Well, not really. I didn't ask whether it was ever possible to use FP
concepts to advantage in non-FP languages. The fact that I knew the
answer to that one led me to want to know more.

Specifically, when working in *non-FP languages*:

-- which FP concepts/techniques/advantages are still
available/usable/practical in real production work?
-- for each, to what extent? (on rare occasions, almost always, etc.)
-- which ones might I be tempted to use but should probably avoid (and
why)?
-- any other advice/tips on when and when not to use them?
-- any advice for *how* best to do so (good libraries, a style of
modularization, etc.)?
-- any other practical advice for getting the most benefit out of FP
study if most real production work will still be in mainstream
languages for the foreseeable future? (particular books or online docs
you recommend, language recommendations for this purpose, study
techniques such as "I always prototype X-type apps in FP language Y
before implementing in non-FP lang Z", things to be wary of, etc.)

Any advice or suggestions regarding any of the above items would be
appreciated.

Thanks.
tuanglen (55)
2/12/2004 8:20:27 PM
(With the caveat that this is just one person's perspective and
 undoubtedly an incomplete answer...)

tuanglen@hotmail.com (Tuang) once said:
>Specifically, when working in *non-FP languages*:
>
>-- which FP concepts/techniques/advantages are still
>available/usable/practical in real production work?
>-- for each, to what extent? (on rare occasions, almost always, etc.)

I would cite
 - higher order functions
 - lazy evaluation
 - parametric polymorphism
 - monads

I think HOFs are the most useful (and are often applicable).  If you
understand them well and have witnessed some cool combinator libraries,
you'll be provided with a new perspective on how to do design in
certain domains.
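For instance, a tiny combinator sketch (in Python, with made-up predicates): build a validator by composing small functions rather than writing one monolithic check.

```python
# Higher-order functions as combinators: each combinator takes
# predicates and returns a new predicate.
def both(p, q):
    return lambda x: p(x) and q(x)

def negate(p):
    return lambda x: not p(x)

is_positive = lambda n: n > 0
is_even     = lambda n: n % 2 == 0

valid = both(is_positive, negate(is_even))   # positive and odd
picked = [n for n in range(-3, 8) if valid(n)]
```

Once you start seeing designs this way, a lot of special-case code collapses into a handful of small reusable pieces.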

Lazy evaluation I've only found to be useful/essential on rare
occasions, but it's a useful tool to have in your toolbox for when
certain problems arise.
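A sketch of what I mean, using Python generators as a stand-in for lazy evaluation: the producer describes an infinite structure, and only as much of it is computed as the consumer demands.

```python
from itertools import count, islice

# An "infinite" stream of primes, evaluated on demand.
def primes():
    seen = []
    for n in count(2):
        if all(n % p for p in seen):   # not divisible by any earlier prime
            seen.append(n)
            yield n

first_five = list(islice(primes(), 5))   # only five primes ever computed
```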

I'm not sure if "parametric polymorphism" counts as an FP concept (it's
becoming common in mainstream languages), but I think the FP community
has the most advanced experience with it and how to use it well.

I dunno if "monads" should actually be on the list (yet), but they've
shown me an interesting way to structure programs that modularizes
various concerns in a way akin to aspect-oriented programming (which
itself seems to be on a "mainstream/popularity" rise).


>-- which ones might I be tempted to use but should probably avoid (and
>why)?
>-- any other advice/tips on when and when not to use them?
>-- any advice for *how* best to do so (good libraries, a style of
>modularization, etc.)?

I would tie these all together and say that "using FP techniques in
mainstream languages" only works well when you have the necessary
language/library support to do it.  What constitutes "sufficient
support" is probably open to opinion: there's a kind of continuuum with
points like "impossible", "possible but difficult", "straightforward
but klunky" and "easy/effortless".


>-- any other practical advice for getting the most benefit out of FP
>study if most real production work will still be in mainstream
>languages for the foreseeable future? (particular books or online docs
>you recommend, language recommendations for this purpose, study

The paper "FC++: Functional Tools for OO Tasks" (an old version is
available at the FC++ web site: http://www.cc.gatech.edu/~yannis/fc++/)
describes applying FP techniques (mainly HOFs and parametric
polymorphism) to the implementation of object-oriented design
patterns.  The concrete examples there give a sense of some ways to
reap benefits from FP techniques.

-- 
 Brian M. McNamara   lorgon@acm.org  :  I am a parsing fool!
   ** Reduce - Reuse - Recycle **    :  (Where's my medication? ;) )
gt5163b (53)
2/12/2004 9:57:45 PM
In article <df045d93.0402121220.4abc743c@posting.google.com>, Tuang wrote:
> Joe Marshall <prunesquallor@comcast.net> wrote in message news:<7jyt6rl2.fsf@comcast.net>...
>> tuanglen@hotmail.com (Tuang) writes:
>> >
>> > I'm working through SICP with Scheme. I've discovered that I'm going
>> > through interesting stages: first discovering how to use Scheme to
>> > solve certain example problems (using recursion, for ex.), then
>> > finding that approach generalizing into my brain as just a way to
>> > *see* the problem independent of Scheme (which, of course, is the
>> > point of SICP), then discovering that I can transcribe that way of
>> > seeing the problem into mainstream languages that I've been using for
>> > years, even if those languages make it a little harder.
>> 
>> I think you just answered your own question!
> 
> Well, not really. I didn't ask whether it was ever possible to use FP
> concepts to advantage in non-FP languages. The fact that I knew the
> answer to that one led me to want to know more.
> 
> Specifically, when working in *non-FP languages*:
> 
> -- which FP concepts/techniques/advantages are still
> available/usable/practical in real production work?
> -- for each, to what extent? (on rare occasions, almost always, etc.)
> -- which ones might I be tempted to use but should probably avoid (and
> why)?
> -- any other advice/tips on when and when not to use them?
> -- any advice for *how* best to do so (good libraries, a style of
> modularization, etc.)?


> -- any other practical advice for getting the most benefit out of FP
> study if most real production work will still be in mainstream
> languages for the foreseeable future? (particular books or online docs
> you recommend, language recommendations for this purpose, study
> techniques such as "I always prototype X-type apps in FP language Y
> before implementing in non-FP lang Z", things to be wary of, etc.)
> 
> Any advice or suggestions regarding any of the above items would be
> appreciated.


-- 
Neel Krishnaswami
neelk@cs.cmu.edu
neelk (298)
2/12/2004 10:04:44 PM
Joe Marshall wrote:
> michele.simionato@poste.it (Michele Simionato) writes:
> 
> 
>>Still, I do think it is bad practice to code Python with a Scheme
>>mindset, just as it is bad practice to code Scheme with a Python
>>mindset.
> 
> 
> I don't.
> 
> A friend of mine once remarked that even my C code looked like
> Scheme.  He meant it as a compliment and I took it as one.

Python is another beast. It offers some (pretty limited, but...) support 
for FP constructs, but with *really* bad performance. As Python is 
already a bit slow, slowing it down some more is not a good idea.

My 2 cents

2/12/2004 10:49:47 PM
tuanglen@hotmail.com (Tuang) writes:

> Joe Marshall <prunesquallor@comcast.net> wrote in message news:<7jyt6rl2.fsf@comcast.net>...
>> 
>> I think you just answered your own question!
>
> Well, not really. I didn't ask whether it was ever possible to use FP
> concepts to advantage in non-FP languages. The fact that I knew the
> answer to that one led me to want to know more.
>
> Specifically, when working in *non-FP languages*:
>
> -- which FP concepts/techniques/advantages are still
> available/usable/practical in real production work?
> -- for each, to what extent? (on rare occasions, almost always, etc.)
> -- which ones might I be tempted to use but should probably avoid (and
> why)?
> -- any other advice/tips on when and when not to use them?
> -- any advice for *how* best to do so (good libraries, a style of
> modularization, etc.)?

I have found that most mainstream languages force you to think real
hard about the `machinery' that underlies the implementation.  This
disadvantage of the language can be turned to your advantage:  if an FP
technique (such as lazy evaluation) is impractical in the language, it
will be *so* painful to use that it will be glaringly obvious.  If the
technique is useful, however, it mostly just `looks funny' to people
who use the language `normally'.

This is just my experience, but it seems to hold.

-------------------

Here are a couple of very specific things: 

   1.  You don't need to name *every* intermediate expression:

       redirect_loop (new Set ().adjoin (new URL (server_uri)));


   2.  Use the `? :' conditional expression when you are using an
       infix language.  If you indent it right, it almost looks like a
       Scheme or Lisp IF:

       public int cardinality () {
         return (my_element == null)
            ? 0
            : my_more.cardinality() + 1;
       }


       You can make it look like a COND, too:

       public Set remove (Object element) {
         return 
               my_element == null          ? this
             : my_element.equals (element) ? my_more.remove (element)
             : new Set (my_element, my_more.remove (element));
       }


    3.  You call them exceptions, I call them continuations.  You can
        use the exception handling mechanism to mimic
        continuation-passing-style.  (Note that this is often quite
        slow because language implementors generally don't think users
        will be this perverse.)

        try {
             // serveResponse always throws one of four values
             serveResponse (false);
        }
        catch (RedirectNoService redirect) {
            // This server can't handle this request.
            // Place it in the `bad servers' set.
            bad_servers = bad_servers.adjoin (server_to_try);
            servers_to_try =
                servers_to_try.union
                (redirect.alternateServers().difference
                 (lazy_servers.union (busy_servers.union (dead_servers.union (bad_servers)))));
        }

        catch (RedirectReadOnly redirect) {
            // Server is read-write, but command is read-only.
            // Place server in the `lazy' list.
            lazy_servers = lazy_servers.adjoin (server_to_try);
            // More alternates
            servers_to_try =
                servers_to_try.union
                (redirect.alternateServers().difference
                 (lazy_servers.union (busy_servers.union (dead_servers.union (bad_servers)))));
        }

        catch (RedirectBusy redirect) {
            // This server is too busy, maybe try others.
            // Place it in the `busy' list.
            busy_servers = busy_servers.adjoin (server_to_try);
            servers_to_try =
                // add the new alternates to the servers_to_try
                servers_to_try.union
                (redirect.alternateServers().difference
                 (lazy_servers.union (busy_servers.union (dead_servers.union (bad_servers)))));
        }

        catch (RedirectContinue redirect) {
            busy_servers = busy_servers.adjoin (server_to_try);

            priority_flag = false; // new command starts at low priority
            command = redirect.newCommand();
            servers_to_try =
                servers_to_try.union
                (lazy_servers.union
                 (busy_servers.union
                  (bad_servers.union (redirect.alternateServers()))))
                .difference (dead_servers);

            busy_servers = new Set ();
            lazy_servers = new Set ();
            bad_servers = new Set ();
        }            


      -------

        public void serveCliResponse (boolean debugTrace) 
            throws RedirectNoService, RedirectBusy, RedirectContinue, RedirectReadOnly
        {
            String redirect_type = st.nextToken ();
            Set alternates = new Set ();
            if (redirect_type.equals (":NO-SERVICE"))
                throw new RedirectNoService (alternates);

            if (redirect_type.equals (":BUSY"))
                throw new RedirectBusy (alternates);

            if (redirect_type.equals (":READ-ONLY"))
                throw new RedirectReadOnly (alternates);

            if (redirect_type.equals (":CONTINUE"))
                throw new RedirectContinue (alternates, new_command);
         }            


-- 
~jrm
2/13/2004 12:12:00 AM
"Tuang" wrote:

: Specifically, when working in *non-FP languages*:
:
: -- which FP concepts/techniques/advantages are still
: available/usable/practical in real production work?
: -- for each, to what extent? (on rare occasions, almost always, etc.)
: -- which ones might I be tempted to use but should probably avoid (and
: why)?
: -- any other advice/tips on when and when not to use them?
: -- any advice for *how* best to do so (good libraries, a style of
: modularization, etc.)?
: -- any other practical advice for getting the most benefit out of FP
: study if most real production work will still be in mainstream
: languages for the foreseeable future? (particular books or online docs
: you recommend, language recommendations for this purpose, study
: techniques such as "I always prototype X-type apps in FP language Y
: before implementing in non-FP lang Z", things to be wary of, etc.)
:
: Any advice or suggestions regarding any of the above items would be
: appreciated.

Certain "rules of thumb" have been circulating in the C++ community for
several years which mirror some functional approaches much more than they
mirror some common object techniques.  In particular, in algorithmics, it is
often much better for reuse not to associate functions of general form with
a class, and it is argued quite convincingly that this actually increases
encapsulation.  In other words, instead of uber class hierarchies with
classes containing many members, a better rule of thumb is to use shallow
hierarchies and detach any functions that can be defined in terms of the
public interfaces (cf. Scott Meyers' "How Non-Member Functions Improve
Encapsulation" in the C/C++ Users Journal).  Interfaces should be minimal.

This gives the immediate benefit that generic algorithms can be written in
O(M + N) "pieces", as opposed to O(MN) (for M functions on N types
overlapping in this functionality), but it also allows more functional
concepts of reuse, such as functional composition and adaptation.
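The same rule of thumb can be sketched outside C++ as well; a rough Python rendering (hypothetical names): keep the class interface minimal, and let general algorithms be free functions over that interface.

```python
# A class exposing only a minimal public interface (here: iteration).
class Stack:
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()
    def __iter__(self):
        return iter(self._items)

# Free functions defined against the interface, not the class: they
# work on Stack, lists, generators -- M algorithms for N iterable
# types costs M + N pieces, not M * N member functions.
def total(xs):
    return sum(xs)

def largest(xs):
    return max(xs)

s = Stack()
for n in (3, 1, 4):
    s.push(n)
```

Nothing about `total` or `largest` needs to live inside `Stack`, and detaching them is what makes them reusable.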

Additionally, the more imperative state one must manage, the more difficult
it is to provide robust resource management.  Although there are idioms such
as RAII, striving for declarative semantics just makes the task much easier.

-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

galathaea: prankster, fablist, magician, liar


galathaea1 (62)
2/13/2004 1:07:34 AM
In article <402bfdd3$0$28955$626a14ce@news.free.fr>, Bruno Desthuilliers wrote:
> Python is another beast. It offers some (pretty limited but...) support 
> for fp constructs, but with *really* bad performances. As Python is 
> already a bit slow, slowing it down some more is not a good idea.

If you're on the IA32 architecture, the psyco JIT compiler speeds
Python right up again.  To the point where you don't have to write
'optimised' Python, but can express yourself more naturally.  Check it
out: http://psyco.sourceforge.net

Stefan,
-- 
Stefan Axelsson                  (email at http://www.cs.chalmers.se/~sax)
sax1 (19)
2/13/2004 11:02:42 AM
Matthias Felleisen <matthias@ccs.neu.edu> wrote in message news:<c0g2hu$i8r$1@camelot.ccs.neu.edu>...
> You may want to start thinking about functional loops:
>
>    (define range
>      (case-lambda
>        [(n) (build-list n identity)]
>        [(n m)
>         (if (< n m)
>             (build-list (- m n) (lambda (i) (+ n i)))
>             (error 'range (format range1 n m)))]
>        [(n m k)
>         (if (and (< n m) (> k 0))
>             (build-list (ceiling (/ (- m n) k)) (lambda (i) (+ n (* i k))))
>             (error 'range (format range2 n m k)))]))

I fail to understand what you mean by functional loops and how they
relate to the range function. Do you mean Python loops are functional
loops? Or do you mean for-each is a functional loop since it is a function
and not a macro? I'm just confused about the terminology.

    Michele Simionato
2/13/2004 12:02:36 PM
Bruno Desthuilliers <bdesth.quelquechose@free.quelquepart.fr> wrote in message news:<402bfdd3$0$28955$626a14ce@news.free.fr>...
> Python is another beast. It offers some (pretty limited but...) support 
> for fp constructs, but with *really* bad performances. As Python is 
> already a bit slow, slowing it down some more is not a good idea.

My point was not about performance: even if the performance of functional
programming in Python were excellent, I still would not use functional
techniques too much, since there are alternatives (i.e. OO programming)
which are better supported by the language and have advantages in terms
of readability (i.e. write in the paradigm people are used to, if you
want your code to be read by others), self-documentation (i.e. Python's 
introspection features are very strong for classes and methods, very
poor for functions inside functions) and extensibility (i.e. it is 
easier to add methods to a class than to add inner functions to a
nested function).

    Michele Simionato
2/13/2004 12:12:47 PM
Michele Simionato wrote:

> Matthias Felleisen <matthias@ccs.neu.edu> wrote in message news:<c0g2hu$i8r$1@camelot.ccs.neu.edu>...
> 
>>You may want to start thinking about functional loops:
>>
>>   (define range
>>     (case-lambda
>>       [(n) (build-list n identity)]
>>       [(n m)
>>        (if (< n m)
>>            (build-list (- m n) (lambda (i) (+ n i)))
>>            (error 'range (format range1 n m)))]
>>       [(n m k)
>>        (if (and (< n m) (> k 0))
>>            (build-list (ceiling (/ (- m n) k)) (lambda (i) (+ n (* i k))))
>>            (error 'range (format range2 n m k)))]))
> 
> 
> I fail to understand what do you mean by functional loops and how they
> are related to the range function. Do you mean Python loops are functional
> loops? Or do you mean for-each is a functional loop since it is a function
> and not a macro? Just confused about the terminology.
> 
>     Michele Simionato

In FP, you want to compose functions to implement your specs. So, a loop in 
functional programming is something that iterates an action over a recursive 
data structure (including N, the natural numbers). Try to program like that 
instead of using Python's imperative loops and you will have a small revelation.

Here is a teaser:

;; add up numbers in a vector

(define A (vector 0 1 2 3 4 5 6 7 8 9))

;; FP:
(apply + (vector->list A))

;; Python or your favorite imperative (mindset) language:
(let ([sum 0])
  (for-each (lambda (i)
              (set! sum (+ sum (vector-ref A i))))
            (range (vector-length A)))
  sum)
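The same contrast rendered in actual Python (my rough translation of the teaser above):

```python
A = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

# FP mindset: describe the result as a function of the whole structure.
total_fp = sum(A)

# Imperative mindset: mutate an accumulator step by step.
total_imp = 0
for i in range(len(A)):
    total_imp += A[i]
```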

-- Matthias

2/13/2004 2:14:15 PM
Joe Marshall wrote:

> Here are a couple of very specific things: 
> 
>    1.  You don't need to name *every* intermediate expression:
> 
>        redirect_loop (new Set ().adjoin (new URL (server_uri)));

But you should - at least in most languages. The "new" operators may 
fail, and you have to do error handling.
Well, at least that's what happens in C++ (since it doesn't have garbage 
collection, and you're supposed to clean up in case of failure).

Of course, this problem is just waiting for a Maybe monad.
Point being, you don't have Maybe in most non-functional languages. (I'm 
not sure that it can be sensibly expressed in C++ anyway... it may be 
difficult to cope with all these failure modes, while something wrapped 
in Maybe may not have any side effects except returning Nothing).
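A rough approximation of the Maybe chaining in Python, just to illustrate the idea (not real library code, and far weaker than the typed version): represent failure as None, and thread it through so each step runs only if the previous one succeeded.

```python
# A poor man's Maybe: None means Nothing, anything else means Just.
def bind(value, fn):
    return None if value is None else fn(value)

def parse_int(s):
    try:
        return int(s)
    except ValueError:
        return None            # failure, without raising

def half_if_even(n):
    return n // 2 if n % 2 == 0 else None

result = bind(bind("42", parse_int), half_if_even)
bad    = bind(bind("oops", parse_int), half_if_even)
```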

Of course, things are a *lot* simpler in Java - garbage collection makes 
this style useful. Just watch out for side effects - say, if 
redirect_loop has side effects, it may be a bug if it's called after a 
failure in "new URL".

Regards,
Jo
--
Currently looking for a new job.

2/13/2004 3:14:56 PM
Joachim Durchholz wrote:
> Joe Marshall wrote:
> 
>> Here are a couple of very specific things:
>>    1.  You don't need to name *every* intermediate expression:
>>
>>        redirect_loop (new Set ().adjoin (new URL (server_uri)));
> 
> 
> But you should - at least in most languages. The "new" operators may 
> fail, and you have to do error handling.
When standard new fails, it throws a bad_alloc exception, so there's no 
need to name the allocation expression. The problem is rather that there 
are two "new" expressions here, which could lead to leaks unless handled... 
but what about a simple
   redirect_loop(Set<URL>().adjoin(URL(server_uri)));

Cheers,
Michael
cm1 (31)
2/13/2004 5:34:26 PM
Joachim Durchholz <joachim.durchholz@web.de> writes:
> Joe Marshall wrote:
> 
> > Here are a couple of very specific things:    1.  You don't need to name
> > *every* intermediate expression:
> >        redirect_loop (new Set ().adjoin (new URL (server_uri)));
> 
> But you should - at least in most languages. The "new" operators may fail, and
> you have to do error handling.
> Well, at least that's what happens in C++ (since it doesn't have garbage
> collection, and you're supposed to clean up in case of failure).

And in Java you do have GC, so take advantage of it. It really really allows
for prettier, more succinct, cleaner code.

While I miss some of C++'s expressiveness, I simply can no longer live without
garbage collection. Gone are the days where 90% of the bugs were bad pointer
problems.

> Of course, things are a *lot* simpler in Java - garbage collection makes this
> style useful. Just watch out for side effects - say, if redirect_loop has side
> effects, it may be a bug if it's called after a failure in "new URL".

Exceptions force you to deal with failures, such that redirect_loop won't even
be called:

  try
  {
    redirect_loop (new Set ().adjoin (new URL (server_uri)));
  }
  catch (MalformedURLException badUrl)
  {
    // ... report or rethrow ...
  }

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
rAYblaaK@STRIPCAPStelus.net                    The Rhythm has my soul.
rAYblaaK (362)
2/13/2004 5:39:54 PM
Matthias Felleisen <matthias@ccs.neu.edu> wrote in message news:<c0ima6$iss$1@camelot.ccs.neu.edu>...
> Michele Simionato wrote:
> 
> > Matthias Felleisen <matthias@ccs.neu.edu> wrote in message news:<c0g2hu$i8r$1@camelot.ccs.neu.edu>...
> > 
> >>You may want to start thinking about functional loops:
> >>
> >>   (define range
> >>     (case-lambda
> >>       [(n) (build-list n identity)]
> >>       [(n m)
> >>        (if (< n m)
> >>            (build-list (- m n) (lambda (i) (+ n i)))
> >>            (error 'range (format range1 n m)))]
> >>       [(n m k)
> >>        (if (and (< n m) (> k 0))
> >>            (build-list (ceiling (/ (- m n) k)) (lambda (i) (+ n (* i k))))
> >>            (error 'range (format range2 n m k)))]))
> > 
> > 
> > I fail to understand what you mean by functional loops and how they
> > are related to the range function. Do you mean Python loops are functional
> > loops? Or do you mean for-each is a functional loop since it is a function
> > and not a macro? Just confused about the terminology.
> > 
> >     Michele Simionato
> 
> In FP, you want to compose functions to implement your specs. So, a loop in 
> functional programming is something that iterates an action over a recursive 
> data structure (incl N). Try to program like that instead of using Python's 
> imperative loops and you will have a small revelation.
> 
> Here is a teaser:
> 
> ;; add up numbers in a vector
> 
> (define A (vector 0 1 2 3 4 5 6 7 8 9))
> 
> ;; FP:
> (apply + (vector->list A))
> 
> ;; Python or your favorite imperative (mindset) language:
> (let ([sum 0])
>   (for-each (lambda (i)
>               (set! sum (+ sum (vector-ref A i))))
>             (range (vector-length A)))
>   sum)
> 
> -- Matthias

I see. I should point out that I have already looked at SRFI-1 in the 
last few weeks, and I am already converted to fold, fold-right and 
company. It is quite easy to think functional for me, probably due to
my mathematical background. Also macro programming fits my mind quite
easily, I was already in the right mind set. Continuations, OTOH,
are quite new and they still need some thinking on my part.

P.S. in Python 2.3 you would do sum(range(10)), in older versions 
reduce(operator.add,range(10)). So, there is already some support 
for FP. The main issues are the limited lambda and the unhappy scope 
rules.
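A concrete sketch of both spellings (mine; note that on Python 3 reduce is no longer a builtin and must come from functools):

```python
import operator
from functools import reduce  # reduce was a builtin in Python 2

xs = list(range(10))

total_new = sum(xs)                   # the Python 2.3+ builtin
total_old = reduce(operator.add, xs)  # the pre-2.3 explicit fold

assert total_new == total_old == 45
```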
2/13/2004 7:47:22 PM
Joachim Durchholz <joachim.durchholz@web.de> writes:

> Joe Marshall wrote:
>
>> Here are a couple of very specific things:    1.  You don't need to
>> name *every* intermediate expression:
>>        redirect_loop (new Set ().adjoin (new URL (server_uri)));
>
> But you should - at least in most languages. The "new" operators may
> fail, and you have to do error handling.

It *ought* to throw.  

> Well, at least that's what happens in C++ (since it doesn't have
> garbage collection, and you're supposed to clean up in case of
> failure).

Can't you redefine it to do the right thing?

jrm (1310)
2/13/2004 7:58:38 PM
Michael Walter <cm@leetspeak.org> writes:

> Joachim Durchholz wrote:
>> Joe Marshall wrote:
>>
>>> Here are a couple of very specific things:
>>>    1.  You don't need to name *every* intermediate expression:
>>>
>>>        redirect_loop (new Set ().adjoin (new URL (server_uri)));
>> But you should - at least in most languages. The "new" operators may
>> fail, and you have to do error handling.
> When standard new fails, it throws a bad_alloc exception, so no need
> to name the allocation expression. The problem is rather that there
> are two "new" expressions here, which could lead to leaks unless
> handled... but what about a simple
>    redirect_loop(Set<URL>().adjoin(URL(server_uri)));

I snarfed that from Java code I wrote.  No parameterized types there.

jrm (1310)
2/13/2004 7:59:30 PM
Joe Marshall wrote:
> Michael Walter <cm@leetspeak.org> writes:
> 
> 
>>Joachim Durchholz wrote:
>>
>>>Joe Marshall wrote:
>>>
>>>
>>>>Here are a couple of very specific things:
>>>>   1.  You don't need to name *every* intermediate expression:
>>>>
>>>>       redirect_loop (new Set ().adjoin (new URL (server_uri)));
>>>
>>>But you should - at least in most languages. The "new" operators may
>>>fail, and you have to do error handling.
>>
>>When standard new fails, it throws a bad_alloc exception, so no need
>>to name the allocation expression. The problem is rather that there
>>are two "new" expressions here, which could lead to leaks unless
>>handeled.. but what about a simple
>>   redirect_loop(Set<URL>().adjoin(URL(server_uri)));
> 
> 
> I snarfed that from Java code I wrote.  No parameterized types there.
> 
In Java you don't suffer from memory leaks in the exception case, though 
(my point was against Joachim's claim that you should name every 
intermediate expression) :)
cm1 (31)
2/13/2004 8:18:04 PM
Michele Simionato wrote:

> Matthias Felleisen <matthias@ccs.neu.edu> wrote in message news:<c0ima6$iss$1@camelot.ccs.neu.edu>...

>>Here is a teaser:
>>
>>;; add up numbers in a vector
>>
>>(define A (vector 0 1 2 3 4 5 6 7 8 9))
>>
>>;; FP:
>>(apply + (vector->list A))
>>
>>;; Python or your favorite imperative (mindset) language:
>>(let ([sum 0])
>>  (for-each (lambda (i)
>>              (set! sum (+ sum (vector-ref A i))))
>>            (range (vector-length A)))
>>  sum)
>>
>>-- Matthias
> 
> 
> P.S. in Python 2.3 you would do sum(range(10)), in older versions 
> reduce(operator.add,range(10)). 

Excuse me, I should not have used such a stupid sample vector:

  (vector -1 9 0 3 2 55 -100)

> So, there is already some support 
> for FP. The main issues are the limited lambda and the unhappy scope 
> rules.

Sure but GvR disavows FP programming, and the real problem is that every 
built-in method returns unit (is of void type). Even if you wanted to (we 
tried), you can't maintain an FP style in this world.

-- Matthias :-)

2/13/2004 8:42:47 PM
Joe Marshall wrote:

> Joachim Durchholz <joachim.durchholz@web.de> writes:
> 
>> Well, at least that's what happens in C++ (since it doesn't have 
>> garbage collection, and you're supposed to clean up in case of 
>> failure).
> 
> Can't you redefine it to do the right thing?

Well, sort of - it's quite difficult and requires a lot of replicated
boilerplate code to make it watertight. Catching exceptions that a C++
constructor may throw is *messy*.
That's why "two-step construction" is one of the more common C++ idioms.
You write a simple constructor that cannot throw exceptions (i.e. you dumb
it down as far as possible, to the point that it doesn't do anything
useful other than allocate the object), then you call an initialization
function and handle any exceptions that come out of that. Since the
dumbed-down constructor can't do anything but fail due to memory
allocation problems, you're guaranteed that there's no memory leak.
(There are still some caveats to worry about...)
The downside is that two-step initialization requires assignment of the
intermediary result to a name. (Of course, a function not based on a
class could do that as well. But that's non-OO *gg*)

Regards,
Jo
--
Currently looking for a new job.

2/13/2004 10:39:39 PM
Matthias Felleisen <matthias@ccs.neu.edu> wrote in message news:<c0jd2l$qao$1@camelot.ccs.neu.edu>...
> Michele Simionato wrote:
> > P.S. in Python 2.3 you would do sum(range(10)), in older versions 
> > reduce(operator.add,range(10)). 
> 
> Excuse me, I should not have used such a stupid sample vector:
> 
>   (vector -1 9 0 3 2 55 -100)

Then sum([-1,9,0,3,2,55,-100]) ;-)
 
> > So, there is already some support 
> > for FP. The main issues are the limited lambda and the unhappy scope 
> > rules.
> 
> Sure but GvR disavows FP programming, 

True

> and the real problem is that every 
> built-in method returns unit (is of void type). Even if you wanted to (we 
> tried), you can't maintain an FP style in this world.

I think you are referring to the implementation level. At the user level
it is quite possible to do FP in Python, it is just ugly. For instance, 
since there is no "set!", you must use tricks such as wrapping variables in 
a list and modifying the list. This is ugly, so people don't do that. It is
also true that the ugliness is by design, since Guido wants there to be "only 
one obvious way to do it" (which is not the FP way).
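For the record, the list-wrapping trick looks like this (a sketch of mine):

```python
# Before Python 3's `nonlocal`, a closure could not rebind an outer
# local variable, but it could mutate an object the outer local refers
# to. A one-element list therefore serves as a mutable cell: a poor
# man's set!.
def make_counter():
    count = [0]              # mutable cell shared with the closure
    def increment():
        count[0] += 1        # mutation of the cell, not rebinding
        return count[0]
    return increment

tick = make_counter()
assert tick() == 1
assert tick() == 2
```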

      Michele Simionato
2/14/2004 5:00:39 AM
In comp.lang.scheme Joachim Durchholz <joachim.durchholz@web.de> wrote:
> 
> Well, sort of - it's quite difficult and requires a lot of replicated
> boilerplate code to make it watertight. Catching exceptions that a C++
> constructor may throw is *messy*.

It's about the same, really, as writing a C function that constructs a complex
object and gracefully handles failure. It's a bit of a pain in both cases, 
and you can easily experience the same pain in garbage-collected
languages if you are handling things like database connections.

> 
> Regards,
> Jo
> --
> Currently looking for a new job.
> 

-- 
	Sander

+++ Out of cheese error +++
sander569 (105)
2/14/2004 10:12:48 PM
Sander Vesik wrote:
> In comp.lang.scheme Joachim Durchholz <joachim.durchholz@web.de> wrote:
> 
>>Well, sort of - it's quite difficult and requires a lot of replicated
>>boilerplate code to make it watertight. Catching exceptions that a C++
>>constructor may throw is *messy*.
> 
> It's about the same really as writing a C function that constructs a complex
> object and gracefully handles failure. It's a bit of a pain in both cases, 

Nope. You obviously haven't programmed in C++, or you're not aware of 
all the details.
C++ has exceptions, C doesn't. C++ has default constructors, C doesn't. 
A C++ constructor will automatically run all the constructors of the 
base classes, with no means of catching exceptions in the base class 
constructors [*]; any uncaught exception here will leave the 
object-in-creation lying around with no owner referring to it, hence 
nobody responsible for destroying it. Which means you have a memory leak.

[*] That's a white lie: you *can* catch such exceptions. But the syntax 
for that is incredibly ugly, and the exception handling code for that 
case is often a verbatim copy of another exception handler within the 
constructor code. It's one of the worst instances of language design I 
have seen in my life, rivalled only by the ugliness of RPG IV.

> and you can easily get to experience the same pain in garbage-collected
> languages if you are handling things like database connections.

No, that's just cleaning up where you must.
Besides, there are enough idioms in FP that allow a separation of 
concerns. C++ exception handling in constructors gives you no means of 
doing that - you have to write the same boilerplate code over and over 
again. (And check that it was done correctly. And test and debug it. Ack.)

Alternatively, you can adopt a "two-step construction" idiom. Which 
means that every single object has a "constructor" and an "initializer", 
and instead of writing
   display (new url ("http://www.foo.com"))
you have to write
   HttpConnection conn = new url ("http://www.foo.com")
   conn.open()
   display (conn)
which is bearable for trivial examples, but incredibly painful if you 
handle just a modest amount of intermediate results within a routine. (I 
suspect that a large part of the "higher expressivity" (i.e. less lines 
of code per function point) comes from the fact that you don't have to 
name and initialize every intermediate result. But that's just an 
educated guess.)

Regards,
Jo
--
Currently looking for a new job.

2/15/2004 10:37:29 PM
>> In comp.lang.scheme Joachim Durchholz <joachim.durchholz@web.de> wrote:
>>> Catching exceptions that a C++ constructor may throw is *messy*.

> Sander Vesik wrote:
>> Its about the same really as writing a C function that constructs a
>> complex

> Joachim Durchholz <joachim.durchholz@web.de> writes:

> A C++ constructor will automatically run all the constructors
> of the base classes, with no means of catching exception in the base
> class constructors [*];

You guys are funny.  Every C++ program I ever used handled constructor
exceptions the same way:  dumping core.

-- 
~jrm
2/15/2004 11:31:04 PM
Joachim Durchholz <joachim.durchholz@web.de> writes:

| Sander Vesik wrote:
| > In comp.lang.scheme Joachim Durchholz <joachim.durchholz@web.de> wrote:
| >
| >>Well, sort of - it's quite difficult and requires a lot of replicated
| >>boilerplate code to make it watertight. Catching exceptions that a C++
| >>constructor may throw is *messy*.
| > Its about the same really as writing a C function that constructs a
| > complex
| > object and gracefully handles failure. Its a bit of a pain in both
| > cases,
| 
| Nope. You obviously haven't programmed in C++, or you're not aware of
| the all the details.

Ahem.

| C++ has exceptions, C doesn't. C++ has default constructors, C
| doesn't. A C++ constructor will automatically run all the constructors
| of the base classes, with no means of catching exception in the base
| class constructors [*]; any uncaught exception here will leave the
| object-in-creation being left lying around with no owner referring to

As part of stack unwinding, cleanups (i.e. destructors) are run for
automatic objects.

| it, hence nobody being responsible for destroying it. Which means you
| have a memory leak.

    An object that is partially constructed or partially destroyed will
    have destructors executed for all of its fully constructed
    subobjects, that is, for subobjects for which the constructor has
    completed execution and the destructor has not yet begun
    execution. Should a constructor for an element of an automatic array
    throw an exception, only the constructed elements of that array will
    be destroyed. If the object or array was allocated in a new-expression
    and the new-expression does not contain a new-placement, the deallocation
    function (3.7.3.2, 12.5) is called to free the storage occupied by the
    object; the deallocation function is chosen as specified in
    5.3.4. If the object or array was allocated in a new-expression
    and the new-expression contains a new-placement, the storage occupied
    by the object is deallocated only if an appropriate placement
    operator delete is found, as specified in 5.3.4. 

The programming technique known as RAII (Resource Acquisition Is
Initialization) is based on that.

| [*] That's a white lie:

At least, try to make it credible ;-/

-- Gaby
gdr (104)
2/16/2004 1:13:19 AM
Joachim Durchholz <joachim.durchholz@web.de> writes:
> [...] A C++ constructor will automatically run all the constructors
> of the base classes, with no means of catching exception in the base
> class constructors [*];
[...]
> [*] That's a white lie: you *can* catch such exceptions. But the
> syntax for that is incredibly ugly, and the exception handling code
> for that case is often a verbatim copy of another exception handler
> within the constructor code

Sounds interesting. Could you post an example / do you have links to
one? That'd be nice.

Regards,
        Mario.

m_mommer (212)
2/16/2004 4:50:56 PM
Mario S. Mommer wrote:

> Joachim Durchholz <joachim.durchholz@web.de> writes:
> 
>> [...] A C++ constructor will automatically run all the constructors
>>  of the base classes, with no means of catching exception in the
>> base class constructors [*];
> 
> [...]
> 
>> [*] That's a white lie: you *can* catch such exceptions. But the 
>> syntax for that is incredibly ugly, and the exception handling code
>>  for that case is often a verbatim copy of another exception
>> handler within the constructor code
> 
> Sounds interesting. Could you post an example / do you have links to 
> one? That'd be nice.

Uh... it's been a while, and I didn't retain any code from that time.
The first hit from googling for
   "C++ constructor" exception
gave me
   http://www.gotw.ca/gotw/066.htm
which is a pretty complete description of how to do that and all the 
ramifications.
I just hope that your C++ compiler implements function try blocks 
properly for constructors (and the function try block syntax is what I
referred to with "incredibly ugly" - you wrap the entire body of the 
function into a try-catch block, which is quite heavyweight syntax for a 
simple problem).

I have to take some of my words back - code duplication is probably not
necessary if you really, really stick with the practices recommended in
that article. (When I last used C++, I didn't have that article
available and finally got some very annoying code duplication, and it 
seems now that I could have done better.)
It's an excellent example of "C++ gives you lots of ways to shoot
yourself in the foot, a single way of doing it correctly, and makes the
correct syntax so ugly that you never want to code that way".
It seems that C++ is nearer to Cobol than I ever imagined (there are 
other similarities, like "it's incredibly verbose", "everybody uses it 
despite inferior design", "it's not going to go away for decades", 
"maintaining it is painful", and other parallels that I'm too lazy to 
remember right now...)

Regards,
Jo
--
Currently looking for a new job.

2/16/2004 7:02:24 PM
Gabriel Dos Reis wrote:

> Joachim Durchholz <joachim.durchholz@web.de> writes:
> 
> | Sander Vesik wrote:
> | > In comp.lang.scheme Joachim Durchholz <joachim.durchholz@web.de> wrote:
> | >
> | >>Well, sort of - it's quite difficult and requires a lot of replicated
> | >>boilerplate code to make it watertight. Catching exceptions that a C++
> | >>constructor may throw is *messy*.
> | > Its about the same really as writing a C function that constructs a
> | > complex
> | > object and gracefully handles failure. Its a bit of a pain in both
> | > cases,
> | 
> | Nope. You obviously haven't programmed in C++, or you're not aware of
> | the all the details.
> 
> Ahem.
> 
> | C++ has exceptions, C doesn't. C++ has default constructors, C
> | doesn't. A C++ constructor will automatically run all the constructors
> | of the base classes, with no means of catching exception in the base
> | class constructors [*]; any uncaught exception here will leave the
> | object-in-creation being left lying around with no owner referring to
> 
> As part of stack unwinding, cleanups (i.e. destructors) are run for
> automatic objects.

I meant subobjects (i.e. objects inherited from parents).
C++ forces you to use smart pointers and other advanced stuff if you 
wish to avoid memory leaks.
It can be done, but you have to carefully code for it.

> The programming technique known as RAII (Resource Acquisition Is
> Initialization) is based on that.

It's just that RAII forces programmers to tackle two complicated issues 
at the same time: C++ construction/destruction and resource 
acquisition/release, complicated by advanced features like initializer 
lists and function try-catch blocks.
I'm really wondering who's expecting anybody to learn C++ to an 
acceptable, ready-for-production level in less than two years... I have 
seen people catch up on Eiffel in less than six months, and I'd think 
that learning a functional language takes considerably less. (All 
figures excluding time for learning the paradigms - I'm assuming 
programmers who already know the paradigm but learn the language.)

Anyway, the relevant point still stands: C++ is sufficiently removed 
from C that handling failure is an entirely different issue in that 
language. Most differences lie in the area of object 
construction/destruction, with some additional considerations due to the 
availability of exceptions in C++.

Regards,
Jo
--
Currently looking for a new job.

2/16/2004 7:10:53 PM
Joachim Durchholz <joachim.durchholz@web.de> writes:

> Joe Marshall wrote:
> 
> > You don't need to name *every* intermediate expression:
> 
> But you should - at least in most languages. The "new" operators may
> fail, and you have to do error handling.  Well, at least that's what
> happens in C++ (since it doesn't have garbage collection, and you're
> supposed to clean up in case of failure).
  
Languages without garbage collection must die.  The ones with
*nothing* must die first, and the ones with only reference counting
must eventually die as well, or get real gc.  The days are gone when
it was acceptable to spend dozens of programmer hours finding a memory
leak (or, worse, leave it in and allow apps to periodically crash for
no good reason) in order to save a few measly bytes or hertz.  The
human's time is now worth more than the computer's resources in
virtually every circumstance.

-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
jonadab (30)
2/19/2004 6:02:14 AM
Joe Marshall <prunesquallor@comcast.net> writes:

> You guys are funny.  Every C++ program I ever used handled constructor
> exceptions the same way:  dumping core.

C++ is high on the list of Languages That Must Die, IMO.

-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
jonadab (30)
2/19/2004 6:04:25 AM
Joe Marshall <jrm@ccs.neu.edu> writes:

> michele.simionato@poste.it (Michele Simionato) writes:
> 
> > Still, I do think it is bad practice to code Python with a Scheme
> > mindset, just as it is bad practice to code Scheme with a Python
> > mindset.
> 
> I don't.
> 
> A friend of mine once remarked that even my C code looked like
> Scheme.  He meant it as a compliment and I took it as one.

Yes and no.  It's fine to program in one language using some
constructs and approaches that come from the mindset of another
language; in fact, that's good.  However, you don't want to be
ignorant of the mindset of the language you *are* programming in,
either.  The best example I can think of for this is people who
previously know only procedural programming (especially C) and then
start learning Perl.  At first there are certain whole classes of
mistakes that they make because of fundamental misunderstandings that
they have.  The most common case is that they think of things as
statements that would be statements in C -- but Perl doesn't have
statements, or anything very much like them.  This leads to code like
the following:

print (3 + 6) * 2;

The procedural programmer, thinking of that as a statement, assumes
that it will print 18, which of course is ludicrous, because print is
a list operator function.  Therefore, the parentheses set off its
argument list, just like they would with any other function, and its
return value gets multiplied by 2 in void context (which will generate
a warning, but C programmers are typically so used to ignoring
warnings that they often don't notice this).

A little more familiarity with the language you're using will clear up
this sort of problem.  It's certainly fine to use a procedural
paradigm in Perl, similar to that in C, but you have to be aware of
the fact that you're doing that and do it deliberately, rather than
just assuming the language is that way (because, it's not entirely).

The same sort of things would apply to writing Python code with a
Scheme mindset, I would think.  As long as you know Python well enough
to carry over the Scheme paradigms effectively, all is well and good,
but you wouldn't want to just assume that Python works like Scheme.

-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
jonadab (30)
2/19/2004 6:19:30 AM
michele.simionato@poste.it (Michele Simionato) writes:

> Just an anecdotal example: I recently wrote a state machine in
> Scheme which (quite naturally) came out to be full of closures.
> Then I translated it in Python and discovered (with my surprise)
> that I could maintain the functional structure essentially
> unchanged.

This is not surprising to me.  I'm not by any means a Python
programmer, but I know just enough of the language to be pretty sure
it's a multiparadigmatic language (though not quite as expressly so as
Perl).  The Python docs spend a lot of time talking about OO stuff,
but the language is clearly more flexible than that.

> Nevertheless, I would never have used that style if I had coded in
> Python from the beginning (I would have used classes instead).

Next question:  would it (the Python implementation) have been better
coded using classes, or is the functional implementation just as good?
Id est, was your implementation warped undesirably by the bias of
translating from the other language, or OTOH is your Python coding
normally biased unfairly against the functional paradigm, perhaps due to
the way the Python documentation emphasizes OO?

> So, certainly the functional paradigm is independent from the
> language.

Quite.  I find when programming in Perl that closures are an indispensable
tool to have at my disposal.  I don't use them in every program, of
course, but I use them with some regularity.  Certain classes of
problems are just best solved that way.

> Still, I do think it is bad practice to code Python with a Scheme
> mindset, just as it is bad practice to code Scheme with a Python
> mindset. I personally make a personality split and I code with
> different styles and paradigm according to the target language,
> unless for some reason I need to translate literally from one to the
> other (as in that case).

This is somewhat different from my approach.  I freely mix whatever
paradigms are available to me in the language I'm using, based on what
best solves the problem at hand.  This may be partly because my
primary language is Perl, which is an expressly multiparadigmatic
language.  Functional, contextual, and procedural programming are all
equally straightforward in Perl, and OO is also possible (though a bit
klunky in Perl5; Perl6 is going to fix this, but it's not ready yet )-:

> OTOH, it is by no means guaranteed that a paradigm which is superior
> in language X is also superior in language Y, so my advice is to use
> the paradigm the language support the best.

Often a particular problem is best solved with a certain paradigm, or
at least better solved with some paradigms than with others.

-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
jonadab (30)
2/19/2004 6:30:37 AM
"Jonadab the Unsightly One" wrote:
> This may be partly because my primary language is Perl, which
> is an expressly multiparadigmatic language.  Functional, contextual,
> and procedural programming are all equally straightforward in Perl,

The idea that functional programming is straightforward in Perl only lasts
until you try to write, say, a CPS parser in it.  The point being that Perl
doesn't support tail recursion, and so at the very least it'll use a huge
amount of space when it doesn't need to; at worst, you'll run out of space
if your machine survives the humongous memory allocation that it does with
recursive programs.

To see the problem, try monitoring memory usage with the following program:

  sub foo {
     my $x = shift;
     print $x if $x%100000 == 0;
     foo ($x+1);
  }
  foo(0);

This will use up hundreds of megabytes of memory, startlingly quickly.  The
equivalent Scheme, ML etc. will run without any impact on memory usage,
until numeric overflow occurs:

  (define (foo x) (if (zero? (remainder x 100000)) (display x)) (foo (+ x 1)))
  (foo 0)
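Python behaves like Perl here rather than like Scheme. A small sketch of mine (the count_down helper is a made-up example) shows the missing tail-call elimination:

```python
# Python, like Perl, does not eliminate tail calls: each recursive
# call consumes a stack frame, so a tail-recursive loop eventually
# fails instead of running in constant space.
def count_down(n):
    if n == 0:
        return "done"
    return count_down(n - 1)   # a tail call, but not optimized away

assert count_down(100) == "done"

try:
    count_down(10 ** 6)        # far beyond the default limit (~1000)
    hit_limit = False
except RecursionError:
    hit_limit = True
assert hit_limit
```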

Although it's true that experienced functional programmers can use languages
like Perl & Python in various functional ways, often by relying on
workarounds, there's a problem with claiming that these languages support
functional programming: it encourages Perl/Python/Ruby programmers to
believe that they understand FP and have access to it in their language, so
they miss out on the opportunity to learn more about one of the more
important and enlightening programming paradigms.

It's particularly unfortunate that the generally poor support for FP in
these languages often leads people to conclude that the functional paradigm
is flawed, when the fault actually lies with the non-FP language they're
using.

Anton


anton58 (1240)
2/19/2004 8:31:51 AM
"Anton van Straaten" <anton@appsolutions.com> writes:

> "Jonadab the Unsightly One" wrote:
>> This may be partly because my primary language is Perl, which
>> is an expressly multiparadigmatic language.  Functional, contextual,
>> and procedural programming are all equally straightforward in Perl,
>
> The idea that functional programming is straightforward in Perl only lasts
> until you try to write, say, a CPS parser in it.  The point being that Perl
> doesn't support tail recursion, and so at the very least it'll use a huge
> amount of space when it doesn't need to; at worst, you'll run out of space
> if your machine survives the humongous memory allocation that it does with
> recursive programs.

Languages that are not `safe for space' are fundamentally broken.

-- 
~jrm
2/19/2004 10:41:47 AM
Jonadab the Unsightly One wrote:

> Languages without garbage collection must die.

*sigh* kill RPG first. Then Cobol. Then Fortran.
Good luck - you'll need it.
After that, we can even begin to think about C++. (Or Java.)

Regards,
Jo
--
Currently looking for a new job.

2/19/2004 10:59:48 AM
"Jonadab the Unsightly One" <jonadab@bright.net> wrote in message news:<hdxninsi.fsf@jonadab.homeip.net>...
> michele.simionato@poste.it (Michele Simionato) writes:
> 
> > Just an anecdotal example: I recently wrote a state machine in
> > Scheme which (quite naturally) came out to be full of closures.
> > Then I translated it in Python and discovered (with my surprise)
> > that I could maintain the functional structure essentially
> > unchanged.
> 
> This is not surprising to me.  I'm not by any means a Python
> programmer, but I know just enough of the language to be pretty sure
> it's a multiparadigmatic language (though not quite as expressly so as
> Perl).  The Python docs spend a lot of time talking about OO stuff,
> but the language is clearly more flexible than that.

I don't think Python is less flexible than Perl, at least in the stuff
that matters (it is much less flexible in the syntax, of course).
However, doing things in different ways (i.e. not Guido's way) is 
discouraged. The Zen of Python explicitly says:

"There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch."

So, in principle, there should be only one "obviously Pythonic" style.
Pythonistas think of this kind of uniformity as a worthwhile goal;
Perl programmers probably see it as a disgusting prospect ;)

> Next question:  would it (the Python implementation) have been better
> coded using classes, or is the functional implementation just as good?

What does "better" mean? Faster, more efficient? Or simpler to write,
simpler to maintain, simpler to read for other Pythonistas? 

I have never benchmarked applications using a functional style
vs. applications using an object oriented style, so I can't say,
but I don't expect to see big differences. OTOH, writing the
application in the obvious OO style has big advantages if the
application has to be maintained by another Python programmer not 
very familiar with functional techniques.

In my experience the difference between OO style and functional style
(in Python) is more a matter of personal taste than a matter of substance.
In Scheme the situation is different, since OO is not in the standard 
and you are naturally encouraged to think functionally.
 
> Id est, was your implementation warped undesirably by the bias of
> translating from the other language,

Yes, using all those closures would look bizarre, "unpythonic" to 
another Python programmer looking at that code.

> or OTOH is your Python coding
> normally biased unfairly against the function paradigm, perhaps due to
> the way the Python documentation emphasizes OO?

I would not say "unfairly", but it is true that I would use FP techniques 
much more if I had better support in the language. By design the language 
itself (i.e. it is not a matter of documentation only) has good 
support for OOP, but only minimal support for functional programming. 
For instance, my first post on c.l.p. was prompted by my confusion 
about the scope rules in closures (*):

def make_adders(n):
    return [(lambda x: x+i) for i in range(n)]

add0,add1=make_adders(2)

print add0(0) #=> 1 !! this is surprising !!
print add1(0) #=> 1

(I guess Perl is less surprising in this respect).
The way to get what I wanted was this:

def make_adders(n):
    return [(lambda x,i=i: x+i) for i in range(n)]

add0,add1=make_adders(2)

print add0(0) #=> 0
print add1(0) #=> 1

You see a typical example of "Pythonicity" here: this way is not obvious, 
so it is NOT the right way to go. The scope rules are bizarre on purpose,
since people are not expected to use closures, they should use objects:

class adder(object):
    "Returns callable objects acting as adders."
    def __init__(self,i):
        self.i=i
    def __call__(self,x):
        return x+self.i

def make_adders(n):
    return [adder(i) for i in range(n)]
    
add0,add1=make_adders(2)

print add0(0) #=> 0
print add1(0) #=> 1

This is the "obvious" way for a Python programmer (I am sure Scheme or 
Perl programmers here will think the functional version is clearer, but 
let it be) and has various advantages over the functional version: 

- the adder logic is encapsulated in an object with a self-documenting name;
- I can add docstrings to the adder class and not to the lambda;
- I can later extend the class, whereas I cannot derive from the lambda;
- I don't use the esoteric name "lambda", which may scare my readers ;)
 
OTOH, if I had good scope rules in the language, I would code the 
adder with a lambda, and my code would be less Pythonic, i.e. less
readable to others. So, Guido doesn't give me better scope rules and 
"ipso facto" guarantees that the only *obvious* way to do it is the OO way. 
Uniformity of style is a very valuable thing in a corporate environment, 
less so if you code for yourself.
> 
> > Still, I do think it is bad practice to code Python with a Scheme
> > mindset, just as it is bad practice to code Scheme with a Python
> > mindset. I personally make a personality split and I code with
> > different styles and paradigm according to the target language,
> > unless for some reason I need to translate literally from one to the
> > other (as in that case).
> 
> This is somewhat different from my approach.  I freely mix whatever
> paradigms are available to me in the language I'm using, based on what
> best solves the problem at hand.  This may be partly because my
> primary language is Perl, which is an expressly multiparadigmatic
> language.  Functional, contextual, and procedural programming are all
> equally straightforward in Perl, and OO is also possible (though a bit
> klunky in Perl5; Perl6 is going to fix this, but it's not ready yet )-:

The choice of the paradigm is somewhat restricted by the language,
not only by the problem. For instance, I am aware of the possible warts 
of using closures in Python, so I have two choices: 1) I am very careful, 2) 
I switch to the OOP paradigm. More often than not 2) is the simplest
choice. I am sure that in Perl you would tend to prefer the functional 
way over the object oriented way, instead.

> > OTOH, it is by no means guaranteed that a paradigm which is superior
> > in language X is also superior in language Y, so my advice is to use
> > the paradigm the language support the best.
> 
> Often a particular problem is best solved with a certain paradigm, or
> at least better solved with some paradigms than with others.

That's true, but still the language may encourage you to go with a
certain paradigm.

Coming back on topic, i.e. to Scheme, I would say that in Scheme (or Lisp)
these issues are less relevant, since in these languages you have the
opportunity to modify the language itself to follow the paradigm 
you have in mind. Still, even Scheme encourages some paradigms over others:
for instance you are encouraged to solve a problem recursively and not 
iteratively, to think in terms of higher order functions, etc.

Summarizing, I think that if you want to use a paradigm not supported by 
language X, you have two choices: 1) change your mind; 2) switch to language 
Y. In the case of X = Scheme, Y can be a customization of Scheme written
by yourself; still, you have to write the language yourself, which
is a bigger effort than using the native constructs. I.e. I may
implement Python "for" loops in Scheme, but more often than not I
am better off using the native looping constructs.


(*) notice that I was naturally inclined to think in an FP way even
before studying Python and before studying Scheme: I was introduced
to map and friends by Mathematica and Maple. I had no problems at
all in learning the FP approach, whereas I needed a more substantial
effort to grasp OOP.
0
2/19/2004 1:52:01 PM
michele.simionato@poste.it (Michele Simionato) writes:

> However, doing things in different ways (i.e. not Guido's way) is 
> discouraged. The Zen of Python explicitely says:
>
> "There should be one-- and preferably only one --obvious way to do it.
> Although that way may not be obvious at first unless you're Dutch."

I wonder how Guido puts his pants on in the morning.
0
jrm (1310)
2/19/2004 2:49:12 PM
> Joe Marshall wrote:
>>  You don't need to name *every* intermediate expression:

> Joachim Durchholz <joachim.durchholz@web.de> writes:
> But you should - at least in most languages. 

So much for referential transparency.  You shouldn't have to.
0
jrm (1310)
2/19/2004 2:51:21 PM
"Jonadab the Unsightly One" <jonadab@bright.net> writes:

> Joe Marshall <jrm@ccs.neu.edu> writes:
>
>> A friend of mine once remarked that even my C code looked like
>> Scheme.  He meant it as a compliment and I took it as one.
>
> Yes and no.  It's fine to program in one language using some
> constructs and approaches that come from the mindset of another
> language; in fact, that's good.  

I think that the proper mindset transcends the language.

> However, you don't want to be ignorant of the mindset of the
> language you *are* programming in, either.

Yes, you should know what you are up against.

For instance, function pointers are first-class objects in C, but they
are not closures.  You can write higher-order function in C (the
syntax is quite painful, but the results are nice), but you must
always remember that the `environment' is not part of the `function'
and must be passed separately.
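
[Editor's note: the C idiom Joe describes — a function pointer plus a
separately passed environment — can be sketched in Python, using
hypothetical names, to make the contrast with real closures concrete:]

```python
# Sketch of the C pattern: a "function" is a plain function plus an
# explicitly passed environment, rather than a closure that captures
# its environment automatically.

def add_env(env, x):
    # The "captured" variable lives in env, not in a lexical closure.
    return x + env["i"]

def make_adders(n):
    # Each adder is a (function, environment) pair, as it would be in C
    # (where env would typically be a void* handed to the callback).
    return [(add_env, {"i": i}) for i in range(n)]

(f0, e0), (f1, e1) = make_adders(2)
print(f0(e0, 0))  # => 0
print(f1(e1, 0))  # => 1
```

The caller must thread the right environment to the right function by
hand — exactly the bookkeeping a language with first-class closures
does for you.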
0
jrm (1310)
2/19/2004 2:56:24 PM
In article <smh7nmzb.fsf@ccs.neu.edu>, Joe Marshall <jrm@ccs.neu.edu> 
wrote:

> michele.simionato@poste.it (Michele Simionato) writes:
> 
> > However, doing things in different ways (i.e. not Guido's way) is 
> > discouraged. The Zen of Python explicitely says:
> >
> > "There should be one-- and preferably only one --obvious way to do it.
> > Although that way may not be obvious at first unless you're Dutch."
> 
> I wonder how Guido puts his pants on in the morning.

In parallel assignment, order is an implementation issue:

   left_leg, right_leg = pants()

;)

-M

-- 
Michael J. Fromberger             | Lecturer, Dept. of Computer Science
http://www.dartmouth.edu/~sting/  | Dartmouth College, Hanover, NH, USA
0
2/19/2004 5:15:46 PM
Joe Marshall wrote:

>>Joe Marshall wrote:
>>
>>> You don't need to name *every* intermediate expression:
> 
>>Joachim Durchholz <joachim.durchholz@web.de> writes:
>>But you should - at least in most languages. 
> 
> So much for referential transparency.  You shouldn't have to.

Definitely!

One language I worked with didn't have a "new" operator, forcing me to 
assign all newly-created objects to names. This most definitely sucked.

Of course, doing without those "new" operators sprinkled everywhere is 
even better :-)

Regards,
Jo
--
Currently looking for a new job.

0
2/19/2004 5:30:30 PM
Joe Marshall wrote:

> 
> Languages that are not `safe for space' are fundamentally broken.
> 

Is Common Lisp fundamentally broken? Or are implementations space safe,
notwithstanding the lack of mandate in the standard?
-- 
Grzegorz Chrupała | http://pithekos.net | grzegorzc@jabber.org
gosh -bE "begin (print \
(map (cut number->string <> 36) '(18 806564 1714020422))) (exit)"
0
grzegorz1 (80)
2/19/2004 7:39:52 PM
Grzegorz Chrupała <grzegorz@pithekos.net> writes:

> Joe Marshall wrote:
>
>> 
>> Languages that are not `safe for space' are fundamentally broken.
>> 
>
> Is Common Lisp fundamentally broken? 

I was going for hyperbole, but I would consider a Common Lisp
implementation that could not be made tail recursive through the
appropriate magic to be seriously flawed.

> Or are implementations space safe, notwithstanding the lack of
> mandate in the standard? 

Some are.  Franz Allegro and Lispworks can be made properly tail
recursive with the correct compiler switches.
0
jrm (1310)
2/19/2004 8:43:02 PM
"Anton van Straaten" <anton@appsolutions.com> writes:

> The point being that Perl doesn't support tail recursion,

Not true, though you have to know the magic words.

> To see the problem, try monitoring memory usage with the following program:
>
>   sub foo {
>      my $x = shift;
>      print $x if $x%100000 == 0;
>      foo ($x+1);
>   }
>   foo(0);

    sub foo {
        my $x = shift;
        print $x unless $x%100000;
        @_=($x-1);           # <-- pass arguments
        goto &foo;           # <-- tail recursive call
    }
    foo(0);

/s
0
look-it-up (12)
2/19/2004 10:40:21 PM
Sean O'Rourke wrote:
> "Anton van Straaten" <anton@appsolutions.com> writes:
>
> > The point being that Perl doesn't support tail recursion,
>
> Not true, though you have to know the magic words.
>
> > To see the problem, try monitoring memory usage with the following
program:
> >
> >   sub foo {
> >      my $x = shift;
> >      print $x if $x%100000 == 0;
> >      foo ($x+1);
> >   }
> >   foo(0);
>
>     sub foo {
>         my $x = shift;
>         print $x unless $x%100000;
>         @_=($x-1);           # <-- pass arguments
>         goto &foo;           # <-- tail recursive call
>     }
>     foo(0);

Unless you're talking about something new in a bleeding-edge version of
Perl, this doesn't seem to work - it eats memory just as fast.  I tried it
in Perl v5.8.2, as well as a couple of earlier versions, under Debian, Red
Hat, and Windows.

If there is a newer version of Perl which has added some sort of tail
recursion optimization, then based on the above example, it would still not
be very practical, if used widely in real programs.  It would put recursive
programs firmly into the same category as OOP in Perl 5: you would be able
to do it, but it would be messy, bug-prone, and far from natural.  It would
still be extremely misleading to claim that such a language "supports
functional programming", and it would be a disservice to Perl programmers
interested in learning about techniques beyond the confines of Perl.

Anton


0
anton58 (1240)
2/20/2004 2:30:02 AM
In article <95aa1afa.0402190552.1d91a260@posting.google.com>, Michele Simionato wrote:
> 
> Summarizing, I think that if you want to use a paradigm not supported by 
> language X, you have two choices: 1) change your mind: 2) switch to language 
> Y. In the case of X = Scheme, Y can be a customization of Scheme written
> by yourself; still you have to to write the language yourself, which
> is a bigger effort than using the native constructs. I.e. I may
> implement Python "for" loops in Scheme, but more often than not I
> am better off by using the native looping constructs.
> 

Nice post, Michele.

You reminded me of an epigram I came up with last year:

When a language makes something hard to say, there are three options: change
the language, abuse the language, or say something different. 

-- 
..:[ dave benjamin: ramen/[sp00] -:- spoomusic.com -:- ramenfest.com ]:.
:  please talk to your son or daughter about parametric polymorphism. :
0
ramen (337)
2/20/2004 2:35:13 AM
"Anton van Straaten" <anton@appsolutions.com> writes:
> Unless you're talking about something new in a bleeding-edge version
> of Perl, this doesn't seem to work - it eats memory just as fast.  I
> tried it in Perl v5.8.2, as well as a couple of earlier versions,
> under Debian, Red Hat, and Windows.

This must have changed somewhere between 5.6.0 and 5.8.3, the two
versions I have available.  The version I posted still grows over
time, but much more slowly than your original.  My guess is that this
has to do with the lexical "my $x" not being freed; this seems like a
bug in the implementation.  The following version avoids creating a
lexical, and does not grow:

    sub foo {
        print $_[0] unless $_[0]%100000;
        $_[0]--;
        goto &foo
    }
    $x=1;
    foo($x);

For an explanation "perldoc perlfunc", and search for "goto &NAME".

> [re recursive programming]
>
> you would be able to do it, but it would be messy,

That's just syntax ;).

> bug-prone,

Explain.  Are you referring to the extra step in a function call?

> and far from natural.

I agree it's unnatural, though I don't know about the "far from".

> It would still be extremely misleading to claim that such a language
> "supports functional programming", and it would be a disservice to
> Perl programmers interested in learning about techniques beyond the
> confines of Perl.

Why is this misleading?  Is the programmer misled about what tail call
elimination does?  To cite another poster, does Lisp's lack of
guaranteed tail call elimination make it "extremely misleading to
claim that such a language 'supports functional programming'"?

And why is it a disservice?  Is the programmer, after using tail calls
in Perl, unable to recognize or use them in other languages?  At worst
the Perl programmer, wanting to use "goto &func" in his Scheme
program, will be surprised that he can use the same syntax as for a
normal function call.  At best, he will have a decent understanding
of what the technique actually does.

/s
0
look-it-up (12)
2/20/2004 3:06:52 AM
Dave Benjamin <ramen@lackingtalent.com> writes:

> When a language makes something hard to say, there are three options: change
> the language, abuse the language, or say something different. 

I generally say something abusive about the language.

-- 
~jrm
0
2/20/2004 3:07:56 AM
Sean O'Rourke wrote:
> "Anton van Straaten" <anton@appsolutions.com> writes:
> > Unless you're talking about something new in a bleeding-edge version
> > of Perl, this doesn't seem to work - it eats memory just as fast.  I
> > tried it in Perl v5.8.2, as well as a couple of earlier versions,
> > under Debian, Red Hat, and Windows.
>
> This must have changed somewhere between 5.6.0 and 5.8.3, the two
> versions I have available.  The version I posted still grows over
> time, but much more slowly than your original.

I tried with 5.6.0, 5.8.2, and some in between, and growth was unreasonably
fast in all cases.  Are you saying 5.8.3 is better at this?  I'm not sure I
believe that, based on what I've seen so far.

> My guess is that this has to do with the lexical "my $x" not
> being freed; this seems like a bug in the implementation.

"Bug" or "broken", whatever you call it, it prevents you from using the
language to do functional programming.  It's quite likely that it's not a
simple thing to fix.

> The following version avoids creating a
> lexical, and does not grow:
>
>     sub foo {
>         print $_[0] unless $_[0]%100000;
>         $_[0]--;
>         goto &foo
>     }
>     $x=1;
>     foo($x);

This version still grows without apparent limit under 5.6.0 and 5.8.2 - just
a bit more slowly than the previous examples.  None of this seems to be
solving the problem, but it's certainly doing a good job of proving that
Perl doesn't *support* functional programming, no matter how many tricks are
used.  It might be possible to simulate FP poorly, but that's not "support".

> > bug-prone,
>
> Explain.  Are you referring to the extra step in a function call?

It requires the user to manually choose when to use which function call
mechanism, opening the possibility that the wrong one is used, leading to
space leaks at least, if not semantic errors.  The point is, Perl does not
*support* recursive programming, or functional programming, and it's
misleading to claim otherwise.

> > and far from natural.
>
> I agree it's unnatural, though I don't know about the "far from".

No offense, but looking at the latest code sample above - in which a trivial
example is mangled beyond any hope of usability, yet still appears to suffer
from a serious space leak - this exchange is reminding me of the Monty
Python "Black Knight" skit, where the Black Knight continues to taunt his
opponent even after all of his limbs have been chopped off.  ;)

> > It would still be extremely misleading to claim that such a language
> > "supports functional programming", and it would be a disservice to
> > Perl programmers interested in learning about techniques beyond the
> > confines of Perl.
>
> Why is this misleading?  Is the programmer misled about what tail call
> elimination does?

It's misleading because it leads people to believe that what they can do (or
have to do) in Perl is in some way representative of functional programming.
It isn't - not even close.

We've just been focusing on one feature here: tail recursion.  We haven't
even started on tail calls in general, and since the workarounds given so
far don't even fix pure self-recursion, it seems pointless to speculate on
what might happen with more complex, mutually recursive programs, or
programs written in CPS.  And then there's the data structures: Perl doesn't
have recursive data structures, and if you try to use its existing types to
build recursive structures, you run into the same problems that make OOP in
Perl so clumsy.

> To cite another poster, does Lisp's lack of guaranteed tail
> call elimination make it "extremely misleading to claim that
> such a language 'supports functional programming'"?

The question of whether it's the specification or the implementation that
supports such a feature isn't particularly relevant when *neither* the
specification nor implementation supports it, as is the case in Perl.  In
practice, Lisp implementations handle recursion far better than Perl.

> And why is it a disservice?  Is the programmer, after using tail calls
> in Perl, unable to recognize or use them in other languages?

It's a disservice because it misleads programmers into believing that
they've gained an understanding of functional programming from within Perl
(or Python), when that's not the case at all.  This seems to make many
people feel they don't need to investigate further - the myth that Perl
supports functional programming encourages a sort of "oh, is that all there
is to it?" reaction, when they in fact haven't been exposed to even a
fraction of what functional programming is about.  People are good at making
up excuses not to learn things - giving them another such excuse by
pretending that Perl supports a paradigm that it doesn't, is a disservice.

> At worst
> the Perl programmer, wanting to use "goto &func" in his Scheme
> program, will be surprised that he can use the same syntax as for a
> normal function call.  At best, he will have a decent understanding
> of what the techinque actually does.

That's not my experience in practice.  Can you point to any significant
programs written in the style you've described?  I'd be surprised to find
they exist, other than in small proof-of-concept examples.  If people aren't
writing recursive programs as a matter of course, then they're not doing
functional programming.

Anton


0
anton58 (1240)
2/20/2004 5:30:13 AM
At Fri, 20 Feb 2004 05:30:13 GMT, Anton van Straaten wrote:
> 
> This version still grows without apparent limit under 5.6.0 and 5.8.2 - just
> a bit more slowly than the previous examples.  None of this seems to be
> solving the problem, but it's certainly doing a good job of proving that
> Perl doesn't *support* functional programming, no matter how many tricks are
> used.  It might be possible to simulate FP poorly, but that's not "support".

What's functional programming?  Could one say that Scheme doesn't
support functional programming because it has set!?  Or that ML and
Haskell don't support functional programming because you can't define
anti?

[from a current c.l.s thread where

   (define (anti f)
     (lambda args (not (apply f args))))
]
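
[Editor's note: for comparison, a direct transcription of `anti` into
Python is a one-liner, since nothing in a dynamically typed language
constrains the arity of the wrapped predicate — which is exactly why
the statically typed languages balk at it:]

```python
def anti(f):
    # Negate any predicate, whatever its arity; *args mirrors
    # Scheme's rest-argument lambda.
    return lambda *args: not f(*args)

is_even = lambda n: n % 2 == 0
is_odd = anti(is_even)
print(is_odd(3))  # => True
```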

-- 
Alex

0
foof (110)
2/20/2004 5:48:40 AM
Alex Shinn wrote:
> At Fri, 20 Feb 2004 05:30:13 GMT, Anton van Straaten wrote:
> >
> > This version still grows without apparent limit under 5.6.0 and 5.8.2 -
just
> > a bit more slowly than the previous examples.  None of this seems to be
> > solving the problem, but it's certainly doing a good job of proving that
> > Perl doesn't *support* functional programming, no matter how many tricks
are
> > used.  It might be possible to simulate FP poorly, but that's not
"support".
>
> What's functional programming?  Could one say that Scheme doesn't
> support functional programming because it has set!?

There are flavors of functional programming - statically typed, dynamically
typed, pure, impure.  Scheme is impure, but has a pure subset, so it has
both of those bases covered.

> Or that ML and Haskell don't support functional programming
> because you can't define anti?

That's because they're of the statically-typed flavor.

Although I don't think any of the above examples really demonstrate it,
there is a spectrum of support for functional programming.  However, I'm
saying that the languages on the impoverished end of that spectrum don't do
an acceptable job of allowing someone to learn the principles of functional
programming, and they actively harm one's prospects of learning those
principles, as long as one restricts oneself to those languages.

If you want a litmus test for functional programming, it would relate to the
kinds of programs you can reasonably write in the language.  Anything
written in CPS is a good test - almost any functional language can support
CPS, whether or not you would really choose to write programs that way in
the language.

By that test, two of the more important characteristics for functional
programming would be *correct* support for first-class lexical closures
(Perl has, Python doesn't), as well as space-safety in the presence of tail
calls (Perl doesn't have this, Stackless Python does).  But there are other
requirements also, as I mentioned in my previous post, such as how well a
language supports recursive data structures.

I think it's encouraging that these languages are evolving in more
sophisticated directions, but it should be recognized that they're not there
yet, and anyone who wants to learn about functional programming should
explore at least one real functional language.

Anton


0
anton58 (1240)
2/20/2004 6:28:48 AM
I wrote:
> Sean O'Rourke wrote:
> > The following version avoids creating a
> > lexical, and does not grow:
> >
> >     sub foo {
> >         print $_[0] unless $_[0]%100000;
> >         $_[0]--;
> >         goto &foo
> >     }
> >     $x=1;
> >     foo($x);
>
> This version still grows without apparent limit under 5.6.0 and 5.8.2 -
just
> a bit more slowly than the previous examples.

I was incorrect, although this example does grow under 5.6.0, it does not
grow under 5.8.2.  (In my prior test, the Linux instance running 5.8.2 had
hung during a previous run in which all memory & swap was consumed, and I
ended up inadvertently only testing earlier versions.)

Anyway, I'll eagerly await the "bugfix" which allows one to safely use local
variables in procedures which perform tail calls.  After that, a version
which can safely perform tail calls without explicit stack manipulation
would be nice...

Anton


0
anton58 (1240)
2/20/2004 9:33:58 AM
"Anton van Straaten" <anton@appsolutions.com> writes:

> If you want a litmus test for functional programming, it would relate to the
> kinds of programs you can reasonably write in the language.  Anything
> written in CPS is a good test - almost any functional language can support
> CPS, whether or not you would really choose to write programs that way in
> the language.
>
> Anton
>

Exactly!  If you cannot program in CPS, the language is broken.  
(I was just saying this yesterday.)

This naturally implies proper tail recursion and first-class lexical
closures.



-- 
~jrm
0
2/20/2004 11:03:07 AM
Joe Marshall wrote:
> "Anton van Straaten" <anton@appsolutions.com> writes:
> 
> 
>>If you want a litmus test for functional programming, it would relate to the
>>kinds of programs you can reasonably write in the language.  Anything
>>written in CPS is a good test  ...
>>
> 
> Exactly!  If you cannot program in CPS, the language is broken.  
> (I was just saying this yesterday.)
> 
> This naturally implies proper tail recursion and first-class lexical
> closures.

Either you admit that there is ALWAYS a way of making tail recursion or
other Glorious Constructs in *any* language, so none are *really* broken,

Or you say that all assemblers are broken.

This, in my eyes, spoils the word "broken" of any sense but your personal
appreciation.


Jerzy Karczmarczuk

(PS. What do you recommend to do with broken languages?
      And with people (as myself) who profess broken English?)


0
karczma (331)
2/20/2004 11:27:55 AM
> Joe Marshall wrote:
>> If you cannot program in CPS, the language is broken.

Jerzy Karczmarczuk <karczma@info.unicaen.fr> writes:
>
> Either you admit that there is ALWAYS a way of making tail recursion or
> other Glorious Constructs in *any* language, so none are *really* broken,

I refuse to admit it!

Ok, one can obviously implement a Scheme interpreter in any
Turing-complete language, but that's not what I mean.  If we consider
the `macro-expressibility' as suggested by Felleisen in `On the
Expressive Power of Programming Languages', then we can make the
distinction between tail-recursion being *provided* by the language
vs. tail-recursion simply being *implementable* in the language.

>
> Or you say that all assemblers are broken.

Actually, I think assembly code is second to Scheme in this regard.
In general (like x86 or PDP-11 sort of assembly code), assembly code
has poor support for closures, but full tail-recursion is trivial.  In
addition, you can easily program in continuation-passing-style by
pushing multiple return addresses and allowing the callee to return to
the one of its choosing.

> This, in my eyes, spoils the word "broken" of any sense but your personal
> appreciation.

You say that as if my personal appreciation isn't the highest measure
of objective utility.

> Jerzy Karczmarczuk
>
> (PS. What do you recommend to do with broken languages?
>       And with people (as myself) who profess broken English?)

It depends on how strongly you profess broken English.  I imagine that
broken English is better than none at all for communicating to an
English speaker, but probably not as effective as fluent English.

Your broken English is certainly better than my broken French.
0
jrm (1310)
2/20/2004 3:18:13 PM
Sean O'Rourke wrote:
> 
>>The point being that Perl doesn't support tail recursion,
> 
> Not true, though you have to know the magic words.
> 
>>To see the problem, try monitoring memory usage with the following program:
>>
>>  sub foo {
>>     my $x = shift;
>>     print $x if $x%100000 == 0;
>>     foo ($x+1);
>>  }
>>  foo(0);
> 
> 
>     sub foo {
>         my $x = shift;
>         print $x unless $x%100000;
>         @_=($x-1);           # <-- pass arguments
>         goto &foo;           # <-- tail recursive call
>     }
>     foo(0);

This example is beside the point, because it is overly trivial. For 
functional programming you need tail-recursion for functions that 
actually return something. How can a goto help in this situation?

	- Andreas

-- 
Andreas Rossberg, rossberg@ps.uni-sb.de

Let's get rid of those possible thingies!  -- TB

0
rossberg (600)
2/20/2004 4:49:55 PM
Andreas Rossberg <rossberg@ps.uni-sb.de> writes:
> This example is beside the point, because it is overly trivial. For
> functional programming you need tail-recursion for functions that
> actually return something. How can a goto help in this situation?

Magic ;)

    sub foo {
        return $_[0] if $_[0] == $y;
        $_[0]++; goto &foo
    }
    for (4..31) {
        $y = 2**$_;
        print "$y: ", foo($x=1);
    }

/s
0
look-it-up (12)
2/20/2004 5:08:28 PM
"Anton van Straaten" <anton@appsolutions.com> writes:
> Anyway, I'll eagerly await the "bugfix" which allows one to safely
> use local variables in procedures which perform tail calls.

Check in the next "version" of "perl" to see if this "issue" has been
"addressed".

> After that, a version which can safely perform tail calls without
> explicit stack manipulation would be nice...

It's been on the to-do list for quite some time.  As they say in the
Perl world, "patches welcome" ;).

/s
0
look-it-up (12)
2/20/2004 5:11:54 PM
Sean O'Rourke wrote:
> Andreas Rossberg <rossberg@ps.uni-sb.de> writes:
> > This example is beside the point, because it is overly trivial. For
> > functional programming you need tail-recursion for functions that
> > actually return something. How can a goto help in this situation?
>
> Magic ;)
>
>     sub foo {
>         return $_[0] if $_[0] == $y;
>         $_[0]++; goto &foo
>     }
>     for (4..31) {
>         $y = 2**$_;
>         print "$y: ", foo($x=1);
>     }

This sort of thing only drives home my original point.  If you show a Perl
programmer how to program in a functional style using techniques like the
above, the typical reaction will be "why on earth would I ever want to
program like that?"

You can respond to this by saying that there are languages in which this
style is natural and elegant, and is even used to implement basic constructs
like loops.  At that point, said Perl programmer is looking at you with a
funny expression, shaking his head, and backing away slowly.  That
programmer has lost the opportunity to learn something important.  That's
why it's misleading, and a disservice to Perl programmers, to claim that
Perl supports functional programming.

Anton


0
anton58 (1240)
2/20/2004 5:36:47 PM
Sean O'Rourke wrote:
> "Anton van Straaten" <anton@appsolutions.com> writes:
> > Anyway, I'll eagerly await the "bugfix" which allows one to safely
> > use local variables in procedures which perform tail calls.
>
> Check in the next "version" of "perl" to see if this "issue" has been
> "addressed".
>
> > After that, a version which can safely perform tail calls without
> > explicit stack manipulation would be nice...
>
> It's been on the to-do list for quite some time.  As they say in the
> Perl world, "patches welcome" ;).

I have a patch that fixes this and a number of other problems, too.  It's
called "Scheme" (substitute Lisp, ML, Haskell etc. to taste.)

Anton


0
anton58 (1240)
2/20/2004 5:39:14 PM
"Anton van Straaten" <anton@appsolutions.com> writes:
> Sean O'Rourke wrote:
>> As they say in the Perl world, "patches welcome" ;).
>
> I have a patch that fixes this and a number of other problems, too.
> It's called "Scheme" (substitute Lisp, ML, Haskell etc. to taste.)

Well, let me know when you've ported CPAN to Scheme, and I'll be
there.  (Yes, I know about Slib.  No, it's not the same.)  I'll let
you know when I've added cons cells and tail call elimination to Perl.

/s
0
look-it-up (12)
2/20/2004 5:46:27 PM
Sean O'Rourke wrote:
> "Anton van Straaten" <anton@appsolutions.com> writes:

>>I have a patch that fixes this and a number of other problems, too.
>>It's called "Scheme" (substitute Lisp, ML, Haskell etc. to taste.)

> Well, let me know when you've ported CPAN to Scheme, and I'll be
> there.  

The Schemers are embracing Python first. Then ...

     <http://155.33.117.141:20080/servlets/spy-web/python-web.scm?page=home>

-- 
Jens Axel Søgaard




0
usenet153 (246)
2/20/2004 6:58:21 PM
> Jerzy Karczmarczuk <karczma@info.unicaen.fr> writes:
>> Either you admit that there is ALWAYS a way of making tail recursion or
>> other Glorious Constructs in *any* language, so none are *really* broken,
>> Or you say that all assemblers are broken.

Joe Marshall <jrm@ccs.neu.edu> wrote:
> Actually, I think assembly code is second to Scheme in this regard.
> In general (like x86 or PDP-11 sort of assembly code), assembly code
> has poor support for closures, but full tail-recursion is trivial.

Indeed, Steele's paper on tail calls ("Procedure Calls Considered
Harmful") focused on how easy it is to implement procedure calls as tail
calls in assembly. (In practice, I've found that it's harder than that
paper makes it seem, mostly because of procedure arguments.)
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
2/20/2004 9:33:46 PM
Jens Axel Søgaard <usenet@jasoegaard.dk> writes:
> The Schemers are embracing Python first. Then ...
>
>      <http://155.33.117.141:20080/servlets/spy-web/python-web.scm?page=home>

I saw this awhile ago, and thought it was cool until I saw mention of
the 2-3 order of magnitude slowdown vs. CPython.  I don't know if this
has improved, but hopefully this project will stay alive long enough
to produce a competitive (i.e. within 1 order of magnitude)
implementation, rather than having everyone just take their toys and
go home again.  Have any of the regular CPython people bought into PLT
Spy?

/s
0
look-it-up (12)
2/20/2004 9:47:14 PM
Sean O'Rourke wrote:
> Jens Axel Søgaard <usenet@jasoegaard.dk> writes:

>>The Schemers are embracing Python first. Then ...
>>     <http://155.33.117.141:20080/servlets/spy-web/python-web.scm?page=home>

> I saw this awhile ago, and thought it was cool until I saw mention of
> the 2-3 order of magnitude slowdown vs. CPython.  I don't know if this
> has improved, but hopefully this project will stay alive long enough
> to produce a competitive (i.e. within 1 order of magnitude)
> implementation, rather than having everyone just take their toys and
> go home again.  Have any of the regular CPython people bought into PLT Spy?

Any language benefits from having several implementations. You can use
Scheme for almost anything, due to the huge number of implementations.
If one implementor comes up with an idea worth copying, the other
implementors follow.

It seems as though you judge the implementation on speed alone?
That would be a mistake.

In the Scheme world it is not uncommon to use one Scheme implementation
for development and another for deployment. An example: The Stalin compiler
makes *very* fast code, due to its many optimizations, but since it is a
whole-program compiler, its compile times are not fit for development.


A good language implementation provides the programmer with good tools.

In the case of DrScheme you get, among other things, a GUI editor, control
flow analysis, a profiler, a tool for test suites and syntactical analysis.

The translation from Python to Scheme allows the Python programmer
to use the *same* tools DrScheme provides for the Scheme programmer.
The implementors of DrScheme have namely implemented the tools in a
language independent way.


As an example consider the arrows in figure 2 in

<http://155.33.117.141:20080/spy-web/scheme-workshop-2003-web/scheme-workshop-2003.html>

They are drawn on the basis of the semantics of the underlying language -
it's not just a simple text analysis.


Note that from a Scheme programmer's point of view, the scoop is that we can
use Python libraries from Scheme.



Oh - apropos the Perl discussion, did you notice that this implementation
supports tail call optimization?

(Poor Guido - now there is more than one way to make loops)

-- 
Jens Axel Søgaard
0
usenet153 (246)
2/20/2004 10:21:35 PM
Jens Axel Søgaard  wrote:
>Sean O'Rourke wrote:
>>I saw this awhile ago, and thought it was cool until I saw mention of
>>the 2-3 order of magnitude slowdown vs. CPython.

One of the main reasons for that slowdown is the huge number of
conversions between the scheme representation of python values and the
corresponding scheme values, and back.  E.g. in code like 1 + 2 the
python values for 1, 2, and + are represented after translation as
scheme data structures.  Doing the addition means converting these
scheme data structures to the scheme integers 1 and 2 and the closure
for +, then doing the actual addition to get 3, then repacking the
scheme value 3 back into a scheme data structure representing the
python value 3.  Well, in reality it's still a bit more complicated
since python code like 1 + 2 is really something like 1.__add__(2), so
there's also a method lookup that has to be done somewhere along the
way, but you get the idea...

Needless to say a small amount of simple analysis would probably help
a lot in getting rid of many of those conversions and improving the
performance, but currently we are concentrating on getting the thing
right (in particular the interfacing with the part of the python
runtime that's written in C) rather than fast.

>>I don't know if this
>>has improved, but hopefully this project will stay alive long enough
>>to produce a competitive (i.e. within 1 order of magnitude)
>>implementation, rather than having everyone just take their toys and
>>go home again.

Well, we're always looking for volunteers to help :-)

>>Have any of the regular CPython people bought into PLT Spy?

Not yet, but we are going to Pycon at the end of next month to recruit
people!  http://www.pycon.org/dc2004/talks/index_html#plt-scheme

>The translation from Python to Scheme allows the Python programmer
>to use the *same* tools DrScheme provides for the Scheme programmer.
>The implementors of DrScheme have namely implemented the tools in a
>language independent way.
>
>As an example consider the arrows in figure 2 in
>
><http://155.33.117.141:20080/spy-web/scheme-workshop-2003-web/scheme-workshop-2003.html>
>
>They are drawn on the basis of the semantics of the underlying language -
>it's not just a simple text analysis.

Yes, and this is a very important point.  DrScheme's Syntax Checker
has no idea that it's dealing with python code instead of scheme code.
We created the python to scheme translator, plugged it into DrScheme,
and tools like the Syntax Checker or the test coverage tool just
worked without modification and gave information back to the user in
terms of the *python* code (other tools like MrFlow or the Stepper
need more work though).

>Note that from a Scheme programmers point of view, the scoop is that we can
>use Python libraries from Scheme.

Yes, that's the other important point.

Philippe

0
meunier (7)
2/21/2004 12:01:06 AM
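The round-trip Philippe describes is easy to reproduce in miniature. In this hedged Python sketch, `Boxed` is a made-up stand-in for a boxed value representation (not Spy's actual code); every addition unboxes both operands, computes natively, and re-boxes the result:

```python
import timeit

class Boxed:
    """Hypothetical stand-in for a boxed value inside another runtime."""
    def __init__(self, value):
        self.value = value
    def add(self, other):
        # unbox both operands, add natively, then re-box the result
        return Boxed(self.value + other.value)

# Compare the boxed round-trip against native addition.
boxed_time = timeit.timeit(lambda: Boxed(1).add(Boxed(2)), number=100_000)
native_time = timeit.timeit(lambda: 1 + 2, number=100_000)
print(f"boxing overhead: roughly {boxed_time / native_time:.0f}x")
```

Skipping that round-trip for primitives is exactly the kind of "simple analysis" Philippe mentions as a likely performance win.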
On Fri, 20 Feb 2004 13:47:14 -0800, Sean O'Rourke wrote:
> Jens Axel Søgaard <usenet@jasoegaard.dk> writes:
>> The Schemers are embracing Python first. Then ...
>>
>>      <http://155.33.117.141:20080/servlets/spy-web/python-web.scm?page=home>
> 
> I saw this awhile ago, and thought it was cool until I saw mention of
> the 2-3 order of magnitude slowdown vs. CPython.  I don't know if this
> has improved, but hopefully this project will stay alive long enough
> to produce a competitive (i.e. within 1 order of magnitude)
> implementation, rather than having everyone just take their toys and
> go home again.  Have any of the regular CPython people bought into PLT
> Spy?
> 
> /s


Hi, I am Spy's currently active developer.  As Philippe said, I'm in the
process of integrating half of the CPython core C modules (String,
Integer, List, etc).  It is my belief that we will gain a good amount of
speed by doing that, since they've been tweaking their C code for a while
now.

After that, I will go back to optimizing code generation.  Tail call
optimization was easy and I've removed a lot of the other unnecessary
escape continuations as well.  Locking down built-in methods and
collapsing constant expressions should give us another boost.

Then we can start using Python for FP...

- Daniel
0
dsilva (35)
2/21/2004 2:07:49 AM
Jens Axel Søgaard <usenet@jasoegaard.dk> writes:
> It seems as though you judge the implementation on speed alone?
> That would be a mistake.

I see your point.  When all the implementations are of the exact same
standard, efficiency is pretty much all that matters to me.  This
"efficiency" can be in speed, space, or edit-compile-debug time,
meaning that different implementations may be most efficient in
different circumstances.

What I didn't consider is that "Schemes" implement a number of
subtly different languages (construed as semi-necessary extensions
beyond R5RS).  This diversity is certainly a boon for language
implementors, but not so great for language users trying out
multiple implementations.  For example, I've used both Bigloo and PLT,
and been bitten dealing with their different module systems.

> A good language implementation provides the programmer with good
> tools.
>
> In the case of DrScheme you get, among other things, a GUI editor,
> control flow analysis, a profiler, a tool for test suites and
> syntactical analysis.

A better language implementation allows other programmers to write
their own tools, and to plug into existing ones.  One reason I prefer
Bigloo to PLT is that it gives me an IDE within Emacs, so I can
immediately take advantage of Emacs's many charms.  YMMV, of
course.

> As an example consider the arrows in figure 2 in
>
> <http://155.33.117.141:20080/spy-web/scheme-workshop-2003-web/scheme-workshop-2003.html>
>
> They are drawn on the basis of the semantics of the underlying language -
> it's not just a simple text analysis.

I think this kind of semantic analysis would be really cool if it can
be applied to languages more complicated than Scheme (e.g. Python).  I
haven't used the PLT tools in quite awhile, but I enjoy having a
language-aware IDE such as those for Lisp and Java.

> Note that from a Scheme programmers point of view, the scoop is that
> we can use Python libraries from Scheme.

And you guys will rule the world if you can pull this off.  Well, at
least that lesser corner of the world that isn't ruled by Perl ;).

> Oh - apropos the Perl discussion, did you notice that this
> implementation supports tail call optimization?
>
> (Poor Guido - now there is more than one way to make loops)

You realize, of course, that you've doomed PLT Spy as an "approved"
Python implementation, at least until you come up with an "unimport
MTOWTDI" statement.  (Aside: it would be interesting (but hard) for
the compiler to detect common "synonymous" code constructs and warn
about the inconsistency).

/s
0
look-it-up (12)
2/21/2004 2:54:32 AM
meunier@ccs.neu.edu (Philippe) writes:
> One of the main reasons for that slowdown is the huge number of
> conversions between the scheme representation of python values and
> the corresponding scheme values, and back. [...]

Gack!  How much would you have had to change the semantics of Python
to just use native Scheme data types?  While the operators
(e.g. division) have different semantics, the basic datatypes like
bignums, complexes, and strings seem very similar.  It seems like it
would avoid a world of hurt if you could avoid converting
primitives.  I'm sure you looked at doing this, but I'm just curious
where the similarity breaks down, and how badly.

>>>Have any of the regular CPython people bought into PLT Spy?
>
> No yet but we are going to Pycon at the end of next month to recruit
> people!  http://www.pycon.org/dc2004/talks/index_html#plt-scheme

So your pitch will be centered on the semantic information in the IDE?
Tragically, I'm more of a Perl aficionado, and I doubt you want to try
porting that to PLT (though there's work at Fotango porting it to
Parrot).  Anyways...

/s
0
look-it-up (12)
2/21/2004 3:33:34 AM
Sean O'Rourke <look-it-up@cs.ucsd.edu> writes:

>                       (Aside: it would be interesting (but hard) for
> the compiler to detect common "synonymous" code constructs and warn
> about the inconsistency).

Do you mean an analysis that warns a programmer that they used a
tail-calling pattern in one place and a FOR loop in another?

I wonder why this would even be interesting, given that each has its
place.  It would seem that the interesting analysis is instead one
that responds with

  "Hey, you used this complicated tail-calling pattern, but a simple
  FOR loop would have done just as well (unless you mean to extend
  this in ways that would be more difficult with a loop)" (followed by
  "If you hold on a moment, I'll generate the FOR version for you")

and

  "Hey, you have this convoluted FOR with additional iteration
  variables; did you consider using a simple tail-calling loop
  instead?" (followed by ...)

At any rate, it's very important that the analysis and refactoring
tool prefix all its output with "Hey".

Shriram
0
sk1 (223)
2/21/2004 5:03:41 PM
Shriram Krishnamurthi <sk@cs.brown.edu> writes:
> Sean O'Rourke <look-it-up@cs.ucsd.edu> writes:
>> (Aside: it would be interesting (but hard) for the compiler to
>> detect common "synonymous" code constructs and warn about the
>> inconsistency).
>
> Do you mean an analysis that warns a programmer that they used a
> tail-calling pattern in one place and a FOR loop in another?
>
> I wonder why this would even be interesting, given that each has its
> place.  It would seem that the interesting analysis is instead one
> that responds with [... suggested simplifications ...]

* It's interesting from the Pythonic perspective that always doing
  things one way makes it easier to understand code.  If your code is
  internally inconsistent (e.g. reflecting what you learned between
  writing different parts), this will show you which parts could use
  re-writing.  If you're working with a number of people, this could
  help enforce coding style standards.

* It's the same analysis as suggested simplifications, except with the
  latter, you're seeding the code with some exemplars and/or
  putting in some metric of code clarity (e.g. branches per
  function).

> At any rate, it's very important that the analysis and refactoring
> tool prefix all its output with "Hey".

Hey, I think you're onto something here.

/s
0
look-it-up (12)
2/21/2004 5:43:47 PM
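The consistency check Sean sketches can at least be prototyped. A rough Python illustration using the standard `ast` module, with self-recursion as a crude proxy for a tail-calling pattern (the heuristic is invented for this example, not a proposed design):

```python
import ast

def iteration_styles(source):
    """Report which iteration styles a module uses: 'for' loops and/or
    self-recursive functions (a crude proxy for tail-calling patterns)."""
    tree = ast.parse(source)
    styles = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.For):
            styles.add("for")
        if isinstance(node, ast.FunctionDef):
            # A call to the function's own name inside its body
            # counts as recursion for this toy heuristic.
            for inner in ast.walk(node):
                if (isinstance(inner, ast.Call)
                        and isinstance(inner.func, ast.Name)
                        and inner.func.id == node.name):
                    styles.add("recursion")
    return styles

mixed = (
    "def countdown(n):\n"
    "    return 0 if n == 0 else countdown(n - 1)\n"
    "for i in range(3):\n"
    "    pass\n"
)
print(iteration_styles(mixed))  # contains both 'for' and 'recursion'
```

A real tool would then say "Hey, you mixed styles" and point at the offending definitions.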
On Fri, 20 Feb 2004 19:33:34 -0800, Sean O'Rourke wrote:
> meunier@ccs.neu.edu (Philippe) writes:
>> One of the main reasons for that slowdown is the huge number of
>> conversions between the scheme representation of python values and
>> the corresponding scheme values, and back. [...]
> 
> Gack!  How much would you have had to change the semantics of Python
> to just use native Scheme data types?  While the operators
> (e.g. division) have different semantics, the basic datatypes like
> bignums, complexes, and strings, seem very similar.  It seems like it
> would avoid a world of hurt if you could avoid converting
> primitives.  I'm sure you looked at doing this, but I'm just curious
> where the similarity breaks down, and how badly.
>

We didn't use primitive Scheme types for primitive Python types because we
needed them to behave like all other Python objects, and a big cond at
every runtime operation would be hell.  Luckily, we're making progress
with the C modules.

>>>>Have any of the regular CPython people bought into PLT Spy?
>>
>> No yet but we are going to Pycon at the end of next month to recruit
>> people!  http://www.pycon.org/dc2004/talks/index_html#plt-scheme
> 
> So your pitch will be centered on the semantic information in the IDE?
> Tragically, I'm more of a Perl aficionado, and I doubt you want to try
> porting that to PLT (though there's work at Fontago porting it to
> Parrot).  Anyways...

You should give it a shot.  All you need is the Perl BNF and what it means.

Daniel
0
dsilva (35)
2/21/2004 7:38:39 PM
> On Fri, 20 Feb 2004 19:33:34 -0800, Sean O'Rourke wrote:

>>Gack!  How much would you have had to change the semantics of Python
>>to just use native Scheme data types?  While the operators
>>(e.g. division) have different semantics, the basic datatypes like
>>bignums, complexes, and strings, seem very similar.  It seems like it
>>would avoid a world of hurt if you could avoid converting
>>primitives.  I'm sure you looked at doing this, but I'm just curious
>>where the similarity breaks down, and how badly.

>>Tragically, I'm more of a Perl aficionado, and I doubt you want to try
>>porting that to PLT (though there's work at Fotango porting it to
>>Parrot). 

Suppose someone offered you PerlPlus, a language with the exact syntax of your 
favorite programming language but a language whose programs always behave a 
little bit differently than what you expect. Would you look at this PerlPlus twice?

-- Matthias

0
2/21/2004 9:23:21 PM
"Daniel P. M. Silva" <dsilva@ccs.neu.edu> writes:

> You should give it a shot.  All you need is the Perl BNF and what it means.

  "Perl's grammar can not be reduced to BNF.  The work of parsing Perl
   is distributed between yacc, the lexer, smoke and mirrors." 
          --- Chaim Frenkel as quoted in the Perl FAQ

Does Perl even *have* semantics?

-- 
~jrm
0
2/21/2004 10:35:43 PM
On Sat, 21 Feb 2004 22:35:43 +0000, Joe Marshall wrote:

> "Daniel P. M. Silva" <dsilva@ccs.neu.edu> writes:
> 
>> You should give it a shot.  All you need is the Perl BNF and what it means.
> 
>   "Perl's grammar can not be reduced to BNF.  The work of parsing Perl
>    is distributed between yacc, the lexer, smoke and mirrors." 
>           --- Chaim Frenkel as quoted in the Perl FAQ
> 
> Does Perl even *have* semantics?

Should be straightforward to port the Perl parser using the same DIY
methods, no?

- Daniel
0
dsilva (35)
2/21/2004 11:44:08 PM
"Daniel P. M. Silva" <dsilva@ccs.neu.edu> writes:

> On Sat, 21 Feb 2004 22:35:43 +0000, Joe Marshall wrote:
>
>> "Daniel P. M. Silva" <dsilva@ccs.neu.edu> writes:
>> 
>>> You should give it a shot.  All you need is the Perl BNF and what it means.
>> 
>>   "Perl's grammar can not be reduced to BNF.  The work of parsing Perl
>>    is distributed between yacc, the lexer, smoke and mirrors." 
>>           --- Chaim Frenkel as quoted in the Perl FAQ
>> 
>> Does Perl even *have* semantics?
>
> Should be straightforward to port the Perl parser using the same DIY
> methods, no?

You'd be surprised.  The parse is context-sensitive, which means in
this case that the meaning of the next character will depend on the
runtime state.

To steal some examples from http://perlmonks.thepen.com/44722.html  

The '/' character either means divide or that the following text is a
regular expression.  So this parse:

    BEGIN {
         eval (time % 2 ? 'sub zany ();' : 'sub zany (@);');
       }
   zany / ...

will depend on whether you attempted to do it on an even or odd
second.


       sin  / 25 ; # / ; die "this doesn't die";
       time / 25 ; # / ; die "this dies!";

The first one is computing the sin of the true/false value gotten by
matching " 25 ; # " against $_.  Then it dies.  The second one is
computing the time of day divided by 25, then ignoring the comment.

-- 
~jrm
0
2/22/2004 12:17:36 AM
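Joe's `zany` example can be modelled in a few lines. A toy Python sketch (grossly simplified, and entirely hypothetical as a model of the real lexer) of why classifying '/' needs the prototype table, which Perl lets running code modify:

```python
# Toy model: the token produced for '/' depends on the prototype table,
# and in real Perl that table can change at run time (e.g. in BEGIN blocks).
prototypes = {"zany": "()"}   # hypothetical table, filled in as code runs

def classify_slash(prev_word):
    """Decide what a '/' following prev_word means."""
    if prototypes.get(prev_word) == "()":
        # A zero-argument sub is an operand, so the '/' divides it.
        return "DIVIDE"
    # Otherwise the sub wants arguments, so '/' begins a regex match.
    return "REGEX"

print(classify_slash("zany"))   # DIVIDE
prototypes["zany"] = "(@)"      # the BEGIN-time eval flips the meaning
print(classify_slash("zany"))   # REGEX
```

Since the table can differ between runs (Joe's even/odd-second example), no fixed grammar can settle the question up front.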
On Sun, 22 Feb 2004 00:17:36 +0000, Joe Marshall wrote:
> "Daniel P. M. Silva" <dsilva@ccs.neu.edu> writes:
>> On Sat, 21 Feb 2004 22:35:43 +0000, Joe Marshall wrote:
>>> "Daniel P. M. Silva" <dsilva@ccs.neu.edu> writes:
>>> 
>>>> You should give it a shot.  All you need is the Perl BNF and what it means.
>>> 
>>>   "Perl's grammar can not be reduced to BNF.  The work of parsing Perl
>>>    is distributed between yacc, the lexer, smoke and mirrors." 
>>>           --- Chaim Frenkel as quoted in the Perl FAQ
>>> 
>>> Does Perl even *have* semantics?
>>
>> Should be straightforward to port the Perl parser using the same DIY
>> methods, no?
> 
> You'd be surprised.  The parse is context-sensitive which means in
> this case that the meaning of the next character will depend on the
> runtime state.
> 
> To steal some examples from http://perlmonks.thepen.com/44722.html  
> 
> The '/' character either means divide or that the following text is a
> regular expression.  So this parse:
> 
>     BEGIN {
>          eval (time % 2 ? 'sub zany ();' : 'sub zany (@);');
>        }
>    zany / ...
> 
> will depend on whether you attempted to do it on an even or odd
> second.
> 
> 
>        sin  / 25 ; # / ; die "this doesn't die";
>        time / 25 ; # / ; die "this dies!";
> 
> The first one is computing the sin of the true/false value gotten by
> matching " 25 ; # " against $_.  Then it dies.  The second one is
> computing the time of day divided by 25, then ignoring the comment.

Nasty.  Never mind DrPerl then.

Oh well, too bad.  I doubt the Perl and Scheme communities care about each
other anyway.

- Daniel
0
dsilva (35)
2/22/2004 12:39:54 AM
Joe Marshall wrote:

> "Daniel P. M. Silva" <dsilva@ccs.neu.edu> writes:
> 
> 
>>You should give it a shot.  All you need is the Perl BNF and what it means.
> 
> 
>   "Perl's grammar can not be reduced to BNF.  The work of parsing Perl
>    is distributed between yacc, the lexer, smoke and mirrors." 
>           --- Chaim Frenkel as quoted in the Perl FAQ
> 
> Does Perl even *have* semantics?
> 

Does Scheme have one?

-- Matthias

aka the instructor who used to tell his Rice 311 kids,
"Scheme is as bad as Fortran" and thinks he's right.

0
2/22/2004 1:25:20 AM
Matthias Felleisen <matthias@ccs.neu.edu> writes:
>> [snip me saying stuff]
> 
> Suppose somone offered you PerlPlus, a language with the exact syntax
> of your favorite programming language but a language whose programs
> always behave a little bit differently than what you expect. Would you
> look at this PerlPlus twice?

I think that would be spelled "Perl#".  By "what I'd expect", I assume
you mean "what I'd expect if they were Perl programs."  As long as it
behaved slightly differently in a consistent way, and as long as the
CPAN's modules worked with it, sure.

I'm not a monoglot -- I even enjoy learning new languages, provided I
see them giving me some advantage.  If you were to dirty yourself and
your grad students writing a PerlLT that didn't take away CPAN, and
had tail call elimination and/or macros (or some means to give me
access to the AST), I would leap to use, improve, and evangelize it.
I would even do so if it behaved slightly differently.  On the other
hand, if it were just an 80% throwaway implementation that translated
into PLT scheme and ran 100x slower, I would be more reluctant.  And
unfortunately, I'll bet that there aren't theses and papers to be had
writing a 99.44%, industrial-strength PerlLT...

/s
0
look-it-up (12)
2/22/2004 2:14:13 AM
"Daniel P. M. Silva" <dsilva@ccs.neu.edu> writes:
> Nasty.  Never mind DrPerl then.
>
> Oh well, too bad.  I doubt the Perl and Scheme communities care
> about each other anyway.

False!  See Dan Sugalski's presence on the LL1 list.  I think Perl
people are generally receptive to new ideas, if they can see examples
and applications.  Belief in "TMTOWTDI" implies a willingness to
learn new ways to do things.

However, the two communities _do_ seem to have trouble avoiding
flamefests.  The conversation has trouble walking that knife-edge
between "You haven't read so-and-so's paper on control/reset?  You
heathen!" and "Get back to me when you've written some *real* apps."
If we could avoid the flames and mutual disdain, I think both sides
could learn.  As I read somewhere recently: "For just one day imagine,
whenever someone disagrees with you, that they are smarter and more
knowledgeable than you are, and ask yourself why they have beliefs
apparently opposed to yours."

/s
0
look-it-up (12)
2/22/2004 2:30:35 AM
"Daniel P. M. Silva" <dsilva@ccs.neu.edu> writes:
> You should give it a shot.  All you need is the Perl BNF and what it
> means.

As other posters have said, there is no BNF for Perl.  I'm not a Perl
5 hacker, so I don't know the full extent of the pain.  In addition to
the business another poster cited about function prototypes changing
the parse (IIRC C/C++ typedefs have a similar effect), I recently
found that "9-y/aeiou//" (i.e. 9 minus the number of translations of
characters "aeiou") parses correctly in 5.8.x, but requires a space
after the "-" in 5.6.x.  For an even more extreme example, check out
the mailing list posts about distinguishing between the "defined-or"
operator "//" and the empty regexp "//".

On the one hand, I find this somewhat distasteful -- my CS-major's
stomach churns -- so you probably find it absolutely appalling.  On
the other hand, I am continually surprised at the Perl community's
ability to gradually massage these sorts of perversions into a state
where they do "what you mean", where "you" includes nearly the entire
vocal Perl community.

/s
0
look-it-up (12)
2/22/2004 2:39:34 AM
> Joe Marshall wrote:
>>
>> Does Perl even *have* semantics?
>>

Matthias Felleisen <matthias@ccs.neu.edu> writes:

> Does Scheme have one?
>
> -- Matthias
>
> aka the instructor who used to tell his Rice 311 kids,
> "Scheme is as bad as Fortran" and thinks he's right.

And how many courses did this instructor teach in Fortran?

-- 
~jrm
0
2/22/2004 2:50:28 AM
>>>>> "Sean" == Sean O'Rourke <look-it-up@cs.ucsd.edu> writes:

> As other posters have said, there is no BNF for Perl.  I'm not a Perl
> 5 hacker, so I don't know the full extent of the pain. 

Bad enough that Perl6 will be a complete rewrite, with a grammar. A
grammar that it will be possible to change at runtime, mind you, but
the Lisp people should be familiar with that sort of thing, I think.

Even today it's possible to write parts of your Perl programs in other
languages, by way of the Inline:: set of modules. The only functional
one I can see offhand is Scheme (GNU Guile, specifically), but it
should be fairly easy to add more. An Erlang one would be pretty
interesting, to get at the distributedness.

> For an even more extreme example, check out the mailing list posts
> about distinguishing between the "defined-or" operator "//" and the
> empty regexp "//".

As far as I've understood things on the P6 mailing lists, Larry and
Damian are trying to remove (or at least minimize) this sort of thing
in Perl6.

> On the one hand, I find this somewhat distasteful -- my CS-major's
> stomach churns -- so you probably find it absolutely appalling.

This, however, is unlikely to change. Perl will, to borrow an
expression from Sean Burke, continue to be the Borg of programming
languages: it will actively hunt down other languages and steal their
nifty features (prime targets for Perl6 at the moment seem to be Ruby
and Smalltalk). While this makes for a language where you can almost
always choose a way to express a problem that makes it easy to solve,
it *really* does not make for the sort of cleanliness and
orthogonality that CS people tend to like.
-- 
		    Calle Dybedahl <calle@lysator.liu.se>
   "Being stomped on by some sexy commie chick and her perfectly vegan Doc
	       Martens is not as hot as it sounds." -- babycola
0
calle5 (7)
2/22/2004 10:10:33 AM
On Sun, 22 Feb 2004 00:17:36 +0000, Joe Marshall wrote:

> The '/' character either means divide or that the following text is a
> regular expression.

But this is at least possible to specify: it depends only on how the
function before '/' was prototyped (if at all). There are worse cases,
my favorite is this:

/* S_intuit_more
 * Returns TRUE if there's more to the expression (e.g., a subscript),
 * FALSE otherwise.
 *
 * It deals with "$foo[3]" and /$foo[3]/ and /$foo[0123456789$]+/
 *
 * ->[ and ->{ return TRUE
 * { and [ outside a pattern are always subscripts, so return TRUE
 * if we're outside a pattern and it's not { or [, then return FALSE
 * if we're in a pattern and the first char is a {
 *   {4,5} (any digits around the comma) returns FALSE
 * if we're in a pattern and the first char is a [
 *   [] returns FALSE
 *   [SOMETHING] has a funky algorithm to decide whether it's a
 *      character class or not.  It has to deal with things like
 *      /$foo[-3]/ and /$foo[$bar]/ as well as /$foo[$\d]+/
 * anything else returns TRUE
 */

/* This is the one truly awful dwimmer necessary to conflate C and sed. */

STATIC int
S_intuit_more(pTHX_ register char *s)
{
    if (PL_lex_brackets)
	return TRUE;
    if (*s == '-' && s[1] == '>' && (s[2] == '[' || s[2] == '{'))
	return TRUE;
    if (*s != '{' && *s != '[')
	return FALSE;
    if (!PL_lex_inpat)
	return TRUE;

    /* In a pattern, so maybe we have {n,m}. */
    if (*s == '{') {
	s++;
	if (!isDIGIT(*s))
	    return TRUE;
	while (isDIGIT(*s))
	    s++;
	if (*s == ',')
	    s++;
	while (isDIGIT(*s))
	    s++;
	if (*s == '}')
	    return FALSE;
	return TRUE;
	
    }

    /* On the other hand, maybe we have a character class */

    s++;
    if (*s == ']' || *s == '^')
	return FALSE;
    else {
        /* this is terrifying, and it works */
	int weight = 2;		/* let's weigh the evidence */
	char seen[256];
	unsigned char un_char = 255, last_un_char;
	char *send = strchr(s,']');
	char tmpbuf[sizeof PL_tokenbuf * 4];

	if (!send)		/* has to be an expression */
	    return TRUE;

	Zero(seen,256,char);
	if (*s == '$')
	    weight -= 3;
	else if (isDIGIT(*s)) {
	    if (s[1] != ']') {
		if (isDIGIT(s[1]) && s[2] == ']')
		    weight -= 10;
	    }
	    else
		weight -= 100;
	}
	for (; s < send; s++) {
	    last_un_char = un_char;
	    un_char = (unsigned char)*s;
	    switch (*s) {
	    case '@':
	    case '&':
	    case '$':
		weight -= seen[un_char] * 10;
		if (isALNUM_lazy_if(s+1,UTF)) {
		    scan_ident(s, send, tmpbuf, sizeof tmpbuf, FALSE);
		    if ((int)strlen(tmpbuf) > 1 && gv_fetchpv(tmpbuf,FALSE, SVt_PV))
			weight -= 100;
		    else
			weight -= 10;
		}
		else if (*s == '$' && s[1] &&
		  strchr("[#!%*<>()-=",s[1])) {
		    if (/*{*/ strchr("])} =",s[2]))
			weight -= 10;
		    else
			weight -= 1;
		}
		break;
	    case '\\':
		un_char = 254;
		if (s[1]) {
		    if (strchr("wds]",s[1]))
			weight += 100;
		    else if (seen['\''] || seen['"'])
			weight += 1;
		    else if (strchr("rnftbxcav",s[1]))
			weight += 40;
		    else if (isDIGIT(s[1])) {
			weight += 40;
			while (s[1] && isDIGIT(s[1]))
			    s++;
		    }
		}
		else
		    weight += 100;
		break;
	    case '-':
		if (s[1] == '\\')
		    weight += 50;
		if (strchr("aA01! ",last_un_char))
		    weight += 30;
		if (strchr("zZ79~",s[1]))
		    weight += 30;
		if (last_un_char == 255 && (isDIGIT(s[1]) || s[1] == '$'))
		    weight -= 5;	/* cope with negative subscript */
		break;
	    default:
		if (!isALNUM(last_un_char) && !strchr("$@&",last_un_char) &&
			isALPHA(*s) && s[1] && isALPHA(s[1])) {
		    char *d = tmpbuf;
		    while (isALPHA(*s))
			*d++ = *s++;
		    *d = '\0';
		    if (keyword(tmpbuf, d - tmpbuf))
			weight -= 150;
		}
		if (un_char == last_un_char + 1)
		    weight += 5;
		weight -= seen[un_char];
		break;
	    }
	    seen[un_char]++;
	}
	if (weight >= 0)	/* probably a character class */
	    return FALSE;
    }

    return TRUE;
}

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/

qrczak (1266)
2/22/2004 12:17:04 PM
On Fri, 13 Feb 2004 04:12:47 -0800, Michele Simionato wrote:
> Bruno Desthuilliers <bdesth.quelquechose@free.quelquepart.fr> wrote:
>> Python is another beast. It offers some (pretty limited but...) support 
>> for fp constructs, but with *really* bad performances. As Python is 
>> already a bit slow, slowing it down some more is not a good idea.
> 
> My point was not about performances: even if the performance of functional
> programming in Python was excellent, still I would not use functional
> techniques too much, since there are alternative (i.e. OOP programming)
> which are better supported by the language and have advantages [...]

I don't understand this.  Because PLT Scheme includes functional
libraries and allows for FP without worries about special calling
conventions, does it make the language suitable only for FP?

Imagine it allowed you to write:

;; A and B are sequences...
  (for i := 0 step 1 until m do
     (when (= x (aref A i))
       (set-aref! B i (+ 1 (aref B i)))
       (break)))

Would the language now push for an imperative style?

Imagine PLT Scheme had a class system included with the regular
distribution.  It would then be an OOP language as well. Which style would
be appropriate to use with the language at that point?

Then imagine it included a reactive programming collection called FrTime...

Daniel
dsilva (35)
2/22/2004 2:35:38 PM
Joachim Durchholz <joachim.durchholz@web.de> writes:

> Jonadab the Unsightly One wrote:
> 
> > Languages without garbage collection must die.
> 
> *sigh* kill RPG first. Then Cobol. 

RPG and Cobol are still alive?

I don't mind if a few businesses still have ten-year-old "legacy" code
that they're running and continuing to maintain.  That doesn't bother
me, and I don't consider that to be an indication that the language is
"alive".  I just want bad languages to die with respect to being used
for general-purpose programming and chosen for new projects.  C most
of all must die, and C++ second.

> Then Fortran.  

Fortran is only alive for heavy-duty number crunching.  Nobody is
still choosing it for squat else.  If C/C++ fell into that kind of
obscurity (being used for, say, boot loaders and kernels and such), I
would be content with that.  I'm just tired of seeing it chosen for
application-level programming where its strengths are mostly irrelevant
and its weaknesses a major handicap.

> Good luck - you'll need it.  After that, we can even begin to think
> about C++. (Or Java.)

Java doesn't bother me so badly, though I personally (in the small
amount of time I've spent with it) have not acquired a liking for it.
But it doesn't have the kind of ubiquity that gets a language chosen
when it is an extremely poor match for the problem domain just because
it's the language everyone has been using for everything since anyone
can remember.  Java also doesn't get imitated by other languages in
the same ways that C does.  In the last ten years C has been a
significant negative influence in that regard.

Lisp kinda got a raw deal.  It was way ahead of its time, doing
garbage collection in an era when human resources were cheap and
computer time was pricey.  It got such a bad reputation for
performance that it sorta became a second-class citizen; these days
the relative costs have utterly reversed, and compilers and
interpreters are much better, and there's virtually *no* reason to
choose C over lisp, except for ubiquity, but it continues to be chosen
for project after project.  This must stop.

I don't think at this point that a lisp-based language can displace C,
however, for historical reasons and because of tradition and
perception.  Multiparadigmatic languages (Perl, Python, and so forth)
have a better chance if they can shake the "scripting language" myth.
(Calling Perl a scripting language almost makes sense, historically,
though it betrays ignorance of the last five years.  Calling Python a
scripting language is just idiotic, but it's a common perception,
probably because Python gets compared to Perl a lot, due to their both
being multiparadigmatic (though the similarity doesn't go far beyond
that).  It doesn't help anything that PHP often gets lumped in with
Python and Perl.)  I would have thought that the thing needed to shake
that myth would be optimizing compilers, but Java seems to have evaded
it despite running on a vm, so I have some hope for Parrot.  Maybe.
(Or maybe Java escaped being known as a scripting language due to the
existence of a completely-unrelated Javascript language, and maybe
Python and Perl do need optimizing compilers...)

Perl will *finally* get real gc in version 6, and a decent object
model, and better functional stuff too, among other things.  (Rumor is
we're getting continuations, which has inspired me to learn Scheme so
that I can understand continuations conceptually ahead of time.)

Hopefully this progress will influence the design of other languages
as well...  but progress is frustratingly slow, sometimes.  I can see
(part of) the future, but we can't get there right now because too
many people don't see it.

-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
jonadab (30)
2/23/2004 3:13:26 PM
"Anton van Straaten" <anton@appsolutions.com> writes:

> "Jonadab the Unsightly One" wrote:
> > This may be partly because my primary language is Perl, which
> > is an expressly multiparadigmatic language.  Functional, contextual,
> > and procedural programming are all equally straightforward in Perl,
> 
> The idea that functional programming is straightforward in Perl only lasts
> until you try to write, say, a CPS parser in it.  

Some functional programming is easy in Perl.  Closures are like eating
cake.  Continuations are another matter; Perl5 doesn't really have
them.  I've heard that Perl6 is probably getting them, though.

> The point being that Perl doesn't support tail recursion, 

It does support it at the language level; it just doesn't make the
guarantee that Scheme makes that the interpreter/compiler/whatever
will do a certain optimization.

> and so at the very least it'll use a huge amount of space when it
> doesn't need to; at worst, you'll run out of space if your machine
> survives the humongous memory allocation that it does with recursive
> programs.
  
Yes, this is true, at least somewhat.  Tail recursion really ought to
be optimized; that would be a quite worthwhile improvement.  I suspect
it will be made eventually.
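In the meantime, the standard workaround in a language that doesn't
eliminate tail calls is to rewrite the tail call as a loop by hand.  A
minimal sketch in Python, which (like Perl 5) doesn't do this
optimization either; the names here are mine:

```python
def sum_to_rec(n, acc=0):
    # tail-recursive form: each call adds a stack frame,
    # so a large enough n overflows the stack
    if n == 0:
        return acc
    return sum_to_rec(n - 1, acc + n)

def sum_to_loop(n, acc=0):
    # the same logic with the tail call turned into iteration:
    # constant stack space, works for any n
    while n != 0:
        n, acc = n - 1, acc + n
    return acc
```

The transformation is mechanical: the tail call's arguments become the
new values of the loop variables, which is exactly what a
tail-call-optimizing compiler would do for you.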
  
> Although it's true that experienced functional programmers can use
> languages like Perl & Python in various functional ways, often by
> relying on workarounds, there's a problem with claiming that these
> languages support functional programming: it encourages
> Perl/Python/Ruby programmers to believe that they understand FP and
> have access to it in their language, so they miss out on the
> opportunity to learn more about one of the more important and
> enlightening programming paradigms.
  
Actually, it is Perl's functional stuff that has led me to want to
learn Scheme.  I previously knew elisp, but never messed with lambda
expressions or closures in that language.  Just one data point, but I
think it's significant that I never understood closures until I
learned to use them in Perl.  Now Perl6 is getting continuations, or
so I've heard, and to understand them ahead of time I'm learning
Scheme.
  
> It's particularly unfortunate that the generally poor support for FP
> in these languages often leads people to conclude that the
> functional paradigm is flawed

I've not seen this to be the case.  (I'm not saying it doesn't happen,
just that it's not what I've seen.)  The elegance of things like list
transformations in Perl (especially map, and extra-especially
constructions like the Schwartzian Transform[1] and the Orcish
Manoeuver) leads people who formerly, due to indoctrination in C and
C++ and their ilk, considered FP impractical, to learn about it.
Closures also are extremely useful in Perl.  So while
it's true that not all elements of functional programming are
supported in Perl5, the ones that are supported prove to be undeniably
useful and practical, in a world where "useful" and "practical" are
not terms a lot of people are used to applying to functional
programming.

The Schwartzian Transform is a particularly potent argument against
the notion that FP is impractical, because the hard cold data show it
to run circles around the more "traditional" (i.e., procedural) ways
of doing the same thing, to the extent that sometimes it makes the
difference between usable and unusable.

Okay, so experienced functional programmers (the sort of people who
are accustomed to implementing custom problem-domain-specific
languages using CPS just for a particular problem) probably don't
consider the Schwartzian Transform to be "real" functional
programming, but it's a whole lot more functionally-oriented than
traditional procedural or OO programming.

(And yes, I know that FP has just as much tradition as the other
paradigms, but it's been (unfortunately) shoved into niches for so
long that the other paradigms are often considered more "traditional",
probably mostly due to the influence of C and C++.  It may also be
noted that the "OO" in C++ is a long way from being real OO and has
unfortunately seriously warped a lot of people's perception of OO, but
talking much about that will take us off-topic.)

[1] An example, just in case anyone's not familiar with it:
dosomething for map { $$_[0] } sort { $$a[1] <=> $$b[1]
   } map { [$_, some_expensive_function($_)] } somelist;

If my Scheme were a lot better, I'd translate this, but basically the
first map (the one on the second line) performs some calculation or
database lookup or whatever once for each element of the list so that
the sort doesn't have to do it multiple times, and the second map (on
the first line) returns the elements of the list to their original
state, keeping the sorted order.  The same thing could be done more in
a more procedural fashion, but it would be a lot more verbose and
inelegant and involve superfluous list-copying:

{
   my @transformed;
   for (somelist) {
     push @transformed, [$_, some_expensive_function($_)];
   }
   my @sorted = sort { $$a[1] <=> $$b[1] } @transformed;
   my @restored;
   for (@sorted) {
     push @restored, $$_[0];
   }
   for (@restored) {
     dosomething;
   }
}
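For what it's worth, the same decorate-sort-undecorate idea can be
sketched in Python as well (the expensive function here is a
hypothetical stand-in):

```python
def some_expensive_function(x):
    # hypothetical stand-in for a costly computation or database lookup
    return sum(ord(c) for c in x)

somelist = ["pear", "fig", "apple"]

# decorate: pair each element with its sort key, computed exactly once
decorated = [(x, some_expensive_function(x)) for x in somelist]
# sort on the precomputed key
decorated.sort(key=lambda pair: pair[1])
# undecorate: recover the original elements, now in sorted order
result = [x for x, _ in decorated]
```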

-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
jonadab (30)
2/23/2004 4:44:49 PM
Joe Marshall <prunesquallor@comcast.net> writes:
>        sin  / 25 ; # / ; die "this doesn't die";
>        time / 25 ; # / ; die "this dies!";
> 
> The first one is computing the sin of the true/false value gotten by
> matching " 25 ; # " against $_.  Then it dies.  The second one is
> computing the time of day divided by 25, then ignoring the comment.

Oh lord jesus have mercy on their souls. They are going down down to hell.

-- 
Cheers,                                        The Rhythm is around me,
                                               The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
rAYblaaK@STRIPCAPStelus.net                    The Rhythm has my soul.
rAYblaaK (362)
2/23/2004 5:36:26 PM
"Daniel P. M. Silva" <dsilva@ccs.neu.edu> wrote in message news:<pan.2004.02.22.14.35.38.87147@ccs.neu.edu>...
> On Fri, 13 Feb 2004 04:12:47 -0800, Michele Simionato wrote:
> > My point was not about performances: even if the performance of functional
> > programming in Python was excellent, still I would not use functional
> > techniques too much, since there are alternative (i.e. OOP programming)
> > which are better supported by the language and have advantages [...]
> 
> I don't understand this.  Because PLT Scheme includes functional
> libraries and allows for FP without worries about special calling
> conventions, does it make the language suitable only for FP?
> 
> Imagine it allowed you to write:
> 
> ;; A and B are sequences...
>   (for i := 0 step 1 until m do
>      (when (= x (aref A i))
>        (set-aref! B i (+ 1 (aref B i)))
>        (break)))
> 
> Would the language now push for an imperative style?
> 
> Imagine PLT Scheme had a class system included with the regular
> distribution.  It would then be an OOP language as well. Which style would
> be appropriate to use with the language at that point?
> 
> Then imagine it included a reactive programming collection called FrTime...
> 
> Daniel

Suppose you have an ideal multi-paradigm language where every programming
paradigm is supported equally well. Then you would choose the paradigm
according to the problem. End of story, fine.
However, in real life languages do not support all paradigms equally well,
and you have to take this fact into account.
For instance, PLT Scheme's object system is not that good (IMHO of course);
suppose also that for some reason I cannot/don't want to install Swindle,
which is quite good instead. Then, I would probably choose a non-OO
programming paradigm if possible, whereas my choice would probably be 
different had I the option of using Swindle.

The choice of paradigm is always a matter of costs vs. benefits, and
the language's facilities enter into the game, don't they?

        Michele Simionato
2/24/2004 7:14:34 AM
In article <65dx924e.fsf@jonadab.homeip.net>, Jonadab the Unsightly One wrote:
[...]
>The Schwartzian Transform is a particularly potent argument against
>the notion that FP is impractical, because the hard cold data show it
>to run circles around the more "traditional" (i.e., procedural) ways
>of doing the same thing, to the extent that sometimes it makes the
>difference between usable and unusable.
>
>Okay, so experienced functional programmers (the sort of people who
>are accustomed to implementing custom problem-domain-specific
>languages using CPS just for a particular problem) probably don't
>consider the Schwartzian Transform to be "real" functional
>programming, but it's a whole lot more functionally-oriented than
>traditional procedural or OO programming.
[...]
>[1] An example, just in case anyone's not familiar with it:
>dosomething for map { $$_[0] } sort { $$a[1] <=> $$b[1]
>   } map { [$_, some_expensive_function($_)] } somelist;
>
>If my Scheme were a lot better, I'd translate this, but basically the
>first map (the one on the second line) performs some calculation or
>database lookup or whatever once for each element of the list so that
>the sort doesn't have to do it multiple times, and the second map (on
>the first line) returns the elements of the list to their original
>state, keeping the sorted order.  The same thing could be done more in
>a more procedural fashion, but it would be a lot more verbose and
>inelegant and involve superfluous list-copying:
[...]

FYI, the Schwartzian Transform in Ruby:

    foo.sort_by { |x|
        some_expensive_method(x)
    }

The same thing could be done in Scheme (with suitably defined sort-by
function):

    (sort-by some_expensive_function foo)

Of course you can also define sort_by in Perl.
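(For the record, Python's built-in sort offers the same facility:
`sorted` takes a `key` function that is called once per element.  The
expensive function below is a hypothetical stand-in.)

```python
def some_expensive_function(x):
    # hypothetical stand-in for a costly computation
    return len(x)

foo = ["pear", "fig", "apple"]
# key= is called once per element, then the results are used for sorting
result = sorted(foo, key=some_expensive_function)
```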

-- 
Tim Sutherland <timsuth@ihug.co.nz>
2004 SDKACM President
Software Developers' Klub - the University of Auckland ACM Student Chapter
http://www.sdkacm.com/
0
timsuth (206)
2/24/2004 9:06:41 AM
michele.simionato@poste.it (Michele Simionato) writes:
> 
> Suppose you have an ideal multi-paradigm language where every programming
> paradigm is supported equally well. Then you would chose the paradigm
> according to the problem. End of the story, fine.
> However, in real life languages do not support all paradigms equally well 
> and you have to take in account this fact.


Oz is a multiparadigm language and at least *claims* to support them
equally well.  I suspect though that often people just choose the
subset of paradigms they are most comfortable with.

k

http://www.mozart-oz.org/



glynn (8)
2/24/2004 9:56:42 AM
Kevin Glynn <glynn@info.ucl.ac.be> writes:

> michele.simionato@poste.it (Michele Simionato) writes:
> > 
> > Suppose you have an ideal multi-paradigm language where every programming
> > paradigm is supported equally well. Then you would chose the paradigm
> > according to the problem. End of the story, fine.
> > However, in real life languages do not support all paradigms equally well 
> > and you have to take in account this fact.
> 
> 
> Oz is a multiparadigm language and at least *claims* to support them
> equally well.  I suspect though that often people just choose the
> subset of paradigms they are most comfortable with.

Having worked for quite a while in a large project, I'd like
to stress the importance of consistency in _large_ software 
systems. I'd much rather stick to one paradigm that supports
90% of my problems _well enough_, and most of the really 
critical problems for my domain exceedingly well. I could 
live with a language that does this, and proves completely
unsuitable for some percentage of my problems.

I'd be wary of a language that supports all paradigms and 
doesn't offer a clear preference regarding style. For the same 
reason, I'm wary of programming languages with endless macro
capabilities (e.g. LISP) and languages with a motto like
"there's more than one way to do it" (perl).

IMHO, a programming language for large systems should _encourage_ 
good design by offering intuitive and consistent abstractions,
subtly leading the programmer towards a reasonably clean and
stable design. This may not preclude the notion of supporting
multiple paradigms, but offering too many modeling options to
the programmer and letting him/her choose may well be counter-
productive on a sufficiently large scale.

Arguably, it is sometimes handy to have a gun in your house,
but if you also have kids, you'd better make sure to store
that gun where the kids can't get to it.

Programmers in large systems are, almost by definition, of 
average skill and constantly under pressure to deliver.
Furthermore, 80% of the life cycle cost lies in maintenance,
and the people maintaining the code are almost never the 
same people as those who wrote it. This makes uniformity
more important in the long run than flexibility.

(BTW, this is an area where FPLs should be able to shine.)

/Uffe
-- 
Ulf Wiger, Senior Specialist,
   / / /   Architecture & Design of Carrier-Class Software
  / / /    Strategic Product & System Management
 / / /     Ericsson AB, Connectivity and Control Nodes
ulf.wiger1 (107)
2/24/2004 1:09:47 PM
Kevin Glynn <glynn@info.ucl.ac.be> writes:

>michele.simionato@poste.it (Michele Simionato) writes:
>> 
>> Suppose you have an ideal multi-paradigm language where every programming
>> paradigm is supported equally well. Then you would chose the paradigm
>> according to the problem. End of the story, fine.
>> However, in real life languages do not support all paradigms equally well 
>> and you have to take in account this fact.
>
>Oz is a multiparadigm language and at least *claims* to support them
>equally well.

Oz may be a multiparadigm language, but I don't think it supports
the template meta-programming paradigm very well.

Supporting a few or even a large number of particular paradigms is not
the same as supporting *every* paradigm.

Furthermore, sometimes support for one paradigm will encourage
design decisions that may be less than optimal for another paradigm.
For example, in Oz, higher-order terms have identity, which means that
they do not obey all the same laws that higher-order terms obey in a
pure functional and/or pure logic language.

-- 
Fergus Henderson <fjh@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
fjh (268)
2/24/2004 3:31:58 PM
michele.simionato@poste.it (Michele Simionato) writes:

> I don't think Python is less flexible than Perl, at least in the
> stuff that matters (it is much less flexible in the syntax, of
> course).  

Syntax is only syntax.  (Okay, significant whitespace is arguably an
evil syntax, but still... the semantics are of much more critical
importance than the syntax.)

> However, doing things in different ways (i.e. not Guido's way) is
> discouraged. The Zen of Python explicitely says:
> 
> "There should be one-- and preferably only one --obvious way to do it.
> Although that way may not be obvious at first unless you're Dutch."

Interesting.  When I played around with Python I found that I didn't
like it.  I now wonder whether this had something to do with that.  At
the time, I didn't think so.  I knew the docs stressed OO especially,
but I figured that was just the docs trying to capitalize on the
buzzword value of OO.  Additionally, I wouldn't expect an emphasis on
OO to drive me away from a language; Inform is very OO (though it's
also possible to program procedurally in it, the OO is much stronger),
and I love Inform.  (The quality of the Designer's Manual had
something to do with that...)  No, I think it was other things about
Python that I personally didn't like, things that are not relevant to
this discussion or this newsgroup, such as the typing system.

> So, in principle, there should be only one "obviously Pythonic" style.
> Pythonistas think of this kind of uniformity as of a worthwhile goal,
> Perl programmers probably see this as a disgusting perspective ;)

Fortunately, Python as a language does not enforce this, at least not
entirely.  Still, what you point out below is disturbing.

> > Next question:  would it (the Python implementation) have been better
> > coded using classes, or is the functional implementation just as good?
> 
> What does it mean better? Faster, more efficient? Or simpler to write,
> simpler to maintain, simpler to read for other Pythonistas? 

Simpler to write and simpler to maintain for the author, is what I was
thinking.  Authors and maintainers, if there are multiple such.

There's also an implication of _reasonable_ efficiency; an O(n!)
algorithm would not be "better" than an O(n) algorithm, even if it's
much easier to write and maintain.  I'm not talking about minor
differences in efficiency, though, like one order of magnitude; those
get lost in the underflow on modern systems.

> I haven't never benchmarked applications using a functional style
> vs. applications using an object oriented style, so I can't say,
  
If it's even remotely close, it doesn't matter which is more efficient
with the computer's resources.  Unless you're trying to squeeze a
couple more years of life out of that old 386, or something.

I do try to avoid any algorithm worse than O(n^2) though.  But that
isn't what I was talking about.  Perl gets accused (by C programmers
mostly) of poor performance, but the only time it's not instant in my
experience is when it's retrieving data from a slow external source (a
database, the internet, ...) or when the algorithm is extremely bad
for performance (say, O(n!) or something like that).  I guess I'm just
not very interested in benchmarking to save a few microseconds.  Spend
the extra nanowatt and make my life easier.
  
> but I don't expect to see big differences. OTOH, writing the
> application in the obvious OO style has big advantages if the
> application has to be maintained by another Python programmer not
> very familiar with functional techniques.

What if the application is to be maintained by some future programmer
yet to be determined who may or may not have significant pre-existing
familiarity with OO, FP, or Python?

I would think it would be a mistake to write a program under the
assumption that an unknown future maintainer will necessarily be bound
by a certain suboptimal mindset.

> Yes, using all those closures would look bizarre, "unpythonic" to
> another Python programmer looking at that code.

Okay, but what about a competent programmer who knows several
languages (as all really good programmers do) but happens at the moment
to be using Python, because the code is written in Python?

> You see a typical example of "Pythonicity" here: this way is not
> obvious, so it is NOT the right way to go. The scope rules are
> bizarre on purpose, since people are not expected to use closures,
> they should use objects:

Objects for *that*?  Okay, I'm going way off topic here, but I'd sure
have used closures rather than objects for something like that in Perl:

sub make_adders {
    return map { my $i=$_; sub { $i+shift } } 1 .. shift if wantarray;
    my $i = shift; return sub { $i + shift }
}
my $addfive = make_adders(5);
my ($addone, $addtwo) = make_adders(2);
print $addfive->($addone->($addtwo->(3))) . "\n";  # 11

(Except, I wouldn't really create an object or closure for such a
simple thing as adding 5 to a number.  But that's beside the point.)
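For comparison, the two Python spellings under discussion might look
roughly like this (a sketch; the names are mine):

```python
def make_adder(n):
    # closure version: n is captured from the enclosing scope
    # (read-only capture, which is all an adder needs)
    def adder(x):
        return n + x
    return adder

class Adder:
    """OO version: a named, documentable, subclassable home for the logic."""
    def __init__(self, n):
        self.n = n
    def __call__(self, x):
        return self.n + x

add_five = make_adder(5)
add_two = Adder(2)
```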

Python aims to be a general-purpose VHLL; it's going to eventually
have to have the scoping needed to provide decent support for FP, just
as Perl discovered that good OO was necessary.  (The OO in Perl6 is
being completely overhauled.  It will rock, when it's finally done,
which I currently estimate as sometime between entirely too long from
now and never.)

> [OO] is the "obvious" way for a Python programmer (I am sure Scheme
> or Perl programmers here will think the functional version is
> clearer, but let it be) and has various advantages over the
> functional version:
> 
> - the adder logic is encapsulated in an object with a
>   self-documenting name;
> - I can add docstrings to the adder class and not to the lambda;
> - I can later extends the class whereas I cannot derive from the
>   lambda;
> - I dont'use the esoteric name "lambda" which may scare my readers
>   ;)

Wait, you can't derive from a closure in Python?  Does that mean you
can't have a closure that holds other closures and uses them for
stuff?  That seems like a pretty serious limitation.  I would have
thought Python, being mostly a fairly flexible language, could handle
that, even if it's not The Van Rossum Way.  Or did you only mean that
there's no built-in support for altering or cloning a closure?

> OTOH, if I had good scope rules in the language, I would code the
> adder with a lambda, and my code would be less Pythonic, i.e. less
> readable to others. So, Guido doesn't give me better scope rules and
> "ipso facto" guarantees that the only *obvious* way to do it is the
> OO way.  Uniformity of style is a very valuable thing in a corporate
> environment, less so if you code for yourself.

This kind of uniformity isn't the mere normal sort of style -- it's
bondage and discipline.  It reminds me of the Pascal teacher who
wanted us to have each subroutine only be called from one place, to
keep control flow straightforward.  Even Pascal wouldn't enforce that
kind of severe limitation.

> > This is somewhat different from my approach.  I freely mix
> > whatever paradigms are available to me in the language I'm using,
> 
> The choice of the paradigm is somewhat restricted by the language,
> not only by the problem. For instance, I am aware of the possible
> warts of using closures in Python, so I have two choices: 1) I am
> very careful, 2) I switch to the OOP paradigm. More often than not
> 2) is the simplest choice. I am sure that in Perl you would tend to
> prefer the functional way over the object oriented way, instead.
  
I intermix them freely.  For example, my closures might hold private
objects, which they might use for various things.  Any moderately
competent Perl programmer understands both, so there's no problem.
Whatever gets the job done and makes the code maintainable...

my @foo = map {
   my %rec = %{GetRecord('sometable',$_)};
   my $dt = DateTimeFromDB($rec{somefield});
   +{ # A hashful of closures:
      isfinished  => sub { DateTime::compare($dt, DateTime->now()) >= 0 },
      othermethod => sub { dostuff($rec{foo}), $dt->year },
      anothersub  => sub { whee(@rec{'somefield','anotherfield'}) },
   }
} grep {
  somecriterion($_)
} @whatever;

$foo[$someindex]{othermethod}->();

The arrow syntax is annoying, but it's only syntax, and it's getting
fixed in Perl6.
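The same mixing translates to Python, for that matter; a rough sketch
of a dict of closures closing over shared data (all names here are
hypothetical):

```python
import datetime

def make_record_view(rec):
    # rec is a plain dict standing in for a fetched database record
    dt = rec["when"]
    return {
        # each closure shares the captured timestamp
        "isfinished": lambda: dt <= datetime.datetime.now(),
        "year": lambda: dt.year,
    }

view = make_record_view({"when": datetime.datetime(2000, 1, 1)})
```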

> Coming back OT, i.e. to Scheme, I would say in Scheme (or Lisp)
> these issues are less relevant, since in these languages you have
> the opportunity to modify the language itself to follows the
> paradigm you have in mind. 

That's something I'd really like to explore; sounds incredibly useful.
However, I think I'd better get a more solid handle on continuations
first.

> Still, even Scheme encourage some paradigm over others:  for
> instance you are encouraged to solve a problem recursively and not
> iteratively, to think in terms of higher order functions, etc.

Recursion (well, general recursion) and iteration are isomorphic.  If
you fully understand them both, thinking in terms of the one is
exactly the same as thinking in terms of the other.

sub iterative_wondrousp {
  my $n = int shift;
  while ($n > 1) {
     if ($n % 2) {
        ($n*=3)++;
     } else {
        $n /= 2;
     }
  }
  return "Wondrous!";
}

sub recursive_wondrousp {
   my $n = int shift;
   return "Wondrous!" unless $n > 1;
   if ($n % 2) {
     return recursive_wondrousp($n*3+1);
   } else {
     return recursive_wondrousp($n/2);
   }
}

print map {"$_\n"} (iterative_wondrousp(27), recursive_wondrousp(27));

Is that even different?  I guess I moved the return associated with
the base case to a different physical spot in the code, but the
control flow hasn't changed at all.  If you desk-check it (i.e., walk
through the interpretive flow by hand), you'll quickly see that the
two are identical in every significant way.

It's possible to think in terms of iteration or in terms of recursion,
but it's also possible to just think about them generally,
collectively, and to think about solving a particular problem using
either of them, without worrying about which.  Which one you use is an
unimportant detail, in most of the common cases.

Sure, there are some problems that are difficult to think about
iteratively, because of their complexity.  Still, ultimately, the
recursion boils down to iteration at the machine code level.  Or the
iteration boils down to recursion, depending on how you look at it.
Really, they're the same.

> Summarizing, I think that if you want to use a paradigm not
> supported by language X, you have two choices: 1) change your mind:
> 2) switch to language Y. In the case of X = Scheme, Y can be a
> customization of Scheme written by yourself; still you have to to
> write the language yourself, which is a bigger effort than using the
> native constructs. I.e. I may implement Python "for" loops in
> Scheme, but more often than not I am better off by using the native
> looping constructs.

> (*) notice that I was naturally inclined to think in a FP way even
> before studying Python and before studying Scheme: I was introduced
> to map and friends by Mathematica and Maple. I had no problems at
> all in learning the FP approach, whereas I needed a more substantial
> effort to grasp OOP.

I understood OO before FP, largely due to the influence of Inform
(which has really good OO, incidentally, much better than Perl5), but
there are some problems that are inherently easier to solve
functionally than objectually.  Languages that don't provide at least
some of both paradigms (or adequate flexibility so that you can easily
construct them) are missing important parts.  I don't know Python well
enough to be certain of which category it falls into; my feeling
before this discussion was that the language itself was flexible
enough for multiple paradigms, but that the docs just tended to
emphasize OO more than the others.  But I never got all the way
through the Python book before other things distracted me, so it could
have some stiffer limitations than I realised.

Of course, it doesn't really matter; I didn't come to comp.lang.scheme
to discuss Python -- or, for that matter, Perl or elisp.  I want to
learn about continuations in Scheme.  This is just a side-discussion,
albeit an interesting one.

-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
0
jonadab (30)
2/24/2004 4:21:09 PM
Joe Marshall <prunesquallor@comcast.net> writes:

> Languages that are not `safe for space' are fundamentally broken.

Yeah, I'm looking forward to that's being changed in Perl6, too.

-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
0
jonadab (30)
2/24/2004 4:23:07 PM
"Anton van Straaten" <anton@appsolutions.com> writes:

> Anyway, I'll eagerly await the "bugfix" which allows one to safely use local
> variables in procedures which perform tail calls.  After that, a version
> which can safely perform tail calls without explicit stack manipulation
> would be nice...

Yeah, me too.  Perl6.  We're also getting a real object model, lazy
evaluation, and other cool stuff.  Coming "soon", for varying values
of "soon", hopefully sometime before the heat-death of the internet.

I'm also waiting for the "bugfix" that gives Emacs proper multitasking
and threads, so that Gnus can send one message while retrieving others
while I read or compose yet another at the same time.

-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
0
jonadab (30)
2/24/2004 4:35:15 PM
"Anton van Straaten" <anton@appsolutions.com> writes:

> You can respond to this by saying that there are languages in which
> this style is natural and elegant, and is even used to implement
> basic constructs like loops.  At that point, said Perl programmer is
> looking at you with a funny expression, shaking his head, and
> backing away slowly.  That programmer has lost the opportunity to
> learn something important.

With regard to the tail recursion you were talking about, I agree
here.  However...

> That's why it's misleading, and a disservice to Perl programmers, to
> claim that Perl supports functional programming.

Perl provides excellent support for *some elements* of functional
programming.  Closures in Perl are a long way from broken, for
example, and are very useful, but closures are certainly an example of
functional programming.  They sure aren't object-oriented or
procedural or declarative.  No, they come from the functional school
of programming.

Yes, when tail recursion is optimized, that will be a big improvement
and will provide for additional elements of functional programming.

-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
0
jonadab (30)
2/24/2004 4:44:40 PM
Joe Marshall <prunesquallor@comcast.net> writes:

> "Daniel P. M. Silva" <dsilva@ccs.neu.edu> writes:
> 
> > You should give it a shot.  All you need is the Perl BNF and what it means.
> 
>   "Perl's grammar can not be reduced to BNF.  The work of parsing Perl
>    is distributed between yacc, the lexer, smoke and mirrors." 
>           --- Chaim Frenkel as quoted in the Perl FAQ
> 
> Does Perl even *have* semantics?

Perl has very powerful semantics, but I have my doubts whether BNF is
expressive enough to cover things like context.  Certainly the BNF for
Perl, if it could even be written, would not be human-readable.

-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
0
jonadab (30)
2/24/2004 4:54:05 PM
"Daniel P. M. Silva" <dsilva@ccs.neu.edu> writes:

> Nasty.  Never mind DrPerl then.

All this really means is that BNF is not sufficiently expressive to
handle a complex language like Perl.  Context, far from being "nasty",
is at the core of Perl's expressiveness, one of the things (besides
the CPAN of course) that really sets it apart from other languages.

> Oh well, too bad.  I doubt the Perl and Scheme communities care
> about each other anyway.

There are a number of people on perlmonks.org who also use Scheme.
One in particular, going by the handle "Schemer", posted a meditation
once explaining how by just playing around with Perl's closures he
ended up implementing an object system.  There are a number of people
involved with the design of Perl6 who are familiar with Scheme (and
other languages too).  Perl loves to borrow concepts from other
languages.  We're done borrowing from C and awk and sed (and have been
done borrowing from them for some time now) and are now moving on to
borrow from the likes of Scheme and Smalltalk and Prolog.  Resistance
is futile; you will be assimilated (or something like that).
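The perlmonks trick translates directly: a rough Python sketch of an
"object" built from nothing but closures (`make_counter` and the message
names are hypothetical, not from that post):

```python
# An "object" as a set of closures over shared state, dispatched by name.
def make_counter(start=0):
    state = {"n": start}          # captured state, private to the closures

    def inc():
        state["n"] += 1
        return state["n"]

    def get():
        return state["n"]

    # the dispatch table *is* the object; the methods share `state`
    return {"inc": inc, "get": get}

c = make_counter()
c["inc"]()
c["inc"]()
print(c["get"]())   # 2
```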

Besides, unless I miss my guess, Scheme also is more expressive than
BNF, and it is surely possible (albeit not an easy thing) to implement
Perl in Scheme, if someone wanted to undertake it.

I've never seen a claim to the effect that BNF is Turing-complete, so
why it should be a surprise that languages exist which are not
expressible in it is beyond me.

-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
0
jonadab (30)
2/24/2004 5:06:51 PM
"Jonadab the Unsightly One" <jonadab@bright.net> writes:

> "Daniel P. M. Silva" <dsilva@ccs.neu.edu> writes:
>
>> Nasty.  Never mind DrPerl then.
>
> All this really means is that BNF is not sufficiently expressive to
> handle a complex language like Perl.  Context, far from being "nasty",
> is at the core of Perl's expressiveness, one of the things (besides
> the CPAN of course) that really sets it apart from other languages.

REBOL has context-sensitive parsing.  I agree that context-sensitive
parsing sets a language apart, but that is not a good thing.  It makes
it impossible to reason about code independently of context!  It is
nice to be able to take a code fragment and know what it does without
having to know where it was lifted from.

> Besides, unless I miss my guess, Scheme also is more expressive than
> BNF, and it is surely possible (albeit not an easy thing) to implement
> Perl in Scheme, if someone wanted to undertake it.

Scheme is almost BNF, the exception being quasiquote (which needs to
keep track of the quasiquote depth only, so there is no runtime
dependence).  You could implement Perl in Scheme, but Daniel Silva was
talking about the Dr.X language interface, which requires a BNF.

> I've never seen a claim to the effect that BNF is Turing-complete, so
> why it should be a surprise that languages exist which are not
> expressible in it is beyond me.

It is not Turing complete.  The reason it is surprising is that there
is a powerful infrastructure (LEX & YACC) *specifically* designed to
parse infix-based languages efficiently and quickly provided that you
make some small concessions to left-to-right order.  Instead, people
write ad-hoc parsers or invent languages that are virtually impossible
to parse with the standard tools.  This is simply idiotic.  At least
the XML based languages can be parsed!

0
jrm (1310)
2/24/2004 5:27:46 PM
"Jonadab the Unsightly One" <jonadab@bright.net> writes:

> Joe Marshall <prunesquallor@comcast.net> writes:
>
>> "Daniel P. M. Silva" <dsilva@ccs.neu.edu> writes:
>> 
>> > You should give it a shot.  All you need is the Perl BNF and what it means.
>> 
>>   "Perl's grammar can not be reduced to BNF.  The work of parsing Perl
>>    is distributed between yacc, the lexer, smoke and mirrors." 
>>           --- Chaim Frenkel as quoted in the Perl FAQ
>> 
>> Does Perl even *have* semantics?
>
> Perl has very powerful semantics, but I have my doubts whether BNF is
> expressive enough to cover things like context.  Certainly the BNF for
> Perl, if it could even be written, would not be human-readable.

That's the problem, then.  Nearly any string of noise has *some*
meaning in Perl.
0
jrm (1310)
2/24/2004 5:29:05 PM
>>>>> "Joe" == Joe Marshall <jrm@ccs.neu.edu> writes:

> That's the problem, then.  Nearly any string of noise has *some*
> meaning in Perl.

You exaggerate, good sir. Practical tests show that 80-character
strings of randomly chosen printable ASCII characters are only about
4.5% likely to be syntactically valid Perl.


This code below was used to determine this:

#!/usr/bin/perl -l

close STDERR; # To avoid noise from all the compiles that don't work

our $broken = 0;
our $works = 0;

for (1..10000) {
    $str = pack("c*", map {32+rand(95)} 1..80);
    eval $str;
    if ($@) {
	$broken++
    } else {
	$works++
    }
}

print "Score: $broken non-working entries, $works working ones.";

-- 
		    Calle Dybedahl <calle@lysator.liu.se>
    "I don't know what art these programs are state-of; possibly macrame."
		     -- Dr Richard A. O'Keefe, comp.risks
0
calle5 (7)
2/24/2004 6:45:34 PM
<calle@cyberpomo.com> wrote:
>You exaggerate, good sir. Practical tests show that 80-character
>strings of randomly chosen printable ASCII characters are only about
>4.5% likely to be syntactically valid Perl.

I've done a similar experiment in the past, though with 30 character
strings of printable characters. Most of those syntactically valid
programs are going to be either inline documentation (POD, lines
starting with a '=') or contain a comment character ('#') early on, so
it's hard to claim that they actually have a meaning (which is what
Joe claimed). Either that, or practically any string of garbage is a
Scheme program with some meaning... :-)

Anyway, in my tests only 11/10000 of the programs without comments
parsed correctly. With 80 characters I'd expect even fewer "valid"
programs.

>    eval $str;
>    if ($@) {

You're misclassifying programs that parse correctly but run into a
runtime error. Giving the program as input to "perl -c" and testing
for the return value should be more reliable.
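For comparison, the analogous experiment is easy to run against
Python's compiler; a sketch (the hit rate will vary from run to run,
and `compile` only parses, so no runtime errors are miscounted):

```python
import random

def is_valid_python(src):
    """Does src parse as a Python module? (parse only, never executed)"""
    try:
        compile(src, "<random>", "exec")
        return True
    except (SyntaxError, ValueError):
        return False

random.seed(0)
printable = [chr(c) for c in range(32, 127)]   # printable ASCII
trials = 1000
valid = sum(
    is_valid_python("".join(random.choice(printable) for _ in range(80)))
    for _ in range(trials)
)
print(f"{valid}/{trials} random 80-char strings parse as Python")
```

Strings that happen to start with `#` count as valid comments, so a
fair comparison would filter those out, as Juho did for Perl.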

-- 
Juho Snellman
0
jsnell (317)
2/24/2004 7:13:00 PM
Jonadab the Unsightly One wrote:
> 
> Joe Marshall <prunesquallor@comcast.net> writes:
> 
> > "Daniel P. M. Silva" <dsilva@ccs.neu.edu> writes:
> >
> > > You should give it a shot.  All you need is the Perl BNF and what it means.
> >
> >   "Perl's grammar can not be reduced to BNF.  The work of parsing Perl
> >    is distributed between yacc, the lexer, smoke and mirrors."
> >           --- Chaim Frenkel as quoted in the Perl FAQ
> >
> > Does Perl even *have* semantics?
> 
> Perl has very powerful semantics, but I have my doubts whether BNF is
> expressive enough to cover things like context.  Certainly the BNF for
> Perl, if it could even be written, would not be human-readable.

Eh.  I think any grammar can be expressed in a form that *somebody* will
call "BNF".  Classical BNF will parse context-free grammars.  BNF with appropriate
extensions (rules decorated with instructions for stack manipulation and 
reading) will parse context-sensitive grammars. BNF with yet more extensions
(multiple tokens on the rule RHS) is Turing-complete. 

			Bear
0
bear (1219)
2/24/2004 7:26:58 PM
Joe Marshall wrote:
> 
> > Perl has very powerful semantics, but I have my doubts whether BNF is
> > expressive enough to cover things like context.  Certainly the BNF for
> > Perl, if it could even be written, would not be human-readable.
> 
> That's the problem, then.  Nearly any string of noise has *some*
> meaning in Perl.

That's the major barrier when I attempt to use perl.  The syntax looks 
like line noise to me.  It's admirably concise, but the individual bits 
of punctuation don't correspond in any coherent way to their meanings. 
The fact that they mean different things in different contexts complicates
the problem. 

You can tell Wall is a (natural) linguist; a lot of the ways things get 
used in Perl correspond to multiple uses of words in natural language. 
This adds to Perl's conciseness without increasing its token set, but 
also adds to confusion when trying to analyze something.

Although perl will "usually do what you mean", the multiple semantics 
of the implied grammar allow seeming implicit ambiguities which make 
it very difficult to read code and know what it actually does, or to 
debug code that looks like it "should" be doing something else. 

				Bear
0
bear (1219)
2/24/2004 7:34:25 PM
Calle Dybedahl <calle@cyberpomo.com> writes:

>>>>>> "Joe" == Joe Marshall <jrm@ccs.neu.edu> writes:
>
>> That's the problem, then.  Nearly any string of noise has *some*
>> meaning in Perl.
>
> You exaggerate, good sir. Practical tests show that 80-character
> strings of randomly chosen printable ASCII characters are only about
> 4.5% likely to be syntactically valid Perl.

How do other languages compare?
0
jrm (1310)
2/24/2004 9:02:11 PM
Joe Marshall <jrm@ccs.neu.edu> writes:

> "Jonadab the Unsightly One" <jonadab@bright.net> writes:
>
>> I've never seen a claim to the effect that BNF is Turing-complete, so
>> why it should be a surprise that languages exist which are not
>> expressible in it is beyond me.
>
> It is not Turing complete.  The reason it is surprising is that there
> is a powerful infrastructure (LEX & YACC) *specifically* designed to
> parse infix-based languages efficiently and quickly provided that you
> make some small concessions to left-to-right order.  Instead, people
> write ad-hoc parsers or invent languages that are virtually impossible
> to parse with the standard tools.  This is simply idiotic.  At least
> the XML based languages can be parsed!

Besides allowing the use of standard toolsets like LALR(1) parser
generators, keeping a language's grammar context-free also makes it much
easier for the programmer to reason about the language and programs written
in that language.  In Perl, one can easily get surprising parses, and those
often lead to unexpected and hard-to-diagnose run-time behavior.  I claim
this is a result of two things: Perl's incredibly complex grammar, and its
`do what I mean' attitude towards parsing.

When I write in a language with a more straightforward syntax, like Scheme,
C, C++, or Java, I'll get a number of errors at compile time complaining of
trivial syntax mistakes: missing close paren in Scheme, or a missing
semicolon in C.  These are mildly annoying, but I think there are strong
advantages to this approach.  I would far rather be clearly informed of an
error, however trivial, than risk the possibility of the language
misinterpreting my program and producing an incorrect answer (with no
indication that it's incorrect).
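Python, though not on Richard's list, behaves the same way; a minimal
sketch of the compile-time rejection:

```python
# A trivial mistake is reported at parse time instead of being
# "helpfully" reinterpreted as something else.
try:
    compile("if x = 1: pass", "<example>", "exec")
    caught = False
except SyntaxError as e:
    caught = True
    print("SyntaxError:", e.msg)

assert caught
```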
                                 * * * * *

Side note: as Joe has recently experienced, C++'s grammar is also decidedly
not context-free.  However, my experience is that people find it easier to
deal with than Perl's grammar.  I conclude from this that there are certain
amounts of context sensitivity that we're good at dealing with.  I'm not
sure, though, whether this is an issue of degree or kind.  That is, in C++
the context sensitivity is largely limited to issues like, `is this
identifier a class name, a template name, or a function name?'.  Perl's
context sensitivity pervades the syntax and affects things as basic as
expression parsing.

Richard
0
cobbe (63)
2/24/2004 11:02:54 PM
Calle Dybedahl <calle@cyberpomo.com> writes:

> >>>>> "Sean" == Sean O'Rourke <look-it-up@cs.ucsd.edu> writes:
> 
> > As other posters have said, there is no BNF for Perl.  I'm not a Perl
> > 5 hacker, so I don't know the full extent of the pain. 
> 
> Bad enough that Perl6 will be a complete rewrite, with a grammar. 

The Perl6 grammar will be written using Perl6 grammar rules, though.
(See the Apocalypse article on pattern matching.)  I doubt it will be
reducible to BNF, any more than Perl5 is.

> Even today it's possible to write parts of your Perl programs in
> other languages, by way of the Inline:: set of modules. 

Yes, but usually the only reasons to do that are to get around Perl's
limitations (which will be less of an issue in Perl6) or for
obfuscatory purposes.  There are edge cases where people use Inline
for other reasons, but those two are the only *common* reasons.

> > For an even more extreme example, check out the mailing list posts
> > about distinguishing between the "defined-or" operator "//" and
> > the empty regexp "//".
> 
> As far as I've understood things on the P6 mailing lists, Larry and
> Damian are trying to remove (or at least minimize) this sort of thing
> in Perl6.

Reduce it, yes, but it won't be possible to entirely remove it.

> > On the one hand, I find this somewhat distasteful -- my CS-major's
> > stomach churns -- so you probably find it absolutely appalling.
> 
> This, however, is unlikely to change. Perl will, to borrow an
> expression from Sean Burke, continue to be the Borg of programming
> languages: it will actively hunt down other languages and steal
> their nifty features (prime targets for Perl6 at the moment seem to
> be Ruby and Smalltalk). 

I've seen mentions of Scheme and Prolog also, among others.  But yes,
Ruby and Smalltalk seem to come up a lot.  Especially Smalltalk.

> While this makes for a language where you can almost always choose a
> way to express a problem that makes it easy to solve, it *really*
> does not make for the sort of cleanliness and orthogonality that CS
> people tend to like.

It makes for an expressive and useful language, is what it makes for.
I've heard that we're getting continuations in Perl6, and I'm excited
about it.

-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
0
jonadab (30)
2/24/2004 11:33:10 PM
At Tue, 24 Feb 2004 12:27:46 -0500, Joe Marshall wrote:
> 
> Scheme is almost BNF, the exception being quasiquote (which needs to
> keep track of the quasiquote depth only, so there is no runtime
> dependence).

quasiquote can be defined as a macro, which means the Scheme parser can
be defined with a BNF (as is done in the Formal Syntax section of R5RS).

-- 
Alex

0
foof (110)
2/25/2004 1:34:56 AM
Joe Marshall  wrote:
>You could implement Perl in Scheme, but Daniel Silva was
>talking about the Dr.X language interface, which requires a BNF.

That's not exactly true, actually.  Scott Owen's parser tools, which
require your average LALR(1) yacc/bison-like grammar, make your life
easier if you want to implement a new language in DrScheme (in fact
all the non-Scheme languages currently supported by DrScheme use those
tools: Algol 60, Ocaml, Python, and Java) but you're not required to
use those tools in your language implementation, you can do the whole
parsing yourself if necessary.

So implementing perl6 as a DrScheme language is perfectly doable.  In
fact, given how much functional stuff perl6 is apparently going to
contain and given the huge amount of that functional stuff that's
already there in DrScheme, I wouldn't be surprised if a motivated
scheme programmer could come up with an implementation of perl6 (well,
of the language proper, not all the extra library modules and such)
for DrScheme well before the perl6 people officially release their own
system.  I could even see the perl6 people use that DrScheme
implementation to experiment with their language design instead of
just writing design documents; that could actually accelerate their
development...

Philippe

0
meunier (7)
2/25/2004 1:37:40 AM
meunier@ccs.neu.edu (Philippe) writes:

> Joe Marshall  wrote:
>>You could implement Perl in Scheme, but Daniel Silva was
>>talking about the Dr.X language interface, which requires a BNF.
>
> That's not exactly true, actually.  Scott Owen's parser tools, which
> require your average LALR(1) yacc/bison-like grammar, make your life
> easier if you want to implement a new language in DrScheme (in fact
> all the non-Scheme languages currently supported by DrScheme use those
> tools: Algol 60, Ocaml, Python, and Java) but you're not required to
> use those tools in your language implementation, you can do the whole
> parsing yourself if necessary.

I stand corrected.  Thanks!

-- 
~jrm
0
2/25/2004 1:43:02 AM
Richard C. Cobbe wrote:
> Side note: as Joe has recently experienced, C++'s grammar is also decidedly
> not context-free.  However, my experience is that people find it easier to
> deal with than Perl's grammar.  

Hrm.

Context sensitive? I had rather assumed it was lalr(k).
0
johan.NO (67)
2/25/2004 6:11:09 AM
"Jonadab the Unsightly One" <jonadab@bright.net> wrote in message news:<1xok8n4a.fsf@jonadab.homeip.net>...
> michele.simionato@poste.it (Michele Simionato) writes:

> > but I don't expect to see big differences. OTOH, writing the
> > application in the obvious OO style has big advantages if the
> > application has to be maintained by another Python programmer not
> > very familiar with functional techniques.
> 
> What if the application is to be maintained by some future programmer
> yet to be determined who may or may not have significant pre-existing
> familiarity with OO, FP, or Python?

If the future maintainer has to maintain an application written in Python,
he has to know Python! (this includes OOP, not necessarily FP)

> I would think it would be a mistake to write a program under the
> assumption that an unknown future maintainer will necessarily be bound
> by a certain suboptimal mindset.

I see that you come from Perl ;) In Python, I can make strong assumptions
about the style used by my future maintainers and about their mindset,
and these assumptions are most likely correct. For instance, I do know 
for sure that if I use an OOP style my application can be maintained by 
any average Python programmer, whereas if I use a heavy functional style 
I am imposing a burden on my future maintainers, who will expect classes,
not closures.

> > Yes, using all those closures would look bizarre, "unpythonic" to
> > another Python programmer looking at that code.
> 
> Okay, but what about a competent programmer who knows several
> languages (as all really good programmers do) but happen at the moment
> to be using Python, because the code is written in Python?

Here you are making unwarranted assumptions about the capabilities of your 
future maintainers. A Pythonista would write his code in such a way that 
it can be read by an *average* Python programmer. 
Suppose you can write a class in 10 lines using regular boring Python, and 
in 5 lines using a clever trick: an experienced Python programmer would 
prefer the boring 10-line version. "Explicit is better than implicit, flat 
is better than nested, readability counts, etc. etc."
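To make the contrast concrete, a toy example (both versions are mine,
not from the thread):

```python
# The "boring" explicit version: any average Python programmer
# reads it at sight.
class Adder:
    def __init__(self, n):
        self.n = n

    def __call__(self, x):
        return self.n + x

# The "clever" closure version: shorter, but less self-documenting.
def adder(n):
    return lambda x: n + x

print(Adder(3)(4), adder(3)(4))   # both give 7
```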
> 
> Python aims to be a general-purpose VHLL; it's going to eventually
> have to have the scoping needed to provide decent support for FP

I hope so, but I think it is unlikely in the near future. For instance,
we waited until version 2.1 to get nested scopes, since Guido didn't care 
particularly about them. Giving better support to closures must be the 
last point in his TODO list (this, assuming he would not be opposed to the
change). The idea of Python is to put *less* things in the language, not
more. We are waiting Python 3.0 for *removing* things from the language.
It is quite possible that "map" and "filter" will be removed from the core
language and put in a library, just to make clear that list comprehension
(and in the near future, generator comprehension) is "the" Pythonic way.
For the record, I am strongly in favor of this change.
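The equivalence that makes such a move plausible, sketched:

```python
# map/filter with lambdas versus the list comprehension that
# expresses the same computation.
xs = range(10)
via_functions = list(map(lambda x: x * x, filter(lambda x: x % 2, xs)))
via_comprehension = [x * x for x in xs if x % 2]

assert via_functions == via_comprehension == [1, 9, 25, 49, 81]
```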
 
> Wait, you can't derive from a closure in Python?  

What do you mean by "deriving from a closure"? There is no such
thing as inheritance of closures.

> Does that mean you
> can't have a closure that holds other closures and uses them for
> stuff?  That seems like a pretty serious limitation.  I would have
> thought Python, being mostly a fairly flexible language, could handle
> that, even if it's not The Van Rossum Way.  Or did you only mean that
> there's no built-in support for altering or cloning a closure?

A Python closure is a first class object that may contain other closures,
may be copied, etc. If you play dirty tricks, you can do a lot with
closures, but in Python, you don't. If something is not obvious, it 
is not the right tool to use.
  
> This kind of uniformity isn't the mere normal sort of style -- it's
> bondage and discipline.  It reminds me of the Pascal teacher who
> wanted us to have each subroutine only be called from one place, to
> keep control flow straightforward.  Even Pascal wouldn't enforce that
> kind of severe limitation.

I like Pascal ;) Seriously, the difference between Python and Pascal is
the following: in Pascal you are *forced* to do things in a given way,
in Python you are *strongly encouraged* to do things in Guido's way. 
You can use closures instead of classes, but it is just ugly, less 
self-documenting and more error prone than using classes, so you don't want 
to do it.
OTOH, you have quite a lot of freedom in Python: for instance you can
change the class of an instance at run time. It is something you should
not do and Pascal prevents you from doing it; Python leaves you the freedom, 
but discourages this usage. So, there is no primitive to change the class 
of an instance and you have to play with special attributes and underscores:
but still you can do that.
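Concretely, the special attribute in question is `__class__`; a sketch
of the discouraged-but-possible runtime class change (toy classes are
mine):

```python
class Dog:
    def speak(self):
        return "woof"

class Cat:
    def speak(self):
        return "meow"

pet = Dog()
assert pet.speak() == "woof"

pet.__class__ = Cat   # no primitive for this, just an underscore attribute
assert pet.speak() == "meow"
```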

> > Coming back OT, i.e. to Scheme, I would say in Scheme (or Lisp)
> > these issues are less relevant, since in these languages you have
> > the opportunity to modify the language itself to follows the
> > paradigm you have in mind. 
> 
> That's something I'd really like to explore; sounds incredibly useful.
> However, I think I'd better get a more solid handle on continuations
> first.

You should study macros if you want to play with the language; continuations
are control structures used to implement lightweight threads and the like.
 
> > Still, even Scheme encourage some paradigm over others:  for
> > instance you are encouraged to solve a problem recursively and not
> > iteratively, to think in terms of higher order functions, etc.
> 
> Recursion (well, general recursion) and iteration are isomorphic.  If
> you fully understand them both, thinking in terms of the one is
> exactly the same as thinking in terms of the other.

> Sure, there are some problems that are difficult to think about
> iteratively, because of their complexity.  Still, ultimately, the
> recursion boils down to iteration at the machine code level.  Or the
> iteration boils down to recursion, depending on how you look at it.
> Really, they're the same.

That's true in principle, but it is NOT convenient to think in this way
(just IMHO, of course).
 
> I don't know Python well
> enough to be certain of which category it falls into; my feeling
> before this discussion was that the language itself was flexible
> enough for multiple paradigms, but that the docs just tended to
> emphasize OO more than the others.  But I never got all the way
> through the Python book before other things distracted me, so it could
> have some stiffer limitations than I realised.

Don't get me wrong: there are no more limitations than in Perl (making
closures work is just a matter of complicating the syntax) and the 
language *is* multiparadigm. But OOP is nicer to use and it is the obvious
way. BTW, Stackless Python got first class continuations and tail 
recursive calls years ago. Nevertheless, I don't think that continuations
will ever enter in the standard implementation, they are a too low level
tool (we got generators instead). OTOH, getting proper tail recursion
is conceivable; however, I don't think it is high in Guido's priority list.
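A sketch of the kind of lightweight cooperative multitasking that
generators cover, which is one of the main uses continuations would
otherwise serve (names are mine):

```python
def ping_pong(name, count):
    # a tiny cooperative "thread": suspends itself at each yield
    for i in range(count):
        yield f"{name} {i}"

# round-robin scheduling of two suspended computations
tasks = [ping_pong("ping", 2), ping_pong("pong", 2)]
out = []
while tasks:
    task = tasks.pop(0)
    try:
        out.append(next(task))   # resume until the next yield
        tasks.append(task)
    except StopIteration:
        pass                     # task finished; drop it
print(out)   # ['ping 0', 'pong 0', 'ping 1', 'pong 1']
```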

> Of coruse, it doesn't really matter; I didn't come to comp.lang.scheme
> to discuss Python -- or, for that matter, Perl or elisp.  I want to
> learn about continuations in Scheme.  This is just a side-discussion,
> albeit an interesting one.

Since this message is crossposted to c.l.functional, and we are talking
about FP, I think we are at least half on-topic ;)


       Michele Simionato
0
2/25/2004 7:34:43 AM
On Tue, 24 Feb 2004 19:34:25 +0000, Ray Dillinger wrote:

> Although perl will "usually do what you mean", the multiple semantics 
> of the implied grammar allow seeming implicit ambiguities which make 
> it very difficult to read code and know what it actually does, or to 
> debug code that looks like it "should" be doing something else. 

This is what bit me yesterday:

sub possible(&$) {
   ...
}

if (possible {...} $var1 && possible {...} $var2) {...}

It got parsed as
   if (possible {...} ($var1 && possible {...} $var2)) {...}
which was not what I meant, and I could fix it only by
   if ((possible {...} $var1) && ...
or
   if (possible(sub {...}, $var1) && ...
neither of which I liked much. I would prefer adding parens around
arguments, like in functions without the block, or better that the
original version worked.

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/

0
qrczak (1266)
2/25/2004 10:05:17 AM
Fergus Henderson <fjh@cs.mu.oz.au> writes:

> Kevin Glynn <glynn@info.ucl.ac.be> writes:
> 
> >michele.simionato@poste.it (Michele Simionato) writes:
> >> 
> >> Suppose you have an ideal multi-paradigm language where every programming
> >> paradigm is supported equally well. Then you would chose the paradigm
> >> according to the problem. End of the story, fine.
> >> However, in real life languages do not support all paradigms equally well 
> >> and you have to take in account this fact.
> >
> >Oz is a multiparadigm language and at least *claims* to support them
> >equally well.
> 
> Oz may be a multiparadigm language, but I don't think it supports
> the template meta-programming paradigm very well.
> 
> Supporting a few or even a large number of particular paradigms is not
> the same as supporting *every* paradigm.
> 

Oooookaaaaay.  Of course, Oz only claims to be multiparadigm, not
omniparadigm, I apologise if I misled.  However, it supports many of
the commonly used paradigms and I think it is interesting to see how
well it manages that difficult trick.

> Furthermore, sometimes support for one paradigm will encourage
> design decisions that may be less than optimal for another paradigm.
> For example, in Oz, higher-order terms have identity, which means that
> they do not obey all the same laws that higher-order terms obey in a
> pure functional and/or pure logic language.
> 

I don't understand what you mean by "Higher-order terms have
identity".  Can you clarify?


cheers
k

0
glynn (8)
2/25/2004 12:33:01 PM
On Wed, 25 Feb 2004 00:11:09 -0600, Johan wrote:

>> Side note: as Joe has recently experienced, C++'s grammar is also decidedly
>> not context-free.  However, my experience is that people find it easier to
>> deal with than Perl's grammar.  
> 
> Hrm.
> 
> Context sensitive? I had rather assumed it was lalr(k).

It's not. The parsing of
   A<x>::B * y;
(whether it declares a pointer y or multiplies something by y), where x
is a constant of type int, can depend on the value of x.

Yes, it would be a pathology if some template specializations defined B
as a type and others as a value, but in general a language processor must
be prepared to evaluate constant expressions before parsing further parts
of the program.

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/

0
qrczak (1266)
2/25/2004 12:51:56 PM
Johan <johan.NO@SPAM.ccs.neu.PLEASE.edu> writes:

> Richard C. Cobbe wrote:
>> Side note: as Joe has recently experienced, C++'s grammar is also decidedly
>> not context-free.  However, my experience is that people find it easier to
>> deal with than Perl's grammar.
>
> Hrm.
>
> Context sensitive? I had rather assumed it was lalr(k).

Possibly, for arbitrarily large values of k.  Joe's fond of stating that
parsing C++ requires infinite lookahead.  I haven't looked at the problem
in great depth, just enough to discover that it's officially Very Hard and,
when you get down to it, not that intrinsically interesting.

Richard
0
cobbe (63)
2/25/2004 1:32:00 PM
Kevin Glynn wrote:
> 
>>For example, in Oz, higher-order terms have identity, which means that
>>they do not obey all the same laws that higher-order terms obey in a
>>pure functional and/or pure logic language.
> 
> I don't understand what you mean by "Higher-order terms have
> identity".  Can you clarify?

I suppose Fergus is referring to the fact that Oz provides 
token-equality on functions. From a more semantic point of view that is 
pretty nasty, since to some extent, implementation details become 
observable in programs.

On the other hand, many "native" FP languages have something similar, 
e.g. OCaml with its == operator, or Lisp with its jungle of EQsomething 
operators. So I'm not convinced that this feature has much to do
with questions of paradigm.

Having said that, I'm not a friend of multi-paradigmism either. But 
neither do I like the "one true way" approach of Python. I think it is 
nonsense - no language designer could ever anticipate all possible 
requirements for abstractions that programmers might encounter, so some 
compositional flexibility is in order.

	- Andreas

-- 
Andreas Rossberg, rossberg@ps.uni-sb.de

Let's get rid of those possible thingies!  -- TB

0
rossberg (600)
2/25/2004 2:18:18 PM
Johan <johan.NO@SPAM.ccs.neu.PLEASE.edu> writes:

> Richard C. Cobbe wrote:
>> Side note: as Joe has recently experienced, C++'s grammar is also decidedly
>> not context-free.  However, my experience is that people find it easier to
>> deal with than Perl's grammar.
>
> Hrm.
>
> Context sensitive? I had rather assumed it was lalr(k).


Does   (T)+5  cast positive 5 to type T, or does it add 5 to the
variable T?

This sequence of tokens:       foo < a , b > - 5
has two very different meanings depending on whether foo names a
template or not.
0
jrm (1310)
2/25/2004 2:19:18 PM
cobbe@ccs.neu.edu (Richard C. Cobbe) writes:

> Johan <johan.NO@SPAM.ccs.neu.PLEASE.edu> writes:
>
>> Richard C. Cobbe wrote:
>>> Side note: as Joe has recently experienced, C++'s grammar is also decidedly
>>> not context-free.  However, my experience is that people find it easier to
>>> deal with than Perl's grammar.
>>
>> Hrm.
>>
>> Context sensitive? I had rather assumed it was lalr(k).
>
> Possibly, for arbitrarily large values of k.  Joe's fond of stating that
> parsing C++ requires infinite lookahead.  

To quote Willink's thesis, Section 5.2:

  ``The C++ grammar is ambiguous, context-dependent, and potentially
    requires infinite lookahead to resolve some ambiguities:

        int(x), y, *const z;  // int x; int y; int * const z;

    Is a comma-separated list of declarations in which the first is
    redundantly parenthesised, whereas changing the final list
    element: 

         int (x), y, new int; // ((int(x)), (y), (new int));

    gives a list of expressions, the first two of which are redundant,
    and the third causes a memory leak.''

> I haven't looked at the problem in great depth, just enough to
> discover that it's officially Very Hard and, when you get down to
> it, not that intrinsically interesting.

Sort of like a shin-kicking competition.  Very painful and not really
something you'd care to win anyway.
0
jrm (1310)
2/25/2004 2:31:47 PM
"Jonadab the Unsightly One" <jonadab@bright.net> writes:

> I've heard that we're getting continuations in Perl6, and I'm excited
> about it.

*That* should be amusing.  Continuations and context-sensitivity are
an interesting mix, especially if you can get a different parse upon
re-entering some code.
 
0
jrm (1310)
2/25/2004 2:34:00 PM
Ulf Wiger wrote:
> 
> Having worked for quite a while in a large project, I'd like
> to stress the importance of consistency in _large_ software 
> systems. I'd much rather stick to one paradigm that supports
> 90% of my problems _well enough_, and most of the really 
> critical problems for my domain exceedingly well. I could 
> live with a language that does this, and proves completely
> unsuitable for some percentage of my problems.
> 
> I'd be wary of a language that supports all paradigms and 
> doesn't offer a clear preference regarding style.

Oz is multiparadigm, but it has a clear guideline about what paradigm 
should apply in what situation: use the weakest paradigm that does the 
job well.
Oz is a layered language, each layer supporting an additional, more 
powerful feature. For example, the transition from functional to 
imperative programming is just a handful of additional constructs, most 
of them being just syntactic sugar anyway (the only essential new 
construct is assignment).

I don't think that your comparison to Perl does justice to Oz: Perl 
offers tons of ways to express the same computation, without leaving the 
same (imperative) paradigm. In Oz, there's essentially only one way to 
express a computation once you have fixed the design and the algorithm.

The difference with Python is easier to spot: Python is a 
single-paradigm language, Oz a multi-paradigm one :-)

Regards,
Jo
--
Currently looking for a new job.

0
2/25/2004 6:50:35 PM
Jonadab the Unsightly One wrote:
> 
> Recursion (well, general recursion) and iteration are isomorphic. [...]

Ah, not quite.
The kind of recursion you're talking about (tail recursion) indeed is 
isomorphic to iteration. But there are other forms of recursion (sorry 
if I got the terminology wrong, it's been a while since university, and 
I'm using German terminology which may not match the terminology of your 
country):


Primitive recursion: where the function will recurse at most once in 
every flow graph, like in
   foo (int i)
     if even (i) then
       foo (i / 2)
     else
       foo (i + 1)
     end
     do_something_else
Translated to grammars, tail recursion is equivalent to regular 
expressions, primitive recursion is equivalent to context-free grammars.
Primitive recursion can be rewritten by using two loops, but it's less 
straightforward, and thinking about the code definitely becomes different.


Simple recursion: the function will recurse a fixed number of times. The 
Fibonacci function is an example (using typical FPL syntax for variation):
   fib 1 = 1
   fib 2 = 1
   fib n = fib (n - 1) + fib (n - 2)
The weaker forms of recursion can be rewritten as loops mechanically: 
they are straightforward transformations. Recursion of the Fibonacci
type cannot (I think).


At the next stage, we have things like Quicksort. You can rewrite it as 
a loop, but only if you're prepared to simulate recursion by manually 
maintaining a stack. (You don't always need a stack. Recursive-descent 
parsers for context-free languages typically need a stack, but if the 
language just makes sure that the number of opening and closing 
parentheses matches, you can reduce that stack to a single nesting depth 
counter.)


Then there's full recursion, like the Ackermann function where a 
recursive call uses another recursive call as a parameter. Here, we have 
arrived at the equivalent of Chomsky-0 grammars - in particular, it's 
nearly impossible to fathom what it actually does.
Here's Ackermann:
   ack (0, m) = m + 1
   ack (n, 0) = ack (n - 1, 1)
   ack (n, m) = ack (n - 1, ack (n, m - 1))
Of course, even this kind of function can be rewritten iteratively; 
incidentally, it doesn't even need a stack (taken from 
http://www-user.tu-chemnitz.de/~brum/Ackermann.pdf if you can read 
German, slightly transformed and written in a Pascal derivative that can 
declare local array variables with an open size):
   function Ackermann (m, n: int): int;
     var
       a, i: array[0..m] of ATyp;
       x: int;
     begin
       if m = 0 then Ackermann := n + 1 else begin
         x := m - 1;
         a[x + 1] := 2;
         a[x] := n + 3;
         i[x] := 0;
         repeat
           while x > 0 do begin
             Inc(i[x]);
             Dec(x);
             a[x] := 2;
             i[x] := 0;
           end;
           a[0] := a[0] + 2;
           Inc(i[x]);
           while (x < m) and (i[x] = a[x + 1] - 1) do begin
             Inc(x);
             a[x] := a[x - 1];
           end;
         until x = m;
       Ackermann := a[0] - 3;
     end;
   end;
Have fun trying to find out what this version of the Ackermann function 
does :-)
Ah well, I'll spoil:
   A(0, n) = n + 1
   A(1, n) = 2 + (n + 3) - 3
   A(2, n) = 2 * (n + 3) - 3
   A(3, n) = 2 ^ (n + 3) - 3
   etc. ...
i.e. the m parameter gives the "order" of the arithmetic operation to be 
performed.

Regards,
Jo
--
Currently looking for a new job.

0
2/25/2004 7:31:19 PM
Tuang wrote:
>>>I'm curious to what extent a solid understanding of functional
>>>programming would help me if I still had to do most real work in
>>>mainstream imperative languages.

It lets you know what you're missing, and when it's time to change 
languages :)

> -- which FP concepts/techniques/advantages are still
> available/usable/practical in real production work?

Recursion - always available, but not always tail-recursive :(
Memory Management - get yourself a garbage collector.  Why C/C++ 
programmers don't use garbage collectors as a matter of habit is beyond me.
Dynamic scope - to some degree, dynamic scope can be faked in 
traditional languages
Continuations - while you _could_ implement a stack-copy procedure to do 
continuations in other languages, it's probably not worth it.  However, 
you can do continuation-like things on a smaller scale.  You basically 
just have to have a "stack" on the heap.
Stateless programming - always the best way to structure your programs 
when possible

> -- any other practical advice for getting the most benefit out of FP
> study if most real production work will still be in mainstream
> languages for the foreseeable future? (particular books or online docs
> you recommend, language recommendations for this purpose, study
> techniques such as "I always prototype X-type apps in FP language Y
> before implementing in non-FP lang Z", things to be wary of, etc.)

I was working on this at one point.  I had some classes planned (see 
schedule http://www.eskimo.com/~johnnyb/fer/classes.pdf) but only did 
the first three because not many people wanted to meet for an hour at 
noon at the library for a lecture on functional programming :)  I have 
VideoCD's and hand-out materials for the first two classes if anyone is 
interested.

Jon

0
none
2/25/2004 8:21:28 PM
In comp.lang.scheme Marcin 'Qrczak' Kowalczyk <qrczak@knm.org.pl> wrote:
> On Wed, 25 Feb 2004 00:11:09 -0600, Johan wrote:
> 
> >> Side note: as Joe has recently experienced, C++'s grammar is also decidedly
> >> not context-free.  However, my experience is that people find it easier to
> >> deal with than Perl's grammar.  
> > 
> > Hrm.
> > 
> > Context sensitive? I had rather assumed it was lalr(k).
> 
> It's not. The parsing of
>    A<x>::B * y;
> (whether it declares a pointer y or multiplies something by y), where x
> is a constant of type int, can depend on the value of x.

It cannot depend on the value of x - only the definition of x.

-- 
	Sander

+++ Out of cheese error +++
0
sander569 (105)
2/26/2004 12:45:40 AM
In comp.lang.scheme Richard C. Cobbe <cobbe@ccs.neu.edu> wrote:
> Johan <johan.NO@SPAM.ccs.neu.PLEASE.edu> writes:
> 
> > Richard C. Cobbe wrote:
> >> Side note: as Joe has recently experienced, C++'s grammar is also decidedly
> >> not context-free.  However, my experience is that people find it easier to
> >> deal with than Perl's grammar.
> >
> > Hrm.
> >
> > Context sensitive? I had rather assumed it was lalr(k).
> 
> Possibly, for arbitrarily large values of k.  Joe's fond of stating that
> parsing C++ requires infinite lookahead.  I haven't looked at the problem
> in great depth, just enough to discover that it's officially Very Hard and,
> when you get down to it, not that intrinsically interesting.

Real compilers have so far had relatively small integer limits on some 
"parameters" of the language, though - and there are symbol-name limits - 
so in practice it doesn't get to be that bad.  

> 
> Richard

-- 
	Sander

+++ Out of cheese error +++
0
sander569 (105)
2/26/2004 12:47:39 AM
Sander Vesik wrote:

> In comp.lang.scheme Marcin 'Qrczak' Kowalczyk <qrczak@knm.org.pl> wrote:
> 
>>On Wed, 25 Feb 2004 00:11:09 -0600, Johan wrote:
>>
>>
>>>>Side note: as Joe has recently experienced, C++'s grammar is also decidedly
>>>>not context-free.  However, my experience is that people find it easier to
>>>>deal with than Perl's grammar.  
>>>
>>>Hrm.
>>>
>>>Context sensitive? I had rather assumed it was lalr(k).
>>
>>It's not. The parsing of
>>   A<x>::B * y;
>>(whether it declares a pointer y or multiplies something by y), where x
>>is a constant of type int, can depend on the value of x.
> 
> 
> It cannot depend on the value of x - only the definition of x.
> 

With C++ template specialization, I think it does indeed depend on the 
(compile-time) value of x. Yuk...

-thant

0
thant (332)
2/26/2004 1:20:41 AM
Sander Vesik <sander@haldjas.folklore.ee> writes:

> In comp.lang.scheme Marcin 'Qrczak' Kowalczyk <qrczak@knm.org.pl> wrote:
>> On Wed, 25 Feb 2004 00:11:09 -0600, Johan wrote:
>> 
>> >> Side note: as Joe has recently experienced, C++'s grammar is also decidedly
>> >> not context-free.  However, my experience is that people find it easier to
>> >> deal with than Perl's grammar.  
>> > 
>> > Hrm.
>> > 
>> > Context sensitive? I had rather assumed it was lalr(k).
>> 
>> It's not. The parsing of
>>    A<x>::B * y;
>> (whether it declares a pointer y or multiplies something by y), where x
>> is a constant of type int, can depend on the value of x.
>
> It cannot depend on the value of x - only the definition of x.

No, one can do silly things like defining A as a template whose parameter
is an integer, not a type, then specialize that template for specific
values of the parameter.  Such specializations can have, so far as I
remember, completely independent signatures and behavior.

I believe Thant is correct, though, in that this works only if the compiler
can determine the value of x at compile time; the following will presumably
fail to compile:

void foo(int i)
{
    A<i>::B * y;

    // ...
}

This may work, though:

void foo()
{
    static const int i = 4;
    A<i>::B * y;

    // ...
}

Ain't C++ grand?

Richard
0
cobbe (63)
2/26/2004 6:20:10 PM
Joachim Durchholz <joachim.durchholz@web.de> wrote in message news:<c1iqni$ogi$1@news.oberberg.net>...
> Oz is multiparadigm, but it has a clear guideline about what paradigm 
> should apply in what situation: use the weakest paradigm that does the 
> job well.
> Oz is a layered language, each layer supporting an additional, more 
> powerful feature. For example, the transition from functional to 
> imperative programming is just a handful of additional constructs, most 
> of them being just syntactic sugar anyway (the only essential new 
> construct is assignment).
> 
> I don't think that your comparison to Perl does justice to Oz: Perl 
> offers tons of ways to express the same computation, without leaving the 
> same (imperative) paradigm. In Oz, there's essentially only one way to 
> express a computation once you have fixed the design and the algorithm.

You are describing Python here.

> The difference with Python is easier to spot: Python is a 
> single-paradigm language, Oz a multi-paradigm one :-)

I will pretend I have not read that ;)
0
2/27/2004 6:37:55 AM
Michele Simionato wrote:

> Joachim Durchholz <joachim.durchholz@web.de> wrote:
> 
>> In Oz, there's essentially only one way to express a computation
>> once you have fixed the design and the algorithm.
> 
> You are describing Python here.

Yes - but the difference is that Oz offers paradigms that aren't
available in Python (AFAIK).

>> The difference with Python is easier to spot: Python is a 
>> single-paradigm language, Oz a multi-paradigm one :-)
> 
> I will pretend I have not read that ;)

Hmm... if you disagree, please share your insights.
I'd like to understand the issue better, being flatly told that I'm
wrong doesn't help me with that (though I like the way you worded it *g*).

On the "Python is multi-paradigm" issue itself:

What paradigms (other than imperative OO) does Python offer?
From previous undisputed messages in this newsgroup, I was under the
impression that Python's "One Way to Do Things" philosophy includes the
paradigm. At least those who tried to apply functional idioms have
reported that the language was more obstacle than help...

Regards,
Jo
--
Currently looking for a new job.

0
2/27/2004 1:22:50 PM
Joachim Durchholz <joachim.durchholz@web.de> wrote in message news:<c1ng8q$qvo$1@news.oberberg.net>...
> > Joachim Durchholz <joachim.durchholz@web.de> wrote:
> > 
> >> In Oz, there's essentially only one way to express a computation
> >> once you have fixed the design and the algorithm.
> > 
> > You are describing Python here.
> 
> Yes - but the difference is that Oz offers paradigms that aren't
> available in Python (AFAIK).
> 
> >> The difference with Python is easier to spot: Python is a 
> >> single-paradigm language, Oz a multi-paradigm one :-)
> > 
> > I will pretend I have not read that ;)
> 
> Hmm... if you disagree, please share your insights.
> I'd like to understand the issue better, being flatly told that I'm
> wrong doesn't help me with that (though I like the way you worded it *g*). 

Honestly, I thought you were joking, due to the smile in your original
post.
 
> On the "Python is multi-paradigm" issue itself:
> 
> What paradigms (other than imperative OO) does Python offer?

- imperative programming (you can use Python as if it was Basic)
- procedural programming (you have bona fide functions, not methods)
- structured programming (while, for blocks, etc)
- modular programming (Python was inspired by Modula-2, I think)
- functional programming (first class functions, read-only closures)
- event driven programming (in the GUI) 
- object oriented programming (single dispatch but fully dynamic objects)
- metaprogramming with a MOP (including inheritance of metaclasses)

Other paradigms (e.g. declarative programming) are not supported by
the language but are easily implementable (see for instance David
Mertz's column http://www-106.ibm.com/developerworks/linux/library/l-cpdec.html).

> From previous undisputed messages in this newsgroup, I was under the
> impression that Python's "One Way to Do Things" philosophy includes the
> paradigm. 

Python says that there should be an *obvious* way: this obvious way
more often than not is OOP (at least in large applications). However,
Python is not Java, which forces you to write a class to print
"Hello, World!". Python's dogma is "readability counts", so you should
be using the paradigm which is simplest to read and to understand for
your maintainers. For instance, there is no reason to bloat a ten-line
script with classes: the old-fashioned "spaghetti code programming
paradigm" is more than appropriate for a ten-line script.
Also, functions are quite appropriate in various circumstances.
For instance, Python's "os.path.walk" primitive to walk through
your filesystem - which implements the visitor pattern - requires a
function as visitor, not an object. Of course Python is not a "pure" FP
language, so this visitor acts via side effects and is less elegant
than the Scheme equivalent, find-files. Nevertheless, it does the job,
and it is not object oriented. If you look at the modules of the
standard library, most of them are actually not OOP. On the other
hand, Tkinter, the GUI, is fully object oriented with multiple
inheritance, mixins, etc. So, using the minimal paradigm appropriate
for the job is the obvious way in Python, just as in Oz.

> At least those who tried to do apply functional idioms have
> reported that the language was more obstacle than help...

I myself reported that FP in Python is not supported as well as OOP;
however, this does not mean that you cannot do functional programming
in Python or that you never do it. You do, but not as extensively as
you would in Scheme or in other languages. The little support for FP
in Python is enough to do simple things and enough to make you want
to learn a real FP language, so it is quite good ;)

Further suggested readings:

This recent thread on c.l.py about closures:

http://groups.google.it/groups?dq=&hl=it&lr=&ie=UTF-8&threadm=tyffzcwbm7x.fsf%40pcepsft001.cern.ch&prev=/groups%3Fdq%3D%26num%3D25%26hl%3Dit%26lr%3D%26ie%3DUTF-8%26group%3Dcomp.lang.python%26start%3D25

David Mertz's articles on functional programming in Python (this is
the first):
http://www-106.ibm.com/developerworks/linux/library/l-prog.html
0
2/28/2004 7:38:08 AM
michele.simionato@poste.it (Michele Simionato) writes:

> > I would think it would be a mistake to write a program under the
> > assumption that an unknown future maintainer will necessarily be
> > bound by a certain suboptimal mindset.
> 
> I see that you come from Perl ;) In Python, I can make strong
> assumptions about the style used by my future maintainers and about
> their mindset, and these assumptions are most likely correct. 

I can't escape the feeling that you are somehow trying to see this as
a good thing, thinking to yourself, "Sure, all the Python programmers
I've known are impoverished in their understanding, but since they're
all impoverished in the _same way_, it's good."

I would like to think there are programmers who know Python but are
not limited in these same ways in their thinking, and I would surely
rather have said programmers maintaining the code than the
all-OO-all-the-time Python programmers of whom you speak.

> > Okay, but what about a competent programmer who knows several
> > languages (as all really good programmers do) but happen at the
> > moment to be using Python, because the code is written in Python?
> 
> Here you are making unwarranted assumptions about the capabilities of
> your future maintainers. A Pythonista would write his code in such a
> way that it can be read by an *average* Python programmer.
>
> Suppose you can write a class in 10 lines using regular boring
> Python, and in 5 lines using a clever trick: an experienced Python
> programmer would prefer the 10 lines boring version. "Explicit is
> better than implicit, flat is better than nested, readability
> counts, etc. etc."

This would be valid, up to a point, but I think that when you start
preferring twisted and unclear code just because it's objectual rather
than functional, the path of reason is far from where you tread.

> > Python aims to be a general-purpose VHLL; it's going to eventually
> > have to have the scoping needed to provide decent support for FP
> 
> I hope so, but I think it is unlikely in the near future. For instance,
> we waited until version 2.1 to get nested scopes, since Guido didn't care 
> particularly about them.

Does Guido have exclusive control of Python?  I was somehow under the
impression that Python was OSS.

> > > Coming back OT, i.e. to Scheme, I would say in Scheme (or Lisp)
> > > these issues are less relevant, since in these languages you
> > > have the opportunity to modify the language itself to follows
> > > the paradigm you have in mind.
> > 
> > That's something I'd really like to explore; sounds incredibly
> > useful.  However, I think I'd better get a more solid handle on
> > continuations first.
> 
> You should study macros if you want to play with the language;
> continuations are control structures to implement lightweight
> threads et similia.
  
Macros (if they are the same as what I normally understand the word
"macros" to mean in non-Scheme contexts) I already understand.  I want
to understand continuations in general and CPS in particular, and
their ramifications.  But that's another thread.
  
> > Sure, there are some problems that are difficult to think about
> > iteratively, because of their complexity.  Still, ultimately, the
> > recursion boils down to iteration at the machine code level.  Or
> > the iteration boils down to recursion, depending on how you look
> > at it.  Really, they're the same.
> 
> That's true in principle, but it is NOT convenient to think in this
> way (just IMHO, of course).

I guess I don't understand the value of getting bogged down thinking
about details such as, "am I going to implement this recursively, or
iteratively?"  It doesn't matter; they're the same.  Not only do they
accomplish the same thing, they accomplish it in pretty much exactly
the same way, plus or minus a little syntax and a function call or two,
neither of which matters to my understanding of how the code works.

-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
0
jonadab (30)
2/29/2004 12:46:06 AM
Joachim Durchholz <joachim.durchholz@web.de> writes:

> Jonadab the Unsightly One wrote:
> > Recursion (well, general recursion) and iteration are
> > isomorphic. [...]
> 
> Ah, not quite.  The kind of recursion you're talking about (tail
> recursion) indeed is isomorphic to iteration.

I was talking not just about tail recursion specifically but general
recursion in general.  I mean "general recursion" in the sense defined
by Hofstadter, to wit: the recursion is known to "bottom out" (i.e.,
terminate) for all inputs, but there may or may not be an upper bound
to the recursion depth, a level at or before which it will terminate
or bottom out in all cases.  If there is such an upper bound known in
advance that applies for all inputs, or that can be calculated as a
simple algebraic function of the inputs prior to starting the
recursion, then that's primitive recursion (i.e., can be implemented
in BlooP); if not, it's not primitive, but it's still general
recursion.

> Primitive recursion: where the function will recurse at most once in
> every flow graph, like in
>    foo (int i)
>      if even (i) then
>        foo (i / 2)
>      else
>        foo (i + 1)
>      end
>      do_something_else

This is not general recursive, because it never terminates.  (If you
insert a base case for 1, then it becomes general recursive and, in
fact, primitive recursive.)

> Primitive recursion can be rewritten by using two loops, but it's
> less straightforward, and thinking about the code definitely becomes
> different.

The above is not general recursive (and *certainly* not primitive
recursive) because it doesn't terminate, but with the base case of 1,
it becomes iterative with almost no effort:

   foo (int i)
     while i > 1
       if even (i) then i = i / 2
       else i = i + 1
     do_something_else

> Simple recursion: the function will recurse a fixed number of
> times. The Fibonacci function is an example (using typical FPL syntax
> for variation):
>    fib 1 = 1
>    fib 2 = 1
>    fib n = fib (n - 1) + fib (n - 2)
> The weaker forms of recursion can be rewritten as loops mechanically:
> they are straightforward transformations. Recursion of the Fibonacci
> type cannot (I think).
  
You think wrong.  I implemented Fibonacci using a FOR loop in GWBASIC
before I learned how to fake function calls using GOSUB and using an
array as a stack to handle the arguments.   It requires an extra
variable, but it's entirely straightforward.

10 INPUT N
20 J = 0
30 K = 1
40 FOR C = 2 TO N
50     FIB = J + K
60     J = K
70     K = FIB
80     NEXT C
90 PRINT FIB

Or, in Perl:

sub fibonacci {
  my ($j, $k) = (0, 1);
  ($j, $k) = ($k, $j + $k) for 2..shift;
  return $k
}

Except that in Perl we would probably write it to take a list of
arguments and return the fibonacci number of each:

sub fibonacci {
  return map {
    my ($j, $k) = (0, 1);
    ($j, $k) = ($k, $j + $k) for 2..$_;
    $k } @_;
}

Then we might apply the orcish maneuver to eliminate the need for
redundantly calculating the same numbers over and over again, and at
that point we're doing something that isn't easy to do in an iterative
solution, so we'd probably go back to a recursive setup.

Then again, what we'd really do is use Math::Fibonacci;

There *are* forms of recursion that are difficult to transform to
iteration.  Usually this involves significant amounts of state being
passed to and manipulated by the function each time it is called.

> At the next stage, we have things like Quicksort.
  
Yes, that's a good example.  It *can* be expressed iteratively, but
it is significantly more clear when expressed recursively.
  
-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
0
jonadab (30)
2/29/2004 1:38:22 AM
Jonadab the Unsightly One wrote:

> Joachim Durchholz <joachim.durchholz@web.de> writes:
> 
>> Jonadab the Unsightly One wrote:
>> 
>>> Recursion (well, general recursion) and iteration are isomorphic.
>>> [...]
>> 
>> Ah, not quite.  The kind of recursion you're talking about (tail 
>> recursion) indeed is isomorphic to iteration.
> 
> I was talking not just about tail recursion specifically but general 
> recursion in general.  I mean "general recursion" in the sense
> defined by Hofstadter, to wit: the recursion is known to "bottom out"
> (i.e., terminate) for all inputs, but there may or may not be an
> upper bound to the recursion depth, a level at or before which it
> will terminate or bottom out in all cases.

Then your idea of the equivalence of iteration and recursion doesn't
hold up.

>> At the next stage, we have things like Quicksort.
> 
> Yes, that's a good example.  It *can* be expressed iteratively, but 
> it is significantly more clear when expressed recursively.

It's not just a question of clarity.
Actually Quicksort can be expressed quite clearly in an iterative
fashion, to the point that some people might reasonably say that the
iterative version is easier to understand.

The relevant difference is really the presence of a stack. (IMHO. YMMV. 
etc.)

Regards,
Jo
--
Currently looking for a new job.

0
2/29/2004 2:36:09 AM
On 28 Feb 2004 20:38:22 -0500, Jonadab the Unsightly One posted:
> Joachim Durchholz <joachim.durchholz@web.de> writes:
> 
>> Jonadab the Unsightly One wrote:
>> 
>> Simple recursion: the function will recurse a fixed number of
>> times. The Fibonacci function is an example (using typical FPL syntax
>> for variation):
>>    fib 1 = 1
>>    fib 2 = 1
>>    fib n = fib (n - 1) + fib (n - 2)
>> The weaker forms of recursion can be rewritten as loops mechanically:
>> they are straightforward transformations. Recursion of the Fibonacci
>> type cannot (I think).
>   
> You think wrong.  I implemented Fibonacci using a FOR loop in GWBASIC
> before I learned how to fake function calls using GOSUB and using an
> array as a stack to handle the arguments.   It requires an extra
> variable, but it's entirely straightforward.
> 
> 10 INPUT N
> 20 J = 0
> 30 K = 1
> 40 FOR C = 2 TO N
> 50     FIB = J + K
> 60     J = K
> 70     K = FIB
> 80     NEXT C
> 90 PRINT FIB

You've used a different algorithm, and it's probably unreasonable to expect
a mechanical translation from the original form.

Your version is equivalent to:

fib x = fib' x 0 1
  where
      fib' 0 j k = j
      fib' i j k = fib' (i-1) k (j+k)

which is, indeed, very easy to translate to an iterative form.
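As a sketch of that last step (in Python; the names are mine, not from the thread), the accumulator version `fib'` transcribes almost line for line into a loop, while the original definition stays tree-recursive:

```python
def fib_rec(n):
    # Direct transcription of the original recursive definition
    # (tree recursion: two self-calls per step).
    return 1 if n <= 2 else fib_rec(n - 1) + fib_rec(n - 2)

def fib_iter(n):
    # mrak's fib': count n down, threading the pair (j, k) through
    # each step -- the "extra variable" from the GWBASIC version.
    j, k = 0, 1
    for _ in range(n):
        j, k = k, j + k
    return j
```

The two compute the same values, but they are genuinely different algorithms: the accumulator form is what makes the mechanical loop translation possible.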

mrak

-- 
realise your life was only bait for a bigger fish
	-- aesop rock
0
mwotton (39)
2/29/2004 8:04:20 AM
"Jonadab the Unsightly One" <jonadab@bright.net> wrote in message news:<ishq3e7l.fsf@jonadab.homeip.net>...
> michele.simionato@poste.it (Michele Simionato) writes:
> > In Python, I can make strong
> > assumptions about the style used by my future maintainers and about
> > their mindset, and these assumptions are most likely correct. 
> 
> I can't escape the feeling that you are somehow trying to see this as
> a good thing, thinking to yourself, "Sure, all the Python programmers
> I've known are impoverished in their understanding, but since they're
> all impoverished in the _same way_, it's good."

There is a difference between a programmer who knows one paradigm and
uses it because it is the only thing he knows, and a programmer who
knows a dozen paradigms and still uses that same paradigm because he
thinks that conformity to language guidelines and readability are
important. Python has good and bad programmers: the first follow
Python guidelines by choice, the others because they know nothing
else. In both cases, I can assume a uniformity of style. This is a
very valuable aspect of Python, especially in a corporate environment.
Of course, there are (both bad and good) programmers who do not like
the Python style, but then there is always Perl for them ;)

> I would like to think there are programmers who know Python but are
> not limited in these same ways in their thinking, and I would surely
> rather have said programmers maintaining the code than the
> all-OO-all-the-time Python programmers of whom you speak.

I think you misread me. Python programmers are not all-OO-all-the-time.
A lot of Python programming is just old fashioned procedural programming.
We don't use objects when objects would be overkill or would bloat the
code without any real benefit: that is Java, not Python. Nevertheless,
I said that Python favors OOP over FP, and that is true. The context I
was talking about was closures. I said: I wrote a Scheme program full
of closures, then I translated it to Python. It worked, but it was not
Pythonic. A Python programmer would have used classes instead of closures
to write the same program. Why so? Because the language has better support
for objects than for closures. And why does the language have better
support for objects than for closures? Because Guido and the majority of
Python developers think that closures are poor man's objects and not vice
versa. The design favors objects over closures because this was thought
to help readability and clarity. The choice can be questioned, but it is
a well-thought-out one that makes sense in the context of Python's design
guidelines. It is not a historical accident.
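For illustration (a sketch of mine, not code from the thread), here is the same bit of state written both ways.  Note that the `nonlocal` statement only arrived in Python 3; its absence at the time is precisely the kind of limited closure support being described.

```python
# Closure style, as one might transcribe from Scheme:
def make_counter():
    count = 0
    def bump():
        nonlocal count  # not available in the Python of 2004
        count += 1
        return count
    return bump

# Class style -- what a Python programmer would more likely write:
class Counter:
    def __init__(self):
        self.count = 0
    def bump(self):
        self.count += 1
        return self.count
```

Both work, but the class version is the one that reads as "Pythonic" to most Python programmers.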

> when you start
> preferring twisted and unclear code just because it's objectual rather
> than functional, the path of reason is far from where you tread.

Yes, but Python programmers want to avoid functional code precisely
because they think it is twisted and unclear! For instance, we no
longer use map and filter, since we have list comprehensions, which
are considered more explicit and easier to read (more "Pythonic" if
you wish).
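A small sketch of the two styles side by side (my example; the variable names are invented):

```python
xs = [1, 2, 3, 4, 5]

# Functional style: map over a filter.
doubled_evens = list(map(lambda x: 2 * x,
                         filter(lambda x: x % 2 == 0, xs)))

# List-comprehension style, same meaning:
doubled_evens_lc = [2 * x for x in xs if x % 2 == 0]
```

Both produce the same list; the argument is purely about which reads better.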

> Does Guido have exclusive control of Python?  I was somehow under the
> impression that Python was OSS.

Python programmers are self-selected to agree with all Guido's design
decisions ;) 
Guido is the BDFL (Benevolent Dictator for Life).
This means that he has the last word on Python development.
Sometimes he makes mistakes (by his own admission): he let lambda,
map, filter and reduce enter the language (all the functional stuff
was contributed code, not something Guido wrote himself). Since then,
he has been actively working to make the functional stuff less and
less used: for instance, he made list comprehensions faster than map
and filter, and he accepted sum into the built-ins with the idea of
replacing 95% of the use cases of reduce; the eventual plan is to
remove the functional stuff from the built-ins and put it in a library
module. FWIW, I second this plan, because it makes sense in the context
of Python. Of course, it would not make any sense in the context of
Scheme. Different languages make different choices, and the same
paradigm can be more or less relevant depending on the language. This
is all I have tried to say throughout this thread.
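(As a sketch of what that looks like: in modern Python the move did in fact happen, with reduce relocated to the functools module, while sum covers the common case directly.)

```python
from functools import reduce  # reduce now lives in a library module
import operator

xs = [1, 2, 3, 4]

# The reduce spelling of a sum:
total_reduce = reduce(operator.add, xs, 0)

# The built-in that replaced this use case:
total_sum = sum(xs)
```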

           Michele Simionato
0
2/29/2004 8:37:54 AM
Joe Marshall <jrm@ccs.neu.edu> writes:

> REBOL has context-sensitive parsing.  I agree that context-sensitive
> parsing sets a language apart, but that is not a good thing.  It
> makes it impossible to reason about code independently of context!
  
Only if the code is written in a perversely obfuscated manner (as was
the case with the snippets you saw).  Normal code is written so that
it's clear what it does, and although it may do something different
depending on context, you can generally see that at first glance.

Also, the contextual elements of Perl go beyond the mere syntax and
parsing; context is relevant to the semantics of what various
functions do.  (Note that this includes some of what most people would
call "operators".  In Perl, functions are operators and to a large
extent operators are functions.)

> It is nice to be able to take a code fragment and know what it does
> without having to know where it was lifted from.

Yes, certainly.  This is generally not a problem, unless the code in
question is deliberately obfuscated (which is usually done for fun, in
code that doesn't matter, not in production code).  I frequently write
functions that return something different in list context from what
they return in scalar context, but this is always clear from looking
at the function, without seeing the calling code.  In Perl6 we're
getting context-sensitive objects, so that a user-defined object can
define how it behaves in list context, in string context, in numeric
context, and so on.

> > I've never seen a claim to the effect that BNF is Turing-complete,
> > so why it should be a surprise that languages exist which are not
> > expressible in it is beyond me.
> 
> It is not Turing complete.  The reason it is surprising is that there
> is a powerful infrastructure (LEX & YACC) *specifically* designed to
> parse infix-based languages efficiently and quickly provided that you
> make some small concessions to left-to-right order.

I'm pretty sure left-to-right order is not the only concession that
would need to be made, and those "small concessions" would wreak havoc
on Perl's grammar.  It isn't really an infix-based language (though it
*has* infix operators -- but it also has list operators and other
things).

-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
0
jonadab (30)
2/29/2004 1:39:23 PM
"Jonadab the Unsightly One" <jonadab@bright.net> writes:

> Joe Marshall <jrm@ccs.neu.edu> writes:
>
>> REBOL has context-sensitive parsing.  I agree that context-sensitive
>> parsing sets a language apart, but that is not a good thing.  It
>> makes it impossible to reason about code independently of context!
>   
> Only if the code is written in a perversely obfuscated manner (as was
> the case with the snippets you saw).  Normal code is written so that
> it's clear what it does, and although it may do something different
> depending on context, you can generally see that at first glance.

I'm confused.  What is the advantage of context-sensitive parsing if
`normal code' doesn't use it and the code that does use it is
considered `perversely obfuscated'?

>> It is nice to be able to take a code fragment and know what it does
>> without having to know where it was lifted from.
>
> Yes, certainly.  This is generally not a problem, unless the code in
> question is deliberately obfuscated (which is usually done for fun, in
> code that doesn't matter, not in production code).  

`Generally' doesn't count.  If your parse is context sensitive, then
you need to reason about the runtime values in order to understand the
parse.  Even if you almost never use the context-sensitive elements,
you still have to reason about them because you cannot tell a priori
whether it makes a difference.

> In Perl6 we're getting context-sensitive objects, so that a
> user-defined object can define how it behaves in list context, in
> string context, in numeric context, and so on.

That'll be a hoot.  With first-class continuations the context can
vary at runtime.

-- 
~jrm
0
2/29/2004 4:42:13 PM
Joe Marshall <jrm@ccs.neu.edu> writes:

> That's the problem, then.  Nearly any string of noise has *some*
> meaning in Perl.

This is a myth, propagated by people who see JAPH signatures and
recoil in horror, apparently not realising they're deliberately
crafted that way as a game and quite atypical of normal code.

-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
0
jonadab (30)
2/29/2004 6:40:00 PM
On 29 Feb 2004, Michele Simionato <- michele.simionato@poste.it wrote:

> Yes, but Python programmers want to avoid functional code just because
> they think that it is twisted and unclear! For instance, we no more use 

Sorry, but this is just wrong.  Consider the writings of David Mertz,
for example.  That we don't see more Python code written in a functional
style may result from some people always saying "this is not Pythonic"
(whatever that may be), so that the people who wrote the functional code
(sadly) change it to the mainstream Python style.

> map and filter since we have list comprehensions that are considered
> more explicit and easier to read (more "Pythonic" if you wish).

But list comprehensions as they exist in Python were taken from
Haskell, so you can't say they are not functional.  They are just an
alternative way to write e.g. a mapping or a filter.



   KP

-- 
Men of science!  Many things are attributed to her,
but most of them unjustly.
                             Karl Kraus, 'Aphorismen'
0
sigurd
2/29/2004 6:54:16 PM
Marcin 'Qrczak' Kowalczyk <qrczak@knm.org.pl> writes:

> It got parsed as
>    if (possible {...} ($var1 && possible {...} $var2)) {...}
> which was not what I meant, and I could fix it only by
>    if ((possible {...} $var1) && ...
> or
>    if (possible(sub {...}, $var1) && ...
> neither of which I liked much. 

Your problem is &&.  The normal spelling of a logical and in Perl is
"and", which binds loosely and would do what you want in this case;
the && binds much more tightly, like in C, and is used when you want
to logically and two simple conditions together within a larger
expression.  What you wanted was this:

if (possible {...} $var1 and possible {...} $var2) {
   ...
}

A lot of C programmers make this mistake in Perl.  It's in all the
FAQs and tutorials and such like.  A related mistake is this one:

print foo, bar, baz || die "Wincing Error:  $!";

The or there should be spelled "or", not "||".  As this stands, print
only gets called if baz is true; otherwise, it dies.  (If die were
replaced with a function that returns, the return value would be the
third thing printed.)

The precedence table of Perl *is* fairly complex, and this is a
problem that will be addressed to some extent in Perl6.  A significant
part of the problem comes from early versions of Perl trying to do
things the same way as C (a misguided decision; C is a horrible
example to follow, *especially* for a language like Perl that does not
have a C-like concept of statements) and later versions of Perl trying
not to break existing Perl code that worked in the earlier versions.

Personally, I never use the && and || operators, because I can never
remember their exact precedence level.  So if I want that effect, I
parenthesize.  But usually, the and and or operators have the right
level of precedence for most logical and and or operations.  They
bind more loosely than the comma, but tighter than if and unless,
which is usually what is wanted:

open LOG, ">>$logfile" and print LOG $stuff and close LOG if $debug;

-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
0
jonadab (30)
2/29/2004 7:07:55 PM
cobbe@ccs.neu.edu (Richard C. Cobbe) writes:

> In addition to not being able to use standard toolsets like LALR(1)
> parser generators, keeping a language's grammar context-free also
> makes it much easier for the programmer to reason about the language
> and programs written in that language.  In Perl, one can easily get
> surprising parses,

I find that Perl's parses surprise me a *lot* less often than
languages like C.  Perl was expressly designed to do the least
surprising thing as often as possible.  (The one time this backfires
is when people come to Perl expecting it to behave like another
language (usually C), in which case they quickly learn otherwise.)

Now, granted, elisp's parses *never* surprise me, and I don't suppose
Scheme's will either, but that's because of the trivial and
straightforward syntax design of those languages.  (Precedence?  Who
needs it?  Everything is parenthesized.)  Of course, this virtually
requires all code to be written and maintained in an editor that does
paren-matching and automatic indentation.  Since I use Emacs (even
before Perl) that's not a problem for me (and, to be fair, Perl really
needs syntax highlighting and/or automatic indentation, though it can
do without the paren-matching better), but it is, as I'm sure you're
aware, a common source of complaints about these languages.  You can't
work with them in Notepad or whatever.

I personally don't mind this way of doing things (i.e., the lisp way,
put explicit parens around everything), but there are things to be
said for other approaches.  The Perl approach works pretty well,
despite the somewhat hairy precedence table.

> When I write in a language with a more straightforward syntax, like
> Scheme, C, C++, or Java, I'll get a number of errors at compile time
> complaining of trivial syntax mistakes: missing close paren in
> Scheme, or a missing semicolon in C.  These are mildly annoying, but
> I think there are strong advantages to this approach.  I would far
> rather be clearly informed of an error, however trivial, than risk
> the possibility of the language misinterpreting my program and
> producing an incorrect answer (with no indication that it's
> incorrect).  * * * * *

I can see your point about lisp-based languages like Scheme, but C and
C++ are horrific monstrosities.  Many of the worst and most surprising
things about Perl, including the one someone complained about
upthread, are a direct result of trying to do things the way C does
them.  Larry intends to shed some of that baggage in Perl6, though I'm
afraid he won't get rid of all of it.  I won't comment on Java, since
I don't really know it well enough to say.

There are some things I don't like about Perl.  The object model sucks
gravel, for instance.  (Inform has me spoiled...)  There are too many
levels in the precedence table, a holdover of superfluous C
compatibility.  The point someone else brought up about tail recursion
not being optimized is a valid one, and there are other things Perl
can learn from FP languages.  We need real garbage collection.  (We're
getting it in Perl6, but Perl5 just has reference counting, which is a
bummer for maintaining certain kinds of data structures.)  There's no
decent standard works-on-all-platforms-with-a-GUI GUI library (no, Tk
does *not* count), and there should be.  (It should use native widgets
on platforms that have them (Win32, Mac) and whatever it finds
installed (Gtk, Qt, whatever) otherwise, spitting a fatal error only
if no such library is found.  The programmer should not need to care
which widgets will ultimately get used when he writes the code.)  Perl
doesn't have the textbuffers and markers that are so darned convenient
in elisp.  The most popular Windows port (the ActiveState one) doesn't
even have a working CPAN module!  Also, *far* too many of the modules
on CPAN use C code (and thus require a number of extra non-Perl things
in order to install, such as a C compiler) that shouldn't need to do so.

But objections like "Perl is line noise" are just misinformed, and
complaints like "Hey, you can't express its grammar in BNF, so it must
be counterintuitive" are just plain silly.  If you want to criticize
Perl, understand it first.  You don't see me waltzing in here and
criticizing Scheme, do you?  There are very good reasons why Perl's
grammar does not reduce to BNF.  The language would be significantly
impoverished if this were changed by any means other than extending
BNF to be Turing-complete.

-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
0
jonadab (30)
2/29/2004 7:45:44 PM
Joe Marshall <jrm@ccs.neu.edu> writes:

> "Jonadab the Unsightly One" <jonadab@bright.net> writes:
> 
> > I've heard that we're getting continuations in Perl6, and I'm excited
> > about it.
> 
> *That* should be amusing.  Continuations and context-sensitivity are
> an interesting mix, especially if you can get a different parse upon
> re-entering some code.

You wouldn't get a different parse on re-entering unless you're using
eval to do so explicitly (which you can do now, e.g. in a loop, and
the string can change between parsings potentially, but this is not a
common technique; in fact, the only time I've seen it used in practice
was in a round of Perl golf; eval is more commonly used to trap
errors, but in that case the block form is usually used, which does
not reparse the code each time it is evaluated).

Parsing is done at compile time; as I understand it, continuations
would be a run-time thing.  (Yes, compile time and run time are
juxtaposed with the standard perl, but they are distinct steps.)

-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
0
jonadab (30)
2/29/2004 7:55:51 PM
"Jonadab the Unsightly One" <jonadab@bright.net> writes:

> But objections like "Perl is line noise" are just misinformed,

Oh? 

use strict; $\=$/;$_=q&&;s()/#?=>$":}\~>\\!;/;$/=~s~
~s{;};$")!{$"\\:<_!;~e;y{%*-.$'&(#-<}$@-~$s;{y(;-?)=
r-{)!=s;y$T-_$`-|$;y}{-~}l-\}};s{!}$Y<$g&&redo}print

or

> $;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
> split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/


The Perl style doc says ``put in some whitespace to make it look a
little less like line noise''

> and complaints like "Hey, you can't express its grammar in BNF, so
> it must be counterintuitive" are just plain silly.

I believe the upthread comment was that making Dr. Scheme understand
Perl would be trivial with a BNF grammar.  I pointed out that the
grammar is not BNF, therefore it would be more difficult to extend
Dr. Scheme (I thought it would be essentially impossible, but Phillipe
says otherwise).

However, since the grammar is context-sensitive, it is therefore
*impossible* to understand a fragment of Perl without having the
surrounding context.  This is the definition of context-sensitive.

-- 
~jrm
0
2/29/2004 9:58:48 PM
Andreas Rossberg <rossberg@ps.uni-sb.de> writes:

>Kevin Glynn wrote:
>> 
>>>For example, in Oz, higher-order terms have identity, which means that
>>>they do not obey all the same laws that higher-order terms obey in a
>>>pure functional and/or pure logic language.
>> 
>> I don't understand what you mean by "Higher-order terms have
>> identity".  Can you clarify?

I mean it in the same sense that people sometimes say that in OOP languages
"objects have identity".  In OOP languages, you can take two objects and
compare them with "==" or equivalent to see if they are actually the same
object.  Likewise in Oz you can compare two higher-order terms with "=="
to see if they are in some sense the same.

>I suppose Fergus is referring to the fact that Oz provides 
>token-equality on functions. From a more semantic point of view that is 
>pretty nasty, since to some extent, implementation details become 
>observable in programs.

Yes.  Actually I thought Oz's notion of equality on higher-order terms
was different than token equality.  I thought it was more similar to OOP
object identity, with the identity of two freshly constructed closures
being different.  Does the Oz equivalent of

	(\ x = x + 1) == (\ x = x + 1)

return false or true?

>On the other hand, many "native" FP languages have something similar, 
>e.g. OCaml with its == operator, or Lisp with its jungle of EQsomething 
>operators. So I'm not convinced that this feature has too much to do 
>with questions of paradigm.

I think it would be fair to say that languages like OCaml and Lisp
combine the paradigms of imperative and functional programming.  For
pure functional programming languages, such as Haskell and Clean,
higher-order terms do not have identity.
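(As an aside, and a sketch of my own rather than anything from the thread: the same identity-based behaviour is easy to observe in Python, another language that mixes the paradigms.  `==` on two freshly constructed, textually identical closures falls back to object identity.)

```python
f = lambda x: x + 1
g = lambda x: x + 1

# Function objects compare by identity, so two freshly constructed
# closures are unequal even when textually identical...
distinct = (f == g)

# ...although they are extensionally equal:
same_result = (f(3) == g(3))
```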

-- 
Fergus Henderson <fjh@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
0
fjh (268)
2/29/2004 11:47:28 PM
"Jonadab the Unsightly One" <jonadab@bright.net> writes:

> cobbe@ccs.neu.edu (Richard C. Cobbe) writes:
>
>> In addition to not being able to use standard toolsets like LALR(1)
>> parser generators, keeping a language's grammar context-free also
>> makes it much easier for the programmer to reason about the language
>> and programs written in that language.  In Perl, one can easily get
>> surprising parses,
>
> I find that Perl's parses surprise me a *lot* less often than
> languages like C.  Perl was expressly designed to do the least
> surprising thing as often as possible.  (The one time this backfires
> is when people come to Perl expecting it to behave like another
> language (usually C), in which case they quickly learn otherwise.)

Well, if you're used to it, then you're used to it.  Me, I get real tired
of things like

        print 3 + 5;

being misinterpreted with no warning.  (Perl does the right thing here?
I'm sorry, but why on earth would I be interested in print's return code?)

I'm also annoyed by the counter-intuitive interpretation of @foo[3].  I
know that this is an array slice of one element, but in just about every
other PL I've used that uses the [] syntax for array indexing, you append
the [] to the name of the array, which is `@foo', not `$foo'.

Incidentally, the array-slicing issue is one place where a good type system
would help immensely.  Unfortunately, the `Do What I Mean' form of fault
tolerance is built into the language at such a deep level that any sort of
reasonable type system is, so far as I can tell, completely impossible.

> I personally don't mind this way of doing things (i.e., the lisp way,
> put explicit parens around everything), but there are things to be
> said for other approaches.  The Perl approach works pretty well,
> despite the somewhat hairy precedence table.

The precedence table, while hairy, isn't the problem.  I address that with
precisely the same attitude that I use in C, C++, and Java: know the
precedence rules for the common cases, and explicitly parenthesize
everything else.  I can live with that.

> There are some things I don't like about Perl.

<SNIP>

You raise a number of criticisms of Perl, most of which address issues that
I don't feel comfortable discussing; my knowledge of those parts of Perl is
not very deep.  I'll just mention those that I do understand.

> We need real garbage collection.  (We're getting it in Perl6, but Perl5
> just has reference counting, which is a bummer for maintaining certain
> kinds of data structures.)  

Yup.  For the record, there are 4 main methods for non-concurrent,
non-interleaved, non-realtime GC.

  - mark-and-sweep (McCarthy, April 1960)
  - ref count (George Collins, December 1960)
  - stop-and-copy (Minsky, 1963; Fenichel and Yochelson, 1969; Cheney,
    1970)
  - generational (Lieberman and Hewitt, 1983)
  - there's also been some recent work on oldest-first collection, but I
    don't know as much about that.

Detailed references available on request.  (You might drop me an email; I'm
following this thread only intermittently.)

The point of all this history is to say that we've had _44 years_ to work
on getting the other GC algorithms fast.  That's an awfully long time,
especially for a field as young as computer science, and we've made a *lot*
of progress.  Claiming that non-refcount-GC is too slow is, in general,
being willfully ignorant of current technology.

> There's no decent standard works-on-all-platforms-with-a-GUI GUI library

We can argue for days over where the dividing line between the language and
the library is, but I'll classify this as a library issue.  (I call it a
library issue because you, or someone else interested, could very well
write a be-all-and-end-all GUI toolkit without affecting the core language.
That is, you could do it without having to change the compiler,
interpreter, or whatever Perl uses these days).

> Perl doesn't have the textbuffers and markers that are so darned
> convenient in elisp.

Again, a library issue.

> Also, *far* too many of the modules on CPAN use C code (and thus require
> a number of extra non-Perl things in order to install, such as a C
> compiler) that shouldn't need to do so.

This is interesting.  Perhaps it would be fruitful to investigate *why* all
these extensions feel it necessary to call out to C code.  This may well
demonstrate shortcomings of Perl's core language.

> But objections like "Perl is line noise" are just misinformed.

Well, maybe, but I never said that.

> [A]nd complaints like "Hey, you can't express its grammar in BNF, so it
> must be counterintuitive" are just plain silly.

See above arguments.  I'm not saying it's counterintuitive, I'm saying that
it's easy for the programmer to be misinterpreted.

(Just out of curiosity, has anyone actually tried to determine whether
Perl's grammar is unambiguous?  For that matter, has anyone ever actually
tried to prove any non-trivial static properties about the language at all?)

> If you want to criticize Perl, understand it first.

I will admit that I haven't seriously programmed in Perl in quite a while.
But I have written enough code in Perl to understand exactly why I don't
like the language.

> There are very good reasons why Perl's grammar does not reduce to BNF.

Just for my curiosity, what are they?  The only justifications I've heard
are some vague mumblings about wanting Perl to work more like natural
languages.  Well, a) there are good reasons why programming languages
generally don't work like natural languages, and b) I'm beginning to think
that Larry Wall doesn't understand natural languages as well as he claims,
his degree in linguistics notwithstanding.  (Details available on request;
I don't want to digress into a discussion of his linguistics abilities.)

> The language would be significantly impoverished if this were changed by
> any means other than extending BNF to be Turing-complete.

In what way?  Sure, it might have to have a slightly more verbose syntax,
but I find it hard to see that as `significantly impoverishing' the
language.

> $;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
> split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/

So, if Perl isn't just line noise, why do so many Perl hackers pride
themselves on their ability to do stuff like this?  I distrust communities
that pride themselves on their understanding of some sort of shared arcane
knowledge.

Richard
0
cobbe (63)
3/1/2004 1:39:44 AM
cobbe@ccs.neu.edu (Richard C. Cobbe) writes:

>
> Yup.  For the record, there are 4 main methods for non-concurrent,
> non-interleaved, non-realtime GC.
>
>   - mark-and-sweep (McCarthy, April 1960)
>   - ref count (George Collins, December 1960)
>   - stop-and-copy (Minsky, 1963; Fenichel and Yochelson, 1969; Cheney,
>     1970)
>   - generational (Liebermann and Hewitt, 1983)
>   - there's also been some recent work on oldest-first collection, but I
>     don't know as much about that.
>

    - Replicating (Nettles and O'Toole, 1993)

@inproceedings{ nettles93realtime,
    author = "Scott Nettles and James O'Toole",
    title = "Real-Time Replication Garbage Collection",
    booktitle = "{SIGPLAN} Conference on Programming Language Design and Implementation",
    pages = "217-226",
    year = "1993",
    url = "citeseer.nj.nec.com/nettles93realtime.html" }

While replicating GC was designed to be concurrent, there is no reason
you couldn't stop the main process while GC'ing.  In any case, it is
different from mark-sweep, refcount, and stop-and-copy.

> Detailed references available on request.  (You might drop me an email; I'm
> following this thread only intermittently.)
>
> The point of all this history is to say that we've had _44 years_ to work
> on getting the other GC algorithms fast.  That's an awfully long time,
> especially for a field as young as computer science, and we've made a *lot*
> of progress.  Claiming that non-refcount-GC is too slow is, in general,
> being willfully ignorant of current technology.

And rather ignorant of reference count GC, too.

-- 
~jrm
0
3/1/2004 3:42:43 AM
Joe Marshall <prunesquallor@comcast.net> writes:

> "Jonadab the Unsightly One" <jonadab@bright.net> writes:
> > Joe Marshall <jrm@ccs.neu.edu> writes:
> >> It makes it impossible to reason about code independently of context!
> > Only if the code is written in a perversely obfuscated manner.
> > Normal code is written so that it's clear what it does.
> 
> I'm confused.  What is the advantage of context-sensitive parsing if
> `normal code' doesn't use it and the code that does use it is
> considered `perversely obfuscated'?
  
Sorry; I haven't made myself clear.  Normal code often uses context,
but it is clear from looking at the code what it's doing with the
context.  

For example, a function foo may be written so that in list context it
takes a list of objects, finds the ones that match a certain
criterion, extracts a particular value from each one that does match,
and returns a list consisting of these extracted values; the same
function in scalar context may take a similar list of objects, examine
it, counting _how many_ match the same criterion, and return that
number, without extracting any values.

This would all be clear from looking at the function, and the
documentation for the function would also state that this is what the
function does, and by looking at the calling code you'd be able to
tell which context the function was being called in in that instance.

This particular example doesn't warp the syntax based on the context,
but it does alter the meaning of the function call.  It is also
possible for a block of code to, based on context in a similar
fashion, have an impact on how the surrounding code is parsed, but in
Perl5 subroutines cannot easily do this due to the automatic
flattened-list context that all non-prototyped user-defined
subroutines supply to their arguments.  I'm not sure whether this will
change in Perl6, or whether the overhaul of function prototypes will
allow for prototyped functions to achieve this effect.  Anyway, a hunk
of code that's not walled off in a subroutine can do this.  But normal
code does not do this in an unclear or confusing way.

In one sense, I suppose that Joe's statement, "It makes it impossible
to reason about code independently of context!", is true of such code,
to the extent that when you reason about the code you do think about
context.  But if that is what he meant, then saying that "It makes it
impossible to reason about code independently of context!" is
equivalent to saying that the way Haskell does things makes it
impossible to reason about code independently of functions, which is
technically true but practically meaningless.  I assumed that he
meant, rather, that it would be impossible to think about a given
piece of code without thinking about all the ways in which it might be
used, or all possible calling code, or something along those lines,
and that is not the case.
  
> > Yes, certainly.  This is generally not a problem, unless the code
> > in question is deliberately obfuscated (which is usually done for
> > fun, in code that doesn't matter, not in production code).
> 
> `Generally' doesn't count.  If your parse is context sensitive, then
> you need to reason about the runtime values in order to understand the
> parse.  

You're assuming that you can't tell at a glance whether a runtime
value might alter the parse.  You always can.  If you do run into code
like that in real production code, you expect to see a lot of POD
nearby, or some comments.  But generally, you only run into code like
that in clever obfuscations.  In any case, you'd be able to recognise
it if you did run into it, so you'd know there was something funky
going on, even if it were difficult to tell at a glance exactly how
the details worked out.

> Even if you almost never use the context-sensitive elements, you
> still have to reason about them because you cannot tell a priori
> whether it makes a difference.

Yes, that much you can tell.  This is not a problem.  Really, you're
making a big deal out of nothing.  I read other people's Perl code all
the time, and as long as the person who wrote the code had a decent
understanding of Perl, it's almost as easy as reading my own code.

Occasionally you do run into Perl code that was written by an utter
neophyte taking his first programming course, or by a C programmer who
thinks PERL is just a bigger and better awk, or somesuch along those
lines, and yes, that's hard to read.  But you can get that sort of
thing in any language with a sufficiently low learning curve that such
people are able to write working code at all.  (Hmmm, it wasn't that
long ago I used to *write* code like that.  In various languages.  My
first few weeks writing elisp I used an indentation style based on
that of other languages I'd worked with, putting each closing paren on
its own line with a comment following, explaining what it was closing.
The people on gnu.emacs.help thought I was from Mars.  My first bits
of Perl code were similarly bad.  I'm now learning Scheme, and I
expect to write bad code in Scheme for a little while, too, until I
get a feel for the language.  All part of the learning process.)

-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
0
jonadab (30)
3/1/2004 5:12:36 AM
sigurd@12move.de (Karl Pflästerer) wrote in message news:<m3u119endl.fsf@hamster.pflaesterer.de>...
> On 29 Feb 2004, Michele Simionato <- michele.simionato@poste.it wrote:
> 
> > Yes, but Python programmers want to avoid functional code just because
> > they think that it is twisted and unclear! For instance, we no more use 
> 
> Sorry but this is just wrong.  Think about the writings of David Mertz
> e.g.  That we don't see more Python code written in a functional style
> may result from some people always saying: this is not pythonic (what
> ever that may be) and the people who wrote the functional code change
> their code (sadly) to the mainstream Python style.

Speaking for myself, I am writing less functional code in Python now than
I used to write. The reason is that there are better alternatives available, 
according to my own taste. Luckily, my taste happens to coincide with
the taste of the majority of Python developers.
 
> > map and filter since we have list comprehensions that are considered
> > more explicit and easier to read (more "Pythonic" if you wish).
> 
> But list comprehensions the way they are in Python were taken from
> Haskell; so you can't say they are not functional.  They are just an
> alternative way to write e.g a mapping or a filter.
> 

There is some merit in what you say. Still, consider this Python code,
that I would not consider functional (list comprehension works with side 
effects, the loop variable "i" is a global variable):

def make_adders(n):
    return [(lambda x: x+i) for i in range(n)]

Consider now this Scheme code, doing the right thing and in a way that I
would consider strongly functional:

(define (make-adders n)
  (unfold (cut = <> n) (lambda (i) (cut + <> i)) add1 0))

Do you see the difference? Also, remember that list comprehensions 
in Python are not as in Haskell.

               Michele Simionato
0
3/1/2004 5:18:24 AM
sigurd@12move.de (Karl Pflästerer) wrote in message news:<m3u119endl.fsf@hamster.pflaesterer.de>...

Oops! When I said "i" is a global variable I made a mistake: I meant
"i" is not local to the loop. However, since it is inside the routine
make_adders it is local to that routine. If we were at the top level
then "i" would be global. I apologize for the confusion. 

I consider it a wart of Python that loop variables are
not local to the loop and remain in scope outside it.
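A minimal illustration (with a plain for loop, which leaks the same way):

```python
def last_index():
    for i in range(5):
        pass
    # "i" is still bound here: the loop variable is not local to the loop
    return i

print(last_index())  # -> 4
```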


        Michele Simionato
0
3/1/2004 5:24:37 AM
On 29 Feb 2004 21:18:24 -0800, Michele Simionato posted:
> sigurd@12move.de (Karl Pflästerer) wrote in message news:<m3u119endl.fsf@hamster.pflaesterer.de>...
>> On 29 Feb 2004, Michele Simionato <- michele.simionato@poste.it wrote:
>> 
>> > map and filter since we have list comprehensions that are considered
>> > more explicit and easier to read (more "Pythonic" if you wish).
>> 
>> But list comprehensions the way they are in Python were taken from
>> Haskell; so you can't say they are not functional.  They are just an
>> alternative way to write e.g a mapping or a filter.
> 
> There is some merit in what you say. Still, consider this Python code,
> that I would not consider functional (list comprehension works with side 
> effects, the loop variable "i" is a global variable):
> 
> def make_adders(n):
>     return [(lambda x: x+i) for i in range(n)]

in Haskell:

make_adders n = [ \x -> x + i | i <- [1..n] ]

this looks isomorphic to me, and without side effects...

>   (unfold (cut = <> n) (lambda (i) (cut + <> i)) add1 0))
> 
> Do you see the difference? Also, remember that list comprehensions 
> in Python are not as in Haskell.

Forgive a possibly stupid question, but in what way?

mrak

-- 
realise your life was only bait for a bigger fish
	-- aesop rock
0
mwotton (39)
3/1/2004 5:47:41 AM
cobbe@ccs.neu.edu (Richard C. Cobbe) writes:

> Well, if you're used to it, then you're used to it.  Me, I get real
> tired of things like
> 
>         print 3 + 5;
> 
> being misinterpreted with no warning.

Huh?  What else could you possibly expect that to do, other than print "8"?

> (Perl does the right thing here?  I'm sorry, but why on earth would
> I be interested in print's return code?)

Huh?  I assume that you must be talking about cases like
print (3 + 5) * 7;

There are cases where you might be interested in print's return code,
although they're not very common (and usually involve treating print's
return code as a condition in boolean context), but the important
reason why that has to parse the way it does is for consistency; *ALL*
functions in Perl, whether built-in or user-defined, if followed by a
left parenthesis, will take the contents of the parentheses as their
argument list.  Always.  This is an extremely basic part of the
language; the equivalent for lisp or Scheme would be something along
the lines of, if a parenthesized list is evaluated, the first item in
the list is treated as a function, with the remaining items in the
list passed to it as arguments.  (I may not have the wording exactly
right here, but you get the idea.  It's fundamental to the way
functions are called and are passed arguments.)

The usual reason this confuses people is because they're thinking of
print as a statement, rather than a function, a mistake I would *not*
expect someone to make who understands Scheme or the functional
programming paradigm.  Usually it's C programmers who get tripped up
by this, because they expect the parser to split on semicolons, take
the leftmost token as the statement name, and pass the rest to it as
arguments.  (This is a gross oversimplification of the way C does
things, of course, but it's *nothing at all* like the way Perl does
anything, ever.  Perl has expressions, not statements.)

The way function arguments are parsed is explained VERY early in the
camel book:  "If it looks like a function call, it is a function
call."  You'll be very confused about Perl if you miss this point.
  
> I'm also annoyed by the counter-intuitive interpretation of @foo[3].  I
> know that this is an array slice of one element, but in just about every
> other PL I've used that uses the [] syntax for array indexing, you append
> the [] to the name of the array, which is `@foo', not `$foo'.

This is changing in Perl6.  Personally, I like it the way it is.
*shrug*.  I'll get used to the other way easily enough.

> Incidentally, the array-slicing issue is one place where a good type
> system would help immensely.  

What are you talking about?  Perl has probably the best type system of
any language I have yet used.  But I think in Perl6 slicing will be
done as a method call or something.  (Yes, lvalue method calls.)
Anyway, its syntax is definitely changing.

> > We need real garbage collection.  (We're getting it in Perl6.)  
> 
> Yup.  For the record, there are 4 main methods for non-concurrent,
> non-interleaved, non-realtime GC.
> 
>   - mark-and-sweep (McCarthy, April 1960)
>   - ref count (George Collins, December 1960)
>   - stop-and-copy (Minsky, 1963; Fenichel and Yochelson, 1969; Cheney,
>     1970)
>   - generational (Liebermann and Hewitt, 1983)
>   - there's also been some recent work on oldest-first collection, but I
>     don't know as much about that.

I *think* mark-and-sweep is what we're getting, but I'm not sure.  I
don't really care exactly how it works under the hood, as long as it
cleans up anything and everything that's no longer accessible.
Reference counting is inadequate because it can leave things
uncollected, if there are circular structures.
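The cycle problem is easy to demonstrate; here's a sketch in CPython,
which happens to combine refcounting with a tracing cycle collector
(the class name Node is just for illustration):

```python
import gc
import weakref

class Node:
    def __init__(self):
        self.other = None

gc.disable()                 # turn off the cycle collector; refcounting stays on
a, b = Node(), Node()
a.other, b.other = b, a      # create a reference cycle
probe = weakref.ref(a)
del a, b                     # refcounts inside the cycle never drop to zero...
assert probe() is not None   # ...so pure refcounting leaks the pair
gc.enable()
gc.collect()                 # the tracing cycle collector reclaims them
assert probe() is None
```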

> The point of all this history is to say that we've had _44 years_ to
> work on getting the other GC algorithms fast.  That's an awfully
> long time, especially for a field as young as computer science, and
> we've made a *lot* of progress.  Claiming that non-refcount-GC is
> too slow is, in general, being willfully ignorant of current
> technology.

Speed is not the issue.  The garbage collector in Perl5 has not been
overhauled primarily because nobody has done the work of overhauling
it.  (Apparently, it would impact more than just the gc; a lot of the
internal workings of Perl would be touched by such a change, and so it
would be a lot of work.)  The one in Perl6 will be done right from the
start.

> > There's no decent standard works-on-all-platforms-with-a-GUI GUI library
> 
> We can argue for days over where the dividing line between the
> language and the library is, but I'll classify this as a library
> issue.  

I don't care where the issue lies, but it's something we (the Perl
community) need to fix.  Yes, it could be done as a module.  CPAN
being what it is, the dividing line between the core versus external
libraries is not really important in Perl.  In fact, we have a concept
of "core modules" -- modules that are not, strictly speaking, part of
perl itself, but nevertheless are part of standard core Perl.  (Note
the change in capitalization; perl is the implementation; Perl is the
language.)

> > Also, *far* too many of the modules on CPAN use C code (and thus
> > require a number of extra non-Perl things in order to install,
> > such as a C compiler) that shouldn't need to do so.
> 
> This is interesting.  Perhaps it would be fruitful to investigate
> *why* all these extensions feel it necessary to call out to C code.
> This may well demonstrate shortcomings of Perl's core language.

Many of them do.  Mostly the same handful of shortcomings, I think,
which hopefully are being rectified in Perl6, to a greater or lesser
extent.

> > There are very good reasons why Perl's grammar does not reduce to BNF.
> 
> Just for my curiosity, what are they?  The only justifications I've
> heard are some vague mumblings about wanting Perl to work more like
> natural languages.  

The language would lose a lot of its flexibility.  The features that
cause Perl not to be reducable to BNF are the very features that set
it apart from other languages.  I said far upthread that Perl is
explicitly a multiparadigmatic language -- I meant it.  There are
four major paradigms represented in Perl:  procedural/imperative,
functional, object-oriented, and contextual programming.  Of these,
the last is perhaps the most important for a Perl programmer to
understand, because to a large extent it governs the interaction
between the other three.  You can get along in Perl without objects,
until you need to use a module off of the CPAN that happens to present
an object-oriented interface (e.g. DBI).  It's harder to get along
without functional programming, but it can be done.  You can *almost*
entirely avoid procedural programming, if you should happen to want to
do that, except for a few particulars (e.g., the use directive).  But
any attempt to avoid context is doomed to failure.  It isn't just one
of the four paradigms; it's the central one, the one that ties the
language together.

For example, let's say you want to read a line from an open filehandle
and print it out.  (This is about as simple as it gets, right?)  You
can do this in various ways:

my $line = <FOO>; print $line; # Decidedly procedural.
print scalar <FOO>;            # Still mostly procedural.
print map { <FOO> } 1;         # Somewhat more functional.

All of these methods have one thing in common:  they evaluate the file
read in scalar context.  If you do it in list context, your results
will be different.  (It's also possible to use IO::File and IO::Handle
to read from the file in an object-oriented fashion.)

You can talk about other languages that don't have Perl's concept of
context, but Perl is not those other languages, and if it were, then
it wouldn't need to exist as a separate language.

And yes, being able to do file reads in list context is very useful,
particularly in conjunction with the functional paradigm, as you often
want to pass the resulting list right into a map or grep or for or
what-have-you.

> > The language would be significantly impoverished if this were
> > changed by any means other than extending BNF to be
> > Turing-complete.
> 
> In what way?  Sure, it might have to have a slightly more verbose
> syntax, but I find it hard to see that as `significantly
> impoverishing' the language.

It goes way beyond syntax.  Without context, Perl would cease to be Perl.

> > $;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
> > split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
> 
> So, if Perl isn't just line noise, why do so many Perl hackers pride
> themselves on their ability to do stuff like this?

It's a *game*.  Why does the IOCCC exist?  Why are crossword puzzles
so popular?  Why do people play Trivial Pursuit, or Scrabble, or
Balderdash?  Why did some clown invent unlambda, for crying out loud?

-- 
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}}
split//,"ten.thgirb\@badanoj$/ --";$\=$ ;-> ();print$/
0
jonadab (30)
3/1/2004 6:27:40 AM
Joe Marshall wrote:
>> The point of all this history is to say that we've had _44 years_
>> to work on getting the other GC algorithms fast.  That's an awfully
>> long time, especially for a field as young as computer science, and
>> we've made a *lot* of progress.  Claiming that non-refcount-GC is
>> too slow is, in general, being willfully ignorant of current
>> technology.
> 
> And rather ignorant of reference count GC, too.

Any references?

My current understanding of the speed-improvement techniques for
reference-count GC is that these techniques all incur a delay in the
time until a block is released (at least on the average). Which, in
other words, means that you end up with the disadvantages of reference
counting (inability/extra complexity to cope with cycles) and the
mark-and-sweep variants (unused memory not released immediately).

Regards,
Jo
--
Currently looking for a new job.

0
3/1/2004 7:58:02 AM
Jonadab the Unsightly One wrote:

> There's no
> decent standard works-on-all-platforms-with-a-GUI GUI library (no, Tk
> does *not* count),

Why?

How about GTK+?

 > and there should be.  (It should use native widgets
> on platforms that have them (Win32, Mac) and whatever it finds
> installed (Gtk, Qt, whatever) otherwise, spitting a fatal error only
> if no such library is found.  The programmer should not need to care
> which widgets will ultimately get used when he writes the code.)

Such libraries were tried, but all failed.
The reason is that libraries have to choose one of two evils: the 
least common multiple or the greatest common divisor.

GCD means that you support only functionality that's available in all 
supported platforms. Adding another platform is a major disaster: if the 
new platform doesn't support a previously supported mechanism, that 
mechanism would have to be dropped if the library designer wishes to 
stick with GCD.
GCD-style GUI libraries quickly came out of fashion.

LCM means that you support all mechanisms of all platforms, 
re-implementing them on the platforms that don't support them.
This is problematic as well. For example, the listview control on 
Windows cannot contain arbitrary controls, those of other GUI toolsets 
often can.
So either you reimplement the more general listview for Windows (which 
is a major undertaking, and you'll end up with subtle incompatibilities 
between the Windows listview and your homegrown listview), or you offer 
two different listview widgets (i.e. don't hide the incompatibility from 
the application programs), or you revert to the GCD variant.

There's a whole lot more to look-and-feel than just implementing widgets 
though.
On Windows, applications are expected to respond to OS messages like 
"the system is going to shut down now" and "the user just has changed 
the color scheme and, oh, the default window border size as well". 
Similar mechanisms exist for some of the other GUI toolkits, but not for 
all, and they are utterly incompatible.
How do you handle *that*, in a platform-independent way?

I have spent several years programming in Windows and pondering other 
toolkits... and the result was always the same: it's easier to write 
your own, platform-independent toolkit, based on line-drawing and bitblt 
primitives, than getting all these GUIs under one hood.
YMMV.

Regards,
Jo
--
Currently looking for a new job.

0
3/1/2004 8:17:58 AM
mwotton@cse.unsw.edu.au (Mark Alexander Wotton) wrote:
> On 29 Feb 2004 21:18:24 -0800, Michele Simionato posted:
> > def make_adders(n):
> >     return [(lambda x: x+i) for i in range(n)]
> 
> in Haskell:
> 
> make_adders n = [ \x -> x + i | i <- [1..n] ]

I'd write it

makeAdders n = map (+) [1..n]

or in Scheme (using the fairly common "curry" and srfi-1's "iota"):

(define (make-adders n) (map (curry +) (iota n 1)))

A matter of taste, of course.
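For comparison, a rough Python spelling of the same curried style
(a sketch using functools.partial, since Python has no built-in curry):

```python
from functools import partial
from operator import add

def make_adders(n):
    # partial(add, i) plays the role of (curry +) applied to i
    return [partial(add, i) for i in range(1, n + 1)]

add1, add2 = make_adders(2)
print(add1(10))  # -> 11
print(add2(10))  # -> 12
```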


Lauri Alanko
la@iki.fi
0
la (473)
3/1/2004 10:49:25 AM
Fergus Henderson wrote:
> 
>>I suppose Fergus is referring to the fact that Oz provides 
>>token-equality on functions. From a more semantic point of view that is 
>>pretty nasty, since to some extent, implementation details become 
>>observable in programs.
> 
> Yes.  Actually I thought Oz's notion of equality on higher-order terms
> was different than token equality.  I thought it was more similar to OOP
> object identity, with the identity of two freshly constructed closures
> being different. 

Well, that's what I meant by token equality. What other interpretation 
do you have?

> Does the Oz equivalent of
> 
> 	(\ x = x + 1) == (\ x = x + 1)
> 
> return false or true?

It returns false.

> I think it would be fair to say that languages like OCaml and Lisp
> combine the paradigms of imperative and functional programming.  For
> pure functional programming languages, such as Haskell and Clean,
> higher-order terms do not have identity.

OK, it's a purity issue. So this boils down to saying that purer 
languages are "more functional". I can live with that.

	- Andreas

-- 
Andreas Rossberg, rossberg@ps.uni-sb.de

Let's get rid of those possible thingies!  -- TB

0
rossberg (600)
3/1/2004 12:31:11 PM
Fergus Henderson wrote:
> 
> Yes.  Actually I thought Oz's notion of equality on higher-order terms
> was different than token equality.  I thought it was more similar to OOP
> object identity, with the identity of two freshly constructed closures
> being different.  Does the Oz equivalent of
> 
> 	(\ x = x + 1) == (\ x = x + 1)
> 
> return false or true?

I'm not sure whether this is too relevant. Equality of functions is 
undecidable, so even the "purest" language cannot do more than some 
heuristics.

E.g. the real question is which of the items on the following list are 
equal to \ x = x + 1:

   \ x = x + 1 -- expression duplication?
   \ y = y + 1 -- bound variable renaming?
   \ y = (\k = y + 1) 42 -- unneeded lambda removal?
   \ y = y + 1 + 0 -- constant evaluation for primitives?
   \ y = y + fac 1 -- constant evaluation for user-defined functions?
   \ y = y + y / y -- equal on those domains where it is defined...
                   -- this is a different kind of equality, interesting
                   -- in its own.

I.e. even the purest language cannot test for true equality; it will 
have to do structural equality over some normalized form of the 
expressions (which isn't really function equality).
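In practice, most impure languages therefore just fall back to comparing
functions by identity; a quick Python sketch:

```python
f = lambda x: x + 1
g = lambda x: x + 1

# structurally identical, but functions compare by identity, not behaviour
print(f == g)   # -> False
print(f is g)   # -> False
print(f == f)   # -> True
```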

Regards,
Jo
--
Currently looking for a new job.

0
3/1/2004 2:42:43 PM
<jonadab@bright.net> wrote:
>my $line = <FOO>; print $line; # Decidedly procedural.
>print scalar <FOO>;            # Still mostly procedural.
>print map { <FOO> } 1;         # Somewhat more functional.
>
>All of these methods have one thing in common:  they evaluate the file
>read in scalar context.

The last one doesn't. The (implicit) subroutine given as the first
argument to map is always executed in list context, regardless of the
context that the map is being executed in. See for example:

  print map { /./g } "aaa"      # -> aaa
  print 0 + map { /./g } "aaa"  # -> 3
  print map { 0 + /./g } "aaa"  # -> 1

-- 
Juho Snellman
0
jsnell (317)
3/1/2004 3:14:56 PM
"Jonadab the Unsightly One" <jonadab@bright.net> writes:

> Joe Marshall <prunesquallor@comcast.net> writes:
>
>> "Jonadab the Unsightly One" <jonadab@bright.net> writes:
>> > Joe Marshall <jrm@ccs.neu.edu> writes:
>> >> It makes it impossible to reason about code independently of context!
>> > Only if the code is written in a perversely obfuscated manner.
>> > Normal code is written so that it's clear what it does.
>> 
>> I'm confused.  What is the advantage of context-sensitive parsing if
>> `normal code' doesn't use it and the code that does use it is
>> considered `perversely obfuscated'?
>   
> Sorry; I haven't made myself clear.  Normal code often uses context,
> but it is clear from looking at the code what it's doing with the
> context.  

The point is that you cannot reason about the code *independent* of
context.  Here's an example.  This fragment of lisp code divides
something by 25:
    (/ x 25)

This fragment of Perl code *either* divides something by 25 or tries
to match a regular expression:

    x / 25 ; # /

You cannot reason about it without knowing what X is.

> For example, a function foo may be written so that in list context it
> takes a list of objects, finds the ones that match a certain
> criterion, extracts a particular value from each one that does match,
> and returns a list consisting of these extracted values; the same
> function in scalar context may take a similar list of objects, examine
> it, counting _how many_ match the same criterion, and return that
> number, without extracting any values.
>
> This would all be clear from looking at the function, and the
> documentation for the function would also state that this is what the
> function does, and by looking at the calling code you'd be able to
> tell which context the function was being called in in that instance.

The problem with this is that you are dispatching based on the type of
the continuation.  This means you have introduced constraint-based
programming into your model.  Constraint-based programming isn't
necessarily a bad thing, but it is a lot harder to reason about than
functional programming.

> In one sense, I suppose that Joe's statement, "It makes it impossible
> to reason about code independently of context!", is true of such code,
> to the extent that when you reason about the code you do think about
> context.  But if that is what he meant, then saying that "It makes it
> impossible to reason about code independently of context!" is
> equivalent to saying that the way Haskell does things makes it
> impossible to reason about code independently of functions, which is
> technically true but practically meaningless.  I assumed that he
> meant, rather, that it would be impossible to think about a given
> piece of code without thinking about all the ways in which it might be
> used, or all possible calling code, or something along those lines,
> and that is not the case.

It certainly seems to be the case.  What does this fragment mean:

     / 25 ; # /

And please don't talk about the surrounding context.

>> > Yes, certainly.  This is generally not a problem, unless the code
>> > in question is deliberately obfuscated (which is usually done for
>> > fun, in code that doesn't matter, not in production code).
>> 
>> `Generally' doesn't count.  If your parse is context sensitive, then
>> you need to reason about the runtime values in order to understand the
>> parse.  
>
> You're assuming that you can't tell at a glance whether a runtime
> value might alter the parse.  You always can.  If you do run into code
> like that in real production code, you expect to see a lot of POD
> nearby, or some comments.  But generally, you only run into code like
> that in clever obfuscations.  In any case, you'd be able to recognise
> it if you did run into it, so you'd know there was something funky
> going on, even if it were difficult to tell at a glance exactly how
> the details worked out.

In your second sentence, you said `always', but in your third you
admit the possibility, in the fourth you say `generally', and in the
fifth you say `even if it were difficult to tell at a glance'.  The
problem is that you *have* to look at the context even to determine
that nothing `funky' is going on.

>> Even if you almost never use the context-sensitive elements, you
>> still have to reason about them because you cannot tell a priori
>> whether it makes a difference.
>
> Yes, that much you can tell.  This is not a problem.  Really, you're
> making a big deal out of nothing.  I read other people's Perl code all
> the time, and as long as the person who wrote the code had a decent
> understanding of Perl, it's almost as easy as reading my own code.

I believe you, but I'm not making a big deal out of nothing.  I've
worked with both context-sensitive and context-free languages both as
a user and as an implementor.  There are *significant* disadvantages
to context-sensitive languages that arise simply because a user
*might* use a context-sensitive element.  Context-free languages,
however, appear to be powerful enough to express any program a
context-sensitive language can express and the reduction in complexity
is a major benefit.
0
jrm (1310)
3/1/2004 3:35:40 PM
"Jonadab the Unsightly One" <jonadab@bright.net> writes:

> Joe Marshall <jrm@ccs.neu.edu> writes:
>
>> "Jonadab the Unsightly One" <jonadab@bright.net> writes:
>> 
>> > I've heard that we're getting continuations in Perl6, and I'm excited
>> > about it.
>> 
>> *That* should be amusing.  Continuations and context-sensitivity are
>> an interesting mix, especially if you can get a different parse upon
>> re-entering some code.
>
> You wouldn't get a different parse on re-entering unless you're using
> eval to do so explicitly (which you can do now, e.g. in a loop, and
> the string can change between parsings potentially, but this is not a
> common technique; in fact, the only time I've seen it used in practice
> was in a round of Perl golf; eval is more commonly used to trap
> errors, but in that case the block form is usually used, which does
> not reparse the code each time it is evaluated).

Since the parse is context-sensitive, it would have to change if you
used a `list-context' continuation where a `scalar-context' had been
used earlier.
0
jrm (1310)
3/1/2004 3:41:51 PM
"Jonadab the Unsightly One" <jonadab@bright.net> writes:

> I can't escape the feeling that you are somehow trying to see this as
> a good thing, thinking to yourself, "Sure, all the Python programmers
> I've known are impoverished in their understanding, but since they're
> all impoverished in the _same way_, it's good."

Well put!
0
jrm (1310)
3/1/2004 3:42:22 PM
Joe Marshall wrote:

> context.  Here's an example.  This fragment of lisp code divides
> something by 25:
>     (/ x 25)

No it doesn't.  It either applies the procedure / (whatever it may be at 
the time) to x and 25, or it expands the macro application (/ x 25).

David
0
feuer (188)
3/1/2004 5:20:22 PM
mwotton@cse.unsw.edu.au (Mark Alexander Wotton) wrote in message news:<slrnc45jr7.d72.mwotton@pill03.orchestra.cse.unsw.EDU.AU>...

> in Haskell:
> 
> make_adders n = [ \x -> x + i | i <- [1..n] ]
> 
> this looks isomorphic to me, and without side effects...

I know nothing about Haskell, so it may well be that I said something
wrong. In Python this happens:

>>> [i for i in [0,1]]
[0, 1]
>>> i
1

i.e. the list comprehension pollutes my namespace by introducing
a global binding "i" with the value 1 (the last value in the loop).
I would expect that in Haskell - a pure functional language - "i" would
be undefined outside the scope of the list comprehension. 

I have a question, too. In Python I get this:

def make_adders(n):
    return [(lambda x: x+i) for i in range(n)]

add0,add1=make_adders(2)

print add0(0) #=> 1 (NOT 0!)
print add1(0) #=> 1

do I get the same in Haskell? (I have no Haskell installed, 
so I cannot check myself).
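For the record, the usual Python workaround is to force early binding
with a default argument:

```python
def make_adders(n):
    # i=i freezes the current value of i at lambda-creation time
    return [(lambda x, i=i: x + i) for i in range(n)]

add0, add1 = make_adders(2)
print(add0(0))  # -> 0
print(add1(0))  # -> 1
```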

        Michele Simionato
0
3/1/2004 6:11:24 PM
Feuer <feuer@his.com> writes:

> Joe Marshall wrote:
>
>> context.  Here's an example.  This fragment of lisp code divides
>> something by 25:
>>     (/ x 25)
>
> No it doesn't.  It either applies the procedure / (whatever it may be
> at the time) to x and 25, or it expands the macro application (/ x 25).

*technically* you are right, but since '/' is a standard procedure and
since there is no lexically apparent shadowing binding, I think I'm
justified in being imprecise.

This fragment of lisp code (assuming that '/' has the standard
definition and is not shadowed) attempts to divide something by 25:
     (/ x 25)

0
jrm (1310)
3/1/2004 6:17:50 PM
On  1 Mar 2004, Michele Simionato <- michele.simionato@poste.it wrote:

sigurd@12move.de (Karl Pflästerer) wrote in message
> news:<m3u119endl.fsf@hamster.pflaesterer.de>...
>> On 29 Feb 2004, Michele Simionato <- michele.simionato@poste.it wrote:

>> > Yes, but Python programmers want to avoid functional code just because
>> > they think that it is twisted and unclear! For instance, we no more use 

>> Sorry but this is just wrong.  Think about the writings of David Mertz
>> e.g.  That we don't see more Python code written in a functional style
>> may result from some people always saying: this is not pythonic
>> (whatever that may be) and the people who wrote the functional code
>> changing their code (sadly) to the mainstream Python style.

> Speaking for myself, I am writing less functional code in Python now than
> I used to. The reason is that there are better alternatives available,

Better alternatives in what way?  Faster?  Or do you just use fewer
closures, e.g.? (Closures are relatively new in Python, so I understand
why some people always insist on using classes.)

> according to my own taste. Luckily, my taste happens to coincide with
> the taste of the majority of Python developers.

Well, I also like Python, but sometimes (and the more so the longer I
use it) I have the feeling that Python is a bit like a straitjacket.
That's not necessarily the language but often some of its users, who
think that their way of doing things is the only right way.

>> > map and filter since we have list comprehensions that are considered
>> > more explicit and easier to read (more "Pythonic" if you wish).

>> But list comprehensions the way they are in Python were taken from
>> Haskell; so you can't say they are not functional.  They are just an
>> alternative way to write e.g a mapping or a filter.


> There is some merit in what you say. Still, consider this Python code,
> that I would not consider functional (list comprehension works with side 
> effects, the loop variable "i" is a global variable):

> def make_adders(n):
>     return [(lambda x: x+i) for i in range(n)]

Well, what did you expect that code to do?  The binding of `i' is to the
last value `i' had in the loop (which is `n-1').

That has nothing to do with functional or not but with scoping and
binding rules.  Erik Naggum wrote some time ago a good article about
that (regarding CL and Scheme) where he wrote exactly about that
problem.  I think someone gave here also such an example.

> Consider now this Scheme code, doing the right thing and in a way that I
> would consider strongly functional:

> (define (make-adders n)
>   (unfold (cut = <> n) (lambda (i) (cut + <> i)) add1 0))

I don't know exactly what `cut' does (what's that in R5RS?  Is that in
PLT?).  But again: that has IMO nothing to do with being more or less
functional.

> Do you see the difference? Also, remember that list comprehension 

I see a difference in binding.

> in Python are not as in Haskell.

Not exactly but I could translate your Python example to Haskell and
what would I see?  The binding rules in Haskell give here the same
result as your Scheme code.

make_adders n = [(i+) | i <- [0..n]]
show_adders m n = mapM (\f -> print (f m)) (make_adders n)

With show_adders 0 2 I see 0, 1, 2 as output.  But what does that prove?
IMO nothing (at least concerning functional qualification of a
language).



   KP

-- 
            One, two!  One, two!  And through and through
                 The vorpal blade went snicker-snack!
          He left it dead, and with its head
                He went galumphing back.   "Lewis Carroll" "Jabberwocky"
0
sigurd
3/1/2004 6:35:25 PM
Joachim Durchholz <joachim.durchholz@web.de> wrote:

> It's not just a question of clearness.
> Actually Quicksort can be expressed quite clearly in an iterative
> fashion, to the point that some people might reasonably say that the
> iterative version is easier to understand.

Really?  In some math or cs course in the early 80s, I got shown an
iterative quicksort in fortran, and it took me hours to wrap my head
around what the fuck it was doing.  I never really got it to the point
where I could reproduce it myself without peeking.  

Once I passed the homework/test, I just gave up.  For programming
purposes, if I couldn't find a library and needed a sort, I just wrote a
heap or radix sort, and decided quicksort was some kind of dark magic
that I would maybe trouble myself with again at some point when I had a
lot of free time.

A couple years ago, I looked at somebody's applescript implementation of
quicksort and darned if it wasn't perfectly clear to me what it was
doing and why it worked within about a minute.  It was purely recursive.

I would really like to see the iterative quicksort which is easier to
understand than the recursive version.


Michael
0
michael296 (152)
3/1/2004 7:23:18 PM
On 1 Mar 2004 10:11:24 -0800, Michele Simionato posted:
> mwotton@cse.unsw.edu.au (Mark Alexander Wotton) wrote in message news:<slrnc45jr7.d72.mwotton@pill03.orchestra.cse.unsw.EDU.AU>...
> 
>> in Haskell:
>> 
>> make_adders n = [ \x -> x + i | i <- [1..n] ]
>> 
>> this looks isomorphic to me, and without side effects...
> 
> I know nothing about Haskell, so it may well be that I said something
> wrong. In Python this happens:
> 
>>>> [i for i in [0,1]]
> [0, 1]
>>>> i
> 1

Yuck. :)
 
> i.e. the list comprehension pollutes my namespace by introducing
> a global binding "i" with the value 1 (the last value in the loop).
> I would expect that in Haskell - a pure functional language - "i" would
> be undefined outside the scope of the list comprehension. 
> 
> I have a question, too. In Python I get this:
> 
> def make_adders(n):
>     return [(lambda x: x+i) for i in range(n)]
> 
> add0,add1=make_adders(2)
> 
> print add0(0) #=> 1 (NOT 0!)
> print add1(0) #=> 1
> 
> do I get the same in Haskell? (I have no Haskell installed, 
> so I cannot check myself).

I think I misinterpreted range(n). Rewriting the make_adders function:

Prelude> let make_adders n = [ \x -> x + i | i <- [0..n-1] ]
Prelude> let [add0,add1] = make_adders 2
Prelude> add0 0
0
Prelude> add1 0
1

So, yes, you're quite right. I wasn't aware Python had this "feature".

mrak

-- 
realise your life was only bait for a bigger fish
	-- aesop rock
0
mwotton (39)
3/1/2004 9:29:47 PM
On Mon, 1 Mar 2004 10:49:25 +0000 (UTC), Lauri Alanko posted:
> mwotton@cse.unsw.edu.au (Mark Alexander Wotton) virkkoi:
>> On 29 Feb 2004 21:18:24 -0800, Michele Simionato posted:
>> > def make_adders(n):
>> >     return [(lambda x: x+i) for i in range(n)]
>> 
>> in Haskell:
>> 
>> make_adders n = [ \x -> x + i | i <- [1..n] ]
> 
> I'd write it
> 
> makeAdders n = map (+) [1..n]
> 
> or in Scheme (using the fairly common "curry" and srfi-1's "iota"):
> 
> (define (make-adders n) (map (curry +) (iota n 1)))
> 
> A matter of taste, of course.

Sure. I just wanted to show the structure was isomorphic - initially, I
wrote it as a cut anyway.

make_adders n = [ (+) i | i <- [1..n] ]

mrak

-- 
realise your life was only bait for a bigger fish
	-- aesop rock
0
mwotton (39)
3/1/2004 9:32:18 PM
At Mon, 01 Mar 2004 12:20:22 -0500, Feuer wrote:
> 
> Joe Marshall wrote:
> 
> > context.  Here's an example.  This fragment of lisp code divides
> > something by 25:
> >     (/ x 25)
> 
> No it doesn't.  It either applies the procedure / (whatever it may be at 
> the time) to x and 25, or it expands the macro application (/ x 25).

Or does something else entirely if used inside a macro.  Pure
context-insensitivity is rare (and boring) in languages, but there are
degrees, and making sure (/ x 25) always means the same thing is one
reason most Lispers minimize their use of macros wherever possible.

-- 
Alex

0
foof (110)
3/2/2004 12:53:54 AM
sigurd@12move.de (Karl Pflästerer) wrote in message news:<m34qt832nd.fsf@hamster.pflaesterer.de>...
> On  1 Mar 2004, Michele Simionato <- michele.simionato@poste.it wrote:
> 
> > Speaking for myself, I am writing less functional code in Python now than
> > I used to. The reason is that there are better alternatives available,
> 
> Better alternatives in what way?  Faster?  

No, more readable. I find a list comprehension more readable than a
map, for instance. I maintain that Python list comprehensions are not
truly functional because of this:

>>> [i for i in [0,1]]
[0, 1]
>>> i
1
> Or do you just use fewer
> closures, e.g.? (Closures are relatively new in Python, so I understand
> why some people always insist on using classes.)

I still use closures in Python, but much less than in Scheme. Why?
Because Scheme has set! and in Scheme I use closures to fake classes,
whereas in Python I just use classes.

> > according to my own taste. Luckily, my taste happens to coincide with
> > the taste of the majority of Python developers.
> 
> Well I also like Python but sometimes (and the longer I use it the more)
> have the feeling that Python is sometimes a bit like a straitjacket.
> That's not necesarrily the language but often some of its users who
> think that their way of doing things is the only one and right way.

On the contrary, I think the "there should be only one obvious way
...." philosophy is right; sometimes, however, I feel a bit constrained
by the language itself: for instance, the fact that statements are not
expressions constrains me a bit; also, if I send a chunk of code via
e-mail I have a substantial probability of a mishap with the
indentation, and that is really annoying :-(

> > def make_adders(n):
> >     return [(lambda x: x+i) for i in range(n)]
> 
> Well, what did you expect that code to do?  The binding of `i' is to the
> last value `i' had in the loop (which is `n-1').

> That has nothing to do with functional or not but with scoping and
> binding rules.  Erik Naggum wrote some time ago a good article about
> that (regarding CL and Scheme) where he wrote exactly about that
> problem.  I think someone gave here also such an example.

Yes, this has to do with scoping and binding rules. I maintain that
Haskell's choice makes it easier to program in a functional way, since
code like the previous one works out of the box, whereas in Python you
have to *explicitly* rebind "i" at each iteration, in this way:

def make_adders(n):
    return [(lambda x,i=i: x+i) for i in range(n)]
 
It is a matter of defaults; Haskell's defaults are more convenient for
functional programming, IMO.
 
> > Consider now this Scheme code, doing the right thing and in a way that I
> > would consider strongly functional:
>  
> > (define (make-adders n)
> >   (unfold (cut = <> n) (lambda (i) (cut + <> i)) add1 0))
> 
> I don't know exactly what `cut' does (what's that in R5RS?  Is that in
> PLT?).  But again: that has IMO nothing to do with being more or less
> functional.

cut is a poor man's curry (not a real curry, but it does the job) from
srfi-26.  While in principle having standard-library facilities for
playing with functions does not make a language more functional than
another, in practice it does ;-) The fact that Python has no functional
standard module says something.
 
    Michele Simionato
0
3/2/2004 5:32:11 AM
Michael Sullivan wrote:

> Joachim Durchholz <joachim.durchholz@web.de> wrote:
> 
>> It's not just a question of clearness. Actually Quicksort can be
>> expressed quite clearly in an iterative fashion, to the point that
>> some people might reasonably say that the iterative version is
>> easier to understand.
> 
> Really?  In some math or cs course in the early 80s, I got shown an 
> iterative quicksort in fortran, and it took me hours to wrap my head 
> around what the fuck it was doing.

You might have to add some comments to make it clear enough.

It's essentially the classical rewrite-tail-recursion-as-a-loop, and
pushing the unfinished business (the other interval that we can't sort
right now because there's a first interval to sort first) on a stack.
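
A minimal Python sketch of that transformation (my own code; the
explicit stack holds the intervals that still need sorting):

```python
# Iterative quicksort: the recursion is replaced by a loop plus an explicit
# stack of (lo, hi) intervals -- the "unfinished business" described above.
def quicksort(xs):
    xs = list(xs)                    # work on a copy
    stack = [(0, len(xs) - 1)]
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:
            continue
        pivot = xs[hi]               # Lomuto partition around the last element
        i = lo
        for j in range(lo, hi):
            if xs[j] < pivot:
                xs[i], xs[j] = xs[j], xs[i]
                i += 1
        xs[i], xs[hi] = xs[hi], xs[i]
        stack.append((lo, i - 1))    # push both sub-intervals for later
        stack.append((i + 1, hi))
    return xs

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```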

I also assume that the person who considers the imperative version
clearer is somebody who isn't used to recursive thinking. (No criticism
implied - recursion seems simple at first, becomes complicated after a
while, then becomes simple again. Not everybody has had the opportunity
to go through that process.)

> A couple years ago, I looked at sombody's applescript implementation
> of quicksort and darned if it wasn't perfectly clear to me what it
> was doing and why it worked within about a minute.  It was purely
> recursive.

Agreed - quicksort is a good way to demonstrate the usefulness of 
recursion. I just meant to say that some people will find the iterative 
version difficult to understand, while others simply will not wrap their 
minds around recursion at all.

BTW I have seen some recursive versions of quicksort and didn't 
understand them. Mainly because the syntactic shorthands of the language 
were used in a way that I wasn't used to, so the problem is not with 
quicksort itself but with the concrete representation. (I suspect that 
the Fortran version was more complicated than would have been necessary.)

Regards,
Jo
--
Currently looking for a new job.

0
3/2/2004 3:13:47 PM
>>>>> "Michele" == Michele Simionato <michele.simionato@poste.it> writes:

    Michele> I consider as a wart of Python the fact that loop
    Michele> variables are not locals and extend outside the loop.

Yep, but the misfeature is deprecated now (as far as list
comprehensions go); I'm not sure whether Python 2.4 will remove it,
though.

Anyway, it's great to finally get rid of it,

-- 
Ville Vainio   http://tinyurl.com/2prnb
0
ville (416)
3/3/2004 8:26:31 PM
Lauri Alanko <la@iki.fi> wrote in message news:<c1v4fl$14l$1@la.iki.fi>...
> mwotton@cse.unsw.edu.au (Mark Alexander Wotton) virkkoi:
> > On 29 Feb 2004 21:18:24 -0800, Michele Simionato posted:
> > > def make_adders(n):
> > >     return [(lambda x: x+i) for i in range(n)]
> > 
> > in Haskell:
> > 
> > make_adders n = [ \x -> x + i | i <- [1..n] ]
> 
> I'd write it
> 
> makeAdders n = map (+) [1..n]
> 
> or in Scheme (using the fairly common "curry" and srfi-1's "iota"):
> 
> (define (make-adders n) (map (curry +) (iota n 1)))
> 
> A matter of taste, of course.
> 

Using one procedure less:

(define (make-adders n) (list-tabulate n (curry +)))

--
G.
0
grzegorz1 (80)
3/4/2004 11:46:10 AM
Joe Marshall wrote:


> However, since the grammar is context-sensitive, it is therefore
> *impossible* to understand a fragment of Perl without having the
> surrounding context.  This is the definition of context-sensitive.

Scheme's grammar is not context-free.

David
0
feuer (188)
3/4/2004 5:37:03 PM
Feuer <feuer@his.com> writes:

> Joe Marshall wrote:
>
>
>> However, since the grammar is context-sensitive, it is therefore
>> *impossible* to understand a fragment of Perl without having the
>> surrounding context.  This is the definition of context-sensitive.
>
> Scheme's grammar is not context-free.

Quasiquote breaks it, but the rest of the lexical structure is
expressible in BNF.
0
jrm (1310)
3/4/2004 6:15:49 PM
Joachim Durchholz <joachim.durchholz@web.de> wrote:

[iterative vs. recursive quicksort -- which is more clear?]

> BTW I have seen some recursive versions of quicksort and didn't 
> understand them. Mainly because the syntactic shorthands of the language
> were used in a way that I wasn't used to, so the problem is not with 
> quicksort itself but with the concrete representation. (I suspect that
> the Fortran version was more complicated than would have been necessary.)

I'm sure this is true.  I'm also realizing that it's not a fair
comparison, since I saw the Fortran version when I was only 16 and
didn't even know what good programming looked like. I hadn't yet seen C
or Lisp, for instance.  All my programming before college was in BASIC
or naive [Z80|6502] assembler, so I probably had a fair bit of
de-brain-damaging to do that's since been swept under the narrative rug.
Had I seen a recursive version then, it might not have been a whole lot
easier to understand.


Michael
0
michael296 (152)
3/4/2004 10:28:55 PM
i'm not sure how solid my understanding is, but i've been using
functional languages in my own time for years, while i'm paid to work
in Java, C, Fortran, Python etc.

the differences for me are:
- i can be more productive in my own time
- i'm more annoyed by stupid bugs at work related to mutable state
- i'm better at the things functional languages do well, because i can
see an easy solution from a functional approach and then translate it
to the language at hand

of these, maybe only the last is directly relevant to your question,
but i think without the first i probably wouldn't do so much
programming on my own, which would have an indirect impact. 
functional programming is more fun.

an example of the last point from the top of my head: i work on a
project that processes astronomy data (using a fortran variant).  the
group i'm involved with wanted to implement a "calculator type" task
that processed not just values, but propogated variance estimates
(which needs the first derivative of an algebraic expression).  this
was viewed as a hard problem, yet in an hour or two i put together a
solution using techniques i've learnt as a functional programmer
(recursive descent parser, quick + dirty binary tree using tuples,
etc) (see bottom of http://www.acooke.org/andrew/diary/2004/mar/4.html).
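
(a toy Python sketch of the tuple-tree idea -- hypothetical names,
nothing like the actual code behind that link:)

```python
# Quick + dirty expression trees as nested tuples: ('+', a, b) and
# ('*', a, b) are interior nodes; 'x' and numbers are leaves.
def deriv(e):
    if e == 'x':
        return 1
    if isinstance(e, (int, float)):
        return 0
    op, a, b = e
    if op == '+':                    # (a + b)' = a' + b'
        return ('+', deriv(a), deriv(b))
    # product rule: (a * b)' = a'b + ab'
    return ('+', ('*', deriv(a), b), ('*', a, deriv(b)))

def evaluate(e, x):
    if e == 'x':
        return x
    if isinstance(e, (int, float)):
        return e
    op, a, b = e
    if op == '+':
        return evaluate(a, x) + evaluate(b, x)
    return evaluate(a, x) * evaluate(b, x)

expr = ('*', 'x', ('+', 'x', 3))     # x*(x+3); derivative is 2x+3
print(evaluate(expr, 2))             # 10
print(evaluate(deriv(expr), 2))      # 7
```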

now maybe you could argue that no-one, not even a programmer in
imperative languages, should consider this a hard problem.  that may
be so - there's a selection effect in that anyone interested in
functional programming is someone who cares about programming etc
etc...

another example would be caching and memoization.  i use haskell
mainly, so persistent data structures are an "obvious" solution to me,
but relatively strange to imperative programmers.
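
(the same idea in imperative clothing -- a minimal memoizing decorator
in Python, my own sketch:)

```python
# Memoization: cache a pure function's results keyed by its arguments.
def memoize(f):
    cache = {}
    def wrapped(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapped

calls = []

@memoize
def fib(n):
    calls.append(n)                  # record how often real work happens
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(20))     # 6765
print(len(calls))  # 21 -- each n computed once instead of exponentially often
```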

hope that helps,
andrew

ps i don't do much "functional style" programming in imperative
languages (you asked about using such libraries, which do exist) -
mainly because i would just get hassle from conservative co-workers. 
i have to write code that everyone else can maintain. :o(

tuanglen@hotmail.com (Tuang) wrote in message news:<df045d93.0402111325.73b437c3@posting.google.com>...
> I'm curious to what extent a solid understanding of functional
> programming would help me if I still had to do most real work in
> mainstream imperative languages.
> 
> Mainstream languages, almost by definition, are more likely to be used
> in mainstream commercial projects. So if I'm involved in such a
> project (and I do get involved in lots of them), what benefits would
> functional programming expertise give me, and how could I achieve
> those benefits?
> 
> I'm working through SICP with Scheme. I've discovered that I'm going
> through interesting stages: first discovering how to use Scheme to
> solve certain example problems (using recursion, for ex.), then
> finding that approach generalizing into my brain as just a way to
> *see* the problem independent of Scheme (which, of course, is the
> point of SICP), then discovering that I can transcribe that way of
> seeing the problem into mainstream languages that I've been using for
> years, even if those languages make it a little harder.
> 
> Even though I've used recursion occasionally for years, for example, I
> never really had a "sense" of it until I used it for a while in a
> language really designed for it. Now, I'm wondering how much else I
> could learn from Scheme, Haskell, or Ocaml, what additional "senses" I
> could develop, how much of it is applicable even in mainstream
> languages, and how much of it simply isn't of practical use unless you
> actually use an FP (or FP-oriented) language.
> 
> And when using mainstream languages, are there ways to get more out of
> them -- such as special libraries with FP-style datatypes and
> functions on them -- or if such things simply aren't realistic.
> 
> Thanks.
0
andrew8810 (346)
3/6/2004 11:29:00 PM