


Compatibility questions

I was recently working on a small program which I had written a few
months earlier in DrScheme, and decided to try and vet it on some
other interpreters (Bigloo, MIT, SCM, and Petite Chez) to see how
portable it was. As it happens, there were only a handful of issues -
SCM did not implement (port?), Chez Scheme  did not parse escaped
newlines in strings ("\n") but *did* work with a #\newline character
made into a string using (make-string), etc. The real surprise was
that the following function,

;; (read-dump-line port integer) => list
;; read in a line of data
;; 
;; Side Effects: reads data from port src
;; Requirements: none 
(define (read-dump-line src width)
  (let read-data ((count width))
    (if (or (>= 0 count)                   
            (eof-object? (peek-char src))) ; check to see if its EOF
        '()
        (cons (read-char src) (read-data (- count 1))))))

when run under MIT Scheme would return the list of characters in
*reverse* order from how they were in the file.

All other interpreters ran the code as I expected it would. I cannot
see any logical error, so I can only wonder why the MIT interpreter
would give such an odd result. While reversing the list to the correct
order would be trivial (I could even set it up so that on the first
function call, it would detect which interpreter it was running under,
generate the appropriate subfunction, and apply it on each subsequent
call), I was less concerned with running the program itself and more
with why there would be this discrepancy. Does anyone have any ideas?

I was also curious if anyone could give any advice as to improving the
program at hand. The work I was doing was simply an exercise program
optimization - I was using the program as a demonstration, and I'd
wanted it to be reasonably efficient as well as reasonably clear - and
I was hoping for some input by a few more experienced Schemers. The
code for the program (a simple hex dump utility) can be found at

http://www.mega-tokyo.com/forum/attachments/uploaded_files/dump-file.scm.txt

Thank you for considering these questions.

--
Jay Osako                                    aka Schol-R-LEA;2
If the phone rings today, water it!
0
scholr (10)
12/3/2003 3:05:35 AM
comp.lang.scheme

scholr@hotmail.com (Jay Osako) wrote:
> (define (read-dump-line src width)
>   (let read-data ((count width))
>     (if (or (>= 0 count)                   
>             (eof-object? (peek-char src))) ; check to see if its EOF
>         '()
>         (cons (read-char src) (read-data (- count 1))))))

At the last line the semantics of your program depend on the evaluation
order of the arguments to cons. As you can see in
http://schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-7.html#%_sec_4.1.3
the evaluation order is undefined. You'd better change your program so
that the desired evaluation order is explicit.
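
For instance, here is a minimal illustration of the hazard (the names are
made up for the example): both arguments to list have side effects, so an
implementation is free to return either result.

(define (demo)
  (let ((n 0))
    (define (next!) (set! n (+ n 1)) n)
    (list (next!) (next!))))    ; may return (1 2) or (2 1)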


Lauri Alanko
la@iki.fi
0
la (473)
12/3/2003 3:15:13 AM
Jay Osako wrote:

> I was recently working on a small program which I had written a few
> months earlier in DrScheme, and decided to try and vet it on some
> other interpreters (Bigloo, MIT, SCM, and Petite Chez) to see how
> portable it was. As it happens, there were only a handful of issues -
> SCM did not implement (port?), Chez Scheme  did not parse escaped
> newlines in strings ("\n") but *did* work with a #\newline character
> made into a string using (make-string), etc. The real surprise was
> that the following function,
> 
> ;; (read-dump-line port integer) => list
> ;; read in a line of data
> ;; 
> ;; Side Effects: reads data from port src
> ;; Requirements: none 
> (define (read-dump-line src width)
>   (let read-data ((count width))
>     (if (or (>= 0 count)                   
>             (eof-object? (peek-char src))) ; check to see if its EOF
>         '()
>         (cons (read-char src) (read-data (- count 1))))))

Try

(define (read-dump-line src width)
  (let read-data ((count width))
    (if (>= 0 count)
        '()
        (let ([next (read-char src)])   ; read the character exactly once
          (if (eof-object? next)
              '()
              (cons next (read-data (- count 1))))))))

1. A program with peek-char is almost always broken. The other MF.

2. You want to perform effect-full actions only once via let. (A
lesson from monads.)

3. The Scheme Report imposes an undecidable correctness criterion on
programs -- that they don't depend on the order of evaluation -- without
(naturally) asking implementations to check it. Go figure; and that language
supposedly has a semantics.

-- Matthias


> 
> when run under MIT Scheme would return the list of characters in
> *reverse* order from how they were in the file.
> 
> All other interpreters ran the code as I expected it would. I cannot
> see any logical error, so I can only wonder why the MIT interpreter
> would give such an odd result. While reversing the list to the correct
> order would be trivial (I could even set it up so that on the first
> function call, it would detect which interpreter it was running under,
> generate the appropriate subfunction, and apply it on each subsequent
> call), I was less concerned with running the program itself and more
> with why there would be this discrepancy. Does anyone have any ideas?
> 
> I was also curious if anyone could give any advice as to improving the
> program at hand. The work I was doing was simply an exercise program
> optimization - I was using the program as a demonstration, and I'd
> wanted it to be reasonably efficient as well as reasonably clear - and
> I was hoping for some input by a few more experienced Schemers. The
> code for the program (a simple hex dump utility) can be found at
> 
> http://www.mega-tokyo.com/forum/attachments/uploaded_files/dump-file.scm.txt
> 
> Thank you for considering these questions.
> 
> --
> Jay Osako                                    aka Schol-R-LEA;2
> If the phone rings today, water it!

0
12/3/2003 4:09:49 AM
Matthias Felleisen <matthias@ccs.neu.edu> writes:

> 3. The Scheme Report imposes an undecidable correctness criteria on
> programs -- that they don't depend on the order of evaluation -- without
> (naturally) asking implementations to check it. Go figure; and that language
> supposedly has a semantics.

Right on.  This is, IMO, the bigger problem with Scheme's "fully
formalized semantics".  Leaving evaluation order unspecified is a big
mistake (buys nearly nothing -- regardless of what some people will
tell you -- and comes with correctness headaches such as those
demonstrated by this example), and the formal semantics section in
RnRS does not even handle it properly.

Everyone would be better off fixing the order of evaluation:

  - The programmer does not have to make sure that her program works
    under circumstances that she cannot test today but which might be
    a fact tomorrow (when the new compiler comes out).
  - The writer of the formal semantics section of RnRS does not have
    to do a hack job ("permute"/"unpermute").
  - The compiler writer does not need to wonder if there is a way of
    squeezing out another ounce of performance by "cleverly" fiddling
    with evaluation order.  (Been there, done that. (*))

(*) Of course, a good compiler will still try to rearrange parts of
the computation.  But the rules are much simpler if they uniformly
say: "Don't rearrange if you cannot prove that doing so does not alter
the observable outcome of the overall computation."

(the other) Matthias
0
find19 (1244)
12/3/2003 4:37:55 AM
Matthias Blume wrote:
> Matthias Felleisen <matthias@ccs.neu.edu> writes:
> 
> 
>>3. The Scheme Report imposes an undecidable correctness criteria on
>>programs -- that they don't depend on the order of evaluation -- without
>>(naturally) asking implementations to check it. Go figure; and that language
>>supposedly has a semantics.
> 
> 
> Right on.  This is, IMO, the bigger problem with Scheme's "fully
> formalized semantics".  Leaving evaluation order unspecified is a big
> mistake (buys nearly nothing -- regardless of what some people will
> tell you -- and comes with correctness headaches such as those
> demonstrated by this example), and the formal semantics section in
> RnRS does not even handle it properly.
> 
> Everyone would be better off fixing the order of evaluation:
> 
>   - The programmer does not have to make sure that her program works
>     under circumstances that she cannot test today but which might be
>     a fact tomorrow (when the new compiler comes out).
>   - The writer of the formal semantics section of RnRS does not have
>     to do a hack job ("permute"/"unpermute").
>   - The compiler writer does not need to wonder if there is a way of
>     squeezing out another ounce of performance by "cleverly" fiddling
>     with evaluation order.  (Been there, done that. (*))
> [...]

You are suggesting a fixed evaluation order (probably left to right).
The implications of that are:
1. Either `let' should have the same semantics as `let*'.
2. Or `let' is not semantically equivalent to `left left lambda'.
3. Or some other option that I'm not aware of.

Would you please elaborate on your opinion a little more.

Aziz,,,

0
12/3/2003 9:26:39 AM
Abdulaziz Ghuloum <aghuloum@cs.indiana.edu> writes:

> You are suggesting a fixed evaluation order (probably left to right).
> The implications of that are:
> 1. Either `let' should have the same semantics as `let*'.
> 2. Or `let' is not semantically equivalent to `left left lambda'.
> 3. Or some other option that I'm not aware of.

(let  ((- +) (n (- 1))) n) => -1   ; (- 1) still sees the outer `-': let binds all at once
(let* ((- +) (n (- 1))) n) =>  1   ; (- 1) sees the new `-' (i.e. +): let* binds in sequence
0
12/3/2003 10:57:38 AM
On Wed, 03 Dec 2003 04:26:39 -0500, Abdulaziz Ghuloum wrote:

> You are suggesting a fixed evaluation order (probably left to right).
> The implications of that are:
> 1. Either `let' should have the same semantics as `let*'.
> 2. Or `let' is not semantically equivalent to `left left lambda'.
> 3. Or some other option that I'm not aware of.

No, fixing evaluation order doesn't imply changing scope rules.
Let would be precisely equivalent to applied lambda.
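
For instance (a small sketch; the concrete values are only there to show
the scoping), fixing a left-to-right order would constrain only when the
init expressions run, not which bindings they see:

(define x 10)
(let ((x 1) (y x)) (list x y))      ; => (1 10): y's init still sees the outer x
((lambda (x y) (list x y)) 1 x)     ; the applied lambda gives the same (1 10)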

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/

0
qrczak (1266)
12/3/2003 11:17:47 AM
Abdulaziz Ghuloum <aghuloum@cs.indiana.edu> writes:

> Matthias Blume wrote:
> > Matthias Felleisen <matthias@ccs.neu.edu> writes:
> > 
> >>3. The Scheme Report imposes an undecidable correctness criteria on
> >>programs -- that they don't depend on the order of evaluation -- without
> >>(naturally) asking implementations to check it. Go figure; and that language
> >>supposedly has a semantics.
> > Right on.  This is, IMO, the bigger problem with Scheme's "fully
> > formalized semantics".  Leaving evaluation order unspecified is a big
> > mistake (buys nearly nothing -- regardless of what some people will
> > tell you -- and comes with correctness headaches such as those
> > demonstrated by this example), and the formal semantics section in
> > RnRS does not even handle it properly.
> > Everyone would be better off fixing the order of evaluation:
> >   - The programmer does not have to make sure that her program works
> >     under circumstances that she cannot test today but which might be
> >     a fact tomorrow (when the new compiler comes out).
> >   - The writer of the formal semantics section of RnRS does not have
> >     to do a hack job ("permute"/"unpermute").
> >   - The compiler writer does not need to wonder if there is a way of
> >     squeezing out another ounce of performance by "cleverly" fiddling
> >     with evaluation order.  (Been there, done that. (*))
> > [...]
> 
> You are suggesting a fixed evaluation order (probably left to right).
> The implications of that are:
> 1. Either `let' should have the same semantics as `let*'.

Not at all.  Let should evaluate left-to-right, but the scoping is
still that of let -- which is different from let*.

> 2. Or `let' is not semantically equivalent to `left left lambda'.
> 3. Or some other option that I'm not aware of.
> 
> Would you please elaborate on your opinion a little more.

Notice that fixing the evaluation order is completely consistent with
today's Scheme definition.  Doing so just makes life easier for
everyone.
0
find19 (1244)
12/3/2003 3:47:51 PM
Matthias Felleisen <matthias@ccs.neu.edu> writes:

> 3. The Scheme Report imposes an undecidable correctness criteria on
> programs -- that they don't depend on the order of evaluation -- without
> (naturally) asking implementations to check it. Go figure; and that language
> supposedly has a semantics.

The issue is that MIT Scheme does right to left by default in the
interpreter where other Schemes do left to right.  MIT Scheme and Chez
Scheme also re-order the arguments when compiling (for optimization).

MIT strongly disagreed with l2r, and the ones doing l2r strongly
disagreed with MIT, and no one wanted to force the issue.

There is another issue, though.  In the presence of macros the order
of evaluation may no longer be left to right (in the untransformed
source code).  Should only function calls have the order enforced?

(By the way, how about removing that correctness criterion from the
report...)
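
As a hedged sketch of the macro point above (the macro itself is made
up): after expansion the second operand is evaluated before the first,
even though it appears later in the source, so a mandated order could
really only apply to the procedure calls in the expanded code.

(define-syntax swap-eval
  (syntax-rules ()
    ((_ a b)
     (let ((second b))        ; b runs first in the expansion
       (list a second)))))    ; although a comes first in the source

(swap-eval (begin (display "a") 1)
           (begin (display "b") 2))   ; displays "ba", returns (1 2)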


0
jrm (1310)
12/3/2003 4:48:57 PM
Joe Marshall wrote:
> Matthias Felleisen <matthias@ccs.neu.edu> writes:
> 
> 
>>3. The Scheme Report imposes an undecidable correctness criteria on
>>programs -- that they don't depend on the order of evaluation -- without
>>(naturally) asking implementations to check it. Go figure; and that language
>>supposedly has a semantics.
> 
> 
> The issue is that MIT Scheme does right to left by default in the
> interpreter where other Schemes do left to right.  MIT Scheme and Chez
> Scheme also re-order the arguments when compiling (for optimization).
> 
> MIT strongly disagreed with l2r, and the ones doing r2l strongly
> disagreed with MIT and no one wanted to force the issue.
> 
FWIW I'm also against specifying an order of evaluation.  SISC's 
continuation protection algorithm relies currently on a right to left 
order.  Other systems take advantage of an unspecified order for other 
optimizations.

	Scott

0
scgmille (240)
12/3/2003 7:56:56 PM
On Wed, 03 Dec 2003 11:48:57 -0500, Joe Marshall wrote:

> There is another issue, though.  In the presence of macros the order
> of evaluation may no longer be left to right (in the untransformed
> source code).  Should only function calls have the order enforced?

Yes, it's obvious to me that a macro can make the evaluation order
different.

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/

0
qrczak (1266)
12/3/2003 8:52:13 PM
"Scott G. Miller" <scgmille@freenetproject.org> writes:

> Other systems take advantage of an unspecified order for other
> optimizations.

How much do they actually buy?  I can always reorder by hand if it is
so important to the efficiency of my code.

I say it again: Leaving the order unspecified, for whatever reason, is
a huge mistake.

Matthias
0
find19 (1244)
12/3/2003 9:14:11 PM
Matthias Blume <find@my.address.elsewhere> writes:

> "Scott G. Miller" <scgmille@freenetproject.org> writes:
>
>> Other systems take advantage of an unspecified order for other
>> optimizations.
>
> How much do they actually buy?  

According to Will Clinger:

  ``As compiled by Twobit for the SPARC, this optimization reduces the
    code size for Larceny v0.24 from 489456 to 452872 bytes, a savings
    of 7.5%.''

See:  
  http://compilers.iecc.com/comparch/article/95-08-080

> I say it again:  Leaving the order unspecified, for whatever reason, is
> a huge mistake.

Probably (I haven't heard a truly persuasive argument, yet).

However, *relying* on the order of evaluation is also a huge mistake,
and if your code doesn't rely on the order of evaluation, then it
doesn't matter that it is unspecified.  

If you write code that does not depend on order of evaluation, then
the order of evaluation makes no difference.  Logically, therefore, if
the order of evaluation makes a difference, it must be that you write
code that depends on it, right?


0
jrm (1310)
12/3/2003 9:28:01 PM
Joe Marshall <jrm@ccs.neu.edu> writes:

> However, *relying* on the order of evaluation is also a huge mistake,
> and if your code doesn't rely on the order of evaluation, then it
> doesn't matter that it is unspecified.  
> 
> If you write code that does not depend on order of evaluation, then
> the order of evaluation makes no difference.  Logically, therefore, if
> the order of evaluation makes a difference, it must be that you write
> code that depends on it, right?

I feel like I'm falling for a troll even as I write this, but....

The problem with your comment is that it assumes programmers are aware
of when they are relying on a particular order.  Often they aren't
(especially if they call into a third-party library that uses effects
such as mutation or continuations).  Unless they test on multiple
Scheme systems with different orders of evaluation, and construct the
right test cases, they'll never find out, either.

Shriram
0
sk1 (223)
12/3/2003 9:53:19 PM
Joe Marshall wrote:

[...]

> If you write code that does not depend on order of evaluation, then
> the order of evaluation makes no difference.  Logically, therefore, if
> the order of evaluation makes a difference, it must be that you write
> code that depends on it, right?

Patient: Doctor, doctor! It hurts when I do this.

Doctor: Well then don't do that no more.

Not a very satisfying argument.

-thant



-- 
Reports that say that something hasn't happened are always interesting 
to me,
because as we know, there are known knowns; there are things we know we 
know.
We also know there are known unknowns; that is to say we know there are 
some
things we do not know. But there are also unknown unknowns - the ones we
don't know we don't know. -- Secretary of Defense Donald Rumsfeld

0
thant (332)
12/3/2003 9:54:46 PM
Joe Marshall <jrm@ccs.neu.edu> writes:

> > How much do they actually buy?  
> 
> According to Will Clinger:
> 
>   ``As compiled by Twobit for the SPARC, this optimization reduces the
>     code size for Larceny v0.24 from 489456 to 452872 bytes, a savings
>     of 7.5%.''
> 
> See:  
>   http://compilers.iecc.com/comparch/article/95-08-080

I would like to see a more detailed account of this, in particular,
what "this optimization" refers to, precisely, i.e., what exactly was
being measured.  In particular, when turning off "this optimization"
were any other things turned off (or on) as well?

(In any case, if the savings are significant enough to want them, then
one can always recoup them by reordering by hand.)

> However, *relying* on the order of evaluation is also a huge mistake,

Well, I would not call it a mistake if the language mandates some
fixed order.  But granted, I would always call it bad style.

[In some idiomatic cases it might not even be bad style.  For example,
the "before" operation in SML can be defined as

   infix before
   fun x before y = x

thereby relying on evaluation order.]
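
A rough Scheme analogue (just a sketch; the port in the comment is
illustrative) would only be reliable under a mandated left-to-right
argument order, which is exactly the point:

(define (before x y) x)   ; return the first value, evaluate the second for effect

;; (before (read-char port) (close-input-port port)) would read the
;; character and then close the port under left-to-right order; under
;; R5RS's unspecified order the close could happen first.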

> and if your code doesn't rely on the order of evaluation, then it
> doesn't matter that it is unspecified.  

The trouble is that it is hard to know that it does not rely on
evaluation order -- and even testing won't find out unless you have a
test compiler that generates all possible permutations.

> If you write code that does not depend on order of evaluation, then
> the order of evaluation makes no difference.  Logically, therefore, if
> the order of evaluation makes a difference, it must be that you write
> code that depends on it, right?

Yes, all languages with side effects have the inherent problem that
order of evaluation matters.  Counting non-termination as a side
effect, there are very few languages that are truly side-effect free.

The above line of reasoning can be used to "defend" all kinds of
language misfeatures: If you carefully program around them they won't
matter, therefore, if they do matter, you didn't carefully program
around them.  This sort of argument is especially harmful if knowing
whether one has actually "carefully programmed around them" is very
hard.

Matthias
0
find19 (1244)
12/3/2003 9:58:07 PM
My criticism of Scheme's evaluation order is *not* based on any deep knowledge 
about compilers. I am *not* a compiler expert. When I need to know something 
about compilers I ask people like Kent, Will, and for low level things, Keith 
Cooper. Kent and Will have repeatedly told me that a free order of evaluation 
for function applications buys some performance and possibly space. I actually 
trust their judgment.

But ... for the sake of a language standard that avoids bad and difficult to 
debug surprises when you move from one dialect to another, I am willing to 
sacrifice performance for well-definedness.

I also believe that well-definedness for the entire language would benefit the 
community as a whole. That includes academic researchers (Will), commercial 
outfits (scheme.com), and people in between (PLT). For researchers, it is 
important that you can point to a standard document that stands out from the 
riff-raff (Java and C#). Scheme has lost the edge there. SML and Haskell have
set a new standard that we have yet to live up to. A well-defined standard is
also the basis for good scripting languages. If mzscheme is your scripting tool 
of choice today, you need to keep in mind that PLT may move on to ML tomorrow 
and you will need to use Phantom Scheme the day after. Only a well-defined 
standard helps here.

Let's face it: if you are that dependent on performance, use high performance 
fortran. Develop in Scheme. Translate. But don't give up mathematical 
foundations for Scheme. You wouldn't do that for your mathematics or product 
either.

-- Matthias



0
12/4/2003 1:30:00 AM
Matthias Felleisen wrote:

> My criticism of Scheme's evaluation order is *not* based on any deep knowledge 
> about compilers. I am *not* a compiler expert. When I need to know something 
> about compilers I ask people like Kent, Will, and for low level things, Keith 
> Cooper. Kent and Will have repeatedly told me that a free order of evaluation 
> for function applications buys some performance and possibly space. I actually 
> trust their judgment.
> 
> But ... for the sake of a language standard that avoids bad and difficult to 
> debug surprises when you move from one dialect to another, I am willing to 
> sacrifice performance for well-definedness.
>
Perhaps, but I agree with Joe that those surprises are really exposing 
yet undiscovered bugs in a program.  One of Scheme's great advantages in 
my opinion is the flexibility it give implementations by leaving certain 
things unspecified.  Some of these make portable programming difficult, 
but I would argue that the order of evaluation is so low level and 
transparent to 90% of programs that the gain in compiler flexibility 
outweighs the ambiguity to programs.

Besides, we have constructs for fixing an order of evaluation for 
expressions which have side effects.  This is no worse than say the 
undefined behavior of multithreaded programs when the programmer doesn't 
use a mutex construct to protect a critical region.
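
For instance, a minimal sketch of such an explicit ordering (the
procedure is made up for illustration):

(define (read-two port)
  (let* ((a (read-char port))    ; guaranteed to run first
         (b (read-char port)))   ; guaranteed to run second
    (list a b)))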

	Scott

0
scgmille (240)
12/4/2003 2:27:50 AM
"Scott G. Miller" <scgmille@freenetproject.org> virkkoi:
> One of Scheme's great advantages in my opinion is the flexibility it
> give implementations by leaving certain things unspecified.

Golly, what an advanced language C must then be.


Lauri Alanko
la@iki.fi
0
la (473)
12/4/2003 2:32:36 AM
Scott G. Miller wrote:

 > This is no worse than say the
> undefined behavior of multithreaded programs when the programmer doesn't 
> use a mutex construct to protect a critical region.

That's a great analogy except it illustrates the exact opposite of what 
you intended. Multithreaded programs are incredibly hard to get right 
and a nightmare to debug. Fortunately they are also rare, in no small 
part precisely *because* they are so difficult to write. By contrast, 
nearly everything in Scheme involves function application. Eliminating 
any behaviour-altering non-determinism in such a crucial area is 
definitely a worthwhile goal.


Matthias.

0
12/4/2003 2:50:04 AM
"Scott G. Miller" <scgmille@freenetproject.org> writes:

> Matthias Felleisen wrote:
> 
> Perhaps, but I agree with Joe that those surprises are really exposing
> yet undiscovered bugs in a program.

Wrong.  The particular problem here is that the problem won't get
exposed anytime soon (unless you are doing your testing with N
different implementations -- and there is still no guarantee that it
won't break on the N+1st, or even on the 1st when you upgrade to the
latest version).

The worst kind of bug is the one you get away with for a long time.

> One of Scheme's great advantages in my opinion is the flexibility it
> give implementations by leaving certain things unspecified.

I found this particular "flexibility" not so terribly useful from an
implementor's point of view, and from a language user's point of view
it is simply terrible.

> Some of these make portable programming
> difficult, but I would argue that the order of evaluation is so low
> level and transparent to 90% of programs that the gain in compiler
> flexibility outweighs the ambiguity to programs.

Again, there is hardly any useful gain (not in my experience, and all
positive reports that I have seen leave me very unconvinced), while
the expense is huge.

> Besides, we have constructs for fixing an order of evaluation for
> expressions which have side effects.

Yes, and those constructs can be used for fixing the order differently
than what is the (fixed) default order when performance really
matters.

>  This is no worse than say the
> undefined behavior of multithreaded programs when the programmer
> doesn't use a mutex construct to protect a critical region.

Now you want to throw all the complexity of concurrent programming
even at simple sequential programs?!?

Matthias
0
find19 (1244)
12/4/2003 2:59:41 AM
"Scott G. Miller" <scgmille@freenetproject.org> writes:

> Perhaps, but I agree with Joe that those surprises are really exposing
> yet undiscovered bugs in a program.  

The problem is that they are *yet* *undiscovered*.  We're not talking
about programmers who knowingly, willfully insert bugs.  We're talking
about protecting the innocent.

Shriram
0
sk1 (223)
12/4/2003 4:10:22 AM
Matthias Radestock <matthias@sorted.org> writes:

> By contrast, nearly everything in Scheme involves function
> application. Eliminating any behaviour-altering non-determinism in
> such a crucial area is definitely a worthwhile goal.

You mean "indeterminacy", not "non-determinism".

(First thread on c.l.s to feature all three Matthias's?  How about in
a single day?)

Shriram
0
sk1 (223)
12/4/2003 4:12:08 AM
Shriram Krishnamurthi wrote:

> Matthias Radestock <matthias@sorted.org> writes:
> 
> 
>>By contrast, nearly everything in Scheme involves function
>>application. Eliminating any behaviour-altering non-determinism in
>>such a crucial area is definitely a worthwhile goal.
> 
> 
> You mean "indeterminacy", not "non-determinism".

indeterminacy
      n : the quality of being vague and poorly defined

nondeterminism
      <algorithm> A property of a computation which may have more
      than one result.

I definitely meant the latter. RnRS is not at all vague about evaluation 
order - it defines the rules very precisely. The problem is that the 
rules permit nondeterminism - calling a procedure twice with exactly the 
same inputs (i.e. args, external inputs, state of heap etc) can produce 
different results.


Matthias.

0
12/4/2003 11:13:17 AM
On Thu, 4 Dec 2003 02:32:36 +0000 (UTC), Lauri Alanko <la@iki.fi> wrote:

> "Scott G. Miller" <scgmille@freenetproject.org> virkkoi:
>> One of Scheme's great advantages in my opinion is the flexibility it
>> give implementations by leaving certain things unspecified.
>
> Golly, what an advanced language C must then be.

And it is. For low-level programming there hasn't really been anything
much better. Even the Fox project, which is impressive, hasn't produced
a competing OS to Linux. This surely has to say something about the
efficacy of the ideas embodied in C.

Mind you, C has its problems, but my favorite thought experiments these
days involve recasting C in s-expressions with first-class functions, full
TCO, and automatic memory management. Oh...that sounds a lot like Scheme
with machine-oriented types...hmmm

david rush
-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
0
kumo7543 (108)
12/4/2003 12:05:13 PM
On 03 Dec 2003 23:10:22 -0500, Shriram Krishnamurthi <sk@cs.brown.edu> 
wrote:
> "Scott G. Miller" <scgmille@freenetproject.org> writes:
>
>> Perhaps, but I agree with Joe that those surprises are really exposing
>> yet undiscovered bugs in a program.
>
> The problem is that they are *yet* *undiscovered*.  We're not talking
> about programmers who knowingly, willfully insert bugs.

STR: I'm going to program me a BMW!

> We're talking about protecting the innocent.

STR: Everyone's guilty of something.

david rush
-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
0
kumo7543 (108)
12/4/2003 12:08:37 PM
Shriram Krishnamurthi <sk@cs.brown.edu> writes:

> Joe Marshall <jrm@ccs.neu.edu> writes:
>
>> However, *relying* on the order of evaluation is also a huge mistake,
>> and if your code doesn't rely on the order of evaluation, then it
>> doesn't matter that it is unspecified.  
>> 
>> If you write code that does not depend on order of evaluation, then
>> the order of evaluation makes no difference.  Logically, therefore, if
>> the order of evaluation makes a difference, it must be that you write
>> code that depends on it, right?
>
> I feel like I'm falling for a troll even as I write this, but....

I need to work on being more subtle.  You weren't supposed to notice.

> The problem with your comment is that it assumes programmers are aware
> of when they are relying on a particular order.  

Yes, that is a problem.  One (facetious) suggestion was to vary the
order on each and every call.

> Often they aren't (especially if they call into a third-party
> library that uses effects such as mutation or continuations).
> Unless they test on multiple Scheme systems with different orders of
> evaluation, and construct the right test cases, they'll never find
> out, either.

But if all Scheme systems have the same order of evaluation, they'll
never find out.  Isn't this a case of masking the problem rather than
solving it?
0
jrm (1310)
12/4/2003 2:37:19 PM
Joe Marshall <jrm@ccs.neu.edu> writes:

> > Often they aren't (especially if they call into a third-party
> > library that uses effects such as mutation or continuations).
> > Unless they test on multiple Scheme systems with different orders of
> > evaluation, and construct the right test cases, they'll never find
> > out, either.
> 
> But if all Scheme systems have the same order of evaluation, they'll
> never find out.  Isn't this a case of masking the problem rather than
> solving it?

No.  It is turning the problem into a non-problem because relying on
evaluation order is then no longer a bug but merely poor style.

Matthias
0
find19 (1244)
12/4/2003 2:48:07 PM
Matthias Blume <find@my.address.elsewhere> writes:

> Joe Marshall <jrm@ccs.neu.edu> writes:
>
>> If you write code that does not depend on order of evaluation, then
>> the order of evaluation makes no difference.  Logically, therefore, if
>> the order of evaluation makes a difference, it must be that you write
>> code that depends on it, right?
>
> The above line of reasoning can be used to "defend" all kinds of
> language misfeatures:  If you carefully program around them they won't
> matter, therefore, if they do matter, you didn't carefully program
> around them.  

`Programming around them' implies that there is a desire to do things
one way and that is being prohibited by the language.

Contrast Interlisp, which does not do arity checking (arguments
default to NIL if not supplied, extra arguments are silently ignored),
with MacLisp, which does.  When you write code in MacLisp, you must
ensure that every call site matches the arity of the callee.  So you
have to carefully program around the arity checking so that your code
does not depend on argument defaulting or discarding.

The reasoning is sound, it is the premise that is at fault.

> This sort of argument is especially harmful if knowing whether one
> has actually "carefully programmed around them" is very hard.

And this is the fault in the premise.

Note that requiring a particular order of evaluation makes it *harder*
to find places where you depend on it.


I'm really not invested in leaving it unspecified.  I don't much care.
But the only valid reason I've seen for specifying it is that it will
cut the number of bugs reported that are caused by dependence upon a
particular order.  (Of course, it will make it much harder to explain
*why* one shouldn't depend on the order of evaluation.)

Neither C nor C++ defines a particular order of evaluation, and they
seem to be popular.  I even hear that C can be useful.

--
~jrm
0
jrm (1310)
12/4/2003 3:06:52 PM
> Joe Marshall <jrm@ccs.neu.edu> writes:
>> But if all Scheme systems have the same order of evaluation, they'll
>> never find out.  Isn't this a case of masking the problem rather than
>> solving it?

Matthias Blume <find@my.address.elsewhere> wrote:
> No.  It is turning the problem into a non-problem because relying on
> evaluation order is then no longer a bug but merely poor style.

And I think it's a mistake to "bless" a poor style by making it
"well-defined but discouraged," because sloppy programmers will ignore
the "but discouraged" part, instead claiming that the well-definedness
justifies their style. If anything, I'd rather go in the other
direction, requiring a diagnostic for code that breaks the constraint --
except, of course, the constraint is non-decidable in this case.

Furthermore, an explicit evaluation order removes some expressiveness
from a language. In general, only some steps of a procedure need to
happen in a particular order. Good designers know this and make it
explicit in their designs. It's not just about optimization either; it
also affects maintainability. It's much harder to maintain code when you
don't know which sequences are semantically important and which are just
an artifact of coding.

In other words, explicit sequencing is important any time there's *any*
reason to reorganize the code, whether it's a machine or a human doing
it. Hand-tuning, compiler optimization, parallelism (including the
automatic kind now found in many CPUs with multiple logic units),
maintenance -- all of these tasks need to know when the order is
important and when it isn't.

Overspecifying the order of evaluation takes away expressiveness because
it makes the ordering implicit rather than explicit. It's one more place
where a programmer needs to rely on comments rather than the actual
procedure to explain what's going on. Personally, I'd rather see *more*
constructs that allow a programmer to say "the order doesn't matter,"
rather than fewer of them. So I'm strongly opposed to changing order of
evaluation from "unspecified" to "implicitly specified."
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/4/2003 3:44:25 PM
"Bradd W. Szonye" <bradd+news@szonye.com> virkkoi:
> And I think it's a mistake to "bless" a poor style by making it
> "well-defined but discouraged," because sloppy programmers will ignore
> the "but discouraged" part,

How's that worse than having sloppy programmers ignore the current
"undefined" part?


Lauri Alanko
la@iki.fi
0
la (473)
12/4/2003 4:01:37 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> > Joe Marshall <jrm@ccs.neu.edu> writes:
> >> But if all Scheme systems have the same order of evaluation, they'll
> >> never find out.  Isn't this a case of masking the problem rather than
> >> solving it?
> 
> Matthias Blume <find@my.address.elsewhere> wrote:
> > No.  It is turning the problem into a non-problem because relying on
> > evaluation order is then no longer a bug but merely poor style.
> 
> And I think it's a mistake to "bless" a poor style by making it
> "well-defined but discouraged," because sloppy programmers will ignore
> the "but discouraged" part, instead claiming that the well-definedness
> justifies their style.

Well, that's fine.  They are not committing a factual error, they are
just writing poor code.  "Poor" is, of course, in the eye of the
beholder, whereas "factually wrong" is not.

> If anything, I'd rather go in the other
> direction, requiring a diagnostic for code that breaks the constraint --
> except, of course, the constraint is non-decidable in this case.

If it were decidable, and if a diagnostic were required, then I would
have no problem with it.  Precisely because it is NOT decidable I am
against leaving the order unspecified.

> It's much harder to maintain code when you don't know which
> sequences are semantically important and which are just an artifact
> of coding.

.... and which non-sequences are outright bugs that just haven't been
discovered yet.

> Overspecifying the order of evaluation takes away expressiveness because
> it makes the ordering implicit rather than explicit. It's one more place
> where a programmer needs to rely on comments rather than the actual
> procedure to explain what's going on. Personally, I'd rather see *more*
> constructs that allow a programmer to say "the order doesn't matter,"

Well, if you really want that, there are plenty of languages that let
you *explicitly* say "do these things in any order you like".  Usually
such features make it much harder to get one's code correct.  For that
reason, making this part of a feature as ubiquitous as procedure calls
is a big mistake.  All the arguments about "expressiveness" are red
herrings.

People in this newsgroup often like to waffle about the "freedom" that
is granted by a language.  Making the evaluation order unspecified
*takes away* such freedom.  Proof: Every program that is a valid
program under unspecified order is also a valid program under fixed
evaluation order.  But not vice versa.  Therefore, there are fewer
correct programs under "unspecified" order.  Unlike with type systems,
though, whether or not a program is invalid because of reliance on
evaluation order is not decidable (either at compile or at runtime).

I really don't see anything in your argument that is the least bit
convincing.  Unspecified evaluation order is bad for everyone
involved.  There is no upside to it.

> rather than fewer of them. So I'm strongly opposed to changing order of
> evaluation from "unspecified" to "implicitly specified."

We'll have to agree to disagree then.

Matthias (one of the three)
0
find19 (1244)
12/4/2003 4:04:55 PM
> "Bradd W. Szonye" <bradd+news@szonye.com> virkkoi:
>> And I think it's a mistake to "bless" a poor style by making it
>> "well-defined but discouraged," because sloppy programmers will
>> ignore the "but discouraged" part,

Lauri Alanko <la@iki.fi> wrote:
> How's that worse than having sloppy programmers ignore the current
> "undefined" part?

The sloppy programmers part doesn't get any worse, but it doesn't get
any better either. Meanwhile, optimization, parallelization, and
maintenance become more difficult. There's no gain and a significant
loss, therefore, it's a bad idea.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/4/2003 4:13:57 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> Meanwhile, optimization, parallelization, and
> maintenance become more difficult.

Nonsense.  Optimization does not get any harder, the thing has
absolutely nothing to do with parallelization (read what the
definition actually says!), and maintenance becomes *easier*.

> There's no gain and a significant
> loss, therefore, it's a bad idea.

You have it upside down.  There is no loss and a significant gain,
therefore it is an excellent idea.
0
find19 (1244)
12/4/2003 4:20:05 PM
Matthias Blume <find@my.address.elsewhere> wrote:
>>> No.  It is turning the problem into a non-problem because relying on
>>> evaluation order is then no longer a bug but merely poor style.

> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> And I think it's a mistake to "bless" a poor style by making it
>> "well-defined but discouraged," because sloppy programmers will
>> ignore the "but discouraged" part, instead claiming that the
>> well-definedness justifies their style.

> Well, that's fine.  They are not committing a factual error, they are
> just writing poor code.  "Poor" is, of course, in the eye of the
> beholder, whereas "factually wrong" is not.

Why bless poor code? It encourages the errors.

>> Overspecifying the order of evaluation takes away expressiveness
>> because it makes the ordering implicit rather than explicit. It's one
>> more place where a programmer needs to rely on comments rather than
>> the actual procedure to explain what's going on. Personally, I'd
>> rather see *more* constructs that allow a programmer to say "the
>> order doesn't matter,"

> Well, if you really want that, there are plenty of languages that let
> you *explicitly* say "do these things in any order you like".  Usually
> such features make it much harder to get one's code correct.

Only if you're sloppy about design (i.e., you haven't carefully
considered which sequences are important and which are coincidental).
Generally, I'm all for protecting programmers from making careless
mistakes, but not when it actually obscures the design.

> For that reason, making this part of a feature as ubiquitous as
> procedure calls is a big mistake.  All the arguments about
> "expressiveness" are red herrings.

No, the big mistake is conflating the concepts of "argument list" and
"sequence of operations." That's a fundamental design error, not just a
bit of sloppy programming, and a programming language should *not*
encourage the error.

> People in this newsgroup often like to waffle about the "freedom" that
> is granted by a language.  Making the evaluation order unspecified
> *takes away* such freedom.

Not when there's a trivial, alternate way that lets you write it.

    (let* ((arg1 ...) (arg2 ...)) (f arg1 arg2))

If you use that pattern a lot, you can even write a macro to simplify
the syntax, e.g. (sequential-args f arg1 arg2). But that's a one-way
trip. Once you require sequential evaluation for all procedure calls,
you remove all possibility of automatic optimization. Only hand-tuning
is possible then, and I think we all know how error-prone that is.
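
Such a macro is easy to sketch (the name follows the hypothetical
sequential-args above; the fixed arities are only for illustration):

(define-syntax sequential-args
  (syntax-rules ()
    ((_ f a)     (let* ((x a))             (f x)))
    ((_ f a b)   (let* ((x a) (y b))       (f x y)))
    ((_ f a b c) (let* ((x a) (y b) (z c)) (f x y z)))))

;; (sequential-args cons (read-char src) (read-char src))
;; reads the two characters in the written order, whatever the
;; implementation's default argument order is.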

> Proof: Every program that is a valid program under unspecified order
> is also a valid program under fixed evaluation order.  But not vice
> versa.  Therefore, there are fewer correct programs under
> "unspecified" order.

Refutation: Every program that is not valid under unspecified order has
a trivial transformation that makes it valid (and that makes the
ordering *explicit*, which is an aid to maintainers). Meanwhile, the
fixed order precludes many automatic optimizations, which encourages
premature hand-tuning. (Yes, it actually encourages more than one poor
coding style.)

> Unlike with type systems, though, whether or not a program is invalid
> because of reliance on evaluation order is not decidable (either at
> compile or at runtime).

Which is why it's a bad idea to conflate evaluation order with other
concepts, like argument lists. When order matters, it's not enough to
just throw code at a compiler and hope that it works.

> I really don't see anything in your argument that is the least bit
> convincing.  Unspecified evaluation order is bad for everyone
> involved.  There is no upside to it.

Allowing automatic optimization and discouraging poor coding styles is
not an upside? Meanwhile, fixed evaluation order has several concrete
drawbacks. It effectively shifts the ambiguity from initial coding to
maintenance. The original developer knows that he can rely on the order
of evaluation, but maintainers can't tell whether they've actually done
so. By conflating the two ideas, you lose information.

I don't know how much maintenance work you've done, but it's a huge part
of the software lifecycle, typically over 50%. By blessing a poor design
and coding style, you make maintenance more difficult and increase the
overall cost of the software. It's a quadruple loss: it encourages
sloppiness, it encourages premature hand-tuning, it precludes
optimization, and it loses information about the program.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/4/2003 4:33:14 PM
> "Bradd W. Szonye" wrote:
>> Meanwhile, optimization, parallelization, and
>> maintenance become more difficult.

Matthias Blume <find@my.address.elsewhere> wrote:
> Nonsense.  Optimization does not get any harder ....

By constraining the order of evaluation, you reduce the set of possible
optimizations. On some architectures (e.g., IA-64), those constraints go
all the way down to the hardware level.

> the thing has absolutely nothing to do with parallelization (read what
> the definition actually says!) ....

"Although the order of evaluation is otherwise unspecified, the effect
of any concurrent evaluation of the operator and operand expressions is
constrained to be consistent with some sequential order of evaluation."

Evaluating arguments in parallel can be consistent with some sequential
order of evaluation. If you tighten the constraints on evaluation order,
however, you reduce the opportunities for parallelization. For example,
consider a call with three arguments. The first and last arguments are
purely functional, but the middle argument is not. With unspecified
order, the system can parallelize evaluation of the purely-functional
arguments. With fixed order, it cannot, because the middle argument
creates a "sequence point" that you cannot cross without breaking
consistency with the abstract model.
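
Concretely, the scenario looks something like this (a self-contained
sketch; the counter and procedures are made up):

(define hits 0)
(define (record-hit!) (set! hits (+ hits 1)) hits)

(list (expt 2 20)        ; pure
      (record-hit!)      ; effectful middle argument
      (sqrt 2))          ; pure

;; With the order unspecified, the first and third arguments may be
;; evaluated in parallel; with a mandated left-to-right order they sit
;; on opposite sides of the middle argument's sequence point.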

This sort of thing is especially important now that "superscalar"
machine architectures are ubiquitous. For example, on an IA-64 system,
unnecessary sequence points have *major* performance implications. Why
introduce unnecessary sequence points just to bless a poor design and
coding style?

> and maintenance becomes *easier*.

No, a few bugs become non-bugs. But overall maintenance becomes more
expensive, because the fixed evaluation order removes information about
the procedure. Specifically, it removes a guarantee that "order doesn't
matter here," and replaces it with "order may or may not be important."
That increases the cost of diagnosis and code rework.

It does eliminate a few situations where a programmer relied on order
where he shouldn't have, but that kind of activity is *much* less common
than code review and diagnosis.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/4/2003 4:49:14 PM
Matthias Felleisen <matthias@ccs.neu.edu> wrote in message news:<bqm2fu$dtt$1@camelot.ccs.neu.edu>...

> [] I am willing to 
> sacrifice performance for well-definedness.
> 
> I also believe that well-definedness for the entire language would benefit the 
> community as a whole.  

Does this necessarily require fixed evaluation order?  A quick perusal
of the literature shows various candidate techniques for giving a precise, 
well-defined semantics to unspecified evaluation order.  Powerdomains
come up most often, but there are others.

J.E.
0
12/4/2003 5:26:34 PM
Matthias Radestock <matthias@sorted.org> writes:

> indeterminacy
>       n : the quality of being vague and poorly defined
> 
> nondeterminism
>       <algorithm> A property of a computation which may have more
>       than one result.

You're way too smart to pull this dumb dictionary trick on me.  The
function application has only one result.  It may be chosen from a set
of multiple results.

Put otherwise: leaving the order of evaluation undefined does not turn
Scheme into a mini-Prolog.

Shriram
0
sk1 (223)
12/4/2003 5:27:18 PM
Matthias Blume <find@my.address.elsewhere> writes:

> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>
>> There's no gain and a significant
>> loss, therefore, it's a bad idea.
>
> You have it upside down.  There is no loss and a significant gain,
> therefore it is an excellent idea.

You have it sideways:  there is insignificant loss and insignificant
gain and therefore an excellent topic for a usenet flamefest.

So....
  Left to right, right to left, or something more original?

  Is the function position evaluated first or last?

  Does this apply to LETREC?

  Should library syntax be required to have a particular order?

  What about syntax introduced by extensions or SRFI's?


0
jrm (1310)
12/4/2003 5:31:43 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> "Although the order of evaluation is otherwise unspecified, the effect
> of any concurrent evaluation of the operator and operand expressions is
> constrained to be consistent with some sequential order of evaluation."
> 
> Evaluating arguments in parallel can be consistent with some sequential
> order of evaluation.

But only if you know what side-effects occur -- in which case you can
do that optimization even in the case of mandatory fixed order.

> If you tighten the constraints on evaluation order,
> however, you reduce the opportunities for parallelization. For example,
> consider a call with three arguments. The first and last arguments are
> purely functional, but the middle argument is not. With unspecified
> order, the system can parallelize evaluation of the purely-functional
> arguments. With fixed order, it cannot, because the middle argument
> creates a "sequence point" that you cannot cross without breaking
> consistency with the abstract model.

False.  If the compiler actually knows that the first and third
arguments are purely functional, then it can ignore the conceptual
sequence point and go ahead with parallelization.  If it does not know
it, then it cannot parallelize regardless of whether or not the order
is fixed by the language definition.

> > and maintenance becomes *easier*.
> 
> No, a few bugs become non-bugs. But overall maintenance becomes more
> expensive, because the fixed evaluation order removes information about
> the procedure.


> Specifically, it removes a guarantee that "order doesn't
> matter here,"

There is no such "guarantee" unless the compiler proves it.  If the
compiler can prove it, it can rearrange anyway because the proof means
that nobody will be able to tell the difference.

Matthias
0
find19 (1244)
12/4/2003 5:41:07 PM
Joe Marshall <jrm@ccs.neu.edu> writes:

> You have it sideways:  there is insignificant loss and insignificant
> gain and therefore an excellent topic for a usenet flamefest.

Yes, apparently.  In any case, to *me* the gain is very significant,
and I speak for myself here.

> So....
>   Left to right, right to left, or something more original?

I don't really care, but left-to-right might be favorable under the
principle of least surprise.  (But my own VSCM implementation uses
right-to-left, which at the time I found cute.  I wouldn't do it that
way anymore.)

>   Is the function position evaluated first or last?

It depends.  For example, under l2r I'd say first, under r2l: last.

>   Does this apply to LETREC?

Yes.

>   Should library syntax be required to have a particular order?

Yes.  That should be part of the library spec.  (As a general
principle, I don't like macros very much, though.)

>   What about syntax introduced by extensions or SRFI's?

Specified by the spec of the extension or the SRFI.
0
find19 (1244)
12/4/2003 5:45:26 PM
Joe Marshall <jrm@ccs.neu.edu> writes:

>   Left to right, right to left, or something more original?

Since the primes are the building blocks of the natural numbers, do
the primes in order first, then the others, also in order.

A really clever program could then use the execution time as a
primality test.

Since 1 is neither prime nor non-prime, the order in which the first
argument is evaluated is left undefined. <-;

Okay, I'll get back to work now.

Shriram
0
sk1 (223)
12/4/2003 6:01:32 PM
Matthias Blume wrote:
   Bradd W. Szonye  wrote:
> > If you tighten the constraints on evaluation order,
> > however, you reduce the opportunities for parallelization. For example,
> > consider a call with three arguments. The first and last arguments are
> > purely functional, but the middle argument is not. With unspecified
> > order, the system can parallelize evaluation of the purely-functional
> > arguments. With fixed order, it cannot, because the middle argument
> > creates a "sequence point" that you cannot cross without breaking
> > consistency with the abstract model.
> 
> False.  If the compiler actually knows that the first and third
> arguments are purely functional, then it can ignore the conceptional
> sequence point and go ahead with parallelization.  If it does not know
> it, then it cannot parallelize regardless of whether or not the order
> is fixed by the language definition.

You are right that, with pure functional arguments it doesn't
matter and the compiler can resequence them anyway if it proves 
they are pure functional.

But the definition of pure functional has two parts; a pure 
functional expression is non-side effecting and it is not 
affected by any side effects. Consider instead the case of 
three non-side effecting expressions.  Even if they aren't 
pure functional (ie, they can be affected by side effects)  
the compiler can schedule them arbitrarily because it knows 
that none of them will affect the outcome of any of the others.  
But if you insert a side-effecting expression as the second 
argument, and the compiler is unable to prove that its side 
effects don't affect the other two functions, then it 
introduces a sequence point that dictates the evaluation 
order of all three expressions.

This was probably the situation Bradd was thinking of.

			Bear
0
bear (1219)
12/4/2003 6:03:27 PM
Shriram Krishnamurthi <sk@cs.brown.edu> writes:

> Joe Marshall <jrm@ccs.neu.edu> writes:
>
>>   Left to right, right to left, or something more original?
>
> Since the primes are the building blocks of the natural numbers, do
> the primes in order first, then the others, also in order.

Ok, but what do you mean by `in order'?

> A really clever program could then use the execution time as a
> primality test.
>
> Since 1 is neither prime nor non-prime, the order in which the first
> argument is evaluated is left undefined. <-;

Relative to the function itself?
0
jrm (1310)
12/4/2003 6:13:52 PM
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> "Although the order of evaluation is otherwise unspecified, the
>> effect of any concurrent evaluation of the operator and operand
>> expressions is constrained to be consistent with some sequential
>> order of evaluation."
>> 
>> Evaluating arguments in parallel can be consistent with some
>> sequential order of evaluation.

Matthias Blume <find@my.address.elsewhere> wrote:
> But only if you know what side-effects occur -- in which case you can
> do that optimization even in the case of mandatory fixed order.

The former is true, but the latter is false.

>> If you tighten the constraints on evaluation order, however, you
>> reduce the opportunities for parallelization. For example, consider a
>> call with three arguments. The first and last arguments are purely
>> functional, but the middle argument is not. With unspecified order,
>> the system can parallelize evaluation of the purely-functional
>> arguments. With fixed order, it cannot, because the middle argument
>> creates a "sequence point" that you cannot cross without breaking
>> consistency with the abstract model.

> False.  If the compiler actually knows that the first and third
> arguments are purely functional, then it can ignore the conceptional
> sequence point and go ahead with parallelization ....

Not if the last argument depends on a side-effect of the middle
argument. The compiler has no way to tell whether that will change the
meaning of the evaluation.

You may be thinking that such code would be invalid under the
unspecified-order model, but that isn't generally true, because
side-effects do not always change the abstract meaning of a procedure.
For a concrete example, consider what happens if all three arguments
refer to a splay tree, and the middle argument is a splay-tree find. It
*will* have side-effects, because splay-find mutates the tree. However,
the evaluation order does not change the meaning of the program, because
the mutation is entirely below the abstraction barrier. A C++ programmer
would call it a "mutable const" procedure.

Unfortunately, the compiler has no way of knowing what's "below the
abstraction barrier." All it can see is the mutation. Therefore, it sets
a sequence point, and it can't parallelize the first and last arguments.

Are these situations common? I'm not sure. It can happen any time an
argument has a side-effect below the abstraction barrier.

>> ... overall maintenance becomes more expensive, because the fixed
>> evaluation order removes information about the procedure.
>> Specifically, it removes a guarantee that "order doesn't matter
>> here" ....

> There is no such "guarantee" unless the compiler proves it.

The Scheme standard provides the guarantee. If a programmer ignores it,
that's a bug. Therefore, if a maintainer does notice that a procedure
call depends on a particular order, he immediately knows that it's a
design bug.

I'm curious: What's your main programming experience -- development,
maintenance, teaching, etc.? Your views on this suggest to me that you
work mainly on new development, or maybe with student programmers in a
teaching environment. I work more in coding standards, reviews, best
practices, maintenance, and other QA type stuff. To me, "fixing" the
language instead of the designs seems like exactly the wrong thing to
do.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/4/2003 6:15:44 PM
Regarding (f a1 a2 a3), where a2 has side-effects and therefore
introduces a "sequence point" under a fixed argument evaluation order.

> Matthias Blume wrote:
>> False.  If the compiler actually knows that the first and third
>> arguments are purely functional, then it can ignore the conceptual
>> sequence point and go ahead with parallelization.  If it does not
>> know it, then it cannot parallelize regardless of whether or not the
>> order is fixed by the language definition.

Ray Dillinger <bear@sonic.net> wrote:
> You are right that, with pure functional arguments it doesn't matter
> and the compiler can resequence them anyway if it proves they are pure
> functional. But the definition of pure functional has two parts; a
> pure functional expression is non-side effecting and it is not
> affected by any side effects.

I was using a less strict definition, functions that produce no side
effects. However, my concrete example also fits your definition:
Consider two functions, SPLAY-FIND and INORDER-TRAVERSE. The first
function has side effects, but they're below the abstraction barrier.
The second function is a purely-functional tree walker. Even though
SPLAY-FIND mutates the tree, those mutations don't affect INORDER-TRAVERSE, because
all the side effects are below the abstraction barrier.

> Consider instead the case of three non-side effecting expressions.
> Even if they aren't pure functional (ie, they can be affected by side
> effects)  the compiler can schedule them arbitrarily because it knows
> that none of them will affect the outcome of any of the others.

Right.

> But if you insert a side-effecting expression as the second argument,
> and the compiler is unable to prove that its side effects don't affect
> the other two functions, then it introduces a sequence point that
> dictates the evaluation order of all three expressions.

Right. Part of the problem is that the compiler doesn't know about
abstraction barriers. It has no way to know that SPLAY-FIND doesn't
change the external behavior at all.

Another example that might make this more obvious to Schemers: CONS has
side effects. With a copying collector, it can completely rearrange a
program's memory layout. However, those side effects are behind an
abstraction barrier such that Scheme programs aren't even aware of it.
Consider what happens, though, if you re-implement CONS so that it's
above the compiler's abstraction barrier. Now almost all functions,
even the purely-functional ones, appear to have "side effects," thus
introducing more sequence points. While the programmer may realize that
CONS's side effects are irrelevant to the abstract program, the compiler
has no way to determine that.
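
To make that concrete, here is a toy re-implementation (hypothetical
names, purely for illustration): CONS written above the barrier as
allocation from a vector "heap" with a mutable free pointer. No program
can observe the SET!, but all the compiler sees is a mutation of a
global variable on every allocation.

    (define *heap* (make-vector 1000))
    (define *free* 0)

    (define (my-cons a d)
      (let ((p *free*))
        (vector-set! *heap* p a)
        (vector-set! *heap* (+ p 1) d)
        (set! *free* (+ p 2))      ; the "invisible" side effect
        p))                        ; a pair is just an index into *heap*

    (define (my-car p) (vector-ref *heap* p))
    (define (my-cdr p) (vector-ref *heap* (+ p 1)))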

Or, to put it another way, if you make SPLAY-FIND a primitive, and you
put a hard abstraction barrier over it, the compiler can treat it as
"purely functional." But a compiler can't figure that out for itself. It
must be much more conservative about side-effects and sequence points,
because it doesn't know where the abstraction barriers lie.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/4/2003 6:40:11 PM
Ray Dillinger <bear@sonic.net> writes:

> Matthias Blume wrote:
>    Bradd W. Szonye  wrote:
> > > If you tighten the constraints on evaluation order,
> > > however, you reduce the opportunities for parallelization. For example,
> > > consider a call with three arguments. The first and last arguments are
> > > purely functional, but the middle argument is not. With unspecified
> > > order, the system can parallelize evaluation of the purely-functional
> > > arguments. With fixed order, it cannot, because the middle argument
> > > creates a "sequence point" that you cannot cross without breaking
> > > consistency with the abstract model.
> > 
> > False.  If the compiler actually knows that the first and third
> > arguments are purely functional, then it can ignore the conceptual
> > sequence point and go ahead with parallelization.  If it does not know
> > it, then it cannot parallelize regardless of whether or not the order
> > is fixed by the language definition.
> 
> You are right that, with pure functional arguments it doesn't
> matter and the compiler can resequence them anyway if it proves 
> they are pure functional.
> 
> But the definition of pure functional has two parts; a pure 
> functional expression is non-side effecting and it is not 
> affected by any side effects. Consider instead the case of 
> three non-side effecting expressions.  Even if they aren't 
> pure functional (ie, they can be affected by side effects)  
> the compiler can schedule them arbitrarily because it knows 
> that none of them will affect the outcome of any of the others.  
> But if you insert a side-effecting expression as the second 
> argument, and the compiler is unable to prove that its side 
> effects don't affect the other two functions, then it 
> introduces a sequence point that dictates the evaluation 
> order of all three expressions.
> 
> This was probably the situation Bradd was thinking of.

Sure, there is no use in denying that there are some (rather obscure)
corner cases where the restriction gives some additional power to a
compiler.  I suspect, though, that most of the measured benefits are
of the "we trust the programmer" nature.

On average, the gains in efficiency achieved by the restriction on
what the programmer can write reek of institutionalized premature
micro-optimization.  If the presence of B between A and C prevents A
and C from being parallelized, and if this *really* matters, and if
you know that B could be pulled in front, then *just do it*:

   (let ((b B)) (f A b C))

instead of writing

   (f A B C)

and then relying on questionable, potentially semantics-altering
"optimizations" done by a compiler.

Matthias
0
find19 (1244)
12/4/2003 7:16:30 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> Another example that might make this more obvious to Schemers: CONS has
> side effects. With a copying collector, it can completely rearrange a
> program's memory layout. However, those side effects are behind an
> abstraction barrier such that Scheme programs aren't even aware of it.

Scheme's CONS has a side-effect even if you ignore what happens behind
that particular abstraction barrier:  it allocates state.  (But that's
an entirely separate story.)

Matthias
0
find19 (1244)
12/4/2003 7:20:25 PM
Matthias Blume <find@my.address.elsewhere> wrote:
> Sure, there is no use in denying that there are some (rather obscure)
> corner cases where the [unspecified argument evaluation order] gives
> some additional power to a compiler.

Obscure? I'd expect it to be systematic, actually. At the very least, I
would expect a small but significant difference from switching between
left to right and right to left according to the machine's stack
architecture. (For example, right-to-left seems like it would be more
efficient if the machine uses a downward-growing stack for arguments.)

And I'd also expect the implicit sequence points to make a noticeable
difference on "superscalar" CPUs like the Pentium and Itanium, where the
CPU itself parallelizes instructions. Compilers would need to insert a
lot more NOPs or stop bits to guarantee correct ordering, which
interferes with pipelining and increases code size.

> I suspect, though, that most of the measured benefits are of the "we
> trust the programmer" nature.

Dunno what you mean by this.

> On average, the gains in efficiency achieved by the restriction on
> what the programmer can write reek of institutionalized premature
> micro-optimization.

How so? If you shave a bit off each procedure call, especially in a
language as procedure-happy as Scheme, you see systematic improvements.
Also, the more you let the compiler optimize code, the less you need
programmers doing expensive hand-tuning. It's the very opposite of
premature micro-optimization.

> If the presence of B between A and C prevents A and C from being
> parallelized, and if this *really* matters, and if you know that B
> could be pulled in front, then *just do it*:
> 
>    (let ((b B)) (f A b C))
> 
> instead of writing
> 
>    (f A B C)
> 
> and then relying on questionable, potentially semantics-altering
> "optimizations" done by a compiler.

But the compiler *doesn't* alter semantics. R5RS doesn't permit it, and
the suggested change wouldn't either. Currently, the semantics are only
"changed" when programmers rely on unspecified behavior, which is a
design/coding error, not a compiler error.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/4/2003 7:50:22 PM
Bradd W. Szonye wrote:
> Matthias Blume <find@my.address.elsewhere> wrote:

[...]

>>On average, the gains in efficiency achieved by the restriction on
>>what the programmer can write reek of institutionalized premature
>>micro-optimization.
> 
> 
> How so? If you shave a bit off each procedure call, especially in a
> language as procedure-happy as Scheme, you see systematic improvements.
> Also, the more you let the compiler optimize code, the less you need
> programmers doing expensive hand-tuning. It's the very opposite of
> premature micro-optimization.

If this kind of optimization is so important, why not go with a static 
type system? That's going to buy you much more of an improvement in 
performance than allowing for arbitrary evaluation order.

-thant

0
thant (332)
12/4/2003 8:13:03 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> But the compiler *doesn't* alter semantics. R5RS doesn't permit it, and
> the suggested change wouldn't either. Currently, the semantics are only
> "changed" when programmers rely on unspecified behavior, which is a
> design/coding error, not a compiler error.

That's just sophistry.  This thread started with a real example of
someone observing unexpected behavior when moving to a different
compiler.  So effectively the semantics changed on his program -- even
though, as you correctly note, the standard did not say what the
semantics were supposed to be.  Every particular implementation
provides a particular semantics, and I find it to be a mistake if the
standard permits different implementations to provide different
semantics for a language construct as fundamental as procedure
application in Scheme.

All the talk about possible optimizations that can or cannot be done,
while certainly valid, is still no good argument if

   a) the alleged benefits are not realized in real-world implementations
      (or are so small that they are not worth the trouble)
   b) they come at the expense of having major pitfalls in the language
      which we are unable to detect reliably

Matthias
0
find19 (1244)
12/4/2003 8:13:45 PM
Matthias Blume <find@my.address.elsewhere> writes:

> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>
>> But the compiler *doesn't* alter semantics. R5RS doesn't permit it, and
>> the suggested change wouldn't either. Currently, the semantics are only
>> "changed" when programmers rely on unspecified behavior, which is a
>> design/coding error, not a compiler error.
>
> That's just sophistry.  This thread started with a real example of
> someone observing unexpected behavior when moving to a different
> compiler.  So effectively the semantics changed on his program -- even
> though, as you correctly note, the standard did not say what the
> semantics were supposed to be.  

That's weird.  You expect all implementations to provide the *same*
semantics for *undefined* behavior?


> Every particular implementation provides a particular semantics, and
> I find it to be a mistake if the standard permits different
> implementations to provide different semantics for a language
> construct as fundamental as procedure application in Scheme.

The standard permits different implementations to provide different
semantics for mismatched arity, too.

Besides, the standard also permits different implementations to provide
different semantics for bad types, unbound variables, multiple
occurrences of variables in binding lists, use of macro keywords which
don't match patterns, assignment to unbound variables, using inexact
numbers as indices, CAR and CDR on empty lists, etc.

The standard also says that ``One restriction on LETREC is very
important:  it must be possible to evaluate each <init> without
assigning or referring to the value of any <variable>.  If this
restriction is violated, then it is an error.''

Do you expect implementations to detect this error?  Or would you
prefer to define some sort of standard semantics to this case?

The same situation applies to shadowing syntactic keywords that may be
necessary to delimit internal definitions, and to mutation of
symbol->string results.

0
jrm (1310)
12/4/2003 9:18:44 PM
Joe Marshall <jrm@ccs.neu.edu> writes:

> Matthias Blume <find@my.address.elsewhere> writes:
> 
> > "Bradd W. Szonye" <bradd+news@szonye.com> writes:
> >
> >> But the compiler *doesn't* alter semantics. R5RS doesn't permit it, and
> >> the suggested change wouldn't either. Currently, the semantics are only
> >> "changed" when programmers rely on unspecified behavior, which is a
> >> design/coding error, not a compiler error.
> >
> > That's just sophistry.  This thread started with a real example of
> > someone observing unexpected behavior when moving to a different
> > compiler.  So effectively the semantics changed on his program -- even
> > though, as you correctly note, the standard did not say what the
> > semantics were supposed to be.  
> 
> That's weird.  You expect all implementations to provide the *same*
> semantics for *undefined* behavior?

No.  I expect that there is no "undefined behavior".  If the program
is accepted, it should have well-defined behavior.

> > Every particular implementation provides a particular semantics, and
> > I find it to be a mistake if the standard permits different
> > implementations to provide different semantics for a language
> > construct as fundamental as procedure application in Scheme.
> 
> The standard permits different implementations to provide different
> semantics for mismatched arity, too.
> 
> Besides, the standard also permits different implementations to provide
> different semantics for bad types, unbound variables, multiple
> occurrences of variables in binding lists, use of macro keywords which
> don't match patterns, assignment to unbound variables, using inexact
> numbers as indices, CAR and CDR on empty lists, etc.

I would definitely prefer standard semantics for all of these or
otherwise have them rejected as errors.  In terms of practical
importance, though, I would rate the evaluation order problem higher
than most of these issues.  Also, most of these things are -- unlike
the ordering problem -- easy to deal with, at least at runtime.  (Of
course, as you know, I would like it even better if all of these are
dealt with at compile time.)

> The standard also says that ``One restriction on LETREC is very
> important:  it must be possible to evaluate each <init> without
> assigning or referring to the value of any <variable>.  If this
> restriction is violated, then it is an error.''
> 
> Do you expect implementations to detect this error?  Or would you
> prefer to define some sort of standard semantics to this case?

Implementations that detect this (at runtime) exist.

> The same situation applies to shadowing syntactic keywords that may be
> necessary to delimit internal definitions, mutation of symbol->string
> results.

Yes, unfortunately there is more than one dark corner in the language.

Matthias
0
find19 (1244)
12/4/2003 9:27:19 PM
Joe Marshall wrote:

[...]

> The standard permits different implementations to provide different
> semantics for mismatched arity, too.
> 
> Besides, the standard also permits different implementations to provide
> different semantics for bad types, unbound variables, multiple
> occurrences of variables in binding lists, use of macro keywords which
> don't match patterns, assignment to unbound variables, using inexact
> numbers as indices, CAR and CDR on empty lists, etc.

[...]

Now I remember why I liked ML better.

-thant



0
thant (332)
12/4/2003 9:28:22 PM
Matthias Blume <find@my.address.elsewhere> writes:

> No.  I expect that there is no "undefined behavior".  If the program
> is accepted, it should have well-defined behavior.

I'm not sure how this could be made compatible with language
extensions, so I think that *some* undefined behavior must be
acceptable.

0
jrm (1310)
12/4/2003 9:37:49 PM

Goddammit. Earlier today I found out there's a serious possibility I'm 
gonna have to do a serious project in C in the not-too-distant future 
and it's put me in a grumpy mood and I wanna fight about something and 
no one is taking the bait.

-thant

0
thant (332)
12/4/2003 9:42:45 PM
Joe Marshall <jrm@ccs.neu.edu> writes:

> Matthias Blume <find@my.address.elsewhere> writes:
> 
> > No.  I expect that there is no "undefined behavior".  If the program
> > is accepted, it should have well-defined behavior.
> 
> I'm not sure how this could be made compatible with language
> extensions, so I think that *some* undefined behavior must be
> acceptable.

Simple: language extensions make previously illegal programs legal.
In other words, you no longer get error messages for certain things.

Matthias
0
find19 (1244)
12/4/2003 9:42:59 PM
> Bradd W. Szonye wrote:
>> If you shave a bit [of time] off each procedure call, especially in a
>> language as procedure-happy as Scheme, you see systematic
>> improvements. Also, the more you let the compiler optimize code, the
>> less you need programmers doing expensive hand-tuning. It's the very
>> opposite of premature micro-optimization.

Thant Tessman <thant@acm.org> wrote:
> If this kind of optimization is so important, why not go with a static
> type system? That's going to buy you much more of an improvement in
> performance than allowing for arbitrary evaluation order.

Because it's a major change in semantics. Dynamic type systems provide a
different level of expressiveness. Order of argument evaluation does
not, since Scheme does have (trivial!) ways to specify it if you really
need it.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/4/2003 9:43:06 PM
> 
> Ray Dillinger <bear@sonic.net> wrote:
> 
>>> You are right that, with pure functional arguments it doesn't matter
>>> and the compiler can resequence them anyway if it proves they are pure
>>> functional. But the definition of pure functional has two parts; a
>>> pure functional expression is non-side effecting and it is not
>>> affected by any side effects.
> 
> 
> I was using a less strict definition, functions that produce no side
> effects. However, my concrete example also fits your definition:
> Consider two functions, SPLAY-FIND and INORDER-TRAVERSE. The first
> function has side effects, but they're below the abstraction barrier.
> The second function is a purely-functional tree walker. Even though
> SPLAY-FIND mutates the tree, they don't affect INORDER-TRAVERSE, because
> all the side effects are below the abstraction barrier.
> 

If you really want to specify this properly, you need
  - no exceptions
  - no errors
  - no effect from free vars
  - no effect on free vars
  - no infinite loops
  - no allocation effects
In other words, your compiler needs to prove something really strong.

For someone in Q&A, I am surprised to hear any argument in favor of a
language that doesn't specify decidable criteria for discovering when
a program diverges from standards. Are all programmers in your organization 
perfect?

-- Matthias

0
12/4/2003 9:50:33 PM
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> But the compiler *doesn't* alter semantics. R5RS doesn't permit it,
>> and the suggested change wouldn't either. Currently, the semantics
>> are only "changed" when programmers rely on unspecified behavior,
>> which is a design/coding error, not a compiler error.

Matthias Blume <find@my.address.elsewhere> wrote:
> That's just sophistry.  This thread started with a real example of
> someone observing unexpected behavior when moving to a different
> compiler.  So effectively the semantics changed on his program ....

No, he got undefined behavior either way, because he didn't follow the
rules. Now, I'm generally inclined to protect newbies and naive users,
but where do people get the idea that it's OK to rely on evaluation
order in procedure calls anyway? It's something that *most* languages
IME leave unspecified, because a fixed order isn't very useful, because
it obscures meaning (by conflating the notions of argument evaluation
and sequencing), and because it inhibits optimization. It isn't good for
much of anything except protecting students from mistakes, and even that
isn't a very good idea, because those same students will have the same
problem (only worse) once they start using a systems language like C.

> All the talk about possible optimizations that can or cannot be done,
> while certainly valid, is still no good argument if
> 
>    a) the alleged benefits are not realized in real-world implementations
>       (or are so small that they are not worth the trouble)

A source cited earlier claimed a 7% performance improvement by
optimizing argument eval order. That's an impressive gain; I've worked
in the perf world. Unfortunately, there wasn't enough data to explain
the conclusion. I too would like to see more, because that's better than
even I expected.

Then again, it's not totally surprising. If it was a Scheme compiler for
i386, using the hardware stack, I would expect a major improvement in
speed and code size by using right-to-left arg eval. That's because the
i386 has a downward-growing stack, and the PUSH instruction is *very*
efficient in time and space. (IIRC, it's a 1-byte opcode with a 1-cycle
running time.) Therefore, it works best if you push the args in last to
first order, and *that* is easier to do when you eval the args
"backwards."

This brings up another important point: I don't think just *any*
evaluation order will do. If you define it as anything but
left-to-right, you'll screw up the English-speaking newbies just as
badly. And left-to-right is exactly the wrong way to do it in a
native-code Scheme compiler for the i386.

>    b) they come at the expense of having major pitfalls in the language
>       which we are unable to detect reliably

It's not a major pitfall. And it's not like Scheme is alone in this
behavior. There's a reason why most languages leave it unspecified!
Several reasons, actually, and I've been trying to explain them to you.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/4/2003 9:55:53 PM
Shriram Krishnamurthi <sk@cs.brown.edu> virkkoi:
[On non-determinism]
> Put otherwise: leaving the order of evaluation undefined does not turn
> Scheme into a mini-Prolog.

Please. The concept of determinism has been around for far longer than
Prolog has. To my mind, "non-determinism" means _primarily_
"impredictability", and only secondarily the CS-specific concept of
"returning multiple alternative values" or "backtracking". Do you really
insist that the latter is the only valid usage for the word?


Lauri Alanko
la@iki.fi
0
la (473)
12/4/2003 9:57:33 PM
> Joe Marshall <jrm@ccs.neu.edu> writes:
>> That's weird.  You expect all implementations to provide the *same*
>> semantics for *undefined* behavior?

Matthias Blume <find@my.address.elsewhere> wrote:
> No.  I expect that there is no "undefined behavior".  If the program
> is accepted, it should have well-defined behavior.

What, you don't like extensions to programming languages? Or do you just
prefer O(2^n) compilers? Most undefined behavior in language standards,
IME, is because (1) the correctness of some construct is undecidable, or
(2) there are potential extensions to handle "error" cases, but the
standards body doesn't want to mandate them.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/4/2003 9:58:46 PM
On Thu, 04 Dec 2003 13:13:03 -0700, Thant Tessman <thant@acm.org> wrote:

> Bradd W. Szonye wrote:
>> Matthias Blume <find@my.address.elsewhere> wrote:
>
> [...]
>
>>
>> How so? If you shave a bit off each procedure call, especially in a
>> language as procedure-happy as Scheme, you see systematic improvements.
>> Also, the more you let the compiler optimize code, the less you need
>> programmers doing expensive hand-tuning. It's the very opposite of
>> premature micro-optimization.
>
> If this kind of optimization is so important, why not go with a static 
> type system? That's going to buy you much more of an improvement in 
> performance than allowing for arbitrary evaluation order.
>

On Thu, 04 Dec 2003 14:42:45 -0700, Thant Tessman <thant@acm.org> wrote:

>
>
> Goddammit. Earlier today I found out there's a serious possibility I'm 
> gonna have to do a serious project in C in the not-too-distant future and 
> it's put me in a grumpy mood and I wanna fight about something and no one 
> is taking the bait.
>
> -thant
>
>

Statically typed languages are evil. They restrict expressiveness,
and kill all the fun of programming. It's for people who like to be
patronized by their compilers.


felix

0
felix4557 (46)
12/4/2003 10:10:24 PM
"Bradd W. Szonye" <bradd+news@szonye.com> virkkoi:
> >> ... overall maintenance becomes more expensive, because the fixed
> >> evaluation order removes information about the procedure.
> >> Specifically, it removes a guarantee that "order doesn't matter
> >> here" ....
> 
> > There is no such "guarantee" unless the compiler proves it.
> 
> The Scheme standard provides the guarantee. If a programmer ignores it,
> that's a bug. Therefore, if a maintainer does notice that a procedure
> call depends on a particular order, he immediately knows that it's a
> design bug.

That's a _huge_ "if". The property is non-decidable also for humans
(provided you don't refute the Church-Turing thesis).

So let's see. The current situation regarding depending on the evaluation
order of arguments is:

- It is discouraged
- The machine cannot reliably detect it
- A human cannot reliably detect it
- A program that nevertheless depends on it is unreliable
+ If someone reading the code does detect it, then there does not
  need to be a comment stating "this is intentional" for the reader to
  know whether it is a bug or not.

If the order were specified then depending on it would mean that:

- It is discouraged
- The machine cannot reliably detect it
- A human cannot reliably detect it
+ A program that nevertheless depends on it is still reliable
- _If_ someone reading the code does detect it, then there needs
  to be a comment stating "this is intentional" for the reader to
  know whether it is a bug or not.

Now which is the bigger plus, which is the bigger minus?


Lauri Alanko
la@iki.fi
0
la (473)
12/4/2003 10:15:31 PM
Bradd wrote:
>> I was using a less strict definition, functions that produce no side
>> effects. However, my concrete example also fits your definition:
>> Consider two functions, SPLAY-FIND and INORDER-TRAVERSE. The first
>> function has side effects, but they're below the abstraction barrier.
>> The second function is a purely-functional tree walker. Even though
>> SPLAY-FIND mutates the tree, they don't affect INORDER-TRAVERSE,
>> because all the side effects are below the abstraction barrier.

Matthias Felleisen <matthias@ccs.neu.edu> wrote:
> If you really want to specify this properly, you need
>   - no exceptions
>   - no errors
>   - no effect from free vars
>   - no effect on free vars
>   - no infinite loops
>   - no allocation effects
> In other words, your compiler needs to prove something really strong.

If it wants to move an evaluation past a sequence point, that's true,
because it must prove that the optimization will not affect the
program's semantics. That's *why* the fixed sequencing is a bad idea --
it's rarely important, there's a trivial way to specify it when it is
important, and it interferes with automated optimization.

With the unspecified sequencing, there is no problem. The compiler can
reorganize argument evaluation freely. That also increases the chances
of useful parallelization, because there are more ways to combine the
procedures.

> For someone in Q&A, I am surprised to hear any argument in favor of a
> language that doesn't specify decidable criteria for discovering when
> a program diverges from standards.

Which program diverged from standards? It's not an *error* for the
evaluation of two arguments to interact. Indeed, that's exactly what
happens in the splay find & traverse example above. Since R5RS
guarantees that it's equivalent to *some* sequence, you're OK so long as
all the side-effects are below some abstraction barrier.

It's only a problem when a programmer naively expects left-to-right
sequential behavior. And "don't do that!" gets drummed into most
programmers very early, because most languages IME leave arg eval order
unspecified.

> Are all programmers in your organization perfect?

No, certainly not. That's why we hold design and code reviews. They
don't catch all of the bugs, but they're the best way to find a certain
class of errors, like "does this program halt?" Sequencing errors are
exactly the kind of thing that you look for in design & code reviews,
not in compilers, because humans have the domain knowledge to answer
"undecidable" problems in a reasonable amount of time. Human programmers
are good at using abstraction barriers to break the problems into small,
modular chunks, proving correctness one unit at a time. Compilers don't
(yet) know how to do that.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/4/2003 10:29:15 PM
felix wrote:

[...]

> Statically typed languages are evil. They restrict expressiveness,
> and kill all the fun of programming. It's for people who like to be
> patronized by their compilers.

Thanks, I needed that.

-thant

0
thant (332)
12/4/2003 10:34:46 PM
With all that being said, we could both have our cake and eat it. Applications 
in Scheme should be

  (<identifier> <identifier>*)

and we're done. -- Matthias

0
12/4/2003 10:40:10 PM
> "Bradd W. Szonye" <bradd+news@szonye.com> virkkoi:
>> The Scheme standard provides the guarantee. If a programmer ignores
>> it, that's a bug. Therefore, if a maintainer does notice that a
>> procedure call depends on a particular order, he immediately knows
>> that it's a design bug.

Lauri Alanko <la@iki.fi> wrote:
> That's a _huge_ "if". The property is non-decidable also for humans
> (provided you don't refute the Church-Turing thesis).

True. However, remember that "undecidable" doesn't mean that analysis
necessarily takes forever. In particular, while the general problem is
undecidable, many specific examples are tractable. Humans use
abstraction barriers and similar self-imposed constraints to keep the
analysis tractable. However, compilers don't (yet) understand those
things, so they're stuck solving the general problem, which is much
harder.

> So let's see. The current situation on depending the evaluation order
> of arguments is:
> 
> - It is discouraged
> - The machine cannot reliably detect it
> - A human cannot reliably detect it

That depends on what you mean by "reliably." If the programmer did it
intentionally, because he *wanted* the args to evaluate in a particular
order, I'd expect any decent code reviewer to catch it. (That's exactly
what happened in this case.) And I wouldn't expect many competent
programmers to make the mistake in the first place; knowing which
instructions are sequenced and which aren't is an important part of
learning any programming language. Yes, student programmers will make
the mistake, usually pretty early, and usually because a teacher or
textbook sternly warned them not to do it. Yes, even competent
programmers will occasionally make the mistake, but then it's no more
serious than any other brain fart, and it should never make it past
review.

That leaves the cases where a programmer *unintentionally* used
arguments that interact badly. Those cases *are* difficult to find, and
humans won't reliably detect it. However, fixing the arg eval order
doesn't fix that problem. Sure, you may get more consistent behavior,
but it's still a design error. I'd expect a fixed eval order to
partially mask the problem such that it only shows up in corner cases.
(It might be useful to have a "perverse" option on the compiler to
jumble up the eval order -- it could help to unmask this kind of bug.)
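
For what it's worth, a portable approximation of that "perverse" switch
can be written as an ordinary macro. This is only a sketch; a true
jumbling of the order would need an implementation-specific RANDOM to
shuffle the thunks, but evaluating strictly right-to-left is already
enough to unmask most code that silently assumes left-to-right:

    (define (apply-right-to-left f . thunks)
      ;; force the thunks from last to first, then apply F to the values
      ;; in their original argument positions
      (let loop ((ts (reverse thunks)) (vals '()))
        (if (null? ts)
            (apply f vals)
            (loop (cdr ts) (cons ((car ts)) vals)))))

    (define-syntax call/r2l
      (syntax-rules ()
        ((_ f arg ...)
         (apply-right-to-left f (lambda () arg) ...))))

With P a port positioned over the characters a, b, c, the call
(call/r2l list (read-char p) (read-char p) (read-char p)) returns
(#\c #\b #\a), exposing the order dependency.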

So I really don't expect a fixed eval order to help with anything but
intentional misuse of the feature, and that's easy enough to prevent
with training and code reviews.

> - A program that nevertheless depends on it is unreliable
> + If someone reading the code does detect it, then there does not
>   need to be a comment stating "this is intentional" for the reader to
>   know whethere it is a bug or not.

Also:
  + Permits more compiler optimizations, including very basic but very
    effective "optimizations" like evaluating args from left to right or
    right to left according to what works best with the machine's
    hardware stack.

> If the order were specified then depending on it would mean that:
> 
> - It is discouraged

I disagree. If you make it well-defined, that will just encourage
programmers to insist that it *is* a good idea. Seriously, how do you
explain that it's "discouraged" when it's legal? Would you also
discourage (let* ((arg1 ...) (arg2 ...)) (f arg1 arg2))? Why?

> - The machine cannot reliably detect it
> - A human cannot reliably detect it
> + A program that nevertheless depends on it is still reliable

If the programmer intentionally depends on it, it's reliable. If it
happens by accident (which is more likely, if you have competent
programmers and reviewers), the fixed order doesn't help much at all.

> - _If_ someone reading the code does detect it, then there needs
>   to be a comment stating "this is intentional" for the reader to
>   know whethere it is a bug or not.

Now add:

  - Inhibits significant optimizations.
  - Encourages bad programming practices that *will* bite you on the ass
    when you switch to one of the many other languages that leave the
    order unspecified.

> Now which is the bigger plus, which is the bigger minus?

Your "plus" isn't a plus at all, and you omitted some of the advantages
and disadvantages.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/4/2003 10:57:51 PM
Matthias Felleisen <matthias@ccs.neu.edu> wrote:
> With all that being said, we could both have our cake and eat it.
> Applications in Scheme should be
> 
>   (<identifier> <identifier>*)
> 
> and we're done. -- Matthias

Heh, that would do it. Of course, that's just shifting the problem onto
the binding constructs (which would need to become primitives instead of
macros that expand into applications).
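
For reference, the derived-expression definition from R5RS section 7.3
(minus the named-let case) shows why: LET is just an application of a
LAMBDA, so once applications are restricted to identifiers, the binding
forms have to become primitives and carry the ordering question
themselves. (MY-LET below is my own name for the illustration.)

    ;; R5RS-style expansion of LET into an application (named LET omitted)
    (define-syntax my-let
      (syntax-rules ()
        ((_ ((name init) ...) body1 body2 ...)
         ((lambda (name ...) body1 body2 ...) init ...))))

    ;; (my-let ((x (f)) (y (g))) (+ x y))
    ;;   == ((lambda (x y) (+ x y)) (f) (g))
    ;; -- the order in which (f) and (g) run is the same question again.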
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/4/2003 10:59:50 PM
Matthias Blume <find@my.address.elsewhere> wrote:
>>> No.  I expect that there is no "undefined behavior".  If the program
>>> is accepted, it should have well-defined behavior.

> Joe Marshall <jrm@ccs.neu.edu> writes:
>> I'm not sure how this could be made compatible with language
>> extensions, so I think that *some* undefined behavior must be
>> acceptable.

> Simple: language extensions make previously illegal programs legal. In
> other words, you no longer get error messages for certain things.

Not so simple: If the standard requires that the program signal an
error, you can't just ignore the condition. If the standard doesn't
require the diagnostic, then you're effectively back to "undefined
behavior." Fully-defined languages and extensions just don't mix well,
not if they want to claim standards conformance, anyway.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/4/2003 11:03:42 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> > [ ... ] This thread started with a real example of
> > someone observing unexpected behavior when moving to a different
> > compiler.  So effectively the semantics changed on his program ....
> 
> No, he got undefined behavior either way, because he didn't follow the
> rules.

He got behavior not defined *by the language standard*.  The concrete
compiler, however, did provide a defined semantics (even though it
might not have been documented).  And that is precisely the problem:
compilers have a hard time not completely defining the semantics of
language constructs they implement.

> Now, I'm generally inclined to protect newbies and naive users,
> but where do people get the idea that it's OK to rely on evaluation
> order in procedure calls anyway?

It is certainly not ok in a language that does not define that order.
(But I'd say that the problem is with the language, not with the
people here.)  In other languages it is quite ok, but many (myself
included) might not consider it good style. Leaving gaping holes in
language definitions, however, is definitely not ok.

> It's something that *most* languages IME leave unspecified, because
> a fixed order isn't very useful, because it obscures meaning (by
> conflating the notions of argument evaluation and sequencing), and
> because it inhibits optimization.

If you count by usage, then you are probably right since C and C++ are
in the "order unspecified" camp.  But other than that I would not be
so sure.  And given all the known problems with these languages, I
would hardly consider them benchmarks in good language design.

> It isn't good for much of anything except protecting students from
> mistakes,

No, it is not just students.  I have seen very experienced programmers
make this mistake (and then proceed to spend days trying to chase down
the resulting bugs).

> and even that isn't a very good idea, because those same students
> will have the same problem (only worse) once they start using a
> systems language like C.

Well, if we only could get students to stop moving to ill-defined and
unsafe languages like C.  Look at the recent deluge of compromised
Linux systems and you know what I mean...

> > All the talk about possible optimizations that can or cannot be done,
> > while certainly valid, is still no good argument if
> > 
> >    a) the alleged benefits are not realized in real-world implementations
> >       (or are so small that they are not worth the trouble)
> 
> A source cited earlier claimed a 7% performance improvement by
> optimizing argument eval order. That's an impressive gain; I've worked
> in the perf world.

7% is barely above the threshold where some people call optimizations
worthwhile.  In any case, it is not clear at all how much of those 7%
are due to optimizations that would not be possible at all under fixed
order.  Some of them might simply require more compiler effort to
prove them safe.  It is really hard to tell without seeing a very
detailed study.

> Unfortunately, there wasn't enough data to explain the conclusion. I
> too would like to see more, because that's better than even I
> expected.

The real question is how much of this could have been recovered in
very short time by profiling and fixing the hotspots by hand.

I have actually seen far higher savings when (effectively) switching
from l2r to r2l under certain circumstances.  Under other
circumstances, l2r is far better than r2l.  But with fixed order of
evaluation it is fairly easy to reason about this, and just a little
bit of linguistic support is enough to give the programmer the tools
to deal with the problem the right way: by explicitly writing the code
in such a way that evaluation proceeds in the order that is less
expensive.

> Then again, it's not totally surprising. If it was a Scheme compiler for
> i386, using the hardware stack, I would expect a major improvement in
> speed and code size by using right-to-left arg eval. That's because the
> i386 has a downward-growing stack, and the PUSH instruction is *very*
> efficient in time and space. (IIRC, it's a 1-byte opcode with a 1-cycle
> running time.) Therefore, it works best if you push the args in last to
> first order, and *that* is easier to do when you eval the args
> "backwards."

You can get the same benefit by simply changing the calling
conventions.  There are other issues with l2r that are far more
detrimental to performance when one is not careful with them.  But see
above, even those are not difficult to deal with.

> >    b) they come at the expense of having major pitfalls in the language
> >       which we are unable to detect reliably
> 
> It's not a major pitfall. And it's not like Scheme is alone in this
> behavior.

Just because several language designs get this wrong does not mean
that it is suddenly right.

> There's a reason why most languages leave it unspecified!

Sure.  The important question is whether the reasoning behind those
"reasons" is sound, and whether it considered all consequences.

> Several reasons, actually, and I've been trying to explain them to you.

Thank you very much.  But you know, I'm a bit dense, so your efforts
at patronizing me aren't doing any good. (Actually, all of the people
here who have argued against leaving the order unspecified are known
to be completely clueless when it comes to PL questions.  So don't
listen to us.)

Matthias
0
find19 (1244)
12/4/2003 11:05:31 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> Matthias Blume <find@my.address.elsewhere> wrote:
> >>> No.  I expect that there is no "undefined behavior".  If the program
> >>> is accepted, it should have well-defined behavior.
> 
> > Joe Marshall <jrm@ccs.neu.edu> writes:
> >> I'm not sure how this could be made compatible with language
> >> extensions, so I think that *some* undefined behavior must be
> >> acceptable.
> 
> > Simple: language extensions make previously illegal programs legal. In
> > other words, you no longer get error messages for certain things.
> 
> Not so simple: If the standard requires that the program signal an
> error, you can't just ignore the condition. If the standard doesn't
> require the diagnostic, then you're effectively back to "undefined
> behavior." Fully-defined languages and extensions just don't mix well,
> not if they want to claim standards conformance, anyway.

That's a red herring.  Programs that signal errors (in the sense: "I
give up because the program has an error", not in the sense: "I raise
an exception at runtime that gets handled elsewhere by the program")
are hardly useful.  The error message is there precisely to signal
that such a non-useful program has been detected. I see no harm in
making more programs useful by not signalling some of these errors.

(Of course, I also prefer such errors to be signalled at compile time
-- in which case the "gets handled at runtime by an exception
hanndler" part becomes trivially irrelevant.)
0
find19 (1244)
12/4/2003 11:14:43 PM
Bradd wrote:
>> If the standard requires that the program signal an error, you can't
>> just ignore the condition. If the standard doesn't require the
>> diagnostic, then you're effectively back to "undefined behavior."
>> Fully-defined languages and extensions just don't mix well, not if
>> they want to claim standards conformance, anyway.

Matthias Blume wrote:
> That's a red herring.  Programs that signal errors (in the sense: "I
> give up because the program has an error", not in the sense: "I raise
> an exception at runtime that gets handled elsewhere by the program")
> are hardly useful. The error message is there precisely to signal that
> such a non-useful program has been detected. I see no harm in making
> more programs useful by not signalling some of these errors.

A standard where all of the errors are optional is a joke. It doesn't
have any teeth in it. A language standard where all of the behavior is
well-defined is also a joke, unless (1) you intend to use only a single
implementation on a single system, or (2) you can get a large government
to fund the development. No undefined or implementation-defined behavior
leaves little room for alternate implementation strategies, and it makes
porting to some platforms *very* difficult.

Language standards leave some behavior unspecified *exactly* because
there's more than one good way to do it, and because existing
implementations have user communities that don't want to give up their
way. They also leave some behavior unspecified because some behavior
just doesn't translate well to all systems.

However, for the things a language does define, it's important that you
do get diagnostics when you run a non-conforming program. Without them,
it's too easy to get hooked by an unadvertised language extension.
That's why C, as loose as it is, strictly requires that every
implementation document *all* of the implementation-defined behavior.
That's why standards certification bodies require a waiver for every
deviation before they'll grant use of the "approved" trademark.

I'm even more curious about your background now, because your
suggestions keep getting farther and farther from the reality I'm
familiar with. It may be that we come from different places with very
different ideas of standards and best practices.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/5/2003 12:34:15 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> However, for the things a language does define, it's important that you
> do get diagnostics when you run a non-conforming program.

Since programs that rely on order of evaluation are non-conforming,
shouldn't we then require a diagnostic?

> I'm even more curious about your background now, because your
> suggestions keep getting farther and farther from the reality I'm
> familiar with.

My background is easy to find out.

What is yours?
0
find19 (1244)
12/5/2003 3:00:59 AM
Matthias Blume wrote:
> 
> "Scott G. Miller" <scgmille@freenetproject.org> writes:
> 
> > Besides, we have constructs for fixing an order of evaluation for
> > expressions which have side effects.
> 
> Yes, and those constructs can be used for fixing the order differently
> than what is the (fixed) default order when performance really
> matters.

A better argument for you would be to claim that non-side-effecting
functions can be reordered at will, so the compiler actually can
reorder almost anything it wants, given the right information.

> >  This is no worse than say the
> > undefined behavior of multithreaded programs when the programmer
> > doesn't use a mutex construct to protect a critical region.
> 
> Now you want to throw all the complexity of concurrent programming
> even at simple sequential programs?!?

A program running on Unix or some such is concurrent whether
or not it wants to be.

David
0
feuer (188)
12/5/2003 3:12:18 AM

"Bradd W. Szonye" wrote:

> Evaluating arguments in parallel can be consistent with some sequential
> order of evaluation. If you tighten the constraints on evaluation order,
> however, you reduce the opportunities for parallelization. For example,
> consider a call with three arguments. The first and last arguments are
> purely functional, but the middle argument is not. With unspecified
> order, the system can parallelize evaluation of the purely-functional
> arguments. With fixed order, it cannot, because the middle argument
> creates a "sequence point" that you cannot cross without breaking
> consistency with the abstract model.

You _could_ fix this in the code by replacing
(let ((a (pure 1))
      (b (impure))
      (c (pure 2)))
   foo)

with

(let ((a (pure 1))
      (c (pure 2)))
   (let ((b (impure)))
     foo))

but that may be too much to expect?

David
0
feuer (188)
12/5/2003 3:17:43 AM
Matthias Felleisen <matthias@ccs.neu.edu> wrote in message news:<bqjnfg$gq3$1@camelot.ccs.neu.edu>...
> 1. A program with peek-char is almost always broken. The other MF.
> 
> 2. You want to perform effect-full actions only once via let. (A
> lesson from monads.)
> 
> 3. The Scheme Report imposes an undecidable correctness criteria on
> programs -- that they don't depend on the order of evaluation -- without
> (naturally) asking implementations to check it. Go figure; and that language
> supposedly has a semantics.

Thank you very much for both the corrected function and the
explanation. While it did need two minor changes - you dropped the
test on the count variable, and the port variable was missing in the
read-char call - the function as you rewrote it works fine now on all
the implementations I've tested it with. I'm surprised I didn't think
to replace peek-char with read-char and save the result rather than
doing the read twice, but the order of operations issue was one I had
missed. I'll have to pay closer attention to that.
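
For anyone following along, the repaired function ends up looking
something like this (a sketch of the fix as described above, not
Matthias's exact code): the LET both avoids the double read and makes
the evaluation order explicit, so the characters come out in file order
on every implementation.

    (define (read-dump-line src width)
      (let read-data ((count width))
        (if (>= 0 count)
            '()
            (let ((ch (read-char src)))
              (if (eof-object? ch)
                  '()
                  (cons ch (read-data (- count 1))))))))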

I am still curious whether anyone had taken a chance to look at the
rest of the program (a link to it was given in the TLP), and what they
thought of it, but given the way that this thread ignited over the issue
of defined-vs-undefined order of operations, and some of the other
discussions I've seen here, I suspect I am out of my depth in this
newsgroup... I'll probably lurk a while before any further posting,
except for responding to any specific followups on my existing posts.

--
Jay Osako                              aka Schol-R-LEA;2
If the phone rings today, water it!
0
scholr (10)
12/5/2003 3:18:05 AM

Matthias Blume wrote:

> False.  If the compiler actually knows that the first and third
> arguments are purely functional

Which is hard to do in Scheme because every Scheme variable and
cons cell is mutable.

David
0
feuer (188)
12/5/2003 3:21:01 AM
Feuer <feuer@his.com> writes:

> Matthias Blume wrote:
> 
> > False.  If the compiler actually knows that the first and third
> > arguments are purely functional
> 
> Which is hard to do in Scheme because every Scheme variable and
> cons cell is mutable.

Right.  That is one of my other gripes.
0
find19 (1244)
12/5/2003 3:30:28 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> Why bless poor code? It encourages the errors.

Why make code that is merely poor but not incorrect incorrect without
also providing a means of finding out that the code is now, in fact,
incorrect?

> No, the big mistake is conflating the concepts of "argument list" and
> "sequence of operations." That's a fundamental design error, not just a
> bit of sloppy programming, and a programming language should *not*
> encourage the error.

I don't see why this should be a "design error".  It certainly is not
a "fundamental" design error.

> > People in this newsgroup often like to waffle about the "freedom" that
> > is granted by a language.  Making the evaluation order unspecified
> > *takes away* such freedom.
> 
> Not when there's a trivial, alternate way that lets you write it.
> 
>     (let* ((arg1 ...) (arg2 ...)) (f arg1 arg2))

That's not what I meant.  Sure, you can fix the order by hand.  The
problem is that if you don't, you are now required to prove (to
yourself) that the program is correct under every possible permutation
in the evaluation order. With long enough argument lists, this is a
lot of proving.  With fixed order, you have to prove this only for one
possible order.  And, of course, every program that would be correct
under all possible permutations is also correct in the fixed-order
scenario.

> If you use that pattern a lot, you can even write a macro to simplify
> the syntax, e.g. (sequential-args f arg1 arg2). But that's a one-way
> trip. Once you require sequential evaluation for all procedure calls,
> you remove all possibility of automatic optimization.

No, you do not remove the possibility, at least not in general.  It
can still be done whenever the compiler can prove that it does not
matter to the outcome of the program, i.e., if the reordering is not
observable.  (Granted, in languages like Scheme or C doing so is very
hard because there are side effects left and right wherever you look.
That is another weakness of these languages.  Even LAMBDA is effectful
in Scheme -- something which popular systems such as DrScheme quietly
ignore because it makes life just too damn difficult without providing
adequate benefit.)

> Only hand-tuning is possible then, and I think we all know how
> error-prone that is.

I don't think it is all that error-prone, especially not compared to
having a major pitfall in the language such as the one we are
discussing here.  I actually have hand-tuned code that exhibited bad
performance under one particular evaluation order, but this didn't
happen very frequently, and the fix was very straightforward and
without danger of messing things up.

> > Proof: Every program that is a valid program under unspecified order
> > is also a valid program under fixed evaluation order.  But not vice
> > versa.  Therefore, there are fewer correct programs under
> > "unspecified" order.
> 
> Refutation: Every program that is not valid under unspecified order has
> a trivial transformation that makes it valid (and that makes the
> ordering *explicit*, which is an aid to maintainers).

This is not a "refutation".

> Meanwhile, the fixed order precludes many automatic optimizations,
> which encourages premature hand-tuning. (Yes, it actually encourages
> more than one poor coding style.)

Not at all, in my experience.  These days I exclusively program in
languages with fixed evaluation order, and there certainly is no
problem with "premature" tuning.  The little tuning that has been done
was always after the program already worked and the fragment in
question had been singled out as performing poorly.

> > Unlike with type systems, though, whether or not a program is invalid
> > because of reliance on evaluation order is not decidable (either at
> > compile or at runtime).
> 
> Which is why it's a bad idea to conflate evaluation order with other
> concepts, like argument lists. When order matters, it's not enough to
> just throw code at a compiler and hope that it works.

Nobody is saying "throw code at a compiler and hope that it works".
In languages with fixed evaluation order, doing so is not necessary, as
"hope" does not need to come in anywhere.  That's precisely because
the rules are unambiguous and fully determine the program's outcome.

There are other places where the same "conflation" as you call it is
done, and nobody complains about it: Why are arguments evaluated
before the function is invoked?  Why does the condition of an
if-expression get to run before the branch(es)?  Why are the arguments
to AND evaluated strictly from left to right?

Well, your answer will probably contain the words "call by value",
"short-circuiting behavior", and so on.  It all comes down to the fact
that in languages with effects order matters.  And it matters not just
in some places, it matters everywhere.
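
A one-line illustration of the short-circuiting case (my own, not a
quote): AND is specified to evaluate its operands left to right and to
stop at the first false value, so the sequencing below is guaranteed and
CAR can never see a non-pair.

    (define (safe-first x)
      (and (pair? x) (car x)))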

> Allowing automatic optimization and discouraging poor coding styles is
> not an upside?

No.  Once again: Turning correct but perhaps poor coding style into
downright incorrect code without a way of detecting that this has
happened is definitely not an upside.  Allowing automatic optimization
is good, but only as long as "optimization" means "preserving
semantics" and "having a well-defined semantics in the first place".

> Meanwhile, fixed evaluation order has several concrete
> drawbacks. It effectively shifts the ambiguity from initial coding to
> maintenance. The original developer knows that he can rely on the order
> of evaluation, but maintainers can't tell whether they've actually done
> so. By conflating the two ideas, you lose information.

I wouldn't call that a "concrete" drawback.  As I said, I program in a
language with fixed evaluation order on a daily basis, and I deal with
a very large code basis, much of which has been written by other
people years ago.  I have not once run into the sort of problem that
you allude to.  On the other hand, I have accidentally run into the
problem with unspecified evaluation order in Scheme and C, and I have
seen it happen to other people as well on several occasions.

> I don't know how much maintenance work you've done, but it's a huge part
> of the software lifecycle, typically over 50%.

Well, see above.

> By blessing a poor design and coding style, you make maintenance
> more difficult and increase the overall cost of the software.

But it is not really "blessing poor design".  People don't do this
sort of stuff all that much -- not more than they do accidentally or
out of ignorance in C or Scheme.  [Here I am ignoring a few idiomatic
uses such as the SML "before" operator that I mentioned earlier.
These definitely do not count as "poor style".]

Matthias
0
find19 (1244)
12/5/2003 4:55:36 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:
{stuff deleted}
> This sort of thing is especially important now that "superscalar"
> machine architectures are ubiquitous. For example, on an IA-64 system,
> unnecessary sequence points have *major* performance implications. Why
> introduce unnecessary sequence points just to bless a poor design and
> coding style?


1. The IA-64 despite all the marketing from Intel is not a "ubiquitous" 
   architecture
2. All existing architectures that allow for parallel execution
   dynamically schedule instructions on the fly. 
   (i.e. Pentium 4, PowerPCs, Transmeta, AMD x86-64, UltraSparc ...)

The only performance number that has been discussed in this thread is a 7%
cost for strict left-right evaluation. I'd hardly call 7% a major cost.

Most parallelism left in applications is not fine-grain instruction level,
but at the thread level. If you want to exploit parallelism I'd personally be
writing programs in the Pi-calculus or Erlang. :)
0
danwang742 (171)
12/5/2003 5:43:47 AM

"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> Matthias Blume <find@my.address.elsewhere> wrote:
> > Sure, there is no use in denying that there are some (rather obscure)
> > corner cases where the [unspecified argument evaluation order] gives
> > some additional power to a compiler.
> 
> Obscure? I'd expect it to be systematic, actually. At the very least, I
> would expect a small but significant difference from switching between
> left to right and right to left according to the machine's stack
> architecture. (For example, right-to-left seems like it would be more
> efficient if the machine uses a downward-growing stack for arguments.)

If you typically pass arguments in registers rather than on the stack,
which is what you ought to be doing with all the registers anyway, it
doesn't matter.

> And I'd also expect the implicit sequence points to make a noticeable
> difference on "superscalar" CPUs like the Pentium and Itanium, where the
> CPU itself parallelizes instructions. Compilers would need to insert a
> lot more NOPs or stop bits to guarantee correct ordering, which
> interferes with pipelining and increases code size.

The Pentium *dynamically* schedules code at runtime. The Itanium must do it
at compile time, which basically makes it a real PITA to compile for. You
would think that now that you've made the compiler work really hard you
could throw away all the complicated dynamic instruction scheduling logic
and use those transitors for something else like a bigger cache or a simpler
pipeline.

From what I know about the Itanium, it's a monster piece of power-hungry
hardware that requires sophisticated compilers to get decent
performance. Many people believe the Itanium will end up being a
multi-billion dollar disaster for Intel.  Luckily, Intel is one of the
few companies that can survive such a mistake. 
0
danwang742 (171)
12/5/2003 5:56:32 AM
On Thu, 04 Dec 2003 12:31:43 -0500, Joe Marshall <jrm@ccs.neu.edu> wrote:
>   Left to right, right to left, or something more original?

How about writing argument expressions with indelible ink on
indigestible sheets of plastic and feeding them to your yak?
The first expression to come out the other end is the one to
get evaluated. This method has been found to discourage
optimizations due to the necessity of slicing open the
pre-processor, which makes the code far more predictable...

>   Is the function position evaluated first or last?

Well first you have to get the yak into kneeling position...

>   Should library syntax be required to have a particular order?

The library yak won't arrive until the 25th. I'm hoping for a
copy of SICY (Structure and Interpretation of Crap from Yaks) to
be available this time so I can finish my thesis...

>   What about syntax introduced by extensions or SRFI's?

You can't surf with a yak.

david rush
-- 
going into hiding from the maniacs at the Nepalese Journal of
Deconstruction Sociologists and Sheep-herders
0
kumo7543 (108)
12/5/2003 3:18:44 PM
On 04 Dec 2003 15:27:19 -0600, Matthias Blume <find@my.address.elsewhere> 
wrote:
> Joe Marshall <jrm@ccs.neu.edu> writes:
>> Besides, the standard als permits different implementations to provide
>> different semantics for bad types, unbound variables, multiple
>> occurrances of variables in binding lists, use of macro keywords which
>> don't match patterns, assignment to unbound variables, using inexact
>> numbers as indices, CAR and CDR on empty lists, etc.
>
> I would definitely prefer standard semantics for all of these or
> otherwise have them rejected as errors.

What? You *don't* like bats flying out your nose whenever your program
has a run-time error? It's those charming personal touches which make
Scheme implementations so attractive...

david rush
-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
0
kumo7543 (108)
12/5/2003 3:29:10 PM
On Thu, 04 Dec 2003 14:42:45 -0700, Thant Tessman <thant@acm.org> wrote:
> Goddammit. Earlier today I found out there's a serious possibility I'm 
> gonna have to do a serious project in C in the not-too-distant future 
> and it's put me in a grumpy mood and I wanna fight about something and 
> no one is taking the bait.

That's because Scheme and ML are the same language, even if Matthias
doesn't want to admit it...

<duck>

david rush
-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
0
kumo7543 (108)
12/5/2003 3:32:30 PM
Lauri Alanko <la@iki.fi> writes:

> Shriram Krishnamurthi <sk@cs.brown.edu> virkkoi:
> [On non-determinism]
>> Put otherwise: leaving the order of evaluation undefined does not turn
>> Scheme into a mini-Prolog.
>
> Please. The concept of determinism has been around for far longer than
> Prolog has. To my mind, "non-determinism" means _primarily_
> "impredictability", and only secondarily the CS-specific concept of
> "returning multiple alternative values" or "backtracking".  Do you really
> insist that the latter is the only valid usage for the word?

In any subgroup of the `comp' hierarchy I think it is reasonable to
assume that `non-deterministic' means the CS-specific concept.

Do you think that `NP-complete' means that you simply can't
predict when an answer will be forthcoming?  Is a `non-deterministic
Turing machine' simply one with a time-varying clock rate?
0
jrm (1310)
12/5/2003 3:57:44 PM
David Rush wrote:
> On Thu, 04 Dec 2003 14:42:45 -0700, Thant Tessman <thant@acm.org> wrote:
> 
>> [...] and no one is taking the bait.
> 
> 
> That's because Scheme and ML are the same language, even if Matthias
> doesn't want to admit it...

SML is Scheme with a type system and all that that implies. Scheme is 
SML with macros and all that that implies.

But Matthias is absolutely right about the evaluation order thing.

And the mutability thing...

And the environment thing...

-thant

0
thant (332)
12/5/2003 4:09:46 PM
If it were up to me, I'd probably just specify r2l and be done with
it.  `Compiler optimization' just isn't a compelling argument.




Matthias Blume <find@my.address.elsewhere> writes:

> It is certainly not ok in a language that does not define that order.
> (But I'd say that the problem is with the language, not with the
> people here.)  In other languages it is quite ok, but many (myself
> included) might not consider it good style. Leaving gaping holes in
> language definitions, however, is definitely not ok.

Why do you think this is `a gaping hole'?  In my opinion it seems a
rather trivial bit of untidiness.

> No, it is not just students.  I have seen very experienced programmers
> make this mistake (and then proceed to spend days trying to chase down
> the resulting bugs).

I make these mistakes, too.

But the most recent time I made this kind of mistake was in a Common
Lisp program, which defines l2r evaluation.  Specifying the order of
evaluation doesn't reduce the rate at which people depend on it, it
simply gives them a guarantee that *sometimes* they'll be lucky.  This
is a truly lame `improvement'.

> Well, if we only could get students to stop moving to ill-defined and
> unsafe languages like C.  Look at the recent deluge of compromised
> Linux systems and you know what I mean...

If only C had a fixed order of argument evaluation, we could stop all
those viruses that exploit the ambiguity.

> 7% is barely above the threshold where some people call
> optimizations worthwhile.

I'd say 10% is minimum, but only if you can get the savings
trivially.

> But you know, I'm a bit dense, so your efforts at patronizing me
> aren't doing any good. (Actually, all of the people here who have
> argued against leaving the order unspecified are known to be
> completely clueless when it comes to PL questions.  So don't listen
> to us.)

What?  Did you say something?
0
jrm (1310)
12/5/2003 4:15:02 PM
David Rush <kumo@gofree.indigo.ie> writes:

> On Thu, 04 Dec 2003 12:31:43 -0500, Joe Marshall <jrm@ccs.neu.edu> wrote:
>>   Left to right, right to left, or something more original?
>
> How about writing argument expressions with indelible ink on
> indigestible sheets of plastic and feeding them to your yak?
> The first expression to come out the other end is the one to
> get evaluated. This method has been found to discourage
> optimizations due to the necessity of slicing open the
> pre-processor, which makes the code far more predictable...
>
>>   Is the function position evaluated first or last?
>
> Well first you have to get the yak into kneeling position...


This explains why my yak is walking funny.

Next time, please experiment on your own yak first.
0
jrm (1310)
12/5/2003 4:19:03 PM
David Rush <kumo@gofree.indigo.ie> writes:

> What? You *don't* like bats flying out your nose whenever your program
> has a run-time error?

The bats are fine.  I just don't like my hard drive catching fire.
0
find19 (1244)
12/5/2003 4:25:09 PM
> Matthias Blume wrote:
>> I would definitely prefer standard semantics for all of these or
>> otherwise have them rejected as errors.

David Rush <kumo@gofree.indigo.ie> wrote:
> What? You *don't* like bats flying out your nose whenever your program
> has a run-time error? It's those charming personal touches which make
> Scheme implementations so attractive...

Ha! Scheme programmers are wimps! When a C++ programmer screws up,
*demons* fly out of his nose.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/5/2003 4:58:07 PM
Daniel C. Wang wrote:
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
> {stuff deleted}
> 
>>This sort of thing is especially important now that "superscalar"
>>machine architectures are ubiquitous. For example, on an IA-64 system,
>>unnecessary sequence points have *major* performance implications. Why
>>introduce unnecessary sequence points just to bless a poor design and
>>coding style?
> 
> 
> 
> 1. The IA-64 despite all the marketing from Intel is not a "ubiquitous" 
>    architecture
> 2. All existing architectures that allow for parallel execution
>    dynamically schedule instructions on the fly. 
>    (i.e. Pentium 4, PowerPCs, Transmeta, AMD x86-64, UltraSparc ...)
> 
> The only performance number that has been discussed in this thread is a 7%
> cost for strict left-right evaluation. I'd hardly call 7% a major cost.
> 
> Most parallelism left in applications is not fine-grain instruction level,
> but at the thread level. If you want to exploit parallelism I'd personally be
> writing programs in the Pi-calculus or Erlang. :)

Actually, by using a reverse order of operations, SISC gains 35% in 
continuation capture/application.  So large gains can be had for some 
operations.  You can get pretty sizeable gains by reordering the 
application of math operations on bignums too for example, though 
arguably you can still do that by 'cheating' if the order is fixed by 
the standard and reordering anyway.

	Scott

0
scgmille (240)
12/5/2003 5:10:34 PM
"Scott G. Miller" <scgmille@freenetproject.org> writes:
{stuff deleted}
> Actually, by using a reverse order of operations, SISC gains 35% in
> continuation capture/application.  So large gains can be had for some
> operations.  You can get pretty sizeable gains by reordering the
> application of math operations on bignums too for example, though
> arguably you can still do that by 'cheating' if the order is fixed by
> the standard and reording anyway.

Fair, but I doubt the sizeable performance wins are going to show up because
of the underlying machine micro-architecture.

BTW is this optimization explained in detail anywhere? 

0
danwang742 (171)
12/5/2003 5:21:33 PM
Daniel C. Wang wrote:
> "Scott G. Miller" <scgmille@freenetproject.org> writes:
> {stuff deleted}
> 
>>Actually, by using a reverse order of operations, SISC gains 35% in
>>continuation capture/application.  So large gains can be had for some
>>operations.  You can get pretty sizeable gains by reordering the
>>application of math operations on bignums too for example, though
>>arguably you can still do that by 'cheating' if the order is fixed by
>>the standard and reordering anyway.
> 
> 
> Fair, but I doubt the sizeable performance wins are going to show up because
> of the underlying machine micro-architecture.

It depends.  It's very possible that reordering can strongly affect 
memory usage enough to influence cache performance, which can be a huge 
win.  Granted, the compiler must be smart enough to do so.

> 
> BTW is this optimization explained in detail anywhere? 

Matthias R. and I have been meaning to write it up in a paper but 
neither of us seem to have much free time these days.

	Scott


0
scgmille (240)
12/5/2003 5:45:57 PM
On Fri, 05 Dec 2003 09:09:46 -0700, Thant Tessman <thant@acm.org> wrote:
> David Rush wrote:
>> On Thu, 04 Dec 2003 14:42:45 -0700, Thant Tessman <thant@acm.org> wrote:
>>
>>> [...] and no one is taking the bait.
>>
>> That's because Scheme and ML are the same language, even if Matthias
>> doesn't want to admit it...
>
> SML is Scheme with a type system and all that that implies. Scheme is 
> SML with macros and all that that implies.

Oooh good. My .sigmonster is licking its bits^Wchops

> But Matthias is absolutely right about the evaluation order thing.

Surprisingly (to me), I think that he is not. And I actually do get bitten
by the lack of specification fairly regularly (since I regularly run my
Scheme code on at least 3 different implementations). Nearly *every* time
I make the evaluation order mistake, fixing it improves the code. Sometimes
rather dramatically.

> And the mutability thing...

I'm pretty sure I agree with this one.

> And the environment thing...

Not sure what this one is.

And you forgot winding continuations...

david rush
-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
0
kumo7543 (108)
12/5/2003 6:13:56 PM
David Rush wrote:
> On Fri, 05 Dec 2003 09:09:46 -0700, Thant Tessman <thant@acm.org> wrote:

[...]

>> But Matthias is absolutely right about the evaluation order thing.
> 
> 
> Surprisingly (to me), I think that he is not. And I actually do get bitten
> by the lack of specification fairly regularly (since I regularly run my
> Scheme code on at least 3 different implementations). Nearly *every* time
> I make the evaluation order mistake, fixing it improves the code. Sometimes
> rather dramatically.

Surprisingly (to me), I've *never* been bitten by this issue in any 
language. (Don't get me wrong, I've been bitten by a lot of stupid 
stuff. I once spent a day chasing down a bug that turned out to be a 
type-o in an include guard.) It's just that leaving evaluation order 
unspecified is the Wrong Thing (TM) in a language that otherwise prides 
itself on doing the Right Thing (TM) given a certain Scheme aesthetic.


[...]

>> And the environment thing...
> 
> Not sure what this one is.

(define (bar) foo) ; should be an error because foo isn't defined yet

(define foo 23)
(define (bar) foo)
(define foo 5)
(bar) => 5
; should be 23, the value of foo when bar was defined

I suppose this is related to the mutability thing, but not only is this 
semantically awkward, but isn't it actually a performance hit to support 
this kind of functionality?

-thant

0
thant (332)
12/5/2003 8:57:34 PM
On Fri, 05 Dec 2003 13:57:34 -0700, Thant Tessman <thant@acm.org> wrote:

> David Rush wrote:
>> On Fri, 05 Dec 2003 09:09:46 -0700, Thant Tessman <thant@acm.org> wrote:
>>> And the environment thing...
>>
>> Not sure what this one is.
>
> (define (bar) foo) ; should be an error because foo isn't defined yet

Ah you mean the squidgy-top-level-specification-because-we-dont-know-how-
to-bootstrap-a-mutable-toplevel-when-it-shouldn't-*be*-mutable-in-the-first-
place thingy. Yes, I agree 100% that this is a serious problem. set! *is* a
bitch, ain't it? It's just so convenient sometimes to have...

never mind.

david rush
-- 
(\x.(x x) \x.(x x)) -> (s i i (s i i))
         -- aki helin (on comp.lang.scheme)
0
drush (122)
12/5/2003 8:59:55 PM
Matthias Blume wrote:

>   Leaving gaping holes in
> language definitions, however, is definitely not ok.
>

But plugging the hole does not necessarily require fixing the evaluation order.
See, for example, this old article by Christian Queinnec:

http://zurich.ai.mit.edu/pipermail/rrrs-authors/1993-May/001634.html



0
andre9567 (120)
12/5/2003 9:03:38 PM
Andre <andre@het.brown.edu> writes:

> Matthias Blume wrote:
> 
> >   Leaving gaping holes in
> > language definitions, however, is definitely not ok.
> >
> 
> But plugging the hole does not necessarily require fixing the evaluation order.
> See, for example, this old article by Christian Queinnec:
> 
> http://zurich.ai.mit.edu/pipermail/rrrs-authors/1993-May/001634.html

Sure, you can "plug" the hole by making things explicitly
non-deterministic in the semantics, the same way this has to be done
for, e.g., a concurrent language.  I find it offensive, though, to
throw all the complexity of concurrency on a sequential language.  In
practical terms, this is *not* a solution as it changes nothing from
the point of view of the programmer.  Returning some random element
from a denotation in Answer* is just as bad as not returning a
well-defined result.  (The two things are the same.)

As far as the above article is concerned, I am not sure whether it is
correct in the details.  Getting the details right on this stuff is
very hard.  In fact, after reading the article a bit more closely, I
think it is wrong.

One way of correctly dealing with Scheme's unspecified evaluation
order involves using an infinite stream of random bits that becomes an
additional parameter to the program.  Each pair of corresponding
permute/unpermute calls then consumes some of the random bits.  A
program has a deterministic outcome if you can prove that it has the
same outcome for all possible input tapes of random bits.
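
(A toy operational rendering of that idea, just for concreteness -- not
the actual denotational construction, and every name below is made up:
each "call" consumes one bit of the tape to choose an argument order and
returns the unconsumed rest of the tape along with its result.)

    ;; evaluate THUNKS strictly left to right
    (define (eval-left-to-right thunks)
      (if (null? thunks)
          '()
          (let ((v ((car thunks))))
            (cons v (eval-left-to-right (cdr thunks))))))

    ;; consume one bit to pick an order, call F, and return
    ;; a pair (result . remaining-tape)
    (define (call-with-tape f thunks bits)
      (let ((vals (if (zero? (car bits))
                      (eval-left-to-right thunks)
                      (reverse (eval-left-to-right (reverse thunks))))))
        (cons (apply f vals) (cdr bits))))

    ;; "deterministic" then means: the same CAR for every possible tape, e.g.
    ;; (car (call-with-tape + (list (lambda () 1) (lambda () 2)) '(0))) => 3
    ;; (car (call-with-tape + (list (lambda () 1) (lambda () 2)) '(1))) => 3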

Anyway, these things are nice mental exercises, but they do not change
the situation as the programmer sees it.

Matthias
0
find19 (1244)
12/5/2003 9:51:55 PM
Thant Tessman <thant@acm.org> wrote:
> Surprisingly (to me), I've *never* been bitten by this issue in any
> language. (Don't get me wrong, I've been bitten by a lot of stupid
> stuff. I once spent a day chasing down a bug that turned out to be a
> type-o in an include guard.) It's just that leaving evaluation order
> unspecified is the Wrong Thing (TM) in a language that otherwise
> prides itself on doing the Right Thing (TM) given a certain Scheme
> aesthetic.

In contrast, I believe that unspecified evaluation order is the Right
Thing. Implicit sequencing is sometimes helpful to newbies, but to
experienced programmers, it doesn't make much difference, and it can get
in the way (by masking design flaws and inhibiting optimization). Scheme
has plenty of sequencing forms if you need them; it doesn't need another
one. I personally wouldn't mind if Scheme cut out all implicit
sequencing, so that the only sequencing features were procedure calls
(to preserve by-value semantics) and those elements which exist
specifically to provide sequencing, like LET*, BEGIN, IF, AND, OR.

One thing I dislike about implicit sequencing is that it gives a false
sense of security. Scheme has a few implicit sequences, so newbies
expect arg eval to work that way too. Oops! OK, we make arg eval
sequential too. Newbies expect library macros to work the same way --
they look just like other function calls, after all. Oops! OK, we put
requirements on standard macros. Newbies expect third-party macros to
work the same way. Oops! Unless you really want to handcuff Scheme
implementors and library providers, there will always be some Oops!
where the newbie expects sequential evaluation and doesn't get it.
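
A tiny, purely hypothetical illustration of the macro case: the form
below looks exactly like a two-argument call, yet it evaluates its
second operand first and may never evaluate the first one at all.

    (define-syntax checked-div
      (syntax-rules ()
        ((_ n d)
         (let ((dv d))                 ; d is evaluated before n
           (if (zero? dv) #f (/ n dv))))))

    ;; (checked-div (begin (display "hi") 6) 0) => #f, and prints nothing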

Yes, that's a slippery-slope argument. I personally feel that Scheme is
already a step or two down the slope. Adding fixed arg eval goes further
down the slope, doesn't really fix anything, and does hurt some things.

Furthermore, macros make it dead simple to create a syntax for
fixed-order arg eval, if you *really* want it. (Maybe it should even be
in the standard Scheme library.) If your Scheme has something like PLT's
#%apply, you can even redefine primitive combinations.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/6/2003 1:15:48 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> Furthermore, macros make it dead simple to create a syntax for
> fixed-order arg eval, if you *really* want it. (Maybe it should even be
> in the standard Scheme library.) If your Scheme has something like PLT's
> #%apply, you can even redefine primitive combinations.

Weren't you one of the people arguing for unshackling optimizers?
Using the macro system to implement an order steals valuable
information from an optimizer -- even baffles it by turning
application into something much more complex.

Shriram
0
sk1 (223)
12/6/2003 3:15:07 AM
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> However, for the things a language does define, it's important that
>> you do get diagnostics when you run a non-comforming program.

Matthias Blume <find@my.address.elsewhere> wrote:
> Since programs that rely on order of evaluation are non-conforming,
> shouldn't we then require a diagnostic?

How are they "non-conforming"? It is not formally an *error* to rely on
unspecified behavior. Unfortunately, R5RS doesn't formally define a
"conforming program" at all (at least not that I noticed).

The C and C++ languages do provide a useful defintion of the term,
though. "A conforming program is one that is acceptable to a conforming
implementation," [C99] and a conforming implementation "shall accept any
strictly conforming program." Finally,

    A strictly conforming program shall use only those features of the
    language and library specified in this International Standard. It
    shall not produce output dependent on any unspecified, undefined, or
    implementation-defined behavior, and shall not exceed any minimum
    implementation limit.

There are very few strictly conforming C programs; in practice, the
definition is useful only for its role in defining a conforming
implementation. By C's standard, a program that depends on order of
evaluation *is* a conforming program, albeit not a *strictly* conforming
program.

Also, a diagnostic is impossible in practice, because it's generally
impossible for a compiler to determine whether the program's output
depends on the unspecified behavior. If the "dependence" lies below an
abstraction barrier, it may have no effect on the program's output. That
would make it conform to a requirement not to "produce output dependent
on any unspecified ... behavior." Note that the definition is in terms
of output specifically to show the program's abstract behavior rather
than its concrete behavior, because it's possible to write programs
which are robust even in the presence of unspecified behavior.
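
Here is a small, made-up example of that last point: both arguments
mutate the same hidden list, so evaluation order is "depended on"
internally, but the program's output is the same either way.

    (define table '())
    (define (record! k)
      (set! table (cons k table))
      k)

    (+ (record! 1) (record! 2))   ; => 3 under either evaluation order
    (length table)                ; => 2 either way; only the internal
                                  ;    ordering of TABLE can differ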

>> I'm even more curious about your background now, because your
>> suggestions keep getting farther and farther from the reality I'm
>> familiar with.

> My background is easy to find out.

Is there any particular reason why you're being difficult here? I wasn't
mocking your background, just curious about it. I suspect that our
backgrounds and goals are different, and that we're talking past each
other because of it.

> What is yours?

Résumé: Learned BASIC on a Commodore PET over 20 years ago, received a
BSE (Computer Engineering) from the University of Michigan in 1993,
worked about 2 years in data processing, 3 years in applications and
networking, 1 year in C++ compiler construction, 2 years in performance
enhancements for the HP-UX C library, and 2 years in test development
and defect repair for the HP-UX system kernel. My education and career
is engineering-oriented, with a strong emphasis on software interfaces,
best practices, systems software, and tools.

Standard disclaimer: HP pays me for my opinions, but the company doesn't
endorse what I write on comp.lang.scheme.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/6/2003 3:39:48 AM
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> Furthermore, macros make it dead simple to create a syntax for
>> fixed-order arg eval, if you *really* want it. (Maybe it should even
>> be in the standard Scheme library.) If your Scheme has something like
>> PLT's #%apply, you can even redefine primitive combinations.

Shriram Krishnamurthi <sk@cs.brown.edu> wrote:
> Weren't you one of the people arguing for unshackling optimizers?

Yes, that's me.

> Using the macro system to implement an order steals valuable
> information from an optimizer -- even baffles it by turning
> application into something much more complex.

First, I wouldn't recommend making it the *default* for application.
Second, the macro would expand to a simple LET* + procedure call, and
I'd be surprised if that's sufficient to baffle an optimizer.

In case it wasn't clear, I was recommending a standard library
definition something like this:

    ; a sequential (left-to-right) procedure call macro
    (define-syntax seq-call
      (syntax-rules ()
        ((_ "gen" f () ((tmp expr) ...))
         (let* ((tmp expr) ...) (f tmp ...)))
        ((_ "gen" f (e0 e1 ...) (binding ...))    ; one fresh temp per arg
         (seq-call "gen" f (e1 ...) (binding ... (tmp e0))))
        ((_ f arg ...)
         (seq-call "gen" f (arg ...) ()))))

Sure, it wouldn't be as efficient as a compiler designed for left->right
eval order (unless the implementor provided a fast, primitive version of
the macro), but I wouldn't worry about it too much, because I wouldn't
expect programs to need it very often.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/6/2003 3:49:12 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> Is there any particular reason why you're being difficult here?

Why do you think I am being difficult?  My resume is online.  Go and
look it up.  I am not going to post it in the middle of a silly
flamewar on comp.lang.scheme, though.

> I wasn't mocking your background, just curious about it.  I suspect
> that our backgrounds and goals are different, and that we're talking
> past each other because of it.

We are not talking past each other.  I completely understand your
point of view.  I happen not to share it.  You might be interested to
know that there was a time when I did share it, but I have
reconsidered since.

Matthias
0
find19 (1244)
12/6/2003 4:01:50 AM
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> Is there any particular reason why you're being difficult here?

Matthias Blume <find@my.address.elsewhere> wrote:
> Why do you think I am being difficult?  My resume is online.  Go and
> look it up.  I am not going to post it in the middle of a silly
> flamewar on comp.lang.scheme, though.

You could've at least posted a link, or even just said that you're in
academics, with a research focus in operating systems, garbage
collection, and compiler construction. I didn't want a whole resume,
just a general idea of your background.

>> I wasn't mocking your background, just curious about it.  I suspect
>> that our backgrounds and goals are different, and that we're talking
>> past each other because of it.

> We are not talking past each other.  I completely understand your
> point of view.  I happen not to share it.  You might be interested to
> know that there was a time when I did share it, but I have
> reconsidered since.

It might be helpful if you explained why you changed your mind. That
would be more useful to me than a laundry list of pros and cons, because
it would explain why your position differs.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/6/2003 4:13:26 AM
Joe Marshall <jrm@ccs.neu.edu> wrote:
> If it were up to me, I'd probably just specify r2l and be done with
> it.  `Compiler optimization' just isn't a compelling argument.

Why right->left? That would be just as "counterintuitive" to humans as
unspecified order, *and* it would inhibit optimization.

Personally, I'd much rather see Scheme translators with an option for
varying the eval order. That would help with automated testing,
especially with flushing out the bugs that result from subtle eval-order
dependencies. It'd be similar to David Rush's practice of porting to
several Scheme imps, with the same benefits.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/6/2003 4:17:41 AM
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> Obscure? I'd expect [the benefits of eval order optimization] to be
>> systematic, actually ....

Daniel C. Wang <danwang74@hotmail.com> wrote:
> If you typically pass arguments in registers rather than on the
> stack, which is what you ought to be doing with all the registers
> anyway, it doesn't matter.

Yes, that's a nice feature of register-based arguments. Unfortunately,
it's not a practical option on many architectures, including the
ubiquitous i386.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/6/2003 4:19:24 AM
> "Bradd W. Szonye" writes:
>> This sort of thing is especially important now that "superscalar"
>> machine architectures are ubiquitous. For example, on an IA-64
>> system, unnecessary sequence points have *major* performance
>> implications. Why introduce unnecessary sequence points just to bless
>> a poor design and coding style?

Daniel C. Wang <danwang74@hotmail.com> wrote:
> 1. The IA-64 despite all the marketing from Intel is not a "ubiquitous" 
>    architecture

I didn't mean to imply that it was. The IA-64 is just one example of a
superscalar architecture. There are many others, including its
predecessor (on the Intel side).

> 2. All existing architectures that allow for parallel execution
>    dynamically schedule instructions on the fly. 
>    (i.e. Pentium 4, PowerPCs, Transmeta, AMD x86-64, UltraSparc ...)

Correct -- except when the compiler inserts nops and other pipeline
flushers to make sure that the dynamic scheduling doesn't break
sequencing rules.

Note: I may be spouting nonsense, at least when it comes to the
instruction level, because I haven't worked much at this level recently,
and I've definitely never worked on a compiler with fixed arg eval
order. I do know that left->right vs right->left makes a big difference,
but I'm just speculating on this superscalar thing.

> The only performance number that has been discussed in this thread is
> a 7% cost for strict left-right evaluation. I'd hardly call 7% a major
> cost.

I certainly would! Improving overall system performance by 7%, without
requiring any user code changes whatsoever, is a *big* deal, worth big
bucks in the commercial software world. Sure, we like the big 2x and 10x
performance improvements, but 7% system-wide is nothing to sneeze at.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/6/2003 4:32:35 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> You could've at least posted a link, or even just said that you're in
> academics, with a research focus in operating systems, garbage
> collection, and compiler construction. I didn't want a whole resume,
> just a general idea of your background.

I did not want to post a link.  These days people are easy to find.
Oh, and for the record, although I do have an interest in operating
systems, it has never been my research focus.  Same goes for garbage
collection.

> It might be helpful if you explained why you changed your mind. That
> would be more useful to me than a laundry list of pros and cons, because
> it would explain why your position differs.

I have already explained why I think that leaving the order of
evaluation unspecified is a bad idea, and I am not going to repeat all
that again.  But in summary, and for the last time: What you think
(and I once thought) to be advantages are, in my opinion, not worth
*any* trouble.  On the other hand, sacrificing well-defined semantics
is a high price to pay in the case of a language that is not meant to
be non-deterministic.  Dealing with and reasoning about
non-determinism is hard, and one should not make things needlessly
hard.  As the original poster's problem demonstrated, the choice of
leaving evaluation order unspecified *is* trouble, and---as I have
witnessed myself---it happens to beginners and veterans alike.

Matthias
0
find19 (1244)
12/6/2003 4:39:44 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> I certainly would! Improving overall system performance by 7%, without
> requiring any user code changes whatsoever, is a *big* deal, worth big
> bucks in the commercial software world. Sure, we like the big 2x and 10x
> performance improvements, but 7% system-wide is nothing to sneeze at.

Unfortunately, nothing at all has been said about "system performance".
Go and read Clinger's original comment.  It was about code size.
0
find19 (1244)
12/6/2003 4:54:04 AM
> "Bradd W. Szonye" writes:
>> I certainly would! Improving overall system performance by 7%,
>> without requiring any user code changes whatsoever, is a *big* deal,
>> worth big bucks in the commercial software world. Sure, we like the
>> big 2x and 10x performance improvements, but 7% system-wide is
>> nothing to sneeze at.

Matthias Blume <find@my.address.elsewhere> wrote:
> Unfortunately, nothing at all has been said about "system
> performance". Go and read Clinger's original comment.  It was about
> code size.

My mistake; I misremembered the quotation. However, a 7% reduction in
code size is also an impressive improvement, even more impressive than a
7% performance improvement IMO. And Scott G. Miller also reported
impressive gains from choosing a particular evaluation order.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/6/2003 5:21:01 AM
> "Bradd W. Szonye" writes:
>> You could've at least posted a link, or even just said that you're in
>> academics, with a research focus in operating systems, garbage
>> collection, and compiler construction. I didn't want a whole resume,
>> just a general idea of your background.

Matthias Blume wrote:
> I did not want to post a link. These days people are easy to find.

Any particular reason? Because so far you're just reinforcing my
impression that you're being difficult.

> Oh, and for the record, although I do have an interest in operating
> systems, it has never been my research focus.  Same goes for garbage
> collection.

Then why are they listed first as your research subjects at Kyoto
University? Or did I find the wrong Matthias Blume? Only module systems
appear more prominently in your resume.

>> It might be helpful if you explained why you changed your mind. That
>> would be more useful to me than a laundry list of pros and cons,
>> because it would explain why your position differs.

> I have already explained why I think that leaving the order of
> evaluation unspecified is a bad idea, and I am not going to repeat all
> that again.

I didn't ask you to. I asked why you *changed your mind*. That would be
helpful to me, because it might give me some perspective; it might be
more convincing stated in those terms.

> But in summary, and for the last time: What you think (and I once
> thought) to be advantages are, in my opinion, not worth *any* trouble.

Having worked on a commercial compiler, system library, and operating
system kernel, where we've spent considerable resources on just that
kind of improvement, I can't take that opinion very seriously. This is
exactly why I suspect that the differences in our backgrounds are
important. In my job, we take both performance *and* usability very
seriously, because both of them sell the product. Yes, there are
trade-offs between the two. However, when there's a 7% or 35% advantage
on one hand, and an RTFM-type mistake on the other hand, the choice is
pretty obvious.

> On the other hand, sacrificing well-defined semantics is a high price
> to pay in the case of a language that is not meant to be
> non-deterministic. Dealing with and reasoning about non-determinism is
> hard, and one should not make things needlessly hard.

There is no non-determinism for a well-designed program, and with a
non-perverse implementation, there is no non-determinism at all.

> As the original poster's problem demonstrated, the choice of leaving
> evaluation order unspecified *is* trouble, and---as I have witnessed
> myself---it happens to beginners and veterans alike.

And in my experience, a fixed order doesn't help much, except in the
most obvious cases (and that's what code reviews are for). In the
subtler cases, where you have unintentional interactions between
arguments, the fixed order merely masks what is most likely a serious
design flaw. My experience matches David Rush's: Programmers are better
off using constructs that *provoke* bugs rather than suppressing them.
And I do strongly feel that a fixed order merely suppresses the bugs; it
doesn't actually fix anything except trivial RTFM-type mistakes.

Yes, veterans occasionally make the mistake too. Covering it up with a
kludge is not the right solution, though. There are at least half a
dozen better ways of dealing with the problem; the perverse,
random-order approach is actually better than fixed order IMO.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/6/2003 5:43:12 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> Then why are they listed first as your research subjects at Kyoto
> University? Or did I find the wrong Matthias Blume? Only module systems
> appear more prominently in your resume.

I left Japan 4 years ago.  What you found is way outdated.  (I assume
that it is stuff on my old Princeton web page which I need to take
down one of these days.)  For the record: Yes, I spent some amount of
time thinking about OS and GC issues when I was in Kyoto, but so far
nothing tangible came of it.

> >> It might be helpful if you explained why you changed your mind. That
> >> would be more useful to me than a laundry list of pros and cons,
> >> because it would explain why your position differs.
> 
> > I have already explained why I think that leaving the order of
> > evaluation unspecified is a bad idea, and I am not going to repeat all
> > that again.
> 
> I didn't ask you to.

Yes you did.  The reasons why I changed my mind are precisely the same
reasons that I already gave.

> However, when there's a 7% or 35% advantage on one hand, and an
> RTFM-type mistake on the other hand, the choice is pretty obvious.

You are *way* overstating your case.  I don't believe for a second
that there would be a 7% (not to mention 35%) sustainable advantage.

And the "RTFM" remark carries no weight at all since every poor
language design decision could be brushed away with it.
0
find19 (1244)
12/6/2003 6:07:19 AM
Matthias Blume <find@my.address.elsewhere> writes:

> I left Japan 4 years ago.

Correction: 3 years, with 3.5 years since I left my position at Kyoto
University.
0
find19 (1244)
12/6/2003 6:11:01 AM

Matthias Blume wrote:

> Sure, you can "plug" the hole by making things explicitly
> non-deterministic in the semantics, the same way this has to be done
> for, e.g., a concurrent language.  I find it offensive, though, to
> throw all the complexity of concurrency on a sequential language.  In
> practical terms, this is *not* a solution as it changes nothing from
> the point of view of the programmer.  Returning some random element
> from a denotation in Answer* is just as bad as not returning a
> well-defined result.  (The two things are the same.)

Just to add something irrelevant, I have wondered on and off for a
while about the rather messy lazy IO system in Haskell, and have
wondered if and how such a system could be based on a concurrent
model.  Any thoughts on this?

David
0
feuer (188)
12/6/2003 6:15:23 AM
Matthias Blume wrote:

> the point of view of the programmer.  Returning some random element
> from a denotation in Answer* is just as bad as not returning a
> well-defined result.  (The two things are the same.)

Not necessarily, though it could well be said that any program in which
this is not true is Too Fragile.

> One way of correctly dealing with Scheme's unspecified evaluation
> order involves using an infinite stream of random bits that becomes an
> additional parameter to the program.  Each pair of corresponding
> permute/unpermute calls then consumes some of the random bits.  A
> program has a deterministic outcome if you can prove that it has the
> same outcome for all possible input tapes of random bits.

Right.  The R5RS authors just didn't want to deal with threading that
stream through the semantics.

David
Monads, monads.
0
feuer (188)
12/6/2003 6:24:46 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> Joe Marshall <jrm@ccs.neu.edu> wrote:
> > If it were up to me, I'd probably just specify r2l and be done with
> > it.  `Compiler optimization' just isn't a compelling argument.
> 
> Why right->left? That would be just as "counterintuitive" to humans as
> unspecified order, *and* it would inhibit optimization.

Time to brush up on your Arabic.

Shriram
0
sk1 (223)
12/6/2003 2:09:35 PM
>>>>> "Matthias" == Matthias Felleisen <matthias@ccs.neu.edu> writes:

Matthias> 3. The Scheme Report imposes an undecidable correctness criteria on
Matthias> programs -- that they don't depend on the order of evaluation -- without
Matthias> (naturally) asking implementations to check it. Go figure; and that language
Matthias> supposedly has a semantics.

While this is certainly a problem, the bigger problem in practice is
the unspecified parts of EQ? and EQV?

-- 
Cheers =8-} Mike
Friede, Völkerverständigung und überhaupt blabla
0
sperber (138)
12/6/2003 3:28:37 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> Joe Marshall <jrm@ccs.neu.edu> wrote:
>> If it were up to me, I'd probably just specify r2l and be done with
>> it.  `Compiler optimization' just isn't a compelling argument.
>
> Why right->left? That would be just as "counterintuitive" to humans as
> unspecified order, *and* it would inhibit optimization.

Right to left because then LET expressions go from top to bottom.  It
isn't much of a reason to prefer it to l2r, but then I think this is
trivial.  And as I said, the optimization argument isn't compelling.

> Personally, I'd much rather see Scheme translators with an option for
> varying the eval order. That would help with automated testing,
> especially with flushing out the bugs that result from subtle eval-order
> dependencies. It'd be similar to David Rush's practice of porting to
> several Scheme imps, with the same benefits.

That'd be an interesting option.  Even if Scheme *does* settle on some
specific eval order, it would be handy to have a way to vary it for a
bug shakedown.


-- 
~jrm
0
12/6/2003 4:34:30 PM
Michael Sperber wrote:

>>>>>>"Matthias" == Matthias Felleisen <matthias@ccs.neu.edu> writes:
> 
> 
> Matthias> 3. The Scheme Report imposes an undecidable correctness criteria on
> Matthias> programs -- that they don't depend on the order of evaluation -- without
> Matthias> (naturally) asking implementations to check it. Go figure; and that language
> Matthias> supposedly has a semantics.
> 
> While this is certainly a problem, the bigger problem in practice is
> the unspecified parts of EQ? and EQV?


Yes, I am in full agreement. The language has many vague, unspecified corners, 
and programmers suffer from those all the time. It was not my intention to 
restrict the discussion to the order of evaluation in applications. It was just 
that the original post was about that problem (a careful programmer had been 
bitten by this vagueness once again).

I proposed at the Northeastern workshop that Scheme should make a serious effort 
to catch up with SML and Haskell concerning the semantic definition of the core. 
This would help the language itself and those academics who serve Scheme's 
development in some forum or another. (I put it more bluntly then and I will 
repeat it here. Please compare the academic success of the authors of the SML 
report with those who wrote the 3 and 4 and 5 reports on Scheme [at the time of 
the writing of the reports]. I am grateful that Will is with me here at NU.)

A consequence of my proposal is to factor out the entire systems library and 
create a mechanism for managing libraries. Then the core language can be 
specified with a full-fledged semantics (like the one for SML) and the libraries 
stand on their own. This work can be spun off on volunteers. (Robby Findler and 
Jacob Matthews have already taken me up on that. I believe they have an 
executable rewriting semantics for a fixed R4RS. Ryan and I are working with 
them on a semantics for macros.)

Naturally, I don't advocate an exact copy of SML's work. I find the idea of a 
semantic definition that has to come with a full-fledged interpretation (in a 
second book) unsatisfactory. Fortunately, we don't have to specify a type system 
at this point. Still, in the ideal world, we should be able to prove a type 
soundness theorem (a la Milner) for the core of Scheme that we do specify. 
Perhaps we can even enlist some automatic theorem provers to do that for us.

If we succeed, we may be the first useful scripting language with growth 
potential and a well-defined semantics. What progress :-)

-- Matthias

0
12/6/2003 6:16:29 PM
>> Joe Marshall <jrm@ccs.neu.edu> wrote:
>>> If it were up to me, I'd probably just specify r2l and be done with
>>> it.  `Compiler optimization' just isn't a compelling argument.

> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> Why right->left? That would be just as "counterintuitive" to humans
>> as unspecified order, *and* it would inhibit optimization.

Shriram Krishnamurthi <sk@cs.brown.edu> wrote:
> Time to brush up on your Arabic.

When I mentioned this last time, I was careful to qualify that as
"English-speaking humans." I was careless this time!
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/6/2003 6:22:17 PM
Joe Marshall <prunesquallor@comcast.net> wrote:
>>> If it were up to me, I'd probably just specify r2l and be done with
>>> it.  `Compiler optimization' just isn't a compelling argument.

> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> Why right->left? That would be just as "counterintuitive" to
>> [English-speaking] humans as unspecified order, *and* it would
>> inhibit optimization.

> Right to left because then LET expressions go from top to bottom.

Why does right->left imply that? Or, more specifically, why would
left->right necessarily impose bottom to top evaluation in LET? That
seems like a quality-of-implementation issue for the LET macro, rather
than a necessity.
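
(For reference, the connection presumably being assumed is the usual
derivation of LET into an application:

    (let ((a e1) (b e2)) body)
    ; ==
    ((lambda (a b) body) e1 e2)

so under that expansion the binding expressions run in whatever order
the application's arguments do; left-to-right argument order gives
top-to-bottom bindings. A LET macro is of course free to sequence its
initializers explicitly instead.)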

> It isn't much of a reason to prefer it to l2r, but then I think this
> is trivial.  And as I said, the optimization argument isn't
> compelling.

Likewise, I don't find any of the "better semantics" arguments
compelling. All they do is conflate the notions of sequential evaluation
and argument evaluation. I don't buy that because procedures aren't
conceptually sequential, not in general. They're closer to tree-shaped,
with several evaluation branches flowing into each expression. (In
general, the flow of information is a graph, but tree patterns are
especially common.) For example, in

    (+ (* 2 3) (* 4 6))

one branch calculates 2x3, one branch calculates 4x6, and the final
procedure call combines the two branches into a single result. The two
branches are independent, though. This is why superscalar CPU
architectures are possible.

Unfortunately, there aren't many programming languages that let you
describe the actual graph structure of information flow. I suspect
that's because the text files we use for programming are linear in
nature, so it's hard to express information flow in non-linear ways.

Unspecified argument evaluation order is good because it's one of the
few constructs that actually supports the non-linearity of information
flow in a program. I'm opposed to taking that away. I think it
*obscures* the program's information flow, rather than defining it
better.

>> Personally, I'd much rather see Scheme translators with an option for
>> varying the eval order. That would help with automated testing,
>> especially with flushing out the bugs that result from subtle
>> eval-order dependencies. It'd be similar to David Rush's practice of
>> porting to several Scheme imps, with the same benefits.

> That'd be an interesting option.  Even if Scheme *does* settle on some
> specific eval order, it would be handy to have a way to vary it for a
> bug shakedown.

Yes, indeed. It's the same kind of idea as "electric fence" options for
memory allocators: It tries to provoke the program into bad behavior,
unmasking the hidden bugs.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/6/2003 6:39:58 PM
On Sat, 06 Dec 2003 05:43:12 GMT, Bradd W. Szonye <bradd+news@szonye.com> 
wrote:
> My experience matches David Rush's: Programmers are better
> off using constructs that *provoke* bugs rather than suppressing them.

Well, I don't know if I'd put it that way, but I do believe in the
principle of earliest surprise - the earlier the surprise, the sooner
it's fixed. And there's a corollary for defensive programming: don't.
Program errors are
easiest to diagnose when they are close to the source of the error.

Those are slippery-slope opinions, mind you, and it takes years to develop
a good feel for when to apply them as rules. Perhaps it would be better if
I said that I prefer documenting invariants to writing code that handles
invariant violations, however gracefully.

david rush
-- 
(\x.(x x) \x.(x x)) -> (s i i (s i i))
         -- aki helin (on comp.lang.scheme)
0
drush (122)
12/6/2003 6:50:47 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> Joe Marshall <prunesquallor@comcast.net> wrote:
>>>> If it were up to me, I'd probably just specify r2l and be done with
>>>> it.  `Compiler optimization' just isn't a compelling argument.
>
>> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>>> Why right->left? That would be just as "counterintuitive" to
>>> [English-speaking] humans as unspecified order, *and* it would
>>> inhibit optimization.
>
>> Right to left because then LET expressions go from top to bottom.
>
> Why does right->left imply that? 

<homer>D'oh!</homer> I've been saying it the wrong way.  I meant left
to right.

> Or, more specifically, why would left->right necessarily impose
> bottom to top evaluation in LET?  That seems like a
> quality-of-implementation issue for the LET macro, rather than a
> necessity.

You are right.

Left to right because inserting line breaks makes it top to bottom and
because other forms like BEGIN, and IF do it left to right.

Incidentally, if you specify order of evaluation for function call
expressions, you really ought to specify it for macro expressions as
well.

> Likewise, I don't find any of the "better semantics" arguments
> compelling. 

Nor do I.  I see a lot of `gaping hole' and `needless complication'
being bandied about, but I don't see anything like `the
such-and-such calculus is a powerful tool, but it can't be easily used
with Scheme because the argument order is unspecified'.

> All they do is conflate the notions of sequential evaluation
> and argument evaluation. I don't buy that because procedures aren't
> conceptually sequential, not in general. They're closer to tree-shaped,
> with several evaluation branches flowing into each expression. (In
> general, the flow of information is a graph, but tree patterns are
> especially common.) For example, in
>
>     (+ (* 2 3) (* 4 6))
>
> one branch calculates 2x3, one branch calculates 4x6, and the final
> procedure call combines the two branches into a single result. The two
> branches are independent, though. This is why superscalar CPU
> architectures are possible.

Yes, but since Scheme is a sequential language this doesn't convince
me.  A dumb compiler cannot take advantage of the parallelism because
of sequentiality, whereas a sufficiently smart compiler could take
advantage of it *despite* defining the order.

>>> Personally, I'd much rather see Scheme translators with an option for
>>> varying the eval order. That would help with automated testing,
>>> especially with flushing out the bugs that result from subtle
>>> eval-order dependencies. It'd be similar to David Rush's practice of
>>> porting to several Scheme imps, with the same benefits.
>
>> That'd be an interesting option.  Even if Scheme *does* settle on some
>> specific eval order, it would be handy to have a way to vary it for a
>> bug shakedown.
>
> Yes, indeed. It's the same kind of idea as "electric fence" options for
> memory allocators:  It tries to provoke the program into bad behavior,
> unmasking the hidden bugs.

In the build system I use I hacked the graph linearizer to randomize
the order if the explicit dependencies permitted it.  This exposed
many implicit dependencies.
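
A rough sketch of that trick, with some assumptions of my own: nodes
are symbols, the graph is an acyclic alist mapping each node to the
nodes it must follow, and the implementation supplies a (random n)
procedure (R5RS itself has none).

    (define (randomized-linearize graph)
      ;; local filter, to stay within R5RS
      (define (keep pred lst)
        (cond ((null? lst) '())
              ((pred (car lst)) (cons (car lst) (keep pred (cdr lst))))
              (else (keep pred (cdr lst)))))
      (let loop ((placed '()) (pending (map car graph)))
        (if (null? pending)
            (reverse placed)
            ;; a node is "ready" once all of its dependencies are placed
            (let* ((ready (keep (lambda (n)
                                  (null? (keep (lambda (d)
                                                 (not (memq d placed)))
                                               (cdr (assq n graph)))))
                                pending))
                   (pick (list-ref ready (random (length ready)))))
              (loop (cons pick placed)
                    (keep (lambda (n) (not (eq? n pick))) pending))))))

Linearizing the same graph a few times this way, and building in each
of the resulting orders, is what smokes out the undeclared dependencies.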

-- 
~jrm
0
12/6/2003 7:18:47 PM
David Rush <drush@aol.net> writes:

> On Sat, 06 Dec 2003 05:43:12 GMT, Bradd W. Szonye
> <bradd+news@szonye.com> wrote:
>> My experience matches David Rush's: Programmers are better
>> off using constructs that *provoke* bugs rather than suppressing them.

I assume that by `suppressing' you mean something like `masking' them,
i.e., the bug is still there, it is just benign.  

Of course a bug that is benign in *all* situations is barely worth
worrying about, but a bug that is benign in *most* situations is a
pain in the ass.

Some systems are designed so that the most innocuous bug instantly
brings the system to a halt.  These are called `fail-fast' systems.

-- 
~jrm
0
12/6/2003 8:46:33 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> > "Bradd W. Szonye" <bradd+news@szonye.com> writes:
> >> Why right->left? That would be just as "counterintuitive" to humans
> >> as unspecified order, *and* it would inhibit optimization.
> 
> Shriram Krishnamurthi <sk@cs.brown.edu> wrote:
> > Time to brush up on your Arabic.
> 
> When I mentioned this last time, I was careful to qualify that as
> "English-speaking humans." I was careless this time!

Most r2l writers program in l2r languages.  (And math is always l2r.)
So a r2l evaluation order would be as counter intuitive as it is for
l2r's.

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
0
eli666 (555)
12/7/2003 3:26:03 AM
Joe Marshall <prunesquallor@comcast.net> wrote:
> Of course a bug that is benign in *all* situations is barely worth
> worrying about, but a bug that is benign in *most* situations is a
> pain in the ass.

I would go further and say that a so-called bug that is benign isn't a
bug, it is a style issue. If order was l->r, I would have no guilt
converting this:

(let* ((a (return-1-and-record-history))
       (b (return-2-and-record-history))
       (c (return-3-and-record-history)))
  (+ a b c))

to this:

(+ (return-1-and-record-history)
   (return-2-and-record-history)
   (return-3-and-record-history))

and I'd call my expression compact, correct, good style, good
semantics, not hard to understand, and if Bradd chewed me out in a
code review, I'd tell him to get a life. This thread is too freaking
long, I can't believe I read it all the way through.

-- 
Anthony Carrico
0
acarrico (19)
12/7/2003 9:44:18 AM
> (+ (return-1-and-record-history)
>    (return-2-and-record-history)
>    (return-3-and-record-history))
> 
> and I'd call my expression compact, correct, good style, good
> semantics, not hard to understand, and if Bradd chewed me out in a
> code review, I'd tell him to get a life. This thread is too freaking
> long, I can't believe I read it all the way through.
> 

This really gives up one conceptual advantage of functional programming 
though in my opinion; that a function merely consumes values and 
produces a value as a result, and that its semantics do not depend on 
the order of operations.

Scheme is not purely functional though.  But it does divide itself
cleanly between functional and non-functional code quite nicely through
the separation of s-expression positions where the return value is important
and where it is not.  Consider the following expression:

(let ((v1 <vs>)
      (v2 <vs>))
  <cs>
  <cs>
  (f <vs> <vs>))

In it, <vs> are s-expressions whose value is consumed by another form, 
and <cs> are s-expressions in command context, whose values are 
discarded.  In Scheme, the <cs> expressions are irrelevant if they 
perform no side-effects.  Because they must perform side effects, the 
order in which they are evaluated is important, and so Scheme mandates 
that they are evaluated left-to-right.  The value producing expressions 
are functional, and their order should not matter to the consuming 
functions or continuations.

Occasionally though we have side-effecting s-expressions which produce a 
value we are interested in.  For this reason we have let*, which really 
performs two functions.  First it evaluates a sequence of s-expressions 
in left-to-right order, with the result available lexically to the later 
s-expressions.  Second, it binds these results to variables.

The only disadvantage of using let* to express the necessary sequencing
is that it conses several closures, one for each binding.  This makes it
syntactically more verbose and possibly more expensive.
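
To see where those closures come from, here is roughly how a naive
expansion of a three-binding let* looks (just a sketch; a clever
compiler may collapse it):

;; (let* ((a 1) (b 2) (c 3)) (+ a b c)) expands, naively, into one
;; nested closure per binding:
((lambda (a)
   ((lambda (b)
      ((lambda (c) (+ a b c))
       3))
    2))
 1)                                     ; => 6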

There appear to be two arguments for fixing an order of evaluation.  One
is that it removes a nondeterministic property of the language, allowing
for formal correctness analysis.  I believe someone brought up that this
didn't in fact cause such a problem; that formal program analysis could
be done even with an undefined order.  The second argument is pragmatic:
that programmers should be able to rely on the order of operations so
bugs don't occur because of them.  I'm strongly in agreement with David
Rush that it's always a bad idea to turn careless programming, which is
really buggy, into 'just bad style'.  Many other languages allow
shortcuts such as these in order to get to a finished program sooner,
and I'd rather Scheme remain a programming language biased towards
writing what the program means, not what it does.

Leaving the OoE unspecified allows for more flexibility both for the 
compiler and for code generating macros and programs.  The compiler 
flexibility argument is pretty handily demonstrated in SISC, where we 
can cut nearly a third from continuation capture by choosing an OoE 
which may be counterintuitive to the naive programmer.

At any rate, a compromise on the verbosity and cost of mandating let*
when sequencing is required might be in order.  Taylor Campbell
brought up the idea in #scheme of an apply* special form, which could be
written as the following macro:

(define-syntax apply*
  (syntax-rules ()
    ((_ fun) (fun))
    ((_ fun arg1 args ...)
     ;; bind arg1 first, then recurse to force the remaining
     ;; arguments in left-to-right order
     (let ((tmp arg1))
       (apply* (lambda rest (apply fun tmp rest)) args ...)))))

This allows you to more concisely write the above example as:

(apply* + (return-1-and-record-history)
          (return-2-and-record-history)
          (return-3-and-record-history))

while retaining the compiler's flexibility for all truly functional
applications.  Future compilers could provide apply* natively, which
would sequence function application without the need to create closures,
and could reduce closure consing by optimizing

(let* ((v1 expr1)
       (v2 expr2)
       (v3 expr3))
  body ...)

where expr2 and expr3 don't reference the previous bindings, into:

(apply* (lambda (v1 v2 v3) body ...) expr1 expr2 expr3)

	Scott

0
scgmille (240)
12/7/2003 4:27:28 PM
Scott G. Miller <scgmille@freenetproject.org> wrote:
>
>> (+ (return-1-and-record-history)
>>    (return-2-and-record-history)
>>    (return-3-and-record-history))
>>
>> and I'd call my expression compact, correct, good style, good
>> semantics, not hard to understand, and if Bradd chewed me out in a
>> code review, I'd tell him to get a life. This thread is too freaking
>> long, I can't believe I read it all the way through.
>>
>
> This really gives up one conceptual advantage of functional programming
> though in my opinion; that a function merely consumes values and
> produces a value as a result, and that its semantics do not depend on
> the order of operations.

But this isn't a functional program, and I qualified the example to
apply only in an l->r language.

I don't disagree with you, and I'm not arguing for OR against
specifying an argument order. I'm just saying that if it is specified
as l->r, then I don't agree with everybody in this thread who says it
would be bad "newbie" style. If it wasn't for the optimization issue,
then I'd be happy with l->r, and I'd sometimes rely on it, and I
wouldn't feel guilty, and I'd be very happy that my code was more
compact and easier to read and understand.

Basically I'm just staking out the only position left in this debate
so that Bradd and Matthias B. both have a chance to agree about
something: they can point at me and tell each other, "see I told you
so", and they would both be correct, and then we can all go home :).

-- 
Anthony Carrico
0
acarrico (19)
12/7/2003 5:49:10 PM
acarrico@memebeam.org wrote:
> Scott G. Miller <scgmille@freenetproject.org> wrote:
> 
>>>(+ (return-1-and-record-history)
>>>   (return-2-and-record-history)
>>>   (return-3-and-record-history))
>>>
>>>and I'd call my expression compact, correct, good style, good
>>>semantics, not hard to understand, and if Bradd chewed me out in a
>>>code review, I'd tell him to get a life. This thread is too freaking
>>>long, I can't believe I read it all the way through.
>>>
>>
>>This really gives up one conceptual advantage of functional programming
>>though in my opinion; that a function merely consumes values and
>>produces a value as a result, and that its semantics do not depend on
>>the order of operations.
> 
> 
> But this isn't a functional program, and I qualified the example to
> apply only in an l->r language.
> 
> I don't disagree with you, and I'm not arguing for OR against
> specifying an argument order. I'm just saying that if it is specified
> as l->r, then I don't agree with everybody in this thread who says it
> would be bad "newbie" style. If it wasn't for the optimization issue,
> then I'd be happy with l->r, and I'd sometimes rely on it, and I
> wouldn't feel guilty, and I'd be very happy that my code was more
> compact and easier to read and understand.

Sure.  If Scheme mandated block structured imperative programming, I'd 
use it too and it wouldn't be bad style.  But it doesn't.  I don't think 
it should mandate an OoE either, and was pointing out that it needn't do 
so to provide programmers with a concise way to have ordered side-effect 
supporting expressions.

We can all stop talking past each other now. :)

	Scott

0
scgmille (240)
12/7/2003 9:14:55 PM
Scott G. Miller <scgmille@freenetproject.org> wrote:
> If Scheme mandated block structured imperative programming, I'd use it
> too and it wouldn't be bad style.  But it doesn't.  I don't think it
> should mandate an OoE either, and was pointing out that it needn't do
> so to provide programmers with a concise way to have ordered
> side-effect supporting expressions.

That's roughly my feeling on it too. I don't see Scheme as a "sequential
language" in the sense that some posters have been claiming. It's an
impure functional language with many sequential and imperative
constructs.

I'd prefer *not* to fix the arg eval order, because it's another step
down the road toward sequential, imperative evaluation (which I see as
undesirable, for several reasons) and because it would provide little
concrete benefit other than protecting some programmers from RTFM-type
mistakes.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/7/2003 11:34:01 PM
Eli Barzilay <eli@barzilay.org> wrote:
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
> 
>> > "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> >> Why right->left? That would be just as "counterintuitive" to humans
>> >> as unspecified order, *and* it would inhibit optimization.
>> 
>> Shriram Krishnamurthi <sk@cs.brown.edu> wrote:
>> > Time to brush up on your Arabic.
>> 
>> When I mentioned this last time, I was careful to qualify that as
>> "English-speaking humans." I was careless this time!
> 
> Most r2l writers program in l2r languages.  (And math is always l2r.)
> So a r2l evaluation order would be as counter intuitive as it is for
> l2r's.

Yes, agreed, which is *why* I was careless about the qualifier.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/7/2003 11:35:21 PM
Bradd wrote:
>> [In my experience]: Programmers are better off using constructs that
>> *provoke* bugs rather than suppressing them.

Joe Marshall <prunesquallor@comcast.net> wrote:
> I assume that by `suppressing' you mean something like `masking' them,
> i.e., the bug is still there, it is just benign.  

Yes.

> Of course a bug that is benign in *all* situations is barely worth
> worrying about, but a bug that is benign in *most* situations is a
> pain in the ass.

Agreed. Based on my experience, fixing the arg eval order would have
this result. It would work fine for the cases where the programmer
intentionally relied on left->right evaluation, but it would commonly
result in "mostly benign" bugs in the accidental cases. Also IME,
"mostly benign" bugs are much worse than "fast-fail" bugs. Therefore, I
feel that fixed eval order is likely to cause even more damage (by
masking subtle design flaws) than it is to help.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/7/2003 11:40:59 PM
Joe Marshall <prunesquallor@comcast.net> wrote:
> Incidentally, if you specify order of evaluation for function call
> expressions, you really ought to specify it for macro expressions as
> well.

And that's a huge can of worms, IMO! In Scheme, macros look just like
procedure calls, but they don't behave the same way. While you could
write many macros to mimic a fixed evaluation order, I doubt that it's
possible for all of them, and it's probably impossible for a compiler to
check it.

Because of that, setting a fixed arg eval order is just postponing the
problem. Eventually, a programmer will rely on it in a macro call, and
there's no good way to warn the macro user (or the macro author!) that
this is a problem. I personally think it's a very, very bad idea to say,
"You can rely on the eval order in this construct, but you can't count
on it in this other construct that looks exactly like the other one."
That's bad, bad, error-prone language design!
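
To make that concrete (a sketch of my own, with made-up names):
swapped-call below is indistinguishable from a procedure call at the
use site, yet it always evaluates its operands right-to-left.

(define-syntax swapped-call
  (syntax-rules ()
    ((_ f a b)
     (let* ((y b)                 ; second operand evaluated first
            (x a))
       (f x y)))))

(define (trace tag v) (display tag) v)

(+ (trace "L" 1) (trace "R" 2))               ; order unspecified
(swapped-call + (trace "L" 1) (trace "R" 2))  ; always prints "RL"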

> In the build system I use I hacked the graph linearizer to randomize
> the order if the explicit dependencies permitted it.  This exposed
> many implicit dependencies.

Good idea!
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/7/2003 11:46:46 PM
"Bradd W. Szonye" <bradd+news@szonye.com> virkkoi:
> I'd prefer *not* to fix the arg eval order, because it's another step
> down the road toward sequential, imperative evaluation (which I see as
> undesirable, for several reasons) and because it would provide little
> concrete benefit other than protecting some programmers from RTFM-type
> mistakes.

By the way, do you also think that C got it right by letting the values
of all newly declared local variables be undefined unless explicitly
initialized? After all, initializing all variables to zero or null
automatically would only protect some programmers from RTFM-type
mistakes.


Lauri Alanko
la@iki.fi
0
la (473)
12/7/2003 11:46:51 PM
Bradd wrote:
>> My experience matches David Rush's: Programmers are better
>> off using constructs that *provoke* bugs rather than suppressing them.

David Rush wrote:
> Well, I don't know if I'd put it that way, but I do believe in the
> principle of earliest surprise - the earlier the surprise, the sooner
> it's fixed. And there's a corollary for defensive programming: don't.
> Program errors are easiest to diagnose when they are close to the
> source of the error.

Agreed, and that's exactly what I meant by my paraphrase above (so my
apologies if it didn't come across that way).

> Those are slippery-slope opinions, mind you, and it takes years to
> develop a good feel for when to apply them as rules.

Also agreed. There are some cases where it makes sense to guard against
minor flakiness. However, it's always dangerous; whenever you spot a
violated invariant, there's a risk that the corruption runs deeply
through the program state.

> Perhaps it would be better if I said that I prefer documenting
> invariants to writing code that handles invariant violations, however
> gracefully.

Same here.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/7/2003 11:49:39 PM
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> Then why are they listed first as your research subjects at Kyoto
>> University? Or did I find the wrong Matthias Blume? Only module
>> systems appear more prominently in your resume.

Matthias Blume wrote:
> I left Japan 4 years ago.  What you found is way outdated.  (I assume
> that it is stuff on my old Princeton web page which I need to take
> down one of these days.)  For the record: Yes, I spent some amount of
> time thinking about OS and GC issues when I was in Kyoto, but so far
> nothing tangible came of it.

OK, thanks for the clarification. Now, could you please provide some
background information (or a link to it) that's actually current? You
claimed that the information was easy to find, but what I *actually*
found was inaccurate and misleading.

>>>> It might be helpful if you explained why you changed your mind. That
>>>> would be more useful to me than a laundry list of pros and cons,
>>>> because it would explain why your position differs.

>>> I have already explained why I think that leaving the order of
>>> evaluation unspecified is a bad idea, and I am not going to repeat
>>> all that again.

>> I didn't ask you to.

> Yes you did.  The reasons why I changed my mind are precisely the same
> reasons that I already gave.

Please, this isn't helpful at all. You're being difficult again. Why did
you change your position on those beliefs? For example: Do you put less
importance on optimization than you used to? Or have you determined that
the *level* of optimization is not enough to outweigh the disadvantages?

If you feel that optimization is less important in general, then we have
a difference in premises, and we probably won't ever agree. If you
merely feel that the *level* of optimization is insignificant, then
please offer some evidence in that direction.

>> However, when there's a 7% or 35% advantage on one hand, and an
>> RTFM-type mistake on the other hand, the choice is pretty obvious.

> You are *way* overstating your case.  I don't believe for a second
> that there would be a 7% (not to mention 35%) sustainable advantage.

Please provide some evidence for it, then. The evidence in support for
my position is obviously far from bulletproof, but it's all we have to
go on so far.

> And the "RTFM" remark carries no weight at all since every poor
> language design decision could be brushed away with it.

You're offering a false dichotomy here. An "RTFM" response is quite
appropriate in some contexts. For example, if there are two valid
answers to a question, and you choose the wrong answer because you
didn't read the spec, that's an RTFM mistake. That's exactly the case
here. While you may prefer the fixed order, the other alternative is
still a valid language choice, and the spec tells us which one is
standard.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/7/2003 11:59:30 PM
Eli Barzilay wrote:
> Most r2l writers program in l2r languages.  

Ok.

> (And math is always l2r.)

I am not sure about this.  For 12 years, I learned math in Arabic and it 
was, just like everything else, right-to-left.  So, we say

	c + x b + 2^x a = (x)Y

(of course, it would not look as odd as this, since we use the Arabic letters.)

On the other hand, numerals in Arabic are left-to-right.  So, this year 
is written as 2003 rather than 3002.

l2r evaluation in English is "top-down" evaluation.  r2l evaluation in 
Arabic is also "top-down".

Aziz,,,

0
12/8/2003 12:11:33 AM
> "Bradd W. Szonye" <bradd+news@szonye.com> virkkoi:
>> I'd prefer *not* to fix the arg eval order, because it's another step
>> down the road toward sequential, imperative evaluation (which I see
>> as undesirable, for several reasons) and because it would provide
>> little concrete benefit other than protecting some programmers from
>> RTFM-type mistakes.

Lauri Alanko <la@iki.fi> wrote:
> By the way, do you also think that C got it right by letting the
> values of all newly declared local variables be undefined unless
> explicitly initialized?

Yes, based on the language's design goals and other features. It makes
sense in context. You asked the wrong question. Do I think it's a good
idea to permit uninitialized variables at all? No, not in general.
However, fixing that in C would require the ability to declare all
variables upon first use and some way to explicitly say that you *don't*
want initialization in those few contexts where it makes sense. The C
rule makes sense, given the limitations on where and how you can declare
variables (and the limitations of linker technology at the time).

Also, C sets up a nasty trap for newbies by using implicit
initialization in some contexts (file scope) but not others (local
scope). Scheme has a few similar traps when it comes to top-level and
lexical environments.

> After all, initializing all variables to zero or null automatically
> would only protect some programmers from RTFM-type mistakes.

Agreed. Or was I not supposed to agree with this sarcastic remark? Given
the overall language design, the uninitialized variable rule *is* a
valid choice. Therefore, the only way to determine whether it's the
actual rule is to read the spec; it's an RTFM issue.

This would be a much more interesting question for C++, where there's
little need for uninitialized variables. In that case, the uninitialized
variable rule is less valid, and it's arguably *not* an RTFM issue, but
rather a case of weak language design.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/8/2003 12:18:40 AM
Abdulaziz Ghuloum <aghuloum@cs.indiana.edu> writes:

> On the other hand, numerals in Arabic are left-to-right.  So, this
> year is written as 2003 rather than 3002.

Actually, it makes more sense to put the least significant digit first.

I've always wondered about that.



-- 
~jrm
0
12/8/2003 1:04:09 AM
Lauri Alanko wrote:
> "Bradd W. Szonye" <bradd+news@szonye.com> virkkoi:
> 
>>I'd prefer *not* to fix the arg eval order, because it's another step
>>down the road toward sequential, imperative evaluation (which I see as
>>undesirable, for several reasons) and because it would provide little
>>concrete benefit other than protecting some programmers from RTFM-type
>>mistakes.
> 
> 
> By the way, do you also think that C got it right by letting the values
> of all newly declared local variables be undefined unless explicitly
> initialized? After all, initializing all variables to zero or null
> automatically would only protect some programmers from RTFM-type
> mistakes.
> 
Scheme has this as well in the specification of letrec, which isn't 
allowed to have right-hand-side expressions which refer directly to 
other letrec bound variables.  Doing so is 'undefined' and is quite 
similar to the undefined contents of uninitialized variables in C.
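
For example (a minimal sketch, not text from the report):

;; Fine: the inits only *capture* the other variable inside lambdas.
(letrec ((my-even? (lambda (n) (if (zero? n) #t (my-odd? (- n 1)))))
         (my-odd?  (lambda (n) (if (zero? n) #f (my-even? (- n 1))))))
  (my-even? 10))                        ; => #t

;; Undefined: the second init *reads the value* of A before the letrec
;; bindings are established, much like reading an uninitialized C local.
(letrec ((a 1)
         (b (+ a 1)))
  b)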

But this is quite different than OoE, which does not affect values of 
expressions in the absence of side effects.

	Scott

0
scgmille (240)
12/8/2003 2:25:39 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> Now, could you please provide some background information (or a link
> to it) that's actually current? You claimed that the information was
> easy to find, but what I *actually* found was inaccurate and
> misleading.

One word: google.

> Please, this isn't helpful at all. You're being difficult again. Why did
> you change your position on those beliefs? For example: Do you put less
> importance on optimization than you used to? Or have you determined that
> the *level* of optimization is not enough to outweigh the disadvantages?

You could put it in those terms.  I have come to appreciate thoroughly
defined languages.  AFAIAC, performance, while of course not
unimportant, has to play second fiddle to proper language design.
Fortunately, enough other people (whose judgement I have reason to
trust) think likewise.

> If you feel that optimization is less important in general, then we have
> a difference in premises, and we probably won't ever agree.

It has become pretty obvious to me that we won't agree on this.  But
that is no reason for me to change my mind.

> If you merely feel that the *level* of optimization is
> insignificant, then please offer some evidence in that direction.

Well, how about first seeing some *real* evidence that the level is
significant?  I am using a language with fixed l2r order of evaluation
daily.  While the implementation that I am using has certain
shortcomings in the performance department, this is definitely not due
to a fixed order of evaluation.  Other implementations of the same
language are proof of that.

> > And the "RTFM" remark carries no weight at all since every poor
> > language design decision could be brushed away with it.
> 
> You're offering a false dichotomy here. An "RTFM" response is quite
> appropriate in some contexts. For example, if there are two valid
> answers to a question, and you choose the wrong answer because you
> didn't read the spec, that's an RTFM mistake. That's exactly the case
> here. While you may prefer the fixed order, the other alternative is
> still a valid language choice, and the spec tells us which one is
> standard.

But I am arguing against the standard here, so "RTFM" won't cut it.
What you are effectively arguing is that there is no such thing as a
bad language design that makes programming more error-prone.  Because
programmer errors are *always*, by definition, of the RTFM variety.

But let's give it a rest now.  This is leading nowhere.

Matthias
0
find19 (1244)
12/8/2003 3:21:51 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> Agreed. Based on my experience, fixing the arg eval order would have
> this result. It would work fine for the cases where the programmer
> intentionally relied on left->right evaluation, but it would
> commonly result in "mostly benign" bugs in the accidental cases.
> Also IME, "mostly benign" bugs are much worse than "fast-fail" bugs.
> Therefore, I feel that fixed eval order is likely to cause even more
> damage (by masking subtle design flaws) than it is to help.

Can you describe a scenario where fixing the evaluation order leads to
a bug that wouldn't have happened otherwise?  (Or maybe just one where
it leads to a bug?)

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
0
eli666 (555)
12/8/2003 4:20:51 AM
Abdulaziz Ghuloum <aghuloum@cs.indiana.edu> writes:

> Eli Barzilay wrote:
> > (And math is always l2r.)
> 
> I am not sure about this.  For 12 years, I learned math in Arabic
> and it was, just like everything else, right-to-left.  So, we say
> 
> 
> 	c + x b + 2^x a = (x)Y
> 
> (of course, it would not look as odd as this as we use the arabic
> letters.)

I think, but I'm really not sure since it has been loooong ago, that
at some point we had some attempts at "hebrew-ified" algebraic
expressions, but it certainly didn't last long.  (Of course, mixing
written language and math can lead to some problems, for example, "let
x = 12 in..." and "let x equal 12 in..." are written differently.)
Also, I didn't know that for sure, but I did think that in Arabic
there are r2l formulas -- but in any case, I was referring to:

> On the other hand, numerals in Arabic are left-to-right.  So, this year is
> written as 2003 rather than 3002.

and to the fact that I'm pretty sure that any attempts at "Arabic
Programming Languages" are as bad as a "Hebrew Programming Language"
(which I did see an example of, and it was as bad as you can guess).


> l2r evaluation in English is "top-down" evaluation.  r2l evaluation
> in Arabic is also "top-down".

Yeah -- I think that it's not the "left" and the "right" that matter,
it's whatever logical direction the text is (which means that if I
ever program in a r2l language, I'd expect a r2l evaluation since
that's the direction characters are read).

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
0
eli666 (555)
12/8/2003 4:45:16 AM
At Sun, 07 Dec 2003 19:11:33 -0500, Abdulaziz Ghuloum wrote:
> 
> On the other hand, numerals in Arabic are left-to-right.  So, this year 
> is written as 2003 rather than 3002.

They're not left-to-right, they're just little-endian :)

-- 
Alex

0
foof (110)
12/8/2003 4:49:53 AM
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> Agreed. Based on my experience, fixing the arg eval order would have
>> this result. It would work fine for the cases where the programmer
>> intentionally relied on left->right evaluation, but it would commonly
>> result in "mostly benign" bugs in the accidental cases. Also IME,
>> "mostly benign" bugs are much worse than "fast-fail" bugs. Therefore,
>> I feel that fixed eval order is likely to cause even more damage (by
>> masking subtle design flaws) than it is to help.

Eli Barzilay <eli@barzilay.org> wrote:
> Can you describe a scenario where fixing the evaluation order leads to
> a bug that wouldn't have happened otherwise?  (Or maybe just one where
> it leads to a bug?)

Why would I want to? I don't claim that fixed AEO *causes* bugs, just
that it masks them, which is just as bad IME.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/8/2003 5:53:09 AM
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> Now, could you please provide some background information (or a link
>> to it) that's actually current? You claimed that the information was
>> easy to find, but what I *actually* found was inaccurate and
>> misleading.

Matthias Blume wrote:
> One word: google.

That's where I got the other resume, and according to you, the
information I got was incorrect. Despite your claim, it was *not* easy
to learn about your background, and Googling for it is not reliable.
You're still being difficult. Why?

>> Please, this isn't helpful at all. You're being difficult again. Why
>> did you change your position on those beliefs? For example: Do you
>> put less importance on optimization than you used to? Or have you
>> determined that the *level* of optimization is not enough to outweigh
>> the disadvantages?

> You could put it in those terms.  I have come to appreciate thoroughly
> defined languages.  AFAIAC, performance, while of course not
> unimportant, has to play second fiddle to proper language design.

I agree with the latter claim. However, you've buried several
assumptions in those two sentences: "Proper language design" requires
that the language be "thoroughly defined," the language isn't
"thoroughly defined" unless all programs written in the language are
also "thoroughly defined," and the programs aren't "thoroughly defined"
if they rely on any behavior that's not specified by the language
standard.

I keep seeing assertions in this thread that unspecified behavior is
undesirable, but with nothing much to back up that bare assertion. It's
similar to the claims that Scheme is a "sequential language." No, Scheme
*isn't* entirely sequential, and attempts to claim otherwise are
bootstrapping.

True, the language standard doesn't specify all behavior, to allow room
for implementation extensions and optimizations. The arg eval order
isn't specified by R5RS, but it may be specified by the Scheme
implementation, and it's obviously possible to write programs with
invariant output even in the presence of the unspecified behavior. In
fact, it's trivially easy to specify any evaluation order you want, if
it's important to the program.

You simply haven't established that it's not "proper language design,"
or that it isn't "thoroughly defined," or that your preferred solution
is an improvement on the current situation.

>> If you feel that optimization is less important in general, then we
>> have a difference in premises, and we probably won't ever agree.

> It has become pretty obvious to me that we won't agree on this.  But
> that is no reason for me to change my mind.

I understand that. I was actually trying to help you change *my* mind,
by explaining your premises and evidence more carefully. But obviously,
you missed that. Again, why are you being so difficult and
condescending? You seem to be going out of your way *not* to convince
me that you're right.

>> If you merely feel that the *level* of optimization is insignificant,
>> then please offer some evidence in that direction.

> Well, how about first seeing some *real* evidence that the level is
> significant?

How about seeing some *real* evidence that your preferred solution
actually offers concrete benefits, rather than just masking bugs? Your
whole argument, especially the "thoroughly defined" bit, hinges on
usability and cost of maintenance. Do you have any evidence whatsoever
that your approach provides real benefits in those areas? So far, I've
only seen anecdotal evidence, and some of it (e.g., David Rush's)
actually *contradicts* your conclusions.

>> You're offering a false dichotomy here. An "RTFM" response is quite
>> appropriate in some contexts. For example, if there are two valid
>> answers to a question, and you choose the wrong answer because you
>> didn't read the spec, that's an RTFM mistake. That's exactly the case
>> here. While you may prefer the fixed order, the other alternative is
>> still a valid language choice, and the spec tells us which one is
>> standard.

> But I am arguing against the standard here, so "RTFM" won't cut it.

You're missing my point! Either choice is *valid*, so the only way for a
programmer to figure out which is true is to RTFM.

> What you are effectively arguing is that there is no such thing as a
> bad language design that makes programming more error-prone.

No, I'm not. I'm arguing that your proposed solution doesn't really make
programming less error-prone. It sometimes eliminates *trivial* errors,
and it sometimes masks bugs. It also conflates two different concepts in
a way that I find undesirable, and it precludes some optimizations. In my
opinion, it has significant costs but very little benefit. Even
*without* the performance hit, I would still argue against it!

> Because programmer errors are *always*, by definition, of the RTFM
> variety.

No, some of them are also design errors. Detailed design is part of
programming.

> But let's give it a rest now.  This is leading nowhere.

I've been *trying* to take the discussion somewhere productive, even
been trying to help you express your position in a way that I might find
more compelling. It's going nowhere because you're going out of your way
*not* to convince me.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/8/2003 6:18:00 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> > "Bradd W. Szonye" <bradd+news@szonye.com> writes:
> >> Agreed. Based on my experience, fixing the arg eval order would have
> >> this result. It would work fine for the cases where the programmer
> >> intentionally relied on left->right evaluation, but it would commonly
> >> result in "mostly benign" bugs in the accidental cases. Also IME,
> >> "mostly benign" bugs are much worse than "fast-fail" bugs. Therefore,
> >> I feel that fixed eval order is likely to cause even more damage (by
> >> masking subtle design flaws) than it is to help.
> 
> Eli Barzilay <eli@barzilay.org> wrote:
> > Can you describe a scenario where fixing the evaluation order leads to
> > a bug that wouldn't have happened otherwise?  (Or maybe just one where
> > it leads to a bug?)
> 
> Why would I want to? I don't claim that fixed AEO *causes* bugs,
> just that it masks them, which is just as bad IME.

OK, rephrase.  Can you describe a scenario where fixing the evaluation
order leads to an otherwise masked bug?

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
0
eli666 (555)
12/8/2003 6:37:54 AM
On Sun, 07 Dec 2003 23:46:46 +0000, Bradd W. Szonye wrote:

> I personally think it's a very, very bad idea to say,
> "You can rely on the eval order in this construct, but you can't count
> on it in this other construct that looks exactly like the other one."
> That's bad, bad, error-prone language design!

You could as well say that it's bad that in function calls you can rely on
the fact that arguments are evaluated at all, but you can't count on it in
macros, which look exactly the same.

If it's OK for macros to change whether arguments are evaluated, it's OK
to change the evaluation order as well even if it was fixed for functions.
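
For example (a sketch with a made-up macro): when-debug below looks
exactly like a call, yet its "argument" may never be evaluated at all.

(define debug? #f)

(define-syntax when-debug
  (syntax-rules ()
    ((_ expr) (if debug? expr #f))))

(when-debug (display "never printed"))       ; expr is not evaluated
(+ 1 (begin (display "always printed") 2))   ; a real call: it always is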

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/

0
qrczak (1266)
12/8/2003 7:40:35 AM
Lauri Alanko <la@iki.fi> writes:
> By the way, do you also think that C got it right by letting the values
> of all newly declared local variables be undefined unless explicitly
> initialized?

Actually, yes. But that's because I view C as a very-high-level assembly
language.

david rush
-- 
In no other country in the world is the love of property keener or
more alert than in the United States, and nowhere else does the
majority display less inclination toward doctrines which in any way
threaten the way property is owned.
	-- Democracy in America (Alexis de Tocqueville)
0
drush (122)
12/8/2003 12:00:00 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> That's where I got the other resume, and according to you, the
> information I got was incorrect. Despite your claim, it was *not* easy
> to learn about your background, and Googling for it is not reliable.
> You're still being difficult. Why?

How hard can it be to follow the first hit that google returns you
when you type in my name?  (I took down the Princeton site entirely to
avoid further confusion, but even before that the page in question
contained a prominent note at the top which pointed to the correct
page.)

> I agree with the latter claim. However, you've buried several
> assumptions in those two sentences: "Proper language design" requires
> that the language be "thoroughly defined," the language isn't
> "thoroughly defined" unless all programs written in the language are
> also "thoroughly defined," and the programs aren't "thoroughly defined"
> if they rely on any behavior that's not specified by the language
> standard.

Yes, I have come to the /belief/ that all of the above is true.

> No, Scheme *isn't* entirely sequential, and attempts to claim
> otherwise are bootstrapping.

Yes, it is.  The report is very specific in saying that evaluation has
to be consistent with a sequential ordering; it just happens not to
say which one, and that is the problem.  Aside from some optimization
opportunities of minor importance nothing is gained from being so
vague.  If it is so desirable to program under the assumption that OoE
be not fixed, then /one can do that/ even under fixed OoE without harm.

> True, the language standard doesn't specify all behavior, to allow room
> for implementation extensions and optimizations.

I don't think that OoE was left unspecified to allow for either
extensions or optimizations.  Those are rationalizations after the
fact.  The decision was a political one.

> [ ... ] and it's obviously possible to write programs with
> invariant output even in the presence of the unspecified behavior.

Yes, one can trivially work around all the shortcomings.  That does
not make those shortcomings go away.

> You simply haven't established that it's not "proper language design,"
> or that it isn't "thoroughly defined," or that your preferred solution
> is an improvement on the current situation.

"Proper language design" is not a very well-defined thing.  My current
working definition includes that each program has a meaning that is
independent of the particular implementation that is being used.
(Now, in corner cases we can argue whether insisting on this in every
case is always desirable, but at least the major building blocks of
the language ought to strongly support this idea.)

> >> [...] and we probably won't ever agree.

> [...] I was actually trying to help you change *my* mind,

You are contradicting yourself.

> How about seeing some *real* evidence that your preferred solution
> actually offers concrete benefits, rather than just masking bugs?

I am interested in reasoning about properties of programs (both by
machine and by human).  It should be pretty clear that avoiding the
extra input from an infinite source of random bits in the language's
semantics (and, in particular, a dependence of the semantics of the
most fundamental and ubiquitous operation, namely procedure
application, on these random bits) helps in this regard.

> > But I am arguing against the standard here, so "RTFM" won't cut it.
> 
> You're missing my point! Either choice is *valid*, so the only way for a
> programmer to figure out which is true is to RTFM.

I am not missing your point.  Yes, when I program in Scheme or in C I
have to be aware that OoE is unspecified.  If my program has a bug
because I forgot, then it is my fault.  If I never knew, then it is my
fault too, because I could have RTFM.  So, yes, I understand and agree
with this particular point of yours.  Unfortunately, it does not have
much to do with *my* point.

> No, I'm not. I'm arguing that your proposed solution doesn't really make
> programming less error-prone.

It does in my experience.

> It sometimes eliminates *trivial* errors,

... and sometimes not so trivial ones.  I have seen that happen.

> and it sometimes masks bugs.

It *never* masks _bugs_.  Relying on OofE in a language with fixed OoE
is not a bug.

> > Because programmer errors are *always*, by definition, of the RTFM
> > variety.
> 
> No, some of them are also design errors. Detailed design is part of
> programming.

There is no such thing as "incorrect" design.  /Poor/ design can lead to
incorrect programs down the road, but poor design by itself is not
an "error".
0
find19 (1244)
12/8/2003 1:12:48 PM
Marcin 'Qrczak' Kowalczyk <qrczak@knm.org.pl> writes:

> On Sun, 07 Dec 2003 23:46:46 +0000, Bradd W. Szonye wrote:
> 
> > I personally think it's a very, very bad idea to say,
> > "You can rely on the eval order in this construct, but you can't count
> > on it in this other construct that looks exactly like the other one."
> > That's bad, bad, error-prone language design!
> 
> You could as well say that it's bad that in function calls you can rely on
> the fact that arguments are evaluated at all, but you can't count on it in
> macros, which look exactly the same.

It is interesting to note that Bradd's above comment undermines the
whole rant in favor of leaving OoE unspecified.  To wit: LET and
LET* look exactly like one another, so why should one of them behave
differently as far as OofE is concerned?

> If it's OK for macros to change whether arguments are evaluated, it's OK
> to change the evaluation order as well even if it was fixed for functions.

Exactly.  In fact, with macros the rule is exceedingly simple: The
order of evaluation is completely determined by the semantics of the
output of the macro transformer.
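
For example (a small sketch): R5RS defines LET as shorthand for an
application and LET* as shorthand for nested LETs, so the expansion
itself answers the question.

(define (f) (display "f ") 1)
(define (g) (display "g ") 2)

(let  ((a (f)) (b (g))) (+ a b))  ; = ((lambda (a b) (+ a b)) (f) (g)),
                                  ;   so (f)/(g) order is unspecified
(let* ((a (f)) (b (g))) (+ a b))  ; nested LETs: always (f), then (g)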
0
find19 (1244)
12/8/2003 1:16:50 PM
Matthias Blume wrote:

> 
> It *never* masks _bugs_.  Relying on OofE in a language with fixed OoE
> is not a bug.
> 

Of course.  That argument is irrelevant to the discussion, though.  The 
real issue is that specifying a fixed OoE removes information from the 
program that was previously there and takes a step towards imperative 
languages.  Currently in Scheme, programs inherently contain information 
about sequences of s-expressions which can be evaluated in any order 
(function application), and sequences which must be evaluated in a fixed
order (begin, let*).

Fixing an OoE removes that information, which removes some of the 
elegance of the functional programming style and of course removes 
information from the view of the compiler.  The math analogy is again a 
good one, the expression:

a*x + b*y + c*z

has a set of rules for evaluation.  Multiplication takes precedence over 
addition, but beyond that the meaning of the expression does not change 
if I mentally evaluate it from any direction.  I view function 
application the same way in Scheme.  Imagine b*y was in fact a much more 
complicaed expression.  Mentally I may choose to evaluate it first 
because it would be easier to remember the rest of the computation once 
I remove the complicated bit.  Because it doesn't matter to the meaning 
which I do first, this is possible.

Similarly, a compiler may choose to evaluate expressions in an unusual 
order to optimially use architecture registers and to keep relevant data 
in the CPU cache (a rough equivalent to my memory).  This can have a 
substantial effect on efficiency.  With a fixed OoE, the compiler must 
first prove that the sub-expressions do not depend on each other through 
side-effects before realizing any gains.

Again, it has already been shown that the language specification need 
not include any sort of 'random number stream' to leave OoE unspecified. 
It's just the Right Thing for a functional language to do.  Scheme is
semi-functional, and we already have sequencing operators for the 
non-functional corners of the language.

	Scott

0
scgmille (240)
12/8/2003 2:45:23 PM
Scott G. Miller wrote:
> Matthias Blume wrote:
> 
>>
>> It *never* masks _bugs_.  Relying on OofE in a language with fixed OoE
>> is not a bug.
>>
> 
> Of course.  That argument is irrelevant to the discussion, though.

Not the discussion Bradd W. Szonye is having.

>  The 
> real issue is that specifying a fixed OoE removes information from the 
> program that was previously there and takes a step towards imperative 
> languages.  [...]

This is a strange argument. OofE is only relevant to the degree that 
Scheme is imperative. In other words, the "information" whose loss you 
lament only exists to the degree that undefined OofE leaves the 
semantics of a program unspecified. That is, the more information there 
is, the less we know.

-thant

0
thant (332)
12/8/2003 4:07:14 PM
"Scott G. Miller" <scgmille@freenetproject.org> writes:

> The real issue is that specifying a fixed OoE removes information from
> the program that was previously there

Which information?  That the programmer thought that order does not
matter at this point?  The problem is that this "information" can too
easily be wrong, and there is no recourse if that is the case.

> and takes a step towards imperative languages.

Scheme *is* an imperative language.  Get used to it.

> Fixing an OoE removes that information, which removes some of the
> elegance of the functional programming style and of course removes
> information from the view of the compiler.  The math analogy is again
> a good one, the expression:
> 
> a*x + b*y + c*z
> 
> has a set of rules for evaluation.  Multiplication takes precedence
> over addition, but beyond that the meaning of the expression does not
> change if I mentally evaluate it from any direction.

This statement is false!

> Similarly, a compiler may choose to evaluate expressions in an unusual
> order to optimially use architecture registers and to keep relevant
> data in the CPU cache (a rough equivalent to my memory).

This is fine unless doing so can change the outcome of the program.
Back to square one.

>  This can
> have a substantial effect on efficiency.  With a fixed OoE, the
> compiler must first prove that the sub-expressions do not depend on
> each other through side-effects before realizing any gains.

Yes.  And that is a good thing.

> Again, it has already been shown that the language specification need
> not include any sort of 'random number stream' to leave OoE
> unspecified.

Where?
0
find19 (1244)
12/8/2003 4:41:42 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> I don't claim that fixed AEO *causes* bugs, just that it masks them,
> which is just as bad IME.

What is the difference between a bug that is "masked" and one that is
not there?
0
find19 (1244)
12/8/2003 4:44:05 PM

"Bradd W. Szonye" wrote:
> 
> That's where I got the other resume, and according to you, the
> information I got was incorrect. Despite your claim, it was *not* easy
> to learn about your background, and Googling for it is not reliable.
> You're still being difficult. Why?

Who knows.  Here's the info: http://www.tti-c.org/blume.shtml

David
0
feuer (188)
12/8/2003 5:01:55 PM
> Scott G. Miller wrote:
>> The real issue is that specifying a fixed OoE removes information
>> from the program that was previously there and takes a step towards
>> imperative languages.  [...]

Thant Tessman <thant@acm.org> wrote:
> This is a strange argument. OofE is only relevant to the degree that
> Scheme is imperative. In other words, the "information" whose loss you
> lament only exists to the degree that undefined OofE leaves the
> semantics of a program unspecified.

Correct. By using a procedure invocation instead of a sequencing
combination like BEGIN, the programmer indicates that the order of
evaluation is unimportant. That information is important to maintainers
and compilers.

> That is, the more information there is, the less we know.

Incorrect. By using the right construct, the programmer indicates what
is or isn't important. That is useful information.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/8/2003 5:02:31 PM
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> I don't claim that fixed AEO *causes* bugs, just that it masks them,
>> which is just as bad IME.

Matthias Blume <find@my.address.elsewhere> wrote:
> What is the difference between a bug that is "masked" and one that is
> not there?

A masked bug may be entirely benign, or only mostly benign. The latter
case is actually *worse* than having an obvious bug.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/8/2003 5:03:27 PM
Feuer <feuer@his.com> writes:

> "Bradd W. Szonye" wrote:
> > 
> > That's where I got the other resume, and according to you, the
> > information I got was incorrect. Despite your claim, it was *not* easy
> > to learn about your background, and Googling for it is not reliable.
> > You're still being difficult. Why?
> 
> Who knows.  Here's the info: http://www.tti-c.org/blume.shtml

David, I would appreciate if you wouldn't do such a thing again.

Thank you.

(If I had wanted the URL posted here, I would have done it myself.)

Matthias
0
find19 (1244)
12/8/2003 5:04:50 PM
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> That's where I got the other resume, and according to you, the
>> information I got was incorrect. Despite your claim, it was *not*
>> easy to learn about your background, and Googling for it is not
>> reliable. You're still being difficult. Why?

Matthias Blume <find@my.address.elsewhere> wrote:
> How hard can it be to follow the first hit that google returns you
> when you type in my name?

I did use the first hit, from "Matthias Blume resume." I added the last
search term so that I'd specifically get information about your
professional or academic background.

>> You simply haven't established that it's not "proper language
>> design," or that it isn't "thoroughly defined," or that your
>> preferred solution is an improvement on the current situation.

> "Proper language design" is not a very well-defined thing.  My current
> working definition includes that each program has a meaning that is
> independent of the particular implementation that is being used.

That's a reasonable definition, but I'd prefer C99's definition: A
strictly conforming application has the same output on all conforming
implementations. However, that definition is tautological in the context
of this discussion, so it isn't helpful.

>>>> [...] and we probably won't ever agree.

>> [...] I was actually trying to help you change *my* mind,

> You are contradicting yourself.

That's not a contradiction. I don't think it's likely that we'll agree,
but that's partly because you seem to be going out of your way to
withhold information that might change my mind. It's also because you
keep saying things that look ridiculous to me, like:

> There is no such thing as "incorrect" design.  /Poor/ design can lead
> to incorrect programs down the road, but poor design by itself is not
> an "error".

This kind of statement just doesn't hold up in an engineering
environment. It might make sense in an academic environment (although
truthfully, I can't see how). This is why I suggested that our different
backgrounds may be leading us to talk past each other.

Also:

> It *never* masks _bugs_.  Relying on OofE in a language with fixed OoE
> is not a bug.

We've heard anecdotal evidence from David Rush that this is false, that
removing dependence on OOE was superior to setting a fixed OOE in his
experience.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/8/2003 5:11:22 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> > "Bradd W. Szonye" <bradd+news@szonye.com> writes:
> >> I don't claim that fixed AEO *causes* bugs, just that it masks them,
> >> which is just as bad IME.
> 
> Matthias Blume <find@my.address.elsewhere> wrote:
> > What is the difference between a bug that is "masked" and one that is
> > not there?
> 
> A masked bug may be entirely benign, or only mostly benign. The latter
> case is actually *worse* than having an obvious bug.

You have not answered the question.
0
find19 (1244)
12/8/2003 5:11:31 PM
Matthias Blume wrote:
> "Scott G. Miller" <scgmille@freenetproject.org> writes:
> 
> 
>>The real issue is that specifying a fixed OoE removes information from
>>the program that was previously there
> 
> 
> Which information?  That the programmer thought that order does not
> matter at this point?  The problem is that this "information" can too
> easily be wrong, and there is no recourse if that is the case.
> 
> 
>>and takes a step towards imperative languages.
> 
> 
> Scheme *is* an imperative language.  Get used to it.
> 
> 
>>Fixing an OoE removes that information, which removes some of the
>>elegance of the functional programming style and of course removes
>>information from the view of the compiler.  The math analogy is again
>>a good one, the expression:
>>
>>a*x + b*y + c*z
>>
>>has a set of rules for evaluation.  Multiplication takes precedence
>>over addition, but beyond that the meaning of the expression does not
>>change if I mentally evaluate it from any direction.
> 
> 
> This statement is false!
> 
How so?

> 
>>Similarly, a compiler may choose to evaluate expressions in an unusual
>>order to optimially use architecture registers and to keep relevant
>>data in the CPU cache (a rough equivalent to my memory).
> 
> 
> This is fine unless doing so can change the outcome of the program.
> Back to square one.

If the program is functional as the programmer wrote it, it cannot 
change the outcome of the program.  Otherwise it is a bug.

>> This can
>>have a substantial effect on efficiency.  With a fixed OoE, the
>>compiler must first prove that the sub-expressions do not depend on
>>each other through side-effects before realizing any gains.

> Yes.  And that is a good thing.
I'm sorry?  The compiler does not need to prove that the sub-expressions 
don't depend on each other if OoE is unspecified.  They must not or it 
is a bug.

>>Again, it has already been shown that the language specification need
>>not include any sort of 'random number stream' to leave OoE
>>unspecified.
> 
> 
> Where?

Previously in this thread a link was provided on how you can formally 
define the language even with unspecified OoE.  Google is your friend.

	Scott

0
scgmille (240)
12/8/2003 5:14:01 PM

Matthias Blume wrote:
> 
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
> > and it sometimes masks bugs.
> 
> It *never* masks _bugs_.  Relying on OofE in a language with fixed OoE
> is not a bug.

It is perhaps a mind bug.  A program that works, but whose operation
does not match the programmer's mental model, and which is therefore
fragile.

David
0
feuer (188)
12/8/2003 5:14:29 PM
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> Why would I want to? I don't claim that fixed AEO *causes* bugs, just
>> that it masks them, which is just as bad IME.

Eli Barzilay <eli@barzilay.org> wrote:
> OK, rephrase.  Can you describe a scenario where fixing the evaluation
> order leads to an otherwise masked bug?

I'll need to think about it. I personally haven't seen an example of
this kind of problem in a *long* time, at least not in a production
environment. That's one of the reasons why I don't accept Matthias B's
argument. I suspect that he works in an environment where formal
correctness proofs are more important, or where there are a lot of
students who haven't internalized the rule. You might want to ask David
Rush; he seems to have more experience with this issue than I do.

I realize that the unspecified order does make formal proofs more
difficult, because the possible permutations increase the workload. IME,
that's not an issue, because we don't often rely on formal proofs;
instead, we use abstraction barriers and informal proofs. Since I don't
work on life-critical or academic applications, the cost/benefit ratio
of formal proofs is *already* too high, and therefore I don't care much
about issues that make the cost of formal proofs higher.

I think a better "compromise" solution would be to define library syntax
that provides the fixed order for applications that need it, or to
define a "Formal Scheme" that implementations can optionally conform to.
However, I think it would be a very *bad* idea to use the "Formal
Scheme" definition for all applications, because the costs are high and
the benefits questionable for most users.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/8/2003 5:17:52 PM
> Bradd W. Szonye wrote:
>> I personally think it's a very, very bad idea to say, "You can rely
>> on the eval order in this construct, but you can't count on it in
>> this other construct that looks exactly like the other one." That's
>> bad, bad, error-prone language design!

Marcin 'Qrczak' Kowalczyk <qrczak@knm.org.pl> wrote:
> You could as well say that it's bad that in function calls you can
> rely on the fact that arguments are evaluated at all, but you can't
> count on it in macros which looks exactly the same.

Sure, that's true too. There are degrees of "worse."

> If it's OK for macros to change whether arguments are evaluated, it's
> OK to change the evaluation order as well even if it was fixed for
> functions.

But this isn't true. Just because a language has one flaw doesn't mean
that you should open up the whole can of worms. "Bad" is not the same as
"worst possible."
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/8/2003 5:19:10 PM
"Scott G. Miller" <scgmille@freenetproject.org> writes:

> >>a*x + b*y + c*z
> >>
> >>has a set of rules for evaluation.  Multiplication takes precedence
> >>over addition, but beyond that the meaning of the expression does not
> >>change if I mentally evaluate it from any direction.
> > This statement is false!
> > 
> How so?

Let a*x = 1e50
Let b*y = -1e50
Let c*z = 1.0

To demonstrate, I use SML (because this way I can be sure in which
order things get evaluated):

Standard ML of New Jersey v110.44 [FLINT v1.5], November 6, 2003
- 1e50 - 1e50 + 1.0;
val it = 1.0 : real
- 1.0 - 1e50 + 1e50;
val it = 0.0 : real
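
The same thing can be reproduced in Scheme itself, assuming IEEE
double-precision flonums; here the grouping spells out the intended
order explicitly:

  (+ (- 1e50 1e50) 1.0)   ; => 1.0
  (+ (- 1.0 1e50) 1e50)   ; => 0.0  (the 1.0 is absorbed when 1.0 - 1e50 rounds)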

> >>Similarly, a compiler may choose to evaluate expressions in an unusual
> >>order to optimally use architecture registers and to keep relevant
> >>data in the CPU cache (a rough equivalent to my memory).
> > This is fine unless doing so can change the outcome of the program.
> > Back to square one.
> 
> If the program is functional as the programmer wrote it, it cannot
> change the outcome of the program.  Otherwise it is a bug.

I know.  But to me that is the problem:  It should not be a bug.

> >>Again, it has already been shown that the language specification need
> >>not include any sort of 'random number stream' to leave OoE
> >>unspecified.
> > Where?
> 
> Previously in this thread a link was provided on how you can formally
> define the language even with unspecified OoE.  Google is your friend.

Unfortunately, the solution the link points to is wrong.  Google is
your friend.  Of course, with great difficulty it could be made
right, but the end result will be equivalent to the "random number
stream" solution.

Matthias
0
find19 (1244)
12/8/2003 5:21:18 PM
Bradd W. Szonye wrote:
>>Scott G. Miller wrote:
>>
>>>The real issue is that specifying a fixed OoE removes information
>>>from the program that was previously there and takes a step towards
>>>imperative languages.  [...]
> 
> 
> Thant Tessman <thant@acm.org> wrote:
> 
>>This is a strange argument. OofE is only relevant to the degree that
>>Scheme is imperative. In other words, the "information" whose loss you
>>lament only exists to the degree that undefined OofE leaves the
>>semantics of a program unspecified.
> 
> 
> Correct. By using a procedure invocation instead of a sequencing
> combination like BEGIN, the programmer indicates that the order of
> evaluation is unimportant. That information is important to maintainers
> and compilers.

A well-specified language is far more valuable to maintainers than an 
attempt by a maintainer to divine the intention of the programmer by 
their choice of construct which may or may not have 
been...er...intentional. And the value to compilers is debatable (as 
made obvious by this thread). More than that, what is obvious is that 
Scheme has already embodied many design decisions where much bigger 
performance penalties were considered less important than some other 
aspect of the language. So the performance argument smells disingenuous.

> 
> 
>>That is, the more information there is, the less we know.
> 
> 
> Incorrect. By using the right construct, the programmer indicates what
> is or isn't important. That is useful information.

My point is that this argument is circular. It's not useful information 
if it's not relevant. And it's not relevant unless OofE matters, in 
which case its value as information lies solely in the fact that it's 
potentially the source of erroneous program behavior.

-thant


0
thant (332)
12/8/2003 5:28:21 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> > It *never* masks _bugs_.  Relying on OofE in a language with fixed OoE
> > is not a bug.
> 
> We've heard anecdotal evidence from David Rush that this is false, that
> removing dependence on OOE was superior to setting a fixed OOE in his
> experience.

It cannot be false, David Rush's testimony notwithstanding.  How can
relying on something that is guaranteed ever be a bug in and of
itself?

I know that programs improve quite often when one removes certain
imperative aspects -- which has the side effect (no pun intended) of
making the program less dependent on evaluation order.  In fact, I am
a great fan of the pursuit of such program design.  But that does not
mean that one should throw out the baby with the bath water and leave
evaluation order unspecified.  The benefits David refers to can be
realized regardless of whether or not the language is vague on the
issue.
0
find19 (1244)
12/8/2003 5:28:23 PM

Thant Tessman wrote:

> > Incorrect. By using the right construct, the programmer indicates what
> > is or isn't important. That is useful information.
> 
> My point is that this argument is circular. It's not useful information
> if it's not relevant. And it's not relevant unless OofE matters, in
> which case its value as information lies solely in the fact that it's
> potentially the source of erroneous program behavior.

Consider it a program annotation, telling the compiler that it doesn't
matter.  It may be wrong, but if it's right it could be useful.

David
0
feuer (188)
12/8/2003 5:43:50 PM
"Bradd W. Szonye" wrote:
> 
> > Scott G. Miller wrote:
> >> The real issue is that specifying a fixed OoE removes information
> >> from the program that was previously there and takes a step towards
> >> imperative languages.  [...]

<clip>

> By using a procedure invocation instead of a sequencing
> combination like BEGIN, the programmer indicates that the order of
> evaluation is unimportant. That information is important to maintainers
> and compilers.
 
<clip>

> By using the right construct, the programmer indicates what
> is or isn't important. That is useful information.

would you advocate the use of l2rlambda or r2llambda - variant forms 
which create procedures that evaluate all arguments left to right or 
right to left - in *addition to* lambda, or *instead of* lambda?

A construct like l2rlambda could be supported by a scheme, allowing
the programmer to specify l2r evaluation where it was important.  In 
fact, it would be a useful debugging thing, because you could just 
use a define-syntax to interpret all lambdas as l2rlambdas for invariant
behavior, or use define-syntax to send evaluation the other direction
to make sure the results were the same.

Of course this requires a model where evaluation of arguments is 
under the control of the procedure being called, or at least follows
known conventions depending on a classification property of the 
procedure being called, and this requires evaluating the procedure
first, which may mean some loss of the optimizability you're trying
to leave in with the regular lambda.

				Bear
0
bear (1219)
12/8/2003 5:45:12 PM
Feuer wrote:
> 
> Thant Tessman wrote:
> 
> 
>>>Incorrect. By using the right construct, the programmer indicates what
>>>is or isn't important. That is useful information.
>>
>>My point is that this argument is circular. It's not useful information
>>if it's not relevant. And it's not relevant unless OofE matters, in
>>which case its value as information lies solely in the fact that it's
>>potentially the source of erroneous program behavior.
> 
> 
> Consider it a program annotation, telling the compiler that it doesn't
> matter.  It may be wrong, but if it's right it could be useful.

Which tells me that if indeed there is a significant performance 
benefit, it should be available via a compiler option which is off by 
default.

-thant

0
thant (332)
12/8/2003 5:59:53 PM
Matthias Blume <find@my.address.elsewhere> writes:

> You have not answered the question.

the difference between a masked bug and one that is not there is a
matter of time and circumstance.  this thread is about capturing the
intent of a programmer (who is most likely human and thus almost beyond
comprehension), which means any attempt to characterize the particular
time and circumstance required to unmask this kind of bug is itself
another masked bug.

i suppose it's natural, to want to teach machines to err like humans.

thi
0
ttn (27)
12/8/2003 6:36:24 PM

Thant Tessman wrote:
> 
> Feuer wrote:

> > Consider it a program annotation, telling the compiler that it doesn't
> > matter.  It may be wrong, but if it's right it could be useful.
> 
> Which tells me that if indeed there is a significant performance
> benefit, it should be available via a compiler option which is off by
> default.

Perhaps...  But that strikes me as kinda sketchy.

David
0
feuer (188)
12/8/2003 6:36:52 PM
Feuer wrote:
> 
> Thant Tessman wrote:
> 
>>Feuer wrote:
> 
> 
>>>Consider it a program annotation, telling the compiler that it doesn't
>>>matter.  It may be wrong, but if it's right it could be useful.
>>
>>Which tells me that if indeed there is a significant performance
>>benefit, it should be available via a compiler option which is off by
>>default.
> 
> 
> Perhaps...  But that strikes me as kinda sketchy.

C++ has this notion of constant-ness: you can declare that a value is 
intended to be constant. It's broken in sort of the opposite way OofE is 
in Scheme. The compiler will not allow a program to compile that 
knowingly violates constness. However, there are situations in which the 
compiler is not allowed to use that programmer notation to optimize 
performance because the resulting program might be buggy from the 
point-of-view of a C/C++ programmer who deliberately chooses to 
circumvent the original programmer's intent in a way that the compiler 
can't detect. It's funny that the C++ community chose predictability 
over performance in this situation. You can tell the compiler to 
"believe" these declarations, but the problem always seems to be in 
other people's libraries.

-thant

0
thant (332)
12/8/2003 6:55:47 PM
> "Bradd W. Szonye" wrote:
>> By using a procedure invocation instead of a sequencing combination
>> like BEGIN, the programmer indicates that the order of evaluation is
>> unimportant. That information is important to maintainers and
>> compilers .... By using the right construct, the programmer indicates
>> what is or isn't important. That is useful information.

Ray Dillinger <bear@sonic.net> wrote:
> would you advocate the use of l2rlambda or r2llambda - variant forms 
> which create procedures that evaluate all arguments left to right or 
> right to left - in *addition to* lambda, or *instead of* lambda?
> 
> A construct like l2rlambda could be supported by a scheme, allowing
> the programmer to specify l2r evaluation where it was important ....

First, a point of clarification: This approach is very different from
what I've been discussing. It makes the argument evaluation order (AEO)
a property of the procedure rather than a property of the invocation.
I've been talking about the difference between

    (invoke-left-to-right proc args ...)
    (invoke-unspecified-aeo proc args ...)

where the call site determines how to evaluate the arguments. Your
suggestion here considers the difference between

    (lazy-invoke left-to-right-proc args ...)
    (lazy-invoke right-to-left-proc args ...)
    (lazy-invoke unspecified-aeo-proc args ...)

I call it "lazy-invoke" because it defers argument evaluation,
delegating it to the called procedure. I'm having a hard time coming up
with a justification for this feature. It doesn't actually provide lazy
semantics; all the arguments are still evaluated before invoking the
procedure. What kind of procedure would need a guarantee about the order
of argument evaluation, but *not* need actual laziness? This feature
doesn't make sense to me for a call-by-value language.

It *almost* makes sense, because you could *almost* use it to implement
AND & OR as first-class procedures instead of syntax, but the lack of
actual laziness makes it impossible to implement the "shortcut logic"
elements, and without that, they aren't really useful.

Therefore, I think this solution is inferior to the current syntactic
approach. If you need shortcut logic, you use a different syntax
(denoted by putting AND/OR in the application position). If you need
sequential evaluation, you use a different syntax (denoted by putting
BEGIN in the application position).

The current syntax isn't perfect. Syntactic keywords aren't first-class
values, so you can't parameterize "similar" syntax like AND & OR. It's
easy enough to sequence AEO with LET*, but many people would prefer a
simpler syntax for it, maybe even make it the "default" for the
"combination" procedure-call syntax.

I'm tempted to propose a solution with explicit AEO syntax and a
user-specified default for the basic combination syntax:

(INVOKE f args ...) is primitive syntax for unspecified AEO, and
(INVOKE-LEFT->RIGHT f args ...) is library syntax for left->right AEO.
When a sexp matches "combination syntax" -- (f args ...) where F is not
a syntactic keyword -- the translator converts it to one of the explicit
AEO forms, and the user can specify which AEO to use by "default."

Unfortunately, that leaves the problem of the "default default." Should
the "bare combination" syntax default to unspecified AEO unless the
programmer redefines it? Should it default to left->right? Should the
translator support "bare combination" syntax but not define it, such
that programs must specify its meaning?
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/8/2003 7:19:15 PM
Matthias Blume wrote:

> > Previously in this thread a link was provided on how you can formally
> > define the language even with unspecified OoE.  Google is your friend.
>
> Unfortunately, the solution the link points to is wrong.

I'm not sure that it is wrong, at least not morally.  It seems equivalent
to a powerdomain semantics as is commonly used to model nondeterminism
elsewhere, and seems to be related to ideas discussed by Clinger in e.g.,

http://groups.google.com/groups?selm=338616D3.2EC7%40ccs.neu.edu&output=gplain

(I'm not trying to put words in anyone's mouth, though).  See also, for
related discussions, the thread

http://zurich.ai.mit.edu/pipermail/rrrs-authors/1988-June/000972.html

> Of course, with great difficulty it could be made
> right, but the end result will be equivalent to the "random number
> stream" solution.

This sounds like an oracle semantics,  which is
perhaps similar to the current "permution oracle" Scheme semantics of
Clinger's.

Actually, while more burdensome for formal proofs, this is no worse than
modeling the nondeterminism in the IO semantics, which will need to be
specified anyway.  How would you propose to do that?  If you model IO in
the "world as value" paradigm, it only slightly complicates things to
include your random number stream in the value of the world (although I
would not advocate doing things quite this way).

While I'm not advocating doing things this way, I would like to point out
that in practice the semantics of IO is just as urgent, if not more so,
than that of evaluation order.

A.

0
andre9567 (120)
12/8/2003 7:55:31 PM
Matthias Blume wrote:
> > True, the language standard doesn't specify all behavior, to allow
> > room for implementation extensions and optimizations.
>
> I don't think that OoE was left unspecified to allow for either
> extensions or optimizations.  Those are rationaliations after the
> fact.  The decision was a political one.

All decisions of committees are political.  But even if political, that
particular decision was presumably to allow for different existing
implementations, or different opinions of implementors.  That still amounts
to leaving OoE unspecified to allow for flexibility in implementations -
it's not a rationalization.

Standard ML has been raised as an example of OoE having been fixed as l2r.
But beyond SML, there are ML variants/derivatives that use the opposite
evaluation order - for example, OCaml uses r2l.  Whether or not anyone
considers OCaml to be an ML, this example points out an underlying
unresolved issue in this discussion, which is what the goals of RnRS are, or
should be.

Arguably, R5RS is a standard of a sort which the ML language family lacks,
afaik: a standard which defines minimum characteristics of a larger set of
language variants than that defined by The Definition of Standard ML.  If an
equivalent to R5RS existed for ML, it might encompass both SML and OCaml.

If one takes the position that Scheme is a family of languages, and that
R5RS defines the lowest common denominator between those languages - a
position solidly grounded in existing facts and practice - then leaving
evaluation order unspecified is eminently reasonable.

If, otoh, one takes the position that Scheme and future Scheme standards
should move towards something like a "Common Scheme", i.e. that the
definition of Scheme should be sufficiently large and well-defined so as to
describe a language that doesn't require significant extensions to be
useful, then fixing evaluation order makes perfect sense.

A fixed evaluation order would make sense for a formal definition of Scheme
of the sort Matthias Felleisen has advocated: a definition more
comprehensive than the core formal semantics in R5RS, and more like The
Definition of Standard ML.

A fixed evaluation order could also make sense for a standard oriented
towards addressing the sorts of expectations that commercial users and users
of many other languages tend to have of a language standard (the C example
notwithstanding).

However, none of this negates the fact that there is value in a broader,
more inclusive standard that supports significant variation between
implementations - as RnRS has done historically, and as R5RS does today.  If
I want to design a new language that's "just like Scheme except...", then
aren't I better off making that language an extension of standard Scheme, as
opposed to an incompatible variation?  The narrower the Scheme standard
becomes, the more not-quite-Schemes there will be, and standard Scheme will
become less useful as a language experimentation platform.  It is in this
context that it makes little sense to fix OoE: it imposes an arbitrary
decision on implementations that has no semantic justification.

Anton



0
anton58 (1240)
12/8/2003 7:59:46 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> > "Bradd W. Szonye" <bradd+news@szonye.com> writes:
> >> Why would I want to? I don't claim that fixed AEO *causes* bugs, just
> >> that it masks them, which is just as bad IME.
> 
> Eli Barzilay <eli@barzilay.org> wrote:
> > OK, rephrase.  Can you describe a scenario where fixing the evaluation
> > order leads to an otherwise masked bug?
> 
> I'll need to think about it. I personally haven't seen an example of
> this kind of problem in a *long* time, at least not in a production
> environment. That's one of the reasons why I don't accept Matthias
> B's argument. I suspect that he works in an environment where formal
> correctness proofs are more important, or where there are a lot of
> students who haven't internalized the rule. You might want to ask
> David Rush; he seems to have more experience with this issue than I
> do. [...]

Your claim seem to be that it might be preferable to have a fixed
order for theoretical things like correctness proofs, but you care
more for the practical stuff.  But many people on the fixed order side
say that they want a fixed order for practical reasons too.  I'm just
now writing code that looks like:

  (define-values (foo bar)
    (values (make-gui-widget) (make-another-gui-widget)))

And it is just damn convenient to know that they will be placed in the
right order.  The alternatives you suggest are (AFAICT):

  (define-values (foo bar)
    (let* ((v1 (make-gui-widget))
           (v2 (make-another-gui-widget)))
      (values v1 v2)))

or:

  (define-values (foo bar)
    (apply* values
            (list* (make-gui-widget) (make-another-gui-widget))))

or maybe even:

  (define-values (foo bar)
    (funcall* values (make-gui-widget) (make-another-gui-widget)))

It might be subjective, and I don't know about your preference, but I
prefer to use a language that fixes the evaluation order as something
I can naturally use.  This is a very practical point for me, no
proofs, no semantics, just programming.  *If* there is some
performance hit that I care about, I'd probably use a different
implementation or language -- I'm paying a huge performance price for
a dynamic language so it seems silly to make something so fundamental
unspecified for the sake of some optimization.

Besides, I think that the main point Matthias Blume is pushing is that
the language definition is so much more important than anything else
that performance issues should not play any role in designing the
language.  If the result turns out as some beautiful language that is
extremely useful in practical developement, then this opens up new
research opportunities for how to get back some of the lost
performance.  If you really consider performance issues right at the
beginning, you wouldn't get to a lot of good stuff we have today,
certainly no Scheme to talk about.

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
0
eli666 (555)
12/8/2003 8:07:35 PM
"Anton van Straaten" <anton@appsolutions.com> writes:

> Standard ML has been raised as an example of OoE having been fixed as l2r.
> But beyond SML, there are ML variants/derivatives that use the opposite
> evaluation order - for example, OCaml uses r2l.  Whether or not anyone
> considers OCaml to be an ML, this example points out an underlying
> unresolved issue in this discussion, which is what the goals of RnRS are, or
> should be.

SML and Ocaml are very different languages.  In fact, except for
trivial ones, there are hardly any programs that are common to both.
The languages are "similar" in many respects, but drafting a common
standard for both of them would not only be very difficult but also
fairly useless.

> However, none of this negates the fact that there is value in a broader,
> more inclusive standard that supports significant variation between
> implementations - as RnRS has done historically, and as R5RS does today.

I'm not sure this is true.

> If I want to design a new language that's "just like Scheme
> except...", then aren't I better off making that language an
> extension of standard Scheme, as opposed to an incompatible
> variation?

If you ask me personally, then the answer is "no".  Almost every
change that I would personally have liked in Scheme is incompatible
with RnRS.

> The narrower the Scheme standard becomes, the more not-quite-Schemes
> there will be, and standard Scheme will become less useful as a
> language experimentation platform.

Obviously, any "standard" is in the way if your goal is to experiment
with languages.  If experimentation is my concern, I couldn't care
less what RnRS (or the Definition of Standard ML, for that matter)
says.  But if I want to write reliable software that will work 5 years
from now on improved, new, or simply different implementations, then I
want a standard that nails things down as precisely as possible.

> It is in this context that it makes little sense to fix OoE: it
> imposes an arbitrary decision on implementations that has no
> semantic justification.

Of course, it *has* a semantic justification: It makes semantics
considerably simpler.  Given that it does not invalidate any currently
legal Scheme program, this alone is justification enough, IMO.
0
find19 (1244)
12/8/2003 8:17:27 PM
Andre <andre@het.brown.edu> writes:

> Matthias Blume wrote:
> 
> > > Previously in this thread a link was provided on how you can formally
> > > define the language even with unspecified OoE.  Google is your friend.
> >
> > Unfortunately, the solution the link points to is wrong.
> 
> I'm not sure that it is wrong, at least not morally.  It seems equivalent to
> a powerdomain semantics
> as is commonly used to model nondeterminism elsewhere, and seems to be
> related to ideas
> discussed by Clinger in e.g.,

I had another look, and you are probably right: The solution might be
correct (but I did not check every detail).  So I take back the
earlier remark.  In any case, there is really nothing different about
it: in the end they wind up with a huge list of possible answers and
an implementation-defined way of selecting one.  So they simply
shifted the place where all those random bits get consumed to the end.
Reasoning about programs does not become any simpler that way.

> > Of course, with great difficulty it could be made
> > right, but the end result will be equivalent to the "random number
> > stream" solution.
> 
> This sounds like an oracle semantics,  which is
> perhaps similar to the current "permution oracle" Scheme semantics of
> Clinger's.

Yes, this is related to the RnRS rendition using permute/unpermute.
It basically fixes it up so that it reflects what is intended.

> Actually, while more burdensome for formal proofs, this is no worse
> than modeling the nondeterminism in the IO semantics, which will
> need to be specified anyway.

At least the dependence on I/O is confined to I/O operations and not
spread to nearly every single primitive.

> While I'm not advocating doing things this way, I would like to
> point out that in practice the semantics of IO is just as urgent, if
> not more so, than that of evaluation order.

There are differences.  With I/O we really are *inherently* forced to
reason about our programs' behaviors under /every possible/ input.
Reflecting this in the semantics is just par for the course.  But
arbitrarily adding a source of random bits to the semantics (and
making the most common operation depend on it) is not something that
is inherent to the task, so it is an unnecessary burden which can
easily be avoided.
0
find19 (1244)
12/8/2003 8:44:00 PM
On Mon, 08 Dec 2003 14:17:27 -0600, Matthias Blume wrote:

> Almost every change that I would personally have liked in Scheme
> is incompatible with RnRS.

May I ask which are these? I'm curious.

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/

0
qrczak (1266)
12/8/2003 8:56:39 PM
On Mon, 08 Dec 2003 21:56:39 +0100, Marcin 'Qrczak' Kowalczyk 
<qrczak@knm.org.pl> wrote:

> On Mon, 08 Dec 2003 14:17:27 -0600, Matthias Blume wrote:
>
>> Almost every change that I would personally have liked in Scheme
>> is incompatible with RnRS.
>
> May I ask which are these? I'm curious.
>

Making Scheme the equivalent of SML... ;-)


<runs for cover>


cheers,
felix
0
felix4557 (46)
12/8/2003 9:04:55 PM
Eli Barzilay <eli@barzilay.org> wrote:
> Your claim seem to be that it might be preferable to have a fixed
> order for theoretical things like correctness proofs, but you care
> more for the practical stuff.

Correct. I'll concede that Matthias Blume is correct when he claims that
unspecified AEO greatly increases the complexity of formal proofs.
However, I disagree with his claim that it makes it harder to "reason
about programs"; abstraction barriers usually make it possible to tackle
code review in a way that keeps the complexity manageable.

> But many people on the fixed order side say that they want a fixed
> order for practical reasons too.

Correct -- but I don't find that argument compelling, for two reasons:
There are trivial transformations that provide the semantics they want,
and fixing the AEO removes useful information about a program (namely,
the ability to say that the order doesn't matter).

> I'm just now writing code that looks like:
> 
>   (define-values (foo bar)
>     (values (make-gui-widget) (make-another-gui-widget)))

OK.

> And it is just damn convenient to know that they will be placed in the
> right order.  The alternatives you suggest are (AFAICT):
> 
>   (define-values (foo bar)
>     (let* ((v1 (make-gui-widget))
>            (v2 (make-another-gui-widget)))
>       (values v1 v2)))

This is the "trivial transformation." I like it because it explicitly
says what you mean -- evaluate these expressions in sequential,
imperative order. However, it's a bit cumbersome, so I would prefer to
see standard library syntax like one of your other two examples:

>   (define-values (foo bar)
>     (apply* values
>             (list* (make-gui-widget) (make-another-gui-widget))))
> 
> or maybe even:
> 
>   (define-values (foo bar)
>     (funcall* values (make-gui-widget) (make-another-gui-widget)))

Either of these "sugared" forms is reasonable, and I'd much rather add
these than remove the ability to write "the order doesn't matter here!"
in code.

Another possibility: Unspecified AEO is the "primitive," but it needn't
be the terser syntax. It wouldn't be a horrible idea to provide an
(INVOKE f args ...) syntax for unspecified AEO, and define the simpler
(f args ...) syntax as library syntax for the let*/invoke form.

However, I'd rather have it the first way, where the basic ("default")
syntax uses unspecified AEO, because I think it's the more practical of
the two, in general. It's better at exposing subtle bugs, it's more
amenable to optimization, and it does a better job of expressing the
programmer's intent (eval order doesn't matter) in the vast majority of
calls.

> It might be subjective, and I don't know about your preference, but I
> prefer to use a language that fixes the evaluation order as something
> I can naturally use.  This is a very practical point for me, no
> proofs, no semantics, just programming.

What's "natural" about using a syntax that says, "Order matters here!"
when you really don't care about the order? What's unnatural about using
a syntactic keyword when you really do want to say, "Order matters
here!" It's a clue to maintainers and compilers that they may not
reorganize the code unless they preserve the semantics.

> Besides, I think that the main point Matthias Blume is pushing is that
> the language definition is so much more important than anything else
> that performance issues should not play any role in designing the
> language.

Sure, the language definition is important, but I don't agree that
sequential, imperative semantics are important to language definition!
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/8/2003 9:53:40 PM
> Feuer wrote:
>> Consider it a program annotation, telling the compiler that it
>> doesn't matter.  It may be wrong, but if it's right it could be
>> useful.

Thant Tessman <thant@acm.org> wrote:
> Which tells me that if indeed there is a significant performance
> benefit, it should be available via a compiler option which is off by
> default.

If it were just about runtime efficiency, that would be more compelling.
However, the "program annotation" also tells *maintainers* that the
order doesn't matter, that they may freely reorganize the code. (If they
do so, and it breaks, that's a significant diagnostic clue: There's an
unintended dependency.) Likewise, it also tells *code reviewers* what to
expect from the code. Therefore, it also improves programmer efficiency.

You could do the same thing with comments, but comments often describe
the code incorrectly. That's not possible with code. (Code can implement
the design incorrectly, but that's easier to check when you can easily
read what the code really does, rather than relying on unreliable
comments.)
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/8/2003 9:58:41 PM
Matthias Blume <find@my.address.elsewhere> wrote:
>>> It *never* masks _bugs_.  Relying on OofE in a language with fixed
>>> OoE is not a bug.

> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> We've heard anecdotal evidence from David Rush that this is false,
>> that removing dependence on OOE was superior to setting a fixed OOE
>> in his experience.

> It cannot be false, David Rush's testimony notwithstanding.  How can
> relying on something that is guaranteed ever be a bug in and of
> itself?

It's a bug when you unintentionally rely on the AEO. When the programmer
does it *intentionally*, it's easy to spot, and it's trivial to fix.
When the programmer does it *accidentally* -- i.e., the arguments
interact in unexpected ways -- that's a bug. Fixing the AEO will tend to
mask that bug, and varying the AEO will tend to expose it.

That's why I recommend a compiler/interpreter setting to "shake up" the
AEO, regardless of the default setting. It exposes that kind of bug. You
can get similar benefits by porting to multiple Schemes, or maybe even
by changing your optimization settings, but it'd be better to have a
switch specifically aimed at AEO. (My only reservation is that it may
tempt some users to set it to "left->right AEO" and leave it that way.)

Of course, if you write code that requires left->right AEO, that
bug-exposing technique is useless, because you can't turn it on without
breaking the *intentional* uses.
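
For concreteness, here's a rough sketch (not from any existing
implementation; the names are hypothetical) of what such a "shake up the
AEO" tool could look like as a call-site macro. It assumes the
implementation provides (random n); everything else is plain R5RS.

  ;; Force a list of thunks strictly left to right, returning their
  ;; values in the same order. (The LET is what guarantees the order;
  ;; consing the two calls directly would leave it unspecified.)
  (define (force-l2r thunks)
    (if (null? thunks)
        '()
        (let ((v ((car thunks))))
          (cons v (force-l2r (cdr thunks))))))

  ;; Call OP on ARG ..., evaluating the arguments left-to-right or
  ;; right-to-left, chosen at random on each call.
  (define-syntax invoke-shaken
    (syntax-rules ()
      ((_ op arg ...)
       ;; Each argument is wrapped in a thunk, so the unspecified order
       ;; in which LET evaluates its init expressions is harmless here.
       (let ((f op)
             (thunks (list (lambda () arg) ...)))
         (if (zero? (random 2))                    ; assumed primitive
             (apply f (force-l2r thunks))
             (apply f (reverse (force-l2r (reverse thunks)))))))))

Rewriting a test run's calls as (invoke-shaken proc (foo) (bar)) -- or
hooking something like this into the compiler -- makes hidden
dependencies between FOO and BAR show up as intermittent failures
instead of staying masked.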

> I know that programs improve quite often when one removes certain
> imperative aspects -- which has the side effect (no pun intended) of
> making the program less dependent on evaluation order.  In fact, I am
> a great fan of the pursuit of such program design.  But that does not
> mean that one should throw out the baby with the bath water and leave
> evaluation order unspecified.  The benefits David refers to can be
> realized regardless of whether or not the language is vague on the
> issue.

How will you find the subtler bugs, if you can't shake up the AEO? Yes,
there are other ways to do it, but that's a very cheap and effective
technique.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/8/2003 10:06:17 PM
> Bradd W. Szonye wrote:
>> By using a procedure invocation instead of a sequencing combination
>> like BEGIN, the programmer indicates that the order of evaluation is
>> unimportant. That information is important to maintainers and
>> compilers.

Thant Tessman <thant@acm.org> wrote:
> A well-specified language is far more valuable to maintainers than an
> attempt by a maintainer to divine the intention of the programmer by
> their choice of construct which may or may not have
> been...er...intentional.

The language is well-specified, though. It states that the procedure
call syntax is *solely* for calling procedures, not for imperative
sequencing. That's useful information.

Now, as you point out, a programmer may use it incorrectly. That's what
code reviews are for: to spot that kind of mistake, and to educate the
programmers who make them. That takes care of the "RTFM-type" mistakes.

Also, a programmer may provide arguments that interact in non-obvious
ways, without realizing that he's done it. That's what automatic
diagnostic tools like the "argument randomizer" are for. It helps to
shake out that kind of bug.

Therefore, by the time the maintainer gets the code, he can be reasonably sure
that AEO really *doesn't* matter in a procedure call, and if he does
notice that kind of behavior he *knows* that it's a design flaw or a
coding error.

>> By using the right construct, the programmer indicates what is or
>> isn't important. That is useful information.

> My point is that this argument is circular. It's not useful
> information if it's not relevant. And it's not relevant unless OofE
> matters ....

The information is always relevant: It's necessary to understand the
code (review), and it's certainly necessary if you want to *re-organize*
the code (maintenance). Code review and maintenance are very, very
important roles in software engineering.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/8/2003 10:13:52 PM
Bradd W. Szonye wrote:

[...]

> If it were just about runtime efficiency, that would be more compelling.
> However, the "program annotation" also tells *maintainers* that the
> order doesn't matter, that they may freely reorganize the code. [...]

I understand what you're saying. I just don't believe for a moment that 
being able to rearrange argument evaluation is anywhere near as
important to someone maintaining code as knowing that the order of
evaluation is fixed.

-thant

0
thant (332)
12/8/2003 10:16:18 PM
> "Scott G. Miller" <scgmille@freenetproject.org> writes:
>> a*x + b*y + c*z
>> 
>> has a set of rules for evaluation.  Multiplication takes precedence
>> over addition, but beyond that the meaning of the expression does
>> not change if I mentally evaluate it from any direction.

Matthias Blume <find@my.address.elsewhere> wrote:
> This statement is false! ...
>
> Let a*x = 1e50
> Let b*y = -1e50
> Let c*z = 1.0
> 
> To demonstrate, I use SML (because this way I can be sure in which
> order things get evaluated):

Maybe you use SML to mentally evaluate arithmetic, but I doubt that
Scott does. Yes, SML's inexact arithmetic may violate the commutative
property, but that's a flaw in the implementation, not the mental model.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/8/2003 10:16:35 PM
Matthias Blume <find@my.address.elsewhere> wrote:
>>> What is the difference between a bug that is "masked" and one that
>>> is not there?

> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> A masked bug may be entirely benign, or only mostly benign. The
>> latter case is actually *worse* than having an obvious bug.

> You have not answered the question.

Yes, I did. If the bug does not exist, it's always benign. If the bug
exists, but is masked, then it is only "mostly benign," and in practice,
that's often worse than having an obvious, malignant bug. That's the
difference: One fixes the problem, and the other makes the problem
harder to find. IMO, fixed AEO accomplishes the latter, not the former.

If that doesn't answer your question, then please clarify, rather than
just saying, "You didn't answer!" or "I already told you!" or "It's
easy!" So far, every time you've said this, I *did* answer the question
as well as I could, or you *didn't* tell me, or it *wasn't* easy.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/8/2003 10:23:34 PM
> Bradd W. Szonye wrote:
>> If it were just about runtime efficiency, that would be more
>> compelling. However, the "program annotation" also tells
>> *maintainers* that the order doesn't matter, that they may freely
>> reorganize the code. [...]

Thant Tessman <thant@acm.org> wrote:
> I understand what you're saying. I just don't believe for a moment
> that being able to rearrange argument evaluation is anywhere near as
> important to someone maintaining code than knowing that the order of
> evaluation is fixed.

Let's say that you're working on some legacy code, and you see

    (proc (foo) (bar))

You think the code would work better if you evaluated (bar) first. Is it
possible? Will the code still work if you reorganize it? Did the
original programmer hide some global dependency between the two
arguments? If the language "blesses" the code with fixed AEO, he may
have. You'll need to dig deeper to find out, and if he *did* rely on it,
then you've got some non-trivial redesign ahead of you.

Now suppose that you see

    (let* ((a1 (foo))
           (a2 (bar)))
      (proc a1 a2))

Now you know that the order of evaluation is important. Yes, you still
need to dig deeper, and refactoring may still be difficult. However, you
only need to do it *when you see this pattern*. You don't need to do it
for every procedure call.

Matthias Blume claims that the unspecified order makes it harder to
reason about programs. In my experience, it's the *imperative* element
that makes it harder to reason about programs. The nice thing about
unspecified order is that it clearly states, "This code is not
imperative!"

Also in my experience, imperative constructs strongly encourage
programmers to write fragile code. If you guarantee that some construct
is sequential, then they'll write sequential code, even if it isn't
really necessary. I suspect that's because sequential code is easier to
write than commutative code. Unfortunately, it's also much harder to
maintain. Therefore, while it may be reasonable for teaching or
prototyping languages, I would prefer to avoid it in large-scale
software whenever possible. (I ran into this kind of thing all the time
when I worked on a commercial compiler; it was a huge relief when I
switched to system libraries, because the code was much more
"commutative" and modular.)
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/8/2003 10:43:31 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> > But many people on the fixed order side say that they want a fixed
> > order for practical reasons too.
> 
> Correct -- but I don't find that argument compelling, for two
> reasons: There are trivial transformations that provide the
> semantics they want, and fixing the AEO removes useful information
> about a program (namely, the ability to say that the order doesn't
> matter).

Well, it does add a default order, which is what some people like to
have as a default.  Personally, I want that default enough that I wouldn't
mind an explicit annotation whenever I need to say that I don't care
about the order -- this is for the simple fact that every once in a
while I do want to rely on a fixed (l2r) order, but I have never
needed to say that I don't care what the order is.

In fact, I will never specify such a don't-care-about-the-order option
unless I'm trying to optimize some code and know that my compiler can
use that for a substantial performance boost -- if I ever do this, it
will be a part of optimizing a working piece of code (eg, adding
declarations in CL).  I don't want to suffer constantly for the sake
of some compiler optimizations that are easier without the more
convenient default; I don't want to even think about optimizations
until I start working on them.

An extreme and stupid example is -- say that you add some `eval'
declaration for expressions that need to be evaluated.  Now the claim
is that I can write programs that behave as before, I just need to
mark things that need to be evaluated:

  (eval ((eval +) (eval 1) (eval 2)))

So the default is now to not evaluate stuff, and explicitly mark
things that need to be evaluated.  This will obviously allow some
extreme optimizations in certain cases -- but I like the default mode
to be evaluating stuff -- and when I want to optimize things then I
want a dont-eval mark.

On a similar line of thought, I'd prefer implementations to let you
have some funcall-unordered as something to use when I'm optimizing
instead of that being the default with a funcall-r2l when things
matter.


> Another possibility: Unspecified AEO is the "primitive," but it
> needn't be the terser syntax. It wouldn't be a horrible idea to
> provide an (INVOKE f args ...) syntax for unspecified AEO, and
> define the simpler (f args ...) syntax as library syntax for the
> let*/invoke form.

This is just getting heavier and heavier for the wrong reason.  I just
don't want to see anything that is related to compilation optimization
in my language definition.


> However, I'd rather have it the first way, where the basic
> ("default") syntax uses unspecified AEO, because I think it's the
> more practical of the two, in general.

I find the l2r thing more *practical* since I can actually *use* it.
The unordered thing is not something I can use -- it's just something
I *cannot* use.


> It's better at exposing subtle bugs,

So does requiring you to specify the type of every expression.  So
does requiring you to have a closing paren that looks like </foo>
that matches information in the open paren.  So does requiring you to
always prefix numbers with a "#i" or a "#e".  I still like to have
some set of reasonable defaults for all of these.


> it's more amenable to optimization,

Again -- I don't want to see the word "optimization" in my language
definition.


> and it does a better job of expressing the programmer's intent (eval
> order doesn't matter) in the vast majority of calls.

It does not do a better job of expressing this particular programmer's
intent since it forces him to be more verbose in some situations.


> > It might be subjective, and I don't know about your preference,
> > but I prefer to use a language that fixes the evaluation order as
> > something I can naturally use.  This is a very practical point for
> > me, no proofs, no semantics, just programming.
> 
> What's "natural" about using a syntax that says, "Order matters
> here!"  when you really don't care about the order?

The natural part is that I expect things to be evaluated in *some*
order, and a l2r order is more natural for me since this is the
direction I write my programs.  Like Matthias said -- Scheme is
imperative, I can pretend to live in a world where this is not the
case, but then I wouldn't care about some default order.  This is why
I don't see it as a bad step that gets you a bit "closer to an
imperative" language.  Haskell is a good exaple of having defaults
that are so radically different than the imperative world that they
have to go through monadic hoops to be able to express imperative
stuff.


> What's unnatural about using a syntactic keyword when you really do
> want to say, "Order matters here!" It's a clue to maintainers and
> compilers that they may not reorganize the code unless they preserve
> the semantics.

Is there anything unnatural in prefixing every sentence you write with
"the following sentence is written in English"?  When I post here I
assume you rely on the implicit assumption "his text is always in English
unless indicated otherwise".


> > Besides, I think that the main point Matthias Blume is pushing is
> > that the language definition is so much more important than
> > anything else that performance issues should not play any role in
> > designing the language.
> 
> Sure, the language definition is important, but I don't agree that
> sequential, imperative semantics are important to language
> definition!

Sure they are -- they are important for us programmers, since it
changes the way we write programs.

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
0
eli666 (555)
12/8/2003 11:03:38 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> Let's say that you're working on some legacy code, and you see
> 
>     (proc (foo) (bar))
> 
> You think the code would work better if you evaluated (bar) first.
> Is it possible? Will the code still work if you reorganize it?

FWIW, if you refactor my code in such a way, then yes -- I expect you
to dig deeper.  ("FWIW" since I never saw, and can't think of any
reason for you to change my code in this way, other than a compiler.)


> [...] In my experience, it's the *imperative* element that makes it
> harder to reason about programs. [...]

So you should use Haskell.


> The nice thing about unspecified order is that it clearly states,
> "This code is not imperative!"

This is Scheme -- code *is* imperative.


> Also in my experience, imperative constructs strongly encourage
> programmers to write fragile code.

Please, PLEASE, *PLEASE*, show an example of such fragile code.

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
0
eli666 (555)
12/8/2003 11:19:55 PM
Eli Barzilay <eli@barzilay.org> wrote:
>>> But many people on the fixed order side say that they want a fixed
>>> order for practical reasons too.

> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> Correct -- but I don't find that argument compelling, for two
>> reasons: There are trivial transformations that provide the semantics
>> they want, and fixing the AEO removes useful information about a
>> program (namely, the ability to say that the order doesn't matter).

> Well, it does add a default order which is what some people like to
> have as a default.  Personally, I want that default that I wouldn't
> mind an explicit annotation whenever I need to say that I don't care
> about the order -- this is for the simple fact that every once in a
> while I do want to rely on a fixed (l2r) order, but I have never
> needed to say that I don't care what the order is.

I believe that unspecified AEO makes for a better default, because the
evaluation order *doesn't* matter in the vast majority of procedure
calls. When you write (vector-ref v i) do you generally care whether V
or I gets evaluated first? In (loop (cons (car l) result) (cdr l))
do you care whether you build up the result or strip the input first?
And most important, when you write (+ a 1) or (= (v1) (v2)) wouldn't you
find it surprising to learn that addition and equality *aren't*
commutative?

IME, code that actually cares about the AEO is uncommon, and code that
doesn't care is *very* common. Non-imperative code (including calls with
unspecified AEO) is much easier to review, understand, and modify than
imperative code. So why would you want to make the imperative version
the *default*, when it's the less common case?

> In fact, I will never specify such a don't-care-about-the-order option
> unless I'm trying to optimize some code and know that my compiler can
> use that for a substantial performance boost ....

And that's exactly why it should be the default. If you make it an
optional thing, programmers will ignore it; they will default to the
imperative notation, even when it isn't necessary. That makes code
review and maintenance more expensive.

> On a similar line of thought, I'd prefer implementations to let you
> have some funcall-unordered as something to use when I'm optimizing
> instead of that being the default with a funcall-r2l when things
> matter.

Thing is, it's not just about runtime efficiency, it's also about review
and maintenance efficiency. Some folks don't care much about the
compiler optimization, but I do, and I care even more about defaulting
to a style that's more difficult to maintain.

>> Another possibility: Unspecified AEO is the "primitive," but it
>> needn't be the terser syntax. It wouldn't be a horrible idea to
>> provide an (INVOKE f args ...) syntax for unspecified AEO, and define
>> the simpler (f args ...) syntax as library syntax for the let*/invoke
>> form.

> This is just getting heavier and heavier for the wrong reason.

How is this "heavier"?

No matter how you define (f args ...), unspecified AEO will always be
the "primitive" operation, and fixed AEO will always be "library"
syntax:

    (INVOKE-L2R f arg1 arg2 ...)
      => (LET* ((t1 arg1) (t2 arg2) ...) (INVOKE-UNSPEC f t1 t2 ...))

The only real question is how to "sugar" it. Do you make the simple
(f args ...) combination syntax equivalent to INVOKE-L2R or INVOKE-UNSPEC?
Do you make the other syntax available to users? R5RS defines the
combination syntax to mean INVOKE-UNSPEC, and it doesn't provide the
INVOKE-L2R syntax.
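
As a rough sketch (hypothetical name, plain R5RS syntax-rules), the
library form could be written along these lines:

  ;; invoke-l2r: evaluate the operator, then each argument, strictly
  ;; left to right, and finally make an ordinary (unspecified-AEO) call
  ;; with the already-computed values.
  (define-syntax invoke-l2r
    (syntax-rules ()
      ;; internal rule: every argument is sequenced; perform the call
      ((_ "args" f (val ...))
       (f val ...))
      ;; internal rule: bind the next argument before the remaining ones
      ((_ "args" f (val ...) arg rest ...)
       (let ((x arg))
         (invoke-l2r "args" f (val ... x) rest ...)))
      ;; entry point: evaluate the operator first, then the arguments
      ((_ op arg ...)
       (let ((f op))
         (invoke-l2r "args" f () arg ...)))))

so that (invoke-l2r proc (foo) (bar)) behaves like
(let* ((f proc) (a (foo)) (b (bar))) (f a b)), while the plain
(proc (foo) (bar)) combination keeps its unspecified AEO.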

> I just don't want to see anything that is related to compilation
> optimization in my language definition.

Why not? Also, what makes you think that this is just about compiler
optimization?

>> However, I'd rather have it the first way, where the basic
>> ("default") syntax uses unspecified AEO, because I think it's the
>> more practical of the two, in general.

> I find the l2r thing more *practical* since I can actually *use* it.
> The unordered thing is not something I can use -- it's just something
> I *cannot* use.

I don't know what you mean here, but taken literally it's obviously
false. For example, I'm sure that you write (+ n 1) all the time, and
that *is* using unspecified AEO. Indeed, making the + syntax imperative
would probably be highly counterintuitive, because it removes
commutativity.

>> ... it does a better job of expressing the programmer's intent (eval
>> order doesn't matter) in the vast majority of calls.

> It does not do a better job of expressing this particular programmer's
> intent since it forces him to be more verbose in some situations.

So what? In the vast majority of procedure calls, where you don't care
about the evaluation order, it lets you express that "don't care" very
tersely! And in the small minority of calls, where you do care about the
order, it forces you to add a little extra to explicitly state the
imperative requirements. That's a *good* thing.

>> What's "natural" about using a syntax that says, "Order matters
>> here!"  when you really don't care about the order?

> The natural part is that I expect things to be evaluated in *some*
> order, and a l2r order is more natural for me since this is the
> direction I write my programs.

Is that left->right order *important*? If so, then you should be writing
it explicitly anyway, either in a comment or with LET*. If not, then
fixing the evaluation order accomplishes nothing, and it removes
information that's valuable to code reviewers, maintainers, and
compilers.

> Like Matthias said -- Scheme is imperative, I can pretend to live in a
> world where this is not the case, but then I wouldn't care about some
> default order.  This is why I don't see it as a bad step that gets you
> a bit "closer to an imperative" language.

I think it's quite ironic to argue that "reasoning about programs is
hard" while advocating imperative style.

>> What's unnatural about using a syntactic keyword when you really do
>> want to say, "Order matters here!" It's a clue to maintainers and
>> compilers that they may not reorganize the code unless they preserve
>> the semantics.

> Is there anything unnatural in prefixing every sentence you write with
> "the following sentence is written in English"?

Yes, of course. That's exactly why it's unnatural to use imperative
syntax for procedure calls, because the vast majority of calls *don't*
care about the arg eval order.

>> Sure, the language definition is important, but I don't agree that
>> sequential, imperative semantics are important to language
>> definition!

> Sure they are -- they are important for us programmers, since it
> changes the way we write programs.

No, it doesn't. It changes the way you write a few procedures every now
and then. For the vast majority of calls, it makes no difference
whatsoever, and in the remaining few cases (where you need LET* or
something similar), it actually *improves* style by clarifying the
program's meaning.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/8/2003 11:58:30 PM
> "Bradd W. Szonye" writes:
>> Also in my experience, imperative constructs strongly encourage
>> programmers to write fragile code.

Eli Barzilay wrote:
> Please, PLEASE, *PLEASE*, show an example of such fragile code.

Like I said, ask David Rush for examples. I don't have any examples
handy, because I *avoid* that programming style.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/8/2003 11:59:14 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> > "Bradd W. Szonye" writes:
> >> Also in my experience, imperative constructs strongly encourage
> >> programmers to write fragile code.
> 
> Eli Barzilay wrote:
> > Please, PLEASE, *PLEASE*, show an example of such fragile code.
> 
> Like I said, ask David Rush for examples.

This is not a private email.


> I don't have any examples handy, because I *avoid* that programming
> style.

Bogus.  If you avoid a programming style that leads to bugs or to
fragile code, then please show an example of *what you avoid*.

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
0
eli666 (555)
12/9/2003 12:08:04 AM
Bradd wrote:
>> Also in my experience, imperative constructs strongly encourage
>> programmers to write fragile code.

Eli Barzilay wrote:
> Please, PLEASE, *PLEASE*, show an example of such fragile code .... If
> you avoid a programming style that leads to bugs or to fragile code,
> then please show an example of *what you avoid*.

If I show you a simple example, you won't think it's "fragile," because
a simple example will be easy to understand. I don't have a complex
example handy, because I try to avoid that programming style.

However, I'm surprised that you find it so difficult to believe. Suppose
that you write:

    (proc (foo) (bar))

In the initial version, FOO and BAR are independent. You could swap them
if you needed to. Later, you discover that there's an easier way to
calculate BAR, but you need to share a variable with FOO. You could
refactor the code, or you could use a global variable to communicate
between the two procedures. Refactoring is difficult, but hidden global
state is fragile. Which solution do you choose?
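
To make the scenario concrete, here is a contrived, self-contained
sketch (all names hypothetical) of the global-variable shortcut next to
the explicit alternative:

    ;; The shortcut: BAR silently reuses a value that FOO stashes in a
    ;; global, so (proc (foo) (bar)) is only right under l2r evaluation.
    (define *cache* 0)

    (define (foo)
      (set! *cache* 10)      ; side effect that BAR quietly depends on
      *cache*)

    (define (bar)
      (* 2 *cache*))         ; 20 if FOO has already run, 0 otherwise

    (define (proc a b) (+ a b))

    (proc (foo) (bar))       ; 30 with l2r AEO, 10 with r2l

    ;; The explicit version announces the dependency instead of hiding it:
    (let* ((a (foo))
           (b (bar)))
      (proc a b))            ; always 30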

If you make it easy to rely on imperative order, I'll bet you that all
but the most diligent programmers will use the global. If you're lucky,
somebody will catch it in a code review and reject the fragile solution.
If not, you've just introduced a hidden dependency, and you've made the
code much more difficult to maintain.

If you require the programmer to explicitly state intent, then at the
very least he'll use LET* to warn about the hidden global interaction.
He might even consider refactoring, because the relative cost is now
less.
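
To make the scenario concrete, here is a minimal sketch -- PROC, FOO,
BAR, and the *seed* global are made-up names, not code from any real
program:

    (define (proc a b) (list a b))      ; stand-in for the real procedure

    (define *seed* 0)                   ; hidden global shared by foo and bar

    (define (foo)
      (set! *seed* 42)                  ; foo stashes a value as a side effect
      (* *seed* 2))

    (define (bar)
      (+ *seed* 1))                     ; bar silently assumes foo already ran

    ;; Fragile: gives (84 43) under left-to-right evaluation, (84 1) under
    ;; right-to-left, with no hint at the call site that the order matters.
    (proc (foo) (bar))

    ;; Explicit: LET* states the required order, whatever the
    ;; implementation happens to do.
    (let* ((x (foo))
           (y (bar)))
      (proc x y))

The LET* version costs two extra lines, but it names the dependency at
the call site and survives a later reordering of PROC's parameters.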

Also, consider what happens if the PROC interface changes. Suppose that
a review uncovers a design problem with PROC, and the arguments actually
belong in the other order. If FOO and BAR are interchangeable, that's
not a problem. But if you relied on the order of evaluation, you'll need
to break them out into a separate LET* anyway!

That brings up an important point: Fixed AEO is only "easier" if your
imperative order happens to match the argument order. In other words,
you can write (map (get-fn) (get-list)), but you can't write (map
(get-list) (get-fn)). In fact, you can generally only rely on fixed AEO
when the function is *commutative* -- and that's exactly the kind of
function that you *don't* want to make imperative, because it's
counterintuitive to impose a one-way argument order on a commutative
function.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/9/2003 12:46:02 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> IME, code that actually cares about the AEO is uncommon, and code
> that doesn't care is *very* common. Non-imperative code (including
> calls with unspecified AEO) is much easier to review, understand,
> and modify than imperative code. So why would you want to make the
> imperative version the *default*, when it's the less common case?

Because sometimes I do use it, and when I'm not using it I don't pay
any price at all.  I don't want the extra verbosity that will result
from an unspecified order.


> > In fact, I will never specify such a don't-care-about-the-order
> > option unless I'm trying to optimize some code and know that my
> > compiler can use that for a substantial performance boost ....
> 
> And that's exactly why it should be the default. If you make it an
> optional thing, programmers will ignore it; they will default to the
> imperative notation, even when it isn't necessary.

So what's the big deal?  This is the basic question which you have so
far avoided by some vague optimization excuse.


> That makes code review and maintenance more expensive.

AwComeOn...  Code review and code maintenance are perfectly fine as
long as reviewers and maintainers are using the same language
definitions.


> > On a similar line of thought, I'd prefer implementations to let
> > you have some funcall-unordered as something to use when I'm
> > optimizing instead of that being the default with a funcall-r2l
> > when things matter.
> 
> Thing is, it's not just about runtime efficiency, it's also about
> review and maintenance efficiency.  Some folks don't care much about
> the compiler optimization, but I do, and I care even more about
> defaulting to a style that's more difficult to maintain.

Again, PLEASE show some code that is more difficult to maintain with a
specified order.


> >> Another possibility: Unspecified AEO is the "primitive," but it
> >> needn't be the terser syntax. It wouldn't be a horrible idea to
> >> provide an (INVOKE f args ...) syntax for unspecified AEO, and define
> >> the simpler (f args ...) syntax as library syntax for the let*/invoke
> >> form.
> 
> > This is just getting heavier and heavier for the wrong reason.
> 
> How is this "heavier"?

Weight of ink used by my printer when I print the manual.
Weight of ink used by my printer when I print programs.


> > I just don't want to see anything that is related to compilation
> > optimization in my language definition.
> 
> Why not?

Because when I program in Scheme I want to forget that there is a
machine, registers, gates, etc etc etc.  All I want is for it to run
my code.  For example, when I write machine code, all I want is fast
memory access, I don't want to know that there is a cache in the
middle, and I certainly don't want to have the differences between an
L1 and an L2 cache to affect my programs.


> Also, what makes you think that this is just about compiler
> optimization?

Your posts.  So far I have not seen any example for bugs that are
avoided by an unspecified order, or code that is not fragile.
(Actually, "fragile" is the first word I think about when I consider
the current state of things with respect to evaluation order.)


> >> However, I'd rather have it the first way, where the basic
> >> ("default") syntax uses unspecified AEO, because I think it's the
> >> more practical of the two, in general.
> 
> > I find the l2r thing more *practical* since I can actually *use* it.
> > The unordered thing is not something I can use -- it's just something
> > I *cannot* use.
> 
> I don't know what you mean here, but taken literally it's obviously
> false.

Demonstrated literally:

* Demonstration of "can use":

    Here I don't use it: (+ 12 34)

    Here I do use it: (list (display "12") (display "34"))

  => so there is some feature that I can use.

* Demonstration of "cannot use":

    If you can fill in this blank then you show that you can use
    unspecified order.

  Note here that I was talking about *me* -- the programmer.  So
  filling in this slot with something that is part of the
  implementation does not work.



> For example, I'm sure that you write (+ n 1) all the time, and that
> *is* using unspecified AEO.

The "(+ n 1)" that *I'm* writing is using a specified order.


> Indeed, making the + syntax imperative would probably be highly
> counterintuitive, because it removes commutativity.

You don't have commutativity in any case:

  (let ((x 1))
    (define (foo) (set! x (+ x 1)) x)
    (+ (foo) (* 2 (foo))))   ; => 8 under l2r, 7 under r2l

In this code, the + is never used in a commutative way; all you
achieve with the unspecified order is to turn a deterministic result
into a nondeterministic one -- to turn defined behavior into either a
bug (if it doesn't evaluate like you expect), a benign bug (if it
does), or random behavior (if you randomize the order on every call).
I don't see any of these outcomes as useful -- AFAICS, there is no bug
in the above code -- when I write it I know what the result should be.


> >> ... it does a better job of expressing the programmer's intent (eval
> >> order doesn't matter) in the vast majority of calls.
> 
> > It does not do a better job of expressing this particular
> > programmer's intent since it forces him to be more verbose in some
> > situations.
> 
> So what?

So what??  I don't know about you, but I personally find the verbosity
of:

  (set! r1 (* 2 3))
  (set! r2 (* 4 5))
  (return (+ r1 r2))

to be a Bad Idea compared to:

  (+ (* 2 3) (* 4 5))


> In the vast majority of procedure calls, where you don't care about
> the evaluation order, it lets you express that "don't care" very
> tersely! And in the small minority of calls, where you do care about
> the order, it forces you to add a little extra to explicitly state
> the imperative requirements. That's a *good* thing.

As someone who, by definition, never cares for expressing "don't
care"s, I am perfectly happy with my current (implementation-dependent)
situation where I don't need any extra, little or otherwise.


> >> What's "natural" about using a syntax that says, "Order matters
> >> here!"  when you really don't care about the order?
> 
> > The natural part is that I expect things to be evaluated in *some*
> > order, and a l2r order is more natural for me since this is the
> > direction I write my programs.
> 
> Is that left->right order *important*?

Sometimes.


> If so, then you should be writing it explicitly anyway, either in a
> comment or with LET*.

When I write "(+ (foo) (bar))" in my implementation, the l2r order is
specified and therefore needs no extra comments or let*s.  If foo and bar
are defined in some "remote" place, and I do intend for these calls to
be evaluated only in this order, I might add a comment, but this
happens many times with many other constructs -- even in "(+ (foo))"
where you need to call foo.

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
0
eli666 (555)
12/9/2003 12:48:48 AM
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> IME, code that actually cares about the AEO is uncommon, and code
>> that doesn't care is *very* common. Non-imperative code (including
>> calls with unspecified AEO) is much easier to review, understand, and
>> modify than imperative code. So why would you want to make the
>> imperative version the *default*, when it's the less common case?

Eli Barzilay <eli@barzilay.org> wrote:
> Because sometimes I do use it, and when I'm not using it I don't pay
> any price at all.  I don't want the extra verbosity that will result
> from an unspecified order.

So what do you do when you want

    (map (get-fn) (get-list))

but you need GET-LIST to be evaluated first? What do you do when the
arguments aren't commutative? Do you use LET*? Do you write a macro?
What do you do in all the cases where fixed AEO doesn't solve the
problem?
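
Presumably you fall back on something like this LET* workaround --
GET-FN and GET-LIST being just the placeholder thunks from above:

    (let* ((lst (get-list))    ; must happen first
           (fn  (get-fn)))     ; then this
      (map fn lst))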

Now, explain why it's OK to write something different for those cases,
but not for the case where the args happen to fall in the right order.
Explain why it's a good idea to use two different idioms for the same
thing.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/9/2003 12:59:59 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> If I show you a simple example, you won't think it's "fragile,"
> because a simple example will be easy to understand. I don't have a
> complex example handy, because I try to avoid that programming
> style.
> 
> However, I'm surprised that you find it so difficult to
> believe. Suppose that you write:
> 
>     (proc (foo) (bar))
> 
> In the initial version, FOO and BAR are independent. You could swap
> them if you needed to. Later, you discover that there's an easier
> way to calculate BAR, but you need to share a variable with FOO. You
> could refactor the code, or you could use a global variable to
> communicate between the two procedures. Refactoring is difficult,
> but hidden global state is fragile. Which solution do you choose?

Either refactoring (which should be easy in such cases -- a simple
(begin (init-stuff) (proc (foo) (bar)))), or the so-called fragile
version with a simple comment.  You might consider this a bad solution,
but what about a piece of code that looks like this:

  (let ((x (foo)))
    (if (= x 0)
      0
      ;; if (foo) is zero, mustn't call bar
      (* x (bar))))

This is a similar dependency that you cannot do much about, besides
documenting it with that comment.  (Or changing to Haskell.)


> If you make it easy to rely on imperative order, I'll bet you that
> all but the most diligent programmers will use the global. If you're
> lucky, somebody will catch it in a code review and reject the
> fragile solution.  If not, you've just introduced a hidden
> dependency, and you've made the code much more difficult to
> maintain.

Nothing hidden about it -- (+ (foo) (bar)) is exactly that.  You
cannot change it to (+ (bar) (foo)), (+ (foo) (bar) (- (foo)) (foo)),
(+ 1 (foo) (bar) -1), etc without knowing what they do.


> That brings up an important point: Fixed AEO is only "easier" if
> your imperative order happens to match the argument order. In other
> words, you can write (map (get-fn) (get-list)), but you can't write
> (map (get-list) (get-fn)).

So you want to make everything "harder".  You must like bureaucracy.

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
0
eli666 (555)
12/9/2003 1:18:38 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> > "Bradd W. Szonye" <bradd+news@szonye.com> writes:
> >> IME, code that actually cares about the AEO is uncommon, and code
> >> that doesn't care is *very* common. Non-imperative code (including
> >> calls with unspecified AEO) is much easier to review, understand, and
> >> modify than imperative code. So why would you want to make the
> >> imperative version the *default*, when it's the less common case?
> 
> Eli Barzilay <eli@barzilay.org> wrote:
> > Because sometimes I do use it, and when I'm not using it I don't pay
> > any price at all.  I don't want the extra verbosity that will result
> > from an unspecified order.
> 
> So what do you do when you want
> 
>     (map (get-fn) (get-list))
> 
> but you need GET-LIST to be evaluated first? What do you do when the
> arguments aren't commutative? Do you use LET*? Do you write a macro?
> What do you do in all the cases where fixed AEO doesn't solve the
> problem?

Yes, I need to do something else to specify the different evaluation
order.  This has nothing to do with the cases where the l2r order
does work, where you want to force me to use an explicit order
expression too.

See other post.


> Now, explain why it's OK to write something different for those
> cases, but not for the case where the args happen to fall in the
> right order.  Explain why it's a good idea to use two different
> idioms for the same thing.

Because I can.  Seriously -- there are places where the l2r order
doesn't work, but I don't see the point of crippling all function
calls just to make this symmetrically crippled.

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
0
eli666 (555)
12/9/2003 1:24:42 AM
Matthias Blume wrote:
> "Anton van Straaten" <anton@appsolutions.com> writes:
>
> > Standard ML has been raised as an example of OoE having been fixed as
> > l2r.
> > But beyond SML, there are ML variants/derivatives that use the opposite
> > evaluation order - for example, OCaml uses r2l.  Whether or not anyone
> > considers OCaml to be an ML, this example points out an underlying
> > unresolved issue in this discussion, which is what the goals of RnRS
> > are, or
> > should be.
>
> SML and Ocaml are very different languages.  In fact, except for
> trivial ones, there are hardly any programs that are common to both.
> The languages are "similar" in many respects, but drafting a common
> standard for both of them would not only be very difficult but also
> fairly useless.

That may illustrate my point, in a way I didn't intend.  Some of the areas
of divergence between SML and OCaml seem to exist for no apparent reason.
At the very least, their respective syntaxes are one area where there seems
to be a lot of pointless difference.  Had there been a broader standard in
effect, SML and OCaml might enjoy more similarity.

And then, when I read Pierce's "Types and Programming Languages", I wouldn't
have to mentally (and sometimes actually) translate the OCaml examples into
SML, which I'm more familiar with.  I mention that as just one
semi-practical consequence of the differences.

One way to look at RnRS could be to say it exists to avoid such a wide
divergence between implementations of Scheme.

[Aside: I've since discovered that the case with OCaml is more subtle:
apparently the OCaml specification leaves evaluation order unspecified (as
of Jun 2001 at least, see e.g.
http://caml.inria.fr/archives/200106/msg00136.html ).  However, the one and
only OCaml implementation uses r2l evaluation, apparently to help with
compiler optimizations in the presence of currying.]

> > However, none of this negates the fact that there is value in a broader,
> > more inclusive standard that supports significant variation between
> > implementations - as RnRS has done historically, and as R5RS does today.
>
> I'm not sure this is true.

Does that mean you think there's zero value in a broader language standard,
or just that the value is small enough as to not be worth pursuing?  Also,
does that apply only to the OoE issue, or more generally?

> > If I want to design a new language that's "just like Scheme
> > except...", then aren't I better off making that language an
> > extension of standard Scheme, as opposed to an incompatible
> > variation?
>
> If you ask me personally, then the answer is "no".  Almost every
> change that I would personally have liked in Scheme is incompatible
> with RnRS.

I was reminded of the value of a broad standard for language experimentation
the other day, when looking at Oaklisp.  It is described as a dialect of
Scheme, and the original 1986 paper about it states that Scheme was used "in
order to minimize our contribution to the continual proliferation of
incompatible varieties of Lisp".  As a direct result of that, I found I was
able to spend close to zero time learning anything about the syntax or basic
semantics of Oaklisp, and by simply reading about Oaklisp types, I was able
to write useful Oaklisp code, combining Scheme features that I already knew
with unique features of Oaklisp related to type creation and manipulation.

A more modern example of this sort of thing would be PLT Scheme, which has
extended Scheme far beyond what R5RS mandates in all sorts of ways, yet
still provides an R5RS-compliant implementation.

Focusing narrowly on a language standard as primarily a way to ensure
portable code misses other benefits of a standard, including the ability to
port one's knowledge of a language between dialects.

So, regardless of any individual personally having a use for a language
standard that supports extensibility in multiple dimensions, there are uses
for such a standard, and the Scheme standard has illustrated that in
practice.

> > The narrower the Scheme standard becomes, the more not-quite-Schemes
> > there will be, and standard Scheme will become less useful as a
> > language experimentation platform.
>
> Obviously, any "standard" is in the way if your goal is to experiment
> with languages.  If experimentation is my concern, I couldn't care
> less what RnRS (or the Definition of Standard ML, for that matter)
> says.

My examples above are intended to show that this isn't entirely true.
There's benefit to have a reasonably well-specified common core which a
family of languages can share, and which experimental languages can use as a
jumping-off point.

> But if I want to write reliable software that will work 5 years
> from now on improved, new, or simply different implementations, then I
> want a standard that nails things down as precisely as possible.

I agree.  But R5RS is not really that sort of standard at the moment, and I
don't know that it should be.  I'm of the opinion that a separate standard
for "real world Scheme" or "Big Scheme" would be preferable than trying to
turn RnRS into such a standard.  Java has quite successfully used a
multi-standard model to address everything from minuscule smart cards and
embedded devices (Java Card & J2ME) to small/medium scale applications
(J2SE) to large enterprise applications (J2EE).  The spectrum Scheme is
addressing is similarly broad, but in different dimensions.

> > It is in this context that it makes little sense to fix OoE: it
> > imposes an arbitrary decision on implementations that has no
> > semantic justification.
>
> Of course, it *has* a semantic justification: It makes semantics
> considerably simpler.  Given that it does not invalidate any currently
> legal Scheme program, this alone is justification enough, IMO.

It does not make the semantics of any single implementation simpler, since
presumably any sane implementation will specify a fixed evaluation order
(exactly like the OCaml case I mentioned.)  The only additional "complexity"
is in the semantics of a deliberately loose specification for the language
family.  In that context, I'm not sure I see what's wrong with the
permute/unpermute hack, if it's seen as a placeholder for more specific
behavior provided by implementations.  I would expect semanticists working
with a formal specification to pick an evaluation order, just as
implementations would.

Anton



0
anton58 (1240)
12/9/2003 2:54:48 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> You could do the same thing with comments, but comments often describe
> the code incorrectly. That's not possible with code.

*Of course* it is possible with code -- *especially* in this case.
That's the very problem with leaving the order unspecified.  Someone
might accidentally rely on the particular order that her
implementation happens to use, and she will never know that she
depended on it because it did not break on her.  So the "information"
contained in using a procedure call instead of an explicit sequencing
construct can be as wrong as a comment.
0
find19 (1244)
12/9/2003 3:15:42 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> > "Scott G. Miller" <scgmille@freenetproject.org> writes:
> >> a*x + b*y + c*z
> >> 
> >> has a set of rules for evaluation.  Multiplication takes precedence
> >> over addition, but beyond that the meaning of the expression does
> >> not change if I mentally evaluate it from any direction.
> 
> Matthias Blume <find@my.address.elsewhere> wrote:
> > This statement is false! ...
> >
> > Let a*x = 1e50
> > Let b*y = -1e50
> > Let c*z = 1.0
> > 
> > To demonstrate, I use SML (because this way I can be sure in which
> > order things get evaluated):
> 
> Maybe you use SML to mentally evaluate arithmetic, but I doubt that
> Scott does.

It works the same way in almost every other language that I am aware
of, including Scheme.  If Scott G. Miller uses a mental model that
does not match the realities of programming, then what is the point of
the exercise?

> Yes, SML's inexact arithmetic may violate the commutative
> property, but that's a flaw in the implementation, not the mental model.

Maybe you can show me a mainstream language without this "flaw" then.
The fact is that in order to be able to reason correctly about the
properties of my code, I have to take the realities of IEEE arithmetic
etc. into account.

The bottom line is that the meaning of the expression almost certainly
*does* change if the expression is an expression in a program rather
than an expression on some mathematician's blackboard.

Matthias
0
find19 (1244)
12/9/2003 3:39:59 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> Matthias Blume <find@my.address.elsewhere> wrote:
> >>> What is the difference between a bug that is "masked" and one that
> >>> is not there?
> 
> > "Bradd W. Szonye" <bradd+news@szonye.com> writes:
> >> A masked bug may be entirely benign, or only mostly benign. The
> >> latter case is actually *worse* than having an obvious bug.
> 
> > You have not answered the question.
> 
> Yes, I did. If the bug does not exist, it's always benign. If the bug
> exists, but is masked, then it is only "mostly benign," and in practice,
> that's often worse than having an obvious, malignant bug. [...]

No, you still have not answered my question.  I did not ask you to
classify various bugs according to their relative severeness, I asked
for the definition of a bug that exists but is masked.  In other
words, how do I know that something is a bug that is masked as opposed
to no bug at all?

0
find19 (1244)
12/9/2003 3:45:27 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> It's a bug when you unintentionally rely on the AEO. When the programmer
> does it *intentionally*, it's easy to spot, and it's trivial to fix.
> When the programmer does it *accidentally* -- i.e., the arguments
> interact in unexpected ways -- that's a bug. Fixing the AEO will tend to
> mask that bug, and varying the AEO will tend to expose it.

No, for crying out loud!!!  If you fix the AEO, then it is no longer a
bug at all!  It is only a bug if you rely on an AEO that is not
guaranteed.

> How will you find the subtler bugs, if you can't shake up the AEO?

I don't know what "subtler bugs" you refer to here.  For the last
time, if the AEO is fixed, then relying on it, be it intentionally or
out of luck, is not a bug.
0
find19 (1244)
12/9/2003 3:48:58 AM
felix <felix@call-with-current-continuation.org> writes:

> On Mon, 08 Dec 2003 21:56:39 +0100, Marcin 'Qrczak' Kowalczyk
> <qrczak@knm.org.pl> wrote:
> 
> > On Mon, 08 Dec 2003 14:17:27 -0600, Matthias Blume wrote:
> >
> >> Almost every change that I would personally have liked in Scheme
> >> is incompatible with RnRS.
> >
> > May I ask which are these? I'm curious.
> >
> 
> Making Scheme the equivalent of SML... ;-)
> 
> 
> <runs for cover>

you better!
0
find19 (1244)
12/9/2003 3:52:32 AM
"Bradd W. Szonye" <bradd+news@szonye.com> wrote in message news:<slrnbt9u23.f9q.bradd+news@szonye.com>...
>> Matthias Blume <find@my.address.elsewhere> wrote:
> > This statement is false! ...
> >
> > To demonstrate, I use SML (because this way I can be sure in which
> > order things get evaluated):
> > Standard ML of New Jersey v110.44 [FLINT v1.5], November 6, 2003
> > - 1e50 - 1e50 + 1.0;
> > val it = 1.0 : real
> > - 1.0 - 1e50 + 1e50;
> > val it = 0.0 : real
> 
> Maybe you use SML to mentally evaluate arithmetic, but I doubt that
> Scott does. Yes, SML's inexact arithmetic may violate the commutative
> property, but that's a flaw in the implementation, not the mental model.

Inexact arithmetic does not violate the commutative law (unless SMLNJ
does something truly bizarre); it violates the associative law.

The difference is whether (+ a b c) means (+ a (+ b c)) or (+ (+ a b) c).
Scheme specifies the latter, but it has nothing to do with order of
evaluation of arguments, but is entirely the internal working of the
(+) procedure.
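
For instance, with IEEE doubles (the usual representation of Scheme
inexact reals) you get the same numbers Matthias showed from SML,
whichever order the arguments themselves are evaluated in:

    (+ (+ 1e50 -1e50) 1.0)   ; => 1.0
    (+ 1e50 (+ -1e50 1.0))   ; => 0.0, since -1e50 + 1.0 rounds back to -1e50
    (+ 1.0 1e50)             ; => 1e50
    (+ 1e50 1.0)             ; => 1e50; commutativity survives, associativity doesn't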

      -- Keith
0
12/9/2003 4:47:29 AM
kwright@gis.net (Programmer in Chief) writes:

> "Bradd W. Szonye" <bradd+news@szonye.com> wrote in message news:<slrnbt9u23.f9q.bradd+news@szonye.com>...
> >> Matthias Blume <find@my.address.elsewhere> wrote:
> > > This statement is false! ...
> > >
> > > To demonstrate, I use SML (because this way I can be sure in which
> > > order things get evaluated):
> > > Standard ML of New Jersey v110.44 [FLINT v1.5], November 6, 2003
> > > - 1e50 - 1e50 + 1.0;
> > > val it = 1.0 : real
> > > - 1.0 - 1e50 + 1e50;
> > > val it = 0.0 : real
> > 
> > Maybe you use SML to mentally evaluate arithmetic, but I doubt that
> > Scott does. Yes, SML's inexact arithmetic may violate the commutative
> > property, but that's a flaw in the implementation, not the mental model.
> 
> Inexact arithmetic does not violate the commutative law (unless SMLNJ
> does something truly bizarre); it violates the associative law.
> 
> The difference is whether (+ a b c) means (+ a (+ b c)) or (+ (+ a b) c).
> Scheme specifies the latter, but it has nothing to do with order of
> evaluation of arguments, but is entirely the internal working of the
> (+) procedure.

Right.  But didn't the way he phrased it (unfortunately that part has
been snipped) refer to leaving even the associativity of + open?

(OTOH, he also said "in math" -- which means that he was not actually
wrong.  So I take that part back.)

Matthias
0
find19 (1244)
12/9/2003 5:21:03 AM
Anton van Straaten <anton@appsolutions.com> wrote:
>> Of course, it *has* a semantic justification: It makes semantics
>> considerably simpler.  Given that it does not invalidate any currently
>> legal Scheme program, this alone is justification enough, IMO.
>
> It does not make the semantics of any single implementation simpler, since
> presumably any sane implementation will specify a fixed evaluation order
> (exactly like the OCaml case I mentioned.)  The only additional "complexity"
> is in the semantics of a deliberately loose specification for the language
> family.  In that context, I'm not sure I see what's wrong with the
> permute/unpermute hack, if it's seen as a placeholder for more specific
> behavior provided by implementations.  I would expect semanticists working
> with a formal specification to pick an evaluation order, just as
> implementations would.

Hi Anton. Sorry, but don't you miss something when you focus on a
single implementation? Suppose you deliver your code to an unknown set
of Scheme VM's built into future web browsers. Wouldn't it be nice to
be confident that your application will work correctly on any client?
It seems like simplifying the semantics of the family of
implementations buys you quite a bit in this case.

A declaration or alternate syntax could achieve calls with undefined
argument evaluation for optimization (or for Bradd to use to
communicate with his children's children's children in lieu of a
comment or more formal interface).

For tight semantics, our would-be web application writer could easily
ban these declarations, but he can't easily write a program to search
every function call for argument evaluation order bugs.

Random thought: Would fixing the argument evaluation order somehow
bring DS and CPS "closer" together, since the order is already fixed
in CPS?

-- 
Anthony Carrico
0
acarrico (19)
12/9/2003 8:07:57 AM
> "Bradd W. Szonye" wrote:
>> Maybe you use SML to mentally evaluate arithmetic, but I doubt that
>> Scott does. Yes, SML's inexact arithmetic may violate the commutative
>> property, but that's a flaw in the implementation, not the mental
>> model.

Programmer in Chief <kwright@gis.net> wrote:
> Inexact arithmetic does not violate the commutative law, (unless SMLNJ
> does something truly bizzare), it violates the associative law.

Mea culpa!
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/9/2003 8:28:44 AM
"Scott G. Miller" wrote:
>>>> "a*x + b*y + c*z" has a set of rules for evaluation.
>>>> Multiplication takes precedence over addition, but beyond that the
>>>> meaning of the expression does not change if I mentally evaluate it
>>>> from any direction.

Matthias Blume wrote:
>>> This statement is false! ...
>>> Let a*x = 1e50 ... b*y = -1e50 ... c*z = 1.0
>>> To demonstrate, I use SML (because this way I can be sure in which
>>> order things get evaluated):

> "Bradd W. Szonye" writes:
>> Maybe you use SML to mentally evaluate arithmetic, but I doubt that
>> Scott does.

> It works the same way in almost every other language that I am aware
> of, including Scheme.

Really? On PLT Scheme:

    (define a #e+1e50)
    (define b #e-1e50)
    (define c #e1)
    (+ a b c) => 1
    (+ b c a) => 1
    (+ c a b) => 1
    (+ c b a) => 1
    (+ b a c) => 1
    (+ a c b) => 1

As I said, SML's inexact arithmetic may violate some mathematical
properties, but Scott was talking about abstract mathematics, not some
flawed implementation in code.

>> Yes, SML's inexact arithmetic may violate the [associative] property,
>> but that's a flaw in the implementation, not the mental model.

> Maybe you can show me a mainstream language without this "flaw" then.

Does PLT Scheme count? How about C programs using libgmp?

> The fact is that in order to be able to reason correctly about the
> properties of my code, I have to take the realities of IEEE arithmetic
> etc. into account.

If you're limited to using inexact arithmetic, that's true. However, in
that situation, I'd be a fool to rely on procedure-call syntax to
express the sequencing. If the order is important to ensure numerical
correctness, I'd definitely want to be more explicit about it than that.

> The bottom line is that the meaning of the expression almost certainly
> *does* change if the expression is an expression in a program rather
> than an expression on some mathematician's blackboard.

That depends on the language you're using, and it's certainly a poor
argument for fixed AEO.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/9/2003 8:35:10 AM
Matthias Blume wrote:
> No, you still have not answered my question.  I did not ask you to
> classify various bugs according to their relative severeness, I asked
> for the definition of a bug that exists but is masked.  In other
> words, how do I know that something is a bug that is masked as opposed
> to no bug at all?

In general, it's very difficult to tell until the bug blows up. That's
why so many programmers advocate the "fast fail" approach, and that's
one of the reasons why I think the "mask, maybe fix bugs by fixing eval
order" approach is a bad idea.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/9/2003 8:36:42 AM
> "Bradd W. Szonye" writes:
>> It's a bug when you unintentionally rely on the AEO. When the
>> programmer does it *intentionally*, it's easy to spot, and it's
>> trivial to fix. When the programmer does it *accidentally* -- i.e.,
>> the arguments interact in unexpected ways -- that's a bug. Fixing the
>> AEO will tend to mask that bug, and varying the AEO will tend to
>> expose it.

Matthias Blume <find@my.address.elsewhere> wrote:
> No, for crying out loud!!!  If you fix the AEO, then it is no longer a
> bug at all!  It is only a bug if you rely on an AEO that is not
> guaranteed.

It's still a bug if you *accidentally* rely on it. I'm not talking about
the obvious cases like (display (append (read) (read)))! Yes, a fixed
AEO fixes those cases, but they aren't very interesting, because there's
a trivial transformation that makes them unnecessary, and a code review
can easily catch the mistake if you use it in a language with
unspecified AEO.

I'm talking about the cases where it's not obvious that the arguments
depend on each other, where the programmer *accidentally* relies on AEO.
Those cases are much more interesting, because it's tough to spot them
in a code review or a bughunt/maintenance job. Those cases are what
makes code fragile, what makes it harder for maintainers to repair and
reorganize code. Sometimes the code is merely fragile; other times,
there are actually bugs lurking in the corner cases, but the fixed AEO
masks most of the poor design.

If you leave AEO unspecified, you can use tools to "shake up" the AEO,
which will tend to expose the fragility and the latent bugs. David Rush
gets the same effect by porting to several different Schemes.
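
As a rough sketch of what such a tool could look like at the source
level -- SHAKE and SHAKE-AUX are made-up names here, and a real
"shaker" would live in the compiler rather than in a macro -- the
following forces right-to-left evaluation of a call's arguments, so
code that silently assumes left-to-right fails early:

    (define-syntax shake
      (syntax-rules ()
        ((_ op arg ...)
         (shake-aux op () () (arg ...)))))

    (define-syntax shake-aux
      (syntax-rules ()
        ;; Bindings accumulate newest-first, so the LET* evaluates the
        ;; arguments right to left, while the variables stay in source
        ;; order when OP is finally applied.  Hygiene makes each
        ;; recursively introduced T a distinct variable.
        ((_ op (binding ...) (var ...) ())
         (let* (binding ...)
           (op var ...)))
        ((_ op (binding ...) (var ...) (e rest ...))
         (shake-aux op ((t e) binding ...) (var ... t) (rest ...)))))

For example, (shake list (begin (display "a") 1) (begin (display "b") 2))
prints "ba" but still returns (1 2). Flip the accumulation and you get
left-to-right, so a suspect call site can be run both ways.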

If you fix the AEO, however, and programmers depend on it in some cases,
those tools become unusable. Also, in my experience, programmers are
more likely to write the fragile code in the first place, in that kind
of environment. It encourages poor style.

>> How will you find the subtler bugs, if you can't shake up the AEO?

> I don't know what "subtler bugs" you refer to here.  For the last
> time, if the AEO is fixed, then relying on it, be it intentionally or
> out of luck, is not a bug.

It does result in fragile code, and it does mask bugs. Specifically, it
can "fix" most of a design flaw, such that the bug only shows up in rare
corner cases, instead of failing all the time. That makes the design
flaw much harder to uncover and repair. Do you understand now?
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/9/2003 8:47:17 AM
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> Suppose that you write:
>> 
>>     (proc (foo) (bar))
>> 
>> In the initial version, FOO and BAR are independent. You could swap
>> them if you needed to. Later, you discover that there's an easier way
>> to calculate BAR, but you need to share a variable with FOO. You
>> could refactor the code, or you could use a global variable to
>> communicate between the two procedures. Refactoring is difficult, but
>> hidden global state is fragile. Which solution do you choose?

Eli Barzilay <eli@barzilay.org> wrote:
> Either refactoring (which should be easy in such cases, a simple
> (begin (init-stuff) (proc (foo) (bar))) or the so-called fragile
> version with a simple comment.

Refactoring may be easy, but stuffing in a global under the hood is even
easier. If you make the latter option a little more difficult (by
requiring explicit syntax for it, like LET* or INVOKE*), you encourage
the lazy programmers to do it the right way -- and if they don't, at
least they've documented the poor style in a way that can't get "lost"
the way that comments do. I don't trust comments; I've seen far too many
of them get out of sync with the code.

>> If you make it easy to rely on imperative order, I'll bet you that
>> all but the most diligent programmers will use the global. If you're
>> lucky, somebody will catch it in a code review and reject the fragile
>> solution.  If not, you've just introduced a hidden dependency, and
>> you've made the code much more difficult to maintain.

> Nothing hidden about it -- (+ (foo) (bar)) is exactly that.  You
> cannot change it to (+ (bar) (foo)) ....

Sure, you can, if you have the guarantee that the code doesn't rely on
the argument evaluation order. A maintainer might want to do exactly
that. If he has good tools (e.g., a compiler option that "shakes up" the
order), he can easily verify that the program obeys the language rules,
that it doesn't rely on imperative argument style.

>> That brings up an important point: Fixed AEO is only "easier" if your
>> imperative order happens to match the argument order. In other words,
>> you can write (map (get-fn) (get-list)), but you can't write (map
>> (get-list) (get-fn)).

> So you want to make everything "harder".  You must like bureaucracy.

No, I want to make code review and maintenance *easier*. Using
imperative argument style makes it harder, so I'll gladly discourage it
whenever possible, and I *strongly* oppose any suggestion that we should
make functional argument style indistinguishable from imperative
argument style.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/9/2003 8:55:37 AM
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> You could do the same thing with comments, but comments often
>> describe the code incorrectly. That's not possible with code.

Matthias Blume <find@my.address.elsewhere> wrote:
> *Of course* it is possible with code -- *especially* in this case.
> That's the very problem with leaving the order unspecified.

The code says exactly what it means. That may not match what the
programmer *intended* it to mean, but maintainers don't generally have
access to the programmer's thoughts.

> Someone might accidentally rely on the particular order that her
> implementation happens to use, and she will never know that she
> depended on it because it did not break on her.

That's a good reason *not* to support imperative argument style in a
translator. I'd much prefer an implementation that uses a "perverse"
order (via a "debug" switch, if necessary).

> So the "information" contained in using a procedure call instead of an
> explicit sequencing construct can be as wrong as a comment.

Only if the programmer doesn't RTFM, *and* the implementation holds his
hand and tries to cover up the mistake instead of exposing it.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/9/2003 8:58:44 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> > "Bradd W. Szonye" <bradd+news@szonye.com> writes:
> 
> >> If you make it easy to rely on imperative order, I'll bet you that
> >> all but the most diligent programmers will use the global. If you're
> >> lucky, somebody will catch it in a code review and reject the fragile
> >> solution.  If not, you've just introduced a hidden dependency, and
> >> you've made the code much more difficult to maintain.
> 
> > Nothing hidden about it -- (+ (foo) (bar)) is exactly that.  You
> > cannot change it to (+ (bar) (foo)) ....
> 
> Sure, you can, if you have the guarantee that the code doesn't rely
> on the argument evaluation order. A maintainer might want to do
> exactly that.

Why would a "maintainer" want to change that?  That seems like a basic
thing that you still did not justify.  If my language had fixed order,
then by definition you should look into my code if you ever want to
change the order.  That would be slightly easier if no order was
defined, but the practicality of things is that there is always *some*
order and you should always look at my code and see if it has some bad
side effects, (and a "shaker" wouldn't help you with the 2^32-17 cases
you won't check).

> > So you want to make everything "harder".  You must like bureaucracy.
> 
> No, I want to make code review and maintenance *easier*. Using
> imperative argument style makes it harder, so I'll gladly discourage
> it whenever possible, and I *strongly* oppose any suggestion that we
> should make functional argument style indistinguishable from
> imperative argument style.

And yet, it looks like whatever amounts of verbosity is spilled,
you're stuck in the vague world of "subtle", "benign", "masked out"
bugs, "fragile code", "code reviewers and maintainers", "style" -- you
keep avoiding any less hand-waving arguments.

This thread should be put out of its misery.

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
0
eli666 (555)
12/9/2003 9:24:02 AM
Eli Barzilay <eli@barzilay.org> wrote:
>>> Nothing hidden about it -- (+ (foo) (bar)) is exactly that.  You
>>> cannot change it to (+ (bar) (foo)) ....

> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> Sure, you can, if you have the guarantee that the code doesn't rely
>> on the argument evaluation order. A maintainer might want to do
>> exactly that.

> Why would a "maintainer" want to change that?

It depends on what the functions actually do. Maintainers change code.
They reorganize it. They fiddle with it to fix bugs, to improve
efficiency, to improve usability -- lots of reasons, and lots of code
reorganization. Fragile code is bad, because it makes it harder to make
those changes.

> That seems like a basic thing that you still did not justify.

It *is* a very basic thing, and it's a major part of my day job. It's so
basic that I have a hard time believing that it's not obvious to you. Do
you do much maintenance work? Imperative argument style is a great way
to create "write-only" code.

> If my language had fixed order, then by definition you should look
> into my code if you ever want to change the order.

Which is a *drawback*! With unspecified order, you have at least some
chance that it's possible to reorganize the code without a major code
review. Even better if you can use a "shaker" to expose latent
dependencies, and even better if the original developers used one.

With a fixed order, you have no choice but to drill down below the
abstraction barriers and determine exactly what every piece of code
does, for *every* call. I prefer a style that leaves those barriers
intact, where you can reason about the abstract semantics instead of
dragging in the whole ball of spaghetti. That doesn't *always* work, but
it doesn't even have a chance when you use imperative argument style.

> That would be slightly easier if no order was defined, but the
> practicality of things is that there is always *some* order and you
> should always look at my code and see if it has some bad side effects,
> (and a "shaker" wouldn't help you with the 2^32-17 cases you won't
> check).

A "shaker" is an automated testing tool, so of course it won't find all
the bugs. I'd bet that you'd shake out a large number of them just by
flipping between left->right and right->left, though. Once you do that,
there's a good chance that you can reason about the program *without*
breaking abstraction barriers, which greatly reduces the workload on
maintainers.

>> ... I want to make code review and maintenance *easier*. Using
>> imperative argument style makes it harder, so I'll gladly discourage
>> it whenever possible, and I *strongly* oppose any suggestion that we
>> should make functional argument style indistinguishable from
>> imperative argument style.

> And yet, it looks like whatever amounts of verbosity is spilled,
> you're stuck in the vague world of "subtle", "benign", "masked out"
> bugs, "fragile code", "code reviewers and maintainers", "style" -- you
> keep avoiding any less hand-waving arguments.

That's not hand-waving. That's a description of my day job, and it works
very well in practice. Sorry, but I can't explain all of my programming,
maintenance, and usability experience to you in a Usenet discussion. In
my experience, unnecessary imperative style (especially when it's
implicit) makes maintenance more expensive, because it makes code
fragile and more difficult to understand.

If your experience is different, you should consider yourself lucky.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/9/2003 9:43:08 AM
> Anton van Straaten <anton@appsolutions.com> wrote:
>> [Fixed AEO] does not make the semantics of any single implementation
>> simpler, since presumably any sane implementation will specify a
>> fixed evaluation order ....

acarrico@memebeam.org <acarrico@memebeam.org> wrote:
> Hi Anton. Sorry, but don't you miss something when you focus on a
> single implementation?

Yes; when you rely on implementation-defined behavior, you sacrifice
some portability. In practice, that's not much of a hardship, because
real programs tend to go beyond what you'll find in a language standard
anyway. For example, very few languages define the system environment in
a portable way, and (arguably) none of them do it in a way that's
actually usable on all environments. Compared to the environment
problem, a little bit of implementation-dependent language use is small
beans.

> Suppose you deliver your code to an unknown set of Scheme VM's built
> into future web browsers. Wouldn't it be nice to be confident that
> your application will work correctly on any client?

That gets into ABI issues and environment issues, which traditionally go
well beyond language definitions. These things usually go into
extensions, because really nailing it down in a language spec would
bloat the language to unusable proportions.

Some standards, like POSIX, try to specify both a language (by
reference) and an environment, but even those aren't entirely portable.

> A declaration or alternate syntax could achieve calls with undefined
> argument evaluation for optimization ....

Bad idea, IMO, because functional argument style is also useful for
"maintainer optimization," not just "compiler optimization" -- but
typical programmers won't use it that way if you make them type extra to
get it.

> For tight semantics, our would-be web application writer could easily
> ban these declarations ....

I wouldn't call that "tight semantics." It blurs the distinction between
procedure invocation and imperative evaluation. That's "sloppy," not
"tight."
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/9/2003 10:03:56 AM
On Tue, 09 Dec 2003 00:59:59 +0000, Bradd W. Szonye wrote:

> Now, explain why it's OK to write something different for those cases,
> but not for the case where the args happen to fall in the right order.

That it's not always the order which is needed doesn't imply that it
should never be guaranteed to be the order which is needed (assuming
the code is imperative, i.e. the order matters at all).

It doesn't have to be a coincidence if left-to-right is needed. I was
recently writing a parser in a language with left-to-right order and
it was handy that I could put parsing of subexpressions (and taking
the current source location) as direct arguments of node constructors.
Node constructors usually have arguments in the order corresponding
to the textual order of these parts in the source, and languages usually
have a syntax which can be parsed left-to-right without much lookahead
and not necessarily in the other direction.

I would also specify that map processes elements in order. Now if I need
a map-in-order in Scheme, I must write it myself.
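
It is at least short to write. A sketch of the single-list case, where
the LET forces the call on the current element before the recursive
call:

  (define (map-in-order f lst)
    (if (null? lst)
        '()
        (let ((head (f (car lst))))   ; force the call on the head first
          (cons head (map-in-order f (cdr lst))))))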

I agree that there is a value in information about which parts are functional
and thus can be freely reordered or moved around, and there is a value
in writing things functionally if both styles fit. I believe interfaces
should be explicitly documented, not inferred from the ways functions are
used. After all, if I see an application to two function calls, I'm not
sure whether the order doesn't matter or the author forgot that the order
of evaluation of arguments is unspecified.

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/

0
qrczak (1266)
12/9/2003 11:22:11 AM
Bradd W. Szonye wrote:

[...]

> Matthias Blume <find@my.address.elsewhere> wrote:
> 
>>No, for crying out loud!!!  If you fix the AEO, then it is no longer a
>>bug at all!  It is only a bug if you rely on an AEO that is not
>>guaranteed.
> 
> 
> It's still a bug if you *accidentally* rely on it. [...]

I have no response to this.

There is no response to this.

I just wanted to, kinda, collectively stare at it for a moment.

-thant


0
thant (332)
12/9/2003 3:05:02 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> > "Bradd W. Szonye" <bradd+news@szonye.com> writes:
> >> You could do the same thing with comments, but comments often
> >> describe the code incorrectly. That's not possible with code.
> 
> Matthias Blume <find@my.address.elsewhere> wrote:
> > *Of course* it is possible with code -- *especially* in this case.
> > That's the very problem with leaving the order unspecified.
> 
> The code says exactly what it means.

Except if the language intentionally leaves the meaning vague...

> That may not match what the
> programmer *intended* it to mean, but maintainers don't generally have
> access to the programmer's thoughts.

The problem here is that the annotation is in the wrong place: Whether
two function calls depend on each other so that order matters is a
property of those functions' definitions and not a property of how
they are being used.  By trying to code the information into the
application site one can make mistakes just as easily as with
comments, and the information can get out of sync just as easily as
with comments, too.

> > Someone might accidentally rely on the particular order that her
> > implementation happens to use, and she will never know that she
> > depended on it because it did not break on her.
> 
> That's a good reason *not* to support imperative argument style in a
> translator. I'd much prefer an implementation that uses a "perverse"
> order (via a "debug" switch, if necessary).

This is a funny leap that you are taking.  To me this is a good reason
to fix the order of evaluation because it means that relying on it
will never be an accident.

> > So the "information" contained in using a procedure call instead of an
> > explicit sequencing construct can be as wrong as a comment.
> 
> Only if the programmer doesn't RTFM, *and* the implementation holds his
> hand and tries to cover up the mistake instead of exposing it.

Huh?  You lost me.

Anyway, never mind.  Let's give it a rest.
0
find19 (1244)
12/9/2003 4:47:58 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> Thant Tessman <thant@acm.org> wrote:
> > A well-specified language is far more valuable to maintainers than an
> > attempt by a maintainer to divine the intention of the programmer by
> > their choice of construct which may or may not have
> > been...er...intentional.
> 
> The language is well-specified, though.

No.  It does not give a well-defined meaning to every accepted
program. Period.
0
find19 (1244)
12/9/2003 4:51:57 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> Really? On PLT Scheme:
> 
>     (define a #e+1e50)
>     (define b #e-1e50)
>     (define c #e1)
>     (+ a b c) => 1
>     (+ b c a) => 1
>     (+ c a b) => 1
>     (+ c b a) => 1
>     (+ b a c) => 1
>     (+ a c b) => 1

You are using exact arithmetic, so no wonder you get an exact result.
Try inexact.

> Does PLT Scheme count? How about C programs using libgmp?

No.  I was talking about IEEE floating point arithmetic.  It is not
flawed.  It works the way it is specified.  Yes, it does not match but
merely approximates mathematical reals.  That's so by design.

> That depends on the language you're using, and it's certainly a poor
> argument for fixed AEO.

The whole argument is a poor one for or against fixed order of
evaluation.  The two things have very little to do with each other.
0
find19 (1244)
12/9/2003 4:57:29 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> Matthias Blume wrote:
> > No, you still have not answered my question.  I did not ask you to
> > classify various bugs according to their relative severeness, I asked
> > for the definition of a bug that exists but is masked.  In other
> > words, how do I know that something is a bug that is masked as opposed
> > to no bug at all?
> 
> In general, it's very difficult to tell until the bug blows up. That's
> why so many programmers advocate the "fast fail" approach, and that's
> one of the reasons why I think the "mask, maybe fix bugs by fixing eval
> order" approach is a bad idea.

Oh, come on now!  Will you finally answer the question?  If a bug can
blow up, then it is certainly not masked.  If it cannot blow up, then
it is not a bug.
0
find19 (1244)
12/9/2003 4:59:48 PM
Regarding the conflation of argument order and evaluation order:

Marcin 'Qrczak' Kowalczyk wrote:
> That [the argument order is] not always the [imperative] order which is
> needed doesn't imply that it should never be guaranteed to be the
> order which is needed (assuming the code is imperative, i.e. the order
> matters at all).

Agreed, which is why I'm not opposed to syntax which guarantees a
particular argument evaluation order. I simply believe that the language
syntax should discourage the use of imperative argument style by
default.

> It doesn't have to be a coincidence if left-to-right is needed. I was
> recently writing a parser in a language with left-to-right order and
> it was handy that I could put parsing of subexpressions (and taking
> the current source location) as direct arguments of node constructors.
> Node constructors usually have arguments in the order corresponding to
> the textual order of these parts in the source, and languages usually
> have a syntax which can be parsed left-to-right without much lookahead
> and not necessarily in the other direction.

And in those cases, it would make good sense to use the syntax that
guarantees imperative argument evaluation; the functions are
specifically designed to support it. Likewise, I don't have any problem
with AND, OR, IF, and COND. All of those forms are specially designed to
provide a specific evaluation order.

I just don't want to see imperative argument style become the default,
because I feel that it's fragile, error-prone, and difficult to
maintain. I'd much prefer some kind of simple library syntax, so that
it's easy to use when you really want it, but so that programmers won't
use it by default. Different syntax also has the advantage of warning
maintainers that "there be dragons here."

Also, library syntax like (invoke* f args ...) has the advantage of not
using the function name itself for the special form, which means that
the function is still a first-class object. Compare that to AND: If you
need to use AND as a map function, you need to roll your own, because
the standard version is second-class syntax, not a first-class
procedure.
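
A possible definition, just as a sketch -- INVOKE* and FORCE-IN-ORDER
are made-up names, not anything standard: wrap each argument expression
in a thunk and force the thunks strictly left to right, then apply the
procedure:

    (define-syntax invoke*
      (syntax-rules ()
        ((_ f arg ...)
         (let ((proc f))               ; pin down the operator first
           (apply proc
                  (force-in-order (list (lambda () arg) ...)))))))

    (define (force-in-order thunks)
      ;; Call the thunks strictly in list order; the LET pins the order
      ;; that a naive CONS would leave unspecified.
      (if (null? thunks)
          '()
          (let ((v ((car thunks))))
            (cons v (force-in-order (cdr thunks))))))

With that, (invoke* list (read) (read)) reads in source order on any
R5RS implementation, while plain calls stay free for the compiler (and
the maintainer) to reorder.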

> I agree that there is a value in information about which parts are
> functional and thus can be freely reordered or moved around, and there
> is a value in writing things functionally if both styles fit. I
> believe interfaces should be explicitly documented, not inferred from
> the ways functions are used.

I prefer languages that encourage styles where "the way X is used" is
sufficient documentation, at least for understanding the detailed
design. That's one of the neat things about Eiffel: It encourages a
style where you document all interfaces and invariants in a way that the
compiler can understand, so that the programmer and the compiler both
interpret the code the same way. Good stuff.

> After all, if I see an application to two function calls, I'm not sure
> whether the order doesn't matter or the author forgot that the order
> of evaluation of arguments is unspecified.

True. Or the author may have even been accustomed to a particular
implementation-defined order; R5RS doesn't define it, but some imps do.
That's another reason why I prefer stuff like the library syntax
approach for "tagging" imperative style and the "shaker" for exposing
untagged imperatives. The former is a tool to let you say exactly what
you mean, and the latter is a tool for automatically verifying that you
did say what you meant.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/9/2003 5:32:37 PM
<acarrico@memebeam.org> wrote:
> Anton van Straaten <anton@appsolutions.com> wrote:
> >> Of course, it *has* a semantic justification: It makes semantics
> >> considerably simpler.  Given that it does not invalidate any currently
> >> legal Scheme program, this alone is justification enough, IMO.
> >
> > It does not make the semantics of any single implementation simpler, since
> > presumably any sane implementation will specify a fixed evaluation order
> > (exactly like the OCaml case I mentioned.)  The only additional "complexity"
> > is in the semantics of a deliberately loose specification for the language
> > family.  In that context, I'm not sure I see what's wrong with the
> > permute/unpermute hack, if it's seen as a placeholder for more specific
> > behavior provided by implementations.  I would expect semanticists working
> > with a formal specification to pick an evaluation order, just as
> > implementations would.
>
> Hi Anton. Sorry, but don't you miss something when you focus on a
> single implementation? Suppose you deliver your code to an unknown set
> of Scheme VM's built into future web browsers. Wouldn't it be nice to
> be confident that your application will work correctly on any client?

Kinda like the case with Javascript in browsers today?  Ahahahahaha!!  ;)

I addressed your point directly prior to the part you quoted.  I wrote:

>Matthias Blume:
>> But if I want to write reliable software that will work 5 years
>> from now on improved, new, or simply different implementations, then I
>> want a standard that nails things down as precisely as possible.
>
> I agree.  But R5RS is not really that sort of standard at the moment,
> and I don't know that it should be.  I'm of the opinion that a separate
> standard for "real world Scheme" or "Big Scheme" would be preferable
> than trying to turn RnRS into such a standard.

Back to you:
> It seems like simplifying the semantics of the family of
> implementations buys you quite a bit in this case.

Yes, and for people for whom this is important, a standard defining things
like this may be a good idea in general, and it's one I'd support.
Specifically, on the OoE issue, though, in writing this post I came to the
conclusion that fixing a single OoE in the language standard is the wrong
way to go.

What I was trying to point out in my previous post is that Scheme has other
goals, and that not every Scheme program or tool cares about what people are
doing in web browsers, or web applications, or blogs, or scripting
languages.  Turning RnRS into the language standard that everyone familiar
with other languages expects would be a serious mistake.

The OoE issue is a good litmus test of this, since specifying it for the
entire Scheme language family would be gratuitous and, IMO, seriously wrong.
To pick an example of why it might matter, I would consider it very
unfortunate if I ran a partial evaluator to specialize a Scheme program and
it refused to reduce an argument consisting of a function application
because it couldn't prove that another application next to it wasn't
side-effecting.  The fact that use of techniques like partial evaluation is
not commonplace right now is not a good argument for dumbing down the
language to cripple their use in future.

> A declaration or alternate syntax could achieve calls with undefined
> argument evaluation for optimization

The issue goes beyond optimization, although it depends how you define
"optimization".  Is deriving a compiler from an interpreter an optimization?

I have no problem with an alternate syntax for function applications with
fixed order of evaluation, though.  After all, they're the rarer case (or
should be, in Scheme).  You can do it right now as a macro, and it could be
standardized via a SRFI.  Actually, in PLT Scheme, you could easily define a
language variant which would give you the OoE you want on a module-wide
basis.  I'd rather see *that* sort of thing (module languages) standardized,
than OoE.  Then programmers can effectively write declarations that say
"this code relies on such-and-such OoE", without forcing that decision on
all other code via a language standard.
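
To make that concrete, here is a rough sketch of such a macro, using the
INVOKE* name suggested upthread.  It is only illustrative - a SRFI would
presumably be more careful - but plain R5RS SYNTAX-RULES is enough to
force left-to-right evaluation:

(define-syntax invoke*
  (syntax-rules ()
    ;; internal rule: everything evaluated, apply in the written order
    ((_ "step" (f v ...))
     (f v ...))
    ;; internal rule: evaluate the next subexpression, left to right
    ((_ "step" (done ...) e rest ...)
     (let ((t e))
       (invoke* "step" (done ... t) rest ...)))
    ;; entry point: operator and operands, nothing evaluated yet
    ((_ op arg ...)
     (invoke* "step" () op arg ...))))

;; (invoke* baz (foo) (bar)) evaluates baz, then (foo), then (bar),
;; and only then performs the call, on any conforming implementation.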

> For tight semantics, our would-be web application writer could easily
> ban these declarations, but he can't easily write a program to search
> every function call for argument evaluation order bugs.

He *can* easily learn not to depend on order of evaluation in function
applications.  A lot more easily than anyone can learn the differences in
exception handling or DOM API between Microsoft's and Mozilla's Javascript,
across multiple browser versions.

> Random thought: Would fixing the argument evaluation order somehow
> bring DS and CPS "closer" together, since the order is already fixed
> in CPS?

I think that highlights the argument *against* fixing the evaluation order:
full CPS is a great representation for compilers to use, not so great for
hand-written programs.  Compilers use it after they've decided on the
evaluation order - ultimately, *some* evaluation order has to be used, and
there are any number of ways it may be picked.  But having the language
force an evaluation order to be specified in your source code even when you
don't need one is just wrong.

I want to be able to express what I mean in a language, not what the
language requires me to express whether I like it or not.  And I especially
don't want to be forced to express things that I don't mean to express to
make up for poor education or practices amongst other programmers.

Summary: fixing eval order in the language standard is morally wrong.  If
some find it necessary to have a fixed eval order, let it be an option
available via declaration.

Anton



0
anton58 (1240)
12/9/2003 5:52:35 PM
Matthias Blume <find@my.address.elsewhere> wrote:
>>> No, you still have not answered my question.  I did not ask you to
>>> classify various bugs according to their relative severeness, I
>>> asked for the definition of a bug that exists but is masked.  In
>>> other words, how do I know that something is a bug that is masked as
>>> opposed to no bug at all?

> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> In general, it's very difficult to tell until the bug blows up.
>> That's why so many programmers advocate the "fast fail" approach, and
>> that's one of the reasons why I think the "mask, maybe fix bugs by
>> fixing eval order" approach is a bad idea.

> Oh, come on now!  Will you finally answer the question?

I thought the answer was clear from context and common usage, but
apparently not. I would define a "masked bug" as "a flaw in the
program's design or implementation that is usually or entirely
suppressed by some other part of the system." They're usually exposed by
corner cases or by seemingly unrelated changes to the program.

> If a bug can blow up, then it is certainly not masked.  If it cannot
> blow up, then it is not a bug.

Both of these statements are false, in my experience. A "masked bug" may
only be masked for common cases, such that it can blow up, but only for
unusual inputs. And even in the cases where the flaw is entirely masked,
I would still call it a "bug."

For example, suppose that you have an implementation of the UNIX "sort"
command. The author botched the numeric sorting algorithm, but it has no
effect on the output because he *also* botched the command-line
processing such that the command doesn't call the algorithm when it's
supposed to. The botched algorithm is still a "bug" even though it can't
affect the output in the program's current state.

Another common example: The program contains a flawed algorithm, but the
output passes through some filter that corrects it. (The "filter" may be
an optimizer that accidentally fixes the output, or it may be a
defensive programming construct specifically designed to fix it.) The
flaw is a "bug" because it doesn't do what it's supposed to, but it's
"masked" because other parts of the program compensate for it. This kind
of problem is expensive, because it makes the program brittle, and
because it's easy to unmask the bug.

That's roughly how I see the imperative argument style: Often, it's a
filter that paves over flaws in the program design, and it encourages
programmers to rely on that filter even when they shouldn't.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/9/2003 5:56:00 PM
Marcin 'Qrczak' Kowalczyk wrote:
> I would also specify that map processes elements in order. Now if I need
> a map-in-order in Scheme, I must write it myself.

Or use SRFI-1.

0
12/9/2003 6:18:05 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> Correct. I'll concede that Matthias Blume is correct when he claims that
> unspecified AEO greatly increases the complexity of formal proofs.

I won't.  Left to right (or right to left for that matter) evaluation
is compatible with the Scheme standard, so just use one (or the other)
in the proofs.
0
jrm (1310)
12/9/2003 6:24:12 PM
Matthias Blume <find@my.address.elsewhere> writes:

> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>
>> Matthias Blume wrote:
>> > No, you still have not answered my question.  I did not ask you to
>> > classify various bugs according to their relative severeness, I asked
>> > for the definition of a bug that exists but is masked.  In other
>> > words, how do I know that something is a bug that is masked as opposed
>> > to no bug at all?
>> 
>> In general, it's very difficult to tell until the bug blows up. That's
>> why so many programmers advocate the "fast fail" approach, and that's
>> one of the reasons why I think the "mask, maybe fix bugs by fixing eval
>> order" approach is a bad idea.
>
> Oh, come on now!  Will you finally answer the question?  If a bug can
> blow up, then it is certainly not masked.  If it cannot blow up, then
> it is not a bug.

Since I think I was the first to mention `masked bugs', I'll tell you
what I meant by it.  It is a bug that *can* blow up, but *usually*
does not because of some coincidence.
0
jrm (1310)
12/9/2003 6:29:53 PM
Joe Marshall <jrm@ccs.neu.edu> writes:

> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
> 
> > Correct. I'll concede that Matthias Blume is correct when he claims that
> > unspecified AEO greatly increases the complexity of formal proofs.
> 
> I won't.  Left to right (or right to left for that matter) evaluation
> is compatible with the Scheme standard, so just use one (or the other)
> in the proofs.

You're kidding, right?
0
find19 (1244)
12/9/2003 6:35:12 PM
> Marcin 'Qrczak' Kowalczyk <qrczak@knm.org.pl> writes:
>
>> If it's OK for macros to change whether arguments are evaluated, it's OK
>> to change the evaluation order as well even if it was fixed for functions.

Matthias Blume <find@my.address.elsewhere> writes:

> Exactly.  In fact, with macros the rule is exceedingly simple: The
> order of evaluation is completely determined by the semantics of the
> output of the macro transformer.

It may be, but the transformer need not be identical on all scheme
systems.  Suppose I write this:

(let ((x (foo))
      (y (bar)))
  (baz x y))

One system *could* expand this to:
  ((lambda (x y) (baz x y)) (foo) (bar))

another could expand it to:
  ((lambda (y x) (baz x y)) (bar) (foo))

If you specify the order of evaluation of arguments, then you need to
specify the order of evaluation for macros that evaluate some or all
of their arguments as well.

0
jrm (1310)
12/9/2003 6:35:23 PM
Joe Marshall <jrm@ccs.neu.edu> writes:

> Since I think I was the first to mention `masked bugs', I'll tell you
> what I meant by it.  It is a bug that *can* blow up, but *usually*
> does not because of some coincidence.

A coincidence such as, say, having the compiler happen to choose an
order of evaluation (out of the many that are permitted by the
language definition) that happens to be one of the few that are
consistent with the way I wrote my code?

As I see it, leaving the order of evaluation unspecified has a far
higher potential of "masking" bugs in this sense than the other way around.
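
(For the record, the sort of coincidence I mean - a made-up example:

  (define (make-point x y) (cons x y))

  ;; Meant to read an x coordinate and then a y coordinate.  It does so
  ;; only on implementations that happen to evaluate the operands left
  ;; to right; anywhere else the coordinates come back silently swapped.
  (define (read-point port)
    (make-point (read port) (read port)))

Nothing flags the problem; the code just "works" until it doesn't.)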
0
find19 (1244)
12/9/2003 6:38:21 PM
Joe Marshall <jrm@ccs.neu.edu> writes:

> > Marcin 'Qrczak' Kowalczyk <qrczak@knm.org.pl> writes:
> >
> >> If it's OK for macros to change whether arguments are evaluated, it's OK
> >> to change the evaluation order as well even if it was fixed for functions.
> 
> Matthias Blume <find@my.address.elsewhere> writes:
> 
> > Exactly.  In fact, with macros the rule is exceedingly simple: The
> > order of evaluation is completely determined by the semantics of the
> > output of the macro transformer.
> 
> It may be, but the transformer need not be identical on all scheme
> systems.  Suppose I write this:
> 
> (let ((x (foo))
>       (y (bar)))
>   (baz x y))
> 
> One system *could* expand this to:
>   ((lambda (x y) (baz x y)) (foo) (bar))
> 
> another could expand it to:
>   ((lambda (y x) (baz x y)) (bar) (foo))
> 
> If you specify the order of evaluation of arguments, then you need to
> specify the order of evaluation for macros that evaluate some or all
> of their arguments as well.

Yes, of course.  Where is the problem?
0
find19 (1244)
12/9/2003 6:39:30 PM
Matthias Blume <find@my.address.elsewhere> writes:

> Joe Marshall <jrm@ccs.neu.edu> writes:
>
>> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> 
>> > Correct. I'll concede that Matthias Blume is correct when he claims that
>> > unspecified AEO greatly increases the complexity of formal proofs.
>> 
>> I won't.  Left to right (or right to left for that matter) evaluation
>> is compatible with the Scheme standard, so just use one (or the other)
>> in the proofs.
>
> You're kidding, right?

No, I'm not.  What's wrong with just assuming left to right?
0
jrm (1310)
12/9/2003 7:10:25 PM
Matthias Blume <find@my.address.elsewhere> writes:

> Joe Marshall <jrm@ccs.neu.edu> writes:
>
>> > Marcin 'Qrczak' Kowalczyk <qrczak@knm.org.pl> writes:
>> >
>> >> If it's OK for macros to change whether arguments are evaluated, it's OK
>> >> to change the evaluation order as well even if it was fixed for functions.
>> 
>> Matthias Blume <find@my.address.elsewhere> writes:
>> 
>> > Exactly.  In fact, with macros the rule is exceedingly simple: The
>> > order of evaluation is completely determined by the semantics of the
>> > output of the macro transformer.
>> 
>> It may be, but the transformer need not be identical on all scheme
>> systems.  Suppose I write this:
>> 
>> (let ((x (foo))
>>       (y (bar)))
>>   (baz x y))
>> 
>> One system *could* expand this to:
>>   ((lambda (x y) (baz x y)) (foo) (bar))
>> 
>> another could expand it to:
>>   ((lambda (y x) (baz x y)) (bar) (foo))
>> 
>> If you specify the order of evaluation of arguments, then you need to
>> specify the order of evaluation for macros that evaluate some or all
>> of their arguments as well.
>
> Yes, of course.  Where is the problem?

Burden on the macro writer, and/or some really hairy language that
specifies something to the effect that conforming macros that evaluate
subexpressions must preserve lexical left to right evaluation order.
(Which won't work for macros that descend into their arguments...)
0
jrm (1310)
12/9/2003 7:13:26 PM
Joe Marshall <jrm@ccs.neu.edu> writes:

> Matthias Blume <find@my.address.elsewhere> writes:
> 
> > Joe Marshall <jrm@ccs.neu.edu> writes:
> >
> >> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
> >> 
> >> > Correct. I'll concede that Matthias Blume is correct when he claims that
> >> > unspecified AEO greatly increases the complexity of formal proofs.
> >> 
> >> I won't.  Left to right (or right to left for that matter) evaluation
> >> is compatible with the Scheme standard, so just use one (or the other)
> >> in the proofs.
> >
> > You're kidding, right?
> 
> No, I'm not.  What's wrong with just assuming left to right?

What's wrong with it?  Well, whatever you prove might have absolutely
nothing to do with what the program actually does if the
implementation happens to not use left to right. That's what's wrong
with it.
0
find19 (1244)
12/9/2003 7:23:24 PM
> Joe Marshall <jrm@ccs.neu.edu> writes:
>> Since I think I was the first to mention `masked bugs', I'll tell you
>> what I meant by it.  It is a bug that *can* blow up, but *usually*
>> does not because of some coincidence.

Matthias Blume wrote:
> A coincidence such as, say, having the compiler happen to choose an
> order of evaluation (out of the many that are permitted by the
> language definition) that happens to be one of the few that are
> consistent with the way I wrote my code?
> 
> As I see it, leaving the order of evaluation unspecified has a far
> higher potential of "masking" bugs in this sense than the other way
> around.

Not in my experience.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/9/2003 7:27:15 PM
> Joe Marshall <jrm@ccs.neu.edu> writes:
>> What's wrong with just assuming left to right?

Matthias Blume <find@my.address.elsewhere> wrote:
> What's wrong with it?  Well, whatever you prove might have absolutely
> nothing to do with what the program actually does if the
> implementation happens to not use left to right. That's what's wrong
> with it.

Then use an implementation that conforms to a stricter standard -- i.e.,
make one of the proof's premises be that the implementation evaluates
arguments in left->right order. But don't endeavor to burden all
implementations with a definition that many of us find error-prone or
inefficient.

Personally, if you *really* want this, I think it would be wiser to use
an implementation that guarantees right->left evaluation. That would
eliminate the "permutation problem" from proofs, while still
discouraging the error-prone imperative argument style. (Yes, I objected
to this earlier when Joe accidentally proposed it, but in retrospect it
seems like a better idea.)
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/9/2003 7:40:33 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> > As I see it, leaving the order of evaluation unspecified has a far
> > higher potential of "masking" bugs in this sense than the other way
> > around.
> 
> Not in my experience.

So far you have not been able to give a single example from your vast
experience with this matter where a fixed order of evaluation would
have "masked" a bug.  Instead, you keep referring to other people's hearsay.

On the other hand, I have seen it happen several times that some
implementation's particular OoE managed to mask a bug in C or Scheme
programs.  Unlike in the example that started this very thread, it was
sometimes extremely difficult to track down what was going on.
0
find19 (1244)
12/9/2003 7:41:05 PM
Okay, I see what you're getting at:

You are thinking that the need for a fixed order of evaluation
is likely to be a property of the situation in which a procedure
is called rather than a property of the procedure itself?

That's probably better, now that I think about it.

			Bear
0
bear (1219)
12/9/2003 8:11:30 PM
Matthias Blume <find@my.address.elsewhere> writes:

> Joe Marshall <jrm@ccs.neu.edu> writes:
>
>> Matthias Blume <find@my.address.elsewhere> writes:
>> 
>> > Joe Marshall <jrm@ccs.neu.edu> writes:
>> >
>> >> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> >> 
>> >> > Correct. I'll concede that Matthias Blume is correct when he claims that
>> >> > unspecified AEO greatly increases the complexity of formal proofs.
>> >> 
>> >> I won't.  Left to right (or right to left for that matter) evaluation
>> >> is compatible with the Scheme standard, so just use one (or the other)
>> >> in the proofs.
>> >
>> > You're kidding, right?
>> 
>> No, I'm not.  What's wrong with just assuming left to right?
>
> What's wrong with it?  Well, whatever you prove might have absolutely
> nothing to do with what the program actually does if the
> implementation happens to not use left to right. That's what's wrong
> with it.

True, but you know the implementation uses *some* order, and therefore
some permutation of your proof should still apply.
0
jrm (1310)
12/9/2003 8:52:00 PM
Ray Dillinger <bear@sonic.net> wrote:
> Okay, I see what you're getting at; 
> 
> You are thinking that the need for a fixed order of evaluation is
> likely to be a property of the situation in which a procedure is
> called rather than a property of the procedure itself?

Yes. Actually, you probably need to know what's going on at both levels,
but it's much easier to analyze the situation when you're already below
the abstraction barrier of the procedure call. I dislike implicitly
imperative argument evaluation because it forces you to look below the
abstraction barrier to determine whether the arguments really are
imperative (among other reasons).
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/9/2003 8:59:52 PM
Matthias Blume <find@my.address.elsewhere> wrote:
>>> As I see it, leaving the order of evaluation unspecified has a far
>>> higher potential of "masking" bugs in this sense than the other way
>>> around.

> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> Not in my experience.

> So far you have not been able to give a single example from your vast
> experience with this matter where a fixed order of evaluation would
> have "masked" a bug.

Spare the condescension for somebody else, please. I haven't seen any
concrete examples from you, either. I don't have examples handy for a
Usenet discussion because (1) any real example is complicated enough
that it wouldn't be of much value in an informal forum like this,
(2) any examples I have run across in my work are proprietary, so I
can't post them here if I wanted to, and (3) I'd need to do some digging
to come up with an example even if I could post it -- it's not like I
keep examples of this bad style on file.

You've been nothing but difficult from the beginning of this discussion,
even going so far as to give me trouble over looking up your resume! And
you keep saying stuff like "masked bugs aren't bugs," that runs
completely counter to my experience and training in best practices. I'm
reluctant to go to the trouble of looking up a good example, because I
don't believe that you'd give it a fair and reasonable evaluation.
You've been too belligerent in this discussion for me to assume
fairness, and your ideas of "good programming style" are so different
from mine that I don't think I'd consider your evaluation reasonable
either.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/9/2003 9:06:49 PM
Bradd W. Szonye wrote:
> Matthias Blume <find@my.address.elsewhere> wrote:
> 
>>>>As I see it, leaving the order of evaluation unspecified has a far
>>>>higher potential of "masking" bugs in this sense than the other way
>>>>around.
> 
> 
>>"Bradd W. Szonye" <bradd+news@szonye.com> writes:
>>
>>>Not in my experience.
> 
> 
>>So far you have not been able to give a single example from your vast
>>experience with this matter where a fixed order of evaluation would
>>have "masked" a bug.
> 
> 
> Spare the condescension for somebody else, please. I haven't seen any
> concrete examples from you, either. [...]

This thread was started with an example.

-thant

0
thant (332)
12/9/2003 9:18:38 PM
> Bradd W. Szonye wrote:
>> Spare the condescension for somebody else, please. I haven't seen any
>> concrete examples from you, either. [...]

Thant Tessman <thant@acm.org> wrote:
> This thread was started with an example.

Yes, a trivial example. I find it a very compelling example for
advocating code reviews, and a very poor example for advocating
implicitly imperative argument style.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/9/2003 9:32:26 PM
Matthias Blume wrote:

> > No, I'm not.  What's wrong with just assuming left to right?
>
> What's wrong with it?  Well, whatever you prove might have absolutely
> nothing to do with what the program actually does if the
> implementation happens to not use left to right. That's what's wrong
> with it.

Not if you require, as an assumption, that your arguments are commutative
with respect to evaluation order.

There is a good functional precedent for this - Haskell monads.  These are
required to satisfy a set of monad axioms that are not enforced by the
compiler.  It is up to programmers to make sure that any monad instance
they write obeys those laws.  In principle, the compiler can trust the
programmer and make optimizations consistent with those axioms.  You can
then write long papers full of proofs relying on the monad axioms.

I think the monad example gives another approach to making the Scheme
semantics well-defined and deterministic.  As you would do in proofs about
programs with monads, in Scheme you may regard the source as a programmer
declaration that certain laws or axioms are satisfied by the computation -
in this case, commutativity with respect to order of evaluation.  As in the
monad example, we can then allow the compiler to trust the programmer and
make optimizations consistent with these axioms.  Oh wait, we already have
that!

I think it is useful to keep the current ability to express this
"commutative law" in Scheme (and thereby inform the compiler of something
it cannot always prove).  Fixing the evaluation order would take away that
facility.

Amusingly, this facility of Scheme is in a way dual to the Haskell monad
example, where one declares that certain operations will happen in a
specific order.  That gives Haskell the ability to sequence things,
something that Scheme is already good at.  Why should we give up the
ability to *not sequence* things, which Haskell is so good at?

Anton van Straaten wrote:

> To pick an example of why it might matter, I would consider it very
> unfortunate if I ran a partial evaluator to specialize a Scheme program and
> it refused to reduce an argument consisting of a function application
> because it couldn't prove that another application next to it wasn't
> side-effecting.  The fact that use of techniques like partial evaluation is
> not commonplace right now is not a good argument for dumbing down the
> language to cripple their use in future.

I agree that this is another strong reason.

Andre


0
andre9567 (120)
12/9/2003 9:32:50 PM
Matthias Blume wrote:

> > No, I'm not.  What's wrong with just assuming left to right?
>
> What's wrong with it?  Well, whatever you prove might have absolutely
> nothing to do with what the program actually does if the
> implementation happens to not use left to right. That's what's wrong
> with it.

If you're worried about making your proofs easier, just curry all your functions.
Problem solved. :)

A.


0
andre9567 (120)
12/9/2003 9:39:42 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> I haven't seen any concrete examples from you, either.

A concrete example started this thread.  Admittedly, it was not I who
provided it, though.

The only reference to a case where trying different evaluations orders
(and fixing the resulting breakage) ended up improving the code was
David Rush's (IIRC).

I have not seen concrete code from him, but I can easily believe the
general phenomenon: If you make your code run with several different
orders of evaluation, then you reduce the overall reliance on side
effects -- which even according to my supposedly screwy ideas of good
programming style is a Good Thing.

My problem is with the particular method of indirectly "encouraging"
programmers to use such a style, namely by making any other style even
*more* error-prone than it already is, thereby screwing up clean
language semantics and considerably increasing the difficulties
associated with reasoning about programs.

> I don't have examples handy for a
> Usenet discussion because (1) any real example is complicated enough
> that it wouldn't be of much value in an informal forum like this,
> (2) any examples I have run across in my work are proprietary, so I
> can't post them here if I wanted to, and (3) I'd need to do some
> digging to come up with an example even if I could post it -- it's
> not like I keep examples of this bad style on file.

> [...]

> I'm reluctant to go to the trouble of looking up a good example,
> because I don't believe that you'd give it a fair and reasonable
> evaluation.

Sounds like a bunch of hollow excuses.  So the bottom line is that you
don't have any examples.  And you know it, too.

> and your ideas of "good programming style" are so different
> from mine that I don't think I'd consider your evaluation reasonable
> either.

Let's see: "ideas differ from mine, so I shouldn't take him
seriously".  Glad we have this part clear.

(I am not sure where you made any inferences about my ideas of good
programming style.  Most of what I said in this thread was not
concerned with style but with correctness, reliability, and
tractability.  I even said outright that relying on order of
evaluation may not constitute good style!
There is a lot of code that I wrote over the years "out there".  You
could look it up and make a more informed decision of whether you can
agree or disagree with my idea of what good programming style is.  But
then you had trouble finding my home page using google and even have
the nerve to blame me for that, so I probably shouldn't expect too
much in this direction.)

Matthias

PS: I have my reasons not to publish a URL to my home page here.
These reasons are none of your business; I do not need to explain them
to you or anyone.
0
find19 (1244)
12/9/2003 10:10:09 PM
On Tue, 09 Dec 2003 16:32:50 -0500, Andre wrote:

> In principle, the compiler can trust the programmer and
> make optimizations consistent with those axioms.  You can then
> write long papers full of proofs relying on the monad axioms.

Haskell compilers don't do that in practice.
(I'm not even sure if they theoretically could.)

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/

0
qrczak (1266)
12/9/2003 10:11:29 PM
Andre <andre@het.brown.edu> writes:

> Matthias Blume wrote:
> 
> > > No, I'm not.  What's wrong with just assuming left to right?
> >
> > What's wrong with it?  Well, whatever you prove might have absolutely
> > nothing to do with what the program actually does if the
> > implementation happens to not use left to right. That's what's wrong
> > with it.
> 
> Not if you require, as an assumption, that your arguments are commutative
> with respect to evaluation order.

But in an imperative language you cannot just assume this.  You have
to /prove/ it.

If a Haskell optimizer relies on monad axioms without proving that
they hold for (what is supposed to be) the particular monad in
question, then this Haskell optimizer is unsound.

Fortunately, whether or not a particular structure is a monad
can be proved by local inspection of the structure itself.  For most
monads, the proof is simple, so it might not be too unreasonable
for the compiler to trust the programmer on this.

With non-interference of potentially side-effecting computations this
is not so simple, as non-interference is, in general, a global
property.

Matthias

PS: Peyton-Jones et al. have also played with the idea of letting the
programmer explicitly specify certain program transformations which
the programmer knows to be valid but an optimizer would have a fairly
difficult time proving so.  Those fall into the same category of
"unsound in general, but maybe acceptable due to reliance on local
reasoning only".
0
find19 (1244)
12/9/2003 10:19:13 PM
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> I haven't seen any concrete examples from you, either.

Matthias Blume <find@my.address.elsewhere> wrote:
> A concrete example started this thread.

Again, that example is a good reason to encourage code reviews but a
poor reason to default to imperative style.

>> I'm reluctant to go to the trouble of looking up a good example,
>> because I don't believe that you'd give it a fair and reasonable
>> evaluation.

> Sounds like a bunch of hollow excuses.

It's because of remarks like this that I don't trust you to give a
concrete example a fair and reasonable evaluation. I strongly suspect
that I'd just
be wasting my time. I did try to *describe* examples, but you just sneer
at those.

>> and your ideas of "good programming style" are so different from mine
>> that I don't think I'd consider your evaluation reasonable either.

> Let's see: "ideas differ from mine, so I shouldn't take him
> seriously".  Glad we have this part clear.

Here too -- I've taken your comments seriously, but I've also taken them
with a grain of salt because your background and goals seem different
enough from mine that they're of little practical value to me.

> (I am not sure where you made any inferences about my ideas of good
> programming style ....

I've gotten it from your comments about bugs, debugging, and similar
issues. You seem to prefer a style that makes formal proof easy even if
it makes typical engineering practices difficult. That's unacceptable to
me.

> PS: I have my reasons not to publish an URL to my home page here.
> These reasons are none of your business, I do not need to explain them
> to you or anyone.

I suppose that you may have a good reason, although it seems a bit daft
to hide your website from Google Groups when you're perfectly happy to
let Google Search find it.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/9/2003 10:55:05 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> > (I am not sure where you made any inferences about my ideas of good
> > programming style ....
> 
> I've gotten it from your comments about bugs, debugging, and similar
> issues. You seem to prefer a style that makes formal proof easy even if
> it makes typical engineering practices difficult. That's unacceptable to
> me.

None of these comments should let you draw any conclusions about my
programming style, though.  It is simply not true that "typical
engineering practices" become "difficult" unless you are saying that
the ability to reason about programs is somehow at odds with
engineering practices.  (And I am not only talking about formal
reasoning here!)

You have snipped the important part of my previous comment, namely
that about making programming generally *more* error-prone as a way of
"encouraging" people to use a certain programming style.  If that is
good engineering and "best practices" (whatever that means -- I'm not
good in the buzzword department), then we, indeed, have irreconcilable
differences in opinion.

The optimization argument is a red herring (I have written enough
compilers to understand a bit about that, thank you very much), and
the stuff about "masking bugs" etc. is completely bogus, too, as it is
exactly the opposite of what's really going on.  If anything, bugs get
masked by inadvertently relying on a coincidental evaluation order.
With a guaranteed fixed order, there is no such thing as a
coincidental order.

> I suppose that you may have a good reason, although it seems a bit daft
> to hide your website from Google Groups when you're perfectly happy to
> let Google Search find it.

Who said I'm hiding my website from Google Groups (or anyone else for
that matter)?  Again, I do not have to and will not explain myself to
you.

By the way, now that you have openly called me stupid, my discussion
with you is hereby over.
0
find19 (1244)
12/9/2003 11:26:31 PM
Matthias Blume <find@my.address.elsewhere> wrote:
>>> (I am not sure where you made any inferences about my ideas of good
>>> programming style ....

> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> I've gotten it from your comments about bugs, debugging, and similar
>> issues. You seem to prefer a style that makes formal proof easy even
>> if it makes typical engineering practices difficult. That's
>> unacceptable to me.

> None of these comments should let you draw any conclusions about my
> programming style, though.  It is simply not true that "typical
> engineering practices" become "difficult" unless you are saying that
> the ability to reason about programs is somehow at odds with
> engineering practices.  (And I am not only talking about formal
> reasoning here!)

You keep dragging out this bit about "ability to reason about programs,"
as if we'll all agree a priori that functional argument style makes it
harder. But I don't agree, and I'm not alone in that.

> You have snipped the important part of my previous comment, namely
> that about making programming generally *more* error-prone as a way of
> "encouraging" people to use a certain programming style.

And again, you talk as though your conclusion is indisputable. It isn't.
In my experience, just the opposite is true.

> The optimization argument is a red herring (I have written enough
> compilers to understand a bit about that, thank you very much) ....

How have you managed to overlook the fact that easy reorganization
optimizes more than just compiler output? It's also useful for code
maintainers.

>> I suppose that you may have a good reason, although it seems a bit
>> daft to hide your website from Google Groups when you're perfectly
>> happy to let Google Search find it.

> Who said I'm hiding my website from Google Groups (or anyone else for
> that matter)?  Again, I do not have to and will not explain myself to
> you.

You're the one who claimed that it was "easy" to find information about
you, but it wasn't easy, and you kept being evasive and difficult when I
asked for more information.

> By the way, now that you have openly called me stupid, my discussion
> with you is hereby over.

I did no such thing. Don't put words in my mouth. I said that your
behavior seemed daft -- crazy, incomprehensible. I suspected that we
disagree on some premises, and that your background might be important.
You responded by being evasive about it and by claiming that I have no
right to know why you're being evasive. I don't know whether to call
that crazy, arrogant, dishonest, or what, but it sure isn't a good way
to make a compelling argument.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/10/2003 12:00:06 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> > None of these comments should let you draw any conclusions about my
> > programming style, though.  It is simply not true that "typical
> > engineering practices" become "difficult" unless you are saying that
> > the ability to reason about programs is somehow at odds with
> > engineering practices.  (And I am not only talking about formal
> > reasoning here!)
> 
> You keep dragging out this bit about "ability to reason about programs,"
> as if we'll all agree a priori that functional argument style makes it
> harder. But I don't agree, and I'm not alone in that.

Ok, I guess I have to answer one more time:

No, the functional style does not make reasoning harder.  It is the
/imperative/ nature of the language paired with an unspecified
evaluation order.  In other words, it is because you first have to
/prove/ that you are using functional style before you can make an
argument based on that.

> > You have snipped the important part of my previous comment, namely
> > that about making programming generally *more* error-prone as a way of
> > "encouraging" people to use a certain programming style.
> 
> And again, you talk as though your conclusion is indisputable. It isn't.
> In my experience, just the opposite is true.

I don't believe that, and you have done nothing to convince me.
Instead of just giving a single example, you hide behind the "it's
proprietary" line of defense.

> > The optimization argument is a red herring (I have written enough
> > compilers to understand a bit about that, thank you very much) ....
> 
> How have you managed to overlook the fact that easy reorganization
> optimizes more than just compiler output? It's also useful for code
> maintainers.

In my experience the opposite of that is true.  The information that
you claim to be useful for maintainers is in the wrong place where
there is no guarantee that it is correct.  It might be wrong and be
"masked", to use your term, by the coincidental ordering that the
compiler chose.  The maintainer still needs to inspect the code to see
that this is not so, and doing that is equivalent to what you need to
do when you want to reorder function arguments in a fixed-order
language.

> >> I suppose that you may have a good reason, although it seems a bit
> >> daft to hide your website from Google Groups when you're perfectly
> >> happy to let Google Search find it.
> 
> > Who said I'm hiding my website from Google Groups (or anyone else for
> > that matter)?  Again, I do not have to and will not explain myself to
> > you.
> 
> You're the one who claimed that it was "easy" to find information about
> you, but it wasn't easy, and you keep being evasive and difficult when I
> asked for more information.

How much easier can it get: Type "Matthias Blume" into Google and hit
"I'm Feeling Lucky"?  It seems like /you/ are the one being difficult
here.

> > By the way, now that you have openly called me stupid, my discussion
> > with you is hereby over.
> 
> I did no such thing. Don't put words in my mouth. I said that your
> behavior seemed daft -- crazy, incomprehensible.

Look it up!  Webster's first definition is "stupid", followed by
"foolish", "idiotic", "delirious", and "insane", all of which I find
offensive.

> I suspected that we
> disagree on some premises, and that your background might be important.
> You responded by being evasive about it and by claiming that I have no
> right to know why you're being evasive.

I was /not/ evasive.  Everybody else seems to be able to find me just
fine.  The problem is obviously at your end.

> I don't know whether to call that crazy, arrogant, dishonest, or
> what, but it sure isn't a good way to make a compelling argument.

Whether or not I post a URL to my homepage should have nothing to do
with the contents of the argument, so ad hominems won't cut it.  You
also just added three more offensive terms to the list.  Good bye.
0
find19 (1244)
12/10/2003 1:08:59 AM
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> I don't know whether to call that crazy, arrogant, dishonest, or
>> what, but it sure isn't a good way to make a compelling argument.

Matthias Blume wrote:
> Whether or not I post a URL to my homepage should have nothing to do
> with the contents of the argument, so ad hominems won't cut it.

I didn't care about your home page. I wanted some more information about
your background, goals, premises, anything like that, and you refused to
give anything resembling a helpful answer. Just "I'm easy to find." And
when I looked it up myself, you complained that I was relying on
incorrect and outdated information.

Guess what? If you hadn't been such an asshole about it, it wouldn't
have taken three go-arounds, and the discussion might have gotten
somewhere. You have been very helpful to me in other threads, so I don't
know what it is that has you so worked up in this one.

> You also just added three more offensive terms to the list.  Good bye.

And I just added another one. What, do you expect that you can act like
a jerk and not have anybody complain about it? Meanwhile, you accuse me
of "hiding" behind proprietary code. Sorry, but my personal and
professional ethics don't permit me to post non-trivial sections of my
employer's code to Usenet. You make unfounded accusations of dishonesty,
and then you complain that *I'm* saying nasty things. Good riddance,
hypocrite!
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/10/2003 1:50:50 AM
Matthias Blume <find@my.address.elsewhere> schrieb:
> Scheme *is* an imperative language.  Get used to it.

I don't think it's terribly controversial to say that good design
maintains a clean separation between imperative pieces of code and
purely-functional ones. Scheme is a good language because it facilitates
this separation, i.e. by providing a usable subset of the language which
is purely functional and by clearly marking side-effecting procedures.
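
For instance (the names here are made up, but the pattern mirrors
R5RS's own SET-CAR!, VECTOR-SET! and friends), the "!" suffix tells a
reader at a glance which half of the program is imperative:

(define (scale lst k)              ; functional: returns a fresh list
  (map (lambda (x) (* k x)) lst))

(define (scale! vec k)             ; imperative: mutates VEC in place
  (do ((i 0 (+ i 1)))
      ((= i (vector-length vec)) vec)
    (vector-set! vec i (* k (vector-ref vec i)))))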

In this spirit there should be a way of explicitly distinguishing
between where sequential evaluation is intended and where it is not. I
agree with others that function application is a natural place to draw
the line, since the purely-functional subset of the language still has
to apply functions and the non-functional extension provides plenty of
sequencing operators.

Even if you're against unspecified OoE, you must see the value of a way
to signal to compilers and other programmers that some sequence of
expressions may be executed in any order, and the burden of proof is
left to the programmer. If you choose to signal this with something
besides function application, functional programming becomes much harder
as you have to litter your code with "parallel" forms or whatever.
0
adrian61 (83)
12/10/2003 3:05:27 AM
I must admit that I can't bring myself to leave this completely
unanswered.  So here we go again:

"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> I didn't care about your home page. I wanted some more information about
> your background, goals, premises, anything like that,

.... which all happen to be on my homepage or accessible via links from
that homepage.

> Just "I'm easy to find." And when I looked it up myself, you
> complained that I was relying on incorrect and outdated information.

Well, you found an old resume whose timeline ended 1999.  Happens on
the web.  I, when I look someone up, dig a bit deeper right away, but
maybe that's just me.  In any case, at least after I told you that
what you found is outdated you shouldn't have had any more problems.

The fact is that I *am* easy to find.  Posting a URL is not necessary,
and since I dislike doing so I did not do it.

> You have been very helpful to me in other threads, so I don't
> know what it is that has you so worked up in this one.

Are you sure it is /me/ being "worked up"?

(Not that I am not a bit piqued right now, I must admit.  But that
should be no surprise.)

> Meanwhile, you accuse me of "hiding" behind proprietary code. Sorry,
> but my personal and professional ethics don't permit me to post
> non-trivial sections of my employer's code to Usenet.

But you surely could have constructed an illustrative example based on
what you learned from that code.  I don't even want some huge example.
I can extrapolate myself to the realistic case.

> You make unfounded accusations of dishonesty, and then you complain
> that *I'm* saying nasty things.

You still have not provided any example that illustrates your claims.
Maybe they exist, but I have not seen them.  Why should I take you at
your word?  Because you are such a nice guy?  Clearly *someone* has to
be wrong in this discussion, and I am not the only one arguing *for*
fixing the OoE.  Would I be accusing others of "dishonesty" by
believing in your claims?  Or vice versa?

I am sure the examples that you think would illustrate your claim
exist.  But you clearly already have doubts about how convincing they
end up being, and so you pre-emptively accuse me of not giving them
due consideration.

Whether the examples you surely have actually convince me, and
whether I give them the consideration they deserve, you
cannot know until you try.  So in effect /you/ say you don't provide
your examples because you fear that /I/ will be dishonest in judging
them.  Pot, kettle, black.

And, by the way, in case you provide your examples, and provided I give
them due consideration, there is a chance that they still leave me
unconvinced.  So in that case you would have been wrong in your belief
of being in possession of convincing examples.  But that would not
constitute dishonesty on your part.  So accusing me of accusing you of
being dishonest is pretty ridiculous.

Well, I guess that's enough.  I'm sure you don't want to talk to a
daft hypocrite asshole any longer, or do you?

Matthias
0
find19 (1244)
12/10/2003 3:14:54 AM
Adrian Kubala <adrian@sixfingeredman.net> writes:

> Matthias Blume <find@my.address.elsewhere> schrieb:
> > Scheme *is* an imperative language.  Get used to it.
> 
> I don't think it's terribly controversial to say that good design
> maintains a clean separation between imperative pieces of code and
> purely-functional ones. Scheme is a good language because it facilitates
> this separation, i.e. by providing a usable subset of the language which
> is purely functional and by clearly marking side-effecting procedures.

No, this is not true.  Nearly every operation in Scheme has effects.
You can use subsets of Scheme in a purely functional style, but that
requires actively ignoring its imperative aspects and *not* mixing it
with any other, imperative components.

> Even if you're against unspecified OoE, you must see the value of a way
> to signal to compilers and other programmers that some sequence of
> expressions may be executed in any order, and the burden of proof is
> left to the programmer.

Sorry, no.  I do not see the value in that unless it comes with added
expressive power.  Leaving such fundamental stuff unspecified means
getting all the disadvantages of concurrency without getting the
benefits of it.

Matthias
0
find19 (1244)
12/10/2003 3:23:27 AM
Thant Tessman <thant@acm.org> schrieb:
> Bradd W. Szonye wrote:
>> Matthias Blume <find@my.address.elsewhere> wrote:
>> 
>>>No, for crying out loud!!!  If you fix the AEO, then it is no longer a
>>>bug at all!  It is only a bug if you rely on an AEO that is not
>>>guaranteed.
>> 
>> It's still a bug if you *accidentally* rely on it. [...]
>
> I have no response to this.
>
> There is no response to this.
>
> I just wanted to, kinda, collectively stare at it for a moment.

If the specification of a function states that it does not share state
with another function, but it does, this is a bug, even if it happens
that, in the way these functions are used in a particular program, the
sharing of state does not cause problems.

That's what Szonye means by "accidentally" relying on fixed OoE. The
fixed OoE has masked a bug: the code was *supposed* to work no
matter what OoE was chosen, but *by accident* it does not.
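
A concrete (entirely made-up) illustration: suppose NEXT-ID! and
LOG-EVENT! are documented as independent, but both secretly bump the
same counter.

(define *counter* 0)

(define (next-id!)
  (set! *counter* (+ *counter* 1))
  *counter*)

(define (log-event!)
  (set! *counter* (+ *counter* 1))
  'logged)

;; Under a guaranteed left-to-right order this call always produces
;; the same ids, so the broken specification never shows up:
(list (next-id!) (log-event!) (next-id!))

The bug is masked.  A maintainer who trusts the documentation and
reorders the arguments - or an implementation with a different order -
exposes it immediately.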

This is important because even if the language specifies an order of
evaluation, there will *always* be reasons to change the OoE (in the
compiler, in macros, and in the code itself), and knowing that you can
do so (that someone else has already proven it is ok) is valuable
information.
0
adrian61 (83)
12/10/2003 3:36:06 AM
Matthias Blume <find@my.address.elsewhere> schrieb:
> Adrian Kubala <adrian@sixfingeredman.net> writes:
>> Even if you're against unspecified OoE, you must see the value of a way
>> to signal to compilers and other programmers that some sequence of
>> expressions may be executed in any order, and the burden of proof is
>> left to the programmer.
>
> Sorry, no.  I do not see the value in that unless it comes with added
> expressive power.

I do not understand your definition of "expressive power". Earlier you
pointed out that the set of programs (A) which function correctly with
fixed OoE is a superset of those (B) which function correctly without,
so perhaps this is what you mean. But the "extra programs" (A-B) can be
trivially mapped to programs in B (via the let* transformation), so this
"extra expressiveness" is not very meaningful.

My definition of expressive power includes the ability of the language
to describe invariants under code transformations. (Something which is
not uniform among turing-equivalent languages.) A fixed-OoE language
provides NO WAY to express that function args may be evaluated in any
order without altering the meaning of the program. So I consider it
strictly less expressive.

Now, as compilers become better at proving things they will be able to
derive invariants like this and stating them explicitly will become less
important, but until then I'd like some means to express them even if it
means I have to do the proof myself.
0
adrian61 (83)
12/10/2003 4:15:18 AM
Adrian Kubala <adrian@sixfingeredman.net> writes:

> Matthias Blume <find@my.address.elsewhere> schrieb:
> > Adrian Kubala <adrian@sixfingeredman.net> writes:
> >> Even if you're against unspecified OoE, you must see the value of a way
> >> to signal to compilers and other programmers that some sequence of
> >> expressions may be executed in any order, and the burden of proof is
> >> left to the programmer.
> >
> > Sorry, no.  I do not see the value in that unless it comes with added
> > expressive power.
> 
> I do not understand your definition of "expressive power". Earlier you
> pointed out that the set of programs (A) which function correctly with
> fixed OoE is a superset of those (B) which function correctly without,
> so perhaps this is what you mean. But the "extra programs" (A-B) can be
> trivially mapped to programs in B (via the let* transformation), so this
> "extra expressiveness" is not very meaningful.

No, that is not what I meant.  What I meant is that leaving the
evaluation order open comes with many (although not all) of the
hazards that concurrency comes with.  But it lacks the extra
expressive power of concurrency, so I get some of the worst of both
worlds without any of the good.

> My definition of expressive power includes the ability of the language
> to describe invariants under code transformations.

So does mine.

> (Something which is not uniform among turing-equivalent languages.)
> A fixed-OoE language provides NO WAY to express that function args
> may be evaluated in any order without altering the meaning of the
> program.

I do not like invariants that can be expressed but not enforced.

Moreover, it is not clear that there is no way /in principle/ to
express what you want to express.  A language with fixed order of
evaluation could presumably provide a way to /prove/ to the compiler
that the transformation in question is sound.

> So I consider it strictly less expressive.

Well, you are right in some sense: Leaving the order of evaluation
unspecified means that by writing down a single program you specify a
multitude of possible behaviors.  This is, in a twisted sense, more
expressive than specifying just a single behavior.

> Now, as compilers become better at proving things they will be able to
> derive invariants like this and stating them explicitly will become less
> important, but until then I'd like some means to express them even if it
> means I have to do the proof myself.

Well, then let's strive for a language where you can express the
invariant together with the proof to the compiler, so the compiler
does not have to take you at your word.  Yes, I would like such a
language.  On the other hand, leaving the order of evaluation
unspecified as a poor man's substitute for such a language is
unacceptable, even as an interim solution.  At least to me.

Matthias
0
find19 (1244)
12/10/2003 4:58:10 AM
Matthias Blume <find@my.address.elsewhere> writes:

> Ok, I guess I have to answer one more time:
>
> No, the functional style does not make reasoning harder.  It is the
> /imperative/ nature of the language paired with an unspecified
> evaluation order.  In other words, it is because you first have to
> /prove/ that you are using functional style before you can make an
> argument based on that.

So how hard is that?
0
jrm (1310)
12/10/2003 2:51:12 PM
Adrian Kubala wrote:

[...]

> That's what Szonye means by "accidentally" relying on fixed OoE. That
> fixed OoE has masked a bug, that the code was *supposed* to work no
> matter what OoE was chosen, but *by accident* it does not.

What you guys are describing is a thought crime--one you apparently 
can't stand to see go unpunished. In a language with fixed OofE, by what 
powers of divination are you deciding whether the reliance on OofE is 
intentional or not? If a compiler can *prove* the evaluation order is 
irrelevant (which is easier in some languages than others), then it is 
free to rearrange evaluation order. If a programmer wants to rearrange 
evaluation order, knowing exactly what a program really does is far more 
valuable than guessing at what the previous programmer *thought* it was 
supposed to do.

[...]

-thant

0
thant (332)
12/10/2003 3:05:10 PM
Thant Tessman wrote:
> Adrian Kubala wrote:
> 
> [...]
> 
> 
>>That's what Szonye means by "accidentally" relying on fixed OoE. That
>>fixed OoE has masked a bug, that the code was *supposed* to work no
>>matter what OoE was chosen, but *by accident* it does not.
> 
> 
> What you guys are describing is a thought crime--one you apparently 
> can't stand to see go unpunished. In a language with fixed OofE, by what 
> powers of divination are you deciding whether the reliance on OofE is 
> intentional or not? If a compiler can *prove* the evaluation order is 
> irrelevant (which is easier in some languages than others), then it is 
> free to rearrange evaluation order. If a programmer wants to rearrange 
> evaluation order, knowing exactly what a program really does is far more 
> valuable than guessing at what the previous programmer *thought* it was 
> supposed to do.

So your argument is that we shouldn't trust the programmer and that 
'smart' compilers can reorder anyway?  Firstly, I'd hate to insult 
programmers in the language specification.  Secondly, compilers that 
smart don't get built in practice.

	Scott

0
scgmille (240)
12/10/2003 4:04:22 PM
"Scott G. Miller" <scgmille@freenetproject.org> virkkoi:
> So your argument is that we shouldn't trust the programmer and that 
> 'smart' compilers can reorder anyway?  Firstly, I'd hate to insult 
> programmers in the language specification.

I'd hate to design a language for programmers who take insult at the
implication that they, too, can make mistakes.

But this is starting to sound very much like the endless wars on static
vs. dynamic typing...


Lauri Alanko
la@iki.fi
0
la (473)
12/10/2003 4:19:46 PM
Scott G. Miller wrote:

[...]

> So your argument is that we shouldn't trust the programmer and that 
> 'smart' compilers can reorder anyway?  Firstly, I'd hate to insult 
> programmers in the language specification.  Secondly, compilers that 
> smart don't get built in practice.

What's insulting to programmers is the claim that a program that relies 
on OofE in a language that specifies OofE may contain a bug that is 
merely "masked" and only works by "accident." It's an argument that does 
nothing more than assume what it's trying to prove.

-thant

0
thant (332)
12/10/2003 4:37:05 PM
Lauri Alanko wrote:
> "Scott G. Miller" <scgmille@freenetproject.org> virkkoi:
> 
>>So your argument is that we shouldn't trust the programmer and that 
>>'smart' compilers can reorder anyway?  Firstly, I'd hate to insult 
>>programmers in the language specification.
> 
> 
> I'd hate to design a language for programmers who take insult at the
> implication that they, too, can make mistakes.

Of course not.  But it's a slippery slope when you start to change the 
language so that it's more tolerant of sloppy programming.

> But this is starting to sound very much like the endless wars on static
> vs. dynamic typing...
Indeed.

	Scott

0
scgmille (240)
12/10/2003 4:38:28 PM
Scott G. Miller wrote:
> Lauri Alanko wrote:
> 
[...]
>> I'd hate to design a language for programmers who take insult at the
>> implication that they, too, can make mistakes.
> 
> 
> Of course not.  But its a slippery slope when you start to change the 
> language so that its more tolerant of sloppy programming.

Is the purpose of automatic memory management to make a language more 
tolerant of sloppy programming? Or is its purpose to abstract out detail 
so as to allow the programmer to put their attention to the bigger task 
at hand?

-thant

0
thant (332)
12/10/2003 4:46:47 PM
Thant Tessman wrote:
> Scott G. Miller wrote:
> 
>>Lauri Alanko wrote:
>>
> 
> [...]
> 
>>>I'd hate to design a language for programmers who take insult at the
>>>implication that they, too, can make mistakes.
>>
>>
>>Of course not.  But its a slippery slope when you start to change the 
>>language so that its more tolerant of sloppy programming.
> 
> 
> Is the purpose of automatic memory management to make a language more 
> tolerant of sloppy programming? Or is its purpose to abstract out detail 
> so as to allow the programmer to put their attention to the bigger task 
> at hand?
> 

That's a terrible example.  An unspecified OoE doesn't require the 
programmer to do any more work, in stark contrast to manual memory 
management.  I could easily argue that an unspecified OoE in fact 
"abstracts out detail" (the order required to efficiently evaluate 
operands to a function call) "so as to allow the programmer to put their 
attention to the bigger task at hand".   By your argument, all threaded 
languages should be completely serial, so that programmers cannot write 
thread-unsafe programs.

	Scott

0
scgmille (240)
12/10/2003 6:03:10 PM
Scott G. Miller wrote:
> Thant Tessman wrote:
> 
>> Scott G. Miller wrote:
>>
>>> Lauri Alanko wrote:
>>>
>>
>> [...]
>>
>>>> I'd hate to design a language for programmers who take insult at the
>>>> implication that they, too, can make mistakes.
>>>
>>>
>>>
>>> Of course not.  But its a slippery slope when you start to change the 
>>> language so that its more tolerant of sloppy programming.
>>
>>
>>
>> Is the purpose of automatic memory management to make a language more 
>> tolerant of sloppy programming? Or is its purpose to abstract out 
>> detail so as to allow the programmer to put their attention to the 
>> bigger task at hand?
>>
> 
> Thats a terrible example.  An unspecified OoE doesn't require the 
> programmer to do any more work, in stark contrast to manual memory 
> management. 

Many C++ programmers will argue flat out that lack of automatic memory 
management is no burden to them at all. Of course we know that there are 
approaches to programming beyond their practical reach, but they don't 
miss what they never had.

More to the point, however, I was talking about the abstraction of 
detail. Nondeterministic OofE is a subtle detail that must be learned 
and kept in mind. Predictably deterministic OofE can be safely ignored, 
unknowingly assumed, or knowingly leveraged without ill effect. Arguing 
that this somehow promotes sloppy programming still sounds like someone 
griping about how the kids these days have it too easy.
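
For instance, here is a minimal illustration of "knowingly leveraged",
assuming a hypothetical fixed left-to-right argument order (the name
`next-two' and the `port' argument are invented for illustration):

    ;; With a guaranteed left-to-right order, this reliably returns the
    ;; next two characters in file order; in standard Scheme the order
    ;; of the two reads is unspecified.
    (define (next-two port)
      (list (read-char port) (read-char port)))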


> I could easily argue that an unspecified OoE in fact 
> "abstracts out detail" (the order required to efficiently evaluate 
> operands to a function call) "so as to allow the programmer to put their 
> attention to the bigger task at hand".   By your argument, all threaded 
> languages should be completely serial, so that programmers cannot write 
> thread-unsafe programs.

As Matthias Blume pointed out elsewhere, explicitly specifying 
undetermined order of evaluation can be a good thing if it actually 
*buys* you something semantically, as is the case with threading. In the 
case of function calls, it doesn't buy you anything except some 
not-very-well-quantified performance benefits.

-thant

0
thant (332)
12/10/2003 6:46:39 PM
"Scott G. Miller" <scgmille@freenetproject.org> writes:

> Thant Tessman wrote:
> > Scott G. Miller wrote:
> > 
> >>Lauri Alanko wrote:
> >>
> > [...]
> > 
> >>>I'd hate to design a language for programmers who take insult at the
> >>>implication that they, too, can make mistakes.
> >>
> >>
> >> Of course not.  But its a slippery slope when you start to change
> >> the language so that its more tolerant of sloppy programming.
> > Is the purpose of automatic memory management to make a language
> > more tolerant of sloppy programming? Or is its purpose to abstract
> > out detail so as to allow the programmer to put their attention to
> > the bigger task at hand?
> > 
> 
> Thats a terrible example.  An unspecified OoE doesn't require the
> programmer to do any more work, in stark contrast to manual memory
> management.

It does if she is interested in correctness: Either she actively
avoids the places where order is unspecified (e.g., by using Matthias
Felleisen's suggestion of having all function arguments be values --
in which case the whole discussion becomes irrelevant), or she must
prove to herself that her code is correct under every permutation that
is permissible under the language specification.  That is a whole lot
of extra work.
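
For instance, a minimal sketch of that "all arguments are values"
style (the procedure and variable names here are invented for
illustration): each effectful subexpression is bound with let* first,
so the order is explicit and the call itself receives only values.

    ;; read two characters in an explicit order, then build the pair
    (define (read-pair port)
      (let* ((first-char  (read-char port))   ; explicitly first
             (second-char (read-char port)))  ; explicitly second
        (cons first-char second-char)))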

>  I could easily argue that an unspecified OoE in fact
> "abstracts out detail" (the order required to efficiently evaluate
> operands to a function call) "so as to allow the programmer to put
> their attention to the bigger task at hand".

I don't think that is so easy to argue.  You can say it, but it does
not sound convincing.

> By your argument, all threaded languages should be completely
> serial, so that programmers cannot write thread-unsafe programs.

I don't know how you can make that leap.  Real threads actually do
provide additional expressive power.  Of course, reasoning about
threaded code is much harder, so we should use it only where we really
need it.  Leaving the order of evaluation unspecified makes reasoning
harder without giving anything in return.

I find Thant's example excellent.  Try going over to comp.lang.c++ and
suggest that the language needs garbage collection.  Then count the
number of people who get offended by the implicit suggestion that they
cannot manage memory properly on their own.  (Been there, done that,
unfortunately.)

Matthias
0
find19 (1244)
12/10/2003 6:55:06 PM
Matthias Blume <find@my.address.elsewhere> wrote:
> I must admit that I can't bring it over me to leave [Bradd's insults]
> completely unanswered.

Detailed apology sent off-line. Public apology: Mea culpa for escalating
the hostility. Apparently, I gave the impression that I was just waiting
for an excuse to call Matthias a kook. I tried to defuse that, but I
failed to communicate it. My apologies for the misunderstanding and the
name-calling.

Thanks to Matthias for explaining his position off-line in a gentlemanly
way. I hope that my detailed response clears up the misunderstanding.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/10/2003 8:14:35 PM
Thant Tessman <thant@acm.org> writes:

> Scott G. Miller wrote:
>> Lauri Alanko wrote:
>>
> [...]
>>> I'd hate to design a language for programmers who take insult at the
>>> implication that they, too, can make mistakes.
>> Of course not.  But its a slippery slope when you start to change
>> the language so that its more tolerant of sloppy programming.
>
> Is the purpose of automatic memory management to make a language more
> tolerant of sloppy programming? Or is its purpose to abstract out
> detail so as to allow the programmer to put their attention to the
> bigger task at hand?

Neither.  In a sufficiently complex language it is undecidable at
compile time what memory might be needed, so you have to have some
sort of mechanism at run time to figure it out.
0
jrm (1310)
12/10/2003 9:50:45 PM
Thant Tessman <thant@acm.org> wrote:
> Many C++ programmers will argue flat out that lack of automatic memory
> management is no burden to them at all. Of course we know that there
> are approaches to programming beyond their practical reach, but they
> don't miss what they never had.

Yes, many C++ programmers feel that way, but that's not why the language
standard lacks a facility for automatic memory management. Indeed, many
members of the standardization committee would very much like to add an
(optional) garbage-collection facility to the language. However, doing
so is very difficult, because:

1. Many C++ applications require deterministic memory management, which
   makes garbage collection unattractive.
2. The language includes a facility for acquiring and releasing
   resources in general, which is very difficult to implement correctly
   in the presence of garbage collection.
3. Many C++ applications rely on the ability to "alias" and "swizzle"
   pointers, which makes it difficult to correctly determine whether a
   value is a pointer.

There are solutions for each problem -- for example, Boehm's
conservative collection algorithms help deal with the third problem.
However, they're difficult to implement and use correctly, so the
garbage-collection advocates are reluctant to standardize it without a
simple, usable solution to all three problems.

> As Matthias Blume pointed out elsewhere, explicitly specifying
> undetermined order of evaluation can be a good thing if it actually
> *buys* you something semantically ....

It does. Both compilers and humans may freely reorganize code that obeys
the constraint. Putting the constraint in the language (rather than a
coding standards document) enables the creation of automatic tools to
enforce the constraint.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/10/2003 9:53:21 PM
Joe Marshall wrote:
> Thant Tessman <thant@acm.org> writes:
> 
> 
>>Scott G. Miller wrote:
>>
>>>Lauri Alanko wrote:
>>>
>>
>>[...]
>>
>>>>I'd hate to design a language for programmers who take insult at the
>>>>implication that they, too, can make mistakes.
>>>
>>>Of course not.  But its a slippery slope when you start to change
>>>the language so that its more tolerant of sloppy programming.
>>
>>Is the purpose of automatic memory management to make a language more
>>tolerant of sloppy programming? Or is its purpose to abstract out
>>detail so as to allow the programmer to put their attention to the
>>bigger task at hand?
> 
> 
> Neither.  In a sufficiently complex language it is undecidable at
> compile time what memory might be needed, so you have to have some
> sort of mechanism at run time to figure it out.

You're describing the need for dynamic memory management, not automatic 
memory management (i.e. garbage collection). The latter implies the 
former, but they're not the same thing. Even C and C++ provide for 
dynamic memory management.

Besides, this just begs the question: What's the point of a 
"sufficiently complex" language if not to make the programmer's task easier?

-thant

0
thant (332)
12/10/2003 10:25:11 PM
> Adrian Kubala wrote:
[An excellent summary of my position.]

Thant Tessman <thant@acm.org> wrote:
> What you guys are describing is a thought crime--one you apparently
> can't stand to see go unpunished.

No, not at all. Actually, I think the disagreement goes all the way down
to premises; I think you're using different definitions of "correctness"
and "bug."

I get the impression that you, Matthias Blume, and Eli Barzilay define
"correct" and "bug" in a way that cares only about the entire program. I
also care about the correctness of *subprograms*. That is, it's not
enough that a program produce the correct output for all possible
inputs, but also that (1) the program's implementation matches its
design and that (2) all subprograms also produce the correct output for
all possible inputs.

For example, if your design calls for quicksort, but you actually
implement mergesort, that's a bug. The program will still produce the
correct output, and will even have the same big-oh complexity. However,
because the implementation does not match the design, it's more
difficult to maintain. The mismatch creates more work for maintainers
and reviewers. This may not seem like a serious problem -- the code
works, doesn't it? -- but it creates nontrivial workflow and "programmer
efficiency" problems.

Another sorting example: I remember reading a textbook discussion of
the risks of optimization, which used quicksort as an example. A
programmer wanted a faster sorting algorithm, so he switched from
insertion sort to quicksort. Unfortunately, his quicksort code had a
major bug: Instead of sorting the inputs, it merely scrambled them
randomly! Miraculously, the program output was still correct, because of
another bug in the same code. Like most quicksorts, this version used
insertion sort for the "base cases," but it actually sorted the whole
set instead of the base cases. Therefore, the sorting algorithm still
worked "correctly" (if slowly).

This last example clearly shows how you can have two major bugs in
*subprograms* and yet still have a provably-correct program. Those bugs
make the program "fragile": If you reorganize or enhance the buggy code,
you're likely to break the program.

As somebody who frequently maintains, enhances, and reuses existing
code, I'm very sensitive to this kind of bug. It's not enough for a
whole program to produce correct outputs; it's also very important to me
that the subprograms also work correctly. In my opinion, imperative
argument style increases the likelihood that the whole program will
work, but it *greatly* decreases the likelihood that subprograms will
work as designed.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/10/2003 10:58:53 PM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> I get the impression that you, Matthias Blume, and Eli Barzilay
> define "correct" and "bug" in a way that cares only about the entire
> program. I also care about the correctness of *subprograms*. That
> is, it's not enough that a program produce the correct output for
> all possible inputs, but also that (1) the program's implementation
> matches its design and that (2) all subprograms also produce the
> correct output for all possible inputs.

I disagree with this distinction.  If you have:

  (define (insertionsort ...)
    ...)
  (define (quicksort ...)
    ...bug2...)
  (define (sort ...)
    (if ...bug1...
      (...quicksort...)
      (...insertionsort...)))

in the way that you describe (bug1 causing it to always use insertion
sort), *and* if you don't care for the runtime, *and* you *never*
intend to use `quicksort' anywhere else (even for testing), only then
you can say that there is no bug there.  Obviously, this is a very
rare if not nonexistent situation, which means that there is a bug in
the normal sense (using my own subjective definition of "normal").

I still think that at this point everyone knows what everyone else
thinks, so additional words are pretty much wasted.  It's best to drag
this thread to a dark corner and shoot it.  Sorry for burping more
stuff into it.

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
0
eli666 (555)
12/10/2003 11:23:22 PM
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> I get the impression that you, Matthias Blume, and Eli Barzilay
>> define "correct" and "bug" in a way that cares only about the entire
>> program. I also care about the correctness of *subprograms*. That
>> is, it's not enough that a program produce the correct output for
>> all possible inputs, but also that (1) the program's implementation
>> matches its design and that (2) all subprograms also produce the
>> correct output for all possible inputs.

Eli Barzilay <eli@barzilay.org> wrote:
> I disagree with this distinction.  If you have:
> 
>   (define (insertionsort ...)
>     ...)
>   (define (quicksort ...)
>     ...bug2...)
>   (define (sort ...)
>     (if ...bug1...
>       (...quicksort...)
>       (...insertionsort...)))

That wasn't the bug, though. Instead, it was more like this:

    (define (insertion-sort ...) ...)
    (define (quicksort ...)
      (if (<bug1a>)
          (<bug2>)
          (insertion-sort <bug1b>)))

where the overall effect was to occasionally invoke <bug2>, but to also
run insertion sort over the entire input, masking everything but the
performance degradation. For small data sets, the degradation isn't
measurable, so this collection of bugs may well pass unit testing.

> ... *and* if you don't care for the runtime, *and* you *never* intend
> to use `quicksort' anywhere else (even for testing), only then you can
> say that there is no bug there.

Given the nature of the bug, the output will still be correct in other
contexts, and the performance may even be acceptable for many inputs. It
only "fails" (because of poor performance) in certain corner cases
(large inputs). That's a classic example of a masked bug.

Also note that this is very close to an evaluation order problem;
insertion sort gets invoked at the wrong time, on the wrong quantity of
data. I suspect that it wouldn't be difficult to rewrite this example so
that its correctness or performance depends on argument evaluation
style.

> Obviously, this is a very rare if not nonexistent situation, which
> means that there is a bug in the normal sense (using my own subjective
> definition of "normal").

That's true for the version you posted, but it didn't accurately
recreate what I described. In the original example, the poor performance
with large data sets was the only clue to the bug. IIRC, the author only
found it by carefully reviewing the code.

> I still think that at this point everyone knows what everyone else
> thinks, so additional words are pretty much wasted.

I'm not sure of that; your restatement was incorrect in a way that
understated the actual problem.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/11/2003 12:05:53 AM
Bradd W. Szonye wrote:

[...]

> I get the impression that you, Matthias Blume, and Eli Barzilay define
> "correct" and "bug" in a way that cares only about the entire program.

[...]

Absolutely not. Yes, a program can contain bugs that are "masked." But 
what you're really arguing is something else entirely:

Given programming language A with known and deterministic order of 
evaluation, and language B which is exactly like language A except that 
order of evaluation is unknown, there exists program (subroutine, 
function, whatever) P that exhibits correct behavior in language A, but 
not in language B. Okay, fine. But you go further. You claim that the 
fact that program P works in language A might be accidental--that is, 
the programmer didn't actually write the program with the fact in mind 
that programming language A has deterministic order of evaluation.

Now it gets interesting: There is no way that program P can be made to 
exhibit incorrect behavior in language A, so the only way we have to 
determine if in fact program P is buggy--the only definition of "buggy" 
we can reasonably entertain--is to know whether the program faithfully 
reproduces the programmer's understanding of the program. The existence 
of these buggy programs P is, therefore, your own personal testimony 
notwithstanding, inherently hypothetical.

But you keep going: you then use the hypothetical existence of these 
buggy programs as supporting evidence that deterministic order of 
evaluation is hiding exactly the bugs you claim as proof of your argument.

A camping story I was told as a kid was about the behind-the-head bats. 
When the sun has just gone down, the behind-the-head bats fly out of the 
caves and hover right behind your head where you can't see them. They're 
very quick, so it doesn't matter how fast you turn your head, you'll 
still never see them.

Your "masked" order-of-evaluation bugs are actually behind-the-head bats.

-thant

0
thant (332)
12/11/2003 12:15:28 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> > "Bradd W. Szonye" <bradd+news@szonye.com> writes:
> >> I get the impression that you, Matthias Blume, and Eli Barzilay
> >> define "correct" and "bug" in a way that cares only about the entire
> >> program. I also care about the correctness of *subprograms*. That
> >> is, it's not enough that a program produce the correct output for
> >> all possible inputs, but also that (1) the program's implementation
> >> matches its design and that (2) all subprograms also produce the
> >> correct output for all possible inputs.
> 
> Eli Barzilay <eli@barzilay.org> wrote:
> > I disagree with this distinction.  If you have:
> > 
> >   (define (insertionsort ...)
> >     ...)
> >   (define (quicksort ...)
> >     ...bug2...)
> >   (define (sort ...)
> >     (if ...bug1...
> >       (...quicksort...)
> >       (...insertionsort...)))
> 
> That wasn't the bug, though. Instead, it was more like this:
> 
>     (define (insertion-sort ...) ...)
>     (define (quicksort ...)
>       (if (<bug1a>)
>           (<bug2>)
>           (insertion-sort <bug1b>)))
> 
> where the overall effect was to occasionally invoke <bug2>, but to
> also run insertion sort over the entire input, masking everything
> but the performance degradation. For small data sets, the
> degradation isn't measurable, so this collection of bugs is may well
> pass unit testing.

It's still the same situation -- I didn't limit sections of code that
are functions, my example was misleading.  (+ x 2) is a buggy way of
adding one to x, even if you put it in a context that never uses it.
Only if you promise to never (re)use that little piece of code for
adding one to x, only then it is not a bug.


> > I still think that at this point everyone knows what everyone else
> > thinks, so additional words are pretty much wasted.
> 
> I'm not sure of that; your restatement was incorrect in a way that
> understated the actual problem.

Did anyone learn something new?  The *informative* content of these
posts is pretty close to zero now, and getting closer with every new
post.

(If it wasn't for line wrapping, the References header would be close
now to sticking out of the building.)

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
0
eli666 (555)
12/11/2003 12:25:17 AM
> Bradd W. Szonye wrote:
>> I get the impression that you, Matthias Blume, and Eli Barzilay
>> define "correct" and "bug" in a way that cares only about the entire
>> program.

Thant Tessman <thant@acm.org> wrote:
> Absolutely not. Yes a program can contain bugs that are "masked." But 
> what you're really arguing is something else entirely:

The paraphrase below does not accurately restate my argument.

> Given programming language A with known and deterministic order of
> evaluation, and language B which is exactly like language A except
> that order of evaluation is unknown, there exists program (subroutine,
> function, whatever) P that exhibits correct behavior in language A,
> but not in language B. Okay, fine.

OK so far, if you define "correct behavior" to mean "the whole program
produces the expected output for every input." This is why the fixed
evaluation order *seems* better, but I claim that this gain is
superficial and outweighed by other problems.

> But you go further. You claim that the fact that program P works in
> language A might be accidental--that is, the programmer didn't
> actually write the program with the fact in mind that programming
> language A has deterministic order of evaluation.

This part of the paraphrase is not entirely accurate. It would be more
accurate to say, "The program's design does not depend on order of
evaluation, but its implementation does." That's a mismatch between
design and implementation, which is a kind of bug. At the very least,
it's a documentation error.

> Now it gets interesting: There is no way that program P can be made to
> exhibit incorrect behavior in language A, so the only way we have to
> determine if in fact program P is buggy--the only definition of
> "buggy" we can reasonably entertain--is to know whether the program
> faithfully reproduce's the programmer's understanding of the program.

That's true but irrelevant. While I do care whether program P works, I
also care about programs P2, P3, and P4 (i.e., later versions of program
P with bugfixes and enhancements). I also care about subprograms S1, S2,
and S3, because they get reorganized and reused in those later versions.

Features like fixed evaluation order pave over minor design problems in
the subprograms. They smooth out some glitches in intermodule
communication. That has two effects, in my experience:

1. While program P may work correctly, the subprograms contain subtle
   bugs that don't appear in testing, because the language features pave
   over the bugs. They don't show up until later, when I try to make
   changes to the original program, or when I try to reuse those
   subprograms in other code.

2. In that kind of environment, programmers tend to make more sloppy
   mistakes. They tend to assume that "correct output" means "no bugs,"
   just like you're doing now. Therefore, anything that paves over bugs
   is likely to convince programmers that no bugs exist, which
   reinforces the bad habits that lead to the bugs in the first place.

> The existence of these buggy programs P is, therefore, your own
> personal testimony not withstanding, inherently hypothetical.

See, you're still assuming that "correct output from P" means "no bugs."
That simply isn't true. Bugs in a subprogram are still bugs, even if the
program as a whole somehow manages to mask them.

> But you keep going: you then use the hypothetical existence of these
> buggy programs as supporting evidence that deterministic order of
> evaluation is hiding exactly the bugs you claim as proof of your
> argument.

Yes, because it can turn buggy subprograms into "correct" programs! That
encourages poor programming, and it makes maintenance more difficult,
because the program gets released with the bugs still in it.

> Your "masked" order-of-evaluation bugs are actually behind-the-head bats.

This only shows that Eli was wrong: We *don't* all understand each
other. You've completely paved over the difference between "correct
program" and "program that's correct in all of its parts." Users might
only care about the former, but as a maintainer and a reviewer, I
definitely care about the latter also, and *that's* where stuff like
fixed AEO and defensive programming cause problems.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/11/2003 12:42:47 AM
> "Bradd W. Szonye" <bradd+news@szonye.com> writes:
>> That wasn't the bug, though. Instead, it was more like [example,]
>> where the overall effect was to occasionally invoke <bug2>, but to
>> also run insertion sort over the entire input, masking everything but
>> the performance degradation. For small data sets, the degradation
>> isn't measurable, so this collection of bugs is may well pass unit
>> testing.

Eli Barzilay <eli@barzilay.org> wrote:
> It's still the same situation -- I didn't limit sections of code that
> are functions, my example was misleading.  (+ x 2) is a buggy way of
> adding one to x, even if you put it in a context that never uses it.

Again, that's not equivalent. If you use that as the implementation for
the ADD1 function, you *will* notice the difference, probably in unit
testing. So far, both of your examples have relied on dead code: The bug
exists, but it never actually gets executed.

In my example, the buggy code actually *runs*, but some other part of
the program paves over the bug. Some workaround hides the bug such that
it doesn't affect the overall output. It might be an explicit workaround
or a bit of defensive programming. Or a specific evaluation order just
happens to run the code in a way that doesn't trigger the bug.

While these may not look like bugs to the end user, they're still bugs.
Because the overall program suppresses them, they survive pre-release
testing. However, they still exist, and they can trip up maintainers.
Reorganize the code for a performance enhancement, and you can expose
the bug. Reuse the buggy part in another section, and blammo.
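
A small, self-contained illustration of that kind of masking (all of
the names here are invented):

    (define (char-count str)
      ;; bug: wrong for the empty string -- should simply be
      ;; (string-length str)
      (max 1 (string-length str)))

    (define (total-chars strs)
      ;; the defensive zero-length check below happens to hide the bug,
      ;; so whole-program tests pass; reuse char-count elsewhere and
      ;; the bug surfaces
      (if (null? strs)
          0
          (+ (if (zero? (string-length (car strs)))
                 0
                 (char-count (car strs)))
             (total-chars (cdr strs)))))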

> Only if you promise to never (re)use that little piece of code for
> adding one to x, only then it is not a bug.

Despite the inaccurate example, I do agree with this. However, unless
the program is at the end of its lifecycle, no such promise exists.
Therefore, it *is* a bug -- but it doesn't affect the program's output,
so it survives testing. The bug survives until the maintenance phase of
the lifecycle.

It's well-known in the software industry that the cost of fixing defects
increases with the span between introduction and diagnosis. For example,
it's very expensive to fix a design flaw during post-release
maintenance. That's why I find some of your arguments uncompelling: By
fixing the evaluation order, you prevent some bugs that would be
introduced and repaired in the same lifecycle phase, but you mask other
bugs so that they aren't found until post-release maintenance. That
drives up the overall engineering cost.

>>> I still think that at this point everyone knows what everyone else
>>> thinks, so additional words are pretty much wasted.

>> I'm not sure of that; your restatement was incorrect in a way that
>> understated the actual problem.

> Did we anyone learn something new? The *informative* content of these
> posts is pretty close to zero now, and getting closer with every new
> post.

I did present new information, though. I gave an example closer to what
you've been asking for. I just provided data about development costs. In
another reply, I tried to correct a blatant misstatement of my position.
If you didn't learn anything new, then you can fault my teaching
ability, but it's not because I didn't *say* anything new.

> (If it wasn't for line wrapping, the References header would be close
> now to sticking out of the building.)

I really hope you're not trying to imply that "long discussion"
necessarily means "asymptotically less information."
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/11/2003 1:14:17 AM
Bradd W. Szonye wrote:

[...]

> OK so far, if you define "correct behavior" to mean "the whole program
> produces the expected output for every input." This is why the fixed
> evaluation order *seems* better, but I claim that this gain is
> superficial and outweighed by other problems.

No. I mean "correct behavior" in the sense that the program and every 
piece of it does *exactly* what the programmer intended. All the bits of 
your post that assume otherwise have been clipped.

> 
> 
>>But you go further. You claim that the fact that program P works in
>>language A might be accidental--that is, the programmer didn't
>>actually write the program with the fact in mind that programming
>>language A has deterministic order of evaluation.
> 
> 
> This part of the paraphrase is not entirely accurate. It would be more
> accurate to say, "The program's design does not depend on order of
> evaluation, but its implementation does." That's a mismatch between
> design and implementation, which is a kind of bug. At the very least,
> it's a documentation error.

Who says the program's design doesn't depend on the order of evaluation? 
You keep saying this like it's a given, and then use the fact that no 
bug happens as proof that OofE is hiding the bug. If the programmer 
deliberately relies on deterministic OofE then the fact that the program 
(and all its little bits) works is *not* proof of a bug. Nor is your 
proclaimed desire to rearrange evaluation order arbitrarily.

You're like a politician who claims to speak for the "silent majority." 
"A program that only works with deterministic OofE is buggy. Proof? All 
the bugs are masked. See? You can't see any of them."

-thant


0
thant (332)
12/11/2003 1:22:00 AM
> Bradd W. Szonye wrote:
>> This part of the paraphrase is not entirely accurate. It would be
>> more accurate to say, "The program's design does not depend on order
>> of evaluation, but its implementation does." That's a mismatch
>> between design and implementation, which is a kind of bug. At the
>> very least, it's a documentation error.

Thant Tessman <thant@acm.org> wrote:
> Who says the program's design doesn't depend on the order of
> evaluation? You keep saying this like it's a given ....

Yes, because I've been talking about the situations where it is true.
However, I have not claimed that it is *always* true. I've been using
those situations as a *premise* in this part of my argument, which is
why it sounds like I'm taking it for granted -- I am!

> ... and then use the fact that no bug happens as proof that OofE is
> hiding the bug.

Programmers *do* write code that doesn't match the design. Imperative
argument style *can* conceal that fact. When it does, it means that bugs
survive until post-release maintenance, where they're much more
expensive to fix.

Furthermore, it's my experience that imperative argument style,
defensive programming, and similar styles make this kind of bug much
more common. While they do turn some bugs into non-bugs, other bugs
don't go away; they're just harder to find. That gives a false sense of
security, and you end up paying for it in post-release maintenance,
where it's a *lot* more expensive. I *estimate* that the overall cost is
much higher, because the cost you save from eliminating some bugs is much 
less than the cost you add by obscuring other bugs.

> If the programmer deliberately relies on deterministic OofE then the
> fact that the program (and all its little bits) works is *not* proof
> of a bug.

I never claimed otherwise. You have misunderstood my argument.

> Nor is your proclaimed desire to rearrange evaluation order
> arbitrarily.

When have I ever claimed that I want to reorganize code *arbitrarily*?
Please don't put words in my mouth.

> You're like a politician who claims to speak for the "silent
> majority." "A program that only works with deterministic OofE is
> buggy. Proof? All the bugs are masked. See? You can't see any of
> them."

In the future, please try to make sure that you actually understand my
argument before you use it as a launching pad for personal attacks.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/11/2003 1:39:06 AM
Thant Tessman wrote:

> Bradd W. Szonye wrote:
> 
> [...]
> 
>> OK so far, if you define "correct behavior" to mean "the whole program
>> produces the expected output for every input." This is why the fixed
>> evaluation order *seems* better, but I claim that this gain is
>> superficial and outweighed by other problems.
> 
> {stuff deleted}
> 
> Who says the program's design doesn't depend on the order of evaluation? 
> You keep saying this like it's a given, and then use the fact that no 
> bug happens as proof that OofE is hiding the bug. If the programmer 
> deliberately relies on deterministic OofE then the fact that the program 
> (and all its little bits) works is *not* proof of a bug. Nor is your 
> proclaimed desire to rearrange evaluation order arbitrarily.
> 
> You're like a politician who claims to speak for the "silent majority." 
> "A program that only works with deterministic OofE is buggy. Proof? All 
> the bugs are masked. See? You can't see any of them."


I'm surprised to admit this, but after reading this thread I see Bradd's 
point. I think it is getting lost in terminology. As I understand it, 
Bradd is assuming that in the specification of a problem, the 
specification states explicitly that the order of evaluation of X and Y 
is non-deterministic or unspecified. (Imagine you are implementing a 
simulator for an inherently concurrent system in Scheme.)


If I translate the specification into a language with a fixed order of 
evaluation, the fact that the specification states the order is 
irrelevant is lost. I cannot faithfully translate my specification into 
a language with fixed order of evaluation. He considers this a "bug".

In the future when someone reads my implementation, they may assume 
incorrectly that the order of evaluation of two terms is a particular 
order when in fact the specification states explicitly that it doesn't 
matter. They may modify my program taking advantage of a fixed order of 
evaluation, even though the specification says "don't do that".

Later, I perform a high-level optimization which I believe is safe 
because the specification says it should be; but because someone else 
incorrectly used my implementation as a replacement for the true 
specification, they have introduced a bug with respect to the 
specification.

All perfectly sensible. If you want an implementation that has some 
non-determinism, to allow you to faithfully encode specifications which 
have this property, then fine. Non-determinism is not a dirty word.

However, I'd just prefer that non-determinism not happen when 
evaluating function arguments. Things should be deterministic by 
default, or it should be easy to choose between the two.

If I were designing a language, I'd have tuples with a deterministic 
evaluation and another form with an explicit non-deterministic 
semantics. I'd also have an effect system and a compiler that would 
warn me if effectful expressions are used in a non-deterministic 
context.


0
danwang742 (171)
12/11/2003 4:16:31 AM
Matthias Blume <find@my.address.elsewhere> schrieb:
> "Scott G. Miller" <scgmille@freenetproject.org> writes:
>> Thats a terrible example.  An unspecified OoE doesn't require the
>> programmer to do any more work, in stark contrast to manual memory
>> management.
>
> It does if she is interested in correctness: Either she actively
> avoids the places where order is unspecified (e.g., by using Matthias
> Felleisen's suggestion of having all function arguments be values --
> in which case the whole discussion becomes irrelevant), or she must
> prove to herself that her code is correct under every permutation that
> is permissible under the language specification.  That is a whole lot
> of extra work.

I'm sympathetic to your argument that compilers should be doing proof
work. But realistically, in Scheme, this will never happen.

For the programmer to do the proof is not really very hard, since they
can rely on design conventions a compiler cannot. And with unspecified
OoE the proof must only be done ONCE, when the code is written, rather
than EVERY TIME someone wants to rearrange the code.

I think that if a programmer can't easily prove that OoE does or doesn't
matter for code they wrote themselves, then the design is very bad
already.
0
adrian61 (83)
12/11/2003 4:34:25 AM
"Bradd W. Szonye" <bradd+news@szonye.com> writes:

> Again, that's not equivalent. If you use that as the implementation
> for the ADD1 function, you *will* notice the difference, probably in
> unit testing. So far, both of your examples have relied on dead
> code: The bug exists, but it never actually gets executed.

The fact that my examples didn't use the code is irrelevant, since the
same holds when it does interact with another bug that cancels it out:

  ;; loop over elements of vector foo
  (let loop ((n 1))
    ...
    ;; we're about to refer to the n'th element a lot, but Scheme
    ;; counts from 0, so temporarily set n
    (set! n (- n 1))
    ...do stuff with n...
    ;; [forget to re-increment n]
    ...
    (loop (+ n 2))) ; loop with the next element

The same happens: I expect the (+ n 2) to be the add-one operation,
but it is not.  The bug *is* there, unless I never break this code up.


> While these may not look like bugs to the end user, they're still
> bugs.

I can only speak for myself, but I don't think that this was ever
disputed.  But they are only bugs if there are parts that will be
inspected and reused elsewhere -- for example, if you just compile the
above loop to machine code and throw away the source, then the result
has no bugs.  (For example, some "smart" compiler might compile the
non-buggy version of that loop the same way, and you wouldn't care
about it.)

The whole point Matthias made is that by fixing the order you get fewer
bugs, since the ones you do get are reliable and portable.  Of course
this *might* encourage bad programmers to write code that *might* break
when someone is not aware of a hidden assumption, but the reality is
that bugs that happen due to incompatible order are more common, as
with the example that started this cursed thread.


> > Only if you promise to never (re)use that little piece of code for
> > adding one to x, only then it is not a bug.
> 
> Despite the inaccurate example, I do agree with this. However,
> unless the program is at the end of its lifecycle, no such promise
> exists.

Exactly.  This is why I consider these things bugs -- not "benign",
not "masked", just plain bugs.  (I call extra spaces, identifiers
without a uniform dash style, and comments with non-uniform letter
case bugs; I will rewrite pieces of code if an identifier has a better
name even if it is already exported and used; and I began switching to
using plt's "[]" brackets, so at some point I will go over the 10K
lines in Swindle and change it all.  I have been called a "Coding
Ayatollah" and took that as a compliment --- now, do you think I'll
consider these things as non-bugs?)


> It's well-known in the software industry that the cost of fixing
> defects increases with the span between introduction and diagnosis.
> For example, it's very expensive to fix a design flaw during
> post-release maintenance. That's why I find some of your arguments
> uncompelling: By fixing the evaluation order, you prevent some bugs
> that would be introduced and repared in the same lifecycle phase,
> but you mask other bugs so that they aren't found until post-release
> maintenance. That drives up the overall engineering cost.

And according to what I explicitly said below, you know that this is
old news for me, and you know what I think and can say.  So feel free
to stop here for a few seconds and imagine that I did write that stuff
you knew I'd write.  You can even reply to it.  It might be fun.


> >>> I still think that at this point everyone knows what everyone else
> >>> thinks, so additional words are pretty much wasted.
> 
> >> I'm not sure of that; your restatement was incorrect in a way that
> >> understated the actual problem.
> 
> > Did we anyone learn something new? The *informative* content of these
> > posts is pretty close to zero now, and getting closer with every new
> > post.
> 
> I did present new information, though. I gave an example closer to
> what you've been asking for. I just provided data about development
> costs. In another reply, I tried to correct a blatant misstatement
> of my position.  If you didn't learn anything new, then you can
> fault my teaching ability, but it's not because I didn't *say*
> anything new.

I have enough information regarding your view, so none of the above
was new to me.


> > (If it wasn't for line wrapping, the References header would be
> > close now to sticking out of the building.)
> 
> I really hope you're not trying to imply that "long discussion"
> necessarily means "asymptotically less information."

Of course it does.  It's a natural thing -- in the first few posts
there was a lot of new stuff people wrote, and the least you could
learn is where people stand and their reasons.  Arguments continue
while narrowing down the discussion to the core differences, and at
some point you already know what the OP will say, you just keep
responding to posts because whatever `they' say, you always have
something different to say.  At this point the argument is at a high
risk of deteriorating to personal attacks, which is also very common
in deep threads.  So at this point of the discussion I'm sure that you
won't see any hidden light and suddenly agree for a fixed order,
probably just as you're sure that I won't see the light on the other
side.

Of course there are some cases of deep discussion threads that keep
presenting new technical stuff, but these are *extremely* rare -- when
it happens, people get excited about new ideas and start new threads,
or many people participate.  When I see this:

  R  [  15: Matthias Blume      ] Re: Compatibility questions
  R      [  20: Bradd W. Szonye     ] 
  R          [  30: Matthias Blume      ] 
  R              [  44: Bradd W. Szonye     ] 
  R                  [  15: Matthias Blume      ] 
  R                      [  66: Bradd W. Szonye     ] 
  R                          [  18: Matthias Blume      ] 
  R                              [  28: Bradd W. Szonye     ] 
  R                                  [  29: Matthias Blume      ] 
  R                                      [  73: Bradd W. Szonye     ] 
  R                                          [  33: Matthias Blume      ] 
  R                                              [  63: Bradd W. Szonye     ] 

I can be pretty sure that there are no surprises down there...

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
0
eli666 (555)
12/11/2003 5:04:58 AM
"Daniel C. Wang" <danwang74@hotmail.com> writes:

> However, I'd just prefer that non-determinism not happen when I call
> function arguments. Things should be deterministic by default, or it
> should be easy to choose between the two.
> 
> If I were designing a language, I'd have tuples with a deterministic
> evaluation and another form with and explicit non-deterministic
> semantics.

Yes -- this is exactly what I said in some other corner of this
thread.  Bradd, however, will only go to the mirror side of this: that
the default is non-deterministic and you can have forms where you
explicitly say that order does matter.

I would *really* not mind the above.  For example, when you write
mzscheme code that uses the module I posted earlier, you pretty much
know that order does not matter[*] -- if this is combined with a
compiler, then it should know that such code is open for
evaluation-order optimizations.

([*] Unless it is used in some perverse way to implement random
numbers or some randomized algorithm.)


> I'd also have a effect system and a compiler that would warn me if
> effectfull expressions are used in a non-deterministic context.

That is a really good idea.  (Which is another reason I like the fixed
order better -- I don't like the idea of such annotation being
implicit; if you want to say it, just say it.  Of course the same can
be said on the other side, but deterministic behavior seems much more
fundamental.)

The only comment I have is that you can have effectful expressions if
they don't interact with each other.  For example:

  (define (foo x) (display x) x)
  (unordered-call + (foo 1) (foo 2))

should barf:

  warning: non-deterministic output

but if you use this instead:

  (define (foo x) (display x) x)
  (unordered-call + (with-output-to-file "foo" (lambda () (foo 1)))
                    (foo 2))

then there is no problem.  The same holds of accessing any global
state.

Now something that can do *this* would be very nice.
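
For concreteness, here is a minimal sketch of what such a form might
look like as a plain procedure.  This is invented, not the module
posted earlier, and `random' is assumed to be an implementation-
provided procedure returning a non-negative integer below its
argument:

  (define (unordered-call-2 proc thunk-a thunk-b)
    ;; evaluate the two thunks in a coin-flip order, then apply proc
    ;; to the results in their written order; running this repeatedly
    ;; makes order-dependent effects (like the display example above)
    ;; visible
    (if (zero? (random 2))
        (let* ((a (thunk-a)) (b (thunk-b))) (proc a b))
        (let* ((b (thunk-b)) (a (thunk-a))) (proc a b))))

  ;; e.g. (unordered-call-2 + (lambda () (foo 1)) (lambda () (foo 2)))
  ;; may print "12" or "21" -- exactly the kind of thing a checker
  ;; should flag.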

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
0
eli666 (555)
12/11/2003 5:28:44 AM
Eli Barzilay wrote:
{stuff deleted}
> The only comment I have is that you can have effectfull expressions if
> they don't interact with each other.  For example:
> 
>   (define (foo x) (display x) x)
>   (unordered-call + (foo 1) (foo 2))
> 
> should barf:
> 
>   warninig: non-deterministic output
> 
> but if you use this instead:
> 
>   (define (foo x) (display x) x)
>   (unordered-call + (with-output-to-file "foo" (lambda () (foo 1)))
>                     (foo 2))
> 
> then there is no problem.  The same holds of accessing any global
> state.
> 
> Now something that can do *this* would be very nice.

I think a sufficiently powerful effect/monadic type system could track 
the effects above and notice that the effects are in different files 
and are therefore non-interfering, assuming no aliasing of 
output-ports.  So this kind of thing can be done with a static type 
system. I think you could easily adapt the effect-inference systems 
used for static memory management to address this issue.
0
danwang742 (171)
12/11/2003 5:49:53 AM
Thant Tessman <thant@acm.org> schrieb:
> Who says the program's design doesn't depend on the order of evaluation? 

Who says it does? It looks like your argument is that, since we can't
tell if the programmer intended to rely on fixed OoE or not (never mind
that the intention is masked by fixed OoE itself), we should assume the
programmer ALWAYS intended to rely on it, which doesn't follow and would
be terrible design on the part of the programmer anyways.

Some parts of the program will undoubtedly be OoE-independent. That is
enough.
0
adrian61 (83)
12/11/2003 5:53:08 AM
"Daniel C. Wang" <danwang74@hotmail.com> writes:

> Eli Barzilay wrote:
> > Now something that can do *this* would be very nice.
> 
> I think, a sufficently powerful, effect/monadic, type system could
> track the effects above and notice the effects are in different
> files and therefore non-interfering.

I can only repeat what I said -- this would be one of those things
that would be fun to read about and to use.


> Assuming no aliasing of output-ports.

Obviously.


> So this kind of thing can be done with a static type system.  I
> think, you could easily adapt the effect inference systems used for
> static memory management to address this issue.

Is "effect inference systems" the region-based memory management
stuff?

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
0
eli666 (555)
12/11/2003 6:38:48 AM
Thant Tessman <thant@acm.org> writes:

> Joe Marshall wrote:
>> Thant Tessman <thant@acm.org> writes:
>>
>>>Scott G. Miller wrote:
>>>
>>>>Lauri Alanko wrote:
>>>>
>>>
>>>[...]
>>>
>>>>>I'd hate to design a language for programmers who take insult at the
>>>>>implication that they, too, can make mistakes.
>>>>
>>>>Of course not.  But its a slippery slope when you start to change
>>>>the language so that its more tolerant of sloppy programming.
>>>
>>>Is the purpose of automatic memory management to make a language more
>>>tolerant of sloppy programming? Or is its purpose to abstract out
>>>detail so as to allow the programmer to put their attention to the
>>>bigger task at hand?
>> Neither.  In a sufficiently complex language it is undecidable at
>> compile time what memory might be needed, so you have to have some
>> sort of mechanism at run time to figure it out.
>
> You're describing the need for dynamic memory management, not
> automatic memory management (i.e. garbage collection). The latter
> implies the former, but they're not the same thing. Even C and C++
> provide for dynamic memory management.

I was implying that dynamic memory management alone, a la C and C++, is
insufficient unless you restrict yourself to a certain programming
style.  For the moment, let's outlaw the `smart pointers' and
`reference count' hacks that show up in C++ and talk only about malloc
and free.  To manage memory without `leaks', it is important to call
free within some limited time after an object can no longer make a
difference to the future of the calculation.  This is known to be
undecidable and automatic memory management uses a conservative
approximation to this (i.e., it assumes that an object that is
reachable must be kept around, it doesn't attempt to prove that the
object will in fact be used).

It is not usually considered sloppy programming to write code that is
statically undecidable, and no amount of attention to detail will
allow you to figure out where to place calls to free such that memory
is neither leaked nor prematurely freed (assuming an unrestrained
usage model).  Therefore, the purpose of automatic memory management
is neither to make a language tolerant of sloppiness nor to abstract
out detail.  It is rather an implementation necessity implied by the
semantics of the language.

> Besides, this just begs the question:  What's the point of a
> "sufficiently complex" language if not to make the programmer's task
> easier?

I added the `sufficiently complex' part so I could avoid arguments
about languages that had only stack allocation (no malloc or free), or
no allocation at all (turing machine), or some sort of weird model
(all objects have exactly one use and are freed upon use and explicit
copies are made everywhere).
0
jrm (1310)
12/11/2003 3:33:20 PM
Adrian Kubala wrote:
> Thant Tessman <thant@acm.org> schrieb:
> 
>>Who says the program's design doesn't depend on the order of evaluation? 
> 
> 
> Who says it does? It looks like your argument is that, since we can't
> tell if the programmer intended to rely on fixed OoE or not (never mind
> that the intention is masked by fixed OoE itself), we should assume the
> programmer ALWAYS intended to rely on it,

Yes,


> which doesn't follow and would
> be terrible design on the part of the programmer anyways.

In Scheme, you bet. In a langauge with fixed OofE, not necessarily.

> 
> Some parts of the program will undoubtedly be OoE-independent. That is
> enough.

Rather, I suspect *most* of the time code will be OofE-independent. Like 
I said, I've never been bitten by nondeterministic OofE and 
simultaneously, I've never felt the need to leverage its implications 
when "rearranging" code. I really don't know why. Maybe it's just a 
subconscious style thing on my part. This whole argument is rather 
trivial to me in that sense. It's the bizarre arguments used to defend 
nondeterministic OofE that have goaded me into participating in this thread.

-thant

0
thant (332)
12/11/2003 3:44:57 PM
Daniel C. Wang wrote:

[...]

> I'm surprised to admit this but after reading this thread, I see Bradd's 
> point. I think, it is getting lost in terminology. As, I understand it 
> Bradd is assuming that in the specification of a problem, the 
> specification states explicitly that the order of evaluation of X and Y 
> are non-deterministic or unspecified. (Imagine, you are impelmenting a 
> simulator for an inherently concurent system in Scheme.)

Argh! I (and, if I may be so bold, Matthias Blume) understood this point 
*long* ago. The point I'm trying to make is that this argument, when 
applied to function-argument OofE the way Bradd is doing, by its very 
nature precludes objective evidence for or against it. Matthias and I 
have both already said that being able to explicitly specify 
nondeterministic order of evaluation is a *good* thing for something 
like concurrency.

-thant

0
thant (332)
12/11/2003 3:54:06 PM
Eli Barzilay wrote:
> "Daniel C. Wang" <danwang74@hotmail.com> writes:
> 
> Is "effect inference systems" the region-based memory management
> stuff?

Yep. Many region-based memory systems track stores and loads into
different regions. Think of your file handle as a region, and treat
"read/writes" to it just like memory loads and stores, and there you go.

The disk is just some other kind of very slow memory.



0
danwang742 (171)
12/11/2003 3:54:27 PM
Eli Barzilay wrote:


> That is a really good idea.  (Which is another reason I like the fixed
> order better -- I don't like the idea of such annotation being
> implicit, if you want to say it, just say it.  Of course the same can
> be said on the other side, but deterministic behavior seems much more
> fundamental.)

While I have sympathy for this argument, in programs where order does
not matter (i.e. all currently correct Scheme programs), the *semantics*
is /by definition/ deterministic independently of which order you do the
evaluations.

Also, I would say that it is more fundamental for the order of evaluation
of arguments not to matter (in fact today it is impossible to write a
/correct/ Scheme program where they do).  It probably depends on one's
personal sense of aesthetics, but I personally would find it inelegant to
break the nice permutation symmetry of algebraic evaluation order just in
order to make some currently buggy programs correct.

Right now, if I want to understand what a correct program is doing, I can
start hand-evaluating subexpressions in any convenient order consistent
with call-by-value, just as in primary school arithmetic, and I don't
want to be forced to do it from left to right (or force the machine to do
it that way) when doing so would be unnatural or more labor-intensive.
That would also be, in my view, against the spirit of equational
reasoning (even the more limited form that call-by-value imposes).

For this reason, it is in fact conceivable that such a restriction would
make it /more/ difficult, not less, to read programs, since now I *have*
to hand-evaluate all subexpressions in all function calls in a determined
order, since the original programmer might have hidden sequencing of
effects in the ordering of procedure arguments.

I know that some people are uncomfortable that this imposes an
undecidable correctness criterion on Scheme programs.  But Schemers are
no strangers to undecidable correctness criteria.  Type correctness is
also undecidable.

I would also argue that sequencing of effects should always be
intentional and explicitly coded.  If someone does not know that their
program depends on a certain sequence of events, they have no right to
expect "accidental correctness", and probably they are in the wrong
business.  On the other hand, if they know that certain effects should be
sequenced, they should be required to express that.  In my personal view,
the function call syntax is probably not the best place to put that
information.

A.







0
andre9567 (120)
12/11/2003 4:19:20 PM
Joe Marshall wrote:

> [...]  Therefore, the purpose of automatic memory management
> is neither to make a language tolerant of sloppiness nor to abstract
> out detail.  It is rather an implementation necessity implied by the
> semantics of the language.

Okay, maybe I'm using the notion of abstraction in a much broader sense 
than you're interpreting it. If/when indeed we prefer to use a language 
with semantics that require automatic memory management, it is because 
that language offers us some sort of advantage. Of course the attempt to 
quantify exactly what these advantages are always leads to the most 
interesting discussions, but in my mind I associate that advantage with 
expressiveness and the ability to build new abstractions (in what might 
be a less technical use of the term), which in turn is about being able 
to forget at least for a while about how those abstractions themselves 
are supported.

-thant

0
thant (332)
12/11/2003 4:29:08 PM
Thant Tessman <thant@acm.org> writes:

> Joe Marshall wrote:
>
>> [...]  Therefore, the purpose of automatic memory management
>> is neither to make a language tolerant of sloppiness nor to abstract
>> out detail.  It is rather an implementation necessity implied by the
>> semantics of the language.
>
> Okay, maybe I'm using the notion of abstraction in a much broader
> sense than you're interpreting it. If/when indeed we prefer to use a
> language with semantics that require automatic memory management, it
> is because that language offers us some sort of advantage. 

C has semantics that require automatic memory management, but the
language doesn't provide it.

0
jrm (1310)
12/11/2003 4:36:55 PM
Joe Marshall wrote:
> Thant Tessman <thant@acm.org> writes:
> 
> 
>>Joe Marshall wrote:
>>
>>
>>>[...]  Therefore, the purpose of automatic memory management
>>>is neither to make a language tolerant of sloppiness nor to abstract
>>>out detail.  It is rather an implementation necessity implied by the
>>>semantics of the language.
>>
>>Okay, maybe I'm using the notion of abstraction in a much broader
>>sense than you're interpreting it. If/when indeed we prefer to use a
>>language with semantics that require automatic memory management, it
>>is because that language offers us some sort of advantage. 
> 
> 
> C has semantics that require automatic memory management, but the
> language doesn't provide it.
> 

Tell me about it! :-)

-thant

0
thant (332)
12/11/2003 4:40:06 PM
Andre <andre@het.brown.edu> writes:

> Eli Barzilay wrote:
> 
> 
> > That is a really good idea.  (Which is another reason I like the fixed
> > order better -- I don't like the idea of such annotation being
> > implicit, if you want to say it, just say it.  Of course the same can
> > be said on the other side, but deterministic behavior seems much more
> > fundamental.)
> 
> While I have sympathy for this argument,  in programs where order does not
> matter (i.e. all currently correct Scheme programs),
> the *semantics* is /by definition/ deterministic independently
> of which order you do the evaluations.

That's again that tautology which can be used to defend arbitrarily
bizarre language designs.

> Also, I would say that it is more fundamental for the order of
> evaluation of arguments not to matter

There is nothing "fundamental" about that at all.

> It probably depends on one's personal sense of aesthetics, but I
> personally would find it inelegant to break the nice permutation
> symmetry of algebraic evaluation order just in order to make some
> currently buggy programs correct.

And I find it inelegant having to reason about a multitude of
behaviors when I write a single program.  The symmetry you are talking
about does not exist the moment your language allows for effects.
Leaving the order of evaluation unspecified amounts to *pretending* it
exists, even though it does not.

> Right now, if I want to
> understand what a correct program is doing, I can start hand-evaluating
> subexpressions in any
> convenient order consistent with call-by-value, just as in primary school
> arithmetic, and
> I don't want to be forced to do it from left to right (or force the machine
> to do it that way)
> when doing so would be unnatural or more labor-intensive.

The task of "trying to understand what a correct program is doing"
sounds extremely contrived.  Much more likely one wants to understand
what an /incorrect/ program is doing, or, if not that, the meaning of
some program of which we don't know whether or not it is correct.

Moreover, I don't see what is "unnatural" or "more labor-intensive"
about hand-evaluating in a certain specified order.  Without fixed
order of evaluation, since one does not know a-priori that one is
dealing with a correct program, one will have to do it left-to-right,
then right-to-left, and then with every other possible permutation to
convince oneself of that. If anything, I'd call /that/ "unnatural"
and "more labor-intensive".

> That would also be, in my view, against the spirit of equational
> reasoning (even the more limited form that call-by-value imposes).

???  Equational reasoning to the limited extent it is permitted under
cbv is *easier* when you fix the order of evaluation.

> I know that some people are uncomfortable that this imposes an undecidable
> correctness criterion on Scheme programs.

Indeed.

> But Schemers are no strangers to undecidable correctness criteria.
> Type correctness is also undecidable.

Well, yes.  But two wrongs don't make one right. :-)

> I would also argue that sequencing of effects should always be intentional
> and explicitly coded.

If you can enforce that, yes.  And realistically, the only way of
enforcing it without going to great length in terms of type- and
effects systems is to fix the order of evaluation.  Then, namely, the
sequencing of effects is always explicit.

> On the other hand, if they know that certain effects should be
> sequenced, they should be required to express that.  In my personal
> view, the function call syntax is probably not the best place to put
> that information.

Probably not.  But the function call syntax is also a terrible place
to put information of the form "these things are order-independent".
0
find19 (1244)
12/11/2003 4:46:00 PM
Matthias Blume wrote:

> ???  Equational reasoning to the limited extend it is permitted under
> cbv is *easier* when you fix the order of evaluation.

I'm not sure I understand what you mean.
Even in CBV, there is no fundamental reason to evaluate the components
of a tuple expression in any particular order, nor any reason why
left-to-right should be easier.

> > I would also argue that sequencing of effects should always be intentional
> > and explicitly coded.
>
> If you can enforce that, yes.  And realistically, the only way of
> enforcing it without going to great length in terms of type- and
> effects systems is to fix the order of evaluation.

But we already have lambda for explicit sequencing (or, if you like to
disguise it: let, let* and begin).

It is true that fixing OoE would give another, redundant way of
expressing sequenced effects, at the price of enforcing unintentional
sequencing in the 99% of cases where sequencing is not needed.
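
Concretely, a small sketch of what I mean by the existing, explicit
mechanisms (the procedure names are invented just to make the snippet
self-contained):

;; Hypothetical helpers, only so the example runs.
(define (read-x) (display "x ") 1)
(define (read-y) (display "y ") 2)
(define (process x y) (+ x y))

;; Order left implicit: R5RS leaves the order of (read-x) and (read-y)
;; unspecified here.
(process (read-x) (read-y))

;; Order made explicit and intentional with let*:
(let* ((x (read-x))
       (y (read-y)))      ; READ-Y is guaranteed to run after READ-X
  (process x y))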

A.





0
andre9567 (120)
12/11/2003 6:08:02 PM
"Daniel C. Wang" <danwang74@hotmail.com> writes:

> I'm surprised to admit this but after reading this thread, I see
> Bradd's point. I think it is getting lost in terminology.

I don't think so.  The point was never lost, at least not on me.

However, I do think that there is a strong tendency to draw the wrong
conclusions from it.

Basically, the argument goes along these lines:

  We have a Whole program W(P) which contains a Part P.  Furthermore,
  we have a correctness criterion C_w(W(P)) on the whole program and
  a second correctness criterion C_p(P) on our part P.

  In a setting where we rely on testing to detect violations of
  correctness criteria, we need these criteria to be extensional.  In
  English, we must be able to detect violations by observing the
  behavior of the program.

  If we limit ourselves to observing just W(P), then we have a
  problem.  Even though C_p might be extensional with respect to P,
  this will not be the case relative to W(P).  To put it differently,
  we cannot in general detect correctness violations of a part of a
  program simply by observing the behavior of the whole program.

  This is a fundamental problem for which there is no silver bullet.
  Extensional correctness criteria regarding P are, in the general
  case, intensional with respect to W(P).

  Now, the current discussion is about one very specific kind of
  correctness criterion C_{order} for P, namely that pieces of P are
  order-independent.  The suggestion of making order of evaluation
  unspecified for certain language constructs is done in the hope that
  C_{order} -- which in general is an intensional criterion for W(P) --
  becomes extensional.  Again, rendering this in English, this means
  that a violation of C_{order} should then be reflected in a violation
  of C_w so it becomes observable when watching W(P).

  ---

  The first observation is that the general problem (existence of
  intensional correctness criteria) does not go away.  At best, it is
  "solved" for one very specific case.

  Second, the "solution" arguably comes at great cost in terms of
  the amount of work required when reasoning about programs.

  Third, in practice, C_{order} does not really become an extensional
  correctness criterion as most implementations end up choosing one
  particular order of evaluation anyway.  So we end up in a situation
  that is worse than before: Suppose P has a bug which, under some
  fixed order is not observable in W(P).  At least, in this case, we
  can argue that as long as we leave the order fixed it will continue
  to be unobservable, so for those who are only interested in C_w it
  does not matter.  With an unspecified order, a violation of C_{order}
  can /unexpectedly/ turn into a violation of C_w simply by switching to
  a new implementation, by upgrading to a new version of the compiler,
  or by fiddling optimizer switches.

  Fourth, the mechanism of turning violations of C_{order} into
  violations of C_w is fragile.  I actively have to write

      (f (VERY (LARGE (EXPRESSION-1 (WHICH IS ORDER)
                                    INDEPENDENT (RELATIVE TO) NOT)
                      (SO LARGE) (EXPRESSION-2)))
         EXPRESSION-2)

   in order to "express" order-independence.  If for some reason, say
   readability, I write

       (let ((arg1 (VERY (LARGE (EXPRESSION-1 (WHICH IS ORDER)
                                             INDEPENDENT (RELATIVE TO) NOT)
                        (SO LARGE) (EXPRESSION-2)))))

         (f arg1 EXPRESSION-2))

   I have effectively disabled the mechanism.

   Fifth, even under mandatory fixed-order evaluation, one can use
   test harnesses or compiler switches to test the code under
   evaluation orders which are not those mandated by the language
   definition.  This might be useful for testing purposes (although I
   personally think there are much better ways, see point 6).  When
   compiling for production, one would then go back to the fixed order.

   Sixth, there are better ways of making C_p violations observable.
   In fact, that's what everybody is doing already -- it's what unit tests
   are all about.  Formally, instead of considering W(P) and its correctness
   criterion C_w we consider the pair (W(P), P) and construct
   a new correctness criterion C_wp from C_w and C_p as follows:

         C_wp (w, p) = C_p (p) & C_w (w)

   In English: we consider (and verify) correctness of P separately.
   This is conceptually simple and practically more robust.

   Seventh, consider (f (foo) (bar)).  The information about
   order-independence of FOO and BAR in the call of F is arguably in
   the wrong place as it is a property of FOO and BAR and not a
   property of F or even just the call of F.  Suppose we specified
   along with the /definitions/ of FOO and BAR that their relative
   order does not matter.  Then suppose the maintainer of the code
   containing (f (foo) (bar)) decides she wants to swap the two
   arguments.  Now, under fixed order she cannot go ahead and "just do
   it".  Instead, she will have to go and look up the definitions (or
   specifications) of FOO and BAR.  Doing so should be easy in
   practice, and I'd say the result will be more robust because the
   required information comes "from the horse's mouth" and not from
   mere "circumstantial evidence".

Ok, I think I'm done. :-)

Matthias
0
find19 (1244)
12/11/2003 6:11:45 PM
> "Daniel C. Wang" <danwang74@hotmail.com> writes:
>> I'm surprised to admit this but after reading this thread, I see
>> Bradd's point. I think it is getting lost in terminology.

FWIW, Daniel's restatement of my argument wasn't quite correct. I'll
deal with that later, if I have time.

Matthias Blume wrote:
> The point was never lost, at least not on me. However, I do think that
> there is a strong tendency to draw the wrong conclusions from it.
> [Matthias restates part of my argument.]

That was an excellent summary! While you didn't cover all of my points,
it looks like you correctly restated the portion you did cover.

>   The first observation is that the general problem (existence of
>   intensional correctness criteria) does not go away.  At best, it is
>   "solved" for one very specific case.

Correct, I think. My goal is to use the construct which reduces the
overall cost, which is a complicated function of runtime performance,
build time, development costs, maintenance costs, testing costs, and
more indirect costs (including the difficulty of writing compilers and
testing tools).

>   Second, the "solution" arguably comes at great cost in terms of the
>   amount of work required when reasoning about programs.

That depends on *how* you reason about programs (which may be why it's
arguable). Imperative argument style makes some kinds of reasoning
easier; for example, it simplifies exhaustive reasoning, because it
limits the number of permutations. However, I believe that "functional"
argument style simplifies top-down reasoning and diagnosis, which are
more common in my experience.

>   Third, in practice, C_{order} does not really become an extensional
>   correctness criterion as most implementations end up choosing one
>   particular order of evaluation anyway.

Also correct. That's why I recommend the use of automated "stress" tools
like the "argument shaker." Some developers already get this benefit in
an ad hoc way, by porting programs to multiple Schemes.
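
For concreteness, here is one very rough way such a shaker could be
sketched in Scheme. This is only a hypothetical illustration, not the
actual "argument shaker," and it assumes an implementation-provided
RANDOM where (random 2) returns 0 or 1 (many Schemes offer one):

;; Rough sketch only: evaluate a two-argument call's operands in a
;; randomly chosen left-to-right or right-to-left order, so
;; order-dependence bugs get shaken out during testing.
(define-syntax shaken-call
  (syntax-rules ()
    ((_ op e1 e2)
     (if (zero? (random 2))
         (let* ((a e1) (b e2)) (op a b))
         (let* ((b e2) (a e1)) (op a b))))))

;; During stress testing one would write (shaken-call f (foo) (bar))
;; instead of (f (foo) (bar)); a correct, order-independent call must
;; behave the same either way.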

>   So we end up in a situation that is worse than before: Suppose P has
>   a bug which, under some fixed order is not observable in W(P).  At
>   least, in this case, we can argue that as long as we leave the order
>   fixed it will continue to be unobservable, so for those who are only
>   interested in C_w it does not matter.

Often, that's all that end users and initial developers care about. If
that were the whole story, I wouldn't object to imperative argument
style so much. However, C_w is not sufficient for post-release
maintenance and development. I use the term "fragile" or "brittle" to
describe a program that satisfies C_w but not C_p (i.e., the overall
program produces correct output, but some subsystems do not). More
formally, when a program is C_w but not C_p, it's more expensive to
achieve C_w(W(P')), where W(P') is a whole program derived from the
original whole program W(P).

I do agree that fixed evaluation order reduces the cost of satisfying
C_w. If it had no effect on C_p, I would agree that fixed order is
cheaper overall. However, I believe that premise is false:

1. Fixed evaluation order makes it more difficult to verify C_p, given
   typical engineering and testing practices. Specifically, it disallows
   one parameter for stress tests. Also, if the fixed order is
   left-to-right, I believe that it's more likely to coincidentally
   produce correct output (C_w) even if the code does not match its
   design (a kind of C_p error).
2. In my experience, many software developers believe (on some level)
   that "if it passed the tests, it must be correct"; the harder it is
   to verify C_p, the more likely they are to repeat mistakes that
   satisfy C_w but not C_p.
3. Therefore, I believe that fixed evaluation order both increases the
   likelihood of subprogram errors *and* reduces the likelihood of
   detecting those errors before release.
4. As a result, C_p is less likely to hold true in many application
   domains, including commercial, engineering, and scientific software.
5. Therefore, fixed evaluation order increases the cost of satisfying
   C_w(W(P')) for common application domains (i.e., software maintenance
   and enhancement is more expensive).

In short, fixed evaluation reduces the costs of achieving C_w(W(P)) but
increases the costs of achieving C_p and C_w(W(P')). Which is greater?

In some domains, fixed evaluation order may actually reduce both costs.
For example, life-critical applications that require exhaustive formal
proofs may find that fixed evaluation order reduces the cost of C_w
*and* C_p. For those applications, fixed order may be all-around better.

However, in my experience, software developers rely on informal code
reviews, walkthroughs, and incomplete (but focused) testing. In that
environment, it's easy to find some kinds of errors. System testing
finds most violations of C_w, and a combination of code review and unit
testing finds some violations of C_p. In this environment, I would
expect:

- Fixed argument evaluation reduces both the cost of producing the
  initial release (i.e., achieving C_w) and the cost of porting the
  program to other Scheme implementations. It will also eliminate some
  errors in subprograms (C_p). However, it will make detection more
  difficult for other subprogram errors, which will survive into
  post-release.

- Unspecified evaluation order increases the cost of producing the
  initial release and of porting to other implementations. However,
  since C_p errors are more likely to also appear as C_w errors (based
  on my experience), they're less likely to survive into post-release.

In short, I think the fixed order is likely to reduce the cost of
initial development and porting, but unspecified order is likely to
reduce the cost of post-release maintenance. It's well-documented that
the cost of defects grows dramatically when defects survive into later
phases of the software lifecycle, especially when they survive into
post-release maintenance. Then again, it's also well-known that time to
market is important in the commercial software industry. So which
approach is less costly overall?

My answer: It depends on the circumstances. Because of that, I'm
unwilling to accept any solution which precludes unspecified evaluation
order. Furthermore, I'm unwilling to accept a solution that makes
imperative argument style the "default" or "easier to type" solution,
because my gut feeling is that typical programmers would never use the
unspecified order in that environment; practically speaking, it's the
same as disallowing it entirely.

>   Fourth, the mechanism of turning violations of C_{order} into
>   violations of C_w is fragile.  I actively have to write
> 
>       (f (VERY (LARGE (EXPRESSION-1 (WHICH IS ORDER)
>                                     INDEPENDENT (RELATIVE TO) NOT)
>                       (SO LARGE) (EXPRESSION-2)))
>          EXPRESSION-2)
> 
>    in order to "express" order-independence. If for some reason, say
>    readability, I [use LET to name an intermediate result], I have
>    effectively disabled the mechanism.

I would consider that a general syntactic weakness of Scheme: It imposes
imperative style on all <body> forms. However, that's a whole 'nother
argument, and I'm not inclined to argue it. I'll just say for now that
Scheme imposes a trade-off between order-independence and naming of
intermediate results. I reject your claim that you've *disabled* the
mechanism; you've merely hindered it. While your example here does
support your argument, you've presented it as a false dichotomy, which
weakens the support.
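
To restate that trade-off as a runnable snippet (the procedure names are
invented for illustration; this is essentially the same shape as your
example above, boiled down):

;; Hypothetical definitions, only so the snippet runs.
(define (compute-big)   (display "big ")   1)
(define (compute-small) (display "small ") 2)

;; Order-independent per R5RS, but the large expression stays anonymous:
(+ (compute-big) (compute-small))

;; Named for readability, but naming it with LET forces COMPUTE-BIG to
;; run before COMPUTE-SMALL, whether or not that sequencing was intended:
(let ((big (compute-big)))
  (+ big (compute-small)))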

>    Fifth, even under mandatory fixed-order evaluation, one can use
>    test harnesses or compiler switches to test the code under
>    evaluation orders which are not those mandated by the language
>    definition.

That's only possible if you write all the code with functional argument
style. Do you really think that's likely if the language guarantees that
imperative argument style is "correct"?

>    Sixth, there are better ways of making C_p violations observable.
>    In fact, that's what everybody is doing already -- it's what unit
>    tests are all about .... In English: we consider (and verify)
>    correctness of P separately. This is conceptually simple and
>    practically more robust.

Unit tests are not exhaustive. They're always an approximation. Instead
of using the whole-program output to verify code, you use subprogram
output. However, it's not generally affordable to test *every*
subprogram, down to the instruction level. Unit testing is only a
partial solution. Furthermore, it's poorly-suited for testing how well
code survives reorganization. This is *my* area of expertise, and I
reject your claim.

>    Seventh, consider (f (foo) (bar)).  The information about
>    order-independence of FOO and BAR in the call of F is arguably in
>    the wrong place as it is a property of FOO and BAR and not a
>    property of F or even just the call of F.

It's not just a property of FOO and BAR; it's a contract between FOO,
BAR, and users of each. The information belongs on both sides of the
contract, both sides of the call. Also, IME code maintainers typically
take a top-down approach to diagnosing errors, so it's very useful to
include the information at the call site.

>    Suppose we specified along with the /definitions/ of FOO and BAR
>    that their relative order does not matter.

In practice, it's undesirable to explicitly annotate every function with
a list of all functions that it doesn't interact with. (Seriously, talk
about a maintenance nightmare!) In practice, we generally note the
functions that *do* interact with each other, and implicitly list all of
the non-interactions by not listing them. That does introduce some
possibility for error, but that's a risk you take by not testing
everything exhaustively.

In most cases, it's an acceptable risk, and in many cases, exhaustive
testing and proofing is unacceptably expensive. That's why professional
developers generally use exhaustive formal proofs only for life-critical
applications (and similar "must not fail" apps).

> Ok, I think I'm done. :-)

You did make some good points, but I disagree with most of the
conclusions, and I think a couple of the points were invalid.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/11/2003 8:52:54 PM
> Daniel C. Wang wrote:
>> I'm surprised to admit this but after reading this thread, I see
>> Bradd's point. I think it is getting lost in terminology. As I
>> understand it, Bradd is assuming that in the specification of a
>> problem, the specification states explicitly that the order of
>> evaluation of X and Y is non-deterministic or unspecified.

No, that's not quite right. I'm assuming that programs contain at
least some calls where the order of evaluation doesn't matter (because
in my experience, it's true for *most* calls).

Thant Tessman <thant@acm.org> wrote:
> Argh! I (and if I may be so bold, Matthias Blume) understood this
> point *long* ago. The point I'm trying to make is that this argument,
> when applied to function argument OofE the way Bradd is doing so, by
> its very nature precludes objective evidence for or against it.

How does it preclude objective evidence? I'm basing my cost-benefit
analysis on three things:

1. The cost of specifying imperative argument style where it isn't
   necessary (i.e., the cases assumed above).
2. The benefits of specifying imperative argument style by default in
   cases where it *is* necessary.
3. The likelihood of each case occurring.

Now, if I were *solely* considering #1, then what you say would be true,
but #1 is not the entirety of my argument. In short, I believe that

1. The cost of unnecessary imperative style is high.
2. The benefits of default imperative style are great in a few cases,
   but small in most cases.
3. Imperative style is typically unnecessary.

Conclusion: Default imperative style is generally very expensive, but
there are some situations where it really is better overall. Therefore,
I don't object to making it *easy*, but I do object to making it
universal. (I also object to making it the default, based on my
observation of programmer habits.)

> Matthias and I have both already said that being able to explicitly
> specify undeterministic order of evaluation is a *good* thing for
> something like concurrency.

That's not the only situation where it makes sense, and I believe that
unspecified evaluation order is superior in *most* situations. That's
why I argue against making it universal (i.e., mandating it in the
language standard).
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/11/2003 9:23:31 PM
> Matthias Blume <find@my.address.elsewhere> schrieb:
>> Scheme *is* an imperative language.  Get used to it.

Adrian Kubala <adrian@sixfingeredman.net> wrote:
> I don't think it's terribly controversial to say that good design
> maintains a clean separation between imperative pieces of code and
> purely-functional ones. Scheme is a good language because it facilitates
> this separation, i.e. by providing a usable subset of the language which
> is purely functional and by clearly marking side-effecting procedures.
> 
> In this spirit there should be a way of explicitly distinguishing
> between where sequential evaluation is intended and where it is not. I
> agree with others that function application is a natural place to draw
> the line, since the purely-functional subset of the language still has
> to apply functions and the non-functional extension provides plenty of
> sequencing operators.
> 
> Even if you're against unspecified OoE, you must see the value of a way
> to signal to compilers and other programmers that some sequence of
> expressions may be executed in any order, and the burden of proof is
> left to the programmer. If you choose to signal this with something
> besides function application, functional programming becomes much harder
> as you have to litter your code with "parallel" forms or whatever.

Well said!
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/11/2003 9:24:39 PM
Thant Tessman <thant@acm.org> wrote:
> Rather, I suspect *most* of the time code will be OofE independent.

I agree. Furthermore, I believe that unnecessary use of imperative
argument style greatly increases maintenance costs. Therefore, I
conclude that it's a bad idea to make imperative argument evaluation the
default (e.g., by mandating a fixed evaluation order in the language
standard).

> It's the bizarre arguments used to defend undeterministic OofE that's
> goaded me into participating in this thread.

How are they bizarre? Perhaps you've merely misunderstood some parts of
the arguments (as evidenced by the fact that some of your restatements
of my claims look bizarre even to me)?
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/11/2003 9:29:38 PM
I think you are all just talking past each other then. Rereading
Bradd's last few points, it's pretty clear his definition of a "bug" is
not based on the operational behavior of the program, but on the gap
between the specification and its implementation.

It is true that any program that always produces the same result under
non-deterministic semantics will produce the same result under
deterministic semantics.

The converse is obviously not true, as I'm sure we all agree. So I'm
puzzled as to why this thread continues to go on. Must be a slow work
week for everyone... :)


Thant Tessman wrote:

> Daniel C. Wang wrote:
> 
> [...]
> 
>> I'm surprised to admit this but after reading this thread, I see 
>> Bradd's point. I think it is getting lost in terminology. As I 
>> understand it, Bradd is assuming that in the specification of a 
>> problem, the specification states explicitly that the order of 
>> evaluation of X and Y is non-deterministic or unspecified. (Imagine 
>> you are implementing a simulator for an inherently concurrent system 
>> in Scheme.)
> 
> 
> Argh! I (and if I may be so bold, Matthias Blume) understood this point 
> *long* ago. The point I'm trying to make is that this argument, when 
> applied to function argument OofE the way Bradd is doing so, by its very 
> nature precludes objective evidence for or against it. Matthias and I 
> have both already said that being able to explicitly specify 
> undeterministic order of evaluation is a *good* thing for something like 
> concurrency.

0
danwang742 (171)
12/11/2003 11:03:50 PM
Matthias Blume <find@my.address.elsewhere> schrieb:
>> [...] the function call syntax is probably not the best place to put
>> that information.
>
> Probably not.  But the function call syntax is also a terrible place
> to put information of the form "these things are order-independent".

It's a good enough place for the lambda calculus.

Even if Scheme is not purely functional, it's nice that you can write
purely functional sub-programs without littering your code with
"order-independent" annotations.
0
adrian61 (83)
12/11/2003 11:33:44 PM
Adrian Kubala <adrian@sixfingeredman.net> writes:

> Matthias Blume <find@my.address.elsewhere> schrieb:
> >> [...] the function call syntax is probably not the best place to put
> >> that information.
> >
> > Probably not.  But the function call syntax is also a terrible place
> > to put information of the form "these things are order-independent".
> 
> It's a good enough place for the lambda calculus.

No, it is not.  The "lambda calculus" makes absolutely *no* claims
about the order of evaluation.  The order comes in through the
operational semantics.  There are CBV and CBN semantics which differ
both in evaluation order and operational behavior.  There is no
conceptual difference between requiring arguments to be evaluated
before evaluating the body and arguments being evaluated in some
specific order. As it so happens, for a CBV semantics in the pure
lambda calculus that order truly does not matter.  In a calculus with
effects other than non-termination this is no longer the case.

> Even if Scheme is not purely functional, it's nice that you can write
> purely functional sub-programs without littering your code with
> "order-independent" annotations.

I can write purely functional subprograms even if the language
requires a particular fixed order.
0
find19 (1244)
12/12/2003 3:08:00 AM
Matthias Blume wrote:

> > While I have sympathy for this argument,  in programs where order does not
> > matter (i.e. all currently correct Scheme programs),
> > the *semantics* is /by definition/ deterministic independently
> > of which order you do the evaluations.
> 
> That's again that tautology which can be used to defend arbitrarily
> bizarre language designs.

Neither a tautology nor correct.  Sometimes a program can have
more than one correct result.

David
0
feuer (188)
12/12/2003 9:55:47 AM
Daniel C. Wang wrote:

> I think, you all are just talking past each other then. Rereading,
> Bradd's last few points its pretty clear his definition of a "bug" is
> not based on the operational behavor of the program, but the gap between
> the specification and its implementation.

Again, my claim is not that the correctness and robustness and 
maintainability of a program AND ALL ITS LITTLE BITS is not important. 
(I thought that would have been quite clear from my claims for the 
advantages of (static) type systems.) My claim is that unspecified 
function argument order of evaluation, with respect to the above 
desirable properties, makes things worse, not better. Bradd's claim is 
that the ability to signal one's intention that OofE doesn't matter 
improves maintainability. I claim that overloading the meaning of a 
function call with this intention is 1) unintuitive (as the creation of 
this thread proves), and 2) inherently unenforceable in a language like 
Scheme anyway. In other words, in practice it doesn't--and can't--narrow 
the "gap between specification and its implementation." More than that, 
I claim in practice it does just the opposite.

[...]

-thant

0
thant (332)
12/12/2003 2:39:14 PM
Bradd W. Szonye wrote:

> [...] In short, I believe that
> 
> 1. The cost of unnecessary imperative style is high. [...]

The issue is not whether it's better to do things in a functional versus 
imperative style. The issue is that (beyond some not-well-quantified 
performance benefits) specifying function argument OofE adds ZERO cost 
to a functional programming style and has a *positive* not negative 
impact in terms of maintenance of mixed functional/imperative styles.

-thant

0
thant (332)
12/12/2003 3:18:45 PM
Feuer <feuer@his.com> writes:

> Matthias Blume wrote:
> 
> > > While I have sympathy for this argument,  in programs where order does not
> > > matter (i.e. all currently correct Scheme programs),
> > > the *semantics* is /by definition/ deterministic independently
> > > of which order you do the evaluations.
> > 
> > That's again that tautology which can be used to defend arbitrarily
> > bizarre language designs.
> 
> Neither a tautology nor correct.  Sometimes a program can have
> more than one correct result.

You misunderstood.  I meant the following: If I criticize a language
because feature A lets the programmer do B which results in something
bad happening (C), and you tell me: "but if you are careful with A and
don't do B, then C won't happen", then this is not exactly a good line
of reasoning in favor of having A in the language since it works for
arbitrarily bad A, B, and C.

0
find19 (1244)
12/12/2003 4:09:41 PM
> Bradd W. Szonye wrote:
>> In short, I believe that
>> 1. The cost of unnecessary imperative style is high. [...]

Thant Tessman <thant@acm.org> wrote:
> The issue is not whether it's better to do things in a functional
> versus imperative style. The issue is that (beyond some
> not-well-quantified performance benefits) specifying function argument
> OofE adds ZERO cost to a functional programming style ....

That's certainly false! We've seen claims from Will Clinger and Scott G.
Miller that some orders of evaluation are better than others, depending
on the machine you're targeting.

> and has a *positive* not negative impact in terms of maintenance of
> mixed functional/imperative styles.

And I strongly disagree with this.

Now you're the one making claims as though they're universally true, but
those are exactly the claims I'm disputing.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
12/12/2003 4:53:12 PM
Bradd W. Szonye wrote:
>>Bradd W. Szonye wrote:
>>
>>>In short, I believe that
>>>1. The cost of unnecessary imperative style is high. [...]
> 
> 
> Thant Tessman <thant@acm.org> wrote:
> 
>>The issue is not whether it's better to do things in a functional
>>versus imperative style. The issue is that (beyond some
>>not-well-quantified performance benefits) specifying function argument
>>OofE adds ZERO cost to a functional programming style ....
> 
> 
> That's certainly false! We've seen claims from Will Clinger and Scott G.
> Miller that some orders of evaluation are better than others, depending
> on the machine you're targeting.

I didn't say there were no performance benefits. I said they weren'