
### Python from Wise Guy's Viewpoint


THE GOOD:

1. pickle

2. simplicity and uniformity

3. big library (bigger would be even better)

THE BAD:

1. f(x,y,z) sucks. f x y z  would be much easier to type (see Haskell)
90% of the code is function applications. Why not make it convenient?

2. Statements vs Expressions business is very dumb. Try writing
a = if x :
y
else: z

3. no multimethods (why? Guido did not know Lisp, so he did not know
about them) You now have to suffer from visitor patterns, etc. like
lowly Java monkeys.

4. splintering of the language: you have the inefficient main language,
and you have a different dialect being developed that needs type
declarations. Why not allow type declarations in the main language
instead as an option (Lisp does it)

5. Why do you need "def" ? In Haskell, you'd write
square x = x * x

6. Requiring "return" is also dumb (see #5)

7. Syntax and semantics of "lambda" should be identical to
function definitions (for simplicity and uniformity)

8. Can you undefine a function, value, class or unimport a module?
(If the answer is no to any of these questions, Python is simply
not interactive enough)

9. Syntax for arrays is also bad [a (b c d) e f] would be better
than [a, b(c,d), e, f]

420

P.S. If someone can forward this to python-dev, you can probably save some
people a lot of soul-searching

Reply mike420 (55) 10/19/2003 11:18:31 AM

mike420@ziplip.com <mike420@ziplip.com> writes:

> 8. Can you undefine a function, value, class or unimport a module?
>    (If the answer is no to any of these questions, Python is simply
>     not interactive enough)

Yes. By deleting a name from namespace. You better read some tutorial,
this will save you some time.

--
Jarek Zgoda
Registered Linux User #-1
http://www.zgoda.biz/ JID:jarek@jabberpl.org http://zgoda.jogger.pl/

Reply jzgoda1 (255) 10/19/2003 12:51:00 PM

Frode Vatvedt Fjeld wrote:
>
> > mike420@ziplip.com <mike420@ziplip.com> writes:
> >
> >> 8. Can you undefine a function, value, class or unimport a module?
> >>    (If the answer is no to any of these questions, Python is simply
> >>     not interactive enough)
>
> Jarek Zgoda <jzgoda@gazeta.usun.pl> writes:
>
> > Yes. By deleting a name from namespace. You better read some
> > tutorial, this will save you some time.
>
> Excuse my ignorance wrt. to Python, but to me this seems to imply that
> one of these statements about functions in Python are true:
>
>   1. Function names (strings) are resolved (looked up in the
>      namespace) each time a function is called.
>
>   2. You can't really undefine a function such that existing calls to
>      the function will be affected.
>
> Is this (i.e. one of these) correct?

Both are correct, in essence.  (And depending on how one interprets
your second point, which is quite ambiguous.)

-Peter

Reply peter34 (3696) 10/19/2003 1:19:14 PM

Warning!  Troll alert!  I missed the three newsgroup cross-post
the first time, so I thought this might be a semi-serious question.

-Peter

mike420@ziplip.com wrote:
>
> THE GOOD:
>
> 1. pickle
>
> 2. simplicity and uniformity
>
> 3. big library (bigger would be even better)
>
>
> 1. f(x,y,z) sucks. f x y z  would be much easier to type (see Haskell)
>    90% of the code is function applications. Why not make it convenient?
>
> 2. Statements vs Expressions business is very dumb. Try writing
>    a = if x :
>            y
>        else: z
>
> 3. no multimethods (why? Guido did not know Lisp, so he did not know
>    about them) You now have to suffer from visitor patterns, etc. like
>     lowly Java monkeys.
>
> 4. splintering of the language: you have the inefficient main language,
>    and you have a different dialect being developed that needs type
>    declarations. Why not allow type declarations in the main language
>    instead as an option (Lisp does it)
>
> 5. Why do you need "def" ? In Haskell, you'd write
>    square x = x * x
>
> 6. Requiring "return" is also dumb (see #5)
>
> 7. Syntax and semantics of "lambda" should be identical to
>    function definitions (for simplicity and uniformity)
>
> 8. Can you undefine a function, value, class or unimport a module?
>    (If the answer is no to any of these questions, Python is simply
>     not interactive enough)
>
> 9. Syntax for arrays is also bad [a (b c d) e f] would be better
>    than [a, b(c,d), e, f]
>
> 420
>
> P.S. If someone can forward this to python-dev, you can probably save some
> people a lot of soul-searching

Reply peter34 (3696) 10/19/2003 1:21:18 PM

> mike420@ziplip.com <mike420@ziplip.com> writes:
>
>> 8. Can you undefine a function, value, class or unimport a module?
>>    (If the answer is no to any of these questions, Python is simply
>>     not interactive enough)

Jarek Zgoda <jzgoda@gazeta.usun.pl> writes:

> Yes. By deleting a name from namespace. You better read some
> tutorial, this will save you some time.

Excuse my ignorance wrt. to Python, but to me this seems to imply that
one of these statements about functions in Python are true:

1. Function names (strings) are resolved (looked up in the
namespace) each time a function is called.

2. You can't really undefine a function such that existing calls to
the function will be affected.

Is this (i.e. one of these) correct?

--
Frode Vatvedt Fjeld

Reply frodef (343) 10/19/2003 1:24:18 PM

On Sun, 19 Oct 2003 15:24:18 +0200, Frode Vatvedt Fjeld <frodef@cs.uit.no>
wrote:

>> mike420@ziplip.com <mike420@ziplip.com> writes:
>>
>>> 8. Can you undefine a function, value, class or unimport a module?
>>>    (If the answer is no to any of these questions, Python is simply
>>>     not interactive enough)
>
> Jarek Zgoda <jzgoda@gazeta.usun.pl> writes:
>
>> Yes. By deleting a name from namespace. You better read some
>> tutorial, this will save you some time.
>
> Excuse my ignorance wrt. to Python, but to me this seems to imply that
> one of these statements about functions in Python are true:
>
>   1. Function names (strings) are resolved (looked up in the
>      namespace) each time a function is called.
>
>   2. You can't really undefine a function such that existing calls to
>      the function will be affected.
>
> Is this (i.e. one of these) correct?
>
Neither is completely correct.  Functions are internally dealt with
using dictionaries (pythonese for hash-table).  The bytecode compiler
gives it an ID and the lookup is done using a dictionary.  Removing
the function from the dictionary removes the function.

--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/


Peter Hansen <peter@engcorp.com> writes:

> Both are correct, in essence.  (And depending on how one interprets
> your second point, which is quite ambiguous.)

Frode Vatvedt Fjeld wrote:

>>   1. Function names (strings) are resolved (looked up in the
>>      namespace) each time a function is called.

But this implies a rather enormous overhead in calling a function,
doesn't it?

>>   2. You can't really undefine a function such that existing calls to
>>      the function will be affected.

What I meant was that if you do the following, in sequence:

a. Define function foo.
b. Define function bar, that calls function foo.
c. Undefine function foo

Now, if you call function bar, will you get an "undefined function"
exception? But if point 1. really is true, I'd expect you'd get an
"undefined name" exception or somesuch.

--
Frode Vatvedt Fjeld

Reply frodef (343) 10/19/2003 1:47:26 PM

(I'm replying only because I made the mistake of replying to a
triply-crossposted thread which was, in light of that, obviously
troll-bait.  I don't plan to continue the thread except to respond
to Frode's questions.  Apologies to c.l.p readers.)

Frode Vatvedt Fjeld wrote:
>
> Peter Hansen <peter@engcorp.com> writes:
>
> > Both are correct, in essence.  (And depending on how one interprets
> > your second point, which is quite ambiguous.)
>
> Frode Vatvedt Fjeld wrote:
>
> >>   1. Function names (strings) are resolved (looked up in the
> >>      namespace) each time a function is called.
>
> But this implies a rather enormous overhead in calling a function,
> doesn't it?

"Enormous" is of course relative.  Yes, the overhead is more than in,
say, C, but I think it's obvious (since people program useful software
using Python) that the overhead is not unacceptably high?

As John Thingstad wrote in his reply, there is a dictionary lookup
involved and dictionaries are extremely fast (yes, yet another relative
term... imagine that!) in Python so that part of the overhead is
relatively unimportant.  There is actually other overhead which is
involved (e.g. setting up the stack frame which is, I believe, much larger
than the trivial dictionary lookup).

Note also that if you have a reference to the original function in,
say, a local variable, removing the original doesn't really remove it,
but merely makes it unavailable by the original name.  The local variable
can still be used to call it.
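[A minimal sketch of that point (Python 3 syntax): deleting the name does not destroy the function object while another reference to it survives.]

```python
# The function object outlives its original name if another reference exists.
def foo():
    return "original"

keep = foo                     # stash a second reference
del foo                        # the name 'foo' is now unbound...
assert keep() == "original"    # ...but the object is still callable via 'keep'
```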

> >>   2. You can't really undefine a function such that existing calls to
> >>      the function will be affected.
>
> What I meant was that if you do the following, in sequence:
>
>   a. Define function foo.
>   b. Define function bar, that calls function foo.
>   c. Undefine function foo
>
> Now, if you call function bar, will you get an "undefined function"
> exception? But if point 1. really is true, I'd expect you get an
> "undefined name" exception or somesuch.

See below.

Python 2.3.1 (#47, Sep 23 2003, 23:47:32) [MSC v.1200 32 bit (Intel)] on win32
>>> def foo():
...     print 'in foo'
...
>>> def bar():
...     foo()
...
>>> bar()
in foo
>>> del foo
>>> bar()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 2, in bar
NameError: global name 'foo' is not defined

On the other hand, as I said above, one can keep a reference to the original.
If I'd done "baz = foo" just before the "del foo", then I could easily have
done "baz()" and the original method would still have been called.

Python is dynamic.  Almost everything is looked up in dictionaries at
runtime like this.  That's its nature, and much of its power (as with
the many other such languages).

-Peter

Reply peter34 (3696) 10/19/2003 2:20:11 PM

Peter Hansen <peter@engcorp.com> writes:

> Warning!  Troll alert!  I missed the three newsgroup cross-post
> the first time, so I thought this might be a semi-serious question.

That's why I set FUT to this group.

--
Jarek Zgoda
Registered Linux User #-1
http://www.zgoda.biz/ JID:jarek@jabberpl.org http://zgoda.jogger.pl/

Reply jzgoda1 (255) 10/19/2003 3:20:30 PM

John Thingstad <john.thingstad@chello.no> writes:

> [..] Functions are internally dealt with using dictionaries.  The
> bytecode compiler gives it an ID and the look up is done using a
> dictionary.  Removing the function from the dictionary removes the
> function.  (pythonese for hash-table)

So to get from the ID to the bytecode, you go through a dictionary?
And the mapping from name to ID happens perhaps when the caller is
bytecode-compiled?

--
Frode Vatvedt Fjeld

Reply frodef (343) 10/19/2003 5:38:40 PM

Oh, you're trolling for an inter-language flame fest...
well, anyway:

> 3. no multimethods (why? Guido did not know Lisp, so he did not know
>    about them) You now have to suffer from visitor patterns, etc. like
>     lowly Java monkeys.

Multimethods suck.

The longer answer: Multimethods have modularity issues (if whatever
domain they're dispatching on can be extended by independent developers:
different developers may extend the dispatch domain of a function in
different directions, and leave undefined combinations; standard
dispatch strategies as I've seen in some Lisps just cover up the
undefined behaviour, with a slightly less than 50% chance of being correct).
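[For concreteness, here is a minimal toy sketch of the kind of multimethod being debated -- dispatch on the runtime types of *all* arguments rather than just the first. The `registry`/`defmethod`/`dispatch` names are hypothetical, not any real library's API; the "undefined combinations" Joachim mentions show up as a lookup failure.]

```python
# Hypothetical toy multimethod registry: dispatch on all argument types.
registry = {}

def defmethod(name, types, fn):
    """Register fn as the implementation of 'name' for this type tuple."""
    registry.setdefault(name, {})[types] = fn

def dispatch(name, *args):
    """Select the implementation matching the exact argument types."""
    key = tuple(type(a) for a in args)
    try:
        fn = registry[name][key]
    except KeyError:
        # an undefined type combination: no method was registered for it
        raise TypeError("no method for %s%s" % (name, key))
    return fn(*args)

defmethod("collide", (int, int), lambda a, b: "int-int")
defmethod("collide", (int, str), lambda a, b: "int-str")
print(dispatch("collide", 1, "x"))   # selects on both argument types
```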

Regards,
Jo


Reply joachim.durchholz (563) 10/19/2003 6:01:03 PM

Frode Vatvedt Fjeld <frodef@cs.uit.no> writes:
> > [..] Functions are internally dealt with using dictionaries.  The
> > bytecode compiler gives it an ID and the look up is done using a
> > dictionary.  Removing the function from the dictionary removes the
> > function.  (pythonese for hash-table)
>
> So to get from the ID to the bytecode, you go through a dictionary?
> And the mapping from name to ID happens perhaps when the caller is
> bytecode-compiled?

Hah, you wish.  If the function name is global, there is a dictionary
lookup, at runtime, on every call.

def square(x):
    return x*x

def sum_of_squares(n):
    sum = 0
    for i in range(n):
        sum += square(i)
    return sum

print sum_of_squares(100)

looks up "square" in the dictionary 100 times.  An optimization:

def sum_of_squares(n):
    sum = 0
    sq = square
    for i in range(n):
        sum += sq(i)
    return sum

Here, "sq" is a local copy of "square".  It lives in a stack slot in
the function frame, so the dictionary lookup is avoided.

Reply phr.cx (5483) 10/19/2003 6:04:20 PM

On Sun, 19 Oct 2003 20:01:03 +0200, Joachim Durchholz wrote:

> The longer answer: Multimethods have modularity issues (if whatever domain
> they're dispatching on can be extended by independent developers:
> different developers may extend the dispatch domain of a function in
> different directions, and leave undefined combinations;

This doesn't matter until you provide an equally powerful mechanism which
fixes that. Which is it?

--
__("<         Marcin Kowalczyk
\__/       qrczak@knm.org.pl
^^     http://qrnik.knm.org.pl/~qrczak/


Reply qrczak (1265) 10/19/2003 6:38:13 PM

Frode Vatvedt Fjeld wrote:
...
> Excuse my ignorance wrt. to Python, but to me this seems to imply that
> one of these statements about functions in Python are true:
>
>   1. Function names (strings) are resolved (looked up in the
>      namespace) each time a function is called.
>
>   2. You can't really undefine a function such that existing calls to
>      the function will be affected.
>
> Is this (i.e. one of these) correct?

Both, depending on how you define "existing call".  A "call" that IS
in fact existing, that is, pending on the stack, will NOT in any way
be "affected"; e.g.:

def foo():
    print 'foo, before'
    remove_foo()
    print 'foo, after'

def remove_foo():
    print 'rmf, before'
    global foo
    del foo
    print 'rmf, after'

the EXISTING call to foo() will NOT be "affected" by the "del foo" that
happens right in the middle of it, since there is no further attempt to
look up the name "foo" in the rest of that call's progress.

But any _further_ lookup is indeed affected, since the name just isn't
bound to the function object any more.  Note that other references to
the function object may have been stashed away in many other places (by
other names, in a list, in a dict, ...), so it may still be quite
possible to call that function object -- just not to look up its name
in the scope where it was earlier defined, once it has been undefined.

As for your worries elsewhere expressed that name lookup may impose
excessive overhead, in Python we like to MEASURE performance issues
rather than just reason about them "abstractly"; which is why Python
comes with a handy timeit.py script to time a code snippet accurately.
So, on my 30-months-old creaky main box (I keep mentioning its venerable
age in the hope Santa will notice...:-)...:

[alex@lancelot ext]$ timeit.py -c -s'def foo():pass' 'foo'
10000000 loops, best of 3: 0.143 usec per loop
[alex@lancelot ext]$ timeit.py -c -s'def foo():return' 'foo()'
1000000 loops, best of 3: 0.54 usec per loop

So: a name lookup takes about 140 nanoseconds; a name lookup plus a
call of the simplest possible function -- one that just returns at
once -- about 540 nanoseconds.  I.e., the call itself plus the
return take about 400 nanoseconds _in the simplest possible case_,
and the name lookup is thus a minor fraction of the overall
lookup-call-return pure overhead.
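[Readers wanting to repeat the measurement today can script the same two comparisons with the standard `timeit` module; absolute numbers will of course differ enormously from the 2003 figures above.]

```python
import timeit

# Re-run the two measurements: bare name lookup vs. lookup-plus-call
# of a trivial function. Times are total seconds for 'number' iterations.
setup = "def foo(): pass"
lookup_only = timeit.timeit("foo", setup=setup, number=1_000_000)
lookup_and_call = timeit.timeit("foo()", setup=setup, number=1_000_000)
print("lookup only:     %.4f s" % lookup_only)
print("lookup and call: %.4f s" % lookup_and_call)
```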

Yes, managing less than 2 million function calls a second, albeit on
an old machine, is NOT good enough for some applications (although,
for many of practical importance, it already is).  But the need for speed
is exactly the reason optimizing compilers exist -- for those times
in which you need MANY more millions of function calls per second.
Currently, the best optimizing compiler for Python is Psyco, the
"specializing compiler" by Armin Rigo.  Unfortunately, it currently
only supports Intel-386-and-compatible CPU's -- so I can use it on my
old AMD Athlon, but not, e.g., on my tiny Palmtop, whose little CPU is
an "ARM" (Intel-made these days I believe, but not 386-compatible)
[ for plans by Armin, and many others of us, on how to fix that in the
reasonably near future, see http://codespeak.net/pypy/ ]

Anyway, here's psyco in action on the issue in question:

import time
import psyco

def non_compiled(name):
    def foo(): return
    start = time.clock()
    for x in xrange(10*1000*1000): foo()
    stend = time.clock()
    print '%s %.2f' % (name, stend-start)

compiled = psyco.proxy(non_compiled)

non_compiled('noncomp')
compiled('psycomp')

Running this on the same good old machine produces:

[alex@lancelot ext]$ python2.3 calfoo.py
noncomp 5.93
psycomp 0.13

The NON-compiled 10 million calls took an average of 593 nanoseconds
per call -- roughly the already-measured 540 nanoseconds for the call
itself, plus about 50 nanoseconds for each leg of the loop's overhead.
But, as you can see, Psyco has no trouble optimizing that by over 45
times -- to about 80 million function calls per second, which _is_ good
enough for many more applications than the original less-than-2 million
function calls per second was.

Psyco entirely respects Python's semantics, but its speed-ups take
particularly good advantage of the "specialized" cases in which the
possibilities for extremely dynamic behavior are not, in fact, being
used in a given function that's on the bottleneck of your application
(Psyco can also automatically use a profiler to find out about that
bottleneck, if you want -- here, I used the finer-grained approach of
having it compile ["build a compiled proxy for"] just one function in
order to be able to show the speed-ups it was giving).

Oh, BTW, you'll notice I explicitly ran that little test with python2.3
-- that was to ensure I was using the OLD release of psyco, 1.0; as my
default Python I use the current CVS snapshot, and on that one I have
installed psyco 1.1, which does more optimizations and in particular
_inlines function calls_ under propitious conditions -- therefore, the
fact that running just "python calfoo.py" would have shown a speed-up
of _120_ (rather than just 45) would have been "cheating", a bit, as
it's not measuring any more anything related to name lookup and
function call overhead.

That's a common problem with optimizing compilers: once they get smart
enough they may "optimize away" the very construct whose optimization
you were trying to check with a sufficiently small benchmark.  I
remember when the whole "SPEC" suite of benchmarks was made obsolete at
a stroke by one advance in compiler optimization techniques, for
example:-).

Anyway, if your main interest is in having your applications run fast,
rather than in studying optimization yields on specific constructs in
various circumstances, be sure to get the current Psyco, 1.1.1, to go
with the current Python, 2.3.2 (the pre-alpha Python 2.4a0 is
recommended only to those who want to help with Python's development,
including testing -- throughout at least 2004 you can count on
2.3.something, NOT 2.4, being the production, _stable_ version of
Python, recommended to all).

Alex

Reply aleax (648) 10/19/2003 7:09:04 PM

|1. f(x,y,z) sucks. f x y z  would be much easier to type (see Haskell)
|   90% of the code is function applications. Why not make it convenient?

Haskell is cool.  But to do what you want, you need uniform currying of
all function calls (i.e. every call is a call with *exactly one*
argument, often returning a new function).  That's not a reasonable
model for Python, for lots of reasons (but you are welcome to use
Haskell, I understand you can download versions of it for free).

|2. Statements vs Expressions business is very dumb. Try writing
|   a = if x :
|           y
|       else: z

Try writing ANYTHING that isn't Python... wow, it doesn't run in the
Python interpreter.

|3. no multimethods (why? Guido did not know Lisp, so he did not know
|   about them)

Been there, done that... we got them:
http://gnosis.cx/download/gnosis/magic/multimethods.py

|4. splintering of the language: you have the inefficient main language,
|   and you have a different dialect being developed that needs type

I think this might be a reference to Pyrex.  It's cool, but it's not a
fork of Python.

|5. Why do you need "def" ? In Haskell, you'd write
|   square x = x * x

Again, you are welcome to use Haskell.  If you'd like, you can also
write the following in Python:

    square = lambda x: x*x

|6. Requiring "return" is also dumb (see #5)

'return' is NOT required in a function.  Functions will happily return
None if you don't specify some other value you want returned.

|7. Syntax and semantics of "lambda" should be identical to
|   function definitions (for simplicity and uniformity)

Obviously, they can't be *identical* in syntax... the word 'lambda' is
SPELLED differently than the word 'def'.  The argument has been made
for code blocks in Python at times, but never (yet) convincingly enough
to persuade the BDFL.

|8. Can you undefine a function, value, class or unimport a module?

Yes.

|9. Syntax for arrays is also bad [a (b c d) e f] would be better
|   than [a, b(c,d), e, f]

Hmmm... was the OP attacked by a pride of commas as a child?  It's true
that the space bar is bigger on my keyboard than is the comma key... but
I don't find it all THAT hard to press ','.  Actually, the OP's example
would require some new syntax for tuples as well, since there's no way
of knowing whether '(b c d)' would be a function invocation or a tuple.

Of course other syntaxes are *possible*.  In fact, here's a quick
solution to everything s/he wants:

    % cp hugs python

Yours, Lulu...

--
mertz@  | The specter of free information is haunting the Net!  All the
gnosis  | powers of IP- and crypto-tyranny have entered into an unholy
..cx    | alliance...ideas have nothing to lose but their chains.  Unite
        | against "intellectual property" and anti-privacy regimes!
-------------------------------------------------------------------------

Reply mertz (174) 10/19/2003 8:21:01 PM

Joachim Durchholz wrote:
> Oh, you're trolling for an inter-language flame fest...
> well, anyway:
>
>> 3. no multimethods (why? Guido did not know Lisp, so he did not know
>>    about them) You now have to suffer from visitor patterns, etc. like
>>    lowly Java monkeys.
>
> Multimethods suck.

Multimethods are wonderful, and we're using them as part of the
implementation of pypy, the Python runtime coded in Python.  Sure, we
had to implement them, but that was a drop in the ocean in comparison
to the amount of other code in pypy as it stands, much less the amount
of code we want to add to it in the future.  See http://codespeak.net/
for more about pypy (including all of its code -- subversion makes it
available for download as well as for online browsing).

So, you're both wrong:-).

Alex

Reply aleax (648) 10/19/2003 9:20:38 PM

Joachim Durchholz wrote:
> Oh, you're trolling for an inter-language flame fest...
> well, anyway:
>
>> 3. no multimethods (why? Guido did not know Lisp, so he did not know
>>    about them) You now have to suffer from visitor patterns, etc. like
>>    lowly Java monkeys.
>
> Multimethods suck.

Do they suck more or less than the Visitor pattern?

> The longer answer: Multimethods have modularity issues (if whatever
> domain they're dispatching on can be extended by independent developers:
> different developers may extend the dispatch domain of a function in
> different directions, and leave undefined combinations; standard
> dispatch strategies as I've seen in some Lisps just cover up the
> undefined behaviour, with a slightly less than 50% chance of being
> correct).

So how do you implement an equality operator correctly with only single
dynamic dispatch?

Pascal

Reply costanza (1427) 10/19/2003 9:20:54 PM

"Frode Vatvedt Fjeld" <frodef@cs.uit.no> wrote in message
news:2hn0bxm8kf.fsf@vserver.cs.uit.no...

cc'ed in case you are not reading c.l.python, which I am limiting this
to.

> So to get from the ID to the bytecode, you go through a dictionary?
> And the mapping from name to ID happens perhaps when the caller is
> bytecode-compiled?

No.  In Python, all names are associated with objects in namespaces.
Lookup is done as needed at the appropriate runtime.  Function objects
are 1st class and are no different from any others in this respect.
The same goes for slots in collection objects being associated with
member objects.  The free online tutorial at www.python.org explains
Python basics like this.

Terry J. Reedy

Reply tjreedy (5184) 10/19/2003 9:41:18 PM

Joachim Durchholz wrote:
> Oh, you're trolling for an inter-language flame fest...
> well, anyway:
>
>> 3. no multimethods (why? Guido did not know Lisp, so he did not know
>>    about them) You now have to suffer from visitor patterns, etc. like
>>    lowly Java monkeys.
>
> Multimethods suck.
>
> The longer answer: Multimethods have modularity issues

Lisp consistently errs on the side of more expressive power.  The idea
of putting on a strait jacket while coding to protect us from ourselves
just seems batty.  Similarly, a recent ex-C++ journal editor recently
wrote that test-driven development now gives him the code QA peace of
mind he once sought from strong static typing.  An admitted former
static typing bigot, he finished by wondering aloud, "Will we all be
coding in Python ten years from now?"

kenny

--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey

Reply ktilton (2220) 10/19/2003 10:22:50 PM

Kenny Tilton wrote:
> Lisp consistently errs on the side of more expressive power. The idea
> of putting on a strait jacket while coding to protect us from
> ourselves just seems batty. Similarly, a recent ex-C++ journal editor
> recently wrote that test-driven development now gives him the code QA
> peace of mind he once sought from strong static typing. An admitted
> former static typing bigot, he finished by wondering aloud, "Will we
> all be coding in Python ten years from now?"

http://www.artima.com/weblogs/viewpost.jsp?thread=4639

--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey

Reply ktilton (2220) 10/19/2003 10:30:48 PM

Kenny Tilton wrote:
> Lisp consistently errs on the side of more expressive power. The idea
> of putting on a strait jacket while coding to protect us from
> ourselves just seems batty. Similarly, a recent ex-C++ journal editor
> recently wrote that test-driven development now gives him the code QA
> peace of mind he once sought from strong static typing.

C++ is not the best example of strong static typing.  It is a language
full of traps, which can't be detected by its type system.

> An admitted former static typing bigot, he finished by wondering
> aloud, "Will we all be coding in Python ten years from now?"
>
> kenny

Best regards,
Tom

--
..signature: Too many levels of symbolic links

Reply t.zielonka (53) 10/19/2003 10:37:46 PM

Frode Vatvedt Fjeld wrote:
> John Thingstad <john.thingstad@chello.no> writes:
>
>> [..] Functions are internally dealt with using dictionaries.

Rather, _names_ are dealt with that way (for globals; it's faster for
locals -- then, the compiler can turn the name into an index into the
table of locals' values), whether they're names of functions or names
of other values (Python doesn't separate those namespaces).

>> The bytecode compiler gives it an ID and the look up is done using a
>> dictionary.  Removing the function from the dictionary removes the
>> function.  (pythonese for hash-table)
>
> So to get from the ID to the bytecode, you go through a dictionary?

No; it's up to the implementation, but in CPython the id is the memory
address of the function object, so the bytecode's directly accessed
from there (well, there's a couple of indirections -- function object
to code object to code string -- nothing important).

> And the mapping from name to ID happens perhaps when the caller is
> bytecode-compiled?

No, it's a lookup.  Dict lookup for globals, fast (index in table)
lookup for locals (making locals much faster to access), but a lookup
anyway.  I've already posted about how psyco can optimize this, being
a specializing compiler, when it notices the dynamic possibilities are
not being used in a given case.

Alex

Reply aleax (648) 10/19/2003 10:39:41 PM

"Kenny Tilton" <ktilton@nyc.rr.com> wrote in message
news:_8Ekb.7543$pT1.318@twister.nyc.rr.com...
>
>
> Joachim Durchholz wrote:
>
> > Oh, you're trolling for an inter-language flame fest...
> > well, anyway:
> >
> >> 3. no multimethods (why? Guido did not know Lisp, so he did not know
> >>    about them) You now have to suffer from visitor patterns, etc. like
> >>     lowly Java monkeys.
> >
> >
> > Multimethods suck.
> >
> > The longer answer: Multimethods have modularity issues
>
> Lisp consistently errs on the side of more expressive power. The idea of
> putting on a strait jacket while coding to protect us from ourselves
> just seems batty. Similarly, a recent ex-C++ journal editor recently
> wrote that test-driven development now gives him the code QA peace of
> mind he once sought from strong static typing. An admitted former static
> typing bigot, he finished by wondering aloud, "Will we all be coding in
> Python ten years from now?"
>
> kenny
>

There was a nice example from one of the ILC 2003 talks about a European
Space Agency rocket exploding with a valuable payload. My understanding was
that there was testing, but maybe too much emphasis was placed on the static
type checking of the language used to control the rocket. The end result was
a run-time arithmetic overflow which the code interpreted as "rocket off
course". The rocket code instructions in this event were to self destruct.
It seems to me that the Agency would have fared better if they just used
Lisp - which has bignums - and relied more on regression suites and less on
the belief that static type checking systems would save the day.

I'd be interested in hearing more about this from someone who knows the
details.

-R. Scott McIntire
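[As a hypothetical illustration of the overflow being discussed -- not the actual Ariane code, which was Ada, and the `horizontal_bias` name is invented for the example -- the wrap-around behaviour of fixed-width integers that bignums avoid can be sketched with Python's ctypes:]

```python
import ctypes

# A value that fits easily in a Python/Lisp bignum silently wraps when
# forced into a signed 16-bit slot (Ada instead raises an exception).
horizontal_bias = 40000                        # fine as an unbounded integer
wrapped = ctypes.c_int16(horizontal_bias).value
print(horizontal_bias, "->", wrapped)          # the 16-bit view is negative
```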



"Scott McIntire" <mcintire_charlestown@comcast.net> wrote in message
news:MoEkb.821534YN5.832338@sccrnsc01... > There was a nice example from one of the ILC 2003 talks about a Europian > Space Agency rocket exploding with a valueable payload. My understanding was > that there was testing, but maybe too much emphasis was placed the static > type checking of the language used to control the rocket. The end result was > a run time arithmetic overflow which the code intepreted as "rocket off > course". The rocket code instructions in this event were to self destruct. > It seems to me that the Agency would have fared better if they just used > Lisp - which has bignums - and relied more on regression suites and less on > the belief that static type checking systems would save the day. > > I'd be interested in hearing more about this from someone who knows the > details. I believe you are referring to the first flight of the Ariane 5 (sp?). The report of the investigating commission is on the web somewhere and an interesting read. They identified about five distinct errors. Try google. Terry   0 Reply tjreedy (5184) 10/19/2003 11:27:52 PM Scott McIntire fed this fish to the penguins on Sunday 19 October 2003 15:39 pm: > > There was a nice example from one of the ILC 2003 talks about a > Europian Space Agency rocket exploding with a valueable payload. My > understanding was that there was testing, but maybe too much emphasis > was placed the static type checking of the language used to control > the rocket. The end result was a run time arithmetic overflow which > the code intepreted as "rocket off course". The rocket code > instructions in this event were to self destruct. It seems to me that > the Agency would have fared better if they just used Lisp - which has > bignums - and relied more on regression suites and less on the belief > that static type checking systems would save the day. > > I'd be interested in hearing more about this from someone who knows > the > details. > Just check the archives for comp.lang.ada and Ariane-5. 
Short version: The software performed correctly, to specification
(including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS
DESIGNED. The software was then dropped into the ARIANE 5 with NO REVIEW
of requirements.

Two things were different -- the A-5 was capable of more severe
maneuvering, AND apparently the A-5 launch sequence did not need this
code to run for some 40 seconds after ignition (something about the A-4
launch sequence allowed it to be aborted and restarted within that
40-second span, so the code had to keep up-to-date navigational fixes;
the A-5, OTOH, is in space by that point, with no post-ignition holds).

On the A-4, any values that extreme were a sign of critical malfunction,
and the software was to shut down. Which is what it did on the A-5. Of
course, the backup computer then saw the same "malfunction" and shut
down too... For the A-4, you wouldn't WANT the computer to try
processing values so far out of performance spec that the rocket had to
be tumbling out of control anyway.

The bean-counters apparently did not allow the folks with the A-5
requirements to examine the A-4 code for compliance, and the A-4 coders
obviously never knew about the A-5 performance specs.

LISP wouldn't have helped -- since the A-4 code was supposed to fail
with values that large... and would have done the same thing when
plugged into the A-5. (Or are you proposing that the A-4 code is
supposed to ignore a performance requirement?)
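The failure mode being argued over here -- an unguarded conversion from a
64-bit float to a 16-bit signed integer -- can be sketched in Python. The
function names and values below are hypothetical illustrations, not the
actual flight code (which was Ada):

```python
# Sketch of an Ariane-style conversion from a 64-bit float to a 16-bit
# signed integer. The unguarded version traps on out-of-range input,
# much like the Ada "Operand Error"; the guarded version saturates.
# All names and values here are invented for illustration.

INT16_MIN, INT16_MAX = -32768, 32767

def to_int16_unguarded(x):
    """Raise on out-of-range input, mimicking the unprotected conversion."""
    n = int(x)
    if not INT16_MIN <= n <= INT16_MAX:
        raise OverflowError("Operand Error: %r does not fit in 16 bits" % x)
    return n

def to_int16_guarded(x):
    """Clamp out-of-range input instead of raising."""
    return max(INT16_MIN, min(INT16_MAX, int(x)))

# A value inside the A-4 flight envelope converts fine either way:
assert to_int16_unguarded(12345.6) == 12345
# An A-5-sized value blows up unguarded, but the guarded version
# degrades gracefully:
assert to_int16_guarded(1.0e5) == INT16_MAX
```

The point of contention in the thread is precisely whether the unguarded
behavior was a specified requirement (Dennis's view) or an unhandled
exception (Kenny's view).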
--
 > ============================================================== <
 >   wlfraed@ix.netcom.com  | Wulfraed  Dennis Lee Bieber  KD6MOG <
 >      wulfraed@dm.net     |       Bestiaria Support Staff       <
 > ============================================================== <
 >        Bestiaria Home Page: http://www.beastie.dm.net/         <
 >            Home Page: http://www.dm.net/~wulfraed/             <

 0
Reply wlfraed (4456) 10/19/2003 11:31:12 PM

Dennis Lee Bieber <wlfraed@ix.netcom.com> writes:

> LISP wouldn't have helped -- since the A-4 code was supposed to fail
> with values that large... and would have done the same thing when
> plugged into the A-5. (Or are you proposing that the A-4 code is
> supposed to ignore a performance requirement?)

Or perhaps it would have helped, since the LISP sources would have
included a little expert system that would have asked itself: "Do I
really want to commit suicide now? Let's see, everything looks OK but
this old code from the A4... I guess it's got Alzheimer's, I'll ignore
it for now."

--
__Pascal_Bourguignon__    http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Lying for having sex or lying for making war? Trust US presidents :-(

 0
Reply spam173 (586) 10/20/2003 2:47:44 AM

Dennis Lee Bieber wrote:

> Scott McIntire fed this fish to the penguins on Sunday 19 October 2003
> 15:39 pm:
>
>> There was a nice example from one of the ILC 2003 talks about a
>> European Space Agency rocket exploding with a valuable payload. My
>> understanding was that there was testing, but maybe too much emphasis
>> was placed on the static type checking of the language used to
>> control the rocket. The end result was a run-time arithmetic overflow
>> which the code interpreted as "rocket off course". The rocket code
>> instructions in this event were to self destruct. It seems to me that
>> the Agency would have fared better if they had just used Lisp - which
>> has bignums - and relied more on regression suites and less on the
>> belief that static type checking systems would save the day.
>>
>> I'd be interested in hearing more about this from someone who knows
>> the details.
>
> Just check the archives for comp.lang.ada and Ariane-5.
>
> Short version: The software performed correctly, to specification
> (including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS
> DESIGNED.

Nonsense. From: http://www.sp.ph.ic.ac.uk/Cluster/report.html

"The internal SRI software exception was caused during execution of a
data conversion from 64-bit floating point to 16-bit signed integer
value. The floating point number which was converted had a value greater
than what could be represented by a 16-bit signed integer. This resulted
in an Operand Error. The data conversion instructions (in Ada code) were
not protected from causing an Operand Error, although other conversions
of comparable variables in the same place in the code were protected.
The error occurred in a part of the software that only performs
alignment of the strap-down inertial platform. This software module
computes meaningful results only before lift-off. As soon as the
launcher lifts off, this function serves no purpose."

> LISP wouldn't have helped -- since the A-4 code was supposed to fail
> with values that large... and would have done the same thing when
> plugged into the A-5. (Or are you proposing that the A-4 code is
> supposed to ignore a performance requirement?)

"Supposed to" fail? Chya. This was nothing more than an unhandled
exception crashing the system and its identical backup. Other
conversions were protected so they could handle things intelligently;
this bad boy went unguarded. Note also that the code's functionality was
pre-ignition only, so there is no way they were thinking that a cool way
to abort the flight would be to leave a program exception unhandled.

What happened (aside from an unnecessary chunk of code running,
increasing risk to no good end) is that the extra power of the A5 caused
oscillations greater than those seen in the A4.
Those greater oscillations took the 64-bit float beyond what would fit
in the 16-bit int. Kablam. Operand Error. This is not a system saying
"whoa, out of range, abort".

As for Lisp not helping:

> most-positive-fixnum       ;; constant provided by implementation
536870911

> (1+ most-positive-fixnum)  ;; overflow fixnum type and...
536870912

> (type-of (1+ most-positive-fixnum)) ;; ...auto bignum type
BIGNUM

> (round most-positive-single-float)  ;; or floor or ceiling
340282346638528859811704183484516925440
0.0

> (type-of *)
BIGNUM

kenny

--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey

 0
Reply ktilton (2220) 10/20/2003 3:49:57 AM

Alex Martelli <aleax@aleax.it> writes:

> Joachim Durchholz wrote:
>
> > Oh, you're trolling for an inter-language flame fest...
> > well, anyway:
> >
> >> 3. no multimethods (why? Guido did not know Lisp, so he did not know
> >> about them) You now have to suffer from visitor patterns, etc. like
> >> lowly Java monkeys.
> >
> > Multimethods suck.
>
> Multimethods are wonderful, and we're using them as part of the
> implementation of pypy, the Python runtime coded in Python. Sure,
> we had to implement them, but that was a drop in the ocean in
> comparison to the amount of other code in pypy as it stands, much
> less the amount of code we want to add to it in the future.

So do the Python masses get to use multimethods?

(with-lisp-trolling
  And have you seen the asymptote yet, or do you need to grow macros
  first?)

--
   /|_    .-----------------------.
 ,'  .\  /| No to Imperialist war |
,--'   _,'| Wage class war!       |
/      /  `-----------------------'
(   -. |
|     )|
(`-.  '--.)
 `. )----'

 0
Reply tfb3 (483) 10/20/2003 5:46:56 AM

mike420@ziplip.com wrote in message
news:<LVOAILABAJAFKMCPJ0F1IFP5N3JTNUL0EPMGKMDS@ziplip.com>...

> THE BAD:
>
> 1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
> 90% of the code is function applications. Why not make it convenient?
Python has been designed to attract non-programmers as well. Don't you
think f(x,y,z) resembles the mathematical notation of passing a function
some parameters, rather than "f x y z"?

> 5. Why do you need "def" ? In Haskell, you'd write
> square x = x * x

The reason is just to make it clearer that we're defining a function. I
wonder why you didn't complain about the colons at the beginning of each
block... Some syntax is there just to add readability. I suppose it
means nothing to you that Python is compared to executable pseudocode.
It means something to Pythonistas.

> 6. Requiring "return" is also dumb (see #5)

You really don't get any of this "explicit is better than implicit"
thing, do you? Requiring people to write "return", instead of leaving it
optional as in Ruby, is again one reason why Pythonistas *like* Python
instead of Ruby. You come to a Python group (and cross-post this
meaninglessly everywhere, even though it only concerns Pythonistas)
claiming that the features we like are dumb, and you wonder why people
think of you as a troll...

Anyway, as a conclusion, I believe you'd be much happier with Ruby than
with Python. It doesn't do this weird "statement vs expression"
business, it has optional return, it has optional parens with function
calls, and probably more of these things "fixed" that you consider
Python's downsides. You're trying to make Python into a language that
already exists, it seems, but for some reason Pythonistas are happy with
Python and not rapidly converting to Ruby or Haskell. Instead of trying
to tell us what we like (and failing at that, as you can see), maybe you
should try to think for a while about why we like Python.

By the way, have you already posted a similar message to comp.std.c++,
saying what they should change about C++ to make it more like Haskell or
Ruby?
I'd love to read it (it could be hilarious) ;)

 0
Reply hanzspam (49) 10/20/2003 7:22:13 AM

Kenny Tilton <ktilton@nyc.rr.com> writes:

> Dennis Lee Bieber wrote:
>
>> Just check the archives for comp.lang.ada and Ariane-5.
>>
>> Short version: The software performed correctly, to specification
>> (including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS
>> DESIGNED.
>
> Nonsense.

No, that is exactly right. Like the man said, read the archives for
comp.lang.ada.

> From: http://www.sp.ph.ic.ac.uk/Cluster/report.html
>
> "The internal SRI software exception was caused during execution of a
> data conversion from 64-bit floating point to 16-bit signed integer
> value. The floating point number which was converted had a value
> greater than what could be represented by a 16-bit signed integer.
> This resulted in an Operand Error. The data conversion instructions
> (in Ada code) were not protected from causing an Operand Error,
> although other conversions of comparable variables in the same place
> in the code were protected. The error occurred in a part of the
> software that only performs alignment of the strap-down inertial
> platform. This software module computes meaningful results only before
> lift-off. As soon as the launcher lifts off, this function serves no
> purpose."

That's all true, but it is only part of the story, and selectively
quoting just that part is misleading in this context. For a more
detailed answer, see
<http://www.google.com.au/groups?as_umsgid=359BFC60.446B%40lanl.gov>.

>> LISP wouldn't have helped -- since the A-4 code was supposed to fail
>> with values that large... and would have done the same thing when
>> plugged into the A-5. (Or are you proposing that the A-4 code is
>> supposed to ignore a performance requirement?)
>
> "Supposed to" fail? Chya. This was nothing more than an unhandled
> exception crashing the system and its identical backup. Other
> conversions were protected so they could handle things intelligently;
> this bad boy went unguarded.
The reason that it went unguarded is that the programmers DELIBERATELY
omitted an exception handler for it. The post at the URL quoted above
explains why.

--
Fergus Henderson <fjh@cs.mu.oz.au>  | "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.

 0
Reply fjh (268) 10/20/2003 7:38:00 AM

mike420@ziplip.com wrote in
news:LVOAILABAJAFKMCPJ0F1IFP5N3JTNUL0EPMGKMDS@ziplip.com:

> 1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
> 90% of the code is function applications. Why not make it convenient?

What syntax do you propose to use for f(x(y,z)), or f(x(y(z))), or
f(x,y(z)) or f(x(y),z) or f(x)(y)(z) or numerous other variants which
are not currently ambiguous?

--
Duncan Booth                                          duncan@rcp.co.uk
int month(char *p){return(124864/((p[0]+p[1]-p[2]&0x1f)+1)%12)["\5\x8\3"
"\6\7\xb\1\x9\xa\2\0\4"];} // Who said my code was obscure?

 0
Reply duncan1 (177) 10/20/2003 9:09:03 AM

Duncan Booth wrote:

> mike420@ziplip.com wrote in
> news:LVOAILABAJAFKMCPJ0F1IFP5N3JTNUL0EPMGKMDS@ziplip.com:
>
>> 1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
>> 90% of the code is function applications. Why not make it convenient?
>
> What syntax do you propose to use for f(x(y,z)), or f(x(y(z))), or
> f(x,y(z)) or f(x(y),z) or f(x)(y)(z) or numerous other variants which
> are not currently ambiguous?

Haskell has it easy -- f x y z is the same as ((f x) y) z -- as an N-ary
function is "conceptualized" as a unary function that returns an
(N-1)-ary function [as Haskell Curry conceptualized it -- which is why
the language is named Haskell, and the concept currying :-)]. So, your
5th case, f(x)(y)(z), would be exactly the same thing.

When you want to apply operators in other than their normal order of
priority, then and only then you must use parentheses; e.g.,
for your various cases they would be f (x y z) [1st case], f (x (y z))
[2nd case], f x (y z) [3rd case], f (x y) z [4th case]. You CAN, if you
wish, add redundant parentheses, of course, just like in Python [where
parentheses are overloaded to mean: function call, class inheritance,
function definition, empty tuples, tuples in list comprehensions, and
applying operators with specified priority -- I hope I recalled them
all ;-)].

Of course this will never happen in Python, as it would break all
backwards compatibility. And I doubt it could sensibly happen in any
"simil-Python" without adopting many other Haskell ideas, such as
implicit currying and nonstrictness. What "x = f" should mean in a
language with assignment, everything first-class, and implicit rather
than explicit calling is quite troublesome too.

Ruby allows some calls without parentheses, but the way it disambiguates
"f x y" between f(x(y)) and f(x, y) is, IMHO, pricey -- it has to KNOW
whether x is a method, and if it is, it won't just let you pass it as
such as an argument to f; that's the slippery slope whereby you end up
having to write x.call(y), because not just any object is callable.
"x = f" CALLS f if f is a method, so you can't just treat methods as
first-class citizens like any other... etc, etc... AND good Ruby texts
recommend AVOIDING "f x y" without parentheses anyway, because it's
ambiguous to a human reader even when it's clear to the compiler -- so
the benefit you get for that price is dubious indeed.

Alex

 0
Reply aleax (648) 10/20/2003 9:40:40 AM

Hannu Kankaanpää wrote:

> mike420@ziplip.com wrote in message
> news:<LVOAILABAJAFKMCPJ0F1IFP5N3JTNUL0EPMGKMDS@ziplip.com>...
>> THE BAD:
>>
>> 1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
>> 90% of the code is function applications. Why not make it convenient?
>
> Python has been designed to attract non-programmers as well.
> Don't you think f(x,y,z) resembles the mathematical notation of
> passing a function some parameters, instead of "f x y z"?

Yes -- which is exactly why many non-programmers would prefer the
parentheses-less notation -- with more obvious names, of course ;-).
E.g.:

    emitwarning URGENT "meltdown imminent!!!"

DOES look nicer to non-programmers than

    emitwarning(URGENT, "meltdown imminent!!!")

Indeed, such languages as Visual Basic and Ruby do allow calling without
parentheses, no doubt because of this "nice look" thing. However, as I
explained elsewhere, there are probably-insuperable language-design
problems in merging "implicit call" and first-classness of all names,
unless you basically go all the way following Haskell, with implicit
currying and non-strictness (and assignments should probably go away
too, else how do you distinguish between assigning to x a nullary
function f itself, and assigning to x the result of _calling_ f without
arguments...?). Not to mention:

    emitwarning URGENT highlight "meltdown imminent!!!"

where the need to disambiguate between highlight being the second of
three parameters to emitwarning, or a function called with the string as
its sole parameter and its RESULT being the second of two parameters to
emitwarning, is important for human readers (indeed some languages that
DO allow parentheses-less calls, such as Ruby, warn against actually
USING this possibility in all cases where ambiguity-to-human-readers may
result, such as the above -- the need to be very careful and selective
in actually using the capability makes me less and less willing to pay
any large price for it).

In other words, it's a language design tradeoff, like so many others --
one which I believe both Python and Haskell got just right for their
different audiences and semantics (I know VB _didn't_, and I suspend
judgment on Ruby -- maybe first-classness of all names isn't as
important as it FEELS to me, but...).

>> 6.
>> Requiring "return" is also dumb (see #5)
>
> You really don't get any of this "explicit is better than implicit"
> thing, do you? Requiring people to write "return" instead of
> leaving it as optional like in Ruby, is again one reason why
> Pythonistas *like* Python instead of Ruby.

I think that making return optional is slightly error-prone, but it DOES
make the language easier to learn for newbies -- newbies often err, in
Python, by writing such code as

    def double(x):
        x+x

which indicates the lack of 'return' IS more natural than its mandatory
presence. So, it's a tradeoff one could sensibly choose either way. Of
course, such cases as:

    def buhandclose(boh):
        try:
            boh.buh()
        finally:
            boh.close()

would give you a bad headache in trying to explain them to newbies
("hmyes, the result of buhandclose IS that of the last expression it
evaluates, BUT the one in the finally clause, although evaluated AFTER
boh.buh(), doesn't really count because..." [keeps handwaving copiously
& strenuously]). So, mandatory 'return' does make the language more
uniform, consistent, and easy to master, though not quite as easy to
"pick up casually in a semi-cooked manner". Still, I for one don't
condemn Ruby for making the opposite choice -- it IS a nicely balanced
issue, IMHO.

> Anyway, as a conclusion, I believe you'd be much happier with
> Ruby than with Python. It doesn't do this weird "statement vs
> expression" business, it has optional return, it has optional
> parens with function calls, and probably more of these things
> "fixed" that you consider Python's downsides. You're trying to

But it doesn't make higher-order functions as much of a no-brainer as
they are in Python, sigh.

> make Python into a language that already exists, it seems, but
> for some reason Pythonistas are happy with Python and not rapidly
> converting to Ruby or Haskell.
> Instead of trying to tell us

My own reasons for the choice of Python over Ruby are quite nuanced and
complicated, actually (those for either of them over Haskell have much
to do with pragmatism over purity :-). It boils down to my desire to
write application programs, often requiring the cooperation of
middling-sized groups of people, rather than experimental frameworks, or
programs written by a lone coder or a small group of highly-attuned
experts. I have the highest respect for Ruby -- it just doesn't match my
needs QUITE as well as Python does.

But, yes, if somebody doesn't really think about what kind of programs
they want to write, but rather focuses on syntax-sugar issues such as
return being optional or mandatory "per se", then it's definitely
worthwhile for that somebody to try Ruby and leave c.l.py in peace :-).

Alex

 0
Reply aleax (648) 10/20/2003 10:10:54 AM

Hannu Kankaanpää wrote:

> Anyway, as a conclusion, I believe you'd be much happier with
> Ruby than with Python. It doesn't do this weird "statement vs
> expression" business, it has optional return, it has optional
> parens with function calls, and probably more of these things
> "fixed" that you consider Python's downsides. You're trying to
> make Python into a language that already exists, it seems, but
> for some reason Pythonistas are happy with Python and not rapidly
> converting to Ruby or Haskell.

I wonder to what extent this statement is true. I know at least one Ruby
programmer who came from Python, but this spot check should not be
trusted, since I know only one Ruby programmer and only one former
Python programmer <g>. But I have heard that there are a lot of former
Python programmers in the Ruby community. I think it is safe to say that
of all the languages Python programmers migrate to, Ruby is the
strongest magnet. OTOH, the migration of this part of the Python
community to Ruby may have been completed already, of course.

Gerrit.

--
53.
If any one be too lazy to keep his dam in proper condition, and does not
so keep it; if then the dam break and all the fields be flooded, then
shall he in whose dam the break occurred be sold for money, and the
money shall replace the corn which he has caused to be ruined.
  -- 1780 BC, Hammurabi, Code of Law
--
Asperger Syndrome - a personal approach:
  http://people.nl.linux.org/~gerrit/
Stand up against this cabinet:
  http://www.sp.nl/

 0
Reply gerrit1 (293) 10/20/2003 10:41:56 AM

Marcin 'Qrczak' Kowalczyk wrote:

> On Sun, 19 Oct 2003 20:01:03 +0200, Joachim Durchholz wrote:
>
>> The longer answer: Multimethods have modularity issues (if whatever
>> domain they're dispatching on can be extended by independent
>> developers: different developers may extend the dispatch domain of a
>> function in different directions, and leave undefined combinations;
>
> This doesn't matter until you provide an equally powerful mechanism
> which fixes that. Which is it?

I don't think there is a satisfactory one. It's a fundamental problem:
if two people who don't know of each other can extend the same thing
(framework, base class, whatever) in different directions, who's
responsible for writing the code needed to combine these extensions?

Solutions that I have seen or thought about are:

1. Let the system decide. Technically feasible for base classes (in the
form of prioritisation rules for multimethods), technically infeasible
for frameworks. The problem here is that the system doesn't (usually)
have enough information to reliably make the correct decision.

2. Let the system declare an error if the glue code isn't there.
Effectively prohibits all forms of dynamic code loading. Can create
risks in project management (unexpected error messages during code
integration near a project deadline - yuck). Creates a temptation to
hack the glue code up, by people who don't know the details of the two
modules involved.

3. Disallow extending in multiple directions.
In other words, no multimethods, and live with the asymmetry. Too
restricted to be comfortable with.

4. As (3), but allow multiple extensions if they are contained within
the same module. I.e. allow multiple dispatch within an "arithmetics"
module that defines the classes Integer, Real, Complex, etc. etc., but
don't allow additional multiple dispatch outside the module. (Single
dispatch would, of course, be OK.)

5. As (3), but require manual intervention. IOW, let the two authors who
did the orthogonal extensions know about each other, and have each
module refer to the other, and each module carry the glue code required
to combine with the other. Actually, this is the practice for various
open source projects. For example, authors of MTAs, mail servers etc.
cooperate to set standards. Of course, if the authors aren't interested
in cooperating, this doesn't work well either.

6. Don't use dynamic dispatch, use parametric polymorphism (or whatever
your language offers for that purpose, be it "generics" or "templates").

Regards,
Jo

 0
Reply joachim.durchholz (563) 10/20/2003 11:06:08 AM

Pascal Costanza wrote:

> Joachim Durchholz wrote:
>
>> Oh, you're trolling for an inter-language flame fest...
>> well, anyway:
>>
>>> 3. no multimethods (why? Guido did not know Lisp, so he did not know
>>> about them) You now have to suffer from visitor patterns, etc. like
>>> lowly Java monkeys.
>>
>> Multimethods suck.
>
> Do they suck more or less than the Visitor pattern?

Well, the Visitor pattern is worse. Generics would be better, though.

> So how do you implement an equality operator correctly with only
> single dynamic dispatch?

Good question. In practice, you don't use dispatch, you use some
built-in mechanism. Even more in practice, all equality operators that I
have seen tended to compare more or less than one wanted to have
compared, at least for complicated types with large hidden internal
structures, or different equivalent internal structures.
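For comparison, Python's own compromise for equality under single
dispatch is worth noting: `__eq__` may return `NotImplemented`, in which
case the interpreter retries with the reflected operation on the other
operand. A minimal sketch (the Celsius/Fahrenheit classes are invented
for illustration):

```python
# Equality with only single dispatch, Python style: an __eq__ that does
# not recognize the other operand returns NotImplemented, and Python
# then tries the other operand's __eq__ with the arguments swapped.

class Celsius:
    def __init__(self, degrees):
        self.degrees = degrees
    def __eq__(self, other):
        if isinstance(other, Celsius):
            return self.degrees == other.degrees
        return NotImplemented  # let the other operand have a go

class Fahrenheit:
    def __init__(self, degrees):
        self.degrees = degrees
    def __eq__(self, other):
        if isinstance(other, Fahrenheit):
            return self.degrees == other.degrees
        if isinstance(other, Celsius):
            return self.degrees == other.degrees * 9 / 5 + 32
        return NotImplemented

# Fahrenheit knows about Celsius, so the comparison works in both
# orders -- the second line succeeds via the reflected __eq__:
assert Fahrenheit(212) == Celsius(100)
assert Celsius(100) == Fahrenheit(212)
```

This sidesteps double dispatch only as long as at least one of the two
classes knows about the other; two mutually ignorant classes still
compare as unequal by default.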
I have seen many cases where people implemented several equality
operators - of course, with different names, and in most cases, I'm
under the impression they weren't even aware that it was equality that
they were implementing :-)

Examples are:

Lisp, with its multitude of equality predicates, nicely exposes the
problems, and provides a solution.

Various string representations (7-bit ASCII, 8-bit ASCII, various
Unicode flavors). Do you want to compare representations or contents? Do
you need a code table to compare?

Various number representations: do you want to make 1 different from
1.0, or do you want to have them equal?

I think that dynamic dispatch is an interesting answer, but not to
equality :-)

Regards,
Jo

 0
Reply joachim.durchholz (563) 10/20/2003 11:13:45 AM

Kenny Tilton wrote:

> Dennis Lee Bieber wrote:
>
>> Short version: The software performed correctly, to specification
>> (including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS
>> DESIGNED.
>
> Nonsense. From: http://www.sp.ph.ic.ac.uk/Cluster/report.html
>
> "The internal SRI software exception was caused during execution of a
> data conversion from 64-bit floating point to 16-bit signed integer
> value. The floating point number which was converted had a value
> greater than what could be represented by a 16-bit signed integer.
> This resulted in an Operand Error. The data conversion instructions
> (in Ada code) were not protected from causing an Operand Error,
> although other conversions of comparable variables in the same place
> in the code were protected. The error occurred in a part of the
> software that only performs alignment of the strap-down inertial
> platform. This software module computes meaningful results only
> before lift-off. As soon as the launcher lifts off, this function
> serves no purpose."

That's the sequence of events that led to the crash.
Why this sequence could happen though it shouldn't have happened is
exactly as Dennis wrote it: the conversion caused an exception because
the Ariane 5 had a tilt angle beyond what the SRI was designed for.

> What happened (aside from an unnecessary chunk of code running,
> increasing risk to no good end) is that the extra power of the A5
> caused oscillations greater than those seen in the A4. Those greater
> oscillations took the 64-bit float beyond what would fit in the
> 16-bit int. Kablam. Operand Error. This is not a system saying "whoa,
> out of range, abort".
>
> As for Lisp not helping:
>
> > most-positive-fixnum       ;; constant provided by implementation
> 536870911
>
> > (1+ most-positive-fixnum)  ;; overflow fixnum type and...
> 536870912
>
> > (type-of (1+ most-positive-fixnum)) ;; ...auto bignum type
> BIGNUM
>
> > (round most-positive-single-float)  ;; or floor or ceiling
> 340282346638528859811704183484516925440
> 0.0
>
> > (type-of *)
> BIGNUM

Lisp might not have helped even in that case.

1. The SRI was designed for an angle that would have fit into a 16-bit
operand. If the exception hadn't been thrown, some hardware might still
have malfunctioned.

2. I'm pretty sure there's a reason (other than saving space) for that
conversion to 16 bits. I suspect it was to be fed into some hardware
register... in which case all the bignums in the world aren't going to
help.

Ariane 5 is mostly a lesson in management errors. Software methodology
might have helped, but just replacing the programming language would
have been insufficient (as usual - languages can make proper testing
easier or harder, but the trade-off will always be present).
Regards,
Jo

 0
Reply joachim.durchholz (563) 10/20/2003 11:22:08 AM

Followup-To: comp.lang.misc

On Mon, 20 Oct 2003 13:06:08 +0200, Joachim Durchholz wrote:

>>> The longer answer: Multimethods have modularity issues (if whatever
>>> domain they're dispatching on can be extended by independent
>>> developers: different developers may extend the dispatch domain of
>>> a function in different directions, and leave undefined
>>> combinations;
>>
>> This doesn't matter until you provide an equally powerful mechanism
>> which fixes that. Which is it?
>
> I don't think there is a satisfactory one. It's a fundamental problem:
> if two people who don't know of each other can extend the same thing
> (framework, base class, whatever) in different directions, who's
> responsible for writing the code needed to combine these extensions?

Indeed. I wouldn't thus blame the language mechanism.

> 1. Let the system decide. Technically feasible for base classes (in
> the form of prioritisation rules for multimethods), technically
> infeasible for frameworks. The problem here is that the system
> doesn't (usually) have enough information to reliably make the
> correct decision.

Sometimes the programmer can write enough default specializations that
it can be freely extended. Example: drawing shapes on devices. If every
shape is convertible to Bezier curves, and every device is capable of
drawing Bezier curves, then the most generic specialization, for an
arbitrary shape and an arbitrary device, will call 'draw' again with the
shape converted to Bezier curves.

The potential of multimethods is used: particular shapes have
specialized implementations for particular devices (drawing text is
usually better done more directly than through curves), and separate
modules can provide additional shapes and additional devices. Yet it is
safe and modular, as long as people agree who provides a particular
specialization.
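The shape/device scheme just described can be sketched in Python with a
toy multimethod registry. The registry and all class names below are
invented for illustration, not a real library:

```python
# Toy multimethod dispatch on (shape type, device type), with the
# generic fallback Marcin describes: if no specialization is registered,
# convert the shape to Bezier curves and dispatch again.

_draw_impls = {}

def register_draw(shape_type, device_type, fn):
    _draw_impls[(shape_type, device_type)] = fn

def draw(shape, device):
    fn = _draw_impls.get((type(shape), type(device)))
    if fn is not None:
        return fn(shape, device)
    # Default specialization: lower the shape to Bezier form and retry.
    return draw(shape.to_bezier(), device)

class Bezier:
    def to_bezier(self):
        return self

class Circle:
    def to_bezier(self):
        return Bezier()

class Plotter: pass
class Screen: pass

# Every device can draw Bezier curves -- the safety net:
register_draw(Bezier, Plotter, lambda s, d: "bezier on plotter")
register_draw(Bezier, Screen, lambda s, d: "bezier on screen")
# One specialized fast path for a particular (shape, device) pair:
register_draw(Circle, Screen, lambda s, d: "circle drawn natively on screen")

assert draw(Circle(), Screen()) == "circle drawn natively on screen"
assert draw(Circle(), Plotter()) == "bezier on plotter"   # via fallback
```

A new shape or device module only needs to supply its `to_bezier`
method or its Bezier handler to participate; specializations for
particular pairs are optional optimizations.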
It's easy to agree on a certain restriction: the specialization is
provided either by the module providing the shape or by the module
providing the device. In practice the restriction doesn't always have to
be followed - it's enough that the module providing the specialization
is known to all the people who might want to write their own, so I
wouldn't advocate enforcing the restriction at the language level.

I would favor multimethods even if they provided only solutions
extensible in one dimension, since they are nicer than having to
enumerate all cases in one place. Better to have a partially extensible
mechanism than nothing. Here it is extensible.

> 2. Let the system declare an error if the glue code isn't there.
> Effectively prohibits all forms of dynamic code loading. Can create
> risks in project management (unexpected error messages during code
> integration near a project deadline - yuck). Creates a temptation to
> hack the glue code up, by people who don't know the details of the
> two modules involved.

It would be interesting to let the system find the coverage of
multimethods, but without making it an error if not all combinations are
covered. It's useful to be able to test an incomplete program. There is
no definite answer for what kind of errors should prevent running the
program. It's similar to static/dynamic typing, or to being able to
compile calls to unimplemented functions or not.

Even if the system shows that all combinations are covered, it doesn't
imply that they do the right thing. It's analogous to failing to
override a method in class-based OOP - the system doesn't know if the
superclass implementation is appropriate for the subclass. So you can't
completely rely on detection of such errors anyway.

> 3. Disallow extending in multiple directions. In other words, no
> multimethods, and live with the asymmetry. Too restricted to be
> comfortable with.

I agree.

> 4. As (3), but allow multiple extensions if they are contained within
> the same module.
> I.e. allow multiple dispatch within an "arithmetics" module
> that defines the classes Integer, Real, Complex, etc. etc., but don't
> allow additional multiple dispatch outside the module. (Single dispatch
> would, of course, be OK.)

For me it's still too restricted. It's a useful guideline to follow but
it should not be a hard requirement.

> 5. As (3), but require manual intervention. IOW let the two authors who
> did the orthogonal extensions know about each other, and have each module
> refer to the other, and each module carry the glue code required to
> combine with the other.

The glue code might reside in yet another module, especially if each of
the modules makes sense without the other (so it might better not depend
on it). Again, for me it's just a guideline - if one of the modules can
ensure that it's composable with the other, it's a good idea to change
it - but I would like to be able to provide the glue code elsewhere to
make them work in my program which uses both, and remove it once the
modules include the glue code themselves.

> Actually, this is the practice for various open source projects. For
> example, authors of MTAs, mail servers etc. cooperate to set standards. Of
> course, if the authors aren't interested in cooperating, this doesn't work
> well either.

The modules might also be a part of one program, where it's relatively
easy to make them cooperate. Inability to cope with some uses is
generally not a sufficient reason to reject a language mechanism which
also has well working uses.

> 6. Don't use dynamic dispatch, use parametric polymorphism (or whatever
> your language offers for that purpose, be it "generics" or "templates").

I think it can rarely solve the same problem. C++ templates (which can
use overloaded operations, i.e. with implementation dependent on type
parameters) help only in statically resolvable cases. Fully parametric
polymorphism doesn't seem to help at all even in these cases (equality,
arithmetic).
-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/

 0
Reply Marcin 10/20/2003 12:46:16 PM

Gerrit Holl wrote:
>
> But I have heard that there are a
> lot of former Python programmers in the Ruby community. I think
> it is safe to say that of all languages Python programmers migrate
> to, Ruby is the strongest magnet. OTOH, the migration of this part
> of the Python community to Ruby may have been completed already,
> of course.

And also on the other hand, perhaps not enough time has yet passed for
us to see the migration of these fickle people *back* to Python. :-)

-Peter

 0
Reply peter34 (3696) 10/20/2003 1:25:17 PM

Marcin 'Qrczak' Kowalczyk wrote:

>> 1. Let the system decide. Technically feasible for base classes (in the
>> form of priorisation rules for multimethods), technically infeasible for
>> frameworks. The problem here is that the system doesn't (usually) have
>> enough information to reliably make the correct decision.
>
> Sometimes the programmer can write enough default specializations that it
> can be freely extended. Example: drawing shapes on devices. If every shape
> is convertible to Bezier curves, and every device is capable of drawing
> Bezier curves, then the most generic specialization, for arbitrary shape
> and arbitrary device, will call 'draw' again with the shape converted to
> Bezier curves.

But then you don't need multimethods: you have a method "to_bezier" that
dispatches over shapes, and a "draw_bezier" method that dispatches over
devices. The base class has a "draw" routine that does a

  draw_bezier(to_bezier(self))

(syntactic variations aside). No multiple dispatch needed anymore.

Actually, having a common base that covers all combinations is the
standard technique of making stuff extensible.
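A minimal Python sketch of this single-dispatch composition (RoundedRect and Screen are invented names, purely for illustration):

```python
# 'to_bezier' dispatches on the shape, 'draw_bezier' dispatches on the
# device, and 'draw' is just their composition: no multiple dispatch.

class Bezier:
    def __init__(self, curves):
        self.curves = curves     # number of curve segments, say

class Shape:
    def to_bezier(self):
        raise NotImplementedError

class RoundedRect(Shape):
    def to_bezier(self):
        return Bezier(curves=8)  # a rounded rectangle as 8 segments

class Device:
    def draw_bezier(self, bezier):
        raise NotImplementedError
    def draw(self, shape):
        # the common-base routine: every shape works on every device
        return self.draw_bezier(shape.to_bezier())

class Screen(Device):
    def draw_bezier(self, bezier):
        return "rasterized %d curves" % bezier.curves

Screen().draw(RoundedRect())     # -> "rasterized 8 curves"
```

Note the trade Marcin points out in his follow-up: this covers every combination, but there is no hook for a specialized shape-AND-device pair (a native rounded rectangle on one particular device) without falling back to double dispatch.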
> The potential of multimethods is used: particular shapes have specialized
> implementations for particular devices (drawing text is usually better
> done more directly than through curves), separate modules can provide
> additional shapes and additional devices. Yet it is safe and modular, as
> long as people agree who provides a particular specialization.

Using multimethods for optimization is the one borderline case where
they do indeed make sense.

> It's easy to agree with a certain restriction: the specialization is
> provided either by the module providing the shape or by module providing
> the device. In practice the restriction doesn't have to be always followed
> - it's enough that the module providing the specialization is known to all
> people who might want to write their own, so I wouldn't advocate enforcing
> the restriction on the language level.

Reconsider that option by replacing the author of one module with
"Microsoft", the other with "IBM"... or Richard M. Stallman and anybody
whom RMS had a public flamefest with ;-)

>> 2. Let the system declare an error if the glue code isn't there.
>> Effectively prohibits all forms of dynamic code loading. Can create risks
>> in project management (unexpected error messages during code integration
>> near a project deadline - yuck). Creates a temptation to hack the glue
>> code up, by people who don't know the details of the two modules involved.
>
> It would be interesting to let the system find the coverage of multimethods,
> but without making it an error if not all combinations are covered. It's
> useful to be able to test an incomplete program.

Well, right. But I'd want a full report on potential interactions before
releasing the thing. Partly because I don't want it to crash with a
run-time error, partly because that's a strong indicator of combinations
that were not tested.

> There is no definite answer for what kind of errors should prevent
> running the program.
> It's similar to static/dynamic typing, or being able to
> compile calls to unimplemented functions or not.

Agreed.

> Even if the system shows that all combinations are covered, it doesn't
> imply that they do the right thing. It's analogous to failing to override
> a method in class-based OOP - the system doesn't know if the superclass
> implementation is appropriate for the subclass. So you can't completely
> rely on detection of such errors anyway.

Right. That's one of the reasons why I'm a bit sceptical about dynamic
dispatch anyway.

>> 3. Disallow extending in multiple directions. In other words, no
>> multimethods, and live with the asymmetry. Too restricted to be
>> comfortable with.
>
> I agree.
>
>> 4. As (3), but allow multiple extensions if they are contained within the
>> same module. I.e. allow multiple dispatch within an "arithmetics" module
>> that defines the classes Integer, Real, Complex, etc. etc., but don't
>> allow additional multiple dispatch outside the module. (Single dispatch
>> would, of course, be OK.)
>
> For me it's still too restricted. It's a useful guideline to follow but
> it should not be a hard requirement.

That's reasonable. Anyway: hard requirements should be absent anyway, or
only applicable if the software goes to production. And the compiler
should give some measure on the count of things to be fixed, so that
project management can assess progress towards production state. (Of
course, such a number would be just one among several factors.)

>> 5. As (3), but require manual intervention. IOW let the two authors who
>> did the orthogonal extensions know about each other, and have each module
>> refer to the other, and each module carry the glue code required to
>> combine with the other.
>
> The glue code might reside in yet another module, especially if each of
> the modules makes sense without the other (so it might better not depend
> on it).

Agreed.
> Again, for me it's just a guideline - if one of the modules can
> ensure that it's composable with the other, it's a good idea to change it -
> but I would like to be able to provide the glue code elsewhere to make
> them working in my program which uses both, and remove it once the modules
> include the glue code themselves.

Good point.

>> 6. Don't use dynamic dispatch, use parametric polymorphism (or whatever
>> your language offers for that purpose, be it "generics" or "templates").
>
> I think it can rarely solve the same problem. C++ templates (which can
> use overloaded operations, i.e. with implementation dependent on type
> parameters) help only in statically resolvable cases. Fully parametric
> polymorphism doesn't seem to help at all even in these cases (equality,
> arithmetic).

Well, you don't need them. Equality is usually solved by extra-lingual
means (usually via bytewise representation comparison - the black-box
definition of equality is undecidable). For special forms of equality
(like: those abstracting from representation), data is either
transformed to normal form, or kept in normal form even internally.

Arithmetic is sufficiently general and well-known to work with special
language rules. It *could* be made user-definable by providing a
conversion framework. I.e. if the language allows the definition of
conversion hierarchies, you can have a single-dispatch method that
converts to the leftmost type in, say, the Complex-Real-Rational-Integer
chain, and for each of these types a single-dispatch method that does
the actual arithmetic.

Regards,
Jo

(I'm not reading comp.lang.misc, so I can't follow the public discussion
anymore.)

 0
Reply Joachim 10/20/2003 1:56:42 PM

Dnia Sun, 19 Oct 2003 04:18:31 -0700 (PDT), mike420@ziplip.com
napisał(a):

> THE GOOD: [...]
> THE BAD: [...]

Well, in the variety of languages and plenty of conceptions you can
search for your language of choice.
Just because all the things you mentioned in "THE BAD" are available in
other languages doesn't mean they should also exist in Python. Languages
are different, just as people are. If you find Python has more cons than
pros, it means that this is not a language from which you can take 100%
of the fun. Anyway, changing it into the next Haskell, Smalltalk or Ruby
makes no sense. Python fills a certain niche and it does its job as it
should. Differences are a necessity, so don't waste your time on talk
about making Python similar to something else.

-- 
[ Wojtek Walczak - gminick (at) underground.org.pl ]
[ <http://gminick.linuxsecurity.pl/> ]
[ "...rozmaite zwroty, matowe od patyny dawnosci." ]

 0
Reply gminick (12) 10/20/2003 2:25:21 PM

Tomasz Zielonka wrote:
   ...
> C++ is not the best example of strong static typing. It is a language

Agreed.

> full of traps, which can't be detected by its type system.

A little overbid IMHO. Still, Robert Martin is also highly experienced
with Java, which has fewer traps (but is still far from the best
example). He may not have production experience in ML or Haskell, but
few do (and experience of using something in real-world production use
is hard to replace -- "val piu la pratica della grammatica", "practice
is worth more than grammar", says an old Italian proverb...:-).

A hard situation to remedy: a professional consultant will not recommend
to a client a language he has no practical experience of yet, and he
ain't gonna get practical experience without using it in real-world
projects. Typical bootstrap problem. So a new language either sneaks in
by being appealing for some niche or some scripting-oid task (presumably
that's how Martin got experience in e.g. Python and Ruby), or it has
some great "historical" glow (such as Smalltalk or Lisp), or
multimillion marketing (Java)... how to give FP languages a chance to
get a fair try in real-world production applications isn't exactly
obvious to me:-(.
Alex

 0
Reply aleax (648) 10/20/2003 2:35:42 PM

[posted & mailed]

On Mon, 20 Oct 2003 15:56:42 +0200, Joachim Durchholz wrote:

> But then you don't need multimethods: you have a method "to_bezier" that
> dispatches over shapes, and a "draw_bezier" method that dispatches over
> devices.
> The base class has a "draw" routine that does a
>   draw_bezier(to_bezier(self))
> (syntactic variations aside). No multiple dispatch needed anymore.

In the base class of devices or shapes? In either case you can't
dispatch on the other.

> Using multimethods for optimization is the one borderline case where they
> do indeed make sense.

It's not the kind of optimization which gives the same result faster or
cheaper. The quality of the output is better if you use specialized
versions. Not only "quality" in the sense of visual quality, but also
e.g. when exporting to a vector graphics format which has the concept of
rectangles with rounded corners I prefer to export them as such instead
of as generic curves, to preserve information useful for further
editing. I would reserve the term "optimization" for variations with
equivalent result.

> Equality is usually solved by extra-lingual means (usually via bytewise
> representation comparison - the black-box definition of equality is
> undecidable). For special forms of equality (like: those abstracting from
> representation), data is either transformed to normal form, or kept in
> normal form even internally.

I would say: almost each language has its unique approach to equality,
with different warts. You describe the structural equality (see below).
Another group of languages use only user-defined equality.

The multimethod way is the cleanest I know among dynamically typed
languages, and Haskell/Clean/Mercury classes for statically typed ones.
In fact I rediscovered multimethods by porting Haskell classes into the
dynamically typed world.
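A hypothetical Python sketch of such multimethod-style equality, with a structural default for record-like types (the tiny dispatch table and the Cm/Inch classes are invented for illustration, not any real library):

```python
# Equality dispatched on the pair of argument types; registering one
# direction also registers the flipped one, so both arguments are
# treated symmetrically. Unregistered pairs fall back to a structural
# comparison: same type, then field by field.

_eq_table = {}

def defeq(t1, t2):
    def register(fn):
        _eq_table[(t1, t2)] = fn
        _eq_table[(t2, t1)] = lambda a, b: fn(b, a)  # symmetry for free
        return fn
    return register

def eq(a, b):
    fn = _eq_table.get((type(a), type(b)))
    if fn is not None:
        return fn(a, b)
    return type(a) is type(b) and vars(a) == vars(b)  # structural default

class Cm:
    def __init__(self, n): self.n = n

class Inch:
    def __init__(self, n): self.n = n

@defeq(Cm, Inch)     # objects of different types considered equal
def cm_eq_inch(cm, inch):
    return abs(cm.n - inch.n * 2.54) < 1e-9

eq(Cm(2.54), Inch(1))    # -> True, via the specialized method
eq(Inch(1), Cm(2.54))    # -> True, no manual type-check of the right arg
eq(Cm(1), Cm(1))         # -> True, via the structural default
```

This shows the two points at once: user-defined AND structural equality coexist, and neither argument is privileged the way the receiver is in single dispatch.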
How other languages do equality (by records I mean named product types
where equality should compare field by field after checking that
arguments are of the same kind of record):

- C++ overloading - no automatic generation of equality for records,
  otherwise similar to Haskell classes,
- Java single dispatch - doesn't statically typecheck the right argument
  despite the language being statically typed, must manually check the
  type of the right argument, must know all interesting types of the
  right argument if some objects of different types are to be considered
  equal, no automatic generation of equality for records,
- Smalltalk, Ruby, Python single dispatch - as in Java, but dynamically
  typed,
- SML equality types - only structural equality, no user-definable
  equivalence, in particular no equality on types containing functions
  even if a practical equality could be defined in terms of other fields,
- OCaml polymorphic equality - as in SML, but the compiler doesn't check
  statically whether the type supports equality,
- Prolog unification, Erlang built-in equality - only structural
  equality (there are no user-defined types).

In particular almost all languages support either only user-defined
equality or only builtin structural equality, but not both. Haskell's
deriving mechanism and sufficiently smart multimethod-based frameworks
support both. The single dispatch mechanism could support both (if all
records had a common superclass) but I haven't seen it in practice.

>> (I'm not reading comp.lang.misc, so I can't follow the public discussion
>> anymore.)

I'm sorry. It doesn't fit other groups it was originally crossposted to...

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/

 0
Reply Marcin 10/20/2003 3:06:50 PM

Pascal Costanza wrote:
   ...
> So how do you implement an equality operator correctly with only single
> dynamic dispatch?
Equality is easy, as it's commutative -- pseudocode for it might be:

  def operator==(a, b):
      try:
          return a.__eq__(b)
      except I_Have_No_Idea:
          try:
              return b.__eq__(a)
          except I_Have_No_Idea:
              return False

Non-commutative operators require a tad more, e.g. Python lets each type
define both an __add__ and a __radd__ (rightwise-add):

  def operator+(a, b):
      try:
          return a.__add__(b)
      except (I_Have_No_Idea, AttributeError):
          try:
              return b.__radd__(a)
          except (I_Have_No_Idea, AttributeError):
              raise TypeError, "can't add %r and %r" % (type(a), type(b))

Multimethods really shine in HARDER problems, e.g., when you have MORE
than just two operands (or, perhaps, some _very_ complicated inheritance
structure -- but in such cases, even multimethods are admittedly no
panacea). Python's pow(a, b, c) is an example -- and, indeed, Python
does NOT let you overload THAT (3-operand) version, only the two-operand
one that you can spell pow(a, b) or a**b.

Alex

 0
Reply aleax (648) 10/20/2003 3:15:34 PM

Marcin 'Qrczak' Kowalczyk wrote:
> [posted & mailed]
>
> On Mon, 20 Oct 2003 15:56:42 +0200, Joachim Durchholz wrote:
>
>> But then you don't need multimethods: you have a method "to_bezier" that
>> dispatches over shapes, and a "draw_bezier" method that dispatches over
>> devices.
>> The base class has a "draw" routine that does a
>>   draw_bezier(to_bezier(self))
>> (syntactic variations aside). No multiple dispatch needed anymore.
>
> In the base class of devices or shapes? In either case you can't dispatch
> on the other.

Actually, it's irrelevant in which class this code snippet runs, since
both to_bezier and draw_bezier could be public routines.

>> Using multimethods for optimization is the one borderline case where they
>> do indeed make sense.
>
> It's not the kind of optimization which gives the same result faster
> or cheaper. The quality of the output is better if you use specialized
> versions. Not only "quality" in the sense of visual quality, but also
> e.g.
> when exporting to a vector graphics format which has the concept of
> rectangles with rounded corners I prefer to export them as such instead
> of as generic curves, to preserve information useful for further editing.
> I would reserve the term "optimization" for variations with equivalent
> result.

Agreed.

>> Equality is usually solved by extra-lingual means (usually via bytewise
>> representation comparison - the black-box definition of equality is
>> undecidable). For special forms of equality (like: those abstracting from
>> representation), data is either transformed to normal form, or kept in
>> normal form even internally.
>
> I would say: almost each language has its unique approach to equality,
> with different warts. You describe the structural equality (see below).
> Another group of languages use only user-defined equality.

Agreed.

> The multimethod way is the cleanest I know among dynamically typed
> languages, and Haskell/Clean/Mercury classes for statically typed ones.
> In fact I rediscovered multimethods by porting Haskell classes into the
> dynamically typed world.
>
> How other languages do equality (by records I mean named product types
> where equality should compare field by field after checking that arguments
> are of the same kind of record):
> [...]
> - Java single dispatch - doesn't statically typecheck the right argument
>   despite the language being statically typed, must manually check the
>   type of the right argument, must know all interesting types of the right
>   argument if some objects of different types are to be considered equal,
>   no automatic generation of equality for records,

Right. Multiple dispatch translates the "method must know all
interesting types to the right side" to "developer must know all
interesting types to the right side". My point is, essentially, that
neither approach is satisfactory.

> In particular almost all languages support either only user-defined
> equality or only builtin structural equality, but not both.
Probably because you don't need structural equality if you have
user-defined equality.

Well, at least in theory - having to define equality even if "it's
obvious" (i.e. structural) means lots and lots of boilerplate code. Or
did I overlook/misinterpret something here?

>> (I'm not reading comp.lang.misc, so I can't follow the public discussion
>> anymore.)
>
> I'm sorry. It doesn't fit other groups it was originally crossposted to...

No need for apologies :-)

Regards,
Jo

(Permission granted to repost this answer to comp.lang.misc *g*)

 0
Reply Joachim 10/20/2003 3:26:10 PM

In comp.lang.functional Kenny Tilton <ktilton@nyc.rr.com> wrote:
> Dennis Lee Bieber wrote:
>> Short version: The software performed correctly, to specification
>> (including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS
>> DESIGNED.

> Nonsense. From: http://www.sp.ph.ic.ac.uk/Cluster/report.html

Dennis is right: it was indeed a specification problem. AFAIK, the coder
had actually even proved formally that the exception could not arise
with the spec of Ariane 4. Lisp code, too, can suddenly raise unexpected
exceptions.

The default behaviour of the system was to abort the mission for safety
reasons by blasting the rocket. This wasn't justified in this case, but
one is always more clever after the event...

> "supposed to" fail? chya.

Indeed. Values this extreme were considered impossible on Ariane 4 and
taken as indication of such a serious failure that it would justify
aborting the mission.

> This was nothing more than an unhandled exception crashing the sytem
> and its identical backup.

Depends on what you mean by "crash": it certainly didn't segfault. It
just realized that something happened that wasn't supposed to happen and
reacted AS REQUIRED.

> Other conversions were protected so they could handle things
> intelligently, this bad boy went unguarded.

Bad, indeed, but absolutely safe with regard to the spec of Ariane 4.
> Note also that the code functionality was pre-ignition
> only, so there is no way they were thinking that a cool way to abort the
> flight would be to leave a program exception unhandled.

This is a serious design error, not a problem of the programming
language.

> What happened (aside from an unnecessary chunk of code running
> increasing risk to no good end)

Again, it's a design error.

> is that the extra power of the A5 caused
> oscillations greater than those seen in the A4. Those greater
> oscillations took the 64-bit float beyond what would fit in the 16-bit
> int. kablam. Operand Error. This is not a system saying "whoa, out of
> range, abort".

Well, the system was indeed programmed to say "whoa, out of range,
abort". A design error.

> As for Lisp not helping:

There is basically no difference between checking the type of a value
dynamically for validity and catching exceptions that get raised on
violations of certain constraints. One can forget to do both, or react
to those events in a stupid way (or prove in both cases that the check /
exception handling is unnecessary given the spec).

Note that I am not defending Ada in any way or arguing against FPLs: in
fact, being an FPL advocate myself, I do think that FPLs (including
Lisp) have an edge where writing safe code is concerned. But the Ariane
example just doesn't support this claim. It was an absolutely horrible
management mistake to not check old code for compliance with the new
spec. End of story...

Regards,
Markus Mottl

-- 
Markus Mottl        http://www.oefai.at/~markus        markus@oefai.at

 0
Reply markus1 (31) 10/20/2003 3:46:47 PM

Fergus Henderson wrote:

> Kenny Tilton <ktilton@nyc.rr.com> writes:
>
>> Dennis Lee Bieber wrote:
>>
>>> Just check the archives for comp.lang.ada and Ariane-5.
>>>
>>> Short version: The software performed correctly, to specification
>>> (including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS
>>> DESIGNED.
>>
>> Nonsense.
>
> No, that is exactly right.
> Like the man said, read the archives for comp.lang.ada.

Yep, I was wrong. They /did/ handle the overflow by leaving the
operation unguarded, trusting it to eventually bring down the system,
their design goal. Apologies to Dennis.

>> From: http://www.sp.ph.ic.ac.uk/Cluster/report.html
>>
>> "The internal SRI software exception was caused during execution of a
>> data conversion from 64-bit floating point to 16-bit signed integer
>> value. The floating point number which was converted had a value greater
>> than what could be represented by a 16-bit signed integer. This resulted
>> in an Operand Error. The data conversion instructions (in Ada code) were
>> not protected from causing an Operand Error, although other conversions
>> of comparable variables in the same place in the code were protected.
>> The error occurred in a part of the software that only performs
>> alignment of the strap-down inertial platform. This software module
>> computes meaningful results only before lift-off. As soon as the
>> launcher lifts off, this function serves no purpose."
>
> That's all true, but it is only part of the story, and selectively quoting
> just that part is misleading in this context.

I quoted the entire paragraph and it seemed conclusive, so I did not
read the rest of the report. ie, I was not being selective, I just
assumed no one would consider crashing to be a form of error-handling.
My mistake, they did.

Well, the original question was, "Would Lisp have helped?". Let's see.
They dutifully went looking for overflowable conversions and decided
what to do with each, deciding in this case to do something appropriate
for the A4 which was inappropriately allowed by management to go into
the A5 unexamined.

In Lisp, well, there are two cases. Did they have to dump a number into
a 16-bit hardware channel? There was some reason for the conversion. If
not, no Operand Error arises.
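To make the guarded-conversion point concrete, here is an illustrative sketch, in Python rather than the actual Ada or Lisp, of a 64-bit value squeezed into a signed 16-bit range with the out-of-range case handled explicitly instead of left to a fatal Operand Error (the function names and the saturating policy are invented for the example):

```python
# A 64-bit float converted to a signed 16-bit integer, with the
# out-of-range condition raised and handled deliberately rather than
# crashing the whole system.

INT16_MIN, INT16_MAX = -2**15, 2**15 - 1

class PreIgnitionRangeError(Exception):
    """Signalled when a pre-ignition value won't fit in 16 bits."""

def to_int16(value):
    n = int(value)
    if not INT16_MIN <= n <= INT16_MAX:
        raise PreIgnitionRangeError(value)
    return n

def horizontal_bias(v):
    # One possible policy: saturate instead of aborting the flight.
    try:
        return to_int16(v)
    except PreIgnitionRangeError:
        return INT16_MAX

horizontal_bias(12345.7)    # -> 12345, in range
horizontal_bias(1e6)        # -> 32767, clamped rather than fatal
```

Whether clamping, logging, or shutting down is the right reaction is exactly the design decision under discussion; the sketch only shows that the reaction becomes an explicit choice.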
It is an open question whether they decide to check anyway for large
values and abort if found, but this one arose only during a sweep of all
such conversions, so probably not.

But suppose they did have to dance to the 16-bit tune of some hardware
blackbox. They would go thru the same reasoning and decide to shut down
the system. No advantage to Lisp. But they'd have to do some work to
bring the system down, because there would be no overflow. So:

  (define-condition e-hardware-broken (e-pre-ignition e-fatal)
    ((component-id :initarg :component-id :reader component-id)
     (bad-value :initarg :bad-value :initform nil :reader bad-value)
     ...etc etc...

And then they would have to kick it off, and the exception handler of
the controlling logic would get a look at the condition on the way out.
Of course, it also sees operand errors, so one can only hope that at
some point during testing they for some reason had /some/ condition of
type e-pre-ignition get trapped by the in-flight supervisor, at which
point someone would have said either throw it away or why is that module
still running?

Or, if they were as meticulous with their handlers as they were with
numeric conversions, they would have, during the inventory of explicit
conditions to handle, gotten to the pre-ignition module conditions and
decided, "what does that software (which should not even be running)
know about the hardware that the rest of the system does not know?".

The case is not so strong now, but the odds are still better with Lisp.

kenny

-- 
http://tilton-technology.com

What?! You are a newbie and you haven't answered my:
http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey

 0
Reply ktilton (2220) 10/20/2003 4:09:43 PM

Thomas F. Burdick wrote:
   ...
> So do the Python masses get to use multimethods?

Sure! Check out http://codespeak.net/ : pypy is aggressively
open-source, and both the masses and the elites get to download and
reuse all they want.
> (with-lisp-trolling
>  And have you seen the asymptote yet, or do you need to grow macros
>  first?)

We felt absolutely no need to tweak Python's syntax in the least in
order to implement multi-methods, so, no need for macros (including
Armin Rigo, who, I think, does have extensive experience using CL).

"The asymptote" of pypy is Python -- an implementation more flexible
than the current C and Java ones, giving better optimization (Armin is
convinced he can easily surpass his own psyco, that way), ease of
fine-grained subsetting (building tiny runtimes for cellphones &c), and
also, no doubt, ease of play and experimentation (oops, we'd better say
"Research", it sounds way more dignified, doesn't it!). Macros are
definitely not part of our current plans.

But, hey, this is just a summary: visit http://codespeak.net/ and see
for yourself -- everything is spelled out in great detail, we have no
secrets. Get a subversion client and download everything, check out all
of the mailing lists' archives -- have a ball. Anybody who wants to play
along is welcome to join any of our "sprints" for a week or so of
nearly-nonstop heavy duty pair-programming -- "nearly" because we
generally manage to schedule a barbecue, picnic, beer-bash, or other
suchlike outing (and a lot of fruitful design discussion takes place
during that scheduled break, in my observation). Between sprints,
mailing lists, wikis, IRC and the like keep the fires going.

Indeed, the social aspects of the pypy experience manage to be almost
more fascinating than the technical ones, which IS saying something (and
reinforces my beliefs about programming being first and foremost an
issue of social interaction, but that's another thread:-).

Ah, yeah, one sad thing for non-Europeans -- pypy's very much a European
thing -- everybody's welcome, but you'll have a hard time convincing us
to schedule a sprint elsewhere (each participant pays his or her own
travel costs, you see...).
Still, codespeak.net does give free access to all material anyway,
wherever you are:-).

[ducking back out of c.l.lisp...:-)]

Alex

 0
Reply aleax (648) 10/20/2003 4:12:31 PM

On Mon, 20 Oct 2003 17:26:10 +0200, Joachim Durchholz wrote:

>> In the base class of devices or shapes? In either case you can't
>> dispatch on the other.
>
> Actually, it's irrelevant in which class this code snippet runs, since
> both to_bezier and draw_bezier could be public routines.

But how do you make the specialized drawing call depend on both shapes
and devices? Perhaps by double dispatch, if you are happy with a
separate name for each specialization to shape (or to each device) and
with changes to many classes required in order to extend in one of the
directions...

> Probably because you don't need structural equality if you have
> user-defined equality.
> Well, at least in theory - having to define equality even if "it's
> obvious" (i.e. structural) means lots and lots of boilerplate code. Or did
> I overlook/misinterpret something here?

That's the point - it can be done manually (that's probably why those
languages aren't in a hurry to add that) but it's just tedious.

It's yet worse for <, especially with multiple constructors. If you
don't want quadratic size of code, the only solution I know is to first
convert constructors to numbers, compare them, and if they are equal
then invoke the constructor-specific version. You might want structural
< for finding unique values in a set with repetitions (by sort & uniq)
or using these values as keys of functional dictionaries (implemented by
trees).

Haskell's deriving of Eq and Ord handles that, but it's not extensible
to other operations with a sensible structural default, e.g. hashing and
serialization. This time a single dispatch suffices but there remains
the problem with both user-defined and structural interpretations.
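A Python sketch of that constructor-numbering scheme for structural < (the Leaf/Node variant types and the `_tag` convention are invented for the example):

```python
# Each constructor carries a number; ordering compares the numbers
# first, and only for equal constructors falls through to the fields,
# recursively. This keeps the code linear in the number of
# constructors instead of quadratic.

class Leaf:
    _tag = 0
    def __init__(self, x):
        self.x = x
    def _fields(self):
        return (self.x,)

class Node:
    _tag = 1
    def __init__(self, left, right):
        self.left, self.right = left, right
    def _fields(self):
        return (self.left, self.right)

def sort_key(v):
    # (constructor number, recursively keyed fields)
    return (v._tag,
            tuple(sort_key(f) if hasattr(f, "_tag") else f
                  for f in v._fields()))

def less(a, b):
    return sort_key(a) < sort_key(b)

less(Leaf(1), Leaf(2))                    # same constructor: by fields
less(Leaf(9), Node(Leaf(1), Leaf(1)))     # different: by tag, 0 < 1
```

Such an order is suitable for the uses mentioned above (sort & uniq, tree-based functional dictionaries), where any consistent total order will do.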
It's easier to make such structural definitions of custom operations in
dynamically typed languages; introspection in statically typed languages
is more ugly. The combination of dynamic typing with multimethods and
simple introspection gives all the mentioned properties:

a) both user-defined and structural definitions,
b) arbitrary user-defined operations,
c) symmetric treatment of arguments of symmetric binary operations.

I'm not a fan of elaborate introspection mechanisms; they usually break
encapsulation, i.e. not privacy but the freedom to choose data
representation without affecting clients. My little language has a
generic function for determining the names of the fields of a record.
The syntax for making record types generates the specialization of that
function automatically. It also adds a supertype which says it's a
record type, and similarly for singleton types, which makes it possible
to write generic implementations of generic functions which apply only
to record types. Thus you get equality and hashing of "algebraic types"
(records and singletons) for free, with the ability to define them
manually for any type as well.

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/

 0
Reply Marcin 10/20/2003 4:26:26 PM

Alex Martelli <aleax@aleax.it> wrote in message
news:<OwOkb.19485e5.710958@news1.tin.it>...
> Yes -- which is exactly why many non-programmers would prefer the
> parentheses-less notation -- with more obvious names of course;-).
> E.g.:
> emitwarning URGENT "meltdown imminent!!!"
> DOES look nicer to non-programmers than
> emitwarning(URGENT, "meltdown imminent!!!")

It depends on the background of the non-programmer. I'd
say most non-programmers who turn into programmers have at
least some math experience, so they won't be scared to type
1 + 2 instead of "give me the answer to one plus two, thank you".
The latter group we can always guide to COBOL ;) (if my
understanding of that language is correct). And the former
group should be familiar with the function notation.

Perhaps, despite Guido's urge for "programming for everyone",
Python has been designed with such a group in mind that has at
least some hope of becoming programmers ;)

> > You really don't get any of this "explicit is better than implicit"
> > thing, do you? Requiring people to write "return" instead of
> > leaving it as optional like in Ruby, is again one reason why
> > Pythonistas *like* Python instead of Ruby. You come to
>
> I think that making return optional is slightly error-prone,
> but it DOES make the language easier to learn for newbies --
> newbies often err, in Python, by writing such code as
>     def double(x): x+x
> which indicates the lack of 'return' IS more natural than its
> mandatory presence.

You're right. That definition of double is closer to what
programming newbies probably have learned in math, than one
with "return". But that's not the point I was arguing really.
It was that Pythonistas prefer the explicit "return" and don't
want it to be changed -- So it's silly to present it as one of
Python's flaws.

Well ok, that was a pretty bold claim with no extensive
studies to back it up, and even contradicts my previously
expressed need to be compatible with math. So sure, it's
a tradeoff, but unlike the 'no-parens' syntax, an explicit
'return' doesn't hurt the notation as comprehensively as
the lack of parens in function calls does (such as making
higher-order functions less intuitive to use).

Actually my preference is to either always require return when
there's something to return, or never allow return. Making it
optional just leads to less uniformity. And disallowing it
entirely in an imperative language wouldn't be such a wise
move either.

 0
Reply hanzspam (49) 10/20/2003 4:47:43 PM
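Alex's point about newbies forgetting "return" is easy to demonstrate: in Python, a function body that merely evaluates an expression returns None. A minimal sketch (the function names here are mine, for illustration):

```python
def double_wrong(x):
    x + x           # the value is computed, then silently discarded

def double(x):
    return x + x    # Python requires an explicit return

print(double_wrong(21))   # None -- the classic newbie surprise
print(double(21))         # 42
```

The first definition is syntactically valid, which is exactly why the mistake goes unnoticed until the None turns up somewhere downstream.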

Fergus Henderson <fjh@cs.mu.oz.au> writes:

The post at that url writes  about the culture of the Ariane team, but
I would say  that it's even a more fundamental  problem of our culture
in general: we build brittle  stuff with very little margin for error.
Granted, it would  be costly to increase physical  margin, but in this
case, adopting a point of  view more like _robotics_ could help.  Even
in case of hardware failure, there's  no reason to shut down the mind;
just go on with what you have.

--
__Pascal_Bourguignon__
http://www.informatimago.com/
Lying for having sex or lying for making war?  Trust US presidents :-(

 0
Reply spam173 (586) 10/20/2003 5:03:10 PM


Markus Mottl wrote:

> In comp.lang.functional Kenny Tilton <ktilton@nyc.rr.com> wrote:
>
>>Dennis Lee Bieber wrote:
>>
>>>        Short version: The software performed correctly, to specification
>>>(including the failure mode) -- ON THE ARIANE 4 FOR WHICH IT WAS
>>>DESIGNED.
>
>
>>Nonsense. From: http://www.sp.ph.ic.ac.uk/Cluster/report.html
>
>
> Dennis is right: it was indeed a specification problem. AFAIK, the coder
> had actually even proved formally that the exception could not arise
> with the spec of Ariana 4. Lisp code, too, can suddenly raise unexpected
> exceptions. The default behaviour of the system was to abort the mission
> for safety reasons by blasting the rocket. This wasn't justified in this
> case, but one is always more clever after the event...
>
>
>>"supposed to" fail? chya.
>
>
> Indeed. Values this extreme were considered impossible on Ariane 4 and
> taken as indication of such a serious failure that it would justify
> aborting the mission.

Yes, I have acknowledged in another post that I was completely wrong in
my guesswork: everything was intentional and signed-off on by many.

A small side-note: as I now understand things, the idea was not to abort
the mission, but to bring down the system. The thinking was that the
error would signify a hardware failure, and with any luck shutting down
would mean either loss of the backup system (if that was where the HW
fault occurred) or correctly falling back on the still-functioning
backup system if the supposed HW fault had been in the primary unit. ie,
an HW fault would likely be isolated to one unit.

kenny

--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:


 0
Reply ktilton (2220) 10/20/2003 5:04:46 PM

In article <OAUkb.12491$pT1.1778@twister.nyc.rr.com>, Kenny Tilton
<ktilton@nyc.rr.com> wrote:

[Discussing the Ariane failure]

> A small side-note: as I now understand things, the idea was not to abort
> the mission, but to bring down the system. The thinking was that the
> error would signify a hardware failure, and with any luck shutting down
> would mean either loss of the backup system (if that was where the HW
> fault occurred) or correctly falling back on the still-functioning
> backup system if the supposed HW fault had been in the primary unit. ie,
> an HW fault would likely be isolated to one unit.

That's right. This is why hardware folks spend a lot of time thinking
about common mode failures, and why software folks could learn a thing
or two from the hardware folks in this regard.

E.

 0
Reply 10/20/2003 5:17:24 PM

On 20 Oct 2003 19:03:10 +0200, Pascal Bourguignon
<spam@thalassa.informatimago.com> wrote:

>Even in case of hardware failure, there's no reason to shut down the
>mind; just go on with what you have.

When the thing that failed is a very large rocket having a very large
momentum, and containing a very large amount of very volatile fuel, it
makes sense to give up and shut down in the safest possible way.

Also keep in mind that this was a "can't possibly happen" failure
scenario. If you've deemed that it is something that can't possibly
happen, you are necessarily admitting that you have no idea how to
respond in a meaningful way if it somehow does happen.

-Steve

 0
Reply see94 (37) 10/20/2003 5:42:22 PM

"Markus Mottl" <markus@oefai.at> wrote in message
news:bn1017$m39$1@bird.wu-wien.ac.at...

> Note that I am not defending ADA in any way or arguing against FPLs: in
> fact, being an FPL-advocate myself I do think that FPLs (including Lisp)
> have an edge what concerns writing safe code. But the Ariane-example just
> doesn't support this claim.
> It was an absolutely horrible management mistake to not check old code
> for compliance with the new spec. End of story...

The investigating commission reported about 5 errors that, in series,
allowed the disaster. As I remember, another nonprogrammer/language one
was in mockup testing. The particular black box, known to be 'good',
was not included, but just simulated according to its expected
behavior. If it had been included, and a flight simulated in real time
with appropriate tilting and shaking, it should probably have given the
spurious abort message that it did in the real flight.

TJR

 0
Reply tjreedy (5184) 10/20/2003 5:55:21 PM

Gerrit Holl wrote:

> Hannu Kankaanpää wrote:
>> Anyway, as a conclusion, I believe you'd be much happier with
>> Ruby than with Python. It doesn't do this weird "statement vs
>> expression" business, it has optional return, it has optional
>> parens with function calls, and probably more of these things
>> "fixed" that you consider Python's downsides. You're trying to
>> make Python into a language that already exists, it seems, but
>> for some reason Pythonistas are happy with Python and not rapidly
>> converting to Ruby or Haskell.
>
> I wonder to what extent this statement is true. I know at least
> 1 Ruby programmer who came from Python, but this spot check should
> not be trusted, since I know only 1 Ruby programmer and only 1
> former Python programmer <g>. But I have heard that there are a
> lot of former Python programmers in the Ruby community. I think
> it is safe to say that of all languages Python programmers migrate
> to, Ruby is the strongest magnet. OTOH, the migration of this part
> of the Python community to Ruby may have been completed already,
> of course.

Python and Ruby are IMHO very close, thus "compete" for roughly the
same "ecological niche".
I still don't have enough actual experience in "production" Ruby code
to be able to say for sure, but my impression so far is that -- while
no doubt there's a LOT of things for which they're going to be equally
good -- Python's simplicity and uniformity help with application
development for larger groups of programmers, while Ruby's extreme
dynamism and more variegated style may be strengths for
experimentation, or projects with one, or few and very well-attuned and
experienced, developers.

I keep coming back to Python (e.g. because I have no gmpy in Ruby for
my own pet personal projects...:-) but I do mean to devote more of my
proverbial "copious spare time" to Ruby explorations (e.g., porting
gmpy, otherwise it's unlikely I'll ever get all that much combinatorial
arithmetics done...;-).

Alex

 0
Reply aleax (648) 10/20/2003 6:19:49 PM

Pascal Bourguignon wrote:

> The post at that url writes about the culture of the Ariane team, but
> I would say that it's even a more fundamental problem of our culture
> in general: we build brittle stuff with very little margin for error.
> Granted, it would be costly to increase physical margin,

Which is exactly why the margin is kept as small as possible.
Occasionally, it will be /too/ small.

Anybody seen a car model series, every one working perfectly from the
first one? From what I read, every new model has its small quirks and
"near-perfect" gotchas. The difference is just that you're not allowed
to do that in expensive things like rockets (which is, among many other
things, one of the reasons why space vehicles and aircraft are so d*mn
expensive: if something goes wrong, you can't just drive them on the
nearest parking lot and wait for maintenance and repair...)

> but in this
> case, adopting a point of view more like _robotics_ could help. Even
> in case of hardware failure, there's no reason to shut down the mind;
> just go on with what you have.
As Steve wrote, letting a rocket carry on regardless isn't a good idea
in the general case: it would be a major disaster if it made it to the
next coast and crashed into the next town. Heck, it would be enough if
the fuel tanks leaked, and the whole fuel rained down on a ship
somewhere in the Atlantic - most rocket fuels are toxic.

Regards,
Jo

 0
Reply joachim.durchholz (563) 10/20/2003 6:28:04 PM

Steve Schafer <see@reply.to.header> writes:

> On 20 Oct 2003 19:03:10 +0200, Pascal Bourguignon
> <spam@thalassa.informatimago.com> wrote:
>
> >Even in case of hardware failure, there's no reason to shut down the
> >mind; just go on with what you have.
>
> When the thing that failed is a very large rocket having a very large
> momentum, and containing a very large amount of very volatile fuel, it
> makes sense to give up and shut down in the safest possible way.

You have to define a "dangerous" situation. Remember that this "safest
possible way" is usually to blow the rocket up. AFAIK, while this
parameter was out of range, there was no instability and the rocket was
not uncontrollable.

> Also keep in mind that this was a "can't possibly happen" failure
> scenario. If you've deemed that it is something that can't possibly
> happen, you are necessarily admitting that you have no idea how to
> respond in a meaningful way if it somehow does happen.

My point. This "can't possibly happen" failure did happen, so clearly
it was not a "can't possibly happen" physically, which means that the
problem was with the software. We know it, but what I'm saying is that
a smarter software could have deduced it on the fly.

We all agree that it would be better to have a perfect world and
perfect, bug-free, software.
But since that's not the case, I'm saying that instead of having
software that behaves like simple unix C tools, where as soon as there
is an unexpected situation, it calls perror() and exit(), it would be
better to have smarter software that can try and handle UNEXPECTED
error situations, including its own bugs. I would feel safer in an AI
rocket.

--
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Lying for having sex or lying for making war? Trust US presidents :-(

 0
Reply spam173 (586) 10/20/2003 8:08:30 PM

> THE GOOD:
> THE BAD:
>
> 1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
> 90% of the code is function applictions. Why not make it convenient?
>
> 9. Syntax for arrays is also bad [a (b c d) e f] would be better
> than [a, b(c,d), e, f]

Agreed with your analysis, except for these two items.

#1 is a matter of opinion, but in general:

- f(x,y) is the standard set by mathematical notation and all the
mainstream programming language families, and is library neutral:
calling a curried function is f(x)(y), while calling an uncurried
function is f(x,y).

- "f x y" is unique to the Haskell and LISP families of languages, and
implies that most library functions are curried. Otherwise you have a
weird asymmetry between curried calls "f x y" and uncurried calls which
translate back to "f(x,y)". Widespread use of currying can lead to
weird error messages when calling functions of many parameters: a
missing third parameter in a call like f(x,y) is easy to report, while
with curried notation, "f x y" is still valid, yet results in a type
other than what you were expecting, moving the error up the AST to a
less useful place.

I think #9 is inconsistent with #1.

In general, I'm wary of notations like "f x" that use whitespace as an
operator (see http://www.research.att.com/~bs/whitespace98.pdf).
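Tim's curried-vs-uncurried point can be illustrated even in Python, where currying must be spelled with nested functions (the names `add3` and `add3c` are mine, for illustration only):

```python
# Uncurried: all arguments at once; a wrong argument count fails
# immediately, right at the call site.
def add3(x, y, z):
    return x + y + z

# Curried: one argument at a time. add3c(1)(2) is a perfectly valid
# expression whose value is a function, so the "missing argument"
# surfaces only later, when that value is used as if it were a number.
def add3c(x):
    return lambda y: lambda z: x + y + z

print(add3(1, 2, 3))      # 6
print(add3c(1)(2)(3))     # 6
partial = add3c(1)(2)     # no error here -- just a function of z
```

With `add3(1, 2)` Python raises a TypeError on the spot, whereas `add3c(1)(2)` quietly yields a function, deferring the complaint to wherever the result is consumed: the dynamic analogue of the error "moving up the AST".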
 0
Reply tim9925 (19) 10/20/2003 8:52:14 PM

On Mon, 20 Oct 2003 13:52:14 -0700, Tim Sweeney wrote:

> - "f x y" is unique to the Haskell and LISP families of languages, and
> implies that most library functions are curried.

No, Lisp doesn't curry. It really writes "(f x y)", which is different
from "((f x) y)" (which is actually Scheme, not Lisp).

In fact the syntax "f x y" without mandatory parens fits non-lispish
non-curried syntaxes too. The space doesn't have to be left- or
right-associative; it just binds all arguments at once, and this
expression is different both from "f (x y)" and "(f x) y".

The only glitch is that you have to express application to 0 arguments
somehow. If you use "f()", you can't use "()" as an expression (for
empty tuple for example). But when you accept it, it works. It's my
favorite function application syntax.

--
 __("<     Marcin Kowalczyk
 \__/    qrczak@knm.org.pl
  ^^     http://qrnik.knm.org.pl/~qrczak/

 0
Reply qrczak (1265) 10/20/2003 9:35:55 PM

Alex Martelli wrote:
> Tomasz Zielonka wrote:
> ...
>> C++ is not the best example of strong static typing. It is a language
>
> Agreed.
>
>> full of traps, which can't be detected by its type system.
>
> A little overbid IMHO. Still, Robert Martin is also highly experienced
> with Java, which has fewer traps (but is still far from the best example).

I think http://tinyurl.com/rnnq can be relevant in this context.

> He may not have production experience in ML or Haskell, but, few do (and
> experience of using something in real-world production use is hard to
> replace -- "val piu la pratica della grammatica", says an old Italian
> proverb...:-). A hard situation to remedy: a professional consultant
> will not recommend to a client a language he has yet no practical
> experience of, and he ain't gonna get practical experience without
> using it in real-world projects. Typical bootstrap problem.
> So a new language either sneaks in by being appealing for some niche or
> some scripting-oid task (presumably that's how Martin got experience
> in e.g. Python and Ruby), or it has some great "historical" glow
> (such as Smalltalk or Lisp), or multimillion marketing (Java)... how
> to give FP languages a chance to get a fair try in real-world production
> applications isn't exactly obvious to me:-(.

Yes, that's the real problem.

> Alex

Best regards,
Tom

--
..signature: Too many levels of symbolic links

 0
Reply t.zielonka (53) 10/20/2003 9:55:23 PM

Pascal Bourguignon:
> We all agree that it would be better to have a perfect world and
> perfect, bug-free, software. But since that's not the case, I'm
> saying that instead of having software that behaves like simple unix C
> tools, where as soon as there is an unexpected situation, it calls
> perror() and exit(), it would be better to have smarter software that
> can try and handle UNEXPECTED error situations, including its own
> bugs. I would feel safer in an AI rocket.

Since it was written in Ada and not C, and since it properly raised an
exception at that point (as originally designed), which wasn't caught
at a recoverable point, ending up in the default "better blow up than
kill people" handler ... what would your AI rocket have done with that
exception? How does it decide that an UNEXPECTED error situation can be
recovered? How would you implement it? How would you test it? (Note
that the above software wasn't tested under realistic conditions; I
assume in part because of cost.)

I agree it would be better to have software which can do that. I have
no good idea of how that's done. (And bear in mind that my XEmacs
session dies about once a year, eg, once when NFS was acting flaky
underneath it and a couple times because it couldn't handle something X
threw at it.
;) The best examples of resilient architectures I've seen come from
genetic algorithms and other sorts of feedback training; eg,
subsumptive architectures for robotics and evolvable hardware. There
was a great article in CACM on programming an FPGA via GAs, in 1998/'99
(link, anyone?). It worked quite well (as I recall) but pointed out the
hard part about this approach is that it's hard to understand, and the
result used various defects on the chip (part of the circuit wasn't
used but the chip wouldn't work without it) which makes the result
harder to mass produce.

Andrew
dalke@dalkescientific.com

 0
Reply adalke (604) 10/20/2003 10:07:31 PM

On 20 Oct 2003 22:08:30 +0200, Pascal Bourguignon
<spam@thalassa.informatimago.com> wrote:

>AFAIK, while this parameter was out of range, there was no instability
>and the rocket was not uncontrolable.

That's perfectly true, but also perfectly irrelevant. When your
carefully designed software has just told you that your rocket, which,
you may recall, is traveling at several thousand metres per second, has
just entered a "can't possibly happen" state, you don't exactly have a
lot of time in which to analyze all of the conflicting information and
decide which to trust and which not to trust. Whether that sort of
decision-making is done by engineers on the ground or by human pilots
or by some as yet undesigned intelligent flight control system, the
answer is the same: Do the safe thing first, and then try to figure out
what happened.

All well-posed problems have boundary conditions, and the solutions to
those problems are bounded as well. No matter what the problem or its
means of solution, a boundary is there, and if you somehow cross that
boundary, you're toast. In particular, the difficulty with AI systems
is that while they can certainly enlarge the boundary, they also tend
to make it fuzzier and less predictable, which means that testing
becomes much less reliable.
There are numerous examples where human operators have done the
"sensible" thing, with catastrophic consequences.

>My point.

Well, actually, no. I assure you that my point is very different from
yours.

>This "can't possibly happen" failure did happen, so clearly it was not
>a "can't possibly happen" physically, which means that the problem was
>with the software.

No, it still was a "can't possibly happen" scenario, from the point of
view of the designed solution. And there was nothing wrong with the
software. The difficulty arose because the solution for one problem was
applied to a different problem (i.e., the boundary was crossed).

>it would be better to have smarter software that can try and handle
>UNEXPECTED error situations

I think you're failing to grasp the enormity of the concept of "can't
possibly happen." There's a big difference between merely "unexpected"
and "can't possibly happen." "Unexpected" most often means that you
haven't sufficiently analyzed the situation. "Can't possibly happen,"
on the other hand, means that you've analyzed the situation and
determined that the scenario is outside the realm of physical or
logical possibility. There is simply no meaningful means of recovery
from a "can't possibly happen" scenario. No matter how smart your
software is, there will be "can't possibly happen" scenarios outside
the boundary, and your software is going to have to shut down.

>I would feel safer in an AI rocket.

What frightens me most is that I know that there are engineers working
on safety-critical systems that feel the same way. By all means, make
your flight control system as sophisticated and intelligent as you
want, but don't forget to include a simple, reliable, dumber-than-dirt
ejection system that "can't possibly fail" when the "can't possibly
happen" scenario happens.
Let me try to summarize the philosophical differences here: First of
all, I wholeheartedly agree that a more sophisticated software system
_may_ have prevented the destruction of the rocket. Even so, I think
the likelihood of that is rather small. (For some insight into why I
think so, you might want to take a look at Henry Petroski's _To
Engineer is Human_.)

Where we differ is how much impact we believe that more sophisticated
software would have on the problem. I get the impression that you
believe that an AI-based system would drastically reduce (perhaps even
eliminate?) the "can't possibly happen" scenario. I, on the other hand,
believe that even the most sophisticated system enlarges the boundary
of the solution space by only a very small amount--the area occupied by
"can't possibly happen" scenarios remains far greater than that
occupied by "software works correctly and saves the rocket" scenarios.

-Steve

 0
Reply see94 (37) 10/21/2003 12:27:34 AM

On Mon, Oct 20, 2003 at 01:52:14PM -0700, Tim Sweeney wrote:

> > 1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
> > 90% of the code is function applictions. Why not make it convenient?
> >
> > 9. Syntax for arrays is also bad [a (b c d) e f] would be better
> > than [a, b(c,d), e, f]

> #1 is a matter of opinion, but in general:
>
> - f(x,y) is the standard set by mathematical notation and all the
> mainstream programming language families, and is library neutral:
> calling a curried function is f(x)(y), while calling an uncurried
> function is f(x,y).

And lambda notation is: \xy.yx or something like that. Math notation is
rather ad-hoc, designed for shorthand scribbling on paper, and in
general a bad idea to imitate for programming languages which are
written on the computer in an ASCII editor (which is one thing which
bothers me about ML and Haskell).

> - "f x y" is unique to the Haskell and LISP families of languages, and
> implies that most library functions are curried.
> Otherwise you have a
> weird asymmetry between curried calls "f x y" and uncurried calls
> which translate back to "f(x,y)".

Here's an "aha" moment for you: In Haskell and ML, the two biggest
languages with built-in syntactic support for currying, there is also a
datatype called a tuple (which is a record with positional fields). All
functions, in fact, only take a single argument. The trick is that the
syntax for tuples and the syntax for currying combine to form the
syntax for function calling:

  f (x, y, z) ==> calling f with a tuple (x, y, z)
  f x (y, z)  ==> calling f with x, and then calling the result with (y, z).

This, I think, is a win for a functional language. However, in a
not-so-functionally-oriented language such as Lisp, this gets in the
way of flexible parameter-list parsing, and doesn't provide that much
value. In Lisp, a form's meaning is determined by its first element,
hence (f x y) has a meaning determined by F (whether it is a macro, or
functionally bound), and Lisp permits such things as "optional",
"keyword" (a.k.a. by name) arguments, and ways to obtain the arguments
as a list. "f x y", to Lisp, is just three separate forms (all
symbols).

> Widespread use of currying can lead
> to weird error messages when calling functions of many parameters: a
> missing third parameter in a call like f(x,y) is easy to report, while
> with curried notation, "f x y" is still valid, yet results in a type
> other than what you were expecting, moving the error up the AST to a
> less useful place.

Nah, it should still be able to report the line number correctly.
Though I freely admit that the error messages spat out of compilers
like SML/NJ are not so wonderful.

> I think #9 is inconsistent with #1.

I think that if the parser recognizes that it is directly within a [ ]
form, it can figure out that these are not function calls but rather
elements, though it would require that function calls be wrapped in
( )'s now.
And the grammar would be made much more complicated, I think.
Personally, I prefer (list a (b c d) e f).

> In general, I'm wary of notations like "f x" that use whitespace as an
> operator (see http://www.research.att.com/~bs/whitespace98.pdf).

Hmm, rather curious paper. I never really thought of "f x" using
whitespace as an operator--it's a delimiter in the strict sense. The
grammar of ML and Haskell define that consecutive expressions form a
function application. Lisp certainly uses whitespace as a simple
delimiter. I'm not a big fan of required commas because it gets
annoying when you are editing large tables or function calls with many
parameters. The behavior of Emacs's C-M-t or M-t is not terribly good
with extraneous characters like those, though it does try.

--
; Matthew Danish <mdanish@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."

 0
Reply mdanish (271) 10/21/2003 1:30:26 AM

Matthew Danish <mdanish@andrew.cmu.edu> writes:

> On Mon, Oct 20, 2003 at 01:52:14PM -0700, Tim Sweeney wrote:
>
> > In general, I'm wary of notations like "f x" that use whitespace as an
> > operator (see http://www.research.att.com/~bs/whitespace98.pdf).
>
> Hmm, rather curious paper. I never really though of "f x" using
> whitespace as an operator--it's a delimiter in the strict sense. The
> grammar of ML and Haskell define that consecutive expressions form a
> function application. Lisp certainly uses whitespace as a simple
> delimiter. I'm not a big fan of required commas because it gets
> annoying when you are editting large tables or function calls with many
> parameters. The behavior of Emacs's C-M-t or M-t is not terribly good
> with extraneous characters like those, though it does try.
It's true that (f x y) and "f x y" don't use whitespace as an operator;
however, I attempted something sneaky once, trying to get lisp used via
a custom reader that did use whitespace as an operator (for the record,
it worked until someone figured out what was going on, then they were
pissed, for no rational reason). Its real use used all domain-specific
functions, but some example code that you can read with SNEAKY:READ :

  let (list list 1, 2, 3;;
       times 3)
  {
    dotimes (x, times)
    {
      format (t, "x is ~S", x);
      print list;
    }
  }

It's all s-expressions, but they look like:

  f x, y, z;
or
  f (x, y, z);
or
  (sexp, sexp, sexp ...)
or
  f x, y, {sexp; sexp; ...}
or
  f x {sexp; sexp; ...}

It can look remarkably non-lispy, but once one catches on that it's
just a lot of ways of expressing where lists start and end, one can
figure out what's happening pretty quickly.

--
/|_ .-----------------------. ,' .\ / | No to Imperialist war | ,--' _,' | Wage class war! | / / -----------------------' ( -. | | ) | (-. '--.) . )----'

 0
Reply tfb3 (483) 10/21/2003 2:11:48 AM

> > In general, I'm wary of notations like "f x" that use whitespace as an
> > operator (see http://www.research.att.com/~bs/whitespace98.pdf).

> Hmm, rather curious paper. I never really though of "f x" using
> whitespace as an operator--it's a delimiter in the strict sense. The
> grammar of ML and Haskell define that consecutive expressions form a
> function application. Lisp certainly uses whitespace as a simple
> delimiter...

Did you read the cited paper *all the way to the end*?

-Mike

 0
Reply Mike2226 (460) 10/21/2003 2:27:49 AM

"Andrew Dalke" <adalke@mindspring.com> writes:

> Pascal Bourguignon:
> > We all agree that it would be better to have a perfect world and
> > perfect, bug-free, software.
> > But since that's not the case, I'm
> > saying that instead of having software that behaves like simple unix C
> > tools, where as soon as there is an unexpected situation, it calls
> > perror() and exit(), it would be better to have smarter software that
> > can try and handle UNEXPECTED error situations, including its own
> > bugs. I would feel safer in an AI rocket.
>
> Since it was written in Ada and not C, and since it properly raised
> an exception at that point (as originally designed), which wasn't
> caught at a recoverable point, ending up in the default "better blow
> up than kill people" handler ... what would your AI rocket have
> done with that exception? How does it decide that an UNEXPECTED
> error situation can be recovered?

By having a view at the big picture! The blow up action would be
activated only when the big picture shows that the AI has no control of
the rocket and that it is going down.

> How would you implement it?

Like any AI.

> How would you test it? (Note that the above software wasn't
> tested under realistic conditions; I assume in part because of cost.)

In a simulator. In any case, the point is to have a software that is
able to handle even unexpected failures.

> I agree it would be better to have software which can do that.
> I have no good idea of how that's done. (And bear in mind that
> my XEmacs session dies about once a year, eg, once when NFS
> was acting flaky underneath it and a couple times because it
> couldn't handle something X threw at it. ;)

XEmacs is not AI.

> The best examples of resilent architectures I've seen come from
> genetic algorithms and other sorts of feedback training; eg,
> subsumptive architectures for robotics and evolvable hardware.
> There was a great article in CACM on programming an FPGA
> via GAs, in 1998/'99 (link, anyone?).
> It worked quite well (as
> I recall) but pointed out the hard part about this approach is
> that it's hard to understand, and the result used various defects
> on the chip (part of the circuit wasn't used but the chip wouldn't
> work without it) which makes the result harder to mass produce.
>
> Andrew
> dalke@dalkescientific.com

In any case, you're right, the main problem may be that it was
specified to blow up when an unhandled exception was raised...

--
__Pascal_Bourguignon__
http://www.informatimago.com/
Do not adjust your mind, there is a fault in reality.
Lying for having sex or lying for making war? Trust US presidents :-(

 0
Reply spam173 (586) 10/21/2003 3:56:11 AM

Me:
> > How would you test it? (Note that the above software wasn't
> > tested under realistic conditions; I assume in part because of cost.)

Pascal Bourguignon:
> In a simulator. In any case, the point is to have a software that is
> able to handle even unexpected failures.

Like I said, the existing code was not tested in a simulator. Why do
you think some AI code *would* be tested for this same case? (Actually,
I believe that an AI would need to be trained in a simulator, just like
humans, but that it would require so much testing as to preclude its
use, for now, in rocket control systems.)

Nor have you given any sort of guideline on how to implement this sort
of AI in the first place. Without it, you've just restated the dream of
many people over the last few centuries. It's a dream I would like to
see happen, which is why I agreed with you.

> > couldn't handle something X threw at it. ;)
> XEmacs is not AI

Yup, which is why the smiley is there. You said that C was not the
language to use (cf your perror/exit comment) and implied that Ada
wasn't either, so I assumed you had a more resilient programming
language in mind. My response was to point out that Emacs Lisp also
crashes (rarely) given unexpected errors and so imply that Lisp is not
the answer.
Truely I believe that programming languages as we know them are not the (direct) solution, hence my pointers to evolvable hardware and similar techniques. Even then, we still have a long way to go before they can be used to control a rocket. They require a lot of training (just like people) and software simulators just won't cut it. The first "AI"s will replace those things we find simple and commonplace [*] (because our brain evolved to handle it), and not hard and rare. Andrew dalke@dalkescientific.com [*] In thinking of some examples, I remembered a passage in on of Cordwainer Smith's stories. In them, dogs, cats, eagles, cows, and many other animals were artifically endowed with intelligence and a human-like shape. Turtles were bred for tasks which required long patience. For example, one turtle was assigned the task of standing by a door in case there was trouble, which he did for 100 years, without complaint.   0 Reply adalke (604) 10/21/2003 4:41:07 AM Andrew Dalke fed this fish to the penguins on Monday 20 October 2003 21:41 pm: > For example, one turtle was assigned the task of standing > by a door in case there was trouble, which he did for > 100 years, without complaint. > I do hope he was allowed time-out for the occassional lettuce leaf or other veggies... <G> -- > ============================================================== < > wlfraed@ix.netcom.com | Wulfraed Dennis Lee Bieber KD6MOG < > wulfraed@dm.net | Bestiaria Support Staff < > ============================================================== < > Bestiaria Home Page: http://www.beastie.dm.net/ < > Home Page: http://www.dm.net/~wulfraed/ <   0 Reply wlfraed (4456) 10/21/2003 7:31:15 AM On Mon, Oct 20, 2003 at 07:27:49PM -0700, Michael Geary wrote: > > > In general, I'm wary of notations like "f x" that use whitespace as an > > > operator (see http://www.research.att.com/~bs/whitespace98.pdf). > > > Hmm, rather curious paper. 
I never really though of "f x" using > > whitespace as an operator--it's a delimiter in the strict sense. The > > grammar of ML and Haskell define that consecutive expressions form a > > function application. Lisp certainly uses whitespace as a simple > > delimiter... > > Did you read the cited paper *all the way to the end*? Why bother? It says "April 1" in the Abstract, and got boring about 2 paragraphs later. I should have scare-quoted "operator" above, or rather the lack of one, which is interpreted as meaning function application. -- ; Matthew Danish <mdanish@andrew.cmu.edu> ; OpenPGP public key: C24B6010 on keyring.debian.org ; Signed or encrypted mail welcome. ; "There is no dark side of the moon really; matter of fact, it's all dark."   0 Reply mdanish (271) 10/21/2003 7:42:55 AM Alex Martelli <aleax@aleax.it> wrote in news:OwOkb.19485$e5.710958@news1.tin.it:

> Yes -- which is exactly why many non-programmers would prefer the
> parentheses-less notation -- with more obvious names of course;-).
> E.g.:
> emitwarning URGENT "meltdown imminent!!!"
> DOES look nicer to non-programmers than
> emitwarning(URGENT, "meltdown imminent!!!")
>
> Indeed, such languages as Visual Basic and Ruby do allow calling
> without parentheses, no doubt because of this "nice look" thing.

I know we are agreed that Visual Basic is fundamentally broken, but it
might be worth pointing out the massive trap that it provides for
programmers in the subtle difference between:

someProcedure x

and

someProcedure(x)

and

call someProcedure(x)

If 'someProcedure' is a procedure taking a single reference parameter, and
modifying that parameter, then the first and third forms will call the
procedure and modify 'x'. The second form on the other hand will call the
procedure and without any warning or error will simply discard the
modifications leaving 'x' unchanged.

--
Duncan Booth                                             duncan@rcp.co.uk
int month(char *p){return(124864/((p[0]+p[1]-p[2]&0x1f)+1)%12)["\5\x8\3"
"\6\7\xb\1\x9\xa\2\0\4"];} // Who said my code was obscure?

 0
Reply duncan1 (177) 10/21/2003 8:24:09 AM
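For contrast, the same call in Python cannot exhibit this trap: the parentheses are part of the call syntax itself, so there is no parenthesized variant that silently copies its argument, and a mutation made by the callee is never discarded. A minimal sketch (the procedure name is hypothetical, echoing Duncan's example):

```python
def some_procedure(items):
    # Mutates the object the caller passed in -- a stand-in for
    # VB6's by-reference someProcedure.
    items.append("modified")

x = []
some_procedure(x)   # the only call syntax Python has
print(x)            # ['modified'] -- the change is always visible
```

Whether the effect reaches the caller depends only on whether the argument object is mutable, never on how the call is spelled.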

Duncan Booth wrote:
...
>> Indeed, such languages as Visual Basic and Ruby do allow calling
>> without parentheses, no doubt because of this "nice look" thing.
>
> I know we are agreed that Visual Basic is fundamentally broken, but it
> might be worth pointing out the massive trap that it provides for

I'm not sure, but I think that's one of the many VB details changed
(mostly for the better, but still, _massive_ incompatibility) in the
current version (VB7 aka VB.NET) wrt older ones (VB6, VBA, etc).

Alex


 0
Reply aleax (648) 10/21/2003 10:24:54 AM

Pascal Bourguignon wrote:
> AFAIK, while this parameter was out of range, there was no
> instability and the rocket was not uncontrollable.

Actually, the rocket had started correcting its orientation according to
the bogus data, which resulted in uncontrollable turning. The rocket
would have broken into parts in an uncontrollable manner, so it was
blown up.
(The human operator decided to press the emergency self-destruct button
seconds before the control software would have initiated self destruct.)

> My point. This "can't possibly happen" failure did happen, so
> clearly it was not a "can't possibly happen" physically, which means
> that the problem was with the software. We know it, but what I'm
> saying is that a smarter software could have deduced it on the fly.

No. The smartest software will not save you from human error. It was a
specification error.
The only way to detect this error (apart from more testing) would have
been to model the physics of the rocket, in software, and either verify
the flight control software against the rocket model or to test run the
whole thing in software. (I guess neither of these options would have
been cheaper than the simple test runs that were deliberately omitted,
probably on the grounds of "we /know/ it works, it worked in the Ariane 4".)

> We all agree that it would be better to have a perfect world
> and perfect, bug-free, software. But since that's not the case,
> I'm saying that instead of having software that behaves like simple
> unix C tools, where as soon as there is an unexpected situation,
> it calls perror() and exit(), it would be better to have smarter
> software that can try and handle UNEXPECTED error situations,
> including its own bugs. I would feel safer in an AI rocket.

This all may be true, but you're solving problems that didn't cause the
Ariane crash.

Regards,
Jo


 0
Reply joachim.durchholz (563) 10/21/2003 10:31:54 AM
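The failure mode under discussion - an out-of-range conversion raising an exception that nothing catches - can be sketched in a few lines of Python. The variable name and values are illustrative only; `struct.error` is the exception the standard `struct` module actually raises when a value doesn't fit the requested fixed-width format:

```python
import struct

horizontal_bias = 40000.0   # illustrative value, outside 16-bit signed range

# The unguarded conversion raises, much like the Ariane operand error:
try:
    struct.pack(">h", int(horizontal_bias))   # ">h" = 16-bit signed int
    caught = False
except struct.error:
    caught = True   # in flight, this propagated to the default handler

# A guarded version saturates instead of letting the exception escape:
clamped = max(-32768, min(32767, int(horizontal_bias)))
packed = struct.pack(">h", clamped)           # succeeds
print(caught, clamped)
```

Of course, as Jo points out, clamping silently is its own specification decision - the sketch only shows that the choice between "raise and abort" and "handle and continue" is made by the code around the conversion, not by the conversion itself.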

Alex Martelli <aleax@aleax.it> wrote in
news:WP7lb.322453$R32.10677047@news2.tin.it:

> Duncan Booth wrote:
> ...
>>> Indeed, such languages as Visual Basic and Ruby do allow calling
>>> without parentheses, no doubt because of this "nice look" thing.
>>
>> I know we are agreed that Visual Basic is fundamentally broken, but it
>> might be worth pointing out the massive trap that it provides for
>
> I'm not sure, but I think that's one of the many VB details changed
> (mostly for the better, but still, _massive_ incompatibility) in the
> current version (VB7 aka VB.NET) wrt older ones (VB6, VBA, etc).

Yes, I just checked and VB7 now requires parentheses on all argument
lists, so:

    someProcedure x

is now illegal.

    someProcedure(x)

and

    call someProcedure(x)

now do the same thing. The Visual Studio.Net editor will automatically
'correct' the first form into the second (unless you tell it not to).

Of course, while it is less likely to cause a major headache, the
confusing behaviour is still present, just pushed down a level. These
are both legal, but the second one ignores changes to x. At least you
are less likely to type it accidentally.

    someProcedure(x)
    someProcedure((x))

--
Duncan Booth                                             duncan@rcp.co.uk
int month(char *p){return(124864/((p[0]+p[1]-p[2]&0x1f)+1)%12)["\5\x8\3"
"\6\7\xb\1\x9\xa\2\0\4"];} // Who said my code was obscure?

 0
Reply duncan1 (177) 10/21/2003 10:56:10 AM

Tim Sweeney wrote:
>>
>> 1. f(x,y,z) sucks. f x y z would be much easier to type (see Haskell)
>>    90% of the code is function applications. Why not make it convenient?
>>
>> 9. Syntax for arrays is also bad [a (b c d) e f] would be better
>>    than [a, b(c,d), e, f]
>
> Agreed with your analysis, except for these two items.
>
> #1 is a matter of opinion, but in general:
>
> - f(x,y) is the standard set by mathematical notation and all the
> mainstream programming language families, and is library neutral:
> calling a curried function is f(x)(y), while calling an uncurried
> function is f(x,y).

Well, in most languages, curried functions are the standard. This has
some syntactic advantages, in areas that go beyond mathematical
tradition. (Since each branch of mathematics has its own traditions,
it's probably possible to find a branch where the functional programming
way of writing functions is indeed tradition *g*)

> - "f x y" is unique to the Haskell and LISP families of languages, and
> implies that most library functions are curried.

No, Lisp languages require parentheses around the call, i.e.
  (f x y)
Lisp does share the trait that it doesn't need commas.

> Otherwise you have a
> weird asymmetry between curried calls "f x y" and uncurried calls
> which translate back to "f(x,y)".

It's not an asymmetry. "f x y" is a function of two parameters.
"f (x, y)" is a function of a single parameter, which is an ordered
pair. In most cases such a difference is irrelevant, but there are cases
where it isn't.

> Widespread use of currying can lead
> to weird error messages when calling functions of many parameters: a
> missing third parameter in a call like f(x,y) is easy to report, while
> with curried notation, "f x y" is still valid, yet results in a type
> other than what you were expecting, moving the error up the AST to a
> less obvious place.

That's right. On the other hand, it makes it easy to write code that
just fills the first parameter of a function, and returns the result.
Such code is so commonplace that having weird error messages is
considered a small price to pay.

Actually, writing functional code is more about sticking together
functions than actually calling them. With such use, having to write
code like f (x, ...) instead of f x will gain in precision, but it will
clutter up the code so much that I'd expect the gain in readability to
be little, nonexistent or even negative. It might be interesting to
transform real-life code to a more standard syntax and see whether my
expectation indeed holds.

> In general, I'm wary of notations like "f x" that use whitespace as an
> operator (see http://www.research.att.com/~bs/whitespace98.pdf).

That was an April Fool's joke. A particularly clever one: the paper
starts by laying a marginally reasonable groundwork, only to advance
into realms of absurdity later on.

It would be unreasonable to make whitespace an operator in C++. This
doesn't mean that a language with a syntax designed for whitespace
cannot be reasonable, and in fact some languages do that, with good
effect. Reading Haskell code is like a fresh breeze, since you don't
have to mentally filter out all that syntactic noise.

The downside is that it's easy to get some detail wrong. One example is
a decision (was that Python?) to equate a tab with eight blanks, which
tends to mess up syntactic structure when editing the code with
over-eager editors. There are some other lessons to learn - but then,
whitespace-as-syntactic-element is a relatively new concept, and people
are still playing with it and trying out alternatives. The idea in
itself is useful, its incarnations aren't perfect (yet).

Regards,
Jo

 0
Reply joachim.durchholz (563) 10/21/2003 10:58:54 AM

Alex Martelli <aleax@aleax.it> writes:

> [..] the EXISTING call to foo() will NOT be "affected" by the "del
> foo" that happens right in the middle of it, since there is no
> further attempt to look up the name "foo" in the rest of that call's
> progress. [..]

What this and my other investigations amount to, is that in Python a
"name" is somewhat like a lisp symbol [1]. In particular, it is an
object that has a pre-computed hash-key, which is why
hash-table/dictionary lookups are reasonably efficient. My worry was
that the actual string hash-key would have to be computed at every
function call, which I believe would slow down the process some 10-100
times. I'm happy to hear it is not so.

[1] One major difference being that Python names are not first-class
objects. This is a big mistake wrt.
to supporting interactive programming in my personal opinion.

> As for your worries elsewhere expressed that name lookup may impose
> excessive overhead, in Python we like to MEASURE performance issues
> rather than just reason about them "abstractly"; which is why Python
> comes with a handy timeit.py script to time a code snippet
> accurately. [...]

Thank you for the detailed information. Still, I'm sure you will agree
that sometimes reasoning about things can provide insight with
predictive powers that you cannot achieve by mere experimentation.

-- 
Frode Vatvedt Fjeld

 0
Reply frodef (343) 10/21/2003 3:13:30 PM

"Frode Vatvedt Fjeld" <frodef@cs.uit.no> wrote in message
news:2hk76ylj39.fsf@vserver.cs.uit.no...
> What this and my other investigations amount to, is that in Python a
> "name" is somewhat like a lisp symbol [1].

This is true in that names are bound to objects rather than representing
a block of memory.

> In particular, it is an object that has a pre-computed hash-key,

NO. There is no name type. 'Name' is a grammatical category, with
particular syntax rules, for Python code, just like 'expression',
'statement' and many others. A name *may* be represented at runtime as a
string, as CPython *sometimes* does. The implementation *may*, for
efficiency, give strings a hidden hash value attribute, which CPython
does.

For even faster runtime 'name lookup' an implementation may represent
names as slot numbers (indexes) for a hidden, non-Python array. CPython
does this (with C pointer arrays) for function locals whenever the list
of locals is fixed at compile time, which is usually. (To prevent this
optimization, add to a function body something like 'from mymod import
*', if still allowed, that makes the number of locals unknowable until
runtime.)

To learn about generated bytecodes, read the dis module docs and use
dis.dis. For example:

>>> import dis
>>> def f(a):
...     b = a + 1
...
>>> dis.dis(f)
  0 SET_LINENO       1
  3 SET_LINENO       2
  6 LOAD_FAST        0 (a)
  9 LOAD_CONST       1 (1)
 12 BINARY_ADD
 13 STORE_FAST       1 (b)
 16 LOAD_CONST       0 (None)
 19 RETURN_VALUE

This says: load (onto stack) first pointer in local_vars array and
second pointer in local-constants array, add referenced values and
replace operand pointers with pointer to result, store that result
pointer in the second slot of local_vars, load first constant pointer
(always to None), and return. Who knows what *we* do when we read,
parse, and possibly execute Python code.

Terry J. Reedy

 0
Reply tjreedy (5184) 10/21/2003 5:44:42 PM

"Terry Reedy" <tjreedy@udel.edu> writes:

> [..] For even faster runtime 'name lookup' an implementation may
> represent names as slot numbers (indexes) for a hidden, non-Python
> array. CPython does this (with C pointer arrays) for function
> locals whenever the list of locals is fixed at compile time, which
> is usually. (To prevent this optimization, add to a function body
> something like 'from mymod import *', if still allowed, that makes
> the number of locals unknowable until runtime.) [..]

This certainly does not ease my worries over Python's abilities with
respect to interactivity and dynamism.

-- 
Frode Vatvedt Fjeld

 0
Reply frodef (343) 10/21/2003 6:31:15 PM

Frode Vatvedt Fjeld wrote:
   ...
>> excessive overhead, in Python we like to MEASURE performance issues
>> rather than just reason about them "abstractly"; which is why Python
>> comes with a handy timeit.py script to time a code snippet
>> accurately. [...]
>
> Thank you for the detailed information. Still, I'm sure you will agree
> that sometimes reasoning about things can provide insight with
> predictive powers that you cannot achieve by mere experimentation.
A few centuries ago, a compatriot of mine was threatened with torture,
and backed off, because he had dared state that "all science comes from
experience" -- he refuted the "reasoning about things" by MEASURING (and
fudging the numbers, if the chi square tests about his reports about the
sloping-plane experiments are right -- but then, Italians _are_
notoriously untrustworthy, even though sometimes geniuses;-).

These days, I'd hope not to be threatened with torture if I assert:
"reasoning" is cheap, that's its advantage -- it can lead you to advance
predictive hypotheses much faster than mere "data mining" through masses
of data might yield them. But those hypotheses are very dubious until
you've MEASURED what they predict. If you don't (or can't) measure, you
don't _really KNOW_; you just _OPINE_ (reasonably or not, justifiably or
not, etc). One independently repeatable measurement trumps a thousand
clever reasonings, when that measurement gives numbers contradicting the
reasonings' predictions -- that one number sends you back to the drawing
board. Or, at least, that's how we humble engineers see the world...

Alex

 0
Reply aleaxit (1612) 10/21/2003 10:51:57 PM

"Andrew Dalke" <adalke@mindspring.com> writes:
> [...]
> Nor have you given any sort of guideline on how to implement
> this sort of AI in the first place. Without it, you've just restated
> the dream of many people over the last few centuries. It's a
> dream I would like to see happen, which is why I agreed with you.
> [...]
> Truly I believe that programming languages as we know
> them are not the (direct) solution, hence my pointers to
> evolvable hardware and similar techniques.

You're right, I did not answer. I think that what is missing in classic
software, and that ought to be present in AI software, is some
introspective control: having a process checking that the other
processes are live and progressing, and able to act to correct any
infinite loop, break down or dead-lock.

Some hardware may help in controlling this controlling software, like on
the latest Macintosh: they automatically restart when the system is
hung. And purely at the hardware level, for a real life system, you
can't rely on only one processor.

-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/

 0
Reply spam173 (586) 10/21/2003 11:53:22 PM

tim@epicgames.com (Tim Sweeney) writes:

> In general, I'm wary of notations like "f x" that use whitespace as an
> operator (see http://www.research.att.com/~bs/whitespace98.pdf).

The \\ comment successor is GREAT!

-- 
__Pascal_Bourguignon__                   http://www.informatimago.com/

 0
Reply spam173 (586) 10/22/2003 12:04:45 AM

Andrew Dalke <adalke@mindspring.com> wrote:
> The best examples of resilient architectures I've seen come from
> genetic algorithms and other sorts of feedback training; eg,
> subsumptive architectures for robotics and evolvable hardware.
> There was a great article in CACM on programming an FPGA
> via GAs, in 1998/'99 (link, anyone?). It worked quite well (as
> I recall) but pointed out the hard part about this approach is
> that it's hard to understand, and the result used various defects
> on the chip (part of the circuit wasn't used but the chip wouldn't
> work without it) which makes the result harder to mass produce.

something along these lines?
http://www.cogs.susx.ac.uk/users/adrianth/cacm99/node3.html

John

 0
Reply jatwood2 (44) 10/22/2003 12:25:01 AM

"Jarek Zgoda" <jzgoda@gazeta.usun.pl> wrote in message
news:bmu1bj$l82$1@nemesis.news.tpi.pl...
> mike420@ziplip.com <mike420@ziplip.com> pisze:
>
> > 8. Can you undefine a function, value, class or unimport a module?
> >    (If the answer is no to any of these questions, Python is simply
> >     not interactive enough)
>
> Yes. By deleting a name from namespace. You better read some tutorial,
> this will save you some time.

Forgive my ignorance, but why would one want to delete a function name?
What does it buy you? I can see a use for interactive redefinition of a
function name, but deleting?

Marshall

 0
Reply mspight (144) 10/22/2003 2:25:32 AM

Alex Martelli wrote:
> Yes -- which is exactly why many non-programmers would prefer the
> parentheses-less notation -- with more obvious names of course;-).
> E.g.:
> emitwarning URGENT "meltdown imminent!!!"
> DOES look nicer to non-programmers than
> emitwarning(URGENT, "meltdown imminent!!!")

So let's write:

raise URGENT, "meltdown imminent!!!"

Gerrit.

-- 
182. If a father devote his daughter as a wife of Mardi of Babylon
(as in 181), and give her no present, nor a deed; if then her father
die, then shall she receive one-third of her portion as a child of her
father's house from her brothers, but Marduk may leave her estate to
whomsoever she wishes.
        -- 1780 BC, Hammurabi, Code of Law
-- 
Asperger Syndrome - a personal approach:
http://people.nl.linux.org/~gerrit/
Join the resistance against this cabinet: http://www.sp.nl/

 0
Reply gerrit1 (293) 10/22/2003 12:43:52 PM

"Scott McIntire" <mcintire_charlestown@comcast.net> wrote in message
news:MoEkb.821534$YN5.832338@sccrnsc01...
> It seems to me that the Agency would have fared better if they just used
> Lisp - which has bignums - and relied more on regression suites and less on
> the belief that static type checking systems would save the day.

I find that an odd conclusion. Given that the cost of bugs is so high
(especially in the cited case) I don't see a good reason for discarding
*anything* that leads to better correctness. Yes, bignums is a good
idea: overflow bugs in this day and age are as bad as C-style buffer
overruns. Why work with a language that allows them when there
are languages that don't?

But why should more regression testing mean less static type checking?
Both are useful. Both catch bugs. Why ditch one for the other?

Marshall


 0
Reply mspight (144) 10/22/2003 3:27:42 PM
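Scott's bignum point is easy to demonstrate in Python, which (like Lisp) promotes integers to arbitrary precision automatically, so the arithmetic itself never overflows; wraparound only appears if you deliberately reduce a result to a fixed-width cell. The variable name below is illustrative, not from the Ariane code:

```python
horizontal_bias = 2 ** 15 + 100    # 32868: already past the 16-bit signed maximum

# Python ints are bignums, so the arithmetic itself stays exact:
exact = horizontal_bias * horizontal_bias

# What a fixed 16-bit signed cell would hold instead (manual wraparound):
wrapped = (horizontal_bias + 2 ** 15) % 2 ** 16 - 2 ** 15
print(exact, wrapped)   # the wrapped value is negative: silent corruption
```

This is orthogonal to Marshall's question, of course: bignums remove one class of bug at runtime, while a static checker tries to reject another class before the program ever runs.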

Marshall Spight wrote:
> "Scott McIntire" <mcintire_charlestown@comcast.net> wrote in message news:MoEkb.821534$YN5.832338@sccrnsc01... > >>It seems to me that the Agency would have fared better if they just used >>Lisp - which has bignums - and relied more on regression suites and less on >>the belief that static type checking systems would save the day. > > > I find that an odd conclusion. Given that the cost of bugs is so high > (especially in the cited case) I don't see a good reason for discarding > *anything* that leads to better correctness. Yes, bignums is a good > idea: overflow bugs in this day and age are as bad as C-style buffer > overruns. Why work with a language that allows them when there > are languages that don't? > > But why should more regression testing mean less static type checking? > Both are useful. Both catch bugs. Why ditch one for the other? ....because static type systems work by reducing the expressive power of a language. It can't be any different for a strict static type system. You can't solve the halting problem in a general-purpose language. This means that eventually you might need to work around language restrictions, and this introduces new potential sources for bugs. (Now you could argue that current sophisticated type systems cover 90% of all cases and that this is good enough, but then I would ask you for empirical studies that back this claim. ;) I think soft typing is a good compromise, because it is a mere add-on to an otherwise dynamically typed language, and it allows programmers to override the decisions of the static type system when they know better. Pascal -- Pascal Costanza University of Bonn mailto:costanza@web.de Institute of Computer Science III http://www.pascalcostanza.de R�merstr. 164, D-53117 Bonn (Germany)   0 Reply costanza (1427) 10/22/2003 3:37:26 PM In article <bn687n$l6u$1@f1node01.rhrz.uni-bonn.de>, Pascal Costanza wrote: > Marshall Spight wrote: >> But why should more regression testing mean less static type checking? 
>> Both are useful. Both catch bugs. Why ditch one for the other?
>
> ...because static type systems work by reducing the expressive power of
> a language. It can't be any different for a strict static type system.
> You can't solve the halting problem in a general-purpose language.

What do you mean by "reducing the expressive power of the language"?
There are many general purpose statically typed programming languages
that are Turing complete, so it's not a theoretical consideration, as
you allude.

> This means that eventually you might need to work around language
> restrictions, and this introduces new potential sources for bugs.
>
> (Now you could argue that current sophisticated type systems cover 90%
> of all cases and that this is good enough, but then I would ask you for
> empirical studies that back this claim. ;)

Empirically, i write a lot of O'Caml code, and i never have to write
something in a non-intuitive manner to work around the type system. On
the contrary, every type error the compiler catches in my code indicates
code that *doesn't make sense*. I'd hate to imagine code that doesn't
make sense passing into regression testing. What if i forget to test a
non-sensical condition?

On the flip-side of the coin, i've also written large chunks of Scheme
code, and I *did* find myself making lots of nonsense errors that
weren't caught until run time, which significantly increased development
time and difficulty.

Furthermore, thinking about types during the development process keeps
me honest: i'm much more likely to write code that works if i've spent
some time understanding the problem and the types involved. This sort of
pre-development thinking helps to *eliminate* potential sources for
bugs, not introduce them. Even Scheme advocates encourage this (as in
Essentials of Programming Languages by Friedman, Wand, and Haynes).
> I think soft typing is a good compromise, because it is a mere add-on to
> an otherwise dynamically typed language, and it allows programmers to
> override the decisions of the static type system when they know better.

When do programmers know better? An int is an int and a string is a
string, and nary the twain shall be treated the same. I would rather
1 + "bar" signal an error at compile time than at run time.

Personally, i don't understand all this bally-hoo about "dynamic
languages" being the next great leap. Static typing is a luxury!

William

 0
Reply wlovas (87) 10/22/2003 6:27:56 PM

Pascal Costanza:
> ...because static type systems work by reducing the expressive power of
> a language. It can't be any different for a strict static type system.
> You can't solve the halting problem in a general-purpose language.
>
> This means that eventually you might need to work around language
> restrictions, and this introduces new potential sources for bugs.

Given what I know of embedded systems, I can effectively
guarantee you that all the code on the rocket was proven
to halt in not only a finite amount of time but a fixed amount of
time. So while what you say may be true for a general purpose
language, that appeal to the halting problem doesn't apply given
a hard real time constraint.

Andrew
dalke@dalkescientific.com

 0
Reply adalke (604) 10/22/2003 7:34:40 PM

Pascal Costanza <costanza@web.de> writes:

> ...because static type systems work by reducing the expressive power
> of a language.

It depends a whole lot on what you consider "expressive". In my book,
static type systems (at least some of them) work by increasing the
expressive power of the language because they let me express certain
intended invariants in a way that a compiler can check (and enforce!)
statically, thereby expediting the discovery of problems by shortening
the edit-compile-run-debug cycle.
> (Now you could argue that current sophisticated type systems cover 90%
> of all cases and that this is good enough, but then I would ask you
> for empirical studies that back this claim. ;)

In my own experience they seem to cover at least 99%.

(And where are _your_ empirical studies which show that "working around
language restrictions increases the potential for bugs"?)

Matthias

 0
Reply find19 (1245) 10/22/2003 7:55:45 PM

William Lovas <wlovas@force.stwing.upenn.edu> writes:

> [...] Static typing is a luxury!

Very well put!

 0
Reply find19 (1245) 10/22/2003 7:57:36 PM

William Lovas wrote:
> In article <bn687n$l6u$1@f1node01.rhrz.uni-bonn.de>, Pascal Costanza
> wrote:
>
>> Marshall Spight wrote:
>>
>>> But why should more regression testing mean less static type checking?
>>> Both are useful. Both catch bugs. Why ditch one for the other?
>>
>> ...because static type systems work by reducing the expressive power of
>> a language. It can't be any different for a strict static type system.
>> You can't solve the halting problem in a general-purpose language.
>
> What do you mean by "reducing the expressive power of the language"? There
> are many general purpose statically typed programming languages that are
> Turing complete, so it's not a theoretical consideration, as you allude.

For example, static type systems are incompatible with dynamic
metaprogramming. This is objectively a reduction of expressive power,
because programs that don't allow for dynamic metaprogramming can't be
extended in certain ways at runtime, by definition.

>> This means that eventually you might need to work around language
>> restrictions, and this introduces new potential sources for bugs.
>>
>> (Now you could argue that current sophisticated type systems cover 90%
>> of all cases and that this is good enough, but then I would ask you for
>> empirical studies that back this claim.
>> ;)
>
> Empirically, i write a lot of O'Caml code, and i never have to write
> something in a non-intuitive manner to work around the type system. On the
> contrary, every type error the compiler catches in my code indicates code
> that *doesn't make sense*. I'd hate to imagine code that doesn't make
> sense passing into regression testing. What if i forget to test a
> non-sensical condition?

You need some testing discipline, which is supported well by unit
testing frameworks.

> On the flip-side of the coin, i've also written large chunks of Scheme
> code, and I *did* find myself making lots of nonsense errors that weren't
> caught until run time, which significantly increased development time
> and difficulty.
>
> Furthermore, thinking about types during the development process keeps me
> honest: i'm much more likely to write code that works if i've spent some
> time understanding the problem and the types involved. This sort of
> pre-development thinking helps to *eliminate* potential sources for bugs,
> not introduce them. Even Scheme advocates encourage this (as in Essentials
> of Programming Languages by Friedman, Wand, and Haynes).

Yes, thinking about a problem to understand it better occasionally helps
to write better code. This has nothing to do with static typing. This
could also be achieved by placing some other arbitrary restrictions on
your coding style.

>> I think soft typing is a good compromise, because it is a mere add-on to
>> an otherwise dynamically typed language, and it allows programmers to
>> override the decisions of the static type system when they know better.
>
> When do programmers know better? An int is an int and a string is a
> string, and nary the twain shall be treated the same. I would rather
> 1 + "bar" signal an error at compile time than at run time.

Such code would easily be caught very soon in your unit tests.
Pascal

 0
Reply costanza (1427) 10/23/2003 12:24:47 AM

Andrew Dalke wrote:
> Pascal Costanza:
>
>> ...because static type systems work by reducing the expressive power of
>> a language. It can't be any different for a strict static type system.
>> You can't solve the halting problem in a general-purpose language.
>>
>> This means that eventually you might need to work around language
>> restrictions, and this introduces new potential sources for bugs.
>
> Given what I know of embedded systems, I can effectively
> guarantee you that all the code on the rocket was proven
> to halt in not only a finite amount of time but a fixed amount of
> time.

Yes, this is a useful restriction for a certain scenario. I don't have
anything against restrictions put on code, provided these restrictions
are justified.

Static type systems are claimed to generally improve your code. I don't
see that.

Pascal

 0
Reply costanza (1427) 10/23/2003 12:27:50 AM

Pascal Costanza wrote:
> ...because static type systems work by reducing the expressive power of
> a language. It can't be any different for a strict static type system.
> You can't solve the halting problem in a general-purpose language.

The final statement is correct, but you don't need to solve the halting
problem: it's enough to allow the specification of some easy-to-prove
properties, without hindering the programmer too much.

Most functional languages with a static type system don't require that
the programmer writes down the types, they are inferred from usage. And
the type checker will complain as soon as the usage of some data item is
inconsistent. IOW if you write

  a = b + "asdf"

the type checker will infer that both a and b are strings; however, if
you continue with

  c = a + b + 3

it will report a type error because 3 and "asdf" don't have a common
supertype with a "+" operation.
It's the best of both worlds: no fuss with type declarations (which is
one of the less interesting things one spends time with) while getting
good static checking.

(Nothing is as good in practice as it sounds in theory, and type
inference is no exception. Interpreting type error messages requires
some getting used to - just like interpreting syntax error messages is a
bit of an art, leaving one confounded for a while until one "gets it".)

> (Now you could argue that current sophisticated type systems cover 90%
> of all cases and that this is good enough, but then I would ask you for
> empirical studies that back this claim. ;)

My 100% subjective private study reveals not a single complaint about
over-restrictive type systems in comp.lang.functional in the last 12
months.

Regards,
Jo

 0
Reply joachim.durchholz (563) 10/23/2003 12:36:02 AM

Pascal Costanza <costanza@web.de> writes:

> Marshall Spight wrote:
>> But why should more regression testing mean less static type checking?
>> Both are useful. Both catch bugs. Why ditch one for the other?
>
> ...because static type systems work by reducing the expressive power of
> a language. It can't be any different for a strict static type system.
> You can't solve the halting problem in a general-purpose language.

Most modern "statically typed" languages (e.g. Mercury, Glasgow Haskell,
OCaml, C++, Java, C#, etc.) aren't *strictly* statically typed anyway.
They generally have some support for *optional* dynamic typing.

This is IMHO a good trade-off. Most of the time, you want static typing;
it helps in the design process, with documentation, error checking, and
efficiency. Sometimes you need a bit more flexibility than the static
type system allows, and then in those few cases, you can make use of
dynamic typing ("univ" in Mercury, "Dynamic" in ghc, "System.Object" in
C#, etc.). The need to do this is not uncommon in languages like C# and
Java that don't support parametric polymorphism, but pretty rare in
languages that do.
> I think soft typing is a good compromise, because it is a mere add-on to
> an otherwise dynamically typed language, and it allows programmers to
> override the decisions of the static type system when they know better.

Soft typing systems give you dynamic typing unless you explicitly ask
for static typing. That is the wrong default, IMHO. It works much
better to add dynamic typing to a statically typed language than the
other way around.

--
Fergus Henderson <fjh@cs.mu.oz.au>  | "I have always known that the pursuit
The University of Melbourne         | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  | -- the last words of T. S. Garp.

 0
Reply fjh (268) 10/23/2003 12:38:50 AM

Matthias Blume wrote:

> Pascal Costanza <costanza@web.de> writes:
>
>> ...because static type systems work by reducing the expressive power
>> of a language.
>
> It depends a whole lot on what you consider "expressive". In my book,
> static type systems (at least some of them) work by increasing the
> expressive power of the language because they let me express certain
> intended invariants in a way that a compiler can check (and enforce!)
> statically, thereby expediting the discovery of problems by shortening
> the edit-compile-run-debug cycle.

The set of programs that are useful but cannot be checked by a static
type system is by definition bigger than the set of useful programs that
can be statically checked. So dynamically typed languages allow me to
express more useful programs than statically typed languages.

>> (Now you could argue that current sophisticated type systems cover 90%
>> of all cases and that this is good enough, but then I would ask you
>> for empirical studies that back this claim. ;)
>
> In my own experience they seem to cover at least 99%.

I don't question that. If this works well for you, keep it up. ;)

> (And where are _your_ empirical studies which show that "working around
> language restrictions increases the potential for bugs"?)
I don't need a study for that statement because it's a simple argument:
if the language doesn't allow me to express something in a direct way,
but requires me to write considerably more code then I have considerably
more opportunities for making mistakes.

Pascal

 0
Reply costanza (1427) 10/23/2003 12:39:57 AM

Fergus Henderson wrote:

> Pascal Costanza <costanza@web.de> writes:
>
>> Marshall Spight wrote:
>>
>>> But why should more regression testing mean less static type checking?
>>> Both are useful. Both catch bugs. Why ditch one for the other?
>>
>> ...because static type systems work by reducing the expressive power of
>> a language. It can't be any different for a strict static type system.
>> You can't solve the halting problem in a general-purpose language.
>
> Most modern "statically typed" languages (e.g. Mercury, Glasgow Haskell,
> OCaml, C++, Java, C#, etc.) aren't *strictly* statically typed anyway.
> They generally have some support for *optional* dynamic typing.
>
> This is IMHO a good trade-off. Most of the time, you want static typing;
> it helps in the design process, with documentation, error checking, and
> efficiency.

+ Design process: There are clear indications that processes like
extreme programming work better than processes that require some kind of
specification stage. Dynamic typing works better with XP than static
typing because with dynamic typing you can write unit tests without
having the need to immediately write appropriate target code.

+ Documentation: Comments are usually better for handling documentation.
;) If you want your "comments" checked, you can add assertions.

+ Error checking: I can only guess what you mean by this. If you mean
something like Java's checked exceptions, there are clear signs that
this is a very bad feature.

+ Efficiency: As Paul Graham puts it, efficiency comes from profiling.
In order to achieve efficiency, you need to identify the bottle-necks of
your program.
No amount of static checks can identify bottle-necks, you have to
actually run the program to determine them.

> Sometimes you need a bit more flexibility than the
> static type system allows, and then in those few cases, you can make use
> of dynamic typing ("univ" in Mercury, "Dynamic" in ghc,
> "System.Object" in C#, etc.). The need to do this is not uncommon
> in languages like C# and Java that don't support parametric polymorphism,
> but pretty rare in languages that do.

I wouldn't count the use of java.lang.Object as a case of dynamic
typing. You need to explicitly cast objects of this type to some class
in order to make useful method calls. You only do this to satisfy the
static type system. (BTW, this is one of the sources for potential bugs
that you don't have in a decent dynamically typed language.)

>> I think soft typing is a good compromise, because it is a mere add-on to
>> an otherwise dynamically typed language, and it allows programmers to
>> override the decisions of the static type system when they know better.
>
> Soft typing systems give you dynamic typing unless you explicitly ask
> for static typing. That is the wrong default, IMHO. It works much
> better to add dynamic typing to a statically typed language than the
> other way around.

I don't think so.

Pascal

 0
Reply costanza (1427) 10/23/2003 1:15:34 AM

Joachim Durchholz wrote:

> Most functional languages with a static type system don't require that
> the programmer writes down the types, they are inferred from usage. And
> the type checker will complain as soon as the usage of some data item is
> inconsistent.

I know about type inference. The set of programs that can be checked
with type inference is still a subset of all useful programs.

> My 100% subjective private study reveals not a single complaint about
> over-restrictive type systems in comp.lang.functional in the last 12
> months.

I am not surprised.
:)

Pascal

 0
Reply costanza (1427) 10/23/2003 1:21:17 AM

Joachim Durchholz <joachim.durchholz@web.de> writes:

> My 100% subjective private study reveals not a single complaint about
> over-restrictive type systems in comp.lang.functional in the last 12
> months.

While I tend to agree that such complaints are rare, such complaints
also tend to be language-specific, and thus get posted to
language-specific forums, e.g. the Haskell mailing list, the Clean
mailing list, the OCaml mailing list, etc., rather than to more general
forums like comp.lang.functional.

--
Fergus Henderson <fjh@cs.mu.oz.au>  | "I have always known that the pursuit
The University of Melbourne         | of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  | -- the last words of T. S. Garp.

 0
Reply fjh (268) 10/23/2003 1:26:48 AM

On Thu, Oct 23, 2003 at 12:38:50AM +0000, Fergus Henderson wrote:

> Pascal Costanza <costanza@web.de> writes:
>
>> Marshall Spight wrote:
>>> But why should more regression testing mean less static type checking?
>>> Both are useful. Both catch bugs. Why ditch one for the other?
>>
>> ...because static type systems work by reducing the expressive power of
>> a language. It can't be any different for a strict static type system.
>> You can't solve the halting problem in a general-purpose language.
>
> Most modern "statically typed" languages (e.g. Mercury, Glasgow Haskell,
> OCaml, C++, Java, C#, etc.) aren't *strictly* statically typed anyway.
> They generally have some support for *optional* dynamic typing.
>
> This is IMHO a good trade-off. Most of the time, you want static typing;
> it helps in the design process, with documentation, error checking, and
> efficiency. Sometimes you need a bit more flexibility than the
> static type system allows, and then in those few cases, you can make use
> of dynamic typing ("univ" in Mercury, "Dynamic" in ghc,
> "System.Object" in C#, etc.).
> The need to do this is not uncommon
> in languages like C# and Java that don't support parametric polymorphism,
> but pretty rare in languages that do.

The trouble with these "dynamic" extensions is that they are "dynamic
type systems" from a statically typed viewpoint. A person who uses
truly dynamically typed languages would not consider them to be the same
thing.

In SML, for example, such an extension might be implemented using a sum
type, even using an "exn" type so that it can be extended in separate
places. The moment this system fails (and when a true dynamic system
carries on) is when such a type is redefined. The reason is because the
new type is not considered to be the same as the old type, due to
generativity of type names, and old code requires recompilation. I'm
told Haskell has extensions that will work around even this, but the
last time I tried to play with those, it failed miserably because
Haskell doesn't really support an interactive REPL so there was no way
to test it. (Maybe this was ghc's fault?)

As for Java/C#, downcasting is more of an example of static type systems
getting in the way of OOP rather than of a dynamic type system. (It's
because those languages are the result of an unholy union between the
totally dynamic Smalltalk and the awkwardly static C++.)

>> I think soft typing is a good compromise, because it is a mere add-on to
>> an otherwise dynamically typed language, and it allows programmers to
>> override the decisions of the static type system when they know better.
>
> Soft typing systems give you dynamic typing unless you explicitly ask
> for static typing. That is the wrong default, IMHO. It works much
> better to add dynamic typing to a statically typed language than the
> other way around.

I view static typing as an added analysis stage. In that light, it
makes no sense to "add" dynamic typing to it.
Also, I think that static typing should be part of a more comprehensive
static analysis phase which itself is part of a greater suite of tests.

--
; Matthew Danish <mdanish@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."

 0
Reply mdanish (271) 10/23/2003 1:52:34 AM

Pascal Costanza <costanza@web.de> writes:

> The set of programs that are useful but cannot be checked by a static
> type system is by definition bigger than the set of useful programs
> that can be statically checked.

By whose definition? What *is* your definition of "useful"? It is
clear to me that static typing improves maintainability, scalability,
and helps with the overall design of software. (At least that's my
personal experience, and as others can attest, I do have reasonably
extensive experience either way.)

A 100,000 line program in an untyped language is useless to me if I am
trying to make modifications -- unless it is written in a highly
stylized way which is extensively documented (and which usually means
that you could have captured this style in static types). So under
this definition of "useful" it may very well be that there are fewer
programs which are useful under dynamic typing than there are under
(modern) static typing.

> So dynamically typed languages allow
> me to express more useful programs than statically typed languages.

There are also programs which I cannot express at all in a purely
dynamically typed language. (By "program" I mean not only the
executable code itself but also the things that I know about this code.)
Those are the programs which are protected against certain bad things
from happening without having to do dynamic tests to that effect
themselves. (Some of these "bad things" are, in fact, not dynamically
testable at all.)

> I don't question that. If this works well for you, keep it up. ;)

Don't fear. I will.
>> (And where are _your_ empirical studies which show that "working around
>> language restrictions increases the potential for bugs"?)
>
> I don't need a study for that statement because it's a simple
> argument: if the language doesn't allow me to express something in a
> direct way, but requires me to write considerably more code then I
> have considerably more opportunities for making mistakes.

This assumes that there is a monotone function which maps token count
to error-proneness and that the latter depends on nothing else. This
is a highly dubious assumption. In many cases the few extra tokens
you write are exactly the ones that let the compiler verify that your
thinking process was accurate (to the degree that this fact is
captured by types). If you get them wrong *or* if you got the
original code wrong, then the compiler can tell you. Without the
extra tokens, the compiler is helpless in this regard.

To make a (not so far-fetched, btw :) analogy: Consider logical
statements and formal proofs. Making a logical statement is easy and
can be very short. It is also easy to make mistakes without noticing;
after all saying something that is false while still believing it to
be true is extremely easy. Just by looking at the statement it is
also often hard to tell whether the statement is right. In fact,
computers have a hard time with this task, too. Theorem-proving is
hard.

On the other hand, writing down the statement with a formal proof is
impossible to get wrong without anyone noticing because checking the
proof for validity is trivial compared to coming up with it in the
first place. So even though writing the statement with a proof seems
harder, once you have done it and it passes the proof checker you can
rest assured that you got it right.
The longer "program" will have fewer "bugs" on average.

Matthias

 0
Reply find19 (1245) 10/23/2003 2:16:08 AM

Pascal Costanza:

> The set of programs that are useful but cannot be checked by a static
> type system is by definition bigger than the set of useful programs that
> can be statically checked. So dynamically typed languages allow me to
> express more useful programs than statically typed languages.

Ummm, both are infinite and both are countably infinite, so those sets
are the same size. You're falling for Hilbert's Paradox.

Also, while I don't know a proof, I'm pretty sure that type inferencing
can do addition (and theorem proving) so is equal in power to
programming.

> I don't need a study for that statement because it's a simple argument:
> if the language doesn't allow me to express something in a direct way,
> but requires me to write considerably more code then I have considerably
> more opportunities for making mistakes.

The size comparisons I've seen (like the great programming language
shootout) suggest that Ocaml and Scheme require about the same amount
of code to solve small problems. Yet last I saw, Ocaml is strongly typed
at compile time. How do you assume then that strongly&statically typed
languages require "considerable more code"?

Andrew
dalke@dalkescientific.com

 0
Reply adalke (604) 10/23/2003 4:02:32 AM

"Pascal Costanza" <costanza@web.de> wrote in message
news:bn774d$qj3$1@newsreader2.netcologne.de...

>>> When do programmers know better? An int is an int and a string is a
>>> string, and nary the twain shall be treated the same. I would rather
>>> `1 + "bar"` signal an error at compile time than at run time.
>>
>> Such code would easily be caught very soon in your unit tests.

Provided you think to write such a test, and expend the effort to do so.
Contrast to what happens in a statically typed language, where this is
done for you automatically.

Unit tests are great; I heartily endorse them. But they *cannot* do
everything that static type checking can do.
Likewise, static type checking *cannot* do everything unit testing can
do.

So again I ask, why is it either/or? Why not both? I've had *great*
success building systems with comprehensive unit test suites in
statically typed languages. The unit tests catch some bugs, and the
static type checking catches other bugs.

Marshall

 0
Reply mspight (144) 10/23/2003 4:39:31 AM

Joachim Durchholz <joachim.durchholz@web.de> writes:

> My 100% subjective private study reveals not a single complaint about
> over-restrictive type systems in comp.lang.functional in the last 12
> months.

I also read c.l.functional (albeit only lightly). In the last 12
months, I have encountered dozens of complaints about over-restrictive
type systems in Haskell, OCaml, SML, etc.

The trick is that these complaints are not phrased in precisely that
way. Rather, someone is trying to do some specific task, and has
difficulty arriving at a usable type needed in the task. Often posters
provide good answers--Durchholz included. But the underlying complaint
-really was- about the restrictiveness of the type system.

That's not even to say that the overall advantages of a strong type
system are not worthwhile--even perhaps better than more dynamic
languages. But it's quite disingenuous to claim that no one ever
complains about it. Obviously, no one who finds a strong static type
system unacceptable is going to be committed to using, e.g.
Haskell--the complaint doesn't take the form of "I'm taking my marbles
and going home".

Yours, Lulu...

--
Keeping medicines from the bloodstreams of the sick; food from the
bellies of the hungry; books from the hands of the uneducated;
technology from the underdeveloped; and putting advocates of freedom in
prisons. Intellectual property is to the 21st century what the slave
trade was to the 16th.
 0
Reply mertz (174) 10/23/2003 5:01:44 AM

Quoth Lulu of the Lotus-Eaters <mertz@gnosis.cx>:

| Joachim Durchholz <joachim.durchholz@web.de> writes:
|> My 100% subjective private study reveals not a single complaint about
|> over-restrictive type systems in comp.lang.functional in the last 12
|> months.
|
| I also read c.l.functional (albeit only lightly). In the last 12
| months, I have encountered dozens of complaints about over-restrictive
| type systems in Haskell, OCaml, SML, etc.
|
| The trick is that these complaints are not phrased in precisely that
| way. Rather, someone is trying to do some specific task, and has
| difficulty arriving at a usable type needed in the task. Often posters
| provide good answers--Durchholz included. But the underlying complaint
| -really was- about the restrictiveness of the type system.
|
| That's not even to say that the overall advantages of a strong type
| system are not worthwhile--even perhaps better than more dynamic
| languages. But it's quite disingenuous to claim that no one ever
| complains about it. Obviously, no one who finds a strong static type
| system unacceptable is going to be committed to using, e.g.
| Haskell--the complaint doesn't take the form of "I'm taking my marbles
| and going home".

No one said that strict typing is free, that it requires no effort or
learning from the programmer. That would be ridiculous - of course a
type system is naturally restrictive, that's its nature: a restrictive
system that imposes a constraint on the programmer, who needs to learn
about that in order to use the language effectively.

"Over-restrictive" is different. If there are questions about static
typing, it does not follow that it's over-restrictive, nor that the
questions constitute a complaint to that effect.

	Donn Cave, donn@drizzle.com

 0
Reply donn (251) 10/23/2003 6:47:41 AM

"Pascal Costanza" <costanza@web.de> wrote in message
news:bn7a3p$1h6$1@newsreader2.netcologne.de...
> I wouldn't count the use of java.lang.Object as a case of dynamic
> typing. You need to explicitly cast objects of this type to some class
> in order to make useful method calls. You only do this to satisfy the
> static type system. (BTW, this is one of the sources for potential bugs
> that you don't have in a decent dynamically typed language.)

Huh? The explicit-downcast construct present in Java is the programmer
saying to the compiler: "trust me; you can accept this type of
parameter." In a dynamically-typed language, *every* call is like this!
So if this is a source of errors (which I believe it is) then
dynamically-typed languages have this potential source of errors with
every function call, vs. statically-typed languages which have them only
in those few cases where the programmer explicitly puts them in.

Marshall

 0
Reply mspight (144) 10/23/2003 7:15:09 AM

Here's a link to a relevant system that may be worthwhile to check out:

http://www.simulys.com/guideto.htm

--
; Matthew Danish <mdanish@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."

 0
Reply mdanish (271) 10/23/2003 7:58:25 AM

Pascal Costanza <costanza@web.de> wrote:

> You need some testing discipline, which is supported well by unit
> testing frameworks.

IMHO it helps to think about static typing as a special kind of unit
tests. Like unit tests, they verify that for some input values, the
function in question will produce the correct output values. Unlike
unit tests, they do this for a class of values, instead of testing
statistically by example. And unlike unit tests, they are pervasive:
Every execution path will be automatically tested; you don't have to
invest brain power to make sure you don't forget one.

Type inference will automatically write unit tests for you (besides
other uses like hinting that a routine may be more general than you
thought).
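The analogy can be made concrete in Python with a hand-rolled checking decorator (`typed` is a made-up helper, purely for illustration): it plays the role of a type annotation, except that it runs the "test" on every call instead of verifying the whole class of values once, ahead of time.

```python
def typed(*arg_types, returns=None):
    # A toy stand-in for a checked type annotation: asserts the
    # declared types on every call, like a pervasive unit test.
    def deco(f):
        def wrapper(*args):
            for value, expected in zip(args, arg_types):
                assert isinstance(value, expected), \
                    f"{value!r} is not a {expected.__name__}"
            result = f(*args)
            if returns is not None:
                assert isinstance(result, returns)
            return result
        return wrapper
    return deco

@typed(int, int, returns=int)
def add(x, y):
    return x + y

print(add(1, 2))  # 3; add(1, "bar") would fail the assertion
```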
But since the computer is not very smart, they will test only more or
less trivial things. But that's still good, because then you don't have
to write the trivial unit tests, and only have to care about the
non-trivial ones.

Type annotations are an assertion language that you use to write down
that kind of unit tests.

> Static type systems are claimed to generally improve your code. I
> don't see that.

They do it for the same reason that unit tests do:

* They are executable documentation.
* By writing them down first, you focus on what you want to do.
* They help with refactoring.

etc.

Of course you can replace the benefits of static typing by enough unit
tests. But they are different verification tools: For some kind of
problems, one is better, for other kinds, the other. There's no reason
not to use both.

- Dirk

 0
Reply dthierbach (210) 10/23/2003 8:10:17 AM

Marshall Spight wrote:

> "Pascal Costanza" <costanza@web.de> wrote in message
> news:bn774d$qj3$1@newsreader2.netcologne.de...
>
>>> When do programmers know better? An int is an int and a string is a
>>> string, and nary the twain shall be treated the same. I would rather
>>> `1 + "bar"` signal an error at compile time than at run time.
>>
>> Such code would easily be caught very soon in your unit tests.
>
> Provided you think to write such a test, and expend the effort
> to do so. Contrast to what happens in a statically typed language,
> where this is done for you automatically.

There are other things that are done automatically for me in dynamically
typed languages that I care more about than such static checks. I don't
recall ever writing 1 + "bar". (Yes, this is a rhetorical statement. ;)

> Unit tests are great; I heartily endorse them. But they *cannot*
> do everything that static type checking can do. Likewise,
> static type checking *cannot* do everything unit testing
> can do.

Right.

> So again I ask, why is it either/or? Why not both?
> I've had *great* success building systems with comprehensive unit
> test suites in statically typed languages. The unit tests catch
> some bugs, and the static type checking catches other bugs.

That's great for you, and if it works for you, just keep it up. But I
have given reasons why one would not want to have static type checking
by default. All I am trying to say is that this depends on the context.
Static type systems are definitely not _generally_ better than dynamic
type systems.

Pascal

 0
Reply costanza (1427) 10/23/2003 8:33:53 AM

Pascal Costanza <costanza@web.de> wrote in message
news:<bn7a3p$1h6$1@newsreader2.netcologne.de>...

> + Design process: There are clear indications that processes like
> extreme programming work better than processes that require some kind of
> specification stage. Dynamic typing works better with XP than static
> typing because with dynamic typing you can write unit tests without
> having the need to immediately write appropriate target code.

This is utterly bogus. If you write unit tests beforehand, you are
already pre-specifying the interface that the code to be tested will
present. I fail to see how dynamic typing can confer any kind of
advantage here.

> + Documentation: Comments are usually better for handling documentation.
> ;) If you want your "comments" checked, you can add assertions.

Are you seriously claiming that concise, *automatically checked*
documentation (which is one function served by explicit type
declarations) is inferior to unchecked, ad hoc commenting? For one
thing, type declarations *cannot* become out-of-date (as comments can
and often do) because a discrepancy between type declaration and
definition will be immediately flagged by the compiler.

> + Error checking: I can only guess what you mean by this. If you mean
> something like Java's checked exceptions, there are clear signs that
> this is a very bad feature.
I think Fergus was referring to static error checking, but (and forgive
me if I'm wrong here) that's a feature you seem to insist has little or
no practical value - indeed, you seem to claim it is even an impediment
to productive programming. I'll leave this point as one of violent
disagreement...

> + Efficiency: As Paul Graham puts it, efficiency comes from profiling.
> In order to achieve efficiency, you need to identify the bottle-necks of
> your program. No amount of static checks can identify bottle-necks, you
> have to actually run the program to determine them.

I don't think you understand much about language implementation. A
strong, expressive, static type system provides for optimisations that
cannot be done any other way. These optimizations alone can be expected
to make a program several times faster. For example:

- no run-time type checks need be performed;
- data representation is automatically optimised by the compiler
  (e.g. by pointer tagging);
- polymorphic code can be inlined and/or specialised according to each
  application;
- if the language does not support dynamic typing then values need not
  carry their own type identifiers around with them, thereby saving
  space;
- if the language does support explicit dynamic typing, then only those
  places using that facility need plumb in the type identifiers
  (something done automatically by the compiler.)

On top of all that, you can still run your code through the profiler,
although the need for hand-tuned optimization (and consequent code
obfuscation) may be completely obviated by the speed advantage conferred
by the compiler exploiting a statically checked type system.

> I wouldn't count the use of java.lang.Object as a case of dynamic
> typing. You need to explicitly cast objects of this type to some class
> in order to make useful method calls. You only do this to satisfy the
> static type system.
> (BTW, this is one of the sources for potential bugs
> that you don't have in a decent dynamically typed language.)

No! A thousand times, no! Let me put it like this. Say I have a
statically, expressively, strongly typed language L. And I have another
language L' that is identical to L except it lacks the type system.
Now, any program in L that has the type declarations removed is also a
program in L'. The difference is that a program P rejected by the
compiler for L can be converted to a program P' in L' which *may even
appear to run fine for most cases*. However, and this is the really
important point, P' is *still* a *broken* program. Simply ignoring the
type problems does not make them go away: P' still contains all the bugs
that program P did.

>> Soft typing systems give you dynamic typing unless you explicitly ask
>> for static typing. That is the wrong default, IMHO. It works much
>> better to add dynamic typing to a statically typed language than the
>> other way around.
>
> I don't think so.

Yes, but your arguments are unconvincing. I should point out that most
of the people on comp.lang.functional (a) probably used weakly/
dynamically typed languages for many years, and at an expert level,
before discovering statically typed (declarative) programming and (b)
probably still do use such languages on a regular basis. Expressive,
static typing is not a message shouted from ivory towers by people
lacking real-world experience.

Why not make the argument more concrete? Present a problem
specification for an every-day programming task that you think seriously
benefits from dynamic typing. Then we can discuss the pros and cons of
different approaches.

-- Ralph

 0
Reply rafe (28) 10/23/2003 8:39:04 AM

Matthias Blume wrote:

> Pascal Costanza <costanza@web.de> writes:
>
>> The set of programs that are useful but cannot be checked by a static
>> type system is by definition bigger than the set of useful programs
>> that can be statically checked.
> By whose definition? What *is* your definition of "useful"? It is
> clear to me that static typing improves maintainability, scalability,
> and helps with the overall design of software. (At least that's my
> personal experience, and as others can attest, I do have reasonably
> extensive experience either way.)
>
> A 100,000 line program in an untyped language is useless to me if I am
> trying to make modifications -- unless it is written in a highly
> stylized way which is extensively documented (and which usually means
> that you could have captured this style in static types). So under
> this definition of "useful" it may very well be that there are fewer
> programs which are useful under dynamic typing than there are under
> (modern) static typing.

A statically typed program is useless if one tries to make modifications
_at runtime_. There are software systems out there that make use of
dynamic modifications, and they have a strong advantage in specific
areas because of this.

If you can come up with a static type system for an unrestricted runtime
metaobject protocol, then I am fine with static typing.

>> So dynamically typed languages allow
>> me to express more useful programs than statically typed languages.
>
> There are also programs which I cannot express at all in a purely
> dynamically typed language. (By "program" I mean not only the
> executable code itself but also the things that I know about this code.)
> Those are the programs which are protected against certain bad things
> from happening without having to do dynamic tests to that effect
> themselves.

This is a circular argument. You are already suggesting the solution in
your problem description.

> (Some of these "bad things" are, in fact, not dynamically
> testable at all.)

For example?

>> I don't question that. If this works well for you, keep it up. ;)
>
> Don't fear. I will.

...and BTW, please let me keep up using dynamically typed languages,
because this works well for me!
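The kind of run-time modification meant here can be sketched in Python (the `Account` class is made up for illustration): a live class can be extended while existing instances keep working, with no recompilation and no new type.

```python
class Account:
    def __init__(self, balance):
        self.balance = balance

acct = Account(100)

# Extend the live class; the existing instance picks the new
# method up immediately.
def deposit(self, amount):
    self.balance += amount

Account.deposit = deposit
acct.deposit(50)
print(acct.balance)  # 150
```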
(That's the whole of my answer to the original question, why one would want to give up static typing.) >>>(And where are _your_ empirical studies which show that "working around >>>language restrictions increases the potential for bugs"?) >> >>I don't need a study for that statement because it's a simple >>argument: if the language doesn't allow me to express something in a >>direct way, but requires me to write considerably more code then I >>have considerably more opportunities for making mistakes. > > > This assumes that there is a monotone function which maps token count > to error-proneness and that the latter depends on nothing else. This > is a highly dubious assumption. In many cases the few extra tokens > you write are exactly the ones that let the compiler verify that your > thinking process was accurate (to the degree that this fact is > captured by types). If you get them wrong *or* if you got the > original code wrong, then the compiler can tell you. Without the > extra tokens, the compiler is helpless in this regard. See the example of downcasts in Java. > To make a (not so far-fetched, btw :) analogy: Consider logical > statements and formal proofs. Making a logical statement is easy and > can be very short. It is also easy to make mistakes without noticing; > after all saying something that is false while still believing it to > be true is extremely easy. Just by looking at the statement it is > also often hard to tell whether the statement is right. In fact, > computers have a hard time with this task, too. Theorem-proving is > hard. > On the other hand, writing down the statement with a formal proof is > impossible to get wrong without anyone noticing because checking the > proof for validity is trivial compared to coming up with it in the > first place. So even though writing the statement with a proof seems > harder, once you have done it and it passes the proof checker you can > rest assured that you got it right. 
The longer "program" will have fewer > "bugs" on average. Yes, but then you have a proof that is tailored to the statement you have made. The claim of people who favor static type systems is that static type systems are _generally_ helpful. Pascal   0 Reply costanza (1427) 10/23/2003 8:44:06 AM Andrew Dalke wrote: > Pascal Costanza: > >>The set of programs that are useful but cannot be checked by a static >>type system is by definition bigger than the set of useful programs that >>can be statically checked. So dynamically typed languages allow me to >>express more useful programs than statically typed languages. > > > Ummm, both are infinite and both are countably infinite, so those sets > are the same size. You're falling for Hilbert's Paradox. > > Also, while I don't know a proof, I'm pretty sure that type inferencing > can do addition (and theorem proving) so is equal in power to > programming. Just give me a static type system CLOS + MOP. >>I don't need a study for that statement because it's a simple argument: >>if the language doesn't allow me to express something in a direct way, >>but requires me to write considerably more code then I have considerably >>more opportunities for making mistakes. > > > The size comparisons I've seen (like the great programming language > shootout) suggest that Ocaml and Scheme require about the same amount > of code to solve small problems. Yet last I saw, Ocaml is strongly typed > at compile time. How do you assume then that strongly&statically typed > languages require "considerable more code"? _small_ problems? Pascal   0 Reply costanza (1427) 10/23/2003 8:48:35 AM Marshall Spight wrote: > "Pascal Costanza" <costanza@web.de> wrote in message news:bn7a3p$1h6$1@newsreader2.netcologne.de... > >>I wouldn't count the use of java.lang.Object as a case of dynamic >>typing. You need to explicitly cast objects of this type to some class >>in order to make useful method calls. You only do this to satisfy the >>static type system. 
(BTW, this is one of the sources for potential bugs >>that you don't have in a decent dynamically typed language.) > > > Huh? The explicit-downcast construct present in Java is the > programmer saying to the compiler: "trust me; you can accept > this type of parameter." In a dynamically-typed language, *every* > call is like this! So if this is a source of errors (which I believe it > is) then dynamically-typed languages have this potential source > of errors with every function call, vs. statically-typed languages > which have them only in those few cases where the programmer > explicitly puts them in. What can happen in Java is the following: - You might accidentally use the wrong class in a class cast. - For the method you try to call, there happens to be a method with the same name and signature in that class. In this situation, the static type system would be happy, but the code is buggy. In a decent dynamically typed language, you have proper name space management, so that a method cannot ever be defined for a class only by accident. (Indeed, Java uses types for many different unrelated things - in this case as a very weak name space mechanism.) Pascal   0 Reply costanza (1427) 10/23/2003 8:57:03 AM Ralph Becket wrote: > Pascal Costanza <costanza@web.de> wrote in message news:<bn7a3p$1h6$1@newsreader2.netcologne.de>... > >>+ Design process: There are clear indications that processes like >>extreme programming work better than processes that require some kind of >>specification stage. Dynamic typing works better with XP than static >>typing because with dynamic typing you can write unit tests without >>having the need to immediately write appropriate target code. > > > This is utterly bogus. If you write unit tests beforehand, you are > already pre-specifying the interface that the code to be tested will > present. > > I fail to see how dynamic typing can confer any kind of advantage here. Read the literature on XP. 
>>+ Documentation: Comments are usually better for handling documentation. >>;) If you want your "comments" checked, you can add assertions. > > Are you seriously claiming that concise, *automatically checked* > documentation (which is one function served by explicit type > declarations) is inferior to unchecked, ad hoc commenting? I am sorry, but in my book, assertions are automatically checked. > For one thing, type declarations *cannot* become out-of-date (as > comments can and often do) because a discrepancy between type > declaration and definition will be immediately flagged by the compiler. The same holds for assertions as soon as they are run by the test suite. >>+ Error checking: I can only guess what you mean by this. If you mean >>something like Java's checked exceptions, there are clear signs that >>this is a very bad feature. > > I think Fergus was referring to static error checking, but (and forgive > me if I'm wrong here) that's a feature you seem to insist has little or > no practical value - indeed, you seem to claim it is even an impediment > to productive programming. I'll leave this point as one of violent > disagreement... It has value for certain cases, but not in general. >>+ Efficiency: As Paul Graham puts it, efficiency comes from profiling. >>In order to achieve efficiency, you need to identify the bottle-necks of >>your program. No amount of static checks can identify bottle-necks, you >>have to actually run the program to determine them. > > I don't think you understand much about language implementation. ...and I don't think you understand much about dynamic compilation. Have you ever checked some not-so-recent-anymore work about, say, the HotSpot virtual machine? > A strong, expressive, static type system provides for optimisations > that cannot be done any other way. These optimizations alone can be > expected to make a program several times faster.
For example: > - no run-time type checks need be performed; > - data representation is automatically optimised by the compiler > (e.g. by pointer tagging); > - polymorphic code can be inlined and/or specialised according to each > application; > - if the language does not support dynamic typing then values need not > carry their own type identifiers around with them, thereby saving > space; > - if the language does support explicit dynamic typing, then only > those places using that facility need plumb in the type identifiers > (something done automatically by the compiler.) You are only talking about micro-efficiency here. I don't care about that, my machine is fast enough for a decent dynamically typed language. > On top of all that, you can still run your code through the profiler, > although the need for hand-tuned optimization (and consequent code > obfuscation) may be completely obviated by the speed advantage > conferred by the compiler exploiting a statically checked type system. Have you checked this? >>I wouldn't count the use of java.lang.Object as a case of dynamic >>typing. You need to explicitly cast objects of this type to some class >>in order to make useful method calls. You only do this to satisfy the >>static type system. (BTW, this is one of the sources for potential bugs >>that you don't have in a decent dynamically typed language.) > > > No! A thousand times, no! > > Let me put it like this. Say I have a statically, expressively, strongly > typed language L. And I have another language L' that is identical to > L except it lacks the type system. Now, any program in L that has the > type declarations removed is also a program in L'. The difference is > that a program P rejected by the compiler for L can be converted to a > program P' in L' which *may even appear to run fine for most cases*. > However, and this is the really important point, P' is *still* a > *broken* program. 
Simply ignoring the type problems does not make > them go away: P' still contains all the bugs that program P did. You are making several mistakes here. I don't argue for languages that don't have a type system, I argue for languages that are dynamically typed. We are not debating strong typing. Furthermore, a program P that is rejected by L is not necessarily broken. >>>Soft typing systems give you dynamic typing unless you explicitly ask >>>for static typing. That is the wrong default, IMHO. It works much >>>better to add dynamic typing to a statically typed language than the >>>other way around. >> >>I don't think so. > > > Yes, but your arguments are unconvincing. I should point out that > most of the people on comp.lang.functional (a) probably used weakly/ > dynamically typed languages for many years, and at an expert level, > before discovering statically typed (declarative) programming and Weak and dynamic typing is not the same thing. > (b) probably still do use such languages on a regular basis. > Expressive, static typing is not a message shouted from ivory towers > by people lacking real-world experience. > Why not make the argument more concrete? Present a problem > specification for an every-day programming task that you think > seriously benefits from dynamic typing. Then we can discuss the > pros and cons of different approaches. No. The original question asked in this thread was along the lines of why abandon static type systems and why not use them always. I don't need to convince you that a proposed general solution doesn't always work, you have to convince me that it always works. Otherwise I could come up with some other arbitrary restriction and claim that it is a general solution for writing better programs, and ask you to give counter-examples as well. This is not a reasonable approach IMHO. 
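Pascal's distinction between weak and dynamic typing can be made concrete; a small Python sketch (Python standing in here for a strongly but dynamically typed language):

```python
# Dynamic is not weak: Python defers type checks to run time but still
# enforces them strictly; there is no silent coercion between
# unrelated types.
def check(thunk):
    try:
        return thunk()
    except TypeError as e:
        return f"TypeError: {e}"

print(check(lambda: "1" + 1))   # rejected at run time, not coerced to "11" or 2
print(check(lambda: 2 ** 100))  # dynamic typing, yet exact arbitrary-precision result
```

A weakly typed language would coerce the first expression to some value; a dynamically but strongly typed one signals a type error the moment the ill-typed operation is actually executed.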
There are excellent programs out there that have been written with static type systems, and there are also excellent programs out there that have been written without static type systems. This is a clear indication that static type systems are not a necessary condition for writing excellent programs. Furthermore, there are crap programs out there that have been written with static type systems, so a static type system is also not a sufficient condition for writing good software. The burden of proof is on the one who proposes a solution. Pascal   0 Reply costanza (1427) 10/23/2003 9:24:24 AM Joachim Durchholz <joachim.durchholz@web.de> writes: > My 100% subjective private study reveals not a single complaint about > over-restrictive type systems in comp.lang.functional in the last 12 > months. I certainly recall a fair number of unproductive flame wars on the topic over the years. Personally, I have decided to chalk it up to de gustibus and to set the topic aside. Best, Thomas -- Thomas Lindgren "It's becoming popular? It must be in decline." -- Isaiah Berlin   0 Reply Thomas 10/23/2003 9:26:45 AM Matthias Blume <find@my.address.elsewhere> writes: > Pascal Costanza <costanza@web.de> writes: > >> The set of programs that are useful but cannot be checked by a static >> type system is by definition bigger than the set of useful programs >> that can be statically checked. > > By whose definition? What *is* your definition of "useful"? It is > clear to me that static typing improves maintainability, scalability, > and helps with the overall design of software. (At least that's my > personal experience, and as others can attest, I do have reasonably > extensive experience either way.) The opposing point is to assert that *no* program that cannot be statically checked is useful. Are you really asserting that?   0 Reply prunesquallor (871) 10/23/2003 9:44:13 AM Pascal Costanza wrote: > For example, static type systems are incompatible with dynamic > metaprogramming. 
This is objectively a reduction of expressive power, > because programs that don't allow for dynamic metaprogramming can't be > extended in certain ways at runtime, by definition. What is dynamic metaprogramming? Regards, Jo   0 Reply joachim.durchholz (563) 10/23/2003 10:42:15 AM Dirk Thierbach wrote: > Pascal Costanza <costanza@web.de> wrote: > > You need some testing discipline, which is supported well by unit > > testing frameworks. > > IMHO it helps to think about static typing as a special kind of unit > tests. Like unit tests, they verify that for some input values, the > function in question will produce the correct output values. Unlike > unit tests, they do this for a class of values, instead of testing > statistically by example. And unlike unit tests, they are pervasive: > Every execution path will be automatically tested; you don't have > to invest brain power to make sure you don't forget one. IMHO typical unit-tests in python go a lot further than just testing types. They test *behaviour* rather than just types. Thus I tend to think that for languages like python unittests are a *perfect match* because there is hardly any redundancy and they are very short to write down usually. Writing unittests in a statically typed language is more redundant because - like you say - type declarations already are a kind of (IMO very limited) tests. cheers, holger   0 Reply pyth (25) 10/23/2003 10:53:39 AM Dirk Thierbach wrote: > Pascal Costanza <costanza@web.de> wrote: > >>You need some testing discipline, which is supported well by unit >>testing frameworks. > > IMHO it helps to think about static typing as a special kind of unit > tests. Like unit tests, they verify that for some input values, the > function in question will produce the correct output values. Unlike > unit tests, they do this for a class of values, instead of testing > statistically by example. 
> And unlike unit tests, they are pervasive: > Every execution path will be automatically tested; you don't have > to invest brain power to make sure you don't forget one. This is clear. > Type inference will automatically write unit tests for you (besides > other uses like hinting that a routine may be more general than you > thought). But since the computer is not very smart, they will test > only more or less trivial things. But that's still good, because then > you don't have to write the trivial unit tests, and only have to care > about the non-trivial ones. Unless the static type system takes away the expressive power that I need. > Type annotations are an assertion language that you use to write down > that kind of unit tests. Yep. > Of course you can replace the benefits of static typing by enough unit > tests. But they are different verification tools: For some kind of > problems, one is better, for other kinds, the other. There's no reason > not to use both. I have given reasons when not to use a static type system in this thread. Please take a look at the Smalltalk MOP or the CLOS MOP and tell me what a static type system should look like for these languages! Pascal -- Pascal Costanza University of Bonn mailto:costanza@web.de Institute of Computer Science III http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)   0 Reply costanza (1427) 10/23/2003 11:12:17 AM Andrew Dalke wrote: > Pascal Costanza: > >>The set of programs that are useful but cannot be checked by a static >>type system is by definition bigger than the set of useful programs that >>can be statically checked. So dynamically typed languages allow me to >>express more useful programs than statically typed languages. > > Ummm, both are infinite and both are countably infinite, so those sets > are the same size. You're falling for Hilbert's Paradox.
The sets in question are not /all/ dynamically/statically typed programs, they are all dynamically/statically typed programs that fit any item in the set of specifications in existence. Which is a very finite set. > Also, while I don't know a proof, I'm pretty sure that type inferencing > can do addition (and theorem proving) so is equal in power to > programming. Nope. It depends on the type system used: some are decidable, some are undecidable, and for some, decidability is unknown. Actually, for decidable type inference systems, there's also the distinction between exponential, polynomial, O (N log N), and linear behaviour; for some systems, the worst-case behaviour is unknown but benevolent in practice. The vast majority of practical programming languages use a type inference system where the behavior is known to be O (N log N) or better :-) (meaning that the other type systems and associated inference algorithms are research subjects and/or research tools) Regards, Jo   0 Reply joachim.durchholz (563) 10/23/2003 11:17:30 AM Joachim Durchholz wrote: > Pascal Costanza wrote: > >> For example, static type systems are incompatible with dynamic >> metaprogramming. This is objectively a reduction of expressive power, >> because programs that don't allow for dynamic metaprogramming can't be >> extended in certain ways at runtime, by definition. > > What is dynamic metaprogramming? Writing programs that inspect and change themselves at runtime. Pascal -- Pascal Costanza University of Bonn mailto:costanza@web.de Institute of Computer Science III http://www.pascalcostanza.de Römerstr. 164, D-53117 Bonn (Germany)   0 Reply costanza (1427) 10/23/2003 11:18:51 AM Pascal Costanza wrote: > Ralph Becket wrote: >> I fail to see how dynamic typing can confer any kind of advantage here. > > Read the literature on XP. Note that most literature contrasts dynamic typing with the static type systems of C++ and/or Java. Good type systems are /far/ better.
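Pascal defines dynamic metaprogramming as programs that inspect and change themselves at runtime. A minimal Python sketch of that idea (the Point class and norm method are invented for illustration):

```python
# A program changing itself at run time: a method is added to a class
# while instances of it are already live.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(3, 4)  # created before norm exists

def norm(self):
    return (self.x ** 2 + self.y ** 2) ** 0.5

# The class object is patched at run time; the pre-existing instance p
# immediately picks up the new method through attribute lookup.
Point.norm = norm
print(p.norm())  # 5.0
```

This is the kind of runtime extension that a fully static view of the program rules out by definition; CLOS and Smalltalk take it much further via their metaobject protocols.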
>> Are you seriously claiming that concise, *automatically checked* >> documentation (which is one function served by explicit type >> declarations) is inferior to unchecked, ad hoc commenting? > > I am sorry, but in my book, assertions are automatically checked. But only at runtime, where a logic flaw may or may not trigger the assertion. (Assertions are still useful: if they are active, they prove that the errors checked by them didn't occur in a given program run. This can still be useful. But then, production code usually runs with assertion checking off - which is exactly the point where knowing that some bug occurred would be more important...) >> For one thing, type declarations *cannot* become out-of-date (as >> comments can and often do) because a discrepancy between type >> declaration and definition will be immidiately flagged by the compiler. > > They same holds for assertions as soon as they are run by the test suite. A test suite can never catch all permutations of data that may occur (on a modern processor, you can't even check the increment-by-one operation with that, the universe will end before the CPU has counted even half of the full range). >>> + Efficiency: As Paul Graham puts it, efficiency comes from >>> profiling. In order to achieve efficiency, you need to identify the >>> bottle-necks of your program. No amount of static checks can identify >>> bottle-necks, you have to actually run the program to determine them. >> >> I don't think you understand much about language implementation. > > > ....and I don't think you understand much about dynamic compilation. > Have you ever checked some not-so-recent-anymore work about, say, the > HotSpot virtual machine? Well, I did - and the results were, ahem, unimpressive. Besides, HotSpot is for Java, which is statically typed, so I don't really see your point here... unless we're talking about different VMs. And, yes, VMs got pretty fast these days (and that actually happened several years ago). 
It's only that compiled languages still have a good speed advantage - making a VM fast requires just that extra amount of effort which, when invested into a compiler, will make the compiled code still run faster than the VM code. Also, I have seen several cases where VM code just plain sucked performance-wise until it was carefully hand-optimized. (A concrete example: the all-new, great graphics subsystem for Squeak that could do wonders like rendering fonts with all sorts of funky effects, do 3D transformations on the fly, and whatnot... I left Squeak before those optimizations became mainstream, but I'm pretty sure that Squeak got even faster. Yet Squeak is still a bit sluggish... only marginally so, and certainly no more sluggish than the bloatware that's around and that commercial programmers are forced to write, but efficiency is simply more of a concern and a manpower hog than with a compiled language.) > There are excellent programs out there that have been written with > static type systems, and there are also excellent programs out there > that have been written without static type systems. This is a clear > indication that static type systems are not a necessary condition for > writing excellent programs. Hey, there are also excellent programs written in assembly. By your argument, using a higher-level language is not a necessary condition for writing excellent programs. The question is: what effort goes into an excellent program? Is static typing a help or a hindrance? One thing I do accept: that non-inferring static type systems like those of C++ and Java are a PITA. Changing a type in some interface tends to cost a day or more, chasing all the consequences in callers, subclasses, and whatnot, and I don't need that (though it does tell me all the places where I should take a look to check if the change didn't break anything, so this isn't entirely wasted time). I'm still unconvinced that an inferring type system is worse than run-time type checking.
(Except for that "dynamic metaprogramming" thing I'd like to know more about. In my book, things that are overly powerful are also overly uncontrollable, but that may be an exception.) Regards, Jo   0 Reply joachim.durchholz (563) 10/23/2003 11:44:47 AM Pascal Costanza wrote: > See the example of downcasts in Java. Please do /not/ draw your examples from Java, C++, or Eiffel. Modern static type systems are far more flexible and powerful, and far less obtrusive than the type systems used in these languages. A modern type system has the following characteristics: 1. It's safe: Code that type checks cannot assign type-incorrect values (as opposed to Eiffel). 2. It is expressive: There's no need to write type casts (as opposed to C++ and Java). (The only exceptions where type casts are necessary are those where it is logically unavoidable: e.g. when importing binary data from an untyped source.) 3. It is unobtrusive: The compiler can infer most if not all types by itself. Modifying some code so that it is slightly more general will thus automatically acquire the appropriate slightly more general type. 4. It is powerful: any type may have other types as parameters. Not only for container types such as Array <Integer>, but also for other purposes. Advanced type systems can even express mutually recursive types - an (admittedly silly) example: trees that have alternating node types on paths from root to leaves. (And all that without type casts, Mum! *g*) Regards, Jo   0 Reply joachim.durchholz (563) 10/23/2003 11:51:20 AM Joachim Durchholz wrote: > > The vast majority of practical programming languages use a type > inference system where the behavior is known to be O (N log N) or better Not true, unfortunately. Type inference for almost all FP languages is a derivative from the original Hindley/Milner algorithm for ML, which is known to have exponential worst-case behaviour. 
Interestingly, such cases never show up in practice; most realistic programs can be checked in subquadratic time and space. For that reason even the inventors of the algorithm originally believed it was polynomial, until somebody found a counterexample. The good news is that, for similar reasons, undecidable type checking need not be a hindrance in practice. - Andreas -- Andreas Rossberg, rossberg@ps.uni-sb.de "Computer games don't affect kids; I mean if Pac Man affected us as kids, we would all be running around in darkened rooms, munching magic pills, and listening to repetitive electronic music." - Kristian Wilson, Nintendo Inc.   0 Reply rossberg (600) 10/23/2003 12:04:13 PM In comp.lang.lisp Matthias Blume <find@my.address.elsewhere> wrote: Apologies for the out-of-context snippage: > A 100,000 line program in an untyped language is useless to me if I am ^^^^^^^ Your choice of word here makes me suspect that you _may_ understand something quite different than most of the residents of cll and clp by dynamic typing: dynamic typing is *not* the same as untyped! Of course, maybe it was just an unfortunate choice of words. Cheers, -- Nikodemus   0 Reply demoss (40) 10/23/2003 12:31:35 PM Joachim Durchholz wrote: > Pascal Costanza wrote: > >> See the example of downcasts in Java. > > > Please do /not/ draw your examples from Java, C++, or Eiffel. Modern > static type systems are far more flexible and powerful, and far less > obtrusive than the type systems used in these languages. This was just one obvious example in which you need a workaround to make the type system happy. There exist others. > A modern type system has the following characteristics: I know what modern type systems do. Pascal -- Pascal Costanza University of Bonn mailto:costanza@web.de Institute of Computer Science III http://www.pascalcostanza.de Römerstr.
164, D-53117 Bonn (Germany)   0 Reply costanza (1427) 10/23/2003 1:50:36 PM  Matthias Blume wrote: > Pascal Costanza <costanza@web.de> writes: > > >>The set of programs that are useful but cannot be checked by a static >>type system is by definition bigger than the set of useful programs >>that can be statically checked. > > > By whose definition? What *is* your definition of "useful"? It is > clear to me that static typing improves maintainability, scalability, > and helps with the overall design of software. That sounds right. When I divided a large app into half a dozen sensible packages, several violations of clean design were revealed. But just a few, and there was a ton of code. I did a little C++ and Java once, porting Cells to those languages. This was existing code, so I did not have to explore as I coded. It was a total pain, but then it was pretty easy to get working because so many casual goofs got caught by the compiler. I just would never want to write original code this way, because then I am working fast and loose, doing this, doing that, leaving all sorts of code in limbo which would have to be straightened out to satisfy a compiler. The other problem with static typing is that it does not address the real problem with scaling, viz, the exponential explosion of state interdependencies. A compiler cannot check the code I neglect to write, leaving state change unpropagated to dependent other state, nor can it check the sequence of correctly typed statements to make sure state used in calculation X is updated before I use that state. kenny -- http://tilton-technology.com What?! You are a newbie and you haven't answered my: http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey   0 Reply ktilton (2220) 10/23/2003 2:06:27 PM Joachim Durchholz wrote: > Pascal Costanza wrote: > >> Ralph Becket wrote: >> >>> I fail to see how dynamic typing can confer any kind of advantage here. >> >> Read the literature on XP. 
> > Note that most literature contrasts dynamic typing with the static type > systems of C++ and/or Java. Good type systems are /far/ better. You are changing topics here. In a statically typed language, when I write a test case that calls a specific method, I need to write at least one class that implements at least that method, otherwise the code won't compile. In a dynamically typed language I can concentrate on writing the test cases first and don't need to write dummy code to make some arbitrary static checker happy. >>> Are you seriously claiming that concise, *automatically checked* >>> documentation (which is one function served by explicit type >>> declarations) is inferior to unchecked, ad hoc commenting? >> >> I am sorry, but in my book, assertions are automatically checked. > > But only at runtime, where a logic flaw may or may not trigger the > assertion. I don't care about that difference. My development environment is flexible enough to make execution of test suites a breeze. I don't need a separate compilation and linking stage to make this work. > (Assertions are still useful: if they are active, they prove that the > errors checked by them didn't occur in a given program run. This can > still be useful. But then, production code usually runs with assertion > checking off - which is exactly the point where knowing that some bug > occurred would be more important...) Don't let your production code run with assertion checking off then. >>> For one thing, type declarations *cannot* become out-of-date (as >>> comments can and often do) because a discrepancy between type >>> declaration and definition will be immediately flagged by the compiler. >> >> The same holds for assertions as soon as they are run by the test suite.
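Pascal's assertion-as-checked-documentation idea, sketched in Python (the mean function is an invented example): the asserts play the role of a type declaration plus a value constraint, verified whenever the code runs under the test suite rather than at compile time.

```python
# The asserts document the contract and are checked on every call that
# the test suite (or any caller) actually exercises.
def mean(xs):
    assert len(xs) > 0, "mean() requires a non-empty sequence"
    assert all(isinstance(x, (int, float)) for x in xs), "numeric input only"
    return sum(xs) / len(xs)

print(mean([1, 2, 3]))  # 2.0

try:
    mean([])  # the checked "documentation" fires here
except AssertionError as e:
    print(e)  # mean() requires a non-empty sequence
```

Joachim's counterpoint applies here too: running Python with -O strips assert statements, which is exactly the production configuration he warns about.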
> > A test suite can never catch all permutations of data that may occur (on > a modern processor, you can't even check the increment-by-one operation > with that, the universe will end before the CPU has counted even half of > the full range). I hear that in the worst case scenarios, static type checking in modern type systems needs exponential time, but for most practical cases this doesn't matter. Maybe it also doesn't matter for most practical cases that you can't check all permutations of data in a test suite. >>>> + Efficiency: As Paul Graham puts it, efficiency comes from >>>> profiling. In order to achieve efficiency, you need to identify the >>>> bottle-necks of your program. No amount of static checks can >>>> identify bottle-necks, you have to actually run the program to >>>> determine them. >>> >>> I don't think you understand much about language implementation. >> >> ....and I don't think you understand much about dynamic compilation. >> Have you ever checked some not-so-recent-anymore work about, say, the >> HotSpot virtual machine? > > Well, I did - and the results were, ahem, unimpressive. The results that are reported in the papers I have read are very impressive. Can you give me the references to the papers you have read? > Besides, HotSpot is for Java, which is statically typed, so I don't > really see your point here... unless we're talking about different VMs. Oh, so you haven't read the literature? And above you said you did. Well, the research that ultimately lead to the HotSpot Virtual Machine originated in virtual machines for Smalltalk and for Self. Especially Self is an "extremely" dynamic language, but they still managed to make it execute reasonably fast. When all you wanted to say is that Java is not fast, that's not quite true. The showstopper for Java is the Swing library. Java itself is very fast. In certain cases it's even faster than C++ because the HotSpot VM can make optimizations that a static compiler cannot make. 
(For example, inline virtual methods that are known not to be
overridden in currently loaded classes.)

> And, yes, VMs got pretty fast these days (and that actually happened
> several years ago).
> It's only that compiled languages still have a good speed advantage -
> making a VM fast requires just that extra amount of effort which, when
> invested into a compiler, will make the compiled code still run faster
> than the VM code.
> Also, I have seen several cases where VM code just plain sucked
> performance-wise until it was carefully hand-optimized. (A concrete
> example: the all-new, great graphics subsystem for Squeak that could do
> wonders like rendering fonts with all sorts of funky effects, do 3D
> transformations on the fly, and whatnot... I left Squeak before those
> optimizations became mainstream, but I'm pretty sure that Squeak got
> even faster. Yet Squeak is still a bit sluggish... only marginally so,
> and certainly no more sluggish than the bloatware that's around and that
> commercial programmers are forced to write, but efficiency is simply
> more of a concern and a manpower hog than with a compiled language.)

I know sluggish software written in statically typed languages.

>> There are excellent programs out there that have been written with
>> static type systems, and there are also excellent programs out there
>> that have been written without static type systems. This is a clear
>> indication that static type systems are not a necessary condition for
>> writing excellent programs.
>
> Hey, there are also excellent programs written in assembly. By your
> argument, using a higher-level language is not a necessary condition
> for writing excellent programs.

Right.

> The question is: what effort goes into an excellent program? Is static
> typing a help or a hindrance?

Right, that's the question.

> I'm still unconvinced that an inferring type system is worse than
> run-time type checking.
> (Except for that "dynamic metaprogramming" thing
> I'd like to know more about. In my book, things that are overly powerful
> are also overly uncontrollable, but that may be an exception.)

Check the literature about metaobject protocols.

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)

 0
Reply costanza (1427) 10/23/2003 2:11:06 PM

Pascal Costanza <costanza@web.de> wrote:

> Unless the static type system takes away the expressive power that I need.

Even within a static type system, you can always revert to "dynamic
typing" by introducing a sufficiently universal datatype (say,
s-expressions). Usually the need for real runtime flexibility is quite
localized (but of course this depends on the application). Unless you
really need runtime flexibility nearly everywhere (and I cannot think
of an example where this is the case), the universal datatype approach
works quite well (though you lose the advantages of static typing in
these places, of course, and you have to compensate with more unit
tests).

> I have given reasons when not to use a static type system in this
> thread.

Nobody forces you to use a static type system. Languages, with their
associated type systems, are *tools*, and not religions. You use what
is best for the job. But it's a bit stupid to frown upon everything
else but one's favorite way of doing things. There are other ways. They
may work a bit differently, and it might not be obvious how to do it if
you're used to doing it differently, but that doesn't mean other ways
are completely stupid. And you might actually learn something once you
know how to do it both ways :-)

> Please take a look at the Smalltalk MOP or the CLOS MOP and tell
> me what a static type system should look like for these languages!

You cannot take an arbitrary language and attach a good static type
system to it.
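[Editor's note: the "universal datatype" escape hatch described above can be sketched in OCaml; the type and function names here are illustrative, not taken from any post. One variant type stands in for all dynamic values, and every operation must re-check tags at runtime, exactly as a dynamically typed language would.]

```ocaml
(* A "universal" datatype: one static type covering s-expression-like
   dynamic values. All names here are illustrative. *)
type univ =
  | Int of int
  | Str of string
  | List of univ list

(* Operations on univ must test the tag at runtime -- the static type
   system no longer rules out "not a list" errors for us. *)
let univ_length v =
  match v with
  | List items -> List.length items
  | _ -> failwith "univ_length: not a list"

let () =
  print_int (univ_length (List [Int 1; Str "two"]))   (* prints 2 *)
```

The `failwith` branch is where the "more unit tests" mentioned above come in: the compiler checks only that every use of `univ` is tag-matched, not that the tags are the ones you expect.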
Type inference will be much too difficult, for example. There's a fine
balance between language design and a good type system that works well
with it.

If you want to use Smalltalk or CLOS with dynamic typing and unit
tests, use them. If you want to use Haskell or OCaml with static typing
and type inference, use them. Neither is really "better" than the
other. Both have their advantages and disadvantages. But don't dismiss
one of them just because you don't know better.

- Dirk

 0
Reply dthierbach (210) 10/23/2003 2:39:39 PM

Pascal Costanza <costanza@web.de> writes:

> Joachim Durchholz wrote:
>
>> Pascal Costanza wrote:
>>
>>> Ralph Becket wrote:
>>>
>>>> I fail to see how dynamic typing can confer any kind of advantage here.
>>>
>>> Read the literature on XP.
>>
>> Note that most literature contrasts dynamic typing with the static
>> type systems of C++ and/or Java. Good type systems are /far/ better.
>
> You are changing topics here.
>
> In a statically typed language, when I write a test case that calls a
> specific method, I need to write at least one class that implements at
> least that method, otherwise the code won't compile.

Not in ocaml. ocaml is statically typed.

--
Rémi Vanicat

 0
Reply invalid5237 (7) 10/23/2003 2:52:49 PM

prunesquallor@comcast.net writes:

> Matthias Blume <find@my.address.elsewhere> writes:
>
>> Pascal Costanza <costanza@web.de> writes:
>>
>>> The set of programs that are useful but cannot be checked by a static
>>> type system is by definition bigger than the set of useful programs
>>> that can be statically checked.
>>
>> By whose definition? What *is* your definition of "useful"? It is
>> clear to me that static typing improves maintainability, scalability,
>> and helps with the overall design of software. (At least that's my
>> personal experience, and as others can attest, I do have reasonably
>> extensive experience either way.)
> The opposing point is to assert that *no* program that cannot be
> statically checked is useful. Are you really asserting that?

Actually, viewed from a certain angle, yes. Every programmer who writes
a program ought to have a proof that the program is correct in her
mind. (If not, fire her.) It ought to be possible to formalize that
proof and to statically check it.

(Now, I am not saying that current type systems that are in practical
use let you do that. But they go some of the way.)

Matthias

 0
Reply find19 (1245) 10/23/2003 3:00:08 PM

"Andrew Dalke" <adalke@mindspring.com> writes:

> Pascal Costanza:
>> The set of programs that are useful but cannot be checked by a static
>> type system is by definition bigger than the set of useful programs that
>> can be statically checked. So dynamically typed languages allow me to
>> express more useful programs than statically typed languages.
>
> Ummm, both are infinite and both are countably infinite, so those sets
> are the same size. You're falling for Hilbert's Paradox.

They aren't the same size if you limit the length of the program. This
is a reasonable restriction if you are interested in programs that
might be realizable within your lifetime.

> Also, while I don't know a proof, I'm pretty sure that type inferencing
> can do addition (and theorem proving) so is equal in power to
> programming.

Yes, this is true. But it is also the case that a powerful enough
static type checker cannot be proven to halt or produce an answer in a
time less than that required to run the program being checked. It makes
little difference if the type checker produces the answer or the
program produces the answer if they both take about the same time to
run. Of course, it is generally more difficult to program in the type
metalanguage than in the target language.
 0
Reply jrm (1311) 10/23/2003 3:02:38 PM

Remi Vanicat wrote:

> Pascal Costanza <costanza@web.de> writes:
>
>> In a statically typed language, when I write a test case that calls a
>> specific method, I need to write at least one class that implements at
>> least that method, otherwise the code won't compile.
>
> Not in ocaml.
> ocaml is statically typed.

How does ocaml make sure that you don't get a message-not-understood
exception at runtime then?

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)

 0
Reply costanza (1427) 10/23/2003 3:03:17 PM

Nikodemus Siivola <demoss@random-state.net> writes:

> In comp.lang.lisp Matthias Blume <find@my.address.elsewhere> wrote:
>
> Apologies for the out-of-context snippage:
>
>> A 100,000 line program in an untyped language is useless to me if I am
>          ^^^^^^^
>
> Your choice of word here makes me suspect that you _may_ understand
> something quite different than most of the residents of cll and clp by
> dynamic typing:
>
> dynamic typing is *not* the same as untyped!

Ah, are we quibbling about *that* again? Words, words, words...

If you want to know how much I know about the difference between typed
and untyped (or "statically typed" vs. "dynamically typed" as you
prefer), look up my track record on implementing languages in either
part of the PL world.

Yes, "dynamically typed" programs are "typed", but the word "type" here
means something quite different from what it means when it is used with
the qualifier "static". I prefer the latter use, and from that point of
view there is only one (static) type in dynamically typed programs,
hence my use of the word "untyped". (If you have only one (static)
type, you might as well not even think about that fact.)

Anyway, unfortunate or not, we are both thinking about the same class
of languages. That shall suffice.
Matthias

PS: When I say "untyped" I mean it as in "the _untyped_ lambda
calculus".

 0
Reply find19 (1245) 10/23/2003 3:06:56 PM

Kenny Tilton <ktilton@nyc.rr.com> writes:

> The other problem with static typing is that it does not address the
> real problem with scaling, viz, the exponential explosion of state
> interdependencies. A compiler cannot check the code I neglect to
> write, leaving state change unpropagated to dependent other state, nor
> can it check the sequence of correctly typed statements to make sure
> state used in calculation X is updated before I use that state.

Yes, the usefulness of static types seems to be inversely proportional
to the imperativeness of one's programming style (Haskell, Miranda).
Static types *really* shine in purely functional settings. In mostly
functional settings (SML, OCaml) they lose some of their expressive
"punch" if you start playing with mutable data structures. In languages
that heavily rely on imperative features (mutable state, object
identity, imperative I/O, exceptions) their usefulness goes
increasingly down the drain.

Matthias

 0
Reply find19 (1245) 10/23/2003 3:12:05 PM

Joachim Durchholz <joachim.durchholz@web.de> writes:

> Pascal Costanza wrote:
>
>> ...because static type systems work by reducing the expressive
>> power of a language. It can't be any different for a strict static
>> type system. You can't solve the halting problem in a
>> general-purpose language.
>
> The final statement is correct, but you don't need to solve the
> halting problem: it's enough to allow the specification of some
> easy-to-prove properties, without hindering the programmer too much.

In fact, you should never need to "solve the halting problem" in order
to statically check your program. After all, the programmer *already
has a proof* in her mind when she writes the code! All that's needed
(:-) is for her to provide enough hints as to what that proof is so
that the compiler can verify it.
(The smiley is there because, as we are all painfully aware, this is
much easier said than done.)

Matthias

 0
Reply find19 (1245) 10/23/2003 3:17:16 PM

Matthias Blume <find@my.address.elsewhere> writes:

> Yes, the usefulness of static types seems to be inversely proportional
> to the imperativeness of one's programming style (Haskell, Miranda).
> Static types *really* shine in purely functional settings (****). [...]

Obviously, the parenthetical remark "(Haskell, Miranda)" should be
where the (****) is.

Matthias

 0
Reply find19 (1245) 10/23/2003 3:26:14 PM

Pascal Costanza <costanza@web.de> writes:

> Remi Vanicat wrote:
>> Pascal Costanza <costanza@web.de> writes:
>
>>> In a statically typed language, when I write a test case that calls a
>>> specific method, I need to write at least one class that implements at
>>> least that method, otherwise the code won't compile.
>> Not in ocaml.
>> ocaml is statically typed.
>
> How does ocaml make sure that you don't get a message-not-understood
> exception at runtime then?

It makes the verification when you call the test. Let me explain: you
could define

let f x = x#foo

which is a function taking an object x and calling its method foo,
even if there is no class having such a method. When some time later
you do a

f bar

then, and only then, does the compiler verify that the bar object has
a foo method.

By the way, it might give you some headache when you have made a
spelling error in a method name (because the error is not seen by the
compiler where it happens, but later, where the function using the
wrong method is used).

--
Rémi Vanicat

 0
Reply invalid5237 (7) 10/23/2003 3:28:38 PM

Matthias Blume wrote:

> PS: When I say "untyped" I mean it as in "the _untyped_ lambda
> calculus".

What terms would you use to describe the difference between dynamically
and weakly typed languages, then?

For example, Smalltalk is clearly "more" typed than C is. Describing
both as "untyped" seems a little bit unfair to me.
Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)

 0
Reply costanza (1427) 10/23/2003 3:30:17 PM

Pascal Costanza <costanza@web.de> writes:

> Matthias Blume wrote:
>
>> PS: When I say "untyped" I mean it as in "the _untyped_ lambda
>> calculus".
>
> What terms would you use to describe the difference between
> dynamically and weakly typed languages, then?
>
> For example, Smalltalk is clearly "more" typed than C is. Describing
> both as "untyped" seems a little bit unfair to me.

Safe and unsafe.

BTW, C is typed, Smalltalk is untyped. C's type system just happens to
be unsound (in the sense that, as you observed, well-typed programs can
still be unsafe).

Matthias

 0
Reply find19 (1245) 10/23/2003 3:35:08 PM

Matthias Blume wrote:

> prunesquallor@comcast.net writes:
>
>> Matthias Blume <find@my.address.elsewhere> writes:
>>
>>> Pascal Costanza <costanza@web.de> writes:
>>>
>>>> The set of programs that are useful but cannot be checked by a static
>>>> type system is by definition bigger than the set of useful programs
>>>> that can be statically checked.
>>>
>>> By whose definition? What *is* your definition of "useful"? It is
>>> clear to me that static typing improves maintainability, scalability,
>>> and helps with the overall design of software. (At least that's my
>>> personal experience, and as others can attest, I do have reasonably
>>> extensive experience either way.)
>>
>> The opposing point is to assert that *no* program that cannot be
>> statically checked is useful. Are you really asserting that?
>
> Actually, viewed from a certain angle, yes. Every programmer who
> writes a program ought to have a proof that the program is correct in
> her mind. (If not, fire her.) It ought to be possible to formalize
> that proof and to statically check it.
You are thinking about a certain set of programs and a distinct
programming style that seems to work well for you. Other people may
prefer a different programming style and care about a different set of
programs.

> (Now, I am not saying that current type systems that are in practical
> use let you do that. But they go some of the way.)

Please inform me as soon as they go all the way, because I might
reconsider my point of view then. Until then I use what works best for
me.

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)

 0
Reply costanza (1427) 10/23/2003 3:44:36 PM

Pascal Costanza <costanza@web.de> writes:

>> There are also programs which I cannot express at all in a purely
>> dynamically typed language. (By "program" I mean not only the executable
>> code itself but also the things that I know about this code.)
>> Those are the programs which are protected against certain bad things
>> from happening without having to do dynamic tests to that effect
>> themselves.
>
> This is a circular argument. You are already suggesting the solution
> in your problem description.

Is it? Am I? Is it too much to ask to know that the invariants that my
code relies on will, in fact, hold when it gets to execute?

Actually, if you think that this problem description already contains
the solution which is static typing, then we are basically on the same
page here.

> ...and BTW, please let me keep up using dynamically typed languages,
> because this works well for me!

Since I have no power over what you do, I am forced to grant you this
wish. (Lucky you!)

> See the example of downcasts in Java.

You had to dig out the poorest example you could think of, didn't you?
Make a note of it: When I talk about the power of static typing, I am
*not* thinking of Java!

> To make a (not so far-fetched, btw :) analogy: Consider logical
> statements and formal proofs.
> Making a logical statement is easy and
> can be very short. It is also easy to make mistakes without noticing;
> after all saying something that is false while still believing it to
> be true is extremely easy. Just by looking at the statement it is
> also often hard to tell whether the statement is right. In fact,
> computers have a hard time with this task, too. Theorem-proving is
> hard.
> On the other hand, writing down the statement with a formal proof is
> impossible to get wrong without anyone noticing because checking the
> proof for validity is trivial compared to coming up with it in the
> first place. So even though writing the statement with a proof seems
> harder, once you have done it and it passes the proof checker you can
> rest assured that you got it right. The longer "program" will have fewer
> "bugs" on average.

> Yes, but then you have a proof that is tailored to the statement you
> have made. The claim of people who favor static type systems is that
> static type systems are _generally_ helpful.

I am not sure you "got" it: Yes, the proof is tailored to the statement
(how else could it be?!), but the axioms and rules of its underlying
proof system are not. Just like not every program has the same type
even though the type system is fixed.

Matthias

 0
Reply find19 (1245) 10/23/2003 3:45:52 PM

Matthias Blume wrote:

> Pascal Costanza <costanza@web.de> writes:
>
>> Matthias Blume wrote:
>>
>>> PS: When I say "untyped" I mean it as in "the _untyped_ lambda
>>> calculus".
>>
>> What terms would you use to describe the difference between
>> dynamically and weakly typed languages, then?
>>
>> For example, Smalltalk is clearly "more" typed than C is. Describing
>> both as "untyped" seems a little bit unfair to me.
>
> Safe and unsafe.
>
> BTW, C is typed, Smalltalk is untyped. C's type system just happens
> to be unsound (in the sense that, as you observed, well-typed programs
> can still be unsafe).
Can you give me a reference to a paper, or some other literature, that
defines the terminology that you use?

I have tried to find a consistent set of terms for this topic, and have
only found the paper "Type Systems" by Luca Cardelli
(http://www.luca.demon.co.uk/Bibliography.htm#Type systems ). He uses
the terms static vs. dynamic typing and strong vs. weak typing, and
these are described as orthogonal classifications. I find this
terminology very clear, consistent and useful. But I am open to a
different terminology.

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)

 0
Reply costanza (1427) 10/23/2003 3:50:46 PM

Pascal Costanza <costanza@web.de> writes:

>> Actually, viewed from a certain angle, yes. Every programmer who
>> writes a program ought to have a proof that the program is correct in
>> her mind. (If not, fire her.) It ought to be possible to formalize
>> that proof and to statically check it.
>
> You are thinking about a certain set of programs and a distinct
> programming style that seems to work well for you. Other people may
> prefer a different programming style and care about a different set of
> programs.

Do you mean that I talk about programs written by programmers who know
what they are doing while you are talking about a different set of
programs?

The problem really is that you often say "is correct" and "cannot be
statically checked" about the same set of problems. But how can *you*
yourself possibly know the first to be true given that you think the
second is true?

Matthias

 0
Reply find19 (1245) 10/23/2003 3:51:41 PM

Matthias Blume <find@my.address.elsewhere> writes:

> Every programmer who writes a program ought to have a proof that the
> program is correct in her mind. (If not, fire her.)

Don't forget to fire the specification writer afterwards. Then the
requirements guy. Then the customer.
Best,
Thomas

--
Thomas Lindgren

"It's becoming popular? It must be in decline." -- Isaiah Berlin

 0
Reply Thomas 10/23/2003 3:52:18 PM

Remi Vanicat wrote:

> Pascal Costanza <costanza@web.de> writes:
>
>> Remi Vanicat wrote:
>>
>>> Pascal Costanza <costanza@web.de> writes:
>>
>>>> In a statically typed language, when I write a test case that calls a
>>>> specific method, I need to write at least one class that implements at
>>>> least that method, otherwise the code won't compile.
>>>
>>> Not in ocaml.
>>> ocaml is statically typed.
>>
>> How does ocaml make sure that you don't get a message-not-understood
>> exception at runtime then?
>
> It makes the verification when you call the test. Let me explain: you
> could define
>
> let f x = x#foo
>
> which is a function taking an object x and calling its method
> foo, even if there is no class having such a method.
>
> When some time later you do a
>
> f bar
>
> then, and only then, does the compiler verify that the bar object has
> a foo method.

Doesn't this mean that the occurrence of such compile-time errors is
only delayed, in the sense that when the test suite grows the compiler
starts to issue type errors?

Anyway, that's an interesting case that I haven't known about before.
Thanks.

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)

 0
Reply costanza (1427) 10/23/2003 3:53:40 PM

Thomas Lindgren <***********@*****.***> writes:

> Matthias Blume <find@my.address.elsewhere> writes:
>
>> Every programmer who writes a program ought to have a proof that the
>> program is correct in her mind. (If not, fire her.)
>
> Don't forget to fire the specification writer afterwards. Then the
> requirements guy. Then the customer.

Unfortunately, I am aware of "the Real World".
In any case, is this really any excuse for shipping code that we don't
know will always work, written by programmers whom we didn't fire even
though they didn't know what they were doing, writing to specifications
that were inconsistent, driven by requirements that were unreasonable
to begin with, asked for by customers who were clueless?

Matthias

 0
Reply find19 (1245) 10/23/2003 3:55:51 PM

Pascal Costanza wrote:

> Joachim Durchholz wrote:
>
>> Pascal Costanza wrote:
>>
>>> For example, static type systems are incompatible with dynamic
>>> metaprogramming. This is objectively a reduction of expressive power,
>>> because programs that don't allow for dynamic metaprogramming can't
>>> be extended in certain ways at runtime, by definition.
>>
>> What is dynamic metaprogramming?
>
> Writing programs that inspect and change themselves at runtime.

Ah. I used to do that in assembler. I always felt like I was aiming a
shotgun between my toes.

When did self-modifying code get rehabilitated?

- ken

 0
Reply kenrose (17) 10/23/2003 3:57:32 PM

In article <3638acfd.0310230039.306b14f@posting.google.com>,
rafe@cs.mu.oz.au (Ralph Becket) wrote:

> Let me put it like this. Say I have a statically, expressively, strongly
> typed language L. And I have another language L' that is identical to
> L except it lacks the type system. Now, any program in L that has the
> type declarations removed is also a program in L'. The difference is
> that a program P rejected by the compiler for L can be converted to a
> program P' in L' which *may even appear to run fine for most cases*.
> However, and this is the really important point, P' is *still* a
> *broken* program. Simply ignoring the type problems does not make
> them go away: P' still contains all the bugs that program P did.

No. The fallacy in this reasoning is that you assume that "type error"
and "bug" are the same thing. They are not. Some bugs are not type
errors, and some type errors are not bugs.
In the latter circumstance simply ignoring them can be exactly the
right thing to do.

(On the other hand, many, perhaps most, type errors are bugs, and so
having a type system provide warnings can be a very useful thing IMO.)

E.

 0
Reply 10/23/2003 3:57:35 PM

On Thu, 23 Oct 2003, Remi Vanicat wrote:

>> How does ocaml make sure that you don't get a message-not-understood
>> exception at runtime then?
>
> It makes the verification when you call the test. Let me explain: you
> could define
>
> let f x = x#foo
>
> which is a function taking an object x and calling its method
> foo, even if there is no class having such a method.
>
> When some time later you do a
>
> f bar
>
> then, and only then, does the compiler verify that the bar object has
> a foo method.

you might want to mention that this is possible because of 'extensible
record types'. Well, there is a good chance the python/lisp community
will not understand this, but it illustrates that a lot of the
arguments (probably on both sides in fact) are based on ignorance.

One more thing I remembered from a heavy cross-group fight between
comp.lang.smalltalk and c.l.f. quite a while ago, is that so-called
'dynamically typed' languages are useful because they allow you to
incrementally develop ill-typed programs into better-typed programs
(the XP-way), where the ill-typed programs already (partially) work.
OTOH, with a static type system, you have to think more in advance to
get the types right. XP people consider this a hindrance, and that is
what people mean by 'the type system getting in the way'. With a
Haskell-style or even OCaml-style type system, you cannot seriously
argue that you can write a program which cannot be easily(!) converted
into one that fits such type systems. By program, I mean 'a finished
production-ready piece of software', not a 'snapshot' in the
development cycle.

The arguments from the smalltalk people are arguably defendable and
this is why this kind of discussion will pop up again and again.
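[Editor's note: the 'extensible record types' mentioned in this exchange (usually called row polymorphism in the literature) are what make the `let f x = x#foo` example typecheck: OCaml infers an *open* object type for f, so any object carrying at least a foo method is accepted, with no class declared in advance. A minimal sketch; the class names are illustrative.]

```ocaml
(* OCaml infers the open object type < foo : 'a; .. > -> 'a for f:
   "any object with at least a method foo". *)
let f x = x#foo

(* Two unrelated classes; neither existed when f was written. *)
class a = object method foo = 1 end
class b = object method foo = 2 method bar = "extra" end

let () =
  Printf.printf "%d %d\n" (f (new a)) (f (new b))   (* prints "1 2" *)
```

The check described in the thread happens at the application site: `f (object method bar = 0 end)` would be rejected there with a missing-method type error, rather than raising anything at runtime.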
Using either static or dynamic (Blume: untyped) type systems is not the
point at all. What actually matters is your development
style/philosophy, and this is more an issue of software engineering
really.

Ok, I am phasing out again.

Regards,

Simon

 0
Reply shelsen (24) 10/23/2003 3:58:24 PM

Matthias Blume wrote:

> Pascal Costanza <costanza@web.de> writes:
>
>>> There are also programs which I cannot express at all in a purely
>>> dynamically typed language. (By "program" I mean not only the executable
>>> code itself but also the things that I know about this code.)
>>> Those are the programs which are protected against certain bad things
>>> from happening without having to do dynamic tests to that effect
>>> themselves.
>>
>> This is a circular argument. You are already suggesting the solution
>> in your problem description.
>
> Is it? Am I? Is it too much to ask to know that the invariants that
> my code relies on will, in fact, hold when it gets to execute?

Yes, because the need might arise to change the invariants at runtime,
and you might not want to stop the program and restart it in order just
to change it.

> Actually, if you think that this problem description already contains
> the solution which is static typing, then we are basically on the same
> page here.

>> ...and BTW, please let me keep up using dynamically typed languages,
>> because this works well for me!
>
> Since I have no power over what you do, I am forced to grant you this
> wish. (Lucky you!)

:-)

>> See the example of downcasts in Java.
>
> You had to dig out the poorest example you could think of, didn't you?
> Make a note of it: When I talk about the power of static typing, I am
> *not* thinking of Java!

OK, sorry, this was my mistake. I have picked this example because it
has been mentioned in another branch of this thread.

>>> To make a (not so far-fetched, btw :) analogy: Consider logical
>>> statements and formal proofs. Making a logical statement is easy and
>>> can be very short.
>>> It is also easy to make mistakes without noticing;
>>> after all saying something that is false while still believing it to
>>> be true is extremely easy. Just by looking at the statement it is
>>> also often hard to tell whether the statement is right. In fact,
>>> computers have a hard time with this task, too. Theorem-proving is
>>> hard.
>>> On the other hand, writing down the statement with a formal proof is
>>> impossible to get wrong without anyone noticing because checking the
>>> proof for validity is trivial compared to coming up with it in the
>>> first place. So even though writing the statement with a proof seems
>>> harder, once you have done it and it passes the proof checker you can
>>> rest assured that you got it right. The longer "program" will have fewer
>>> "bugs" on average.
>>
>> Yes, but then you have a proof that is tailored to the statement you
>> have made. The claim of people who favor static type systems is that
>> static type systems are _generally_ helpful.
>
> I am not sure you "got" it: Yes, the proof is tailored to the
> statement (how else could it be?!), but the axioms and rules of its
> underlying proof system are not. Just like not every program has the
> same type even though the type system is fixed.

Yes, but you have much more freedom when you write an arbitrary proof
than when you need to make a type system happy.

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)

 0
Reply costanza (1427) 10/23/2003 4:00:17 PM

Ken Rose wrote:

> Pascal Costanza wrote:
>
>> Joachim Durchholz wrote:
>>
>>> Pascal Costanza wrote:
>>>
>>>> For example, static type systems are incompatible with dynamic
>>>> metaprogramming. This is objectively a reduction of expressive
>>>> power, because programs that don't allow for dynamic metaprogramming
>>>> can't be extended in certain ways at runtime, by definition.
>>> What is dynamic metaprogramming?
>>
>> Writing programs that inspect and change themselves at runtime.
>
> Ah. I used to do that in assembler. I always felt like I was aiming a
> shotgun between my toes.
>
> When did self-modifying code get rehabilitated?

I think this was in the late 70's.

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)

 0
Reply costanza (1427) 10/23/2003 4:11:32 PM

Simon Helsen wrote:

> On Thu, 23 Oct 2003, Remi Vanicat wrote:
>
>>> How does ocaml make sure that you don't get a message-not-understood
>>> exception at runtime then?
>>
>> It makes the verification when you call the test. Let me explain: you
>> could define
>>
>> let f x = x#foo
>>
>> which is a function taking an object x and calling its method
>> foo, even if there is no class having such a method.
>>
>> When some time later you do a
>>
>> f bar
>>
>> then, and only then, does the compiler verify that the bar object has
>> a foo method.
>
> you might want to mention that this is possible because of 'extensible
> record types'. Well, there is a good chance the python/lisp community
> will not understand this, but it illustrates that a lot of the
> arguments (probably on both sides in fact) are based on ignorance.

Do you have a reference for extensible record types? Google comes up,
among other things, with Modula-3, and I am pretty sure that's not what
you mean.

> One more thing I remembered from a heavy cross-group fight between
> comp.lang.smalltalk and c.l.f. quite a while ago, is that so-called
> 'dynamically typed' languages are useful because they allow you to
> incrementally develop ill-typed programs into better-typed programs (the
> XP-way), where the ill-typed programs already (partially) work.

Sometimes the ill-typed program is all I need because it helps me to
solve a problem that is covered by that program nonetheless.
> OTOH, with
> a static type system, you have to think more in advance to get the types
> right. XP-people consider this a hindrance and that is what people mean
> with 'the type system getting in the way'. With a Haskell-style or even
> Ocaml-style type system, you cannot seriously argue that you can write a
> program which cannot be easily(!) converted into one that fits such type
> systems. By program, I mean 'a finished production-ready piece of
> software', not a 'snapshot' in the development cycle.
>
> The arguments from the smalltalk people are arguably defendable and this
> is why this kind of discussion will pop up again and again. Using either
> static or dynamic (Blume: untyped) type systems is not the point at all.
> What actually matters is your development style/philosophy and this is
> more an issue of software engineering really.

Exactly. Very well put!

Thanks,
Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)

 0
Reply costanza (1427) 10/23/2003 4:20:14 PM

Pascal Costanza wrote:
>>
>> But only at runtime, where a logic flaw may or may not trigger the
>> assertion.
>
> I don't care about that difference. My development environment is
> flexible enough to make execution of test suites a breeze. I don't need
> a separate compilation and linking stage to make this work.
>
>> (Assertions are still useful: if they are active, they prove that the
>> errors checked by them didn't occur in a given program run. This can
>> still be useful. But then, production code usually runs with assertion
>> checking off - which is exactly the point where knowing that some bug
>> occurred would be more important...)
>
> Don't let your production code run with assertion checking off then.
You don't seem to see the fundamental difference, which has been stated as "Static typing shows the absence of [certain classes of] errors, while testing [with assertions] can only show the presence of errors." When you actively use a type system as a tool and turn it to your advantage that "certain class" can be pretty large, btw. > I hear that in the worst case scenarios, static type checking in modern > type systems needs exponential time, but for most practical cases this > doesn't matter. Maybe it also doesn't matter for most practical cases > that you can't check all permutations of data in a test suite. Come on, you're comparing apples and wieners. The implications are completely different. -- Andreas Rossberg, rossberg@ps.uni-sb.de "Computer games don't affect kids; I mean if Pac Man affected us as kids, we would all be running around in darkened rooms, munching magic pills, and listening to repetitive electronic music." - Kristian Wilson, Nintendo Inc.   0 Reply rossberg (600) 10/23/2003 4:25:46 PM Pascal Costanza wrote: > Matthias Blume wrote: > >> Pascal Costanza <costanza@web.de> writes: >> >> >>> Matthias Blume wrote: >>> >>> >>>> PS: When I say "untyped" I mean it as in "the _untyped_ lambda >>>> calculus". >>> >>> >>> What terms would you use to describe the difference between >>> dynamically and weakly typed languages, then? >>> >>> >>> For example, Smalltalk is clearly "more" typed than C is. Describing >>> both as "untyped" seems a little bit unfair to me. >> >> >> >> Safe and unsafe. >> >> BTW, C is typed, Smalltalk is untyped. C's type system just happens >> to be unsound (in the sense that, as you observed, well-typed programs >> can still be unsafe). > > > Can you give me a reference to a paper, or some other literature, that > defines the terminology that you use? 
> > I have tried to find a consistent set of terms for this topic, and have > only found the paper "Type Systems" by Luca Cardelli > (http://www.luca.demon.co.uk/Bibliography.htm#Type systems ) > > He uses the terms of static vs. dynamic typing and strong vs. weak > typing, and these are described as orthogonal classifications. I find > this terminology very clear, consistent and useful. But I am open to a > different terminology. My copy, http://research.microsoft.com/Users/luca/Papers/TypeSystems.A4.pdf on page 3 defines safety as orthogonal to typing in the way Matthias suggested. -- Andreas Rossberg, rossberg@ps.uni-sb.de "Computer games don't affect kids; I mean if Pac Man affected us as kids, we would all be running around in darkened rooms, munching magic pills, and listening to repetitive electronic music." - Kristian Wilson, Nintendo Inc.   0 Reply rossberg (600) 10/23/2003 4:35:47 PM Matthias Blume wrote: > Pascal Costanza <costanza@web.de> writes: > > >>>Actually, viewed from a certain angle, yes. Every programmer who >> >>>writes a program ought to have a proof that the program is correct in >>>her mind. (If not, fire her.) It ought to be possible to formalize >>>that proof and to statically check it. >> >>You are thinking about a certain set of programs and a distinct >>programming style that seems to work well for you. Other people may >>prefer a different programming style and care about a different set of >>programs. > > Do you mean that I talk about programs written by programmers who know > what they are doing while you are talking about a different set of > programs? The cool thing about dynamically typed languages is that you don't need to know what you are doing when you start to write a program. You gain an understanding of the problem you try to solve during development, by just trying out things and see if they work. Of course, in the end I should have gained a fairly deep understanding of the problem, otherwise I have failed. 
But you seem to suggest that I shouldn't even start programming before I
have gained a complete understanding. And in my view this is a waste of
resources.

The process that you undergo when you try to figure out a problem
consists of automatable and non-automatable elements. I prefer to let
the automatable elements be executed by my computer from the very
beginning. I see the computer as a tool that supports my reasoning
process here.

At the end of the day, when I have finished my understanding process I
have also come up with a solution to the problem as a working program.
At that stage, if you want to be really sure that certain conditions are
always met and can never be violated it makes sense to _add_ static
checks that you can even tailor to the concrete problem you have already
solved.

The difference between a specification and a program is that I can test
the program. ;)

> The problem really is that you often say "is correct" and "cannot be
> statically checked" about the same set of problems. But how can *you*
> yourself possibly know the first to be true given that you think the
> second is true?

For example I cannot check whether my program will always be able to
successfully connect to the internet or not. I can still know that my
program is "correct". This is a very simple example but it is
nonetheless one that illustrates that dynamic checking can be much
better than static checking.

A similar situation appears when you don't know what actual code your
program will run in a dynamically extensible system. Just another
example.

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr.
164, D-53117 Bonn (Germany)   0 Reply costanza (1427) 10/23/2003 4:44:28 PM Pascal Costanza wrote: > Ken Rose wrote: > >> Pascal Costanza wrote: >> >>> Joachim Durchholz wrote: >>> >>>> Pascal Costanza wrote: >>>> >>>>> For example, static type systems are incompatible with dynamic >>>>> metaprogramming. This is objectively a reduction of expressive >>>>> power, because programs that don't allow for dynamic >>>>> metaprogramming can't be extended in certain ways at runtime, by >>>>> definition. >>>> >>>> >>>> What is dynamic metaprogramming? >>> >>> >>> Writing programs that inspect and change themselves at runtime. >> >> >> Ah. I used to do that in assembler. I always felt like I was aiming >> a shotgun between my toes. >> >> When did self-modifying code get rehabilitated? > > > I think this was in the late 70's. Have you got a good reference for the uninitiated? Thanks - ken   0 Reply kenrose (17) 10/23/2003 4:54:32 PM Ken Rose wrote: > Pascal Costanza wrote: > >> Ken Rose wrote: >> >>> Pascal Costanza wrote: >>> >>>> Joachim Durchholz wrote: >>>> >>>>> Pascal Costanza wrote: >>>>> >>>>>> For example, static type systems are incompatible with dynamic >>>>>> metaprogramming. This is objectively a reduction of expressive >>>>>> power, because programs that don't allow for dynamic >>>>>> metaprogramming can't be extended in certain ways at runtime, by >>>>>> definition. >>>>> >>>>> >>>>> >>>>> What is dynamic metaprogramming? >>>> >>>> >>>> >>>> Writing programs that inspect and change themselves at runtime. >>> >>> >>> >>> Ah. I used to do that in assembler. I always felt like I was aiming >>> a shotgun between my toes. >>> >>> When did self-modifying code get rehabilitated? >> >> >> >> I think this was in the late 70's. > > > Have you got a good reference for the uninitiated? http://www.laputan.org/ref89/ref89.html and http://www.laputan.org/brant/brant.html are probably good starting points. 
http://www-db.stanford.edu/~paepcke/shared-documents/mopintro.ps

is an excellent paper, but not for the faint of heart. ;)

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)

 0
Reply costanza (1427) 10/23/2003 5:13:00 PM

Andreas Rossberg wrote:
>> Can you give me a reference to a paper, or some other literature, that
>> defines the terminology that you use?
>>
>> I have tried to find a consistent set of terms for this topic, and
>> have only found the paper "Type Systems" by Luca Cardelli
>> (http://www.luca.demon.co.uk/Bibliography.htm#Type systems )
>>
>> He uses the terms of static vs. dynamic typing and strong vs. weak
>> typing, and these are described as orthogonal classifications. I find
>> this terminology very clear, consistent and useful. But I am open to a
>> different terminology.
>
> My copy,
>
> http://research.microsoft.com/Users/luca/Papers/TypeSystems.A4.pdf
>
> on page 3 defines safety as orthogonal to typing in the way Matthias
> suggested.

Yes, but it says dynamically typed vs statically typed where Matthias
says untyped vs typed.

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)

 0
Reply costanza (1427) 10/23/2003 5:15:48 PM

Matthias Blume <find@my.address.elsewhere> writes:

> prunesquallor@comcast.net writes:
>>
>> The opposing point is to assert that *no* program that cannot be
>> statically checked is useful. Are you really asserting that?
>
> Actually, viewed from a certain angle, yes. Every programmer who
> writes a program ought to have a proof that the program is correct in
> her mind. (If not, fire her.) It ought to be possible to formalize
> that proof and to statically check it.

That's a little draconian.
When I write programs I often have no clue as to what I am doing, let alone a proof that it is correct!   0 Reply jrm (1311) 10/23/2003 5:33:28 PM Joe Marshall <jrm@ccs.neu.edu> writes: > Matthias Blume <find@my.address.elsewhere> writes: > > > prunesquallor@comcast.net writes: > >> > >> The opposing point is to assert that *no* program that cannot be > >> statically checked is useful. Are you really asserting that? > > > > Actually, viewed from a certain angle, yes. Every programmer who > > writes a program ought to have a proof that the program is correct in > > her mind. (If not, fire her.) It ought to be possible to formalize > > that proof and to statically check it. > > That's a little draconian. When I write programs I often have no clue > as to what I am doing, let alone a proof that it is correct! You're fired.   0 Reply find19 (1245) 10/23/2003 5:37:31 PM Pascal Costanza wrote: >> >> My copy, >> >> http://research.microsoft.com/Users/luca/Papers/TypeSystems.A4.pdf >> >> on page 3 defines safety as orthogonal to typing in the way Matthias >> suggested. > > Yes, but it says dynamically typed vs statically typed where Matthias > says untyped vs typed. Huh? On page 2 Cardelli defines typed vs. untyped. Table 1 on page 5 clearly identifies Lisp as an untyped (but safe) language. He also speaks of statical vs. dynamical _checking_ wrt safety, but where do you find a definition of dynamic typing? - Andreas -- Andreas Rossberg, rossberg@ps.uni-sb.de "Computer games don't affect kids; I mean if Pac Man affected us as kids, we would all be running around in darkened rooms, munching magic pills, and listening to repetitive electronic music." - Kristian Wilson, Nintendo Inc.   0 Reply rossberg (600) 10/23/2003 5:38:57 PM Pascal Costanza wrote: > > The cool thing about dynamically typed languages is that you don't need > to know what you are doing when you start to write a program. 
You gain > an understanding of the problem you try to solve during development, by > just trying out things and see if they work. > > Of course, in the end I should have gained a fairly deep understanding > of the problem, otherwise I have failed. But you seem to suggest that I > shouldn't even start programming before I have gained a complete > understanding. And in my view this is a waste of resources. Even if you prefer this approach to programming - which definitely is not suitable for all problem domains - a type system can be very useful guidance for gaining understanding, so it might actually save resources. - Andreas -- Andreas Rossberg, rossberg@ps.uni-sb.de "Computer games don't affect kids; I mean if Pac Man affected us as kids, we would all be running around in darkened rooms, munching magic pills, and listening to repetitive electronic music." - Kristian Wilson, Nintendo Inc.   0 Reply rossberg (600) 10/23/2003 5:44:19 PM Matthias Blume <find@my.address.elsewhere> writes: > Joe Marshall <jrm@ccs.neu.edu> writes: > >> Matthias Blume <find@my.address.elsewhere> writes: >> >> > prunesquallor@comcast.net writes: >> >> >> >> The opposing point is to assert that *no* program that cannot be >> >> statically checked is useful. Are you really asserting that? >> > >> > Actually, viewed from a certain angle, yes. Every programmer who >> > writes a program ought to have a proof that the program is correct in >> > her mind. (If not, fire her.) It ought to be possible to formalize >> > that proof and to statically check it. >> >> That's a little draconian. When I write programs I often have no clue >> as to what I am doing, let alone a proof that it is correct! > > You're fired. See what static typing does to one's mind? You've turned into a PHB!   
0 Reply jrm (1311) 10/23/2003 5:48:04 PM Joe Marshall <jrm@ccs.neu.edu> writes: > Matthias Blume <find@my.address.elsewhere> writes: > > > Joe Marshall <jrm@ccs.neu.edu> writes: > > > >> Matthias Blume <find@my.address.elsewhere> writes: > >> > >> > prunesquallor@comcast.net writes: > >> >> > >> >> The opposing point is to assert that *no* program that cannot be > >> >> statically checked is useful. Are you really asserting that? > >> > > >> > Actually, viewed from a certain angle, yes. Every programmer who > >> > writes a program ought to have a proof that the program is correct in > >> > her mind. (If not, fire her.) It ought to be possible to formalize > >> > that proof and to statically check it. > >> > >> That's a little draconian. When I write programs I often have no clue > >> as to what I am doing, let alone a proof that it is correct! > > > > You're fired. > > See what static typing does to one's mind? You've turned into a PHB! Right. Static typing is strong medicine. Beware of those side effects! No, seriously. I think that static typing actually helps quite a lot at exactly that "fuzzy" stage of program design. I bet everyone has experienced the following scenario (I have many times): You try to figure out some difficult problem, and you are stumped. So you go to your buddy next office, meaning to ask for help with the solution. And while you are explaining to him what the problem is in the first place and why you are having difficulties with it you suddenly go "I got it!" The mere act of carefully explaining one's own thinking processes to some patient listener, i.e., the act of putting these processes into words, helps. Now, this is exactly my experience with static typing: I often start out like Joe without a clue of what I am doing. (Of course, I don't tell my PHB so I don't get fired like Joe just did. :-) What I am doing at this stage is mostly fiddling with types (think "abstract interfaces"). 
In effect, I am trying to explain what I am planning on doing to the computer. At this stage, no actual interaction with the machine is necessary -- most of the time I already know what will typecheck and what won't. The mere act of trying to express myself in the language of types helps me with crystallizing my thoughts into something that is workable (and which, by nature of the process, has a good chance of passing the type checker). The fact that the type checker will also detect a certain amount of clerical errors in my code is a bonus. The main benefit (as far as initial design goes) comes from the above effect of being forced to explain my thoughts clearly to someone. Matthias   0 Reply find19 (1245) 10/23/2003 6:06:07 PM In article <m1r813onxg.fsf@tti5.uchicago.edu>, Matthias Blume <find@my.address.elsewhere> wrote: > Joe Marshall <jrm@ccs.neu.edu> writes: > > > Matthias Blume <find@my.address.elsewhere> writes: > > > > > prunesquallor@comcast.net writes: > > >> > > >> The opposing point is to assert that *no* program that cannot be > > >> statically checked is useful. Are you really asserting that? > > > > > > Actually, viewed from a certain angle, yes. Every programmer who > > > writes a program ought to have a proof that the program is correct in > > > her mind. (If not, fire her.) It ought to be possible to formalize > > > that proof and to statically check it. > > > > That's a little draconian. When I write programs I often have no clue > > as to what I am doing, let alone a proof that it is correct! > > You're fired. Really? While you're looking for someone to replace this person you just fired (and rejecting all the applicants who aren't trained in how to produce formal proofs of correctness) your competition is iteratively testing and refining a product which, while it doesn't have a proof of correctness, works well enough from the customer's point of view. So your competition gets the business because they have something to ship and you don't. 
The best you can offer is, "Wait! Don't buy their stuff. It might be
broken. Just wait until we get our HR act together and you can buy *our*
product which we can *prove* doesn't have any bugs." (Except, of course,
that all you can really prove is that it doesn't have any type errors,
which is not the same thing.)

So the result of getting up on your formal-proof high-horse is that the
company is now bankrupt.

If I were one of your stockholders I'd say the wrong person got fired.

E.

 0
Reply 10/23/2003 6:08:55 PM

In article <m1n0bromls.fsf@tti5.uchicago.edu>, Matthias Blume
<find@my.address.elsewhere> wrote:

> The fact that the type checker will also detect a certain amount of
> clerical errors in my code is a bonus.

That depends on what you are trying to accomplish. If you are forced to
spend time fixing clerical errors that are not really relevant to the
problem you are trying to solve and you are competing with someone who
is free to ignore those errors and move on then you will lose.

> The main benefit (as far as
> initial design goes) comes from the above effect of being forced to
> explain my thoughts clearly to someone.

That is not a benefit exclusive to static typing. The same benefit can
be (and is) had from getting a program to run in a dynamically typed
system.

E.

 0
Reply 10/23/2003 6:15:12 PM

Matthias Blume <find@my.address.elsewhere> writes:

> A 100,000 line program in an untyped language is useless to me if I am
> trying to make modifications -- unless it is written in a highly
> stylized way which is extensively documented (and which usually means
> that you could have captured this style in static types).

The only untyped languages I know are assemblers. (ISTR that even
intercal can't be labelled "untyped" per se).

Are we speaking about assembler here?
-- __Pascal_Bourguignon__ http://www.informatimago.com/   0 Reply spam173 (586) 10/23/2003 6:33:38 PM Pascal Bourguignon <spam@thalassa.informatimago.com> writes: > The only untyped languages I know are assemblers. (ISTR that even > intercal can't be labelled "untyped" per se). > > Are we speaking about assembler here? No, we are speaking different definitions of "typed" and "untyped" here. Even assembler is typed if you look at it the right way. As I said before, I mean "untyped" as in "The Untyped Lambda Calculus" which is a well-established term. Matthias   0 Reply find19 (1245) 10/23/2003 6:42:33 PM Matthias Blume wrote: > Thomas Lindgren <***********@*****.***> writes: > > >>Matthias Blume <find@my.address.elsewhere> writes: >> >> >>>Every programmer who writes a program ought to have a proof that the >>>program is correct in her mind. (If not, fire her.) >> >>Don't forget to fire the specification writer afterwards. Then the >>requirements guy. Then the customer. > > > Unfortunately, I am aware of "the Real World". In any case, is this > really any excuse for shipping code of which we don't know will always > work, written by programmers who we didn't fire even though they > didn't know what they were doing, writing to specifications that were > inconsistent, driven by requirements that were unreasonable to begin > with, asked for by customers who were clueless? Without a solid definition of "the program is correct" all of this is really posturing, and not even interesting posturing at that. 
Among the choices:

  the program will do what the customer wanted (ha)
  the program will do what the customer asked for (maybe)
  the program will do what the req/spec people asked for
  the program will conform to the written spec
  the program will do what the programmer intended
  the program will do what the programmer documented
  the program will fail only in certain relatively harmless ways

and a bunch of others.

Feasible formal proofs apply only to some of those definitions, and not
even in any monotonic fashion.

paul

 0
Reply pw38 (127) 10/23/2003 7:04:08 PM

Matthias Blume <find@my.address.elsewhere> writes:

> Thomas Lindgren <***********@*****.***> writes:
>
> > Matthias Blume <find@my.address.elsewhere> writes:
> >
> > > Every programmer who writes a program ought to have a proof that the
> > > program is correct in her mind. (If not, fire her.)
> >
> > Don't forget to fire the specification writer afterwards. Then the
> > requirements guy. Then the customer.
>
> Unfortunately, I am aware of "the Real World". In any case, is this
> really any excuse for shipping code of which we don't know will always
> work, written by programmers who we didn't fire even though they
> didn't know what they were doing, writing to specifications that were
> inconsistent, driven by requirements that were unreasonable to begin
> with, asked for by customers who were clueless?

What a jackass! So, if you haven't fired yourself, please share your
amazing system that allows you to prove arbitrary properties of your
code, and to specify what "correct" means in a way that's not just
another programming language (which would then need to be proven
correct using ... ?)

--
/|_ .-----------------------. ,' .\ / | No to Imperialist war | ,--' _,' | Wage class war! | / / -----------------------' ( -. | | ) | (-. '--.) .
)----'   0 Reply tfb3 (483) 10/23/2003 7:15:03 PM myfirstname.mylastname@jpl.nasa.gov (Erann Gat) writes: > In article <m1r813onxg.fsf@tti5.uchicago.edu>, Matthias Blume > <find@my.address.elsewhere> wrote: > > > Joe Marshall <jrm@ccs.neu.edu> writes: > > > > > Matthias Blume <find@my.address.elsewhere> writes: > > > > > > > prunesquallor@comcast.net writes: > > > >> > > > >> The opposing point is to assert that *no* program that cannot be > > > >> statically checked is useful. Are you really asserting that? > > > > > > > > Actually, viewed from a certain angle, yes. Every programmer who > > > > writes a program ought to have a proof that the program is correct in > > > > her mind. (If not, fire her.) It ought to be possible to formalize > > > > that proof and to statically check it. > > > > > > That's a little draconian. When I write programs I often have no clue > > > as to what I am doing, let alone a proof that it is correct! > > > > You're fired. > > Really? Relax. This was a joke. (Wasn't that obvious?) > While you're looking for someone to replace this person you just fired > (and rejecting all the applicants who aren't trained in how to produce > formal proofs of correctness) your competition is iteratively testing and > refining a product which, while it doesn't have a proof of correctness, > works well enough from the customer's point of view. So your competition > gets the business because they have something to ship and you don't. The > best you can offer is, "Wait! Don't buy their stuff. It might be > broken. Just wait until we get our HR act together and you can buy *our* > product which we can *prove* doesn't have any bugs." That's not what I said. I said that the programmer has a proof in her head. (At least she thinks she does.) My point was that since she has a proof, the proof obviously *exists* and *could* be written down and *could* be statically verified if one only went to the trouble of doing so. 
(And again, even this is obviously much easier said than done.)

> (Except, of course,
> that all you can really prove is that it doesn't have any type errors,
> which is not the same thing.)

No, I wasn't thinking of contemporary type errors. I was thinking of
a real proof of correctness, in all glory. The point is that even
though we all know that we cannot prove all correct programs correct
in general, we can do so for the programs we actually write (which is
a proper subset of the set of all correct programs). Anyone who
claims his program is correct but it cannot be proven correct must
face the question "How do you know?"

> So the result of getting up on your formal-proof high-horse is that the
> company is now bankrupt.
>
> If I were one of your stockholders I'd say the wrong person got fired.

Let me repeat one more time what I actually said: I want to fire the
programmer who does not have a thorough understanding of what he/she is
doing. Is that so wrong? What I did not say was: Let's fire the
programmer who does not write down formal proofs for the things he/she
is doing.

Now, if you want to fire me because I insist on working with competent
colleagues, well, so be it. Actually, you don't have to because I
quit. :-)

Matthias

 0
Reply find19 (1245) 10/23/2003 7:45:04 PM

tfb@famine.OCF.Berkeley.EDU (Thomas F. Burdick) writes:

> please share your
> amazing system that allows you to prove arbitrary properties of your
> code, and to specify what "correct" means in a way that's not just
> another programming language (which would then need to be proven
> correct using ... ?)

The system: mathematics in general, logic in particular.

Correctness: Certain statements (depending on the problem domain) which
I want to hold true for my programs.

Another programming language: To some degree, yes, logic is "another
programming language". We are getting fairly deep into philosophy here
if we want to discuss how much justification, e.g., foundational proofs
require.
Let's say we take, e.g., ZFC for granted. Let's build things from
there. And in case you ask: No, this does not let me prove "arbitrary"
properties. But, obviously, the ones I can't prove I can't claim my
code to possess.

In any case, what I tried to express was that even though there are
customers who might not always be on top of things as far as
expectations go (sorry for having offended you with the "clueless"
tongue-in-cheek remark), and even though requirements as well as
specifications are often either imprecise or self-contradictory or
both, this is NO EXCUSE for a programmer to not think about (and prove
to herself) the correctness of the code she writes.

I think what I am asking here is fairly modest. Shame on you for
calling me names for it!

Matthias

 0
Reply find19 (1245) 10/23/2003 7:59:05 PM

myfirstname.mylastname@jpl.nasa.gov (Erann Gat) writes:

> Just wait until we get our HR act together and you can buy *our* product
> which we can *prove* doesn't have any bugs." (Except, of course, that all
> you can really prove is that it doesn't have any type errors, which is not
> the same thing.)

While I agree with your essential point, I should point out that with
"proper" formal methods you can do quite a bit better than detecting
type errors. You can prove that, assuming a sane execution environment,
your implementation realizes the specification, where the spec can
describe behaviour as well as types, given a suitable spec language.

The "sane environment" assumption is the kicker, as is the work required
to actually take the trouble of verifying/deriving an honest-to-goodness
real non-trivial application, as well as the work required to write a
bug-free spec (and who verifies the spec is anything reasonable or
correct?).

--
Cheers,                     The Rhythm is around me,
                            The Rhythm has control.
Ray Blaak                   The Rhythm is inside me,
rAYblaaK@STRIPCAPStelus.net The Rhythm has my soul.
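Blaak's "your implementation realizes the specification" is, for small programs, literally machine-checkable today. A toy sketch in Lean 4 (an editorial illustration, not from the thread; the `app` function and the theorem are made up for the example): the implementation is `app`, the behavioural spec is the length equation, and the proof checker plays the verifier's role.

```lean
-- A tiny "implementation": list concatenation by structural recursion.
def app : List α → List α → List α
  | [],      ys => ys
  | x :: xs, ys => x :: app xs ys

-- A behavioural "specification": app preserves the combined length.
-- The checker accepts this only if the proof actually goes through.
theorem app_length (xs ys : List α) :
    (app xs ys).length = xs.length + ys.length := by
  induction xs with
  | nil => simp only [app, List.length_nil]; omega
  | cons x xs ih => simp only [app, List.length_cons, ih]; omega
```

Scaling this past toy examples is exactly the "work required" Blaak describes, and the spec itself remains unverified except by human judgment.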
0 Reply rAYblaaK (363) 10/23/2003 8:04:09 PM Matthias Blume <find@my.address.elsewhere> writes: > That's not what I said. I said that the programmer has a proof in her > head. (At least she thinks she does.) My point was that since she has > a proof, the proof obviously *exists* and *could* be written down and > *could* be statically verified if one only went to the trouble of > doing so. (And again, even this is obviously much easier said than > done.) Much much easier said than done. So much so that practical formal methods are not currently useful. Still, one can *attempt* to program in a proof-like style, whereby you code according to the assumptions you know, reduce the number of exception cases, etc. That is, the attempt, the effort to think about such things even informally gives better code. Prototyping code out of ignorance is still useful though, since it lets you discover requirements and assumptions better. > The point is that even though we all know that we cannot prove all correct > programs correct in general, we can do so for the programs we actually write > (which is a proper subset of the set of all correct programs). We can't do so, at least if you insist on being formal. It is too difficult in general. > Anyone who claims his program is correct but it cannot be proven correct > must face the question "How do you know?" In the end they need to be able to give a convincing argument, but that is not the same as being formal. There are many ways of convincing humans. -- Cheers, The Rhythm is around me, The Rhythm has control. Ray Blaak The Rhythm is inside me, rAYblaaK@STRIPCAPStelus.net The Rhythm has my soul.   
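Blaak's informal "proof-like style" can be practiced in any of the languages in this thread by writing the correctness argument into the code as asserted invariants. A hypothetical Python sketch (not from the thread): the loop invariant of a binary search is stated and checked, so the informal proof travels with the program.

```python
def binary_search(xs, target):
    """Return an index i with xs[i] == target, or None.

    Precondition (assumed, not proven): xs is sorted ascending.
    The asserted invariant below is the informal correctness proof:
    if target occurs in xs at all, it occurs in the slice xs[lo:hi].
    """
    lo, hi = 0, len(xs)
    while lo < hi:
        # Invariant: the search space has only shrunk safely.
        assert 0 <= lo <= hi <= len(xs)
        assert target not in xs[:lo] and target not in xs[hi:]
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1  # everything at or below mid is < target
        else:
            hi = mid      # everything at or above mid is > target
    return None
```

With assertions on, every run re-checks the argument; with `python -O` they vanish, which is precisely the trade-off discussed above.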
0 Reply rAYblaaK (363) 10/23/2003 8:10:31 PM In article <m1ekx3oi0v.fsf@tti5.uchicago.edu>, Matthias Blume <find@my.address.elsewhere> wrote: > myfirstname.mylastname@jpl.nasa.gov (Erann Gat) writes: > > > In article <m1r813onxg.fsf@tti5.uchicago.edu>, Matthias Blume > > <find@my.address.elsewhere> wrote: > > > > > Joe Marshall <jrm@ccs.neu.edu> writes: > > > > > > > Matthias Blume <find@my.address.elsewhere> writes: > > > > > > > > > prunesquallor@comcast.net writes: > > > > >> > > > > >> The opposing point is to assert that *no* program that cannot be > > > > >> statically checked is useful. Are you really asserting that? > > > > > > > > > > Actually, viewed from a certain angle, yes. Every programmer who > > > > > writes a program ought to have a proof that the program is correct in > > > > > her mind. (If not, fire her.) It ought to be possible to formalize > > > > > that proof and to statically check it. > > > > > > > > That's a little draconian. When I write programs I often have no clue > > > > as to what I am doing, let alone a proof that it is correct! > > > > > > You're fired. > > > > Really? > > Relax. This was a joke. (Wasn't that obvious?) Actually no. When you wrote "Every programmer who writes a program ought to have a proof that the program is correct in her mind. (If not, fire her.)" you sounded quite serious to me. > > While you're looking for someone to replace this person you just fired > > (and rejecting all the applicants who aren't trained in how to produce > > formal proofs of correctness) your competition is iteratively testing and > > refining a product which, while it doesn't have a proof of correctness, > > works well enough from the customer's point of view. So your competition > > gets the business because they have something to ship and you don't. The > > best you can offer is, "Wait! Don't buy their stuff. It might be > > broken. 
Just wait until we get our HR act together and you can buy *our* > > product which we can *prove* doesn't have any bugs." > > That's not what I said. I said that the programmer has a proof in her > head. (At least she thinks she does.) No, you said more than that. You said that "it ought to be possible to formalize the proof and to statically check it." But the only way to tell whether that is in fact possible is to actually do it. So either you are implicitly insisting that this proof be actually constructed and checked or your position is vacuous. If you're willing to take someone's word for it that they have a proof in their head then you may just as well take their word for it that the code works for whatever reason they choose to have for saying so. > My point was that since she has > a proof, the proof obviously *exists* and *could* be written down and > *could* be statically verified if one only went to the trouble of > doing so. (And again, even this is obviously much easier said than > done.) And unless you actually do it then all you really know is that she *thinks* she has a proof in her head (and actually you don't really know that either, especially if she knows that she will not be expected to actually produce the proof, and that confessing to not having one will get her fired). So once again I say that unless you insist on having people carry out the formal proofs your position is vacuous. > > (Excecpt, of course, > > that all you can really prove is that it doesn't have any type errors, > > which is not the same thing.) > > No, I wasn't thinking of contemporary type errors. I was thinking of > a real proof of correctness, in all glory. The point is that even > though we all know that we cannot prove all correct programs correct > in general, we can do so for the programs we actually write (which is > a proper subset of the set of all correct programs). 
Anyone who > claims his program is correct but it cannot be proven correct must > face the question "How do you know?" And IMO a perfectly legitimate answer to that question is, "Because I ran it and it worked." To which you will no doubt counter: but how do you know that it will work the *next* time you run it, or if you run it under different circumstances than those under which you tested it? To which my reply will be: how do you know that the exhibited proof is correct? Oh, you're going to run an automatic proof checker on it? How do you know that the proof checker is correct? How do you know that the hardware on which your proof checker runs is correct? What happens if you get a single-event upset in a processor register, or a bad byte of RAM? The whole business of computing, theoreticians wishes to the contrary notwithstanding, is at the end of the day still an empirical enterprise, and always will be as long as computers and their users are part of the physical world. > > So the result of getting up on your formal-proof high-horse is that the > > company is now bankrupt. > > > > If I were one of your stockholders I'd say the wrong person got fired. > > Let me repeat one more time what I actually said: I want to fire the > programmer who does not have a thorough understanding of what he/she > is doing. Is that so wrong? That's not what you said. What you said is that you want to fire the programmer who lacks a very particular kind of understanding of what s/he is doing. I don't know whether it's "so wrong", but I submit that it would be ultimately counterproductive. > What I did not say was: Let's fire the > programmer who does not write down formal proofs for the things he/she > is doing. Well, then your position is vacuous, as I pointed out above. > Now, if you want to fire me because I insist on working with competent > collegues, well, so be it. Actually, you don't have to because I > quit. :-) Thanks for saving me the trouble. 
Seriously, if you were working for me and you judged your colleagues incompetent simply because they confessed to not having a formal proof in their head for the correctness of the code they had written I would fire you. I would do so with regret because I think you're very smart and capable, but I would do it without hesitation. If you really think that having a formal proof of correctness trumps all other considerations then IMO you have completely lost sight of the big picture. E. P.S. Suppose your task is to write a typesetting program and one of the requirements is that the output look aesthetically pleasing. How would you go about proving that your code is correct?   0 Reply 10/23/2003 8:18:01 PM In article <uznfrd8jl.fsf@STRIPCAPStelus.net>, Ray Blaak <rAYblaaK@STRIPCAPStelus.net> wrote: > myfirstname.mylastname@jpl.nasa.gov (Erann Gat) writes: > > Just wait until we get our HR act together and you can buy *our* product > > which we can *prove* doesn't have any bugs." (Excecpt, of course, that all > > you can really prove is that it doesn't have any type errors, which is not > > the same thing.) > > While I agree with your essential point, I should point out that with "proper" > formal methods you can do quite a bit better than detecting type errors. You > can prove that, assuming a sane execution environment, your implementation > realizes the specification, where the spec can describe behaviour as well as > types, given a suitable spec language. > > The "sane environment" assumption is the kicker, as is the work required to > actually take the trouble of verifying/deriving an honest-to-goodness real > non-trivial application, as well as the work required to write a bug free spec > (and who verifies the spec is anything reasonable or correct?). Yes, your point is well taken. I am in fact a fan of formal methods. They can be very useful for finding certain classes of bugs that are hard to find any other way (race conditions for example). 
But they are neither necessary nor sufficient for producing "correct" code (whatever that means). See http://archive.larc.nasa.gov/shemesh/Lfm2000/Proc/cpp15.pdf for an interesting case study. E.   0 Reply 10/23/2003 8:30:59 PM  Thomas F. Burdick wrote: > Matthias Blume <find@my.address.elsewhere> writes: > > >>Thomas Lindgren <***********@*****.***> writes: >> >> >>>Matthias Blume <find@my.address.elsewhere> writes: >>> >>> >>>>Every programmer who writes a program ought to have a proof that the >>>>program is correct in her mind. (If not, fire her.) >>> >>>Don't forget to fire the specification writer afterwards. Then the >>>requirements guy. Then the customer. >> >>Unfortunately, I am aware of "the Real World". In any case, is this >>really any excuse for shipping code of which we don't know will always >>work, written by programmers who we didn't fire even though they >>didn't know what they were doing, writing to specifications that were >>inconsistent, driven by requirements that were unreasonable to begin >>with, asked for by customers who were clueless? <heh-heh> That is why we like Lisp. Makes all those things (a rather accurate description of my career in software development) manageable. It is easier to work with a slow-setting glue, and Lisp code is veritably non-setting. It just stops changing once it is right, tho it is ready to start changing again if the spec moves. I guess I keep my job, because I always have a proof in mind: the code is correct when it stops changing. > What a jackass! Down, Oakland! Down! :) -- http://tilton-technology.com What?! You are a newbie and you haven't answered my: http://alu.cliki.net/The%20Road%20to%20Lisp%20Survey   0 Reply ktilton (2220) 10/23/2003 9:21:08 PM Matthias Blume wrote: <snip> >>>>>Actually, viewed from a certain angle, yes. Every programmer who >>>>>writes a program ought to have a proof that the program is correct in >>>>>her mind. (If not, fire her.) 
It ought to be possible to formalize
>>>>> that proof and to statically check it.

<snip>

> That's not what I said. I said that the programmer has a proof in her
> head. (At least she thinks she does.) My point was that since she has
> a proof, the proof obviously *exists* and *could* be written down and
> *could* be statically verified if one only went to the trouble of
> doing so. (And again, even this is obviously much easier said than
> done.)

<snip>

>> (Excecpt, of course,
>> that all you can really prove is that it doesn't have any type errors,
>> which is not the same thing.)

> No, I wasn't thinking of contemporary type errors. I was thinking of
> a real proof of correctness, in all glory. The point is that even
> though we all know that we cannot prove all correct programs correct
> in general, we can do so for the programs we actually write (which is
> a proper subset of the set of all correct programs). Anyone who
> claims his program is correct but it cannot be proven correct must
> face the question "How do you know?"

I'm not sure what you mean by a proof here. Do you mean proof as in a
formal mathematical proof? Formally proving correctness of programs is
very difficult, even for a few lines of code; it would not be practical
for much larger programs. A pre-requisite would be a formal description
of the requirements, which I have never seen from a client, nor do I
want to. To clarify things, can you give me a formal proof that the
following Java code correctly sums an array of integers?

    public double sumArray(int[] array) {
        int sum = 0;
        for (int i = 0; i < array.length; ++i) {
            sum += array[i];
        }
        return sum;
    }

I don't believe anyone can guarantee the correctness of any substantial
work, nor should they be expected to. Obviously this is not to say that
people shouldn't do their best to reduce the number of bugs in their
work. For me, unit testing and constant refactoring seem to work best.
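[Editor's aside: Alex's closing remark about unit testing is easy to make concrete. Below is a minimal Python translation of his sumArray with the kind of example-based tests and a randomized check he has in mind. This is illustrative only; Python integers don't overflow, so it mirrors just the Java code's intended behaviour, and all names here are mine, not from the thread.]

```python
import random

def sum_array(array):
    """Python rendering of Alex's sumArray; Java's double return
    type becomes an explicit float() on the way out."""
    total = 0
    for x in array:
        total += x
    return float(total)

# A few example-based tests...
assert sum_array([]) == 0.0
assert sum_array([1, 2, 3]) == 6.0
assert sum_array([-5, 5]) == 0.0

# ...plus a randomized "property" check against Python's built-in sum.
for _ in range(100):
    xs = [random.randint(-1000, 1000) for _ in range(20)]
    assert sum_array(xs) == float(sum(xs))
```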
0 Reply alex1420 (29) 10/23/2003 9:44:26 PM spammers_must_die@jpl.nasa.gov (Erann Gat) writes: [ I leave this for reference: ] > > > > > > Actually, viewed from a certain angle, yes. Every programmer who > > > > > > writes a program ought to have a proof that the program is correct in > > > > > > her mind. (If not, fire her.) It ought to be possible to formalize > > > > > > that proof and to statically check it. [...] > Actually no. When you wrote "Every programmer who writes a program ought > to have a proof that the program is correct in her mind. (If not, fire > her.)" you sounded quite serious to me. I was then. The "joke" was me firing Joe. (I don't think that Joe fits the description of the programmer I would fire -- even if he himself claims otherwise. Not to mention that I have no power over Joe's employment.) > No, you said more than that. You said that "it ought to be possible to > formalize the proof and to statically check it." But the only way to tell > whether that is in fact possible is to actually do it. So either you are > implicitly insisting that this proof be actually constructed and checked > or your position is vacuous. If you're willing to take someone's word for > it that they have a proof in their head then you may just as well take > their word for it that the code works for whatever reason they choose to > have for saying so. The question was whether statically uncheckable programs are useful or not. My assertion is that useful programs must be statically checkable, at least in principle. The point is that this is not a serious restriction because by virtue of the fact that competent programmers *already* have proofs for their programs in their heads, so *in principle* it should be possible to formalize those proofs. The only time no formal proof can be given *in principle* is when there, in fact, is no proof. If there is no proof, we cannot know whether the program is correct. 
I don't consider programs for which I cannot know whether they are correct (even in principle!) useful. Now, the programmer might think he has a proof but, in fact, does not. In that case the attempt of formalizing it would fail -- something that would be relatively easy to detect. So human errors notwithstanding, programmers do reason about their code, and that reasoning COULD be used to either verify correctness or to reject program, reasoning, or both on the basis of the fact that the reasoning was illogical. > And unless you actually do it then all you really know is that she > *thinks* she has a proof in her head (and actually you don't really know > that either, especially if she knows that she will not be expected to > actually produce the proof, and that confessing to not having one will get > her fired). This is true, but besides my point. > And IMO a perfectly legitimate answer to that question is, "Because I ran > it and it worked." To which you will no doubt counter: but how do you > know that it will work the *next* time you run it, or if you run it under > different circumstances than those under which you tested it? Right, that's how I would counter. Your answer is not "perfectly legitimate". > To which my reply will be: how do you know that the exhibited proof > is correct? Because I checked it. > Oh, you're going to run an automatic proof checker on > it? How do you know that the proof checker is correct? > How do you > know that the hardware on which your proof checker runs is correct? > What happens if you get a single-event upset in a processor > register, or a bad byte of RAM? What is the probability of this falsely giving a "proof ok" rather than a core dump? Sure, there has to be a "trusted computing base" just like every logic has a set of axioms which we don't question any further. 
The point is that the trusted computing base can be small: proof checkers are fairly simple programs, and enough inspection, paper-and-pencil reasoning, and, yes, testing, will provide us with the confidence we need. > Thanks for saving me the trouble. Seriously, if you were working for me > and you judged your colleagues incompetent simply because they confessed > to not having a formal proof in their head for the correctness of the code > they had written I would fire you. I did not say "formal proof in their head", please! I said that the proof in their head ought to be formalizable, which is something entirely different. You are right, btw, in that the discussion is becoming increasingly vacuous. Let's forget about the "I would fire her" remark, ok? What I really meant to express with that remark was my belief that every programm actually *does* have a proof of correctness for her program in her head. Otherwise what she is doing amounts to randomly cranking out code without any understanding at all. A monkey at the keyboard. That this is not what's going on was precisely my point: people do reason about their code (albeit informally in most cases, and many of them wouldn't be able to clearly communicate their reasoning). That's why I think that -- in principle -- all programs that people write and that turn out to be actually correct are, in fact, provably correct. Finding and writing down the formal proof in practice, however, is a different story. > I would do so with regret because I > think you're very smart and capable, but I would do it without > hesitation. If you really think that having a formal proof of correctness > trumps all other considerations then IMO you have completely lost sight of > the big picture. Well, glad we cleared that up. I have in no way demanded formal proofs of correctness from each of my co-workers. Sorry for not communicating well enough to make this clear from the beginning. > P.S. 
Suppose your task is to write a typesetting program and one of the
> requirements is that the output look aesthetically pleasing. How would
> you go about proving that your code is correct?

Obviously, this task is not well-defined, so first I would ask the
person who requested the above to specify what he/she means by
"aesthetically pleasing" in concrete, well-defined terms. If I get a
good answer, I work with that. If I don't, I would quit the job.

Matthias

 0
Reply find19 (1245) 10/23/2003 10:00:36 PM

Alex McGuire <alex@alexmcguire.com> writes:

> I'm not sure what you mean by a proof here. Do you mean proof as in a
> formal mathematical proof? Formally proving correctness of programs is
> very difficult, even for a few lines of code, it would not be
> practical for much larger programs.

I know. As I have said many times by now, I was merely talking about
the existence of a formal proof (and therefore, about the theoretical
possibility of producing it).

> A pre-requisite would be a formal
> description of the requirements, which I have never seen from a
> client, nor do I want to.

Indeed, this is one of the major hurdles.

> To clarify things, can you give me a formal
> proof that the following java code correctly sums an array of
> integers?
>
>     public double sumArray(int[] array) {
>         int sum = 0;
>         for (int i = 0; i < array.length; ++i) {
>             sum += array[i];
>         }
>         return sum;
>     }

I think this can be proved fairly easily using Hoare-style logic.
Basically, you show that at the beginning of each iteration of the
loop you have the invariant sum = \sum_{k=0}^{i-1} array[k]. At the
end of each iteration you then have sum = \sum_{k=0}^{i} array[k].
The loop terminates when i = array.length, so at this point we have
sum = \sum_{k=0}^{array.length-1} array[k], which is what we wanted to
prove. Obviously, a truly formal proof is much longer, but it would
merely fill in the gaps that I left in the above...
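[Editor's aside: the invariant argument just sketched can also be checked dynamically rather than statically. A Python sketch that asserts the Hoare-style loop invariant on every iteration; unbounded Python integers stand in for Java's int, so overflow is out of the picture here, and the function name is mine.]

```python
def sum_array_checked(array):
    """Sum an array while checking the loop invariant from the
    Hoare-style argument above at run time."""
    total = 0
    for i in range(len(array)):
        # Invariant at the top of iteration i:
        #   total == array[0] + ... + array[i-1]
        assert total == sum(array[:i])
        total += array[i]
    # On exit the invariant holds with i == len(array),
    # so total is the sum of *all* elements.
    assert total == sum(array)
    return total

assert sum_array_checked([3, 1, 4, 1, 5]) == 14
```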
Matthias

 0
Reply find19 (1245) 10/23/2003 10:08:40 PM

Alex McGuire wrote:
> Matthias Blume wrote:
....
>> No, I wasn't thinking of contemporary type errors. I was thinking of
>> a real proof of correctness, in all glory. The point is that even
>> though we all know that we cannot prove all correct programs correct
>> in general, we can do so for the programs we actually write (which is
>> a proper subset of the set of all correct programs). Anyone who
>> claims his program is correct but it cannot be proven correct must
>> face the question "How do you know?"
>
> I'm not sure what you mean by a proof here. Do you mean proof as in a
> formal mathematical proof? Formally proving correctness of programs is
> very difficult, even for a few lines of code, it would not be
> practical for much larger programs. A pre-requisite would be a formal
> description of the requirements, which I have never seen from a
> client, nor do I want to. To clarify things, can you give me a formal
> proof that the following java code correctly sums an array of integers?
>
>     public double sumArray(int[] array) {
>         int sum = 0;
>         for (int i = 0; i < array.length; ++i) {
>             sum += array[i];
>         }
>         return sum;
>     }

I'm not Matthias, but here's my guess at the sort of thing he might
consider appropriate.

    // Return the sum of all elements in the array,
    // mod 2^32.
    // XXX: Why does this return a double? If the idea
    // is to avoid overflow, why do we accumulate with
    // an int?
    public double sumArray(int[] array) {
        int sum = 0;
        for (int i = 0; i < array.length; ++i) {
            // loop invariant: sum == (sum of array elements with
            // indices < i) mod 2^32
            sum += array[i];
        }
        // on exit from the loop, the invariant holds with
        // i == array.length, so that's the sum of *all*
        // elements.
        return sum;
    }

I'd guess Matthias wouldn't expect to see all that actually embedded in
the code, but he would want the programmer to have a clear enough
understanding that she could provide it quickly and confidently if
required.
Converting that to a really formal proof would be tiresome (depending on how formal "really formal" is taken to be) but easy. I don't agree with Matthias's position, but I wouldn't want to hire someone who *couldn't* provide a correctness (or incorrectness) proof for a piece of code that simple. Would you? Disclaimer: I've written about 20 lines of Java *ever*, so I may have missed things. I wouldn't advise hiring me to write Java without a bit of time in the schedule for me to learn the language (and, more to the point, the libraries) better :-). -- Gareth McCaughan ..sig under construc   0 Reply Gareth 10/23/2003 10:10:50 PM Matthias Blume <find@my.address.elsewhere> writes: > myfirstname.mylastname@jpl.nasa.gov (Erann Gat) writes: > > > In article <m1r813onxg.fsf@tti5.uchicago.edu>, Matthias Blume > > <find@my.address.elsewhere> wrote: > > > > > Joe Marshall <jrm@ccs.neu.edu> writes: > > > .... > > > That's a little draconian. When I write programs I often have no clue > > > > as to what I am doing, let alone a proof that it is correct! > > > > > > You're fired. > > > > Really? > > Relax. This was a joke. (Wasn't that obvious?) > > > While you're looking for someone to replace this person you just fired > > (and rejecting all the applicants who aren't trained in how to produce > > formal proofs of correctness) your competition is iteratively testing and > > refining a product which, while it doesn't have a proof of correctness, > > works well enough from the customer's point of view. So your competition > > gets the business because they have something to ship and you don't. The > > best you can offer is, "Wait! Don't buy their stuff. It might be > > broken. Just wait until we get our HR act together and you can buy *our* > > product which we can *prove* doesn't have any bugs." > > That's not what I said. I said that the programmer has a proof in her > head. (At least she thinks she does.) 
My point was that since she has The problem for you here is that this makes Erann's interpretation down right generous to your position. Really. I'd say noone (including you) has a "proof" in their heads in such circumstances. > a proof, the proof obviously *exists* and *could* be written down and > *could* be statically verified if one only went to the trouble of > doing so. (And again, even this is obviously much easier said than > done.) You can't be serious. Even we take your premise as true (that she _thinks_ she has a proof) this in absolutely no way implies that she does and even less that such a proof exists. Let's see... I _think_ I have a proof (in my head) that you are completely clueless wrt this topic, therefore such a proof "obviously" exists and could be written down. Yep, makes real good sense. /Jon   0 Reply j-anthony (99) 10/23/2003 10:42:07 PM Matthias Blume <find@my.address.elsewhere> writes: > spammers_must_die@jpl.nasa.gov (Erann Gat) writes: > > > P.S. Suppose your task is to write a typesetting program and one of the > > requirements is that the output look aesthetically pleasing. How would > > you go about proving that your code is correct? > > Obviously, this task is not well-defined, so first I would ask the > person who requested the above to specify what he/she means by > "aesthetically pleasing" in concrete, well-defined terms. If I get a > good answer, I work with that. If I don't, I would quit the job. Hmmm. Maybe I actually did have a proof in my head that you were clueless. You've even done the work here of giving a good first draft of writing it out for me. /Jon   0 Reply j-anthony (99) 10/23/2003 10:51:22 PM Matthias Blume <find@my.address.elsewhere> writes: > I bet everyone has experienced the following scenario (I have many > times): You try to figure out some difficult problem, and you are > stumped. So you go to your buddy next office, meaning to ask for help > with the solution. 
And while you are explaining to him what the > problem is in the first place and why you are having difficulties with > it you suddenly go "I got it!" The mere act of carefully explaining > one's own thinking processes to some patient listener, i.e., the act > of putting these processes into words, helps. > > Now, this is exactly my experience with static typing: I often start > out like Joe without a clue of what I am doing. (Of course, I don't > tell my PHB so I don't get fired like Joe just did. :-) What I am > doing at this stage is mostly fiddling with types (think "abstract > interfaces"). In effect, I am trying to explain what I am planning on > doing to the computer. At this stage, no actual interaction with the > machine is necessary -- most of the time I already know what will > typecheck and what won't. The mere act of trying to express myself in > the language of types helps me with crystallizing my thoughts into > something that is workable (and which, by nature of the process, has a > good chance of passing the type checker). I think you know where I stand on static type checking, but to re-iterate to the people that didn't read the argument last time it surfaced.... I welcome every bit of help the computer gives me, and if it can find a problem before I know about it, great! Static type checking is fine with me here. I get a little peeved, however, when the computer complains because it can't figure out whether there is a problem or not. I *really* don't like decorating my code with types. To the extent that a static type checker lets me live with those preferences, I'm all for it. Clearly a lot of brain-dead statically typed languages violate a lot of those.   0 Reply prunesquallor (871) 10/23/2003 11:26:26 PM Dirk Thierbach wrote: > Pascal Costanza <costanza@web.de> wrote: > >>I have given reasons when not to use a static type system in this >>thread. > > > Nobody forces you to use a static type system. 
Languages, with their > associated type systems, are *tools*, and not religions. You use > what is best for the job. _exactly!_ That's all I have been trying to say in this whole thread. Marshall Spight asked http://groups.google.com/groups?selm=MoEkb.821534%24YN5.832338%40sccrnsc01 why one would not want to use a static type system, and I have tried to give some reasons. I am not trying to force anyone to use a dynamically checked language. I am not even trying to convince anyone. I am just trying to say that someone might have very good reasons if they didn't want to use a static type system. >>Please take a look at the Smalltalk MOP or the CLOS MOP and tell >>me what a static type system should look like for these languages! > > > You cannot take an arbitrary language and attach a good static type > system to it. Type inference will be much to difficult, for example. > There's a fine balance between language design and a good type system > that works well with it. Right. As I said before, you need to reduce the expressive power of the language. > If you want to use Smalltalk or CLOS with dynamic typing and unit > tests, use them. If you want to use Haskell or OCaml with static typing > and type inference, use them. None is really "better" than the other. > Both have their advantages and disadvantages. But don't dismiss > one of them just because you don't know better. dito Thank you for rephrasing this in a probably better understandable way. Pascal   0 Reply costanza (1427) 10/23/2003 11:30:09 PM Andreas Rossberg wrote: > Pascal Costanza wrote: > >> >> The cool thing about dynamically typed languages is that you don't >> need to know what you are doing when you start to write a program. You >> gain an understanding of the problem you try to solve during >> development, by just trying out things and see if they work. >> >> Of course, in the end I should have gained a fairly deep understanding >> of the problem, otherwise I have failed. 
But you seem to suggest that >> I shouldn't even start programming before I have gained a complete >> understanding. And in my view this is a waste of resources. > > > Even if you prefer this approach to programming - which definitely is > not suitable for all problem domains - a type system can be very useful > guidance for gaining understanding, so it might actually save resources. Yes, _can_ be. In my case, I feel distracted by a tool that complains about things that are not relevant to my flow of thinking. I have experienced this very often. If you feel that a static type system supports your flow of thinking that's great for you. Just go ahead and use it to your advantage. Pascal   0 Reply costanza (1427) 10/23/2003 11:33:32 PM Paul Wallich wrote: > Without a solid definition of "the program is correct" all of this is > really posturing, and not even interesting posturing at that. Among the > choices: > > the program will do what the customer wanted (ha) > the program will do what the customer asked for (maybe) > the program will do what the req/spec people asked for > the program will conform to the written spec > the program will do what the programmer intended > the program will do what the programmer documented > the program will fail only in certain relatively harmless ways > and a bunch of others > > feasible formal proofs apply only to some of those definitions and not > even in any monotonic fashion All these "choices" have a common theme - you know beforehand what you want. (Apart from the fact that there is always a customer involved - that's also not always the case.) What about the following choice: the program will support the customer in ways they didn't even dream about How would you formalize that? Pascal   0 Reply costanza (1427) 10/23/2003 11:37:14 PM Matthias Blume <find@my.address.elsewhere> writes: > In fact, you should never need to "solve the halting problem" in order > to statically check you program. 
After all, the programmer *already
> has a proof* in her mind when she writes the code! All that's needed
> (:-) is for her to provide enough hints as to what that proof is so
> that the compiler can verify it. (The smiley is there because, as we
> are all painfully aware, this is much easier said than done.)

I'm having trouble proving that MYSTERY returns T for lists of finite
length. I had an idea that it would, but now I'm not sure. Can the
compiler verify it?

    (defun kernel (s i)
      (list (not (car s))
            (if (car s) (cadr s) (cons i (cadr s)))
            (cons 'y (cons i (cons 'z (caddr s))))))

    (defconstant k0 '(t () (x)))

    (defun mystery (list)
      (let ((result (reduce #'kernel list :initial-value k0)))
        (cond ((null (cadr result)))
              ((car result) (mystery (cadr result)))
              (t (mystery (caddr result))))))

 0
Reply prunesquallor (871) 10/23/2003 11:38:26 PM

Matthias Blume wrote:

> In any case, what I tried to express was that even though there are
> customers who might not always be on top of things as far as
> expectations go (sorry having offended you with the "clueless"
> tongue-in-cheek remark), and even though requirements as well as
> specifications are often either imprecise or self-contradictory or
> both, this is NO EXCUSE for a programmer to not think about (and prove
> to herself) the correctness of the code she writes. I think what I am
> asking here is fairly modest.
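[Editor's aside on Joe's MYSTERY function above: it appears to depend only on the *length* of its argument. After the REDUCE, the CADR of the state holds n // 2 elements, the CADDR holds 3n + 1, and the CAR flag records whether n is even, so the recursion mirrors the Collatz (3n+1) iteration, and "MYSTERY returns T for every finite list" would be equivalent to the open Collatz conjecture, which is presumably the joke. A Python sketch of that reading; this interpretation is mine, not anything stated in the thread.]

```python
def mystery_length(n):
    """What MYSTERY appears to compute, at the level of list lengths
    (n = length of the input list). Treat this reading as a conjecture."""
    if n // 2 == 0:              # (null (cadr result)): n is 0 or 1
        return True
    if n % 2 == 0:               # (car result): the length is even
        return mystery_length(n // 2)
    return mystery_length(3 * n + 1)

# Returns True for every length anyone has checked, but proving it for
# *all* n is exactly the open Collatz conjecture -- no compiler can be
# expected to verify termination here.
assert all(mystery_length(n) for n in range(1000))
```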
And while you are explaining to him what the > problem is in the first place and why you are having difficulties with > it you suddenly go "I got it!" The mere act of carefully explaining > one's own thinking processes to some patient listener, i.e., the act > of putting these processes into words, helps. > > Now, this is exactly my experience with static typing: I often start > out like Joe without a clue of what I am doing. (Of course, I don't > tell my PHB so I don't get fired like Joe just did. :-) What I am > doing at this stage is mostly fiddling with types (think "abstract > interfaces"). In effect, I am trying to explain what I am planning on > doing to the computer. At this stage, no actual interaction with the > machine is necessary -- most of the time I already know what will > typecheck and what won't. The mere act of trying to express myself in > the language of types helps me with crystallizing my thoughts into > something that is workable (and which, by nature of the process, has a > good chance of passing the type checker). That's great if it works for you. Go ahead, keep it up. (I don't mean this sarcastically!) But why on earth do you want to _force_ anybody else to use the same approach when it might not work for everyone? You have just described a creative process. Creative processes are _by definition_ not formal. The important stuff happens exactly in the moment when you go "eureka". Everyone has their own preferred approach to make this happen. All this nonsense about proof of program correctness, avoiding certain classes of program errors, achieving efficiency, and so on, are just posteriori rationalizations of what is essentially an irrational process. I think what you really gain when you use a static type system is a certain perspective on the problem you are trying to solve. And this is exactly what helps in solving problems: gaining different perspectives. 
The perspective you describe is just the perspective you feel most
comfortable with, not more and not less. And if some of your
assumptions of a program hold from different perspectives it makes you
feel more convinced that you have found a right solution. But _no
approach whatsoever guarantees correctness_. For people who prefer
dynamic type systems, test suites work exactly the same way - they are
just another perspective on the same problem.

There is another important fact that you should consider: You probably
know a lot of people who immediately agree with your point of view,
and are very happy to affirm to you, and to themselves, that the
approach you describe is really the best way to develop software. But
_exactly the same thing_ happens for people who like the approach that
better goes along with dynamically checked languages. They also know
quite a lot of people who affirm them and each other.

I see this as clear evidence that there are just different programming
styles that fit different types of people. (Of course, it is an open
question what is best when you have a team of programmers - should
they better be a homogeneous or a heterogeneous group. I don't know.)

Pascal

 0
Reply costanza (1427) 10/24/2003 12:03:34 AM

Matthias Blume <find@my.address.elsewhere> writes:

> Alex McGuire <alex@alexmcguire.com> writes:

[snip]

> > To clarify things, can you give me a formal proof that the
> > following java code correctly sums an array of integers?
> >
> > public double sumArray(int[] array){
> >     int sum = 0;
> >     for (int i = 0; i < array.length; ++i){
> >         sum += array[i];
> >     }
> >     return sum;
> > }
>
> I think this can be proved fairly easily using Hoare-style logic.
> Basically, you show that at the beginning of each iteration of the
> loop you have the invariant sum = \sum_{k=0}^{i-1} array[k]. At the
> end of each iteration you then have sum = \sum_{k=0}^{i} array[k].
> The loop terminates when i = array.length, so at this point we have
> sum = \sum_{k=0}^{array.length-1} array[k] which is what we wanted
> to prove.
>
> Obviously, a truly formal proof is much longer, but it would merely
> fill in the gaps that I left in the above...

Uh, unless sum overflows. Better check that proof again.

-Peter

--
Peter Seibel
peter@javamonkey.com

  Lisp is the red pill. -- John Fraser, comp.lang.lisp

 0
Reply peter9330 (968) 10/24/2003 12:07:56 AM

Andreas Rossberg wrote:
> Pascal Costanza wrote:
>>> My copy,
>>>
>>> http://research.microsoft.com/Users/luca/Papers/TypeSystems.A4.pdf
>>>
>>> on page 3 defines safety as orthogonal to typing in the way
>>> Matthias suggested.
>>
>> Yes, but it says dynamically typed vs statically typed where
>> Matthias says untyped vs typed.
>
> Huh? On page 2 Cardelli defines typed vs. untyped. Table 1 on page 5
> clearly identifies Lisp as an untyped (but safe) language. He also
> speaks of statical vs. dynamical _checking_ wrt safety, but where do
> you find a definition of dynamic typing?

Hmm, maybe I was wrong. I will need to check that again - it was some
time ago that I have read the paper. Oh dear, I am getting old. ;)

Thanks for pointing this out.

Pascal

 0
Reply costanza (1427) 10/24/2003 12:14:53 AM

In article <m165ifobqz.fsf@tti5.uchicago.edu>, Matthias Blume
<find@my.address.elsewhere> wrote:

> The only time no formal proof can be given *in principle* is when
> there, in fact, is no proof. If there is no proof, we cannot know
> whether the program is correct. I don't consider programs for which I
> cannot know whether they are correct (even in principle!) useful.

I doubt that very much. You're posting to usenet, which means you are
making use of a significant software infrastructure, which means that
you ipso facto find it useful. I doubt very much that you could prove
the software infrastructure correct. (I doubt very much that it *is*
correct.)
I am absolutely certain that you do not know that it is correct, and that you never will. I am also quite certain that you will continue to use it (and therefore judge it useful) regardless. > Now, the programmer might think he has a proof but, in fact, does not. > In that case the attempt of formalizing it would fail -- something > that would be relatively easy to detect. So human errors > notwithstanding, programmers do reason about their code, and that > reasoning COULD be used to either verify correctness or to reject > program, reasoning, or both on the basis of the fact that the > reasoning was illogical. Yes, but the only way to know whether or not this is possible in principle is to actually do it in practice. There are no non-constructive proofs of the existence of a proof. > > To which my reply will be: how do you know that the exhibited proof > > is correct? > > Because I checked it. How do you know that you didn't make a mistake when you checked it? > > Oh, you're going to run an automatic proof checker on > > it? How do you know that the proof checker is correct? > > How do you > > know that the hardware on which your proof checker runs is correct? > > What happens if you get a single-event upset in a processor > > register, or a bad byte of RAM? > > What is the probability of this falsely giving a "proof ok" rather > than a core dump? First, what difference does that make? I thought you were arguing for the desirability of proofs in an absolute sense, not a probabilistic one. If you are arguing probabilistically then we have to compare apples and apples and ask how much effort you have to put into a proof to gain a certain confidence in its correctness and compare that to how much effort you have to put into empirical testing to gain the same level of correctness. Second, I'll bet you that even a "proven" correct program would not produce the expected results when run on a Pentium with the fdiv bug. 
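The empirical-testing alternative raised in the preceding paragraph can be made concrete. The following Python sketch (hypothetical, not from the thread) builds probabilistic confidence in a summing loop of the kind discussed elsewhere in this thread, by checking it against an independent oracle on many random inputs rather than by proof; note that Python's unbounded integers mean the overflow objection does not arise here:

```python
import random

def sum_array(xs):
    """Loop-and-accumulate summation, in the style of the thread's sumArray."""
    total = 0
    for x in xs:
        total += x
    return total

# Empirical confidence instead of a Hoare-style proof: compare against
# an independent oracle (the builtin sum) on many random inputs.
random.seed(0)
for _ in range(1000):
    xs = [random.randint(-10**6, 10**6) for _ in range(random.randint(0, 50))]
    assert sum_array(xs) == sum(xs)
```

Passing such a check does not prove correctness; it only bounds the probability of certain classes of error, which is exactly the trade-off under discussion.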
> Sure, there has to be a "trusted computing base" just like every logic > has a set of axioms which we don't question any further. The point is > that the trusted computing base can be small: proof checkers are > fairly simple programs, But they run on very complicated hardware. And they are compiled by very complicated compilers, at least if you want them to run fast. > I did not say "formal proof in their head", please! I said that the > proof in their head ought to be formalizable, which is something > entirely different. I do not see it as entirely different. The only way to know if a proof in someone's head is formalizable is to formalize it. Maybe they don't need to do this in their head, but they do have to do it. Otherwise you're just blowing smoke. > You are right, btw, in that the discussion is becoming increasingly > vacuous. Let's forget about the "I would fire her" remark, ok? Fine. > What > I really meant to express with that remark was my belief that every > programm actually *does* have a proof of correctness for her program > in her head. Otherwise what she is doing amounts to randomly cranking > out code without any understanding at all. A monkey at the keyboard. There is a whole branch of research in evolutionary programming that uses precisely that technique. In fact, some biologists are starting to look at biological systems in computational terms. No one proved your DNA correct, but it seems to get the job done nonetheless. > That this is not what's going on was precisely my point: people do > reason about their code (albeit informally in most cases, and many of > them wouldn't be able to clearly communicate their reasoning). That's > why I think that -- in principle -- all programs that people write and > that turn out to be actually correct are, in fact, provably correct. And I'm saying that you're wrong. You are wrong when you say that this is the case, and you are wrong when you say (or imply) that this ought to be the case. 
> Finding and writing down the formal proof in practice, however, is a > different story. Indeed. But unless one does write down the formal proof in practice, what is the point? Is there any content in your position beyond simply saying that all else being equal it is better to think clearly about a problem than not? I'll agree with that, but it doesn't strike me as a particularly noteworthy observation. > > P.S. Suppose your task is to write a typesetting program and one of the > > requirements is that the output look aesthetically pleasing. How would > > you go about proving that your code is correct? > > Obviously, this task is not well-defined It is perfectly well defined, it's just defined in terms that are not logical but rather psychological. There are people who make their living (indeed an entire industry devoted to) solving this problem. You will obviously not be among them. E.   0 Reply 10/24/2003 12:37:29 AM myfirstname.mylastname@jpl.nasa.gov (Erann Gat) wrote in message news:<myfirstname.mylastname-2310030857350001@192.168.1.51>... > > No. The fallacy in this reasoning is that you assume that "type error" > and "bug" are the same thing. They are not. Some bugs are not type > errors, and some type errors are not bugs. In the latter circumstance > simply ignoring them can be exactly the right thing to do. Just to be clear, I do not believe "bug" => "type error". However, I do claim that "type error" (in reachable code) => "bug". If at some point a program P' (in L') may eventually abort with an exception due to an ill typed function application then I would insist that P' is buggy. 
Here's the way I see it:

(1) type errors are extremely common;

(2) an expressive, statically checked type system (ESCTS) will
identify almost all of these errors at compile time;

(3) type errors flagged by a compiler for an ESCTS can pinpoint the
source of the problem whereas ad hoc assertions in code will only
identify a symptom of a type error;

(4) the programmer does not have to litter type assertions in a
program written in a language with an ESCTS;

(5) an ESCTS provides optimization opportunities that would otherwise
be unavailable to the compiler;

(6) there will be cases where the ESCTS requires one to code around a
constraint that is hard/impossible to express in the ESCTS (the more
expressive the type system, the smaller the set of such cases will
be.)

The question is whether the benefits of (2), (3), (4) and (5) outweigh
the occasional costs of (6).

-- Ralph

 0
Reply rafe (28) 10/24/2003 12:47:14 AM

Pascal Costanza wrote:
> Joachim Durchholz wrote:
>> Pascal Costanza wrote:
>>> For example, static type systems are incompatible with dynamic
>>> metaprogramming. This is objectively a reduction of expressive
>>> power, because programs that don't allow for dynamic
>>> metaprogramming can't be extended in certain ways at runtime, by
>>> definition.
>>
>> What is dynamic metaprogramming?
>
> Writing programs that inspect and change themselves at runtime.

That's just the first part of the answer, so I have to make the second
part of the question explicit: What is dynamic metaprogramming good
for?

I looked into the papers that you gave the URLs on later, but I'm
still missing a compelling reason to use MOP. As far as I can see from
the papers, MOP is a bit like pointers: very powerful, very dangerous,
and it's difficult to envision a system that does the same without the
power and danger but such systems do indeed exist. (For a summary,
scroll to the end of this post.)
Just to enumerate the possibilities in the various URLs given:

- Prioritized forwarding to components
(I think that's a non-recommended technique, as it either makes the
compound object highly dependent on the details of its constituents,
particularly if a message is understood by many constituents - but
anyway, here goes:)
Any language that has good support for higher-order functions can do
this directly.

- Dynamic fields
Frankly, I don't understand why on earth one would want to have
objects with a variant set of fields. I could do the same easily by
adding a dictionary to the objects, and be done with it (and get the
additional benefit that the dictionary entries will never collide with
a field name). Conflating the name spaces of field names and
dictionary keys might offer some syntactic advantages (callers don't
need to differentiate between static and dynamic fields), but I fail
to imagine any good use for this all... (which may, of course, be lack
of imagination on my side, so I'd be happy to see anybody explain a
scenario that needs exactly this - and then I'll try to see how this
can be done without MOP *g*).

- Dynamic protection (based on sender's class/type)
This is a special case of "multiple views" (implement protection by
handing out a view with a restricted subset of functions to those
classes - other research areas have called this "capability-based
programming").

- Multiple views
Again, in a language with proper handling for higher-order functions
(HOFs), this is easy: a view is just a record of accessor functions,
and a hidden reference to the record for which the view holds. (If you
really need that.) Note that in a language with good HOF support,
calls that go through such records are syntactically indistinguishable
from normal function calls. (Such languages do exist; I know for sure
that this works with Haskell.)
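The claim that a view is just a record of accessor functions over hidden state is easy to make concrete in any language with first-class functions. A minimal Python sketch (the counter object and the view names are invented for illustration):

```python
def make_counter():
    """Closures over hidden state; views are records (dicts) of accessors."""
    state = {"count": 0}

    def increment():
        state["count"] += 1
        return state["count"]

    def read():
        return state["count"]

    full_view = {"increment": increment, "read": read}
    # Capability-style protection: hand this out to callers that may
    # observe the counter but must not mutate it.
    readonly_view = {"read": read}
    return full_view, readonly_view

full, readonly = make_counter()
full["increment"]()
assert readonly["read"]() == 1
assert "increment" not in readonly  # the restricted view cannot mutate
```

The restricted view carries no reference to the mutator at all, so the protection needs no runtime class checks and no MOP.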
- Protocol matching
I simply don't understand what's the point with this: yes of course
this can be done using MOP, but where's the problem that's being
simplified with that approach?

- Collection of performance data
That's nonportable anyway, so it can be built right into the runtime,
and with less gotchas (if measurement mechanisms are integrated into
the runtime, they will rather break than produce bogus data - and I
prefer a broken instrument to one that will silently give me nonsense
readings, thank you).

- Result caching
Languages with good HOF support usually have a "memo" or "memoize"
function that does exactly this.

- Coercion
Well, of all things, this really doesn't need MOP to work well.

- Persistency (and, as the original author forgot: network proxies -
the issues are similar)
Now here's a thing that indeed cannot be retrofitted to a language
without MOP. (Well, performance counting can't be retrofitted as well,
but that's just a programmer's tool that I'd /expect/ to be part of
the development system. I have no qualms about MOP in the developer
system, but IMHO it should not be part of production code, and
persistence and proxying for remote objects are needed for running
productive systems.)

For the first paper, this leaves me with a single valid application
for a MOP. At which point I can say that I can require that "any
decent language should have this built in": not in the sense that
every run-time system should include a working TCP/IP stack, but that
every run-time system should include mechanisms for marshalling and
unmarshalling objects (and quite many do).

On to the second paper (Brant/Foote/Johnson/Roberts).

- Image stripping
I.e. finding out which functions might be called by a given
application.
While this isn't Smalltalk-specific, it's specific to dynamic
languages, so this doesn't count: finding the set of called functions
is /trivial/ in a static language, since statically-typed languages
don't usually offer ways to construct function calls from lexical
elements as typical dynamic languages do.

- Class collaboration, interaction diagrams
Useful and interesting tools. Of course, if the compiler is properly
modularized, it's easy to write them based on the string
representation, instead of using reflective capabilities.

- Synchronized methods, pre/postcondition checking
Here, the sole advantage of having an implementation in source code
instead of in the run-time system seems to be that no recompilation is
necessary if one wishes to change the status (method is synchronized
or not, assertions are checked or not). Interestingly, this is not a
difference between MOP and no MOP, it's a difference between static
and dynamic languages.

Even that isn't too interesting. For example, I have worked with
Eiffel compilers, and at least two of them do not require any
recompilation if you want to enable or disable assertion checking
(plus, at least for one compiler, it's possible to switch checking on
and off on a per-program, per-class, or even per-function basis), so
this isn't the exclusive domain of dynamic languages. Of course, such
things are easier to add as an afterthought if the system is dynamic
and such changes can be done with user code - but since language and
run-time system design are as much about giving power as guarantees to
the developer, and giving guarantees necessarily entails restricting
what a developer can do, I'm entirely unconvinced that a dynamic
language is the better way to do that.

- Multimethods
Well, I don't see much value in them anyway...

....

On to Andreas Paepcke's paper. I found it more interesting than the
other two because it clearly spells out what MOPs are intended to be
good for.
One of the main purposes, in Paepcke's view, is making it easier to write tools. In fact reflective systems make this easier, because all the tricky details of converting source code into an internal data object have already been handled by the compiler. On the other hand, I don't quite see why this should be more difficult for a static language. Of course, if the language designer "just wanted to get it to compile", anybody who wants to write tools for the language has to rewrite the parser and decorator, simply because the original tools are not built for separating these phases (to phrase it in a polite manner). However, in the languages where it's easy to "get it to compile" without compromising modularity, I have seen lots of user-written tools, too. I think the main difference is that when designing a run-time system for introspection, designers are forced to do a very modular compiler design - which is a Good Thing, but you can do a good design for a non-introspective language just as well :-) In other words, I don't think that writing tools provides enough reason for introspection: the goals can be attained in other ways, too. The other main purpose in his book is the ability to /extend/ the language (and, as should go without saying, without affecting code that doesn't use the extensions). He claims it's good for experimentation (to which I agree, but I wouldn't want or need code for language experimentation in production code). Oh, I see that's already enough of reasons by his book... not by mine. Summary: ======== Most reasons given for the usefulness of a MOP are irrelevant. 
The categories here are (in no particular order):

* Unneeded in a language without introspection (the argument becomes
circular)
* Easily replaced by good higher-order function support
* Programmer tools (dynamic languages tend to be better here, but
that's more of a historical accident: languages with a MOP are usually
highly dynamic, so a good compiler interface is a must - but nothing
prevents the designers of static languages from building their
compilers with a good interface, and in fact some static languages
have rich tool cultures just like the dynamic ones)

A few points have remained open, either because I misunderstood what
the respective author meant, or because I don't see any problem in
handling the issues statically, or because I don't see any useful
application of the mechanism. The uses include:

* Dynamic fields
* Protocol matching
* Coercion

And, finally, there's the list of things that can be done using MOP,
but where I think that they are better handled as part of the run-time
system:

* (Un-)Marshalling
* Synchronization
* Multimethods

For (un-)marshalling, I think that this should be closed off and
hidden from the programmer's powers because it opens up all the
implementation details of all the objects. Anybody inspecting source
code will have to check the entire sources to be sure that a private
field in a record is truly private, and not accessed via the
mechanisms that make user-level implementation of (un-)marshalling
possible. Actually, all you need is a builtin pair of functions that
convert some data object from and to a byte stream; user-level code
can then still implement all the networking protocol layers,
connection semantics etc.

For synchronization, guarantees are more important than flexibility.
To be sure that a system has no race conditions, I must be sure that
the locking mechanism in place (whatever it is) will work across all
modules, regardless of author.
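A uniform, language-provided locking mechanism of the kind argued for here can be sketched with Python's standard threading.Lock; the Account class is a made-up example, not something from the thread:

```python
import threading

class Account:
    """Every mutation goes through one lock - the same strategy for all callers."""
    def __init__(self):
        self._lock = threading.Lock()
        self._balance = 0

    def deposit(self, amount):
        with self._lock:  # one locking discipline, shared by every module
            self._balance += amount

    def balance(self):
        with self._lock:
            return self._balance

account = Account()

def worker():
    for _ in range(1000):
        account.deposit(1)

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert account.balance() == 8000  # deterministic under the shared lock
```

Because the lock lives in the standard run-time library rather than in a user-written MOP layer, every library that touches the object necessarily uses the same strategy.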
Making libraries interoperate that use different locking strategies sounds like a nightmare to me - and if everybody must use the same locking strategy, it should be part of the language, not part of a user-written MOP library. However, that's just a preliminary view; I'd be interested in hearing reports from people who actually encountered such a situation (I haven't, so I may be seeing problems where there aren't any). For multimethods, I don't see that they should be part of a language anyway - but that's a discussion for another thread that I don't wish to repeat now (and this post is too long already). Rambling mode OFF. Regards, Jo   0 Reply joachim.durchholz (563) 10/24/2003 1:30:43 AM Pascal Costanza wrote: > Joachim Durchholz wrote: > >> Pascal Costanza wrote: >> >>> See the example of downcasts in Java. >> >> Please do /not/ draw your examples from Java, C++, or Eiffel. Modern >> static type systems are far more flexible and powerful, and far less >> obtrusive than the type systems used in these languages. > > This was just one obvious example in which you need a workaround to make > the type system happy. There exist others. Then give these examples, instead of presenting us with strawman examples. >> A modern type system has the following characteristics: > > I know what modern type systems do. Then I don't understand your point of view. Regards, Jo   0 Reply joachim.durchholz (563) 10/24/2003 1:31:49 AM Pascal Costanza wrote: > Matthias Blume wrote: > >> Pascal Costanza <costanza@web.de> writes: >> >> >>>> There are also programs which I cannot express at all in a purely >>>> dynamically typed language. (By "program" I mean not only the >>>> executable >>>> code itself but also the things that I know about this code.) >>>> Those are the programs which are protected against certain bad things >>>> from happening without having to do dynamic tests to that effect >>>> themselves. >>> >>> This is a circular argument. 
You are already suggesting the solution >>> in your problem description. >> >> Is it? Am I? Is it too much to ask to know that the invariants that >> my code relies on will, in fact, hold when it gets to execute? > > Yes, because the need might arise to change the invariants at runtime, > and you might not want to stop the program and restart it in order just > to change it. Then it's not an invariant. Or the invariant is something like "foo implies invariant_1 and not foo implies invariant_2", where "foo" is the condition that changes over the lifetime of the object. Invariants are, by definition, the properties of an object that will always hold. Or are you talking about system evolution and maintenance? That would be an entirely new aspect in the discussion, and you should properly forewarn us so that we know for sure what you're talking about. Regards, Jo   0 Reply joachim.durchholz (563) 10/24/2003 1:36:15 AM Pascal Costanza wrote: > In my case, I feel distracted by a tool that complains about things that > are not relevant to my flow of thinking. I have experienced this very > often. Which tools were that? Regards, Jo   0 Reply joachim.durchholz (563) 10/24/2003 1:38:38 AM Pascal Costanza wrote: > Matthias Blume wrote: > >> [Snip] I think what I am >> asking here is fairly modest. > > No, you are asking for more. You are asking for the proof to be > automatically executable. A programmer who writes code that's too complicated for automated reasoning will, in 99% of all cases, have written code that's too complicated for others to understand. And, for that matter, it will also be too complicated for himself to understand. (I have seen such code. Some of it was written by myself.) In practice, whenever code is written, the programmer should always be able to explain why and how his code works. The reasoning used in such explanations is generally simple (unless the programmer just invented a new algorithm - something that isn't done very often nowadays). 
The reasoning is in fact so simple that even an automatic inference engine should be able to reproduce it without help from the programmer. (How many of your loops and recursions go beyond iterating over a precomputed collection? Not many, I'd guess, unless you're routinely using some /very/ unusual patterns - and iterating over a collection would be easy enough to program as a heuristic into any theorem prover.) Regards, Jo   0 Reply joachim.durchholz (563) 10/24/2003 1:44:34 AM Pascal Costanza <costanza@web.de> writes: > No, you are asking for more. You are asking for the proof to be > automatically executable. Would people kindly stop telling me what I am asking for? Thank you.   0 Reply find19 (1245) 10/24/2003 2:14:10 AM spammers_must_die@jpl.nasa.gov (Erann Gat) writes: > It is perfectly well defined, it's just defined in terms that are not > logical but rather psychological. I don't think it is well-defined at all. Ask n people and you get n answers. That's not "well-defined". > There are people who make their living (indeed an entire industry > devoted to) solving this problem. I know. The point is that one can never say the program is "correct" with respect to the requirement of having the typesetting be aesthetical. One can, maybe, make statements like "the majority of our customers seems to be satisfied with the results". But that's not what "correctness" is about. > You will obviously not be among them. Indeed, I will not. But that's more because I'm not very good at arts. Matthias   0 Reply find19 (1245) 10/24/2003 2:26:33 AM j-anthony@rcn.com (Jon S. Anthony) writes: > Hmmm. Maybe I actually did have a proof in my head that you were > clueless. You've even done the work here of giving a good first draft > of writing it out for me. Glad to see some really coherent, intelligent contributions to this discussion. Thanks!   
 0
Reply find19 (1245) 10/24/2003 2:29:00 AM

Peter Seibel <peter@javamonkey.com> writes:

> Matthias Blume <find@my.address.elsewhere> writes:
>
> > Alex McGuire <alex@alexmcguire.com> writes:
>
> [snip]
>
> > > To clarify things, can you give me a formal proof that the
> > > following java code correctly sums an array of integers?
> > >
> > > public double sumArray(int[] array){
> > >     int sum = 0;
> > >     for (int i = 0; i < array.length; ++i){
> > >         sum += array[i];
> > >     }
> > >     return sum;
> > > }
> >
> > I think this can be proved fairly easily using Hoare-style logic.
> > Basically, you show that at the beginning of each iteration of the
> > loop you have the invariant sum = \sum_{k=0}^{i-1} array[k]. At the
> > end of each iteration you then have sum = \sum_{k=0}^{i} array[k].
> > The loop terminates when i = array.length, so at this point we have
> > sum = \sum_{k=0}^{array.length-1} array[k] which is what we wanted
> > to prove.
> >
> > Obviously, a truly formal proof is much longer, but it would merely
> > fill in the gaps that I left in the above...
>
> Uh, unless sum overflows. Better check that proof again.

Indeed. Once you sit down and do every step, even the trivial-looking
ones, you find such bugs. Ok, so the above code is, in fact, not
correct. (This means that one should better not be able to prove it!)

Anyway, assuming Alex was not asking a trick question and truly wanted
to know how one goes about proving things like the one he asked
(provided they are actually true), let's modify his "theorem" to "...
correctly sums an array of integers modulo 2^32." (I'm assuming 32-bit
integers here.)

If we are talking about SML code we could look at, say,

  fun sumList l =
      let fun sl ([], sum) = sum
            | sl (h :: t, sum) = sl (t, h + sum)
      in sl (l, 0) end

and try to prove "If sumList returns, then the result is the sum of
the integers in the argument list." (Notice the "if ... returns"
condition which will not be satisfied on overflow -- which is what
makes this go through.) This (or the modified Java statement -- unless
I'm overlooking yet another pitfall) is easily provable using, e.g.,
the technique that I outlined.

Matthias

 0
Reply find19 (1245) 10/24/2003 2:39:04 AM

j-anthony@rcn.com (Jon S. Anthony) writes:

> You can't be serious. Even if we take your premise as true (that she
> _thinks_ she has a proof) this in absolutely no way implies that she
> does and even less that such a proof exists. Let's see... I _think_ I
> have a proof (in my head) that you are completely clueless wrt this
> topic, therefore such a proof "obviously" exists and could be written
> down. Yep, makes real good sense.

No, my claim is: For every correct program written by a human there is
a correctness proof. In other words, I find it unlikely that someone
writes a correct program, but there actually is no such proof. People
do reason about the programs they write, and usually they are not too
far off from the truth -- especially if they actually got the code
right.

Your attempt at insulting me is cute, but it has little to do with
what I said.

Matthias

 0
Reply find19 (1245) 10/24/2003 3:00:19 AM

Pascal Costanza wrote:
> Paul Wallich wrote:
>> Without a solid definition of "the program is correct" all of this
>> is really posturing, and not even interesting posturing at that.
>> Among the choices:
>>
>> the program will do what the customer wanted (ha)
>> the program will do what the customer asked for (maybe)
>> the program will do what the req/spec people asked for
>> the program will conform to the written spec
>> the program will do what the programmer intended
>> the program will do what the programmer documented
>> the program will fail only in certain relatively harmless ways
>> and a bunch of others
>>
>> feasible formal proofs apply only to some of those definitions and
>> not even in any monotonic fashion
>
> All these "choices" have a common theme - you know beforehand what
> you want. (Apart from the fact that there is always a customer
> involved - that's also not always the case.)
>
> What about the following choice:
>
> the program will support the customer in ways they didn't even dream
> about
>
> How would you formalize that?

That's in the first one. The customer doesn't have to know they want
it until they see it. (In fact, if you look at a lot of software
currently being sold or distributed, the customer doesn't know it does
what they want until they have been convinced painfully and at great
length. But perhaps that's not what you intended.)

In some ways it's easier to prove your version of correctness, because
it doesn't require nearly as rigorous a semantics of what the customer
believes they want before the program is written...

Practically speaking, of course, I'm on the side of very limited
definitions of "correct" that entail a clear understanding that
"correct" is not always or even often a useful descriptor.

paul

 0
Reply pw38 (127) 10/24/2003 3:46:55 AM

Joachim Durchholz <joachim.durchholz@web.de> writes:

> And, finally, there's the list of things that can be done using MOP,
> but where I think that they are better handled as part of the
> run-time system:
> * (Un-)Marshalling
> * Synchronization
> * Multimethods

The MOP is an interface to the run-time system for common object
services.
I do not understand your position that these would be better handled by the run-time. > For (un-)marshalling, I think that this should be closed off and > hidden from the programmer's powers because it opens up all the > implementation details of all the objects. What if I want to (un-)marshall from/to something besides a byte stream, such as an SQL database? I don't want one of the object services my system depends on to be so opaque because a peer thought I would be better off that way. Then again, I have never understand the desire to hide things in programming languages. > Anybody inspecting source code will have to check the entire sources > to be sure that a private field in a record is truly private, and > not accessed via the mechanisms that make user-level implementation > of (un-)marshalling possible. If you look at the MOP in CLOS, you can use the slot-value-using-class method to ensure that getting/setting the slot thru any interface will trigger the appropriate code. It does not matter, private, public, wether they use SLOT-VALUE or an accessor. This is also useful for transaction mgmt. The MOP is an interface to the run-time's object services. -- Sincerely, Craig Brozefsky <craig@red-bean.com> No war! No racist scapegoating! No attacks on civil liberties! Chicago Coalition Against War & Racism: www.chicagoantiwar.org   0 Reply craig8588 (5) 10/24/2003 4:22:49 AM Pascal Bourguignon fed this fish to the penguins on Thursday 23 October 2003 11:33 am: > > The only untyped languages I know are assemblers. (ISTR that even > intercal can't be labelled "untyped" per se). > > Are we speaking about assembler here? > REXX might qualify (hmmm, I think DCL is also untyped). notstring = 5 string = "5" what = string + notstring when = notstring + string say "5 + '5' (notstring + string)" when say "'5' + 5 (string + notstring)" what s. = "empty" s.string = 2.78 n. = 3.141592654 n.notstring = "Who?" 
say "s.5" s.5
say "s.'5'" s."5"
say "s.1" s.1
say "s.'2'" s."2"
say "s.string" s.string
say "s.notstring" s.notstring
say "n.string" n.string
say "n.notstring" n.notstring

[wulfraed@beastie wulfraed]$ rexx t.rx
5 + '5' (notstring + string) 10
'5' + 5 (string + notstring) 10
s.5 2.78
s.'5' empty5
s.1 empty
s.'2' empty2
s.string 2.78
s.notstring 2.78
n.string Who?
n.notstring Who?

Apparently literal strings are not allowed in the stem look-up,
resulting in the stem default of empty followed by the concatenated
literal.
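For contrast with REXX's silent coercion, a dynamically typed language
with stronger runtime type tags rejects the same mixed-type addition
outright (a Python sketch, not part of the original post):

```python
# REXX quietly evaluates "5" + 5 to 10; Python's runtime type tags
# refuse the mixed-type addition and demand an explicit conversion.
try:
    result = "5" + 5          # raises TypeError: str + int is undefined
except TypeError:
    result = int("5") + 5     # explicit conversion is required
print(result)                 # 10
```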

--
> ============================================================== <
>   wlfraed@ix.netcom.com  | Wulfraed  Dennis Lee Bieber  KD6MOG <
>      wulfraed@dm.net     |       Bestiaria Support Staff       <
> ============================================================== <


 0
Reply wlfraed (4456) 10/24/2003 5:10:00 AM

Matthias Blume <find@my.address.elsewhere> writes:

> tfb@famine.OCF.Berkeley.EDU (Thomas F. Burdick) writes:
>
> > amazing system that allows you to prove arbitrary properties of your
> > code, and to specify what "correct" means in a way that's not just
> > another programming language (which would then need to be proven
> > correct using ... ?)
>
> The system: mathematics in general, logic in particular.
> Correctness: Certain statements (depending on the problem domain) which
>    I want to hold true for my programs.

So it's "correctness" that's broken.  If we're talking about
applications, not just sorting algorithms, at least.  In every logical
system I've seen, any specification for a program's behavior
will be so complex that you've just recreated the problem: so your program
has certain properties, but how do you know that what you wrote means
what you think it did?

> Another programming language: To some degree, yes, logic is "another
>    programming language".

In fact, to such a degree that the solution is just as bad as the
problem.

> I think what I am asking here is fairly modest.  Shame on you for
> calling me names for it!

What you've been asking for seems to have become a moving target, but
in the post I responded to, it sure as hell wasn't modest; it was
phrased in an insulting manner, too.  Shame on you.

(And I got no shame, anyway)

--
/|_     .-----------------------.
,'  .\  / | No to Imperialist war |
,--'    _,'   | Wage class war!       |
/       /      -----------------------'
(   -.  |
|     ) |
(-.  '--.)
. )----'

 0
Reply tfb3 (483) 10/24/2003 5:57:05 AM

spammers_must_die@jpl.nasa.gov (Erann Gat) writes:

> And IMO a perfectly legitimate answer to that question is, "Because I ran
> it and it worked."  To which you will no doubt counter: but how do you
> know that it will work the *next* time you run it, or if you run it under
> different circumstances than those under which you tested it?  To which my
> reply will be: how do you know that the exhibited proof is correct?  Oh,
> you're going to run an automatic proof checker on it?  How do you know
> that the proof checker is correct?
[ snip ]

That last one's easy -- there are nice logical systems for which proof
checkers are *really* easy to write.  So you can bootstrap your way
up to a useful one, with the base case being a hand-generated and
hand-checked proof.  The *far* more difficult question is: how do you
know that your logical specification means what you think it does?
That's essentially the initial question: how do you know your program
is correct?  "Because I ran it and it worked."

--
/|_     .-----------------------.
,'  .\  / | No to Imperialist war |
,--'    _,'   | Wage class war!       |
/       /      -----------------------'
(   -.  |
|     ) |
(-.  '--.)
. )----'

 0
Reply tfb3 (483) 10/24/2003 6:02:16 AM

Matthias Blume <find@my.address.elsewhere> writes:
> No, my claim is: For every correct program written by a human there is
> a correctness proof.  In other words, I find it unlikely that someone
> writes a correct program, but there actually is no such proof.

There is room for Goedel here somewhere. [Hmm, see "Gödel on Net" at
http://www.sm.luth.se/~torkel/eget/godel.html]

Formal proofs can only be done relative to a particular formalism. Within a
given (sufficiently complex) formalism it is impossible to prove all
statements that can be expressed in it.

Given that humans can bang out just about any possible piece of crap on the
keyboard if they are patient enough, it certainly follows that there are
programs that humans can write that cannot be proved correct.

Even if you restrict yourself to saying all *correct* programs humans can
write can be proved correct, you are on weak grounds. Formal proofs are only
sensical in the context of a particular formalism. It is certainly conceivable
that humans could consider a program correct according to informal standards
("hey it's working for me!", "hey that's pretty!"). It does not follow that
the formalisms currently at our disposal are rich enough to express the
correctness formally, however.

Humans work with a combination of logic, intuition, observation and caffeine.
What "feels" like a reasonable proof in one's head is unlikely to be easily
expressed as a formal proof.

> People do reason about the programs they write, and usually they are not too
> far off from the truth -- especially if they actually got the code right.

Reasonable enough. But this does not mean that there is a correctness *proof*
in the formal sense.

There is also the whole notion of a program being correct in one situation but
incorrect in another (e.g. using tried and true Ariane 4 software in the
Ariane 5 rocket), so even if you actually took the trouble of doing a formal
proof, you very quickly have to engage in informal thinking to adapt it to new
situations.

--
Cheers,                                        The Rhythm is around me,
The Rhythm has control.
Ray Blaak                                      The Rhythm is inside me,
rAYblaaK@STRIPCAPStelus.net                    The Rhythm has my soul.

 0
Reply rAYblaaK (363) 10/24/2003 6:40:01 AM

Pascal Costanza <costanza@web.de> wrote in message news:<bn86o9$gfh$1@newsreader2.netcologne.de>...
> Ralph Becket wrote:
> > This is utterly bogus.  If you write unit tests beforehand, you are
> > already pre-specifying the interface that the code to be tested will
> > present.
> >
> > I fail to see how dynamic typing can confer any kind of advantage here.
>
> Read the literature on XP.

What, all of it?

Why not just enlighten me as to the error you see in my contention
about writing unit tests beforehand?

> > Are you seriously claiming that concise, *automatically checked*
> > documentation (which is one function served by explicit type
> > declarations) is inferior to unchecked, ad hoc commenting?
>
> I am sorry, but in my book, assertions are automatically checked.

*But* they are not required.
*And* if they are present, they can only flag a problem at runtime.
*And* then at only a single site.

> > For one thing, type declarations *cannot* become out-of-date (as
> > comments can and often do) because a discrepancy between type
> > declaration and definition will be immediately flagged by the compiler.
>
> The same holds for assertions as soon as they are run by the test suite.

That is not true unless your test suite is bit-wise exhaustive.
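Becket's point — an assertion fires only at runtime, and only on inputs
the test suite actually exercises — can be sketched in Python
(hypothetical function and test, not from the thread):

```python
def reciprocal(x):
    # The assertion documents the precondition, but it is only checked
    # when some caller actually passes a violating value.
    assert x != 0, "x must be non-zero"
    return 1.0 / x

def test_reciprocal():
    # A suite that never tries x == 0 passes, so the violated
    # precondition surfaces only later, at a single call site.
    assert reciprocal(4) == 0.25

test_reciprocal()  # passes; the x == 0 case was never exercised
```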

> > I don't think you understand much about language implementation.
>
> ...and I don't think you understand much about dynamic compilation. Have
> you ever checked some not-so-recent-anymore work about, say, the HotSpot
> virtual machine?

Feedback directed optimisation and dynamic FDO (if that is what you
are suggesting is an advantage of HotSpot) are an implementation
technology and hence orthogonal to the language being compiled.

On the other hand, if you are not referring to FDO, it's not clear
to me what relevance HotSpot has to the point under discussion.

> > A strong, expressive, static type system provides for optimisations
> > that cannot be done any other way.  These optimizations alone can be
> > expected to make a program several times faster.  For example:
>
> You are only talking about micro-efficiency here. I don't care about
> that, my machine is fast enough for a decent dynamically typed language.

Speedups (and resource consumption reduction in general) by (in many
cases) a factor of two or more constitute "micro-efficiency"?

> > On top of all that, you can still run your code through the profiler,
> > although the need for hand-tuned optimization (and consequent code
> > obfuscation) may be completely obviated by the speed advantage
> > conferred by the compiler exploiting a statically checked type system.
>
> Have you checked this?

Do you mean have I used a profiler to search for bottlenecks in programs
in a statically type checked language?  Then the answer is yes.

Or do you mean have I observed a significant speedup when porting from
C# or Python to Mercury?  Again the answer is yes.

> Weak and dynamic typing is not the same thing.

Let us try to draw some lines and see if we can agree on *something*.

UNTYPED: values in the language are just bit patterns and all
operations, primitive or otherwise, simply twiddle the bits
that come their way.

DYNAMICALLY TYPED: values in the language carry type identifiers, but
any value can be passed to any function.  Some built-in functions will
raise an exception if the type identifiers attached to their arguments
are of the wrong sort.  Such errors can only be identified at runtime.

STATICALLY TYPED: the compiler carries out a proof that no value of the
wrong type will ever be passed to a function expecting a different type,
anywhere in the program.  (Note that with the addition of a universal
type and a checked runtime dynamic cast operator, one can add dynamically
typed facilities to a statically typed language.)
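The UNTYPED case — operations that "simply twiddle the bits" — can be
illustrated even from Python by reinterpreting one bit pattern under
two different types (a sketch; the constant is just the IEEE-754
single-precision encoding of pi):

```python
import struct

bits = 0x40490FDB  # a 32-bit pattern; its meaning depends on the type imposed on it

# Pack the raw pattern, then unpack it under two interpretations.
as_int = struct.unpack('<i', struct.pack('<I', bits))[0]
as_float = struct.unpack('<f', struct.pack('<I', bits))[0]

print(as_int)    # 1078530011: the pattern read as a signed integer
print(as_float)  # ~3.1415927: the same pattern read as a float
```

In an untyped language every operation behaves like this: nothing in the
value itself says which reading is the intended one.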

The difference between an untyped program that doesn't work (it produces
the wrong answer) and a dynamically typed program with a type bug (it
may throw an exception) is so marginal that I'm tempted to lump them both
in the same boat.

> No. The original question asked in this thread was along the lines of
> why abandon static type systems and why not use them always. I don't
> need to convince you that a proposed general solution doesn't always
> work, you have to convince me that it always works.

Done: just add a universal type.  See Mercury for example.

> [...]
> The burden of proof is on the one who proposes a solution.

What?  You're the one claiming that productivity (presumably in the
sense of leading to a working, efficient, reliable, maintainable
piece of code) is enhanced by using languages that *do not tell you
at compile time when you've made a mistake*!

-- Ralph

 0
Reply rafe (28) 10/24/2003 6:46:51 AM

[All nsgroups but lang.functional removed]

Pascal Costanza continues his ping-pong with some other people:

>>>>>> What is dynamic metaprogramming?
>>>>>
>>>>> Writing programs that inspect and change themselves at runtime.
>>>>
>>>> Ah.  I used to do that in assembler.  I always felt like I was
>>>> aiming a shotgun between my toes.
>>>>
>>>> When did self-modifying code get rehabilitated?
>>>
>>> I think this was in the late 70's.
>>
>> Have you got a good reference for the uninitiated?
>
>
> http://www.laputan.org/ref89/ref89.html and
> http://www.laputan.org/brant/brant.html are probably good starting
> points. http://www-db.stanford.edu/~paepcke/shared-documents/mopintro.ps
> is an excellent paper, but not for the faint of heart. ;)

First two references concern Smalltalk 80, the papers have been written in
'89, and in '90 or later the second. They are about wrappers and reflexivity in
Smalltalk, about the *possibility* to change the Smalltalk machines using the
Smalltalk language, and to change the compiled method bytecodes. They are not
about self-modifying programs. Again, somebody here sees just what he wants
to see...

The third reference is about the MetaObject protocol in CLOS, and belongs also
to the '90ties, not '70ties. And does not discuss *self-modifying programs*,
but the possibility of changing the environments, to extend some existing
layers adding e.g. the persistency, etc.

So, sorry, no rehabilitation. Misleading references...

In the '70ties there was a high-level language in which you could write self-
modifying programs *without* being able to analyze the code internals. In
Snobol 4 it was possible to assemble a string representing some Snobol program,
to compile it at run-time, and to replace some labeled parts of the running
program by the new code. But it had nothing to do with the dilemma static/
dynamic typing...

I will not declare myself as a side in this silly war, but if somebody
advocates the dynamic typing because of the possibility to produce self-
modifying programs, he *obviously* belongs to the fifth column whose aim
is to promote the static typing among all those who think seriously about

Jerzy Karczmarczuk


 0
Reply karczma (185) 10/24/2003 6:57:20 AM

On Thursday 23 October 2003 19:30, Joachim Durchholz wrote:
> - Dynamic fields
> Frankly, I don't understand why on earth one would want to have objects
> with a variant set of fields. I could do the same easily by adding a
> dictionary to the objects, and be done with it (and get the additional
> benefit that the dictionary entries will never collide with a field name).
> Conflating the name spaces of field names and dictionary keys might
> offer some syntactic advantages (callers don't need to differentiate
> between static and dynamic fields), but I fail to imagine any good use
> for this all... (which may, of course, be lack of imagination on my
> side, so I'd be happy to see anybody explain a scenario that needs
> exactly this - and then I'll try to see how this can be done without MOP
> *g*).

From what I understand zope uses this extensively in how you do stuff
with the ZODB. For example when rendering an object it looks for the
closest callable item called index_html. This means you can add an
object to a folder that is called index_html and is callable and it
just works. I have a lot of objects where it is not defined in the code
what variables they will have and at runtime these objects can be
added. At least in python you can replace a method with a callable
object and this is very useful to do.

Overall when working with zope I can't imagine not doing it that way.
It saves a lot of time and it makes for very maintainable apps. You can
view your program as being transparently persistent so you override
methods with objects just like you normally would be inheriting from a
class and then overriding methods in it. I really like using an OODB
for apps and one of the interesting things is that you end up
refactoring objects in your database just like you would normally
refactor code and it is pretty much the same process.
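The "replace a method with a callable object" trick the poster
describes can be sketched in plain Python (hypothetical classes, no
Zope or ZODB involved):

```python
class Folder:
    # Default rendering, defined in code.
    def index_html(self):
        return "default page"

class FancyPage:
    # Any object with __call__ can stand in for the method.
    def __call__(self):
        return "fancy page"

f = Folder()
print(f.index_html())      # default page

# At runtime, shadow the class method with a callable instance
# attribute, much as one would store an object in an OODB folder.
f.index_html = FancyPage()
print(f.index_html())      # fancy page
```

The instance attribute shadows the class method at lookup time, so
callers need not know whether they are invoking code or a stored object.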


 0
Reply kosh1 (41) 10/24/2003 7:10:17 AM

Ralph Becket wrote:

> Here's the way I see it:
> (1) type errors are extremely common;
> (2) an expressive, statically checked type system (ESCTS) will identify
>   almost all of these errors at compile time;
> (3) type errors flagged by a compiler for an ESCTS can pinpoint the source
>   of the problem whereas ad hoc assertions in code will only identify a
>   symptom of a type error;
> (4) the programmer does not have to litter type assertions in a program
>   written in a language with an ESCTS;
> (5) an ESCTS provides optimization opportunities that would otherwise
>   be unavailable to the compiler;
> (6) there will be cases where the ESCTS requires one to code around a
>   constraint that is hard/impossible to express in the ESCTS (the more
>   expressive the type system, the smaller the set of such cases will be.)

However,

(7) Developing reliable software also requires extensive testing to
detect bugs other than type errors, and
(8) These tests will usually detect most of the bugs that static
type checking would have detected.

So the *marginal* benefit of static type checking is reduced, unless you
weren't otherwise planning to test your code very well.

BTW, is (3) really justified?  My (admittedly old) experience with ML
was that type errors can be rather hard to track back to their sources.

Paul


 0
Reply dietz (395) 10/24/2003 8:34:25 AM

Joachim Durchholz wrote:

> Or are you talking about system evolution and maintenance?
> That would be an entirely new aspect in the discussion, and you should
> properly forewarn us so that we know for sure what you're talking about.

Did I forget to mention this in the specifications? Sorry. ;)

Yes, I want my software to be adaptable to unexpected circumstances.

(I can't give you a better specification, by definition.)

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)


 0
Reply costanza (1427) 10/24/2003 8:41:45 AM

Joachim Durchholz wrote:
> Pascal Costanza wrote:
>
>> Matthias Blume wrote:
>>
>>> [Snip] I think what I am
>>> asking here is fairly modest.
>>
>>
>> No, you are asking for more. You are asking for the proof to be
>> automatically executable.
>
>
> A programmer who writes code that's too complicated for automated
> reasoning will, in 99% of all cases, have written code that's too
> complicated for others to understand. And, for that matter, it will also
> be too complicated for himself to understand.
> (I have seen such code. Some of it was written by myself.)

99% is a statistical measure. Where do you get your numbers from?

> In practice, whenever code is written, the programmer should always be
> able to explain why and how his code works.

That's not automated reasoning.

> The reasoning used in such
> explanations is generally simple (unless the programmer just invented a
> new algorithm - something that isn't done very often nowadays).

And even if this indeed didn't happen very often, it would still happen.
So there are situations in which automated reasoning can be a hindrance.
That's all I am trying to say.

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)


 0
Reply costanza (1427) 10/24/2003 8:49:15 AM

Matthias Blume wrote:
> Pascal Costanza <costanza@web.de> writes:
>
>
>>No, you are asking for more. You are asking for the proof to be
>>automatically executable.
>
> Would people kindly stop telling me what I am asking for?
> Thank you.

I am terribly sorry, but a static type system automatically executes a
proof about certain properties of a program. And you said you want
static type systems.

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)


 0
Reply costanza (1427) 10/24/2003 8:51:23 AM

Matthias Blume wrote:

> I know.  The point is that one can never say the program is "correct"
> with respect to the requirement of having the typesetting be
> aesthetical.  One can, maybe, make statements like "the majority of
> our customers seems to be satisfied with the results".  But that's not

On the other hand, the only important goal is that customer are
satisfied with the results, no matter what program you write. If you do
more than that you are wasting resources.

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)


 0
Reply costanza (1427) 10/24/2003 9:03:51 AM

Matthias Blume wrote:

> No, my claim is: For every correct program written by a human there is
> a correctness proof.  In other words, I find it unlikely that someone
> writes a correct program, but there actually is no such proof.  People
> do reason about the programs they write, and usually they are not too
> far off from the truth -- especially if they actually got the code
> right.

We are getting at the heart of the issue: What does "there is" mean?

Either it means that something exist in principle without necessarily
existing in reality.

Or it means that something exists in reality.

The first variant is an idealistic point of view, the second is a
materialistic point of view, in a philosophical sense.

The materialistic point of view means that something cannot exist only
in principle.

Neither point of view can be proven. You have to believe either the
one or the other. This means that it is an irrational choice by definition.

(How do you "prove" that something exists in principle? By transforming
it into reality. But then it doesn't exist only in principle anymore.)

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)


 0
Reply costanza (1427) 10/24/2003 9:12:30 AM

Ralph Becket wrote:
> Pascal Costanza <costanza@web.de> wrote in message news:<bn86o9$gfh$1@newsreader2.netcologne.de>...
>
>>Ralph Becket wrote:
>>
>>>This is utterly bogus.  If you write unit tests beforehand, you are
>>>already pre-specifying the interface that the code to be tested will
>>>present.
>>>
>>>I fail to see how dynamic typing can confer any kind of advantage here.
>>
>
> What, all of it?
>
> Why not just enlighten me as to the error you see in my contention
> about writing unit tests beforehand?

Maybe we are talking at cross-purposes here. I didn't know that ocaml
does not require target code to be present in order to have a test
suite accepted by the compiler. I will need to take a closer look at this.

>>>For one thing, type declarations *cannot* become out-of-date (as
>>>comments can and often do) because a discrepancy between type
>>>declaration and definition will be immediately flagged by the compiler.
>>
>>The same holds for assertions as soon as they are run by the test suite.
>
> That is not true unless your test suite is bit-wise exhaustive.

Assertions cannot become out-of-date. If an assertion doesn't hold
anymore, it will be flagged by the test suite.

>>>I don't think you understand much about language implementation.
>>
>>...and I don't think you understand much about dynamic compilation. Have
>>you ever checked some not-so-recent-anymore work about, say, the HotSpot
>>virtual machine?
>
> Feedback directed optimisation and dynamic FDO (if that is what you
> are suggesting is an advantage of HotSpot) are an implementation
> techonology and hence orthogonal to the language being compiled.
>
> On the other hand, if you are not referring to FDO, it's not clear
> to me what relevance HotSpot has to the point under discussion.

Maybe we both understand language implementation, and it is irrelevant?

>>>A strong, expressive, static type system provides for optimisations
>>>that cannot be done any other way.  These optimizations alone can be
>>>expected to make a program several times faster.  For example:
>>
>>that, my machine is fast enough for a decent dynamically typed language.
>
> Speedups (and resource consumption reduction in general) by (in many
> cases) a factor or two or more consitute "micro-efficiency"?

Yes. Since this kind of efficiency is just one of many factors when
developing software, it might not be the most important one and might be

> The difference between an untyped program that doesn't work (it produces
> the wrong answer) and a dynamically typed program with a type bug (it
> may throw an exception) is so marginal that I'm tempted to lump them both
> in the same boat.

Well, but that's a wrong perspective. The one that throws an exception
can be corrected and then continued exactly at the point of the
execution path where the exception was thrown.

>>[...]
>>The burden of proof is on the one who proposes a solution.
>
> What?  You're the one claiming that productivity (presumably in the
> sense of leading to a working, efficient, reliable, maintainable
> piece of code) is enhanced by using languages that *do not tell you
> at compile time when you've made a mistake*!

No, other people are claiming that one should _always_ use static type
sytems, and my claim is that there are situations in which a dynamic
type system is better.

If you claim that something (anything) is _always_ better, you better
have a convincing argument that _always_ holds.

I have never claimed that dynamic type systems are _always_ better.

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)


 0
Reply costanza (1427) 10/24/2003 9:33:15 AM

Pascal Costanza <costanza@web.de> wrote:
> Dirk Thierbach wrote:
>> You cannot take an arbitrary language and attach a good static type
>> system to it. Type inference will be much to difficult, for example.
>> There's a fine balance between language design and a good type system
>> that works well with it.

> Right. As I said before, you need to reduce the expressive power of the
> language.

Maybe that's where the problem is. One doesn't need to reduce the
"expressive power". I don't know your particular application, but what
you seem to need is the ability to dynamically change the program
execution. There's more than one way to do that. And MOPs (like
macros) are a powerful tool and sometimes quite handy, but it's also
easy to shoot yourself severely in the foot with MOPs if you're
not careful, and often there are better solutions than using MOPs (for
example, appropriate flexible datatypes).

I may be wrong, but I somehow have the impression that it is difficult
to see other ways to solve a problem if you haven't done it in that
way at least once. So you see that with different tools, you cannot do
it in exactly the same way as with the old tools, and immediately you
start complaining that the new tools have "less expressive power",
just because you don't see that you have to use them in a different
way.  The "I can do lot of things with macros in Lisp that are
impossible to do in other languages" claim seems to have a similar
background.

I could complain that Lisp or Smalltalk have "less expressive power"
because I cannot declare algebraic datatypes properly, I don't have
pattern matching to use them efficiently, and there is no automatic
test generation (i.e., type checking) for my datatypes. But there
are ways to work around this, so when programming in Lisp or Smalltalk,
I do it in the natural way that is appropriate for these languages,
instead of wasting my time with silly complaints.

The only way out is IMHO to learn as many languages as possible, and
to learn as many alternative styles of solving problems as possible.
Then pick the one that is appropriate, and don't say "this way has
most expressive power, all others have less". In general, this will
be just wrong.

- Dirk

 0
Reply dthierbach (210) 10/24/2003 9:35:44 AM

prunesquallor@comcast.net wrote:

> I think you know where I stand on static type checking, but to
> re-iterate to the people that didn't read the argument last time
> it surfaced....
>
>   I welcome every bit of help the computer gives me, and if it can
>   find a problem before I know about it, great!  Static type checking
>   is fine with me here.

>   I get a little peeved, however, when the computer complains
>   because it can't figure out whether there is a problem or not.

You get this problem mostly in the "brain-dead" statically typed
languages that have a type system which is just not strong enough, so
they include type-casts in the language to work around this problem.

>   I *really* don't like decorating my code with types.

Especially if you have to do it several times, like in some languages.

But in the presence of type-inference, you don't have to decorate your
code with types -- the compiler does that for you.

But on the other hands, you do write tests for your code, don't you?

Writing type annotations is just like writing tests -- it allows you
to focus on what you really want to write, and it allows the compiler
to verify that your code really does what you expect it to do.

For simple functions, I usually don't write type annotations. For
difficult functions, I write down the type first (because that's
easier then writing the function itself), and once I have sorted out
the type, I usually have enough hints in my head to make writing the
function easy.

And once I have corrected all the typing errors in the function
I wrote (added missing parentheses, etc.), i.e. once the tests pass,
the function is usually correct.

> To the extent that a static type checker lets me live with those
> preferences, I'm all for it.  Clearly a lot of brain-dead statically
> typed languages violate a lot of those.

Yes. That's why it is important to distinguish between statically
typed languages and statically typed languages. Some of them are quite

If you're curious about a static type system that lets you live with
those preferences, give Haskell or OCaml a try.

- Dirk


 0
Reply dthierbach (210) 10/24/2003 9:55:20 AM

Gareth McCaughan wrote:

>Alex McGuire wrote:
>
>
>
>>Matthias Blume wrote:
>>
>>
>...
>
>
>>>No, I wasn't thinking of contemporary type errors.  I was thinking of
>>>a real proof of correctness, in all glory.  The point is that even
>>>though we all know that we cannot prove all correct programs correct
>>>in general, we can do so for the programs we actually write (which is
>>>a proper subset of the set of all correct programs).  Anyone who
>>>claims his program is correct but it cannot be proven correct must
>>>face the question "How do you know?"
>>>
>>>
>>I'm not sure what you mean by a proof here. Do you mean proof as in a
>>formal mathematical proof? Formally proving correctness of programs is
>>very difficult, even for a few lines of code, it would not be
>>practical for much larger programs. A pre-requisite would be a formal
>>description of the requirements, which I have never seen from a
>>client, nor do I want to. To clarify things, can you give me a formal
>>proof that the following java code correctly sums an array of integers?
>>
>>public double sumArray(int[] array){
>>    int sum = 0;
>>    for (int i = 0; i < array.length; ++i){
>>       sum += array[i];
>>    }
>>    return sum;
>>}
>>
>>
>
>I'm not Matthias, but here's my guess at the sort of thing
>he might consider appropriate.
>
>// Return the sum of all elements in the array,
>// mod 2^32.
>// XXX: Why does this return a double? If the idea
>// is to avoid overflow, why do we accumulate with
>// an int?
>
>

Well spotted. This was really a mistake, I wasn't trying to be clever.
Looks like the code didn't actually do what I
thought it did. Is this another problem with such proofs? For example, I
can prove that the quicksort _algorithm_ works, but how would I
prove that my code correctly implements that algorithm, and that there aren't
subtle errors like the one I inadvertently wrote above.

In my experience at least I find many more bugs due to the incorrect
implementation of algorithms than due to the use of invalid algorithms.
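One cheap guard against such implementation slips is to state the loop
invariant being discussed as a runtime assertion; a Python sketch of
the summation (names are illustrative, and Python's unbounded integers
make the mod-2^32 clause of the Java version drop out):

```python
def sum_array(array):
    sum_so_far = 0
    for i, x in enumerate(array):
        # Loop invariant: sum_so_far equals the sum of array[:i].
        assert sum_so_far == sum(array[:i]), "invariant violated"
        sum_so_far += x
    # On exit the invariant holds with i == len(array),
    # so sum_so_far is the sum of all elements.
    return sum_so_far

print(sum_array([1, 2, 3, 4]))  # 10
```

A subtly wrong body would trip the assertion on the first test run,
whereas an informal proof of the abstract algorithm would never notice.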

>public double sumArray(int[] array) {
>  int sum = 0;
>  for (int i=0; i<array.length; ++i) {
>    // loop invariant: sum == (sum of array elements with
>    // indices < i) mod 2^32
>    sum += array[i];
>  }
>  // on exit from the loop, the invariant holds with
>  // i == array.length, so that's the sum of *all*
>  // elements.
>  return sum;
>}
>I'd guess Matthias wouldn't expect to see all that
>actually embedded in the code, but he would want
>the programmer to have a clear enough understanding
>that she could provide it quickly and confidently
>if required.
>
>Converting that to a really formal proof would be
>tiresome (depending on how formal "really formal"
>is taken to be) but easy.
>
>I don't agree with Matthias's position, but I wouldn't
>want to hire someone who *couldn't* provide a correctness
>(or incorrectness) proof for a piece of code that simple.
>Would you?
>
>Disclaimer: I've written about 20 lines of Java *ever*,
>so I may have missed things. I wouldn't advise hiring
>me to write Java without a bit of time in the schedule
>for me to learn the language (and, more to the point,
>the libraries) better :-).
>
>
>
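The invariant-commented proof above can also be made executable. Here is a Python analogue (a sketch, not from the thread; the names are mine, and Python ints don't wrap, so the "mod 2^32" clause drops out) that checks the same loop invariant at run time:

```python
def sum_array(array):
    """Sum a list of integers, checking the proof's loop invariant."""
    total = 0
    for i in range(len(array)):
        # Loop invariant: total == sum of array elements with indices < i
        assert total == sum(array[:i])
        total += array[i]
    # On exit the invariant holds with i == len(array),
    # so total is the sum of *all* elements.
    assert total == sum(array)
    return total
```

Checking the invariant on every iteration makes the run quadratic, so this is a debugging aid for convincing yourself of the proof, not production code.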


 0
Reply alex1420 (29) 10/24/2003 10:02:58 AM

Matthias Blume <find@my.address.elsewhere> writes:

> Thomas Lindgren <***********@*****.***> writes:
>
> > Matthias Blume <find@my.address.elsewhere> writes:
> >
> > > Every programmer who writes a program ought to have a proof that the
> > > program is correct in her mind. (If not, fire her.)
> >
> > Don't forget to fire the specification writer afterwards. Then the
> > requirements guy. Then the customer.
>
> Unfortunately, I am aware of "the Real World".  In any case, is this
> really any excuse for shipping code that we don't know will always
> work, written by programmers whom we didn't fire even though they
> didn't know what they were doing, writing to specifications that were
> inconsistent, driven by requirements that were unreasonable to begin
> with, asked for by customers who were clueless?

Requirements and specifications can be 'reasonable' and 'consistent'
in an everyday sense of the word, yet not mathematical enough to
provide a basis for a correctness proof. Indeed, this is normally the
case.

In the vast majority of cases, customers furthermore prefer a
deliverable that does what they want to one that does something
provably correct. I'd call that shrewd rather than clueless, actually.

In short, firing the programmer for not providing a correctness
proof doesn't seem very constructive.

Best,
Thomas
--
Thomas Lindgren
"It's becoming popular? It must be in decline." -- Isaiah Berlin


 0


Ralph Becket wrote:
> STATICALLY TYPED: the compiler carries out a proof that no value of the
> wrong type will ever be passed to a function expecting a different type,
> anywhere in the program.

Big deal. From Robert C. Martin:

"I've been a statically typed bigot for quite a few years....I scoffed
at the smalltalkers who whined about the loss of flexibility. Safety,
after all, was far more important than flexibility -- and besides, we
can keep our software flexible AND statically typed, if we just follow
good dependency management principles.

"Four years ago I got involved with Extreme Programming. ...

"About two years ago I noticed something. I was depending less and less
on the type system for safety. My unit tests were preventing me from
making type errors. The more I depended upon the unit tests, the less I
depended upon the type safety of Java or C++ (my languages of choice).

"I thought an experiment was in order. So I tried writing some
applications in Python, and then Ruby (well known dynamically typed
languages). I was not entirely surprised when I found that type issues
simply never arose. My unit tests kept my code on the straight and
narrow. I simply didn't need the static type checking that I had
depended upon for so many years.

"I also realized that the flexibility of dynamically typed languages
makes writing code significantly easier. Modules are easier to write,
and easier to change. There are no build time issues at all. Life in a
dynamically typed world is fundamentally simpler.

"Now I am back programming in Java because the projects I'm working on
call for it. But I can't deny that I feel the tug of the dynamically
typed languages. I wish I was programming in Ruby or Python, or even
Smalltalk.

"Does anybody else feel like this? As more and more people adopt test
driven development (something I consider to be inevitable), will they
feel the same way I do? Will we all be programming in a dynamically
typed language in 2010?"

Lights out for static typing.
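Martin's observation (that ordinary unit tests catch type slips as a by-product of checking values) can be sketched in Python; the function and test names here are illustrative, not from the quoted text:

```python
def mean(xs):
    # Any type slip here (e.g. returning the list itself, or a string)
    # makes the value check below fail, with no declarations needed.
    return sum(xs) / len(xs)

def test_mean():
    # The value assertion subsumes the type check: a result of the
    # wrong *type* cannot also be the right *value*.
    assert mean([2, 4, 6]) == 4

test_mean()
```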

kenny

--
http://tilton-technology.com
What?! You are a newbie and you haven't answered my:


 0
Reply ktilton (2220) 10/24/2003 10:44:09 AM

myfirstname.mylastname@jpl.nasa.gov (Erann Gat) writes:

> In article <m1n0bromls.fsf@tti5.uchicago.edu>, Matthias Blume

>> The fact that the type checker will also detect a certain amount of
>> clerical errors in my code is a bonus.

> That depends on what you are trying to accomplish.  If you are forced to
> spend time fixing clerical errors that are not really relevant to the
> problem you are trying to solve

I don't follow this.  How can a type error not be relevant?

-kzm
--
If I haven't seen further, it is by standing in the footprints of giants

 0
Reply news2 (145) 10/24/2003 11:14:27 AM

Ray Blaak <rAYblaaK@STRIPCAPStelus.net> writes:

>> That's not what I said.  I said that the programmer has a proof in her
>> head. (At least she thinks she does.)

Translation: she needs to have an idea of what kinds of input a
function can expect, and what kind of outputs it should generate, and
some kind of rough mental sketch on how the code goes about ensuring
that.  (That wasn't so hard, was it?)

> Much much easier said than done. So much so that practical formal
> methods are not currently useful.

I wouldn't say "not useful", but perhaps overkill in many cases.  Note
that static typing is the lightweight version of this, and handles the
two first points: ensuring that input and output belong to certain
categories.  If the type system is used a bit actively, you can IMHO
do a lot here.

You can of course use it in cases when you don't know exactly how to
solve a subproblem, but you do know its type. Just throw in:

solve_subproblem :: (type declaration here)
solve_subproblem = undefined

to defer it to later, while type-checking the rest of your program.
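The `undefined` trick has a rough dynamic-language counterpart (a sketch under my own naming, not from the post): stub the unsolved subproblem so the rest of the program can still be exercised. The difference is that Haskell checks every use against the declared type before anything runs, whereas here mismatches surface only on execution:

```python
def solve_subproblem(x):
    # Intended "type": int -> int.  Body deferred, like Haskell's
    # `undefined`: calling it fails, but code around it still runs.
    raise NotImplementedError("solve_subproblem :: int -> int")

def pipeline(xs):
    # Code that *uses* the stub; it runs fine until the stub is hit.
    return [solve_subproblem(x) for x in xs]

# An empty input never reaches the stub, so this much already works:
assert pipeline([]) == []
```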

BTW, I'm not convinced you could successfully remove the type system
from a language like Haskell -- how would you handle, for instance,
partial application of functions?

I find the static typing invaluable when refactoring, perhaps I'm
denser than most programmers or something, but I seem simply unable to
rearrange blocks of code without making errors.  The occasionally
quoted "type correct means correct" is a grave overstatement when
developing code, but when refactoring, it is almost always true.

-kzm
--
If I haven't seen further, it is by standing in the footprints of giants

 0
Reply news2 (145) 10/24/2003 11:29:58 AM

Pascal Costanza <costanza@web.de> wrote:
> No, other people are claiming that one should _always_ use static type
> sytems, and my claim is that there are situations in which a dynamic
> type system is better.
>
> If you claim that something (anything) is _always_ better, you better
> have a convincing argument that _always_ holds.
>
> I have never claimed that dynamic type systems are _always_ better.

To me, it certainly looked like you did in the beginning. Maybe your
impression that other people say that one should always use static
type systems is a similar misinterpretation?

Anyway, formulations like "A has less expressive power than B" are very
close to "B is always better than A". It's probably a good idea to
avoid such formulations if this is not what you mean.

- Dirk


 0
Reply dthierbach (210) 10/24/2003 11:31:18 AM

Dirk Thierbach wrote:
>>  I *really* don't like decorating my code with types.
>
> Especially if you have to do it several times, like in some languages.

actually, I do like writing explicit type definitions and declarations
-- I like it better than writing lots of comments explaining what kinds
of arguments functions expect, as I sometimes have to do in Scheme.

> For simple functions, I usually don't write type annotations. For
> difficult functions, I write down the type first (because that's
> easier than writing the function itself), and once I have sorted out
> the type, I usually have enough hints in my head to make writing the
> function easy.
>
> And once after I have corrected all the typing errors in the function
> I wrote (added missing parenthesis, etc.), i.e. once the tests pass,
> the function is usually correct.

right!

--
Segui il tuo corso, e lascia dir le genti.

Lars


 0

ketil+news@ii.uib.no wrote:

> I don't follow this.  How can a type error not be relevant?

The type annotations you were forced to add could be wrong.

Paul


 0
Reply dietz (395) 10/24/2003 11:47:52 AM

Pascal Costanza <costanza@web.de> wrote:
> Remi Vanicat wrote:

>>>>> In a statically typed language, when I write a test case that
>>>>> calls a specific method, I need to write at least one class that
>>>>> implements at least that method, otherwise the code won't
>>>>> compile.

>>>>Not in ocaml.
>>>>ocaml is statically typed.

>> It make the verification when you call the test. I explain :
>> let f x = x #foo
>>
>> which is a function taking an object x and calling its method
>> foo, even if there is no class having such a method.
>>
>> When some time later you do a:
>>
>> f bar
>>
>> then, and only then the compiler verify that the bar object have a foo
>> method.

BTW, the same thing is true for any language with type inference.  In
Haskell, there are no methods and objects. But to test a function, you
can write

test_func f = if (f 1 == 1) && (f 2 == 42) then "ok" else "fail"

The compiler will infer that test_func has type

test_func :: (Integer -> Integer) -> String

(I am cheating a bit, because actually it will infer a more general type),
so you can use it to test any function of type Integer->Integer, regardless
if you have written it already or not.

> Doesn't this mean that the occurrence of such compile-time errors is only
> delayed, in the sense that when the test suite grows the compiler starts
> to issue type errors?

As long as you parameterize over the functions (or objects) you want to
test, there'll be no compile-time errors. That's what functional
programming and type inference are good for: You can abstract everything
away just by making it an argument. And you should know that, since
you say that you know what modern type-systems can do.

But the whole case is moot anyway, IMHO: You write the tests because
you want them to fail until you have written the correct code that
makes them pass, and it is not acceptable (especially if you're doing
XP) to continue as long as you have failing tests. You have to do the
minimal edit to make all the tests pass *right now*, not later on.

It's the same with compile-time type errors. The only difference is
that they happen at compile-time, not at test-suite run-time, but the
necessary reaction is the same: Fix your code so that all tests (or
the compiler-generated type "tests") pass. Then continue with the next
step.

I really don't see why one should be annoying to you while you strongly
prefer the other. They're really just the same thing. Just imagine
that you run your test suite automatically when you compile your
program.

- Dirk

 0
Reply dthierbach (210) 10/24/2003 11:58:30 AM

Paul F. Dietz <dietz@dls.net> wrote:
> ketil+news@ii.uib.no wrote:

>> I don't follow this.  How can a type error not be relevant?

> The type annotations you were forced to add could be wrong.

In Hindley-Milner style typing (without extensions), this can never
happen.

No type annotations you add can make the type error go away. A type
error always points to some error in the code, often a quite trivial
one (like wrong parenthesis, or swapped variable names, etc.).

With some of the extensions (e.g., Haskell type classes) you
sometimes have to add annotations to help the compiler to decide
what particular instance of a type class you mean. But this happens
only with top-level functions, and you cannot add a "wrong" annotation.

This is different to other statically typed languages, where you
indeed have to add type annotations or casts to make an "irrelevant"
type error go away. Yes, this is very annoying.

- Dirk


 0
Reply dthierbach (210) 10/24/2003 12:29:12 PM

Kenny Tilton <ktilton@nyc.rr.com> wrote:
> Big deal. From Robert C. Martin:
>
>
> "I've been a statically typed bigot for quite a few years....I scoffed
> at the smalltalkers who whined about the loss of flexibility. Safety,
> after all, was far more important than flexibility -- and besides, we
> can keep our software flexible AND statically typed, if we just follow
> good dependency management principles.
>
> "Four years ago I got involved with Extreme Programming. ...
>
> "About two years ago I noticed something. I was depending less and less
> on the type system for safety. My unit tests were preventing me from
> making type errors. The more I depended upon the unit tests, the less I
> depended upon the type safety of Java or C++ (my languages of choice).

Note that he is speaking about languages with a very bad type system.
As has been said in this thread a few times, there are statically
typed languages and there are statically typed languages. Those two
can differ substantially from each other.

Here's a posting from Richard MacDonald in comp.software.extreme-programming,
MID <Xns9327E3738674Fmacdonaldrjworldneta@204.127.36.1>:

: Eliot, I work with a bunch of excellent programmers who came from AI to
: Smalltalk to Java. We despise Java. We love Smalltalk. Some months ago we
: took a vote and decided that we were now more productive in Java than we
: had ever been in Smalltalk. The reason is the Eclipse IDE. It more than
: makes up for the lousy, verbose syntax of Java. We find that we can get
: Eclipse to write much of our code for us anyway.
:
: Smalltalk is superior in getting something to work fast. But refactoring
: takes a toll on a dynamically typed language because it doesn't provide
: as much information to the IDE as does a statically-typed language (even
: a bad one). Let's face it. If you *always* check callers and implementors
: in Smalltalk, you can catch most of the changes. But sometimes you
: forget. With Eclipse, you can skip this step and it still lights up every
: problem with a big X and helps you refactor to fix it.
:
: In Smalltalk, I *needed* unit tests because Smalltalk allowed me to be
: sloppy. In Eclipse, I can get away without writing unit tests and my code
: miraculously often works the first time I get all those Xs eliminated.
:
:
: No question but that a "crappy statically typed" (*) language can get you
: into a corner where you're faced with lousy alternatives. But say I
: figure out a massive refactoring step that gets me out of it. In
: Smalltalk, I would probably fail without a bank of unit tests behind me.
: In Eclipse, I could probably make that refactoring step in less time and
: with far greater certainty that it is correct. I've done it before without
: the safety net of tests and been successful. No way I would ever have
: been able to do that as efficiently in Smalltalk. (I once refactored my
: entire Smalltalk app in 3 days and needed every test I had ever written.
: I have not done the equivalent in Java, but I have complete confidence I
: could do it just as well if not much better.)
:
: As far as productivity, we still write unit tests. But unit test
: maintenance takes a lot of time. In Smalltalk, I would spend 30% of my
: time coding within the tests. I tested at all levels, i.e., low-level,
: medium, and integration, since it paid off when searching for bugs. But
: 30% is too much. With Eclipse, we're able to write good code with just a
: handful of high-level tests. Often we simply write the answer as a test
: and do the entire app with this one test. The reason is once again that
: the IDE is visually showing us right where we broke our code and we don't
: have to run tests to see it.
:
: (*) I suggest we use 3 categories: (1) dynamically typed, (2) statically
: typed, (3) lousy statically typed. Into the latter category, toss Java
: and C++. Into (2), toss some of the functional languages; they're pretty
: slick. Much of the classic typing wars are between dynamic-typists
: criticizing (3) vs. static-typists working with (2).
:
: P.S. I used to be one of those rabid dynamic defenders. I'm a little
: chastened and wiser now that I have a fantastic IDE in my toolkit.

- Dirk

 0
Reply dthierbach (210) 10/24/2003 12:35:14 PM

Dirk Thierbach <dthierbach@gmx.de> writes:

>Writing type annotations is just like writing tests -- it allows you to
>focus on what you really want to write, and it allows the compiler to
>verify it for you, so your code really does what you expect it to do.
>
>For simple functions, I usually don't write type annotations. For
>difficult functions, I write down the type first (because that's
>easier then writing the function itself), and once I have sorted out
>the type, I usually have enough hints in my head to make writing the
>function easy.

Heh, so you are saying that this 'type-first development' is akin to
the test-first development method recommended by some gurus.

Unlike some up-front design work, writing a type annotation is design
that can be checked by the computer, and is constantly re-checked so
it cannot get out of date or be inconsistent with the code.

--
Ed Avis <ed@membled.com>

 0
Reply ed330 (49) 10/24/2003 1:02:23 PM

Dirk Thierbach wrote:
> Paul F. Dietz <dietz@dls.net> wrote:
>
>>ketil+news@ii.uib.no wrote:
>
>
>>>I don't follow this.  How can a type error not be relevant?
>
>
>>The type annotations you were forced to add could be wrong.
>
>
> In Hindley-Milner style typing (without extensions), this can never
> happen.
>
> No type annotations you add can make the type error go away. A type
> error always points to some error in the code, often a quite trivial
> one (like wrong parenthesis, or swapped variable names, etc.).

...or a wrong type annotation?

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)


 0
Reply costanza (1427) 10/24/2003 1:12:53 PM

Dirk Thierbach wrote:
> Pascal Costanza <costanza@web.de> wrote:
>
>>Remi Vanicat wrote:
>
>
>>>>>>In a statically typed language, when I write a test case that
>>>>>>calls a specific method, I need to write at least one class that
>>>>>>implements at least that method, otherwise the code won't
>>>>>>compile.
>
>
>>>>>Not in ocaml.
>>>>>ocaml is statically typed.
>
>
>>>It make the verification when you call the test. I explain :
>>>let f x = x #foo
>>>
>>>which is a function taking an object x and calling its method
>>>foo, even if there is no class having such a method.
>>>
>>>When some time later you do a:
>>>
>>>f bar
>>>
>>>then, and only then the compiler verify that the bar object have a foo
>>>method.
>
>
> BTW, the same thing is true for any language with type inference.  In
> Haskell, there are to methods and objects. But to test a function, you
> can write
>
> test_func f = if (f 1 == 1) && (f 2 == 42) then "ok" else "fail"
>
> The compiler will infer that test_func has type
>
> test_func :: (Integer -> Integer) -> String
>
> (I am cheating a bit, because actually it will infer a more general type),
> so you can use it to test any function of type Integer->Integer, regardless
> if you have written it already or not.

OK, I have got it. No, that's not what I want. What I want is:

testxyz obj = (concretemethod obj == 42)

Does the code compile as long as concretemethod doesn't exist?

>>Doesn't this mean that the occurrence of such compile-time errors is only
>>delayed, in the sense that when the test suite grows the compiler starts
>>to issue type errors?
>
>
> As long as you parameterize over the functions (or objects) you want to
> test, there'll be no compile-time errors. That's what functional
> programming and type inference are good for: You can abstract everything
> away just by making it an argument. And you should know that, since
> you say that you know what modern type-systems can do.

Yes, I know that. I have misunderstood the claim. Does the code I
propose above work?

> But the whole case is moot anyway, IMHO: You write the tests because
> you want them to fail until you have written the correct code that
> makes them pass, and it is not acceptable (especially if you're doing
> XP) to continue as long as you have failing tests. You have to do the
> minimal edit to make all the tests pass *right now*, not later on.
>
> It's the same with compile-time type errors. The only difference is
> that they happen at compile-time, not at test-suite run-time, but the
> necessary reaction is the same: Fix your code so that all tests (or
> the compiler-generated type "tests") pass. Then continue with the next
> step.

The type system might test too many cases.

> I really don't see why one should be annoying to you while you strongly
> prefer the other. They're really just the same thing. Just imagine
> that you run your test suite automatically when you compile your
> program.

I don't compile my programs. Not as a distinct conscious step during
development. I write pieces of code and execute them immediately. It's
much faster to run the code than to explicitly compile and/or run a type
checker.

This is a completely different style of developing code.

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)


 0
Reply costanza (1427) 10/24/2003 1:37:32 PM

Dirk Thierbach wrote:
> Pascal Costanza <costanza@web.de> wrote:
>
>>No, other people are claiming that one should _always_ use static type
>>sytems, and my claim is that there are situations in which a dynamic
>>type system is better.
>>
>>If you claim that something (anything) is _always_ better, you better
>>have a convincing argument that _always_ holds.
>>
>>I have never claimed that dynamic type systems are _always_ better.
>
> To me, it certainly looked like you did in the beginning. Maybe your
> impression that other people say that one should always use static
> type systems is a similar misinterpretation?

Please recheck my original response to the OP of this subthread. (How
much more "in the beginning" can one go?)

> Anyway, formulations like "A has less expressive power than B" are very
> close to "B is always better than A". It's probably a good idea to
> avoid such formulations if this is not what you mean.

"less expressive power" means that there exist programs that work but
that cannot be statically typechecked. These programs objectively exist.
By definition, I cannot express them in a statically typed language.

On the other hand, you can clearly write programs in a dynamically typed
language that can still be statically checked if one wants to do that.
So the set of programs that can be expressed with a dynamically typed
language is objectively larger than the set of programs that can be
expressed with a statically typed language.

It's definitely a trade off - you take away some expressive power and
you get some level of safety in return. Sometimes expressive power is
more important than safety, and vice versa.
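A standard illustration of the sort of program Pascal has in mind (a sketch with invented names; whether it "cannot" be typed depends on the type system and its extensions): a function whose result type depends on a run-time value is routine in Python but falls outside plain Hindley-Milner inference without an explicit sum type:

```python
def parse_config_value(s):
    # The *type* of the result depends on run-time data:
    # an int for numeric strings, the original string otherwise.
    try:
        return int(s)
    except ValueError:
        return s

assert parse_config_value("42") == 42
assert parse_config_value("yes") == "yes"
```

A Haskell programmer would recast this with an explicit sum type such as `Either Int String`, which is Dirk's counterpoint: the program is not lost, it is expressed differently.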

It's not my problem that you interpret some arbitrary other claim into
this statement.

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)


 0
Reply costanza (1427) 10/24/2003 1:46:28 PM

Jerzy Karczmarczuk wrote:
> [...] if somebody
> advocates the dynamic typing because of the possibility to produce self-
> modifying programs, he *obviously* belongs to the 5-th column whose aim
> is to promote the static typing among all those who think seriously about

*giggle* but triggering yet another such response is /so/ much fun!

Seriously, I've been doing that ping-pong because it always pays to at
least see what he's after. Pascal is somewhat rude and down-the-nose,
but his comments are interesting enough that one can gain some insights
into what this MOP stuff is good for at all.
And I'm always striving to improve my understanding.

(Apologies to anybody who finds such public exchanges distasteful. I
tend to ignore the ad hominem parts and feast on the substance, and I

Regards,
Jo


 0
Reply joachim.durchholz (563) 10/24/2003 1:53:07 PM

Ray Blaak <rAYblaaK@STRIPCAPStelus.net> writes:

> Given that humans can bang out just about any possible piece of crap on the
> keyboard if they are patient enough, it certainly follows that there are
> programs that humans can write that cannot be proved correct.

I know.  The question was whether that actually happens in practice.
Typical "G�del" statements tend to be pretty contrived.

(The reason why I _believe_ (yes, I do not -- and cannot, see above --
have a proof for that belief!) is that people reason (however
informally) about the correctness of the programs they write while
they are writing them.  That was my whole point.  I am actually pretty
amazed that there is such resistance to this idea.)

Matthias

PS: Anyway, enough time wasted.  Over and out from me.

 0
Reply find19 (1245) 10/24/2003 1:53:29 PM

Pascal Costanza <costanza@web.de> writes:

> Matthias Blume wrote:
> > Pascal Costanza <costanza@web.de> writes:
> >
> >>No, you are asking for more. You are asking for the proof to be
> >>automatically executable.
> > Would people kindly stop telling me what I am asking for?
>
> > Thank you.
>
> I am terribly sorry, but a static type system automatically executes a
> proof about certain properties of a program. And you said you want
> static type systems.

That was in a different part of the discussion.  The topic had changed
slightly.  I did not ask for automatic correctness proofs.  Even
though it would be nice to have them, it is clearly not reasonable to

Matthias

 0
Reply find19 (1245) 10/24/2003 1:56:35 PM

"Paul F. Dietz" <dietz@dls.net> writes:

>> I don't follow this.  How can a type error not be relevant?

> The type annotations you were forced to add could be wrong.

Why would I be forced to add type annotations?  I often add them
last, when everything else is working. If they're not obvious, I can
query the system for them to see whether they correspond to my
expectations.

-kzm
--
If I haven't seen further, it is by standing in the footprints of giants

 0
Reply news2 (145) 10/24/2003 2:36:13 PM

Dirk Thierbach wrote:
> Pascal Costanza <costanza@web.de> wrote:
>
>>Dirk Thierbach wrote:
>>
>>>You cannot take an arbitrary language and attach a good static type
>>>system to it. Type inference will be much too difficult, for example.
>>>There's a fine balance between language design and a good type system
>>>that works well with it.
>
>
>>Right. As I said before, you need to reduce the expressive power of the
>>language.
>
>
> Maybe that's where the problem is. One doesn't need to reduce the
> "expressive power". I don't know your particular application, but what
> you seem to need is the ability to dynamically change the program
> execution. There's more than one way to do that.

Of course there is more than one way to do anything. You can do
everything in assembler. The important point is: what are the convenient
ways to do these things? (And convenience is a subjective matter.)

Expressive power is not Turing equivalence.

> I may be wrong, but I somehow have the impression that it is difficult
> to see other ways to solve a problem if you haven't done it in that
> way at least once.

No, you need several attempts to get used to a certain programming
style. These things don't fall from the sky. When you write your first
program in a new language, it is very likely that you a) try to imitate
what you have done in other languages you knew before and b) that you
don't know the standard idioms of the new language.

Mastering a programming language is a very long process.

> So you see that with different tools, you cannot do
> it in exactly the same way as with the old tools, and immediately you
> start complaining that the new tools have "less expressive power",
> just because you don't see that you have to use them in a different
> way.  The "I can do lot of things with macros in Lisp that are
> impossible to do in other languages" claim seems to have a similar
> background.

No, you definitely can do a lot of things with macros in Lisp that are
impossible to do in other languages. There are papers that show this
convincingly. Try
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-453.pdf for a
start. Then continue, for example, with some articles on Paul Graham's

> I could complain that Lisp or Smalltalk have "less expressive power"
> because I cannot declare algebraic datatypes properly,

I don't see why this shouldn't be possible, but I don't know.

> I don't have
> pattern matching to use them efficiently,

http://www.cliki.net/fare-matcher

> and there is no automatic
> test generation (i.e., type checking) for my datatypes.

http://www.plt-scheme.org/software/mrflow/

> The only way out is IMHO to learn as many languages as possible, and
> to learn as many alternative styles of solving problems as possible.

Right.

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)


 0
Reply costanza (1427) 10/24/2003 2:44:18 PM

Matthias Blume wrote:
> Pascal Costanza <costanza@web.de> writes:
>
>
>>Matthias Blume wrote:
>>
>>>Pascal Costanza <costanza@web.de> writes:
>>>
>>>
>>>>No, you are asking for more. You are asking for the proof to be
>>>>automatically executable.
>>>
>>>Would people kindly stop telling me what I am asking for?
>>
>>>Thank you.
>>
>>I am terribly sorry, but a static type system automatically executes a
>>proof about certain properties of a program. And you said you want
>>static type systems.
>
>
> That was in a different part of the discussion.  The topic had changed
> slightly.  I did not ask for automatic correctness proofs.  Even
> though it would be nice to have them, it is clearly not reasonable to

...but at least, you have used this reasoning to justify asking for
statically checkable proofs.

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)


 0
Reply costanza (1427) 10/24/2003 2:52:05 PM

In comp.lang.lisp Dirk Thierbach <dthierbach@gmx.de> wrote:

> I could complain that Lisp or Smalltalk have "less expressive power"
> because I cannot declare algebraic datatypes properly, I don't have
> pattern matching to use them efficiently, and there is no automatic
> test generation (i.e., type checking) for my datatypes.

Would Qi apply here?

http://www.simulys.com/guideto.htm

Cheers,

-- Nikodemus

 0
Reply demoss (40) 10/24/2003 2:54:07 PM

Thomas Lindgren <***********@*****.***> writes:

> In short, firing the programmer for not providing a correctness
> proof doesn't seem very constructive.

Read what I actually wrote.  I never suggested such a thing.

(I said: fire the programmer who doesn't have a correctness proof --
even just an informal one -- in her mind when she writes her code.  I
also believe that such a programmer does not exist.  That was actually
my point, but it seems to be completely lost on some participants in
this discussion.  I am amazed that this is even controversial at all.)

Matthias

 0
Reply find19 (1245) 10/24/2003 2:54:55 PM

ketil+news@ii.uib.no wrote:
> "Paul F. Dietz" <dietz@dls.net> writes:
>
>>>I don't follow this.  How can a type error not be relevant?
>
>>The type annotations you were forced to add could be wrong.
>
> Why would I be forced to add type annotations?

every now and then. But I don't remember where (and I might be wrong).

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)


 0
Reply costanza (1427) 10/24/2003 2:55:36 PM

Pascal Costanza <costanza@web.de> writes:

> Matthias Blume wrote:
> > Pascal Costanza <costanza@web.de> writes:
> >
> >>Matthias Blume wrote:
> >>
> >>>Pascal Costanza <costanza@web.de> writes:
> >>>
> >>>
> >>>>No, you are asking for more. You are asking for the proof to be
> >>>>automatically executable.
> >>>
> >>>Would people kindly stop telling me what I am asking for?
> >>
> >>>Thank you.
> >>
> >>I am terribly sorry, but a static type system automatically executes a
> >>proof about certain properties of a program. And you said you want
> >>static type systems.
> > That was in a different part of the discussion.  The topic had
> > changed
>
> > slightly.  I did not ask for automatic correctness proofs.  Even
> > though it would be nice to have them, it is clearly not reasonable to
>
> ...but at least, you have used this reasoning to justify asking for
> statically checkable proofs.

Let's say it came up during the discussion.  IIRC, at some point it
was you who claimed that there are correct programs which are
impossible to verify statically.  This was the moment when the focus
shifted: I do not believe this claim to be truthful and argued to this
end, but I am not as crazy as to say that static verification in this
sense is possible through existing (real-world) type systems.  Sorry
if this didn't come across clearly.

Do we have agreement, at least as far as this meta-discussion on who
said what when goes?

Matthias

 0
Reply find19 (1245) 10/24/2003 3:01:08 PM

Matthias Blume wrote:

> (I said: fire the programmer who doesn't have a correctness proof --
> even just an informal one -- in her mind when she writes her code.  I
> also believe that such a programmer does not exist.  That was actually
> my point, but it seems to be completely lost on some participants in
> this discussion.  I am amazed that this is even controversial at all.)

This is why we are having this discussion. As I said before, there are
different programming styles.

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)


 0
Reply costanza (1427) 10/24/2003 3:02:17 PM

>>>>> Pascal Costanza wrote (on Fri, 24 Oct 2003 at 15:55):

> annotations every now and then. But I don't remember where (and I
> might be wrong).

Polymorphic recursion.

Peter Hancock

 0
Reply hancock (83) 10/24/2003 3:06:53 PM

Matthias Blume wrote:
> Pascal Costanza <costanza@web.de> writes:
>
>>Matthias Blume wrote:
>>
>>>Pascal Costanza <costanza@web.de> writes:
>>>
>>>>Matthias Blume wrote:

>>>I did not ask for automatic correctness proofs.  Even
>>>though it would be nice to have them, it is clearly not reasonable to
>>
>>...but at least, you have used this reasoning to justify asking for
>>statically checkable proofs.
>
>
> Let's say it came up during the discussion.  IIRC, at some point it
> was you who claimed that there are correct programs which are
> impossible to verify statically.  This was the moment when the focus
> shifted: I do not believe this claim to be truthful and argued to this
> end, but I am not as crazy as to say that static verification in this
> sense is possible through existing (real-world) type systems.  Sorry
> if this didn't come across clearly.

Ah, ok, I didn't see this subtle difference. Thanks for clarification.

> Do we have agreement, at least as far as this meta-discussion on who
> said what when goes?

OK, fine by me. ;)

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)


 0
Reply costanza (1427) 10/24/2003 3:12:31 PM

Dirk Thierbach <dthierbach@gmx.de> writes:

> Writing type annotations is just like writing tests -- it allows you
> to focus on what you really want to write, and it allows the compiler
> to verify the tests so your code really does what you expect it to do.

Type annotations are a very limited form of test.  Tests that check
co-variant and contravariant conditions can be hard to encode in a
type.  For example, I write a function that partitions a list into two
sublists.  The sum of the lengths of the two sublists must equal the
length of the input list.  I don't know how to encode that as a type
declaration.
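Joe's invariant is easy to state as an ordinary runtime check, even though it resists encoding as a type in mainstream systems. A minimal Python sketch (the `partition` function and its names are illustrative, not from the thread):

```python
def partition(pred, xs):
    """Split xs into two sublists: elements satisfying pred, and the rest."""
    yes, no = [], []
    for x in xs:
        (yes if pred(x) else no).append(x)
    return yes, no

evens, odds = partition(lambda n: n % 2 == 0, [1, 2, 3, 4, 5])

# The length invariant lives in a runtime assertion, not in the type:
assert len(evens) + len(odds) == 5
```

A dependently typed language could express the length relation in the type itself; in most languages it remains a test.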


 0
Reply jrm (1311) 10/24/2003 3:17:00 PM

In article <bnbeh9$p74$1@f1node01.rhrz.uni-bonn.de>,
Pascal Costanza  <costanza@web.de> wrote:
(snip)
>every now and then. But I don't remember where (and I might be wrong).

Yes, it does, but pretty infrequently IMHO, and it's trivially easy if
you're already thinking clearly enough to be writing correct code at
all.

-- Mark

 0
Reply markc8 (213) 10/24/2003 3:17:23 PM

Pascal Costanza wrote:

> Dirk Thierbach wrote:
>> In Hindley-Milner style typing (without extensions), this can never
>> happen.
>>
>> No type annotations you add can make the type error go away. A type
>> error always points to some error in the code, often a quite trivial
>> one (like wrong parenthesis, or swapped variable names, etc.).
>
> ...or a wrong type annotation?

Certainly a wrong type annotation will be flagged as an error.
As Dirk pointed out, the Hindley Milner type system does
not *require* any type annotations to type check a program.
But most programmers will add them anyway as essential
documentation for functions. The fact that the compiler
will automagically verify any such annotation is *correct*
is a big win.

Regards
--


 0
Reply ahey (217) 10/24/2003 3:19:50 PM

Kenny Tilton wrote:
> Ralph Becket wrote:
>> STATICALLY TYPED: the compiler carries out a proof that no value of the
>> wrong type will ever be passed to a function expecting a different type,
>> anywhere in the program.
>
> Big deal.

Yes it is a very big deal. I suspect from your choice of words
you have a closed mind on this issue, so there's no point in me
wasting my time trying to explain why.

<snip quote from someone who doesn't understand static typing at
all if the references to Java and C++ are anything to go by>

> Lights out for static typing.

That's complete bollocks. There are more than enough sufficiently
enlightened people to keep static typing alive and well, thank you
very much. If you choose not to take advantage of it, that's your loss.

Regards
--

 0
Reply ahey (217) 10/24/2003 3:25:17 PM

Pascal Costanza wrote:
>
> "less expressive power" means that there exist programs that work but
> that cannot be statically typechecked. These programs objectively exist.
> By definition, I cannot express them in a statically typed language.
>
> On the other hand, you can clearly write programs in a dynamically typed
> language that can still be statically checked if one wants to do that.
> So the set of programs that can be expressed with a dynamically typed
> language is objectively larger than the set of programs that can be
> expressed with a statically typed language.

Well, "can be expressed" is a very vague concept, as you noted yourself.
To rationalize the discussion on expressiveness, there is a nice
paper by Felleisen, "On the Expressive Power of Programming Languages"
which makes this terminology precise.

Anyway, you are right of course that any type system will take away some
expressive power (particularly the power to express bogus programs :-)
but also some sane ones, which is a debatable trade-off).

But you completely ignore the fact that it also adds expressive power at
another end! For one thing, by allowing you to encode certain invariants
in the types that you cannot express in another way. Furthermore, by
giving more knowledge to the compiler and hence allow the language to
automatize certain tedious things. Overloading, for example, is a feature
that increases expressive power in certain ways and crucially relies on
static typing.

So there is no inclusion, the "expressiveness" relation is unordered wrt
static vs dynamic typing.

- Andreas

--
Andreas Rossberg, rossberg@ps.uni-sb.de

"Computer games don't affect kids; I mean if Pac Man affected us
as kids, we would all be running around in darkened rooms, munching
magic pills, and listening to repetitive electronic music."
- Kristian Wilson, Nintendo Inc.


 0
Reply rossberg (600) 10/24/2003 3:30:31 PM

In article <3638acfd.0310231647.16db77b4@posting.google.com>,
rafe@cs.mu.oz.au (Ralph Becket) wrote:

> myfirstname.mylastname@jpl.nasa.gov (Erann Gat) wrote in message
news:<myfirstname.mylastname-2310030857350001@192.168.1.51>...
> >
> > No.  The fallacy in this reasoning is that you assume that "type error"
> > and "bug" are the same thing.  They are not.  Some bugs are not type
> > errors, and some type errors are not bugs.  In the latter circumstance
> > simply ignoring them can be exactly the right thing to do.
>
> Just to be clear, I do not believe "bug" => "type error".  However, I do
> claim that "type error" (in reachable code) => "bug".

But that just begs the question of what you consider a type error.  Does
the following code contain a type error?

(defun rsq (a b)
  "Return the square root of the sum of the squares of a and b"
  (sqrt (+ (* a a) (* b b))))

(defun rsq1 (a b)
  (or (ignore-errors (rsq a b)) 'FOO))

or:

(defun rsq2 (a b)
  (or (ignore-errors (rsq a b)) (error "Foo")))
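For readers who don't read Lisp, here is a rough Python rendering of rsq and rsq1 (ignore-errors is approximated by a try/except that swallows only TypeError, so this is not an exact translation):

```python
import math

def rsq(a, b):
    """Return the square root of the sum of the squares of a and b."""
    return math.sqrt(a * a + b * b)

def rsq1(a, b):
    # Rough analogue of (or (ignore-errors (rsq a b)) 'FOO):
    # a runtime "type error" inside rsq is swallowed and a marker returned.
    try:
        return rsq(a, b)
    except TypeError:
        return "FOO"
```

Whether calling rsq with a non-number is a "type error" or a deliberately handled condition is exactly the question at issue.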

> Here's the way I see it:
> (1) type errors are extremely common;

In my experience they are quite rare.

> (2) an expressive, statically checked type system (ESCTS) will identify
>   almost all of these errors at compile time;

And then some.  That's the problem.

> (3) type errors flagged by a compiler for an ESCTS can pinpoint the source
>   of the problem whereas ad hoc assertions in code will only identify a
>   symptom of a type error;

Really?  If there's a type mismatch how does the type system know if the
problem is in the caller or the callee?

> (4) the programmer does not have to litter type assertions in a program
>   written in a language with an ESCTS;

But he doesn't have to litter type assertions in a program written in a
language without an ESCTS either.

> (5) an ESCTS provides optimization opportunities that would otherwise
>   be unavailable to the compiler;

That is true.  Whether this benefit outweighs the drawbacks is arguable.

> (6) there will be cases where the ESCTS requires one to code around a
>   constraint that is hard/impossible to express in the ESCTS (the more
>   expressive the type system, the smaller the set of such cases will be.)
>
> The question is whether the benefits of (2), (3), (4) and (5) outweigh
> the occasional costs of (6).

Yes, that's what it comes down to.  There are both costs and benefits.
The balance probably tips one way in some circumstances, the other way in
others.

E.

 0

Matthias Blume <find@my.address.elsewhere> writes:

> spammers_must_die@jpl.nasa.gov (Erann Gat) writes:
>
> > It is perfectly well defined, it's just defined in terms that are not
> > logical but rather psychological.
>
> I don't think it is well-defined at all.  Ask n people and you get n
> different answers.

What makes you think that this would be true (n people --> n totally
different answers)?  There's plenty of evidence that suggests you're
just plain wrong here.

> > There are people who make their living (indeed an entire industry
> > devoted to) solving this problem.
>
> I know.  The point is that one can never say the program is "correct"
> with respect to the requirement of having the typesetting be
> aesthetical.

Why not?  Presumably because you have some very narrow notion of "correct".

>  One can, maybe, make statements like "the majority of our customers
> seems to be satisfied with the results".  But that's not what
> I call "correct".

Ah, you _do_ have a very narrow (mostly useless) notion of "correct".

> > You will obviously not be among them.
>
> Indeed, I will not.  But that's more because I'm not very good at
> arts.

I don't see how that has anything much to do with it.

/Jon

 0
Reply j-anthony (99) 10/24/2003 3:42:35 PM

Alain Picard <apicard+die-spammer-die@optushome.com.au> writes:

>
>>
>> You're fired.
>
> No worries.  Joe: You're Hired!

Swoit!  I'm hooning to Austrailian for a bonzo job.
Got my lagerphone to scare off the drop bears.
Toss me a tinny.

 0
Reply jrm (1311) 10/24/2003 3:49:07 PM

Matthias Blume <find@my.address.elsewhere> writes:

> j-anthony@rcn.com (Jon S. Anthony) writes:
>
> > Hmmm.  Maybe I actually did have a proof in my head that you were
> > clueless.  You've even done the work here of giving a good first draft
> > of writing it out for me.
>
> Glad to see some really coherent, intelligent contributions to this
> discussion.

Glad to see you are beginning to get the point.

> Thanks!

You're welcome.

/Jon

 0
Reply j-anthony (99) 10/24/2003 3:51:32 PM

In article <m2ad7r74je.fsf@hanabi-air.shimizu.blume>, Matthias Blume
<find@my.address.elsewhere> wrote:

> spammers_must_die@jpl.nasa.gov (Erann Gat) writes:
>
> > It is perfectly well defined, it's just defined in terms that are not
> > logical but rather psychological.
>
> I don't think it is well-defined at all.  Ask n people and you get n
> different answers.

No, that's not true.  It turns out that there are universal aesthetic
principles that are hard-wired into the human brain.  That's why the
Parthenon or a Frank Gehry building look better than a Bronx tenement.  To
everyone.

> > There are people who make their living (indeed an entire industry
> > devoted to) solving this problem.
>
> I know.  The point is that one can never say the program is "correct"
> with respect to the requirement of having the typesetting be
> aesthetical.  One can, maybe, make statements like "the majority of
> our customers seems to be satisfied with the results".  But that's not
> what I call "correct".

What is it about then?  I thought that correctness is about conforming to
a specification.  But now you insist that only certain kinds of
specifications are allowed.  They have to be "well defined" whatever that
means.  Well, I think it's perfectly legitimate to desire a typesetting
program that produces good looking output, so I'd say your view of
correctness is too narrow.

> > You will obviously not be among them.
>
> Indeed, I will not.  But that's more because I'm not very good at
> arts.

There's nothing wrong with that.  There is something wrong with saying
that it is illegitimate for others to strive to understand or care about
them, to dismiss an aesthetic specification because it is "not well
defined."  There are more things in heaven and earth, Matthias Blume, than
are dreamt of in your mathematics.

E.

 0

Matthias Blume <find@my.address.elsewhere> writes:

> j-anthony@rcn.com (Jon S. Anthony) writes:
>
> > You can't be serious.  Even we take your premise as true (that she
> > _thinks_ she has a proof) this in absolutely no way implies that she
> > does and even less that such a proof exists.  Let's see... I _think_ I
> > have a proof (in my head) that you are completely clueless wrt this
> > topic, therefore such a proof "obviously" exists and could be written
> > down.  Yep, makes real good sense.
>
> No, my claim is: For every correct program written by a human there is
> a correctness proof.

Well, that is _not_ what you _said_.  For someone so concerned about
correctness, you should pay a little more attention to saying what you
_mean_.

> In other words, I find it unlikely that someone writes a correct
> program, but there actually is no such proof.  People do reason

What's this have to do with "Joe wrote some code, so Joe 'had a proof
in his head'"?

> Your attempt at insulting me is cute, but it has little to do with
> what I said.

Sorry - it has _everything_ to do with what you _said_.

/Jon

 0
Reply j-anthony (99) 10/24/2003 3:55:40 PM

spammers_must_die@jpl.nasa.gov (Erann Gat) writes:

> [...] It turns out that there are universal aesthetic principles that are
> hard-wired into the human brain.  That's why the Parthenon or a
> Frank Gehry building look better than a Bronx tenement.  To
> everyone.

You still get n answers.  Admittedly, they will tend to be correlated.
If aesthetics are so universal, how come Windows XP looks so hideously
ugly?  (To name just one example.)

Matthias

 0
Reply find19 (1245) 10/24/2003 4:07:10 PM

Joe Marshall <jrm@ccs.neu.edu> wrote:
> Dirk Thierbach <dthierbach@gmx.de> writes:
> Type annotations are a very limited form of test.

Yes. On the other hand, they are more powerful than test-by-example,
because they test classes of values on every execution path, instead
of single values on one particular execution path.
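The contrast can be illustrated in Python (an illustrative sketch, not from the thread): an example-based test exercises one value, while a property checked over many generated inputs approximates "classes of values" -- closer to, though still weaker than, what a type checker covers:

```python
import random

def total(xs):
    """Sum a list of numbers."""
    acc = 0
    for x in xs:
        acc += x
    return acc

# Test-by-example: one value on one execution path.
assert total([1, 2, 3]) == 6

# Property-style test: many random inputs approximate a class of values.
for _ in range(100):
    xs = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
    assert total(xs) == sum(xs)
```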

> Tests that check co-variant and contravariant conditions can be hard
> to encode in a type.  For example, I write a function that
> partitions a list into two sublists.  The sum of the lengths of the
> two sublists must equal the length of the input list.  I don't know
> how to encode that as a type declaration.

And you don't have to: Just write a normal test. A type system doesn't
replace all normal tests. But in some ways it subsumes some of them.

And with type inference, it comes for free: If you don't want it, just
don't write any type annotations.

- Dirk

 0
Reply dthierbach (210) 10/24/2003 4:33:26 PM

j-anthony@rcn.com (Jon S. Anthony) writes:

>
> > j-anthony@rcn.com (Jon S. Anthony) writes:
> >
> > > You can't be serious.  Even we take your premise as true (that she
> > > _thinks_ she has a proof) this in absolutely no way implies that she
> > > does and even less that such a proof exists.  Let's see... I _think_ I
> > > have a proof (in my head) that you are completely clueless wrt this
> > > topic, therefore such a proof "obviously" exists and could be written
> > > down.  Yep, makes real good sense.
> >
> > No, my claim is: For every correct program written by a human there is
> > a correctness proof.
>
> Well, that is _not_ what you _said_.  For someone so concerned about
> correctness, you should pay a little more attention to saying what you
> _mean_.

Right.  Let me quote myself verbatim:

"I said that the programmer has a proof in her head. (At least she
thinks she does.)  My point was that since she has a proof, the
proof obviously *exists* and *could* be written down and *could* be
statically verified if one only went to the trouble of doing so."

Now, what I should have added is that, of course, the verification
might fail -- indicating that that programmer was wrong thinking she
had a proof.  It is my belief that in those cases where the program
actually is correct it will be possible to either verify the proof
outright, or to fix whatever problems there are with it to make it go
through.  I strongly believe that there are virtually no correct
programs written by humans where this technique must fail.  (And in
those cases where it does, we wouldn't be able to find out that this
is in fact so.)

> > In other words, I find it unlikely that someone writes a correct
> > program, but there actually is no such proof.  People do reason
>
> What's this have to do with "Joe wrote some code, so Joe 'had a proof
> in his head'"?

The following: The only way that the above could be false is that two
conditions are met:

- Joe writes a correct program.
- There is no proof for the correctness of that program (in the sense
of "there is no such proof now and it is not possible for anyone
to produce such a proof in the future").

I find this extremely unlikely because I believe that Joe already had
the sketch of the proof in his head when he wrote his correct program.
That sketch could be made into a full proof (by fleshing it out and
possibly by correcting a few non-fatal problems that it might have).

> > Your attempt at insulting me is cute, but it has little to do with
> > what I said.
>
> Sorry - it has _everything_ to do with what you _said_.

Well, in a way, yes, you are right.  It shows that whenever you are
not absolutely precise, some smartass will come along and poke holes
into your argumentation, be it just for the fun of it or in the
pursuit of more serious agendas.  The software analogy for this is
that whenever you have not made absolutely sure that there are no weak
points in your program, someone will come along and exploit them.  I
find this a very compelling reason to take advantage of static
verification whenever possible.

Matthias

 0
Reply find19 (1245) 10/24/2003 4:38:36 PM

Pascal Costanza <costanza@web.de> wrote:
> Dirk Thierbach wrote:

> Of course there is more than one way to do anything. You can do
> everything in assembler. The important point is: what are the convenient
> ways to do these things? (And convenience is a subjective matter.)

Yes. The point is: It may be as convenient to do in one language as in
the other language. You just need a different approach.

> No, you definitely can do a lot of things with macros in Lisp that are
> impossible to do in other languages.

We just had this discussion here, and I am not going to repeat it.
I know Paul Graham's website, and I know many examples of what you
can do with macros. Macros are a wonderful tool, but you really can
get most of what you can do with macros by using HOFs. There are
some things that won't work, the most important of which is that
you cannot force calculation at compile time, and you have to hope
that the compiler does it for you (ghc actually does it sometimes.)
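A small illustration of the HOF-for-macro trade (a Python sketch; the `unless` example is mine, not from the thread): where a Lisp macro would receive its body unevaluated, a higher-order function needs the caller to delay the body explicitly with a lambda:

```python
def unless(cond, thunk):
    # A macro version would delay evaluation of the body automatically;
    # the HOF version makes the caller wrap it in a lambda (a "thunk").
    if not cond:
        return thunk()
    return None

result = unless(False, lambda: 40 + 2)

# The thunk is never called when cond is true -- no ZeroDivisionError here:
assert unless(True, lambda: 1 / 0) is None
```

And, as Dirk notes, nothing here forces the computation to happen at compile time.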

- Dirk

 0
Reply dthierbach (210) 10/24/2003 4:40:11 PM

Pascal Costanza <costanza@web.de> wrote:
> Dirk Thierbach wrote:

> OK, I have got it. No, that's not what I want. What I want is:
>
> testxyz obj = (concretemethod obj == 42)
>
> Does the code compile as long as concretemethod doesn't exist?

No. Does your test pass as long as concretemethod doesn't exist? It doesn't,
for the same reason.

>> It's the same with compile-time type errors. The only difference is
>> that they happen at compile-time, not at test-suite run-time, but the
>> necessary reaction is the same: Fix your code so that all tests (or
>> the compiler-generated type "tests") pass. Then continue with the next
>> step.

> The type system might test too many cases.

I have never experienced that, because every expression that is valid
code will have a proper type.

Can you think of an example (not in C++ or Java etc.) where the type
system may check too many cases?

> I don't compile my programs. Not as a distinct conscious step during
> development. I write pieces of code and execute them immediately.

I know. I sometimes do the same with Haskell: I use ghc in interactive
mode, write a piece of code and execute it immediately (which means it
gets compiled and type checked). When it works, I paste it into
the file. If there was a better IDE, I wouldn't have to do that,
but even in this primitive way it works quite well.

> It's much faster to run the code than to explicitly compile and/or
> run a type checker.

Unless your modules get very large, or you're in the middle of some
big refactoring, compiling or running the type checker is quite fast.

> This is a completely different style of developing code.

I have known this style of developing code for quite some time :-)

- Dirk


 0
Reply dthierbach (210) 10/24/2003 4:53:31 PM

Pascal Costanza  <costanza@web.de> wrote:

>Mastering a programming language is a very long process.
>
>> So you see that with different tools, you cannot do
>> it in exactly the same way as with the old tools, and immediately you
>> start complaining that the new tools have "less expressive power",
>> just because you don't see that you have to use them in a different
>> way.  The "I can do lot of things with macros in Lisp that are
>> impossible to do in other languages" claim seems to have a similar
>> background.
>
>No, you definitely can do a lot of things with macros in Lisp that are
>impossible to do in other languages. There are papers that show this
>convincingly. Try
>ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-453.pdf for a
>start. Then continue, for example, with some articles on Paul Graham's

That's a great paper; however, see Steele's later work:
http://citeseer.nj.nec.com/steele94building.html

John

 0
Reply jatwood2 (44) 10/24/2003 5:04:20 PM

In article <m14qxy39ht.fsf@tti5.uchicago.edu>, Matthias Blume
<find@my.address.elsewhere> wrote:

> spammers_must_die@jpl.nasa.gov (Erann Gat) writes:
>
> > [...] It turns out that there are universal aesthetic principles that are
> > hard-wired into the human brain.  That's why the Parthenon or a
> Frank Gehry building look better than a Bronx tenement.  To
> > everyone.
>
> You still get n answers.  Admittedly, they will tend to be correlated.
> If aesthetics are so universal, how come Windows XP looks so hideously
> ugly?  (To name just one example.)

Do you realize that you just proved my point by stating unequivocally that
Windows XP looks hideous?  You're right about that.  (Maybe you're not as
hopeless as an artist as you think.)  The reason is very simple: Microsoft
doesn't care about aesthetics.  Never has.  Probably never will.

That's one of the reasons I use a Mac.

E.

---

"The only problem with Microsoft is they just have no taste...I don't mean
that in a small way--I mean that in a big way, in the sense that they
don't think of original ideas, and they don't bring much culture into
their product...So I guess I am saddened, not by Microsoft's success--I have
no problem with their success; they've earned their success for the most
part--I have a problem with the fact that they just make really third-rate
products."

-- Steve Jobs.  Triumph of the Nerds PBS documentary interview (May 1996)

 0

j-anthony@rcn.com (Jon S. Anthony) writes:

> > I find this extremely unlikely because I believe that Joe already had
> > the sketch of the proof in his head when he wrote his correct program.
>
> [...]  Second, your belief here is truly remarkable.

What I find remarkable in this discussion is that anyone would find
this belief of mine remarkable.

Cheers,
Matthias

 0
Reply find19 (1245) 10/24/2003 5:14:47 PM

In article <m1znfq1tgz.fsf@tti5.uchicago.edu>, Matthias Blume
<find@my.address.elsewhere> wrote:

> Well, in a way, yes, you are right.  It shows that whenever you are
> not absolutely precise, some smartass will come along and poke holes
> into your argumentation, be it just for the fun of it or in the
> pursuit of more serious agendas.  The software analogy for this is
> that whenever you have not made absolutely sure that there are no weak
> points in your program, someone will come along and exploit them.  I
> find this a very compelling reason to take advantage of static
> verification whenever possible.

Do you not see the irony here?  When people who like dynamic typing try to
use a statically type langauge they often feel like they are engaging in a
dialog with the compiler that is just as frustrating and pointless as
the one you are having with j-anthony et al.  All he is doing is
attempting to hold you to the standards of logic and precision that you
yourself advocate.

E.

 0

Andreas Rossberg wrote:
> Pascal Costanza wrote:
>
>>
>> "less expressive power" means that there exist programs that work but
>> that cannot be statically typechecked. These programs objectively
>> exist. By definition, I cannot express them in a statically typed
>> language.
>>
>> On the other hand, you can clearly write programs in a dynamically
>> typed language that can still be statically checked if one wants to do
>> that. So the set of programs that can be expressed with a dynamically
>> typed language is objectively larger than the set of programs that can
>> be expressed with a statically typed language.
>
> Well, "can be expressed" is a very vague concept, as you noted yourself.
>   To rationalize the discussion on expressiveness, there is a nice paper
> by Felleisen, "On the Expressive Power of Programming Languages" which
> makes this terminology precise.

I have skimmed through that paper. It states the following in the
conclusion section:

"The most important criterion for comparing programming languages showed
that an increase in expressive power may destroy semantic properties of
the core language that programmers may have become accustomed to
(Theorem 3.14). Among other things, this invalidation of operational
laws through language extensions implies that there are now more
distinctions to be considered for semantic analyses of expressions in
the core language. On the other hand, the use of more expressive
languages seems to facilitate the programming process by making programs
more concise and abstract (Conciseness Conjecture). Put together, this
result says that

* an increase in expressive power is related to a decrease of the set of
"natural" (mathematically appealing) operational equivalences."

This seems to be compatible with my point of view. (However, I am not
really sure.)

> Anyway, you are right of course that any type system will take away some
> expressive power (particularly the power to express bogus programs :-)
> but also some sane ones, which is a debatable trade-off).

Thanks. ;)

> But you completely ignore the fact that it also adds expressive power at
> another end! For one thing, by allowing you to encode certain invariants
> in the types that you cannot express in another way. Furthermore, by
> giving more knowledge to the compiler and hence allow the language to
> automatize certain tedious things.

I think you are confusing things here. It gets much clearer when you
separate compilation/interpretation from type checking, and see a static
type checker as a distinct tool.

The invariants that you write, or that are inferred by the type checker,
are expressions in a domain-specific language for static program
analysis. You can only increase the expressive power of that
domain-specific language by adding a more elaborate static type system.
You cannot increase the expressive power of the language that it reasons
about.

An increase of expressive power of the static type checker decreases the
expressive power of the target language, and vice versa.

As a sidenote, here is where Lisp comes into the game: Since Lisp
programs can easily reason about other Lisp programs, because there is
no distinction between programs and data in Lisp, it should be pretty
straightforward to write a static type checker for Lisp programs, and
use it as a distinct tool.

It should also be straightforward to make this a relatively
flexible type checker for which you can increase/decrease the level of
required conformance to the (a?) type system.

This would mean that you could have the benefits of both worlds: when
you need static type checking, you can add it. You can even enforce it
in a project, if the requirements are strict in this regard in a certain
setting. If the requirements are not so strict, you can relax the static
type soundness requirements, or maybe even go back to dynamic type checking.

In fact, such systems already seem to exist. I guess that's what soft
typing is good for, for example (see MrFlow). Other examples that come
to mind are Qi and ACL2.

Why would one want to switch languages for a single feature?

Note that this is just brainstorming. I don't know whether such an
approach can really work in practice. There are probably some nasty
details that are hard to solve.

> that increases expressive power in certain ways and crucially relies on
> static typing.

Overloading relies on static typing? This is news to me. What do you mean?

> So there is no inclusion, the "expressiveness" relation is unordered wrt
> static vs dynamic typing.

No, I don't think so.

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)


 0
Reply costanza (1427) 10/24/2003 5:20:47 PM

Matthias Blume <find@my.address.elsewhere> writes:

>
> You're fired.

No worries.  Joe: You're Hired!

--
It would be difficult to construe        Larry Wall, in  article
this as a feature.			 <1995May29.062427.3640@netlabs.com>

 0
Reply die-spammer-die (28) 10/24/2003 5:23:35 PM

Matthias Blume <find@my.address.elsewhere> writes:

> j-anthony@rcn.com (Jon S. Anthony) writes:
>
> > Matthias Blume <find@my.address.elsewhere> writes:
> >
> > > j-anthony@rcn.com (Jon S. Anthony) writes:
> > >
> > > > You can't be serious.  Even we take your premise as true (that she
> > > > _thinks_ she has a proof) this in absolutely no way implies that she
> > > > does and even less that such a proof exists.  Let's see... I _think_ I
> > > > have a proof (in my head) that you are completely clueless wrt this
> > > > topic, therefore such a proof "obviously" exists and could be written
> > > > down.  Yep, makes real good sense.
> > >
> > > No, my claim is: For every correct program written by a human there is
> > > a correctness proof.
> >
> > Well, that is _not_ what you _said_.  For someone so concerned about
> > correctness, you should pay a little more attention to saying what you
> > _mean_.
>
> Right.  Let me quote myself verbatim:
>
>   "I said that the programmer has a proof in her head. (At least she
>   thinks she does.)  My point was that since she has a proof, the
>   proof obviously *exists* and *could* be written down and *could* be
>   statically verified if one only went to the trouble of doing so."
>
> Now, what I should have added is that, of course, the verification
> might fail -- indicating that that programmer was wrong thinking she

This still indicates that you really believe that she "had a proof in
her head" at the time the she wrote the code.  I maintain that there
is absolutely no evidence for such a remarkable belief.

When you trot out the term "proof", especially in some formal
mathematical context as you do here, it has some pretty specific
meaning which really involves a level of verification.  Probably by
peers (as much as yourself) equally (or more) adept at dealing with
the reasoning and concepts involved.

>  It is my belief that in those cases where the program actually is
> correct it will be possible to either verify the proof outright, or
> to fix whatever problems there are with it to make it go through.

This is basically the halting problem.  I don't see how it in any way
supports your claim.

> > What's this have to do with "Joe wrote some code, so Joe 'had a proof
>
> The following: The only way that the above could be false is that two
> conditions are met:
>
>   - Joe writes a correct program.
>   - There is no proof for the correctness of that program (in the sense
>     of "there is no such proof now and it is not possible for anyone
>     to produce such a proof in the future").

There is a third (and extremely obvious) way in which it could be
false.  Joe did _not_ have a proof in his head when he wrote the code.
The fact that he reasoned various details through and _believed_ the
code was/is correct in no way shape or form indicates 1) that he had a
proof, 2) that there is a proof, or even 3) that he _thought_ he

> I find this extremely unlikely because I believe that Joe already had
> the sketch of the proof in his head when he wrote his correct program.

First a _sketch_ is not a proof.  Second, your belief here is truly
remarkable.

> pursuit of more serious agendas.  The software analogy for this is
> that whenever you have not made absolutely sure that there are no weak
> points in your program, someone will come along and exploit them.  I
> find this a very compelling reason to take advantage of static
> verification whenever possible.

Fair enough.

/Jon

 0
Reply j-anthony (99) 10/24/2003 5:24:24 PM

Nikodemus Siivola <demoss@random-state.net> wrote:
> In comp.lang.lisp Dirk Thierbach <dthierbach@gmx.de> wrote:

>> I could complain that Lisp or Smalltalk have "less expressive power"
>> because I cannot declare algebraic datatypes properly, I don't have
>> pattern matching to use them efficiently, and there is no automatic
>> test generation (i.e., type checking) for my datatypes.

> Would Qi apply here?
>
> http://www.simulys.com/guideto.htm

Qi is certainly interesting. It looks very ML-ish, so I suppose the
answer to "How do I add static typing to Lisp?" is "You implement
ML in Lisp" :-)

The type system is very flexible, and you can encode a lot into types
because it has a complete theorem prover, but you probably pay for
that with severe speed penalties.

Most important: Since Qi tries to make static typing optional, and
it also has to deal with the impure features of Lisp, there is
no type inference:

Qi is an explicitly typed language; this means that all defined
functions must be accompanied by their intended type. Failure to
supply a type will produce an error message.

So no automatic tests.

- Dirk


 0
Reply dthierbach (210) 10/24/2003 5:27:28 PM

Matthias Blume <find@my.address.elsewhere> writes:

> j-anthony@rcn.com (Jon S. Anthony) writes:
>
> > > I find this extremely unlikely because I believe that Joe already had
> > > the sketch of the proof in his head when he wrote his correct program.
> >
> > [...]  Second, your belief here is truly remarkable.
>
> What I find remarkable in this discussion is that anyone would find
> this belief of mine remarkable.

Wow.  That's even _more_ remarkable.

/Jon

 0
Reply j-anthony (99) 10/24/2003 5:35:03 PM

In article <m1znfq1tgz.fsf@tti5.uchicago.edu>, Matthias Blume wrote:

>   "I said that the programmer has a proof in her head. (At least she
>   thinks she does.)  My point was that since she has a proof, the
>   proof obviously *exists* and *could* be written down and *could* be
>   statically verified if one only went to the trouble of doing so."

I wrote myself a spam filter.  At no time did I have a proof in my head
that it is "correct".  In fact, I am quite certain that it is *not*
"correct" for any reasonable definition of "correct".  It is nonetheless
useful.  (In fact, it is indispensable.  I'm up to 400-500 spams a day now
with a growth rate that seems to be following Moore's law pretty closely.
Very scary.)

So there is a counterexample to your theory.

It gets even worse than that.  On your view, if someone asked you to write
a spam filter your response would be to demand that they first precisely
define for you what a spam is.  But producing that precise definition is
the hard part.  Once you have a precise definition of spam in hand
rendering that definition into code is trivial.  (Spam detection, by the
way, is precisely analogous to aesthetic typesetting in that there are
some universal principles that one can apply despite the fact that
individual opinions on what is and is not spam will vary.)

Many - perhaps most - interesting programming problems are like that.  The
heavy lifting is in producing the spec, not rendering the spec into code.
For those kinds of problems enforced static typing is often more of a
hindrance than a help because it prohibits you from discovering certain
kinds of problems (the ones that show up at run time) until you have
resolved *all* of the instances of another kind of problem, whether those
are relevant to the problem at hand or not.
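[Editorial aside: the point about run-time problems being deferred while
static problems must all be resolved first can be made concrete with a
small hypothetical sketch. The unfinished branch below is exactly the
kind of code a static checker would reject outright, even though it is
never reached during exploratory testing.]

```python
# Hypothetical illustration: the experimental branch contains what a
# static checker would flag (an undefined name, string/number arithmetic),
# but dynamic checking lets us run and explore the working branch now.
def classify(message, experimental=False):
    if experimental:
        # unfinished: 'weights' does not exist yet, and the arithmetic
        # mixes numbers and strings -- a static checker stops here
        return sum(weights) + message
    return "spam" if "viagra" in message.lower() else "ham"
```

The working branch can be exercised immediately; the broken branch only
matters once someone actually turns `experimental` on.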

E.

 0

Matthias Blume wrote:

>>What's this have to do with "Joe wrote some code, so Joe 'had a proof
>
>
> The following: The only way that the above could be false is that two
> conditions are met:
>
>   - Joe writes a correct program.
>   - There is no proof for the correctness of that program (in the sense
>     of "there is no such proof now and it is not possible for anyone
>     to produce such a proof in the future").
>
> I find this extremely unlikely because I believe that Joe already had
> the sketch of the proof in his head when he wrote his correct program.
> That sketch could be made into a full proof (by fleshing it out and
> possibly by correcting a few non-fatal problems that it might have).

Here is an example: Assume you are asked to write an interpreter. For
the sake of simplicity, we assume that it should be a Lisp interpreter,
because I can give you a complete implementation in one line of code:

(loop (print (eval (read))))

Now, the specification states that this interpreter should never run
into an endless loop.

There are two possible ways to respond to this spec:

- The first programmer tells the customer that he cannot write a program
that can solve the halting problem. He even gives them a proof.

- The second programmer adds the feature that a certain combination of
key strokes breaks out of the read-eval-print loop.

In the second case, the programmer has found a solution for the problem.
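[Editorial aside: the second programmer's approach can be sketched in a
few lines of Python. The `:quit` sentinel below is a hypothetical
stand-in for the "certain combination of key strokes"; the reader and
printer are passed in so the loop can be driven programmatically.]

```python
# Toy sketch of an interpreter loop that the user can always break out
# of, sidestepping the halting problem by letting a human halt it.
def repl(read, print_):
    while True:
        try:
            expr = read()
            if expr == ":quit":      # the designated escape
                break
            print_(eval(expr))       # the one-line interpreter, Python-style
        except KeyboardInterrupt:    # Ctrl-C works as well
            break
```

Driving it with canned input shows the loop evaluating expressions until
the escape arrives.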

Do you think the programmer of the second solution had a "proof in her
head"?

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)


 0
Reply costanza (1427) 10/24/2003 5:42:52 PM

Pascal Costanza <costanza@web.de> wrote:

>> I don't have pattern matching to use them efficiently,

> http://www.cliki.net/fare-matcher

Certainly an improvement, but no way to declare datatypes (i.e.,
pattern constructors) yet:

There also needs be improvements to the infrastructure to build
pattern constructors, so that you may build pattern constructors and
destructors at the same time (much like you do when you define ML
types).

The following might also be a show-stopper (I didn't test it,
but it doesn't look good):

; FIXME: several branches of an "or" pattern can't share variables;
; variables from all branches are visible in guards and in the body,
; and previous branches may have bound variables before failing.

The following comment is also interesting:

Nobody reported using the matcher -- ML/Erlang style pattern
matching seemingly isn't popular with LISP hackers.

Again, the way to get the benefits of "more expressive languages"
like ML in Lisp seems to be to implement part of them on top of Lisp :-)

>> and there is no automatic test generation (i.e., type checking) for
>> my datatypes.

> http://www.plt-scheme.org/software/mrflow/

I couldn't find any details on this page (it says "coming soon"), but
the name suggest a dataflow analyzer. As I have already said, the
problem with attaching static typing and inference to an arbitrary
language is that it is difficult to get it working without changing
the language design. Pure functional features make type inference
easy, imperative features make them hard. Full dataflow analysis might
help, but I'd have to look more closely to see if it works out.

- Dirk

 0
Reply dthierbach (210) 10/24/2003 5:46:54 PM

"Pascal Costanza" <costanza@web.de> wrote in message news:bnbds3$uui$1@f1node01.rhrz.uni-bonn.de...
>
> Expressive power is not Turing equivalence.

Agreed.

So, does anyone have a formal definition of "expressive power?"
Metrics? Examples? Theoretical foundations?

It seems like a hard concept to pin down. "Make it possible
to write programs that contain as few characters as possible"
strikes me as a really bad definition; it suggests that
bzip2-encoded C++ would be really expressive.

Marshall


 0
Reply mspight (144) 10/24/2003 5:48:50 PM

Andreas Rossberg <rossberg@ps.uni-sb.de> wrote:
> Pascal Costanza wrote:

> Anyway, you are right of course that any type system will take away some
> expressive power (particularly the power to express bogus programs :-)
> but also some sane ones, which is a debatable trade-off).

Yep. It turns out that you take away lots of bogus programs, and the
sane programs that are taken away are in most cases at least questionable
(they will be mostly of the sort: There is a type error in some execution
branch, but this branch will never be reached), and can usually be
expressed as equivalent programs that will pass.

"Taking away possible programs" is not the same as "decreasing expressive
power".

> So there is no inclusion, the "expressiveness" relation is unordered wrt
> static vs dynamic typing.

That's the important point.

- Dirk


 0
Reply dthierbach (210) 10/24/2003 5:54:09 PM

Dirk Thierbach wrote:
> Pascal Costanza <costanza@web.de> wrote:
>
>>Dirk Thierbach wrote:
>
>>OK, I have got it. No, that's not what I want. What I want is:
>>
>>testxyz obj = (concretemethod obj == 42)
>>
>>Does the code compile as long as concretemethod doesn't exist?
>
> No. Does your test pass as long as concretemethod doesn't exist? It doesn't,
> for the same reason.

As long as I am writing only tests, I don't care. When I am in the mood
of writing tests, I want to write as many tests as possible, without
having to think about whether my code is acceptable for the static type
checker or not.

>>>It's the same with compile-time type errors. The only difference is
>>>that they happen at compile-time, not at test-suite run-time, but the
>>>necessary reaction is the same: Fix your code so that all tests (or
>>>the compiler-generated type "tests") pass. Then continue with the next
>>>step.
>
>>The type system might test too many cases.
>
> I have never experienced that, because every expression that is valid
> code will have a proper type.
>
> Can you think of an example (not in C++ or Java etc.) where the type
> system may check too many cases?

Here is one:

(defun f (x)
  (unless (< x 200)
    (cerror "Type another number"
            "You have typed a wrong number"))
  (* x 2))
Look up
http://www.lispworks.com/reference/HyperSpec/Body/f_cerror.htm#cerror
before complaining.
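[Editorial aside: CERROR signals a *continuable* error -- a handler may
decline, or invoke the restart and resume with a corrected value -- so
what `f` does with a "wrong" argument depends on run-time interaction,
which is Pascal's point against exhaustive static case analysis. A loose,
hypothetical Python analog, with the handler playing the restart's role:]

```python
# Loose analog of CL's CERROR: the error is continuable, and a
# caller-supplied handler decides whether to retry with a new value.
def f(x, retype=None):
    while not (x < 200):
        # "You have typed a wrong number" -- the handler stands in for
        # the "Type another number" restart
        x = retype("You have typed a wrong number")
    return x * 2
```

With a valid argument the handler is never consulted; with an invalid
one, the result depends entirely on what the handler supplies.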

Pascal

--
Pascal Costanza               University of Bonn
mailto:costanza@web.de        Institute of Computer Science III
http://www.pascalcostanza.de  Römerstr. 164, D-53117 Bonn (Germany)


 0
Reply costanza (1427) 10/24/2003 5:56:22 PM
