Verbose functional languages?

Hello everybody,

I got a question. Is there anything like a verbose functional language
that attempts to be easily readable?

What I am looking for would be something that looks kind of like
smalltalk, with an emphasis on easy to read code, but fully functional
with immutable data structures and a powerful type system.

I find that many people are confused by the very compact syntax of
existing functional languages such as haskell and ocaml and therefore
miss out on the big advantages of these languages such as referential
transparency.

regards,

Rüdiger


rudi2468 (20)
12/2/2007 3:04:39 PM
comp.lang.functional

On Dec 2, 4:04 pm, "Rüdiger Klaehn" <r...@lambda-computing.com> wrote:
> Hello everybody,
>
> I got a question. Is there anything like a verbose functional language
> that attempts to be easily readable?

As a beginner, I find SML easily readable.

  Michele Simionato
12/2/2007 3:09:22 PM
Rüdiger Klaehn wrote:
> Hello everybody,
> 
> I got a question. Is there anything like a verbose functional language
> that attempts to be easily readable?
> 
> What I am looking for would be something that looks kind of like
> smalltalk, with an emphasis on easy to read code, but fully functional
> with immutable data structures and a powerful type system.
> 
> I find that many people are confused by the very compact syntax of
> existing functional languages such as haskell and ocaml and therefore
> miss out on the big advantages of these languages such as referential
> transparency.

In case you don't insist on purity and can live with dynamic typing, try 
Scheme and Common Lisp, especially Common Lisp or ISLISP.


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
pc56 (3929)
12/2/2007 3:10:36 PM
On Dec 2, 4:09 pm, "michele.simion...@gmail.com"
<michele.simion...@gmail.com> wrote:
> On Dec 2, 4:04 pm, "Rüdiger Klaehn" <r...@lambda-computing.com> wrote:
>
> > Hello everybody,
>
> > I got a question. Is there anything like a verbose functional language
> > that attempts to be easily readable?
>
> As a beginner, I find SML easily readable.
>
Me too. SML is similar to mathematical notation, so if you are
familiar with college level math you will not have any problems with
something like SML.

But there are people that run away screaming when seeing it. And let's
face it: you can write some very difficult to read code in SML that
looks almost as bad as an obfuscated perl contest.

I know quite a few people (mostly engineers) that e.g. prefer visual
basic to C# because of its verbose and "english-like" syntax. These
people are quite intelligent, but nevertheless are quite irritated
even by C-Style languages. So there is no chance at all to get these
people to use something like SML. Which is a shame.
rudi2468 (20)
12/2/2007 4:46:09 PM
Pascal Costanza wrote:
> Rüdiger Klaehn wrote:
> 
>> Hello everybody,
>>
>> I got a question. Is there anything like a verbose functional language
>> that attempts to be easily readable?
>>
>> What I am looking for would be something that looks kind of like
>> smalltalk, with an emphasis on easy to read code, but fully functional
>> with immutable data structures and a powerful type system.
>>
>> I find that many people are confused by the very compact syntax of
>> existing functional languages such as haskell and ocaml and therefore
>> miss out on the big advantages of these languages such as referential
>> transparency.
> 
> 
> In case you don't insist on purity and can live with dynamic typing, try 
> Scheme and Common Lisp, especially Common Lisp or ISLISP.

Ho, yes... Sooo readable. What names like car, cdr or progn are supposed
to mean is pretty obvious, indeed.

12/2/2007 5:09:50 PM
In article <4752e6f3$0$12394$426a74cc@news.free.fr>,
 Bruno Desthuilliers <bdesth.quelquechose@free.quelquepart.fr> wrote:

> Pascal Costanza wrote:
> > Rüdiger Klaehn wrote:
> > 
> >> Hello everybody,
> >>
> >> I got a question. Is there anything like a verbose functional language
> >> that attempts to be easily readable?
> >>
> >> What I am looking for would be something that looks kind of like
> >> smalltalk, with an emphasis on easy to read code, but fully functional
> >> with immutable data structures and a powerful type system.
> >>
> >> I find that many people are confused by the very compact syntax of
> >> existing functional languages such as haskell and ocaml and therefore
> >> miss out on the big advantages of these languages such as referential
> >> transparency.
> > 
> > 
> > In case you don't insist on purity and can live with dynamic typing, try 
> > Scheme and Common Lisp, especially Common Lisp or ISLISP.
> 
> Ho, yes... Sooo readable. What names like car, cdr or progn are supposed 
>   to mean is pretty obvious, indeed.

'Obvious' is something different from being 'readable'.

Obvious is also kind of dangerous, since guessing meaning
from 'obvious' names is sometimes not a good idea.

Once one learns a certain base vocabulary, are texts written in
that vocabulary readable? How large is that base vocabulary? Is the
vocabulary based on words or on other things (like special operators
or funny characters (APL))? How large is the base grammar of the
language? Can you parse the text easily? Are construct groups easy
to identify? Is the combination of constructs explicitly written in
the text?

'Readable' means something different for people with different
knowledge of a language. If one knows only a few words of a language,
readable could mean that the base vocabulary used is small or that
some words can be guessed (though that is not always a good idea).
Readable would then also mean having a simple grammar, few or no
precedence rules, all operations explicitly present in the textual
representation, and so on.

If one knows more about a language and is trained on a large amount
of text, then readable means something different again.

Easy to learn might be another criterion.

-- 
http://lispm.dyndns.org/
joswig8642 (2203)
12/2/2007 5:57:04 PM
Rüdiger Klaehn wrote:
> But there are people that run away screaming when seeing it. And let's
> face it: you can write some very difficult to read code in SML that
> looks almost as bad as an obfuscated perl contest.

This is possible in any FPL, by using higher-order functions.

In most cases, that's because the code is getting more compact than what 
people are used to.

> I know quite a few people (mostly engineers) that e.g. prefer visual
> basic to C# because of its verbose and "english-like" syntax. These
> people are quite intelligent, but nevertheless are quite irritated
> even by C-Style languages. So there is no chance at all to get these
> people to use something like SML. Which is a shame.

Erlang is somewhat verbose, but rather "non-English".

Dunno otherwise. Clean, maybe?

Regards,
Jo
jo427 (1164)
12/2/2007 6:53:56 PM
Bruno Desthuilliers wrote:
> Pascal Costanza wrote:
>> In case you don't insist on purity and can live with dynamic typing, 
>> try Scheme and Common Lisp, especially Common Lisp or ISLISP.
> 
> Ho, yes... Sooo readable. What names like car, cdr or progn are supposed 
>  to mean is pretty obvious, indeed.

CL does use quite explicit naming in standard macro names, and the style 
of the standard library usually pervades third-party code, too, so CL 
programs should indeed be quite "English-like".
And the set of ugly legacy names in Lisp is just a handful or two. 
That's not optimal, but better than most.

I'm not sure that humoring the desire of those people is the right 
approach though. There are a few techniques and conventions to be 
learned for compact FPL code, but that's doable. Somebody really needs 
to write a book about design patterns in FPLs.

Regards,
Jo
jo427 (1164)
12/2/2007 6:57:40 PM
Rüdiger Klaehn wrote:
> I find that many people are confused by the very compact syntax of
> existing functional languages such as haskell and ocaml and therefore
> miss out on the big advantages of these languages such as referential
> transparency.

Are they? I always find it difficult to convince people of the practical
utility of referential transparency. OTOH, few people who see an elegant
compact Haskell expression deny that it is a great advantage to express
things so concisely (that is, after they get it explained and understand
what it means). I wonder how a more verbose syntax could improve on the
readability of e.g.

  map (+1)

for a function which increments all elements in a list.
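For comparison, the same function fully spelled out looks something
like this (a sketch in Python, used here only as a neutral notation
since this thread spans several languages; the function name is
invented):

```python
# The Haskell expression "map (+1)" denotes a function from lists to
# lists. Written out verbosely, it is roughly:
def increment_all(numbers):
    """Return a new list with every element incremented by one."""
    result = []
    for number in numbers:
        result.append(number + 1)
    return result

print(increment_all([1, 2, 3]))  # prints [2, 3, 4]
```

The verbose version names every moving part, at the cost of several
lines for what the compact expression says in a handful of characters.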

Anyway if you are looking for something with a Java-like syntax, take a look
at Scala.

Cheers
Ben
12/2/2007 9:36:49 PM
"Rüdiger Klaehn" <rudi@lambda-computing.com> writes:

> On Dec 2, 4:09 pm, "michele.simion...@gmail.com"
> <michele.simion...@gmail.com> wrote:
>> On Dec 2, 4:04 pm, "Rüdiger Klaehn" <r...@lambda-computing.com> wrote:
>>
>> > Hello everybody,
>>
>> > I got a question. Is there anything like a verbose functional language
>> > that attempts to be easily readable?
>>
>> As a beginner, I find SML easily readable.
>>
> Me too.

Not me.  There are a lot of very good things about SML,
but I find the syntax to be a big stumbling block.
Maybe I'd get used to it if I programmed in SML a lot.

I'm not sure why I don't like SML syntax, but I think:
SML is too terse for my taste.
I have trouble seeing at a glance where syntactic
constructs end.
The parentheses end up in the "wrong" places
(compared to what I'm used to).  This appears to be a
misguided attempt to avoid "too many" parens.
Constructs delimited by keywords get nested inside
constructs delimited by parentheses, which seems
inside-out to me.
Common practice is to use abbreviated names too
much for my taste.

I feel the same way about the syntax of OCaml and Haskell.

>...SML is similar to mathematical notation, so if you are
> familiar with college level math you will not have any problems with
> something like SML.

But programming is not math.  That's why most programming languages
allow multi-character identifiers, whereas in math, we mostly use
single-letter names, perhaps adorned with overbars and squiggles
and whatnot.  In math, if we run out of letters, we start using
greek letters and those squiggles and subscripts and ....

A program of 100,000 lines of code is not unusual, never mind the
programming language.  I have never seen a math formula of 100,000
lines.

> But there are people that run away screaming when seeing it. And let's
> face it: you can write some very difficult to read code in SML that
> looks almost as bad as an obfuscated perl contest.

You can write unreadable junk in any language.  The interesting
"readability" question must assume that the programmer is at least
_trying_ to write readable code.

> I know quite a few people (mostly engineers) that e.g. prefer visual
> basic to C# because of its verbose and "english-like" syntax. These
> people are quite intelligent, but nevertheless are quite irritated
> even by C-Style languages. So there is no chance at all to get these
> people to use something like SML. Which is a shame.

Indeed.  Hence the original poster's question.

- Bob
bobduff (1543)
12/2/2007 9:38:14 PM
Bruno Desthuilliers <bdesth.quelquechose@free.quelquepart.fr> writes:

> Pascal Costanza wrote:
>> In case you don't insist on purity and can live with dynamic typing,
>> try Scheme and Common Lisp, especially Common Lisp or ISLISP.
>
> Ho, yes... Sooo readable. What names like car, cdr or progn are supposed
> to mean is pretty obvious, indeed.

I can memorize the meaning of car and cdr -- that's not a big hindrance
to me (although I think head and tail would be more civilized).

But most Lisp code uses pretty readable (longish) names for things,
which I like.

I'm not a huge fan of "Lots of Incredibly Silly Parentheses",
but the Lisp-style syntax seems far superior to the
syntax of ML-style languages.  At least in Lisp I can tell
where the end of each thing is (perhaps with the help of
a fancy editor, but at worst, by counting parens).
And Lisp syntax has the advantage of being simple.
No need to memorize precedence rules and the like.

- Bob
bobduff (1543)
12/2/2007 9:44:40 PM
In article <fiv8id$479$1@registered.motzarella.org>,
 Ben Franksen <ben.franksen@online.de> wrote:

> Rüdiger Klaehn wrote:
> > I find that many people are confused by the very compact syntax of
> > existing functional languages such as haskell and ocaml and therefore
> > miss out on the big advantages of these languages such as referential
> > transparency.
> 
> Are they? I always find it difficult to convince people of the practical
> utility of referential transparency. OTOH, few people who see an elegant
> compact Haskell expression deny that it is a great advantage to express
> things so concisely (that is, after they get it explained and understand
> what it means). I wonder how a more verbose syntax could improve on the
> readability of e.g.
> 
>   map (+1)
> 
> for a function which increments all elements in a list.

  (lambda (a-list)
     (mapcar (function 1+) a-list))

You explicitly see that it is

* an anonymous function
* takes exactly one argument
* calls MAPCAR with the function (!) 1+ and A-LIST as arguments

No magic going on.

You would even have standard places for documentation strings
and for declarations:

(lambda (a-list)
  "Increment each element of a list by 1. Return a fresh list."
  (declare (type list a-list))
  (mapcar (function 1+) a-list))

> 
> Anyway if you are looking for something with a Java-like syntax, take a look
> at Scala.
> 
> Cheers
> Ben

-- 
http://lispm.dyndns.org/
joswig8642 (2203)
12/2/2007 9:56:26 PM
Rainer Joswig <joswig@lisp.de> writes:

> In article <fiv8id$479$1@registered.motzarella.org>,
>  Ben Franksen <ben.franksen@online.de> wrote:
>
>> Rüdiger Klaehn wrote:
>> > I find that many people are confused by the very compact syntax of
>> > existing functional languages such as haskell and ocaml and therefore
>> > miss out on the big advantages of these languages such as referential
>> > transparency.
>> 
>> Are they? I always find it difficult to convince people of the practical
>> utility of referential transparency. OTOH, few people who see an elegant
>> compact Haskell expression deny that it is a great advantage to express
>> things so concisely (that is, after they get it explained and understand
>> what it means). I wonder how a more verbose syntax could improve on the
>> readability of e.g.
>> 
>>   map (+1)
>> 
>> for a function which increments all elements in a list.
>
>   (lambda (a-list)
>      (mapcar (function 1+) a-list))
>
> You explicitly see that it is
>
> * an anonymous function

Good.  But lambda's are so useful, that I'd like to have a
very concise syntax for it.

The whole point of using a greek letter is to be concise -- so taking a
greek letter and spelling it out in English is pretty silly, IMHO!

> * takes exactly one argument

Good.  But I'd like to see the expected type of a-list,
which would require more syntax.

> * calls MAPCAR with the function (!) 1+ and A-LIST as arguments

Good.

> No magic going on.

Good.

> You would even have standard places for documentation strings
> and for declarations:
>
> (lambda (a-list)
>   "Increment each element of a list by 1. Return a fresh list."

Good, but is that "a list of integers" or "a list of numbers"
or what?

- Bob
bobduff (1543)
12/2/2007 10:20:38 PM
On Dec 2, 10:38 pm, Robert A Duff <bobd...@shell01.TheWorld.com>
wrote:
> >> As a beginner, I find SML easily readable.
>
> > Me too.
>
> Not me.  There are a lot of very good things about SML,
> but I find the syntax to be a big stumbling block.
> Maybe I'd get used to it if I programmed in SML a lot.
>
> I'm not sure why I don't like SML syntax, but I think:
> SML is too terse for my taste.
>
That is kind of the point I was trying to make in the original
posting. I like language features like pattern matching, and I prefer
the syntax over the syntax of C-Style languages.

But sometimes I think that using a keyword instead of a special
character would make things easier to read.

> I have trouble seeing at a glance where syntactic
> constructs end.
> The parentheses end up in the "wrong" places
> (compared to what I'm used to).  This appears to be a
> misguided attempt to avoid "too many" parens.
> Constructs delimited by keywords get nested inside
> constructs delimited by parentheses, which seems
> inside-out to me.
> Common practice is to use abbreviated names too
> much for my taste.
>
That is certainly true.

> I feel the same way about the syntax of OCaml and Haskell.
>
> >...SML is similar to mathematical notation, so if you are
> > familiar with college level math you will not have any problems with
> > something like SML.
>
> But programming is not math.  That's why most programming languages
> allow multi-character identifiers, whereas in math, we mostly use
> single-letter names, perhaps adorned with overbars and squiggles
> and whatnot.  In math, if we run out of letters, we start using
> greek letters and those squiggles and subscripts and ....
>
I guess that is partly because programming languages have had to use
ASCII characters. Sometimes I would love to use greek characters and
subscripts in programs.

> A program of 100,000 lines of code is not unusual, never mind the
> programming language.  I have never seen a math formula of 100,000
> lines.
>
I have never seen a program with 100000 lines of code in the same
scope either. So the comparison is not completely fair. I have to
agree that I prefer long, descriptive identifiers though.

> > But there are people that run away screaming when seeing it. And let's
> > face it: you can write some very difficult to read code in SML that
> > looks almost as bad as an obfuscated perl contest.
>
> You can write unreadable junk in any language.  The interesting
> "readability" question must assume that the programmer is at least
> _trying_ to write readable code.
>
I was referring to some examples in books I saw. The authors probably
thought they were really clever, but for me it was kind of hard to
follow.
rudi2468 (20)
12/2/2007 10:41:31 PM
In article <wccve7g68nd.fsf@shell01.TheWorld.com>,
 Robert A Duff <bobduff@shell01.TheWorld.com> wrote:

> Rainer Joswig <joswig@lisp.de> writes:
> 
> > In article <fiv8id$479$1@registered.motzarella.org>,
> >  Ben Franksen <ben.franksen@online.de> wrote:
> >
> >> Rüdiger Klaehn wrote:
> >> > I find that many people are confused by the very compact syntax of
> >> > existing functional languages such as haskell and ocaml and therefore
> >> > miss out on the big advantages of these languages such as referential
> >> > transparency.
> >> 
> >> Are they? I always find it difficult to convince people of the practical
> >> utility of referential transparency. OTOH, few people who see an elegant
> >> compact Haskell expression deny that it is a great advantage to express
> >> things so concisely (that is, after they get it explained and understand
> >> what it means). I wonder how a more verbose syntax could improve on the
> >> readability of e.g.
> >> 
> >>   map (+1)
> >> 
> >> for a function which increments all elements in a list.
> >
> >   (lambda (a-list)
> >      (mapcar (function 1+) a-list))
> >
> > You explicitly see that it is
> >
> > * an anonymous function
> 
> Good.  But lambda's are so useful, that I'd like to have a
> very concise syntax for it.
> 
> The whole point of using a greek letter is to be concise -- so taking a
> greek letter and spelling it out in English is pretty silly, IMHO!

So is English. There are a lot of English words with Greek letters
spelled out. If you don't like it, use some other word.
Some people have used the Greek letter lambda in source code,
but that is rare and requires a lambda in the character
set and on the keyboard to be useful. I think the letter lambda
had been used to denote functions before Lisp, so the
benefit was relating to an existing concept while spelling
out the Greek letter.

> > * takes exactly one argument
> 
> Good.  But I'd like to see the expected type of a-list,
> which would require more syntax.

There was an example below.

> 
> > * calls MAPCAR with the function (!) 1+ and A-LIST as arguments
> 
> Good.
> 
> > No magic going on.
> 
> Good.
> 
> > You would even have standard places for documentation strings
> > and for declarations:
> >
> > (lambda (a-list)
> >   "Increment each element of a list by 1. Return a fresh list."
> 
> Good, but is that "a list of integers" or "a list of numbers"
> or what?

Usually numbers. But whatever you have defined 1+ to do.

> 
> - Bob

-- 
http://lispm.dyndns.org/
joswig8642 (2203)
12/2/2007 10:43:45 PM
On Dec 2, 10:36 pm, Ben Franksen <ben.frank...@online.de> wrote:
> Rüdiger Klaehn wrote:
> > I find that many people are confused by the very compact syntax of
> > existing functional languages such as haskell and ocaml and therefore
> > miss out on the big advantages of these languages such as referential
> > transparency.
>
> Are they? I always find it difficult to convince people of the practical
> utility of referential transparency.
>
When you are working on multi-threaded code, the utility of
referential transparency is huge. And almost every serious programmer
will have to work on multithreaded code in the next 5 years, given
that the number of cores on a cpu will grow exponentially for the
foreseeable future.

> OTOH, few people who see an elegant
> compact Haskell expression deny that it is a great advantage to express
> things so concisely (that is, after they get it explained and understand
> what it means). I wonder how a more verbose syntax could improve on the
> readability of e.g.
>
>   map (+1)
>
> for a function which increments all elements in a list.
>
> Anyway if you are looking for something with a Java-like syntax, take a look
> at Scala.
>
I like scala. But I think one should be able to do even better.
Besides, I fear that scala will stay an academic language that is
constantly changing.
rudi2468 (20)
12/2/2007 10:48:43 PM
Hi Robert,

>>>>> "Robert" == Robert A Duff <bobduff@shell01.TheWorld.com> writes:

    Robert> I can memorize the meaning of car and cdr -- that's not a big hindrance
    Robert> to me (although I think head and tail would be more civilized).

Well, then what about first and rest? Is that more to your liking?
It is part of CL after all.

'Andreas
-- 
Wherever I lay my .emacs, there's my $HOME.
0
12/2/2007 11:18:36 PM
Robert A Duff <bobduff@shell01.TheWorld.com> wrote:
> Rainer Joswig <joswig@lisp.de> writes:
>
>> In article <fiv8id$479$1@registered.motzarella.org>,
>>  Ben Franksen <ben.franksen@online.de> wrote:
>>
>>> [...] I wonder how a more verbose syntax could improve on the
>>> readability of e.g.
>>>
>>>   map (+1)
>>>
>>> for a function which increments all elements in a list.
>>
>>   (lambda (a-list)
>>      (mapcar (function 1+) a-list))
>>
>> You explicitly see that it is
>>
>> * an anonymous function
>
> Good.  But lambda's are so useful, that I'd like to have a
> very concise syntax for it.
>
> The whole point of using a greek letter is to be concise -- so taking a
> greek letter and spelling it out in English is pretty silly, IMHO!

How would you like:

    {list|:ok list every {element|:ok element + 1}}

or - in a failsafe version:

    {list|:try list every {element|:try element + 1}}

or - excluding non-incrementable elements:

    {list|:ok list each {element|:try element + 1}}

or - passing non-incrementable elements unchanged:

    {list|:ok list every {element|:ok element + 1 or: element}}

or - isolating the increment function:

    ;increment {element|:ok element + 1};
    {list|:ok list every (increment)}

or - also isolating the map concept:

    ;increment {element|:ok element + 1};
    ;map {function|:ok {list|:ok list every (function)}};
    (map) (increment)

(It's my own still unpublished PILS language - I need a Linux/GTK
version before I go public...)


12/3/2007 12:34:53 AM
Robert A Duff wrote:
> "Rüdiger Klaehn" <rudi@lambda-computing.com> writes:
> 
>> On Dec 2, 4:09 pm, "michele.simion...@gmail.com"
>> <michele.simion...@gmail.com> wrote:
>>...SML is similar to mathematical notation, so if you are
>> familiar with college level math you will not have any problems with
>> something like SML.
> 
> But programming is not math.

I know of a famous CS guru, known by the name of E.W.Dijkstra, who violently
disagrees ;-) To quote:

"Programming = Mathematics + Murphy's Law"

Cheers
Ben
12/3/2007 1:37:03 AM
Rüdiger Klaehn wrote:
> On Dec 2, 10:36 pm, Ben Franksen <ben.frank...@online.de> wrote:
>> Rüdiger Klaehn wrote:
>> > I find that many people are confused by the very compact syntax of
>> > existing functional languages such as haskell and ocaml and therefore
>> > miss out on the big advantages of these languages such as referential
>> > transparency.
>>
>> Are they? I always find it difficult to convince people of the practical
>> utility of referential transparency.
>>
> When you are working on multi-threaded code, the utility of
> referential transparency is huge. And almost every serious programmer
> will have to work on multithreaded code in the next 5 years, given
> that the number of cores on a cpu will grow exponentially for the
> foreseeable future.

I work with people who do multithreaded programming the whole day and I
speak from experience when I tell you that such people are not easily
convinced.

BTW, the presence of multiple cores is (IMO) not necessarily best exploited
by writing multi-threaded code (threads = processes that communicate via
shared memory) but rather by annotating pure code with (semantically
transparent) hints for the compiler. However, maybe that was what you
meant, and internally the run-time system of course uses threads to
parallelize computations.
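
To sketch the underlying point (Python for illustration only; the
names are invented, and the explicit pool is a stand-in for the
compiler hints mentioned above): because a pure function has no side
effects, evaluation order is irrelevant, so swapping a sequential map
for a parallel one changes performance, not meaning.

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    # Pure: the result depends only on the argument, so elements may
    # be evaluated in any order, or concurrently, without changing it.
    return n * n

def squares_sequential(numbers):
    return list(map(square, numbers))

def squares_parallel(numbers):
    # Same semantics as the sequential version; only the evaluation
    # strategy differs. (A thread pool keeps the sketch self-contained;
    # a parallel runtime would spread the work over multiple cores.)
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(square, numbers))

# Both produce identical results, by purity:
assert squares_sequential(range(10)) == squares_parallel(range(10))
```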

Cheers
Ben
12/3/2007 1:54:00 AM

On Mon, 3 Dec 2007, Ben Franksen wrote:

> > But programming is not math.
> 
> I know of a famous CS guru, known by the name of E.W.Dijkstra, who violently
> disagrees ;-) To quote:
> 
> "Programming = Mathematics + Murphy's Law"

Hmm, so if you say
"It is not true that programming is not math"
and
"Programming = Mathematics + Murphy's Law"

we get

Murphy's Law = 0

whatever that means :)

- Ville Oikarinen
ville1 (63)
12/3/2007 7:53:34 AM
On Dec 3, 2:54 am, Ben Franksen <ben.frank...@online.de> wrote:
> Rüdiger Klaehn wrote:
[snip]
> > When you are working on multi-threaded code, the utility of
> > referential transparency is huge. And almost every serious programmer
> > will have to work on multithreaded code in the next 5 years, given
> > that the number of cores on a cpu will grow exponentially for the
> > foreseeable future.
>
> I work with people who do multithreaded programming the whole day and I
> speak from experience when I tell you that such people are not easily
> convinced.
>
I know. There are a lot of people that think that they can get a
traditional multi-threaded system using explicit synchronization via
monitors to work. But only a few of them really can.

I can write a correct program using fine grained locking when I
concentrate. But during normal everyday development, when the phone is
ringing every 30 minutes? No way.

> BTW, the presence of multiple cores is (IMO) not necessarily best exploited
> by writing multi-threaded code (threads = processes that communicate via
> shared memory) but rather by annotating pure code with (semantically
> transparent) hints for the compiler. However, maybe that was what you
> meant, and internally the run-time system of course uses threads to
> parallelize computations.
>
Indeed, that is what I meant. In the project I am currently involved
with, we try to avoid shared mutable state whenever possible.
Everything that is shared over multiple threads is immutable, and
everything that is mutable stays on one thread.

There is a mechanism to move state from one thread to another, but
other than that the threads are completely independent. They are
almost like processes, except that they share immutable data for
efficiency.
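
That design can be sketched in a few lines (Python purely for
illustration; all names are invented):

```python
# Immutable data may be shared freely between threads; anything
# mutable stays on one thread, and queues are the only mechanism
# for moving values across thread boundaries.
import queue
import threading

SHARED_CONFIG = ("read-only", "tuple", "safe to share")  # immutable

def worker(inbox, outbox):
    local_state = []                # mutable, never leaves this thread
    local_state.append(inbox.get())
    outbox.put(tuple(local_state))  # hand over an immutable snapshot

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
inbox.put(len(SHARED_CONFIG))       # reading shared immutable data is safe
result = outbox.get()
t.join()
print(result)  # prints (3,)
```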
rudi2468 (20)
12/3/2007 10:31:37 AM
On Dec 2, 5:09 pm, Bruno Desthuilliers
<bdesth.quelquech...@free.quelquepart.fr> wrote:

>
> Ho, yes... Sooo readable. What names like car, cdr or progn are supposed
>   to mean is pretty obvious, indeed.

but you can always write a small helper function in Scheme:

(define (head x)
   (car x))

Okay, it is not very safe and assumes a list is being passed. But
anyway, that is what I often do in my Scheme code.

And if you settle down with something like Scheme and its srfi-1
list facilities, you get almost all the bells and whistles of nice
'functional list based programming'.

I am now waiting for the arrival of the guy who says: "Scheme and
Common Lisp is unreadable... you get lost in all the (((((())))))".

Btw: Clean, though a pure tragedy that never took off in any
community, would be a first-class 'genuine' functional programming
language for beginners. It has clean syntax.
klohmuschel (196)
12/3/2007 12:08:54 PM
klohmuschel@yahoo.de wrote:
> On Dec 2, 5:09 pm, Bruno Desthuilliers
> <bdesth.quelquech...@free.quelquepart.fr> wrote:
> 
> 
>>Ho, yes... Sooo readable. What names like car, cdr or progn are supposed
>>  to mean is pretty obvious, indeed.
> 
> 
> but you can always write a small helper function in Scheme:
> 
> (define (head x)
>    (car x))
> 
> Okay it is not very safe and assumes a list is being passed. But
> anyway that is what I often do in my Scheme code.

The problem being that - extra function call overhead set aside - it's 
*not* the standard. So when reading third-party material (either source 
code, tutorials etc), you still have to remember all those somewhat 
alien names...

(snip)

> I am now waiting on the arrival of the guy who tells: "Scheme and
> Common Lisp ist unreadable .. you guess all the (((((()))))))

That has never been a problem to me - my editor (emacs) does a good job 
wrt parentheses !-)
12/3/2007 1:13:23 PM
Rainer Joswig wrote:
> In article <4752e6f3$0$12394$426a74cc@news.free.fr>,
>  Bruno Desthuilliers <bdesth.quelquechose@free.quelquepart.fr> wrote:
> 
> 
>>Pascal Costanza wrote:
>>
>>>Rüdiger Klaehn wrote:
>>>
>>>>Hello everybody,
>>>>
>>>>I got a question. Is there anything like a verbose functional language
>>>>that attempts to be easily readable?
>>>>
>>>>What I am looking for would be something that looks kind of like
>>>>smalltalk, with an emphasis on easy to read code, but fully functional
>>>>with immutable data structures and a powerful type system.
>>>>
>>>>I find that many people are confused by the very compact syntax of
>>>>existing functional languages such as haskell and ocaml and therefore
>>>>miss out on the big advantages of these languages such as referential
>>>>transparency.
>>>
>>>
>>>In case you don't insist on purity and can live with dynamic typing, try 
>>>Scheme and Common Lisp, especially Common Lisp or ISLISP.
>>
>>Ho, yes... Sooo readable. What names like car, cdr or progn are supposed 
>>  to mean is pretty obvious, indeed.
> 
> 
> 'Obvious' is something different from being 'readable'.

From my experience, it may greatly help.

> Obvious is also kind of dangerous, since guessing meaning
> from 'obvious' names is sometimes not a good idea.

When I started learning Python some years ago, one of the first things 
that amazed me was that, 9 times out of 10, the feature I was looking 
for had one of the most obvious names to me - it was almost a game : 
writing the code *without* looking at the doc, and see if it worked. And 
quite a lot of times, it *did* work.

As you may have guessed, my experience with Common Lisp has been quite 
different...

In the context of the OP's question, I'd say that there's a strong 
relationship between "obvious" and "readable", i.e. you don't have to 
learn a whole brand-new vocabulary for well-known concepts.

(snip otherwise sensible considerations)
0
12/3/2007 1:21:01 PM
In article <47540108$0$26725$426a74cc@news.free.fr>,
 Bruno Desthuilliers <bdesth.quelquechose@free.quelquepart.fr> wrote:

> klohmuschel@yahoo.de a écrit :
> > On Dec 2, 5:09 pm, Bruno Desthuilliers
> > <bdesth.quelquech...@free.quelquepart.fr> wrote:
> > 
> > 
> >>Ho, yes... Sooo readable. What names like car, cdr or progn are supposed
> >>  to mean is pretty obvious, indeed.
> > 
> > 
> > but you can always write a small helper function in Scheme:
> > 
> > (define (head x)
> >    (car x))
> > 
> > Okay it is not very safe and assumes a list is being passed. But
> > anyway that is what I often do in my Scheme code.
> 
> The problem being that - extra function call overhead set aside

(define head car)

> - it's 
> *not* the standard. So when reading third-part material (either source 
> code, tutorials etc), you still have to remember all those somewhat 
> alien names...

True. I prefer a standard vocabulary. But if the user group
is large enough or the code base is large enough,
having an optimized vocabulary might be useful.

> 
> (snip)
> 
> > I am now waiting on the arrival of the guy who tells: "Scheme and
> > Common Lisp ist unreadable .. you guess all the (((((()))))))
> 
> That has never been a problem to me - my editor (emacs) does a good job 
> wrt/ parenthesis !-)

-- 
http://lispm.dyndns.org/
0
joswig8642 (2203)
12/3/2007 1:23:32 PM
klohmuschel@yahoo.de writes:
> On Dec 2, 5:09 pm, Bruno Desthuilliers
> <bdesth.quelquech...@free.quelquepart.fr> wrote:
>> Ho, yes... Sooo readable. What names like car, cdr or progn are
>>   supposed to mean is pretty obvious, indeed.
>
> but you can always write a small helper function in Scheme:

     Or just use 'first' and 'rest' which are already in Common Lisp.

> I am now waiting on the arrival of the guy who tells: "Scheme and
> Common Lisp ist unreadable .. you guess all the (((((()))))))

     If you can still see the parentheses, you need to write a little
more Lisp.  ;-)

Regards,

Patrick

------------------------------------------------------------------------
S P Engineering, Inc.  | Large scale, mission-critical, distributed OO
                       | systems design and implementation.
          pjm@spe.com  | (C++, Java, Common Lisp, Jini, middleware, SOA)
0
pjm (703)
12/3/2007 1:56:51 PM
In article 
<0b6ecbee-34eb-45c1-85b8-40934b84500c@o42g2000hsc.googlegroups.com>,
 "Rüdiger Klaehn" <rudi@lambda-computing.com> wrote:

> On Dec 2, 10:38 pm, Robert A Duff <bobd...@shell01.TheWorld.com>
> wrote:

> > You can write unreadable junk in any language.  The interesting
> > "readability" question must assume that the programmer is at least
> > _trying_ to write readable code.

> I was referring to some examples in books I saw. The authors probably
> thought they were really clever, but for me it was kind of hard to
> follow.

I have this problem with Haskell, sometimes.  It isn't the
syntax, I believe, it's the extreme degree of abstraction.
It's a virtue of the language, that can be over-exercised.

I catch myself doing the same - paring down some code until
nothing remains that isn't essential.  The problem with this
is that those inessential parameters and whatnot may carry
some information, cues that remind the reader why this function
exists.

I expect that with more exposure to this I'd grow less dependent
on such cues, but in the end I do think it adds up to a loss
of readability.  I don't know if programs are mathematical, but
I am fairly sure people aren't.

   Donn Cave, donn@u.washington.edu
0
Donn
12/3/2007 6:26:52 PM
On Sun, 02 Dec 2007 17:20:38 -0500, Robert A Duff
<bobduff@shell01.TheWorld.com> wrote:

>Rainer Joswig <joswig@lisp.de> writes:
>
>>   (lambda (a-list)
>>      (mapcar (function 1+) a-list))
>>
>> You explicitly see that it is
>>
>> * an anonymous function
>
>Good.  But lambda's are so useful, that I'd like to have a
>very concise syntax for it.
>
>The whole point of using a greek letter is to be concise -- so taking a
>greek letter and spelling it out in English is pretty silly, IMHO!

Well, I don't see a lambda on my keyboard - or any other Greek letters
for that matter.  But all you need is an editor that supports them.
PLT Scheme has a graphic editor that inserts the lambda symbol in the
code when you press "ctrl-\" and expands it to "lambda" internally.


>> * takes exactly one argument
>
>Good.  But I'd like to see the expected type of a-list,
>which would require more syntax.

Lists are generic - you want to know the type of the elements.  But
what if the list is heterogeneous and the function to be mapped over
it is selective?  Or have you never done that?


George
--
for email reply remove "/" from address
0
George
12/4/2007 6:06:15 AM
Rüdiger Klaehn wrote:
> Hello everybody,
> 
> I got a question. Is there anything like a verbose functional language
> that attempts to be easily readable?
> 
> What I am looking for would be something that looks kind of like
> smalltalk, with an emphasis on easy to read code, but fully functional
> with immutable data structures and a powerful type system.

Erlang is verbose, compared to e.g. Haskell. Most find it readable.
It has strong dynamic typing, in the sense that types cannot be
subverted, but no compile-time type checking to speak of.

BR,
Ulf W
0
ulf.wiger (50)
12/4/2007 8:58:11 AM
Rüdiger Klaehn wrote:
> Hello everybody,
> 
> I got a question. Is there anything like a verbose functional language
> that attempts to be easily readable?

No language attempts to be unreadable so you're really only asking for
verbose FPLs, of which there are many. Look at Scala and Lisp, for example.

> I find that many people are confused by the very compact syntax of
> existing functional languages such as haskell and ocaml and therefore
> miss out on the big advantages of these languages such as referential
> transparency.

The value-add of Haskell and OCaml (and F#) is primarily their powerful
static type systems.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/4/2007 9:14:21 AM
On Dec 3, 7:13 am, Bruno Desthuilliers
<bdesth.quelquech...@free.quelquepart.fr> wrote:
> > but you can always write a small helper function in Scheme:
>
> > (define (head x)
> >    (car x))
>
> > Okay it is not very safe and assumes a list is being passed. But
> > anyway that is what I often do in my Scheme code.
>
> The problem being that - extra function call overhead set aside - it's
> *not* the standard. So when reading third-part material (either source
> code, tutorials etc), you still have to remember all those somewhat
> alien names...

It is not that alien:

(define head car)
(define tail cdr)

or

(define first car)
(define rest cdr)
0
grettke (458)
12/4/2007 11:50:31 AM
On Dec 3, 7:21 am, Bruno Desthuilliers
<bdesth.quelquech...@free.quelquepart.fr> wrote:
> When I started learning Python some years ago, one of the first things
> that amazed me was that, 9 times out of 10, the feature I was looking
> for had one of the most obvious names to me

Have you ever looked at Ruby and at "Matz's law of least
surprise"?

Everything is guaranteed to just "intuitively" "make sense".... at
least to him! :)
0
grettke (458)
12/4/2007 11:51:36 AM
On Dec 3, 1:13 pm, Bruno Desthuilliers
<bdesth.quelquech...@free.quelquepart.fr> wrote:
> klohmusc...@yahoo.de a écrit :
>
> > On Dec 2, 5:09 pm, Bruno Desthuilliers
> > <bdesth.quelquech...@free.quelquepart.fr> wrote:
>
> >>Ho, yes... Sooo readable. What names like car, cdr or progn are supposed
> >>  to mean is pretty obvious, indeed.
>
> > but you can always write a small helper function in Scheme:
>
> > (define (head x)
> >    (car x))
>
> > Okay it is not very safe and assumes a list is being passed. But
> > anyway that is what I often do in my Scheme code.
>
> The problem being that - extra function call overhead set aside - it's
> *not* the standard. So when reading third-part material (either source
> code, tutorials etc), you still have to remember all those somewhat
> alien names...
>
> (snip)
>
> > I am now waiting on the arrival of the guy who tells: "Scheme and
> > Common Lisp ist unreadable .. you guess all the (((((()))))))
>
> That has never been a problem to me - my editor (emacs) does a good job
> wrt/ parenthesis !-)


But the same argument holds for all the other languages too. I mean,
sure, Python will give you some easy-to-understand basic meanings, but
once you are deep into the object-oriented programming battle in
Python you are still left with something that often only the creator
understood and knew how to interpret. Not to speak of C++.
0
klohmuschel (196)
12/4/2007 12:16:00 PM
Bruno Desthuilliers wrote:
> Pascal Costanza a écrit :
>> Rüdiger Klaehn wrote:
>>
>>> Hello everybody,
>>>
>>> I got a question. Is there anything like a verbose functional language
>>> that attempts to be easily readable?
>>>
>>> What I am looking for would be something that looks kind of like
>>> smalltalk, with an emphasis on easy to read code, but fully functional
>>> with immutable data structures and a powerful type system.
>>>
>>> I find that many people are confused by the very compact syntax of
>>> existing functional languages such as haskell and ocaml and therefore
>>> miss out on the big advantages of these languages such as referential
>>> transparency.
>>
>> In case you don't insist on purity and can live with dynamic typing, 
>> try Scheme and Common Lisp, especially Common Lisp or ISLISP.
> 
> Ho, yes... Sooo readable. What names like car, cdr or progn are supposed 
>  to mean is pretty obvious, indeed.

Maybe it's necessary to spell this out:

There is no such thing as a 'natural' syntax. Every programming language 
syntax needs practice, and once you get used to one (of the good ones), 
you can probably work with pretty much everything.

Syntax is also a personal choice (as is programming style). It varies 
how much time you're willing to invest to learn a new syntax, and it 
also varies how much each one can get used to a new syntax (or a new 
programming style).

It may very well be that what works very well for one person may not 
work for another person at all (and this can depend on a lot of factors, 
including problem domain, overall goals, etc.). [That's why claims about 
single languages being the optimal choice for everyone, as for example 
suggested by some gravitationally challenged amphibians, are highly 
questionable.]

The OP asked for a functional language whose syntax is closer to 
Smalltalk. I still believe that Scheme and Common Lisp fulfill that 
criterion more than other functional languages, for a broad 
interpretation of that term.


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
0
pc56 (3929)
12/4/2007 12:27:08 PM
On Dec 4, 10:14 am, Jon Harrop <use...@jdh30.plus.com> wrote:
[snip]
> No language attempts to be unreadable so you're really only asking for
> verbose FPLs, of which there are many. Look at Scala and Lisp, for example.
>
There are various languages that are designed more for compactness
than for easy reading. Or would you say that e.g. Perl attempts to be
easily readable?

> > I find that many people are confused by the very compact syntax of
> > existing functional languages such as haskell and ocaml and therefore
> > miss out on the big advantages of these languages such as referential
> > transparency.
>
> The value-add of Haskell and OCaml (and F#) is primarily their powerful
> static type systems.
>
True. The type system is a big plus. But for me another huge advantage
is that it is almost trivial to use multiple cores using functional
code.
0
rudi2468 (20)
12/4/2007 4:09:55 PM
Rüdiger Klaehn wrote:
> On Dec 4, 10:14 am, Jon Harrop <use...@jdh30.plus.com> wrote:
>> No language attempts to be unreadable so you're really only asking for
>> verbose FPLs, of which there are many. Look at Scala and Lisp, for
>> example.
>
> There are various languages that are designed more for compactness
> than for easy reading.

I would say there are two trade-offs between brevity and clarity. If the
syntax is too terse then it becomes less readable but also if it is too
verbose then it becomes less readable. Experience lets you handle extreme
brevity but you cannot control verbosity.

> Or would you say that e.g. Perl attempts to be easily readable?

For its purpose it does, yes. How readable is this Perl regexp for an
identifier:

  [a-zA-Z][a-zA-Z0-9]*

compared to the parser combinator equivalent from an FPL:

  let digit c = '0' <= c && c <= '9';;
  let alpha c = 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z';;
  let alphanum c = digit c || alpha c;;
  let rawident = some alpha ++ several alphanum >| (IDENT << collect)

The latter is certainly more verbose but is it really more readable? I would
say that the latter is so unreadable that it drove people to build regular
expression engines for almost all FPLs.

The same trade-off crops up everywhere:

Mathematica: {1, 2} /. {a_, b_} -> {b, a}
F#:          (1, 2) |> fun (a, b) -> b, a
Lisp:        ((lambda (pair) (cons (cdr pair) (car pair))) (cons 1 2))

The last one is more verbose but is it really more readable? Lisp's car and
cdr are no match for the pattern matchers found in all modern FPLs.
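To make the brevity/verbosity trade-off concrete outside OCaml, here is a rough Python sketch of the same idea: hand-rolled parser combinators recognising `[a-zA-Z][a-zA-Z0-9]*`. The `satisfy`, `seq` and `many` names are mine, not from any library, and this is a minimal illustration rather than a serious parsing toolkit:

```python
def satisfy(pred):
    """Parser matching one character for which pred holds."""
    def parse(s, i):
        if i < len(s) and pred(s[i]):
            return s[i], i + 1
        return None  # no match at position i
    return parse

def seq(p, q):
    """p followed by q, concatenating their results."""
    def parse(s, i):
        r = p(s, i)
        if r is None:
            return None
        v1, i = r
        r = q(s, i)
        if r is None:
            return None
        v2, i = r
        return v1 + v2, i
    return parse

def many(p):
    """Zero or more repetitions of p."""
    def parse(s, i):
        out = ""
        r = p(s, i)
        while r is not None:
            v, i = r
            out += v
            r = p(s, i)
        return out, i
    return parse

LETTERS = set("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
alpha = satisfy(lambda c: c in LETTERS)
alphanum = satisfy(lambda c: c in LETTERS or c.isdigit())
ident = seq(alpha, many(alphanum))  # [a-zA-Z][a-zA-Z0-9]*

print(ident("foo42 bar", 0))  # ('foo42', 5)
print(ident("1abc", 0))       # None
```

The regexp is one line; even this pared-down combinator version is dozens, which is exactly the point above about verbosity not implying readability.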

>> > I find that many people are confused by the very compact syntax of
>> > existing functional languages such as haskell and ocaml and therefore
>> > miss out on the big advantages of these languages such as referential
>> > transparency.
>>
>> The value-add of Haskell and OCaml (and F#) is primarily their powerful
>> static type systems.
>
> True. The type system is a big plus. But for me another huge advantage
> is that it is almost trivial to use multiple cores using functional
> code.

The Haskell community have been making a lot of noise about their new
parallel stuff lately but all I've seen is a Fibonacci number generator that
was very slow and only used one of my two cores, i.e. is broken. There is
more use of parallelism in the OCaml world (particularly in industry) but I
would not call it easy. F# makes it easy but its implementation means
you'll need to run >5 cores flat out to beat a single core running OCaml.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/4/2007 4:57:06 PM
On Dec 4, 4:57 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
> Rüdiger Klaehn wrote:
> > On Dec 4, 10:14 am, Jon Harrop <use...@jdh30.plus.com> wrote:
> >> No language attempts to be unreadable so you're really only asking for
> >> verbose FPLs, of which there are many. Look at Scala and Lisp, for
> >> example.
>
> > There are various languages that are designed more for compactness
> > than for easy reading.
>
> I would say there are two trade-offs between brevity and clarity. If the
> syntax is too terse then it becomes less readable but also if it is too
> verbose then it becomes less readable. Experience lets you handle extreme
> brevity but you cannot control verbosity.
>
> > Or would you say that e.g. Perl attempts to be easily readable?
>
> For its purpose it does, yet. How readable is this Perl regexp for an
> identifier:
>
>   [a-zA-Z][a-zA-Z0-9]*
>
> compared to the parser combinator equivalent from an FPL:
>
>   let digit c = '0' <= c && c <= '9';;
>   let alpha c = 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z';;
>   let alphanum c = digit c || alpha c;;
>   let rawident = some alpha ++ several alphanum >| (IDENT << collect)
>
> The latter is certainly more verbose but is it really more readable? I would
> say that the latter is so unreadable that it drove people to build regular
> expression engines for almost all FPLs.
>
> The same trade-off crops up everywhere:
>
> Mathematica: {1, 2} /. {a_, b_} -> {b, a}
> F#:          (1, 2) |> fun (a, b) -> b, a
> Lisp:        ((lambda (pair) (cons (cdr pair) (car pair))) (cons 1 2))
>
> The last one is more verbose but is it really more readable? Lisp's car and
> cdr are no match for the pattern matchers found in all modern FPLs.
>
> >> > I find that many people are confused by the very compact syntax of
> >> > existing functional languages such as haskell and ocaml and therefore
> >> > miss out on the big advantages of these languages such as referential
> >> > transparency.
> >>
> >> The value-add of Haskell and OCaml (and F#) is primarily their powerful
> >> static type systems.
>
> > True. The type system is a big plus. But for me another huge advantage
> > is that it is almost trivial to use multiple cores using functional
> > code.
>
> The Haskell community have been making a lot of noise about their new
> parallel stuff lately but all I've seen is a Fibonacci number generator that
> was very slow and only used one of my two cores, i.e. is broken. There is
> more use of parallelism in the OCaml world (particularly in industry) but I
> would not call it easy. F# makes it easy but its implementation means
> you'll need to run >5 cores flat out to beat a single core running OCaml.


Hello Jon: most of the time I am not agreeing with you. However, I
always find your reasoning informative from some point of view.

But why the hell do you insist on writing code always in some
weird kind of one-liner?

I mean why not (I do not have an Emacs editor right now):
==
 ((lambda (pair)
      (cons (cdr pair) (car pair)))
   (cons 1 2))
==

You often happen to post one-liners (even in your favorite language
OCaml). I do not understand why it is so important for you to
write code as compactly as possible when there is no need to.
0
klohmuschel (196)
12/4/2007 5:40:51 PM
klohmuschel@yahoo.de wrote:
> hello JON: most of the time i am not agreeing with you. however, i
> find your motivation always informative from some point of view.

Thank you. I'm sure we'll all agree that this is a hopelessly subjective
discussion at any rate. :-)

> But why the hell do you insisting on writing code always in some
> weirad kind  of a 1-liner?
> 
> I mean why not (I do not have an emacs editor right now):
> ==
>  ((lambda (pair)
>       (cons (cdr pair) (car pair)))
>    (cons 1 2))
> ==
> 
> You often happen to post 1-liners (even in you favorite language
> OCaml). I do not understand you why it is being important for you to
> write code as compact as possible when there is no need for?

Simply because it is idiomatic in OCaml. Moreover, this is the way OCaml
itself prints OCaml code:

$ cat >foo.ml
let ( |> ) x f = f x;;
(1, 2) |> (fun (a, b) -> b, a);;

$ camlp4of foo.ml
let ( |> ) x f = f x
let _ = (1, 2) |> (fun (a, b) -> (b, a))

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/4/2007 6:04:39 PM
On Dec 4, 6:04 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
> klohmusc...@yahoo.de wrote:
> > hello JON: most of the time i am not agreeing with you. however, i
> > find your motivation always informative from some point of view.
>
> Thank you. I'm sure we'll all agree that this is a hopelessly subjective
> discussion at any rate. :-)
>
> > But why the hell do you insisting on writing code always in some
> > weirad kind  of a 1-liner?
>
> > I mean why not (I do not have an emacs editor right now):
> > ==
> >  ((lambda (pair)
> >       (cons (cdr pair) (car pair)))
> >    (cons 1 2))
> > ==
>
> > You often happen to post 1-liners (even in you favorite language
> > OCaml). I do not understand you why it is being important for you to
> > write code as compact as possible when there is no need for?
>
> Simply because it is idiomatic in OCaml. Moreover, this is the way OCaml
> itself prints OCaml code:
>
> $ cat >foo.ml
> let ( |> ) x f = f x;;
> (1, 2) |> (fun (a, b) -> b, a);;
>
> $ camlp4of foo.ml
> let ( |> ) x f = f x
> let _ = (1, 2) |> (fun (a, b) -> (b, a))
>

But why do you want to force Scheme or Common Lisp style into OCaml style?
This does not make sense to me.
0
klohmuschel (196)
12/4/2007 6:22:05 PM
klohmuschel@yahoo.de wrote:
> On Dec 4, 6:04 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
>> Simply because it is idiomatic in OCaml...
> 
> But why do want to force Scheme or Common Lisp stype into OCaml style?
> This does not make sense to me.

Sure. We can draw the same comparison with the Lisp code spread across
several lines but I do not think it changes the outcome:

Mathematica: {1, 2} /. {a_, b_} -> {b, a}

F#:          (1, 2) |> fun (a, b) -> b, a

Lisp:        ((lambda (pair)
                  (cons (cdr pair) (car pair)))
               (cons 1 2))

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/4/2007 6:25:19 PM
On Dec 4, 6:25 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
> klohmusc...@yahoo.de wrote:
> > On Dec 4, 6:04 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
> >> Simply because it is idiomatic in OCaml...
>
> > But why do want to force Scheme or Common Lisp stype into OCaml style?
> > This does not make sense to me.
>
> Sure. We can draw the same comparison with the Lisp code spread across
> several lines but I do not think it changes the outcome:
>
> Mathematica: {1, 2} /. {a_, b_} -> {b, a}
>
> F#:          (1, 2) |> fun (a, b) -> b, a
>
> Lisp:        ((lambda (pair)
>                   (cons (cdr pair) (car pair)))
>                (cons 1 2))



Now the Lisp code is readable at least. Whether it is understandable is
another topic.
0
klohmuschel (196)
12/4/2007 6:51:32 PM
On Dec 4, 5:57 pm, Jon Harrop <use...@jdh30.plus.com> wrote:

[snip]
> > True. The type system is a big plus. But for me another huge advantage
> > is that it is almost trivial to use multiple cores using functional
> > code.
>
> The Haskell community have been making a lot of noise about their new
> parallel stuff lately but all I've seen is a Fibonacci number generator that
> was very slow and only used one of my two cores, i.e. is broken. There is
> more use of parallelism in the OCaml world (particularly in industry) but I
> would not call it easy. F# makes it easy but its implementation means
> you'll need to run >5 cores flat out to beat a single core running OCaml.
>
I was not talking about Haskell but about referentially transparent
code in general. A current project of mine is written in C# using
mostly immutable data structures. Using immutable data structures in
C# can be somewhat painful, but multithreading is quite easy compared
to a lock-based approach.
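The claim that immutable data makes multithreading easy can be sketched in any language. Here is a minimal Python illustration (the thread pool and tuple "records" are my own choices for the sketch, nothing from the C# project mentioned above): workers "update" shared records by building new values, so no locks are needed.

```python
from concurrent.futures import ThreadPoolExecutor

def with_tag(record, tag):
    """'Update' a record by building a new tuple; the input is never
    mutated, so concurrent readers and writers need no locks."""
    return record + (tag,)

records = [(1,), (2,), (3,)]
with ThreadPoolExecutor(max_workers=4) as pool:
    tagged = list(pool.map(lambda r: with_tag(r, "done"), records))

print(tagged)   # [(1, 'done'), (2, 'done'), (3, 'done')]
print(records)  # originals untouched: [(1,), (2,), (3,)]
```

With mutable records, each `with_tag` call would need a lock (or careful reasoning about aliasing); with immutable ones, correctness is independent of scheduling.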

By the way: The performance of F# will certainly improve significantly
with the next CLR update. The CLR currently has some very embarrassing
limitations compared to the JVM. Most importantly this:
http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=93858

If F# uses structs internally, it will certainly benefit from this.

What makes multithreading easier in F# than in OCaml? I thought it was
more or less the same language.

regards,

Rüdiger Klaehn
0
rudi2468 (20)
12/4/2007 7:18:57 PM
Griff a écrit :
> On Dec 3, 7:21 am, Bruno Desthuilliers
> <bdesth.quelquech...@free.quelquepart.fr> wrote:
> 
>>When I started learning Python some years ago, one of the first things
>>that amazed me was that, 9 times out of 10, the feature I was looking
>>for had one of the most obvious names to me
> 
> 
> Have you ever looked at Ruby at looked at "Matz's law of least
> surprise"?

Yes.

> Everything is guaranteed to just "intuitively" "make sense".... at
> least to him! :)

Indeed. There's some equivalent joke in Python's Zen:
"""
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
"""

And I - of course - realize that what's "obvious" to me may not be 
obvious to you...
0
12/4/2007 7:48:35 PM
Griff a écrit :
> On Dec 3, 7:13 am, Bruno Desthuilliers
> <bdesth.quelquech...@free.quelquepart.fr> wrote:
> 
>>>but you can always write a small helper function in Scheme:
>>
>>>(define (head x)
>>>   (car x))
>>
>>>Okay it is not very safe and assumes a list is being passed. But
>>>anyway that is what I often do in my Scheme code.
>>
>>The problem being that - extra function call overhead set aside - it's
>>*not* the standard. So when reading third-part material (either source
>>code, tutorials etc), you still have to remember all those somewhat
>>alien names...
> 
> 
> It is not that alien:
> 
> (define head car)
> (define tail cdr)
> 
> or
> 
> (define first car)
> (define rest cdr)

Please reread my post with more attention.
0
12/4/2007 7:49:41 PM
klohmuschel@yahoo.de a écrit :
> On Dec 3, 1:13 pm, Bruno Desthuilliers
> <bdesth.quelquech...@free.quelquepart.fr> wrote:
> 
>>klohmusc...@yahoo.de a écrit :
>>
>>
>>>On Dec 2, 5:09 pm, Bruno Desthuilliers
>>><bdesth.quelquech...@free.quelquepart.fr> wrote:
>>
>>>>Ho, yes... Sooo readable. What names like car, cdr or progn are supposed
>>>> to mean is pretty obvious, indeed.
>>
>>>but you can always write a small helper function in Scheme:
>>
>>>(define (head x)
>>>   (car x))
>>
>>>Okay it is not very safe and assumes a list is being passed. But
>>>anyway that is what I often do in my Scheme code.
>>
>>The problem being that - extra function call overhead set aside - it's
>>*not* the standard. So when reading third-part material (either source
>>code, tutorials etc), you still have to remember all those somewhat
>>alien names...
>>
>>(snip)
>>
>>
>>>I am now waiting on the arrival of the guy who tells: "Scheme and
>>>Common Lisp ist unreadable .. you guess all the (((((()))))))
>>
>>That has never been a problem to me - my editor (emacs) does a good job
>>wrt/ parenthesis !-)
>  
> 
> But the same arguments holds for all the other langues too. I mean
> sure Python will give you some nice to understand basic meanings but
> once in all the object oriented programming battle in Python you are
> still left with something only often the creater understood and had in
> mind how to interpret it.

Sorry, I never had many problems with Python's object model. And in my 
experience (most of my work is on OSS, so I tend to read quite a lot of 
third-party code), the average Python code tends to be mostly readable 
when compared to some other languages I have working experience with.

> Not to speak of C++.

Ahem. I would not compare Python to C++ (is C++ comparable to anything, 
anyway ?-)
0
12/4/2007 7:55:02 PM
Pascal Costanza a écrit :
> Bruno Desthuilliers wrote:
> 
>> Pascal Costanza a écrit :
>>
>>> Rüdiger Klaehn wrote:
>>>
>>>> Hello everybody,
>>>>
>>>> I got a question. Is there anything like a verbose functional language
>>>> that attempts to be easily readable?
>>>>
>>>> What I am looking for would be something that looks kind of like
>>>> smalltalk, with an emphasis on easy to read code, but fully functional
>>>> with immutable data structures and a powerful type system.
>>>>
>>>> I find that many people are confused by the very compact syntax of
>>>> existing functional languages such as haskell and ocaml and therefore
>>>> miss out on the big advantages of these languages such as referential
>>>> transparency.
>>>
>>>
>>> In case you don't insist on purity and can live with dynamic typing, 
>>> try Scheme and Common Lisp, especially Common Lisp or ISLISP.
>>
>>
>> Ho, yes... Sooo readable. What names like car, cdr or progn are 
>> supposed  to mean is pretty obvious, indeed.
> 
> 
> Maybe it's necessary to spell this out:
> 
> There is no such thing as a 'natural' syntax. Every programming language 
> syntax needs practice, and once you get used to one (of the good ones), 
> you can probably work with pretty much everything.

Pascal, it has *nothing* to do with syntax - it's about the *names*.

(snip - we obviously agree on this part, but that's not what I'm talking 
about)

> The OP asked for a functional language whose syntax is closer to 
> Smalltalk.

While he mentions syntax too, that's not how I understood the question. 
To me, it was more about overall readability - which includes syntax but 
is not restricted to it. Hence my joke about Lisp.

Now I may of course be totally wrong !-)
0
12/4/2007 8:00:59 PM
On Tue, 04 Dec 2007 09:14:21 +0000, Jon Harrop <usenet@jdh30.plus.com>
wrote:

>No language attempts to be unreadable ...

You've obviously never seen BrainFuck.  
http://en.wikipedia.org/wiki/Brainfuck


George
--
for email reply remove "/" from address
0
George
12/4/2007 8:56:14 PM
Bruno Desthuilliers wrote:

> Now I may of course be totally wrong !-)

Me too. :)

I just wanted to make sure that Scheme and Lisp are also considered. I 
have no problems if people decide against using them.


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
0
pc56 (3929)
12/4/2007 9:08:07 PM
klohmuschel@yahoo.de writes:

> On Dec 4, 6:25 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
>> klohmusc...@yahoo.de wrote:
>> > On Dec 4, 6:04 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
>> >> Simply because it is idiomatic in OCaml...
>>
>> > But why do want to force Scheme or Common Lisp stype into OCaml style?
>> > This does not make sense to me.
>>
>> Sure. We can draw the same comparison with the Lisp code spread across
>> several lines but I do not think it changes the outcome:
>>
>> Mathematica: {1, 2} /. {a_, b_} -> {b, a}
>>
>> F#:          (1, 2) |> fun (a, b) -> b, a
>>
>> Lisp:        ((lambda (pair)
>>                   (cons (cdr pair) (car pair)))
>>                (cons 1 2))
>
>
>
> Now the lisp code is readable at least. if it is understandable is
> another topic.

Well, for a sample of one, I now find it understandable (and it gives
me some hint as to what the unknown operators /. and |> in the other
two languages do).  If you further change car and cdr to first and
rest or head and tail, as in some other thread, then you get closer to
something that the "general English-speaking public" is likely to
understand.  Replacing cons with list (although that may change the
semantics in an undesired way) makes it even more understandable.

Since, this is relevant to the start/title of this thread, verbose
functional languages, here lisp appears to be the winner.  I'd be
curious to see the same line translated into scala to see if that was
sufficiently understandable.

I think Donn Cave's insight is correct here, "I don't know if programs
are mathematical, but I am fairly sure people aren't."  In fact,
although I was trained as a mathematician, I find the overly terse
mappings of most FP to be obfuscating more than it is elegant.
Perhaps, I'm just looking for the Cobol of FPs, something like

Evaluate 
    the list with elements 1 and 2 
applied as a parameter to 
    the anonymous function that takes one parameter which
        makes a list with elements
            the second element of the parameter
        and
            the first element of the parameter.

I could probably give that to my wife, an Opera singer, and with her
English-Bulgarian dictionary, she could probably tell me what it
meant.  She would need the dictionary because terms like evaluate,
applied, anonymous function, parameter, and element are probably not
in her working vocabulary.

Now, I don't think the demand is for something quite that verbose, but
it was intended as an extreme example. Moreover, when I read the lisp,
that is essentially what I read.  Well, once I parsed the two parens at
the beginning and saw that we were applying the result of an
expression as a function.  

And that gets to the verbosity point.  Even in the lisp, one has to
recognize that we are using the result of an expression as a function,
and the syntax to do so is so terse that unless one regularly reads
that kind of lisp (and I don't), one is likely to miss it, and having
missed it, one then reads formulating gobbledy-gook until one gets to
the point, where one's internal parser says too many errors, go back
and determine what clue I missed. The indented version helped with
that, as which parts went together were more obvious.

Maybe what I'm looking for is an FP to English program like those
programs which translated C declarations to and from English.
0
cfc (239)
12/4/2007 9:27:41 PM
In article <sddve7ei20i.fsf@shell01.TheWorld.com>,
 Chris F Clark <cfc@shell01.TheWorld.com> wrote:

> >> Mathematica: {1, 2} /. {a_, b_} -> {b, a}
> >>
> >> F#:          (1, 2) |> fun (a, b) -> b, a
> >>
> >> Lisp:        ((lambda (pair)
> >>                   (cons (cdr pair) (car pair)))
> >>                (cons 1 2))
> >
> >
> >
> > Now the Lisp code is readable at least. Whether it is understandable is
> > another topic.
> 
> Well, for a sample of one, I now find it understandable (and gives me
> some hint as to what the unknown operators /. and |> in the other two
> languages are).  If you further change car and cdr to first and rest
> or head and tail like in some other thread, then you get closer to
> something that the "general English speaking public" is likely to
> understand.  Replacing cons with list (although that may change the
> semantics in an undesired way) makes it even more understandable.
> 
> Since this is relevant to the start/title of this thread, verbose
> functional languages, here lisp appears to be the winner.  I'd be
> curious to see the same line translated into scala to see if that was
> sufficiently understandable.
> 
> I think Donn Cave's insight is correct here, "I don't know if programs
> are mathematical, but I am fairly sure people aren't."  In fact,
> although I was trained as a mathematician, I find the overly terse
> mappings of most FP to be obfuscating more than it is elegant.
> Perhaps, I'm just looking for the Cobol of FPs, something like
> 
> Evaluate 
>     the list with elements 1 and 2 
> applied as a parameter to 
>     the anonymous function that takes one parameter which
>         makes a list with elements
>             the second element of the parameter
>         and
>             the first element of the parameter.

Well, that would be a bit like AppleScript, which has a functional
flavor.

A sub-routine for replacing items in a list by matching (from Apple's
page on Applescript):
 

on replace_matches(this_list, match_item, replacement_item, replace_all)
  repeat with i from 1 to the count of this_list
    set this_item to item i of this_list
    if this_item is the match_item then
      set item i of this_list to the replacement_item
      if replace_all is false then return this_list
    end if
  end repeat
  return this_list
end replace_matches

> 
> I could probably give that to my wife, an Opera singer, and with her
> English-Bulgarian dictionary, she could probably tell me what it
> meant.  She would need the dictionary because terms like evaluate,
> applied, anonymous function, parameter, and element are probably not
> in her working vocabulary.
> 
> Now, I don't think the demand is for something quite that verbose, but
> it was intended as an extreme example. Moreover, when I read the lisp,
> that is essentially what I read.  Well, once I parsed the two parens at
> the beginning and saw that we were applying the result of an
> expression as a function.  
> 
> And that gets to the verbosity point.  Even in the lisp, one has to
> recognize that we are using the result of an expression as a function,
> and the syntax to do so is so terse that unless one regularly reads
> that kind of lisp (and I don't), one is likely to miss it, and having
> missed it,

That's why in Common Lisp the full version is:

(funcall (function
            (lambda (pair)
              (cons (cdr pair) (car pair))))
         (cons 1 2))

Which usually is written a bit shorter as:

(funcall #'(lambda (pair)
              (cons (cdr pair) (car pair)))
         (cons 1 2))

FUNCALL invokes the first parameter (a function object)
on the arguments.

That the funcall can be omitted is only a special case
that has been added to Common Lisp to please some people.
The default is to use FUNCALL. FUNCTION is a special
operator that says that the enclosed thing is a function.

> one then reads formulating gobbledy-gook until one gets to
> the point, where one's internal parser says too many errors, go back
> and determine what clue I missed. The indented version helped with
> that, as which parts went together were more obvious.
> 
> Maybe what I'm looking for is an FP to English program like those
> programs which translated C declarations to and from English.

-- 
http://lispm.dyndns.org/
0
joswig8642 (2203)
12/4/2007 9:49:12 PM
klohmuschel@yahoo.de wrote:
> On Dec 4, 6:25 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
>> klohmusc...@yahoo.de wrote:
>>> On Dec 4, 6:04 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
>>>> Simply because it is idiomatic in OCaml...
>>> But why do you want to force Scheme or Common Lisp style into OCaml style?
>>> This does not make sense to me.
>> Sure. We can draw the same comparison with the Lisp code spread across
>> several lines but I do not think it changes the outcome:
>>
>> Mathematica: {1, 2} /. {a_, b_} -> {b, a}
>>
>> F#:          (1, 2) |> fun (a, b) -> b, a
>>
>> Lisp:        ((lambda (pair)
>>                   (cons (cdr pair) (car pair)))
>>                (cons 1 2))
> 
> Now the Lisp code is readable at least. Whether it is understandable is
> another topic.

Not very idiomatic, though.

Not sure what this one-liner is supposed to mean. But you would rather 
say something like this:

; Scheme & Lisp
((lambda (a b) (cons b a)) 1 2)

or

; Scheme & Lisp
(apply (lambda (a b) (cons b a)) (list 1 2))

or

; Common Lisp
(destructuring-bind (a . b) (cons 1 2)
   (cons b a))


...and several other variations. Again, it depends on what you actually 
want to do here...


Pascal

-- 
My website: http://p-cos.net
Common Lisp Document Repository: http://cdr.eurolisp.org
Closer to MOP & ContextL: http://common-lisp.net/project/closer/
0
pc56 (3929)
12/4/2007 10:06:46 PM
George Neuner wrote:
> On Tue, 04 Dec 2007 09:14:21 +0000, Jon Harrop <usenet@jdh30.plus.com>
> wrote:
> 
> 
>>No language attempts to be unreadable ...
> 
> 
> You've obviously never seen BrainFuck.  

keyboard !

This one is worth an entry in fortune...
0
12/4/2007 10:27:48 PM
Robert A Duff <bobduff@shell01.theworld.com> wrote:
[...]
> Not me.  There are a lot of very good things about SML,
> but I find the syntax to be a big stumbling block.
> Maybe I'd get used to it if I programmed in SML a lot.

> I'm not sure why I don't like SML syntax, but I think:
> SML is too terse for my taste.

That is a new one.  Well, to be honest, I've heard that opinion from a
few programmers, but most people I know who actually know a little
SML, would likely agree that the syntax of SML is accurately described
as "baroque" (http://en.wikipedia.org/wiki/Baroque#Modern_usage , in
particular "excessive ornamentation").  There are a couple places in
the syntax where SML requires a keyword that could be easily dropped
without introducing any ambiguity to the grammar.  There are also
several keywords used for specifying "blocks" that could also be
replaced to use simple parentheses without ambiguity.  There are also
several relatively long keywords.

> I have trouble seeing at a glance where syntactic constructs end.

Funny, I've never had trouble with that in SML, in particular.

Here are a few notes on the syntax of SML (no attempt is made for
completeness).  The expression that is typically considered difficult
to read (although I've never had major problems with it) due to the
lack of an explicit terminator is nested use of the case expression:

  case ...
   of ... => ...
  [ | ... => ... ]*

Perhaps similarly surprising is the handle expression (typically "try"
in other languages):

  ... handle ... => ...
         [ | ... => ... ]*

I use handle expressions rarely, as I usually use an implementation of
the try-in-unless construct of Benton and Kennedy for exception
handling purposes and other functions for scoped resource management.

Anonymous functions also do not have a terminating keyword, but they
are often written inside parentheses:

  fn ... => ...
 [ | ... => ... ]*

Other expressions are simpler in that regard.  Conditional expressions
always include the else part:

  if ... then ... else ...

Local binding forms, whether at the declaration level or at the
expression level, have a terminator:

  local ... in ... end            (* declaration *)
  let ... in ... [ ; ... ]* end   (* expression *)

The same goes for sequential execution:

  ( ... [ ; ... ]* )

A declaration ends when the scope ends (keyword "in" or "end") or when
a new declaration begins (keywords "fun", "val", "structure",
"signature", "functor", "local", "type", "exception", "datatype",
"open", "infix", "nonfix", and "infixr").

The following page contains a few more notes on syntax:

  http://mlton.org/StandardMLGotchas

> The parentheses end up in the "wrong" places (compared to what I'm
> used to).  This appears to be a misguided attempt to avoid "too
> many" parens.

No, it is not. :-)  Lightweight syntax for calling functions is crucial
for stuff like combinator libraries.

> Constructs delimited by keywords get nested inside constructs
> delimited by parentheses, which seems inside-out to me.

That is also something I consider an advantage and is shared by most
FP languages.  The advantage is that constructs can be nested rather
freely, without arbitrary restrictions.  In a typical imperative
language, there is a separation between statements and expressions,
which is often quite awkward and forces one to introduce additional
syntactic complexity.

> Common practise is to use abbreviated names too much for my taste.

This may be partially true, but certainly isn't a property of the
syntax.  You can use as long names as you want.

-Vesa Karvonen
0
12/4/2007 10:40:21 PM
Rainer Joswig <joswig@lisp.de> writes:
> > >>
> > >> Lisp:        ((lambda (pair)
> > >>                   (cons (cdr pair) (car pair)))
> > >>                (cons 1 2))

Can you do that with a destructuring bind?  Something like:

  ((lambda ((a . b)) (b . a)) (1 . 2))
0
phr.cx (5493)
12/4/2007 11:05:05 PM
In article <7xsl2iysbi.fsf@ruckus.brouhaha.com>,
 Paul Rubin <http://phr.cx@NOSPAM.invalid> wrote:

> Rainer Joswig <joswig@lisp.de> writes:
> > > >>
> > > >> Lisp:        ((lambda (pair)
> > > >>                   (cons (cdr pair) (car pair)))
> > > >>                (cons 1 2))
> 
> Can you do that with a destructuring bind?  Something like:
> 
>   ((lambda ((a . b)) (b . a)) (1 . 2))

No.

Standard function parameters don't provide destructuring.

(b . a) is also not a valid expression, since b is not a function,
macro or special operator.
(1 . 2) is also not a valid expression, since 1 is not a function,
macro or special operator.

Destructuring is provided with DESTRUCTURING-BIND.

(destructuring-bind (a . b) (cons 1 2)
    (cons b a))

You can easily add a version with a shorter name
or functions that do destructuring, but there is
nothing else by default.

-- 
http://lispm.dyndns.org/
0
joswig8642 (2203)
12/4/2007 11:13:39 PM
Rainer Joswig <joswig@lisp.de> writes:
> >   ((lambda ((a . b)) (b . a)) (1 . 2))
> No.
> 
> Standard function parameters don't provide destructuring.

Oh, ok, somehow I thought they did, it's been a while.

Python version: (lambda (a,b): (b,a)) (1,2)

Haskell:  (\(a,b) -> (b,a)) (1,2)

Java:   class FlipTwoItemsInATuple   (500 lines of code snipped)
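
For what it's worth, the Haskell one-liner above really does evaluate to the swapped pair; a minimal runnable check (the `main` wrapper is added here only to make it executable, not part of the original post):

```haskell
-- Swap the components of a pair with an anonymous function,
-- exactly as in the Haskell line quoted above.
main :: IO ()
main = print ((\(a, b) -> (b, a)) (1, 2))
-- prints (2,1)
```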
0
phr.cx (5493)
12/4/2007 11:24:03 PM
Thanks for your comments.

Vesa Karvonen <vesa.karvonen@cs.helsinki.fi> writes:

[snip]
> The following page contains a few more notes on syntax:
>
>   http://mlton.org/StandardMLGotchas

I'll have to read that before commenting on the above snipped stuff.
I'll just make some comments on the stuff below:

>> The parentheses end up in the "wrong" places (compared to what I'm
>> used to).  This appears to be a misguided attempt to avoid "too
>> many" parens.
>
> No, it is not. :-)  Lightweight syntax for calling functions is crucial
> for stuff like combinator libraries.

Smiley noted.  ;-)

But f(x) isn't exactly heavy!  Why is f x so much better?
And then you end up having to write (f x) when it's nested
inside certain other things, which is what I meant by the "wrong"
place above.  I freely admit this is heavily influenced by
what I'm used to.

>> Constructs delimited by keywords get nested inside constructs
>> delimited by parentheses, which seems inside-out to me.
>
> That is also something I consider an advantage and is shared by most
> FP languages.  The advantage is that constructs can be nested rather
> freely, without arbitrary restrictions.

I agree with the "free nesting" idea.  I just don't like the syntax.

This syntactic problem doesn't exist in Lisp, because every construct is
introduced by a paren, not a keyword.  I'm not a big fan of Lisp, but I
find that from a syntactic point of view, Lisp is easier to read than
ML-style languages (for me!).

>...In a typical imperative
> language, there is a separation between statements and expressions,
> which is often quite awkward and forces one to introduce additional
> syntactic complexity.

Yes, I agree about that.

>> Common practise is to use abbreviated names too much for my taste.
>
> This may be partially true, but certainly isn't a property of the
> syntax.  You can use as long names as you want.

Well, I can use long names myself, but I want to read _other_ people's
code.  So common practise matters.  And if I disobey common practise,
then others will have trouble reading _my_ code, which is equally bad.

- Bob
0
bobduff (1543)
12/5/2007 1:05:25 AM
Robert A Duff <bobduff@shell01.TheWorld.com> writes:
> But f(x) isn't exactly heavy!  Why is f x so much better?

Because of currying, maybe.  You'd have to write (f x y z) as
f(x)(y)(z) or even ((f(x))(y))(z).
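
A small sketch of that point in Haskell, which curries the same way SML does (`add3` is a made-up example function, not from the thread):

```haskell
-- With curried functions, juxtaposition nests to the left:
-- add3 1 2 3 already means ((add3 1) 2) 3, so a parenthesized
-- call syntax would force you to write f(x)(y)(z).
add3 :: Int -> Int -> Int -> Int
add3 x y z = x + y + z

main :: IO ()
main = do
  print (add3 1 2 3)      -- juxtaposition; prints 6
  print (((add3 1) 2) 3)  -- the same call, fully parenthesized
```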
0
phr.cx (5493)
12/5/2007 1:15:13 AM
Rainer Joswig <joswig@lisp.de> writes:

> That's why in Common Lisp the full version is:
>
> (funcall (function
>             (lambda (pair)
>               (cons (cdr pair) (car pair))))
>          (cons 1 2))
>
> Which usually is written a bit shorter as:
>
> (funcall #'(lambda (pair)
>               (cons (cdr pair) (car pair)))
>          (cons 1 2))
>

Good point.  I personally like the more explicit Common Lisp version
even better, although not the version with #' (I presume thats a
shorthand for function quotation), but disliking special character
sequences is probably just me.  There wouldn't be lots of languages
that have all sorts of different special character sequence operators,
macros, and what-have-you, if people didn't like the terseness they
provide.  

My problem is that I find myself ever more dis-inclined to learn new
notation and syntax--new ideas, concepts, paradigms, patterns,
etc. yes, but new syntax not so much.  And, that gets back to what is
nice about the full form Common Lisp version.  It exposes with the
function and funcall forms parts of the process that are otherwise
hidden from view, and that exposes the concepts; by doing it
with words, it means one has some idea of what those concepts are.
Moreover, one can often look up the words if one wants to peruse
further.  Special character sequences, while more compact don't
generally give one a name for the concept, I doubt I could google #'
and get any insight.  In fact, just looking at this code, I can see
that in common lisp there is a distinction between a function and a
lambda.  Now, at the moment I don't care, but if I were working with
some CL code, I might want to determine if that difference is
important.

So, thanks for that example....
0
cfc (239)
12/5/2007 2:57:18 AM
Rüdiger Klaehn wrote:
> By the way: The performance of F# will certainly improve significantly
> with the next CLR update. The CLR currently has some very embarassing
> limitations compared to the JVM. Most importantly this:
>
http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=93858
> 
> If F# uses structs internally, it will certainly benefit from this.

F# basically implements its own versions of anything that the CLR does
suboptimally and inlining is one such thing.

After extensive benchmarking, we decided that complex numbers are the only
data structure that warrants being a struct in F# rather than a class.

> What makes multithreading easier in F# than in OCaml?

You can do threads in OCaml but they will never run concurrently because
OCaml's GC is not multithreaded.

> I thought it was more or less the same language.

The intersection of OCaml and F# is usefully large (bigger than the whole of
SML) but the languages are quite different in many ways. In this context,
F# provides native constructs and syntactic support for Erlang-style
message passing and asynchronous workflows. OCaml has libraries but they
are not part of the language (and I haven't played with them).

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/5/2007 7:47:06 AM
On 5 Dez., 08:47, Jon Harrop <use...@jdh30.plus.com> wrote:
>
> The intersection of OCaml and F# is usefully large (bigger
> than the whole of SML)

Huh?

We've had that one before, but: F# is lacking most of the ML module
system, so you need to apply a very weird metric to make this
statement true (the module system being a major part of what defines
the ML language family of today).

- Andreas
0
rossberg (600)
12/5/2007 9:38:29 AM
On 4 Dez., 22:27, Chris F Clark <c...@shell01.TheWorld.com> wrote:

> Evaluate
>     the list with elements 1 and 2
> applied as a parameter to
>     the anonymous function that takes one parameter which
>         makes a list with elements
>             the second element of the parameter
>         and
>             the first element of the parameter.
>
> I could probably give that to my wife, an Opera singer, and with her
> English-Bulgarian dicitionary, she could probably tell me what it
> meant.  She would need the dictionary because terms like evaluate,
> applied, anonymous function, parameter, and element are probably not
> in her working vocabulary.

And that's exactly why the code is more *readable*; but reading and
understanding are different things, so it's not more understandable.
On the contrary, IMHO.
And remember that your 8 liner with ca. 40 words is an absolutely
trivial piece of code.
It would not be too long before users of that language demand
abbreviations for the verbose and self evident stuff:
- instead "evaluate X" write just "X"
- instead "X applied as a parameter to Y" write "Y X"
- instead "list with elements a,b, ... and z" write "[a,b,...,z]"
- instead "the anonymous function that takes one parameter which"
write "\x ->"
... and so on.

But, if you really like to read such stuff, one could write a Haskell
(OCaml, Lisp or whatever) reader that reads a program to you. Would
be nice to have when one can't get sleep :)

For example:

map (\(a,b) -> (b,a+1)) [(1, "foo"), (2, "bar")]

Evaluate the application of
 an anonymous function that takes one parameter
   that is a 2-tuple, which,
   given that the first component of said tuple is called a
   and the second component of the aforementioned tuple is called b,
   evaluates
      a tuple where the first component is b
      and the second component is the result of
        evaluation of the application of
           a
           and the constant one
           to the function +
...
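
For reference, the line being "read aloud" above actually evaluates as follows (wrapped in `main` here only to make it runnable):

```haskell
-- Swap each pair and increment the number, as in the example above.
main :: IO ()
main = print (map (\(a, b) -> (b, a + 1)) [(1, "foo"), (2, "bar")])
-- prints [("foo",2),("bar",3)]
```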

0
quetzalcotl (241)
12/5/2007 9:57:19 AM
* Vesa Karvonen:

> That is a new one.  Well, to be honest, I've heard that opinion from a
> few programmers, but most people I know who actually know a little
> SML, would likely agree that the syntax of SML is accurately described
> as "baroque" (http://en.wikipedia.org/wiki/Baroque#Modern_usage , in
> particular "excessive ornamentation").  There are a couple places in
> the syntax where SML requires a keyword that could be easily dropped
> without introducing any ambiguity to the grammar.  There are also
> several keywords used for specifying "blocks" that could also be
> replaced to use simple parentheses without ambiguity.  There are also
> several relatively long keywords.

But, with one exception, core SML lacks keywords at the ends of blocks.
This bugs me a bit, too.  I've also run into the

  if condition then
    something ();
  else
    somethingElse ();
    oneMoreThing ();
  continueProcessing ();

problem. 8-/
0
fw12 (438)
12/5/2007 10:16:56 AM
* Robert A. Duff:

> But f(x) isn't exactly heavy!  Why is f x so much better?

Without parentheses, you can write

  val i : int = ...
  ...
  printf S "The value of i is: " I i S ".\n" $;

The alternative would be:

  printf (S ("The value of i is: " (I (i (S (".\n" ($)))))));
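
The trick behind such a printf is Danvy-style "functional unparsing": each directive transforms a continuation, so a format is built by plain function application/composition with no nested parentheses. A minimal Haskell sketch of the same idea (the combinator names `lit`, `int`, and `sprintf` are made up here, not Florian's actual library):

```haskell
-- Each directive takes a continuation expecting the output so far.
lit :: String -> (String -> a) -> String -> a
lit s k out = k (out ++ s)

int :: (String -> a) -> String -> Int -> a
int k out n = k (out ++ show n)

-- Run a format, starting from the identity continuation and the
-- empty output; the result type grows one argument per directive.
sprintf :: ((String -> String) -> String -> a) -> a
sprintf fmt = fmt id ""

main :: IO ()
main = putStr (sprintf (lit "The value of i is: " . int . lit ".\n") 42)
-- prints: The value of i is: 42.
```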
0
fw12 (438)
12/5/2007 10:19:34 AM
rossberg@ps.uni-sb.de wrote:
> We've had that one before, but: F# is lacking most of the ML module
> system, so you need to apply a very weird metric to make this
> statement true (the module system being a major part of what defines
> the ML language family of today).

That may have been true ten years ago but the ML family of languages has
evolved better alternatives since then and the rest of the module system is
now vestigial.

Look at some of the features that are compatible between OCaml and F#:

- functional record update
- array expressions and patterns
- or-patterns
- guarded patterns
- Ad-hoc polymorphic printing (printf)
- Can export infix operator definitions from modules
- Module hierarchy can be reflected in source directory structure
- Extensive currying in the stdlib
- Mutable record fields
- Polymorphic structural hashing
- Marshalling

These are all far more widely exploited than functors and none of these
features are even in SML.

From my point of view, your statement is equivalent to claiming that C++ is
not of the C family of languages because it doesn't even have trigraphs.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/5/2007 11:33:21 AM
On Dec 5, 12:33 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
> rossb...@ps.uni-sb.de wrote:
> > We've had that one before, but: F# is lacking most of the ML module
> > system, so you need to apply a very weird metric to make this
> > statement true (the module system being a major part of what defines
> > the ML language family of today).
>
> That may have been true ten years ago but the ML family of languages has
> evolved better alternatives since then and the rest of the module system is
> now vestigial.

Unless you count classes -- which are not in the common subset of
OCaml and C# -- I wouldn't know of an alternative to modules in any
current ML, not to mention a better one.

> [list of features that are mostly convenient syntactic sugar.]

You cannot work around the lack of functors half as easily as any of
this. More importantly, modules also provide generic type abstraction,
which I'd argue is used "more widely" than anything in your list.

- Andreas
0
rossberg (600)
12/5/2007 12:17:54 PM
I wrote:
>
> Unless you count classes -- which are not in the common subset of
> OCaml and C#

Oops, should be F#, of course.
0
rossberg (600)
12/5/2007 12:19:03 PM
Florian Weimer <fw@deneb.enyo.de> wrote:
[...]

> I've also run into the

>   if condition then
>     something ();
>   else
>     somethingElse ();
>     oneMoreThing ();
>   continueProcessing ();

> problem. 8-/

I can see how that can happen, but, honestly, I don't recall ever being
tripped by that in SML, because I use automatic indentation.  That is
probably also why I don't get tripped by the nested cases issue.  I
notice issues like that when I press tab (or invoke indent-region,
etc...).

But, I agree that SML's syntax could be improved.  However, I doubt
that there is any non-trivial group of SML programmers that could
agree on exactly how the syntax should be improved.  I don't recall
ever discussing SML's syntax with anyone whose opinions regarding the
syntax would have matched with mine completely.  Everyone has an
opinion on syntax.  Perhaps SML needs a dictator. :-)

-Vesa Karvonen
0
12/5/2007 2:08:01 PM
Jon Harrop (usenet@jdh30.plus.com) wrote:
: The latter is certainly more verbose but is it really more readable? I would

Verbosity doesn't help, if it does not add meaning.  Also, you can
add meaning without being verbose.  For instance, I would say that
Haskell code is much more readable when you make good use of the
$ operator, which you can very well do without.  For "a b c (d e f)"
may mean just the same as "a b c $ d e f", but "a b c (" does not
mean just the same as "a b c $" to the human reader.  What I mean is
that when you encounter a $ sign, you immediately know that
everything you are going to read from now on, until the end of the
expression, is the last argument of the function application you are
currently trying to understand.  When you see an opening parenthesis,
you only know that you are starting to read a subexpression, which
may or may not be the last one.  In long expressions with lots
of subexpressions, $ really helps me, because I do not have to scan
forward to understand the structure of the whole.  Extra meaning
really helps; extra verbosity without extra meaning may look
helpful to the beginner, but it just means more to read.
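
A concrete illustration of the point about $ (a minimal sketch using only the standard Prelude, not from the original post):

```haskell
-- ($) is just low-precedence application: f $ x = f x.
-- The two lines below print the same list, but in the second
-- one each $ tells the reader "everything to the end of the
-- line is the last argument".
main :: IO ()
main = do
  print (take 3 (filter even [1 ..]))  -- prints [2,4,6]
  print $ take 3 $ filter even [1 ..]
```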

Dirk van Deun
-- 
Ceterum censeo Redmond delendum
0
dvandeun (36)
12/5/2007 2:56:30 PM
rossberg@ps.uni-sb.de wrote:
> Unless you count classes -- which are not in the common subset of
> OCaml and C# -- I wouldn't know of an alternative to modules in any
> current ML, not to mention a better one.

They are not in the common subset, yes.

>> [list of features that are mostly convenient syntactic sugar.]
> 
> You cannot work around the lack of functors half as easily as any of
> this.

Vesa only just posted a huge sprawling mess of SML trying to work around the
absence of some of those features I cited by using functors.

Can you cite a single example of someone working around the absence of
functors?

> More importantly, modules also provide generic type abstraction, 
> which I'd argue is used "more widely" than anything in your list.

You really think people use modules for generic type abstraction more often
than printf?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/5/2007 3:10:58 PM
Ingo Menger <quetzalcotl@consultant.com> writes:

> On 4 Dez., 22:27, Chris F Clark <c...@shell01.TheWorld.com> wrote:
>
> And that's exactly why the code is more *readable*; but reading and
> understanding are different things, so it's not more understandable.
> On the contrary, IMHO.

Yes, but I suspect you don't read a lot of code in languages you don't
know.  These days I spend most of my life reading and correcting
programs in languages I have never learned.

> And remember that your 8 liner with ca. 40 words is an absolutely
> trivial piece of code.

My experience is most code is actually trivial and the only problem
with it is that it is obfuscated by overly terse names and enigmatic
notations.

> But, if you really like to read such stuff, one could write a Haskell
> (OCaml, Lisp or whatever) reader, that reads a program to you. Would
> be nice to have when one can't get sleep :)

I think that is actually part of the solution.  But, you have to
remember the title of this thread "verbose functional languages".
There are those of us who want functional languages: higher-order
functions, immutable data structures, garbage collectors, closures,
tail-recursion support, etc., but we want it in a notation that we can
take to audiences who are not mathematically adept (and perhaps not
even particularly programming literate).  

I don't want to deny those who desire a terse notation their options,
but we already have plenty of terse fp languages.  I'm looking for an
fp that I can take to the "unwashed" masses and which is subversively
simple, so that they start getting the advantages without being
frightened off.  For example, closures are trivial for "normal" people
to understand, but introducing them with "\x ->" (presumably meaning
lambda x maps to) is scary and off-putting for most people not
previously exposed to those notations, whereas "function (x) is" just
seems a lot less so.

A truly great language would transparently convert between the more
terse and more verbose notation.  That way, when I'm reading code
where I don't understand some part of the notation I can expand it to
something that I can study without being a member of the internal
notational cabal.  Whereas, once I understand it, I can collapse to
the more terse notation to become more sophisticated and compress more
content into less space.

0
cfc (239)
12/5/2007 3:50:16 PM
On Dec 5, 4:10 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
> rossb...@ps.uni-sb.de wrote:
> > Unless you count classes -- which are not in the common subset of
> > OCaml and C# -- I wouldn't know of an alternative to modules in any
> > current ML, not to mention a better one.
>
> They are not in the common subset, yes.

OK, so I remain puzzled how you justify your statement about modules
having become "vestigial" in ML.

> Vesa only just posted a huge spawling mess of SML trying to work around the
> absence of some of those features I cited by using functors.

Which post are you referring to?

> Can you cite a single example of someone working around the absence of
> functors?

In ML? No, why, you have functors. In languages that do not have
anything comparable? You fall back to copy & paste or to casting. I'm
sure you have seen plenty of examples of that before.

> > More importantly, modules also provide generic type abstraction,
> > which I'd argue is used "more widely" than anything in your list.
>
> You really think people use modules for generic type abstraction more often
> than printf?

Printf provides some local convenience. Modularity OTOH is fundamental
for programming in the large. You cannot compare them by nitpicking on
the number of occurrences of the words "printf" or "module" in average
code.

- Andreas
0
rossberg (600)
12/5/2007 5:35:44 PM
rossberg@ps.uni-sb.de wrote:
> On Dec 5, 4:10 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
>> rossb...@ps.uni-sb.de wrote:
>> > Unless you count classes -- which are not in the common subset of
>> > OCaml and C# -- I wouldn't know of an alternative to modules in any
>> > current ML, not to mention a better one.
>>
>> They are not in the common subset, yes.
> 
> OK, so I remain puzzled how you justify your statement about modules

We were talking about parts of the module system not found in F#, i.e. not
modules themselves.

> having become "vestigial" in ML.

Note that we have different definitions of ML.

The main point of difference between these languages is functors because F#
doesn't have them. In OCaml, the main use of functors is working around the
lack of type classes. F# has a better solution to that problem and, hence,
has even less need for functors.

Even in OCaml, functors have a second-class implementation that
unnecessarily hampers optimization (no inlining across functor boundaries)
to the extent that functors are totally unsuitable for factoring over
numeric representation. For example, Vesa's post would be an awful idea in
OCaml. Unlike F#, OCaml provides no decent alternative so you must code up
such things manually.

>> Vesa only just posted a huge sprawling mess of SML trying to work around
>> the absence of some of those features I cited by using functors.
> 
> Which post are you referring to?

http://groups.google.co.uk/group/comp.lang.functional/msg/5a30908fdef3bcd7

>> Can you cite a single example of someone working around the absence of
>> functors?
> 
> In ML? No, why, you have functors.

That is circular with your definition of ML ("must have functors").

> In languages that do not have 
> anything comparable? You fall back to copy & paste or to casting. I'm
> sure you have seen plenty of examples of that before.

My experience is exactly the opposite.

Almost all of my uses of functors in OCaml derive from their use in the
OCaml stdlib, specifically the Set and Map modules. You can create a set of
strings with:

  include Set.Make(String)

but only because the built-in String module happens to implement "t"
and "compare". If you want a map from ints then you must write your
own "Int" module:

  module Int : Map.OrderedType with type t = int = struct
    type t = int
    let compare (n : t) m = compare n m
  end

and instantiate your map with it:

  include Map.Make(Int)

That Int module is boiler-plate code that I am forced to cut'n'paste
everywhere.
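For what it's worth, OCaml does let the key module be written inline at the
point of functor application, which shortens the boilerplate a little; a
minimal sketch (the module name "IntMap" is just illustrative):

```ocaml
(* Inline the key module at the point of functor application,
   instead of defining a separate named Int module first. *)
module IntMap = Map.Make (struct
  type t = int
  let compare = compare
end)

let m = IntMap.add 2 "two" (IntMap.add 1 "one" IntMap.empty)
let () = assert (IntMap.find 2 m = "two")
```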

F# relieves me of this burden by providing generic equality and hashing
functions that may be overridden if necessary (which is very rarely the
case in user code). This is a restricted form of type classes (never any
need for run-time dispatch to the appropriate function) and it works
beautifully.

So the primary use of functors in OCaml is no longer necessary in F#. My use
of functors in other settings is negligible, maybe one functor every
million lines of production code, and that is easily replaced with classes
even if the code is slightly different between OCaml and F#.

>> > More importantly, modules also provide generic type abstraction,
>> > which I'd argue is used "more widely" than anything in your list.
>>
>> You really think people use modules for generic type abstraction more
>> often than printf?
> 
> Printf provides some local convenience.

Yes, this convenience and familiarity are the foundation of OCaml's
widespread use and the reason F# copied it. SML would do well to follow.

> Modularity OTOH is fundamental 
> for programming in the large. You cannot compare them by nitpicking on
> the number of occurrences of the words "printf" or "module" in average
> code.

We were talking about "generic type abstraction" and not "modularity".

Are you seriously suggesting that SML is better equipped for "programming in
the large" than .NET?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/6/2007 7:32:32 AM
On 5 Dez., 16:50, Chris F Clark <c...@shell01.TheWorld.com> wrote:
> Ingo Menger <quetzalc...@consultant.com> writes:
> > On 4 Dez., 22:27, Chris F Clark <c...@shell01.TheWorld.com> wrote:
>
> > And that's exactly why the code is better *readable*, but reading and
> > understanding are different things, so it's not better understandable.
> > On the contrary, IMHO.
>
> Yes, but I suspect you don't read a lot of code in languages you don't
> know.  These days I spend most of my life reading and correcting
> programming in languages I have never learned.

And how exactly would it help you if the language looked like English,
but wasn't actually English?

> My experience is most code is actually trivial and the only problem
> with it is that it is obfuscated by overly terse names and enigmatic
> notations.

Have you ever considered the possibility that longer names can be more
easily confused? As I get older, it happens more often to me that I
stumble across some compiler message saying that some name is unknown,
and sometimes I need quite some time to figure out that I actually
switched two letters somewhere in a long name.

The notion that longer names make programs more readable is
fundamentally mistaken, IMHO. I hope you'll never have to maintain a
program written by someone who believes in the blessings of long,
"speaking" names, for they speak only if you know the language.
Assuming you're not a German speaker, what do you think the
following names mean?
  DonauSchiffahrtKapitänsWitwenPension
  DonauSchiffahrtsInspektionsIntervall
  BergwachtVereinigungsVorsitzendenEntschädigung
  BergwachtVereinigungsVorsitzendenEntscheidung

>
> > But, if you really like to read such stuff, one could write a Haskell
> > (OCaml, Lisp or whatever) reader, that reads a program to you. Would
> > be nice to have when one can't get sleep :)
>
> I think that is actually part of the solution.  But, you have to
> remember the title of this thread "verbose functional languages".
> There are those of us who want functional languages: higher-order
> functions, immutable data structures, garbage collectors, closures,
> tail-recursion support, etc., but we want it in a notation that we can
> take to audiences who are not mathematically adept (and perhaps not
> even particularly programming literate).

I don't think this will work. This is my opinion only, of course.
At the foundation of every PL lie exactness and formal rigor, as well
as certain fundamental concepts. I doubt that a relaxed syntax can
make the required understanding any easier.

An illustration: take cooking recipes, for example. If merely using
natural language did the trick, we could all be great cooks by just
working down the recipe. Yet the fact is that many of us are miserable
cooks (or do not cook at all), despite having access to the finest
recipes.

   
>
> I don't want to deny those who desire a terse notation their options,
> but we already have plenty of terse fp languages.  I'm looking for an
> fp that I can take to the "unwashed" masses and which is subversively
> simple, so that they start getting the advantages without being
> frightened off.  For example, closures are trivial for "normal" people
> to understand, but introducing them with "\x ->" (presumably meaning
> lambda x maps to) is scary and off-putting for most people not
> previously exposed to those notations, whereas "function (x) is" just
> seems a lot less so.

Is this so? How about anonymous functions with two arguments?
"function (x) is function (y) is" perhaps? And for what reason do we
need the () parentheses?
I personally think that especially the Haskell syntax is so terse
because it abandons lots of braces, semicolons and parens that
dominate the look of program texts in other languages. And I think
it's a good thing.
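In OCaml terms, the answer to the two-argument question is that the verbose
spelling simply nests; a small sketch (illustrative names only, transliterating
the thread's Haskell-style notation):

```ocaml
(* A two-argument anonymous function is just nested one-argument
   functions; the terse form is shorthand for the verbose one. *)
let add_verbose = fun x -> (fun y -> x + y)  (* "function (x) is function (y) is" *)
let add_terse = fun x y -> x + y             (* the usual shorthand *)
let () = assert (add_verbose 2 3 = add_terse 2 3)
```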

>
> A truly great language would transparently convert between the more
> terse and more verbose notation.  That way, when I'm reading code
> where I don't understand some part of the notation I can expand it to
> something that I can study without being a member of the internal
> notational cabal.

I don't see that. The "more verbose notation" would still have to
convey the exact meaning. From our experiments here in this thread we
know that expressing functional programs in "natural"-like languages
does not per se promote understanding. You'll have to have some formal
notation anyway. Once we realise this, we can as well choose one that
is consistent and easy in itself, i.e. governed by few rules only. (I
know this is vague, for this also applies to languages like brainf*ck
or unlambda. Perhaps in the latter cases there are too few rules.)

Yet, I admit, such a program reader could be a useful tool for
beginners.
0
quetzalcotl (241)
12/6/2007 9:42:07 AM
In article 
<d7e8519f-1b6f-48a2-a9e9-c54130a9a281@n20g2000hsh.googlegroups.com>,
 Ingo Menger <quetzalcotl@consultant.com> wrote:

> On 5 Dez., 16:50, Chris F Clark <c...@shell01.TheWorld.com> wrote:
> > Ingo Menger <quetzalc...@consultant.com> writes:
> > > On 4 Dez., 22:27, Chris F Clark <c...@shell01.TheWorld.com> wrote:
> >
> > > And that's exactly why the code is better *readable*, but reading and
> > > understanding are different things, so it's not better understandable.
> > > On the contrary, IMHO.
> >
> > Yes, but I suspect you don't read a lot of code in languages you don't
> > know.  These days I spend most of my life reading and correcting
> > programming in languages I have never learned.
> 
> And how exactly would it help you if the language looked like English,
> but wasn't actually English?

?

> 
> > My experience is most code is actually trivial and the only problem
> > with it is that it is obfuscated by overly terse names and enigmatic
> > notations.
> 
> Have you ever considered the possibility that longer names can be more
> easily confused? As I get older, it happens more often to me that I
> stumble across some compiler message saying that some name is unknown,
> and sometimes I need quite some time to figure out that I actually
> switched two letters somewhere in a long name.
> 
> The notion that longer names make programs more readable is
> fundamentally mistaken, IMHO. I hope you'll never have to maintain a
> program written by someone who believes in the blessings of long,
> "speaking" names, for they speak only if you know the language.
> Assuming you're not a German speaker, what do you think the
> following names mean?
>   DonauSchiffahrtKapitänsWitwenPension
>   DonauSchiffahrtsInspektionsIntervall
>   BergwachtVereinigungsVorsitzendenEntschädigung
>   BergwachtVereinigungsVorsitzendenEntscheidung

Still much better than BVVE or BrgVrVorsEnts.

Minor detail: In Lisp you would write:

Bergwacht-Vereinigungs-Vorsitzenden-Entscheidung

For typing these long identifiers, one usually uses
completion or some other help (like mouse-copy).
When one writes these names, one types

   B-V-V-E and presses completion. Then it expands to the
   above name or gives a choice of the possibilities.

> > > But, if you really like to read such stuff, one could write a Haskell
> > > (OCaml, Lisp or whatever) reader, that reads a program to you. Would
> > > be nice to have when one can't get sleep :)
> >
> > I think that is actually part of the solution.  But, you have to
> > remember the title of this thread "verbose functional languages".
> > There are those of us who want functional languages: higher-order
> > functions, immutable data structures, garbage collectors, closures,
> > tail-recursion support, etc., but we want it in a notation that we can
> > take to audiences who are not mathematically adept (and perhaps not
> > even particularly programming literate).
> 
> I don't think this will work. This is my opinion only, of course.
> At the foundation of every PL lie exactness and formal rigor, as well
> as certain fundamental concepts. I doubt that a relaxed syntax can
> make the required understanding any easier.

He does not want relaxed syntax. He wants explicit code with
named constructs that appear in the code.
Not syntax by white space or cryptic letter combinations.

....

-- 
http://lispm.dyndns.org/
0
joswig8642 (2203)
12/6/2007 10:17:21 AM
On 6 Dez., 08:32, Jon Harrop <use...@jdh30.plus.com> wrote:
>
> > OK, so I remain puzzled how you justify your statement about modules
>
> We were talking about parts of the module system not found in F#, i.e. not
> modules themselves.
>
> > having become "vestigial" in ML.
>
> Note that we have different definitions of ML.

The term "ML modules" is generally understood to mean a system with
type abstraction (sealing), nested namespaces, functors, structural
signature subtyping, composition (open, include). What remains if you
remove all those is basically Modula-2.

So what is there that can replace that?

> The main point of difference between these languages is functors because F#
> doesn't have them.

Note that F# also removes most of the above, or severely restricts it
(e.g. sealing and signature subtyping).

> In OCaml, the main use of functors is working around the
> lack of type classes.
> F# has a better solution to that problem and, hence,
> has even less need for functors.

First, modules are more general than type classes. And second, you are
now extrapolating from F# to ML as a whole?

> Even in OCaml, functors have a second-class implementation that
> unnecessarily hampers optimization (no inlining across functor boundaries)
> to the extent that functors are totally unsuitable for factoring over
> numeric representation. For example, Vesa's post would be an awful idea in
> OCaml.

I know somebody who in such cases would usually flame the outdated
compiler technology and claim that the whole language is useless.

> >> Vesa only just posted a huge sprawling mess of SML trying to work around
> >> the absence of some of those features I cited by using functors.
>
> > Which post are you referring to?
>
> http://groups.google.co.uk/group/comp.lang.functional/msg/5a30908fdef...

Then I have no idea what you are talking about. This code defines
vector operators, plus (secondary) some helper generics for sequences.
And it is making straightforward use of the module system to structure
the code well.

If you can show me how any of the features you listed would avoid
writing much of this code I'd be impressed.


> >> Can you cite a single example of someone working around the absence of
> >> functors?
>
> > In ML? No, why, you have functors.
>
> That is circular with your definition of ML ("must have functors").

Huh? How do you expect me to give an example in a language where you
don't need to work around it?


> Almost all of my uses of functors in OCaml derive from their use in the
> OCaml stdlib, specifically the Set and Map modules. You can create a set of
> strings with:
>
>   include Set.Make(String)
>
> but only because the built-in String module happens to implement "t"
> and "compare". If you want a map from ints then you must write your
> own "Int" module:
>
>   module Int : Map.OrderedType with type t = int = struct
>     type t = int
>     let compare (n : t) m = compare n m
>   end
>
> and instantiate your map with it:
>
>   include Map.Make(Int)
>
> That Int module is boiler-plate code that I am forced to cut'n'paste
> everywhere.

Maybe you should consider a less verbose language then. ;-) For
example, in SML you'd just write

  Map.Make(open Int type t = int)

But more seriously, I think that you tend to vastly overemphasize
minor syntactic annoyances. Again, this is just a local thing. If you
do not have functors or something equivalent, you potentially have to
duplicate code on a much larger scale.

> F# relieves me of this burden by providing generic equality and hashing
> functions that may be overridden if necessary (which is very rarely the
> case in user code). This is a restricted form of type classes (never any
> need for run-time dispatch to the appropriate function) and it works
> beautifully.

I certainly agree that type classes are useful. But they do not
substitute modules.

> So the primary use of functors in OCaml is no longer necessary in F#. My use
> of functors in other settings is negligible, maybe one functor every
> million lines of production code, and that is easily replaced with classes
> even if the code is slightly different between OCaml and F#.

And what about abstract types? You never define any?

> > Modularity OTOH is fundamental
> > for programming in the large. You cannot compare them by nitpicking on
> > the number of occurrences of the words "printf" or "module" in average
> > code.
>
> We were talking about "generic type abstraction" and not "modularity".
>
> Are you seriously suggesting that SML is better equipped for "programming in
> the large" than .NET?

Not necessarily (depending on your needs), but it is certainly better
equipped than the intersection of OCaml and F# -- which you claimed
otherwise before in a way that I had to take issue with.

- Andreas
0
rossberg (600)
12/6/2007 10:48:41 AM
On Wed, 5 Dec 2007, Jon Harrop wrote:

> Can you cite a single example of someone working around the absence of
> functors?
> 

I went to quite spectacular lengths to do it in some Haskell code that was 
subsequently used by Galois to prototype an internal wiki system.

-- 
flippa@flippac.org

"I think you mean Philippa. I believe Phillipa is the one from an
alternate universe, who has a beard and programs in BASIC, using only
gotos for control flow." -- Anton van Straaten on Lambda the Ultimate
0
flippa (196)
12/6/2007 10:51:19 AM
On Wed, 5 Dec 2007, rossberg@ps.uni-sb.de wrote:

> In ML? No, why, you have functors. In languages that do not have
> anything comparable? You fall back to copy & paste or to casting.

Or to a whole pile of variants on factory patterns and the like, encoding 
modules as objects.

-- 
flippa@flippac.org

There is no magic bullet. There are, however, plenty of bullets that
magically home in on feet when not used in exactly the right circumstances.
0
flippa (196)
12/6/2007 10:52:28 AM
Ingo Menger schrieb:
> But, if you really like to read such stuff, one could write a Haskell
> (OCaml, Lisp or whatever) reader, that reads a program to you. Would
> be nice to have when one can't get sleep :)
> 
> For example:
> 
> map (\(a,b) -> (b,a+1)) [(1, "foo"), (2, "bar")]
> 
> Evaluate the application of
>  an anonymous function that takes one parameter
>    that is a 2-tuple, which,
>    given that the first component of said tuple is called a
>    and the second component of the aforementioned tuple is called b,
>    evaluates
>       a tuple where the first component is b
>       and the second component is the result of
>         evaluation of the application of
>            a
>            and the constant one
>            to the function +
> .....
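For comparison, the expression being read aloud, transliterated from Haskell
into OCaml (a sketch; both versions compute the same result):

```ocaml
(* map (\(a,b) -> (b,a+1)) [(1,"foo"), (2,"bar")] in OCaml:
   swap each pair and increment the number. *)
let result =
  List.map (fun (a, b) -> (b, a + 1)) [ (1, "foo"); (2, "bar") ]
let () = assert (result = [ ("foo", 2); ("bar", 3) ])
```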

Hey, that would be ultra-cool!

Target audience 1: programmers who don't know the language in question 
well enough to say for sure what the syntax du jour is. (E.g. Erlang 
programmers trying to understand a piece of OCaml code given on the 
newsgroup.)

Target audience 2: programmers trying to learn an FPL. When reading
really terse code (as is typical for, say, the Haskell Prelude), it's
all too easy to miss or misread the decisive symbol, and you get
mental gobbledygook. Having an Expression Reader would allow you to
double-check that your interpretation of the syntax is correct
so you can concentrate on the semantics.

E.g. in the above code, I'd have missed that the lambda takes a single 
parameter, simply because those traditional parameter list syntaxes are 
too deeply engraved in my brain.

Extra points if the Expression Reader deals well with runaway 
indentation ;-)

Regards,
Jo
0
jo427 (1164)
12/6/2007 12:35:01 PM
Philippa Cowderoy wrote:
> On Wed, 5 Dec 2007, Jon Harrop wrote:
>> Can you cite a single example of someone working around the absence of
>> functors?
> 
> I went to quite spectacular lengths to do it in some Haskell code that was
> subsequently used by Galois to prototype an internal wiki system.

May I ask what the signature of the functor's argument was?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/6/2007 12:50:50 PM
rossberg@ps.uni-sb.de wrote:
> On 6 Dez., 08:32, Jon Harrop <use...@jdh30.plus.com> wrote:
>> > OK, so I remain puzzled how you justify your statement about modules
>>
>> We were talking about parts of the module system not found in F#, i.e.
>> not modules themselves.
>>
>> > having become "vestigial" in ML.
>>
>> Note that we have different definitions of ML.
> 
> The term "ML modules" is generally understood to mean a system with
> type abstraction (sealing), nested namespaces, functors, structural
> signature subtyping, composition (open, include). What remains if you
> remove all those is basically Modula-2.
> 
> So what is there that can replace that?
> 
>> The main point of difference between these languages is functors because
>> F# doesn't have them.
> 
> Note that F# also removes most of the above, or severely restricts it
> (e.g. sealing and signature subtyping).

Exactly, yes. Just as Homo sapiens have an appendix that is severely
restricted in what it can do.

>> In OCaml, the main use of functors is working around the
>> lack of type classes.
>> F# has a better solution to that problem and, hence,
>> has even less need for functors.
> 
> First, modules are more general than type classes.

We were talking about functors, not modules.

> And second, you are now extrapolating from F# to ML as a whole?

I'm saying that anyone who fixes these deficiencies in SML or OCaml might as
well not bother with functors either, which is exactly what has happened
with F#.

>> Even in OCaml, functors have a second-class implementation that
>> unnecessarily hampers optimization (no inlining across functor
>> boundaries) to the extent that functors are totally unsuitable for
>> factoring over numeric representation. For example, Vesa's post would be
>> an awful idea in OCaml.
> 
> I know somebody who in such cases would usually flame the outdated
> compiler technology and claim that the whole language is useless.

Statements that widely used things are useless should stay in academia.

>> >> Vesa only just posted a huge sprawling mess of SML trying to work
>> >> around the absence of some of those features I cited by using
>> >> functors.
>>
>> > Which post are you referring to?
>>
>> http://groups.google.co.uk/group/comp.lang.functional/msg/5a30908fdef...
> 
> Then I have no idea what you are talking about. This code defines
> vector operators, plus (secondary) some helper generics for sequences.
> And it is making straightforward use of the module system to structure
> the code well.

Look at the original code as well:

http://mlton.org/pipermail/mlton/2005-October/028127.html

These gems are nothing more than polished turds in the big picture.

The authors gallantly try to work around the deficiencies of the SML
language and arrive at a solution that creates more problems than it
solves:

- Inextensible: you can't overload "+" for your own numeric types later.

- Unscalable: all numeric code must be wrapped in the same functor
instantiation to keep the abstract type compatible.

- Invasive: you must manually box each and every constant and can no longer
pattern match over numeric types (!).

- Inefficient: in all other SML implementations this will incur boxing,
unboxing and indirection at every use of every number.

In the process they were forced to use functors because SML provides
absolutely no alternatives.

That is not a killer reason to stuff functors into every new ML derivative:
it is a killer reason to fix the problems correctly in the first place by
creating languages that have extensible operator overloading built in.

> If you can show me how any of the features you listed would avoid
> writing much of this code I'd be impressed.

Exportable infix operators. No need to wrap your numeric code in a functor
(a technique that doesn't even scale anyway).

>> >> Can you cite a single example of someone working around the absence of
>> >> functors?
>>
>> > In ML? No, why, you have functors.
>>
>> That is circular with your definition of ML ("must have functors").
> 
> Huh? How do you expect me to give an example in a language where you
> don't need to work around it?

Can you cite a single example of someone working around the absence of
functors in F#?

>> That Int module is boiler-plate code that I am forced to cut'n'paste
>> everywhere.
> 
> Maybe you should consider a less verbose language then. ;-) For
> example, in SML you'd just write
> 
>   Map.Make(open Int type t = int)

That doesn't work in SML/NJ:

- Map.Make(open Int type t = int);
stdIn:1.10 Error: syntax error found at OPEN

In F# you write no code at all. Maybe you should consider a less verbose
language.

> But more seriously, I think that you tend to vastly overemphasize
> minor syntactic annoyances.

Absolutely not. If you fixed these problems with your own implementation
then you could garner users who would use it to do great work. I can think
of nothing more gratifying, but making the language familiar and easy is
absolutely essential for that to happen and, I suspect, would require you
to break SML compatibility (but who cares when there are no SML users).

> Again, this is just a local thing. If you 
> do not have functors or something equivalent, you potentially have to
> duplicate code on a much larger scale.

The F# solution is better in every respect.

>> F# relieves me of this burden by providing generic equality and hashing
>> functions that may be overridden if necessary (which is very rarely the
>> case in user code). This is a restricted form of type classes (never any
>> need for run-time dispatch to the appropriate function) and it works
>> beautifully.
> 
> I certainly agree that type classes are useful. But they do not
> substitute modules.

By all means keep the obscure parts of the module system when you implement
extensible operator overloading. All I'm saying is that few people will use
them compared to the number of people who will use operator overloading
(and any other the other features I listed).

>> So the primary use of functors in OCaml is no longer necessary in F#. My
>> use of functors in other settings is negligible, maybe one functor every
>> million lines of production code, and that is easily replaced with
>> classes even if the code is slightly different between OCaml and F#.
> 
> And what about abstract types? You never define any?

I use abstract types all the time in both OCaml and F#. Perhaps you mean
type aliases parameterized over phantom types?

>> Are you seriously suggesting that SML is better equipped for "programming
>> in the large" than .NET?
> 
> Not necessarily (depending on your needs), but it is certainly better
> equipped than the intersection of OCaml and F# -- which you claimed
> otherwise before in a way that I had to take issues with.

Do you think there are bigger projects in SML than in OCaml+F#?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/6/2007 12:54:22 PM
On Thu, 6 Dec 2007, Jon Harrop wrote:

> Philippa Cowderoy wrote:
> > On Wed, 5 Dec 2007, Jon Harrop wrote:
> >> Can you cite a single example of someone working around the absence of
> >> functors?
> > 
> > I went to quite spectacular lengths to do it in some Haskell code that was
> > subsequently used by Galois to prototype an internal wiki system.
> 
> May I ask what the signature of the functor's argument was?
> 

It was a collection of IO operations for accessing the wiki's page 
database.

-- 
flippa@flippac.org

Ivanova is always right.
I will listen to Ivanova.
I will not ignore Ivanova's recomendations.
Ivanova is God.
And, if this ever happens again, Ivanova will personally rip your lungs out!
0
flippa (196)
12/6/2007 1:22:13 PM
Philippa Cowderoy wrote:
> It was a collection of IO operations for accessing the wiki's page
> database.

Just values then, and no types? Why not use a record of function values?
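A minimal sketch of that record-of-functions alternative in OCaml (the type
and field names are hypothetical, not taken from the wiki code in question):

```ocaml
(* A "module" of IO operations passed around as a record of closures
   instead of as a functor argument. *)
type page_store = {
  load : string -> string option;
  save : string -> string -> unit;
}

let in_memory_store () =
  let tbl = Hashtbl.create 16 in
  { load = (fun k -> Hashtbl.find_opt tbl k);
    save = (fun k v -> Hashtbl.replace tbl k v) }

let () =
  let s = in_memory_store () in
  s.save "FrontPage" "welcome";
  assert (s.load "FrontPage" = Some "welcome")
```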

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/6/2007 1:29:59 PM
On Dec 6, 1:54 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
>
> > Note that F# also removes most of the above, or severely restricts it
> > (e.g. sealing and signature subtyping).
>
> Exactly, yes. Just as Homo sapiens have an appendix that is severely
> restricted in what it can do.

Sorry I'm being dense, but your point is?


> > First, modules are more general than type classes.
>
> We were talking about functors, not modules.

Not at all. /You/ have been trying to restrict the discussion to
functors. I am not. I have been talking about the whole module
language from the beginning.


> Statements that widely used things are useless should stay in academia.

Who made such a statement?

In any case, popularity certainly is no metric for usefulness.


>>http://groups.google.co.uk/group/comp.lang.functional/msg/5a30908fdef...
>
> > Then I have no idea what you are talking about. This code defines
> > vector operators, plus (secondary) some helper generics for sequences.
> > And it is making straightforward use of the module system to structure
> > the code for good.
>
> Look at the original code as well:
>
> http://mlton.org/pipermail/mlton/2005-October/028127.html

I fail to see what this code has to do with the other. In fact, the
latter is showing that you can actually simulate overloading in MLton
at zero runtime cost -- if you really want to. Vesa's post on the
other hand explicitly is intended to show that such overloading often
is unnecessary to start with.


> > If you can show me how any of the features you listed would avoid
> > writing much of this code I'd be impressed.
>
> Exportable infix operators. No need to wrap your numeric code in a functor
> (a technique that doesn't even scale anyway).

As far as I can see, the first would avoid the initial two lines of
Vesa's code -- at the price of not having natural precedences among
the operators he chose, or having to choose less uniform operator
names.

I don't see how anything in your feature list would replace the
functors in his code.


> Can you cite a single example of someone working around the absence of
> functors in F#?

In F# I suppose you'd use classes. But as I said before, my point was
all about the intersection of OCaml and F#. What would you use there?
You sort of said already that you use classes and have different
versions of the code. That sounds like a workaround to me.


> >> That Int module is boiler-plate code that I am forced to cut'n'paste
> >> everywhere.
>
> > Maybe you should consider a less verbose language then. ;-) For
> > example, in SML you'd just write
>
> >   Map.Make(open Int type t = int)
>
> That doesn't work in SML/NJ:
>
> - Map.Make(open Int type t = int);
> stdIn:1.10 Error: syntax error found at OPEN

No, of course not. Like in OCaml, it is a module expression, so you can
only use it in a module declaration:

  structure M = Map.Make(open Int type t = int)


> In F# you write no code at all. Maybe you should consider a less verbose
> language.

At the same time, you lose the ability to employ the type system to
easily distinguish different types of maps.


> > But more seriously, I think that you tend to vastly overemphasize
> > minor syntactic annoyances.
>
> Absolutely not. If you fixed these problems with your own implementation
> then you could garner users who would use it to do great work. I can think
> of nothing more gratifying, but making the language familiar and easy is
> absolutely essential for that to happen

Familiar relative to what? That is entirely subjective, unless you are
targeting a particular community.


> > And what about abstract types? You never define any?
>
> I use abstract types all the time in both OCaml and F#. Perhaps you mean
> type aliases parameterized over phantom types?

I mean abstract types not defined by wrapping the representation into
constructors -- like F# requires (and thereby essentially falling back
to the state-of-the art of Modula-2).
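(A concrete illustration of the difference, in OCaml: a signature can hide the equation between an abstract type and its representation, so no constructor wrapping is needed. The Temp module here is an invented example:)

```ocaml
(* An abstract type whose representation is a plain float.
   Outside the signature the equation t = float is invisible,
   yet values are never wrapped in a constructor at run time. *)
module Temp : sig
  type t
  val of_celsius : float -> t
  val to_celsius : t -> float
end = struct
  type t = float
  let of_celsius c = c
  let to_celsius t = t
end

let boiling = Temp.of_celsius 100.0
```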


> Do you think there are bigger projects in SML than OCaml+F#?

I would make any bet that there are far bigger projects in SML than in
the intersection OCaml/\F#. I never meant to claim anything else.

Do you commonly try to keep your code within it?

Just to clarify again: I criticise neither OCaml nor F#, both of which
I have much respect for and sometimes enjoy using myself. Your
repeated suggestive claim that it would somehow be easier to write
programs that work on both than it would be for multiple SMLs is what
I consider unsound.

- Andreas
0
rossberg (600)
12/6/2007 2:02:28 PM
On 6 Dez., 11:17, Rainer Joswig <jos...@lisp.de> wrote:

>
> Still much better than BVVE or BrgVrVorsEnts.

"much better" in what sense?

Let's rephrase a bit:

  BergwachtVereinigungsVorsitzendenEntschaedigung
  BergwachtVereinigungsVorsitzendenEntschuldigung

Why is it "better" to be forced to scan endless words to see a tiny
difference?


> Minor detail: In Lisp you would write:
>
> Bergwacht-Vereinigungs-Vorsitzenden-Entscheidung

No way.


> For typing these long identifiers, one usually uses
> completion or some other help (like mouse-copy).

Isn't that interesting.
We use technical help to write such beasts, but there is none to help
us in reading them.

> When one writes these names, one types
> B-V-V-E and presses completion. Then it expands to
> the above name or gives a choice of the possibilities.

I prefer writing and reading "bvve" without pressing completion.


> Not syntax by white space

Why not?
0
quetzalcotl (241)
12/6/2007 2:18:56 PM
On 6 Dez., 13:35, Joachim Durchholz <j...@durchholz.org> wrote:
> Ingo Menger schrieb:

> Extra points if the Expression Reader deals well with runaway
> indentation ;-)
>

I think it would be fairly easy to make one that is correct (i.e.
utters grammatical sentences) but I wonder how hard it would be to
make one that is really good.

For example, given
   map (+1) as

it would be correct to say something like:
  evaluate the application of
    the anonymous function which
      evaluates the application of
        its argument
        and 1
        to the function "+"
    and "as"
    to the function "map"

but it would be more desirable to hear: the list of all elements of
"as", in the same order, but incremented

Another difficulty would be "point free style" definitions, i.e.

notEmpty = not . empty

Here, perhaps one should do some form of eta conversion.
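(The distinction can be sketched in OCaml, which has no built-in composition operator, so one is defined locally; compose, is_empty and the not_empty functions are invented names:)

```ocaml
(* Function composition, defined locally. *)
let compose f g x = f (g x)

let is_empty l = (l = [])

(* Point-free style: the list argument is never written out. *)
let not_empty = compose not is_empty

(* The eta-expanded ("pointed") equivalent names the argument
   explicitly, which an expression reader could verbalize
   more directly. *)
let not_empty' l = not (is_empty l)
```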
0
quetzalcotl (241)
12/6/2007 2:37:49 PM
In article 
<54f1bf09-3b3c-48e2-a667-8d74aaa1c394@s36g2000prg.googlegroups.com>,
 Ingo Menger <quetzalcotl@consultant.com> wrote:

> On 6 Dez., 11:17, Rainer Joswig <jos...@lisp.de> wrote:
> 
> >
> > Still much better than BVVE or BrgVrVorsEnts.
> 
> "much better" in what sense?
> 
> Let's rephrase a bit:
> 
>   BergwachtVereinigungsVorsitzendenEntschaedigung
>   BergwachtVereinigungsVorsitzendenEntschuldidung
> 
> Why is it "better" to being forced to scan endless words to see a tiny
> difference?

The purpose of long words is not to create tiny differences.
The purpose is to have descriptive identifiers.

When you read text, it makes little difference whether the words
are short or long. Remember, anyone trained to read does not look much
at individual words or characters - that's much too
slow. Most trained readers take in chunks of
words and their shape. So for them it would not make
much difference whether the text contains the error
in small, medium-sized or long words. 'more' vs.
'mere' vs. 'moer' does not make much of a difference
if you look at some piece of code.

> > Minor detail: In Lisp you would write:
> >
> > Bergwacht-Vereinigungs-Vorsitzenden-Entscheidung
> 
> No way.

Yes.

> > For typing these long identifiers, one usually uses
> > completion or some other help (like mouse-copy).
> 
> Isn't that interesting.
> We use technical help to write such beasts, but there is none to help
> us in reading them.

No, I have help.

Well, there is light ;-) , source coloring, interactive help on symbols,
code browsers, and so on. When I have a symbol I don't know, I usually
place a cursor on it and press a show documentation command.
If that does not help, I inspect it with another key.
If that does not help I try to locate the source, M-. .
It helps me while reading to make the meanings of symbols
clearer.

> > When one writes these names, one types
> > B-V-V-E and presses completion. Then it expands to
> > the above name or gives a choice of the possibilities.
> 
> I prefer writing and reading "bvve" without pressing completion.

Really? I would make it 'verboten' by a coding style guideline. ;-)
Encryption and compression of symbols would be discouraged.

Do you talk with abbreviated words to other people?
You could say BVVE instead of the word above.

How is BVVE not easy to misunderstand? It has no correspondence
to natural language. If you read it in code, there is no
way to tell whether it is right or wrong by staring at it.
For the long word, you could at least recover
some meaning.


> > Not syntax by white space
> 
> Why not?

I like explicit constructs. Constructs that have a descriptive
name and can be spotted in the source code. I'm not a fan
of terse code with tiny abbreviated identifiers. This really
gets problematic, if a language has many identifiers.
Then the small ones are quickly gone. Unix shell scripts have so many
one to four letter commands - this is extremely ugly.

Gerald J. Sussman gave a talk about readability in books,
formulas and programs:

  http://video.google.com/videoplay?docid=-2726904509434151616

He describes his struggle to find out what the math in classical
mechanics meant while writing the book SICM (Structure and Interpretation
of Classical Mechanics, http://mitpress.mit.edu/SICM/book.html ).
He tried to use Scheme to write down some of the formulas
and discovered that the terse math notation often left things
unclear or hid lots of errors.

-- 
http://lispm.dyndns.org/
0
joswig8642 (2203)
12/6/2007 3:24:40 PM
Ingo Menger <quetzalcotl@consultant.com> writes:

> And how exactly would it help you if the language looked like English,
> but wasn't actually English?

The same way knowing a smattering of Bulgarian helps me listen to
Russian, Czech, and Polish dialogue (all Slavic languages, but each
distinct enough that speaking one does not let one *speak* another).

Some words are cognates and they help--especially since they are all
languages suited to the same purposes.  One forms a thesis with the
words one recognizes (or thinks one recognizes), and looks up unknown
words to see what they mean.  Eventually, if the two sides are *trying
to communicate*, they find a pidgin language with a subset of the
meanings that both can understand that gets across the barrier and
conveys the intended meaning.  If the meaning is all tied up in subtle
declensions, inflections, word order tricks, homonyms, puns, etc., it
will get lost--so jokes don't generally translate well.  And that's
the spoken equivalent of enigmatic notation.  If you want to speak to
just a closed group, use a complex notation.  If you want to address a
wide audience, use simple forms and simple concepts and let the
message carry itself.

"Ich bin ein Berliner." was incorrect, but it got the message across.
I don't speak enough Deutsch to correct it, but if I saw a similarly
short and correct version of the sentence, I would understand it.
And, so all reading goes.  If the writer is attempting to communicate,
the reader needs to know less of the subtleties of the language to
understand.  But if one hides the meaning in subtleties, e.g. if the
statement would have been correct as "Ich werde ein Berliner" or "Ich
wunsch ein Berliner" and then written "IWEB" and "IweB", you are
quickly likely to lose ones readers.

>   BergwachtVereinigungsVorsitzendenEntschädigung
>   BergwachtVereinigungsVorsitzendenEntscheidung

I *can* look up Entschädigung and Entscheidung.  I cannot look up BVVE
and know which of those two you mean.  If it makes a difference, and
you wish to communicate, you make the difference clear and obvious.
If it doesn't make a difference, then no one cares.

> I don't think this will work. This is my opinion only, of course.
> On the ground of each PL lie exactness and formal rigor as well as
> certain fundamental concepts. I doubt the possibility to make the
> required understanding easier through relaxed syntax.

As Rainer Joswig understood:

R> He does not want relaxed syntax. He wants explicit code with
R> named constructs that appear in the code.
R> Not syntax by white space or cryptic letter combinations.

Yes, I do not want to abandon formality.  Formality is good.  I would
much rather communicate in grammatically correct sentences, because
doing so helps me convey my meanings better.  Moreover, I like
explicitness.  I often have to correct my wife when she uses a pronoun
in a sentence with no obvious referent "that???, that what?, what
'that' are you talking about?".  I know she knows her topic, but I
don't know the stuff in her mind and if she doesn't tell me, I'm lost
in the conversation.

> An illustration: take cooking recipes, for example. If the mere use
> of natural language did the trick, we could all be great
> cooks just by working down the recipe. Yet, the fact is that many of
> us are miserable cooks (or do not cook at all), despite having access
> to the finest recipes.

Perfect example.  You're right, we are not all natural cooks, and never
will be.  I don't expect everyone to "write" fp programs.  However, a
good French cook can read a recipe for a Chinese dish and probably
make a passable rendition, despite not being familiar with the
cuisine.  That's the reading skill I'm interested in.  The French cook
is unlikely to succeed if the Chinese recipe hides its assumptions,
e.g. when I say chicken, everyone knows I mean hang the chicken out in
the window with these spices for three days before starting.
Especially if that is encoded as "Chicken" as opposed to "chicken".

> Is this so? How about anonymous functions with two arguments?
> "function (x) is function (y) is" perhaps? And for what reason do we
> need the () parentheses?

Because a naive reader will expect them; a naive reader may never have
seen a function without the arguments enclosed in parentheses.
Removing the parentheses is excessive terseness in my book.  Making
them optional allows those of us who want to communicate with a wide
audience to put them in, and that's ok in my book.  I want to be able to
write verbose programs so that I can reach an audience who may not
understand my field.  You may not have that goal.

> I personally think that especially the Haskell syntax is so terse
> because it abandons lots of braces, semicolons and parens that
> dominate the look of program texts in other languages. And I think
> it's a good thing.

Good for you.  Not good for me.

> I don't see that. The "more verbose notation" would still have to
> convey the exact meaning. 

Yes, the exact same meaning.

> From our experiments here in this thread we
> know, that expressing functional programs in "natural" like languages
> is not per se promoting understanding. 

We do not "know" this, and I reject this claim.  In fact, the
axiomatic basis of this thread is that verbose languages have a use.
You may not need that use, but I do.

> You'll have to have some formal notation anyway. 

Formal is good.  Terse is what is not good in my book.

> Once we realise this, we can as well choose one that
> is consistent and easy in itself, i.e. governed by few rules only. (I
> know this is vague, for this also applies to languages like brainf*ck
> or unlambda. Perhaps in the latter cases there are too few rules.)

Consistent is good too.  Simple in general is good.  Simple is not
necessarily terse.  More characters is not necessarily more complex.

> Yet, I admit,  such a program reader could be a useful tool for
> beginners.

Or those of us who just want to read a text in a language we don't
know.
0
cfc (239)
12/6/2007 3:32:22 PM
Ingo Menger wrote:


>> Not syntax by white space
> Why not?

I'd say whitespace-sensitive syntax is not in itself bad, if it's
uniform and the rest of the syntax isn't too bloated with other stuff*.  
Most languages with whitespace-sensitive syntax also drag in lots of
other things (e.g. python, haskell), but you can e.g. replace lisp
parens by layout and leave the rest pretty much alone - see:  
http://srfi.schemers.org/srfi-49/srfi-49.html 
- not significantly different from paren-sexp Scheme to read, just
using indentation rules and one additional keyword instead of parens.
Not particularly annoying to parse/read IMO (though maybe more
annoying than parens for long functions) - and still no horrible,
irregular infix operator precedences (which can only be rote-learned
and tend to differ for all but the most trivial ops across languages,
according to the inscrutable whims of the language creators), etc.

One problem is that people know how to (and typically invest a
significant amount of their undergrad time in learning how to, wouldn't
want to "waste" that time, eh?) write parsers for much more complex
syntaxes.  But that doesn't mean you should use a complex syntax if you
can avoid it...

* Similarly, I don't even mind infix, if it's one of the few rules
in the syntax, and other complicated rules aren't simultaneously
introduced.  See: APL, which is infix but has a very simple
right-to-left interpretation except where explicitly parenthesised.
(People exposed to other languages before APL sometimes think
it's harder than it really is, because they're used to infix operators
having diverse precedences+associativities, and APL has lots of
single-character infix operators - However, they don't have complicated
rules associated with them, though there is a monadic (one-arg prefix)
vs. dyadic (two-arg infix) overloading...)




0
david.golden (500)
12/6/2007 3:35:23 PM
On 6 Dez., 16:32, Chris F Clark <c...@shell01.TheWorld.com> wrote:
> Ingo Menger <quetzalc...@consultant.com> writes:
> > And how exactly would it help you if the language looked like English,
> > but wasn't actually English?
>
> The same way knowing a smattering of Bulgarian helps me listen to
> Russian, Czech, and Polish dialogue (all Slavic languages, but each
> distinct enough that speaking one does not let one *speak* another).

OK, knowing Bulgarian you can listen to a Russian speaker and have
some idea what he is talking about. No doubt.
The same would hold for language families like ML, so if you speak
OCaml you probably get some idea when you read an F# program.
But I think the diversity of computer languages is too big, on the one
hand, and on the other hand you do not read program texts just for fun
but to ground some informed decision (I suppose).
Thus, for example, you might think you understand a Haskell program
(as an ML literate), but actually you don't, since a language is not
only syntax. The lazy Haskell semantics is nowhere mentioned in the
program text, yet it is crucial for understanding. Otherwise, you could
be tempted to "repair" the following code:

iterate f x = x : iterate f (f x)
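(To make the laziness point concrete: transcribed naively into a strict language, that definition loops forever, so the recursion must be suspended explicitly. A sketch with OCaml's thunk-based Seq; the take helper is hand-rolled for the example:)

```ocaml
(* Haskell's  iterate f x = x : iterate f (f x)  with explicit
   suspension: each tail is a thunk, so the infinite sequence
   is unfolded only on demand. *)
let rec iterate f x : 'a Seq.t = fun () -> Seq.Cons (x, iterate f (f x))

(* Force only the first n elements into a list. *)
let rec take n s =
  if n = 0 then []
  else match s () with
    | Seq.Nil -> []
    | Seq.Cons (x, tl) -> x :: take (n - 1) tl

(* The first five powers of two, demanded from the infinite stream. *)
let powers = take 5 (iterate (fun n -> n * 2) 1)
```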

> >   BergwachtVereinigungsVorsitzendenEntschädigung
> >   BergwachtVereinigungsVorsitzendenEntscheidung
>
> I *can* look up Entschädigung and Entscheidung.  I cannot look up BVVE
> and know which of those two you mean.

Yes, you can look it up at the declaration site, where in a good
program the introduction of the variable is commented:
   var bvve = 0  // compensation for the president of the
                 // alpine emergency team, in EUR/h

> > An illustration: take cooking recipes, for example. If the mere use
> > of natural language did the trick, we could all be great
> > cooks just by working down the recipe. Yet, the fact is that many of
> > us are miserable cooks (or do not cook at all), despite having access
> > to the finest recipes.
>
> Perfect example.  You're right, we are not all natural cooks, and never
> will be.  I don't expect everyone to "write" fp programs.  However, a
> good French cook can read a recipe for a Chinese dish and probably
> make a passable rendition, despite not being familiar with the
> cuisine.  That's the reading skill I'm interested in.  The French cook
> is unlikely to succeed if the Chinese recipe hides its assumptions,

Exactly, and this is what any text does. It "hides", so to speak, the
cultural background. I understand English because I learned
(unconsciously) the language culture (which was not too hard, since
it's not so far apart from the German one).
You can't learn a language just by reading a dictionary and some
grammar rules. According to our great philosopher Wittgenstein, you
have to learn how the language (or certain constructs thereof) is
being used.
You can't learn a programming language just by looking at a syntax
chart, even for passive use only.
So, more verbosity or even imitating natural language will buy you
absolutely nothing.
COBOL is a particularly good example. I've never managed to grasp it
fully because, despite being verbose, it has lots of hidden "cultural
background" that is totally different from mine. For example I
remember that I was looking for the concept of a "local variable" and
was then told that one declares it in the "working storage section" of
some "division" I forgot the name of, but one had to be sure to write
77 at a certain position. Brrrr. (Or was it 99? Never mind.)


> e.g. when I say chicken, everyone knows I mean hang the chicken out in
> the window with these spices for three days before starting.

This could well be the cultural background I mentioned. "Anybody ever
heard of some barbarian that would not hang out the chicken for 3
days? So, when I say chicken, I mean of course a chicken ready for
cooking, which implies this and that, of course!"

> > Is this so? How about anonymous functions with two arguments?
> > "function (x) is function (y) is" perhaps? And for what reason do we
> > need the () parentheses?
>
> Because a naive reader will expect them; a naive reader may never have
> seen a function without the arguments enclosed in parentheses.

So what?
I do not understand the argument. Do you tell us that "naive readers"
may only be confronted with things they know already?
There's no justification for this.

> Removing the parentheses is excessive terseness in my book.  Making
> them optional allows those of us who want to communicate with a wide
> audience to put them in, and that's ok in my book.  I want to be able to
> write verbose programs so that I can reach an audience who may not
> understand my field.  You may not have that goal.

This is correct, but it doesn't matter what my goal is. I simply deny
the claim that "more verbose" means "better to understand".


> > I don't see that. The "more verbose notation" would still have to
> > convey the exact meaning.
>
> Yes, the exact same meaning.
>
> > From our experiments here in this thread we
> > know, that expressing functional programs in "natural" like languages
> > is not per se promoting understanding.
>
> We do not "know" this, and I reject this claim.

I see. But in your own example you used terms that are Chinese to
"naive readers", like "list", "parameter", "anonymous function",
etc.
Therefore, you'd be better off using real Chinese words instead of
words that sound familiar. For familiar words may suggest understanding
where there is none.

> Consistent is good too.  Simple in general is good.  Simple is not
> necessarily terse.

Granted. But the reverse may well be true. Convoluted is most likely
not simple to understand. Half a page long nested sentences like
"evaluate the application of an anonymous function, which ..." are
verbose, but surely not simple.

0
quetzalcotl (241)
12/6/2007 4:38:30 PM
On 6 Dez., 16:24, Rainer Joswig <jos...@lisp.de> wrote:
> In article
> <54f1bf09-3b3c-48e2-a667-8d74aaa1c...@s36g2000prg.googlegroups.com>,
>  Ingo Menger <quetzalc...@consultant.com> wrote:
>
> > On 6 Dez., 11:17, Rainer Joswig <jos...@lisp.de> wrote:
>
> > > Still much better than BVVE or BrgVrVorsEnts.
>
> > "much better" in what sense?
>
> > Let's rephrase a bit:
>
> >   BergwachtVereinigungsVorsitzendenEntschaedigung
> >   BergwachtVereinigungsVorsitzendenEntschuldigung
>
> > Why is it "better" to be forced to scan endless words to see a tiny
> > difference?
>
> The purpose of long words is not to create tiny difference.
> The purpose is to have descriptive identifiers.
>
> When you read text, it makes little difference whether the words
> are short or long.

Disagree.

> Remember, anyone trained to read does not look much
> at individual words or characters - that's much too
> slow. Most trained readers take in chunks of
> words and their shape. So for them it would not make
> much difference whether the text contains the error
> in small, medium-sized or long words. 'more' vs.
> 'mere' vs. 'moer' does not make much of a difference
> if you look at some piece of code.

That's exactly my point.
For example, when reading a novel translated from a foreign language
(e.g. Russian), it has happened to me that I confused the characters
(i.e. the named persons in the novel), since on first reading I take
"Nikolajewitsch" as "that long name starting with N" and
"Nikoforowitsch" also as "that long name starting with N".

Therefore, I suggest avoiding variable names longer than one character
whenever possible.

> Well, there is light ;-) , source coloring, interactive help on symbols,
> code browsers, and so on. When I have a symbol I don't know, I usually
> place a cursor on it and press a show documentation command.
> If that does not help, I inspect it with another key.

A key? Really? Why not type: please-inspect-that-word-that-I-am-not-
understanding? I wonder how a naive beginner will master those tools
with their cryptic, one-letter abbreviations!

> > I prefer writing and reading "bvve" without pressing completion.
>
> Really? I would make it 'verboten' by a coding style guideline. ;-)

And I would cite Götz von Berlichingen to you, or better yet, increase
my hourly rate. :)


> Do you talk with abbreviated words to other people?
> You could say BVVE instead of the word above.

This happens all the time.
Do you ever ride the S-Bahn? Or the Stadtbahn? Same holds for U-Bahn
and Untergrundbahn. The workers of the GDL strike, so no ICEs are
running. And so forth.


> How is BVVE not easy to misunderstand?

A single word cannot be misunderstood at all. A variable name is only
meaningful in a context, i.e.
   bvve *= 1.05;   // raise by 5%
And before that, I read the comment saying that bvve was the compensation
of the president of the alpine emergency troop, in euros per week.

> It has no correspondence
> to natural language.

Not true. Abbreviations are daily bread and butter. See above.

> If you read it in code, there is no
> way to say if it is right or wrong by staring at it.
> For the long word, you could at least recover
> some meaning.

But you couldn't be sure, since you know that the only real meaning
of a variable name is to uniquely identify some language-defined item.
No more, no less.

0
quetzalcotl (241)
12/6/2007 5:03:37 PM
In article 
<0815fbde-f4b1-443f-8377-b47ddce7de6b@d61g2000hsa.googlegroups.com>,
 Ingo Menger <quetzalcotl@consultant.com> wrote:

> > When you read text, it is not much of a difference if
> > you have small words or long words.
> 
> Disagree.
> 
> > Remember,
> > everyone trained to read does not look much
> > at individual words or characters - that's much too
> > slow. Most trained people look at chucks of
> > words and shape. So for them it would not make
> > much difference if the text contains has the error
> > in small, medium sized or long words. 'more' vs.
> > 'mere' vs. 'moer' does not make much a difference
> > if you look at some piece of code.
> 
> Thats exactly my point.
> For example, when reading a novel translated from a foreign language
> (i.e. russian) it happened to me, that I confused the characters (i.e.
> named persons in the novel), since I take "Nikolajewitsch" on first
> reading as "that long name starting with N" and "Nikoforowitsch" also
> as "that long name starting with N".
> 
> Therefore, I suggest to avoid variable names longer than 1 character,
> whenever possible.

You are kidding, right?

> 
> > Well, there is light ;-) , source coloring, interactive help on symbols,
> > code browsers, and so on. When I have a symbol I don't know, I usually
> > place a cursor on it and press a show documentation command.
> > If that does not help, I inspect it with another key.
> 
> A key? Really? Why not type: please-inspect-that-word-that-I-am-not-
> understanding? I wonder how a naive beginner will master those tools
> with their cryptic, one-letter abbreviations!
> 
> > > I prefer writing and reading "bvve" without pressing completion.
> >
> > Really? I would make it 'verboten' by a coding style guideline. ;-)
> 
> And I would cite Götz von Berlichingen to you, or better yet, increase
> my hourly rate. :)

I guess you would not make it into the team.  :)

> > Do you talk with abbreviated words to other people?
> > You could say BVVE instead of the word above.
> 
> This happens all the time.
> Do you ever ride the S-Bahn? Or the Stadtbahn? Same holds for U-Bahn
> and Untergrundbahn. The workers of the GDL strike, so no ICEs are
> running. And so forth.

I live in Hamburg, and there a Containerschiff is still a Containerschiff
and not a C-Ship. A Hafenfähre is not an F-Bahn, and the Elbtunnel
is not 'et'.

> > How is BVVE not easily to misunderstand?
> 
> A single word cannot be misunderstood at all. A variable name is only
> meaningful in a context, i.e.
>    bvve *= 1.05;   // raise by 5%
> And before that, I read the comment saying that bvve was the compensation
> of the president of the alpine emergency troop, in euros per week.

Oh, you need added comments. That's even worse. A recipe for disaster. 

> 
> > It has no correspondence
> > to natural language.
> 
> Not true. Abbrevations are daily bread and butter. See above.

-. Abr r d br + bu. s ^.

I see that you write full words here.

> > If you read it in code, there is no
> > way to say if it is right or wrong by staring at it.
> > For the long word, you could at least recover
> > some meaning.
> 
> But you couldn't be sure, since you know that the only real meaning
> of a variable name is to uniquely identify some language-defined item.
> No more, no less.

The real meaning of a variable is to name something:

  SICP:
    Programs must be written for people to read, and only incidentally
    for machines to execute.

For me a well written program reads like a good novel.

-- 
http://lispm.dyndns.org/
0
joswig8642 (2203)
12/6/2007 5:33:26 PM
Ingo Menger schrieb:
> On 6 Dez., 13:35, Joachim Durchholz <j...@durchholz.org> wrote:
>> Ingo Menger schrieb:
> 
>> Extra points if the Expression Reader deals well with runaway
>> indentation ;-)
>>
> 
> I think it would be fairly easy to make one that is correct (i.e.
> utters grammatical sentences) but I wonder how hard it would be to
> make one that is really good.
> 
> For example, given
>    map (+1) as
> 
> it would be correct to say something like:
>   evaluate the application of
>     the anonymous function which
>       evaluates the application of
>         its argument
>         and 1
>         to the function "+"
>     and "as"
>     to the function "map"
> 
> but it would be more desirable to hear: the list of all elements of
> "as", in the same order, but incremented

Sure, but the purpose of an Expression Reader would be to point out 
where the reader's interpretation went off-track, so the simpler 
solution is actually preferable.

I.e. "don't interpret, just decode".
Interpretation would be a lifetime project anyway, I'd think.

> Another difficulty would be "point free style" definitions, i.e.
> 
> notEmpty = not . empty
> 
> Here, perhaps one should do some form of epsilon conversion.

Maybe "first get it right, then make it good"?

Regards,
Jo
0
jo427 (1164)
12/6/2007 8:00:17 PM
rossberg@ps.uni-sb.de wrote:
> On Dec 6, 1:54 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
>> > Note that F# also removes most of the above, or severely restricts it
>> > (e.g. sealing and signature subtyping).
>>
>> Exactly, yes. Just as Homo sapiens have an appendix that is severely
>> restricted in what it can do.
> 
> Sorry I'm being dense, but your point is?

I don't miss functors and I don't miss my appendix.

>> Look at the original code as well:
>>
>> http://mlton.org/pipermail/mlton/2005-October/028127.html
> 
> I fail to see what this code has to do with the other.

They are both working around deficiencies in the language using functors.

Having to work around design flaws is the price you must pay for using a
mainstream language. Users of languages like SML and OCaml should not have
to suffer that burden. Getting it right has now been done before. It isn't
hard. Let's just agree that it needs to be done and do it.

>> Exportable infix operators. No need to wrap your numeric code in a
>> functor (a technique that doesn't even scale anyway).
> 
> As far as I can see, the first would avoid the initial two lines of
> Vesa's code -- for the price of not having natural precedences among
> the operators he chose, or having to choose less uniform operator
> names.

That isn't actually true because you can add arbitrary syntax extensions in
OCaml. That's missing the point anyway. Decorating operator names like that
just to satisfy the language at the expense of clarity is poor software
engineering.

> I don't see how anything in your feature list would replace the
> functors in his code.

Just rewrite his code in OCaml: you won't use any functors. The first
half-a-dozen lines of my ray tracer does exactly that.

>> Can you cite a single example of someone working around the absence of
>> functors in F#?
> 
> In F# I suppose you'd use classes.

If you're parameterizing over modules, yes. In practice, most uses of
functors in OCaml are not doing that though.

> You sort of said already that you use classes and have different
> versions of the code. That sounds like a workaround to me.

The "workaround" basically consists of removing the unnecessary boiler-plate
that OCaml's functors require.

>> That doesn't work in SML/NJ:
>>
>> - Map.Make(open Int type t = int);
>> stdIn:1.10 Error: syntax error found at OPEN
> 
> No, of course not. Like in Ocaml it is a module expression, so you can
> only use it in a module declaration:
> 
>   structure M = Map.Make(open Int type t = int)

That doesn't work either:

# sml
Standard ML of New Jersey v110.65 [built: Mon Aug  6 04:27:45 2007]
- structure M = Map.Make(open Int type t = int);
[autoloading]
[library $SMLNJ-BASIS/basis.cm is stable]
[autoloading done]
stdIn:1.15-1.46 Error: unbound structure: Map in path Map.Make

Incidentally, your statement about OCaml is incorrect. Module expressions
can be used without a module declaration:

  include Map.Make(String)
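(A sketch of that idiom: include splices the functor's result into an enclosing module, so the map operations and any extra helpers live side by side. The Env module and of_list are invented names:)

```ocaml
(* A string-keyed module whose map operations come straight
   from Map.Make via include -- no intermediate module name. *)
module Env = struct
  include Map.Make (String)
  (* A helper defined alongside the included operations. *)
  let of_list l = List.fold_left (fun m (k, v) -> add k v m) empty l
end

let env = Env.of_list [ ("a", 1); ("b", 2) ]
```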

>> In F# you write no code at all. Maybe you should consider a less verbose
>> language.
> 
> At the same time, you loose the ability to employ the type system to
> easily distinguish different types of maps.

In practice, that doesn't matter.

>> Absolutely not. If you fixed these problems with your own implementation
>> then you could garner users who would use it to do great work. I can
>> think of nothing more gratifying, but making the language familiar and
>> easy is absolutely essential for that to happen
> 
> Familiar relative to what? That is entirely subjective, unless you are
> targeting a particular community.

Far more people are familiar with printf than with all functional
programming languages combined, let alone the weird higher-order combinator
library with its function called ` that the SML community advocate as a
workaround for not having printf.
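
For context, the printf style being contrasted here looks like this in OCaml, where the format string itself is checked at compile time (a trivial sketch):

```ocaml
(* The compiler type-checks the format string: passing a string where
   %d expects an int is a compile-time error, not a runtime one. *)
let line = Printf.sprintf "%s scored %d points (%.1f%%)" "Alice" 42 87.5
let () = print_endline line   (* prints: Alice scored 42 points (87.5%) *)
```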

>> > And what about abstract types? You never define any?
>>
>> I use abstract types all the time in both OCaml and F#. Perhaps you mean
>> type aliases parameterized over phantom types?
> 
> I mean abstract types not defined by wrapping the representation into
> constructors -- like F# requires (and thereby essentially falling back
> to the state-of-the art of Modula-2).

I never noticed so I guess I don't use abstract type aliases very often.

>> Do you think there are bigger projects in SML than OCaml+F#?
> 
> I would make any bet that there are far bigger projects in SML than in
> the intersection OCaml/\F#. I never meant to claim anything else.

I'd be absolutely amazed if that were true. What is the biggest project in
SML?

> Do you commonly try to keep your code within it?

Most of my code happens to be in it.

> Just to clarify again: I criticise neither OCaml nor F#, both of which
> I have much respect for and sometimes enjoy using myself. Your
> repeated suggestive claim that it would somehow be easier to write
> programs that work on both than it would be for multiple SMLs is what
> I consider unsound.

I said that the intersection of the OCaml and F# languages is larger than
SML. I didn't say anything about ease of use.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/6/2007 8:17:06 PM
Chris F Clark schrieb:
> "Ich bin ein Berliner." was incorrect, but it got the message across.

Just to set the matter straight: that sentence is, in fact, 
syntactically valid German. The English translation would be "I am an 
inhabitant of Berlin", probably with a more concise wording for 
"inhabitant of Berlin".
(It could also have meant "I am a jam doughnut", since these are sold as 
"Berliners" in Germany, but almost no native German speaker would mix 
that up.)

Regards,
Jo
0
jo427 (1164)
12/6/2007 8:35:59 PM
In article <fj9mdk$587$1@online.de>,
 Joachim Durchholz <jo@durchholz.org> wrote:

> Chris F Clark schrieb:
> > "Ich bin ein Berliner." was incorrect, but it got the message across.
> 
> Just to set the matter straight: that sentence is, in fact, 
> syntactically valid German. The English translation would be "I am an 
> inhabitant of Berlin", probably with a more concise wording for 
> "inhabitant of Berlin".
> (It could also have meant "I am a jam doughnut", since these are sold as 
> "Berliners" in Germany, but almost no native German speaker would mix 
> that up.)
> 
> Regards,
> Jo

Ich bin ein Hamburger.

Oops.

-- 
http://lispm.dyndns.org/
0
joswig8642 (2203)
12/6/2007 8:43:27 PM
On Dec 2, 7:53 pm, Joachim Durchholz <j...@durchholz.org> wrote:
> Rüdiger Klaehn schrieb:

[snip]
> Erlang is somewhat verbose, but rather "non-English".
>
I do not know erlang, and unfortunately I do not have time to check it
out. However it does not have static types, so I doubt I would like
it.

> Dunno otherwise. Clean, maybe?
>
Clean is my favorite language. In fact it was Clean that finally
convinced me of the functional approach. But I would not call it
verbose or english-like.

I am not looking for the best functional language. I am looking for
something like the visual basic of functional languages. Something
with a verbose syntax that would probably annoy many professional
developers, but that is easy to use for non full-time developers.
0
rudi2468 (20)
12/6/2007 9:46:35 PM
On Dec 3, 7:26 pm, Donn Cave <d...@u.washington.edu> wrote:

[snip]
> I have this problem with Haskell, sometimes.  It isn't the
> syntax, I believe, it's the extreme degree of abstraction.
> It's a virtue of the language, that can be over-exercised.
>
> I catch myself doing the same - paring down some code until
> nothing remains that isn't essential.  The problem with this
> is that those inessential parameters and whatnot may carry
> some information, cues that remind the reader why this function
> exists.
>
Sometimes just assigning names to temporary constructs can make a big
difference.

> I expect that with more exposure to this, I grow less dependent
> on such cues, but in the end I do think it adds up to a loss
> of readability.  I don't know if programs are mathematical, but
> I am fairly sure people aren't.
>
With enough exposure I am sure it would be possible to read Haskell
code like written English. But most people are not willing to invest
so much time on learning new languages, especially when you can get
things done with existing object-oriented languages.

Not everybody is a computer language fanatic like most
comp.lang.functional readers. But I think that FP offers a lot
especially for people that just want to "get the job done".

>    Donn Cave, d...@u.washington.edu

0
rudi2468 (20)
12/6/2007 9:54:13 PM
Ingo Menger <quetzalcotl@consultant.com> wrote much I disagree with,
but in particular these things, which I state primarily to show how
much we disagree rather than to convince him:

> You can't learn a programming language just by looking at a syntax
> chart, even for passive use only.

Yes, you can.  In fact, I learn most programming languages that way
these days.

> some "division" I forgot the name of, but one had to be sure to write
> 77 at a certain position. Brrrr. (Or was it 99? Never mind.)

Exactly, little enigmatic notations that gave no clue as to why they
were chosen.  So, even languages like Cobol are subject to being too
terse.  There is a reason for 77, but not knowing the reason it makes
no sense and making no sense, it is hard to remember.

>> Because a naive reader will expect them, a naive reader may never have
>> seen a function without the arguments enclosed in parenthesis.
>
> So what?
> I do not understand the argument. Do you tell us that "naive readers"
> may only be confronted with things they know already?
> There's no justification for this.

I mean that when there is already a verbose form that naive readers
will have already been exposed to, shortening it just to be more
concise is being "too terse" in my book. (see next point for more)

> This is correct, but it doesn't matter what my goal is. I simply deny
> the claim that "more verbose" means "better to understand".

It does mean easier to understand for those who already use them as a
crutch.  I simply don't want to arbitrarily remove those crutches.
More verobse means you leave more clues that a reader may have used as
crutches.  Thus, more verbose means more understandable to those who
require those crutches.

> I see. But in your own example you used terms that are chinese to
> "naive readers" like "list", "parameter", "anonymous function" etc.
> etc.
> Therefore, you'd better used real chinese words instead of words that
> sound familiar. For, familiar words may suggest understanding where
> there is none.

And, I dispute this, partial understanding is better than no
understanding (the "Ich bin ein Berliner." quote).  I believe all
languages started because someone needed a term for something and came
up with something that was suggestive in their mind of the item being
talked about (e.g. the sound the bird makes or its color often became
the name for the bird).  Even rigorous mathematical terms like "set"
initially got used because we talk of a tea pot, a sugar bowl, a
creamer, and cups, and saucers as a tea "set"--thus one set became a
general description for all sets.  Even U-bahn is called that not
Z-bahn, because a U-bahn runs mostly underground, except in Chicago
where they are called ells for elevated....

So, yes, I want to use something that is close to the desired meaning
in my readers mind.  Take "anonymous function", it's a borrowing of
the word anonymous, a term for a person without a name.  That clues
the reader in to something about the functions, e.g. don't go looking
for its name.

If I want to communicate, I will try to bring my diction as close as
possible to what my audience already understands.  That way I minimize
the gap and leave them less to fill in.

I know some mathematicians have eschewed that and tried to make their
symbolism as alien as possible to prevent people from bringing over
pre-conceptions and forcing them to concentrate on the precise rules.
However, to my mind those programs are a failure as one can never
remove all context and more importantly analogy is a very important
reasoning process.  Removing an important way of understanding
something does not make the thing more understandable--it makes it
less understandable.

> Granted. But the reverse may well be true. Convoluted is most likely
> not simple to understand. Half a page long nested sentences like
> "evaluate the application of an anonymous function, which ..." are
> verbose, but surely not simple.

A half-a-page nested sentence may not be simple, but just compressing
it by replacing each word by a 1 or 2 character abbreviation and
writing it on 1 line, is not necessarily any simpler.  Neither extreme
is a panacea.  Simplicity does not come from verbosity nor from
terseness.

However, the point of this thread is to find something more verbose.
We already know where to find terse.  If verbose were common, I might
be asking for terse, as I might need that.

If you wish to debate more, I may give you the last say, as I have
said what I meant about as well as I can, and I realize that you
disagree.  I should just call myself a verbosity fascist and be done
with it.
0
cfc (239)
12/6/2007 9:57:29 PM
On Dec 5, 8:47 am, Jon Harrop <use...@jdh30.plus.com> wrote:
> > If F# uses structs internally, it will certainly benefit from this.
>
> F# basically implements its own versions of anything that the CLR does
> suboptimally and inlining is one such thing.
>
> After extensive benchmarking, we decided that complex numbers are the only
> data structure that warrants being a struct in F# rather than a class.
>
That might change when the above mentioned limitation is gone. But I
find some design decisions in F# very weird. For example the fact that
functions are not delegates.

And F# is much too complex as a language for beginners. It has almost
everything from OCaml plus a lot of constructs to work with managed
code.

It reminds me a bit of this atrocity of a language called managed C++.

I have gotten a colleague of mine interested in functional
programming. He is really smart and has just ordered a book about F#.
So far he does not seem to like it that much. He complains that it
defines too many operators.

> > What makes multithreading easier in F# than in OCAml?
>
> You can do threads in OCaml but they will never run concurrently because
> OCaml's GC is not multithreaded.
>
Sorry, but I did not get that. What is a thread that does not run
concurrently? A non-concurrent "stop-world" GC would not prevent
threads from running concurrently while no GC is happening.
0
rudi2468 (20)
12/6/2007 10:04:15 PM
On 6 Dez., 21:17, Jon Harrop <use...@jdh30.plus.com> wrote:
>
> >>http://mlton.org/pipermail/mlton/2005-October/028127.html
>
> > I fail to see what this code has to do with the other.
>
> They are both working around deficiencies in the language using functors.

Jon, come on. Vesa's code is not using functors to work around
overloading or anything like that. The use of functors here isn't
essential to the point of the example. It is there to provide high
potential for reuse. Apparently, the code wouldn't change much if it
was possible to define the actual vector operators using overloaded
names.



> >> Exportable infix operators. No need to wrap your numeric code in a
> >> functor (a technique that doesn't even scale anyway).
>
> > As far as I can see, the first would avoid the initial two lines of
> > Vesa's code -- for the price of not having natural precedences among
> > the operators he chose, or having to choose less uniform operator
> > names.
>
> That isn't actually true because you can add arbitrary syntax extensions in
> OCaml.

OK, if you want to resort to macros -- but you were arguing about
exportable infix operators. And macros certainly are a more complex
solution (and again not portable).



> Decorating operator names like that
> just to satisfy the language at the expense of clarity is poor software
> engineering.

I don't buy that absolutism. In fact, I know enough people who would
argue the exact opposite, namely that overloading actually decreases
clarity. Personally, I'm sitting on the fence here, though.



> > I don't see how anything in your feature list would replace the
> > functors in his code.
>
> Just rewrite his code in OCaml: you won't use any functors. The first
> half-a-dozen lines of my ray tracer does exactly that.

Your ray tracer is a toy example, optimised for low line count. If it
was written in a way you'd write real application code, and if you
cared for reuse as much, then I'm sure you would end up using modules
for structuring in similar ways.



> > You sort of said already that you use classes and have different
> > versions of the code. That sounds like a workaround to me.
>
> The "workaround" basically consists of removing the unnecessary boiler-plate
> that OCaml's functors require.

And it consists of having non-portable code.



> > Like in Ocaml it is a module expression, so you can
> > only use it in a module declaration:
>
> >   structure M = Map.Make(open Int type t = int)
>
> That doesn't work either:
>
> # sml
> Standard ML of New Jersey v110.65 [built: Mon Aug  6 04:27:45 2007]
> - structure M = Map.Make(open Int type t = int);
> [autoloading]
> [library $SMLNJ-BASIS/basis.cm is stable]
> [autoloading done]
> stdIn:1.15-1.46 Error: unbound structure: Map in path Map.Make

Jon, why are you trying to play games like that? I'm sure you are
fully aware that I was transliterating your example with respect to
syntax only, and that there is no functor named Map.Make in the NJ
library (try RedBlackMapFn if you wish).



> Incidentally, your statement about OCaml is incorrect. Module expressions
> can be used without a module declaration:
>
>   include Map.Make(String)

If you really enjoy going into nitpicking like that then I point out
that I didn't say that. I merely said it is a module expression like
in OCaml.



> > At the same time, you lose the ability to employ the type system to
> > easily distinguish different types of maps.
>
> In practice, that doesn't matter.

Mh, just today I made use of this possibility in some code I was
writing.



> >> Absolutely not. If you fixed these problems with your own implementation
> >> then you could garner users who would use it to do great work. I can
> >> think of nothing more gratifying, but making the language familiar and
> >> easy is absolutely essential for that to happen
>
> > Familiar relative to what? That is entirely subjective, unless you are
> > targeting a particular community.
>
> Far more people are familiar with printf than with all functional
> programming languages combined

Far more people are familiar with C++ templates than with all FPLs,
and use them on a daily basis. Should we hence put them into ML?

Come on, that is totally a bogus line of reasoning, and a guaranteed
way to disaster when used as a basis for language design.



> let alone the weird higher-order combinator
> library with its function called ` that the SML community advocate as a
> workaround for not having printf.

I don't know who is really advocating that. Rather, my impression is
that most of the SML community is considering printf an abomination
and doesn't care about it either way.



> >> Do you think there are bigger projects in SML that OCaml+F#?
>
> > I would make any bet that there are far bigger projects in SML than in
> > the intersection OCaml/\F#. I never meant to claim anything else.
>
> I'd be absolutely amazed if that were true. What is the biggest project in
> SML?

Sorry, no idea.



> I said that the intersection of the OCaml and F# languages is larger than
> SML. I didn't say anything about ease of use.

Maybe not explicitly, but like oftentimes, your statement had a very
obvious spin to it. Between the lines it was suggesting exactly that,
at least from my reading -- and I have a hard time believing you
didn't intend this. (And I still disagree with your highly subjective
notion of "larger".)

- Andreas
0
rossberg (600)
12/6/2007 11:26:50 PM
rossberg@ps.uni-sb.de wrote:
> On 6 Dez., 21:17, Jon Harrop <use...@jdh30.plus.com> wrote:
>> >>http://mlton.org/pipermail/mlton/2005-October/028127.html
>>
>> > I fail to see what this code has to do with the other.
>>
>> They are both working around deficiencies in the language using functors.
> 
> Jon, come on. Vesa's code is not using functors to work around
> overloading or anything like that. The use of functors here isn't
> essential to the point of the example. It is there to provide high
> potential for reuse.

That is a circular argument. If you leave SML then you can see that the
functors are purely incidental: the idiomatic OCaml and F# implementations
would make no use of functors whatsoever. The only reason to use functors
in this SML code is to make the infix operators reusable, i.e. to
workaround the lack of exportable infix operators in SML.

Objectively, that is not good and we know we can do a lot better because F#
already does. IMHO, the ML community should have identified and fixed this
problem years ago but they are far too obsessed with making everything
static.

> Apparently, the code wouldn't change much if it 
> was possible to define the actual vector operators using overloaded
> names.

No, that is 123 lines of code just to implement one kind of vector. That is
really bad code density compared to almost any other language including
OCaml and F#.

Rewrite it in OCaml:

  (* Fix a deficiency in the OCaml stdlib. *)
  module Array = struct
    include Array

    let map2 f xs ys =
      let n = length xs in
      if length ys <> n then invalid_arg "Array.map2";
      init n (fun i -> f xs.(i) ys.(i))
  end

  let ( ~| ) = Array.map ( ~-. )
  let ( +| ) = Array.map2 ( +. )
  let ( -| ) = Array.map2 ( -. )
  let ( *| ) s = Array.map (( *. ) s)
  let ( /| ) r s = (1. /. s) *| r

This code is worse in some ways:

- Incomplete: Vesa implemented more operators (but no extra functionality).

- Inefficient: polymorphism and no inlining of HOF args mean this will be
several times slower than it needs to be in OCaml. For a production
library, you would inline all of the code by hand.

and better in others:

- Doesn't pollute with unnecessary operators (e.g. vector * scalar)

- Still allows pattern matching over vectors.

- No need for functors when using this library.

F# already provides all of this so you write no code and code that uses this
functionality is even more concise thanks to overloading and (practically)
just as robust.

>> Decorating operator names like that
>> just to satisfy the language at the expense of clarity is poor software
>> engineering.
> 
> I don't buy that absolutism. In fact, I know enough people who would
> argue the exact opposite, namely that overloading actually decreases
> clarity. Personally, I'm sitting on the fence here, though.

They are wrong in this context (overloaded arithmetic operators).

Having used all of the different approaches both for computational science
in academia and now for commercial software in industry, I think there is
absolutely no question that the F# approach is substantially more
productive than the OCaml and SML approaches. SML's ad-hoc polymorphic
arithmetic over ints and floats was a step in the right direction that
OCaml should have copied but real programs need a lot more value types and
operators over them, which requires overloading for clarity.

Take Vesa's code, for example. In reality, I use complex numbers,
low-dimensional vectors and matrices and their equivalents in homogeneous
coordinates as well as arbitrary-dimensional vectors and matrices, all in
both 32- and 64-bit floats. Including scalars, you're looking at 24
different types. Vesa's approach scales with something like the factorial
of that. Are you going to write all of that code and remember all of those
operator names? I'm not. Even if you did write all of that code, would
MLton terminate in your lifetime? Maybe if you're an Elf...

>> > I don't see how anything in your feature list would replace the
>> > functors in his code.
>>
>> Just rewrite his code in OCaml: you won't use any functors. The first
>> half-a-dozen lines of my ray tracer does exactly that.
> 
> Your ray tracer is a toy example, optimised for low line count. If it
> was written in a way you'd write real application code, and if you
> cared for reuse as much, then I'm sure you would end up using modules
> for structuring in similar ways.

We use exactly the same style in our commercial visualization software,
which is hundreds of thousands of lines of code: probably longer than
anything ever written in SML.

>> > You sort of said already that you use classes and have different
>> > versions of the code. That sounds like a workaround to me.
>>
>> The "workaround" basically consists of removing the unnecessary
>> boiler-plate that OCaml's functors require.
> 
> And it consists of having non-portable code.

I hope to address that by creating another OCaml derivative that learns from
F# using LLVM as a backend. This seems quite feasible.

>> > Like in Ocaml it is a module expression, so you can
>> > only use it in a module declaration:
>>
>> >   structure M = Map.Make(open Int type t = int)
>>
>> That doesn't work either:
>>
>> # sml
>> Standard ML of New Jersey v110.65 [built: Mon Aug  6 04:27:45 2007]
>> - structure M = Map.Make(open Int type t = int);
>> [autoloading]
>> [library $SMLNJ-BASIS/basis.cm is stable]
>> [autoloading done]
>> stdIn:1.15-1.46 Error: unbound structure: Map in path Map.Make
> 
> Jon, why are you trying to play games like that? I'm sure you are
> fully aware that I was transliterating your example with respect to
> syntax only, and that there is no functor named Map.Make in the NJ
> library (try RedBlackMapFn if you wish).

Not at all: I have absolutely no idea what I'm doing or how to get that code
to work. I have to run SML/NJ in a 32-bit chroot so I haven't used it for
years. The last time I did any "serious" SML coding I was a first year
undergraduate!

Now I get this:

- structure M = RedBlackMapFn(open Int type t = int);
[autoloading]
 Error: (stable) $smlnj/smlnj-lib/smlnj-lib.cm: unable to find
$SMLNJ-LIB/Util/smlnj-lib.cm
(/smlnj/smlnj-110.65/sml.boot.x86-unix/SMLNJ-LIB/Util/smlnj-lib.cm)

unexpected exception (bug?) in SML/NJ: Format [Format]
  raised at: ../cm/stable/stabilize.sml:257.15-257.21
             ../cm/stable/stabilize.sml:360.44
             ../compiler/TopLevel/interact/evalloop.sml:44.55

Maybe my install is broken...

>> Incidentally, your statement about OCaml is incorrect. Module expressions
>> can be used without a module declaration:
>>
>>   include Map.Make(String)
> 
> If you really enjoy going into nitpicking like that then I point out
> that I didn't say that. I merely said it is a module expression like
> in OCaml.

Ah, when you said "you can only use it in a module declaration" you weren't
referring to OCaml. My mistake.

>> > At the same time, you lose the ability to employ the type system to
>> > easily distinguish different types of maps.
>>
>> In practice, that doesn't matter.
> 
> Mh, just today I made use of this possibility in some code I was
> writing.

Did it catch an error?

Personally, I think it was a mistake to not put structurally compared sets
and maps in the OCaml stdlib.

>> Far more people are familiar with printf than with all functional
>> programming languages combined
> 
> Far more people are familiar with C++ templates than with all FPLs,
> and use them on a daily basis. Should we hence put them into ML?

That is exactly what F# has done to keep compatibility with C#:

  'a set   ==   Set<'a>

> Come on, that is totally a bogus line of reasoning, and a guaranteed
> way to disaster when used as a basis for language design.

Please, enough with the theoretical reasons why popular languages are
a "disaster" with the implication that the ones that never left academia
are somehow a success. SML is the disaster.

>> let alone the weird higher-order combinator
>> library with its function called ` that the SML community advocate as a
>> workaround for not having printf.
> 
> I don't know who is really advocating that. Rather, my impression is
> that most of the SML community is considering printf an abomination
> and doesn't care about it either way.

Presumably they think ad-hoc polymorphism can be used to improve clarity
because they did so with arithmetic? Printf is just one of many useful
equivalents to that. Indexing "a.[i]" is another.

>> >> Do you think there are bigger projects in SML than OCaml+F#?
>>
>> > I would make any bet that there are far bigger projects in SML than in
>> > the intersection OCaml/\F#. I never meant to claim anything else.
>>
>> I'd be absolutely amazed if that were true. What is the biggest project
>> in SML?
> 
> Sorry, no idea.

Well, we have hundreds of thousands of lines of OCaml/F# code cross
compiling now and we are one of the smallest players in this area.

>> I said that the intersection of the OCaml and F# languages is larger than
>> SML. I didn't say anything about ease of use.
> 
> Maybe not explicitly, but like oftentimes, your statement had a very
> obvious spin to it. Between the lines it was suggesting exactly that,
> at least from my reading -- and I have a hard time believing you
> didn't intend this. (And I still disagree with your highly subjective
> notion of "larger".)

That honestly wasn't my intent but now that you put the words into my mouth
I'm going to have to go right ahead and piss you off by saying that I do
actually agree with the statement you attributed to me. Not because of the
technical merits of the languages but because many more libraries are
available that work transparently between OCaml and F#. That makes it
easier to use.

For example, you can spawn an adaptively tesselated hardware-accelerated
real-time interactive visualization of a sphere in both OCaml and F# with:

  Sphere(vec(0., 0., 0.), 1.)

I doubt anyone has ever successfully done that in SML...

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/7/2007 8:37:32 AM
On Dec 7, 12:26 am, rossb...@ps.uni-sb.de wrote:
> my impression is
> that most of the SML community is considering printf an abomination
> and doesn't care about it either way.

But why? The lack of string interpolation is causing me
endless suffering. Now, it is not really that I want
the ability to print out different types (a combinator
library is way too much), it would
be enough for me to be able to print strings in a saner
way than concatenating them by hand with ^.
For instance,
I would like a format utility able to interpolate
a template with a list of arguments. As an exercise
I wrote the following code:

signature STR_INTERP = sig
  val format : string -> string list -> string
end

structure StrInterp = struct

datatype token = C of char | S | R

val rec tokenize =
 fn [] => []
  | #"$" :: #"$" :: lst => C #"$" :: tokenize lst
  | #"$" :: #"s" :: lst => S :: tokenize lst
  | #"$" :: #"q" :: lst => R :: tokenize lst
  | c :: lst => C c :: tokenize lst

val interp = foldr
 (fn (C c, (lst, args)) => (c :: lst, args)
 | (S, (lst, args)) => (String.explode(hd args) @ lst, tl args)
 | (R, (lst, args)) => (#"\"" :: String.explode (hd args) @ #"\"" ::
lst,
                        tl args))

fun format templ = let
 val tokenlist = tokenize (String.explode templ)
in
 fn arglist => (
 assert length(arglist) = length (
        List.filter (fn (C x) => false | _ => true) tokenlist);
 String.implode (#1 (interp ([], rev arglist) tokenlist)))
end

end

Here is a test:

open StrInterp

do print (format "$s $q: $s$\n" ["pizza", "pepperoni", "5"])

Since I am pretty new at SML I am sure this code can be
much improved, so feel free to critique it ;)

  Michele Simionato

0
12/7/2007 10:33:35 AM
> Erlang is somewhat verbose, but rather "non-English".

I agree that Erlang might be better for two reasons.  First, (as far
as I know) there is no '|' to separate patterns (for example, in
Haskell's data declarations).  Second, '=' is not used for function
declarations - I think this is confusing because people usually
associate '=' with variable assignment.

The other thing that Erlang has (and maybe more importantly for
imperative programmers) is a book in the Pragmatic Programmers series
(or whatever it's called).
0
12/7/2007 12:55:24 PM
Rüdiger Klaehn skrev:
> On Dec 2, 7:53 pm, Joachim Durchholz <j...@durchholz.org> wrote:
>> Rüdiger Klaehn schrieb:
> 
> [snip]
>> Erlang is somewhat verbose, but rather "non-English".
>>
> I do not know erlang, and unfortunately I do not have time
 > to check it out.

I'm sorry, but I must ask: you posted a question asking for tips
on some language (presumably one that you don't already know),
but then you say that you don't have time to check it out? (:


 > However it does not have static types, so I doubt I would like
> it.

To each his own. Of course with today's type inference, I guess
the main difference is between compile-time type checking and
run-time type checking.

Erlang does have something that might be described as "batch-mode
type checking", through the tool dialyzer, which is part of the
official Erlang/OTP release. Dialyzer will not give you compilation
errors, and indeed doesn't even run in the compilation phase, but
if you run it separately, it is able to give very precise warnings
about type discrepancies.

Starting with the R12B release (released on Dec 5), Dialyzer also
supports "type contracts", for example:

-type(tree(X) :: {X,tree(X),tree(X)} | nil).
-spec(my_hd/1::([X,...])->X when is atom(X)).

http://www.erlang.se/workshop/2007/proceedings/03lindah.pdf

They are (currently) ignored at compile-time, but Dialyzer will
check whether the code adheres to the given contracts.

Without type specs, Dialyzer will infer types instead.

BR,
Ulf W
0
ulf.wiger (50)
12/7/2007 1:13:21 PM
Ulf Wiger skrev:
> 
> Starting with the R12B release (released on Dec 5), Dialyzer also
> supports "type contracts", for example:
> 
> -type(tree(X) :: {X,tree(X),tree(X)} | nil).
> -spec(my_hd/1::([X,...])->X when is atom(X)).

That should have been:

 > -type(tree(X) :: {X,tree(X),tree(X)} | nil).
 > -type(my_hd/1::([X,...])->X when is atom(X)).

Apologies - copy-and-paste error.
The referenced presentation was given before the R12B
release, and did contain some typos like this one.

BR,
Ulf W
0
ulf.wiger (50)
12/7/2007 1:27:14 PM
On Thu, 6 Dec 2007, Jon Harrop wrote:

> Philippa Cowderoy wrote:
> > It was a collection of IO operations for accessing the wiki's page
> > database.
> 
> Just values then, and no types? Why not use a record of function values?
> 

That would be how I faked it, at the cost of both an aggravating quantity 
of obscuring boilerplate (sufficient that on more than one occasion I had 
to spend a fair amount of time explaining to experienced haskellers just 
what the code actually did!) and a number of losses from moving the 
mechanism from the module level to the term level:

* Staging and termination guarantees - you both didn't know that the 
elaboration-and-linking process would terminate and didn't get to find out 
until run-time.

* Compilation complications - I made essential use of recursive module 
dependencies, and these remained in the encoding unless you wanted to keep 
all your functors and the code using them in a single module. These are 
supported in both Haskell and some ML-style module systems, but Haskell 
implementations tend to require supplying what amounts to a 'boot' 
signature. This also affects maintainability, because you can't factor
the signature.

* Typing restrictions limiting the architecture - I had a whole collection 
of tricks that I couldn't implement in Haskell due to (entirely sensible) 
type system limitations that needn't apply in a module system, many of 
which enabled designs that would've been near-impossible to demonstrate 
actually worked otherwise. Perhaps the most spectacular was being able to 
add security as a 'plugin' added just by adding the right value to a list 
in a config module before compilation, yet guaranteed resistant against 
attacks or exposed back doors from other plugins thanks to parametricity. 
Much of this amounts to reusing ancient lisp techniques in a manner 
amenable to static analysis.

In short, having a decent module system would have enabled much and 
produced much clearer code than encoding with records.
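
The record-of-functions encoding being discussed can be sketched in SML
(purely illustrative; the wiki code in question was Haskell, and every name
below is invented for the example):

```sml
(* A "module" of page-database operations encoded as a record of
   function values, i.e. the term-level alternative to a structure.
   The type and names here are invented for illustration only. *)
type pageDB =
  { load : string -> string option,
    save : string * string -> unit }

(* One possible instance: an in-memory store behind a mutable ref. *)
fun memDB () : pageDB =
  let
    val store = ref ([] : (string * string) list)
    fun load k =
      case List.find (fn (k', _) => k' = k) (!store) of
          SOME (_, v) => SOME v
        | NONE => NONE
    fun save (k, v) = store := (k, v) :: !store
  in
    { load = load, save = save }
  end
```

Unlike a structure, nothing about how such records are built and wired
together is checked before run time, which is precisely the loss of staging
guarantees described above.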

-- 
flippa@flippac.org

A problem that's all in your head is still a problem.
Brain damage is but one form of mind damage.
0
flippa (196)
12/7/2007 1:33:45 PM
Sean Gillespie wrote:
> Second, '=' is not used for function 
> declarations - I think this is confusing because people usually
> associate '=' with variable assignment.

Sigh. It was a very bad case of misusing established (mathematical) language
when the designers of C did that. When I learned C I found it odd. But
other languages (Java,Perl,...) followed suit; my impression is that they
did so simply because ':=' reminded people of the Wirth family of languages
(Pascal, Modula) and that was considered uncool (we are real programmers,
after all).

So what comes next? Abandon the use of 'static' as in 'static type system'
because people usually associate it with a bunch of ill-defined and
ambiguous C language concepts, having to do with a missing module system
and dirty hacks (like using a macro preprocessor) to work around it?

What about 'function'? Should we rather not use it to actually
mean 'function in the mathematical sense' anymore because people tend to
associate it with what was, in ancient times, called a 'subroutine'?

No, no, and no!

We should not cater to mainstream misuse of vocabulary and notation, just to
make it easier for the masses to come over. Let them hunger and freeze out
there in their imperative world! Those who really have had enough of it are
hopefully humbled by their misery and will, for the promise of ending it,
be willing to unlearn some of what they are used to. Amen.

Cheers
Ben
0
12/7/2007 7:03:18 PM
On Dec 7, 2:13 pm, Ulf Wiger <ulf.wi...@e-r-i-c-s-s-o-n.com> wrote:
> Rüdiger Klaehn skrev:
> > On Dec 2, 7:53 pm, Joachim Durchholz <j...@durchholz.org> wrote:
> >> Rüdiger Klaehn schrieb:
>
> > [snip]
> >> Erlang is somewhat verbose, but rather "non-English".
>
> > I do not know erlang, and unfortunately I do not have time
>
>  > to check it out.
>
> I'm sorry, but I must ask: you posted a question asking for tips
> on some language (presumably one that you don't already know),
> but then you say that you don't have time to check it out? (:
>
I must confess that I favor statically typed languages, so erlang is
probably not what I am looking for.

>  > However it does not have static types, so I doubt I would like
>
> > it.
>
> To each his own. Of course with today's type inference, I guess
> the main difference is between compile-time type checking and
> run-time type checking.
>
That is quite a large difference though.

> Erlang does have something that might be described as "batch-mode
> type checking", through the tool dialyzer, which is part of the
> official Erlang/OTP release. Dialyzer will not give you compilation
> errors, and indeed doesn't even run in the compilation phase, but
> if you run it separately, it is able to give very precise warnings
> about type discrepancies.
>
That is interesting. Does it give any guarantees about type
correctness? (When you run it over a program and it finds nothing, are
you safe from run time type errors?)

If it does, then it does not really matter that it is not run during
the compilation process...

regards,

Rüdiger
0
rudi2468 (20)
12/7/2007 8:37:38 PM
On Dec 7, 8:03 pm, Ben Franksen <ben.frank...@online.de> wrote:

[snip]
> No, no, and no!
>
> We should not cater to mainstream misuse of vocabulary and notation, just to
> make it easier for the masses to come over. Let them hunger and freeze out
> there in their imperative world! Those who really have had enough of it are
> hopefully humbled by their misery and will, for the promise of ending it,
> be willing to unlearn some of what they are used to. Amen.
>
Using = for declaring functions seems very natural to me. After all, a
function is just a value of the type x->y.

But nevertheless I think this attitude is not very helpful.

Every day I have to suffer from horrible programs written in
imperative languages that are full of bugs and do not utilize my quad
core cpu.

And to make a living, I have to put up with horrible imperative APIs
that are full of inconsistencies and error sources mostly caused by
mutable objects.

I want everybody to move to purely functional languages so I do not
have to suffer from horrible imperative APIs. And if that means that I
will have to put up with a less than perfect functional language, then
so be it.

cheers,

Rüdiger
0
rudi2468 (20)
12/7/2007 8:46:16 PM
Rüdiger Klaehn skrev:
> On Dec 7, 2:13 pm, Ulf Wiger <ulf.wi...@e-r-i-c-s-s-o-n.com> wrote:
>> Rüdiger Klaehn skrev:
>> > On Dec 2, 7:53 pm, Joachim Durchholz <j...@durchholz.org> wrote:
>>>> Rüdiger Klaehn schrieb:
 >>
>> To each his own. Of course with today's type inference, I guess
>> the main difference is between compile-time type checking and
>> run-time type checking.
>>
> That is quite a large difference though.

Agreed.


>> Erlang does have something that might be described as "batch-mode
>> type checking", through the tool dialyzer, which is part of the
>> official Erlang/OTP release. Dialyzer will not give you compilation
>> errors, and indeed doesn't even run in the compilation phase, but
>> if you run it separately, it is able to give very precise warnings
>> about type discrepancies.
>>
> That is interesting. Does it give any guarantees about type
> correctness? (When you run it over a program and it finds nothing,
 > are you safe from run time type errors?)

No. Partly because there are constructs that are not statically
decidable(*), Dialyzer cannot guarantee freedom from runtime type
errors. It does offer the guarantee that it will not give false
positives, which of course also means that it will err on the side
of (un-)safety.  ;-)

Given that erlang /is/ dynamically typed, this strategy makes sense.
If you want Dialyzer to really find stuff, you can keep meta
programming to a minimum, and make liberal use of type guards
and pattern matching.

It may make your programs more verbose (but you asked for that),
but not necessarily slower, since the compiler can make use of the
type guards and produce more efficient code.

(*) Imagine a construct like

    {ok,[M,F]} = io:fread('',"~a~a"), M:F().

which will read two atoms from the tty and call a function using
those two atoms. There is no way to check type safety without
running it. Dialyzer will not warn, since it cannot determine that
it actually is a type error.

BR,
Ulf W
0
ulf.wiger (50)
12/7/2007 9:00:33 PM
In article 
<<801e90ef-f85b-4b7b-9501-8225e4c2d160@i29g2000prf.googlegroups.com>>,
michele.simionato@gmail.com <michele.simionato@gmail.com> wrote:
> On Dec 7, 12:26 am, rossb...@ps.uni-sb.de wrote:
>> my impression is
>> that most of the SML community is considering printf an abomination
>> and doesn't care about it either way.
> 
> But why? The lack of string interpolation is causing me
> endless suffering. 

Typechecking format strings is not trivial, because you have to
parse the string to figure out what the expected types of the 
fields should be, and SML programmers are generally not willing
to give up type safety.

> Now, it is not really that I want the ability to print out different
> types (a combinator library is way too much), it would be enough for
> me to be able to print strings in a saner way than concatenating
> them by hand with ^.  

The combinator library approach is not terribly heavyweight. Here's a 
tiny little example, based on Danvy's printing combinators.[*] 

  fun L x k s = k (s ^ x)
  fun nl k s = k (s ^ "\n")
  fun int k s n = k (s ^ (Int.toString n))
  fun str k s s2 = k (s ^ s2)
  fun print fmt = fmt (fn x => x) ""

Now, you can write something like:

  print (L"The square of " o int  o L" is " o  int  o nl)
        3 9

and get "The square of 3 is 9\n" as your result. 

[*] Since this is just a Usenet post, these combinators use
concatenation in their implementation -- a real implementation would
do something that's not quadratic. :) Here are the types of the
printing combinators:
 
  (* A literal string *)
  val L : string -> (string -> 'a) -> string -> 'a

  (* A newline *)
  val nl : (string -> 'a) -> string -> 'a

  (* A int format, like %d *)
  val int : (string -> 'a) -> string -> int -> 'a

  (* A string format, like %s *)
  val str : (string -> 'a) -> string -> string -> 'a

  (* Take a format and prep it for printing *)
  val print : (('a -> 'a) -> string -> 'b) -> 'b
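
The quadratic behaviour comes from re-concatenating a growing accumulator
with ^ at every step. One way to avoid it (a sketch of the idea, not the
posted code) is to accumulate a reversed list of fragments and call
String.concat exactly once:

```sml
(* Same combinators as above, but the accumulator is a reversed list
   of string fragments; String.concat is called once, in print.
   Note that print here shadows the top-level print, as in the post. *)
fun L x k ss = k (x :: ss)
fun nl k ss = k ("\n" :: ss)
fun int k ss n = k (Int.toString n :: ss)
fun str k ss s2 = k (s2 :: ss)
fun print fmt = fmt (fn ss => String.concat (List.rev ss)) []
```

The call sites are unchanged: `print (L"The square of " o int o L" is " o
int o nl) 3 9` still produces "The square of 3 is 9\n".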


-- 
Neel R. Krishnaswami
neelk@cs.cmu.edu
0
neelk (298)
12/7/2007 9:11:07 PM
Rüdiger Klaehn skrev:
> On Dec 7, 8:03 pm, Ben Franksen <ben.frank...@online.de> wrote:
> 
> Every day I have to suffer from horrible programs written in
> imperative languages that are full of bugs and do not utilize my
 > quad core cpu.

Oh, you want to make use of your quad core?

Again, Erlang.  (:

http://www.franklinmint.fm/blog/archives/000792.html

The latest version also has built-in trace points and
a graphical viewer to allow for application-level
profiling of idle periods and bottlenecks in multicore
machines.

http://www.erlang.se/euc/07/papers/1630OTPupdateEUC07.pdf
(slides 10-13)

Functional, verbose, imperfect (no static typing),
and very scalable on multicore. ;-)


Of course, multicore isn't everything:

http://eric_rollins.home.mindspring.com/erlangAnt.html

While the author graciously commends the near-perfect
scalability of erlang in this benchmark, it is of course
difficult to overlook the fact that it starts out 130x
slower than the MLton solution. Not even your quad core
would help here.  (:  Erlang wasn't designed for this
type of problem.

BR,
Ulf W
0
ulf.wiger (50)
12/7/2007 9:16:12 PM
Ulf Wiger <ulf.wiger@e-r-i-c-s-s-o-n.com> writes:
> (*) Imagine a construct like
>     {ok,[M,F]} = io:fread('',"~a~a"), M:F().
> which will read two atoms from the tty and call a function using
> those two atoms. There is no way to check type safety without
> running it. Dialyzer will not warn, since it cannot determine that
> it actually is a type error.

Is there a reason it can't warn when it sees something like that?
I can understand that it can't totally separate type errors from
type correctness.  What I'm wondering is whether the uncertain
part affects so much code that it can't be usefully flagged.
0
phr.cx (5493)
12/7/2007 9:35:01 PM
On Dec 7, 10:16 pm, Ulf Wiger <ulf.wi...@e-r-i-c-s-s-o-n.com> wrote:
> Rüdiger Klaehn skrev:
> > On Dec 7, 8:03 pm, Ben Franksen <ben.frank...@online.de> wrote:
>
> > Every day I have to suffer from horrible programs written in
> > imperative languages that are full of bugs and do not utilize my
>
>  > quad core cpu.
>
> Oh, you want to make use of your quad core?
>
> Again, Erlang.  (:
>
Well, the problem is not so much that I want to make use of my quad
core. I know how to write functional, multithreaded code even in
imperative languages like C# and java. It looks kind of strange, but
it works.

I want average programmers to make use of quad- or more-core. And I
honestly think the only thing to make concurrency manageable for
average programmers* is functional programming.

*that also includes me when I have a bad day

[snip]
> While the author graciously commends the near-perfect
> scalability of erlang in this benchmark, it is of course
> difficult to overlook the fact that it starts out 130x
> slower than the MLton solution. Not even your quad core
> would help here.  (:  Erlang wasn't designed for this
> type of problem.
>
That is quite a large factor. For the stuff I am currently working on
(constraint satisfaction problems), that would not do.

From the existing languages I looked at so far, scala seems to be the
most practical for real world applications. It works on the JVM and
you can use existing java code. And it has a concurrency library
inspired by erlang.

I still prefer clean, but the chances of me ever using clean in my
work are just about zero.

regards,

Rüdiger
0
rudi2468 (20)
12/7/2007 9:59:37 PM
Rüdiger Klaehn wrote:
> On Dec 7, 8:03 pm, Ben Franksen <ben.frank...@online.de> wrote:
> 
> [snip]
>> No, no, and no!
>>
>> We should not cater to mainstream misuse of vocabulary and notation, just
>> to make it easier for the masses to come over. Let them hunger and freeze
>> out there in their imperative world! Those who really have had enough of
>> it are hopefully humbled by their misery and will, for the promise of
>> ending it, be willing to unlearn some of what they are used to. Amen.
>>
> Using = for declaring functions seems very natural to me. After all, a
> function is just a value of the type x->y.
> 
> But nevertheless I think this attitude is not very helpful.

Forgive me. I felt provoked. ;-)

> Every day I have to suffer from horrible programs written in
> imperative languages that are full of bugs and do not utilize my quad
> core cpu.
> 
> And to make a living, I have to put up with horrible imperative APIs
> that are full of inconsistencies and error sources mostly caused by
> mutable objects.

I can relate to that.

> I want everybody to move to purely functional languages so I do not
> have to suffer from horrible imperative APIs. And if that means that I
> will have to put up with a less than perfect functional language, then
> so be it.

Your attitude is appreciated, I even share your sentiment. (BTW, I regard my
favourite functional language, it starts with a capital H, as /far/ from
being perfect).

However, I seriously doubt that you'll actually win over people from the
imperative camp by compromising on syntax, especially superficial lexical
aspects of it. What makes functional programming difficult to start with
(when coming from imperative) has IME nothing to do with unusual syntax,
but with the fact that you have to structure your program in a completely
different way. You really have to unlearn a lot before the new way to think
about problems and their solutions becomes natural to you, and I guess a
somewhat different (but maybe not completely outlandish) syntax in
fact /helps/ you with that, otherwise you waste time by trying to do things
the old way ("everything looks so familiar, why doesn't it work like I am
used to?").

(OTOH, the feeling you get when your brain gets re-wired by starting to
think functionally has its own kind of thrill. I am sure I am telling you
nothing new here.)

What really gets you defectors in large numbers is working, non-trivial
programs and libraries with acceptable performance. It helps if the program
fills a gaping hole (Pugs). It also helps if the program has unique
abilities, even if it is bug-ridden and somewhat less than reliable
(darcs). We need more stuff like that (preferably w/o the bugs). Xmonad (a
tiling window manager) is a near perfect example for how to achieve high
reliability and performance in an almost unbelievable 500 LOCs.

W.r.t. libraries, I happened to impress co-workers with Parsec, using it to
implement a parser for some not-too-complicated configuration language in a
matter of minutes. The resulting program didn't cost more than a screen
full of code and reads more or less like a spec (BNF) annotated with result
values.

Cheers
Ben
0
12/7/2007 11:41:38 PM
Ben Franksen <ben.franksen@online.de> writes:
> Sean Gillespie wrote:
>> Second, '=' is not used for function 
>> declarations - I think this is confusing because people usually
>> associate '=' with variable assignment.
>
> Sigh. It was a very bad case of misusing established (mathematical) language
> when the designers of C did that.

Since Fortran and PL/1 had already been misusing this language for
10-15 years, can we really fault K&R for their choice?
0
stephen104 (378)
12/8/2007 3:11:19 AM
Neelakantan Krishnaswami wrote:
> In article
> <<801e90ef-f85b-4b7b-9501-8225e4c2d160@i29g2000prf.googlegroups.com>>,
> michele.simionato@gmail.com <michele.simionato@gmail.com> wrote:
>> But why? The lack of string interpolation is causing me
>> endless suffering.
> 
> Typechecking format strings is not trivial, because you have to
> parse the string to figure out what the expected types of the
> fields should be, and SML programmers are generally not willing
> to give up type safety.

In what way do OCaml and F# give up type safety by supporting printf?

Note that OCaml goes a step further by providing pretty printing via
formatters.

>> Now, it is not really that I want the ability to print out different
>> types (a combinator library is way too much), it would be enough for
>> me to be able to print strings in a saner way than concatenating
>> them by hand with ^.
> 
> The combinator library approach is not terribly heavyweight. Here's a
> tiny little example, based on Danvy's printing combinators.[*]
> 
>   fun L x k s = k (s ^ x)
>   fun nl k s = k (s ^ "\n")
>   fun int k s n = k (s ^ (Int.toString n))
>   fun str k s s2 = k (s ^ s2)
>   fun print fmt = fmt (fn x => x) ""
> 
> Now, you can write something like:
> 
>   print (L"The square of " o int  o L" is " o  int  o nl)
>         3 9
> 
> and get "The square of 3 is 9\n" as your result.
>
> [*] Since this is just a Usenet post, these combinators use
> concatenation in their implementation -- a real implementation would
> do something that's not quadratic. :)

In OCaml and F# you just write:

  sprintf "The square of %d is %d\n" 3 9

Note that everything is provided: you don't need to cut'n'paste code from
usenet or some wiki just to get a decent implementation of "print".

My personal preference would be to have a decent graphics API in the stdlib.
Printing seems like a no-brainer in comparison...

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/8/2007 5:06:28 AM
On Dec 7, 10:11 pm, Neelakantan Krishnaswami <ne...@cs.cmu.edu> wrote:

> Typechecking format strings is not trivial, because you have to
> parse the string to figure out what the expected types of the
> fields should be, and SML programmers are generally not willing
> to give up type safety.

You are not reading what I wrote:

>> it is not really that I want the ability to print out different types
>> (a combinator library is way too much), it would be enough
>> for me to be able to print strings in a saner way than concatenating
>> them by hand with ^

I am not talking about type safety here, I am talking about
syntactical convenience when managing strings and *only* strings.
I am fine with writing the required type casts manually, I am not
fine with

[f1]  print ("The square of " ^ x ^ "is" ^ y ^ "\n")

nor with

[f2]  print (L"The square of " o str  o L" is " o  str  o nl) x y

I would be fine with something like

[f3]  printf ("The square of %s is %s" x y)

or even

[f4]  printf ("The square of %s is %s \n", [x, y])

Why?

First of all, for readability concerns: all the "^" in the first
expression and the "L" and "o" in the second expression are too
intrusive and make the code difficult to read. There is a reason why
people invented templates.

Second, for reasons of familiarity: everybody would understand [f3]
and [f4] without
problems, whereas [f1] looks very primitive and [f2] just weird.

Forms [f3] and [f4] have respective advantages and disadvantages. [f3]
has the advantage that if I forget an argument I get a compile time
error (although it could be rather cryptic to decipher for a beginner)
whereas [f4] would give a runtime error. OTOH, [f4] is more
dynamic. Suppose for instance you want to define a template language
to generate Web pages; if you change the template at runtime, by
adding additional placeholders, you could just append the required
additional arguments to the argument list, whereas using [f3] you
would have to add the arguments by hand to the source code and to
recompile the function containing the printf expression.

So, there are use cases both for [f3] and [f4]; an implementation of
[f4] could be

signature FORMAT = sig
  val f : string -> string list -> string
end

structure Format = struct
 (* use $ as placeholder character, an example of use is
 print(f"The square of $ is $\n" ["3", "9"])
*)
  exception ArityError of string

  fun checkArity(templN1, argsN) = let
    val n1 = Int.toString (length templN1 - 1)
    val n = Int.toString (length argsN)
  in
    if n1=n then () else raise ArityError("Expected "^n1^" arguments, got "^n)
  end

  val rec interp' =
   fn (templ1, [], acc) => concat(rev (templ1 @ acc))
    | (templN1, argsN, acc) => interp'(
                             tl templN1, tl argsN,
                             hd argsN :: hd templN1 :: acc)
  and interp'' =
   fn (templN1, argsN) => (
      checkArity (templN1, argsN); interp' (templN1, argsN, []))
  and f =
   fn templ => let
          val templN1 = String.fields (fn c => c = #"$") templ
      in
          fn args => interp'' (templN1, args)
      end
end: FORMAT

I have not provided functionality to escape the special character "$"
here, but this is just a Usenet post.

My main gripe about combinators is that even if their implementation
is deceptively simple, they are actually far from trivial. How would
you explain the error messages to a newbie? I strongly believe that
simple things should be kept simple. On top of that, they are also not
standard: your implementation looks very similar to the one in the
FormatComb library of SML/NJ, whereas MLton uses a different one. Last
Saturday I was playing with combinators, to write my own format
library, just as an exercise, and I ran into all sorts of weird errors
that I believe are related to the value restriction of SML. All these
things are fairly advanced and should not afflict a user wanting to do
a little bit more than print "Hello World" :-( I would accept
combinator libraries for more advanced stuff, like implementing
pickling of arbitrary objects, but not for simple string interpolation.

      Michele Simionato
0
12/8/2007 8:58:22 AM
On Dec 7, 09:37, Jon Harrop <use...@jdh30.plus.com> wrote:
> rossb...@ps.uni-sb.de wrote:
> > On 6 Dez., 21:17, Jon Harrop <use...@jdh30.plus.com> wrote:
> >> >>http://mlton.org/pipermail/mlton/2005-October/028127.html
>
> >> > I fail to see what this code has to do with the other.
>
> >> They are both working around deficiencies in the language using functors.
>
> > Jon, come on. Vesa's code is not using functors to work around
> > overloading or anything like that. The use of functors here isn't
> > essential to the point of the example. It is there to provide high
> > potential for reuse.
>
> That is a circular argument. If you leave SML then you can see that the
> functors are purely incidental: the idiomatic OCaml and F# implementations
> would make no use of functors whatsoever. The only reason to use functors
> in this SML code is to make the infix operators reusable, i.e. to
> workaround the lack of exportable infix operators in SML.

Jon, please. As you may very well know, it would be perfectly possible
to transliterate your version of the code to SML without much change.
But it is written in a different, more modular style for reasons. If
you don't appreciate those then fine, but don't blame it on something
unrelated.

> Objectively, that is not good and we know we can do a lot better because F#
> already does.

Like I said already, even with type classes or something alike you
have to make the same definitions. And the choice whether you keep it
plain and simple (like your code), or try to go for something more
extensible (like Vesa's code) is almost completely orthogonal to that.


> Take Vesa's code, for example. In reality, I use complex numbers,
> low-dimensional vectors and matrices and their equivalents in homogeneous
> coordinates as well as arbitrary-dimensional vectors and matrices, all in
> both 32- and 64-bit floats. Including scalars, you're looking at 24
> different types. Vesa's approach scales with something like the factorial
> of that. Are you going to write all of that code and remember all of those
> operator names? I'm not.

See, that's one of the reasons why the code is modularised. You'll
never need all those combinations in the same place, so you
instantiate the operators for the version you actually need, and open
them in scope selectively.

It is only your version that wouldn't scale at all.

I agree that overloading would make it more convenient, but it does
not seem to be that much of a big deal. And of course, you have to be
aware that the modular overhead is only looking bad for toy-sized
examples.


> > Your ray tracer is a toy example, optimised for low line count. If it
> > was written in a way you'd write real application code, and if you
> > cared for reuse as much, then I'm sure you would end up using modules
> > for structuring in similar ways.
>
> We use exactly the same style in our commercial visualization software,

Then I hope you are not longing for more overloading just to avoid
modular program design.


> which is hundreds of thousands of lines of code: probably longer than
> anything ever written in SML.

Please drop your propaganda. There are certainly projects of that
size.


> >> > You sort of said already that you use classes and have different
> >> > versions of the code. That sounds like a workaround to me.
>
> >> The "workaround" basically consists of removing the unnecessary
> >> boiler-plate that OCaml's functors require.
>
> > And it consists of having non-portable code.
>
> I hope to address that by creating another OCaml derivative that learns from
> F# using LLVM as a backend. This seems quite feasible.

In fact, I would be interested in seeing this. There has been much
hype around LLVM, but I have yet to see an implementation of a real
FPL with even simple features like full GC and proper TCO on it.

> > Mh, just today I made use of this possibility in some code I was
> > writing.
>
> Did it catch an error?

I think it did on one or two occasions where I screwed up argument
order.

> >> Far more people are familiar with printf than with all functional
> >> programming languages combined
>
> > Far more people are familiar with C++ templates than with all FPLs,
> > and use them on a daily basis. Should we hence put them into ML?
>
> That is exactly what F# has done to keep compatibility with C#:
>
>   'a set   ==   Set<'a>

That's not templates, it's merely (a bit of) template syntax.

> > Come on, that is totally a bogus line of reasoning, and a guaranteed
> > way to disaster when used as a basis for language design.
>
> Please, enough with the theoretical reasons why popular languages are
> a "disaster" with the implication that the ones that never left academia
> are somehow a success.

I did not refer to any language, popular or not.

> SML is the disaster.

Sigh. With unreasonable statements like this I think it's time to end
the discussion.
0
rossberg (600)
12/8/2007 10:07:41 AM
On Dec 7, 11:33, "michele.simion...@gmail.com"
<michele.simion...@gmail.com> wrote:
> On Dec 7, 12:26 am, rossb...@ps.uni-sb.de wrote:
>
> > my impression is
> > that most of the SML community is considering printf an abomination
> > and doesn't care about it either way.
>
> But why? The lack of string interpolation is causing me
> endless suffering. Now, it is not really that I want
> the ability to print out different types (a combinator
> library is way too much), it would
> be enough for me to be able to print strings in a saner
> way than concatenating them by hand with ^.

Well, if that's all you want then that's not an issue. But printf is
all about allowing ad-hoc combinations of different types, mingled
with ad-hoc format control. OCaml deals with this by a couple of
gorgeous hacks in its type system and type checker.

> signature STR_INTERP = sig
>   val format : string -> string list -> string
> end
>
> structure StrInterp = struct
>
> datatype token = C of char | S | R
>
> val rec tokenize =
>  fn [] => []
>   | #"$" :: #"$" :: lst => C #"$" :: tokenize lst
>   | #"$" :: #"s" :: lst => S :: tokenize lst
>   | #"$" :: #"q" :: lst => R :: tokenize lst
>   | c :: lst => C c :: tokenize lst
>
> val interp = foldr
>  (fn (C c, (lst, args)) => (c :: lst, args)
>  | (S, (lst, args)) => (String.explode(hd args) @ lst, tl args)
>  | (R, (lst, args)) => (#"\"" :: String.explode (hd args) @ #"\"" :: lst,
>                         tl args))
>
> fun format templ = let
>  val tokenlist = tokenize (String.explode templ)
> in
>  fn arglist => (
>  assert length(arglist) = length (
>         List.filter (fn (C x) => false | _ => true) tokenlist);
>  String.implode (#1 (interp ([], rev arglist) tokenlist)))
> end
>
> end

> Since I am pretty new at SML I am sure this code can be
> much improved, so feel free to critique it ;)

Two immediate points: you should annotate the structure definition
with the signature name to ensure that it actually matches it. And you
should use pattern matching instead of hd and tl. That would
immediately yield a valuable compiler warning because your code does
not cover the case where the length of the list does not match the
format string (as a rule of thumb, /never/ use hd and tl).

On a more algorithmic level, turning strings into lists is
inefficient. A tuned approach would be to utilise SML's Substring
module to traverse the string. In the process, build a list of string
fragments for the result, and in the end concatenate them in one
operation using String.concat.
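
A minimal sketch of that approach (the function name and the "$"
placeholder convention are taken over from the quoted code for
illustration; this is not a drop-in replacement): split the template with
Substring.fields, interleave the fragments with the arguments, and
concatenate once at the end:

```sml
(* Interpolate args into templ, where each "$" marks a placeholder.
   Substring.fields splits without building per-character lists;
   String.concat joins all fragments in a single pass. *)
fun format templ args =
  let
    (* n placeholders yield n+1 fragments *)
    val frags =
      map Substring.string
          (Substring.fields (fn c => c = #"$") (Substring.full templ))
    fun weave ([f], [])            = [f]
      | weave (f :: fs, a :: rest) = f :: a :: weave (fs, rest)
      | weave (_, _)               = raise Fail "format: arity mismatch"
  in
    String.concat (weave (frags, args))
  end
```

For example, `format "The square of $ is $\n" ["3", "9"]` evaluates to
"The square of 3 is 9\n", and a wrong number of arguments raises Fail
rather than silently producing garbage.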

- Andreas
0
rossberg (600)
12/8/2007 10:26:10 AM
rossberg@ps.uni-sb.de wrote:
> It is only your version that wouldn't scale at all.

This is a joke. Not only did I write the book on it but I continue to sell
production-quality software far beyond the sophistication of anything
relevant that SML has ever seen.

>> which is hundreds of thousands of lines of code: probably longer than
>> anything ever written in SML.
> 
> Please drop your propaganda. There are certainly projects of that
> size.

Where?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/8/2007 12:35:32 PM
On Dec 7, 12:33, "michele.simion...@gmail.com"
<michele.simion...@gmail.com> wrote:
> But why? The lack of string interpolation is causing me
> endless suffering. Now, it is not really that I want
> the ability to print out different types (a combinator
> library is way too much), it would
> be enough for me to be able to print strings in a saner
> way than concatenating them by hand with ^.
[...]
> do print (format "$s $q: $s$\n" ["pizza", "pepperoni", "5"])

Frankly, I'm not a fan of format strings, whether type checked or
not.  Whether they make the code more readable or more convenient to
write is debatable.  One readability issue with format strings is
associating the specifiers in the format string with the actual
positional parameters.  Another issue is having to remember yet
another family of specifiers.  In C++ I got used to the C++ IO streams
'cout << hippo << "\n"' style output.  It doesn't feel cumbersome to
me to do output by giving a list of strings to concatenate:

  val prints = print o concat

  prints ["Hippo: ", hippo, " Happo: ", happo, "\n"]

(I recently added prints (read: print strings) to my Extended Basis
library, but haven't yet committed the code.)

-Vesa Karvonen
0
12/8/2007 1:02:34 PM
On Dec 8, 11:26 am, rossb...@ps.uni-sb.de wrote:
> On a more algorithmic level, turning strings into lists is
> inefficient. A tuned approach would be to utilise SML's Substring
> module to traverse the string. In the process, build a list of string
> fragments for the result, and in the end concatenate them in one
> operation using String.concat.

Suppose I want to uppercase or lowercase a string: the
direct approach would be

fun upper str = String.implode (map Char.toUpper (String.explode str))
fun lower str = String.implode (map Char.toLower (String.explode str))

Is this inefficient? What should I use instead?

 Michele Simionato
0
12/9/2007 5:12:13 AM
"michele.simionato@gmail.com" <michele.simionato@gmail.com> writes:
> Suppose I want to uppercase or lowercase a string: the
> direct approach would be
>
> fun upper str = String.implode (map Char.toUpper (String.explode str))
> fun lower str = String.implode (map Char.toLower (String.explode str))
>
> Is this inefficient? What should I use instead?

fun upper str = String.map Char.toUpper str
fun lower str = String.map Char.toLower str
0
stephen104 (378)
12/9/2007 6:52:43 AM
In article <<13lka42nfo63pad@corp.supernews.com>>,
Jon Harrop <usenet@jdh30.plus.com> wrote:
> 
> In what way do OCaml and F# give up type safety by supporting printf?

They do not give up type safety. Unfortunately, printf is supported in
an ad-hoc, second-class, non-extensible way. Even for the limited case
of printing, I find it an inadequate solution.

> In OCaml and F# you just write:
> 
>   sprintf "The square of %d is %d\n" 3 9
> 
> Note that everything is provided: you don't need to cut'n'paste code from
> usenet or some wiki just to get a decent implementation of "print".

Everything? What's the format code for a list, or a pair, or an array,
or an option type? How do you add format codes for user-defined types?

With a combinator library, all of these are easy to add. For example,
we can add a format code for lists as follows:

  fun list fmt k s xs =
    let fun fmt' x k s = fmt k s x
        fun loop k []        = k
          | loop k [x]       = fmt' x k
          | loop k (x :: xs) = fmt' x (fn s => loop k xs (s ^ ", "))
    in
      loop (fn s => k (s ^ "]")) xs (s ^ "[")
    end

Now, we can write

  print (l"The squares of " o list int o l" are " o list int o nl)
        [1, 2, 3, 4]
        [1, 4, 9, 16]

and get 

  "The squares of [1, 2, 3, 4] are [1, 4, 9, 16]\n"

Or we can write 

  - print (l"happy" o list (list int)) [[1, 2, 3], [4, 5, 6], [7, 8, 9]];

  val it = "happy [[1, 2, 3], [4, 5, 6], [7, 8, 9]]" : string

> My personal preference would be to have a decent graphics API in the
> stdlib.  Printing seems like a no-brainer in comparison...

You'd think it would be easy, except that so many languages --
including Ocaml -- have badly incomplete solutions.

It seems reasonable to me to have some syntactic sugar to make these
combinator expressions prettier, but adding format strings as a
second-class construct to the core language is a clear design wart.



-- 
Neel R. Krishnaswami
neelk@cs.cmu.edu
0
neelk (298)
12/9/2007 4:15:36 PM
In article 
<<09071b26-158d-4907-bc3d-b703388459d3@w40g2000hsb.googlegroups.com>>,
michele.simionato@gmail.com <michele.simionato@gmail.com> wrote:
>
> Suppose I want to uppercase or lowercase a string: the
> direct approach would be
> 
> fun upper str = String.implode (map Char.toUpper (String.explode str))
> fun lower str = String.implode (map Char.toLower (String.explode str))
> 
> Is this inefficient? What should I use instead?

Use String.map:

  val upper = String.map Char.toUpper
  val lower = String.map Char.toLower


-- 
Neel R. Krishnaswami
neelk@cs.cmu.edu
0
neelk (298)
12/9/2007 4:17:56 PM
Neelakantan Krishnaswami wrote:
> In article <<13lka42nfo63pad@corp.supernews.com>>,
> Jon Harrop <usenet@jdh30.plus.com> wrote:
>> In OCaml and F# you just write:
>> 
>>   sprintf "The square of %d is %d\n" 3 9
>> 
>> Note that everything is provided: you don't need to cut'n'paste code from
>> usenet or some wiki just to get a decent implementation of "print".
> 
> Everything? What's the format code for a list, or a pair, or an array,
> or an option type? How do you add format codes for user-defined types?

Using user-defined print functions and the %a format specifier.

> With a combinator library, all of these are easy to add.

Same with printf.

Printing a pair is just:

# let sprintf_pair f g () (x, y) =
    sprintf "(%a, %a)" f x g y;;
val sprintf_pair :
  (unit -> 'a -> string) ->
  (unit -> 'b -> string) -> unit -> 'a * 'b -> string = <fun>

So small that you wouldn't even bother factoring it out.
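For what it's worth, it can then be used like this (a usage example of
mine, not from the original post; %a hands each (unit -> 'a -> string)
printer its corresponding value):

```ocaml
# sprintf_pair (fun () -> sprintf "%d") (fun () -> sprintf "%s") () (3, "three");;
- : string = "(3, three)"
```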

> For example, we can add a format code for lists as follows:
> 
>   fun list fmt k s xs =
>     let fun fmt' x k s = fmt k s x
>   fun loop k []        = k
>   | loop k [x]       = fmt' x k
>   | loop k (x :: xs) = fmt' x (fn s => loop k xs (s ^ ", "))
>     in
>       loop (fn s => k (s ^ "]")) xs (s ^ "[")
>     end
> 
> Now, we can write
> 
>   print (l"The squares of " o list int o l" are " o list int o nl)
>         [1, 2, 3, 4]
>         [1, 4, 9, 16]
> 
> and get
> 
>   "The squares of [1, 2, 3, 4] are [1, 4, 9, 16]\n"
> 
> Or we can write
> 
>   - print (l"happy" o list (list int)) [[1, 2, 3], [4, 5, 6], [7, 8, 9]];
> 
>   val it = "happy [[1, 2, 3], [4, 5, 6], [7, 8, 9]]" : string

In OCaml, you write:

# let rec sprintf_list f () list =
    sprintf "[%a]" (sprintf_list_aux f) list
  and sprintf_list_aux f () = function
    | [] -> sprintf ""
    | [h] -> sprintf "%a" f h
    | h::t -> sprintf "%a; %a" f h (sprintf_list_aux f) t;;
val sprintf_list : (unit -> 'a -> string) -> unit -> 'a list -> string =
  <fun>
val sprintf_list_aux : (unit -> 'a -> string) -> unit -> 'a list -> string =
  <fun>

and then:

# sprintf_list (sprintf_list (fun () -> sprintf "%d")) ()
    [[1; 2; 3]; [4; 5; 6]; [7; 8; 9]];;
- : string = "[[1; 2; 3]; [4; 5; 6]; [7; 8; 9]]"

>> My personal preference would be to have a decent graphics API in the
>> stdlib.  Printing seems like a no-brainer in comparison...
> 
> You'd think it would be easy, except that so many languages --
> including Ocaml -- have badly incomplete solutions.
>
> It seems reasonable to me to have some syntactic sugar to make these
> combinator expressions prettier, but adding format strings as a
> second-class construct to the core language is a clear design wart.

I think you are greatly outnumbered by happy OCaml users.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/9/2007 5:07:25 PM
In article <<13lo9ap8iiemh17@corp.supernews.com>>,
Jon Harrop <usenet@jdh30.plus.com> wrote:
> Neelakantan Krishnaswami wrote:
>> In article <<13lka42nfo63pad@corp.supernews.com>>,
>> Jon Harrop <usenet@jdh30.plus.com> wrote:
>>> In OCaml and F# you just write:
>>> 
>>>   sprintf "The square of %d is %d\n" 3 9
>>> 
>>> Note that everything is provided: you don't need to cut'n'paste code from
>>> usenet or some wiki just to get a decent implementation of "print".
>> 
>> Everything? What's the format code for a list, or a pair, or an array,
>> or an option type? How do you add format codes for user-defined types?
> 
> Using user-defined print functions and the %a format specifier.

No, that's not a format specifier for lists or arrays.

What %a does is apply a function to a value and splice in the result;
it's basically the same as %s, only it does a function application for
us -- we write:

  printf "%a" print_foo foo  

instead of 
 
  printf "%s" (sprint_foo foo)

This can save memory when building a large string, but it does not do
what I asked.

>> It seems reasonable to me to have some syntactic sugar to make these
>> combinator expressions prettier, but adding format strings as a
>> second-class construct to the core language is a clear design wart.
> 
> I think you are greatly outnumbered by happy OCaml users.

You pack two fallacies into one sentence here.

First, you introduce a false dichotomy between me and "happy Ocaml
users", when you have no actual evidence that I am an unhappy Ocaml
user. Second, you appeal to popularity as evidence of good design.
This is of course false -- witness the fact that C++ is more popular
than ML.

This style of argument is extremely dishonest; please stop using it.

-- 
Neel R. Krishnaswami
neelk@cs.cmu.edu
0
neelk (298)
12/9/2007 9:29:05 PM
Neelakantan Krishnaswami wrote:
> In article <<13lo9ap8iiemh17@corp.supernews.com>>,
> Jon Harrop <usenet@jdh30.plus.com> wrote:
>> Using user-defined print functions and the %a format specifier.
> 
> No, that's not a format specifier for lists or arrays.
> 
> What %a does is apply a function to a value and splice in the result;
> it's basically the same as %s, only it does a function application for
> us -- we write:
> 
>   printf "%a" print_foo foo
> 
> instead of
>  
>   printf "%s" (sprint_foo foo)
> 
> This can save memory when building a large string, but it does not do
> what I asked.

What more do you want (exactly)?

>>> It seems reasonable to me to have some syntactic sugar to make these
>>> combinator expressions prettier, but adding format strings as a
>>> second-class construct to the core language is a clear design wart.
>> 
>> I think you are greatly outnumbered by happy OCaml users.
> 
> You pack two fallacies into one sentence here.
> 
> First, you introduce a false dichotomy between me and "happy Ocaml
> users", when you have no actual evidence that I am an unhappy Ocaml
> user. Second, you appeal to popularity as evidence of good design.
> This is of course false -- witness the fact that C++ is more popular
> than ML.
>
> This style of argument is extremely dishonest; please stop using it.

I think that you (and Andreas) will not understand the point I am making
until you quit academia, found a company and earn your living through
direct sales. When you need sales to eat, you quickly learn that no users
is the definition of failure.

I would like to inspire you to build great tools. OCaml is a superb example
of success in this context. Hundreds of bioinformaticians are among our
customers thanks to OCaml. You could build even better tools.

I'm trying to tell you that OCaml's success in technical computing is
directly related to the fact that it augments SML with useful features like
printf. If you adopt these kinds of features then you can do the world a
great service by building tools that are used, for example, to cure cancer.

I can't think of anything more gratifying but I'm finding it incredibly
difficult to move the SML community in a productive direction.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/9/2007 11:16:18 PM
"michele.simionato@gmail.com" <michele.simionato@gmail.com> writes:

> On Dec 7, 10:11 pm, Neelakantan Krishnaswami <ne...@cs.cmu.edu> wrote:
>
>> Typechecking format strings is not trivial, because you have to
>> parse the string to figure out what the expected types of the
>> fields should be, and SML programmers are generally not willing
>> to give up type safety.
>
> You are not reading what I wrote:
>
>>> it is not really that I want the ability to print out different types
>>> (a combinator library is way too much), it would be enough
>>> for me to be able to print strings in a saner way than concatenating
>>> them by hand with ^
>
> I am not talking about type safety here, I am talking about
> syntactical convenience when managing strings and *only* strings.
> I am fine with writing the required type casts manually, I am not
> fine with
>
> [f1]  print ("The square of " ^ x ^ " is " ^ y ^ "\n")
>
> nor with
>
> [f2]  print (L"The square of " o str  o L" is " o  str  o nl) x y
>
> I would be fine with something like
>
> [f3]  printf ("The square of %s is %s" x y)
>
> or even
>
> [f4]  printf ("The square of %s is %s \n", [x, y])
>
> Why?
>
> First of all, for readability concerns: all the "^" in the first
> expression and the "L", "o" in
> the second expression are too intrusive and make the code difficult
> to read. There is a reason why people invented templates.

I always find those % signs to be too intrusive.  They make the code
difficult to read.

> Second, for reasons of familiarity: everybody would understand [f3]
> and [f4] without
> problems, whereas [f1] looks very primitive and [f2] just weird.

So let's adopt C syntax then for all of ML.  Face it: To the majority
of programmers, most of ML looks "weird".  The %whatever notation from
C's printf is an acquired taste.  There is absolutely nothing natural or
simple about it, because the mapping from formatting characters to
types and how they get printed is completely ad-hoc and not extensible.

> My main gripe about combinators is that even if their implementation
> is deceptively simple, they are actually far from trivial. How would
> you explain the error messages to a newbie? I strongly believe that
> simple things should be kept simple. On top of that, they are also not
> standard: your implementation looks very similar to the one in the
> FormatComb library of SML/NJ, whereas MLton uses a different one. Last
> Saturday I was playing with combinators, to write my own format
> library, just as an exercise, and I ran into all sorts of weird errors
> that I believe are related to the value restriction of SML. All these
> things are fairly advanced and should not afflict a user wanting to do
> a little bit more than print "Hello World" :-( I would accept
> combinator libraries
> for more advanced stuff, like implementing pickling of arbitrary
> objects, but not for simple string interpolation.

You were trying to IMPLEMENT the combinators, which admittedly can be
a little bit tricky.  (I speak from experience here, as you may know,
since you seem to be familiar with SML/NJ's FormatComb.)  However,
USING the combinators is pretty much trivial, and the error messages
aren't really that bad (at least not worse than other type errors).
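To make the tricky part concrete, here is one pitfall Michele likely hit
(a sketch, assuming combinators like the l, int and nl used earlier in
this thread; not taken from FormatComb itself):

```sml
(* An eta-reduced composition of combinators is an application, not a
   syntactic value, so SML's value restriction prevents it from being
   given a polymorphic type: *)
val fmt = l "x = " o int o nl          (* may fail to generalise *)

(* The usual fix is eta-expansion, which makes it a value again: *)
fun fmt' k = (l "x = " o int o nl) k
```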

Matthias
0
find19 (1244)
12/10/2007 5:53:37 AM
On Dec 8, 13:35, Jon Harrop <use...@jdh30.plus.com> wrote:
> rossb...@ps.uni-sb.de wrote:
> > It is only your version that wouldn't scale at all.
>
> This is a joke. Not only did I write the book on it

Yet your version is the only one exhibiting the problem you described.
What does a book have to do with it?

> but I continue to sell
> production-quality software far beyond the sophistication of anything
> relevant that SML has ever seen.

Again using your very personal metric of "sophistication", I suppose.
There are many different dimensions to that.

> >> which is hundreds of thousands of lines of code: probably longer than
> >> anything ever written in SML.
>
> > Please drop your propaganda. There are certainly projects of that
> > size.
>
> Where?

Excuse me if I won't spend my time compiling statistics for you, but
the one data point I can easily give you is that the implementation of
the Alice system alone consists of ~350K loc, 200K of which are in
SML. Not that I think numbers have much relevance, though.

- Andreas
0
rossberg (600)
12/10/2007 8:53:36 AM
On Dec 10, 00:16, Jon Harrop <use...@jdh30.plus.com> wrote:
>
> I think that you (and Andreas) will not understand the point I am making
> until you quit academia, found a company and earn your living through
> direct sales. When you need sales to eat, you quickly learn that no users
> is the definition of failure.

I understand the point you are making. But you seem to miss that it is
by no means the only metric for "success", and that your pet concerns
may be entirely irrelevant to other people and other projects, just as
theirs are to you. Regarding academia, technical progress /requires/
evading the inertia of "familiar ideas". If the PL community had
followed your kind of advice 20 years ago then you wouldn't be able to
enjoy OCaml and F# today. Hence, you should really try to get out of
your extremely narrow, black-and-white view of the world.
0
rossberg (600)
12/10/2007 9:20:58 AM
rossberg@ps.uni-sb.de wrote:
> On Dec 10, 00:16, Jon Harrop <use...@jdh30.plus.com> wrote:
>> I think that you (and Andreas) will not understand the point I am making
>> until you quit academia, found a company and earn your living through
>> direct sales. When you need sales to eat, you quickly learn that no users
>> is the definition of failure.
> 
> I understand the point you are making. But you seem to miss that it is
> by no means the only metric for "success", and that your pet concerns
> may be entirely irrelevant to other people and other projects, just as
> theirs are to you.

I am conveying the concerns of your potential users to you. If you intend
your work to never be used then that is fine but please do not pretend that
it can be considered a success by any meaningful metric.

> Regarding academia, technical progress /requires/ evading the inertia
> of "familiar ideas". 

Sure.

> If the PL community had 
> followed your kind of advice 20 years ago then you wouldn't be able to
> enjoy OCaml and F# today.

F# is primarily a product of industry.

> Hence, you should really try to get out of 
> your extremely narrow, black-and-white view of the world.

I am the one with the broad experience here, having left academia to stand
on my own two feet. If you ever do the same you will see that it was not my
view that was "narrow".

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/10/2007 10:09:55 AM
Matthias Blume wrote:
> So let's adopt C syntax then for all of ML.  Face it: To the majority
> of programmers, most of ML looks "weird".  The %whatever notation from
> C's printf is an acquired taste.  There is absolute nothing natural or
> simple about it, because the mapping from formatting characters to
> types and how they get printed is completely ad-hoc and not extensible.

Sure. At the end of the day this is simply about clarity. Printf is made
clearer by the fact that it is familiar to a lot of people.

If anyone can come up with something better than printf for an ML then
please do, but plain combinators aren't it, because they aren't concise and
they aren't clear.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/10/2007 10:12:13 AM
On Dec 10, 11:09 am, Jon Harrop <use...@jdh30.plus.com> wrote:
>
> I am conveying the concerns of your potential users to you.

You are projecting your own, very specific needs on other people. It
should be clear enough from a variety of usenet discussions that there
are lots of people not sharing your opinions and preferences on
everything.

> If you intend
> your work to never be used then that is fine but please do not pretend that
> it can be considered a success by any meaningful metric.

This brings us into the principal discussion about the value of
foundational research. Your argument implies that you consider such
research useless. I couldn't disagree more, but I have no interest in
diving into that discussion.

Btw, I never claimed my work to be a success.

> > If the PL community had
> > followed your kind of advice 20 years ago then you wouldn't be able to
> > enjoy OCaml and F# today.
>
> F# is primarily a product of industry.

...harvesting 35 years of a branch of PL history that a likely 99% of
people considered totally unsuccessful at times (and most actually
still do, AFAICT).

> I am the one with the broad experience here, having left academia to stand
> on my own two feet. If you ever do the same you will see that it was not my
> view that was "narrow".

I do not question your experience, but it is kind of obvious that your
tendency for absolutism occasionally compromises your judgement. (Or
it doesn't, and your style of rhetoric instead is part of some grand
strategy whose criterion for success I'm too naive to grasp. ;-) )
0
rossberg (600)
12/10/2007 11:02:04 AM
rossberg@ps.uni-sb.de wrote:
> On Dec 8, 13:35, Jon Harrop <use...@jdh30.plus.com> wrote:
>> rossb...@ps.uni-sb.de wrote:
>> > It is only your version that wouldn't scale at all.
>>
>> This is a joke. Not only did I write the book on it
> 
> Yet your version is the only one exhibiting the problem you described.

Consider the fact that my code sells but you can't even give SML away for
free and tell me again whose code has "problems"?

> What does a book have to do with it?

This is what I do for a living, i.e. there is no point in trying to tell me
that my code won't scale because I already scaled it.

>> but I continue to sell
>> production-quality software far beyond the sophistication of anything
>> relevant that SML has ever seen.
> 
> Again using your very personal metric of "sophistication", I suppose.
> There are many different dimensions to that.

Absolutely. If you compare with all other languages then I can claim that
our OCaml is 10x better than its nearest competitor (including commercial software
like Microsoft's Windows Presentation Foundation).

If you're talking about sophisticated programs using vector-matrix routines
then there is a lot of software that is much more complicated written in
C++, Java and Fortran.

But we're talking about SML. In SML, there is basically nothing at all in
this area beyond toy programs written for shootouts or lectures like this:

  http://www.cs.cornell.edu/courses/cs312/2003sp/hw/ps2/ps2.html

That is not production quality software and is nowhere near the level of
sophistication of something like Smoke. If there were anything at all
available in SML then I would agree that this is hopelessly subjective, but
there isn't.

>> >> which is hundreds of thousands of lines of code: probably longer than
>> >> anything ever written in SML.
>>
>> > Please drop your propaganda. There are certainly projects of that
>> > size.
>>
>> Where?
> 
> Excuse me if I won't spend my time compiling statistics for you, but
> the one data point I can easily give you is that the implementation of
> the Alice system alone consists of ~350K loc, 200K of which are in
> SML.

Ok. So our code has dozens of corporate users, so I know it is tested and
works. Who has taken the plunge with Alice ML and what do they use it for?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/10/2007 11:08:55 AM
rossberg@ps.uni-sb.de wrote:
> On Dec 10, 11:09 am, Jon Harrop <use...@jdh30.plus.com> wrote:
>>
>> I am conveying the concerns of your potential users to you.
> 
> You are projecting your own, very specific needs on other people.

I am conveying the requirements placed upon me by my customers, so these are
not "my specific needs". We're talking about the needs of tens of thousands
of potential ML users, most of whom do work of incredible importance in
completely unrelated fields.

I feel that the SML community almost completely ignores the needs of
potential users and I am trying to address that.

> It 
> should be clear enough from a variety of usenet discussions that there
> are lots of people not sharing your opinions and preferences on
> everything.

On the contrary, I know of only one person who is not a language researcher
and who chose SML for his work having examined OCaml and that is Vesa. The
SML community now has another potential user in Michele Simionato and he,
like me, is telling you that you should make IO easier but the only
response he gets is sprawling workarounds using advanced FP concepts that
completely miss his point: it doesn't make life easy.

>> If you intend
>> your work to never be used then that is fine but please do not pretend
>> that it can be considered a success by any meaningful metric.
> 
> This brings us into the principal discussion about the value of
> foundational research. Your argument implies that you consider such
> research useless. I couldn't disagree more, but I have no interest in
> diving into that discussion.

Foundational research is valuable only if it gets used. Granted, that may
take decades, but SML has already had decades and still doesn't see significant
use. Note that the only two MLs that are seeing widespread use both
incorporate many of the features that I detailed.

>> > If the PL community had
>> > followed your kind of advice 20 years ago then you wouldn't be able to
>> > enjoy OCaml and F# today.
>>
>> F# is primarily a product of industry.
> 
> ...harvesting 35 years of a branch of PL history that a likely 99% of
> people considered totally unsuccessful at times (and most actually
> still do, AFAICT).

If the PL community had followed my kind of advice 20 years ago then we'd
have a single SML implementation with bindings to all major libraries, a
production-quality IDE and hundreds of thousands of professional users. And
SML would include all the necessary syntactic sugar to make it easy for
newbies to learn.

>> I am the one with the broad experience here, having left academia to
>> stand on my own two feet. If you ever do the same you will see that it
>> was not my view that was "narrow".
> 
> I do not question your experience, but it is kind of obvious that your
> tendency for absolutism occasionally compromises your judgement. (Or
> it doesn't, and your style of rhetoric instead is part of some grand
> strategy whose criterion for success I'm too naive to grasp. ;-) )

My criterion for success is really very simple: help users to do their work
more efficiently by solving their problems for them. Someone in your
position and with your talent would have to put in relatively little effort
to build something of enormous practical value. I can't think of anything
more gratifying than building a foundation that improves scientific
research. Just imagine what that would achieve in the grand scheme of
things...

I predict that very interesting things will happen when someone produces the
first working MiniML with LLVM. Once that is done, people like me will be
able to build what they need on top of it quite easily.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/10/2007 11:54:57 AM
On Dec 10, 12:08 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
>
> > Yet your version is the only one exhibiting the problem you described.
>
> Consider the fact that my code sells but you can't even give SML away for
> free and tell me again whose code has "problems"?

The alleged scaling problems are a technical observation (that you
made yourself) and successfully selling code apparently changes
nothing about them.

> This is what I do for a living, i.e. there is no point in trying to tell me
> that my code won't scale because I already scaled it.

And why did you claim that there is "language deficiency" then and
that you need overloading to fix it?


> If you're talking about sophisticated programs using vector-matrix routines
> then there is a lot of software that is much more complicated written in
> C++, Java and Fortran.
>
> But we're talking about SML. In SML, there is basically nothing at all in
> this area beyond toy programs

And this area being the only one of relevance? How many theorem
provers with the sophistication of, say, Isabelle/HOL (SML) or Coq
(OCaml) have you written, and where are they used?

(Sorry for bringing up good old theorem provers again, but I can
hardly think of anything more "sophisticated".)

> Ok. So our code has dozens of corporate users, so I know it is tested and
> works. Who has taken the plunge with Alice ML and what do they use it for?

It is mainly used for teaching, and being a small research project
probably will never see commercial users. That never was its point
either. So?
0
rossberg (600)
12/10/2007 12:00:52 PM
rossberg@ps.uni-sb.de wrote:
> On Dec 10, 12:08 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
>> > Yet your version is the only one exhibiting the problem you described.
>>
>> Consider the fact that my code sells but you can't even give SML away for
>> free and tell me again whose code has "problems"?
> 
> The alleged scaling problems are a technical observation (that you
> made yourself) and successfully selling code apparently changes
> nothing about them.

Our commercial code doesn't use the functor-based approach. In fact, I don't
think it introduces any functors at all.

>> This is what I do for a living, i.e. there is no point in trying to tell
>> me that my code won't scale because I already scaled it.
> 
> And why did you claim that there is "language deficiency" then and
> that you need overloading to fix it?

Overloading makes our code a lot clearer and makes our lives a lot easier,
and makes the lives of our users a lot easier. We can and do use deficient
languages and implementations. Due to their deficiencies, some languages
are preferable to others. However, it would take very little to make SML
desirable for us as a language.

>> If you're talking about sophisticated programs using vector-matrix
>> routines then there is a lot of software that is much more complicated
>> written in C++, Java and Fortran.
>>
>> But we're talking about SML. In SML, there is basically nothing at all in
>> this area beyond toy programs
> 
> And this area being the only one of relevance?

We can consider any area you like. I believe the result will be the same.

> How many theorem provers with the sophistication of, say, Isabelle/HOL
> (SML) or Coq (OCaml) have you written, and where are they used? 

Without users, who knows whether or not they work. In fact, wasn't Coq
used to prove the correctness of OCaml's parallel GC that never worked? ;-)

Having said that, apparently Coq has more than twice as many installs on
Ubuntu machines as any SML compiler...

> (Sorry for bringing up good old theorem provers again, but I can
> hardly think of anything more "sophisticated".)

That's fine. Theorem provers are certainly extremely complicated pieces of
software. I don't disagree that MLs are very well suited to that. My point
is that they are often much less well suited to other kinds of software.

For example, I would say that C++ is preferable to SML for numerics because
its syntax is better, libraries are better and performance is better. OCaml
is preferable to C++ for numerics because it addresses those problems and
its syntax, although decidedly suboptimal, is offset by its functional-
programming capabilities. F# is much better than C++ for numerics because
it does a much better job of addressing these problems.

>> Ok. So our code has dozens of corporate users, so I know is tested and
>> works. Who has taken the plunge with Alice ML and what do they use it
>> for?
> 
> It is mainly used for teaching, and being a small research project
> probably will never see commercial users. That never was its point
> either. So?

Having users proves that software works. For example, I do not consider it a
coincidence that MLton crashes all the time and has few users but OCaml
rarely crashes and has many more users.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/10/2007 12:45:44 PM
On Dec 10, 6:45 am, Jon Harrop <use...@jdh30.plus.com> wrote:
> Having users proves that software works. For example, I do not consider it a
> coincidence that MLton crashes all the time

This is by no means the experience of the majority of MLton users
(which include users with >200K lines of code).
If you have specific bugs to report, please do so on the MLton mailing
lists (http://www.mlton.org/Contact).
Usenet should not be used as a bug reporting facility, for MLton or
any project.
0
12/10/2007 3:07:21 PM
On Dec 10, 12:54 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
> SML community now has another potential user in Michele Simionato

Well, I am playing with SML and I want to write something about it
because I am collaborating with an Italian technical magazine,
but I am not sure if I will become a real user of SML.
For instance, a few years ago I spent some time learning
Scheme, but in the end I never did any serious project
in it (the showstopper there was the lack of a module system).
In any case, I would consider SML for open source projects (i.e.
fun projects), not for enterprise projects, for a number of
reasons, mostly non-technical. Having printf or not would not make
any difference.

 Michele Simionato
0
12/10/2007 4:26:24 PM
On Dec 10, 4:26 pm, "michele.simion...@gmail.com"
<michele.simion...@gmail.com> wrote:
> On Dec 10, 12:54 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
>
> > SML community now has another potential user in Michele Simionato
>
> Well, I am playing with SML and I want to write something on it
> because I am collaborating with an Italian technical magazine,
> but I am not sure if I will become a real user of SML.
> For instance, a few years ago I spent some time learning
> Scheme, but at the end I never did any serious project
> in it (the showstopper there was the (lack of) module system).
> In any case, I would consider SML for open source projects (i.e
> fun projects) not for enterprise projects, for a number of
> reasons, mostly non technical. Having printf or not would not make
> any difference.
>
>  Michele Simionato

I know there is no such feature as a standardized Scheme module system.
However, a lot of Scheme implementations will give you such a feature.
If you dare to browse the Bigloo manual you will quickly realize that
modules in Bigloo are an important goody.
0
klohmuschel (196)
12/10/2007 5:36:33 PM
On Dec 10, 11:54 am, Jon Harrop <use...@jdh30.plus.com> wrote:

> My criterion for success is really very simple: help users to do their work
> more efficiently by solving their problems for them. Someone in your
> position and with your talent would have to put in relatively little effort
> to build something of enormous practical value. I can't think of anything
> more gratifying than building a foundation that improves scientific
> research. Just imagine what that would achieve in the grand scheme of
> things...

Scientific programming is really a diverse field. A lot of people want
simply this: libraries to read hdf5, binary format, etc. files from
satellite databases. Libraries for all the matrix stuff (blas, etc.).
I agree a lot of researchers simply take a brute force method: collect
the data, make the calculation and write a paper. I have never met a
colleague in my field who really does "abstract programming".

If you want to help me (a scientist for a living): give me some
bindings for Bigloo to important numerical libraries (blas, netcdf,
hdfeos, ...). Yes, I know OCaml has some interesting bindings, but then
there is the impediment that I don't like programming in OCaml.


> I predict that very interesting things will happen when someone produces the
> first working MiniML with LLVM. Once that is done, people like me will be
> able to build what they need on top of it quite easily.

What are the projects you are going to realize in case of
MiniML? I mean, what will MiniML give you that OCaml or F# (or whatever
they call it) cannot deliver?

Btw: I have never become accustomed to printf in C.

0
klohmuschel (196)
12/10/2007 5:48:13 PM
On Dec 10, 6:53 am, Matthias Blume <f...@my.address.elsewhere> wrote:
> You were trying to IMPLEMENT the combinators, which admittedly can be
> a little bit tricky.  (I speak from experience here, as you may know,
> since you seem to be familiar with SML/NJ's FormatComb.)  However,
> USING the combinators is pretty much trivial, and the error messages
> aren't really that bad (at least not worse than other type errors).

I am not convinced. Even just to USE combinators you need an idea of
what is happening. At the very least, the user must know what
higher-order functions are and how function composition works, all
things which are not required for printf. Also, there are many
different combinator libraries in SML, and a user will be naturally
inclined to think about which to use and how it works compared to
others. It is only when you have a unique standard syntax that the
user does not even think about what is happening internally.

Personally, I scratched my head reading the MLton wiki for a bit,
then I looked at the source code of FormatComb and I did not
understand it. At some point I looked at Philip Wadler's paper
("A prettier printer") too. I don't know exactly when, but at a
certain moment I got combinators by relating them to the theory of
group representations in function spaces (I have a background in
Theoretical Physics, so I am very familiar with operators in Hilbert
spaces and things like that). The set of strings is a monoid under
the "^" operator, and combinators are just a representation of
strings as operators in a function space, where "^" is mapped to the
composition law of operators, as usual. When I got this, it was easy
to understand what the code in FormatComb was doing, but otherwise I
could not make sense of it. Now, I don't think a user should be
required to know the theory of group representations just to be able
to print 2+2 :-(
Of course all IMHO,
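
The monoid-to-operators view above can be made concrete with a minimal
sketch in OCaml (not SML/NJ's actual FormatComb; the names `lit`,
`int`, `str` and `++` are illustrative). Each format fragment becomes a
function, and concatenating fragments becomes function composition:

```ocaml
(* Typed print combinators in continuation-passing style: the string
   monoid under "^" is represented as functions, so "^" on fragments
   becomes composition of those functions. *)

let lit s k acc = k (acc ^ s)               (* a literal fragment *)
let int k acc n = k (acc ^ string_of_int n) (* a %d-like hole *)
let str k acc s = k (acc ^ s)               (* a %s-like hole *)
let ( ++ ) f g k = f (g k)                  (* "^" mapped to composition *)

(* Run a format: identity continuation, empty accumulator. *)
let sprintf fmt = fmt (fun s -> s) ""

let () =
  print_endline (sprintf (lit "2 + 2 = " ++ int) 4);
  (* prints: 2 + 2 = 4 *)
  print_endline (sprintf (lit "x = " ++ int ++ lit ", s = " ++ str) 42 "hi")
  (* prints: x = 42, s = hi *)
```

The type of the whole format expression records how many holes it has
and of which types, which is exactly what makes the scheme type safe
without any special support in the language.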


           Michele Simionato
0
12/10/2007 7:29:12 PM
Matthew Fluet wrote:
> On Dec 10, 6:45 am, Jon Harrop <use...@jdh30.plus.com> wrote:
>> Having users proves that software works. For example, I do not consider
>> it a coincidence that MLton crashes all the time
> 
> This is by no means the experience of the majority of MLton users
> (which include users with >200K code).

May I ask who?

> If you have specific bugs to report, please do so on the MLton mailing
> lists (http://www.mlton.org/Contact).
> Usenet should not be used as a bug reporting facility, for MLton or
> any project.

I do my best to provide bug reports but, unfortunately, I don't have time to
work out much beyond "it segfaulted". I would suggest putting the ray
tracers into the MLton test code though.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/10/2007 10:04:05 PM
Matthias Blume wrote:
> You were trying to IMPLEMENT the combinators, which admittedly can be
> a little bit tricky.  (I speak from experience here, as you may know,
> since you seem to be familiar with SML/NJ's FormatComb.)  However,
> USING the combinators is pretty much trivial, and the error messages
> aren't really that bad (at least not worse than other type errors).

If that is true then a good combinator library should be put into the
stdlib.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/10/2007 10:05:05 PM
klohmuschel@yahoo.de wrote:
> What would be the projects you are going to realize in cases of
> MiniML. I mean what will MiniML give you OCaml or F# (or how they call
> it) cannot deliver to you?

Lots of things. :-)

Advantages over OCaml:

. Freedom for the community to innovate the language, which primarily means
adding all of the obvious things that OCaml should have (a try..finally
construct, fuller pattern matching, a complete stdlib). The OCaml community
have already done more innovation than the OCaml developers themselves, only
to have their contributions refused for entry into the OCaml distribution.

. Operator overloading.

. High-performance FFI by using C-friendly data structures rather than
OCaml's silly 4M-element arrays, Bigarrays and "Raw" arrays.

. Type safe marshalling.

. Free polymorphism.

. Generic printing.

. Per-type functions (e.g. comparison).

. Machine-precision ints rather than 31- or 63-bit ints.

. No weird boxing problems, e.g. having to add "+. 0.0" to the end of
numeric functions to improve performance.

. Much better performance on numeric code that exploits abstractions.

. DLLs.

. Native-code performance from the REPL.

. Better REPL: e.g. saving and loading of state.

. Lots more useful functionality in the stdlib and none of the cruft.

. Commerce friendly, i.e. no brittle interfaces making it practically
impossible to sell libraries written in OCaml.

Advantages over F#:

. Platform independence (many scientists and engineers don't run Windows).

. Faster symbolics thanks to a custom GC that isn't optimized for C#
programs.

. No .NET baggage, e.g. different types for closures and raw functions.

. Better support for structural types, e.g. .NET has trouble reloading
marshalled data from a different REPL instantiation.

Generally, I want the stdlib to include support for modern graphics and GUI
programming, e.g. OpenGL and GTK+/Qt.

LLVM makes it extremely easy to generate very high performance numerical
code (including SIMD instructions), which makes it the perfect foundation
for a technical computing platform.

There are several very talented people looking at doing the same thing. I'm
sure we won't have trouble collaborating and LLVM will make it incredibly
easy to build something useful in a relatively short amount of time.

> Btw: I have never become accustomed to printf in C.

Printf is very useful in OCaml and F#.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/10/2007 10:19:13 PM
On Dec 10, 4:04 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
> Matthew Fluet wrote:
> > On Dec 10, 6:45 am, Jon Harrop <use...@jdh30.plus.com> wrote:
> >> Having users proves that software works. For example, I do not consider
> >> it a coincidence that MLton crashes all the time
>
> > If you have specific bugs to report, please do so on the MLton mailing
> > lists (http://www.mlton.org/Contact).
> > Usenet should not be used as a bug reporting facility, for MLton or
> > any project.
>
> I do my best to provide bug reports but, unfortunately, I don't have time to
> work out much beyond "it segfaulted".

Verifiably untrue.  No messages from 'Jon Harrop' or any
'*@ffconsultancy.com' e-mail address have ever been sent to the MLton
mailing lists.

> I would suggest putting the ray
> tracers into the MLton test code though.

All of the ray tracers from http://www.ffconsultancy.com/languages/ray_tracer/benchmark.html
compile and run without errors using MLton 20070826 on both
amd64-linux and x86-darwin.

In the future, please refrain from extrapolating from an unreported
and unreproducible error to "MLton crashes all the time".
0
12/10/2007 11:33:38 PM
On Dec 10, 10:19 pm, Jon Harrop <use...@jdh30.plus.com> wrote:

>
> Lots of things. :-)
>
> Advantages over OCaml:
>
> . Freedom for the community to innovate the language, which primarily means
> adding all of the obvious things that OCaml should have (a try..finally
> construct, fuller pattern matching, a complete stdlib). The OCaml community
> have already done more innovation than the OCaml developers themselves only
> to have their contributions refused for entry into the OCaml distribution.
>
> . Operator overloading.
>
> . High-performance FFI by using C-friendly data structures rather than
> OCaml's silly 4M-element arrays, Bigarrays and "Raw" arrays.
>
> . Type safe marshalling.
>
> . Free polymorphism.
>
> . Generic printing.
>
> . Per-type functions (e.g. comparison).
>
> . Machine-precision ints rather than 31- or 63-bit ints.
>
> . No weird boxing problems, e.g. having to add "+. 0.0" to the end of
> numeric functions to improve performance.
>
> . Much better performance on numeric code that exploits abstractions.
>
> . DLLs.
>
> . Native-code performance from the REPL.
>
> . Better REPL: e.g. saving and loading of state.
>
> . Lots more useful functionality in the stdlib and none of the cruft.
>
> . Commerce friendly, i.e. no brittle interfaces making it practically
> impossible to sell libraries written in OCaml.
>
> Advantages over F#:
>
> . Platform independence (many scientists and engineers don't run Windows).
>
> . Faster symbolics thanks to a custom GC that isn't optimized for C#
> programs.
>
> . No .NET baggage, e.g. different types for closures and raw functions.
>
> . Better support for structural types, e.g. .NET has trouble reloaded
> marshalled data from a different REPL instantiation.
>
> Generally, I want the stdlib to include support for modern graphics and GUI
> programming, e.g. OpenGL and GTK+/Qt.
>
> LLVM makes it extremely easy to generate very high performance numerical
> code (including SIMD instructions), which makes it the perfect foundation
> for a technical computing platform.
>
> There are several very talented people looking at doing the same thing. I'm
> sure we won't have trouble collaborating and LLVM will make it incredibly
> easy to build something useful in a relatively short amount of time.
>
> > Btw: I have never become accustomed to printf in C.
>
> Printf is very useful in OCaml and F#.

Have to top post, I am lazy. That is interesting. Your statements come
somewhat surprisingly in light of your strong voicing for OCaml.

But why don't you start contributing to ML?

But your comment on OCaml and C is interesting. Personally, I think a
good (and easy to use) foreign function interface to C is key. Bigloo
for example has a very nice C interface.

I believe Bigloo also has a binding to printf.
0
klohmuschel (196)
12/10/2007 11:36:50 PM
In article <<13loub93h9te32a@corp.supernews.com>>,
Jon Harrop <usenet@jdh30.plus.com> wrote:
> Neelakantan Krishnaswami wrote:
>> In article <<13lo9ap8iiemh17@corp.supernews.com>>,
>> Jon Harrop <usenet@jdh30.plus.com> wrote:
>>> Using user-defined print functions and the %a format specifier.
>> 
>> No, that's not a format specifier for lists or arrays.
>> 
>> What %a does is apply a function to a value and splice in the result;
>> it's basically the same as %s, only it does a function application for
>> us -- we write:
>> 
>>   printf "%a" print_foo foo
>> 
>> instead of
>>  
>>   printf "%s" (sprint_foo foo)
>> 
>> This can save memory when building a large string, but it does not do
>> what I asked.
> 
> What more do you want (exactly)?

I'd like an inline format code for lists, something like:

  printf "This -- %{list int} -- is a list" [1; 2; 3]

I find this nicer than

  print (l"This -- " o list int o l" -- is a list") [1; 2; 3]

which in turn is nicer than
 
  printf "This -- %a -- is a list" (print_list print_int) [1; 2; 3]

which is about the same as, but potentially more efficient than

  printf "This -- %s -- is a list" (print_list print_int [1; 2; 3])

If I were serious about building something like this, what I'd do is
1) write a really solid extensible combinator library for printing,
with stuff like pretty-printing, 2) take camlp4/5 and build a custom
syntax for applying these combinators, and 3) try to get it into one
of the Ocaml community extensions to the stdlib like extlib.

This way, we can get both a convenient syntax and extensibility,
without having to make ad-hoc extensions to the type system.
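
For concreteness, the "%a" variant in the comparison above can be
sketched as follows. In OCaml's real Printf, %a consumes two arguments:
a printer of type out_channel -> 'a -> unit and the value to print;
`print_int` and `print_list` here are illustrative helpers, not stdlib
functions:

```ocaml
(* An element printer suitable for %a. *)
let print_int oc n = Printf.fprintf oc "%d" n

(* A list printer parameterized by an element printer: this is the
   (print_list print_int) combination used with %a below. *)
let print_list elem oc xs =
  output_string oc "[";
  List.iteri
    (fun i x ->
       if i > 0 then output_string oc "; ";
       elem oc x)
    xs;
  output_string oc "]"

let () =
  Printf.printf "This -- %a -- is a list\n" (print_list print_int) [1; 2; 3]
  (* prints: This -- [1; 2; 3] -- is a list *)
```

The hypothetical "%{list int}" syntax would let the compiler insert the
(print_list print_int) argument itself, which is why it needs either
type-system support or a syntax extension such as camlp4/5.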

>>>> It seems reasonable to me to have some syntactic sugar to make these
>>>> combinator expressions prettier, but adding format strings as a
>>>> second-class construct to the core language is a clear design wart.
>>> 
>>> I think you are greatly outnumbered by happy OCaml users.
>> 
>> You pack two fallacies into one sentence here.
>> 
>> First, you introduce a false dichotomy between me and "happy Ocaml
>> users", when you have no actual evidence that I am an unhappy Ocaml
>> user. Second, you appeal to popularity as evidence of good design.
>> This is of course false -- witness the fact that C++ is more popular
>> than ML.
>>
>> This style of argument is extremely dishonest; please stop using it.

A meta point: I found your response below basically well thought out
and sensible. Below, you make a coherent argument, with actual evidence
and sound reasoning. Even where I disagree, I can find value in your
argument. OTOH, I didn't find value in your previous response, since
it lacked these features.

> I think that you (and Andreas) will not understand the point I am
> making until you quit academia, found a company and earn your living
> through direct sales. When you need sales to eat, you quickly learn
> that no users is the definition of failure.

I actually entered academia after doing a startup. Even there, there
are a lot of definitions of failure, including "I'm making money by
doing things I don't care about," and "we don't have the time to
really do something different."


> I would like to inspire you to build great tools. OCaml is a superb
> example of success in this context. Hundreds of bioinformaticians
> are among our customers thanks to OCaml. You could build even better
> tools.
>
> I'm trying to tell you that OCaml's success in technical computing
> is directly related to the fact that it augments SML with useful
> features like printf. If you adopt these kinds of features then you
> can do the world a great service by building tools that are used,
> for example, to cure cancer.
>
> I can't think of anything more gratifying but I'm finding it
> incredibly difficult to move the SML community in a productive
> direction.

So, I fully agree with you that comprehensive libraries and a good
performance story are valuable if you want to see adoption, and I
further agree with you that solid I/O, graphics, and concurrency
libraries are valuable, both in their own right and to ease adoption.

However, I am not in grad school to learn about adoption. If that were
my interest, I would be interested in design-as-consolidation: I'd be
trying to build Haskell + functors, or ML + typeclasses, without
worrying too much about the potential overlap. I'd also try to design
libraries that follow familiar design principles, but make use of all
the convenience affordances of overloading and parameterized modules.

But: what I'm really interested in is learning ideas that will
hopefully yield the *next* order of magnitude improvement. I want a
language that's as much better than the ML/Haskell family as that
family was better than the designs it superseded. I want fewer bugs,
faster runtimes, AND shorter programs than the ML/Haskell approach can
manage.

This necessarily means a willingness to discard familiar approaches,
and to say "there is not a clean story here, so we'll do without until
there is one." Yeah, this definitely sucks for adoption, but that's 
a necessary price.

-- 
Neel R. Krishnaswami
neelk@cs.cmu.edu
0
neelk (298)
12/10/2007 11:44:48 PM
Matthew Fluet wrote:
> On Dec 10, 4:04 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
>> I do my best to provide bug reports but, unfortunately, I don't have time
>> to work out much beyond "it segfaulted".
> 
> Verifiably untrue.  No messages from 'Jon Harrop' or any
> '*@ffconsultancy.com' e-mail address have ever been sent to the MLton
> mailing lists.

I posted that here not there.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/11/2007 12:04:28 AM
klohmuschel@yahoo.de wrote:
> Have to top post i am lazy. That is interesting. Your statements come
> somewhat surprisingly in light of your strong voicing for OCaml.

Don't get me wrong: OCaml is still the best there is on Linux and Mac OS X.
I just want to do better. :-)

> But why do not you start contributing to ML?

That is exactly what I plan to do. If you mean "why do I not contribute to
OCaml" then there are several reasons:

. The OCaml distribution is controlled by INRIA, who reject virtually all
contributions.

. The OCaml developers refuse to add even trivial additions themselves, like
try..finally, let alone overloading.

. Although package maintainers for the major Linux and Mac distros could
theoretically replace the OCaml package with a fork, they are not willing
to do so.

. OCaml is not under a conventional open source license and, in particular,
requires you to distribute all changes as patches to INRIA's core, making
it tedious to fork.

Finally, OCaml contains enough baggage and LLVM is sufficiently powerful
that I think it is well worth investigating a new implementation.

> But your comment of OCaml and C is interesting. Personall I think a
> good foreign (an easy to use) function interface to C is key. Bigloo
> for example has a very nice C interface.

Yes. I find an expressive static type system to be enormously beneficial
when writing non-trivial programs though, so I won't be developing anything
serious in Lisp or Scheme.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/11/2007 12:39:48 AM
On Dec 10, 6:04 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
> Matthew Fluet wrote:
> > On Dec 10, 4:04 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
> >> I do my best to provide bug reports but, unfortunately, I don't have time
> >> to work out much beyond "it segfaulted".
>
> > Verifiably untrue.  No messages from 'Jon Harrop' or any
> > '...@ffconsultancy.com' e-mail address have ever been sent to the MLton
> > mailing lists.
>
> I posted that here not there.

As I said, Usenet should not be considered a bug reporting facility,
for MLton or for any project.

Furthermore, a Google search in comp.lang.functional for 'Harrop MLton
crash', 'Harrop MLton segfault', 'Harrop MLton segmentation', or
'Harrop MLton fault' fails to turn up any post that describes MLton
crashing, let alone one that could be considered an actual bug
report.  If you have evidence to the contrary, please direct me
towards it.

Finally, your own 'Ray tracer language comparison' web pages suggest
that all of the ray tracer programs were successfully compiled and run
with MLton 20070826.

If you cannot explain how you came to the conclusion that 'MLton
crashes all the time', then please refrain from making such
unsubstantiated claims in the future.

0
12/11/2007 12:45:36 AM
Paul Rubin skrev:
> Ulf Wiger <ulf.wiger@e-r-i-c-s-s-o-n.com> writes:
>> (*) Imagine a construct like
>>     {ok,[M,F]} = io:fread('',"~a~a"), M:F().
>> which will read two atoms from the tty and call a function using
>> those two atoms. There is no way to check type safety without
>> running it. Dialyzer will not warn, since it cannot determine that
>> it actually is a type error.
> 
> Is there a reason it can't warn when it sees something like that?
> I can understand that it can't totally separate type errors from
> type correctness.  What I'm wondering is whether the uncertain
> part affects so much code that it can't be usefully flagged.

I believe that depends on the users' expectations. Dialyzer tries
to use an approach that is least likely to assault the user with
possibly irrelevant warnings. The goal is that any time Dialyzer
does tell you something, you should take the time to inspect the
code in question.

There are cases where some people believe a warning is in order,
and where others disagree. One such case is improperly formed
lists. It is legal to construct a [Head | Tail] term where
Tail is not a list. Some think that this is an abomination, and
others think of it as a useful optimization. Dialyzer lets you
turn off warnings about improper lists.

It's possible that Dialyzer will become more aggressive over time.
It depends on to what extent the Erlang community finds the type
analysis useful.

BR,
Ulf W
0
ulf.wiger (50)
12/11/2007 8:30:56 AM
Matthew Fluet wrote:
> On Dec 10, 6:04 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
>> I posted that here not there.
> 
> As I said, Usenet should not be considered a bug reporting facility,
> for MLton or for any project.

Sure. As I said, I cannot afford the time to chase down bugs in MLton or
report them elsewhere.

> Furthermore, a Google search in comp.lang.functional for 'Harrop MLton
> crash', 'Harrop MLton segfault', 'Harrop MLton segmentation', or
> 'Harrop MLton fault' fails to turn up any post that describes MLton
> crashing, let alone one that could be considered an actual bug
> report.  If you have evidence to the contrary, please direct me
> towards it.

Yes, I couldn't find my posts either. IIRC, they were in response to the
announcement that MLton had a 64-bit codegen. The first time I tried it, it
segfaulted. Then I waited for the next releases, tried that and it also
segfaulted.

> Finally, your own 'Ray tracer language comparison' web pages suggest
> that all of the ray tracer programs were successfully compiled and run
> with MLton 20070826.

They compile fine now, yes.

> If you cannot explain how you came to the conclusion that 'MLton
> crashes all the time', then please refrain from making such
> unsubstantiated claims in the future.

Following encouragement from the SML community, I tried MLton several times
this year. Two out of three of the releases were unusable due to
segfaulting. In the absence of any evidence that MLton worked, I
concluded "MLton crashes all the time". The latest version of MLton has not
segfaulted but I have hardly tested it.

Incidentally, I found that -align 8 makes MLton much faster on AMD64.
Perhaps it should be the default?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/11/2007 10:01:34 AM
On Dec 11, 4:01 am, Jon Harrop <use...@jdh30.plus.com> wrote:
> Matthew Fluet wrote:
> > On Dec 10, 6:04 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
> >> I posted that here not there.
>
> > As I said, Usenet should not be considered a bug reporting facility,
> > for MLton or for any project.
>
> Sure. As I said, I cannot afford the time to chase down bugs in MLton or
> report them elsewhere.

Yet you have ample time to post your dissatisfaction on c.l.f.
No one is asking you to fix bugs, simply properly report them.

> > Furthermore, a Google search in comp.lang.functional for 'Harrop MLton
> > crash', 'Harrop MLton segfault', 'Harrop MLton segmentation', or
> > 'Harrop MLton fault' fails to turn up any post that describes MLton
> > crashing, let alone one that could be considered an actual bug
> > report.  If you have evidence to the contrary, please direct me
> > towards it.
>
> Yes, I couldn't find my posts either. IIRC, they were in response to the
> announcement that MLton had a 64-bit codegen. The first time I tried it, it
> segfaulted. Then I waited for the next releases, tried that and it also
> segfaulted.

MLton has had exactly one release with a 64-bit codegen (MLton
20070826).
While there has been (exactly) one report of this version of MLton
producing a program that could segfault, I am certain that you have
not experienced this bug.  And there have been no reports of MLton
itself segfaulting (on any platform).
Any other versions (e.g., the *experimental* package referenced by
http://groups.google.com/group/comp.lang.functional/msg/f17fa3e02fe095b1)
were pre-releases, whose only raison d'etre was for early adopters to
help improve the official release by reporting bugs.

> > If you cannot explain how you came to the conclusion that 'MLton
> > crashes all the time', then please refrain from making such
> > unsubstantiated claims in the future.
>
> Following encouragement from the SML community, I tried MLton several times
> this year. Two out of three of the releases were unusable due to
> segfaulting. In the absence of any evidence that MLton worked, I
> concluded "MLton crashes all the time". The latest version of MLton has not
> segfaulted but I have hardly tested it.

As I said, any 64-bit version of MLton prior to 20070826 was a pre-
release, experimental version.
By your own claims, the release version of MLton 20070826 has worked
fine.
At the very least, if you are unwilling to report bugs, then please
properly qualify your claims: "I had difficulties with pre-release
versions of MLton crashing, but the release version has worked (for my
limited testing)" is the reality; "MLton crashes all the time" is
unwarranted hyperbole.

> Incidentally, I found that -align 8 makes MLton much faster on AMD64.
> Perhaps it should be the default?

Unfortunately, I can't afford the time to chase down suggestions from
random Usenet posts.
0
12/11/2007 2:01:46 PM
Matthew Fluet <Matthew.Fluet@gmail.com> wrote:
> Jon:
> > Matthew Fluet wrote:
> > > Jon:
> > >> I posted that here not there.
> >
> > > As I said, Usenet should not be considered a bug reporting facility,
> > > for MLton or for any project.
> >
> > Sure. As I said, I cannot afford the time to chase down bugs in MLton or
> > report them elsewhere.

> Yet you have ample time to post your dissatisfaction on c.l.f.

Not only here, but Jon seems to have enough spare time to make such
claims on other forums as well:

  http://www.mail-archive.com/jvm-languages@googlegroups.com/msg00170.html

-Vesa Karvonen
0
12/11/2007 2:26:58 PM
Matthew Fluet wrote:
> Yet you have ample time to post your dissatisfaction on c.l.f.
> No one is asking you to fix bugs, simply properly report them.
> ...
> Unfortunately, I can't afford the time to chase down suggestions from
> random Usenet posts.

In other words, the maintainers are not willing to put the time and effort
in but they expect me to.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/11/2007 3:18:08 PM
Vesa Karvonen wrote:
> Not only here, but Jon seems to have enough spare time to make such
> claims on other forums as well:
> 
>   http://www.mail-archive.com/jvm-languages@googlegroups.com/msg00170.html

If you don't want me to review SML compilers (prerelease or not) then don't
advertise them.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/11/2007 3:22:41 PM
On Dec 11, 9:18 am, Jon Harrop <use...@jdh30.plus.com> wrote:
> Matthew Fluet wrote:
> > Yet you have ample time to post your dissatisfaction on c.l.f.
> > No one is asking you to fix bugs, simply properly report them.
> > ...
> > Unfortunately, I can't afford the time to chase down suggestions from
> > random Usenet posts.
>
> In other words, the maintainers are not willing to put the time and effort
> in but they expect me to.

Not at all.  I'm simply pointing out that Usenet is not a forum for
MLton development; the proper forums are the MLton mailing lists.
No one can "put the time and effort" into fixing bugs that they don't
know about (and can't reproduce).  And (at least in my opinion), a
developer should give priority to issues raised by users who are
willing to work with them (by submitting bug reports, by participating
in the proper development forums, etc.).
If you want to have a discussion on the pros and cons of '-align 8' as
a default for the amd64 platform, I'm willing to have it on the MLton
mailing lists.
0
12/11/2007 3:47:58 PM
On Dec 11, 4:01 am, Jon Harrop <use...@jdh30.plus.com> wrote:
> Matthew Fluet wrote:

> > As I said, Usenet should not be considered a bug reporting facility,
> > for MLton or for any project.
>
> Sure. As I said, I cannot afford the time to chase down bugs in MLton or
> report them elsewhere.

Ok, let me get this straight:  You have time to post a gazillion
messages to various newsgroups and get into lengthy staircases ABOUT a
bug report that you do not have time to file?  Are you intentionally
trying to be funny, or is it just that you can't help it?

> Following encouragement from the SML community, I tried MLton several times
> this year. Two out of three of the releases were unusable due to
> segfaulting. In the absence of any evidence that MLton worked, I
> concluded "MLton crashes all the time". The latest version of MLton has not
> segfaulted but I have hardly tested it.

You are one fine scientist!  "2 out of 3" = "all the time"?
(Even if it were true that MLton crashes 2 out of 3 times -- which for
the vast majority of people it is not -- your conclusion would be
slanderous.)

F.
0
12/11/2007 4:03:48 PM
On Dec 11, 4:22 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
> >  http://www.mail-archive.com/jvm-languages@googlegroups.com/msg00170.html
>
> If you don't want me to review SML compilers (prerelease or not) then don't
> advertise them.

You seriously call unsubstantiated bashing in passing a "review"??
0
rossberg (600)
12/11/2007 4:38:34 PM
On Dec 11, 12:44 am, Neelakantan Krishnaswami <ne...@cs.cmu.edu>
wrote:

>>   printf "This -- %{list int} -- is a list" [1; 2; 3]

I like that; it would solve all my concerns (it may use combinators
internally, but they are not exposed to the user, so he does not
have to think too much). Unfortunately it cannot be implemented
in SML without macros :-(

 Michele Simionato
0
12/11/2007 4:45:50 PM
F.Bellheim@gmail.com wrote:
> On Dec 11, 4:01 am, Jon Harrop <use...@jdh30.plus.com> wrote:
>> Matthew Fluet wrote:
>> > As I said, Usenet should not be considered a bug reporting facility,
>> > for MLton or for any project.
>>
>> Sure. As I said, I cannot afford the time to chase down bugs in MLton or
>> report them elsewhere.
> 
> Ok, let me get this straight:  You have time to post a gazillion of
> messages to various newsgroups and get into lengthy staircases ABOUT a
> bugreport that you do not have time to file?

If the MLton team want bug reports then they should garner users who would
test their software. That is precisely what I've been advocating in this
thread.

> (Even if it were true that MLton crashes 2 out of 3 times -- which for
> the vast majority of people it is not -- your conclusion would be
> slanderous.)

"The vast majority of people"? Just how many people do you think have
ever tried to compile anything using the 64-bit MLton?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/11/2007 5:28:56 PM
rossberg@ps.uni-sb.de wrote:
> On Dec 11, 4:22 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
>> > 
http://www.mail-archive.com/jvm-languages@googlegroups.com/msg00170.html
>>
>> If you don't want me to review SML compilers (prerelease or not) then
>> don't advertise them.
> 
> You seriously call unsubstantiated bashing in passing a "review"??

When the software does nothing more than segfault there is little else that
I can say. At least it is now on the ray tracer language comparison. :-)

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/11/2007 5:35:44 PM
michele.simionato@gmail.com wrote:
> On Dec 11, 12:44 am, Neelakantan Krishnaswami <ne...@cs.cmu.edu>
> wrote:
>>>   printf "This -- %{list int} -- is a list" [1; 2; 3]
> 
> I like that, it would solve all my concerns (it may use combinators
> internally, but they are not exposed to the user, he does not
> have to think too much). Unfortunately it cannot be implemented
> in SML without macros :-(

Yes. You need a better ML, which is exactly what I'm angling for... :-)

IMHO, this is an area where OCaml is already significantly better than SML
and F# is already significantly better than OCaml.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/11/2007 5:40:45 PM
On Dec 11, 6:35 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
>
> > You seriously call unsubstantiated bashing in passing a "review"??
>
> When the software does nothing more than segfault there is little else that
> I can say.

Good for you that people who experienced the same with your software
(like recently reported on the Caml mailing list) reacted more
constructively and politely asked for advice, when they could have as
well started running around telling everybody that your software just
crashes all the time.
0
rossberg (600)
12/11/2007 6:15:18 PM
Neelakantan Krishnaswami wrote:
> I'd like an inline format code for lists, something like:
> 
>   printf "This -- %{list int} -- is a list" [1; 2; 3]
> 
> I find this nicer than
> 
>   print (l"This -- " o list int o l" -- is a list") [1; 2; 3]
> 
> which in turn is nicer than
>  
>   printf "This -- %a -- is a list" (print_list print_int) [1; 2; 3]
> 
> which is about the same as, but potentially more efficient than
> 
>   printf "This -- %s -- is a list" (print_list print_int [1; 2; 3])

What about F#'s:

  printf "This -- %A -- is a list" [1; 2; 3]

> If I were serious about building something like this, what I'd do is
> 1) write a really solid extensible combinator library for printing,
> with stuff like pretty-printing, 2) take camlp4/5 and build a custom
> syntax for applying these combinators, and 3) try to get it into one
> of the Ocaml community extensions to the stdlib like extlib.

I think it is a bad idea to reach for advanced concepts from FP like
functors and combinators when they aren't necessary. Plenty of languages
provide much more usable support for printing and I think F# is very right
to simply absorb their functionality.

>>> This style of argument is extremely dishonest; please stop using it.
> 
> A meta point: I found your response below basically well-thought and
> sensible. Below, you make a coherent argument, with actual evidence
> and sound reasoning. Even where I disagree, I can find value in your
> argument. OTOH, I didn't find value in your previous response, since
> it lacked these features.

Sorry: I was losing my rag trying to convey my beliefs convincingly.

>> I can't think of anything more gratifying but I'm finding it
>> incredibly difficult to move the SML community in a productive
>> direction.
> 
> So, I fully agree with you that comprehensive libraries and a good
> performance story are valuable if you want to see adoption, and I
> further agree with you that solid I/O, graphics, and concurrency
> libraries are valuable, both in their own right and to ease adoption.
> 
> However, I am not in grad school to learn about adoption. If that were
> my interest, I would be interested in design-as-consolidation: I'd be
> trying to build Haskell + functors, or ML + typeclasses, without
> worrying too much about the potential overlap. I'd also try to design
> libraries that follow familiar design principles, but make use of all
> the convenience affordances of overloading and parameterized modules.
> 
> But: what I'm really interested in is learning ideas that will
> hopefully yield the *next* order of magnitude improvement. I want a
> language that's as much better than the ML/Haskell family as that
> family was than the designs it superseded. I want fewer bugs, faster
> runtimes, AND shorter programs than the ML/Haskell approach can manage.
> 
> This necessarily means a willingness to discard familiar approaches,
> and to say "there is not a clean story here, so we'll do without until
> there is one." Yeah, this definitely sucks for adoption, but that's
> a necessary price.

You are clearly setting your goals high, which I have a lot of respect for.
However, our goals are quite different. I want to turn ML theory into
practice by building a platform with a big userbase composed of primarily
professionals. This requires an implementation that marries the
foundational technology from existing MLs with the practically-important
gizmos from outside ML.

I'm not sure I can do this by myself and I'm not sure I can persuade the
people who know ML to help because their purpose is to do research, and
this is not research.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/11/2007 6:34:36 PM
rossberg@ps.uni-sb.de wrote:
> Good for you that people who experienced the same with your software
> (like recently reported on the Caml mailing list) reacted more
> constructively and politely asked for advice, when they could have as
> well started running around telling everybody that your software just
> crashes all the time.

On the contrary, the reliability of our compiled OpenGL software written in
OCaml was so poor that we stopped selling it and have never sold anything
similar since. Instead, our only option is to sell source code licenses (if
you compile it yourself then it works) but they are prohibitively expensive
for most users.

There is no logical reason why OCaml should be this commerce-unfriendly and
I would dearly love to remedy the situation.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/11/2007 6:39:45 PM
michele.simionato@gmail.com <michele.simionato@gmail.com> wrote:
[...]
> >>   printf "This -- %{list int} -- is a list" [1; 2; 3]

> I like that, it would solve all my concerns (it may use combinators
> internally, but they are not exposed to the user, he does not
> have to think too much). Unfortunately it cannot be implemented
> in SML without macros :-(

In SML, you can implement combinators that let you write:

   printf`"This -- "%(list int)`" -- is a list"$ [1, 2, 3]

Actually, since this seems to be such an important issue for some
people, I'll add a Format module providing printf and scanf to my
Extended Basis library this week (it is 21:14, so I'm not adding them
today).

-Vesa Karvonen
0
12/11/2007 7:15:31 PM
michele.simionato@gmail.com <michele.simionato@gmail.com> wrote:
[...]
> [f4]  printf ("The square of %s is %s \n", [x, y])

> [...] OTOH, [f4] is more dynamic. Suppose for instance you want to
> define a template language to generate Web pages; if you change the
> template at runtime, by adding additional placeholders, you could just
> append the required additional arguments to the argument list, whereas
> using [f3] you would have to add the arguments by hand to the source
> code and to recompile the function containing the printf expression.

Sorry, but what you say above makes little sense to me in many respects.
For example, using printf to generate web pages sounds silly to me.  I
would not recommend such an approach.  For example, it would be much
better to use combinators that ensure that you are actually generating
valid HTML in as many respects as possible.  Also, if I understand you
correctly, you are suggesting doing something like

  val template = inputLine file
  val arguments = readArgs file
  ...
  printf (template, arguments)

which is pretty much a recipe for disaster.  Why so?  Because printf uses
positional arguments.  The kind of templating you, I assume, are talking
about really isn't what printf should be used for.  If I would for some
reasonable reason go with a dynamic templating approach, I would rather
define a substitution function that would be given a template containing
references to named variables using some easily recognizable syntax and a
mapping of names to strings.
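To make that concrete, here is a minimal sketch of such a substitution
function in OCaml (the `render` name, the `${name}` placeholder syntax,
and the use of the `str` library are illustrative choices, not an
existing library):

```ocaml
(* Named-placeholder templating: substitute ${name} from an association
   list of names to strings, instead of printf's positional arguments.
   Requires linking the str library. *)
let render (env : (string * string) list) (template : string) : string =
  Str.global_substitute
    (Str.regexp "\\${\\([A-Za-z_]+\\)}")
    (fun whole ->
       let name = Str.matched_group 1 whole in
       try List.assoc name env
       with Not_found -> "${" ^ name ^ "}")  (* leave unknown names as-is *)
    template

let () =
  print_endline
    (render [("greeting", "hello"); ("user", "michele")]
       "${greeting} ${user}")
(* prints: hello michele *)
```

Unlike a positional format string, reordering or adding placeholders in
the template does not require touching the call site beyond extending
the environment.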

> My main gripe about combinators is that even if their implementation is
> deceptively simple, they are actually far from trivial.  How would you
> explain the error messages to a newbie?

I can't explain an error message I haven't seen.  Do you have an
example of an actual error message from the use of such combinators?
If you wish to argue that the error messages will be difficult to
understand and diagnose, please post short (incorrect) programs whose
compilation produces such error messages and we can then take a look
at them.  I will be compiling the programs with MLton, which usually
gives very clear error messages that show precisely the parts of types
that do not match and the expression that cause the error.  In fact,
here is an example, having two different errors, using the Fold based
printf implementation from http://mlton.org/Printf :

val () = printf `"Int="I`"  Bool="B`"  Real="R`"\n" $ 1 "false" 2.0
val () = printf `"Int="I`"  Bool="B"  Real="R`"\n" $ 1 false 2.0

From the above, MLton gives the following messages:

$ mlton -stop tc printf-error.sml
Error: printf-error.sml 83.10.
  Function applied to incorrect argument.
    expects: [bool]
    but got: [string]
    in: (((((((((((((printf `) "Int=") I)  ...   "\n") $) 1) "false"
Error: printf-error.sml 84.10.
  Function applied to incorrect argument.
    expects: [(TextIO.outstream
               * (((unit -> unit) -> ???) -> (unit -> ???) -> int -> bool -> ???))
              * (??? * (((unit -> ???) -> ???) -> (??? -> unit) -> ???) -> ???)
              -> ???]
    but got: [string]
    in: ((((((printf `) "Int=") I) `) "  Bool=") B) "  Real="
compilation aborted: parseAndElaborate reported errors

IMHO, the first error message simply needs no further explanation.
The first type in the second error message may look more complicated,
but I think it is really easy to diagnose thanks to pinpointing the
expression.  Look at the expression (after "in:") in the error
message.  It shows that the application that fails has the string "
Real=" as the argument.  From that it is easy to figure out that a
tick ` is missing at that point.  The types in the second error
message could probably be further improved by giving Fold a signature
that hides the use of a pair as the fold state and giving a signature
for Printf that further hides the Printf fold state.

> I strongly believe that simple things should be kept simple.

By which metric?  I think that OCaml's solution to implement support for
format strings in the compiler is far from simple.  Format strings are a
very special construct that have very little utility in the larger scheme
of things.

> On top of that, they are also not standard: your implementation looks
> very similar to the one in the FormatComb library of SML/NJ, whereas
> MLton use a different one.

That is just silly.  You can use the combinators used in MLton with
SML/NJ and vice versa.  Indeed, both are just libraries written in
plain Standard ML.  In fact, IIRC, the FormatComb library is part of
SML/NJ's library, which is also available with MLton (out of the box).

The world is too large for everything to be in some "standard".  If
you show me any non-trivial program you've written, I can most likely
easily point out places where existing libraries already provide
similar functionality.  I can probably also point out places where
your own program uses snippets of code that have already been
implemented as reusable functions in your own code or are otherwise
duplicated several times over.

> Last Saturday I was playing with combinators, to write my own format
> library, just as an exercise, and I ran into all sorts of weird errors
> that I believe are related to the value restriction of SML.

Well, nobody can really tell whether that is the case, because we
haven't seen the code, but it should be noted that the value
restriction affects all safe, statically typed languages with
parametric polymorphism and mutable objects --- not just SML.  This
includes such languages as OCaml, and Scala, for example.  Without the
equivalent of value restriction, parametric polymorphism with mutable
objects is unsafe.
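As a concrete illustration of that last point, here is the classic
counterexample, sketched in OCaml (the same program can be written in
SML with `ref NONE`):

```ocaml
(* If this binding were generalised to r : 'a option ref, the two
   commented lines below would both type-check, and we could write a
   string into the cell and then read it back as an int. *)
let r = ref None   (* actual type: '_weak1 option ref, not 'a option ref *)

(* r := Some "not an int";        (* would instantiate '_weak1 := string *)
   match !r with Some n -> n + 1 | None -> 0
   (* ...and this would treat that stored string as an int *) *)
```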

-Vesa Karvonen
0
12/11/2007 7:34:21 PM
michele.simionato@gmail.com <michele.simionato@gmail.com> wrote:
[...]

> I am not convinced.  Even just to USE combinators you need an idea of
> what is happening.

I hope that you are not recommending that people use format strings
without having any idea of what is happening.  Such an approach to
programming is referred to as "Cargo Cult Programming":

  http://en.wikipedia.org/wiki/Cargo_cult_programming

I've run into people doing it many times and the results are never
pleasant.

> At the very least, the user must know what higher-order functions are
> and how function composition works, things which are not required for
> printf.  [snip -- see later] It is only when you have a unique standard
> syntax that the user does not even think about what is happening
> internally.  [snip] Personally, I scratched my head reading the MLton
> wiki for a bit, then I looked at the source code of FormatComb and I did
> not understand it.

I disagree.  What you are talking about is a documentation issue.  The
main purpose of the Printf page (http://mlton.org/Printf) in the MLton
wiki is to show how to *implement* such a combinator library, which
can take some effort to understand.  However, the syntax for *using*
Printf is dead simple.  It can be described to a user with a simple
regex and a bit of English:

The basic syntax for using printf is:

    printf (<conversion-specifier> <arg>*)* $ <arg>*

A format specification consists of a sequence of conversion specifiers
with their inline arguments.  The format specification ends with a $
after which any additional arguments required by format specifiers are
given in the order in which the corresponding format specifiers appear
in the format string.

Here are the available conversion specifiers:

           |      arguments       |
 specifier | inline   | after     | output
-----------+----------+-----------+-----------------------------------
 `         | a string |           | the specified string verbatim
 I         |          | an int    | the integer value in decimal
 B         |          | a boolean | "true" or "false"

And so on.

> Also, there are many different combinator libraries in SML, and a user
> will be naturally inclined to think about which to use and how it works
> compared to others.

So, instead of being forced to use a "standard", non-extensible, ad hoc
mechanism, the user can choose the best design for the task at hand from
several implementations.  I consider that an advantage.

> [...] Now, I don't think a user should be required to know the theory
> of group representations just to be able to print 2+2 :-( Of course,
> all IMHO.

I assure you that it isn't necessary.

-Vesa Karvonen
0
12/11/2007 9:15:52 PM
michele.simionato@gmail.com <michele.simionato@gmail.com> wrote:
[...]
> Suppose I want to uppercase or lowercase a string: the
> direct approach would be

> fun upper str = String.implode (map Char.toUpper (String.explode str))
> fun lower str = String.implode (map Char.toLower (String.explode str))

> Is this inefficient?

I would call it indirect and verbose.

> What should I use instead?

My Extended Basis library, for example, has provided String.toUpper
and String.toLower for some time already.  They are implemented as

         val toUpper = map Char.toUpper
         val toLower = map Char.toLower
in

   http://mlton.org/cgi-bin/viewsvn.cgi/*checkout*/mltonlib/trunk/com/ssh/extended-basis/unstable/detail/text/mk-text-ext.fun

-Vesa Karvonen
0
12/11/2007 9:26:34 PM
In article <<13ltmj2ecighu12@corp.supernews.com>>,
Jon Harrop <usenet@jdh30.plus.com> wrote:
> Neelakantan Krishnaswami wrote:
>> I'd like an inline format code for lists, something like:
>> 
>>   printf "This -- %{list int} -- is a list" [1; 2; 3]
>> 
> 
> What about F#s:
> 
>   printf "This -- %A -- is a list" [1; 2; 3]

I find this adequate, but not ideal. For me, it's about the same level
of convenience as being able to declare a Show instance for a type in
Haskell.

However, we often need multiple format codes for a particular type.
For example, you can control whether a floating point number is printed
in scientific notation by using %f or %e. If you have a list of
floating point numbers, it is nice to be able to parameterize the
list as well:

  printf "This -- %{list float} -- is a list" [1.0; 2.0; 3.0]

versus

  printf "This -- %{list sci} -- is a list" [1.0; 2.0; 3.0]

And similarly for things like significant digits and rounding and
stuff like that.

> I think it is a bad idea to reach for advanced concepts from FP like
> functors and combinators when they aren't necessary. Plenty of
> languages provide much more usable support for printing and I think
> F# is very right to simply absorb their functionality.

I don't mind using advanced techniques in library routines, if a) it's
used throughout the standard library, so that the learning costs can
be amortized, and b) it opens the implementation up to the user, so
that they can extend it. The other benefit of uniformity is that it's
often possible to invent a little syntactic sugar that pays off
multiple times. Haskell's do-notation for monadic combinators is a
very good example of this.

However, I think printing combinators are definitely more complex than
would be necessary in an ideal world, due to the relative weakness of
ML's type system. We're going through contortions to encode the fact
that the expected values depend on the /value/ of the format code. But
I'm willing to use combinators until the general solution (true
dependent types) becomes feasible.

>> A meta point: I found your response below basically well-thought and
>> sensible. Below, you make a coherent argument, with actual evidence
>> and sound reasoning. Even where I disagree, I can find value in your
>> argument. OTOH, I didn't find value in your previous response, since
>> it lacked these features.
> 
> Sorry: I was losing my rag trying to convey my beliefs convincingly.

No problem; we all lose our cool now and then. 

> You are clearly setting your goals high, which I have a lot of
> respect for.  However, our goals are quite different. I want to turn
> ML theory into practice by building a platform with a big userbase
> composed of primarily professionals. This requires an implementation
> that marries the foundational technology from existing MLs with the
> practically-important gismos from outside ML.

That makes sense.

> I'm not sure I can do this by myself and I'm not sure I can persuade
> the people who know ML to help because their purpose is to do
> research, and this is not research.

This is a tough problem. Thinking in terms of building an open source
language, the big challenge you will face will be in building a
developer community that knows how to hack the compiler and libraries.

For the backend, the trouble is that while the math needed to write
basic optimizing compilers isn't /that/ hard, it is not very
accessible to typical programmers, and becomes less so if you go
through the optimizations that yield an efficient implementation.
Using LLVM, the JVM, or .NET might be a good choice.

For the type system, things will be tough, because writing type
checkers is not a subject with many tutorials around. So you run the
risk of being the only person who understands it. Probably the best
way to manage that risk is to consciously trade type-checking
performance for clarity of code, so that it is easy for other people
to get their feet wet. (I've been meaning to write one, actually, but
have never had the time. :/)

For the garbage collector and runtime, it's a similar story. I'd be
very, very tempted by the JVM or .NET, simply for the chance to hand
that problem off to someone else.

Happily, getting many good libraries is a solvable problem: start with
*some* good libraries, and be very aggressive about incorporating user
contributions into the standard library. Having that happen is big
egoboo, and doing it a lot will create the expectation that it can
happen, which can attract programmers to the effort.

This is a place where having a really simple native code embedding
story can help a lot (whether "native" is C, .NET, or the Java
libraries), because that makes it easier for hackers to add big pieces
of functionality. If you can provide (semi) automation, perhaps by
writing a program that uses CIL to read header files, that will be
even easier. CIL is George Necula's open source C parsing engine,
targeted at people writing C analyses. It knows a LOT of C, including
the quirks of both the gcc and MS C dialects.

<http://hal.cs.berkeley.edu/cil/>

-- 
Neel R. Krishnaswami
neelk@cs.cmu.edu
0
neelk (298)
12/12/2007 12:03:22 AM
Neelakantan Krishnaswami <neelk@cs.cmu.edu> wrote:
[...]
> However, we often need multiple format codes for a particular type.
> For example, you can control a floating point number is printed in
> scientific notation or not by using %f or %e. If you have a list of
> floating point numbers, it is nice to be able to parameterize the
> list, as well:

>   printf "This -- %{list float} -- is a list" [1.0; 2.0; 3.0]

> versus

>   printf "This -- %{list sci} -- is a list" [1.0; 2.0; 3.0]

Indeed, what you suggest above is easy to achieve with the generic
pretty printing function

  http://mlton.org/cgi-bin/viewsvn.cgi/*checkout*/mltonlib/trunk/com/ssh/generic/unstable/public/value/pretty.sig

provided by my generics library

  http://mlton.org/cgi-bin/viewsvn.cgi/mltonlib/trunk/com/ssh/generic/unstable/

for SML.  First of all, to format a list of reals with default
formatting options, you can just write:

open Generic

val "[3.14159265359]" = show (list real) [Math.pi]

If you wish to print reals in scientific notation, there are several
ways to achieve that.  One is to simply specify the default formatting
of all reals in the formatted value:

val fmtSci = let open Fmt in default & realFmt := StringCvt.SCI NONE end

val "[3.141593E0]" = Prettier.render NONE (fmt (list real) fmtSci [Math.pi])

(Note that show is a special case of fmt.)

If you wish to customize the formatting of reals that appear at
particular positions in the type, you can use a customized type
representation for those positions.  One way to achieve that is to set
the formatting options using the convenience function withFmt:

val sci = withFmt fmtSci real

val "3.141593E0 & 3.14159265359" = show (sci &` real) (Math.pi & Math.pi)

Another way is to replace the pretty printing function.  In this case
we can use the convenience function withShow:

val sci = withShow (Real.fmt (StringCvt.SCI NONE)) real

val "3.141593E0 & 3.14159265359" = show (sci &` real) (Math.pi & Math.pi)

> And similarly for things like significant digits and rounding and
> stuff like that.

As you can see from the pretty signature pointed to above, stuff like
that is supported by the generic pretty printing function in my
generics library.

[...]
> I don't mind using advanced techniques in library routines, if a) it's
> used throughout the standard library, so that the learning costs can
> be amortized, and b) it opens the implementation up to the user, so
> that they can extend it.

I don't mind the use of "advanced techniques" as long as the end
result is easy to use (for me) and the semantics is right.  (Allowing
user extensions is a good thing, but it should not allow users to
break abstractions or unnecessarily constrain the implementation.)

> No problem; we all lose our cool now and then. 

If you spend a few moments thinking about it and talk with people
who have observed Jon's posts or spent time discussing with him,
here or elsewhere, you will understand that the main problem with
him isn't that he loses his cool.

-Vesa Karvonen
0
12/12/2007 8:00:27 AM
On Dec 11, 8:34 pm, Vesa Karvonen <vesa.karvo...@cs.helsinki.fi>
wrote:
> Sorry, but what you say above makes little sense to me in many respects.
> For example, using printf to generate web pages sounds silly to me.

Yes, it is silly if you just have positional arguments. Actually, I
had in mind something like Python string interpolation, which works
both for positional and keyword arguments:

In [1]: "%s %s" % ('hello', 'michele')
Out[1]: 'hello michele'

In [2]: "%(greeting)s %(user)s" % dict(greeting='hello', user='michele')
Out[2]: 'hello michele'

But don't look at the Web page example too much; in that case a
specialized template language makes more sense. printf is more
useful for quick and dirty scripts and for exploratory code
(that means *a lot* of code).

> I can't explain an error message I haven't seen.  Do you have an
> example of an actual error message from the use of such combinators?

>val () = printf `"Int="I`"  Bool="B"  Real="R`"\n" $ 1 false 2.0

> From the above, MLton gives the following messages:
> Error: printf-error.sml 84.10.
>   Function applied to incorrect argument.
>     expects: [(TextIO.outstream
>                * (((unit -> unit) -> ???) -> (unit -> ???) -> int -> bool -> ???))
>               * (??? * (((unit -> ???) -> ???) -> (??? -> unit) -> ???) -> ???)
>               -> ???]
>     but got: [string]
>     in: ((((((printf `) "Int=") I) `) "  Bool=") B) "  Real="
> compilation aborted: parseAndElaborate reported errors
>
> The first type in the second error message may look more complicated,
> but I think it is really easy to diagnose thanks to pinpointing the
> expression.

That's a perfect example of an error message which looks
terribly scary to a newbie. Of course it is easy to diagnose
for you and for anybody experienced with SML, but my point was
that I want to spare newbies such error messages.

>> I strongly believe that simple things should be kept simple.

> By which metric?  I think that OCaml's solution to implement support for
> format strings in the compiler is far from simple.  Format strings are a
> very special construct that have very little utility in the larger scheme
> of things.

I disagree (not with the OCaml implementation, but on the usefulness
of printf). I use format strings several times a day, every day.
print is perhaps the Python keyword I use most (I should perform a
statistical analysis of my code base to check this assertion -- and
how would I print out the results? With printf, of course!)

>> On top of that, they are also not standard: your implementation looks
>> very similar to the one in the FormatComb library of SML/NJ, whereas
>> MLton use a different one.

> That is just silly.  You can use the combinators used in MLton with
> SML/NJ and vice versa.  Indeed, both are just libraries written in
> plain Standard ML.  In fact, IIRC, the FormatComb library is part of
> SML/NJ's library, which is also available with MLton (out of the box).

> The world is too large for everything to be in some "standard".

Right, but the standard should cover at least the most basic things.
IMNSHO printf is one of the most basic things ever.

                Michele Simionato
0
12/12/2007 12:34:54 PM
Vesa Karvonen wrote:
> Sorry, but what you say above makes little sense to me in many respects.
> For example, using printf to generate web pages sounds silly to me.

Generating web pages is actually an excellent example of using printf
(given a minimal XML type and `open Printf`):

  type xml = PCData of string | Element of string * xml list

  let rec print ff = function
    | PCData text -> fprintf ff "%s" text
    | Element(tag, []) -> fprintf ff "<%s />" tag
    | Element(tag, xs) -> fprintf ff "<%s>%a</%s>" tag prints xs tag
  and prints ff = function
    | [] -> ()
    | h::t -> fprintf ff "%a%a" print h prints t;;

> I 
> would not recommend such an approach.  For example, it would be much
> better to use combinators that ensure that you are actually generating
> valid HTML in as many respects as possible.

That will only make matters worse: worsening reliability by unnecessarily
bloating and obfuscating code. This is why OCaml programmers almost always
choose printf over a combinator library (they have the choice).

> By which metric?  I think that OCaml's solution to implement support for
> format strings in the compiler is far from simple.  Format strings are a
> very special construct that have very little utility in the larger scheme
> of things.

The same can be said of regular expressions. Would you rather replace those
with a combinator library as well?

>> Last Saturday I was playing with combinators, to write my own format
>> library, just as an exercise, and I ran into all sorts of weird errors
>> that I believe are related to the value restriction of SML.
> 
> Well, nobody can really tell whether that is the case, because we
> haven't seen the code, but it should be noted that the value
> restriction affects all safe, statically typed languages with
> parametric polymorphism and mutable objects --- not just SML.  This
> includes such languages as OCaml, and Scala, for example.  Without the
> equivalent of value restriction, parametric polymorphism with mutable
> objects is unsafe.

Actually, the value restriction is another area where OCaml greatly improves
upon SML in practical terms. Moreover, the value restriction is another
source of incompatibility between SML implementations that has bitten me,
e.g. SML/NJ and MLton.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/12/2007 12:59:18 PM
On Dec 11, 10:15 pm, Vesa Karvonen <vesa.karvo...@cs.helsinki.fi>
wrote:
> I hope that you are not recommending that people use format strings
> without having any idea of what is happening.  Such an approach to
> programming is referred to as "Cargo Cult Programming"

When I give

print "hello world"

I have no idea of what the machine is *really* doing, and I
don't want to know, otherwise I would program in assembly.
The whole point of having higher-level
languages is to spare users the implementation details.
I would like a built-in syntax for printf (but I realize
this is a useless rant, so I am nearly ready to stop
beating this dead horse) so that people could just use
it without knowing that it is implemented in terms of
combinators. I fully agree with Neelakantan Krishnaswami's
ideas.

> So, instead of being forced to use a "standard", non-extensible, ad hoc
> mechanism, the user can choose the best design for the task at hand from
> several implementations.  I consider that an advantage.

A standard decent mechanism (such as the one Neelakantan is
proposing) would be better. But even a flawed mechanism, if
standard, would be better, since it could always
be fixed in a second moment.
This is of course a personal opinion, and we may agree to disagree.
Let's use our energies for something more constructive than
neverending usenet discussions! ;)

        Michele Simionato
0
12/12/2007 1:06:37 PM
On Dec 12, 1:59 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
>
> Actually, the value restriction is another area where OCaml greatly improves
> upon SML in practical terms.

Can you show us a convincing example that "greatly" benefits from this
relaxation (which, for the uninitiated, merely consists of allowing
generalisation of type variables in the rare case that they appear
only in covariant position)?

You hardly will know, but SML'90 used to have a significantly more
permissive approach, which was replaced by the much simpler value
restriction in SML'97 (following OCaml, btw) as it turned out that the
additional expressiveness was almost entirely irrelevant in practice.
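[The relaxation being referred to can be illustrated in OCaml; `fst ([], 0)` below is a contrived application whose result type variable occurs only covariantly (a sketch, not from the posts):

```ocaml
let id x = x

(* An application whose result type variable occurs only covariantly:
   OCaml's relaxed value restriction generalizes it to 'a list. *)
let l = fst ([], 0)

let () =
  (* l is genuinely polymorphic: usable at two element types. *)
  assert (l @ [1] = [1]);
  assert (l @ ["a"] = ["a"]);
  (* A result type in non-covariant position stays weak and is pinned
     by its first use; here we pin it to int immediately. *)
  let f = id id in
  assert (f 3 = 3)
```
]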
0
rossberg (600)
12/12/2007 3:51:59 PM
rossberg@ps.uni-sb.de wrote:
> On Dec 12, 1:59 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
>> Actually, the value restriction is another area where OCaml greatly
>> improves upon SML in practical terms.
> 
> Can you show us a convincing example that "greatly" benefits from this
> relaxation (which, for the uninitiated, merely consists of allowing
> generalisation of type variables in the rare case that they appear
> only in covariant position)?

The humble hash table is the obvious example:

# let m = Hashtbl.create 1;;
val m : ('_a, '_b) Hashtbl.t = <abstr>

The idiomatic solution when using SML interactively is to annotate the type,
but you don't want to do that (hence we invented type inference) and even
if you do that you don't want it polluting your compiled code. So you end
up typing the type annotation into the interactive session manually.

References to lists are another common example:

# let a = ref [];;
val a : '_a list ref = {contents = []}
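[In both cases the underscore marks a weak type variable that the first use then pins down, e.g. (a sketch):

```ocaml
let a = ref []           (* a : '_weak1 list ref *)
let () = a := [1; 2]     (* first use pins a : int list ref *)
let () = assert (List.length !a = 2)
```
]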

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/12/2007 3:56:37 PM
On Dec 12, 4:56 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
> rossb...@ps.uni-sb.de wrote:
> > On Dec 12, 1:59 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
> >> Actually, the value restriction is another area where OCaml greatly
> >> improves upon SML in practical terms.
>
> > Can you show us a convincing example that "greatly" benefits from this
> > relaxation (which, for the uninitiated, merely consists of allowing
> > generalisation of type variables in the rare case that they appear
> > only in covariant position)?
>
> The humble hash table is the obvious example:
>
> # let m = Hashtbl.create 1;;
> val m : ('_a, '_b) Hashtbl.t = <abstr>

Oh, now I see what you were actually referring to. That isn't even an
actual difference; the behaviour is mostly the same in SML. I assume
what you have observed is a well-known deviation of SML/NJ from the
Standard, sometimes requiring a type annotation for non-local
declarations of this form (which is slightly annoying, I agree).

But the situation actually is more subtle than you realize. To some
degree, this behaviour is inevitable if you want to have a toplevel
that runs on native code AND performs type specialising optimisations.
In that situation, you generally cannot produce code before all types
have been fully determined.
0
rossberg (600)
12/12/2007 4:47:58 PM
Jon Harrop schrieb:
> 
> Generating web pages is actually an excellent example of using printf:
> 
>   let rec print ff = function
>     | PCData text -> fprintf ff "%s" text
>     | Element(tag, []) -> fprintf ff "<%s />" tag
>     | Element(tag, xs) -> fprintf ff "<%s>%a</%s>" tag prints xs tag
>   and prints ff = function
>     | [] -> ()
>     | h::t -> fprintf ff "%a%a" print h prints t;;

Not at all IME.

I just finished setting up a PHP framework and am using it in 
application programs.
printf and the like are essentially useless there. The formatting 
information is typically carried in data definitions, with the 
possibility to override in widget definitions; this decouples formatting 
and text so well that there's simply no need for printf anymore.

>> I would not recommend such an approach.  For example, it would be much
>> better to use combinators that ensure that you are actually generating
>> valid HTML in as many respects as possible.
> 
> That will only make matters worse: worsening reliability by unnecessarily
> bloating and obfuscating code.  This is why OCaml programmers almost always
> choose printf over a combinator library (they have the choice).

Maybe they simply don't have a *good* combinator library available.
("Good" meaning "fulfilling their requirements".)

>> By which metric?  I think that OCaml's solution to implement support for
>> format strings in the compiler is far from simple.  Format strings are a
>> very special construct that have very little utility in the larger scheme
>> of things.
> 
> The same can be said of regular expressions. Would you rather replace those
> with a combinator library as well?

Perl actually goes in that direction.
Look at Larry Wall's Apocalypse on regexes and its Exegesis. (Too lazy 
to look up the URL right now.)

Regards,
Jo
0
jo427 (1164)
12/12/2007 10:40:10 PM
rossberg@ps.uni-sb.de wrote:
> Oh, now I see what you were actually referring to. That isn't even an
> actual difference, the behaviour is mostly the same in SML. I assume
> what you have observed is a well-known deviation of SML/NJ from the
> Standard,

Indeed. I didn't know this was a non-compliance by SML/NJ.

> sometimes requiring a type annotation for non-local 
> declarations of this form (which is slightly annoying, I agree).

Exactly, yes.

> But the situation actually is more subtle than you realize. To some
> degree, this behaviour is inevitable if you want to have a toplevel
> that runs on native code AND performs type specialising optimisations.
> In that situation, you generally cannot produce code before all types
> have been fully determined.

That certainly explains why F# is the other implementation that suffers from
this problem. I really don't understand why though: can't you just generate
lazily, i.e. defer generation until you know the type?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/13/2007 12:12:23 PM
Joachim Durchholz wrote:
> Jon Harrop schrieb:
>> 
>> Generating web pages is actually an excellent example of using printf:
>> 
>>   let rec print ff = function
>>     | PCData text -> fprintf ff "%s" text
>>     | Element(tag, []) -> fprintf ff "<%s />" tag
>>     | Element(tag, xs) -> fprintf ff "<%s>%a</%s>" tag prints xs tag
>>   and prints ff = function
>>     | [] -> ()
>>     | h::t -> fprintf ff "%a%a" print h prints t;;
> 
> Not at all IME.
> 
> I just finished setting up a PHP framework and am using it in
> application programs.
> printf and the like are essentially useless there. The formatting
> information is typically carried in data definitions, with the
> possibility to override in widget definitions; this decouples formatting
> and text so well that there's simply no need for printf anymore.

Can you elaborate? I don't understand what the code would look like.

>>> I would not recommend such an approach.  For example, it would be much
>>> better to use combinators that ensure that you are actually generating
>>> valid HTML in as many respects as possible.
>> 
>> That will only make matters worse: worsening reliability by unnecessarily
>> bloating and obfuscating code.  This is why OCaml programmers almost
>> always choose printf over a combinator library (they have the choice).
> 
> Maybe they simply don't have a *good* combinator library available.
> ("Good" meaning "fulfilling their requirements".)

Maybe. Typically anything done in SML has already been done better in OCaml
(IME), at least when it comes to anything of practical use. The reason is
simply that it is so easy for the OCaml community to steal from the SML
community.

>> The same can be said of regular expressions. Would you rather replace
>> those with a combinator library as well?
> 
> Perl actually goes in that direction.
> Look at Larry Wall's Apocalypse on regexes and its Exegesis. (Too lazy
> to look up the URL right now.)

Sure, but has it gained as much traction as regexes? If not, does it incur a
practical overhead, and might that be why?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/13/2007 12:20:27 PM
On Dec 13, 1:12 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
> rossb...@ps.uni-sb.de wrote:
>
> > But the situation actually is more subtle than you realize. To some
> > degree, this behaviour is inevitable if you want to have a toplevel
> > that runs on native code AND performs type specialising optimisations.
> > In that situation, you generally cannot produce code before all types
> > have been fully determined.
>
> That certainly explains why F# is the other implementation that suffers from
> this problem. I really don't understand why though: can't you just generate
> lazily, i.e. defer generation until you know the type?

But in the interactive toplevel, you want to execute the code
immediately - that is, before you would know the type.
0
rossberg (600)
12/13/2007 12:30:38 PM
rossberg@ps.uni-sb.de wrote:
> On Dec 13, 1:12 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
>> rossb...@ps.uni-sb.de wrote:
>>
>> > But the situation actually is more subtle than you realize. To some
>> > degree, this behaviour is inevitable if you want to have a toplevel
>> > that runs on native code AND performs type specialising optimisations.
>> > In that situation, you generally cannot produce code before all types
>> > have been fully determined.
>>
>> That certainly explains why F# is the other implementation that suffers
>> from this problem. I really don't understand why though: can't you just
>> generate lazily, i.e. defer generation until you know the type?
> 
> But in the interactive toplevel, you want to execute the code
> immediately - that is, before you would know the type.

I don't think I want to execute that code immediately. Why not just defer
its execution (the creation of an empty hash table) until the hash table is
first used and we know its type?
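[The deferral Jon suggests can be mimicked by hand with explicit laziness; a sketch of the idea only, since no toplevel actually does this for you:

```ocaml
(* Defer creating the table until its type is known from use. *)
let m = lazy (Hashtbl.create 1)

let () =
  (* First use forces the thunk and fixes the type at (string, int). *)
  Hashtbl.replace (Lazy.force m) "key" 42;
  assert (Hashtbl.find (Lazy.force m) "key" = 42)
```
]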

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/13/2007 8:27:44 PM
On Dec 13, 9:27 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
>
> I don't think I want to execute that code immediately. Why not just defer
> its execution (the creation of an empty hash table) until the hash table is
> first used and we know its type?

You mean you want to introduce laziness into the language? What if the
function isn't pure? The compiler cannot easily know. And how is the
compiler supposed to know which input you want to execute lazily and
which eagerly? Something has to be eager, otherwise you'll never get an
answer.

Moreover, the first use does not necessarily fully determine the type.
You could try to lookup something in the empty table. Or you could
enter an empty list.

0
rossberg (600)
12/13/2007 10:24:17 PM
rossberg@ps.uni-sb.de wrote:
> On Dec 13, 9:27 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
>> I don't think I want to execute that code immediately. Why not just defer
>> its execution (the creation of an empty hash table) until the hash table
>> is first used and we know its type?
> 
> You mean you want to introduce laziness into the language? What if the
> function isn't pure?

What function?

> The compiler cannot easily know. And how is the 
> compiler supposed to know which input you want to execute lazily and
> which eagerly? Some has to be eager, otherwise you'll never get an
> answer.

I'm not sure that matters: it should be transparent to the user anyway. This
is just an implementation detail.

> Moreover, the first use does not necessarily fully determine the type.
> You could try to lookup something in the empty table. Or you could
> enter an empty list.

Ok, so later code does:

  a := []
  printf "%d\n" (List.length a)

Hmm. Could you just use a dummy type like int? So you generate a List.length
for an int list and pass it the empty list? The actual values in the list
are necessarily irrelevant, otherwise we'd already have a type for '_a.
How's that?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/13/2007 11:34:27 PM
On Dec 14, 12:34 am, Jon Harrop <use...@jdh30.plus.com> wrote:
> rossb...@ps.uni-sb.de wrote:
> > On Dec 13, 9:27 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
> >> I don't think I want to execute that code immediately. Why not just defer
> >> its execution (the creation of an empty hash table) until the hash table
> >> is first used and we know its type?
>
> > You mean you want to introduce laziness into the language? What if the
> > function isn't pure?
>
> What function?

In this case, the create function, but in general, any function you
apply interactively.

> > The compiler cannot easily know. And how is the
> > compiler supposed to know which input you want to execute lazily and
> > which eagerly? Some has to be eager, otherwise you'll never get an
> > answer.
>
> I'm not sure that matters: it should be transparent to the user anyway. This
> is just an implementation detail.

Not at all: in a language with arbitrary side effects it obviously
changes the semantics. And as long as effects aren't tracked in types
the compiler cannot even know what it is doing.

> > Moreover, the first use does not necessarily fully determine the type.
> > You could try to lookup something in the empty table. Or you could
> > enter an empty list.
>
> Ok, so later code does:
>
>   a := []
>   printf "%d\n" (List.length a)
>
> Hmm. Could you just use a dummy type like int? So you generate a List.length
> for an int list and pass it the empty list? The actual values in the list
> are necessarily irrelevant, otherwise we'd already have a type for '_a.

No. The whole point about type specialisation is that the
representation of an otherwise polymorphic value (and thus any code
using it) can depend on the instantiation type. For example, there
might be a more compact representation for bool lists, where even nil
is represented differently (arguably a contrived example, but you see
the point).

Moreover, the scheme you propose would wreak havoc with typed target
languages, such as .NET's IL.

0
rossberg (600)
12/14/2007 8:23:56 AM
michele.simionato@gmail.com <michele.simionato@gmail.com> wrote:
[...]
> When I give

> print "hello world"

> I have no idea of what the machine is *really* doing, and I don't
> want to know, otherwise I would program in assembly.

What you do understand, however, is that the effect is to write "hello
world" to the standard output.  IOW, you understand the meaning of the
expression 'print "hello world"'.  Otherwise you are doing Cargo Cult
Programming.

> The whole point of having higher-level languages is to spare
> users the implementation details.

Sigh.  You are beating the wrong horse.  Just as you don't need to
understand how a built-in printf is implemented internally, users do not
need to understand how a combinator library is implemented internally.
All they need to understand is the meanings of expressions written using
the combinators.  And again, this is just like needing to understand the
meanings of format strings.

You said that you read (or at least looked at) Wadler's article on pretty
printing combinators [1].  As you can see from the article, the meanings
of the combinators (<>, nil, text, line, nest, and layout) are first
explained and examples presented without discussing actual implementation
details.  Do you believe that the user of a library of pretty printing
combinators, like Wadler's, really needs to know how those combinators are
implemented internally?

[1] http://homepages.inf.ed.ac.uk/wadler/papers/prettier/prettier.pdf
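[For readers without the paper at hand, the flavor of those combinators can be sketched in OCaml. This is a toy, normalized variant for illustration, not Wadler's actual implementation (his library also supports alternative layouts):

```ocaml
(* A toy document type: text, line breaks with indentation, and nothing. *)
type doc = Nil | Text of string * doc | Line of int * doc

let nil = Nil
let text s = Text (s, Nil)
let line = Line (0, Nil)

(* Concatenation of documents. *)
let rec ( ^^ ) a b =
  match a with
  | Nil -> b
  | Text (s, d) -> Text (s, d ^^ b)
  | Line (i, d) -> Line (i, d ^^ b)

(* Increase indentation after every line break in the document. *)
let rec nest i = function
  | Nil -> Nil
  | Text (s, d) -> Text (s, nest i d)
  | Line (j, d) -> Line (i + j, nest i d)

(* Render the document to a string. *)
let rec layout = function
  | Nil -> ""
  | Text (s, d) -> s ^ layout d
  | Line (i, d) -> "\n" ^ String.make i ' ' ^ layout d

let () =
  print_endline
    (layout (text "begin" ^^ nest 2 (line ^^ text "body") ^^ line ^^ text "end"))
```

The point of the paper is that users work entirely in terms of `text`, `line`, `nest`, and `^^`; `layout` and the representation stay hidden.]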

> I would like a builtin syntax for printf [...] so that people could
> just use it without knowing that it is implemented in terms of
> combinators.

Whether the form of printf is standard, and whether it is
syntactic sugar or a combinator library, does not make it fundamentally
different for plain users (see below) to understand.  The users still need
to be able to map printf expressions to their meanings.  In either case,
standard or not, the user expects to see a document of some form that
explains that mapping (from printf expressions to their meanings).

However, if a user wants to understand not just how to use printf, but
also how it is implemented, then there is a clear advantage to the
combinator approach over the built-in approach.  The combinators are just
ordinary code written in the language and the user can use her
understanding of the language to work out how they are really implemented
without needing to understand additional formal notations or additional
internal implementation details of any particular compiler or interpreter.

[...]
> But even a flawed mechanism, if standard, would be better, since it
> could always be fixed in a second moment.

I disagree.  Fixing a flawed mechanism can be impossible without breaking
existing code.  If such a flawed mechanism is made a standard and becomes
widely used, the situation can be much worse than not having a standard.

Labeling something as standard does not magically make it golden.
Just because you are using a standard something, does not mean that it
would be best (or better) for the job you are doing.  Here is a blog
post on the topic standards that I found particularly insightful:

  http://www.artima.com/weblogs/viewpost.jsp?thread=4840

The first two paragraphs and the last paragraph are the most relevant to
this discussion.

-Vesa Karvonen
0
12/14/2007 11:00:57 AM
Vesa Karvonen wrote:
>> The whole point of having higher-level languages is to spare
>> users the implementation details.
> 
> Sigh.  You are beating the wrong horse.

No, he isn't. Michele has already explained to you that the difference
becomes obvious as soon as you make a mistake: the combinator approach
gives more obfuscated errors than printf. We've even both provided examples
of this. Yet your response has completely avoided this topic.

> Just as you don't need to 
> understand how a built-in printf is implemented internally, users do not
> need to understand how a combinator library is implemented internally.
> All they need to understand is the meanings of expressions written using
> the combinators.  And again, this is just like needing to understand the
> meanings of format strings.
> 
> You said that you read (or at least looked at) Wadler's article on pretty
> printing combinators [1].  As you can see from the article, the meanings
> of the combinators (<>, nil, text, line, nest, and layout) are first
> explained and examples presented without discussing actual implementation
> details.  Do you believe that the user of a library of pretty printing
> combinators, like Wadler's, really needs to know how those combinators are
> implemented internally?
> 
> [1] http://homepages.inf.ed.ac.uk/wadler/papers/prettier/prettier.pdf

That assumes you write perfect code first time every time.

>> I would like a builtin syntax for printf [...] so that people could
>> just use it without knowing that it is implemented in terms of
>> combinators.
> 
> Whether or not the form of printf is standard or not or whether it is
> syntactic sugar or a combinator library does not make it fundamentally
> different for plain users (see below) to understand.

Not true when a mistake is made.

>> But even a flawed mechanism, if standard, would be better, since it
>> could always be fixed in a second moment.
> 
> I disagree.  Fixing a flawed mechanism can be impossible without breaking
> existing code.  If such a flawed mechanism is made a standard and becomes
> widely used, the situation can be much worse than not having a standard.
> 
> Labeling something as standard does not magically make it golden.
> Just because you are using a standard something, does not mean that it
> would be best (or better) for the job you are doing.  Here is a blog
> post on the topic standards that I found particularly insightful:
> 
>   http://www.artima.com/weblogs/viewpost.jsp?thread=4840
> 
> The first two paragraphs and the last paragraph are the most relevant to
> this discussion.

The argument in favor of using standards is much stronger than the argument
in favor of using really obscure workarounds for missing features.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/14/2007 12:22:42 PM
On Dec 14, 12:00 pm, Vesa Karvonen <vesa.karvo...@cs.helsinki.fi>
wrote:
> Just because you are using a standard something, does not mean that it
> would be best (or better) for the job you are doing.  Here is a blog
> post on the topic standards that I found particularly insightful:
>
>  http://www.artima.com/weblogs/viewpost.jsp?thread=4840
>
> The first two paragraphs and the last paragraph are the most relevant to
> this discussion.

I don't see why. He is against standards designed by committee
(and I largely agree with him) whereas I am talking about
de facto standards such as printf.

 Michele Simionato
0
12/14/2007 12:58:37 PM
michele.simionato@gmail.com <michele.simionato@gmail.com> wrote:
> On Dec 14, 12:00 pm, Vesa Karvonen <vesa.karvo...@cs.helsinki.fi>
> wrote:
> > Just because you are using a standard something, does not mean that it
> > would be best (or better) for the job you are doing.  Here is a blog
> > post on the topic standards that I found particularly insightful:
> >
> >  http://www.artima.com/weblogs/viewpost.jsp?thread=4840
> >
> > The first two paragraphs and the last paragraph are the most relevant to
> > this discussion.

> I don't see why. He is against standards designed by committee (and
> I largely agree with him) whereas I am talking about de facto
> standards such as printf.

In almost complete contrast to the spirit of your comments such as

  "But even a flawed mechanism, if standard, would be better, since it
  could always be fixed in a second moment."

he cautions users to question the usefulness of standards.  For example:

  "Ask why only standards can be used."

  "Ask if the standard [...] will really solve the problem under
  discussion."

I also strongly disagree with your comments to the effect that if
something is in a standard, then it will be easier for users to
understand, because they don't have to understand how it is
implemented, and that being in a standard inherently makes something
better (this is also a major theme of the blog post I pointed at).

Just consider the sentence above I quoted from you.  It suggests that
being in a standard is better, because you can then fix the flaws
later.  I'm sorry, but that is just absurd.  Being in a standard
invariably makes it more difficult to fix things later.  Usually when
something isn't in a standard, it can be just fixed by changing the
implementation.  Often such a fix can be done in a matter of minutes.
When something becomes a standard, fixing it typically becomes a
complex political issue, requires large amounts of (often formal)
communication, and always requires at least changing the standard in
addition to any actual implementations.  The whole process of fixing a
flaw in a standard may take several calendar years and it may take
several more years before all implementations are actually fixed to
conform to the fixed standard.

-Vesa Karvonen
0
12/14/2007 1:30:35 PM
michele.simionato@gmail.com <michele.simionato@gmail.com> wrote:
> On Dec 11, 8:34 pm, Vesa Karvonen <vesa.karvo...@cs.helsinki.fi>
> wrote:
[...]
> But don't look at the Web page example too much, in that case a
> specialized template language makes more sense.  printf is more useful
> for quirk and dirty scripts and for exploratory code (that means *a lot*
> of code).

If you have actual concrete examples (rather than contrived one
liners), then we can look at and discuss them.  A vague phrase like
"quick and dirty scripts" or "exploratory code" can mean many things
and it is more than likely we could come up with better solutions for
most actual cases.

> > val () = printf `"Int="I`"  Bool="B"  Real="R`"\n" $ 1 false 2.0

> > From the above, MLton gives the following messages:
> > Error: printf-error.sml 84.10.
> >   Function applied to incorrect argument.
> >     expects: [(TextIO.outstream
> >                * (((unit -> unit) -> ???) -> (unit -> ???) -> int -> bool -> ???))
> >               * (??? * (((unit -> ???) -> ???) -> (??? -> unit) -> ???) -> ???)
> >               -> ???]
> >     but got: [string]
> >     in: ((((((printf `) "Int=") I) `) "  Bool=") B) "  Real="
> > compilation aborted: parseAndElaborate reported errors
> >
> > The first type in the second error message may look more complicated,
> > but I think it is really easy to diagnose thanks to pinpointing the
> > expression.

> That's a perfect example of an error message which looks terribly scary
> to a newbie.  Of course it is easy to diagnose for you and for anybody
> experienced with SML, but my point was that I want to spare newbies
> such error messages.
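[For context, the kind of typed printf being discussed is built from ordinary functions in continuation-passing style; misusing it is what produces higher-order type errors like the one quoted. A minimal OCaml sketch of the idea (illustrative only; the names `lit`, `int`, `str`, `sprintf` are invented here, not Vesa's actual library):

```ocaml
(* Each directive takes a continuation and an accumulator string. *)
let lit s k acc = k (acc ^ s)                       (* literal text *)
let int k acc = fun n -> k (acc ^ string_of_int n)  (* %d-style hole *)
let str k acc = fun s -> k (acc ^ s)                (* %s-style hole *)

(* Compose directives; run the format with the identity continuation. *)
let ( ++ ) f g k = f (g k)
let sprintf fmt = fmt (fun s -> s) ""

let () =
  assert (sprintf (lit "Int=" ++ int ++ lit " Str=" ++ str) 1 "a"
          = "Int=1 Str=a")
```

Because the number and types of the extra arguments are computed by composition, passing a wrong argument yields an error mentioning the whole continuation type, which is exactly the shape of the MLton message above.]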

The first type in the above error message is perhaps slightly more
complicated than in an average error message, but it really isn't
uncommon to get type errors with constructed types.  The more
sophisticated and more structural the type system is, the more
complicated the error messages tend to get.  Error messages in simply,
nominally typed languages like C and Pascal are usually quite
readable.  When you throw in additional powerful features like
polymorphism, overloading, and structural types, error messages tend
to get much more complicated.  Type inference lets one largely forget
about the types of expressions and it is not uncommon, at least in my
experience, to be (mildly) surprised at the complexity of the types of
simple looking expressions when they are written out explicitly (in
error messages).

I agree with your comment that error messages can be scary to
newbies.  I also think that learning to program effectively in a safe,
higher-order, language with a fairly expressive type system like SML
(with parametric polymorphism and modules with abstract types) is much
more difficult and takes more time than learning to program
effectively in a (often unsafe) typed language with just simple types
like C or a dynamically checked language like Python.

For example, I think that it would be fair to say that I learned to
program in Scheme effectively roughly immediately while reading SICP
(I already had done some functional/higher-order programming and had
years of experience in imperative and OO programming at that point).
By "effectively" I mean that I could straightforwardly implement
anything I could imagine.  When I got back to ML, I found out that
there were many things I didn't know how to implement
straightforwardly.  I think that I'm now almost as effective in SML as
in Scheme, but there are still some things I could "just do" in
Scheme, but would require me to spend potentially a lot of time
thinking in SML.

> >> I strongly believe that simple things should be kept simple.
[...]
> I use format strings every day several times per day. [...]

I think this has been said a couple of times before, but let me just
say it again.  You are confusing your acquired taste for format
strings with some notion of simplicity you have never actually
explained.  Format strings do not make the creation of ad hoc
compositions asymptotically simpler.  The syntactic complexity of
using format strings and simple string concatenation with conversion
functions is essentially the same.  Using just string concatenation
and conversion functions, any example of using format strings can be
written roughly as concisely and using roughly as many distinguishable
syntactic elements.  Compared to simple concatenation of string
snippets, format strings just let/force you to arrange the snippets to
two groups: the template with holes and the arguments to fill the
holes.  This perhaps makes it easier to see the overall template, but
also makes it more difficult to see the connection between holes in
the template and the arguments that fill those holes.  (BTW, there is
plenty of evidence for the latter.  People regularly make simple
mistakes like specifying arguments in the wrong order when using format
strings.)
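[The syntactic-parity claim above can be made concrete with a trivial OCaml sketch:

```ocaml
(* The same message via a format string and via plain concatenation
   with conversion functions: roughly the same length and element count. *)
let name = "x"
let n = 42

let s1 = Printf.sprintf "%s = %d" name n
let s2 = name ^ " = " ^ string_of_int n

let () = assert (s1 = s2)
```
]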

-Vesa Karvonen
0
12/14/2007 1:45:21 PM
On Dec 14, 2:30 pm, Vesa Karvonen <vesa.karvo...@cs.helsinki.fi>
wrote:
>   "But even a flawed mechanism, if standard, would be better, since it
>   could always be fixed in a second moment."

Ack, you got me this time ;) This sentence, as quoted, is
indefensible, so I will change it to this:

"""
A single implementation language (say Python) can afford to get
a few things wrong; the important thing is to get the functionality
right away, get a lot of users, and fix the issues later, say
in Py3k. The process is long and painful, because breaking
compatibility is always painful, but the advantages are
greater than the disadvantages.
"""

which is closer to what I had in mind.
For instance, Microsoft is (ab)using this ideology
("worse is better") and this is one of the reasons why they
are dominating the market. I don't like it when "worse is better"
is used to justify anything, but I also don't like extreme
perfectionism - as you can see, for instance, in the Scheme
community. The risk of perfectionism is that people keep having
endless discussions and nothing gets done. There are areas where
perfectionism is justified (e.g. mathematics), but programming
is not one of them.

    Michele Simionato
0
12/14/2007 2:10:43 PM
On Dec 14, 2:30 pm, Vesa Karvonen <vesa.karvo...@cs.helsinki.fi>
wrote:
> I also strongly disagree with your comments to the effect that if
> something is in a standard, then it will be easier for users to
> understand, because they don't have to understand how it is
> implemented, and that being in a standard inherently makes something
> better

In the context of printf it is more a matter of syntax; a syntax
such as the one proposed by Neelakantan Krishnaswami would hide
the combinators from the user, which I think is a good thing.
If SML had macros, you could hide the combinators yourself,
but everybody would do it differently. A standard would
be useful anyway, because for a user it is easier to learn a
single syntax than many syntaxes doing the same thing.

 Michele Simionato
0
12/14/2007 2:23:40 PM
Jon:
[...]

BTW, some SML compilers, including MLton and Poly/ML, do provide limited,
non-standard means to define new, redefine old, or/and extend existing
overloads.

Here is a snippet from code I wrote for Poly/ML's Basis library, which
extends the standard overloads for Int32:

RunCall.addOverload Int32.~ "~";
RunCall.addOverload Int32.+ "+";
RunCall.addOverload Int32.- "-";
RunCall.addOverload Int32.* "*";
RunCall.addOverload Int32.div "div";
RunCall.addOverload Int32.mod "mod";
RunCall.addOverload Int32.< "<";
RunCall.addOverload Int32.> ">";
RunCall.addOverload Int32.<= "<=";
RunCall.addOverload Int32.>= ">=";
RunCall.addOverload Int32.abs "abs";

With the "allowOverload true" expert annotation, one can define new
overloads (or redefine old overloads) in MLton.  Here is a silly example:

_overload 4 / : 'a * 'a -> 'a
as  Real./   (*     standard *)
and Int.div  (* non-standard *)
and Word.div (* non-standard *)

val true = 2 = 7 / 3
val true = 0w2 = 0w7 / 0w3
val true = Real.== (2.5, 7.5 / 3.0)

These non-standard constructs are, of course, not portable across SML
implementations and are quite limited as they have been designed for
implementing the limited overloads specified by the SML Basis library (and
the Definition to a lesser extent).

-Vesa Karvonen
0
12/14/2007 5:17:57 PM
Vesa Karvonen schrieb:
>> But even a flawed mechanism, if standard, would be better, since it
>> could always be fixed in a second moment.
> 
> I disagree.  Fixing a flawed mechanism can be impossible without breaking
> existing code.  If such a flawed mechanism is made a standard and becomes
> widely used, the situation can be much worse than not having a standard.

That depends.
For example, whenever programs of different origin must interoperate, 
living without a standard is worse than living with a bad or 
ill-motivated one. Ask any MTA programmer.

Of course, for printf and the like, that's less of an issue. But in that 
case, it is in fact possible to deprecate an old standard and install a 
new one. It is being done all the time for Java, or in some of the 
library code I have worked on.
True, it didn't happen for printf - I suspect because it's impossible to 
design something better in C. Or, possibly, because all replacements 
ever proposed were implemented as proprietary libraries and not widely 
available.

> Labeling something as standard does not magically make it golden.
> Just because you are using a standard something, does not mean that it
> would be best (or better) for the job you are doing.

Sure, and thanks for the link - it wasn't really news, but it made 
explicit some of the things I have been thinking.

Though I suspect it's irrelevant to the question whether printf should 
get a quick-and-dirty first implementation and a reengineered, better 
version later.

I have seen this approach work reasonably well in the Linux kernel. 
These guys have been rooting out stuff, rewriting it, and replacing it 
with better implementations, details, and even concepts, sometimes even 
replacing entire subsystems. The Linux kernel is now something far, far 
better than could be fathomed initially.
Of course, some decisions can't be easily reversed. However, those OS 
efforts that made more efforts at being clean never attracted the 
following. Amoeba, the Hurd, etc. etc. all have remained obscure, 
announcementware, or worse - mostly because they weren't useful to 
enough outsiders to attract volunteer and financial support.

It's the classical "worse is better" approach.
Though that slogan is misleading; it should be "time-to-market is more 
important than getting every detail right". Which translates to "there's 
never time to do it right, but always time to do it over" for the engineers.


While this all supports Jon's stance on the surface, in this case, I'd 
still stick with a combinator interface.
First, the argument that control-code style printf is what programmers 
expect and know would be strong if it were valid; C++ stdio already 
deviates from it, so you don't erect a barrier if you deviate, too.
Second, known better alternatives have been implemented and work. So 
there's no development risk involved (as would be with a novel way to 
distribute computations across a network or similar stuff).
Third, control codes mean you need escaping conventions, and these make 
all kinds of higher-order programming more difficult by an order of 
magnitude. The problems with printf aren't just that "it's not clean", 
they have quite concrete negative consequences.

In the end, however, the decision which design to choose boils down to 
the question: "Do we have somebody who's willing and able to implement it?"
If there's nobody, and you have somebody who can and will implement a 
control-code variant, then the decision is pretty much predetermined: 
better some printf than nothing at all, since in the former case, you at 
least have a chance of getting something better later on, while in the 
latter case, you don't get anything at all done.

Regards,
Jo
0
jo427 (1164)
12/14/2007 8:02:03 PM
Jon Harrop schrieb:
> Joachim Durchholz wrote:
>> Jon Harrop schrieb:
>>> Generating web pages is actually an excellent example of using printf:
>>>
>>>   let rec print ff = function
>>>     | PCData text -> fprintf ff "%s" text
>>>     | Element(tag, []) -> fprintf ff "<%s />" tag
>>>     | Element(tag, xs) -> fprintf ff "<%s>%a</%s>" tag prints xs tag
>>>   and prints ff = function
>>>     | [] -> ()
>>>     | h::t -> fprintf ff "%a%a" print h prints t;;
>> Not at all IME.
>>
>> I just finished setting up a PHP framework and am using it in
>> application programs.
>> printf and the like are essentially useless there. The formatting
>> information is typically carried in data definitions, with the
>> possibility to override in widget definitions; this decouples formatting
>> and text so well that there's simply no need for printf anymore.
> 
> Can you elaborate? I don't understand what the code would look like.

This is what generating HTML looks like (with apologies for the yucky 
syntax, PHP is what you get if you do web programming, so I don't really 
have a choice):

<td><?php
   input ('birth_date', $some_value)
?></td>

where that input function generates an <input name="birth_date" 
value="$some_value"> tag.
Formatting is looked up in a global array (if this were an FPL, I'd have 
wrapped it into a parameter, but this is PHP). That array is declared 
similarly to this:

   $metadata = array (
     'birth_date' => array (
       'date', # Type specifier
       'YYYY-MM-DD', # a birth_date is usually displayed in ISO format
       ... # Validation information etc.
     )
   );

The input function knows how to format dates, numbers, strings, etc.
Its operation is mostly based on the $metadata that it finds under the 
field name, though it's possible to replace some or all of the 
information by passing in override data.

Other field types are monetary values, which are formatted with 
thousands separators and two decimal places, person height (three 
digits, no decimals), phone numbers, social security numbers, etc. etc. etc.
(Well, actually we don't have entries for phone and social security 
numbers, because the software currently doesn't use that data in any way 
except for display, so formatting outputs or validating inputs is 
pointless and we don't need a display type for these.)

Note that $metadata contains information not only for output formatting, 
but also for input validation and database mapping. And the general 
principle is "$metadata just provides defaults which can be overridden 
if needed" (so far, there has been surprisingly little need for such 
overrides).
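
Transposed to an FPL, the same metadata-driven approach might look like
this SML sketch (the types, field names, and toy formatting are all
invented for illustration):

```sml
(* Per-field display metadata, passed as a parameter, not a global. *)
datatype display = Date   (* ISO YYYY-MM-DD *)
                 | Money  (* thousands separators, two decimals *)
                 | Text

type meta = {name : string, display : display}

(* A toy renderer; a real one would format each display type properly. *)
fun render Date  v = v
  | render Money v = v ^ ".00"
  | render Text  v = v

fun input ({name, display} : meta) value =
  "<input name=\"" ^ name ^ "\" value=\"" ^ render display value ^ "\">"

val tag = input {name = "birth_date", display = Date} "1980-05-17"
```

The point survives the translation: formatting decisions live with the data
description, so call sites never mention format strings at all.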

Regards,
Jo
0
jo427 (1164)
12/15/2007 12:22:15 PM
(belated reply, sorry)

Stephen J. Bevan wrote:
> Ben Franksen <ben.franksen@online.de> writes:
>> Sean Gillespie wrote:
>>> Second, '=' is not used for function
>>> declarations - I think this is confusing because people usually
>>> associate '=' as a variable assignment.
>>
>> Sigh. It was a very bad case of misusing established (mathematical)
>> language when the designers of C did that.
> 
> Since Fortran and PL/1 had already been misusing this language for
> 10-15 years can we really fault K&R for for their choice?

Well, then, probably not. (Not that I care very much who to blame. When I
learned about FPLs I was reassured by having '=' restored to some
resemblance of its mathematical meaning.)

Cheers
Ben
0
12/15/2007 8:47:12 PM
Jon Harrop wrote:
> rossberg@ps.uni-sb.de wrote:
>> On Dec 10, 12:08 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
>>> Ok. So our code has dozens of corporate users, so I know is tested and
>>> works. Who has taken the plunge with Alice ML and what do they use it
>>> for?
>> 
>> It is mainly used for teaching, and being a small research project
>> probably will never see commercial users. That never was its point
>> either. So?
> 
> Having users proves that software works. For example, I do not consider it
> a coincidence that MLton crashes all the time and has few users but OCaml
> rarely crashes and has many more users.

So when M$-Windows did crash all the time (not to speak of their 'office'
applications) it was a sign of not many users using this crap? I think not.

Do you really believe in that line of reasoning?

Cheers
Ben
0
12/15/2007 9:27:12 PM
Jon Harrop wrote:
> klohmuschel@yahoo.de wrote:
>> What would be the projects you are going to realize in cases of
>> MiniML. I mean what will MiniML give you OCaml or F# (or how they call
>> it) cannot deliver to you?
> 
> Lots of things. :-)

Interesting. Let me compare to what Haskell offers.

> Advantages over OCaml:
> 
> . Freedom for the community to innovate the language, which primarily
> means adding all of the obvious things that OCaml should have (a
> try..finally construct, fuller pattern matching, a complete stdlib). The
> OCaml community have already done more innovation than the OCaml
> developers themselves only to have their contributions refused for entry
> into the OCaml distribution.

Haskell community has a very different attitude, here. Though nowadays the
tendency is away from bundling everything with the compiler toward a more
Perl-CPAN like thing (hackage). This is a GoodThing, IMO.

> . Operator overloading.

Yes. And functions, too (via type classes).

> . High-performance FFI ...

Yes.

> ...by using C-friendly data structures rather than
> OCaml's silly 4M-element arrays, Bigarrays and "Raw" arrays.

C-friendly data structures are provided in Haskell's FFI. They are normally
not used by non-FFI Haskell code, of course; that would be very
inconvenient. Except arrays, maybe, sometimes.

> . Type safe marshalling.

Yes.

> . Free polymorphism.

Of course, polymorphism doesn't cost anything extra ;-)

Seriously: I am not familiar with this term. Could you explain?

> . Generic printing.

Yes: class Show. Or, if you want fancy (i.e. readable) output: use one of
two popular (fully generic) pretty-printing libs (but they are not
deriveable, so maybe not 'generic' enough for you).

> . Per-type functions (e.g. comparison).

Hm, you mean type classes?

> . Machine-precision ints rather than 31- or 63-bit ints.

Yes, though dependent on implementation. Standard requires at least 30 bits
but GHC (and I think most others) actually have 32/64.

> . No weird boxing problems, e.g. having to add "+. 0.0" to the end of
> numeric functions to improve performance.

Never heard of that one from Haskell users. Sometimes you'll have to make
things stricter than the compiler already guesses. There is a clean way to
do so (use 'seq', or even better: use a strict function; or use the new
bang patterns).

> . Much better performance on numeric code that exploits abstractions.

Supposedly not a strong point of Haskell.

> . DLLs.

Don't know about that one.

> . Native-code performance from the REPL.

No REPL for Haskell, unfortunately, though ghci (which is /not/ a full REPL)
compiles to native code AFAIK (w/o optimisation, of course).

> . Better REPL: e.g. saving and loading of state.
> 
> . Lots more useful functionality in the stdlib and none of the cruft.

This is a bit too unspecific to comment on.

> . Commerce friendly, i.e. no brittle interfaces making it practically
> impossible to sell libraries written in OCaml.

If you mean 'closed source', I fear GHC has similar problems: There is no
ABI that is stable between compiler versions.

> Advantages over F#:
> 
> . Platform independence (many scientists and engineers don't run Windows).
> 
> . Faster symbolics thanks to a custom GC that isn't optimized for C#
> programs.
> 
> . No .NET baggage, e.g. different types for closures and raw functions.

Yes to these, since not .NET dependent.

> . Better support for structural types, e.g. .NET has trouble reloading
> marshalled data from a different REPL instantiation.
> 
> Generally, I want the stdlib to include support for modern graphics and
> GUI programming, e.g. OpenGL and GTK+/Qt.

Haskell has gtk2hs. Very nice, although I miss a more functional feeling
layer on top of it.

> LLVM makes it extremely easy to generate very high performance numerical
> code (including SIMD instructions), which makes it the perfect foundation
> for a technical computing platform.
> 
> There are several very talented people looking at doing the same thing.
> I'm sure we won't have trouble collaborating and LLVM will make it
> incredibly easy to build something useful in a relatively short amount of
> time.
> 
>> Btw: I have never become accustomed to printf in C.
> 
> Printf is very useful in OCaml and F#.

Haskell has printf, but the format string is not used in type checking (it
is built on Data.Dynamic). It is not a good idea to build an ad-hoc feature
into the language (i.e. type checking (based on) the content of a string)
only to support printf. BTW, what does Ocaml do if the format string is the
result of a function call?

I use printf in Haskell almost only to format floating point values (and
that is only because I am too lazy to write my own parameterised conversion
function which wouldn't be too hard).

BTW, I think this MiniML-on-LLVM would be a /very/ nice thing to have.

Cheers
Ben
0
12/15/2007 10:35:43 PM
Ben Franksen wrote:
> Jon Harrop wrote:
>> Having users proves that software works. For example, I do not consider
>> it a coincidence that MLton crashes all the time and has few users but
>> OCaml rarely crashes and has many more users.
> 
> So when M$-Windows did crash all the time (not to speak of their 'office'
> applications) it was a sign of not many users using this crap? I think
> not.
> 
> Do you really believe in that line of reasoning?

Absence of evidence is not evidence of absence. The fact that nobody files
bug reports against SML compilers is not a testament to their robustness.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/15/2007 11:12:03 PM
Joachim Durchholz wrote:
> I have seen this approach work reasonably well in the Linux kernel.
> These guys have been rooting out stuff, rewriting it, and replacing it
> with better implementations, details, and even concepts, sometimes even
> replacing entire subsystems. The Linux kernel is now something far, far
> better than could be fathomed initially.
> Of course, some decisions can't be easily reversed. However, those OS
> efforts that made more efforts at being clean never attracted the
> following. Amoeba, the Hurd, etc. etc. all have remained obscure,
> announcementware, or worse - mostly because they weren't useful to
> enough outsiders to attract volunteer and financial support.
> 
> It's the classical "worse is better" approach.
> Though that slogan is misleading; it should be "time-to-market is more
> important than getting every detail right". Which translates to "there's
> never time to do it right, but always time to do it over" for the
> engineers.

Your statement is something very different from the "worse is better"
slogan. While the former is about a less-than-optimal implementation of a
basically sound design, the latter suggests that even major design flaws
can be corrected over time. IMO this is /not/ true. The major flaws in the
design of e.g. Unix (and thus Linux) are not fixable, no matter how long
you are willing to wait. You will, for instance, never get a provably
correct Linux kernel, or get a Linux system API that doesn't suck. If you
want that, you'll have to start building a new system from scratch. What
you can do to gain users is to /emulate/ the legacy system as a stop-gap
measure, so that all the old programs can still be used.

Similar thoughts apply to libraries.

Cheers
Ben
0
12/15/2007 11:58:18 PM
Vesa Karvonen wrote:
> michele.simionato@gmail.com <michele.simionato@gmail.com> wrote:
> [...]
>> [f4]  printf ("The square of %s is %s \n", [x, y])
> 
>> [...] OTOH, [f4] is more dynamic. Suppose for instance you want to
>> define a template language to generate Web pages; if you change the
>> template at runtime, by adding additional placeholders, you could just
>> append the required additional arguments to the argument list, whereas
>> using [f3] you would have to add the arguments by hand to the source
>> code and to recompile the function containing the printf expression.
> 
> Sorry, but what you say above makes little sense to me in many respects.
> For example, using printf to generate web pages sounds silly to me.

Indeed. Though I wouldn't put that past some people. It is comparable to
writing a parser solely with Perl regexes, and I can tell you I have seen
such beasts.

Cheers
Ben
0
12/16/2007 12:04:44 AM
Ben Franksen wrote:
> Jon Harrop wrote:
>> . Operator overloading.
> 
> Yes. And functions, too (via type classes).

I'd like to know if the overloads have been resolved statically.

>> . High-performance FFI ...
> 
> Yes.

Any evidence of that? I'd be interested to see, for example,
high-performance visualizations using dynamic OpenGL vertex buffer objects
(VBOs).

>> . Type safe marshalling.
> 
> Yes.

If you define some types, create some data structures and marshal them to a
file from an interactive session. Then start a new interactive session,
redefine the types and reload the data, does that work?

>> . Free polymorphism.
> 
> Of course, polymorphism doesn't cost anything extra ;-)
> 
> Seriously: I am not familiar with this term. Could you explain?

Is there a run-time cost associated with polymorphism?

>> . Per-type functions (e.g. comparison).
> 
> Hm, you mean type classes?

Yes.

>> . Machine-precision ints rather than 31- or 63-bit ints.
> 
> Yes, though dependent on implementation. Standard requires at least 30
> bits but GHC (and I think most others) actually have 32/64.

I would only consider using GHC and Hugs. None of the other Haskell
implementations have enough users, e.g. they're untested.

>> . No weird boxing problems, e.g. having to add "+. 0.0" to the end of
>> numeric functions to improve performance.
> 
> Never heard of that one from Haskell users. Sometimes you'll have to make
> things stricter than the compiler already guesses. There is a clean way to
> do so (use 'seq', or even better: use a strict function; or use the new
> bang patterns).

This is actually the killer reason why I'd never use Haskell for my work (at
least not with my current knowledge). Optimizing Haskell programs is
unpredictably and arbitrarily difficult. In contrast, optimizing OCaml is
basically very easy: you just write C-like OCaml with a couple of tricks.

>> . Much better performance on numeric code that exploits abstractions.
> 
> Supposedly not a strong point of Haskell.

Lennart Augustsson's new Haskell implementations of the ray tracer are doing
ok now. However, nobody can explain why his are fast but Phil's were 2x
slower. This is exactly what scares me the most about Haskell.

>> . DLLs.
> 
> Don't know about that one.

I think this is another serious problem with Haskell (and OCaml).

>> . Native-code performance from the REPL.
> 
> No REPL for Haskell,

Isn't Hugs a REPL?

> unfortunately, though ghci (which is /not/ a full 
> REPL) compiles to native code AFAIK (w/o optimisation, of course).

Great.

>> . Better REPL: e.g. saving and loading of state.
>> 
>> . Lots more useful functionality in the stdlib and none of the cruft.
> 
> This is a bit too unspecific to comment on.

Things like vectors and matrices, FFTs and so on.

>> . Commerce friendly, i.e. no brittle interfaces making it practically
>> impossible to sell libraries written in OCaml.
> 
> If you mean 'closed source', I fear GHC has similar problems: There is no
> ABI that is stable between compiler versions.

:-(

>> Advantages over F#:
>> 
>> . Platform independence (many scientists and engineers don't run
>> Windows).
>> 
>> . Faster symbolics thanks to a custom GC that isn't optimized for C#
>> programs.
>> 
>> . No .NET baggage, e.g. different types for closures and raw functions.
> 
> Yes to these, since not .NET dependent.

How fast is Haskell at symbolics?

>> . Better support for structural types, e.g. .NET has trouble reloading
>> marshalled data from a different REPL instantiation.
>> 
>> Generally, I want the stdlib to include support for modern graphics and
>> GUI programming, e.g. OpenGL and GTK+/Qt.
> 
> Haskell has gtk2hs. Very nice, although I miss a more functional feeling
> layer on top of it.

What are the major applications written using gtk2hs? (there are at least
thousands of people using GTK applications written in OCaml)

>>> Btw: I have never become accustomed to printf in C.
>> 
>> Printf is very useful in OCaml and F#.
> 
> Haskell has printf, but the format string is not used in type checking (it
> is built on Data.Dynamic). It is not a good idea to build an ad-hoc
> feature into the language (i.e. type checking (based on) the content of a
> string) only to support printf.

If you want to get users, it is a good idea to provide easy-to-use printing
facilities like printf.

> BTW, what does Ocaml do if the format string is the result of a function
> call? 
> 
> I use printf in Haskell almost only to format floating point values (and
> that is only because I am too lazy to write my own parameterised
> conversion function which wouldn't be too hard).
> 
> BTW, I think this MiniML-on-LLVM would be a /very/ nice thing to have.

Absolutely. :-)

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/16/2007 12:14:09 AM
Jon Harrop wrote:
> Vesa Karvonen wrote:
>> I think that OCaml's solution to implement support for
>> format strings in the compiler is far from simple.  Format strings are a
>> very special construct that have very little utility in the larger scheme
>> of things.
> 
> The same can be said of regular expressions. Would you rather replace
> those with a combinator library as well?

Indeed. Having a good parser combinator library at hand is /far/ more useful
(in the long run) than having regexes built in. Sure, some simple things
are a bit more verbose with a parser combinator library. OTOH difficult
things become a lot easier and /much/ more correct and reliable.
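
For readers unfamiliar with the style, the core of a parser combinator
library fits in a handful of SML lines. This is a toy sketch, not any real
library:

```sml
(* A parser consumes a char list and yields a result plus the rest. *)
type 'a parser = char list -> ('a * char list) option

fun return x : 'a parser = fn cs => SOME (x, cs)

infix >>=
(* Sequencing: run p, then feed its result to f. *)
fun (p >>= f) cs =
  case p cs of
      SOME (x, rest) => f x rest
    | NONE => NONE

fun satisfy pred =
  fn c :: rest => if pred c then SOME (c, rest) else NONE
   | [] => NONE

(* Grammars compose where regexes get hairy. *)
val digit = satisfy Char.isDigit
val twoDigits = digit >>= (fn a => digit >>= (fn b => return (a, b)))
val result = twoDigits (explode "42x")
(* SOME ((#"4", #"2"), [#"x"]) *)
```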

Cheers
Ben
0
12/16/2007 12:22:34 AM
Ben Franksen wrote:
> Indeed. Having a good parser combinator library at hand is /far/ more
> useful (in the long run) than having regexes built in. Sure, some simple
> things are a bit more verbose with a parser combinator library. OTOH
> difficult things become a lot easier and /much/ more correct and reliable.

My impression is that parser combinators are a nice abstraction but often
far too inefficient for real programs. After all, OCaml has a custom regexp
interpreter written in C.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/16/2007 12:56:43 AM
Ben Franksen <ben.franksen@online.de> writes:
> > . Generic printing.
> Yes: class Show.

Doesn't work for GADTs, for reasons I don't understand even slightly.
0
phr.cx (5493)
12/16/2007 5:22:46 AM
On Dec 14, 2:45 pm, Vesa Karvonen <vesa.karvo...@cs.helsinki.fi>
wrote:
> michele.simion...@gmail.com <michele.simion...@gmail.com> wrote:
> > [...] printf is more useful
> > for quick and dirty scripts and for exploratory code (that means *a lot*
> > of code).
>
> If you have actual concrete examples (rather than contrived one
> liners), then we can look at and discuss them.  A vague phrase like
> "quick and dirty scripts" or "exploratory code" can mean many things
> and it is more than likely we could come up with better solutions for
> most actual cases.

I don't think so. A lot of times I run
small scripts on our database to compute a few statistical
data and I want to print out floats and integers, as well
as descriptive strings, and nothing beats the convenience
of printf for that. But I will give you another example.
Recently there was a post of Antonio Cangiano about
Ruby 1.9 that got a lot of attention in many communities
(see for instance
http://antoniocangiano.com/2007/11/30/more-on-fibonacci-oops-sorry-lisp-haskell-runs-it-5-times-faster/
). These kinds of examples (contrived as they are) are
real attention-catchers and tend to generate endless
debates, so they are good if you are writing an article for
a technical review and want to attract readers.
In particular, you may want to popularize a non-mainstream
language by showing off its virtues: see for instance
the Haskell implementation compared to Python and Ruby:
http://cgi.cse.unsw.edu.au/~dons/blog/2007/11/29#smoking

The use of printf here is particularly convenient. If
I were to post a SML version, something like

fun fib 0 = 0
  | fib 1 = 1
  | fib n = fib (n-1) + fib (n-2)

val () = app print (List.tabulate(36,
  fn i => (Int.toString i) ^ " => " ^ (Int.toString (fib i)) ^ "\n"))

I am sure people would say: "What? What kind of language
is SML, it does not *even* have printf??" and come away
with a bad (and false) impression of the language.
So, I would like to popularize SML to the masses, and I can't
(at least it is more difficult than it needs to be), and
that is why I am unhappy :-(

          Michele Simionato
0
12/16/2007 7:13:52 AM
On Dec 14, 2:45 pm, Vesa Karvonen <vesa.karvo...@cs.helsinki.fi>
wrote:
> I also think that learning to program effectively in a safe,
> higher-order, language with a fairly expressive type system like SML
> (with parametric polymorphism and modules with abstract types) is much
> more difficult and takes more time than learning to program
> effectively in a (often unsafe) typed language with just simple types
> like C or a dynamically checked language like Python.
>
> For example, I think that it would be fair to say that I learned to
> program in Scheme effectively roughly immediately while reading SICP
> (I already had done some functional/higher-order programming and had
> years of experience in imperative and OO programming at that point).
> By "effectively" I mean that I could straightforwardly implement
> anything I could imagine.  When I got back to ML, I found out that
> there were many things I didn't know how to implement
> straightforwardly.  I think that I'm now almost as effective in SML as
> in Scheme, but there are still some things I could "just do" in
> Scheme, but would require me to spend potentially a lot of time
> thinking in SML.

Dunno, my first impression of SML was of a relatively
easy language. Combinators are not trivial, right,
but they are much easier than things like continuations
or syntax-case in Scheme. Of course my perception is
flawed because I am learning SML after Scheme, and
I already know a lot of functional tricks from Scheme,
so I cannot be sure. For the moment however, I would
say that learning SML has been easier than learning
Scheme, at least for me. Certainly the Basis library
helps a lot.

   Michele Simionato
0
12/16/2007 7:46:49 AM
On Dec 14, 2:45 pm, Vesa Karvonen <vesa.karvo...@cs.helsinki.fi>
wrote:
> I think that I'm now almost as effective in SML as
> in Scheme, but there are still some things I could "just do" in
> Scheme, but would require me to spend potentially a lot of time
> thinking in SML.

Can you give examples? I am just curious.

 M.S.
0
12/16/2007 7:50:54 AM
Ben Franksen schrieb:
>> . Type safe marshalling.
> 
> Yes.

Not really.

Marshalling forces evaluation of the values marshalled.

IOW you may get nontermination (if the data structure contains infinite 
substructures). You will lose the advantages of having a non-strict 
language.

>> . No weird boxing problems, e.g. having to add "+. 0.0" to the end of
>> numeric functions to improve performance.
> 
> Never heard of that one from Haskell users. Sometimes you'll have to make
> things stricter than the compiler already guesses. There is a clean way to
> do so (use 'seq', or even better: use a strict function; or use the new
> bang patterns).

'seq' isn't considered easy to use though.
(Dunno about strict functions and bang patterns. Pattern application is 
strict in Haskell, so I'm not sure how this relates to strictness.)

Regards,
Jo
0
jo427 (1164)
12/16/2007 10:28:54 AM
Jon Harrop schrieb:
> How fast is Haskell at symbolics?

FWIW, it's fast enough for writing Haskell compilers in Haskell.

Regards,
Jo
0
jo427 (1164)
12/16/2007 10:31:05 AM
Joachim Durchholz <jo@durchholz.org> writes:
> Marshalling forces evaluation of the values marshalled.

I don't see any fundamental reason why this should be necessary.  Nothing
wrong with writing out unevaluated thunks in some fashion.
0
phr.cx (5493)
12/16/2007 11:05:40 AM
michele.simionato@gmail.com <michele.simionato@gmail.com> wrote:
> On Dec 14, 2:45 pm, Vesa Karvonen <vesa.karvo...@cs.helsinki.fi>
> wrote:
[...]
> > If you have actual concrete examples (rather than contrived one
> > liners), then we can look at and discuss them.  A vague phrase like
> > "quick and dirty scripts" or "exploratory code" can mean many things
> > and it is more than likely we could come up with better solutions for
> > most actual cases.

> I don't think so.  A lot of times I run small scripts on our database to
> compute a few statistical data and I want to print out floats and
> integers, as well as descriptive strings, and nothing beats the
> convenience of printf for that.

If you do a lot of such ad hoc statistics computations and queries, I
would suggest thinking about designing and implementing libraries for
that purpose.  IOW, introduce a library for expressing queries using
combinators and computing statistics from such queries and printing them.
You might even be able to mostly eliminate explicit formatting from your
code (just specify the shape of the result and the combinators perform the
formatting) while making it much easier to write the queries and
statistics computations.

Again, please provide actual examples.  Your argument is but an assertion
without evidence.  Repeating it does not make it true.  Your style of
argumentation is called Proof by Assertion and it is a well known logical
fallacy:

  http://en.wikipedia.org/wiki/Proof_by_assertion

> But I will give you another example.
[...]
> The use of printf here is particularly convenient.  If I were to post a
> SML version, something like

> fun fib 0 = 0
>   | fib 1 = 1
>   | fib n = fib (n-1) + fib (n-2)

> val () = app print (List.tabulate(36,
>   fn i => (Int.toString i) ^ " => " ^ (Int.toString (fib i)) ^ "\n"))
            ^              ^            ^                    ^

As an aside, the parentheses pointed ^ above are redundant.  There is a
very simple rule to remember here.  See the section "Parentheses" on the
page http://mlton.org/TipsForWritingConciseSML .

Honestly, I'm not entirely sure what you are trying to do here.  It is a shame
that communicating through a newsgroup is so difficult.  First of all,
your program doesn't quite do the same thing as most of the examples in
other languages.  Unlike the other examples, your program constructs an
intermediate list of strings whose elements are then printed.  The output
also differs from the output of most of the other examples.

Perhaps your goal here is to write the example in such a manner that only
the fib function is defined by you and everything else comes from the
Basis library.  But, frankly, I find such a goal misguided.

Have you heard about the "What if?" principle?  It is by no means the only
name for the idea.  I think I read it under a name like that from the book
"Haskell: The Craft of Functional Programming", but I don't have a copy of
that book at hand.  The idea is simple.  When you want to write something,
you begin by asking yourself a simple question: What if I could have all
the functions in the world, which functions would I use to solve this
problem?

Now, what (simple) functions could you possibly use to simplify the above?

To avoid the complexity of constructing a useless intermediate list, you
could use a simple function for iterating over a range of integers:

fun upto i j f = if i < j then (f i ; upto (i+1) j f) else ()

For a more comprehensive solution, see the Iter : ITER structure on the
page http://mlton.org/ForLoops .

To make the conversion of integers to strings more concise, you could use
a shorter binding for Int.toString:

val D = Int.toString

To make it more concise to print a sequence of strings with a newline, you
could use a function printlns, that, unsurprisingly, prints a list of
strings with a newline.  A simple implementation would be something like
this:

fun printlns ss = (app print ss ; print "\n")

But a slightly more efficient (due to different semantics) implementation
is also possible (and available from my library).

With these very simple utilities, you could then write the example as:

fun fib 0 = 0
  | fib 1 = 1
  | fib n = fib (n-1) + fib (n-2)

val () = upto 0 36 (fn i => printlns ["n=", D i, " => ", D (fib i)])

Now, if you anticipate writing similar programs later, you should ask how
to make that simpler for you in the future.  The simple answer is that you
put the stuff into a library.  You write a simple library for expressing
loops and a simple library for formatting output.  The page
http://mlton.org/ForLoops already shows how to write a simple library for
loops.  Here is a sketch of how to write a dead simple library for
relatively concise formatting:

structure Cvt : sig
   type 'a t = 'a -> string
   val PL : char -> int -> 'a t -> 'a t
   val PR : char -> int -> 'a t -> 'a t
   val L : 'a t -> 'a list t
   val D : int t
   val R : real t
end = struct
   type 'a t = 'a -> string
   fun PL c n f x = StringCvt.padLeft c n (f x)
   fun PR c n f x = StringCvt.padRight c n (f x)
   fun L c =
    fn [] => "[]"
     | x::xs => concat ("["::c x::foldr (fn (x, ss) => ", "::c x::ss) ["]"] xs)
   val D = Int.toString
   val R = Real.toString
end

To use a library such as the above conveniently, you just open it in an
appropriate scope.
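To illustrate, here is how uses of the Cvt structure from above might
look (output shown in comments; an untested sketch):

```sml
(* Open Cvt locally so the short names D, L, PL are in scope. *)
local open Cvt in
  val s1 = L D [1, 2, 3]                 (* "[1, 2, 3]" *)
  val s2 = PL #"0" 3 D 7                 (* "007" *)
  (* Combinators compose: pad each element of a list to width 4. *)
  val s3 = L (PL #" " 4 D) [1, 10, 100]  (* "[   1,   10,  100]" *)
end
```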

> I am sure people would say: "What? What kind of language SML is, it does
> not *even* have printf??" and get away with a bad (and false) impression
> of the language.

I've actually heard a comment to the effect, IIRC, "I can't believe how
one could design a language without printf!" from a colleague, referring
to SML.  Later, with that same colleague, we wrote basically something
that you could describe as a binding and a toy extension module in SML to
the system (not written in SML) we were maintaining.  He seemed quite
impressed by the fact that the extension module (and the binding)
(together still a small, but non-trivial collection of code) worked
correctly the first time.  Part of the "binding" was a small combinator
library, used for certain kind of interfacing with the system, that, I
gather, he also found quite interesting in terms of the level of
abstraction, functionality, and brevity.

> So, I would like popularize SML to the masses, and I can't (at least it
> is more difficult than it needs to be) and this is the reason why I am
> unhappy :-(

Again, I think that you are putting far too much emphasis on printf.
Looking at the examples in other languages from the pages that you pointed
at, I can see that not even all of them use printf.  Frankly, it makes me
think that you are projecting your own beliefs and insecurity upon others.

The way I see it, formatting simple ad hoc output is one of the absolutely
easiest parts of programming.  I don't think I've ever had problems with
it in any language (with or without printf).  Formatting output using the
simple approach I outlined above is not more difficult than the use of
format strings.  Compared to printf and string interpolation, it has the
advantage that any user can trivially add new formatting utilities
(corresponding to specifiers in printf).  It also has the advantage that
it is much less prone to simple mistakes like specifying arguments in the
wrong order, or using incorrect format specifiers, so you'll probably more
often just write the correct formatting code the first time.  Those are
concrete advantages over printf.
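For example, adding the equivalent of a new format specifier is just
defining another function of type 'a -> string (B and O below are
made-up names, not from any library):

```sml
(* Hypothetical additional converters; any 'a -> string function
   works with the approach. *)
val B = fn true => "yes" | false => "no"       (* booleans *)
fun O c = fn NONE => "-" | SOME x => c x       (* optional values *)

(* Using them with the printlns defined earlier: *)
val () = printlns ["flag=", B true, " limit=", O Int.toString NONE]
(* prints: flag=yes limit=- *)
```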

I haven't gone through all of your messages just now, but I don't recall
that you would ever have actually explained, in English, why you think
that printf is better or what are the real advantages of printf compared
to other solutions.  All of the examples you have shown can be expressed
with comparably complex code using the simple approach I've outlined
above.  Side-by-side comparisons of roughly similar complexity one-liners
without actually expressing why one alternative is better than another
also don't work.  For example, in one of your posts you present the
following examples (among others):

> [f1]  print ("The square of " ^ x ^ "is" ^ y ^ "\n")

I would write this as:

        printlns ["The square of ", x, " is ", y]

> [f3]  printf ("The square of %s is %s" x y)

And say that you would not be happy with the first one, because it has
^'s.  To me, the first example (and my version of it) reads like English
("The square of x is y") and the little formatting mistake you have there
was also immediately obvious to me.  Also note that the second example lacks a
newline (so if you are counting mistakes the balance is +- 0).  The second
example cannot be read as English: The square of %s --- what the heck is
%s?  And the more complex the format string gets, the more difficult it
becomes to match specifiers to their arguments.  Exaggerating a bit,
printf is write-only code.

BTW, I think that the syntax of SML alone is different enough from most
mainstream languages that popularizing SML to the "masses" will be
difficult.  If you wish to promote SML, I recommend that you target a more
mature audience that can actually see past simple syntactic differences.
I recommend that you think hard about what the relative advantages of a
language like SML are.  I definitely have clear ideas of what the relative
advantages of a language like SML are and why I'm programming in SML.
Catering to newbies or adhering to popular beliefs aren't them.

-Vesa Karvonen
0
12/16/2007 1:18:46 PM
Paul Rubin schrieb:
> Joachim Durchholz <jo@durchholz.org> writes:
>> Marshalling forces evaluation of the values marshalled.
> 
> I don't see any fundamental reason why this is necessary.  Nothing
> wrong with writing out unevaluated thunks in some fashion.

The problem is that you somehow must make sure that the thunks will find 
the same functions they're referring to when they were marshalled out.

This either implies some means of identifying the functions across 
process boundaries (e.g. via code signing), or a way to send the 
functions' code together with the data (e.g. byte code plus a way to 
avoid having to resend the same function over and over).

Either way, there's no fundamental reason against doing it but it's 
quite some special-purpose infrastructure that somebody has to find 
interest in, design, implement, test, and maintain.

AFAIK it's on the to-do list of more than one Haskell compiler but not 
yet done.
(For me, it has been the one missing link to use Haskell in production, 
since with it, I'd be able to argue, "Look, we won't ever need to 
interface with Mysql with that!")

Regards,
Jo
0
jo427 (1164)
12/16/2007 1:58:57 PM
On Dec 16, 2:18 pm, Vesa Karvonen <vesa.karvo...@cs.helsinki.fi>
wrote:
> Again, please provide actual examples.

I gave examples: printing results of statistics, the Fibonacci one,
etc. It is clear that you don't find those examples compelling, so I
will stop the argument. Your standard answer is "You can write a
library for that", which is of course true, but completely misses my
point that one should not have to write a custom library for simple
things like string formatting.

> Perhaps your goal here is to write the example in such a manner that only
> the fib function is defined by you and everything else comes from the
> Basis library.  But, frankly, I find such a goal misguided.

Yes, that was *exactly* my goal. Why does the Fibonacci example take
three lines in all those languages, where in SML it would take
thirteen: three for the Fibonacci and ten for auxiliary functions that
all the other languages have built in??

> I've actually heard a comment to the effect, IIRC, "I can't believe how
> one could design a language without printf!" from a colleague, referring
> to SML.

So, I am not the only one,  at least.

> Again, I think that you are putting far too much emphasis on printf.

printf is just one example of many. Take for instance the syntax to
extract a sublist in Python, such as alist[5:10]. It is very convenient
and everybody uses it. In SML it would be easy to write a library to
get an equivalent syntax, but it is not in the core, and everybody
would use a different library. The general point is that SML misses
many little conveniences compared to other languages. I think you are
underestimating the advantages of little conveniences. Many little
conveniences together make a lot of difference.
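For the record, a sublist function is a one-liner on top of the Basis
library (sub is a made-up name, which is exactly the problem: everybody
would pick a different one):

```sml
(* A Python-style alist[5:10] as a library function.  Unlike Python
   slices, this raises Subscript instead of clamping out-of-range
   indices. *)
fun sub (xs, i, j) = List.take (List.drop (xs, i), j - i)

val ys = sub ([0, 1, 2, 3, 4, 5], 1, 4)  (* [1, 2, 3] *)
```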

> Looking at the examples in other languages from the pages that you pointed
> at, I can see that not even all of them use printf.  Frankly, it makes me
> think that you are projecting your own beliefs and insecurity upon others.

??

> BTW, I think that the syntax of SML alone is different enough from most
> mainstream languages that popularizing SML to the "masses" will be
> difficult.  If you wish to promote SML, I recommend that you target a more
> mature audience that can actually see past simple syntactic differences.
> I recommend that you think hard about what the relative advantages of a
> language like SML are.  I definitely have clear ideas of what the relative
> advantages of a language like SML are and why I'm programming in SML.
> Catering to newbies or adhering to popular beliefs aren't them.

That's your opinion. My opinion is that the important target group of
programmers to look at is the teenagers, smart kids who are going to
write the killer applications of tomorrow. Consider for instance Ruby
on Rails, which raised the popularity of Ruby from zero to infinity in
a couple of years. Its author was in his twenties when he wrote it (I
think he is still in his twenties). The only way SML can get some
popularity is to have some popular application written in it, and only
kids are ambitious and foolish enough to attempt something like that.

  Michele Simionato
0
12/16/2007 2:04:58 PM
michele.simionato@gmail.com <michele.simionato@gmail.com> wrote:
> On Dec 16, 2:18 pm, Vesa Karvonen <vesa.karvo...@cs.helsinki.fi>
> wrote:
> > Again, please provide actual examples.

> I gave examples, printing results of statistics, the Fibonacci one, etc.

I meant actual concrete examples.  More specifically, non-trivial code
snippets.

> It is clear that you don't find those examples compelling, so I will stop
> the argument.  Your standard answer is "You can write a library for
> that", which is of course true, but completely misses my point that one
> should not have to write a custom library for simple things like string
> formatting.

Well, I think that we have to agree to disagree here.  I have no objection
to writing libraries.  In fact, programmers who don't write libraries, but
instead write the same kind of boilerplate over and over again are a pet
peeve of mine.  I find that wasteful and boring.

> Yes, that was *exactly* my goal.  Why does the Fibonacci example take
> three lines in all those languages, where in SML it would take thirteen:
> three for the Fibonacci and ten for auxiliary functions that all the
> other languages have built in??

We definitely completely disagree here.  I have no trouble writing my own
libraries in any language.  I have never encountered a language where all
the functions I ever wanted were already predefined.  I personally
maintain a collection of several open source libraries for SML and one of
the main purposes is precisely to provide basic stuff for convenience not
available in the Basis library.

Besides, your count of lines is plain incorrect.  Most of the examples,
that I saw on the pages you linked to, do not fit in 3 lines and/or
actually show all the necessary lines (e.g. such as import declarations
for libraries used).  I also have no idea of where you got the number 10
for the utility lines in SML.  The correct number is 3:

val D = Int.toString
fun upto i j f = if i < j then (f i ; upto (i+1) j f) else ()
fun printlns ss = (app print ss ; print "\n")

fun fib 0 = 0
  | fib 1 = 1
  | fib n = fib (n-1) + fib (n-2)

val () = upto 0 36 (fn i => printlns ["n=", D i, " => ", D (fib i)])

Of the above, printlns is already provided by my Extended Basis library.
The upto function and D are on their way there.  Of course, my library
isn't the Basis library, but there is very little I can do to change that
overnight.  In fact, one of the purposes of my Extended Basis library is,
like in Boost for C++, to establish "existing practice" and provide
reference implementations of stuff that might later be added to the Basis
library.  My library is supported and can be used with several SML
compilers, including MLton, SML/NJ, Poly/ML and MLKit.  I'm happy to port
it to other compilers as long as they don't have bugs that make it
unnecessarily difficult (e.g. Alice ML currently has a bug with sharing
constraints that put my porting effort to a hold for the moment).

> printf is just an example of many.  Take for instance the syntax to
> extract a sublist in Python, such as alist[5:10].  It is very convenient
> and everybody uses it.  In SML it would be easy to write a library to
> get an equivalent syntax, but it is not in the core and everybody would
> use a different library.

Why the hell do you think that I'm maintaining a library (well, several
libraries) that provides little convenience functions like that on top of
the SML Basis library?

During our conversations here I've added several small convenience
functions to my libraries and started working on a couple of new
libraries.  I do that continuously as I write code in SML.  Whenever I
come up with a case where I find that I could use a reasonably general
purpose utility that isn't in some library already, I consider adding it
to some library and usually do add it.

Nobody can anticipate all the functions that someone might some day
require.  Simplistic ad hoc formatting simply isn't something I would be
doing lots of (I often use more sophisticated pretty printing combinators
or just use the simple approach I've described here earlier) nor the kind
of splicing supported by the Python notation (with which I've been
familiar for a long time already).

If you actually wish to help, feel free to submit specific requests and
contribute patches to my libraries or other SML libraries.  As is clearly
stated in the README files with my libraries, contributions are welcome.

> The general point is that SML misses many little conveniences compared
> to other languages.  I think you are underestimating the advantages of
> little conveniences.

That is the exact opposite of the truth.  Like I said, I'm personally
maintaining libraries that provide many little conveniences for
programming with SML.

> Many little conveniences together make a lot of difference.

I totally agree.

> > Looking at the examples in other languages from the pages that you pointed
> > at, I can see that not even all of them use printf.  Frankly, it makes me
> > think that you are projecting your own beliefs and insecurity upon others.

> ??

One of the beliefs I'm referring to is the relative merits of printf.  I
really haven't seen any solid technical reasons for your preferences.  In
contrast, I have presented good reasons (in summary: error prone) against
printf.

The insecurity I'm referring to is that you are afraid that people would
get a bad impression of SML, because you feel bad about your example code.
The feeling is partly justified, as I have shown how to write the example
in a better way, and partly unjustified as you continue with your beliefs
that without printf it gives a bad impression and that everything should
come from the Basis library (which makes your fears a self-fulfilling
prophecy).

> That's your opinion.  My opinion is that the important target of
> programmers to look at are the teenagers, smart kids that are going to
> write the killer applications of tomorrow.  Consider for instance Ruby
> on Rails, who raised the popularity of Ruby from zero to infinity in a
> couple of years.  His author was in his twenties when he wrote it (I
> think he is still in his twenties).  The only way SML can get some
> popularity is to have some popular application written in it, and only
> kids are ambitious and foolish enough to attempt something like that.

So, you should then ask yourself how you could help to make SML more
attractive to your target group.  I guess that you want SML to appear more
attractive in toy examples and benchmarks where the main issue is that
everything used in such examples already needs to be ready in some
library.  (This happens to be another pet peeve of mine as you might
already be aware.)  So, how do you help with that?  By implementing the
utilities and packaging them as conveniently usable libraries (like I'm
doing on a daily basis) or by contributing them as additions to existing
libraries.

AFAIK, you have not contributed anything to any SML library.  If you have
something to contribute, please do.  I welcome contributions to the
libraries I maintain.

-Vesa Karvonen
0
12/16/2007 3:12:56 PM
BTW, as I have said here and elsewhere, the lack of libraries is perhaps
the biggest weakness of SML today.  Which is also *precisely* why I have
been thinking about and working on libraries and programming techniques
worth putting into libraries for about as long as I've been using SML more
seriously:

  http://mlton.org/pipermail/mlton/2005-June/027165.html

Feel free to help!

-Vesa Karvonen
0
12/16/2007 3:39:14 PM
Vesa Karvonen wrote:
> BTW, as I have said here and elsewhere, the lack of libraries is perhaps
> the biggest weakness of SML today.  Which is also *precisely* why I have
> been thinking about and working on libraries and programming techniques
> worth putting into libraries for about as long as I've been using SML more
> seriously:
> 
>   http://mlton.org/pipermail/mlton/2005-June/027165.html
> 
> Feel free to help!

That is an admirable goal but I think it is quite impossible for any one
person to hope to write so many bindings well. I think your time would be
much better spent making it easier for other people (experts in the
respective fields) to create the bindings for you.

This is essentially what I'm trying to achieve with OCaml.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/16/2007 3:53:42 PM
Joachim Durchholz wrote:
> Jon Harrop schrieb:
>> How fast is Haskell at symbolics?
> 
> FWIW, it's fast enough for writing Haskell compilers in Haskell.

Well, GHC is one of the slowest compilers around. However, that may be a
reflection of how much harder it is to write an optimizing compiler for
Haskell compared to an ML.

For example, GHC is >10x slower at compiling the fastest ray tracer (5s for
under 100LOC!) and the resulting program is still slower.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/16/2007 3:56:23 PM
Joachim Durchholz wrote:
> Paul Rubin schrieb:
>> Joachim Durchholz <jo@durchholz.org> writes:
>>> Marshalling forces evaluation of the values marshalled.
>> 
>> I don't see any fundamental reason why this is necessary.  Nothing
>> wrong with writing out unevaluated thunks in some fashion.
> 
> The problem is that you somehow must make sure that the thunks will find
> the same functions they're referring to when they were marshalled out.
> 
> This either implies some means of identifying the functions across
> process boundaries (e.g. via code signing), or a way to send the
> functions' code together with the data (e.g. byte code plus a way to
> avoid having to resend the same function over and over).

This is exactly the same problem that you encounter in OCaml and F#, albeit
much less severe there because you can easily avoid function values in
eager code.

> AFAIK it's on the to-do list of more than one Haskell compiler but not
> yet done.

Yes. I understood marshalling to be a weak point of Haskell rather than a
strong point. Despite its simplicity, I think OCaml's approach is much more
useful than Haskell and F#.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/16/2007 3:59:24 PM
Jon Harrop schrieb:
> Joachim Durchholz wrote:
>> Paul Rubin schrieb:
>>> Joachim Durchholz <jo@durchholz.org> writes:
>>>> Marshalling forces evaluation of the values marshalled.
>>> I don't see any fundamental reason why this is necessary.  Nothing
>>> wrong with writing out unevaluated thunks in some fashion.
>> The problem is that you somehow must make sure that the thunks will find
>> the same functions they're referring to when they were marshalled out.
>>
>> This either implies some means of identifying the functions across
>> process boundaries (e.g. via code signing), or a way to send the
>> functions' code together with the data (e.g. byte code plus a way to
>> avoid having to resend the same function over and over).
> 
> This is exactly the same problem that you encounter in OCaml and F#,

OCaml doesn't really solve it, since thunks are marshalled as function 
addresses.
Recompile the program and all data that was marshalled to disk will 
segfault on you...

Don't know what F# does.

> albeit much less severe there because you can easily avoid function
> values in eager code.

That's true.
For a nonstrict language, you'd have to take care about the consequences 
of forcing all subvalues. You can certainly program around the problems, 
but there's too much abstraction breakage going on for my taste.

>> AFAIK it's on the to-do list of more than one Haskell compiler but not
>> yet done.
> 
> Yes. I understood marshalling to be a weak point of Haskell rather than a
> strong point. Despite its simplicity, I think OCaml's approach is much more
> useful than Haskell and F#.

Hm. Marshalling functions as addresses is a good stopgap measure, but 
nothing that I'd consider "useful" in the long term.

Regards,
Jo
0
jo427 (1164)
12/16/2007 4:48:50 PM
Joachim Durchholz wrote:
> Jon Harrop schrieb:
>> This is exactly the same problem that you encounter in OCaml and F#,
> 
> OCaml doesn't really solve it, since thunks are marshalled as function
> addresses.

Actually OCaml's default is to reject function values at run time when
marshalling, IIRC. You can obtain the behaviour you cite but only if you
specifically ask for it.

> Recompile the program and all data that was marshalled to disk will
> segfault on you...
> 
> Don't know what F# does.

Similar to OCaml but F# is currently very statically typed: repeating a type
definition gives a new type. So saving your data from a top-level where you
defined some types works perfectly in OCaml (and is very useful) but you've
lost all of your data in F# because you can never recover that type.

That really sucks in F# if you're using it for interactive technical
computing: you must save all of the types that you define into DLLs and
load them rather than specifying them interactively. Hopefully they'll fix
this.

>> albeit much less severe there because you can easily avoid function
>> values in eager code.
> 
> That's true.
> For a nonstrict language, you'd have to take care about the consequences
> of forcing all subvalues. You can certainly program around the problems,
> but there's too much abstraction breakage going on for my taste.
> 
>>> AFAIK it's on the to-do list of more than one Haskell compiler but not
>>> yet done.
>> 
>> Yes. I understood marshalling to be a weak point of Haskell rather than a
>> strong point. Despite its simplicity, I think OCaml's approach is much
>> more useful than Haskell and F#.
> 
> Hm. Marshalling functions as addresses is a good stopgap measure, but
> nothing that I'd consider "useful" in the long term.

Yes. I think the key point is that there are more serious concerns with
OCaml.

I don't know about Erlang but I think it is rather sad that none of the main
FPLs provide decent support for marshalling. :-(

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/16/2007 8:28:53 PM
On Sun, 16 Dec 2007, Jon Harrop wrote:

> Ben Franksen wrote:
> > Indeed. Having a good parser combinator library at hand is /far/ more 
> > useful (in the long run) than having regexes built in. Sure, some 
> > simple things are a bit more verbose with a parser combinator library. 
> > OTOH difficult things become a lot easier and /much/ more correct and 
> > reliable.
> 
> My impression is that parser combinators are a nice abstraction but 
> often far too inefficient for real programs. After all, OCaml has a 
> custom regexp interpreter written in C.
> 

This is something of an oversimplification - there are many ways to build 
a parsing combinator library, with widely varying implications for performance. 
Historically, designs have tended towards greater parsing power rather 
than speed.

-- 
flippa@flippac.org

There is no magic bullet. There are, however, plenty of bullets that
magically home in on feet when not used in exactly the right circumstances.
0
flippa (196)
12/16/2007 8:42:14 PM
Jon Harrop wrote:
> Ben Franksen wrote:
>> Jon Harrop wrote:
>>> Having users proves that software works. For example, I do not consider
>>> it a coincidence that MLton crashes all the time and has few users but
>>> OCaml rarely crashes and has many more users.
>> 
>> So when M$-Windows did crash all the time (not to speak of their 'office'
>> applications) it was a sign of not many users using this crap? I think
>> not.
>> 
>> Do you really believe in that line of reasoning?
> 
> Absence of evidence is not evidence of absence. The fact that nobody files
> bug reports against SML compilers is not a testament to their robustness.

Very true but completely irrelevant. I disproved your claim that 'A proves
B' by counterexample, presenting a case where B did not hold, but A did.

Jon, your rhetoric is excellent, but your logic is weak. You may convince
business partners with this kind of rhetoric, but in this newsgroup people
are not so easily fooled; /especially/ the sort of people you are trying to
convince. This is hopeless, believe me.

Cheers
Ben
0
12/16/2007 11:10:20 PM
Jon Harrop wrote:
> Ben Franksen wrote:
>> Jon Harrop wrote:
>>> . Operator overloading.
>> 
>> Yes. And functions, too (via type classes).
> 
> I'd like to know if the overloads have been resolved statically.

Why?

>>> . High-performance FFI ...
>> 
>> Yes.
> 
> Any evidence of that? 

XMonad.

> I'd be interested to see, for example,
> high-performance visualizations using dynamic OpenGL vertex buffer objects
> (VBOs).

Sorry, I don't know much about 3D graphics.

>>> . Type safe marshalling.
>> 
>> Yes.
> 
> If you define some types, create some data structures and marshal them to
> a file from an interactive session. Then start a new interactive session,
> redefine the types and reload the data, does that work?

I am pretty sure the existing serialization libs do exactly that.

(Wouldn't they be completely useless, otherwise?)

>>> . Free polymorphism.
>> 
>> Of course, polymorphism doesn't cost anything extra ;-)
>> 
>> Seriously: I am not familiar with this term. Could you explain?
> 
> Is there a run-time cost associated with polymorphism?

Depends on the implementation. And what you compare. Can you give an
example?

>>> . No weird boxing problems, e.g. having to add "+. 0.0" to the end of
>>> numeric functions to improve performance.
>> 
>> Never heard of that one from Haskell users. Sometimes you'll have to make
>> things stricter than the compiler already guesses. There is a clean way
>> to do so (use 'seq', or even better: use a strict function; or use the
>> new bang patterns).
> 
> This is actually the killer reason why I'd never use Haskell for my work
> (at least not with my current knowledge). Optimizing Haskell programs is
> unpredictably and arbtrarily difficult. In contrast, optimizing OCaml is
> basically very easy: you just write C-like OCaml with a couple of tricks.

By all means, go ahead and write C-like Ocaml if that is what you prefer to
do.

>>> . Much better performance on numeric code that exploits abstractions.
>> 
>> Supposedly not a strong point of Haskell.
> 
> Lennart Augustsson's new Haskell implementations of the ray tracer are
> doing ok now. However, nobody can explain why his are fast but Phil's were
> 2x slower. This is exactly what scares me the most about Haskell.

I can understand that.

>>> . DLLs.
>> 
>> Don't know about that one.
> 
> I think this is another serious problem with Haskell (and OCaml).

More precisely: it is a problem of the existing implementations. It is
definitely not a problem of the languages.

>>> . Native-code performance from the REPL.
>> 
>> No REPL for Haskell,
> 
> Isn't Hugs a REPL?

Not like you are used to in the ML family or Lisp. You can't cut-n-paste
definitions into Hugs or Ghci. Top-level definitions must be preceded
by 'let' and can only span one line. There are more issues. The usual way
to use them is to write stuff to a file, load it, evaluate some expressions
(or query types, whatever), make changes in the file, then reload.

>> unfortunately, though ghci (which is /not/ a full
>> REPL) compiles to native code AFAIK (w/o optimisation, of course).
> 
> Great.
> 
>>> . Better REPL: e.g. saving and loading of state.
>>> 
>>> . Lots more useful functionality in the stdlib and none of the cruft.
>> 
>> This is a bit too unspecific to comment on.
> 
> Things like vectors and matrices, FFTs and so on.

AFAIK, there are not yet very many libraries for this kind of stuff. But I
may be wrong. Contributions are always welcome, btw.

>>> . Commerce friendly, i.e. no brittle interfaces making it practically
>>> impossible to sell libraries written in OCaml.
>> 
>> If you mean 'closed source', I fear GHC has similar problems: There is no
>> ABI that is stable between compiler versions.
> 
> :-(
> 
>>> Advantages over F#:
>>> 
>>> . Platform independence (many scientists and engineers don't run
>>> Windows).
>>> 
>>> . Faster symbolics thanks to a custom GC that isn't optimized for C#
>>> programs.
>>> 
>>> . No .NET baggage, e.g. different types for closures and raw functions.
>> 
>> Yes to these, since not .NET dependent.
> 
> How fast is Haskell at symbolics?

(Sorry, answer was meant only to first and third cited points.)

What do you mean with symbolics?

>>> . Better support for structural types, e.g. .NET has trouble reloading
>>> marshalled data from a different REPL instantiation.
>>> 
>>> Generally, I want the stdlib to include support for modern graphics and
>>> GUI programming, e.g. OpenGL and GTK+/Qt.
>> 
>> Haskell has gtk2hs. Very nice, although I miss a more functional feeling
>> layer on top of it.
> 
> What are the major applications written using gtk2hs? (there are at least
> thousands of people using GTK applications written in OCaml)

Can't tell. I don't do usage surveys. I try stuff out and use it if I like
it (or rather, in this case, can live with it.)

>>>> Btw: I have never become accustomed to printf in C.
>>> 
>>> Printf is very useful in OCaml and F#.
>> 
>> Haskell has printf, but the format string is not used in type checking
>> (it is built on Data.Dynamic). It is not a good idea to build an ad-hoc
>> feature into the language (i.e. type checking (based on) the content of a
>> string) only to support printf.
> 
> If you want to get users, it is a good idea to provide easy-to-use
> printing facilities like printf.

Printf easy to use? Opinions differ, I guess.
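As a side illustration (this is Python, not Haskell, but its printf-style `%` operator behaves the way the posts above describe for Haskell's printf: the format string is not used in static type checking):

```python
# Format strings here are ordinary strings: a mismatch between the
# format and its arguments is only detected when the call runs.
ok = "n=%d" % 42
assert ok == "n=42"

try:
    "n=%d" % "forty-two"   # wrong argument type for %d
    mismatch_caught = False
except TypeError:          # raised at run time, not compile time
    mismatch_caught = True
assert mismatch_caught
```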

>> BTW, what does Ocaml do if the format string is the result of a function
>> call?

(repeating question)

Cheers
Ben
0
12/16/2007 11:49:50 PM
Ben Franksen <ben.franksen@online.de> wrote:
> Jon:
[...]
> > Absence of evidence is not evidence of absence. The fact that nobody files
> > bug reports against SML compilers is not a testament to their robustness.

> Very true but completely irrelevant. [...]

Oh, come on!  Jon is just bullshitting you:

  http://mlton.org/Bugs20070826
  http://mlton.org/Bugs20051202
  http://mlton.org/Bugs20041109

  http://mlton.org/cgi-bin/viewsvn.cgi/mlton/trunk/doc/changelog?view=auto
  (search for "thanks")

During the time I've been using MLton, all reported and verified bugs have
been fixed promptly.  If anyone wishes to look for evidence, it shouldn't
take more than a minute from anyone with a fast internet connection to
find bug reports and responses from developers from MLton's mailing lists:

  http://mlton.org/pipermail/mlton/
  http://mlton.org/pipermail/mlton-user/
 [http://mlton.org/pipermail/mlton-commit/]

If you can find a report of a verifiable bug, older than two weeks, from
MLton's mailing lists that wasn't addressed at all or was left unfixed,
let me know, and I'll take a look.

Let's face it.  Perhaps it is time we all stop feeding Jon the Troll
(http://en.wikipedia.org/wiki/Troll_%28Internet%29).

-Vesa Karvonen
0
12/17/2007 12:00:17 AM
Joachim Durchholz wrote:
> Ben Franksen schrieb:
>>> . Type safe marshalling.
>> 
>> Yes.
> 
> Not really.
> 
> Marshalling forces evaluation of the values marshalled.

But it is still type safe. That was the question.

> IOW you may get nontermination (if the data structure contains infinite
> substructures). You will lose the advantages of having a non-strict
> languages.

If you just print values to the console they are evaluated, too. People do
that all the time and still find non-strictness in their programs useful.

>>> . No weird boxing problems, e.g. having to add "+. 0.0" to the end of
>>> numeric functions to improve performance.
>> 
>> Never heard of that one from Haskell users. Sometimes you'll have to make
>> things stricter than the compiler already guesses. There is a clean way
>> to do so (use 'seq', or even better: use a strict function; or use the
>> new bang patterns).
> 
> 'seq' isn't considered easy to use though.
> (Dunno about strict functions and bang patterns. Pattern application is
> strict in Haskell, so I'm not sure how this relates to strictness.)

You probably meant 'pattern matching'. It is not always strict, notably if
the pattern is a variable. Example:

  id :: a -> a
  id x = x

is lazy. This version

  id' !x = x

would be strict. What is strict by default is pattern matching on a /data
constructor/, but you can use a lazy (a.k.a. irrefutable) pattern to
restore non-strictness, as in

  iKnowThisIsJust ~(Just x) = someFunction x
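The strict/lazy distinction can be mimicked outside Haskell with explicit thunks. A hypothetical Python sketch (Haskell gets this behaviour natively; `bottom` here plays the role of Haskell's `undefined`):

```python
def bottom():
    raise RuntimeError("undefined")   # stands in for Haskell's 'undefined'

def lazy_id(thunk):
    return thunk                      # like: id x = x   (never forces x)

def strict_id(thunk):
    thunk()                           # like: id' !x = x (forces x first)
    return thunk

assert lazy_id(bottom) is bottom      # fine: the thunk is never forced

try:
    strict_id(bottom)                 # forcing 'undefined' blows up
    forced_ok = True
except RuntimeError:
    forced_ok = False
assert not forced_ok
```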

Cheers
Ben
0
12/17/2007 12:09:40 AM
Joachim Durchholz wrote:
> Paul Rubin schrieb:
>> Joachim Durchholz <jo@durchholz.org> writes:
>>> Marshalling forces evaluation of the values marshalled.
>> 
>> I don't see why any fundamental reason this is necessary.  Nothing
>> wrong with writing out unevaluated thunks in some fashion.
> 
> The problem is that you somehow must make sure that the thunks will find
> the same functions they're referring to when they were marshalled out.
> 
> This either implies some means of identifying the functions across
> process boundaries (e.g. via code signing), or a way to send the
> functions' code together with the data (e.g. byte code plus a way to
> avoid having to resend the same function over and over).
> 
> Either way, there's no fundamental reason against doing it but it's
> quite some special-purpose infrastructure that somebody has to find
> interest in, design, implement, test, and maintain.
> 
> AFAIK it's on the to-do list of more than one Haskell compiler but not
> yet done.
> (For me, it has been the one missing link to use Haskell in production,
> since with it, I'd be able to argue, "Look, we won't ever need to
> interface with Mysql with that!")

I don't understand. Do you want to store /any/ kind of value in MySQL, as
BLOBs or what?

Note: even Mnesia, the Erlang database, can store only data terms, not
functions.
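For comparison (a Python illustration, nothing specific to Erlang or the Haskell proposals above): Python's pickle makes the same split between data and code. Functions are serialized by name, a reference the receiving side must resolve, so anonymous functions cannot be pickled at all:

```python
import pickle

# A named, importable function pickles fine: what is stored is just
# the reference "builtins.len", not the function's code.
data = pickle.dumps(len)
assert pickle.loads(data) is len

# An anonymous function has no stable name to refer to, so it cannot
# be pickled at all.
try:
    pickle.dumps(lambda x: 2 * x)
    lambda_pickled = True
except (pickle.PicklingError, AttributeError):
    lambda_pickled = False
assert not lambda_pickled
```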

Cheers
Ben
0
12/17/2007 12:14:29 AM
Jon Harrop wrote:
> Joachim Durchholz wrote:
>> Jon Harrop schrieb:
>>> This is exactly the same problem that you encounter in OCaml and F#,
>> 
>> OCaml doesn't really solve it, since thunks are marshalled as function
>> addresses.
> 
> Actually OCaml's default is to reject function values at run time when
> marshalling, IIRC. You can obtain the behaviour you cite but only if you
> specifically ask for it.

Same in Haskell, only you get the error at compile time ("No instance
Serializable (Int -> Double) ...").

You will get an error at run-time (or non-termination) if you try to
serialize an infinite value.

>>>> AFAIK it's on the to-do list of more than one Haskell compiler but not
>>>> yet done.
>>> 
>>> Yes. I understood marshalling to be a weak point of Haskell rather than
>>> a strong point. Despite its simplicity, I think OCaml's approach is much
>>> more useful than Haskell and F#.
>> 
>> Hm. Marshalling function as addresses is a good stopgap measure, but
>> nothing that I'd consider "useful" in the long term.
> 
> Yes. I think the key point is that there are more serious concerns with
> OCaml.
> 
> I don't know about Erlang but I think it is rather sad that none of the
> main FPLs provide decent support for marshalling. :-(

From what I've heard easy marshalling is one of the main strengths of
Erlang.

Cheers
Ben
0
12/17/2007 12:20:11 AM
Vesa Karvonen <vesa.karvonen@cs.helsinki.fi> wrote:
[...]
> val D = Int.toString
> fun upto i j f = if i < j then (f i = () ; upto (i+1) j f) else ()
> fun printlns ss = (app print ss ; print "\n")

> fun fib 0 = 0
>   | fib 1 = 1
>   | fib n = fib (n-1) + fib (n-2)

> val () = upto 0 36 (fn i => printlns ["n=", D i, " => ", D (fib i)])

> Of the above, printlns is already provided by my Extended Basis library.
> The upto function and D are on their way there. [...]

The D, and more, is now provided by my Extended Basis library:

  http://mlton.org/cgi-bin/viewsvn.cgi/mltonlib/trunk/com/ssh/extended-basis/unstable/public/text/cvt.sig?view=auto

-Vesa Karvonen
0
12/17/2007 1:01:02 AM
Philippa Cowderoy wrote:
> On Sun, 16 Dec 2007, Jon Harrop wrote:
> 
>> Ben Franksen wrote:
>> > Indeed. Having a good parser combinator library at hand is /far/ more
>> > useful (in the long run) than having regexes built in. Sure, some
>> > simple things are a bit more verbose with a parser combinator library.
>> > OTOH difficult things become a lot easier and /much/ more correct and
>> > reliable.
>> 
>> My impression is that parser combinators are a nice abstraction but
>> often far too inefficient for real programs. After all, OCaml has a
>> custom regexp interpreter written in C.
> 
> This is something of an oversimplification - there are many ways to build
> a parsing combinator library, with ranging implications for performance.
> Historically, designs have tended towards greater parsing power rather
> than speed.

Yes. An exception is Parsec which compromises power for speed in a small but
important area, namely the choice operator, which in Parsec does /not/
backtrack by default: an alternative is tried only if the choice that
failed did so without consuming any input. You can restore backtracking
with the 'try' combinator, which incurs the usual cost (but only where you
need the additional power).
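To make the semantics concrete, here is a hypothetical miniature combinator set in Python (not Parsec itself, just a sketch of the non-backtracking choice it implements): a parser maps (string, position) to (value, new position), or raises an error that records whether any input was consumed.

```python
class ParseError(Exception):
    def __init__(self, pos, consumed):
        self.pos, self.consumed = pos, consumed

def char(c):
    def p(s, i):
        if i < len(s) and s[i] == c:
            return c, i + 1
        raise ParseError(i, consumed=False)
    return p

def seq(p1, p2):
    def p(s, i):
        _, j = p1(s, i)
        try:
            return p2(s, j)
        except ParseError as e:
            # a failure after the first parser counts as consumed input
            raise ParseError(e.pos, consumed=True)
    return p

def choice(p1, p2):
    def p(s, i):
        try:
            return p1(s, i)
        except ParseError as e:
            if e.consumed:      # Parsec-style: no backtracking once the
                raise           # first alternative has consumed input
            return p2(s, i)
    return p

def try_(p1):                   # restore backtracking, like Parsec's 'try'
    def p(s, i):
        try:
            return p1(s, i)
        except ParseError as e:
            raise ParseError(e.pos, consumed=False)
    return p
```

With this, `choice(seq(char('a'), char('b')), seq(char('a'), char('c')))` fails on "ac" because the first alternative already consumed the 'a'; wrapping the first alternative in `try_` restores the backtracking behaviour (and its cost).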

I have written a number of parsers with Parsec and IME it has always been
fast enough. I never even resorted to using a separate lexer but parsed
character strings directly. Although it is certainly not as fast as a
hand-written C parser, or even a yacc generated one.

W.r.t. lexing, there is an interesting lexer combinator lib (regex based)
that lazily constructs a DFA at runtime and thus combines low start-up
costs, simplicity, and speed. It is described in a very nice and readable
paper, here: http://citeseer.ist.psu.edu/chakravarty99lazy.html

Cheers
Ben
0
12/17/2007 1:13:13 AM
Vesa Karvonen wrote:
> Ben Franksen <ben.franksen@online.de> wrote:
>> Jon:
> [...]
>> > Absence of evidence is not evidence of absence. The fact that nobody
>> > files bug reports against SML compilers is not a testament to their
>> > robustness.
> 
>> Very true but completely irrelevant. [...]
> 
> Oh, come on!  Jon is just bullshiting you:

I am sorry if I have left the impression that I agree with Jon's statement
about SML compilers. I should have been more careful. In fact I know very
little about SML compilers. Nevertheless I wouldn't believe for a moment
without further evidence that "nobody files bug reports against SML
compilers".

My "very true" was meant solely in response to the preceding "Absence of
evidence is not evidence of absence" as a general statement. It applies to
Jon's last statement in particular.

The fact is: many people are regularly using malfunctioning software. Thus
the claim that "having many users proves that software works" is false. It
is as false as claiming "having many voters proves that a politician is
competent".

Cheers
Ben
0
12/17/2007 2:51:40 AM
On Dec 16, 4:12 pm, Vesa Karvonen wrote:
> I meant actual concrete examples.  More specifically, non-trivial code
> snippets.

Ah, well, if you insist. Here is some Python code I wrote Friday at work.
The goal is to explore our production database; at the end this is
intended to become a Web application; you may see the code as a very
quick and very dirty prototype:

REQUESTED_NOT_COVERED_PER_MONTH = '''
select clientcode from requested_message
where date_part('month', firstrequest)=%(month)s
and date_part('year', firstrequest)=2007 -- year hard-coded for the moment
except
select clientcode from covered_product
'''

REQUESTED_PER_MONTH = '''
select clientcode from requested_message
where date_part('month', firstrequest)=%(month)s
and date_part('year', firstrequest)=2007
'''

def requested_not_covered(month):
    req = len(rcare(REQUESTED_PER_MONTH, dict(month=month)))
    nc = len(rcare(REQUESTED_NOT_COVERED_PER_MONTH, dict(month=month)))
    return 'Month: %d NotCovered/Requested: %d/%d [%2d%%]' % (
        month, nc, req, float(nc)/req*100)

if __name__ == '__main__':
    for i in range(6, 12): # statistics for the second half of 2007
        print requested_not_covered(i+1)

I did not post this before since the real issue here is the database
interaction, not printf. The database is PostgreSQL in this case, but we
also use MS SQL, MySQL, SQLite and the Berkeley DB. The 'rcare' function
you see in requested_not_covered is a library I wrote (yes, sometimes I do
write custom libraries) wrapping the various low level database drivers;
it is just performing a SELECT with a prepared query, passing the
arguments as a Python dictionary.

> We definitely completely disagree here.  I have no trouble writing my own
> libraries in any language.

Me too, but especially when learning a new language I make an effort to
learn the standard idioms of the language. I do not want to implement my
own wrappers that nobody except me will understand.

> Of course, my library
> isn't the Basis library, but there is very little I can do to change that
> overnight.

Nobody expects you to make miracles ;)

> In fact, one of the purposes of my Extended Basis library is,
> like in Boost for C++, to establish "existing practice" and provide
> reference implementations of stuff that might later be added to the Basis
> library.  My library is supported and can be used with several SML
> compilers, including MLton, SML/NJ, Poly/ML and MLKit.  I'm happy to port
> it to other compilers as long as they don't have bugs that make it
> unnecessarily difficult (e.g. Alice ML currently has a bug with sharing
> constraints that put my porting effort to a hold for the moment).

This is a laudable goal and I respect that.

> The insecurity I'm referring to is that you are afraid that people would
> get a bad impression of SML, because you feel bad about your example code.

LOL! Of course I am a beginner and before publishing any paper on SML I
will ask experienced programmers for review, but I assure you that I do
not feel insecure at all. I make mistakes due to lack of experience (for
instance, some posts ago I missed String.map) but that's natural and it
does not bother me in the least. In particular, for the Fibonacci example
I wrote what I wrote on purpose:

1. I put some redundant parentheses because I felt it was clearer for
   beginners, ignoring the MLton Wiki recommendations *on purpose* [AFAICT
   the MLton Wiki is not the bible];

2. I used List.tabulate instead of writing my own for loop since
   List.tabulate is in the standard library and the for loop is not.

>  I guess that you want SML to appear more
> attractive in toy examples and benchmarks where the main issue is that
> everything used in such examples already needs to be ready in some
> library.  (This happens to be another pet peeve of mine as you might
> already be aware.)  So, how do you help with that?  By implementing the
> utilities and packaging them as conveniently usable libraries (like I'm
> doing on a daily basis) or by contributing them as additions to existing
> libraries.

Here we definitely agree. So let me ask this: is there a web page (as
opposed to an SVN repository) where the libraries you maintain are
documented and where one could download a tarball with them?
I assure you that from a psychological point of view people feel much
better when downloading a tarball than when performing an SVN checkout.

> AFAIK, you have not contributed anything to any SML library.  If you have
> something to contribute, please do.  I welcome contributions to the
> libraries I maintain.

I am not competent enough to contribute (yet), but the situation may
change, so have patience ...

       Michele Simionato
0
12/17/2007 5:43:17 AM
Ben Franksen schrieb:
>> If you define some types, create some data structures and marshal them to
>> a file from an interactive session. Then start a new interactive session,
>> redefine the types and reload the data, does that work?
> 
> I am pretty sure the existing serialization libs do exactly that.
> 
> (Wouldn't they be completely useless, otherwise?)

Even without the "redefine the type" bit: yes.
Without the "start a new interactive session" bit: probably. OCaml had 
exactly this type of serialization. (I could imagine a few uses for such 
a mechanism, but I agree it would be quite limited.)

>>>> . DLLs.
>>> Don't know about that one.
>> I think this is another serious problem with Haskell (and OCaml).
> 
> More precisely: it is a problem of the existing implementations. It is
> definitely not a problem of the languages.

Bunch-of-functions DLLs won't go well with nonstrict languages. 
Efficient implementations of such languages don't do the usual 
call-return sequence, and bunch-of-functions DLLs are built around that 
model.

>> If you want to get users, it is a good idea to provide easy-to-use
>> printing facilities like printf.
> 
> Printf easy to use? Opinions differ, I guess.

I found it error-prone, too.

Regards,
Jo
0
jo427 (1164)
12/17/2007 9:41:32 AM
Ben Franksen schrieb:
> From what I've heard easy marshalling is one of the main strengths of
> Erlang.

It doesn't marshall function values at all.
Erlang is strict like OCaml, so this is less of a problem than in 
non-strict languages. (Though I wouldn't want to try to serialize an 
iterator in Erlang...)

Erlang is a bit atypical for an FPL because Erlang programmers don't 
usually use very high-order constructs. That's probably because Erlang 
is used in contexts where it's more important to get the job done with 
the mindset a C programmer brings into the project, rather than to try 
and see how far one can drive abstraction.

Regards,
Jo
0
jo427 (1164)
12/17/2007 9:44:41 AM
Ben Franksen schrieb:
> Joachim Durchholz wrote:
>> (For me, it has been the one missing link to use Haskell in production,
>> since with it, I'd be able to argue, "Look, we won't ever need to
>> interface with Mysql with that!")
> 
> I don't understand. Do you want to store /any/ kind of value in MySQL, as
> BLOBs or what?

I want to get rid of the impedance mismatch between live data structures 
and the database.

This means:
* not using a database at all,
* storing function and data values without distinction.

Regards,
Jo
0
jo427 (1164)
12/17/2007 9:55:37 AM
Ben Franksen schrieb:
> Joachim Durchholz wrote:
>> Ben Franksen schrieb:
>>>> . Type safe marshalling.
>>> Yes.
>> Not really.
>>
>> Marshalling forces evaluation of the values marshalled.
> 
> But it is still type safe. That was the question.

OK, I was taking the requirement a bit beyond what was written. But I 
don't care about type safety if marshalling a value risks evaluating 
some infinite data structure hidden deep inside an abstract data type.

>> IOW you may get nontermination (if the data structure contains infinite
>> substructures). You will lose the advantages of having a non-strict
>> languages.
> 
> If you just print values to the console they are evaluated, too.

Yes, but printing to the console is just a debugging aid, and you see it 
immediately if there's a problem.
Marshalling is usually part of the application logic and must not run 
into an endless loop.

>>>> . No weird boxing problems, e.g. having to add "+. 0.0" to the end of
>>>> numeric functions to improve performance.
>>> Never heard of that one from Haskell users. Sometimes you'll have to make
>>> things stricter than the compiler already guesses. There is a clean way
>>> to do so (use 'seq', or even better: use a strict function; or use the
>>> new bang patterns).
>> 'seq' isn't considered easy to use though.
>> (Dunno about strict functions and bang patterns. Pattern application is
>> strict in Haskell, so I'm not sure how this relates to strictness.)
> 
> You probably meant 'pattern matching'. It is not always strict, notably if
> the pattern is a variable. Example:
> 
>   id :: a -> a
>   id x = x
> 
> is lazy.

It's lazy in that the inner structure of the argument isn't explored.

A pattern match is strict in Haskell to the degree that constructors are 
used in it.

 > This version
> 
>   id' !x = x
> 
> would be strict.

Strange. Why restrict the force operation to pattern matching, instead 
of simply making it part of the expression syntax? That would then be

   id' x = !x

 > What is strict by default is pattern matching on a /data
> constructor/, but you can use a lazy (a.k.a. irrefutable) pattern to
> restore non-strictness, as in
> 
>   iKnowThisIsJust ~(Just x) = someFunction x

Right.

Regards,
Jo
0
jo427 (1164)
12/17/2007 10:01:34 AM
Joachim Durchholz wrote:
> Ben Franksen schrieb:
>> Joachim Durchholz wrote:
>>> Ben Franksen schrieb:
>>>>> . Type safe marshalling.
>>>> Yes.
>>> Not really.
>>>
>>> Marshalling forces evaluation of the values marshalled.
>> 
>> But it is still type safe. That was the question.
> 
> OK, I was taking the requirement a bit beyond what was written.

Can I just clarify: that wasn't a requirement but a list of deficiencies in
OCaml that could be fixed. Haskell fixes many of OCaml's deficiencies but
also introduces many more.

>> If you just print values to teh console they are evaluated, too.
> 
> Yes, but printing to the console is just a debugging aid, and you see it
> immediately if there's a problem.
> Marshalling is usually part of the application logic and must not run
> into an endless loop.

Exactly. In contrast, OCaml's marshalling already handles cycles...
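(As an aside, cycle handling is common in serializers that memoize object identity during the traversal; Python's pickle, for instance, reconstructs cyclic values too. A tiny illustration, not OCaml:)

```python
import pickle

a = [1, 2]
a.append(a)      # the list now contains itself

b = pickle.loads(pickle.dumps(a))
assert b[0] == 1 and b[1] == 2
assert b[2] is b                 # the cycle survives the round-trip
```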

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/17/2007 1:47:41 PM
Ben Franksen wrote:
> From what I've heard easy marshalling is one of the main strengths of
> Erlang.

Interesting. :-)

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/17/2007 1:48:41 PM
Ben Franksen wrote:
> Jon Harrop wrote:
>> Ben Franksen wrote:
>>> Jon Harrop wrote:
>>>> Having users proves that software works. For example, I do not consider
>>>> it a coincidence that MLton crashes all the time and has few users but
>>>> OCaml rarely crashes and has many more users.
>>> 
>>> So when M$-Windows did crash all the time (not to speak of their
>>> 'office' applications) it was a sign of not many users using this crap?
>>> I think not.
>>> 
>>> Do you really believe in that line of reasoning?
>> 
>> Absence of evidence is not evidence of absence. The fact that nobody
>> files bug reports against SML compilers is not a testament to their
>> robustness.

This was me restating the point that I failed to convey the first time
around.

> Very true...

Yes, I know.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/17/2007 2:15:00 PM
Ben Franksen wrote:
> Jon Harrop wrote:
>> I'd like to know if the overloads have been resolved statically.
> 
> Why?

Predictable performance.

>>>> . High-performance FFI ...
>>> 
>>> Yes.
>> 
>> Any evidence of that?
> 
> XMonad.

That seems to be a tiling window manager. How is that computationally
intensive?

>> If you define some types, create some data structures and marshal them to
>> a file from an interactive session. Then start a new interactive session,
>> redefine the types and reload the data, does that work?
> 
> I am pretty sure the existing serialization libs do exactly that.
> 
> (Wouldn't they be completely useless, otherwise?)

Yes, and F# currently does that.

>>>> . Free polymorphism.
>>> 
>>> Of course, polymorphism doesn't cost anything extra ;-)
>>> 
>>> Seriously: I am not familiar with this term. Could you explain?
>> 
>> Is there a run-time cost associated with polymorphism?
> 
> Depends on the implementation. And what you compare. Can you give an
> example?

Will:

  Array.fold_left (+.) 0.

be optimized to run as fast as a C loop. OCaml has two problems: +. is not
inlined and the Array.fold_left function is polymorphic which incurs a
significant (2x) slowdown. F# inherits generics from .NET that fix the
latter problem but it still doesn't inline the function argument.

>>>> . DLLs.
>>> 
>>> Don't know about that one.
>> 
>> I think this is another serious problem with Haskell (and OCaml).
> 
> More precisely: it is a problem of the existing implementations. It is
> definitely not a problem of the languages.

I don't think that is true in any useful sense. The languages impose great
demands that are totally uncatered for in this respect.

>>>> . Native-code performance from the REPL.
>>> 
>>> No REPL for Haskell,
>> 
>> Isn't Hugs a REPL?
> 
> Not like you are used to in the ML family or Lisp. You can't cut-n-paste
> definitions into Hugs or Ghci. Top-level definitions must be preceded
> with 'let' and can only span one line. There are more issues. The usual
> way to use them is to write stuff to a file, load it, evaluate some
> expressions (or query types, whatever), make changes in the file, then
> reload.

That explains why I found it so hard to learn. :-)

>>>> . Lots more useful functionality in the stdlib and none of the cruft.
>>> 
>>> This is a bit too unspecific to comment on.
>> 
>> Things like vectors and matrices, FFTs and so on.
> 
> AFAIK, there are not yet many libraries for this kind of stuff. But I
> may be wrong. Contributions are always welcome, btw.

Much of it needs to be in the language or at least have enough support from
the language. For example, you need an unboxed complex number
representation to achieve decent performance. F# makes that possible (and
even provides it!) but none of the other FPLs can.

>>>> . Platform independence (many scientists and engineers don't run
>>>> Windows).
>>>> 
>>>> . Faster symbolics thanks to a custom GC that isn't optimized for C#
>>>> programs.
>>>> 
>>>> . No .NET baggage, e.g. different types for closures and raw functions.
>>> 
>>> Yes to these, since not .NET dependent.
>> 
>> How fast is Haskell at symbolics?
> 
> (Sorry, answer was meant only to first and third cited points.)
> 
> What do you mean by symbolics?

Compilers, interpreters, computer algebra and so on.

>>> BTW, what does Ocaml do if the format string is the result of a function
>>> call?
> 
> (repeating question)

I don't know.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/17/2007 2:22:19 PM
Philippa Cowderoy wrote:
> This is something of an oversimplification - there are many ways to build
> a parsing combinator library, with ranging implications for performance.
> Historically, designs have tended towards greater parsing power rather
> than speed.

Is it theoretically possible to write a combinator library that is as
efficient as, for example, something generated by lex?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/17/2007 2:23:15 PM
On Mon, 17 Dec 2007, Ben Franksen wrote:

> Philippa Cowderoy wrote:
> > This is something of an oversimplification - there are many ways to build
> > a parsing combinator library, with ranging implications for performance.
> > Historically, designs have tended towards greater parsing power rather
> > than speed.
> 
> Yes. An exception is Parsec which compromises power for speed in a small 
> but important area, namely the choice operator, which in Parsec does 
> /not/ backtrack by default.

This doesn't compromise power for speed - it compromises ease of use for 
speed.

-- 
flippa@flippac.org

"I think you mean Philippa. I believe Phillipa is the one from an
alternate universe, who has a beard and programs in BASIC, using only
gotos for control flow." -- Anton van Straaten on Lambda the Ultimate
0
flippa (196)
12/17/2007 3:38:39 PM
On Mon, 17 Dec 2007, Jon Harrop wrote:

> Philippa Cowderoy wrote:
> > This is something of an oversimplification - there are many ways to build
> > a parsing combinator library, with ranging implications for performance.
> > Historically, designs have tended towards greater parsing power rather
> > than speed.
> 
> Is it theoretically possible to write a combinator library that is as
> efficient as, for example, something generated by lex?
> 

Give me staging or room to convince the compiler that it can inline away 
to its heart's content without termination problems and you bet. Said 
library may well not have any greater parsing power than lex, of course. 
Choice of algorithm matters. And I guess I'm really comparing to 
lex-targeting-the-same-language, but fair's fair, no?

To put it another way - at a cost of some minor ugliness I can do this 
with the right set of extensions enabled in GHC today. There are some 
important catches in how the lib can be used though - mainly that the lib 
mustn't be fed an infinite grammar (something that's done with libs like 
Parsec regularly because it doesn't ever evaluate the whole grammar).

-- 
flippa@flippac.org

Performance anxiety leads to premature optimisation
0
flippa (196)
12/17/2007 3:47:25 PM
In article <<Pine.WNT.4.64.0712171542030.440@sleek>>,
Philippa Cowderoy <flippa@flippac.org> wrote:
> On Mon, 17 Dec 2007, Jon Harrop wrote:
>> 
>> Is it theoretically possible to write a combinator library that is as
>> efficient as, for example, something generated by lex?
> 
> Give me staging or room to convince the compiler that it can inline
> away to its heart's content without termination problems and you
> bet. 

The big strike against parser combinators is not their micro-
efficiency, but their macro-inefficiency. It's far, far, too easy to
write parsers that go into backtracking hell, and take exponential
time on grammars that ought to be linear time.

However, their microefficiency is utterly unproblematic. The inlining
heuristic you need to get code that runs faster than lex is clear,
well-defined, and has no termination problems at all. However, you
additionally need a compiler smart enough to do some commuting
conversions (a compiler jock would call this "conditional constant
propagation"), again with a relatively straightforward heuristic.

Concretely, let's take this type for parsers:

  type 'a t = string -> int -> ('a * int) option

then the monadic bind, fail and alt functions will have the following
definitions:

  let bind f p = fun s i -> 
    match p s i with
    | None -> None
    | Some(a, j) -> f a s j

  let fail = fun s i -> None

  let alt p1 p2 = fun s i ->
    match p1 s i with
    | None -> p2 s i 
    | Some(a, j) -> Some(a, j)

Note that in bind, f and p appear only once, and in alt, p1 and p2
appear only once. This means inlining alt and bind can only increase
code size by a constant factor, no matter what expressions appear in 
those argument positions. So these functions can be aggressively 
inlined.

This means that a parser like

  let rec foo s i = 
     (alt (char 'b') (bind (fun _ -> foo) (char 'a'))) s i

should, at a minimum, be compiled to:

  let rec foo s i = 
    match char 'b' s i with
    | Some(c, j) -> Some(c, j)
    | None ->
       (match char 'a' s i with
        | None -> None
        | Some(_, k) -> foo s k)

This is okay, but not particularly wonderful. However, suppose char is
defined as:

  let char c s i = 
    if i < String.length s && s.[i] = c then 
      Some(c, i+1)
    else
      None

*and* your compiler is smart enough to perform the commuting
conversion:

  match (if e then e1 else e2) with ...

to 
 
  if e then
    match e1 with ...
  else
    match e2 with ...

Now let's inline char and do the commuting conversions: 

  let rec foo s i = 
    if i < String.length s && s.[i] = 'b' then 
      match Some('b', i+1) with
      | Some(c, j) -> Some(c, j)
      | None ->
         (if i < String.length s && s.[i] = 'a' then 
            match Some('a', i+1) with
            | None -> None
            | Some(_, k) -> foo s k  
          else 
            match None with
            | None -> None
            | Some(_, k) -> foo s k)
    else
      match None with
      | Some(c, j) -> Some(c, j)
      | None -> 
         (if i < String.length s && s.[i] = 'a' then 
            match Some('a', i+1) with
            | None -> None
            | Some(_, k) -> foo s k  
          else 
            match None with
            | None -> None
            | Some(_, k) -> foo s k)

This is big, but note that now ALL of those pattern matches can be
easily evaluated statically, yielding:

  let rec foo s i = 
    if i < String.length s && s.[i] = 'b' then 
      Some('b', i+1)
    else if i < String.length s && s.[i] = 'a' then
      foo s (i+1)
    else
      None
 
This is pretty darn close to optimal. The most notable remaining
overhead left is the redundant check of the string's length in the
second branch. This will probably run much faster than any popular
regexp engine, though, and will probably be a bit faster than lex,
since there aren't any table lookups.

-*-*-*-

To sum up, the inlining heuristic you need is:

    Inline a function whenever the arguments to it are all either
    simple (ie, variables or immediate constants) or occur at formal
    arguments which are affine (ie, the argument is used at most one
    time).

This is a fairly aggressive heuristic, but it should not cause any
exponential code expansions, so it's safe to use.

The commuting conversion heuristic you need is:

    If you have a nested case statement of the form 
 
       match (match t with p -> e ) with p' -> e'

    and all the "e" forms are constructor calls, then hoist the test 
    to rewrite it to:

       match t with 
       p -> (match e with p' -> e')

I think this should not cause exponential code expansion, if you do at
least another round of beta-reduction.

-- 
Neel R. Krishnaswami
neelk@cs.cmu.edu
0
neelk (298)
12/17/2007 6:34:24 PM
In article <Pine.WNT.4.64.0712162040350.440@sleek>,
Philippa Cowderoy  <flippa@flippac.org> wrote:
> This is something of an oversimplification - there are many ways to build 
> a parsing combinator library, with ranging implications for performance. 

For those who want to know more about the design space of parser
combinators, Peter Ljunglöf's licentiate thesis is an excellent survey:

http://www.ling.gu.se/~peb/pubs/Ljunglof-2002a.pdf


Lauri
0
la (473)
12/17/2007 7:49:11 PM
Joachim Durchholz skrev:
> Ben Franksen schrieb:
>> From what I've heard easy marshalling is one of the main strengths of
>> Erlang.

I assume that this refers to the built-in marshalling that's done
between erlang nodes?

It's pretty much a function of two things: dynamic typing, and
a strong focus on distribution transparency.


> It doesn't marshall function values at all.

It does, but it makes some assumptions about the code modules
available on each side. Wouldn't Haskell and Ocaml require
compatible type signatures on both sides?

A bigger problem for some applications might be that the marshalling
isn't structure-preserving. All data structures containing relative
references are flattened out.


> Erlang is a bit atypical for an FPL because Erlang programmers don't 
> usually use very high-order constructs. That's probably because Erlang 
> is used in context where it's more important to get the job done with 
> the mindset a C programmer brings into the project, rather than to try 
> and see how far one can drive abstraction.

In some sense, yes. Plain readable code is greatly favored, since the
code is supposed to be understood by support personnel, testers and
others, many of whom probably have never heard of a higher-order 
function. And tracing and debugging are a lot easier on named
functions.

Still, maps and folds are quite common, and list comprehensions are
getting there as well.

BR,
Ulf W
0
ulf.wiger (50)
12/17/2007 10:39:06 PM
Ben Franksen skrev:
 >
> Note: even Mnesia, the Erlang database, can store only data
 > terms, not functions.

It can store functions as abstract terms, which are easily
evaluated at run-time.  ;-)

1> F = 
{'fun',1,{clauses,[{clause,1,[{var,1,'X'}],[],[{op,1,'+',{var,1,'X'},{integer,1,1}}]}]}}.
2> 
erl_eval:exprs([{match,1,{var,1,'F'},F},{call,1,{var,1,'F'},[{integer,1,17}]}],[]). 
 
  {value,18,[{'F',#Fun<erl_eval.6.49591080>}]}

For those who don't do erlang abstract forms on a daily basis:

3> erl_scan:string("fun(X) -> X+1 end. ").
{ok,[{'fun',1},
     {'(',1},
     {var,1,'X'},
     {')',1},
     {'->',1},
     {var,1,'X'},
     {'+',1},
     {integer,1,1},
     {'end',1},
     {dot,1}],
    1}
4> erl_parse:parse_exprs(element(2,v(3))).
{ok,[{'fun',1,
      {clauses,
       [{clause,1,
         [{var,1,'X'}],
         [],
         [{op,1,'+',{var,1,'X'},
           {integer,1,1}}]}]}}]}


BR,
Ulf W
0
ulf.wiger (50)
12/17/2007 11:08:52 PM
Ulf Wiger schrieb:
> Joachim Durchholz skrev:
>> Ben Franksen schrieb:
>>> From what I've heard easy marshalling is one of the main strengths of
>>> Erlang.
> 
> I assume that this refers to the built-in marshalling that's done
> between erlang nodes?

Yes.

>> It doesn't marshall function values at all.
> 
> It does, but it makes some assumptions about the code modules
> available on each side.

OK, I stand corrected.
What are these assumptions?

 > Wouldn't Haskell and Ocaml require
> compatible type signatures on both sides?

Hopefully ;-)

> A bigger problem for some applications might be that the marshalling
> isn't structure-preserving. All data structures containing relative
> references are flattened out.

I don't understand - a list of lists wouldn't become a list, would it?

I do remember that shared substructures are not shared anymore after 
unmarshalling (which means an exponential blowup in the worst case, 
which obviously Doesn't Happen In Practice (TM) else the Erlang RTS 
would have been changed to handle that properly years ago).
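That worst case is easy to provoke in OCaml, whose Marshal module preserves intra-value sharing by default but can be told not to; the flattening then behaves roughly like the Erlang marshaller described above (a minimal sketch, not anyone's production code):

```ocaml
(* A value with heavy internal sharing: each level reuses the previous
   one twice, so the heap representation is linear in n but the fully
   flattened tree has 2^n leaves. *)
type t = Leaf | Node of t * t

let rec build n x = if n = 0 then x else build (n - 1) (Node (x, x))
let v = build 18 Leaf

(* Sharing-preserving serialization stays small... *)
let small = String.length (Marshal.to_string v [])

(* ...while flattening out the sharing blows up exponentially. *)
let big = String.length (Marshal.to_string v [Marshal.No_sharing])

let () = Printf.printf "with sharing: %d bytes, flattened: %d bytes\n" small big
```

The flattened output is several orders of magnitude larger than the sharing-preserving one, for a value that fits in a handful of heap words.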

Regards,
Jo
0
jo427 (1164)
12/18/2007 11:17:29 AM
Joachim Durchholz skrev:
> Ulf Wiger schrieb:
>
>>> It doesn't marshall function values at all.
>>
>> It does, but it makes some assumptions about the code modules
>> available on each side.
> 
> OK, I stand corrected.
> What are these assumptions?

That roughly the same code is loaded on both sides.
It doesn't have to be identical, but close enough that
the "fingerprinting" done by the compiler to identify
anonymous functions still checks out.

You could of course say that this is not marshalling function
/values/, but rather function references. Perhaps that's
what you meant?


>> A bigger problem for some applications might be that the marshalling
>> isn't structure-preserving. All data structures containing relative
>> references are flattened out.
> 
> I don't understand - a list of lists wouldn't become a list, would it?

That would of course be bad. (:


> I do remember that shared substructures are not shared anymore after 
> unmarshalling (which means an exponential blowup in the worst case, 
> which obviously Doesn't Happen In Practice (TM) else the Erlang RTS 
> would have been changed to handle that properly years ago).

He he, that's what I meant, and of course it never happens in practice.
I've been bitten by it really badly once (as in: I could go and get a
cup of coffee while the RTS was busy allocating half the universe),
and John Hughes ran into this when trying to do some magic with
QuickCheck.

http://www.erlang.org/pipermail/erlang-questions/2005-November/017924.html
http://www.erlang.org/pipermail/erlang-bugs/2007-November/000488.html

BR,
Ulf W
0
ulf.wiger (50)
12/18/2007 12:38:05 PM
On Dec 18, 1:38 pm, Ulf Wiger <ulf.wi...@e-r-i-c-s-s-o-n.com> wrote:
>
> > I do remember that shared substructures are not shared anymore after
> > unmarshalling (which means an exponential blowup in the worst case,
> > which obviously Doesn't Happen In Practice (TM) else the Erlang RTS
> > would have been changed to handle that properly years ago).
>
> He he, that's what I meant, and of course it never happens in practice.
> He he, that's what I meant, and of course it never happens in practice.
> I've been bitten by it really badly once (as in: I could go and get a
> cup of coffee while the RTS was busy allocating half the universe),
> and John Hughes ran into this when trying to do some magic with
> QuickCheck.
>
> http://www.erlang.org/pipermail/erlang-questions/2005-November/017924...

That looks familiar. :-) The problem you encountered with
serialisation there actually is a combination of two separate ones:
not maintaining sharing, and not lifting projections. In Alice ML we
found that the second is crucial (with respect to modules) even if you
already do the first - at least if you want to serialise code. Hence
the Alice compiler performs the lifting transformation you suggest for
module accesses. In fact, it uniformly lifts all module projections as
far as to the binding declaration of the respective module identifier.
It does not do it for records, though, because that would be a visible
change to the language semantics (changing evaluation order).

- Andreas
0
rossberg (600)
12/18/2007 1:03:01 PM
rossberg@ps.uni-sb.de skrev:
> On Dec 18, 1:38 pm, Ulf Wiger <ulf.wi...@e-r-i-c-s-s-o-n.com> wrote:
>> http://www.erlang.org/pipermail/erlang-questions/2005-November/017924...
> 
> That looks familiar. :-) The problem you encountered with
> serialisation there actually is a combination of two separate ones:
> not maintaining sharing, and not lifting projections. In Alice ML we
> found that the second is crucial (with respect to modules) even if you
> already do the first - at least if you want to serialise code. Hence
> the Alice compiler performs the lifting transformation you suggest for
> module accesses. In fact, it uniformly lifts all module projections as
> far as to the binding declaration of the respective module identifier.
> It does not do it for records, though, because that would be a visible
> change to the language semantics (changing evaluation order).

Nice.

FWIW, Erlang leaves the evaluation order (inside patterns) undefined.
There may be other problems... this is outside my area of expertise.

BR,
Ulf W
0
ulf.wiger (50)
12/18/2007 1:21:21 PM
In article <<fk6jrn$s0b$1@oravannahka.helsinki.fi>>,
Lauri Alanko <la@iki.fi> wrote:
> 
> For those who want to know more about the design space of parser
> combinators, Peter Ljunglöf's licentiate thesis is an excellent survey:
> 
> http://www.ling.gu.se/~peb/pubs/Ljunglof-2002a.pdf

Thanks for the link!

-- 
Neel R. Krishnaswami
neelk@cs.cmu.edu
0
neelk (298)
12/18/2007 5:04:22 PM
Jon Harrop wrote:
> Ben Franksen wrote:
>> Jon Harrop wrote:
>>> I'd like to know if the overloads have been resolved statically.
>> 
>> Why?
> 
> Predictable performance.

In no way can this be compared with things that can happen due to laziness.
Any performance hit you might get is surely quite predictable, since even
the most simple minded translation of type classes adds just one method
table lookup per call (and sometimes an extra dictionary argument). This is
certainly no worse than using virtual methods in C++.

Some implementations (ghc, jhc) do optimizations in this area. You might
want to ask the developers for details. I believe jhc claims to resolve
everything statically relying on whole program analysis. But jhc is not
ready for production use.

>>>>> . High-performance FFI ...
>>>> 
>>>> Yes.
>>> 
>>> Any evidence of that?
>> 
>> XMonad.
> 
> That seems to be a tiling window manager. How is that computationally
> intensive?

Who said it was? It does lots of FFI, due to its very nature, and it is fast
at that; I thought that's what you were asking.

If you want to do number-crunching, look elsewhere.

>>>>> . Free polymorphism.
>>>> 
>>>> Of course, polymorphism doesn't cost anything extra ;-)
>>>> 
>>>> Seriously: I am not familiar with this term. Could you explain?
>>> 
>>> Is there a run-time cost associated with polymorphism?
>> 
>> Depends on the implementation. And what you compare. Can you give an
>> example?
> 
> Will:
> 
>   Array.fold_left (+.) 0.
> 
> be optimized to run as fast as a C loop. OCaml has two problems: +. is not
> inlined and the Array.fold_left function is polymorphic which incurs a
> significant (2x) slowdown. F# inherits generics from .NET that fix the
> latter problem but it still doesn't inline the function argument.

Take a look at the latest GHC release (6.8.1). I heard that they drastically
improved the optimizations for low-level code like loops and such.

GHC is /very/ aggressive w.r.t. inlining. Don't forget -O2 switch ;)

BTW, if you like to program in a low-level, C-like style, you might be interested in
this nice blog post:

http://augustss.blogspot.com/2007/08/programming-in-c-ummm-haskell-heres.html

>>>>> . DLLs.
>>>> 
>>>> Don't know about that one.
>>> 
>>> I think this is another serious problem with Haskell (and OCaml).
>> 
>> More precisely: it is a problem of the existing implementations. It is
>> definitely not a problem of the languages.
> 
> I don't think that is true in any useful sense. The languages impose great
> demands that are totally uncatered for in this respect.

Ok, efficiency (again). Well, we are talking about functional languages,
right? As I understand it, functional programming uses functions for
(almost) everything. Thus optimisations like inlining are important, which
is difficult to do with the state of the art in shared libraries. But this
would apply regardless of whether the semantics is strict or non-strict,
right?

>>>>> . Lots more useful functionality in the stdlib and none of the cruft.
>>>> 
>>>> This is a bit too unspecific to comment on.
>>> 
>>> Things like vectors and matrices, FFTs and so on.
>> 
>> AFAIK, there is not yet very much libraries for this kind of stuff. But I
>> may be wrong. Contributions are always welcome, btw.
> 
> Much of its needs to be in the language or at least have enough support
> from the language. For example, you need an unboxed complex number
> representation to achieve decent performance. 

Not at all, if the compiler unboxes them for you by way of optimization. Not
all compilers are able to do that, though.

> F# makes that possible (and
> even provides it!) but none of the other FPLs can.

Not true. GHC supports unboxed primitive types and tuples (module GHC.Prim),
although this is of course not portable and there are certain restrictions.

OTOH, you hardly need it, since you can just use the 'built-in'
(library-wise) complex numbers. Look at Data.Complex, you'll see they are
defined as strict in the elements (real and imaginary part) and
with -O2 -funbox-strict-fields GHC will certainly unbox them for you.

>>>>> . Platform independence (many scientists and engineers don't run
>>>>> Windows).
>>>>> 
>>>>> . Faster symbolics thanks to a custom GC that isn't optimized for C#
>>>>> programs.
>>>>> 
>>>>> . No .NET baggage, e.g. different types for closures and raw
>>>>> functions.
>>>> 
>>>> Yes to these, since not .NET dependent.
>>> 
>>> How fast is Haskell at symbolics?
>> 
>> (Sorry, answer was meant only to first and third cited points.)
>> 
>> What do you mean with symbolics?
> 
> Compilers, interpreters, computer algebra and so on.

Fast enough for everything I have ever used it for. I guess it largely
depends on the way you represent your symbols. The naive way is to use
strings for symbol names. But strings were always a very slow and memory
consuming thing in Haskell, because of the (transparent) representation as
lists of characters. Now, since we have Data.ByteString, things look
better. I would still prefer to use abstract names instead of strings.

>>>> BTW, what does Ocaml do if the format string is the result of a
>>>> function call?
>> 
>> (repeating question)
> 
> I don't know.

And you've even written a book on OCaml. Might give you an idea why some
people don't find printf so easy ;-)

Cheers
Ben
0
12/18/2007 8:40:19 PM
Ulf Wiger wrote:
> Joachim Durchholz skrev:
>> Ben Franksen schrieb:
>>> From what I've heard easy marshalling is one of the main strengths of
>>> Erlang.
> 
> I assume that this refers to the built-in marshalling that's done
> between erlang nodes?

No, not only between erlang nodes. I remember a guy (what was his name
again, *google..google*, ah: Joel Reymont) who used it for some poker
server he wrote. He was impressed how easy it was to 'pickle' his data for
communication with some external application.

Cheers
Ben
0
12/18/2007 8:52:19 PM
Joachim Durchholz wrote:
> Ben Franksen schrieb:
>> Joachim Durchholz wrote:
>>> Ben Franksen schrieb:
>>>>> . Type safe marshalling.
>>>> Yes.
>>> Not really.
>>>
>>> Marshalling forces evaluation of the values marshalled.
>> 
>> But it is still type safe. That was the question.
> 
> OK, I was taking the requirement a bit beyond what was written. But I
> don't care about type safety if marshalling a value risks evaluating
> some infinite data structure hidden deep inside an abstract data type.

I think you exaggerate the risk a bit. As programmer you typically know very
well whether your value is (supposed to be) finite or not. There are /many/
functions in the standard library that are strict and each one of them
could bottom out if applied to the wrong (e.g. an infinite) value. Everyone
knows that you should not call e.g. 'length' on a list that might be
infinite, but there are other functions where it is not /that/ obvious
e.g. 'foldr' (as opposed to 'foldl' which works just fine on infinite
lists).
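The same distinction exists in a strict language once laziness is made explicit. As a rough analogue of the foldr/length situation, here is a sketch using OCaml's Seq (lazy sequences), where it is likewise up to the programmer to know which consumers terminate on infinite values:

```ocaml
(* An infinite lazy sequence of naturals, built directly on Seq. *)
let rec from n : int Seq.t = fun () -> Seq.Cons (n, from (n + 1))
let nats = from 0

(* A consumer that inspects only a finite prefix terminates... *)
let rec take n s =
  if n = 0 then []
  else match s () with
       | Seq.Nil -> []
       | Seq.Cons (x, rest) -> x :: take (n - 1) rest

let first_five = take 5 nats

(* ...while a fully strict consumer diverges, just like length or a
   strict fold on an infinite Haskell list:
     let n = Seq.fold_left (+) 0 nats   (* loops forever *) *)
```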

>>> IOW you may get nontermination (if the data structure contains infinite
>>> substructures). You will lose the advantages of having a non-strict
>>> languages.
>> 
>> If you just print values to the console they are evaluated, too.
> 
> Yes, but printing to the console is just a debugging aid, and you see it
> immediately if there's a problem.

Console output is not /only/ for debugging but expected program behaviour
for very many kinds of programs. But that's a side issue.

> Marshalling is usually part of the application logic and must not run
> into an endless loop.

Of course not. What I meant to say is this: Suppose you are programming in a
strict language and you have a large and complex data structure that must
somehow be serialized, be it for console output or to save it in a file or
for sending over the network. Surely the code that does this serialisation
might contain a non-termination bug, right? So, how is this any different?
In both cases, the only way to make sure that this won't happen is to prove
that it can't happen.

Cheers
Ben
0
12/18/2007 9:30:52 PM
Jon Harrop wrote:
> Joachim Durchholz wrote:
>> Ben Franksen schrieb:
>>> Joachim Durchholz wrote:
>>>> Ben Franksen schrieb:
>>>>>> . Type safe marshalling.
>>>>> Yes.
>>>> Not really.
>>>>
>>>> Marshalling forces evaluation of the values marshalled.
>>> 
>>> But it is still type safe. That was the question.
>> 
>> OK, I was taking the requirement a bit beyond what was written.
> 
> Can I just clarify: that wasn't a requirement but a list of deficiencies
> in OCaml that could be fixed. Haskell fixes many of OCaml's deficiencies
> but also introduces many more.

Right, Haskell has many weaknesses, the most important IMO being a very poor
module system and lack of records. But laziness (which we were talking
about, at least indirectly) is not a deficiency but a feature and an
extremely valuable one, if you ask me, even though it does cause problems
sometimes.

>>> If you just print values to the console they are evaluated, too.
>> 
>> Yes, but printing to the console is just a debugging aid, and you see it
>> immediately if there's a problem.
>> Marshalling is usually part of the application logic and must not run
>> into an endless loop.
> 
> Exactly. In contrast, OCaml's marshalling already handles cycles...

That's not too hard if cycles are always explicit. A non-strict language
with /infinite values/ is a completely different beast: cycles in Haskell
are not observable other than through non-termination. You might prefer the
slightly less expressive but more predictable (resource-wise) strict
semantics. That's ok. I think it depends very much on the type of problems
you want to solve. There are times when I wish for a language with
Haskell's cool syntax and type system but strict semantics. Even better if
it has a module system to speak of (like the MLs) and Real Records (tm) :-)

Cheers
Ben
0
12/18/2007 9:54:01 PM
Philippa Cowderoy wrote:
> On Mon, 17 Dec 2007, Ben Franksen wrote:
>> Philippa Cowderoy wrote:
>> > This is something of an oversimplification - there are many ways to
>> > build a parsing combinator library, with ranging implications for
>> > performance. Historically, designs have tended towards greater parsing
>> > power rather than speed.
>> 
>> Yes. An exception is Parsec which compromises power for speed in a small
>> but important area, namely the choice operator, which in Parsec does
>> /not/ backtrack by default.
> 
> This doesn't compromise power for speed - it compromises ease of use for
> speed.

Right. My mistake.

Cheers
Ben
0
12/18/2007 9:57:45 PM
Ulf Wiger wrote:
> Ben Franksen skrev:
>  >
>> Note: even Mnesia, the Erlang database, can store only data
>  > terms, not functions.
> 
> It can store functions as abstract terms, which are easily
> evaluated at run-time.  ;-)

Cool. For many applications this would be enough. Is there some magic to
automatically convert(*) a function (assuming access to source code) to
such a term? I.e. is there an inverse to erl_eval:exprs?

> 1> F =
> {'fun',1,{clauses,[{clause,1,[{var,1,'X'}],[],[{op,1,'+',{var,1,'X'},
> {integer,1,1}}]}]}}.
> 2>
> erl_eval:exprs([{match,1,{var,1,'F'},F},{call,1,{var,1,'F'},
> [{integer,1,17}]}],[]).
>  
>   {value,18,[{'F',#Fun<erl_eval.6.49591080>}]}
> 
> For those who don't do erlang abstract forms on a daily basis:
> 
> 3> erl_scan:string("fun(X) -> X+1 end. ").
> {ok,[{'fun',1},
>      {'(',1},
>      {var,1,'X'},
>      {')',1},
>      {'->',1},
>      {var,1,'X'},
>      {'+',1},
>      {integer,1,1},
>      {'end',1},
>      {dot,1}],
>     1}
> 4> erl_parse:parse_exprs(element(2,v(3))).
> {ok,[{'fun',1,
>       {clauses,
>        [{clause,1,
>          [{var,1,'X'}],
>          [],
>          [{op,1,'+',{var,1,'X'},
>            {integer,1,1}}]}]}}]}

(*) I wanted to say 'reify' but then wasn't sure if it shouldn't rather
be 'reflect', instead. Can anyone tell me exactly when to use which term?

Cheers
Ben
0
12/18/2007 10:05:17 PM
Ben Franksen wrote:
> Jon Harrop wrote:
>> Ben Franksen wrote:
>>> Jon Harrop wrote:
>>>> I'd like to know if the overloads have been resolved statically.
>>> 
>>> Why?
>> 
>> Predictable performance.
> 
> In no way can this be compared with things that can happen due to
> laziness.

I wasn't referring to laziness.

> Any performance hit you might get is surely quite predictable, 
> since even the most simple minded translation of type classes adds just
> one method table lookup per call (and sometimes an extra dictionary
> argument). This is certainly no worse than using virtual methods in C++.

I need to know whether or not the lookup was resolved statically and
inlined. The single most important use of type classes is arithmetic and
you cannot afford to have every arithmetic operation silently indirected.

Hmm, wait a minute. Can't you always statically resolve the lookups if
you're JIT compiling specialized functions over each of their type
variables? So this probably wouldn't be an issue anyway.

> Some implementations (ghc, jhc) do optimizations in this area. You might
> want to ask the developers for details. I believe jhc claims to resolve
> everything statically relying on whole program analysis. But jhc is not
> ready for production use.

Yes. F# only allows static resolution.

>>>>>> . High-performance FFI ...
>>>>> 
>>>>> Yes.
>>>> 
>>>> Any evidence of that?
>>> 
>>> XMonad.
>> 
>> That seems to be a tiling window manager. How is that computationally
>> intensive?
> 
> Who said it was? It does lots of FFI, due to its very nature, and it is
> fast at that; I thought that's what you were asking.
> 
> If you want to do number-crunching, look elsewhere.

That's exactly what I want to do, yes. F# handles this well.

>>>>>> . DLLs.
>>>>> 
>>>>> Don't know about that one.
>>>> 
>>>> I think this is another serious problem with Haskell (and OCaml).
>>> 
>>> More precisely: it is a problem of the existing implementations. It is
>>> definitely not a problem of the languages.
>> 
>> I don't think that is true in any useful sense. The languages impose
>> great demands that are totally uncatered for in this respect.
> 
> Ok, efficiency (again).

Not efficiency, just functionality. I need to be able to sell DLLs to
customers who can then use them with as little fiddle arsing around as
possible.

OCaml has platform-independent .cma files but they only work when everything
they depend upon is identical, including the compiler. So our free
edition of Smoke must have separate downloads for all of the different
versions (even minor-minor versions) of the OCaml compilers and LablGL:

  http://www.ffconsultancy.com/products/smoke_vector_graphics/?clf

Someone else wants Smoke for OCaml 3.09.3...

No logical reason for it and if there were more dependencies (e.g. LablGTK2)
then we would need all combinations of all of them. This is completely
absurd, of course, and makes it prohibitively difficult to commercialize
code in this way. Source code works but because of the risk of losing our
competitive edge, a source code licence costs 10x as much.

In contrast, just tell Visual Studio to generate a DLL from the OCaml source
and you can sell an F# DLL immediately with almost no hassle:

  http://www.ffconsultancy.com/products/fsharp_for_visualization/?clf

>>>> Things like vectors and matrices, FFTs and so on.
>>> 
>>> AFAIK, there is not yet very much libraries for this kind of stuff. But
>>> I may be wrong. Contributions are always welcome, btw.
>> 
>> Much of its needs to be in the language or at least have enough support
>> from the language. For example, you need an unboxed complex number
>> representation to achieve decent performance.
> 
> Not at all, if the compiler unboxes them for you by way of optimization.
> Not all compilers are able to do that, though.

The compiler needs to know what to unbox and what to inline so the language
must expose that to the programmer.

>> F# makes that possible (and
>> even provides it!) but none of the other FPLs can.
> 
> Not true. GHC supports unboxed primitive types and tuples (module
> GHC.Prim), although this is of course not portable and there are certain
> restrictions.
> 
> OTOH, you hardly need it, since you can just use the 'built-in'
> (library-wise) complex numbers. Look at Data.Complex, you'll see they are
> defined as strict in the elements (real and imaginary part) and
> with -O2 -funbox-strict-fields GHC will certainly unbox them for you.

I had no idea Haskell already supported complexes. Sounds like an FFT
benchmark is in order.

>>>> How fast is Haskell at symbolics?
>>> 
>>> (Sorry, answer was meant only to first and third cited points.)
>>> 
>>> What do you mean with symbolics?
>> 
>> Compilers, interpreters, computer algebra and so on.
> 
> Fast enough for everything I have ever used it for. I guess it largely
> depends on the way you represent your symbols. The naive way is to use
> strings for symbol names. But strings were always a very slow and memory
> consuming thing in Haskell, because of the (transparent) representation as
> lists of characters.

Ugh.

> Now, since we have Data.ByteString, things look 
> better. I would still prefer to use abstract names instead of strings.

You can implement symbols efficiently as a type yourself, of course, using
hash consing and physical equality. I did this in OCaml and it worked quite
well.
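A minimal sketch of the interning approach described here (the names below are made up for illustration): a table maps each symbol name to its first, canonical copy, so symbol comparison reduces to physical equality:

```ocaml
(* Hypothetical symbol interning: every distinct name is stored once,
   so two interned symbols are equal iff they are physically equal (==). *)
let table : (string, string) Hashtbl.t = Hashtbl.create 97

let intern s =
  match Hashtbl.find_opt table s with
  | Some canonical -> canonical          (* reuse the first copy *)
  | None -> Hashtbl.add table s s; s     (* first occurrence becomes canonical *)

let () =
  let a = intern "foo" in
  let b = intern (String.concat "" ["f"; "oo"]) in   (* a fresh, distinct string *)
  assert (a == b)   (* same canonical object, so comparison is O(1) *)
```

The payoff is that equality tests during symbolic computation become pointer comparisons instead of string comparisons.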

>>>>> BTW, what does Ocaml do if the format string is the result of a
>>>>> function call?
>>> 
>>> (repeating question)
>> 
>> I don't know.
> 
> And you've even written a book on OCaml. Might give you an idea why some
> people don't find printf so easy ;-)

The value of printf lies in its ease of use and not in its theoretical
properties, of course.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/18/2007 10:18:26 PM
Ben Franksen <ben.franksen@online.de> wrote:
[...]
> Of course not. What I meant to say is this: Suppose you are programming in a
> strict language and you have a large and complex data structure that must
> somehow be serialized, be it for console output or to save it in a file or
> for sending over the network. Surely the code that does this serialisation
> might contain a non-termination bug, right? So, how is this any different?
> In both cases, the only way to make sure that this won't happen is to prove
> that it can't happen.

In SML, the only way to build cyclic data (ignoring closures) is through
references and arrays (+ either an exception or a recursive datatype),
which are also the only kind of types whose values are mutable and have
identity.  So, if you are serializing a data structure that may contain
cycles (or observable sharing), all you need to do is to keep a map of ref
cells and arrays that you have encountered.  It should be relatively easy
to prove correct and there are only two types at which you need to
consider cycles.
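The bookkeeping described above can be sketched in OCaml as well: walk the structure keeping a physical-equality map from already-visited mutable cells to ids, and emit a back-reference on the second encounter. This is a toy serializer for a single mutable node type, not the SML library's actual API:

```ocaml
(* A toy graph node: mutable, so it can participate in cycles. *)
type node = { value : int; mutable next : node option }

(* Serialize to a list of tokens, using physical equality (==) to detect
   nodes already emitted and replace them with back-references. *)
let serialize root =
  let seen = ref [] in      (* (node, id) pairs, compared with == *)
  let fresh = ref 0 in
  let rec go n =
    match List.find_opt (fun (m, _) -> m == n) !seen with
    | Some (_, id) -> [Printf.sprintf "ref:%d" id]
    | None ->
        let id = !fresh in
        incr fresh;
        seen := (n, id) :: !seen;
        Printf.sprintf "node:%d:%d" id n.value
        :: (match n.next with None -> ["nil"] | Some m -> go m)
  in
  go root

let () =
  let a = { value = 1; next = None } in
  let b = { value = 2; next = Some a } in
  a.next <- Some b;   (* a -> b -> a: a cycle *)
  assert (serialize a = ["node:0:1"; "node:1:2"; "ref:0"])
```

As in the SML version, only the mutable type needs cycle handling; immutable data can never close a cycle.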

In my generic pickling implementation for SML, there is 1 function
(cyclic) that deals with cycles, for both potentially cyclic refs and
arrays, and 1 that deals with sharing (share), for both provably acyclic
refs and arrays as well as immutable data, cyclic or not.

http://mlton.org/cgi-bin/viewsvn.cgi/*checkout*/mltonlib/trunk/com/ssh/generic/unstable/public/value/pickle.sig
http://mlton.org/cgi-bin/viewsvn.cgi/*checkout*/mltonlib/trunk/com/ssh/generic/unstable/detail/value/pickle.sml
http://mlton.org/cgi-bin/viewsvn.cgi/*checkout*/mltonlib/trunk/com/ssh/generic/unstable/test/pickle.sml

-Vesa Karvonen
0
12/18/2007 11:01:51 PM
Ben Franksen <ben.franksen@online.de> wrote:
[...]
> Right, Haskell has many weaknesses, the most important IMO being a very poor
> module system and lack of records. But laziness (which we were talking
> about, at least indirectly) is not a deficiency but a feature and an
> extremely valuable one, if you ask me, even though it does cause problems
> sometimes.

Well, as a kind of data point, I've converted some Haskell (combinator)
libraries to SML (e.g. Parsec (I've converted the core combinators a long
time ago, but my port of the lib is not yet ready/available), QuickCheck
[1], and PPrint [2] --- those are not the only conversions I've made).  In
many cases, the paper introducing the combinators has expressed the
importance of laziness.  However, in my experience, adding the necessary
laziness explicitly using delay/force (well [3]) or just thunks has always
been easy, if not trivial, and has also, IMO, forced me to gain a better
understanding of the underlying algorithms.

-Vesa Karvonen

[1] http://mlton.org/cgi-bin/viewsvn.cgi/mltonlib/trunk/com/ssh/unit-test/unstable/
[2] http://mlton.org/cgi-bin/viewsvn.cgi/mltonlib/trunk/com/ssh/prettier/unstable/
[3] http://mlton.org/cgi-bin/viewsvn.cgi/*checkout*/mltonlib/trunk/com/ssh/extended-basis/unstable/public/lazy/lazy.sig?rev=5532
0
12/18/2007 11:25:21 PM
Ben Franksen wrote:
> Jon Harrop wrote:
>> Can I just clarify: that wasn't a requirement but a list of deficiencies
>> in OCaml that could be fixed. Haskell fixes many of OCaml's deficiencies
>> but also introduces many more.
> 
> Right, Haskell has many weaknesses, the most important IMO being a very
> poor module system and lack of records. But laziness (which we were
> talking about, at least indirectly) is not a deficiency but a feature and
> an extremely valuable one, if you ask me, even though it does cause
> problems sometimes.

I agree. However, I think laziness should be opt-in. One lazy addition I'd
like to see in ML is the ability to pattern match over lazy values, forcing
them only when required. This would solve the practical problem of it being
hard to enumerate sequence containers in an abstract way.
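For what it's worth, OCaml does already have a limited form of this: a `lazy` pattern forces a suspension when the pattern is matched. A sketch with a lazy stream type:

```ocaml
(* An infinite stream whose tail is a suspension. *)
type 'a stream = Cons of 'a * 'a stream Lazy.t

let rec from n = Cons (n, lazy (from (n + 1)))

(* The `lazy tl` pattern forces the tail suspension on match. *)
let rec take n (Cons (hd, lazy tl)) =
  if n = 0 then [] else hd :: take (n - 1) tl

let () = assert (take 3 (from 10) = [10; 11; 12])
```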

>>>> If you just print values to the console they are evaluated, too.
>>> 
>>> Yes, but printing to the console is just a debugging aid, and you see it
>>> immediately if there's a problem.
>>> Marshalling is usually part of the application logic and must not run
>>> into an endless loop.
>> 
>> Exactly. In contrast, OCaml's marshalling already handles cycles...
> 
> That's not too hard if cycles are always explicit. A non-strict language
> with /infinite values/ is a completely different beast: cycles in Haskell
> are not observable other than through non-termination. You might prefer
> the slightly less expressive but more predictable (resource-wise) strict
> semantics. That's ok. I think it depends very much on the type of problems
> you want to solve. There are times when I wish for a language with
> Haskell's cool syntax and type system but strict semantics. Even better if
> it has a module system to speak of (like the MLs) and Real Records (tm)
> :-)

Yes. So what Haskell features do you value the most?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/18/2007 11:57:39 PM
Jon Harrop wrote:
>>>>> If you just print values to the console they are evaluated, too.
>>>> 
>>>> Yes, but printing to the console is just a debugging aid, and you see
>>>> it immediately if there's a problem.
>>>> Marshalling is usually part of the application logic and must not run
>>>> into an endless loop.
>>> 
>>> Exactly. In contrast, OCaml's marshalling already handles cycles...
>> 
>> That's not too hard if cycles are always explicit. A non-strict language
>> with /infinite values/ is a completely different beast: cycles in Haskell
>> are not observable other than through non-termination. You might prefer
>> the slightly less expressive but more predictable (resource-wise) strict
>> semantics. That's ok. I think it depends very much on the type of
>> problems you want to solve. There are times when I wish for a language
>> with Haskell's cool syntax and type system but strict semantics. Even
>> better if it has a module system to speak of (like the MLs) and Real
>> Records (tm)
>> :-)
> 
> Yes. So what Haskell features do you value the most?

Off the top of my head

- advanced type system (type classes, GADTs, higher rank types,
  polymorphic recursion,...)
- laziness (yes, I love it)
- referential transparency ('purity')
- elegant and concise syntax with a minimum of 'punctuation noise',
  thereby still readable, sometimes even 'talkative'
- active, highly intelligent, and very friendly community

But the real reason I like Haskell is this: programming in Haskell is fun!
Yes, it is often an intellectual challenge, but to me that is part of the
fun, and it is what programming is supposed to be, IMO. It is a
distinguished pleasure when I finally find the right abstraction so that
everything else nicely falls into place. The strong but expressive type
system helps a lot, as does the enforced purity and my own personal
perfectionism, to make me struggle until I reach that goal.

Cheers
Ben
0
12/19/2007 1:09:34 AM
I just promised I'd stop posting for today. So much for promises. Sigh.

Jon Harrop wrote:
> Ben Franksen wrote:
>> Jon Harrop wrote:
>>> Ben Franksen wrote:
>>>> Jon Harrop wrote:
>>>>> I'd like to know if the overloads have been resolved statically.
>>>> 
>>>> Why?
>>> 
>>> Predictable performance.
>> 
>> In no way can this be compared with things that can happen due to
>> laziness.
> 
> I wasn't referring to laziness.

I know. I wanted to suggest that in practice this isn't a show stopper,
whereas a memory leak due to too much laziness in the wrong place can be.
And hard to fix, too.

>> Any performance hit you might get is surely quite predictable,
>> since even the most simple minded translation of type classes adds just
>> one method table lookup per call (and sometimes an extra dictionary
>> argument). This is certainly no worse than using virtual methods in C++.
> 
> I need to know whether or not the lookup was resolved statically and
> inlined. The single most important use of type classes is arithmetic and
> you cannot afford to have every arithmetic operation silently indirected.

I don't think this is the case in Haskell. For numeric operations, GHC
unboxes practically everything if this is semantically transparent. It may
need a bit of help here or there (e.g. strictness annotations) to make such
a conclusion, but in most cases it finds out all by itself. This is because
it does a lot of cross-module inlining and in the final application you
most probably /have/ a concrete numeric type; and the operations on the
concrete types are all strict by definition.
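The indirection Jon is worried about is dictionary passing. A hand-desugared sketch (the class and all names here are made up for illustration) of roughly what an *unspecialized* type-class call compiles to, and what inlining a known dictionary avoids:

```haskell
-- A hypothetical single-method class...
class MyNum a where
  add :: a -> a -> a

instance MyNum Int where
  add = (+)

-- ...and its dictionary-passing desugaring: when the concrete type is
-- not known statically, the compiler passes a record of methods.
data MyNumDict a = MyNumDict { addD :: a -> a -> a }

intDict :: MyNumDict Int
intDict = MyNumDict { addD = (+) }

-- One field lookup per call, unless the dictionary is resolved
-- statically and the method inlined.
double :: MyNumDict a -> a -> a
double d x = addD d x x
```

When GHC specializes `double` to `Int`, `addD intDict` reduces to `(+)` and the indirection disappears, which is the point Ben is making.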

>>>>>>> . High-performance FFI ...
>>>>>> 
>>>>>> Yes.
>>>>> 
>>>>> Any evidence of that?
>>>> 
>>>> XMonad.
>>> 
>>> That seems to be a tiling window manager. How is that computationally
>>> intensive?
>> 
>> Who said it was? It does lots of FFI, due to its very nature, and it is
>> fast at that; I thought that's what you were asking.
>> 
>> If you want to do number-crunching, look elsewhere.
> 
> That's exactly what I want to do, yes. F# handles this well.

Ok. A lazy language might, in the end, not be what you need. Although it's a
pity... ;)

>>>>> Things like vectors and matrices, FFTs and so on.
>>>> 
>>>> AFAIK, there is not yet very much libraries for this kind of stuff. But
>>>> I may be wrong. Contributions are always welcome, btw.
>>> 
>>> Much of its needs to be in the language or at least have enough support
>>> from the language. For example, you need an unboxed complex number
>>> representation to achieve decent performance.
>> 
>> Not at all, if the compiler unboxes them for you by way of optimization.
>> Not all compilers are able to do that, though.
> 
> The compiler needs to know what to unbox and what to inline so the
> language must expose that to the programmer.

No, not necessarily. The compiler can safely unbox everything that it finds
is used strictly; and strictness can be inferred to a great extent. GHC does
quite a good job of optimizing boxed-ness away. With whole program analysis
you can do even better.

>>> F# makes that possible (and
>>> even provides it!) but none of the other FPLs can.
>> 
>> Not true. GHC supports unboxed primitive types and tuples (module
>> GHC.Prim), although this is of course not portable and there are certain
>> restrictions.
>> 
>> OTOH, you hardly need it, since you can just use the 'built-in'
>> (library-wise) complex numbers. Look at Data.Complex, you'll see they are
>> defined as strict in the elements (real and imaginary part) and
>> with -O2 -funbox-strict-fields GHC will certainly unbox them for you.
> 
> I had no idea Haskell already supported complexes. Sounds like an FFT
> benchmark is in order.

You might want to take a look at the Haskell'98 report. It is online at
http://www.haskell.org/onlinelibrary.

For the benchmark, I recommend using ghc-6.8.1. Native code generation has
been improved quite a bit since 6.6.1. Don't forget the optimization
switches (they are well documented). You might even want to play with
the 'par' combinator (see Control.Parallel) if you have cores to waste, just
to see if it works as advertised. Although I don't know how well FFT can be
parallelised. (You might guess that numerics isn't exactly my specialty.)
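A minimal example of what playing with `par` looks like (it lives in `Control.Parallel`, from GHC's parallel support). Note that `par` only *sparks* evaluation of its first argument, so the result is identical whether or not a spare core picks it up:

```haskell
import Control.Parallel (par, pseq)

-- Spark the evaluation of `a` in parallel, evaluate `b` on the
-- current thread, then combine. Semantically just `a + b`.
parSum :: [Int] -> [Int] -> Int
parSum xs ys = a `par` (b `pseq` (a + b))
  where
    a = sum xs
    b = sum ys
```

Compile with `-threaded` and run with `+RTS -N2` to actually use two cores; without them the program still produces the same answer, only sequentially.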

>>>>> How fast is Haskell at symbolics?
>>>> 
>>>> (Sorry, answer was meant only to first and third cited points.)
>>>> 
>>>> What do you mean with symbolics?
>>> 
>>> Compilers, interpreters, computer algebra and so on.
>> 
>> Fast enough for everything I have ever used it for. I guess it largely
>> depends on the way you represent your symbols. The naive way is to use
>> strings for symbol names. But strings were always a very slow and memory
>> consuming thing in Haskell, because of the (transparent) representation
>> as lists of characters.
> 
> Ugh.

It is ok for very many applications and extremely elegant. In an application
I wrote lately I didn't even bother to use ByteString, although almost all
data is text; it was fast enough with normal Haskell Strings. It could be
that this is because, although the total amount of data processed is large,
each single string is typically short.
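To illustrate the representation difference being discussed: `String` is literally a lazy linked list of `Char` (several machine words per character on a 64-bit machine), while `Data.ByteString.Char8` packs one byte per character into a contiguous array. A small sketch:

```haskell
import qualified Data.ByteString.Char8 as B

-- String = [Char]: a cons cell plus a boxed Char per character.
s :: String
s = "hello"

-- ByteString: a packed byte array with O(1) length.
b :: B.ByteString
b = B.pack "hello"

lenBoth :: (Int, Int)
lenBoth = (length s, B.length b)  -- O(n) walk vs. O(1) field read
```

The same five characters, but the memory footprint and traversal cost differ by an order of magnitude, which is why it matters mainly when individual strings are long or numerous.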

>> Now, since we have Data.ByteString, things look
>> better. I would still prefer to use abstract names instead of strings.
> 
> You can implement symbols efficiently as a type yourself, of course, using
> hash consing and physical equality. I did this in OCaml and it worked
> quite well.

Yes, that's what I meant.
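The interning scheme described above might be sketched like this in Haskell (all names are hypothetical; a production version would likely use hashing and expose symbol names for printing):

```haskell
import Data.IORef
import qualified Data.Map as M

-- Each distinct string is assigned a small Int exactly once, so
-- symbol comparison becomes integer comparison instead of a
-- character-by-character walk.
newtype Symbol = Symbol Int deriving (Eq, Ord, Show)

type SymTab = IORef (M.Map String Symbol, Int)

newSymTab :: IO SymTab
newSymTab = newIORef (M.empty, 0)

intern :: SymTab -> String -> IO Symbol
intern tab name = do
  (m, next) <- readIORef tab
  case M.lookup name m of
    Just s  -> return s                    -- already interned: reuse
    Nothing -> do                          -- first occurrence: assign id
      let s = Symbol next
      writeIORef tab (M.insert name s m, next + 1)
      return s
```

Interning the same name twice yields equal symbols, so equality tests in a symbol-heavy workload (parsers, computer algebra) stay cheap.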

Cheers
Ben
0
12/19/2007 1:09:39 AM
Ben Franksen wrote:
> Jon Harrop wrote:
>> I wasn't referring to laziness.
> 
> I know. I wanted to suggest that in practice this isn't a show stopper,
> whereas a memory leak due to too much laziness in the wrong place can be.
> And hard to fix, too.

An order of magnitude performance hit in arithmetic would be a show stopper
for me if I can't track it down.

>> I need to know whether or not the lookup was resolved statically and
>> inlined. The single most important use of type classes is arithmetic and
>> you cannot afford to have every arithmetic operation silently indirected.
> 
> I don't think this is the case in Haskell. For numeric operations, GHC
> unboxes practically everything if this is semantically transparent.

You mean for numerics with its own types but what about my types
(quaternions, low-dimensional vectors etc.)?

What exactly do you mean by "semantically transparent"?

>>> If you want to do number-crunching, look elsewhere.
>> 
>> That's exactly what I want to do, yes. F# handles this well.
> 
> Ok. A lazy language might, in the end, not be what you need. Although it's
> a pity... ;)

Do you think a new language could incorporate the benefits of laziness
without sacrificing the easy optimizability of something like OCaml or F#?

>> The compiler needs to know what to unbox and what to inline so the
>> language must expose that to the programmer.
> 
> No, not necessarily. The compiler can safely unbox everything that it
> finds is used strictly; and strictness can be inferred to a great extent.
> GHC does quite a good job of optimizing boxed-ness away. With whole
> program analysis you can do even better.

How would it know to unbox a complex but not a 2D vector, for example? Only
the programmer knows that...

>> I had no idea Haskell already supported complexes. Sounds like an FFT
>> benchmark is in order.
> 
> You might want to take a look at the Haskell'98 report. It is online at
> http://www.haskell.org/onlinelibrary.
> 
> For the benchmark, I recommend using ghc-6.8.1. Native code generation has
> been improved quite a bit since 6.6.1. Don't forget the optimization
> switches (they are well documented). You might even want to play with
> the 'par' combinator (see Control.Parallel) if you have cores to waste, just
> to see if it works as advertised.

I haven't been able to get that to work here.

> Although I don't know how well FFT can 
> be parallelised. (You might guess that numerics isn't exactly my
> specialty.)

:-)

>>> Fast enough for everything I have ever used it for. I guess it largely
>>> depends on the way you represent your symbols. The naive way is to use
>>> strings for symbol names. But strings were always a very slow and memory
>>> consuming thing in Haskell, because of the (transparent) representation
>>> as lists of characters.
>> 
>> Ugh.
> 
> It is ok for very many applications and extremely elegant. In an
> application I wrote lately I didn't even bother to use ByteString,
> although almost all data is text; it was fast enough with normal Haskell
> Strings. It could be that this is because, although the total amount of
> data processed is large, each single string is typically short.

That's a lot of wastage on a 64-bit machine though. Something else that is
of interest to me is memory efficiency. A lot of technical users will
simply attack the largest problems that they can, meaning anything that
fits in RAM. If a language or implementation is very memory inefficient
then that can cost them their research. Having lots of 64-bit pointers can
kill that, as can OCaml's lack of 32-bit floats as a storage format in
arbitrary data structures...

>>> Now, since we have Data.ByteString, things look
>>> better. I would still prefer to use abstract names instead of strings.
>> 
>> You can implement symbols efficiently as a type yourself, of course,
>> using hash consing and physical equality. I did this in OCaml and it
>> worked quite well.
> 
> Yes, that's what I meant.

You can even integrate it into a polymorphic variant type in OCaml, if you
know what you're doing. :-)

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/19/2007 10:22:48 AM
Ben Franksen <ben.franksen@online.de> wrote:
[...]
> Off the top of my head

> - advanced type system (type classes, GADTs, higher rank types,
>   polymorphic recursion,...)

Yeah, I'd love to have higher-rank types in SML, too.  When higher-rank
types are used as a hidden implementation detail, it is possible to make
do in SML by replacing the higher-rank types with a universal type.  OTOH,
when higher-rank types appear at the interface, things are not as nice.

> - laziness (yes, I love it)

Personally, I find that the value of laziness is overrated and that strict
evaluation is the better default (I'm not opposed to having a convenient
syntax (some sugar) for laziness in a strict language).  Every other
Haskell paper extols laziness, but when it comes down to the details, I've
pretty much always (I don't remember a single counterexample, but I don't
doubt that programs where laziness is truly pervasive may exist) found it
both easy and illuminating to insert the necessary laziness explicitly.
Frankly, based on my experience of converting Haskell combinator libraries
to SML, I've gotten the feeling that even expert Haskellers don't quite
understand the dynamic semantics of their programs and tend to just
naively attribute the working of their programs to laziness.

The main trouble with laziness is that it tends to lead to programs with
much less predictable space usage and, to a lesser degree of trouble, less
predictable time usage.  There are many articles on the web that discuss
problems people have run into with laziness and space leaks in Haskell.
OTOH, I can only remember ever having a single "space leak" in a SML
program.  It was caused by the program failing (by raising an exception)
and then the (MLton) run-time kept trying to write the error message
(concerning the uncaught exception) to a non-existent stdout (or stderr, I
don't remember which one it was) stream (this was a Windows app with no
stdout).  The observable side-effect was that the program started to
allocate more and more memory.
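The canonical example of the unpredictable space usage described above is summing with a lazy left fold: an unevaluated chain of `(+)` thunks grows linearly with the input before anything is computed, while `foldl'` forces the accumulator at each step and runs in constant space:

```haskell
import Data.List (foldl')

-- sumLazy builds (((0+1)+2)+3)... as a thunk chain; on large inputs
-- (say [1 .. 10^7]) this can exhaust memory. sumStrict evaluates the
-- accumulator eagerly and stays in constant space.
sumLazy, sumStrict :: [Int] -> Int
sumLazy   = foldl  (+) 0
sumStrict = foldl' (+) 0
```

Both produce the same answer; only their space behaviour differs, which is exactly why such leaks are easy to miss in testing and hard to predict from the source text.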

> - referential transparency ('purity')

I'd also love to have some convenient way to write pure functions in SML
so that it would be reflected in the types.  It is possible to write pure
functions in SML using combinators [SKI], but I wouldn't call it
convenient for writing arbitrary pure functions.  It might be usable in
some special cases.  In fact, it is not uncommon to expose a set of
combinators for writing some simple functions (although the arrow type
might not even be exposed) that has some desired property (even in
Haskell).

[SKI] http://lambda-the-ultimate.org/node/1681#comment-20516

> - elegant and concise syntax with a minimum of 'punctuation noise',
>   thereby still readable, sometimes even 'talkative'

Yeah, I'm personally not into layout sensitive syntax, but I think SML's
syntax is, in some places, unnecessarily verbose and could be trimmed.
However, the partly verbose syntax doesn't really bother me that much
(otherwise I wouldn't be programming in SML).  It used to bother me more,
but then I changed the way I layout code (to essentially avoid excessive
nesting).

> - active, highly intelligent, and very friendly community

The Haskell community also seems to be xenophobic to the point of
discussing assassinating people [ASS].  ;-)

[ASS] http://tuukka.iki.fi/tmp/haskell-2007-05-25.html

Frankly, the Haskell community also seems somewhat double-faced to me.  On
one hand, starting with "Why FP Matters", they glorify laziness and
purity.  On the other hand, they seem to throw them out as soon as the
going gets a little tough.  Look at just about any serious Haskell library
--- full of strictness annotations.  Look at just about any Haskell toy
benchmark --- it doesn't only have strictness annotations, but might even
use the foreign function interface for basic operations (that could also
be implemented safely without any use of the unsafe FFI).  The Haskell
slogan seems to be: "do as I say, don't do as I do".  This is one aspect that
keeps me away from Haskell, for both social and technical reasons (I got
tired of hiding unsafe hacks behind trusted interfaces while programming
in C++).

> But the real reason I like Haskell is this: programming in Haskell is
> fun!  Yes, it is often an intellectual challenge, but to me that is part
> of the fun, and it is what programming is supposed to be, IMO.  It is a
> distinguished pleasure when I finally find the right abstraction so that
> everything else nicely falls into place.  The strong but expressive type
> system helps a lot, as does the enforced purity and my own personal
> perfectionism, to make me struggle until I reach that goal.

Aside from the enforced purity, which I'd also welcome, I would say the
same thing about SML.  Regarding purity, like probably most people here, I
have extensive experience programming in imperative/OO mainstream
languages.  SML is very different from those in that it has good support
for avoiding side-effects.

-Vesa Karvonen
0
12/19/2007 12:03:20 PM
Jon Harrop wrote:
> Ben Franksen wrote:
>> Jon Harrop wrote:
>>> I wasn't referring to laziness.
>> 
>> I know. I wanted to suggest that in practice this isn't a show stopper,
>> whereas a memory leak due to too much laziness in the wrong place can be.
>> And hard to fix, too.
> 
> An order of magnitude performance hit in arithmetic would be a show
> stopper for me if I can't track it down.

That I believe.

>>> I need to know whether or not the lookup was resolved statically and
>>> inlined. The single most important use of type classes is arithmetic and
>>> you cannot afford to have every arithmetic operation silently
>>> indirected.
>> 
>> I don't think this is the case in Haskell. For numeric operations, GHC
>> unboxes practically everything if this is semantically transparent.
> 
> You mean for numerics with its own types but what about my types
> (quaternions, low-dimensional vectors etc.)?

If you declare your data structure elements as strict, and the primitive
functions you define on them are strict too, then GHC can unbox them, given
appropriate optimization switches. If the primitive functions on your
self-defined numerical data types are not naturally strict, then you would
have to use seq or bang patterns.
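Both techniques Ben mentions, sketched together (the pragma and flag are GHC-specific; `Vec2` and `mean` are made-up examples, not from the thread):

```haskell
{-# LANGUAGE BangPatterns #-}
{-# OPTIONS_GHC -funbox-strict-fields #-}

-- Strict fields: with -O, GHC can store the two Doubles unboxed,
-- avoiding a pointer indirection per component.
data Vec2 = Vec2 !Double !Double

dot :: Vec2 -> Vec2 -> Double
dot (Vec2 a b) (Vec2 c d) = a * c + b * d

-- Bang patterns: force accumulators in a loop that is not naturally
-- strict, so no thunk chain builds up.
mean :: [Double] -> Double
mean = go 0 0
  where
    go :: Double -> Int -> [Double] -> Double
    go !s !n []       = s / fromIntegral n
    go !s !n (x : xs) = go (s + x) (n + 1) xs
```

The strict fields change the data representation once and for all; the bang patterns change the evaluation order of one particular function, which is the "seq or bang patterns" escape hatch referred to above.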

> What exactly do you mean by "semantically transparent"?

A code transformation is semantically transparent, if it respects (does not
change) the semantics of the program.

Unboxing can make functions more strict than they would otherwise be. In
general this can be observed by applying the function to _|_, and thus such
a transformation is not, in general, done. However, there are many cases
where the function is 'naturally' strict anyway, and where the compiler is
able to detect this it can unbox as it sees fit.
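A tiny illustration of the distinction being drawn: only `f` below is strict, so only `f`'s argument may safely be evaluated (and hence unboxed) early:

```haskell
-- f is strict: f undefined diverges anyway, so evaluating the
-- argument eagerly cannot change the program's meaning.
f :: Int -> Int
f x = x + 1

-- g never inspects its argument: g undefined = 0, so forcing the
-- argument early would turn a terminating program into a diverging
-- one. The compiler must not unbox here.
g :: Int -> Int
g _ = 0
```

Strictness analysis is exactly the attempt to prove, per function, that it behaves like `f` rather than `g`.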

>>>> If you want to do number-crunching, look elsewhere.
>>> 
>>> That's exactly what I want to do, yes. F# handles this well.
>> 
>> Ok. A lazy language might, in the end, not be what you need. Although
>> it's a pity... ;)
> 
> Do you think a new language could incorporate the benefits of laziness
> without sacrificing the easy optimizability of something like OCaml or F#?

Maybe. I am not nearly expert enough to tell if this is possible.

>>> The compiler needs to know what to unbox and what to inline so the
>>> language must expose that to the programmer.
>> 
>> No, not necessarily. The compiler can safely unbox everything that it
>> finds is used strictly; and strictness can be inferred to a great extent.
>> GHC does quite a good job of optimizing boxed-ness away. With whole
>> program analysis you can do even better.
> 
> How would it know to unbox a complex but not a 2D vector, for example?
> Only the programmer knows that...

I think size matters for the optimization heuristics.

>>> I had no idea Haskell already supported complexes. Sounds like an FFT
>>> benchmark is in order.
>> 
>> You might want to take a look at the Haskell'98 report. It is online at
>> http://www.haskell.org/onlinelibrary.
>> 
>> For the benchmark, I recommend using ghc-6.8.1. Native code generation
>> has been improved quite a bit since 6.6.1. Don't forget the optimization
>> switches (they are well documented). You might even want to play with
>> the 'par' combinator (see Data.Parallel) if you have cores to waste, just
>> to see if it works as advertised.
> 
> I haven't been able to get that to work here.

Oh. What's the problem?

>>>> Fast enough for everything I have ever used it for. I guess it largely
>>>> depends on the way you represent your symbols. The naive way is to use
>>>> strings for symbol names. But strings were always a very slow and
>>>> memory consuming thing in Haskell, because of the (transparent)
>>>> representation as lists of characters.
>>> 
>>> Ugh.
>> 
>> It is ok for very many applications and extremely elegant. In an
>> application I wrote lately I didn't even bother to use ByteString,
>> although almost all data is text; it was fast enough with normal Haskell
>> Strings. It could be that this is because, although the total amount of
>> data processed is large, each single string is typically short.
> 
> That's a lot of wastage on a 64-bit machine though. 

It's a lot of waste in any case. But often it doesn't matter at all.

> Something else that is
> of interest to me is memory efficiency. A lot of technical users will
> simply attack the largest problems that they can, meaning anything that
> fits in RAM. If a language or implementation is very memory inefficient
> then that can cost them their research. Having lots of 64-bit pointers can
> kill that, as can OCaml's lack of 32-bit floats as a storage format in
> arbitrary data structures...

Yes. If you want to use your machine to the limit, you should use a compact
string implementation.

>>>> Now, since we have Data.ByteString, things look
>>>> better. I would still prefer to use abstract names instead of strings.
>>> 
>>> You can implement symbols efficiently as a type yourself, of course,
>>> using hash consing and physical equality. I did this in OCaml and it
>>> worked quite well.
>> 
>> Yes, that's what I meant.
> 
> You can even integrate it into a polymorphic variant type in OCaml, if you
> know what you're doing. :-)

I wasn't reading carefully, sorry. I did not mean using physical equality.

Cheers
Ben
0
12/19/2007 11:05:27 PM
Vesa Karvonen wrote:
> Ben Franksen <ben.franksen@online.de> wrote:
>> - active, highly intelligent, and very friendly community
> 
> The Haskell community also seems to be xenophobic to the point of
> discussing assassinating people [ASS].  ;-)
> 
> [ASS] http://tuukka.iki.fi/tmp/haskell-2007-05-25.html

Oh, come on. This is British humor.

> Frankly, the Haskell community also seems somewhat double-faced to me.  On
> one hand, starting with "Why FP Matters", they glorify laziness and
> purity.  On the other hand, they seem to throw them out as soon as the
> going gets a little tough.

/Some/ people do so, yes. Many do not.

> Look at just about any serious Haskell library
> --- full of strictness annotations.

IME this is not the case. Just to make sure, I checked this for HAppS, a web
application framework (which I hadn't looked at, but heard often mentioned
before). If anything can be called a 'serious Haskell library' then this. I
found merely a handful or two of 'seq' or strict data constructors in 89
source files (8.5kLOC). "Full of strictness annotations" is something
different in my book. In case by pure chance I hit upon one of the few
serious libraries that don't follow your general rule, I checked another
quite serious library, HaXML (which is as you can guess for XML
processing). Again: same pattern, a handful of occurrences of seq, one or two
strict data constructors.

I think your claim must be regarded as unfounded or at least hugely
exaggerated.

You might want to cite the ByteString library. However, this case should be
regarded more as a language extension (a fix for a missing feature, if you
will).

>  Look at just about any Haskell toy
> benchmark --- it doesn't only have strictness annotations, but might even
> use the foreign function interface for basic operations (that could also
> be implemented safely without any use of the unsafe FFI).  

Yes, there are these young and very enthusiastic Haskell advocates. Let them
get their hands dirty with FFI and unsafe stuff to squeeze out the last
millisecond from a stupid micro-benchmark. I am ready to pay a certain
price for elegant and maintainable code, as long as my programs are 'fast
enough'. And I believe that compiler technology will continue to progress
so the gap will become ever smaller.

> The Haskell
> slogan seems to be: "do as I say, don't do as I do".  This is one aspect that
> keeps me away from Haskell, for both social and technical reasons (I got
> tired of hiding unsafe hacks behind trusted interfaces while programming
> in C++).

Yet you rely on a language where e.g. strings with a compact representation
are a built-in feature. How do you suppose this is implemented? You could
say that it /should/ be built-in and that this is better than adding it as
a library (using unsafe and non-portable techniques) and I'd agree: this is
another weakness in Haskell; and, as I noted above, ByteString is trying to
fix that. Note however that the effort to create ByteStrings in Haskell had
the side-effect of inventing a new and very effective stream fusion
frame-work, which has the potential to make good old list processing a lot
more efficient. (Who said that side-effects are always bad? ;-)))

Cheers
Ben
0
12/20/2007 12:37:42 AM
Ben Franksen <ben.franksen@online.de> wrote:
> Vesa Karvonen wrote:
> > Ben Franksen <ben.franksen@online.de> wrote:
> >> - active, highly intelligent, and very friendly community
> >
> > The Haskell community also seems to be xenophobic to the point of
> > discussing assassinating people [ASS].  ;-)
> >
> > [ASS] http://tuukka.iki.fi/tmp/haskell-2007-05-25.html

> Oh, come on. This is British humor.

I'm not sure what you are referring to.  You did notice the smiley, didn't
you?  Still, it would be factually correct to say that the idea of using
assassination as a way to increase the popularity of Haskell has not
escaped the minds of some members of the Haskell community.  It would be
interesting to know how many other language communities have (publicly)
considered assassination as a method for promoting their language.  ;-)

> > Frankly, the Haskell community also seems somewhat double-faced to me.  On
> > one hand, starting with "Why FP Matters", they glorify laziness and
> > purity.  On the other hand, they seem to throw them out as soon as the
> > going gets a little tough.

> /Some/ people do so, yes.  Many do not.

Indeed.  That is exactly what I'm talking about.  Many of those people who
can't wait to both glorify laziness and then throw it out the next minute
seem to be rather prolific members of the community and seem to spend a
lot of their time endorsing Haskell.

> > Look at just about any serious Haskell library
> > --- full of strictness annotations.

> IME this is not the case. [...]

Well, "full of" may be an exaggeration, but almost every serious Haskell
library I've actually looked at has had strictness annotations.  Some of
them more and some of them less.  (I think that the only exception has
been QuickCheck (although I haven't looked at all versions of it).)

> Just to make sure, I checked this for HAppS, a web application framework
> (which I hadn't looked at, but heard often mentioned before).  If
> anything can be called a 'serious Haskell library' than this.  I found
> merely a handful or two of 'seq' or strict data constructors in 89
> source files (8.5kLOC). [...]

So even HAppS and HaXML (snipped) have strictness annotations.  Yes, that
is exactly what I was talking about.  It seems that to get decent space
and time efficiency in Haskell, one regularly has to throw away laziness
and use manual strictness annotations.  Sometimes, if you spend enough
time profiling, you may be able to make do with a few annotations that fix the
obvious space/time leaks, but there is still no proof that your program
might not have leaks under some circumstances.  Like I said, in my
experience, adding necessary laziness to algorithms and data structures
has always been easy and, I might add, rarely necessary.  IMO, strictness
is the better default.

> I think your claim must be regarded as unfounded or at least hugely
> exaggerated.

I admit that it is exaggerated but not completely.  Your own examination,
looking at two particular libraries, showed that both of them had
strictness annotations.  That is precisely what I'm talking about.  Saying
that almost every serious Haskell library has strictness annotations
really is not an exaggeration (and I base that claim on having actually
looked at several Haskell libraries).  Saying that those libraries are
full of strictness annotations is perhaps an exaggeration, but there
definitely are also Haskell libraries that literally are full of
strictness annotations.

> You might want to cite the ByteString library.  However, this case
> should be regarded more as a language extension (a fix for a missing
> feature, if you will).

Yeah, IMO, that missing (language) feature is strict and specified order
of evaluation by default.  Like I said earlier, I think that strict
evaluation is the better default.  It leads to more easily predictable and
understandable space (and time) usage.  If you couple that with convenient
syntax (something at least as convenient as bang patterns), which is
something no ML I'm aware of provides (Alice ML probably gets closest),
one would get the best of both worlds, I think.

> > Look at just about any Haskell toy
> > benchmark --- it doesn't only have strictness annotations, but might even
> > use the foreign function interface for basic operations (that could also
> > be implemented safely without any use of the unsafe FFI).

> Yes, there are these young and very enthusiastic Haskell advocates.  Let
> them get their hands dirty with FFI and unsafe stuff to squeeze out the
> last millisecond from a stupid micro-benchmark.

Toy benchmarks are stupid and miss the point, I agree.  However, using FFI
and unsafe stuff for performance is even more stupid.  It would be much
more productive to fix the compiler (and the language).

> And I believe that compiler technology will continue to progress so the
> gap will become ever smaller.

Perhaps.  OTOH, strictness analysis has been a known optimization
technique for decades already, but you still seem to need manual
strictness annotations in Haskell programs for performance (often even
when you know it isn't safe in general (IOW, you are really changing the
semantics of your program) and couldn't be safely inserted by a compiler).
The tendency of having surprising space leaks with lazy evaluation has
also been known for decades, and many techniques have been developed to
reduce them, but you still run into such problems in Haskell.  I think
that the fundamental problem is lazy evaluation.  It is the wrong default.

> Yet you rely on a language where e.g. strings with a compact
> representation are a built-in feature.

In SML, strings are char vectors (http://mlton.org/basis/string.html).
And, indeed, vectors are built-in immutable sequences with an efficient
subscript operation.  That is basically unavoidable, because it would be
impossible to implement such sequences (vectors) in terms of other
language primitives.

For optimization purposes, the SML Basis library makes a distinction
between mono vectors (http://mlton.org/basis/mono-vector.html) and
(polymorphic) vectors (http://mlton.org/basis/vector.html).  However, in
MLton, for example, all types of vectors (both mono vectors and
(polymorphic) vectors) are actually implemented using the exact same set
of primitives.  So, in MLton, the underlying type/implementation of
"CharVector.vector" is the same as "char Vector.vector".

Personally, I think that the SML Basis library design, making a
distinction between mono vectors and (polymorphic) vectors is misguided,
because it should not be very difficult for even a fairly simple
implementation to use the type information to specialize the
representation of (polymorphic) vectors for a few important types.  In my
Extended Basis library design, for example, I intend to reduce the need to
use mono vectors (and mono arrays) as much as possible.

Anyway, Haskell also provides special syntax for strings and a sequence
type similar to SML's vectors
(http://en.wikibooks.org/wiki/Haskell/Hierarchical_libraries/Arrays).  The
difference between Haskell and SML is that, in Haskell, built-in strings
are implemented in terms of built-in lists, while, in SML, built-in
strings are implemented in terms of built-in vectors.
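For comparison, a small Haskell sketch of the two representations (illustrative code; Data.ByteString.Char8 is the packed string module from the bytestring library):

```haskell
import qualified Data.ByteString.Char8 as B  -- packed byte-array strings

-- The built-in String really is a linked list of Char:
builtin :: String            -- type String = [Char]
builtin = "hello"

-- A ByteString stores the characters contiguously, giving O(1)
-- length and cheap indexing, much like a char vector in SML.
packed :: B.ByteString
packed = B.pack "hello"
```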

> How do you suppose this is implemented?  You could say that it /should/
> be built-in and that this is better than adding it as a library (using
> unsafe and non-portable techniques) and I'd agree: this is another
> weakness in Haskell; and, as I noted above, ByteString is trying to fix
> that.

Actually, I would disagree.  I think that adding strings as a complete
special case built-in for efficiency is not a good idea.  It is much
better to provide a more general purpose (efficient) sequence type and
implement strings in terms of that.

> Note however that the effort to create ByteStrings in Haskell had the
> side-effect of inventing a new and very effective stream fusion
> frame-work, which has the potential to make good old list processing a
> lot more efficient.  (Who said that side-effects are always bad? ;-)))

Yeah, it has some interesting properties [FUS].  However, it seems to be
one more example where Haskellers pay lip service to proper laziness.
AFAIK, the semantics of the stream fusion technique are very slightly
different from the semantics of ordinary lists in Haskell.  And even
though it probably doesn't matter a lot in practice, it still doesn't
make them the same.

[FUS] http://mlton.org/pipermail/mlton-user/2007-April/001091.html

-Vesa Karvonen
0
12/20/2007 11:16:11 AM
Vesa Karvonen schrieb:
>>> Look at just about any serious Haskell library
>>> --- full of strictness annotations.
> 
>> IME this is not the case. [...]
> 
> Well, "full of" may be an exaggeration, but almost every serious Haskell
> library I've actually looked at has had strictness annotations.

Well, then it *is* an exaggeration.

>> I think your claim must be regarded as unfounded or at least hugely
>> exaggerated.
> 
> I admit that it is exaggerated but not completely.

It is completely exaggerated if you say "full of strictness 
annotations". I would have interpreted that as at least one per 
screenful of code, which obviously is far from the truth.

I can see that needing even a handful of annotations per library could 
be a problem. I have other reservations about a per-default lazy system; 
at the very least, potentially infinite data structures should have type 
annotations, since some operations are not useful on them (such as length 
for infinite lists). I suspect that with such a type system, infinite 
lists would get a type like Generator, and it wouldn't matter much what 
the default strategy is, because it's always nailed down in the types. 
(Well, maybe not - there could be a strict, a lazy and an indeterminate 
variant for every type, the latter being useful for writing more flexible 
code while still allowing the compiler to optimize everything.)
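One could sketch that idea in today's Haskell roughly like this (hypothetical types, just to illustrate the distinction):

```haskell
-- Encoding the evaluation strategy in the type, so that 'length'
-- can only be offered for the necessarily-finite variant.

-- Spine- and element-strict list: constructing an infinite one
-- would diverge, so any value you hold is finite.
data StrictList a = SNil | SCons !a !(StrictList a)

-- Ordinary lazy list, usable as a generator of infinite data.
data LazyList a = LNil | LCons a (LazyList a)

slength :: StrictList a -> Int
slength SNil         = 0
slength (SCons _ xs) = 1 + slength xs

-- 'ltake' is safe on infinite LazyLists; an 'llength' is deliberately absent.
ltake :: Int -> LazyList a -> [a]
ltake n _ | n <= 0   = []
ltake _ LNil         = []
ltake n (LCons x xs) = x : ltake (n - 1) xs
```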

>>> Look at just about any Haskell toy
>>> benchmark --- it doesn't only have strictness annotations, but might even
>>> use the foreign function interface for basic operations (that could also
>>> be implemented safely without any use of the unsafe FFI).
> 
>> Yes, there are these young and very enthusiastic Haskell advocates.  Let
>> them get their hands dirty with FFI and unsafe stuff to squeeze out the
>> last millisecond from a stupid micro-benchmark.
> 
> Toy benchmarks are stupid and miss the point, I agree.  However, using FFI
> and unsafe stuff for performance is even more stupid.  It would be much
> more productive to fix the compiler (and the language).

Optimizing the compiler just to compile some specific benchmarks is PR 
silliness.
Which doesn't mean it isn't done, of course...

>> How do you suppose this is implemented?  You could say that it /should/
>> be built-in and that this is better than adding it as a library (using
>> unsafe and non-portable techniques) and I'd agree: this is another
>> weakness in Haskell; and, as I noted above, ByteString is trying to fix
>> that.
> 
> Actually, I would disagree.  I think that adding strings as a complete
> special case built-in for efficiency is not a good idea.  It is much
> better to provide a more general purpose (efficient) sequence type and
> implement strings in terms of that.

Been there, done that, got the picture, can recommend the experience.
(That was Eiffel, but the problems with efficiently subscriptable, yet 
potentially polymorphic data structures are similar.)

Regards,
Jo
0
jo427 (1164)
12/20/2007 11:49:22 AM
Ben Franksen schrieb:
> Joachim Durchholz wrote:
>> Ben Franksen schrieb:
>>> Joachim Durchholz wrote:
>>>> Ben Franksen schrieb:
>>>>>> . Type safe marshalling.
>>>>> Yes.
>>>> Not really.
>>>>
>>>> Marshalling forces evaluation of the values marshalled.
>>> But it is still type safe. That was the question.
>> OK, I was taking the requirement a bit beyond what was written. But I
>> don't care about type safety if marshalling a value risks evaluating
>> some infinite data structure hidden deep inside an abstract data type.
> 
> I think you exaggerate the risk a bit. As programmer you typically know very
> well whether your value is (supposed to be) finite or not. There are /many/
> functions in the standard library that are strict and each one of them
> could bottom out if applied to the wrong (e.g. an infinite) value. Everyone
> knows that you should not call e.g. 'length' on a list that might be
> infinite, but there are other functions where it is not /that/ obvious
> e.g. 'foldl' (as opposed to 'foldr' which, given a sufficiently lazy
> combining function, works just fine on infinite lists).

Of course, but with foldl and length I know the risk and can avoid it 
(by using a different idiom).
For marshalling, there's no alternate idiom available. If the 
third-party library whose data objects I'm about to marshall uses an 
infinite list somewhere inside, I'm stuck.
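A quick illustration of which standard folds cope with infinite input (illustrative snippet):

```haskell
-- foldr can stop early on infinite input when the combining
-- function is lazy in its second argument:
firstThree :: [Int]
firstThree = take 3 (foldr (:) [] [1 ..])        -- [1,2,3]

anyEven :: Bool
anyEven = foldr (||) False (map even [1 ..])     -- True, found at 2

-- foldl (like length) must reach the end of the spine before it can
-- produce anything, so on [1 ..] it would simply never terminate:
-- diverges = foldl (+) 0 [1 ..]
```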

>>>> IOW you may get nontermination (if the data structure contains infinite
>>>> substructures). You will lose the advantages of having a non-strict
>>>> languages.
>>> If you just print values to teh console they are evaluated, too.
>> Yes, but printing to the console is just a debugging aid, and you see it
>> immediately if there's a problem.
> 
> Console output is not /only/ for debugging but expected program behaviour
> for very many kinds of programs. But that's a side issue.

In those cases, you don't indiscriminately print some data structure; you 
set up a string representation and select rather carefully what you 
represent.
In other words, there's always an idiom that won't get too far into an 
infinite list for the output that I want.

>> Marshalling is usually part of the application logic and must not
>> run into an endless loop.
> 
> Of course not. What I meant to say is this: Suppose you are
> programming in a strict language and you have a large and complex
> data structure that must somehow be serialized, be it for console
> output or to save it in a file or for sending over the network.
> Surely the code that does this serialisation might contain a
> non-termination bug, right? So, how is this any different? In both
> cases, the only way to make sure that this won't happen is to prove 
> that it can't happen.

Yes, but standard library code has a far better trust level than what 
some third party has whipped up between midnight and morning. Actually I 
can be pretty sure that any nontermination problems in that code are 
either publicly discussed and known or long fixed; for third-party code, 
such problems can go unnoticed for a very long time.

Regards,
Jo
0
jo427 (1164)
12/20/2007 11:56:33 AM
Vesa Karvonen <vesa.karvonen@cs.helsinki.fi> wrote:
[...]
> understandable space (and time) usage.  If you couple that with convenient
> syntax (something at least as convenient as bang patterns), which is
        ^
   for lazy evaluation

> something no ML I'm aware of provides (Alice ML probably gets closest),
> one would get the best of both worlds, I think.

-Vesa Karvonen
0
12/20/2007 12:04:05 PM
On Dec 20, 12:16 pm, Vesa Karvonen <vesa.karvo...@cs.helsinki.fi>
wrote:
>
> Personally, I think that the SML Basis library design, making a
> distinction between mono vectors and (polymorphic) vectors is misguided,
> because it should not be very difficult for even a fairly simple
> implementation to use the type information to specialize the
> representation of (polymorphic) vectors for a few important types.

I don't like the duplication of vector and array types much either,
but saying it is "misguided" ignores the significant trade-off you
have to make to avoid it. As far as current state of the art goes,
achieving the same performance by specialising polymorphic vector
types requires at least one of the following techniques: (1) whole
program compilation, (2) runtime type passing (either for (2a)
intensional type analysis or (2b) JIT specialisation).

Each of these techniques has significant costs that make it
prohibitive for some (most?) implementations. (2a) may very well eat
up the performance gain. Also, monomorphisation as in (1) and (2b)
limits the expressiveness of the type system by precluding effective
first-class polymorphism or polymorphic recursion (at least, nobody so
far has demonstrated the opposite).
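A standard illustration of the polymorphic-recursion problem is a nested datatype (sketch; the names are mine):

```haskell
-- A nested ("non-regular") datatype: each level nests the element
-- type one list deeper.
data Nest a = NilN | ConsN a (Nest [a])

-- 'size' is polymorphically recursive: the recursive call is at type
-- Nest [a], not Nest a, so a monomorphising compiler would need a
-- distinct instance for a, [a], [[a]], ... with no static bound.
-- (In Haskell the explicit signature is required for this to typecheck.)
size :: Nest a -> Int
size NilN        = 0
size (ConsN _ t) = 1 + size t
```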

So I would be careful with your conclusions. ;-)

- Andreas
0
rossberg (600)
12/20/2007 12:52:32 PM
Joachim Durchholz <jo@durchholz.org> wrote:
> Vesa Karvonen schrieb:
[...]
> >> I think your claim must be regarded as unfounded or at least hugely
> >> exaggerated.
> > 
> > I admit that it is exaggerated but not completely.

> It is completely exaggerated if you say "full of strictness 
> annotations". I would have interpreted that as at least one per 
> screenful of code, which obviously is far from the truth.

I disagree.  One per "screenful of code" is a rather naive measurement.
What you should be looking at is something more indicative such as the
ratio of data declarations with and without strictness annotations.
Looking at a quick grep of a version of Parsec (it probably isn't the
latest version), I see a total of 12 data declarations of which 6, meaning
50%, use strictness annotations.  If you restrict the attention to the
central data declarations, e.g. Consumed, Reply, State, SourcePos,
i.e. those that are directly used by the primitive parser combinators,
rather than by some convenience modules, the percentage jumps to about
100%.  So, in the case of Parsec, for example, although you don't
necessarily see bangs on every screenful of code, I think that it is
fairly accurate to say that it is full of strictness annotations.
Practically every kind of data manipulated by Parsec's central combinators
has manually declared strict parts.
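For readers who haven't seen them, such declarations look roughly like this (illustrative code modelled on, but not copied from, Parsec's source):

```haskell
-- Bangs on the fields make the components strict: constructing a value
-- forces them, so no thunks accumulate as the parser threads its state.
data SourcePos = SourcePos !String !Int !Int   -- name, line, column
  deriving (Eq, Show)

data ParseState = ParseState
  { input :: !String
  , pos   :: !SourcePos
  } deriving Show

-- Consume one character, bumping the column.
advance :: ParseState -> ParseState
advance (ParseState (_ : rest) (SourcePos n l c)) =
  ParseState rest (SourcePos n l (c + 1))
advance st = st
```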

-Vesa Karvonen
0
12/20/2007 3:01:40 PM
rossberg@ps.uni-sb.de wrote:
> I don't like the duplication of vector and array types much either,
> but saying it is "misguided" ignores the significant trade-off you
> have to make to avoid it. As far as current state of the art goes,
> achieving the same performance by specialising polymorphic vector
> types requires at least one of the following techniques: (1) whole
> program compilation, (2) runtime type passing (either for (2a)
> intensional type analysis or (2b) JIT specialisation).
> 
> Either of these techniques has significant costs that makes it
> prohibitive for some (most?) implementations. (2a) may very well eat
> up the performance gain. Also, monomorphisation as in (1) and (2b)
> limits the expressiveness of the type system by precluding effective
> first-class polymorphism or polymorphic recursion (at least, nobody so
> far has demonstrated the opposite).

F# already provides free polymorphism and first-class polymorphism (because
it distributes the intermediate representation).

> So I would be careful with your conclusions. ;-)

Indeed.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/20/2007 3:02:04 PM
Joachim Durchholz wrote:
> Optimizing the compiler just to compile some specific benchmarks is PR
> silliness.

That depends entirely upon how specialized the optimizations are. If anyone
optimizes any compiler to give better performance for 3D vectors on my ray
tracer benchmark, I think that would be a good thing, for example.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/20/2007 3:04:40 PM
On Dec 20, 4:02 pm, Jon Harrop <use...@jdh30.plus.com> wrote:
> rossb...@ps.uni-sb.de wrote:
> > I don't like the duplication of vector and array types much either,
> > but saying it is "misguided" ignores the significant trade-off you
> > have to make to avoid it. As far as current state of the art goes,
> > achieving the same performance by specialising polymorphic vector
> > types requires at least one of the following techniques: (1) whole
> > program compilation, (2) runtime type passing (either for (2a)
> > intensional type analysis or (2b) JIT specialisation).
>
> > Either of these techniques has significant costs that makes it
> > prohibitive for some (most?) implementations. (2a) may very well eat
> > up the performance gain. Also, monomorphisation as in (1) and (2b)
> > limits the expressiveness of the type system by precluding effective
> > first-class polymorphism or polymorphic recursion (at least, nobody so
> > far has demonstrated the opposite).
>
> F# already provides free polymorphism and first-class polymorphism (because
> it distributes the intermediate representation)

AFAICT it does 2b, which does not exactly fit my definition of "free".
For me that implies that you do not have to JIT every instance (or lots
of instances) separately - for something like existential types (a form
of first-class polymorphism) an approach like that can become very
expensive. Polymorphic recursion probably is less problematic, at
least if many instances can share the same code.

There is no free lunch. It's always a trade-off, which very much
depends on the application.

- Andreas
0
rossberg (600)
12/20/2007 4:09:06 PM
Jon Harrop schrieb:
> Joachim Durchholz wrote:
>> Optimizing the compiler just to compile some specific benchmarks is PR
>> silliness.
> 
> That depends entirely upon how specialized the optimizations are. If anyone
> optimizes any compiler to give better performance for 3D vectors on my ray
> tracer benchmark, I think that would be a good thing, for example.

Not if there are other, more general optimizations that happen to 
optimize the microbenchmark less, and if optimization programmer 
resources are limited.

In other words, letting microbenchmark performance influence the 
decision which optimization to try next is rating PR higher than 
technical merit.
(This doesn't mean PR should always be ignored. There are situations and 
goals where PR is indeed more important than technical excellence, and 
it can even be a valid short-term goal in a project with technical 
long-term goals.)

Regards,
Jo
0
jo427 (1164)
12/20/2007 6:23:40 PM
Joachim Durchholz wrote:
> Jon Harrop schrieb:
>> Joachim Durchholz wrote:
>>> Optimizing the compiler just to compile some specific benchmarks is PR
>>> silliness.
>> 
>> That depends entirely upon how specialized the optimizations are. If
>> anyone optimizes any compiler to give better performance for 3D vectors
>> on my ray tracer benchmark, I think that would be a good thing, for
>> example.
> 
> Not if there are other, more general optimizations that happen to
> optimize the microbenchmark less, and if optimization programmer
> resources are limited.

The OCaml implementors certainly seem to agree with you but I and the F#
creators certainly don't. I care a lot more about a 5x performance
improvement on complex numbers than a 3% improvement on the GC.

> In other words, letting microbenchmark performance influence the
> decision which optimization to try next is rating PR higher than
> technical merit.

Taken too far, your notion of "technical merit" becomes "valuing
uselessness", because you let too many special cases slide.

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?u
0
usenet116 (1778)
12/20/2007 6:54:32 PM
On Thu, 20 Dec 2007, Jon Harrop wrote:

> Joachim Durchholz wrote:
> > Jon Harrop schrieb:
> >> Joachim Durchholz wrote:
> >>> Optimizing the compiler just to compile some specific benchmarks is PR
> >>> silliness.
> >> 
> >> That depends entirely upon how specialized the optimizations are. If
> >> anyone optimizes any compiler to give better performance for 3D vectors
> >> on my ray tracer benchmark, I think that would be a good thing, for
> >> example.
> > 
> > Not if there are other, more general optimizations that happen to
> > optimize the microbenchmark less, and if optimization programmer
> > resources are limited.
> 
> The OCaml implementors certainly seem to agree with you but I and the F#
> creators certainly don't. I care a lot more about a 5x performance
> improvement on complex numbers than a 3% improvement on the GC.
> 

But the GHC approach was to support unboxed tuples instead, which 
actually get passed in registers where possible. Short of SSE support, 
that's about as good as you're going to get.
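A minimal sketch of what that looks like in GHC (assuming the UnboxedTuples extension; the function names are mine):

```haskell
{-# LANGUAGE UnboxedTuples #-}

-- An unboxed tuple (# ..., ... #) is not a heap object: GHC returns
-- its components directly, in registers where possible.
divMod' :: Int -> Int -> (# Int, Int #)
divMod' n d = (# n `div` d, n `mod` d #)

-- Unboxed tuples cannot be stored in ordinary data structures,
-- so the caller scrutinizes the result immediately.
showQR :: Int -> Int -> String
showQR n d = case divMod' n d of
  (# q, r #) -> show (q, r)
```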

-- 
flippa@flippac.org

"The reason for this is simple yet profound. Equations of the form
x = x are completely useless. All interesting equations are of the
form x = y." -- John C. Baez