Will too many paradigms addle my brain?

I've just started a comp sci degree. The main language used to teach is
Java - I certainly don't like it but that's the way it is. In the 1st
year we do Java and Prolog (compulsory).

Functional Programming is a 3rd year elective. I reckon I know enough
Scheme to convince the department that I will be able to cope with it
in my 1st year (although I think they use Miranda) and I think I'll be
allowed to choose it.

I am wondering though whether trying to study all 3 paradigms at the
same time is advisable.

Any views?

wookiz (347)
10/11/2005 9:42:16 PM
comp.lang.functional
272 Replies, 1472 Views

>>>>> "wooks" == wooks  <wookiz@hotmail.com> writes:

    wooks> I am wondering though whether trying to study all 3
    wooks> paradigms at the same time is advisable.

If you can't handle three languages in school, you might not be able
to handle three languages in the Real World.  Believe me, many jobs
require you to use multiple languages at once; I'm working on
something now that uses

        * C++
        * perl
        * Delphi
        * Visual Basic
        * SQL

-- 
Raffarin said he wants to see secure Internet voting in France
by 2009, and he said if he had a homosexual son, he would love
him ...
        -- from the Chicago Tribune
offby1 (56)
10/11/2005 11:19:38 PM
wooks wrote:
> I am wondering though whether trying to study all 3 paradigms at the
> same time is advisable.

Yes it is, definitely. Also get yourself a copy of CTM:
<http://www2.info.ucl.ac.be/people/PVR/book.html>.

-- 
David Hopwood <david.nospam.hopwood@blueyonder.co.uk>
10/11/2005 11:37:34 PM
David Hopwood <david.nospam.hopwood@blueyonder.co.uk> writes:
> Yes it is, definitely. Also get yourself a copy of CTM:
> <http://www2.info.ucl.ac.be/people/PVR/book.html>.

Is the final version a lot different than the draft that was online in
pdf form?  I read the first few chapters of that and have been wanting
to get around to the rest.  However, it didn't much resemble what I
think of as functional programming.  And Oz's use of logic variables
for communicating between concurrent processes seemed to invite
deadlock, etc.
phr.cx (5493)
10/12/2005 1:20:23 AM
wooks wrote:
> I am wondering though whether trying to study all 3 paradigms at the
> same time is advisable.

Yes. Paradigms typically complement one another and the more you know of all
of them, the better your code is likely to be in any one of them. However,
I have never found a use for OO.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
usenet116 (1778)
10/12/2005 1:36:35 AM
Jon Harrop wrote:
> wooks wrote:
> > I am wondering though whether trying to study all 3 paradigms at the
> > same time is advisable.
>
> Yes. Paradigms typically complement one another and the more you know of all
> of them, the better your code is likely to be in any one of them. However,
> I have never found a use for OO.
>
> --
> Dr Jon D Harrop, Flying Frog Consultancy
> http://www.ffconsultancy.com

I will definitely take all 3, but I am not sure you have addressed the
nub of my question, which is whether it is advisable to do them
simultaneously, bearing in mind I will have to pass exams.

wookiz (347)
10/12/2005 5:20:23 AM
Eric Hanchrow wrote:
> >>>>> "wooks" == wooks  <wookiz@hotmail.com> writes:
>
>     wooks> I am wondering though whether trying to study all 3
>     wooks> paradigms at the same time is advisable.
>
> If you can't handle three languages in school, you might not be able
> to handle three languages in the Real World.  Believe me, many jobs
> require you to use multiple languages at once; I'm working on
> something now that uses
>
>         * C++
>         * perl
>         * Delphi
>         * Visual Basic
>         * SQL
>
> --

I'm not worried about the real world as I'm already an experienced
programmer. I am focusing on the academic aspect.

Is it a good idea, or better, to learn them at the same time when you
don't have to?

wookiz (347)
10/12/2005 5:25:19 AM
"wooks" <wookiz@hotmail.com> writes:
>Is it a good idea to/better to learn them at the same time when you
>don't have to.

I don't think this can be answered in principle.  If you were my advisee,
I'd be asking questions about "what else might you take instead" and "if
not now, when, and along with what else?"

Some of our students do report being confused when they are doing two very
different things at once, e.g., SICP along with our machine organization/
machine language course.  In one case we are asking them to pay careful
attention to low-level details, and in the other case, the entire project is
to abstract those details away (under the rug).  But it sounds as if you will
be studying all more or less high level approaches, so I don't think you'll
be too confused.  On the other hand, you may be more inclined to strangle the
instructor of the OOP course for making things too complicated. :-)
bh150 (210)
10/12/2005 5:51:02 AM
The difficulties of your workload may have nothing to do with computer
paradigms or computer science per se.  Beware of biting off more than
you can chew.  Maybe you are more productive and disciplined than a
"lazybones" such as myself, but my formula at Cornell always was 1 hard
course, 1 medium difficult course, and 2 easy courses.  I did too much
of my own personal, uncredited cogitation to be saddled with relentless
pressure from every possible corner.  Frankly I don't believe in it.
Some people think there's moral virtue in it, but I think the vast
majority of people can't handle that much stuff dumped on them all at
once.  Better to master 1 thing well.

I'm surprised Jon said he never found a use for OO, as clearly, one has
to talk to all the people in industry who believe in it.  :-)


Cheers,
Brandon J. Van Every
   (cruise (director (of SeaFunc)
           '(Seattle Functional Programmers)))
http://groups.yahoo.com/group/SeaFunc

SeaFuncSpam (366)
10/12/2005 6:39:37 AM
wooks wrote:
> Functional Programming is a 3rd year elective. I reckon I know enough
> Scheme to convince the department that I will be able to cope with it
> in my 1st year (although I think they use Miranda) and I think I'll be
> allowed to choose it.

I don't think they will, because Miranda and Haskell are lazy 
functional, while ML and Scheme are strict (i.e. evaluate all their 
function arguments first).  This leads to quite different programming. 
Anyway, learning Miranda in School and Scheme or Lisp at home doesn't 
hurt. ;)
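To make the strict/lazy distinction above concrete, here is a minimal sketch (not from the original post; Python stands in for the languages discussed, with laziness simulated by a zero-argument thunk):

```python
def const42(x):
    # Ignores its argument entirely.
    return 42

# Strict evaluation (Scheme, ML): the argument is evaluated *before*
# the call, so this would raise ZeroDivisionError even though x is
# never used:
#   const42(1 // 0)

# Lazy evaluation (Miranda, Haskell), simulated with a thunk: the
# argument is only evaluated if and when it is actually demanded.
def const42_lazy(thunk):
    return 42  # the thunk is never called, so no error occurs

print(const42_lazy(lambda: 1 // 0))  # -> 42
```

The same program shape terminates under lazy evaluation but errors under strict evaluation, which is why code written for Miranda often cannot be transliterated directly into Scheme or ML.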

> I am wondering though whether trying to study all 3 paradigms at the
> same time is advisable.

Sure, why not?  When I started school we had a Scheme class (2nd 
semester Java), and I learnt C and asm.  Since then I learnt quite a few 
other languages (and looked at some others).

Keep your mind open and learn to think in program structure, not 
language-specific constructs.

-- 
State, the new religion from the friendly guys who brought you fascism.
u.hobelmann (1643)
10/12/2005 7:58:44 AM
wooks wrote:
> ...In the 1st
> year we do Java and Prolog (compulsory).
> 
> Functional Programming is a 3rd year elective. ...

> I am wondering though whether trying to study all 3 paradigms at the
> same time is advisable.

Several people say "Of course YES...": that the paradigms complement
one another, that you can compare different views of the same algorithmic
problem, etc. Nice, optimistic, not too far from my own feelings.

But Brian Harvey says:

> I don't think this can be answered in principle.  If you were my advisee,
> I'd be asking questions about "what else might you take instead" and "if
> not now, when, and along with what else?"
> 
> Some of our students do report being confused when they are doing two very
> different things at once, e.g., SICP along with our machine organization/
> machine language course. 


Now, PLEASE, when Brian Harvey says something about the pedagogy of computing,
listen to him, he knows what he says!

It is known from a completely different fairy-tale that children embedded
in a bi-linguistic milieu, and who typically, after some time, master both,
and become perfectly bilingual, acquire both languages at a reduced rate,
they assimilate the same amount of information per time, spread into two
layers.

So an additional question is: can you afford that, if the analogy works in
your case?

I have to say, for example, that I learnt the non-deterministic, monadic
style of the lazy functional implementation of some algorithms much
easier than some of (cleverer than myself) people I know, since I could
translate the stuff to the logic programming paradigms.

The relation, important, although partial and specific, between objects and
closures, is probably easier to grasp when dealt with sequentially.

Anyway, a full-fledged software specialist would need those three paradigms
anyway. But your time-schedule is also a function of many other constraints...


Jerzy Karczmarczuk
karczma (331)
10/12/2005 8:09:57 AM
Jon Harrop wrote:
> wooks wrote:
> 
>>I am wondering though whether trying to study all 3 paradigms at the
>>same time is advisable.
> 
> 
> Yes. Paradigms typically complement one another and the more you know of all
> of them, the better your code is likely to be in any one of them. However,
> I have never found a use for OO.

One of the simple uses of objects is reducing namespace collisions.  For 
example, Dylan supports multi-methods, but like CLOS, the number of parameters
for each generic function must be equal.  One way around this I found is to 
first dispatch on the type, then return a function, with as many parameters as 
I like:

define method move-to (self :: <window>)
   curry (method (self :: <window>, x, y)
     format-out ("window.move-to\n");
   end, self)
end;

move-to (window)(10, 20);

Now I don't have to worry about collisions when defining methods.
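For readers unfamiliar with Dylan, a rough Python analogue of the same dispatch-then-curry trick (hypothetical names, with `functools.singledispatch` supplying the type dispatch) might look like:

```python
from functools import singledispatch

class Window:
    pass

# Dispatch on the receiver's type first...
@singledispatch
def move_to(self):
    raise NotImplementedError(type(self))

# ...then return a closure taking as many parameters as we like, so
# move-to methods on different types need not agree on arity.
@move_to.register
def _(self: Window):
    return lambda x, y: "window.move-to %d %d" % (x, y)

print(move_to(Window())(10, 20))  # window.move-to 10 20
```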


Mike
noone3 (3603)
10/12/2005 8:25:29 AM
Jerzy Karczmarczuk <karczma@info.unicaen.fr> writes:

> It is known from a completely different fairy-tale that children embedded
> in a bi-linguistic milieu, and who typically, after some time, master both,
> and become perfectly bilingual, acquire both languages at a reduced rate,
> they assimilate the same amount of information per time, spread into two
> layers.

This is, indeed, a fairy tale.  Children who learn two languages when
growing up become equally proficient in their primary language as
children who learn only one, and depending on to what degree the
secondary language is used, they may learn to be fluent in this too.

From the studies I've seen, it is only when they learn three or more
languages that they become confused, and then only mildly.  It may be
because a bilingual family typically has one parent speaking one
language and another parent speaking the other, so the child can keep
the languages apart by associating each language with a parent.
 
> So an additional question is: can you afford that, if the analogy works in
> your case?
> 
> I have to say, for example, that I learnt the non-deterministic, monadic
> style of the lazy functional implementation of some algorithms much
> easier than some of (cleverer than myself) people I know, since I could
> translate the stuff to the logic programming paradigms.
> 
> The relation, important, although partial and specific, between objects and
> closures, is probably easier to grasp when dealing with sequentially.
> 
> Anyway, a full-fledged software specialist would need those three paradigms
> anyway. But your time-schedule is also a function of many other
> constraints...

With programming languages as well as natural languages, the best way
to learn a language is to use it.  If you don't have time to make
reasonably sized projects with all the languages you learn, you will
not learn them properly.

However, if you learn languages sequentially, you may have the "ways"
of the first language stuck in your head when learning the second, so
you in the beginning don't really think about the language on its own
premises but rather think about how programs in the language you know
can be converted to the new language.  Learning several languages at
the same time and solving the same programming problems in all of them
concurrently would be the ideal way of learning their relative
strengths and weaknesses.  But it requires time enough to do this.

        Torben

10/12/2005 9:19:58 AM
wooks schrieb:
> I am wondering though whether trying to study all 3 paradigms at the
> same time is advisable.

In general, we don't know what your brain can manage :-)

Learning too many paradigms at the same time can indeed be confusing.

On the other hand, the more paradigms you know, the more ways to attack 
a problem are at your disposition.

On the third hand, it can be immensely frustrating to be forced to 
program in, say, Java, and know that some problem that requires hundreds 
of lines of boilerplate code for every class could be done in a single 
five-liner in Haskell.

On the fourth hand, any predisposition for this kind of frustration is 
something that you'll have to overcome sooner or later, anyway... 90% of 
most professional careers consist of legacy code maintenance, which 
means that you'll likely be stuck with suboptimal tools 90% of your 
professional time.
Actually that's nothing to moan about, it's just the way it is - when 
you're writing new software, it will be legacy tomorrow, and the tools 
that are top-of-the-cream today will be considered suboptimal when it 
comes to maintaining your code.


Personally, I'd go for learning as many paradigms as possible anyway.
I'd start with as many courses as I feel comfortable with, and if things 
start to get confusing, I'd drop courses until I can handle the 
workload, and come back to them later.

Regards,
Jo
jo427 (1164)
10/12/2005 9:23:23 AM
Torben Ægidius Mogensen wrote:
> Jerzy Karczmarczuk <karczma@info.unicaen.fr> writes:
> 
> 
>>It is known from a completely different fairy-tale that children embedded
>>in a bi-linguistic milieu, and who typically, after some time, master both,
>>and become perfectly bilingual, acquire both languages at a reduced rate,
>>they assimilate the same amount of information per time, spread into two
>>layers.
> 
> 
> This is, indeed, a fairy tale.  Children who learn two languages when
> growing up become equally proficient in their primary language as
> children who learn only one, and depending on to what degree the
> secondary language is used, they may learn to be fluent in this too.
> 
> From the studies I've seen, it is only when they learn three or more
> languages that they become confused, and then only mildly. 

Would you please reread what I wrote?

I said myself that children acquire perfectly well both languages, only
the *speed* may be affected. And I never mentioned any confusion.
And this "fairy tale" concerning this speed came to me from children
education specialists I happen to know well.

J. Karczmarczuk
karczma (331)
10/12/2005 10:02:10 AM
Brian Harvey wrote:
> "wooks" <wookiz@hotmail.com> writes:
> >Is it a good idea to/better to learn them at the same time when you
> >don't have to.
>
> I don't think this can be answered in principle.  If you were my advisee,
> I'd be asking questions about "what else might you take instead" and "if
> not now, when, and along with what else?"
>

Well I could do a course on e-business entrepreneurship but we don't get
many electives and I'd rather spend them on something more academic and
see if I can wangle my way on to that course on a not for credit basis.

I do have an imperative/OO background (VB) so should not find Java
unduly difficult whereas in year 2 there will be more Java - Concurrent
Programming which I am not familiar with as well as Compilers.

My background for believing I could cope with a functional programming
course albeit one in Miranda is that I read half of your book over the
summer - Simply Scheme (got a bit stuck on the pattern matching program
but was fine with all the exercises up until then) as well as the first
8 chapters of The Little Schemer (i.e before they introduce
continuations which was too much for my brain at the time).

Strategically I am trying to free up more choice of elective for myself
in Year 3 which is where the university have placed their FP course.
The year 1 and 2 electives are either not practicable or not of
interest.


> Some of our students do report being confused when they are doing two very
> different things at once, e.g., SICP along with our machine organization/
> machine language course.  In one case we are asking them to pay careful
> attention to low-level details, and in the other case, the entire project is
> to abstract those details away (under the rug).  But it sounds as if you will
> be studying all more or less high level approaches, so I don't think you'll
> be too confused.

Well we are not doing SICP (although I hope to read it at some point)
but we are also doing MIPS programming but to me that is such a
different mindset that I am not concerned.

>  On the other hand, you may be more inclined to strangle the
> instructor of the OOP course for making things too complicated. :-)

If you mean like having to write about 12 lines of code just to do
Hello World, and being encouraged to come to terms with gizmo-packed
IDEs, editors (we've been encouraged to experiment with 3 different
environments) and APIs before we even start programming. Yes.

wookiz (347)
10/12/2005 11:46:21 AM
Cruise Director wrote:
> The difficulties of your workload may have nothing to do with computer
> paradigms or computer science per se.  Beware of biting off more than
> you can chew.

This is not extra work... I am proposing doing it in place of one of
the prescribed 1st year electives.

> Maybe you are more productive and disciplined than a
> "lazybones" such as myself, but my formula at Cornell always was 1 hard
> course, 1 medium difficult course, and 2 easy courses.

Are there any easy courses in a CS degree?

wookiz (347)
10/12/2005 11:51:24 AM
"wooks" <wookiz@hotmail.com> writes:
>Well I could do a course on e-business entrepeneurship but we don't get
>many electives and I'd rather spend them on something more academic and
>see if I can wangle my way on to that course on a not for credit basis.

Ugh.

I will tell you right in this message everything in the business curriculum:

	1.  It's good to be greedy.

	2.  Assorted techniques for manipulating people who think it
	    isn't good to be greedy.

Since #1 is false, there's really no need for you to study #2.


But you should definitely think beyond computer science.  I don't know where
you're going to school, but I'm willing to bet there are courses available in
literature, philosophy, psychology, art, mathematics, physics, etc.
I advise fitting as many of those in as you can, even if it means a little
less computer science.
bh150 (210)
10/12/2005 2:14:52 PM
"wooks" <wookiz@hotmail.com> writes:

> I am wondering though whether trying to study all 3 paradigms at the
> same time is advisable.

I am firmly ambivalent about your question.

Many have mentioned the value of knowing multiple paradigms, and
learning 3 at once might turn out great for you.

On the other hand, it's good to learn a paradigm deeply by immersing
oneself completely in it.  You might find this hard to do when
you're learning all three.  One paradigm might creep into another, when
it would be better not to combine them until each is fully learned.

In high school I took French 1 and Spanish 1 at the same time.  To keep
the languages apart I focused on the subtle differences in
pronunciation.  I did get tripped up on an oral exam with one word that
was difficult.  Every phoneme was exactly the same in French and
Spanish, only the stress was on a different syllable.  Five brownie
points for anyone who knows what word I'm referring to.
brlspam (219)
10/12/2005 2:35:02 PM
"wooks" <wookiz@hotmail.com> writes:

> Are there any easy courses in a CS degree.

*Far* too many if the poor quality of many CS graduates is any
indication.

jmarshall (140)
10/12/2005 3:17:44 PM
Joe Marshall wrote:
> "wooks" <wookiz@hotmail.com> writes:
>
> > Are there any easy courses in a CS degree.
>
> *Far* too many if the poor quality of many CS graduates is any
> indication.

I am in one of the top rated schools in the country. I assure you there
aren't any easy courses; however, the pass mark for all courses is 40%,
so I guess it is possible to achieve a degree with low marks.

wookiz (347)
10/12/2005 3:52:56 PM
"wooks" <wookiz@hotmail.com> writes:

> Joe Marshall wrote:
>> "wooks" <wookiz@hotmail.com> writes:
>>
>> > Are there any easy courses in a CS degree.
>>
>> *Far* too many if the poor quality of many CS graduates is any
>> indication.
>
> I am in one of the top rated schools in the country. 

Great!

I was reminded of Alan Kay's quote:

  ``Most undergraduate degrees in computer science these days are
    basically Java vocational training.''

jmarshall (140)
10/12/2005 3:59:14 PM
wooks wrote:
>>> Are there any easy courses in a CS degree.

Joe Marshall wrote:
>> *Far* too many if the poor quality of many CS graduates is any
>> indication.

> I am in one of the top rated schools in the country.

Even the best schools are hit-or-miss at teaching CS and CE undergrads.
If you're lucky, you'll get instructors who actually enjoy teaching, are
good at teaching, and know the subject well. Even then, you'll only get
a very shallow understanding of the material if you're just looking to
finish your homework and pass your exams.

Undergraduate curricula are modeled after apprenticeships; their goal is
to teach you the basic tools of the thinking man's trade, just as
vocational schools teach you the basics of carpentry, plumbing, etc.
They don't aim for actual mastery of a subject -- that's what the
master's degree program is for (and even masters have very limited
experience).

The "basic tools" for an academic or engineer are critical thinking,
problem-solving skills, and general background material for your
speciality. Unless you go well beyond the course material, you won't
master functional programming (for example); you'll just acquire some
rough familiarity with the paradigm. You certainly won't become an
expert functional programmer just by taking a class, even if the teacher
/is/ excellent and you ace the class.

Unfortunately, it's easy to earn a bachelor's degree with /just/ the
general background material but no real skills, especially if you
overemphasize the CS/CE portions of your curriculum. Whenever you have a
choice between taking a CS/CE class and taking philosophy, psychology,
literature, art, etc., I'd recommend the humanities class. While there's
no guarantee, in my experience the humanities teachers are a little more
likely than the CS/CE geeks to hammer the critical thinking stuff into
you. The geeks tend to get hung up on the cool details of the subject
instead of the (more important long-term) thinking skills. Philosophy
classes are especially good, so long as you have an engaging professor
or TA.

The other important thing you learn at typical American universities
(not sure if the rest of the world is the same) is work-life balance.
For most undergrads, it's your first experience living on your own.
Learning how to have fun and still get the job done is a big part of
going to school, in my experience. (I was always better at the "having
fun" than the "still getting the job done" part.)
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
news152 (508)
10/12/2005 5:49:32 PM
Joe Marshall wrote:
>   ``Most undergraduate degrees in computer science these days are
>     basically Java vocational training.''

I assume Java is taught because it is commercially viable now. However, I
seem to spend most of my time teaching Java programmers in industry how to
use FPLs...

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
usenet116 (1778)
10/12/2005 5:50:20 PM
Mike Austin wrote:
> One of the simple uses of objects is reducing namespace collisions.
> ...

Yes. You can avoid name collisions with other approaches, such as modules in
SML:

structure Window = struct
  fun move_to window x y =
    print "window.move_to"
end

or OCaml:

module Window = struct
  let move_to window x y =
    print_endline "window.move_to"
end

So that is not a justification for OOP, AFAICT.

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
usenet116 (1778)
10/12/2005 5:50:45 PM
Jon Harrop schrieb:
> Joe Marshall wrote:
> 
>>  ``Most undergraduate degrees in computer science these days are
>>    basically Java vocational training.''
> 
> 
> I assume Java is taught because it is commercially viable now. However, I
> seem to spend most of my time teaching Java programmers in industry how to
> use FPLs...

Where do they get jobs after being trained?
(I'd like one *g*)

Regards,
Jo
jo427 (1164)
10/12/2005 8:14:30 PM
Bradd W. Szonye wrote:
> wooks wrote:
> >>> Are there any easy courses in a CS degree.
>
> Joe Marshall wrote:
> >> *Far* too many if the poor quality of many CS graduates is any
> >> indication.
>
> > I am in one of the top rated schools in the country.
>
> Even the best schools are hit-or-miss at teaching CS and CE undergrads.
> If you're lucky, you'll get instructors who actually enjoy teaching, are
> good at teaching, and know the subject well. Even then, you'll only get
> a very shallow understanding of the material if you're just looking to
> finish your homework and pass your exams.
>

I don't believe I said anything here that would have conveyed that
impression.

> Undergraduate curricula are modeled after apprenticeships; their goal is
> to teach you the basic tools of the thinking man's trade, just as
> vocational schools teach you the basics of carpentry, plumbing, etc.
> They don't aim for actual mastery of a subject -- that's what the
> master's degree program is for (and even masters have very limited
> experience).
>

I am doing an undergraduate masters.

> The "basic tools" for an academic or engineer are critical thinking,
> problem-solving skills, and general background material for your
> speciality. Unless you go well beyond the course material, you won't
> master functional programming (for example); you'll just acquire some
> rough familiarity with the paradigm. You certainly won't become an
> expert functional programmer just by taking a class, even if the teacher
> /is/ excellent and you ace the class.
>

It should be apparent from reading my contributions to
this thread that that is not my circumstance.

> Unfortunately, it's easy to earn a bachelor's degree with /just/ the
> general background material but no real skills, especially if you over-
> emphasize the CS/CE portions of your curriculum. Whenever you have a
> choice between taking a CS/CE class and taking philosophy, psychology,
> literature, art, etc., I'd recommend the humanities class.

Our courses are organised into half units.
My first choice of elective was Cognitive Science. The Philosophy
course on offer was a whole unit which would have meant I couldn't do
Cog Sci if I took it. Other options that I considered had timetabling
difficulties. I am not really seeking advice on getting an all round
university education. The FP class is one that I want to take to build
on what I have already learnt, and it has the benefit of freeing up
my options in year 3.

> While there's
> no guarantee, in my experience the humanities teachers are a little more
> likely than the CS/CE geeks to hammer the critical thinking stuff into
> you.

I studied critical thinking by myself before I even decided to apply to
go to university.

> The geeks tend to get hung up on the cool details of the subject
> instead of the (more important long-term) thinking skills. Philosophy
> classes are especially good, so long as you have an engaging professor
> or TA.
>

I am not a geek. I want to do FP. Given a choice I would dump the Java
and OO courses but they are compulsory.

> The other important thing you learn at typical American universities
> (not sure if the rest of the world is the same) is work-life balance.
> For most undergrads, it's your first experience living on your own.

I am a mature student. I have worked in New Zealand, the USA, Africa
and the UK. For a variety of reasons I am doing a degree late in life.

> Learning how to have fun and still get the job done is a big part of
> going to school, in my experience. (I was always better at the "having
> fun" than the "still getting the job done" part.)

Had plenty of fun in my time and think I have a good work/life balance
perspective. I am taking up a completely new sport and have signed up
for some community projects in my first year.

wookiz (347)
10/12/2005 8:56:01 PM
Joe Marshall wrote:
> "wooks" <wookiz@hotmail.com> writes:
>
> > Joe Marshall wrote:
> >> "wooks" <wookiz@hotmail.com> writes:
> >>
> >> > Are there any easy courses in a CS degree.
> >>
> >> *Far* too many if the poor quality of many CS graduates is any
> >> indication.
> >
> > I am in one of the top rated schools in the country.
>
> Great!
>
> I was reminded of Alan Kay's quote:
>
>   ``Most undergraduate degrees in computer science these days are
>     basically Java vocational training.''

One of the schools with which we have an international student exchange
program is MIT. I had no idea standards had dropped so far in your old
school.

wookiz (347)
10/12/2005 9:05:50 PM
Brian Harvey wrote:
> "wooks" <wookiz@hotmail.com> writes:
> >Well I could do a course on e-business entrepeneurship but we don't get
> >many electives and I'd rather spend them on something more academic and
> >see if I can wangle my way on to that course on a not for credit basis.
>
> Ugh.
>
> I will tell you right in this message everything in the business curriculum:
>
> 	1.  It's good to be greedy.
>
> 	2.  Assorted techniques for manipulating people who think it
> 	    isn't good to be greedy.
>
> Since #1 is false, there's really no need for you to study #2.
>
>
> But you should definitely think beyond computer science.  I don't know where
> you're going to school, but I'm willing to bet there are courses available in
> literature, philosophy, psychology, art, mathematics, physics, etc.
> I advise fitting as many of those in as you can, even if it means a little
> less computer science.

I am taking Cognitive Science and would have taken a maths class as my
last elective but for timetabling clash.  I have mentioned in my post
to Bradd why I am not taking Philosophy. I am also doing a course in
academic writing on a not for credit basis.

The main reason I am thinking of bringing forward the FP class is
because there is a wider range of options to choose in year 3 and 4 so
it will free up an extra slot for then.

wookiz (347)
10/12/2005 9:25:48 PM
wooks wrote:
> Cruise Director wrote:
>
> > Maybe you are more productive and disciplined than a
> > "lazybones" such as myself, but my formula at Cornell always was 1 hard
> > course, 1 medium difficult course, and 2 easy courses.
>
> Are there any easy courses in a CS degree.

I'm not entirely sure.  I majored in Sociocultural Anthropology and
minored in CS so that my life would be sane.  Also, because Computer
Graphics was cancelled on me sophomore year when I was deciding, and
because all CS professors I had had to date were decidedly dull.
Didn't get the good prof and the good course until I was a junior.  I
found a lot of the CS hard, but part of that was because I was drifting
in and out of it with a minor, rather than sticking with it the whole
time.  I generally did better on the practical lab courses, i.e.
programming, because I valued them more than the theory courses and put
far more energy into them.  I mean, what's more important, book
knowledge or producing something that works and runs fast?

Anyways, nobody said that every course you take has to be in CS.  Easy
courses were typically things like Art Appreciation or some crap like
that.  I say "crap" only because I had a shitty prof for that.  In a
parallel universe I am a painter, and that universe might be
overlapping this one sooner than I'd think.


Cheers,
Brandon J. Van Every
   (cruise (director (of SeaFunc)
           '(Seattle Functional Programmers)))
http://groups.yahoo.com/group/SeaFunc

0
SeaFuncSpam (366)
10/12/2005 10:44:38 PM
wooks wrote:
> Joe Marshall wrote:
> > "wooks" <wookiz@hotmail.com> writes:
> >
> > > Are there any easy courses in a CS degree.
> >
> > *Far* too many if the poor quality of many CS graduates is any
> > indication.
>
> I am in one of the top rated schools in the country. I assure you there
> aren't any easy courses , however the pass mark for all courses is 40%
> so I guess it is possible to achieve a degree with low marks.

Caveat Emptor.  When I went through Cornell you had to sustain a grade
much higher than that to be a major.  IIRC, better than the C's I was
getting in a number of courses.  Otherwise I could have been a double
major, as I was only a few theory / math courses shy of it.  I think
that standard persists today.  A friend of mine had to go EE because
the CS dept. wouldn't have him.  They berated him for what a bad CS
student he was, etc.  Well, when he graduated, he got a job at
Microsoft and the vast majority of his peers didn't.  Now, I don't
think that's yet another indication of how shoddy Microsoft is.  :-)
Rather, he had practical skill and really knew his stuff.  It just
wasn't appreciated by the Cornell CS dept.


Cheers,
Brandon J. Van Every
   (cruise (director (of SeaFunc)
           '(Seattle Functional Programmers)))
http://groups.yahoo.com/group/SeaFunc

0
SeaFuncSpam (366)
10/12/2005 10:52:50 PM
Joachim Durchholz wrote:
> Jon Harrop schrieb:
> > Joe Marshall wrote:
> >
> >>  ``Most undergraduate degrees in computer science these days are
> >>    basically Java vocational training.''
> >
> >
> > I assume Java is taught because it is commercially viable now. However, I
> > seem to spend most of my time teaching Java programmers in industry how to
> > use FPLs...
>
> Where do they get jobs after they were trained?
> (I'd like one *g*)

When my friend graduated from Cornell in 2003 (?) they didn't.  Only he
did.  If you exit college in a bad economy and don't actually know how
to do anything, you're dead.  Well, "dead" in the sense of "alternate
life experiences will be forced upon you."


Cheers,
Brandon J. Van Every
   (cruise (director (of SeaFunc)
           '(Seattle Functional Programmers)))
http://groups.yahoo.com/group/SeaFunc

0
SeaFuncSpam (366)
10/12/2005 10:56:09 PM
Brian Harvey wrote:
>
> I will tell you right in this message everything in the business curriculum:
>
> 	1.  It's good to be greedy.
>
> 	2.  Assorted techniques for manipulating people who think it
> 	    isn't good to be greedy.
>
> Since #1 is false, there's really no need for you to study #2.

I dunno, some marketing know-how could be damn useful if you want to
promote your own career, do what you want instead of what others would
force you to do, cause your products to get bought, etc.  Maybe not
revel in it, maybe outsource a lot of it to specialists, but certainly
be familiar with basic marketing principles.  So sayeth a Cruise
Director.


Cheers,
Brandon J. Van Every
   (cruise (director (of SeaFunc)
           '(Seattle Functional Programmers)))
http://groups.yahoo.com/group/SeaFunc

0
SeaFuncSpam (366)
10/12/2005 11:01:26 PM
Speaking as a cs student at Karlsruhe, Germany: yes.

I now got around the first year, where i had the following courses:
Linear Algebra, Higher maths (analysis): hard
Computer Science 2: medium
Computer Science 1, Numeric: easy
Statistics: laughable

I'm not sure how this compares to US curricula. Older students
tell me most people don't take LA and HM both in the first year. I
will probably have a lazy second half of my undergraduate studies now.

(not sure if I translated all the terms correctly)

0
10/13/2005 10:12:34 AM
beza1e1 wrote:
> Speaking as a cs student at Karlsruhe, Germany: yes.
> 
> I now got around the first year, where i had the following courses:
> Linear Algebra, Higher maths (analysis): hard
> Computer Science 2: medium
> Computer Science 1, Numeric: easy
> Statistics: laughable

I did my Vordiplom in Braunschweig.  Lots of hard, hard maths (I got 
straight As in high school without any problems, but there I was 
fighting through my homework all afternoon and night and barely got 
through...), but everything CS-related very, very, very, very easy, 
totally laughable.  From two weeks of reading stuff (one algorithms 
book, and SICP) I learned much more than from the first two semesters in 
my CS classes.  Mostly I didn't bother to even attend them, just worked 
on my maths during that time ;)  Oh yes, we also did some reasonably 
hard theoretical CS and technical CS, not that that did me any good; 
I'll never design a CPU, nor do I profit from knowing about weird 
constructions to prove the halting problem.

> I'm not sure how this compares to US curricula. Older students
> tell me most people don't take LA and HM both in the first year. I
> will probably have a lazy second half of my undergraduate studies now.

Hm, at a reportedly quite good Midwest university (undergrad) I found 
the 4xx CS classes very easy.  OTOH I sometimes read about US undergrads 
having compiler classes where they do cool stuff (such as building whole 
compilers in Scheme or SML), which in Germany I did in my 7th semester, 
but without much practical or useful emphasis (basically just extracts 
from the Dragon book; but a second class involved a very very basic 
compiler for the JVM written in C(!) as a group project).

But I think the US has large differences in education, even for the 
same degrees.

But since I sat through my first two weeks at the University, I gave up 
on learning there anyway (I do that in my free time, when I'm not 
forced to do stuff for college).

You don't learn stuff there (that you couldn't just learn yourself in 
20% the time), you just earn your piece of paper ("diploma") by doing 
lots of bull**** for whoever teaches (i.e. doing repetitive exercises 
that don't teach you anything you don't already know, but you have to do 
them anyway; not that the professors would care...).

-- 
A government which robs Peter to pay Paul can always
depend on the support of Paul.
	George Bernard Shaw
0
u.hobelmann (1643)
10/13/2005 11:24:59 AM
"wooks" <wookiz@hotmail.com> writes:

> Joe Marshall wrote:
> > "wooks" <wookiz@hotmail.com> writes:
> >
> > > Are there any easy courses in a CS degree.
> >
> > *Far* too many if the poor quality of many CS graduates is any
> > indication.
> 
> I am in one of the top rated schools in the country. I assure you there
> aren't any easy courses , however the pass mark for all courses is 40%
> so I guess it is possible to achieve a degree with low marks.

The percentage required to pass isn't really a good indication of
quality -- it depends on how difficult the exercises are.  40% of a
number of exercises that require deep insight into the subject may be
a lot harder than 90% of trivial surface knowledge questions.

        Torben

0
torbenm5 (42)
10/13/2005 12:12:03 PM
Joe Marshall <jmarshall@alum.mit.edu> writes:


> I was reminded of Alan Kay's quote:
> 
>   ``Most undergraduate degrees in computer science these days are
>     basically Java vocational training.''

And when they aren't, the students complain (or simply fail to pass
their exams).  :-)

You wouldn't believe (well, maybe you would) how often I hear students
complain that our CS curriculum is too "ivory tower" and out of touch
with "real life" (i.e., the internet economy).

        Torben
0
torbenm5 (42)
10/13/2005 12:17:03 PM
Torben Ægidius Mogensen wrote:
> You wouldn't believe (well, maybe you would) how often I hear students
> complain that our CS curriculum is too "ivory tower" and out of touch
> with "real life" (i.e., the internet economy).

Sure but, by definition, students don't know what they're talking about.

When I did physics, some students complained that we should not have had
lectures or examinations on using computers. They took a vote and 80% of
the students said that computers have nothing to do with physics so they
should be removed from the course. Fortunately, the "powers that be"
basically ignored the vote...

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
0
usenet116 (1778)
10/13/2005 12:20:02 PM
Torben Ægidius Mogensen wrote:
> Joe Marshall <jmarshall@alum.mit.edu> writes:
> 
> 
>> I was reminded of Alan Kay's quote:
>>
>>   ``Most undergraduate degrees in computer science these days are
>>     basically Java vocational training.''
> 
> And when they aren't, the students complain (or simply fail to pass
> their exams).  :-)
> 
> You wouldn't believe (well, maybe you would) how often I hear students
> complain that our CS curriculum is too "ivory tower" and out of touch
> with "real life" (i.e., the internet economy).

Yes, that's funny.  In Germany we distinguish between universities and 
practical schools (good idea, IMHO).  The former are supposed to focus 
more on the scientific side, but for some reason the vast majority of 
students (despite no interest in science at all) goes to the university. 
  The result is that there have been numerous accusations regarding 
ivory-towerness, and the universities have embraced practical, pointless 
programming drills (using Java) and abandoned most teaching of 
fundamentals and principles behind those languages.

As a result someone like me, who used to be interested in the ivory 
tower side of CS, couldn't find a place to study, and everybody else 
still complains about the lack of practical stuff.  IMHO, even the 
practical stuff couldn't hurt to rest on some formal fundamentals, but 
that's just me.

-- 
A government which robs Peter to pay Paul can always
depend on the support of Paul.
	George Bernard Shaw
0
u.hobelmann (1643)
10/13/2005 1:44:28 PM
Jon Harrop wrote:
....
> When I did physics, some students complained that we should not have had
> lectures or examinations on using computers. They took a vote and 80% of
> the students said that computers have nothing to do with physics ...

Do you mind an off-topic anecdote?

When I was a physicist, we put a question on one examination: "How
would the following instruments behave on the surface of the Moon:
   a barometer (mercury)
   an aneroid barometer
   a pendulum clock
   ...
   a pycnometer
   ... etc."  (about 10 in all)

Students gave very rich answers, taking into account the gravitation, the
high/low temperatures, the vacuum, etc. I still remember one answer:

"... Unfortunately, I haven't the slightest idea what a pycnometer is for.
  But I presume that, if you throw it away, it will fly much farther than
  on Earth".

Now, this experiment works with computers as well!!

Those folks who claimed that computers have nothing to do with physics
simply showed no imagination. They would never become decent physicists!


Jerzy Karczmarczuk
0
karczma (331)
10/13/2005 3:55:29 PM
Cruise Director schrieb:
> I mean, what's more important, book
> knowledge or producing something that works and runs fast?

That depends.

If you're designing large systems, a good measure of theory is actually 
indispensable.

The problem is: there's far more theory than anybody really needs, but 
you don't know in advance what you'll need. So universities have started 
to teach "working with theory" (and, I fear, many professors are using 
that as an excuse to spread their ivory-tower theoretic framework, so 
this isn't perfect).

Regards,
Jo
0
jo427 (1164)
10/13/2005 9:09:33 PM
Ulrich Hobelmann <u.hobelmann@web.de> writes:
> nor do I profit from knowing about weird 
>constructions to prove the halting problem.

It's a shame that you feel that way.  First, a shame that you think about
your education in terms of "profit from," if I'm correctly understanding that
to mean "this will have direct application to my future job."  Do you feel
that way about all the math courses you took?  And second, a shame that you
don't see the beauty and the phenomenal genius in Turing's development of a
way to prove things about the theoretical limits of computers at a time when
they were just beginning to build actual ones.

>You don't learn stuff there (that you couldn't just learn yourself in 
>20% the time)

In a sense this is true about any learning of anything -- you could do it
on your own with some effort.  Nevertheless, many people find it beneficial
to be part of a community of scholars, helping each other learn and push the
limits of what's known.  Perhaps you are overgeneralizing from a few bad
teaching experiences?
0
bh150 (210)
10/13/2005 9:38:43 PM
Torben Ægidius Mogensen wrote:
> "wooks" <wookiz@hotmail.com> writes:
>
> > Joe Marshall wrote:
> > > "wooks" <wookiz@hotmail.com> writes:
> > >
> > > > Are there any easy courses in a CS degree.
> > >
> > > *Far* too many if the poor quality of many CS graduates is any
> > > indication.
> >
> > I am in one of the top rated schools in the country. I assure you there
> > aren't any easy courses , however the pass mark for all courses is 40%
> > so I guess it is possible to achieve a degree with low marks.
>
> The percentage required to pass isn't really a good indication of
> quality -- it depends on how difficult the exercises are.  40% of a
> number of exercises that require deep insight into the subject may be
> a lot harder than 90% of trivial surface knowledge questions.
>

I am amazed that you think there was more than one way to interpret
what I actually said.

0
wookiz (347)
10/13/2005 10:23:11 PM
Lisp is like kung FOO!  Lisp is a meta-paradigm. Without Lisp you
can't understand any other languages. Learning the language makes you a
better person. Computer science is the assault of paradigms on
languages. Lisp is the only thing that has survived for 40 years and it
should be respected. Another meta-paradigm is the construction of new
languages. That is what we call Backus Normal Form. Learn both and then
write a Lisp macro that does something useful.

0
10/13/2005 10:38:00 PM
In article <dij5os$823$1@abbenay.CS.Berkeley.EDU>, Brian Harvey wrote:
> "wooks" <wookiz@hotmail.com> writes:
>>Well I could do a course on e-business entrepreneurship but we don't get
>>many electives and I'd rather spend them on something more academic and
>>see if I can wangle my way on to that course on a not for credit basis.
> 
> Ugh.
> 
> I will tell you right in this message everything in the business curriculum:
> 
> 	1.  It's good to be greedy.
> 	2.  Assorted techniques for manipulating people who think it
> 	    isn't good to be greedy.
> 
> Since #1 is false, there's really no need for you to study #2.

This is a rather blinkered outlook. I had a great deal of fun working
at a startup, and the main things that I learned while working at a
startup were humility and a sense of service.

The job of a businessperson is to organize a group of people to create
goods and services so useful that other people will voluntarily part
with their hard-earned money for them. Doing a good job at this is
quite a bit harder than it might seem, since it's fundamentally an act
of empathy and creativity -- you have to focus on what someone else
needs, and understand their needs well enough to create things that
are both valuable to them and whose value can be communicated. But
there's a lot of organizational cruft associated with running a
business, and learning how to handle it is a very useful skill.

In fact, my main regret is going to work at a startup, rather than
starting up a business myself.

> But you should definitely think beyond computer science.  I don't
> know where you're going to school, but I'm willing to bet there are
> courses available in literature, philosophy, psychology, art,
> mathematics, physics, etc.  I advise fitting as many of those in as
> you can, even if it means a little less computer science.

This is good advice, though. My only caveat is that I found that
reading great literature was more useful than literature classes. I'd
suggest reading on your own and taking writing classes instead.

-- 
Neel Krishnaswami
neelk@cs.cmu.edu
0
neelk (298)
10/13/2005 11:34:40 PM
"zitterbewegung@gmail.com" <zitterbewegung@gmail.com> writes:

> Lisp is the only thing that has survived for 40 years and it should
> be respected.

What we call LISP today has only a superficial resemblance to the LISP
of 40 years ago.  Same with Fortran and COBOL (that also existed 40+
years ago).  Who was it that said "I don't know which language people
will use in the year 2000, but I know they will call it Fortran!"?

And while LISP, Fortran and COBOL retained their names while changing,
you can argue that Algol is still with us in the shape of C, Java and
similar languages.  See the "family tree" of programming languages at
http://www.levenez.com/lang/history.html for more details.

        Torben
0
torbenm610 (36)
10/14/2005 8:49:22 AM
In article <3r6ugdFhe05lU1@individual.net>,
Ulrich Hobelmann  <u.hobelmann@web.de> wrote:
> nor do I profit from knowing about weird constructions to prove the
> halting problem.

I doubt that. I believe everyone stumbles into Rice's Theorem sooner
or later. It's good to be ready for that:

"All right, we need something that analyzes a program and tells
whether it--"
"Sorry, can't be done."

This can save anything from minutes of fruitless pondering to
man-years of work in vain. :)
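For the record, the classic diagonal argument behind that "can't be
done" fits in a few lines of Python. This is only an illustrative
sketch: the "oracle" below is a deliberately naive stand-in, since a
real one cannot exist.

```python
# Classic diagonal argument, sketched as runnable Python.
# Suppose we had a perfect halts(f) oracle; here is a stand-in
# that, like any fixed total checker, must answer something.

def naive_halts(f):
    """Pretend oracle: claims every function halts."""
    return True

def contrary(f):
    """Do the opposite of whatever the oracle predicts about f."""
    if naive_halts(f):
        while True:      # oracle said "halts", so loop forever
            pass
    return None          # oracle said "loops", so halt at once

# Diagonalization: ask the oracle about contrary applied to itself.
# It answers True, yet contrary(contrary) would loop forever, so the
# oracle is wrong -- and the same trap defeats *any* total checker.
print(naive_halts(contrary))  # -> True (and provably wrong)
```

Whatever fixed checker you substitute for naive_halts, feeding
contrary to itself forces a wrong answer, which is the whole theorem.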


Lauri
0
la (473)
10/14/2005 9:09:52 AM
Brian Harvey wrote:
> Ulrich Hobelmann <u.hobelmann@web.de> writes:
>> nor do I profit from knowing about weird 
>> constructions to prove the halting problem.
> 
> It's a shame that you feel that way.  First, a shame that you think about
> your education in terms of "profit from," if I'm correctly understanding that
> to mean "this will have direct application to my future job."  Do you feel

No, actually not.  I went to University (while we have practical schools 
too in Germany) because I was interested in more thorough background 
knowledge, not just applied stuff.  But when I have to learn a theory, I 
expect that it is for handling a practical problem in a formal way.

It seems that an awful lot of theoretical CS is just theory without 
applications, while the practical people (who seem to hate theory) don't 
even bother to use formal methods or study principles behind programming 
for instance, but just create ad-hoc solutions / languages instead 
(often under the discipline name software engineering).  The gap in 
between is what I'd be interested in, but there aren't too many people 
teaching that I guess.

The contrived construction of a funny machine that can't be proven to 
halt isn't interesting to me.  Many practical algorithms don't just 
infinite-loop, and the people writing code *know* that their code (most 
often) won't loop.  The same with Gödel's stuff.  I don't consider weird 
constructions practical or useful at all, just because there exists one 
totally made up case that refutes something.

> that way about all the math courses you took?  And second, a shame that you
> don't see the beauty and the phenomenal genius in Turing's development of a
> way to prove things about the theoretical limits of computers at a time when
> they were just beginning to build actual ones.

Ok, it's nice to know that there are limits, but I'd rather be concerned 
about practical limits.  Turing machines are a weird design to begin 
with (one-dimensional tape, infinite...).

>> You don't learn stuff there (that you couldn't just learn yourself in 
>> 20% the time)
> 
> In a sense this is true about any learning of anything -- you could do it
> on your own with some effort.  Nevertheless, many people find it beneficial
> to be part of a community of scholars, helping each other learn and push the
> limits of what's known.  Perhaps you are overgeneralizing from a few bad
> teaching experiences?

Mostly I just think my degree's first two years were a TOTAL waste of 
time.  Ok, in my free time I read lots of (to me) interesting stuff, so 
the years afterwards weren't too exciting either, but had I had those 
years right at the beginning, they would have been.  There are really 
interesting things in theoretical (let's call it "formal") CS, such as 
semantics, process calculi, type systems, automata, but incomputability 
is more a legend that CS people should have heard of, than something 
they should have to study in depth, IMHO.

-- 
A government which robs Peter to pay Paul can always
depend on the support of Paul.
	George Bernard Shaw
0
u.hobelmann (1643)
10/14/2005 9:20:22 AM
On 2005-10-14, Ulrich Hobelmann <u.hobelmann@web.de> wrote:
> Brian Harvey wrote:
>> Ulrich Hobelmann <u.hobelmann@web.de> writes:
>>> nor do I profit from knowing about weird 
>>> constructions to prove the halting problem.
>> 
>> It's a shame that you feel that way.  First, a shame that you think about
>> your education in terms of "profit from," if I'm correctly understanding that
>> to mean "this will have direct application to my future job."  Do you feel
>
> No, actually not.  I went to University (while we have practical schools 
> too in Germany) because I was interested in more thorough background 
> knowledge, not just applied stuff.  But when I have to learn a theory, I 
> expect that it is for handling a practical problem in a formal way.
>
> It seems that an awful lot of theoretical CS is just theory without 
> applications, while the practical people (who seem to hate theory) don't 
> even bother to use formal methods or study principles behind programming 
> for instance, but just create ad-hoc solutions / languages instead 
> (often under the discipline name software engineering).  The gap in 
> between is what I'd be interested in, but there aren't too many people 
> teaching that I guess.
>
> The contrived construction of a funny machine that can't be proven to 
> halt isn't interesting to me.  Many practical algorithms don't just 
> infinite-loop, and the people writing code *know* that their code (most 
> often) won't loop.  The same with Gödel's stuff.  I don't consider weird 
> constructions practical or useful at all, just because there exists one 
> totally made up case that refutes something.

The trick here is that Turing machines are equivalent to most
programming languages; they are just simpler than most languages. On
the other hand they are not really impractical. If you consider
infinite tapes impractical, then programming languages that don't
bound your memory usage are impractical too (and I know many of
them). And look at the book by Schoenhage et al., "Fast Algorithms - A
Multitape Turing Machine Implementation"
(http://www.informatik.uni-bonn.de/~schoe/tp/TPpage.html).

>
>> that way about all the math courses you took?  And second, a shame that you
>> don't see the beauty and the phenomenal genius in Turing's development of a
>> way to prove things about the theoretical limits of computers at a time when
>> they were just beginning to build actual ones.
>
> Ok, it's nice to know that there are limits, but I'd rather be concerned 
> about practical limits.  Turing machines are a weird design to begin 
> with (one-dimensional tape, infinite...).
>
>>> You don't learn stuff there (that you couldn't just learn yourself in 
>>> 20% the time)
>> 
>> In a sense this is true about any learning of anything -- you could do it
>> on your own with some effort.  Nevertheless, many people find it beneficial
>> to be part of a community of scholars, helping each other learn and push the
>> limits of what's known.  Perhaps you are overgeneralizing from a few bad
>> teaching experiences?
>
> Mostly I just think my degree's first two years were a TOTAL waste of 
> time.  Ok, in my free time I read lots of (to me) interesting stuff, so 
> the years afterwards weren't too exciting either, but had I had those 
> years right at the beginning, they would have been.  There are really 
> interesting things in theoretical (let's call it "formal") CS, such as 
> semantics, process calculi, type systems, automata, but incomputability 
> is more a legend that CS people should have heard of, than something 
> they should have to study in depth, IMHO.
>

In good classes you should learn a lot, even for practical purposes.
There is of course some theory that you can't directly apply to
practical problems; for example the PCP theorem is only indirectly
useful in practice, because we don't have PCP machines, but with it
you can prove results about the APX, PTAS, EPTAS, etc. classes, which
are very interesting for practical purposes. E.g. you have some
problem and you need an algorithm that quickly calculates the solution
of an instance. If you have this theoretical background you can save a
lot of time; if you don't, well ... (I assume P unequal to NP for now).

The halting problem is tightly connected to problems found in
practice and in theories like type theory. If you want to ensure that
your type system is decidable, you know it can't have the power of a
Turing machine. If you want that power, your compiler may not halt on
every module/program instance. Looking at a compiler in more detail:
you have different passes, e.g. register allocation. Register
allocation is as hard as graph colouring if you have an architecture
where the registers differ and, depending on the operation, you have
to choose a particular one, as on x86. If you know how well graph
colouring can be approximated, you know how bad a compiler is at this,
as long as P != NP and no worst-case super-polynomial-time register
allocation algorithm is used. These are just some examples, so I hope
you see that having this theoretical background is a good thing if one
wants to do things right.
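The register-allocation point can be made concrete with a greedy
colouring heuristic. This is a toy sketch in Python: the interference
graph and all names are made up for illustration, and real allocators
(Chaitin-style and friends) are far more involved.

```python
# Greedy graph-colouring register allocator (illustrative sketch).
# Vertices are virtual registers; an edge means two values are live
# at the same time and so must not share a physical register.

def greedy_color(interference, order):
    color = {}
    for v in order:
        taken = {color[u] for u in interference.get(v, ()) if u in color}
        c = 0
        while c in taken:          # smallest colour not used by a neighbour
            c += 1
        color[v] = c
    return color

# Example: a and b interfere, b and c interfere, a and c do not.
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
regs = greedy_color(graph, ["a", "b", "c"])
print(regs)  # -> {'a': 0, 'b': 1, 'c': 0}: a and c share a register
```

Greedy colouring is only a heuristic, which is exactly the point
above: finding an optimal colouring, hence an optimal allocation, is
NP-hard, and the visit order can change how many registers you need.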

Knowing just these complexity classes or other kinds of theoretical
results is often not enough, because in reality you don't get your
3SAT problem or Graph Colouring handed to you. You get problem XYZ and
you have to figure it out from there. Sometimes, as in the case of
register allocation, it is trivial to reduce it to graph colouring
or, more importantly, to find an L-reduction from XYZ to some known
problem.

-- 
Matthias Kretschmer
0
mccratch (26)
10/14/2005 10:01:16 AM
Ulrich Hobelmann schrieb:
> Brian Harvey wrote:
> 
>> Ulrich Hobelmann <u.hobelmann@web.de> writes:
>>
>>> nor do I profit from knowing about weird constructions to prove the 
>>> halting problem.
> 
> No, actually not.  I went to University (while we have practical schools 
> too in Germany) because I was interested in more thorough background 
> knowledge, not just applied stuff.  But when I have to learn a theory, I 
> expect that it is for handling a practical problem in a formal way.

The halting problem *is* a practical one. It would be nice if there were 
a way to check that you haven't inadvertently written an endless loop. 
Compilers could warn about nonterminating recursion.
Also, there's a whole class of problems that can be mapped to the 
halting problem (the undecidable ones). Knowing which kind of problem is 
in that class is of practical value, too.

Of course, the proof that the halting problem cannot be handled 
algorithmically isn't in itself practical. No proof that tells us "this 
can't be done" is practical. So from a practical perspective, you can be 
content with the proof.

On the other hand, if you have a problem and are unsure whether it's 
decidable, knowing such proof techniques can help deciding. So there is 
some remote practical use even for this kind of knowledge.

 > Many practical algorithms don't just
> infinite-loop, and the people writing code *know* that their code (most 
> often) won't loop.

Ah, but I had one such instance. I was writing a table control - very 
basic down-to-earth GUI stuff.

The "interesting" part here was that row height was determined by 
contents. I had to divide the processing into two phases: height 
determination and actual laying-out. It's *very* easy to confuse the 
steps, and that usually ends up with the laying-out part recursing back 
into the height determination code - sometimes that will terminate, 
sometimes it will not.

This kind of work isn't too common, with that I agree. But termination 
problems can happen if you're doing things that aren't straightforward.

 > The same with Gödel's stuff.  I don't consider weird
> constructions practical or useful at all, just because there exists one 
> totally made up case that refutes something.

It's not the weird construction that's interesting, it's the refutation.

> Ok, it's nice to know that there are limits, but I'd rather be concerned 
> about practical limits.  Turing machines are a weird design to begin 
> with (one-dimensional tape, infinite...).

They are commonplace simply because they are the easiest-to-explain 
equivalent of an infinite computer. In the '50s, when it was unclear 
what an algorithm could or could not do, and when it was unclear 
whether different ways of writing down algorithms would affect their 
power, people invented dozens of algorithmic notations. Turing 
machines and the lambda calculus are what is still in (relatively) 
common knowledge, but there were others, and some of them were 
*really* weird.

The Turing machine survived because it was easiest to prove some things 
with it, and because every other formalism could be proved to be 
Turing-equivalent.

The lambda calculus survived because it's so abstract that it has served 
as a model for many a programming language.

That's all. You don't *need* to know about Turing machines; it's just 
that you won't understand the term "Turing equivalence" unless you know 
what a Turing machine is.
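To make the formalism concrete, here is a minimal single-tape Turing
machine simulator. It is a made-up illustration in Python; the
transition table just appends a 1 to a unary number (i.e. computes
n+1), and the interest is only that the whole formalism fits in a
dozen lines.

```python
# Minimal single-tape Turing machine simulator (illustrative sketch).
# The transition table maps (state, symbol) -> (new_symbol, move, new_state).

def run_tm(delta, tape, state="start", blank="_", steps=1000):
    cells = dict(enumerate(tape))      # sparse tape, grows on demand
    pos = 0
    for _ in range(steps):
        if state == "halt":
            break
        sym = cells.get(pos, blank)
        new_sym, move, state = delta[(state, sym)]
        cells[pos] = new_sym
        pos += 1 if move == "R" else -1
    out = "".join(cells[i] for i in sorted(cells))
    return out.strip(blank)

# Example machine: append a '1' to a unary number.
inc = {
    ("start", "1"): ("1", "R", "start"),  # scan right over the 1s
    ("start", "_"): ("1", "R", "halt"),   # write a 1 at the end, halt
}

print(run_tm(inc, "111"))  # -> 1111
```

Swapping in a different transition table gives a different machine;
the simulator itself never changes, which is the sense in which the
model is "simple".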

> Mostly I just think my degree's first two years were a TOTAL waste of 
> time.

Me too, in a sense. It was dedicated to learning programming, something 
that I had done in my free time before. It was rather sparse on theory, 
which was the area where I *could* learn.

The theoretical backgrounds have helped me a lot. It's simply additional 
perspectives, and I can make use of them when I'm doing advanced stuff.

Learning this stuff was also a rewarding experience for me, but of 
course that's no justification for forcing this stuff on students that 
may not feel rewarded. The additional-perspective argument is.

 > Ok, in my free time I read lots of (to me) interesting stuff, so
> the years afterwards weren't too exciting either, but had I had those 
> years right at the beginning, they would have been.  There are really 
> interesting things in theoretical (let's call it "formal") CS, such as 
> semantics, process calculi, type systems, automata, but incomputability 
> is more a legend that CS people should have heard of, than something 
> they should have to study in depth, IMHO.

Sorry, that's an indefensible position. You need to know about 
decidability if you design type systems, or any kind of inference engine.

You also need to know about decidability to assess the limitations of 
inference engines, to check whether the limitations are arbitrary or 
really due to undecidability issues.

This knowledge also helps when *using* such engines. If you know that 
what you're trying to achieve is undecidable, you automatically try to 
transform the problem into something decidable. You know quite exactly 
what information needs to be added so that the engine can work.
Without that background knowledge, exploring the problem space is 
largely guesswork.

Regards,
Jo
0
jo427 (1164)
10/14/2005 10:05:05 AM
In article <3r9binFi2qh6U1@individual.net>,
Ulrich Hobelmann  <u.hobelmann@web.de> wrote:
> knowledge, not just applied stuff.  But when I have to learn a theory, I 
> expect that it is for handling a practical problem in a formal way.

Science involves both basic research and applied research. Basic
research strives only to deepen our understanding about things without
aspiring towards practical applications. A university is a scientific
institution, so it's no wonder that some things being taught there are
not eminently practical. If you don't like this approach, then the
university is probably not the right place for you.

That being said, one of the wonderful things about science is that
anything at all may turn out to have unexpected practical
applications. It's just impossible to know beforehand, which. If
people were taught only things whose practical uses were well-known,
no one would ever come up with applications for other things.

> The contrived construction of a funny machine that can't be proven to 
> halt isn't interesting to me.  Many practical algorithms don't just 
> infinite-loop, and the people writing code *know* that their code (most 
> often) won't loop.

If they do, they know it because they have proven it. 

> Ok, it's nice to know that there are limits, but I'd rather be concerned 
> about practical limits.

Practical limits change all the time. Theoretical ones don't. Guess
which ones are more useful to know about in the long run?

> Mostly I just think my degree's first two years were a TOTAL waste of 
> time.

If you don't find use for what you have learned, why do you think the
problem is in what you learned? You might just as well think that
there's something wrong with what you are doing, or how you are doing
it, if you don't get to apply your knowledge enough.

> There are really interesting things in theoretical (let's call it
> "formal") CS, such as semantics, process calculi, type systems,
> automata, but incomputability is more a legend that CS people should
> have heard of, than something they should have to study in depth,
> IMHO.

That's funny, as all of the interesting examples you mention involve
incomputability e.g. in the form of undecidability. Don't you think it
is at all interesting to know e.g. whether a compiler will actually
finish when given a program to compile? Or do you think it's ok for
any old program to get stuck occasionally? You can always push
control-C?


Lauri
0
la (473)
10/14/2005 10:12:26 AM
In comp.lang.scheme Lauri Alanko <la@iki.fi> wrote:
> [...] If people were taught only things whose practical uses were
> well-known, no one would ever come up with applications for other
> things.

Other than the above, I think that you are right on the money. However, I
don't think that teaching practical things, or teaching theory through
applications, or teaching abstractions through concrete examples,
necessarily stifles creativity. If anything, I believe that cool
applications of theoretical insights can motivate most people to learn
more theory.

-Vesa Karvonen
0
10/14/2005 10:45:34 AM
zitterbewegung@gmail.com schrieb:
>  Lisp is like kung FOO!  Lisp is a meta paradigm.

Nope. Lisp is a family of languages, not a paradigm. It is quite 
flexible, and can be used for many paradigms - but you still have to 
learn the paradigms; Lisp is just the "substrate".

 > WIthout lisp you can't understand any other languages.

Nonsense. I have learned Lisp, and yes it was an eye-opener since it 
introduced me into higher-order functions - but I would have learned 
that from Haskell, or any other FPL.

 > Learning the language makes you a better person.

Now that's *utter* nonsense. Unless you specify what you mean with 
"better person".
It certainly doesn't make me more compassionate, for one instance :-)

 > Computer Science is the assault of paradigm to languages.

Doesn't compute.

 > Lisp is the only thing that has survived for 40 years and it
> should be respected.

If that were of any relevance, I'd also have to respect Fortran and RPG.

Fortran... well, current-day Fortran bears little to no resemblance to 
40-year-old Fortran dialects, so it doesn't really count.

But RPG? It's an unspeakable evil from ancient times, and should better 
be left untouched. (For the curious and foolhardy: three-address code, 
GOSUB but no local variables, no pointers, no variable-length strings, 
variable names limited to six characters. And those are just the largest 
deficits; there are numerous useless limitations in the details.)

I do respect Lisp. I wouldn't respect it for its age alone, though that 
certainly adds to the respect.

> Another meta paradigm is the construction of new languages. That is
> what we call Backus Normal Form.

BNF is unrelated to Lisp.

> Learn both and then write a lisp macro that does something useful.

I have always been sceptical about self-definable syntax. It tends to 
encourage code that nobody but the original macro author understands.

Regards,
Jo
0
jo427 (1164)
10/14/2005 10:53:45 AM
Matthias Kretschmer wrote:
>> The contrived construction of a funny machine that can't be proven to 
>> halt isn't interesting to me.  Many practical algorithms don't just 
>> infinite-loop, and the people writing code *know* that their code (most 
>> often) won't loop.  The same with Gödel's stuff.  I don't consider weird 
>> constructions practical or useful at all, just because there exists one 
>> totally made up case that refutes something.
> 
> the trick is here, that most programming languages are equivalent to
> Turing Machines, just that they are simpler than most languages. On the
> other hand they are not really impractical. If you consider infinite
> tapes as impractical than programming languages that don't bound your
> memory usage are impractical, too (well I know many of them). And look
> at the book of Schoenhage et.al. "Fast Algorithms - A Multitape Turing
> Machine Implementation"
> (http://www.informatik.uni-bonn.de/~schoe/tp/TPpage.html).

But isn't the Lambda Calculus, or Recursive Functions equivalent in 
power to the funny tape machine?  Both are much easier to understand, 
IMHO, and the first one even provides the basis for lots of programming 
languages.  Why does every CS student have to suffer through the Turing 
stuff, but most haven't even *heard* of Lambda Calculus?  This just 
doesn't make sense to me.

> If you have this theoretical background you can save a
> lot of time, if you don't, well ... (I assume P unequal to NP for now).

Yes, that's one thing one should know.

> The halting problem is tightly connected to problems found in practical
> problems and theories like type theory. If you want to ensure that your
> type system is decidable you know you can't have the power of a turing
> machine. If you want this power, your compiler may not halt on every
> module/program instance. Looking at a compiler in more detail. You have
> different passes, e.g. look at register allocation. register allocation
> is as hard as graph colouring if you have an architecture where the
> registers are different and depending on the operation you have to
> choose a different one like x86. If you know how good one is able to
> approximate graph colouring you know how bad a compiler is at this as
> long as (as long as P != NP) no worst-case super-polynomial time
> register allocation algorithm is used. These are just some examples, so I
> hope you see, having this theoretical background is a good thing in case
> one wants to do things right.

Sure, one should know some facts, but I don't see the use of studying 
all the theoretical background behind it as very necessary.  Students 
can always read up on it, and one of the most important things to learn 
in CS is *where* to find things, not exactly how everything works in 
detail.  The world is too big for that.

> Knowing just of these complexity classes or other kind of theoretical
> stuff is often not enough, because in reality you don't get your 3SAT
> problem or Graph Colouring. You get problem XYZ and you have to figure
> out from there. Sometimes, as in the case of register allocation, it is
> trivial to reduce it to graph colouring or more important find a
> L-reducation for XYZ to some known problem.

Maybe sometimes... ;)

-- 
A government which robs Peter to pay Paul can always
depend on the support of Paul.
	George Bernard Shaw
0
u.hobelmann (1643)
10/14/2005 11:50:46 AM
Lauri Alanko wrote:
> In article <3r9binFi2qh6U1@individual.net>,
> Ulrich Hobelmann  <u.hobelmann@web.de> wrote:
>> knowledge, not just applied stuff.  But when I have to learn a theory, I 
>> expect that it is for handling a practical problem in a formal way.
> 
> Science involves both basic research and applied research. Basic
> research strives only to deepen our understanding about things without
> aspiring towards practical applications. A university is a scientific
> institution, so it's no wonder that some things being taught there are
> not eminently practical. If you don't like this approach, then the
> university is probably not the right place for you.

I just wish the emphasis had been different...  Many things could have 
been done in shorter time, and they could have taught students more 
interesting stuff.  As it is, everybody I know *hates* theoretical CS, 
and I only don't because I know there are cool advanced things out there.

You could say we only learn '60s and '70s stuff.  The same goes for 
operating systems, for instance.  Semaphores are cool, and you should 
understand them.  But *writing* programs with them for 3-4 weeks, and 
the prof never telling you about more modern approaches (process 
calculi, message passing) that programmers can actually write 
non-deadlocking programs in (try to write a large multithreaded program 
given only semaphores ;) ), is nonsense.  Our programming teaching 
focuses on Algol-derivatives, nothing else.  For a *university*, the 
highest possible ivory tower in my country, that's clearly unacceptable.
I only have to think of all the things most graduates here will never 
have heard of in their whole life and I wonder why we spend so much time 
doing nothing.

>> There are really interesting things in theoretical (let's call it
>> "formal") CS, such as semantics, process calculi, type systems,
>> automata, but incomputability is more a legend that CS people should
>> have heard of, than something they should have to study in depth,
>> IMHO.
> 
> That's funny, as all of the interesting examples you mention involve
> incomputability e.g. in the form of undecidability. Don't you think it
> is at all interesting to know e.g. whether a compiler will actually
> finish when given a program to compile? Or do you think it's ok for
> any old program to get stuck occasionally? You can always push
> control-C?

Well, for instance most compilers process an abstract syntax tree 
underneath.  These ASTs are defined recursively/inductively, so at some 
point the recursion *has* to end.  This simple knowledge is far more 
relevant to me than the fact that there are loops or recursion patterns 
that nobody *wants* to program, that never terminate.  By the way, most 
computer software probably isn't even supposed to terminate!

-- 
A government which robs Peter to pay Paul can always
depend on the support of Paul.
	George Bernard Shaw
0
u.hobelmann (1643)
10/14/2005 12:01:17 PM
In article <3r9kcnFil2vlU1@individual.net>,
Ulrich Hobelmann  <u.hobelmann@web.de> wrote:
> Why does every CS student have to suffer through the Turing 
> stuff, but most haven't even *heard* of Lambda Calculus?  This just 
> doesn't make sense to me.

I have often wondered the same. I think it is mostly just for
historical reasons. Many concepts about computability are _much_
simpler to grasp in LC. For example, LC programs are much easier to
compose together than Turing machines.
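
Lauri's composability point is easy to demonstrate. As a sketch (Python
lambdas standing in for LC terms; the names are invented for this example),
Church numerals compose by ordinary application, with no splicing of state
tables as a Turing-machine composition would require:

```python
# Church numerals: an LC term is just a value, and composing two
# terms is ordinary function application.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

# Convert to a native int by applying the numeral to successor and 0.
to_int = lambda n: n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
```

Composing the corresponding Turing machines would mean wiring the halting
states of one machine into the start state of another, which is why
textbook TM constructions are so much more laborious.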

Then again, there is some justification for TMs: arguably, of the
various equivalent models of computation, TMs are most
"down-to-earth", i.e. closest to physical reality. This gives it
credibility, since the _point_ of the computational models is that
they should express things that are really computable in the real
world.

> Sure, one should know some facts, but I don't see the use of studying 
> all the theoretical background behind it as very necessary.  Students 
> can always read up on it, and one of the most important things to learn 
> in CS is *where* to find things, not exactly how everything works in 
> detail.  The world is too big for that.

You are exactly right. That's why basic undergrad education just
glances quickly at about gazillion things: you don't really learn
anything "deeply" but later, you'll remember "hey, I remember reading
about something like this at that class..."

And reading through the proof of the undecidability of the halting
problem is _not_ very deep. The average student walks away with a
vague feeling that there was some problem about halting that couldn't
be solved, and he'll know where to look for more info if he ever needs
it. It's valuable, but not very much. If the theorem was only stated
in the class _without_ going through the proof, the average student
would forget completely about it after the exam. :)


Lauri
0
la (473)
10/14/2005 12:09:44 PM
In article <dio2nq$uks$1@online.de>,
Joachim Durchholz  <jo@durchholz.org> wrote:
[...]
>I have always been sceptical about self-definable syntax. It tends to 
>encourage code that nobody but the original macro author understands.

Would you claim this about functions, datatypes or classes?
What's so different about (my-function a ...) versus (my-macro a ...)?
Don't you just see "my-function" or "my-macro" and look up its documentation?

Gary Baumgartner

0
gfb (30)
10/14/2005 1:36:13 PM
Ulrich Hobelmann <u.hobelmann@web.de> writes:

> The contrived construction of a funny machine that can't be proven to
> halt isn't interesting to me.  Many practical algorithms don't just
> infinite-loop, and the people writing code *know* that their code
> (most often) won't loop.  The same with Gödel's stuff.  I don't
> consider weird constructions practical or useful at all, just because
> there exists one totally made up case that refutes something.

It is interesting (and should be interesting to anyone with an
interest in computing) in the sense that it gives an upper bound on
what can be done.  There are quite practical problems whose
solvability would be very desirable, but which are not solvable in
general.  Many examples can be derived directly from Rice's theorem,
which in turn is an almost /immediate/ consequence of the
unsolvability of the Halting Problem.  In particular, we cannot
construct a machine that can compare two arbitrary programs for
semantic equality.  Similarly, there can be no "perfect" optimizer
that yields the smallest or fastest equivalent to a given program.

Of course, there are stronger constraints, such as complexity
constraints, that can make particular problems infeasible to be solved
precisely on a computer.  NP-hardness can be a serious bummer (unless
there exists a good enough approximation algorithm), and even a lower
bound of n^3 or n^2 can be a serious problem depending on the
application.

The bottom line is that in order to understand these things one has to
have a firm grasp of what one's computational model is, what it can
do, and what it can't.

By the way, the whole point of discussing the Halting Problem is to
show that there are /practically relevant/ problems which cannot be
solved on a computer.  If it weren't for that, a simple cardinality
argument is all that is needed to show that there exist incomputable
functions.

>> that way about all the math courses you took?  And second, a shame that you
>> don't see the beauty and the phenomenal genius in Turing's development of a
>> way to prove things about the theoretical limits of computers at a time when
>> they were just beginning to build actual ones.
>
> Ok, it's nice to know that there are limits, but I'd rather be
> concerned about practical limits.  Turing machines are a weird design
> to begin with (one-dimensional tape, infinite...).

Yes, discussing practical limits is important.  The notion of
NP-hardness and NP-completeness has spawned a huge amount of research.
Few theoreticians today worry about the Halting Problem -- that's just
a given.  Everybody worries about upper and lower bounds (and hopes
them to be at most polynomial).

The Turing Machine (which is not my favorite model of computation
either, btw.) is constructed the way it is to be /extremely simple/.
Apart from the idealization of having an infinite tape, it is obvious
that it can be practically realized.  With the lambda calculus or
mu-recursive functions one needs a few steps to see that (usually by
showing how to implement them on something akin to the TM).

> Mostly I just think my degree's first two years were a TOTAL waste of
> time.

I am beginning to agree with you.  At least you don't sound like you
"got" it.

> Ok, in my free time I read lots of (to me) interesting stuff,
> so the years afterwards weren't too exciting either, but had I had
> those years right at the beginning, they would have been.  There are
> really interesting things in theoretical (let's call it "formal") CS,
> such as semantics, process calculi, type systems, automata, but
> incomputability is more a legend that CS people should have heard of,
> than something they should have to study in depth, IMHO.

If you are capable of understanding type systems in depth, you should
have no problem with computability.  The proof of the unsolvability of
the Halting Problem is completely trivial and fits on a few lines once
you have the notion of a universal function in place.  A universal
function, in turn, is a fundamental concept that you can't weasel
around: it is the essence of most existing programming language
implementations.  If you study type systems, you have to understand
things like strong normalization, system F, etc.  One important
property that a type system either possesses or doesn't is
decidability.  And that's directly related to computability.
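
The "few lines" Blume mentions can be sketched informally (Python here, with
plain functions standing in for programs and `halts` for the hypothetical
decider; this is the standard diagonal argument, not his exact construction,
which would also need the universal function he describes):

```python
# Suppose someone hands us a total predicate halts(p) claiming to
# decide whether the zero-argument program p halts. We can always
# build a program that it answers wrongly about:

def make_diagonal(halts):
    def d():
        if halts(d):          # "you say I halt?
            while True:       #  then I loop forever."
                pass
        # "you say I loop? then I halt."
    return d

# Any concrete guess is refuted. Take the predicate that always
# answers "no, it loops":
always_no = lambda p: False
d = make_diagonal(always_no)
d()                  # returns immediately, so the answer "loops" was wrong
print(always_no(d))  # False, yet d plainly halts
```

The `always_no` predicate is wrong on its own diagonal program; a predicate
answering "yes" would send `d` into the infinite loop instead, so no total
`halts` can be correct.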

If you really think that studying (in)computability was a waste of
time, your entire CS education has been a waste of time.

Regards,
Matthias
0
find19 (1244)
10/14/2005 2:21:55 PM
Lauri Alanko <la@iki.fi> writes:

> Then again, there is some justification for TMs: arguably, of the
> various equivalent models of computation, TMs are most
> "down-to-earth", i.e. closest to physical reality. This gives it
> credibility, since the _point_ of the computational models is that
> they should express things that are really computable in the real
> world.

Yes, that's the main point behind TMs.  One other issue that comes up
with the LC is that there is a less obvious complexity model.  What is
the time and space consumption of a LC reduction sequence?  Counting
beta steps is not good enough as on a real machine one needs to
implement deep substitution.  One could go to explicit substitutions
(i.e., the lambda-sigma calculus with DeBruijn indices etc), but that
is *much* more complicated, especially to the uninitiated.  Similar
things can be said about space consumption.  Again, simply looking at
the size of an expression might not be good enough because of sharing.
Reasoning about sharing is not easy at all.  (There is a reason why
optimal reductions have been studied for such a long time.)

Of course, this being said, the TM is not a very good (read:
realistic) model for thinking about complexity either.  That's why
there are also plenty of other models out there.  But at least it is
easy to analyze on its own terms should you choose to do so.  That's
something that cannot be said quite so easily about the LC.

> And reading through the proof of the undecidability of the halting
> problem is _not_ very deep.

Yes.  The proof is dead simple.  All the real work is in the
construction of the universal function.

Matthias
0
find19 (1244)
10/14/2005 2:33:42 PM
Matthias Kretschmer <mccratch@gmx.net> writes:

> the trick is here, that most programming languages are equivalent to
> Turing Machines,

Not quite.
http://lambda-the-ultimate.org/node/view/203
http://lambda-the-ultimate.org/node/view/1038

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
0
qrczak (1266)
10/14/2005 2:41:33 PM
Ulrich Hobelmann <u.hobelmann@web.de> writes:

> Well, for instance most compilers process an abstract syntax tree
> underneath.  These ASTs are defined recursively/inductively, so at
> some point the recursion *has* to end.  This simple knowledge is far
> more relevant to me than the fact that there are loops or recursion
> patterns that nobody *wants* to program, that never terminate.

That's a naive (or shall I say: uninformed) view of what a compiler
does.  Yes, the frontend will terminate, fine.  What about the
optimizer?  If the optimizer performs abstract interpretation of some
form, it might not terminate unless one is careful.  If it performs
partial evaluation, it might not terminate.  If it "evaluates under
the lambda" it might not terminate.  A few years ago an otherwise
excellent entry in the ICFP programming contest (raytracer) stumbled
over this problem by being over-aggressive in its implementation of the
GML language where an infinite loop in an otherwise unused texture
would send it into an infinite loop...

These things /do/ matter in practice!

> By the way, most computer software probably isn't even supposed to
> terminate!

Nonsense.  Almost every piece of software I know is supposed to terminate, at
least given the appropriate input:

An application should quit after I press Apple-Q, an OS should
shut down after I issue the "shutdown" command, an interactive program
should eventually come back and wait for new input from the user, and
so on and so on.

And even if you consider embedded software that controls some
long-running device, there is usually a pretty obvious decomposition
of the program into a terminating program which is run repeatedly.
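
Blume's decomposition can be sketched like this (Python for illustration; the
names `handle` and `drive` are invented). The outer driver stands in for the
device's endless loop, while all termination reasoning happens on the inner
handler, which plainly halts on every event:

```python
def handle(event, state):
    """One terminating step: process a single event."""
    if event == "inc":
        return state + 1
    if event == "reset":
        return 0
    return state  # unknown events leave the state unchanged

def drive(events, state=0):
    # Stands in for `while True:` pulling events from a queue;
    # a finite list keeps the sketch runnable.
    for ev in events:
        state = handle(ev, state)
    return state

print(drive(["inc", "inc", "reset", "inc"]))  # 1
```

Proving the whole system correct then reduces to proving `handle`
terminates and preserves its invariants, a far easier obligation than
reasoning about the non-terminating loop directly.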

Matthias
0
find19 (1244)
10/14/2005 2:44:16 PM
Ulrich Hobelmann wrote:
> It seems that an awful lot of theoretical CS is just theory without 
> applications, while the practical people (who seem to hate theory) don't 
> even bother to use formal methods or study principles behind programming 
> for instance, but just create ad-hoc solutions / languages instead 
> (often under the discipline name software engineering).  The gap in 
> between is what I'd be interested in, but there aren't too many people 
> teaching that I guess.

The dichotomy of "practical" vs "theoretical" is false.

Good practical choices are often backed by good theory.

Besides the amount of theory that you need in practice varies with the
kind of practice you do.  If you're writing some simple application,
you may find all that theory totall wasted.  But, if you're writing
a compiler, you may find that theory comes in very handy.

> The contrived construction of a funny machine that can't be proven to 
> halt isn't interesting to me.  Many practical algorithms don't just 
> infinite-loop, and the people writing code *know* that their code (most 
> often) won't loop.  The same with Gödel's stuff.  I don't consider weird 
> constructions practical or useful at all, just because there exists one 
> totally made up case that refutes something.

The weird construction is NOT the point, it merely proves a very
important statement with significant implications on the limitations
of real practical programs.  As in: you cannot write a program to
decide whether another program will (or will not) halt on a particular
input.

And, we know, from practice, that infinite loops occur.  Put the two
together, and you understand why no compiler writer has bothered
implementing a warning for infinite loops, or various other
runtime pathologies, in spite of our ability to detect many of these
cases by visually inspecting the code.

Throw in complexity theory, analysis of algorithms, etc... and now
you have a good foundation for estimating the computational cost of
your practical solutions, BEFORE you start designing and implementing
them.

> Ok, it's nice to know that there are limits, but I'd rather be concerned 
> about practical limits.  Turing machines are a weird design to begin 
> with (one-dimensional tape, infinite...).

This particular limit (halting is undecidable) is PRACTICAL and
UNIVERSAL.  The universality derives primarily from the essential
simplicity of the Turing Machine.

And, you should note that the approach is VERY PRACTICAL, since
it makes generalization very easy.

> Mostly I just think my degree's first two years were a TOTAL waste of 
> time.  Ok, in my free time I read lots of (to me) interesting stuff, so 
> the years afterwards weren't too exciting either, but had I had those 
> years right at the beginning, they would have been.  There are really 
> interesting things in theoretical (let's call it "formal") CS, such as 
> semantics, process calculi, type systems, automata, but incomputability 
> is more a legend that CS people should have heard of, than something 
> they should have to study in depth, IMHO.

Computability (and lack thereof) is only a small part of a formal
education, and it happens to be a fundamental part of it, since it
defines a universal limit on our ability to solve problems by
algorithmic means.  I don't understand why you're singling it out.
-- 
A. Kanawati
0
antounk (33)
10/14/2005 3:11:31 PM
Matthias Blume wrote:
>> Mostly I just think my degree's first two years were a TOTAL waste of
>> time.
> 
> I am beginning to agree with you.  At least you don't sound like you
> "got" it.

Back then I got it alright.  I just stopped caring about these things. 
Since my time is limited, I'd rather spend it on things that help me in 
real-life, such as getting a job.  The world has enough problems, so I 
don't worry about computational limits anymore.  Ask 80+% of CS people 
in the world if they *really* apply this theory in their daily 
programming, and I think most don't.  Except in this newsgroup I've 
never heard of one.

>> Ok, in my free time I read lots of (to me) interesting stuff,
>> so the years afterwards weren't too exciting either, but had I had
>> those years right at the beginning, they would have been.  There are
>> really interesting things in theoretical (let's call it "formal") CS,
>> such as semantics, process calculi, type systems, automata, but
>> incomputability is more a legend that CS people should have heard of,
>> than something they should have to study in depth, IMHO.
> 
> If you are capable of understanding type systems in depth, you should
> have no problem with computability.  The proof of the unsolvability of
> the Halting Problem is completely trivial and fits on a few lines once
> you have the notion of a universal function in place.  A universal

It's not about understanding.  It was about being assigned lots of stupid 
homework involving these things.  Most (all?) of theoretical CS was MUCH 
easier than what I had to do for maths.

> function, in turn, is a fundamental concept that you can't weasel
> around: it is the essence most existing programming language
> implementations.  If you study type systems, you have to understand
> things like strong normalization, system F, etc.  One important
> property that a type system either possesses or doesn't is
> decidability.  And that's directly related to computability.
> 
> If you really think that studying (in)computability was a waste of
> time, your entire CS education has been a waste of time.

Maybe.  The part involving the university definitely, but I already knew 
that after spending two weeks there.  The only reason I studied all this 
time is to get a diploma, a piece of paper that will magically allow me 
to earn several times what a student's programming job pays (for the 
same work). ;)

-- 
A government which robs Peter to pay Paul can always
depend on the support of Paul.
	George Bernard Shaw
0
u.hobelmann (1643)
10/14/2005 3:34:12 PM
Marcin 'Qrczak' Kowalczyk schrieb:
> Matthias Kretschmer <mccratch@gmx.net> writes:
> 
> 
>>the trick is here, that most programming languages are equivalent to
>>Turing Machines,
> 
> Not quite.
> http://lambda-the-ultimate.org/node/view/203
> http://lambda-the-ultimate.org/node/view/1038

The paper quoted there is simply faulty.

It assumes that you cannot build a Turing Machine that computes outputs 
that depend on input which in turn depends on previous output.

But a TM can do what a computer program can:

1) If the cardinality of all potential input signals is finite, it can 
return a list of things to do for each case.
2) If the cardinality is countably infinite, the TM can compute another 
TM that will accept the next input. (Some kind of "telescoping" 
operation, I'd say - and I suspect that's how IO in purely functional 
languages works, too.)
3) Uncountably infinite input cannot be handled by either TMs or 
computer programs (all are limited to strings over finite alphabets, 
which makes the inputs countable), so this case doesn't arise.

I think that makes interactive TMs exactly equivalent to standard TMs. 
Or, rather, equivalent in those respects that matter: termination, 
decidability, etc.
(I'd have to provide a proof if this were a reviewed periodical. I'll 
leave that to readers *gg*)
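
Jo's "telescoping" idea can be sketched as a resumption (Python for
illustration; `echo_counter` is an invented example): an interactive process
is a function from one input to a pair of (output, next process), so
consuming a stream of inputs is just repeated application, and each single
step is an ordinary finite, TM-computable function:

```python
# An interactive process as a resumption: each step returns its
# output together with the process that will accept the next input.

def echo_counter(count=0):
    def step(inp):
        out = f"{count}: {inp}"
        return out, echo_counter(count + 1)  # "telescope" to the next TM
    return step

proc = echo_counter()
outputs = []
for msg in ["hi", "there"]:
    out, proc = proc(msg)
    outputs.append(out)
print(outputs)  # ['0: hi', '1: there']
```

This is essentially how stream- and continuation-based IO models for purely
functional languages are usually presented: interaction is recovered without
any single step being more powerful than a standard TM.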

Regards,
Jo
0
jo427 (1164)
10/14/2005 4:31:42 PM
Gary Baumgartner schrieb:
> In article <dio2nq$uks$1@online.de>,
> Joachim Durchholz  <jo@durchholz.org> wrote:
> [...]
> 
>>I have always been sceptical about self-definable syntax. It tends to 
>>encourage code that nobody but the original macro author understands.
> 
> Would you claim this about functions, datatypes or classes?
> What's so different about (my-function a ...) versus (my-macro a ...)?
> Don't you just see "my-function" or "my-macro" and look up its documentation?

Because macros can do things that functions can't.

In (say) Pascal, when I look at a function declaration and see

   function foo (baz: integer): integer

I know that it won't modify baz. That may already be all that I need to 
know about foo.

For macros, I need to inspect the full macro body to find out whether 
it's adding a "var" to that parameter name. I.e. I have to read the full 
sources, or believe in what the docs tell me (and the docs are almost 
always incomplete, so believing them usually isn't good workmanship).

IOW it comes down to the guarantees that the language gives me. If the 
macro language cannot weaken the guarantees that the base language gives 
me, then OK. If it can, it's dangerous - or, put more neutrally, the 
safeness of a language for use is the lower of the safeness of the macro 
language and that of the base language. (A non-existent macro language 
has infinite safety - you can't do anything dangerous with it *ggg*)

Regards,
Jo
0
jo427 (1164)
10/14/2005 4:41:24 PM
In article <dion3l$tnu$1@online.de>,
Joachim Durchholz  <jo@durchholz.org> wrote:
>Gary Baumgartner schrieb:
>> In article <dio2nq$uks$1@online.de>,
>> Joachim Durchholz  <jo@durchholz.org> wrote:
>> [...]
>>
>>>I have always been sceptical about self-definable syntax. It tends to
>>>encourage code that nobody but the original macro author understands.
>>
>> Would you claim this about functions, datatypes or classes?
>> What's so different about (my-function a ...) versus (my-macro a ...)?
>> Don't you just see "my-function" or "my-macro" and look up its
>>  documentation?
>
>Because macros can do things that functions can't.
>
>In (say) Pascal, when I look at a function declaration and see
>
>   function foo (baz: integer): integer
>
>I know that it won't modify baz. That may already be all that I need to
>know about foo.

But in almost all cases that's not all you need to know. Otherwise, you
 could just not call foo.

>For macros, I need to inspect the full macro body to find out whether
>it's adding a "var" to that parameter name. I.e. I have to read the full
>sources, or believe in what the docs tell me (and the docs are almost
>always incomplete, to believing them usually isn't good workmanship).

For functions ... I have to read the full sources, or believe what the
 docs tell me about what will be returned, side-effects, etc.

>IOW it comes down to the guarantees that the language gives me. If the
>macro language cannot weaken the guarantees that the base language gives
>me, then OK. If it can, it's dangerous - or, put more neutrally, the
>safeness of a language for use is the lower of the safeness of the macro
>language and that of the base language.

I can essentially agree with your neutral version, and the flip side is
 that expressivity is the upper of the macro and base language. You are
 comfortable with a certain degree of safety (functions without call
 by reference) and expressiveness, but not less safety than that with more
 expressiveness (functions with call by reference, macros, etc).

Since it's a matter of degree, I think it needs more justification to say
 that a certain balance (except near the endpoints) is the right one.


By the way, macros can improve safety by removing more repetition.
 For example, I teach a course whose slides contain the following Java code:

  public static Node delete(Node front, Object o) {
    Node previous = null;
    Node current = front;
    while (current != null && !current.value.equals(o)) {
      previous = current;
      current = current.link;
    }
    if (current != null) {
      if (current == front) {
        front = current.link;
      } else {
        previous.link = current.link;
      }
    }
    return front;
  }

This is a case of a common pattern that Zahn and Knuth noted over 30 years ago:

  while (!a && !b) {
    ...
  }
  if (a) {
    post-a
  } else {
    post-b
  }

Notice however that the original code tests !a to select post-b, presumably
 for efficiency.

Wouldn't it be nice to be able to:

  specify a and b once, avoiding copying mistakes

  not have to negate and reason about negation, avoiding more mistakes

  say "until", just like we can in English, and keep the post-processing
   next to the condition that triggered it

  not have to read "a" twice, not have the computer execute it twice

Scheme doesn't have until built in, but I can define it:

  ;; Until loop
  ;
  ;   (until condition body ...)
  ;
  ;     Like a while loop, except ends when condition is *true*
  ;
  ;     If the condition is a disjunction
  ;
  ;       (or sub-condition
  ;           sub-condition
  ;           ...)
  ;
  ;      it may be written in the following form to allow post-processing
  ;      based on the (first) sub-condition causing termination:
  ;
  ;       (one-of (sub-condition [optional post-processing])
  ;               (sub-condition [optional post-processing])
  ;               ...)
  ;
  (define-syntax until
    (syntax-rules (one-of)
      ((_ (one-of clauses ...) do0! ...)
       (letrec ((loop (lambda () (cond clauses ... (#t do0! ... (loop))))))
         (loop)))
      ((_ condition do0! ...) (until (one-of (condition)) do0! ...))))


To compare with the Java delete, I'll first:

  (define value car)
  (define link cdr)
  (define set-link! set-cdr!)
  (define == eq?)

Now:

  (define (delete front o)
    (let ((previous null)
          (current front))
      (until (one-of ((null? current))
                     ((equal? (value current) o)
                      (if (== current front)
                        (set! front (link current))
                        (set-link! previous (link current)))))
        (set! previous current)
        (set! current (link current))))
    front)

I could go further and define macros to capture the common forms:

  (set! v (op v a ...)) ; update a variable based on its current value
  (if c (op1 a ...) (op2 a ...) b ...) ; select operation to apply to a value

 both of which appear in the above code.

>[...]

Gary Baumgartner
0
gfb (30)
10/14/2005 6:38:09 PM
Marcin 'Qrczak' Kowalczyk wrote:
> Matthias Kretschmer <mccratch@gmx.net> writes:
> 
>>the trick is here, that most programming languages are equivalent to
>>Turing Machines,
> 
> Not quite.
> http://lambda-the-ultimate.org/node/view/203
> http://lambda-the-ultimate.org/node/view/1038

And <http://c2.com/cgi/wiki?InteractiveComputationIsMorePowerfulThanNonInteractive>.

-- 
David Hopwood <david.nospam.hopwood@blueyonder.co.uk>

0
10/14/2005 8:01:41 PM
Joachim Durchholz wrote:
> Marcin 'Qrczak' Kowalczyk schrieb:
>> Matthias Kretschmer <mccratch@gmx.net> writes:
>>
>>> the trick is here, that most programming languages are equivalent to
>>> Turing Machines,
>>
>> Not quite.
>> http://lambda-the-ultimate.org/node/view/203
>> http://lambda-the-ultimate.org/node/view/1038
> 
> The paper quoted there is simply faulty.
> 
> It assumes that you cannot build a Turing Machine that computes outputs
> that depend on input which in turn depends on previous output.
> 
> But a TM can do what a computer program can:
> 
> 1) If the cardinality of all potential input signals is finite, it can
> return a list of things to do for each case.

This cardinality is not finite in most interactive models, and cannot be
made finite without introducing severe restrictions.

> 2) If the cardinality is countably infinite, the TM can compute another
> TM that will accept the next input. (Some kind of "telescoping"
> operation, I'd say - and I suspect that's how IO in purely functional
> languages works, too.)

You've just described a Sequential Interaction Machine, which is not itself
a TM; it is constructed from a TM.  Also, a SIM is deterministic, while most
interactive models are not.

<http://c2.com/cgi/wiki?InteractiveComputationIsMorePowerfulThanNonInteractive>
refutes the argument you're trying to make.

-- 
David Hopwood <david.nospam.hopwood@blueyonder.co.uk>
0
10/14/2005 8:23:18 PM
David Hopwood schrieb:
> Joachim Durchholz wrote:
> 
>>1) If the cardinality of all potential input signals is finite, it can
>>return a list of things to do for each case.
> 
> This cardinality is not finite in most interactive models, and cannot be
> made finite without introducing severe restrictions.

I listed that case just for completeness.

>>2) If the cardinality is countably infinite, the TM can compute another
>>TM that will accept the next input. (Some kind of "telescoping"
>>operation, I'd say - and I suspect that's how IO in purely functional
>>languages works, too.)
> 
> You've just described a Sequential Interaction Machine, which is not itself
> a TM; it is constructed from a TM.  Also, a SIM is deterministic, while most
> interactive models are not.

Well, computer programs are deterministic (nondeterminism, while in 
theory interesting, is generally avoided in practice because it makes 
debugging far too difficult), so that doesn't seem to be a serious 
restriction. (Nondeterministic calculi could be interesting to model the 
entire human-machine interaction.)

Still, I'm reluctant to accept that "a machine that can do interaction 
is more powerful than a TM".

Let me try a proof outline:
1) Define the non-deterministic Turing machine (NDTM) with a tape that 
can be modified everywhere the TM didn't read or write yet.
2) TMs with a countably infinite number of tapes are equal in power to 
standard TMs. Model the nondeterminism in the NDTM by providing a 
(countably infinite) number of tapes that model all the possible 
modifications of the "outside world".
3) Postulate that for a concrete run, the "outside world" will choose 
what tape cell modifications will be in effect.

This construction leaves the NDTM with the task of providing answers for 
all possible inputs (something that a real program must also do), while 
keeping it at the same power as a standard TM.

I don't think that the conclusions are wrong, though I'm pretty sure 
that the reasoning has a lot of gaping holes in need of fixing :-)

I'm not even sure that there's any relevance in the question whether the 
tape is fixed beforehand or not - it may well be that all the proofs 
associated with TMs are independent of whether the tape's contents are
fixed before the run or not.

Regards,
Jo
0
jo427 (1164)
10/14/2005 8:56:39 PM
Gary Baumgartner schrieb:
> In article <dion3l$tnu$1@online.de>,
> Joachim Durchholz  <jo@durchholz.org> wrote:
> 
>>Gary Baumgartner schrieb:
>>
>>>In article <dio2nq$uks$1@online.de>,
>>>Joachim Durchholz  <jo@durchholz.org> wrote:
>>>[...]
>>>
>>>>I have always been sceptical about self-definable syntax. It tends to
>>>>encourage code that nobody but the original macro author understands.
>>>
>>>Would you claim this about functions, datatypes or classes?
>>>What's so different about (my-function a ...) versus (my-macro a ...)?
>>>Don't you just see "my-function" or "my-macro" and look up its
>>> documentation?
>>
>>Because macros can do things that functions can't.

I have to modify this one:

I have yet to see things macros can do that functions (including 
higher-order ones) cannot do in a safer manner.

(Just take a look at how Haskell people create "sublanguages". Look, ma, 
no macros! *ggg*)

>>In (say) Pascal, when I look at a function declaration and see
>>
>>  function foo (baz: integer): integer
>>
>>I know that it won't modify baz. That may already be all that I need to
>>know about foo.
> 
> But in almost all cases that's not all you need to know. Otherwise, you
>  could just not call foo.

Such limited knowledge is useful in programmers' everyday practice: 
hunting bugs, and adapting code to new requirements.

Knowing the trails that need *not* be chased in advance is an invaluable 
asset in such a situation. It allows one to concentrate on one detail 
and leave all the other details out.

> By the way, macros can improve safety by removing more repetition.

Agreed. (Again, HOFs can be used to achieve the same effect.)
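
As a rough illustration of that HOF alternative, the until/one-of construct
from Gary's macro can be approximated with closures; a sketch in Python
(hypothetical names, not from the thread; the conditions must be wrapped in
thunks because, unlike a macro, a function evaluates its arguments eagerly):

```python
def until(clauses, body):
    # clauses: list of (condition, post) pairs of zero-argument callables.
    # Run `body` until some condition first returns true, then run that
    # clause's post-action and return its result (mirroring one-of).
    while True:
        for condition, post in clauses:
            if condition():
                return post()
        body()

# Example: count up until n reaches 3, then report it.
state = {"n": 0}
result = until(
    [(lambda: state["n"] >= 3, lambda: state["n"])],
    lambda: state.update(n=state["n"] + 1),
)
```

The thunk-wrapping is exactly the noise the macro version hides.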

Regards,
Jo
0
jo427 (1164)
10/14/2005 9:07:03 PM
On Fri, 14 Oct 2005, Joachim Durchholz wrote:

> I have to modify this one:
>
> I have yet to see things macros can do that functions (including higher-order 
> ones) cannot do in a safer manner.
>

Build new types.

> (Just take a look at how Haskell people create "sublanguages". Look, ma, no 
> macros! *ggg*)
>

Yet there's still a use for Template Haskell.

-- 
flippa@flippac.org

A problem that's all in your head is still a problem.
Brain damage is but one form of mind damage.
0
flippa (196)
10/14/2005 11:45:24 PM
On Fri, 14 Oct 2005, Joachim Durchholz wrote:

> Still, I'm reluctant to accept that "a machine that can do interaction is 
> more powerful than a TM".
>

No TM can launch an ICBM, for one. More generally, if you can interact 
with things you've a chance of finding something more powerful than a TM 
to interact with.

-- 
flippa@flippac.org

Sometimes you gotta fight fire with fire. Most 
of the time you just get burnt worse though.
0
flippa (196)
10/14/2005 11:47:08 PM
Ulrich Hobelmann <u.hobelmann@web.de> writes:

> The contrived construction of a funny machine that can't be proven to
> halt isn't interesting to me.  Many practical algorithms don't just
> infinite-loop, and the people writing code *know* that their code
(most often) won't loop.  The same with Gödel's stuff.  I don't
> consider weird constructions practical or useful at all, just because
> there exists one totally made up case that refutes something.

Occasionally someone on comp.lang.lisp states ``Yeah, in *theory* you
cannot tell if a program halts, but in *practice* it should be easy.''
They see Gödel's proof as an `artificial' construct of a `pathological
case' that would never occur in a `real program'.

However, it turns out that the halting problem and issues of
undecidability are *trivial* to uncover in very simple problems (Gödel
took the further step to prove that there is no way to paper over the
simple problems).  Here are two simple examples: 

(define k0 '(#t () (x)))

(define (kernel i s)
  (list (not (car s))
	(if (car s)
	    (cadr s)
	    (cons i (cadr s)))
	(cons 'y (cons i (cons 'z (caddr s))))))

(define (mystery list)
  (let ((result (foldl kernel k0 list)))
    (if (null? (cadr result))
        #f
        (mystery 
          (if (car result)
              (cadr result)
              (caddr result))))))

In this first example, we fold a kernel function over a list and
iterate on the result.  The kernel function trivially halts, and the
fold function will halt on any finite list if the kernel does.
Nonetheless, no one has been able to prove whether the mystery
function halts or not.  So what is it about the mystery function that
defies analysis?  How is this different from any other iteration that
someone *knows* won't loop?

Here's a different example:

(define (base-bump base n)
  (if (< n base)
      n
      (do ((exponent    0 (+ exponent 1))
           (probe    base (* probe base))
           (divisor     1 probe))
          ((> probe n)
           (+ (* (expt (+ base 1) (base-bump base exponent))
                 (quotient n divisor))
              (base-bump base (remainder n divisor)))))))

(define (goodstein seed)
  (do ((i 2 (+ i 1))
       (n seed (- (base-bump i n) 1)))
      ((zero? n))
    (newline)
    (display n)))

The `hereditary base-n representation' of a number is when you write
the number as a sum of powers-of-n and recursively write the exponents
in hereditary base-n representation.  For example, the number 35 in
base 2 is 

    (+ (expt 2 5) (expt 2 1) 1)

but we can rewrite 5 as
    (+ (expt 2 2) 1)

so the fully expanded hereditary base-2 representation of 35 is

    (+ (expt 2 (+ (expt 2 2) 1))
       (expt 2 1)
       1)

The `base-bump' operation works by taking a number in hereditary
base-n representation and replacing every occurrence of n with n+1.
`Base-bump'ing 35 would be

    (+ (expt 3 (+ (expt 3 3) 1))
       (expt 3 1)
       1)

or, in decimal, 

    22876792454965

As you can see, bumping the base can be quite impressive.  Suppose we
subtract one from that result, giving 22876792454964, and bump the
base from 3 to 4:

    (+ (expt 4 (+ (expt 4 4) 1))
       (expt 4 1))

or, in decimal,
    536312317197703883982960999928233845099174632823695735108942
    457748870561202941879072074971926676137107601274327459442034
    15015531247786279785734596024336388

The next iteration gives us a number with 2185 digits and the one
after that a number with 36307 digits.

There are two interesting things about this process.  First, despite
the huge rate of growth, the sequence converges to zero for all
positive integers.  Second, the assertion that the sequence converges
is undecidable in number theory (Peano axioms).  In other words, you
need not resort to Gödelian techniques to find undecidable programs.
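
The arithmetic above is easy to check mechanically. Here is a rough Python
transcription of the two Scheme procedures (a sketch; `goodstein_prefix` is a
hypothetical helper that collects the first few sequence values instead of
printing them):

```python
def base_bump(base, n):
    # Interpret n in hereditary base-`base` notation and replace
    # every occurrence of `base` with `base + 1`.
    if n < base:
        return n
    exponent, probe, divisor = 0, base, 1
    while probe <= n:            # find the largest power of base <= n
        exponent += 1
        divisor = probe          # divisor ends up as base**exponent
        probe *= base
    return ((base + 1) ** base_bump(base, exponent) * (n // divisor)
            + base_bump(base, n % divisor))

def goodstein_prefix(seed, count):
    # First `count` values of the Goodstein sequence starting at seed:
    # bump the base, subtract one, repeat.
    i, n, out = 2, seed, []
    for _ in range(count):
        out.append(n)
        n = base_bump(i, n) - 1
        i += 1
    return out
```

For example, base_bump(2, 35) reproduces the 22876792454965 above, and the
digit counts of the next few sequence values match the 2185 and 36307 quoted.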

> Mostly I just think my degree's first two years were a TOTAL waste of
> time.  Ok, in my free time I read lots of (to me) interesting stuff,
> so the years afterwards weren't too exciting either, but had I had
> those years right at the beginning, they would have been.  There are
> really interesting things in theoretical (let's call it "formal") CS,
> such as semantics, process calculi, type systems, automata, but
> incomputability is more a legend that CS people should have heard of,
> than something they should have to study in depth, IMHO.

Incomputability is more practical than you think.
 
0
jmarshall (140)
10/15/2005 1:37:35 AM
Seeing as I've been trying to learn Lisp and Scheme, I'll just try to
translate Joe's code into OCaml and SML.

Joe Marshall wrote:
> (define k0 '(#t () (x)))

The quoted symbols can be substituted with polymorphic variants, so I'll
do OCaml first:

let k0 = (true, [], [`x]);;

> (define (kernel i s)
>   (list (not (car s))
>         (if (car s)
>             (cadr s)
>             (cons i (cadr s)))
>         (cons 'y (cons i (cons 'z (caddr s))))))

let kernel (a, b, c) i =
  (not a, (if a then b else i :: b), `y :: i :: `z :: c);;

> (define (mystery list)
>   (let ((result (foldl kernel k0 list)))
>     (if (null? (cadr result))
>         #f
>         (mystery
>           (if (car result)
>               (cadr result)
>               (caddr result))))))

let rec mystery list = match List.fold_left kernel k0 list with
  | (_, [], _) -> `f
  | (a, b, c) -> mystery (if a then b else c);;

> (define (base-bump base n)
>   (if (< n base)
>       n
>       (do ((exponent    0 (+ exponent 1))
>            (probe    base (* probe base))
>            (divisor     1 probe))
>           ((> probe n)
>            (+ (* (expt (+ base 1) (base-bump base exponent))
>                  (quotient n divisor))
>               (base-bump base (remainder n divisor)))))))

SML makes arbitrary-precision integer arithmetic easier so I'll use MLton
here:

fun expt n m = if m=0 then 1 else n*expt n (m-1)

fun base_bump base n =
    if n<base then n else
    let fun aux exponent probe divisor =
            if probe <= n then aux (exponent+1) (probe*base) probe else
            expt (base+1) (base_bump base exponent) * (n div divisor) +
            base_bump base (n mod divisor) in
        aux 0 base 1
    end

> (define (goodstein seed)
>   (do ((i 2 (+ i 1))
>        (n seed (- (base-bump i n) 1)))
>       ((zero? n))
>     (newline)
>     (display n)))

fun goodstein seed =
    let fun aux i n =
            if n<>0 then (
                print (IntInf.toString n^"\n");
                aux (i+1) (base_bump i n - 1))
            else () in
        aux 2 seed
    end

-- 
Dr Jon D Harrop, Flying Frog Consultancy
http://www.ffconsultancy.com
0
usenet116 (1778)
10/15/2005 3:53:32 AM
In comp.lang.scheme Ulrich Hobelmann <u.hobelmann@web.de> wrote:

>The contrived construction of a funny machine that can't be proven to 
>halt isn't interesting to me.  Many practical algorithms don't just 

Well. The point is, that it's not just a "funny machine", but _every_
machine conceivable (via reciprocal emulation of various existing
machine types, and the Church-Turing-Thesis as a strong indication
for those that haven't been conceived yet.)  It's mostly a philosophical
result of, imho, quite some importance.  In extension of it, if the
human mind could be shown to be just an electrochemical
contraption (which imho is quite the possibility), this would then
indicate that the human mind is no more powerful than a TM.  Wouldn't
you agree that, if such a thing could ever be proven, it would be
a rather interesting result?  This is to show that it's not just
some idle pastime of some weird mathematicians but a topic of broad
interest, from social, philosophical and theological effects to
technical ones (can we build programs that can "think" like humans?
etc.)

mkb.
0
mkb (1001)
10/15/2005 4:21:30 AM
In comp.lang.scheme Ulrich Hobelmann <u.hobelmann@web.de> wrote:

>But isn't the Lambda Calculus, or Recursive Functions equivalent in 
>power to the funny tape machine?  Both are much easier to understand, 
>IMHO, and the first one even provides the basis for lots of programming 
>languages.  Why does every CS student have to suffer through the Turing 
>stuff, but most haven't even *heard* of Lambda Calculus?  This just 
>doesn't make sense to me.

What's "space", as defined with recursive functions, or with the
lambda calculus?  The TM has certain advantages, like being a lot
nearer to actual computers (i.e., physical devices) than most other
mathematical notions, while at the same time being simplistic enough
not having to bother with unnecessary complications (like, space
being calculated over a sum over the bits being used at clock ticks
with a RAM (random access machine with dyadic coding), instead, you
just count the tape cells that have been used.)  IMHO it's a rather
ingenious and elegant model.

mkb.
0
mkb (1001)
10/15/2005 4:30:27 AM
In comp.lang.scheme Joachim Durchholz <jo@durchholz.org> wrote:

>Because macros can do things that functions can't.
>In (say) Pascal, when I look at a function declaration and see
>I know that it won't modify baz. That may already be all that I need to 
>know about foo.

A good approach to guard against such problems is:

 * to carefully document all side effects (whether it's a Lisp macro,
   or a C function that stores something through a pointer argument),
 * only use macros sparingly, where functions won't work
 * apply common sense, think about the person having to read and
   understand the program, follow common idiom and the KISS principle

Unfortunately, all of the above won't help with bad programmers (or
if you're weeks behind the deadline.)

The worst thing that happens is when someone generously uses macros
"for sports" (or template metaprogramming in C++, or an extremely
terse syntax in Perl, etc.) because he wants to show off his
self-perceived ingenuity[1] or whatever, and the whole source is
also completely undocumented. Unfortunately, one comes across such
programs more often than one would wish.. ;-/

mkb.

[1] "If it was difficult to write, it should be difficult to read."
0
mkb (1001)
10/15/2005 6:46:51 AM
Philippa Cowderoy schrieb:
> On Fri, 14 Oct 2005, Joachim Durchholz wrote:
> 
>> Still, I'm reluctant to accept that "a machine that can do interaction 
>> is more powerful than a TM".
> 
> No TM can launch an ICBM, for one.

Simply attach it to a detector that fires when the TM writes a one to 
a given tape cell, and let that launch the ICBM.

That's near enough to real-world computers for me that I don't need a 
more powerful model. (By whatever definition of "more powerful".)

> More generally, if you can interact with things you've a chance of
> finding something more powerful than a TM to interact with.

Not sure who's "you" in that scenario - the computer + software? the 
"non-computer" part of the world?

Regards,
Jo
0
jo427 (1164)
10/15/2005 9:26:20 AM
Philippa Cowderoy schrieb:
> On Fri, 14 Oct 2005, Joachim Durchholz wrote:
> 
>> I have to modify this one:
>>
>> I have yet to see things macros can do that functions (including 
>> higher-order ones) cannot do in a safer manner.

"as safe or safer" should be here    ^^^^^

> Build new types.

Not a problem if types are first-class values.

On systems where they are purely compile-time attributes, I think if a 
macro can do things that the language lacks, then it's more an argument 
that that specific language has a deficitary type system, than an 
argument that macros serve a useful purpose.

OTOH, if a language is *designed* to do types via some macro mechanism, 
it all depends on how the macros work. My reservations don't come from 
macros per se (if you will, functions are a kind of macros, too), they 
come from making macros so flexible that a macro can do anything, in 
particular escape language rules.

>> (Just take a look at how Haskell people create "sublanguages". Look, 
>> ma, no macros! *ggg*)
> 
> Yet there's still a use for Template Haskell.

Hmm... I was always wondering what the motives behind TH were, which 
problems it would solve.
And what the guarantees of the language are.

Regards,
Jo
0
jo427 (1164)
10/15/2005 9:34:40 AM
Matthias Buelow wrote:
> In comp.lang.scheme Ulrich Hobelmann <u.hobelmann@web.de> wrote:
> 
>> But isn't the Lambda Calculus, or Recursive Functions equivalent in 
>> power to the funny tape machine?  Both are much easier to understand, 
>> IMHO, and the first one even provides the basis for lots of programming 
>> languages.  Why does every CS student have to suffer through the Turing 
>> stuff, but most haven't even *heard* of Lambda Calculus?  This just 
>> doesn't make sense to me.
> 
> What's "space", as defined with recursive functions, or with the
> lambda calculus?  The TM has certain advantages, like being a lot
> nearer to actual computers (i.e., physical devices) than most other
> mathematical notions, while at the same time being simplistic enough
> not having to bother with unnecessary complications (like, space
> being calculated over a sum over the bits being used at clock ticks
> with a RAM (random access machine with dyadic coding), instead, you
> just count the tape cells that have been used.)  IMHO it's a rather
> ingenious and elegant model.

I guess you could do what compilers for LC-derived languages do: count 
used variables at any given time.  Since scoping in LC is strictly 
lexical, with no funny additions, that shouldn't be too hard.  You can 
then make a relation of space per function invocation.

-- 
A government which robs Peter to pay Paul can always
depend on the support of Paul.
	George Bernard Shaw
0
u.hobelmann (1643)
10/15/2005 9:38:40 AM
Joe Marshall wrote:
> Occasionally someone on comp.lang.lisp states ``Yeah, in *theory* you
> cannot tell if a program halts, but in *practice* it should be easy.''
> They see Gödel's proof as an `artificial' construct of a `pathological
> case' that would never occur in a `real program'.
> 
> However, it turns out that the halting problem and issues of
> undecidability are *trivial* to uncover in very simple problems (Gödel
> took the further step to prove that there is no way to paper over the
> simple problems).  Here are two simple examples: 

No, what I meant is that the problem doesn't interest me, because 
*programmers* write and test programs, and usually those hand-written 
programs don't infinite-loop.  Of course a general halt-detector for 
arbitrary binary code doesn't exist, but that's fine by me.  The 
important thing is that people can check if their programs halt, and 
that's usually humanly decidable, if you have some value that decreases 
over every loop iteration and exits when zero.
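
That "decreasing value" argument is the classic loop-variant proof of
termination; a minimal sketch in Python (a hypothetical example, not from the
post):

```python
def gcd(a, b):
    # Loop variant: b is a non-negative integer that strictly decreases
    # on every iteration (since a % b < b), so by well-founded induction
    # on the naturals the loop must terminate.
    assert a >= 0 and b >= 0
    while b:
        a, b = b, a % b
    return a
```

The same style of argument is exactly what fails for the genuinely hard
cases: nobody has found such a decreasing measure for Joe's mystery
function or for Collatz-style iterations.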

I don't think there are programmers that do really complicated 
(maybe-not-halting) algorithms, that just sit down and code them without 
analysis beforehand.

-- 
A government which robs Peter to pay Paul can always
depend on the support of Paul.
	George Bernard Shaw
0
u.hobelmann (1643)
10/15/2005 9:43:11 AM
Matthias Buelow wrote:
> In comp.lang.scheme Ulrich Hobelmann <u.hobelmann@web.de> wrote:
> 
>> The contrived construction of a funny machine that can't be proven to 
>> halt isn't interesting to me.  Many practical algorithms don't just 
> 
> Well. The point is, that it's not just a "funny machine", but _every_
> machine conceivable (via reciprocal emulation of various existing
> machine types, and the Church-Turing-Thesis as a strong indication
> for those that haven't been conceived yet.)  It's mostly a philosophical
> result of, imho, quite some importance.  In extension of it, if the
> human mind could be shown that it is just an electrochemical
> contraption (which imho is quite the possibility), this would then
> indicate that the human mind is no more powerful than a TM.  Wouldn't
> you agree that, if such a thing could ever be proven, it would be
> a rather interesting result?  This is to show that it's not just
> some idle pastime of some weird mathematicians but a topic of broad
> interest, from social, philosophical and theological effects to
> technical ones (can we build programs that can "think" like humans?
> etc.)

Of course it's interesting to know.  I just had to spend too much time 
and work on it IMHO.  Maybe I'm just pissed because university wasn't 
what I expected before I went there ;)
Well, it won't be long anymore.

-- 
A government which robs Peter to pay Paul can always
depend on the support of Paul.
	George Bernard Shaw
0
u.hobelmann (1643)
10/15/2005 9:45:07 AM
Ulrich Hobelmann schrieb:
> No, what I meant is that the problem doesn't interest me, because 
> *programmers* write and test programs, and usually those hand-written 
> programs don't infinite-loop.

The key term is "usually". This means that *occasionally* this is a 
problem anyway. And knowing about stuff that's only useful occasionally 
is still a net gain.

> I don't think there are programmers that do really complicated 
> (maybe-not-halting) algorithms, that just sit down and code them without 
> analysis beforehand.

Sure. But for that kind of analysis, knowledge about the halting problem 
and all the associated stuff is a valuable tool.

Regards,
Jo
0
jo427 (1164)
10/15/2005 12:07:27 PM
In comp.lang.scheme Ulrich Hobelmann <u.hobelmann@web.de> wrote:

>I guess you could do what compilers do LC-derived languages do: count 
>used variables at any given time.  Since scoping in LC is strictly 
>lexical, with no funny additions, that shouldn't be too hard.  You can 
>then make a relation of space per function invocation.

Of course one can. The issue is this: You have to define something,
it isn't obvious. Not even with the RAM model is it obvious, due
to a RAM program not doing just local modifications (like a TM):
if a RAM program touches memory cells 1 and 1000, does it mean the
algorithm is consuming 2 cells, or 1000?  With a Turing Machine,
both space as well as time are intuitively defined.
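
To make that concrete, here is a toy single-tape TM stepper (a hypothetical
sketch, not from the post) where time is simply the number of steps and space
is the number of distinct tape cells visited:

```python
def run_tm(delta, tape, state="s", halt="halt", blank="_"):
    # delta maps (state, symbol) -> (write, move, next_state),
    # with move in {-1, 0, +1}.
    cells = dict(enumerate(tape))   # sparse tape, indexed by position
    pos, steps, visited = 0, 0, {0}
    while state != halt:
        write, move, state = delta[(state, cells.get(pos, blank))]
        cells[pos] = write
        pos += move
        visited.add(pos)
        steps += 1
    return steps, len(visited)      # (time, space)

# Unary increment: scan right over the 1s, append a 1, halt.
delta = {("s", "1"): ("1", 1, "s"),
         ("s", "_"): ("1", 0, "halt")}
```

On input "111" this takes 4 steps and touches 4 cells; both measures fall out
of the model with no extra definitions.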

mkb.
0
mkb (1001)
10/15/2005 12:47:56 PM
On Sat, 15 Oct 2005, Joachim Durchholz wrote:

> Philippa Cowderoy schrieb:
>> On Fri, 14 Oct 2005, Joachim Durchholz wrote:
>> 
>>> Still, I'm reluctant to accept that "a machine that can do interaction is 
>>> more powerful than a TM".
>> 
>> No TM can launch an ICBM, for one.
>
> Simply attach it to a detector that detects it if the TM writes a One to a 
> given tape cell, and let that launch the ICBM.
>
> That's near enough to real-world computers for me that I don't need a more 
> powerful model. (By whatever definition of "more powerful".)
>

Have one end of the tape for input from the outside world, or a second 
tape for IO and I'm perfectly happy - it doesn't take a massively strange 
design, just an explicit IO channel somewhere.

-- 
flippa@flippac.org

"My religion says so" explains your beliefs. But it doesn't explain 
why I should hold them as well, let alone be restricted by them.
0
flippa (196)
10/15/2005 2:54:43 PM
Ulrich Hobelmann wrote:
> No, what I meant is that the problem doesn't interest me, because 
> *programmers* write and test programs, and usually those hand-written 
> programs don't infinite-loop.  Of course a general halt-detector for 
> arbitrary binary code doesn't exist, but that's fine by me.  The 
> important thing is that people can check if their programs halt, and 
> that's usually humanly decidable, if you have some value that decreases 
> over every loop iteration and exits when zero.

In spite of the halting problem, it is still possible to perform
signficant formal analysis on programs, and decide a large number of
issues on them.  This requires specialized notations, properly stated
invariants, and resources that are not commonly available.

The unassisted human is unreliable, and can't write much code
without introducing error, and can't read much code without
overlooking errors.  This is why we test software.

As someone famous said: Beware of the following program, I have
proven it correct, but I have not tested it.

So, ultimately, theory complements practice.

> I don't think there are programmers that do really complicated 
> (maybe-not-halting) algorithms, that just sit down and code them without 
> analysis beforehand.

So, if your education does not include this foundational theory, how do
you analyze a complex problem?

-- 
A. Kanawati
NO.antounk.SPAM@comcast.net
0
antounk (33)
10/15/2005 3:35:03 PM
Joachim Durchholz wrote:
> Philippa Cowderoy schrieb:
>> On Fri, 14 Oct 2005, Joachim Durchholz wrote:
>>
>>> Still, I'm reluctant to accept that "a machine that can do
>>> interaction is more powerful than a TM".
>>
>> No TM can launch an ICBM, for one.
> 
> Simply attach it to a detector that detects it if the TM writes a One to
> a given tape cell, and let that launch the ICBM.

Doesn't work. An ICBM launcher has to be able to respond to its input
in real-time, i.e. it is a single process that launches the ICBM *when*
it receives an interactive input.

A TM is not sufficient to model this properly. If you run the TM multiple
times in a loop, you no longer have just a TM; you need some way to model
the overall process. Interactive models of computation allow you to do that,
even for much more complicated examples.

(For this particular example, you need a real-time interactive model.
It wouldn't be a good idea to have an arbitrary delay before launching
the ICBM.)

-- 
David Hopwood <david.nospam.hopwood@blueyonder.co.uk>
0
10/15/2005 4:27:38 PM
["Followup-To:" header set to comp.lang.functional.]
On 2005-10-15, Antoun Kanawati <antounk@comcast.net> wrote:
> As someone famous said: Beware of the following program, I have
> proven it correct, but I have not tested it.

Knuth: "Beware of bugs in the above code; I have only proved it correct,
not tried it."

-- 
Aaron Denney
-><-
0
wnoise1 (65)
10/15/2005 5:07:17 PM
Antoun Kanawati wrote:
>> I don't think there are programmers that do really complicated 
>> (maybe-not-halting) algorithms, that just sit down and code them 
>> without analysis beforehand.
> 
> So, if your education does not include this foundational theory, how do
> you analyze a complex problem?

Well, I had my share of theory, but I think most programmers out there 
don't encounter this kind of stuff.  Maybe I'm wrong, and they do.  In 
that case I'd probably agree with you that theory is good to know.

As it is, I'd rather have skipped learning all the stuff I might at
some point in ten years need to know, and learned other stuff instead,
or finished earlier.

To relate this opinion to my initial comments:
It's all cool that we learn so much in Germany, but in the US a graduate 
(BSc) is maybe 22-23, not 25-26, and they do get a job usually.  By the 
time they're 26 they might make more than a German at his entry-level 
job -- if the German finds one, that is.  On both sides of the ocean you 
also have grad school, but I suppose it depends on your employer if you 
can profit from that.  If you need more theory, you can always study it 
on your own, no need for grad school.  The German 4.5 year degree could 
easily be compressed into 3 years, and still teach more CS than an 
average US college (that OTOH offer a more diverse education than the 
German system that focuses only on major/minor right from the beginning).

-- 
Blessed are the young for they shall inherit the national debt.
	Herbert Hoover
0
u.hobelmann (1643)
10/15/2005 5:57:35 PM
Ulrich Hobelmann <u.hobelmann@web.de> writes:

> No, what I meant is that the problem doesn't interest me, because
> *programmers* write and test programs, and usually those hand-written
> programs don't infinite-loop.  Of course a general halt-detector for
> arbitrary binary code doesn't exist, but that's fine by me.  The
> important thing is that people can check if their programs halt, and
> that's usually humanly decidable, if you have some value that
> decreases over every loop iteration and exits when zero.

That was the point of my examples.  It's certainly obvious that the
kernel function and the foldl function halt because their input is
always finite.  The mystery function simply isn't humanly decidable
(although no one has ever found an input for which it doesn't halt).
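The "mystery function" referred to here (shown earlier in the thread,
not in this excerpt) is presumably the Collatz 3n+1 iteration; a
sketch under that assumption:

```python
def collatz_steps(n):
    """Number of steps for the 3n+1 iteration to reach 1.

    No decreasing measure is known for this loop: nobody has found
    an n for which it fails to halt, but nobody has proved that it
    always does, either.
    """
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps
```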

On the other hand, the goodstein sequence program has a value that
increases dramatically (at first).

> I don't think there are programmers that do really complicated
> (maybe-not-halting) algorithms, that just sit down and code them
> without analysis beforehand.

I agree.  But there are times when I write a program and start it
going and wait.... and wait.... and wait .... and then I start to
wonder:  is it a big problem, a slow computer, or a bug.  For
instance, this program:


(define (m i j k)
  (cond ((= i 0) (+ k 1))
        ((and (= i 1) (= k 0)) j)
        ((and (= i 2) (= k 0)) 0)
        ((= k 0) 1)
        (else (m (- i 1) j (m i j (- k 1))))))

 (m 4 4 4) => ??

Does it halt?

It helps to know *how* to figure out what the problem is.
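For the record, the Scheme function above is a hyperoperation
(Ackermann-style) function: m(1,j,k) is j+k, m(2,j,k) is j*k,
m(3,j,k) is j^k, and m(4,j,k) is a tower of exponents, so (m 4 4 4)
does halt in principle, but only after computing 4^4^256.  A direct
Python transliteration (my own sketch), usable for small inputs:

```python
import sys

sys.setrecursionlimit(100_000)  # the recursion gets deep quickly

def m(i, j, k):
    # Transliteration of the Scheme cond clauses above.
    if i == 0:
        return k + 1
    if i == 1 and k == 0:
        return j
    if i == 2 and k == 0:
        return 0
    if k == 0:
        return 1
    return m(i - 1, j, m(i, j, k - 1))
```

Small cases confirm the pattern: m(1, j, k) works out to j + k,
m(2, j, k) to j * k, and m(3, j, k) to j ** k.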


0
jmarshall (140)
10/15/2005 8:23:53 PM
Aaron Denney wrote:
> ["Followup-To:" header set to comp.lang.functional.]
> On 2005-10-15, Antoun Kanawati <antounk@comcast.net> wrote:
> 
>>As someone famous said: Beware of the following program, I have
>>proven it correct, but I have not tested it.
> 
> 
> Knuth: "Beware of bugs in the above code; I have only proved it correct,
> not tried it."

Thanks!

-- 
A. Kanawati
0
antounk (33)
10/15/2005 9:36:30 PM
Ulrich Hobelmann schrieb:
> Well, I had my share of theory, but I think most programmers out there 
> don't encounter this kind of stuff.   Maybe I'm wrong, and they do.  In
> that case I'd probably agree with you that theory is good to know.

It depends entirely on whether they're "coding", or "engineering 
software" (for some suitable definition of these two terms, no value 
judgement implied).

For the former, you don't need much theory.

For the latter, theory is one of the many tools in the toolbox. It may 
lie unused for weeks, months, or (sometimes) even years, but when you 
need it, it's indispensable.
IOW it's clearly a specialty tool.

Regards,
Jo
0
jo427 (1164)
10/16/2005 9:18:01 AM
Ulrich Hobelmann <u.hobelmann@web.de> writes:

> No, what I meant is that the problem doesn't interest me, because 
> *programmers* write and test programs, and usually those hand-written 
> programs don't infinite-loop.  Of course a general halt-detector for 
> arbitrary binary code doesn't exist, but that's fine by me.  The 
> important thing is that people can check if their programs halt, and 
> that's usually humanly decidable, if you have some value that decreases 
> over every loop iteration and exits when zero.
> 
> I don't think there are programmers that do really complicated 
> (maybe-not-halting) algorithms, that just sit down and code them without 
> analysis beforehand.

Well, your fundamental assertion here is wrong.  Average programmers
write arbitrarily complex and potentially non-halting programs all the
time. Moreover, they do it without intending to do it.  Then they run
their programs and are mystified when they don't work, because they
don't have the theoretical background to understand what the errors
they made are.  Instead they step through the programs one statement
at a time writing down values of variables and hoping to see the flaw.
I work with such people on a regular basis.

However, your real gripe seems to be not about theory in general, but
about reductions to the halting problem.  You may be right.  You may
never have a practical use for that particular theoretical technique.
However, I don't think you can tell that a priori.

When one spends a lot of time doing something, the point is that
practicing it makes it second nature.  I had the same problem with
trigonometry.  I had real trouble with the identities despite them
being simply symbolic formulas, because I could never remember which
was sine and which was cosine--my particular dyslexia kicks in there.
Now, for most of my life, not remembering my trig has not been an
issue.  However, it was crucial when I had to solve a particular
problem in Calculus a few years later.  The problem did not fall to
the standard solution for that type of problem (l'Hopital's rule, if I
recall correctly) unless one transformed the problem into a
trigonometric space.  Not being facile with trigonometry, the idea did
not occur to me, though I would have been able to do the
transformation if it had.  So, I simply could not solve that problem,
because I lacked the theory (in this case trigonometry)--and in
particular the practice of the theory that would have made its
application second nature.

Now, I don't know how many times I have actually transformed problems
into the halting problem in my professional career, maybe none.
However, I know that from time to time I read such transformations in
articles that I read on areas that do interest me.  The fact that I
can make the transformations without much thought allows me to read
the article and the proof and understand what is being written.  In
fact, it often lets me skip the detailed reading of the proof, because
I can understand the essential point being made because the concept is
second nature.

Theoretical knowledge works that way.  The more theory one understands
the easier it is to learn more things.  That's useful, even for
someone like me who is essentially a garden-variety practitioner. It's
nice to be able to read things like the paper on Boyer-Moore string
matching and understand why it works.  That's one real reason to learn
theory, to be able to read papers of interesting new results, results
that can make your day-to-day programming life easier and better. Sure
I don't write proofs for a living, but it is nice to be able to read
and understand papers that include proofs.
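As an illustration of the kind of result being referred to: the heart
of Boyer-Moore is that a mismatch lets you shift the pattern by more
than one position.  A sketch of the Horspool simplification
(bad-character rule only; the code and names are mine, not from any
paper):

```python
def horspool_search(text, pattern):
    """Boyer-Moore-Horspool substring search.

    Returns the index of the first occurrence of pattern in text,
    or -1 if it does not occur.
    """
    m = len(pattern)
    if m == 0:
        return 0
    # For each character of pattern (except the last), how far the
    # window may shift when that character ends the current window.
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    pos = 0
    while pos + m <= len(text):
        if text[pos:pos + m] == pattern:
            return pos
        # Characters absent from the table allow a full-length shift.
        pos += shift.get(text[pos + m - 1], m)
    return -1
```

The payoff of reading the proof is knowing *why* the shift is safe:
no occurrence of the pattern can start inside the skipped region.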

If I have any regrets, it's not in learning how to reduce problems to
the halting problem.  It's rather in not being able to solve recurrence
equations as second nature.  There are a lot of times I cannot follow a
performance argument, simply because I cannot solve the recurrence
equation trivially.  I guess I should get myself a book on them and do
some practice....
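For instance, the mergesort recurrence T(n) = 2T(n/2) + n with T(1) = 0
solves to n log2 n; when the algebra isn't second nature, a throwaway
script (my own sketch) at least lets you check the closed form:

```python
def T(n):
    """Mergesort-style recurrence: T(1) = 0, T(n) = 2 T(n/2) + n."""
    return 0 if n == 1 else 2 * T(n // 2) + n

# For n = 2**k the closed form is exactly n * log2(n) = n * k.
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * k
```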

-Chris
0
cfc1 (1)
10/16/2005 4:25:45 PM
Ulrich Hobelmann wrote:
> But isn't the Lambda Calculus, or Recursive Functions equivalent in 
> power to the funny tape machine?  Both are much easier to understand, 
> IMHO, and the first one even provides the basis for lots of programming 
> languages.  Why does every CS student have to suffer through the Turing 
> stuff, but most haven't even *heard* of Lambda Calculus?  This just 
> doesn't make sense to me.

In my 4th-year computation theory class, we learned of Turing machines,
mu-recursive functions (closely related to the lambda calculus), and
Church's Thesis. The course first presented a series of increasingly
powerful automata. In that context, Turing machines make a lot of sense:
The automata clearly model computing devices (rather than mathematical
functions), and the Turing machine is the simplest computing device with
the full power of real hardware.
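The device itself is also small enough to demystify: a complete
single-tape simulator fits in a dozen lines.  A sketch (the encoding
and the example machine are my own), here running a trivial
unary-increment machine:

```python
def run_tm(tape, rules, state="start", accept="halt", steps=10_000):
    """Simulate a single-tape Turing machine.

    rules maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left) or +1 (right).  The tape is a dict from
    position to symbol; unwritten cells read as the blank '_'.
    """
    tape = dict(enumerate(tape))
    pos = 0
    for _ in range(steps):
        if state == accept:
            break
        sym = tape.get(pos, "_")
        state, tape[pos], move = rules[(state, sym)]
        pos += move
    return "".join(tape[i] for i in sorted(tape))

# Unary increment: scan right over the 1s, append a 1, halt.
rules = {
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("halt", "1", +1),
}
```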

The course then developed grammars and functions the same way, leading
up to unrestricted grammars and mu-recursive functions, to demonstrate
Church's Thesis from the device, language, and mathematical angles. I
think it's a good approach to teaching computation theory. Alas, the
textbook was quite difficult, and I was quite bored with school by that
point, so I didn't really understand the material at the time. I had a
similar problem with Lisp in my AI class; I didn't really "get" Lisp
until I decided to teach myself Scheme a couple years ago.

Nowadays, I strongly dislike schooling, specifically classroom teaching.
I learn much better from self-study. Universities have an excellent
/environment/ for learning -- lots of free time, lots of resources, lots
of smart people around -- but even at the best universities, classrooms
still basically suck.
-- 
Bradd W. Szonye
http://www.szonye.com/bradd
0
news152 (508)
10/19/2005 5:24:16 PM
On Wed, 19 Oct 2005 17:24:16 +0000, Bradd W. Szonye wrote:


> Nowadays, I strongly dislike schooling, specifically classroom teaching.
> I learn much better from self-study. Universities have an excellent
> /environment/ for learning -- lots of free time, lots of resources, lots
> of smart people around -- but even at the best universities, classrooms
> still basically suck.

One size fits all. Still, there is one thing they provide that
self-study doesn't: the push to do something. Some people need to be
"under the gun", so to speak.
0
brodriguez (95)
10/19/2005 5:31:58 PM
BR wrote:
> On Wed, 19 Oct 2005 17:24:16 +0000, Bradd W. Szonye wrote:
> 
> 
>> Nowadays, I strongly dislike schooling, specifically classroom teaching.
>> I learn much better from self-study. Universities have an excellent
>> /environment/ for learning -- lots of free time, lots of resources, lots
>> of smart people around -- but even at the best universities, classrooms
>> still basically suck.

Absolutely, only that I've always disliked schooling, since I was maybe 
11 or 12.  It's not about learning, it's about forcing students to 
conform and to learn at the class's pace (too fast or too slow, doesn't 
matter).  For universities, add bad professors with boring, monotone 
talk and the result sucks a lot.

> One size fits all. Now there is one thing they provide that self-study
> doesn't. The push to do something. Some people need to be "under the gun",
> so to speak.

Oh, they can have it if they want.  After all they pay for it.  But 
forcing me to do that is neither necessary nor fair ;)

I wish we had something like standards for education.  If you could just 
undergo standardized testing to get your high school degree, your BSc, 
your diploma, your Master's, whatever, that would be great.  But without 
sky-high tuition fees, boring classes with stupid, mandatory homework 
etc.  No potential employer is interested if you attended classes or did 
your homework.  They want to know if you can do your stuff.  It's 
unfortunate that universities have a monopoly on degrees; you can't get 
them without all the crap around it.

-- 
Blessed are the young for they shall inherit the national debt.
	Herbert Hoover
0
u.hobelmann (1643)
10/19/2005 10:14:50 PM
On Thu, 20 Oct 2005 00:14:50 +0200, Ulrich Hobelmann wrote:

> I wish we had something like standards for education.  If you could just
> undergo standardized testing to get your high school degree, your BSc,
> your diploma, your Master's, whatever, that would be great.  But without
> sky-high tuition fees, boring classes with stupid, mandatory homework
> etc.  No potential employer is interested if you attended classes or did
> your homework.  They want to know if you can do your stuff.  It's
> unfortunate that universities have a monopoly on degrees; you can't get
> them without all the crap around it.

Used to be you could test out through what you knew. But I think you're
forgetting one of the things that employers are looking for, besides
"knowing the material": do you have the "ethic" to stick through the
boring and difficult parts as well as the parts that are enjoyable? You
can't learn that through a book. Also, there are standards for
education. It doesn't seem like it sometimes, but they are there.
And... well, there are some fields of endeavour that aren't of the
"home study" type. What if, say, I wanted to be a nuclear engineer? Or
even a doctor (at home, proctology degree :)?
0
brodriguez (95)
10/19/2005 10:31:22 PM
Brian Harvey wrote:
> "wooks" <wookiz@hotmail.com> writes:
> >Well I could do a course on e-business entrepreneurship but we don't get
> >many electives and I'd rather spend them on something more academic and
> >see if I can wangle my way on to that course on a not for credit basis.
>
> Ugh.
>
> I will tell you right in this message everything in the business curriculum:
>
> 	1.  It's good to be greedy.
>
> 	2.  Assorted techniques for manipulating people who think it
> 	    isn't good to be greedy.
>
> Since #1 is false, there's really no need for you to study #2.
>
>
> But you should definitely think beyond computer science.  I don't know where
> you're going to school, but I'm willing to bet there are courses available in
> literature, philosophy, psychology, art, mathematics, physics, etc.
> I advise fitting as many of those in as you can, even if it means a little
> less computer science.

Well having pondered further on what Brian and Cruise Director have
said I have decided to adopt a more strategic approach and defer the
functional programming course to year 3 when it will count for more
(higher weighting) and by that time I will have read SICP and should
ace the course.

In the meantime I will fill my vacant elective slot by taking my
academic writing course for credit - as I am not finding it very
demanding.

40% of the mark is awarded for an essay about your own discipline so I
am going to use it to vent about the effects of using Java to teach
programming ( I expect that I will be quoting a passage or 2 from the
preface of Simply Scheme).

I have been told that we switched from a functional language to Java
for our Principles of programming course.
I suppose I ought to speak to our director of studies and maybe head of
dept to get balancing viewpoints but then they might then be interested
in seeing my essay and they may not like what they read.

0
wookiz (347)
10/26/2005 11:22:31 PM
wooks schrieb:
> 40% of the mark is awarded for an essay about your own discipline so I
> am going to use it to vent about the effects of using Java to teach
> programming ( I expect that I will be quoting a passage or 2 from the
> preface of Simply Scheme).

That's probably a good idea :-)

> I have been told that we switched from a functional language to Java
> for our Principles of programming course.
> I suppose I ought to speak to our director of studies and maybe head of
> dept to get balancing viewpoints but then they might then be interested
> in seeing my essay and they may not like what they read.

That's probably *not* a good idea. These administrative figureheads 
already have decided. Nobody likes to reverse a decision, or even hear 
anybody who advocates for reversing one, particularly if that advocating 
person isn't influential.
All you'll achieve is to leave the impression of being more "political" 
than "technical", and that's not good for a technical career.
Or let me word it in another fashion: regardless of what you say, unless 
both you *and* those administrators are exceptionally insightful, your 
remarks will be dismissed as the views of somebody who doesn't know much 
yet. This holds even if you pass every test with flying colors - that 
would be the equivalent of being able to multiply single-digit numbers 
after the first two weeks at school: *highly* impressive, and very much 
worth watching future progress, but hardly relevant for matters of 
syllabus - yet.

If you really want to have that decision reversed, talk to people and 
find out why that decision was made in the first place. Find out not 
only the technical reasons but also who advocated for and against the 
change, what they claimed were the reasons for their respective 
positions, and what were the real reasons (very few care to explain all 
their reasons, often just because the full reasons for a decision aren't 
interesting to anybody, even more often because most reasons remain at 
an unconscious level, sometimes because that's part of a hidden agenda). 
This kind of research is never complete, but it will teach you a *lot* 
about how a university works, and to a certain extent how bureaucracies 
in general work - and by the time you've learned that, you just might 
have accumulated enough standing that your positions will be heard.

HTH.

Regards,
Jo
0
jo427 (1164)
10/27/2005 8:53:06 AM
wooks wrote:
> I suppose I ought to speak to our director of studies and maybe head of
> dept to get balancing viewpoints but then they might then be interested
> in seeing my essay and they may not like what they read.

I'm sure if your essay is fair and balanced they will be happy to
comment on it.  Of course, a fair and balanced essay may be less fun to
write...

N.

0
noelwelsh (29)
10/27/2005 10:57:33 AM
noelwelsh@gmail.com wrote:
> wooks wrote:
> > I suppose I ought to speak to our director of studies and maybe head of
> > dept to get balancing viewpoints but then they might then be interested
> > in seeing my essay and they may not like what they read.
>
> I'm sure if your essay is fair and balanced they will be happy to
> comment on it.  Of course, a fair and balanced essay may be less fun to
> write...
>
> N.

Help me.

Good points.

1. Many students want to learn Java.
2. It is currently very relevant to what is used in industry.
3. Extensive set of libraries.
4. Non-proprietary.
5. Accessibility.
6. ?????

Apart from 1, the rest seem to me to be arguments for its use in
vocational training as opposed to teaching the principles of
programming.

0
wookiz (347)
10/28/2005 1:23:17 AM
Joachim Durchholz wrote:
> wooks schrieb:
> > 40% of the mark is awarded for an essay about your own discipline so I
> > am going to use it to vent about the effects of using Java to teach
> > programming ( I expect that I will be quoting a passage or 2 from the
> > preface of Simply Scheme).
>
> That's probably a good idea :-)
>
> > I have been told that we switched from a functional language to Java
> > for our Principles of programming course.
> > I suppose I ought to speak to our director of studies and maybe head of
> > dept to get balancing viewpoints but then they might then be interested
> > in seeing my essay and they may not like what they read.
>
> That's probably *not* a good idea. These administrative figureheads
> already have decided. Nobody likes to reverse a decision, or even hear
> anybody who advocates for reversing one, particularly if that advocating
> person isn't influential.
> All you'll achieve is to leave the impression of being more "political"
> than "technical", and that's not good for a technical career.
> Or let me word it in another fashion: regardless of what you say, unless
> both you *and* those administrators are exceptionally insightful, your
> remarks will be dismissed as the views of somebody who doesn't know much
> yet. This holds even if you pass every test with flying colors - that
> would be the equivalent of being able to multiply single-digit numbers
> after the first two weeks at school: *highly* impressive, and very much
> worth watching future progress, but hardly relevant for matters of
> syllabus - yet.
>
> If you really want to have that decision reversed, talk to people and
> find out why that decision was made in the first place. Find out not
> only the technical reasons but also who advocated for and against the
> change, what they claimed were the reasons for their respective
> positions, and what were the real reasons (very few care to explain all
> their reasons, often just because the full reasons for a decision aren't
> interesting to anybody, even more often because most reasons remain at
> an unconscious level, sometimes because that's part of a hidden agenda).
> This kind of research is never complete, but it will teach you a *lot*
> about how a university works, and to a certain extent how bureaucracies
> in general work - and by the time you've learned that, you just might
> have accumulated enough standing that your positions will be heard.
>

Not trying to change anything - what would be the point? I have already
decided to take responsibility for my programming education. Even if I
were successful I wouldn't benefit from it, but if I write an essay
about something that I am passionate about I am more likely to get an
A.

I am also more likely to get an A if I solicit and present the views of
the experts that made this decision.

I do know that the department is very interested in student feedback,
so while I would rather solicit the info and keep my view to myself
(the course for which I am writing the essay is not run by CS dept) if
the Director of Studies expresses an interest in reading my essay I
don't see how I can refuse.

0
wookiz (347)
10/28/2005 1:40:31 AM
["Followup-To:" header set to comp.lang.functional.]
On 2005-10-28, wooks <wookiz@hotmail.com> wrote:
>
> noelwelsh@gmail.com wrote:
>> wooks wrote:
>> > I suppose I ought to speak to our director of studies and maybe head of
>> > dept to get balancing viewpoints but then they might then be interested
>> > in seeing my essay and they may not like what they read.
>>
>> I'm sure if your essay is fair and balanced they will be happy to
>> comment on it.  Of course, a fair and balanced essay may be less fun to
>> write...
>>
>> N.
>
> Help me.
>
> Good points.
>
> 1. Many students want to learn Java.
> 2. It is currently very relevant to what is used in  industry.
> 3. Extensive set of libraries.
> 4. Non proprietary.
> 5. Accessibility
> 6. ?????
>
> Apart from 1 the rest seem to me to be arguments for it's use in
> vocational training as opposed to teaching the principles of
> programming.

How the heck is java "non-proprietary"?  Sun owns it.  They have
relatively light terms as such things go, but it's still quite
proprietary.

-- 
Aaron Denney
-><-
0
wnoise1 (65)
10/28/2005 1:41:50 AM
Aaron Denney wrote:
>
> How the heck is java "non-proprietary"?  Sun owns it.

Be sure to distinguish among:

1) "Java"(tm) the trademarked term,
2) The Java language specification
3) Sun's implementation of the Java specification

1 and 3 are certainly proprietary. Is that what you meant?


Marshall

0
10/28/2005 2:19:41 AM
Marshall Spight wrote:
> Aaron Denney wrote:
> 
>>How the heck is java "non-proprietary"?  Sun owns it.
> 
> 
> Be sure to distinguish among:
> 
> 1) "Java"(tm) the trademarked term,
> 2) The Java language specification
> 3) Sun's implementation of the Java specification
> 
> 1 and 3 are certainly proprietary. Is that what you meant?
> 
> 
> Marshall
> 


By controlling 1 they effectively control 2. Other than MS's J# I know
of no other "Java" that deviates from Sun's specification.
0
danwang74 (207)
10/28/2005 6:13:32 AM
Marshall  Spight wrote:
> Aaron Denney wrote:
> >
> > How the heck is java "non-proprietary"?  Sun owns it.
>
> Be sure to distinguish among:
>
> 1) "Java"(tm) the trademarked term,
> 2) The Java language specification
> 3) Sun's implementation of the Java specification
>
> 1 and 3 are certainly proprietary. Is that what you meant?
>
>
> Marshall

Actually, "cross-platform" more accurately represents what I meant. If
you think that's a relatively weak point then I would agree, but Java
is such a poor language for teaching programming that I feel I have to
scrape the barrel. Maybe I should check out comp.lang.java to see
what's been written on this before. It's not really fair asking
Schemers and functional programmers.

0
wookiz (347)
10/28/2005 6:26:50 AM
["Followup-To:" header set to comp.lang.functional.]
On 2005-10-28, wooks <wookiz@hotmail.com> wrote:
> Actually cross-platform more accurately presents what I meant. If you
> think thats a relatively weak point then I would agree but it's such a
> poor language for teaching programming that I feel I have to scrape the
> barrel. Maybe I should check out comp.lang.java to see whats been
> written on this before. It's not really fair asking asking Schemers and
> functional programmers.

Well, okay, that is a reasonable thing.  It passes the minimum standard
of being cross-platform.

-- 
Aaron Denney
-><-
0
wnoise1 (65)
10/28/2005 6:35:20 AM
On 2005-10-28, Marshall  Spight <marshall.spight@gmail.com> wrote:
> Aaron Denney wrote:
>>
>> How the heck is java "non-proprietary"?  Sun owns it.
>
> Be sure to distinguish among:
>
> 1) "Java"(tm) the trademarked term,
> 2) The Java language specification
> 3) Sun's implementation of the Java specification
>
> 1 and 3 are certainly proprietary. Is that what you meant?

And (4) the standard libraries that everyone expects to be there,
the actual platform people are coding for.  Sun essentially controls
that as well, though it is a softer control, as it could theoretically
shift.

-- 
Aaron Denney
-><-
0
wnoise1 (65)
10/28/2005 6:37:01 AM
wooks schrieb:
> Not trying to change anything. What would be the point. I have already
> decided to take responsibility for my programming education. Even if I
> were successful I wouldn't benefit from it, but if I write an essay
> about something that I am passionate about I am more likely to get an
> A.
> 
> I am also more likely to get an A if I solicit and present the views of
> the experts  that made this decision.

OK. Good plan (both paragraphs).

> I do know that the department is very interested in student feedback,
> so while I would rather solicit the info and keep my view to myself
> (the course for which I am writing the essay is not run by CS dept) if
> the Director of Studies expresses an interest in reading my essay I
> don't see how I can refuse.

That's a risky situation. Many figureheads tell people "please, please, 
give us feedback, applause or criticism is equally solicited". The truly 
great among them are true to their words. Most aren't, and if the 
critique is well-founded, it will annoy them (sometimes despite their 
intents).
I have been bitten by this exact same situation!

Try to find out what type the head of dept. etc. are before publishing 
the essay.

Note that it won't help you that the essay is written for a non-CS
course. If the uni is worth anything, the professors take an interest
in their students, so you *will* be known by that article, regardless
of which course it was written for.

If you want to do that article anyway, write two versions: one with all 
the passion in it, for yourself (and nobody else, not even close 
friends). Then take the article and write a "friendly" version of it: 
assume you are the head of the department; assume that you're trying to 
be fair, but are also just human and honestly like to think that your 
previous decision for Java was in the best interest of the students and 
the uni (though you know that there's criticism, and you don't feel too 
easy about that - such decisions are *always* compromises, and if you 
had decided for Lisp or ML you'd have gotten flak from the Java 
proponents). Now really immerse yourself into that role, and *then* read 
your passionate article. And whenever you feel attacked, change that 
passage or rewrite the article.
The new article can still be passionate and drive home all your points. 
It may take some practice to get to that though - just take this article 
as a first exercise towards that goal :-)
Note that the ability to criticise in public without enraging anyone is 
a highly valuable skill, useful in any kind of job context. (I had to 
learn that after I left uni, and had several opportunities to dearly 
regret that I hadn't learned it before.)

HTH

Regards,
Jo
0
jo427 (1164)
10/28/2005 8:03:12 AM
Aaron Denney <wnoise@ofb.net> writes:

> > Be sure to distinguish among:
> >
> > 1) "Java"(tm) the trademarked term,
> > 2) The Java language specification
> > 3) Sun's implementation of the Java specification
> 
> And (4) the standard libraries that everyone expects to be there,

I have several acquaintances who program in Java on an irregular basis.
They complain that every time they come back to some code previously
written, the latest Java compiler tells them they are using deprecated
libraries, and would they please change to newer ones.

The idea of a standardised library platform is a myth.  It changes
constantly.

Regards,
    Malcolm
0
10/28/2005 9:44:03 AM
Malcolm Wallace schrieb:
> Aaron Denney <wnoise@ofb.net> writes:
> 
>>And (4) the standard libraries that everyone expects to be there,
> 
> 
> I have several acquaintances who program in Java on an irregular basis.
> They complain that every time they come back to some code previously
> written, the latest Java compiler tells them they are using deprecated
> libraries, and would they please change to newer ones.

Better that than silent upgrades and mysterious crashes, I'd say.

> The idea of a standardised library platform is a myth.  It changes
> constantly.

It's not *that* bad.

You can safely ignore those warnings, at least for a while. You can plan 
the change.
That's a *huge* advantage vs. being surprised by incompatibly "fixed" 
libraries of a certain major OS vendor.

Regards,
Jo
0
jo427 (1164)
10/28/2005 11:02:32 AM
On 2005-10-28, Malcolm Wallace <malcolm@cs.york.ac.uk> wrote:
> Aaron Denney <wnoise@ofb.net> writes:
>
>> > Be sure to distinguish among:
>> >
>> > 1) "Java"(tm) the trademarked term,
>> > 2) The Java language specification
>> > 3) Sun's implementation of the Java specification
>> 
>> And (4) the standard libraries that everyone expects to be there,
>
> I have several acquaintances who program in Java on an irregular basis.
> They complain that every time they come back to some code previously
> written, the latest Java compiler tells them they are using deprecated
> libraries, and would they please change to newer ones.

That's right.  Sun controls what the current standard is, and even has
its tools chastise you for not using the right one.

-- 
Aaron Denney
-><-
0
wnoise1 (65)
10/28/2005 1:13:50 PM
wooks wrote:
> Marshall  Spight wrote:
> > Aaron Denney wrote:
> > >
> > > How the heck is java "non-proprietary"?  Sun owns it.
> >
> > Be sure to distinguish among:
> >
> > 1) "Java"(tm) the trademarked term,
> > 2) The Java language specification
> > 3) Sun's implementation of the Java specification
> >
> > 1 and 3 are certainly proprietary. Is that what you meant?
> >
> >
> > Marshall
>
> Actually cross-platform more accurately presents what I meant.

I don't follow. Java is certainly hugely portable, but I don't
see how portability is relevant to whether a language
is good for teaching.


> If you
> think thats a relatively weak point then I would agree but it's such a
> poor language for teaching programming that I feel I have to scrape the
> barrel.

What do you feel makes it a poor choice for teaching? The libraries
are certainly huge; that's a point against simplicity. Anything else?
I would propose that for teaching programming, you want a simple
language that's nonetheless easily comprehensible; Java seems to
fit that description pretty well. The lambda calculus is certainly
simpler but is a lose on the comprehensibility metric.


Marshall

0
10/29/2005 6:35:01 AM
Daniel C. Wang wrote:
> Marshall Spight wrote:
> > Aaron Denney wrote:
> >
> >>How the heck is java "non-proprietary"?  Sun owns it.
> >
> >
> > Be sure to distinguish among:
> >
> > 1) "Java"(tm) the trademarked term,
> > 2) The Java language specification
> > 3) Sun's implementation of the Java specification
> >
> > 1 and 3 are certainly proprietary. Is that what you meant?
>
>
> By controling 1 they effectively control 2. Other than MS's J# I know of
>   no other "Java" that deviates from Sun's specification.

It strikes me that the existence of J# is a pretty strong knock
to the idea that Java is proprietary. Also, the IBM implementation,
the HP implementation, Waba, SavaJ, kaffe, gnu-java, etc.

How would things be different if Java wasn't "proprietary?"
Would there be MORE fragmentation in the language? That would
be a bad thing.


Marshall

0
10/29/2005 6:39:06 AM
Malcolm Wallace wrote:
> Aaron Denney <wnoise@ofb.net> writes:
>
> > > Be sure to distinguish among:
> > >
> > > 1) "Java"(tm) the trademarked term,
> > > 2) The Java language specification
> > > 3) Sun's implementation of the Java specification
> >
> > And (4) the standard libraries that everyone expects to be there,
>
> I have several acquaintances who program in Java on an irregular basis.
> They complain that every time they come back to some code previously
> written, the latest Java compiler tells them they are using deprecated
> libraries, and would they please change to newer ones.

Yes, the folks in charge of the libraries continue to improve
them. One mechanism they use for this is to insert compiler
warnings for the old ways of doing things which have been
found deficient in some way, with the idea that one day,
they may actually take the old ways away, even though they
have never done this in the history of the language, with
one exception which was later rescinded. If this bothers
you, you can pass "-nowarn" to the new version of the compiler,
or simply continue to use the version you used originally.
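The mechanism Marshall describes is easy to reproduce; a minimal sketch (the class and method names here are made up for illustration) that javac accepts while flagging the call site:

```java
// A made-up API with one method marked deprecated.
class OldApi {
    /** @deprecated illustrative only - pretend a newer method replaced it */
    @Deprecated
    static String greet() { return "hello"; }
}

public class UseOldApi {
    public static void main(String[] args) {
        // Compiles and runs; javac merely emits a deprecation warning
        // here, which -nowarn suppresses.
        System.out.println(OldApi.greet());
    }
}
```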


> The idea of a standardised library platform is a myth.  It changes
> constantly.

If by "constantly" you mean four new major releases since
the original one in 1995, then okay. Of course, code you
wrote for Sun's jdk 1.0 will still run under jdk 1.5. But I
don't think anyone would be any happier than they are now
if, instead, they had simply halted development on the libraries.


Marshall

0
10/29/2005 7:07:51 AM
Marshall  Spight wrote:
> wooks wrote:
> > Marshall  Spight wrote:
> > > Aaron Denney wrote:
> > > >
> > > > How the heck is java "non-proprietary"?  Sun owns it.
> > >
> > > Be sure to distinguish among:
> > >
> > > 1) "Java"(tm) the trademarked term,
> > > 2) The Java language specification
> > > 3) Sun's implementation of the Java specification
> > >
> > > 1 and 3 are certainly proprietary. Is that what you meant?
> > >
> > >
> > > Marshall
> >
> > Actually cross-platform more accurately presents what I meant.
>
> I don't follow. Java is certainly hugely portable, but I don't
> see how that makes a portability is relevant to whether a language
> is good for teaching.
>
>
> > If you
> > think thats a relatively weak point then I would agree but it's such a
> > poor language for teaching programming that I feel I have to scrape the
> > barrel.
>
> What do you feel makes it a poor choice for teaching?

Let me qualify that: it's a poor choice for a 1st language.

1. Syntax
The majority of our lectures are spent discussing Java syntax instead
of teaching how to think about programming.

2. It is type safe. Something else that the novice has to deal with
alongside trying to get to grips with programming fundamentals.

3. You have to write a class before you can write a program to add 2
numbers together or print "Hello world".

4. Annoyingly inconsistent. Everything is an object..... and then they
introduce arrays which are not objects.

5. Too many tricks and quirks to trip up the novice.

Personally I would not advocate teaching OO as the 1st programming
paradigm, because not everything is an object and I see OO more as a
mechanism which, if applied well, can provide a very effective mode of
packaging and delivering code.

But even if it came to teaching OO, I would use something with a
simpler syntax than Java.

0
wookiz (347)
10/29/2005 10:22:23 AM
wooks schrieb:
> 
> Let me qualify and say [how Java is a] poor choice for a 1st language
> 
> 1. Syntax
> The majority of our lectures are spent discussing Java syntax instead
> of teaching how to think about programming.

Agreed.

I remember people saying that they need five nontrivial keywords in the 
first lecture on how to write "Hello World".

> 2. It is type safe. Something else that the novice has to deal with
> alongside trying to get to grips with programming fundamentals.

That should not really be a problem. Type safety helps in catching 
errors, which is a Good Thing - particularly for first-time programmers. 
It helps them concentrate on getting things right before they write the 
first line of code.

Java's type system may be too weak to work well. (Need a type cast just 
to get stuff back out of containers... might have become far less of a 
problem with parametric types though.)

> 3. You have to write a class before you can write a program to add 2
> numbers together or print "Hello world".

Seems to be more of a syntactic problem.

(Well, it's a variant: having to write classes introduces yet another 
set of concepts and keywords very early in the learning process, further 
steepening the learning curve.)

> 4. Annoyingly inconsistent. Everything is an object..... and then they
> introduce arrays which are not objects.

Not to speak of numbers, which aren't objects either.

> 5. Too many tricks and quirks to trip up the novice.

Java is actually quite mild in this respect. Try C++ for a change, and 
you'll flee back to Java, screaming.

> Personally I would not advocate teaching OO as the 1st programming
> paradigm, because not everything is an object and I see OO more as a
> mechanism which if applied well can provide a very effective mode of
> packaging and delivering code.

Um, no, OO is actually a philosophy as well.

Not one that I adhere do personally (not anymore). But if the staff 
thinks that people should learn OO philosophy, then Java is a serious 
vehicle for teaching it.

> But even if it came to teaching OO, I would use something with a
> simpler syntax than Java.

There is currently no language that's all of:
* systematic
* suitable for teaching the intended OO philosophy
   (there are at least two not-quite-the-same OO philosophies around)
* comes with a reasonable set of libraries that's good for more than toy
   examples (students tend to embark on larger projects after a short
   while, and if the language fails them at that point, they'll simply
   switch to the next language that comes their way - typically C++ or
   PHP)

Regards,
Jo
0
jo427 (1164)
10/29/2005 11:18:48 AM
wooks wrote:
> 4. Annoyingly inconsistent. Everything is an object..... and then they
> introduce arrays which are not objects.

I think arrays actually are objects.  They don't have methods of their 
own, though, and just one public field: length.

-- 
The road to hell is paved with good intentions.
0
u.hobelmann (1643)
10/29/2005 11:25:00 AM
wooks wrote:
> Marshall  Spight wrote:
> >
> > What do you feel makes it a poor choice for teaching?
>
> Let me qualify and say poor choice for a 1st language
>
> 1. Syntax
> The majority of our lectures are spent discussing Java syntax instead
> of teaching how to think about programming.

One crawls before one walks, and one learns syntax before one
learns how to think. On the comparative scale, Java is
middle-ground in terms of the complexity of the syntax.
Are you perhaps a lisp advocate? My own memories of
being a student exposed to lisp were that I and everyone
I knew just really didn't like the syntax, simple or not.


> 2. It is type safe. Something else that the novice has to deal with
> alongside trying to get to grips with programming fundamentals.

I guess we'll have to disagree on this one! I'm a firm
believer in the value of static typing.


> 3. You have to write a class before you can write a program to add 2
> numbers together or print "Hello world".

Hmmm. I have heard this complaint many times in the past, and
dismissed it as irrelevant to the professional programmer.
But you may have a point for teaching; student programs tend
to be small.

OTOH, the added overhead to make a function part of a class
is "class Foo { }"-- just four tokens.


> 4. Annoyingly inconsistent. Everything is an object..... and then they
> introduce arrays which are not objects.

Arrays *are* objects.

The inconsistency is with primitives, such as int, float, etc.
which are not objects. But while I can see how this is an
annoyance for the advanced programmer, I don't see how the
student is going to care.


> 5. Too many tricks and quirks to trip up the novice.

This isn't specific enough for me to respond to.


> Personally I would not advocate teaching OO as the 1st programming
> paradigm, because not everything is an object

Whether everything is an object does not seem relevant to me.
(I know there are OO advocates who regularly claim the
"Real World" is composed of objects, but we can safely
dismiss them.) This is like critiquing functional languages
because not everything is a function.


> and I see OO more as a
> mechanism which if applied well can provide a very effective mode of
> packaging and delivering code.

Isn't learning about modularity important for students, too?


> But even if it came to teaching OO, I would use something with a
> simpler syntax than Java.

Such as?


Marshall

0
10/29/2005 3:51:57 PM
Joachim Durchholz wrote:
> > 5. Too many tricks and quirks to trip up the novice.
>
> Java is actually quite mild in this respect. Try C++ for a change, and
> you'll flee back to Java, screaming.

LOL! My experience exactly.


Marshall

0
10/29/2005 3:56:15 PM
On Fri, 28 Oct 2005 23:35:01 -0700, Marshall  Spight wrote:

> What do you feel makes it a poor choice for teaching?

Maybe the better question is "what qualities should a teaching language
have"? Then one can compare and contrast that to a "workhorse" language.
0
brodriguez (95)
10/29/2005 5:40:29 PM
Marshall  Spight wrote:
> wooks wrote:
> > Marshall  Spight wrote:
> > >
> > > What do you feel makes it a poor choice for teaching?
> >
> > Let me qualify and say poor choice for a 1st language
> >
> > 1. Syntax
> > The majority of our lectures are spent discussing Java syntax instead
> > of teaching how to think about programming.
>
> One crawls before one walks, and one learns syntax before one
> learns how to think.

Well if you live in a world where man is a slave to the machine I can
understand why you would think that.

Babies think before they learn syntax. Humans don't stop thinking when
they go to a foreign land that they don't speak the language of.

The language chosen should conform to the needs of humans not the other
way round.

> On the comparative scale, Java is
> middle-ground in terms of the complexity of the syntax.

Middle ground compared to what, C++? Regardless, middle ground is not
good enough for teaching the principles of programming.

> Are you perhaps a lisp advocate?

No.

> My own memories of
> being a student exposed to lisp was that I and everyone
> I knew just really didn't like the syntax, simple or not.
>

University education shouldn't be about what students like. It's about
choosing the most effective medium for teaching what they are supposed
to be learning.

>
> > 2. It is type safe. Something else that the novice has to deal with
> > alongside trying to get to grips with programming fundamentals.
>
> I guess we'll have to disagree on this one! I'm a firm
> believer in the value of static typing.
>

I think you are missing the point here. It's not about the
merits/demerits of static typing. It's about whether a language that
says

3 / 2
3.0/2.0

should give different results is suitable as a 1st language. That is
not the sort of thing you should be worrying about when you are
learning to program.
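For the record, the two expressions really do differ in Java - the operand types, not the operator, select the kind of division:

```java
public class Divide {
    public static void main(String[] args) {
        // Two int operands: integer division, result truncated toward zero.
        System.out.println(3 / 2);     // prints 1
        // Two double operands: floating-point division.
        System.out.println(3.0 / 2.0); // prints 1.5
    }
}
```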

>
> > 3. You have to write a class before you can write a program to add 2
> > numbers together or print "Hello world".
>
> Hmmm. I have heard this complaint many times in the past, and
> dismissed it as irrelevant to the professional programmer.
> But you may have a point for teaching; student programs tend
> to be small.
>

What professional programmers want/need/think is not relevant here.
Students are being shown programs that use the keyword static and
being told not to worry about what it means. Students are being told
that void is a type - how come I can't declare a variable of type void
then?

> OTOH, the added overhead to make a function part of a class
> is "class Foo { }"-- just four tokens.
>
>
> > 4. Annoyingly inconsistent. Everything is an object..... and then they
> > introduce arrays which are not objects.
>
> Arrays *are* objects.
>

no they are not.

> The inconsistency is with primitives, such as int, float, etc.
> which are not objects. But while I can see how this is an
> annoyance for the advanced programmer, I don't see how the
> student is going to care.
>

Of course they are going to care. They are supposed to be learning the
principles of programming.

>
> > 5. Too many tricks and quirks to trip up the novice.
>
> This isn't specific enough for me to respond to.
>

Well, our lecturer says he reserves the right to put in trick questions.
Trick questions don't teach you the principles of programming.

>
> > Personally I would not advocate teaching OO as the 1st programming
> > paradigm, because not everything is an object
>
> Whether everything is an object does not seem relevant to me.

But we are not talking about you. We are talking about people having
their first exposure to computer programming.

> (I know there are OO advocates who regularly claim the
> "Real World" is composed of objects, but we can safely
> dismiss them.) This is like critiquing functional languages
> because not everything is a function.
>

No it isn't. Functional languages do not force you to write a function
to add 2 numbers together.

>
> > and I see OO more as a
> > mechanism which if applied well can provide a very effective mode of
> > packaging and delivering code.
>
> Isn't learning about modularity important for students, too?
>

Yes. Maybe you missed where I said a lot of time is taken up with
lectures dealing with Java syntax.

>
> > But even if it came to teaching OO, I would use something with a
> > simpler syntax than Java.
> 
> Such as?
> 

Python. Ruby.

0
wookiz (347)
10/29/2005 6:19:16 PM
wooks wrote:
> Marshall  Spight wrote:
> >
> > One crawls before one walks, and one learns syntax before one
> > learns how to think.
>
> Well if you live in a world where man is a slave to the machine I can
> understand why you would think that.

Yes, that is exactly what I advocate: man as slave to the computer.
In fact, I'd like to see a return to those 1940s style offices
where you have row after row, column after column of desks,
in the middle of a giant room, preferably equipped with
ankle manacles.
</ironic>


> The language chosen should conform to the needs of humans not the other
> way round.

Of course. That's something I really like about Java.


> > On the comparative scale, Java is
> > middle-ground in terms of the complexity of the syntax.
>
> Middle ground compared to what, C++? Regardless, middle ground is not
> good enough for teaching the principles of programming.

Are you sure? You think an extreme position is ideal for teaching?
Anyway, it certainly is "good enough"; I think you meant
to ask whether it was best.


> > My own memories of
> > being a student exposed to lisp was that I and everyone
> > I knew just really didn't like the syntax, simple or not.
>
> University education shouldn't be about what students like. It's about
> choosing the most effective medium for teaching what they are supposed
> to be learning.

And you think whether they like it or not doesn't have any
bearing on its effectiveness as a teaching medium?


> > > 2. It is type safe. Something else that the novice has to deal with
> > > alongside trying to get to grips with programming fundamentals.
> >
> > I guess we'll have to disagree on this one! I'm a firm
> > believer in the value of static typing.
> >
>
> I think you are missing the point here. It's not about the
> merits/demerits of static typing. It's about whether a language that
> says
>
> 3 / 2
> 3.0/2.0
>
> should give different results is suitable as a 1st language. That is
> not the sort of thing you should be worrying about when you are
> learning to program.

What does that have to do with type safety? And I disagree that
the difference between integers and floats, and their arithmetic
properties, is not important; it should be part of the
basic curriculum.


> > > 3. You have to write a class before you can write a program to add 2
> > > numbers together or print "Hello world".
> >
> > Hmmm. I have heard this complaint many times in the past, and
> > dismissed it as irrelevant to the professional programmer.
> > But you may have a point for teaching; student programs tend
> > to be small.
> >
>
> What professional programmers want/need/think is not relevant here.

Yes, that's what I said.


> Students are being shown  programs that use the keyword static and
> being told not to worry about what it means.

What, you mean like never? Or do you mean for a while? There's
nothing wrong with omitting even important information during
the course of study if you come back to it at the appropriate
time.


> Students are being told
> that void is a type - how come I can't declare a variable of type void
> then.

Is that something you think students are having a hard time
with? It seems straightforward enough to me. Nor can I
remember anyone having any trouble with it. Maybe your
experiences are different.


> > OTOH, the added overhead to make a function part of a class
> > is "class Foo { }"-- just four tokens.
> >
> >
> > > 4. Annoyingly inconsistent. Everything is an object..... and then they
> > > introduce arrays which are not objects.
> >
> > Arrays *are* objects.
>
> no they are not.

JLS 4.3.1 "An object is a class instance or an array."

Also, this program compiles with no errors:

class Test {
  public static void main(String[] args) {
    int[] a = { 1, 2 };
    Object o = a;
  }
}
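A slightly longer sketch makes the same point: an array is assignable to Object and inherits Object's methods, on top of its length field. (The cast on clone() keeps this valid on pre-1.5 compilers.)

```java
public class ArrayObject {
    public static void main(String[] args) {
        int[] a = { 1, 2 };
        Object o = a;                     // arrays are assignable to Object
        System.out.println(o.getClass()); // prints "class [I", the JVM name for int[]
        int[] b = (int[]) a.clone();      // clone() is an Object method
        System.out.println(b.length);     // length is the array's one field: 2
    }
}
```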


> > The inconsistency is with primitives, such as int, float, etc.
> > which are not objects. But while I can see how this is an
> > annoyance for the advanced programmer, I don't see how the
> > student is going to care.
>
> Of course they are going to care. They are supposed to be
> learning the principles of programming.

Well, that's not much of a rebuttal, is it? What programming
principle is going to be harder to learn because of the
difference between object types and primitives?


> > > 5. Too many tricks and quirks to trip up the novice.
> >
> > This isn't specific enough for me to respond to.
>
> well our lecturer says he reserves the right to put in trick questions.
> trick questions don't teach you the principles of programming.

It rather depends on the trick, I'd say. But in general, I agree,
trick questions are not much use for anything. This
sounds like more of an issue with your lecturer than Java, though.


> > > Personally I would not advocate teaching OO as the 1st programming
> > > paradigm, because not everything is an object
> >
> > Whether everything is an object does not seem relevant to me.
>
> But we are not talking about you. We are talking about people having
> their first exposure to computer programming.

As best I can tell, the conversation is about your dislike of
Java.


> > (I know there are OO advocates who regularly claim the
> > "Real World" is composed of objects, but we can safely
> > dismiss them.) This is like critiquing functional languages
> > because not everything is a function.
>
> No it isn't. functional languages do not force you to write a function
> to add 2 numbers together.

Okay, so now you're talking about the interactive top-level, is
that it? These exist for Java, but maybe you haven't been
exposed to them. Are you saying this is a desirable thing
for teaching programming? I can buy that. But if that's
what you mean, I don't see how that's relevant to OO vs. FP.


> > > and I see OO more as a
> > > mechanism which if applied well can provide a very effective mode of
> > > packaging and delivering code.
> >
> > Isn't learning about modularity important for students, too?
>
> Yes. Maybe you missed where I said a lot of time is taken up with
> lectures dealing with Java syntax.

I was responding to the part where you were talking about why
not to teach OO. Regardless, I don't believe you about the
syntax. I highly doubt your teacher is just up there talking
about tokenization and parse trees, and not mentioning
the semantics at the same time.


> > > But even if it came to teaching OO, I would use something with a
> > > simpler syntax than Java.
> >
> > Such as?
>
> Python. Ruby.

Okay, so you're worried about syntax, and want to avoid OO,
and want to teach with FP, and you're proposing Python and
Ruby, both of which have lots of syntax, are object oriented,
and imperative, and are not FP?

3.0/2.0 returns 1.5 in both Java and Python.
3/2 returns 1 in both Java and Python.


Marshall

0
10/29/2005 8:01:30 PM
Marshall  Spight wrote:
> wooks wrote:
> > Marshall  Spight wrote:
> > >
> > > One crawls before one walks, and one learns syntax before one
> > > learns how to think.
> >
> > Well if you live in a world where man is a slave to the machine I can
> > understand why you would think that.
>
> Yes, that is exactly what I advocate: man as slave to the computer.
> In fact, I'd like to see a return to those 1940s style offices
> where you have row after row, column after column of desks,
> in the middle of a giant room, preferably equiped with
> ankle manacles.
> </ironic>
>

Ironic?? That's the import of what you said. You should learn the syntax
first before you start solving problems.

Sorry. A good proportion of students in my class have come straight
from school and never programmed before.

My tutor says half the 1st years have to leave the course every year
because they fail programming. 4 weeks into the term there are still
people walking around saying they don't know how to get started with
the programming assignments.

>
> > The language chosen should conform to the needs of humans not the other
> > way round.
>
> Of course. That's something I really like about Java.
>

Yes you might like it. See above.

>
> > > On the comparative scale, Java is
> > > middle-ground in terms of the complexity of the syntax.
> >
> > Middle ground compared to what, C++? Regardless, middle ground is not
> > good enough for teaching the principles of programming.
>
> Are you sure? You think an extreme position is ideal for teaching?

Let's not get into logical fallacies. Java's syntax was designed to appeal
to C++ programmers. It wasn't designed as a medium for education.

> Anyway, it certainly is "good enough"; I think you meant
> to ask whether it was best.
>

When people ask questions about why certain things are the way they
are, a very common answer is that it was inherited from C++.

>
> > > My own memories of
> > > being a student exposed to lisp was that I and everyone
> > > I knew just really didn't like the syntax, simple or not.
> >
> > University education shouldn't be about what students like. It's about
> > choosing the most effective medium for teaching what they are supposed
> > to be learning.
>
> And you think whether they like it or not doesn't have any
> bearing on its effectiveness as a teaching medium?
>

We have to learn Assembler. Lots of people don't like it, but we don't
have to waste time in class asking syntax related questions because the
syntax is dead simple.

I really don't know how to make myself clearer.

>
> > > > 2. It is type safe. Something else that the novice has to deal with
> > > > alongside trying to get to grips with programming fundamentals.
> > >
> > > I guess we'll have to disagree on this one! I'm a firm
> > > believer in the value of static typing.
> > >
> >
> > I think you are missing the point here. It's not about the
> > merits/demerits of static typing. It's about whether a language that
> > says
> >
> > 3 / 2
> > 3.0/2.0
> >
> > should give different results is suitable as a 1st language. That is
> > not the sort of thing you should be worrying about when you are
> > learning to program.
>
> What does that have to do with type safety? And I disagree that
> the difference between integers and floats, and their arithmetic
> properties, is not important; it should be part of the
> basic cirriculum.
>

We deal with it in assembler, where it really matters.

If you are asked to write a program to derive the square root of an
integer, the important skill is developing/understanding the algorithm,
not whether you declared the variables correctly. That's a finer point
you can tack on later.
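To make the point concrete, here is one such exercise sketched in Java (Newton's iteration for the floor of a square root; the class and method names are made up). The interesting part is the loop, not the declarations:

```java
public class Isqrt {
    // Returns the floor of the square root of a non-negative n,
    // by Newton's iteration on integers.
    static int isqrt(int n) {
        if (n < 2) return n;
        int x = n;
        int y = (x + n / x) / 2;
        while (y < x) {      // estimates decrease until they stabilise
            x = y;
            y = (x + n / x) / 2;
        }
        return x;
    }

    public static void main(String[] args) {
        System.out.println(isqrt(10)); // prints 3
    }
}
```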

>
> > > > 3. You have to write a class before you can write a program to add 2
> > > > numbers together or print "Hello world".
> > >
> > > Hmmm. I have heard this complaint many times in the past, and
> > > dismissed it as irrelevant to the professional programmer.
> > > But you may have a point for teaching; student programs tend
> > > to be small.
> > >
> >
> > What professional programmers want/need/think is not relevant here.
>
> Yes, that's what I said.
>
>
> > Students are being shown  programs that use the keyword static and
> > being told not to worry about what it means.
>
> What, you mean like never? Or do you mean for a while? There's
> nothing wrong with omitting even important information during
> the course of study if you come back to it at the appropriate
> time.
>

The appropriate time is the time when you ask a student to use that
keyword. If a programmer writes a program he should know what everything
that is in the program is there for. That's a basic.

>
> > Students are being told
> > that void is a type - how come I can't declare a variable of type void
> > then.
>
> Is that something you think students are having a hard time
> with? It seems straightforward enough to me. Nor can I
> remember anyone having any trouble with it. Maybe your
> experiences are different.
>
>
> > > OTOH, the added overhead to make a function part of a class
> > > is "class Foo { }"-- just four tokens.
> > >
> > >
> > > > 4. Annoyingly inconsistent. Everything is an object..... and then they
> > > > introduce arrays which are not objects.
> > >
> > > Arrays *are* objects.
> >
> > no they are not.
>
> JLS 4.3.1 "An object is a class instance or an array."
>
> Also, this program compiles with no errors:
>
> class Test {
>   public static void main(String[] args) {
>     int[] a = { 1, 2 };
>     Object o = a;
>   }
> }
>

Oh, it compiles -

therefore it is good/ok,
therefore it is not a quirk,
therefore I can intuitively figure out what the properties and methods
of object o now are.

How do I now specify a subscript/index for object o? Is it o.i - do I
have to declare i now to do this?

How do I express an element of o if o is multi-dimensional?

What happens if I intermingle the syntaxes in declaring a
multi-dimensional array?

I shouldn't have to worry about whether there are merits/demerits/material
differences in expressing my program using the object syntax or the
array syntax.

And what about ArrayLists? Where do they fit in the conundrum? Which
should I use? Why should I use the above syntax when I could use an
ArrayList? What is the difference?

How do I know that I won't be examined on this stuff?

No, I don't know the answers to most of the questions I have asked, but
I do know that it is all syntactical mumbo-jumbo that is not core to an
understanding of how to process an array.
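
For what it's worth, one concrete answer to the subscript question: once
the array is held through an Object reference, the index syntax is gone
until you cast back to the array type. A minimal sketch (names
illustrative):

```java
public class ArrayAsObject {
    public static void main(String[] args) {
        int[] a = { 1, 2 };
        Object o = a;               // legal: arrays are objects
        // o[0] does not compile: Object has no subscript syntax.
        int first = ((int[]) o)[0]; // cast back to int[] before indexing
        System.out.println(first);  // prints 1
    }
}
```

Which rather illustrates that the object view and the array view of the
same value don't mix seamlessly.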



>
> > > The inconsistency is with primitives, such as int, float, etc.
> > > which are not objects. But while I can see how this is an
> > > annoyance for the advanced programmer, I don't see how the
> > > student is going to care.
> >
> > Of course they are going to care. They are supposed to be
> > learning the principles of programming.
>
> Well, that's not much of a rebuttal, is it? What programming
> principle is going to be harder to learn because of the
> difference between object types and primitives?
>
>
> > > > 5. Too many tricks and quirks to trip up the novice.
> > >
> > > This isn't specific enough for me to respond to.
> >
> > well our lecturer says he reserves the right to put in trick questions.
> > trick questions don't teach you the principles of programming.
>
> It rather depends on the trick, I'd say. But in general, I agree,
> trick questions are not much use for anything. This
> sounds like more of an issue with your lecturer than Java, though.
>

It's an issue related to the medium chosen. Part of your training as a
programmer should involve dealing with trick questions because they
trip you up in real life but there is plenty of time for that after you
have learnt the basic principles.

>
> > > > Personally I would not advocate teaching OO as the 1st programming
> > > > paradigm, because not everything is an object
> > >
> > > Whether everything is an object does not seem relevant to me.
> >
> > But we are not talking about you. We are talking about people having
> > their first exposure to computer programming.
>
> As best I can tell, the conversation is about your dislike of
> Java.
>

That's the conversation you are having. I am interested in the
pedagogical aspects.

>
> > > (I know there are OO advocates who regularly claim the
> > > "Real World" is composed of objects, but we can safely
> > > dismiss them.) This is like critiquing functional languages
> > > because not everything is a function.
> >
> > No it isn't. functional languages do not force you to write a function
> > to add 2 numbers together.
>
> Okay, so now you're talking about the interactive top-level, is
> that it? These exist for Java, but maybe you haven't been
> exposed to them. Are you saying this is a desirable thing
> for teaching programming? I can buy that. But if that's
> what you mean, I don't see how that's relevant to OO vs. FP.
>

I'm not talking about OO v FP, and if my memory serves me correctly you
can do this with Python. The point is that in a 1st language, simple
things should be simply facilitated.

Teacher:I need you to write me a program that adds 2 numbers together.
Student:How do I do that?
Teacher:Well first of all you define a class
Student:What's a class?
.....
.....
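
Spelled out, the ceremony that hypothetical student faces looks like
this; every line except the addition itself has to be taken on faith at
first (a minimal sketch):

```java
public class Add {                           // first, a class
    public static void main(String[] args) { // then "public static void" and args
        System.out.println(1 + 2);           // finally, the actual work: prints 3
    }
}
```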

>
> > > > and I see OO more as a
> > > > mechanism which if applied well can provide a very effective mode of
> > > > packaging and delivering code.
> > >
> > > Isn't learning about modularity important for students, too?
> >
> > Yes. Maybe you missed where I said a lot of time was taken up with
> > lectures dealing with Java syntax.
>
> I was responding to the part where you were talking about why
> not to teach OO.

I didn't say not to teach OO. I said not as the first programming
paradigm precisely because of the hypothetical conversation I just
posted.

> Regardless, I don't believe you about the
> syntax. I highly doubt you teacher is just up there talking
> about tokenization and parse trees, and not mentioning
> the semantics at the same time.
>

I am not talking about what he is teaching. I am telling you that the
majority of questions that he is asked are syntax related. No one has
asked him how to utilise or map data structures to problems we are
asked to solve. No one has asked a single question about recursion
(even though we have been given assignments that require us to write
recursive programs) or the iteration v recursion issue.

>
> > > > But even if it came to teaching OO, I would use something with a
> > > > simpler syntax than Java.
> > >
> > > Such as?
> >
> > Python. Ruby.
>
> Okay, so you're worried about syntax, and want to avoid OO,
> and want to teach with FP, and you're proposing Python and
> Ruby, both of which have lots of syntax, are object oriented,
> and imperative, and are not FP?
>

We are obviously not having the same conversation because OO absolutely
should be taught at university. In fact we have a separate OO course in
the 1st year which is compulsory.

In fact that is all the more reason why the introductory programming
course should not be in Java or an OO language.

> 3.0/2.0 returns 1.5 in both Java and Python.
> 3/2 returns 1 in both Java and Python.
>

See the exposition on arrays above; alternatively, go to comp.lang.ruby
and comp.lang.python and tell them the above is your reasoning for
concluding that their languages and Java are on a syntactical par.

0
wookiz (347)
10/30/2005 2:50:19 AM
wooks schrieb:
> When people ask questions about why certain things are the way they
> are, a very common answer is that it was inherited from C++.

Which is wrong most of the time.

Java has borrowed some of its syntax from C/C++.
The semantics, however, is as different as you can get if you stay 
within the statically-typed OO paradigm.

This also shows: C++ is more versatile by at least an order of
magnitude, and makes it easier to shoot yourself in the foot by several
orders of magnitude.

> We have to learn Assembler. Lots of people don't like it, but we don't
> have to waste time in class asking syntax related questions because the
> syntax is dead simple.
> 
> I really don't know how to make myself clearer.

I don't think anybody here believes that Java's syntax is ideal for 
teaching, so you're pushing at an open door here :-)

The problem is: we don't know of better alternatives. At least not if 
they want to teach OO.
If they were going for functional programming, Haskell and Lisp would be 
candidates that could both be argued to have sufficiently little 
syntactic overhead.
(Java is far worse than both, and C++ is worse by orders of magnitude - 
in C++, it's easy to make a syntactic error and get a valid program that 
expresses something different from what you intended.)

>>>Students are being shown  programs that use the keyword static and
>>>being told not to worry about what it means.
>>
>>What, you mean like never? Or do you mean for a while? There's
>>nothing wrong with omitting even important information during
>>the course of study if you come back to it at the appropriate
>>time.
> 
> The appropriate time is the time when you ask a student to use that
> keyword. If a programmer writes a program he should know what everything
> that is in the program is there for. That's a basic.

Fully agreed.
I also agree that Java fails that test.
However, I don't think that's a very serious problem. The total loss of 
time (question asked and answered in class, students wasting time 
wondering about these keywords) is a few hours of lifetime. Other 
factors can eat weeks and months of lifetime. For example, if the uni 
insisted on teaching (say) Cobol to technical programmers. (Such things 
do happen, though probably not on such an outrageous scale.)

> No I don't know the answers to most of the questions I have asked, but
> I do know that it is all syntactical mumbo-jumbo that is not core to an
> understanding of how to process an array.

Arrays are a special type of objects. So no, they aren't outside the 
Object idea of Java, and yes, they are very much special and need very 
much special treatment and consideration.

That's relatively normal though. Mutable containers and subtyping rules 
surrounding them have always been a challenge, and there are very few 
languages that "got it right". None of them are in the mainstream. Besides, 
even the "got it right" solutions to the issue have drawbacks, it's a 
question of choosing the least evil.

(Note that Java didn't "get it right". Neither did C++.)

Regards,
Jo
0
jo427 (1164)
10/30/2005 10:22:03 AM
wooks wrote:
> What professional programmers want/need/think is not relevant here.
> Students are being shown  programs that use the keyword static and
> being told not to worry about what it means. Students are being told
> that void is a type - how come I can't declare a variable of type void
> then.

Indeed.  In ML you can: ().  You even pass that value to functions 
expecting void arguments.

C dialects have weird function syntax and weird asymmetry between 
calling and returning values.  A function that expects and returns 
nothing is written

void foo ();  not () foo (), or void foo void
(in ANSI C in fact you have to write void foo (void), but that's yet 
another workaround)

-- 
The road to hell is paved with good intentions.
0
u.hobelmann (1643)
10/30/2005 10:38:17 AM
Marshall Spight wrote:
> Well, that's not much of a rebuttal, is it? What programming
> principle is going to be harder to learn because of the
> difference between object types and primitives?

For instance, they might want to add objects to a list, or print them 
out.  Before Java 5, you couldn't add ints to a list, you had to create 
new Integers.  And why?  For simple efficiency!  Never mind that 
Smalltalks and other languages can make that choice on their own, and 
that for most languages a pointer and an int have exactly 32bits anyway, 
so there's not even a difference!

Oh, and the list of println() functions is even better.  One for Object, 
and one for each "basic type."  Simple and elegant, huh?

Of course once in a while the user himself might have to do stuff like that.
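
The boxing chore being described, and what Java 5 autoboxing made of it,
can be sketched as follows (note that the generic List<Integer> syntax
is itself a Java 5 feature; before that you would have used a raw List):

```java
import java.util.ArrayList;
import java.util.List;

public class Boxing {
    public static void main(String[] args) {
        List<Integer> xs = new ArrayList<Integer>();
        xs.add(new Integer(5));  // pre-Java-5 style: wrap the int by hand
        xs.add(7);               // Java 5 autoboxing: the compiler wraps it
        int sum = xs.get(0) + xs.get(1); // auto-unboxing on the way out
        System.out.println(sum);         // prints 12
    }
}
```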

-- 
The road to hell is paved with good intentions.
0
u.hobelmann (1643)
10/30/2005 10:42:53 AM
Joachim Durchholz wrote:
> The problem is: we don't know of better alternatives. At least not if 
> they want to teach OO.

Excuse me?  For OO, there's Python, Ruby, Common Lisp, Smalltalk.  Why 
are these in your opinion less appropriate than Java?  They are all 
simpler, most/all of them offer an interactive environment (good for the 
beginning user)...

I think Java offers lots of distractions and obstacles, especially for 
the beginning user.  I spent one semester being tutor for a Java class, 
and I'd refuse to ever support the use of Java in education again.

In Lisp you could even teach them not to attach methods to classes (does 
  a string->int method belong to class string or to class int?  oh yeah, 
let's just duplicate the sucker as in Java!), plus you can teach 
multimethods, just so they know the pattern when they need it.

> If they were going for functional programming, Haskell and Lisp would be 
> candidates that could both be argued to have sufficiently little 
> syntactic overhead.

For strict FP maybe Lisp isn't as good, because the standard doesn't 
require tail recursion optimization.  But of course you get a Lisp (or 
Scheme) that does it.

-- 
The road to hell is paved with good intentions.
0
u.hobelmann (1643)
10/30/2005 10:50:07 AM
Ulrich Hobelmann schrieb:
> Joachim Durchholz wrote:
> 
>> The problem is: we don't know of better alternatives. At least not if 
>> they want to teach OO.
> 
> Excuse me?  For OO, there's Python, Ruby, Common Lisp, Smalltalk.  Why 
> are these in your opinion less appropriate than Java?

Sorry - I wanted to say "mainstream OO".

I wouldn't recommend CL - it's just as complicated as Java. (The 
learning curve is far smoother initially, but steepens considerably 
later on - it has far too many options for everything, and macros thrown in.)
Smalltalk isn't very useful for learning a structured approach. (I hear 
the Smalltalkers howling now...)

Dunno about Python and Ruby. Maybe these would be suitable; I've always 
watched them from quite a distance, so I couldn't tell.

> In Lisp you could even teach them not to attach methods to classes (does 
>  a string->int method belong to class string or to class int?  oh yeah, 
> let's just duplicate the sucker as in Java!), plus you can teach 
> multimethods, just so they know the pattern when they need it.

Multimethods bury timebombs in the code, which are bound to explode 
years later when some module is to be reused in a slightly different 
context. I wouldn't recommend any language that even remotely supports them.

>> If they were going for functional programming, Haskell and Lisp would 
>> be candidates that could both be argued to have sufficiently little 
>> syntactic overhead.
> 
> For strict FP maybe Lisp isn't as good, because the standard doesn't 
> require tail recursion optimization.  But of course you get a Lisp (or 
> Scheme) that does it.

Personally, I think that modern Lisps are even more complicated than 
Java. Maybe even as complicated as C++, though that complication 
serves better purposes than in C++.
Just look at multitude of ways to pass parameters. Macro processing adds 
yet another layer of complication. Multimethods. And probably tons of 
things I haven't even touched yet.
I keep saying that Lisp is a great laboratory for computer language 
experiments, but I wouldn't want to write production code in it, or 
maintain production code written in it. I'd never know which of the 
heaps of strange and wonderful mechanisms that Lisp offers would work 
best, or might be part of some code that I'm maintaining.

I.e. while Java has a steep syntactic learning curve (which is utterly 
useless but not much of an issue for seasoned programmers), Lisp has a 
steep semantic learning curve.

That's just my personal impression. I believe I'm not the only one in 
that camp, though of course those who have mastered the learning curve 
wouldn't bother about these problems :-)

Regards,
Jo
0
jo427 (1164)
10/30/2005 3:53:39 PM
Joachim Durchholz wrote:
> Ulrich Hobelmann schrieb:
>> Joachim Durchholz wrote:
>>
>>> The problem is: we don't know of better alternatives. At least not if 
>>> they want to teach OO.
>>
>> Excuse me?  For OO, there's Python, Ruby, Common Lisp, Smalltalk.  Why 
>> are these in your opinion less appropriate than Java?
> 
> Sorry - I wanted to say "mainstream OO".
> 
> I wouldn't recommend CL - it's just as complicated as Java. (The 
> learning curve is far smoother initially, but steepens considerably 
> later on - it has far too many options of everything, and macros thrown 
> in.)

Admitted.

> Smalltalk isn't very useful for learning a structured approach. (I hear 
> the Smalltalkers howling now...)

Hm, I haven't really used it for anything, but from looks it seems like 
Java done right (as do Python, OCaml...).

> Dunno about Python and Ruby. Maybe these would be suitable; I've always 
> watched them from quite a distance, so I couldn't tell.

I think as a mainstream imperative/OO teaching language Python would be 
quite good.  For off-mainstream languages Scheme, Haskell and SML aren't 
too bad.

> Multimethods bury timebombs in the code, which are bound to explode 
> years later when some module is to be reused in a slightly different 
> context. I wouldn't recommend any language that even remotely supports 
> them.

Hmm, any concrete examples or pointers?  So far they seemed like a good 
idea to me.

> Personally, I think that modern Lisps are even more complicated than 
> Java. Maybe even as complicated as in C++, though that complication 
> serves better purposes than in C++.

Common Lisp is complicated, sure, and like Java or C I'd categorize it 
as production language, not scripting or teaching material.  That's what 
Scheme is for.

> Just look at multitude of ways to pass parameters. Macro processing adds 
> yet another layer of complication. Multimethods. And probably tons of 
> things I haven't even touched yet.

Macros are something that IMHO every CS person needs to learn at some 
point.  Yacc and Lex aren't really anything else; they are simply 
separate programs because C macros suck ;)

It's layers in a toolchain; nothing more or less.

For multimethods maybe I should believe you; I haven't used them so far.

> I.e. while Java has a steep syntactic learning curve (which is utterly 
> useless but not much of an issue for seasoned programmers), Lisp has a 
> steep semantic learning curve.

I think as semantics go Lisp isn't too bad.  It's more the sheer number 
of special operators, and some oddities (at what time stuff is loaded or 
evaluated, or symbol property lists) that makes it complicated.

Java has its problems too, and most of all it lacks power and 
conciseness of notation.  IMHO at least.

> That's just my personal impression. I believe I'm not the only one in 
> that camp, though of course those who have mastered the learning curve 
> wouldn't bother about these problems :-)

Well, maybe it'd be worth cleaning up the good stuff from Common Lisp. 
Of course the everyday users strongly disagree here.

-- 
The road to hell is paved with good intentions.
0
u.hobelmann (1643)
10/30/2005 5:12:02 PM
> I am wondering though whether trying to study all 3 paradigms at the
> same time is advisable.

I wouldn't worry. It may be a good way to find out it's essentially
all the same.

0
hdnews (142)
10/30/2005 8:04:16 PM
Ulrich Hobelmann wrote:
> Marshall Spight wrote:
> > What programming
> > principle is going to be harder to learn because of the
> > difference between object types and primitives?
>
> For instance, they might want to add objects to a list,
> or print them out.

This is just a mechanical description of a semantic
irregularity; it doesn't illuminate a particular difficulty
in teaching.


> Before Java 5, you couldn't add ints to a list, you had
> to create new Integers.

Which is to say, that problem doesn't exist any more.


> And why? For simple efficiency!

Efficiency is quite important, so I don't see your point
here. Although perhaps this kind of efficiency can be
ignored for first year programming students.


> Never mind that
> Smalltalks and other languages can make that choice on their own, and
> that for most languages a pointer and an int have exactly 32bits anyway,
> so there's not even a difference!

Since one of the cornerstones of Java is portability, the common
size of a pointer is not a good choice to base the design on.

Does Smalltalk achieve the same degree of implementation efficiency
with ints that Java does?


> Oh, and the list of println() functions is even better.  One for Object,
> and one for each "basic type."  Simple and elegant, huh?

Well, yes it is. Overloading polymorphism is a fine technique.
Okay, it's maybe not *elegant*, but it gets the job done.


Marshall

0
10/31/2005 12:52:12 AM
Marshall Spight wrote:
> Ulrich Hobelmann wrote:
>> Marshall Spight wrote:
>>> What programming
>>> principle is going to be harder to learn because of the
>>> difference between object types and primitives?
>> For instance, they might want to add objects to a list,
>> or print them out.
> 
> This is just a mechanical description of a semantic
> irregularity; it doesn't illuminate a particular difficulty
> in teaching.

It does make it harder for students, when they get a weird error message 
that 5 can't be added to their list.

>> Before Java 5, you couldn't add ints to a list, you had
>> to create new Integers.
> 
> Which is to say, that problem doesn't exist any more.

For those students who learn Java 5, yes.  But Java 5 has other 
complications I guess.

>> And why? For simple efficiency!
> 
> Efficiency is quite important, so I don't see your point
> here. Although perhaps this kind of efficiency can be
> ignored for first year programming students.

Yes, but not efficiency at the cost of the student.

>> Never mind that
>> Smalltalks and other languages can make that choice on their own, and
>> that for most languages a pointer and an int have exactly 32bits anyway,
>> so there's not even a difference!
> 
> Since one of the cornerstones of Java is portability, the common
> size of a pointer is not a good choice to base the design on.

Ok, agreed.

> Does Smalltalk achieve the same degree of implementation efficiency
> with ints that Java does?

I don't know about it in particular, but at least my generic data 
structures in C have no problem at all to accept ints where other 
objects (void *) are fine.  On 64bit architectures there shouldn't be 
any problems, either.

Really, a compiler should know enough about ints to simply optimize to 
32bit arithmetic wherever an Integer pops up.  For all other purposes, 
Integer could be treated as an object.  I suppose most OO languages do 
that, but I don't know them specifically.

>> Oh, and the list of println() functions is even better.  One for Object,
>> and one for each "basic type."  Simple and elegant, huh?
> 
> Well, yes it is. Overloading polymorphism is a fine technique.
> Okay, it's maybe not *elegant*, but it gets the job done.

Well, the function could also just accept Objects, with the compiler 
performing the dirty work instead of the human.  Oh well, we gotta fight 
that bitch unemployment!

-- 
The road to hell is paved with good intentions.
0
u.hobelmann (1643)
10/31/2005 7:03:05 AM
Ulrich Hobelmann schrieb:
> Joachim Durchholz wrote:
> 
>> Smalltalk isn't very useful for learning a structured approach. (I 
>> hear the Smalltalkers howling now...)
> 
> Hm, I haven't really used it for anything, but from looks it seems like 
> Java done right (as do Python, OCaml...).

Smalltalk is an entirely different league: dynamically typed.

In Smalltalk, the best approximation to defining the "type" of an object 
is the set of messages (i.e. function call names) that the object can 
handle without throwing a #DoesNotUnderstand exception. Type and class 
hierarchy are only loosely related, and it's easy to define a subclass 
that is not a subtype. (That's intentional. Smalltalk programmers are 
supposed to write robust functions that do "the expected thing" whatever 
you throw at them.)
Smalltalk has a quite "functional feel" about it: it's the function 
names that carry semantics (though said semantics is just implicit, not 
in the Smalltalk code), and it has closures.

In Java, a subclass is always a subtype (well, that was the intention - 
it doesn't always work out). It's "un-functional": no closures. It is 
statically typed.

>> Multimethods bury timebombs in the code, which are bound to explode 
>> years later when some module is to be reused in a slightly different 
>> context. I wouldn't recommend any language that even remotely supports 
>> them.
> 
> Hmm, any concrete examples or pointers?  So far they seemed like a good 
> idea to me.

Assume two class hierarchies, A1 A2 ... An and B1 B2 ... Bn, and a function

   foo: A1 B1 -> int

that accesses the internals of both its parameters, so it needs to be 
regularly overridden for the subclasses of A1 and B1.

To get an overview of the overrides, you can set up a matrix, with the 
Ai along the rows and the Bi along the columns (or vice versa); each 
matrix cell is filled with the concrete implementation for the given 
Ai/Bi implementation.

First of all, assume that a programmer A creates a new class Am, as a 
subclass of An. He'd have to override foo, since the inherited version 
of foo accesses internals that aren't present anymore, or have a changed 
semantics.
So far, that's not a serious problem, though A is cursing heavily 
because he has to write an implementation of foo for all of the B1...Bn 
types. Fortunately, most can be handled using a few helper functions and 
some boilerplate code, but he's wondering why he's supposed to write 
boilerplate code in the first place - wasn't OO supposed to factor out 
common code? A few cases are real work, some because he knows they will 
be called often and wants to optimise them (something I'd consider as 
"here we have the advantages of MD at work"), some because that 
particular subclass of B1 was *so* different that the normal algorithm 
wouldn't work.

So far, we have encountered no real problems, only annoyances (and even 
these might be small enough to not matter - a lot depends on language 
specifics).

Enter programmer B. He does essentially the same as A, except he's 
doing a Bm class for the B1... hierarchy. In fact he isn't even aware of 
A's work, so he doesn't know he should also write a foo variant for the 
type combination Am/Bm.
A isn't aware of the problem either. We have a blank place in the matrix 
for foo, yet the type system says it's OK to call foo with an Am and a 
Bm value!

The approaches for the issue that I have seen all have serious drawbacks:

1) Prioritise dispatch, i.e. dispatch on parameters in parameter order. In 
the above example, the foo for Am/Bn will be made responsible. In the 
matrix model, this means copying over the implementation from Am/Bn to 
Am/Bm.
This is what you get if you code multiple dispatch by hand, without 
language support.
The problem is that the Am/Bn implementation will crash. (Remember that 
Bm is quite different from Bn internally, and that foo accesses those 
internals.)

2) Allow the programmer of the foo function to prioritise the dispatch.
I dimly remember that one of CL or Scheme did things in such a way. I 
didn't dig too far into it though, and it's been a while since I looked 
it up, so I may be totally wrong about that.
This has the same drawbacks as (1), but it also burdens bug hunters with 
the task of determining which dispatch policy is in effect for each 
function that they encounter.

3) Disallow compositing classes Am and Bm into the same program, unless 
somebody writes the missing foo implementation (which may be just a 
one-liner that tells the compiler which of the two competing 
implementations will do "the right thing").
I can't remember having seen this implemented, but I read a paper that 
explored the approach. The (unsurprising) result was that the approach 
is technically feasible, and that one can attribute a semantics to it.
That's semantically the second-most-sound approach, but it comes at a 
high price: you cannot reasonably do dynamically-loaded systems à la 
Java, Lisp, or Smalltalk with that. The timebomb is just shifted from 
runtime to integration time - that's better than nothing from a QA 
perspective, but it means that adding a new module into the system may 
invalidate the entire system. In other words, *modularity is broken by 
this approach*: modules are entities that can be freely added to the 
system without breaking existing code, but now we have a possibility 
that adding a module breaks the system (it won't compile anymore).

4) Mark the Am/Bm entry in the matrix as "invalid". If foo is called 
with arguments of types Am and Bm, respectively, it will throw a 
MethodNotDefined exception or something.
I once toyed with that idea and finally dropped it. Other than that, I'm 
not aware of it being considered anywhere.
It's just as bad as (3), but the QA nightmare is even worse: you have to 
inspect all data paths of the system and see whether there's any 
possibility that an Am and an Bm will meet in a foo call.

5) If there's multiple dispatch on both parameters of foo, force A1 and 
B1 into the same module. Anybody who extends the A1 hierarchy will 
automatically be responsible for implementing the B1 hierarchy, too.
This approach is the only sound and modular one, but it imposes two burdens:
a) Class designers will have to decide quite early which parameters will 
be dispatchable.
b) Language designers need to introduce modules that contain multiple 
classes, i.e. a class isn't a module anymore. This considerably 
complicates language syntax and adds a whole host of visibility rules, 
to the very least.

I have done some serious class designing in my OO time, and some of the 
designs were fairly complicated, design-from-scratch, non-boilerplate 
affairs, including situations where I wished I could have multiple dispatch.
In hindsight, I know that 90% of these designs could have been worked 
into a framework of higher-order functions. Some designs would even 
simplify considerably because some of the classes were handcrafted 
closures and other delayed-execution devices. For the remaining 10%, the 
design would be worse, but that loss would be far less than the 
improvements to be expected from going to closures and higher-order 
functions. (Losing a few thousand lines of boilerplate code is always a 
good thing after all!)

That's why I'm in comp.lang.functional, not in comp.object :-)

>> I.e. while Java has a steep syntactic learning curve (which is utterly 
>> useless but not much of an issue for seasoned programmers), Lisp has a 
>> steep semantic learning curve.
> 
> I think as semantics go Lisp isn't too bad.  It's more the sheer number 
> of special operators, and some oddities (at what time stuff is loaded or 
> evaluated, or symbol property lists) that makes it complicated.

Agreed.

> Java has its problems too, and most of all it lacks power and 
> conciseness of notation.  IMHO at least.

Agreed.
(Disclaimer: I don't know about the parametric type stuff in Java.)

>> That's just my personal impression. I believe I'm not the only one in 
>> that camp, though of course those who have mastered the learning curve 
>> wouldn't bother about these problems :-)
> 
> Well, maybe it'd be worth cleaning up the good stuff from Common Lisp. 
> Of course the everyday users strongly disagree here.

You can't improve existing languages if improvements require taking away 
things. I suspect that CL and Scheme acquired too many things that 
should be taken away and replaced by better mechanisms to make language 
improvement feasible - which is why I'm not using these languages :-) 
(and that also means that my ideas about CL and Schemes are probably 
quite biased, so if you wish to explore these languages from my 
perspective, do it on your own, don't simply take over my views unchecked).

Regards,
Jo
0
jo427 (1164)
10/31/2005 8:48:19 AM
Followup-To: comp.lang.functional

Joachim Durchholz <jo@durchholz.org> writes:

>>> Multimethods bury timebombs in the code, which are bound to
>>> explode years later when some module is to be reused in a slightly
>>> different context. I wouldn't recommend any language that even
>>> remotely supports them.

I disagree. Other solutions to the same problems are worse, they don't
allow reusing the code even in cases where it would work.

> To get an overview of the overrides, you can set up a matrix, with the
> Ai along the rows and the Bi along the columns (or vice versa); each
> matrix cell is filled with the concrete implementation for the given
> Ai/Bi implementation.

Let's assume that there exists a generic implementation for arbitrary
Ai, which is not as efficient or doesn't produce as high quality
output as a specialized implementation but is correct.

For example checking whether two sequences have equal elements can be
done element by element, using a generic iterator interface, and for
two byte arrays a specialized implementation can use memcmp. Printing
an arbitrary shape on a printing device can be implemented by
converting it to Bezier curves first. Comparison of two rational
numbers of different types which don't know each other (e.g. decimal
floating point from one package and variable precision binary floating
point from another) can be done by converting them to vulgar
fractions.
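
A rough sketch of that scheme in Python (names and the dispatch table are mine, purely illustrative): a correct generic fallback that compares element by element, plus an optional specialization registered for a particular pair of types, the analogue of memcmp for byte arrays.

```python
# Multimethod-style dispatch on the types of both arguments,
# with a correct generic implementation as the fallback.

_impls = {}  # (type_a, type_b) -> specialized implementation

def register(ta, tb, fn):
    _impls[(ta, tb)] = fn

def _generic_equal(a, b):
    # Correct but slow: element-by-element comparison via iteration.
    if len(a) != len(b):
        return False
    return all(x == y for x, y in zip(a, b))

def seq_equal(a, b):
    # Prefer a specialization for this exact type pair; otherwise
    # fall back to the generic comparison, which is always correct.
    fn = _impls.get((type(a), type(b)), _generic_equal)
    return fn(a, b)

# Specialization: two byte strings can be compared in one call
# (the analogue of memcmp).
register(bytes, bytes, lambda a, b: a == b)
```

Combinations nobody specialized (say, a tuple against a list) simply take the generic path, which is Marcin's point: the program still works, just less efficiently.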

Multimethods work very well with that, and it's safe. Why do you want
to disallow this? What do you propose instead?

> The approaches for the issue that I have seen all have serious drawbacks:

It's better to be able to fill the missing slot when the two libraries
are combined than to not be able to do it at all.

> 4) Mark the Am/Bm entry in the matrix as "invalid". If foo is called
> with arguments of types Am and Bm, respectively, it will throw a
> MethodNotDefined exception or something.
> I once toyed with that idea and finally dropped it. Other than that,
> I'm not aware of it being considered anywhere.
> It's just as bad as (3), but the QA nightmare is even worse: you have
> to inspect all data paths of the system and see whether there's any
> possibility that an Am and an Bm will meet in a foo call.

No, you don't have to prove this. Just fill the missing slot if you
have any doubt whether it's needed.

> In hindsight, I know that 90% of these designs could have been
> worked into a framework of higher-order functions.

It's not extensible even for functions with one dispatched argument.

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
0
qrczak (1266)
10/31/2005 10:01:41 AM
Joachim Durchholz wrote:
> Ulrich Hobelmann schrieb:
>> Joachim Durchholz wrote:
>>
>>> Smalltalk isn't very useful for learning a structured approach. (I 
>>> hear the Smalltalkers howling now...)
>>
>> Hm, I haven't really used it for anything, but from looks it seems 
>> like Java done right (as do Python, OCaml...).
> 
> Smalltalk is an entirely different league: dynamically typed.

Sure, that's a difference from Java, but hardly a reason for Java.  I 
like static typing, but I'd really not use Java to teach that, but SML 
or Haskell.

If you don't care about the typing, Smalltalk might be *more* flexible 
for teaching programming patterns than Java, if anything.

> So far, that's not a serious problem, though A is cursing heavily 
> because he has to write an implementation of foo for all of the B1...Bn 
> types. Fortunately, most can be handled using a few helper functions and 
> some boilerplate code, but he's wondering why he's supposed to write 
> boilerplate code in the first place - wasn't OO supposed to factor out 
> common code? A few cases are real work, some because he knows they will 

True :)

I'm always assuming that you only override stuff if you care about the 
specifics, and that OO components are loosely bound anyway (more 
black-box like, little use of inheritance).  So I guess I'm not heavily 
into OO anyway, and maybe that ameliorates some of these multimethod 
problems.

> I have done some serious class designing in my OO time, and some of the 
> designs were fairly complicated, design-from-scratch, non-boilerplate 
> affairs, including situations where I wished I could have multiple 
> dispatch.
> In hindsight, I know that 90% of these designs could have been worked 
> into a framework of higher-order functions. Some designs would even 
> simplify considerably because some of the classes were handcrafted 
> closures and other delayed-execution devices. For the remaining 10%, the 
> design would be worse, but that loss would be far less than the 
> improvements to be expected from going to closures and higher-order 
> functions. (Losing a few thousand lines of boilerplate code is always a 
> good thing after all!)
> 
> That's why I'm in comp.lang.functional, not in comp.object :-)

Exactly.  I always model everything OO as functional in my head (most of 
the time just passing the interface, i.e. tuple of higher-order 
functions, as a parameter).

I'm not sure, but mixins could be a cleaner way of multimethoding: you 
explicitly pass two interfaces instead of one, but that way there's no 
dangerous implicitness involved.

If that's not enough and you need a redefinition, pass the multimethod 
itself as a parameter to whatever, so every user can use their own 
custom definition.
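
In Python-ish pseudocode (my own illustrative names, not anything from the thread), passing the "multimethod" explicitly might look like:

```python
# The combining function is an explicit parameter, not a hidden
# global dispatch table: every caller can supply their own.

def default_combine(x, y):
    return x + y

def process(pairs, combine=default_combine):
    # Apply the caller-chosen combining function to each pair.
    return [combine(x, y) for x, y in pairs]
```

A caller who needs different behaviour just passes `combine=lambda x, y: x * y`; nothing is dispatched behind their back.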

By making things explicit/visible, in programming as in 
polito-economics, you IMHO avoid many problems that wouldn't need to be 
there to begin with :)

> You can't improve existing languages if improvements require taking away 
> things. I suspect that CL and Scheme acquired too many things that 
> should be taken away and replaced by better mechanisms to make language 
> improvement feasible - which is why I'm not using these languages :-) 
> (and that also means that my ideas about CL and Schemes are probably 
> quite biased, so if you wish to explore these languages from my 
> perspective, do it on your own, don't simply take over my views 
> unchecked).

Regarding Scheme my impression is that it's very clean, but rather too 
limited or stripped to do many useful things, unless you choose a 
non-standard implementation, but so far I didn't like those too much.

Schemers would say that language improvement is possible with macros, as 
Scheme offers a set of about every basic construct you might need to 
implement more advanced stuff.  OTOH, I prefer Lisp macros, so Scheme is 
the spot between Lisp and ML that I don't find too interesting personally.

-- 
The road to hell is paved with good intentions.
0
u.hobelmann (1643)
10/31/2005 10:11:21 AM
Marcin 'Qrczak' Kowalczyk schrieb:
> 
> Joachim Durchholz <jo@durchholz.org> writes:
> 
>>>>Multimethods bury timebombs in the code, which are bound to
>>>>explode years later when some module is to be reused in a slightly
>>>>different context. I wouldn't recommend any language that even
>>>>remotely supports them.
> 
> I disagree. Other solutions to the same problems are worse, they don't
> allow to reuse the code even in cases it would work.

Well, the best solution would be to avoid dynamic dispatch in the first 
place :-)

>>To get an overview of the overrides, you can set up a matrix, with the
>>Ai along the rows and the Bi along the columns (or vice versa); each
>>matrix cell is filled with the concrete implementation for the given
>>Ai/Bi implementation.
> 
> Let's assume that there exists a generic implementation for arbitrary
> Ai, which is not as efficient or doesn't produce as high quality
> output as a specialized implementation but is correct.

If multiple dispatch is just for optimisation, then it can work 
semantically.

 > Multimethods work very well with that, and it's safe. Why do you want
 > to disallow this?

However, you still have the problem that the Am/Bm case isn't tested. 
Neither the programmer of Am nor the programmer of Bm has a reason to 
check that his optimisations are indeed correct for the Am/Bm case.

So I'm still extremely wary of MD even in this case. This is not the 
classic OO-style "gradually refine your code as you go down the type 
hierarchy" code reuse, it's more a case of "here we have a matrix, and 
we'd like to optimise cases A4, C9 and O32, falling back to the standard 
implementation in all other cases". I wouldn't want to have an 
optimisation chosen by default for the P33 case just because it's to the 
lower right of O32, at least not until I have specifically tested that case.

 > What do you propose instead?

Of course, in that case, explicit dispatch using a (sparse) matrix of 
implementations in an FPL works well enough. You need to know the entire 
matrix, or at least of those areas where you do optimisations.
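
A minimal sketch of such a sparse matrix (illustrative Python, invented case names): only combinations that were actually tested get optimised entries, and everything else deliberately falls back to the standard implementation.

```python
# Explicit, sparse dispatch matrix with a standard fallback.
# Only tested combinations are entered; nothing is chosen "by
# default" for untested neighbouring cases.

def standard_impl(a, b):
    return ("standard", a, b)

def fast_a4(a, b):
    return ("optimized", a, b)

# Sparse matrix: keys are (row, column) cases known to be tested.
matrix = {("A", 4): fast_a4}

def dispatch(row, col, a, b):
    return matrix.get((row, col), standard_impl)(a, b)
```

Here the hypothetical case P33 is never silently given the O32 optimisation; it takes the standard path until someone tests it and adds an entry.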

>>4) Mark the Am/Bm entry in the matrix as "invalid". If foo is called
>>with arguments of types Am and Bm, respectively, it will throw a
>>MethodNotDefined exception or something.
>>I once toyed with that idea and finally dropped it. Other than that,
>>I'm not aware of it being considered anywhere.
>>It's just as bad as (3), but the QA nightmare is even worse: you have
>>to inspect all data paths of the system and see whether there's any
>>possibility that an Am and an Bm will meet in a foo call.
> 
> No, you don't have to prove this. Just fill the missing slot if you
> have any doubt whether it's needed.

I don't think that end users can do that.

And with dynamic loading, a lot of that loading is triggered by 
end-users' actions. Think of people setting up their work environment 
from freely available Java modules: they'll combine a spreadsheet, a 
word processor, a desktop calculator, etc. etc; there isn't much that 
they can do if they find that the calculator is using one bignum 
implementation and the spreadsheet another one, and have it report 
errors just because the system doesn't know what to do when adding a 
spreadsheet bignum and a calculator bignum.

>>In hindsight, I know that 90% of these designs could have been
>>worked into a framework of higher-order functions.
> 
> It's not extensible even for functions with one dispatched argument.

The new framework wouldn't have had dispatching functions.

Regards,
Jo
0
jo427 (1164)
10/31/2005 10:41:02 AM
Ulrich Hobelmann schrieb:
> Joachim Durchholz wrote:
> 
>> Ulrich Hobelmann schrieb:
>>
>>> Joachim Durchholz wrote:
>>>
>>>> Smalltalk isn't very useful for learning a structured approach. (I 
>>>> hear the Smalltalkers howling now...)
>>>
>>>
>>> Hm, I haven't really used it for anything, but from looks it seems 
>>> like Java done right (as do Python, OCaml...).
>>
>> Smalltalk is an entirely different league: dynamically typed.
> 
> Sure, that's a difference from Java, but hardly a reason for Java.  I 
> like static typing, but I'd really not use Java to teach that, but SML 
> or Haskell.
> 
> If you don't care about the typing, Smalltalk might be *more* flexible 
> for teaching programming patterns than Java, if anything.

Agreed on both counts, but Smalltalk still isn't "Java done right": 
Java doesn't even intend to achieve what Smalltalk does.

IOW Java already does have its share of problems, we don't need to add a 
criticism of "it doesn't do well what Smalltalk does" :-)

>> So far, that's not a serious problem, though A is cursing heavily 
>> because he has to write an implementation of foo for all of the 
>> B1...Bn types. Fortunately, most can be handled using a few helper 
>> functions and some boilerplate code, but he's wondering why he's 
>> supposed to write boilerplate code in the first place - wasn't OO 
>> supposed to factor out common code? A few cases are real work, some 
>> because he knows they will 
> 
> True :)
> 
> I'm always assuming that you only override stuff if you care about the 
> specifics, and that OO components are loosely bound anyway (more 
> black-box like, little use of inheritance).  So I guess I'm not heavily 
> into OO anyway, and maybe that ameliorates some of these multimethod 
> problems.

True.

I've been into massive OO hierarchies. My experience was that inheriting 
from a class *always* creates a tight coupling. For example, when 
overriding a virtual function, you needed to check where the parent 
class would call that function, and check that the calling function 
would still "do the right thing" with the new implementation. Changes in 
the parent class might invalidate that analysis, so whenever a class was 
changed, I routinely checked all the descendant classes to see whether 
anything broke (that would happen roughly with 10-20% of algorithmic 
changes).

> I'm not sure, but mixins could be a cleaner way of multimethoding: you 
> explicitly pass two interfaces instead of one, but that way there's no 
> dangerous implicitness involved.

I think that's the most important point: multiple dispatch in itself 
isn't bad, and has its uses (as Marcin pointed out). It's the 
implicitness that can create maintenance problems.

That implicitness already can create problems in the single-dispatch 
case. It's conceivable that multiple dispatch simply magnifies problems 
that are already present.

> Schemers would say that language improvement is possible with macros, as 
> Scheme offers a set of about every basic construct you might need to 
> implement more advanced stuff.

This stance, of course, neglects that some improvements are based on 
restrictions, not on enabling.
Expressiveness is a double-edged sword: if programmers "can do more", 
maintainers have to check more aspects to find out where a change is needed.
It's from that perspective that I tend to be annoyed/amused by claims 
like "the language is so powerful that you can add all you need 
yourself, or use libraries that do it for you".
Such claims are routinely made for Lisp/Scheme/CL, Smalltalk, Perl, and 
C++. I don't think it's a coincidence that all these languages have a 
reputation of being powerful but sometimes doing unexpected things, and 
placing high demands on programmer discipline and/or maintenance.

Regards,
Jo
0
jo427 (1164)
10/31/2005 11:01:53 AM
Joachim Durchholz wrote:
> It's from that perspective that I tend to be annoyed/amused by claims 
> like "the language is so powerful that you can add all you need 
> yourself, or use libraries that do it for you".
> Such claims are routinely made for Lisp/Scheme/CL, Smalltalk, Perl, and 
> C++. I don't think it's a coincidence that all these languages have a 
> reputation of being powerful but sometimes doing unexpected things, and 
> placing high demands on programmer discipline and/or maintenance.

True, but Scheme probably has the greatest freedom you can have (dynamic 
typing, but lots of good basic types to work with) and a set of 
orthogonal constructs, which isn't true for C dialects, and probably not 
for Smalltalk either, as it's heavily based on more complex constructs (OO).

On the downside, of course the features might be too basic for your taste ;)

Oh, and it has no standard package/module system, which might be the 
biggest problem for practical use.

-- 
The road to hell is paved with good intentions.
0
u.hobelmann (1643)
10/31/2005 11:39:45 AM
Joachim Durchholz <jo@durchholz.org> writes:

> Well, the best solution would be to avoid dynamic dispatch in the
> first place :-)

And specify concrete types in all operations they are involved with?
This leads to less reusable components.

We have that in C. Consider operations which filter byte streams and
character streams: compression, encryption, character encoding,
duplicating a stream to two streams, buffering with lookahead and
putback. There are no common interfaces, each library provides its own
monomorphic functions. You have to build the stream abstraction out of
the raw block translation function yourself. I want reading from
compressed streams to look like reading from regular files. But C
FILE * is not extensible.

Sorry, it's better done with dynamic dispatch.
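
The stream point can be sketched in a few lines (illustrative Python, my own class names; the `.upper()` filter is a trivial stand-in for compression or encryption):

```python
# With dynamic dispatch, a filtering wrapper is itself a stream,
# so reading from it looks exactly like reading from the source.

class Stream:
    def read(self, n):
        raise NotImplementedError

class StringStream(Stream):
    def __init__(self, data):
        self.data = data
    def read(self, n):
        # Hand out the next n characters.
        chunk, self.data = self.data[:n], self.data[n:]
        return chunk

class UpperStream(Stream):
    # A trivial stand-in for a compression/encryption filter:
    # wraps any Stream and is itself a Stream.
    def __init__(self, inner):
        self.inner = inner
    def read(self, n):
        return self.inner.read(n).upper()
```

This is exactly what a closed `FILE *` can't give you: `UpperStream(StringStream(...))` composes because `read` dispatches dynamically.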

> However, you still have the problem that the Am/Bm case isn't tested.
> Neither the programmer of Am nor the programmer of Bm has a reason to
> check that his optimisations are indeed correct for the Am/Bm case.

You noticed that testing can't cover all possible combinations of
inputs, and thus you can't expect to test everything - you have to
hope that the code is correct even though you have tested only some
combinations.

It's nothing new, and nothing specific to multidispatch.

How do higher order functions help here? Do you test a higher order
function with all possible arguments, i.e. all possible functions
of the given type?

> I wouldn't want to have an optimisation chosen by default for the
> P33 case just because it's to the lower right of O32, at least not
> until I have specifically tested that case.

If P is not a subtype of O, or 33 is not a subtype of 32, don't
declare them as such.

> Of course, in that case, explicit dispatch using a (sparse) matrix
> of implementations in an FPL works well enough. You need to know the
> entire matrix, or at least of those areas where you do optimisations.

You have to know the whole matrix in one place, and you criticize a
solution which doesn't as unmodular?!

> And with dynamic loading, a lot of that loading is triggered by
> end-users' actions. Think of people setting up their work environment
> from freely available Java modules: they'll combine a spreadsheet,
> a word processor, a desktop calculator, etc. etc; there isn't much
> that they can do if they find that the calculator is using one
> bignum implementation and the spreadsheet another one, and have it
> report errors just because the system doesn't know what to do when
> adding a spreadsheet bignum and a calculator bignum.

If the particular tools can't be combined in this way, it's a pity.
The user will get an error and will have to invent some other way of
letting them cooperate. He can also inform authors of these packages
so in later versions they can try to be more compatible.

It's still infinitely many times better than having to enumerate
by the author all applications a given application is supposed
to communicate with, or all kinds of documents embeddable in a
spreadsheet.

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
0
qrczak (1266)
10/31/2005 11:56:53 AM
"Marshall  Spight" <marshall.spight@gmail.com> writes:

> Yes, the folks in charge of the libraries continue to improve
> them.

This is a good thing, if done right.

> If by "constantly" you mean four new major releases since
> the original one in 1995, then okay. Of course, code you
> wrote for Sun's jdk 1.0 will still run under jdk 1.5.

Well, some code written by a colleague for jdk 1.0 wouldn't compile
with jdk 1.2.  After fixing it to work with 1.2, it then stopped
working again with 1.3.  At that point, we gave up and rewrote it
in Haskell, which turned out to be much nicer.  Of the graphical
libraries used in the Java code, it seemed that later versions just
made the original GUI design uglier (changed font, changed sizes, etc),
and the /real/ bugs (non-working scroll-bars) remained, with even a
few new bugs introduced.  A classic case of rearranging deck chairs.

> But I don't think anyone would be any happier than they are now
> if, instead, they had simply halted development on the libraries.

Indeed.  But one would like to be able to come back to previously
working code and for it still to work in much the same way.  This does
not always seem possible in Java.

Regards,
    Malcolm
0
10/31/2005 12:13:33 PM
Marshall Spight wrote:

> This is just a mechanical description of a semantic
> irregularity; it doesn't illuminate a particular difficulty
> in teaching.

The problem with Java is that there is a lot to explain before
you actually get anything done.  With e.g. Python, you type "python" on
the command line, and get a new prompt.  You can now experiment with
e.g. numerical expressions, define simple functions, objects etc etc
directly, and one step at a time.  (Similar for Scheme, Haskell or a 
host of other programming languages, of course).

With Java, you must learn¹ a lot more about the structure of OO programs, 
involving concepts like public/private, static, classes, types, 
libraries, compilation, etc etc - just to print "hello world".

I can't for the life of me see one single good reason for teaching Java
to beginners.

-k

¹ Or at least, often be told to disregard the man behind the curtains.
0
news2 (145)
10/31/2005 12:39:03 PM
Marcin 'Qrczak' Kowalczyk schrieb:
> Joachim Durchholz <jo@durchholz.org> writes:
> 
>>Well, the best solution would be to avoid dynamic dispatch in the
>>first place :-)
> 
> And specify concrete types in all operations they are involved with?
> This leads to less reusable components.

Hey, we don't need dynamic dispatch to have polymorphic code, do we?

>>I wouldn't want to have an optimisation chosen by default for the
>>P33 case just because it's to the lower right of O32, at least not
>>until I have specifically tested that case.
> 
> If P is not a subtype of O, or 33 is not a subtype of 32, don't
> declare them as such.

They can be subtypes and still be implementation-wise incompatible.

>>Of course, in that case, explicit dispatch using a (sparse) matrix
>>of implementations in an FPL works well enough. You need to know the
>>entire matrix, or at least of those areas where you do optimisations.
> 
> You have to know the whole matrix in one place, and you criticize a
> solution which doesn't as unmodular?!

Sure. If the whole matrix is in one place, that's a module.

It isn't extensible, of course. At least not modularly. But that's what 
dynamic dispatch doesn't offer in the first place, so I'm not losing 
anything.

The explicit matrix may make it easier to factor out common code. Then 
again, it may not - I haven't worked with such an approach yet, at least 
not in large projects.

>>And with dynamic loading, a lot of that loading is triggered by
>>end-users' actions. Think of people setting up their work environment
>>from freely available Java modules: they'll combine a spreadsheet,
>>a word processor, a desktop calculator, etc. etc; there isn't much
>>that they can do if they find that the calculator is using one
>>bignum implementation and the spreadsheet another one, and have it
>>report errors just because the system doesn't know what to do when
>>adding a spreadsheet bignum and a calculator bignum.
> 
> If the particular tools can't be combined in this way, it's a pity.
> The user will get an error and will have to invent some other way of
> letting them cooperate. He can also inform authors of these packages
> so in later versions they can try to be more compatible.
> 
> It's still infinitely many times better than having to enumerate
> by the author all applications a given application is supposed
> to communicate with, or all kinds of documents embeddable in a
> spreadsheet.

There are better ways.

Standardised interfaces are one of them. That reduces the NxM problem to 
an N+M one.

Translating that back to dynamic dispatch this means splitting
   foo: A1 B1 -> int
into
   foo1: A1 -> interface
   foo2: interface B1 -> int
and singly dispatching on these two.

Standardised interfaces are indeed the best solution. But you don't need 
multiple dispatch for them.
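
The foo1/foo2 split above can be rendered concretely (an illustrative Python sketch with made-up bodies): each A-type knows only how to produce the standardised interface value, and each B-type knows only how to consume it, so N producers and M consumers need N+M methods instead of NxM.

```python
# Reducing NxM to N+M via a standardised intermediate interface.

class A1:
    def to_interface(self):          # foo1: A1 -> interface
        return 3

class B1:
    def consume(self, iface):        # foo2: interface B1 -> int
        return iface * 10

def foo(a, b):                       # foo: A1 B1 -> int
    # Two single dispatches in sequence; no multiple dispatch.
    return b.consume(a.to_interface())
```

Adding an A2 or B2 means writing one new method, and every existing partner works with it unchanged.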

(And, no, I don't think that dynamic dispatch is a panacea. Neither as 
provided in OO languages nor in the form of explicit dispatch tables in 
an FPL.)

Regards,
Jo
0
jo427 (1164)
10/31/2005 3:10:00 PM
Joachim Durchholz <jo@durchholz.org> writes:

>> And specify concrete types in all operations they are involved with?
>> This leads to less reusable components.
>
> Hey, we don't need dynamic dispatch to have polymorphic code, don't we?

Static dispatch is applicable only to a subset of problems for which
dynamic dispatch is. Basically too much information about the choice
of types of subobjects leaks to the parameters of the type of the
whole object, and these type parameters can't be abstracted away.

>> If P is not a subtype of O, or 33 is not a subtype of 32, don't
>> declare them as such.
>
> They can be subtypes and still be implementation-wise incompatible.

No, if Liskov Substitution Principle doesn't hold then subtyping is
a lie. Expect troubles unless the code where lack of LSP manifests is
limited in scope and is careful to not rely on the declared subtyping.

>> You have to know the whole matrix in one place, and you criticize
>> a solution which doesn't as unmodular?!
>
> Sure. If the whole matrix is in one place, that's a module.
>
> It isn't extensible, of course. At least not modularly.

It thus loses the main benefit of modularity.

> But that's what dynamic dispatch doesn't offer in the first place,
> so I'm not losing anything.

What do you mean? Dynamic dispatch as in generic functions is
extensible. I don't care about class-based dynamic dispatch.

> Standardised interfaces are one of them. That reduces the NxM problem
> to an N+M one.

It disallows making extensible optimized variants for particular
combinations. It's good as a fallback for the default case, but it's
not the ultimate solution.

And it's not always applicable (there isn't necessarily a universal
representation to which everything else can be converted). When it's
not, generic functions don't have a default implementation and thus
might fail if not all combinations are covered, but they will work
fine when used combinations are indeed covered.

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
0
qrczak (1266)
10/31/2005 9:48:43 PM
In article <dk4tj2$847$1@online.de>,
Joachim Durchholz  <jo@durchholz.org> wrote:
>Ulrich Hobelmann schrieb:
[...]
>> Schemers would say that language improvement is possible with macros, as 
>> Scheme offers a set of about every basic construct you might need to 
>> implement more advanced stuff.
>
>This stance, of course, neglects that some improvements are based on 
>restrictions, not on enabling.
>Expressiveness is a double-edged sword: if programmers "can do more", 
>maintainers have to check more aspects to find out where a change is needed.

Your stance neglects that lack of expressiveness is the main reason
 for repeated code, exactly the maintenance problem you are concerned about.
 I gave an example earlier involving "until".

>It's from that perspective that I tend to be annoyed/amused by claims 
>like "the language is so powerful that you can add all you need 
>yourself, or use libraries that do it for you".

That "or" joins very different claims.

>Such claims are routinely made for Lisp/Scheme/CL, Smalltalk, Perl, and 
>C++.

It's not much of a claim if it's made for such diverse languages.
 That "or" really weakens it.

C++ is a big language (in the strict sense of syntax and semantics).
Lisps and especially Scheme are small languages.
CL and C++ have large standard libraries.
Lisp and Scheme have a very powerful mechanism for adding to the language;
 looked at another way, they have a more powerful notion of what can be
 added to a library.

> I don't think it's a coincidence that all these languages have a 
>reputation of being powerful but sometimes doing unexpected things, and 
>placing high demands on programmer discipline and/or maintenance.

Scheme certainly doesn't have this reputation; quite the opposite.
C++ certainly does.

Gary Baumgartner

>
>Regards,
>Jo


0
gfb (30)
10/31/2005 10:56:15 PM
Following up on my own post.

I just remembered we've been through some of this before.
 When I mentioned that macros can remove repetition and improve safety
 you agreed, adding that HOFs can be used to achieve the same effect.

I would be interested in seeing your HOF implementation of my macro "until",
 to clarify your preferences.
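
(For reference, one possible HOF rendering of such an "until", a sketch in Python rather than Scheme, and not necessarily what either of us has in mind:)

```python
# "until" as a higher-order function instead of a macro:
# repeatedly apply `step` to `state` until `done` holds.

def until(done, step, state):
    while not done(state):
        state = step(state)
    return state
```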

Gary Baumgartner

0
gfb (30)
10/31/2005 11:20:05 PM
Marcin 'Qrczak' Kowalczyk schrieb:
> Joachim Durchholz <jo@durchholz.org> writes:
> 
> 
>>>And specify concrete types in all operations they are involved with?
>>>This leads to less reusable components.
>>
>>Hey, we don't need dynamic dispatch to have polymorphic code, don't we?
> 
> 
> Static dispatch is applicable only to a subset of problems for which
> dynamic dispatch is. Basically too much information about the choice
> of types of subobjects leaks to the parameters of the type of the
> whole object, and these type parameters can't be abstracted away.
> 
> 
>>>If P is not a subtype of O, or 33 is not a subtype of 32, don't
>>>declare them as such.
>>
>>They can be subtypes and still be implementation-wise incompatible.
> 
> No, if Liskov Substitution Principle doesn't hold then subtyping is
> a lie. Expect troubles unless the code where lack of LSP manifests is
> limited in scope and is careful to not rely on the declared subtyping.

Nope. I'm talking about incompatible *internals*.

Say, two string implementations, one using contiguous memory and copying 
around whenever you insert a substring, and one that represents a 
strings as a series of index/length pairs into a pool of string fragments.

These two types can have the *exact* same (hence compatible) semantics, 
be Liskov substitutable (even both ways), still be fully incompatible 
when it comes to representation.
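
A sketch of those two representations (illustrative Python, my own names): identical observable behaviour, completely different internals.

```python
# Two string types, Liskov-substitutable in behaviour but with
# incompatible internal representations.

class FlatString:
    def __init__(self, text):
        self.buf = text                 # contiguous memory
    def char_at(self, i):
        return self.buf[i]

class RopeString:
    def __init__(self, *fragments):
        self.frags = list(fragments)    # pool of string fragments
    def char_at(self, i):
        # Walk the fragments to find the i-th character.
        for frag in self.frags:
            if i < len(frag):
                return frag[i]
            i -= len(frag)
        raise IndexError(i)
```

Any code written against `char_at` can't tell them apart; only code reaching into `buf` or `frags`, such as a multiply-dispatched function touching both internals, couples itself to the representations.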

>>>You have to know the whole matrix in one place, and you criticize
>>>a solution which doesn't as unmodular?!
>>
>>Sure. If the whole matrix is in one place, that's a module.
>>
>>It isn't extensible, of course. At least not modularly.
> 
> It thus loses the main benefit of modularity.

There wasn't any modularity in the beginning. Having a 
multiple-dispatched function with access to internals tightly couples 
the two types; pretending the two types are still independent is just a 
blatant lie.

In other words, if two types participate in multiple dispatch anywhere, 
forcing them into the same module just acknowledges facts, it doesn't 
impose new problems.

Note that this problem only happens if the multiple-dispatching function 
accesses internals of both types. (Assuming it doesn't call a 
multiple-dispatching function on the same types internally, which simply 
moves the problem around.)
A function that supposedly does multiple dispatch on its types and does 
*not* access internals isn't really multiple-dispatching at all. It's 
calling single-dispatching functions on either parameter, and there's no 
need to ever specialise on the parameters. (Other than optimisation, 
which is a separate issue.)

>>But that's what dynamic dispatch doesn't offer in the first place,
>>so I'm not losing anything.
> 
> What do you mean? Dynamic dispatch as in generic functions is
> extensibile. I don't care about class-based dynamic dispatch.

I was assuming dynamic dispatch with access to internals.

As said above, I don't think that dynamic dispatch without access to 
internals is particularly useful, other than for optimisation.

>>Standardised interfaces are one of them. That reduces the NxM problem
>>to an N+M one.
> 
> It disallows making extensible optimized variants for particular
> combinations. It's good as a fallback for the default case, but it's
> not the ultimate solution.
> 
> And it's not always applicable (there isn't necessarily a universal
> representation to which everything else can be converted). When it's
> not, generic functions don't have a default implementation and thus
> might fail if not all combinations are covered, but they will work
> fine when used combinations are indeed covered.

I remain unconvinced. Those cases in which M:N couldn't be converted to 
M:1:N invariably offered problems. Be it document format 
interoperability, object interaction in games, or whatever: the 
"cleanest" approach was to define a common interface.

Game object interaction is a useful example, since it keeps the asymmetry.
Say you've got to decide what happens if two objects collide. You can
a) Painstakingly define what happens for every combination of possible 
objects - reasonable if the set of object types is small (and that 
warrants putting all those objects into a single module, closing off the 
set of objects so we know what we're talking about).
b) Introduce a "coupling agent", e.g. momentum. If two objects collide, 
both give and receive a certain amount of momentum, and then you're 
done. (Another, very common coupling agent is simply a "collision bit", 
and all objects react with an explosion. Or a "damage amount", each 
object exploding after it received a given amount of damage. There's a 
broad spectrum of couplers - but the set of coupling agents that every 
game uses is fixed and unalterable, so we're back at case (a) here.)
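Option (b) can be sketched in Haskell (hypothetical object types, and a 
deliberately naive notion of momentum exchange): each object type 
implements only how to emit and absorb the coupling agent, so N types 
need N implementations instead of an NxN collision matrix.

```haskell
-- Option (b) sketched: momentum as the single "coupling agent".
-- (Invented types; a real game would conserve momentum properly.)
data Obj = Ship { vel :: Double, mass :: Double }
         | Rock { vel :: Double, mass :: Double }

momentum :: Obj -> Double
momentum o = mass o * vel o

absorb :: Double -> Obj -> Obj
absorb p o = o { vel = p / mass o }

-- Colliding objects interact only through the agent, so no
-- Ship/Rock, Rock/Rock, ... case analysis is ever written.
collide :: Obj -> Obj -> (Obj, Obj)
collide a b = (absorb (momentum b) a, absorb (momentum a) b)
```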

Regards,
Jo
0
jo427 (1164)
11/1/2005 2:24:30 PM
Gary Baumgartner schrieb:
> 
> I would be interested in seeing your HOF implementation of my macro "until",
>  to clarify your preferences.

Hmm... an "until" macro is quite low on the list of things that I need. 
What I need are things that iterate over data structures without 
inviting fencepost errors, such as the "fold" family of HOFs.
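For instance (a sketch, assuming nothing beyond the Prelude), a fold 
traverses the whole structure without any index arithmetic, so there is 
no boundary condition to get wrong:

```haskell
-- A fold visits every element without explicit indices, so there is
-- no off-by-one endpoint to get wrong.
sumSquares :: [Int] -> Int
sumSquares = foldr (\x acc -> x * x + acc) 0

-- Contrast: an index-based loop would need bounds 0 .. length-1,
-- each endpoint being a potential fencepost error.
```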

HOFs are so massively expressive that, frankly, I don't feel any need 
for macros, except to correct deficits in the language. (The problem is 
that everybody thinks that the deficits are elsewhere, so everybody uses 
a different set of macros, making it more difficult to understand other 
peoples' code. That's why I usually refrain from using macros unless 
such usage is so highly standardised that it could be made part of the 
language.)

Regards,
Jo
0
jo427 (1164)
11/1/2005 2:31:05 PM
In article <dk7u79$r5$1@online.de>,
Joachim Durchholz  <jo@durchholz.org> wrote:
>Gary Baumgartner schrieb:
>> 
>> I would be interested in seeing your HOF implementation of my macro "until",
>>  to clarify your preferences.
>
>Hmm... an "until" macro is quite low on the list of things that I need. 
>What I need are things that iterate over data structures without 
>inviting fencepost errors, such as the "fold" family of HOFs.
>
>HOFs are so massively expressive that, frankly, I don't feel any need 
>for macros, except to correct deficits in the language.

Can you tell me which specific language you use that doesn't have these
 deficits, or what you do until (if even!) they get corrected in a future
 version?

Gary Baumgartner

> (The problem is 
>that everybody thinks that the deficits are elsewhere, so everybody uses 
>a different set of macros, making it more difficult to understand other 
>peoples' code. That's why I usually refrain from using macros unless 
>such usage is so highly standardised that it could be made part of the 
>language.)
>
>Regards,
>Jo


0
gfb (30)
11/1/2005 4:28:04 PM
Gary Baumgartner schrieb:
>>HOFs are so massively expressive that, frankly, I don't feel any need 
>>for macros, except to correct deficits in the language.
> 
> Can you tell me which specific language you use that doesn't have these
> deficits, or what you do until (if even!) they get corrected in a future
> version?

I usually live with the deficits of any language I happen to be using.

Sorry for wording it as if I'd regularly fix language deficits using 
macros. Most of the time, such "fixes" would make the code more 
difficult to maintain for my colleagues.
Not everything that's broken needs fixing :-)

Regards,
Jo
0
jo427 (1164)
11/1/2005 4:47:46 PM
In article <dk867i$eqc$1@online.de>,
Joachim Durchholz  <jo@durchholz.org> wrote:
>Gary Baumgartner schrieb:
>>>HOFs are so massively expressive that, frankly, I don't feel any need 
>>>for macros, except to correct deficits in the language.
>> 
>> Can you tell me which specific language you use that doesn't have these
>> deficits, or what you do until (if even!) they get corrected in a future
>> version?
>
>I usually live with the deficits of any language I happen to be using.
>
>Sorry for wording it as if I'd regularly fix language deficits using 
>macros.

You're actually very clear that you wouldn't.

But without telling me what specific deficits you are willing to live with,
 at least which language's deficits are not worth addressing through macros,
 we can't get anywhere. I showed you why I don't like the lack of "until"
 in languages such as Java in situations where people use "while", and gave
 a macro addressing this. So either you need to tell me that you don't use
 Java at all, don't use Java in this way (in which case you need to tell me
 how you do it in such a way that it isn't "difficult for your colleagues to
 maintain"), why "until" isn't a reasonable addition *in the spirit of Java
 programming*, or what language(s) you do use.

> Most of the time, such "fixes" would make the code more 
>difficult to maintain for my colleagues.

This is where we disagree. I believe you are making an arbitrary
 distinction, classifying certain changes as language changes,
 others as library additions, then arguing that "language" changes
 are bad because they need to be properly designed and documented,
 yet letting library additions off the hook. But unless you get
 specific we won't get anywhere.

>Not everything that's broken needs fixing :-)

Normally I wouldn't pick on this, but you are being very unspecific:
 throwing out (well-known) generalities, and making them even vaguer
 through smileys, is not a good idea.

Gary Baumgartner

>Regards,
>Jo


0
gfb (30)
11/1/2005 5:46:45 PM
Gary Baumgartner <gfb@cs.toronto.edu> wrote:
> When I mentioned that macros can remove repetition and improve safety
> you agreed, adding that HOFs can be used to achieve the same effect.
> 
> I would be interested in seeing your HOF implementation of my macro "until",

Here's a Haskell version. Since your example is not purely functional, but
intends to use side-effects, we have to use monads.

untilM :: Monad m => m Bool -> m a -> m ()
untilM condition body = 
  condition >>= \b -> if b then return () else body >> untilM condition body

Very simple and straightforward. (Read >> as a "semicolon" that
separates statements, and >>= as a similar one that additionally passes
the result of the last statement as an argument to the next statement.)
The type declaration at the top is optional, but helps understanding
what this function is about.

Here's an example of using it in the IO monad with "variables" represented
by IORefs (which is syntactically heavy, and not good style in Haskell):

test = do
  v <- newIORef 1
  untilM 
    (do { i <- readIORef v; return (i == 5) }) 
    (do { i <- readIORef v; writeIORef v (i+1); print i })

Result in the toplevel:

*Main> test
1
2
3
4

A more natural way to do this in Haskell would be to avoid side-effects,
and make "condition" and "body" functions, which take the state as argument.
The Prelude already has such an "until" HOF, also defined completely without
macros.

The variant with sub-conditions and post-processing is also straightforward:

untilPostM :: Monad m => [(m Bool, m b)] -> m a -> m b
untilPostM conds body = inspect conds where
  inspect [] = 
    body >> untilPostM conds body
  inspect ((cond, post):rest) = 
    cond >>= \b -> if b then post else inspect rest

As in your example, conditions (including side-effects) are evaluated
only until one of them is true. 

All this can be easily done in any functional language, nothing but lambda
calculus is required. 

If you would like to see an example where usage doesn't look so ugly,
please make one that doesn't require side-effects. After all, this is
about *functional* programming :-)

It's also possible to use infix operators to make the syntax look a bit
prettier; many combinator (i.e., "HOF") libraries do this. Again, completely
without macros.

- Dirk
0
dthierbach2 (260)
11/1/2005 8:39:33 PM
Dirk Thierbach wrote:
> Gary Baumgartner <gfb@cs.toronto.edu> wrote:
> > I would be interested in seeing your HOF implementation of my macro "until",
>
> Here's a Haskell version. Since your example is not purely functional, but
> intends to use side-effects, we have to use monads.
>

;;; And here's a version in Scheme...

(define (until conditional block)
    (if (not (conditional))
        (begin (block)
               (until conditional block))))

(define i 9)

(until (lambda () (< i 1))
       (lambda () (begin (display i) (set! i (- i 1)))))

0
11/1/2005 9:35:44 PM
Gary Baumgartner schrieb:
> In article <dk867i$eqc$1@online.de>,
> Joachim Durchholz  <jo@durchholz.org> wrote:
> 
>>Gary Baumgartner schrieb:
>>
>>>>HOFs are so massively expressive that, frankly, I don't feel any need 
>>>>for macros, except to correct deficits in the language.
>>>
>>>Can you tell me which specific language you use that doesn't have these
>>>deficits, or what you do until (if even!) they get corrected in a future
>>>version?
>>
>>I usually live with the deficits of any language I happen to be using.
>>
>>Sorry for wording it as if I'd regularly fix language deficits using 
>>macros.
> 
> You're actually very clear that you wouldn't.

OK.

> But without telling me what specific deficits you are willing to live with,
>  at least which language's deficits are not worth addressing through macros,
>  we can't get anywhere. I showed you why I don't like the lack of "until"
>  in languages such as Java in situations where people use "while", and gave
>  a macro addressing this .

Um, well, if that's the case: I wouldn't consider the absence of "until" 
a deficit. Rather, it's the absence of a loop construct that can have 
the exit condition placed anywhere within the loop - now *that* would 
really help for all those half-unrolled loops that we find in practice.

And this also very nicely illustrates why I don't think that macros 
should be commonplace: if anybody is going to work around language 
deficits, the first attempt at doing so will almost invariably be 
faulty. Language design is *hard*, and taking away can be as important 
as adding.

So I don't buy the advocacy that "macros can be used to add what the 
language doesn't specify". That's great for language experimentation, 
but a production language should have what's needed.
With macros, I have to look at the various implementations. 
Implementation differences are bad enough for standard libraries - why 
would I want differences in macro libraries thrown in? This offers 
flexibilities in places I don't appreciate, as a programmer.

 > So either you need to tell me that you don't use
>  Java at all, don't use Java in this way (in which case you need to tell me
>  how you do it in such a way that is isn't "difficult for your colleagues to
>  maintain"), why "until" isn't a reasonable addition *in the spirit of Java
>  programming*, or what language(s) you do use.

Anything that comes my way :-)

Java happens to be on the list, but it never was a favorite. (I'm moving 
towards SML at this time, but PHP/Perl projects keep interfering. I 
don't think any amount of macro facilities is going to fix up *these* 
languages... *sigh*)

>>Most of the time, such "fixes" would make the code more 
>>difficult to maintain for my colleagues.
> 
> This is where we disagree. I believe you are making an arbitrary
>  distinction, classifying certain changes as language changes,
>  others as library additions, then arguing that "language" changes
>  are bad because they need to be properly designed and documented,
>  yet letting library additions off the hook. But unless you get
>  specific we won't get anywhere.

Admittedly, libraries can be intractable, too.

It's just that macros offer new and interesting ways to vary (read: 
break) things. I don't need a maximum in expressivity; I need a maximum 
in scalability, robustness, freeness from surprises. I need a "boring" 
language, where similar things are always done similarly - and while 
that's indeed boring, it also tends to make refactoring easier. (I'm a 
strong disbeliever in Perl's "there should be at least five different 
ways to achieve the same thing" philosophy.)

Regards,
Jo

0
jo427 (1164)
11/1/2005 9:37:48 PM
In article <20051101203933.1AF7.1.NOFFLE@dthierbach.news.arcor.de>,
Dirk Thierbach  <dthierbach@usenet.arcornews.de> wrote:
>Gary Baumgartner <gfb@cs.toronto.edu> wrote:
>> When I mentioned that macros can remove repetition and improve safety
>> you agreed, adding that HOFs can be used to achieve the same effect.
>> 
>> I would be interested in seeing your HOF implementation of my macro "until",
>
>Here's a Haskell version. Since your example is not purely functional, but
>intends to use side-effects, we have to use monads.

Thanks, I love to have (or generate) examples to compare. I'm letting you know
 now because I only occasionally have time to pop into this thread and may not
 get around to commenting on them specifically until much later.

>untilM :: Monad m => m Bool -> m a -> m ()
>untilM condition body = 
>  condition >>= \b -> if b then return () else body >> untilM condition body
>
>Very simple and straightforward. (Read >> as a "semicolon" that
>separates statements, and >>= as a similar one that additionally passes
>the result of the last statement as an argument to the next statement.)
>The type declaration at the top is optional, but helps understanding
>what this function is about.
>
>Here's an example of using it in the IO monad with "variables" represented
>by IORefs (which is syntactically heavy, and not good style in Haskell):
>
>test = do
>  v <- newIORef 1
>  untilM 
>    (do { i <- readIORef v; return (i == 5) }) 
>    (do { i <- readIORef v; writeIORef v (i+1); print i })
>
>Result in the toplevel:
>
>*Main> test
>1
>2
>3
>4
>
>A more natural way to do this in Haskell would be to avoid side-effects,
>and make "condition" and "body" functions, which take the state as argument.
>The Prelude already has such an "until" HOF, also defined completely without
>macros.
>
>The variant with sub-conditions and post-processing is also straightforward:
>
>untilPostM :: Monad m => [(m Bool, m b)] -> m a -> m b
>untilPostM conds body = inspect conds where
>  inspect [] = 
>    body >> untilPostM conds body
>  inspect ((cond, post):rest) = 
>    cond >>= \b -> if b then post else inspect rest
>
>As in your example, conditions (including side-effects) are evaluated
>only until one of them is true. 
>
>All this can be easily done in any functional language, nothing but lambda
>calculus is required. 
>
>If you would like to see an example where usage doesn't look so ugly,
>please make one that doesn't require side-effects. After all, this is
>about *functional* programming :-)

Yes, one of the two selected newsgroups certainly is, and I certainly program
 functionally when I can, and think highly of Haskell. I hope I haven't come
 across as attacking functional languages or any one of them in particular.

But another poster has made sweeping statements that I don't believe are
 valid criticisms of Scheme macros. In some sense your ability to mimic my
 example makes Haskell *seem* as unsafe and difficult to maintain as he claims
 Scheme macros make Scheme; at the least we need to dig deeper.

I had the "until" example handy from when I have had to teach Java but want
 to convince students that other approaches are valuable: if Scheme
 (or other language) programmers can do imperative programming obviously
 better than in Java, yet still opt to take another approach, then that
 should speak volumes to them.

I'd like to ask you a question, if you have the answer offhand: are the
 extensions listed among the implementations of Haskell at say

  http://dmoz.org/Computers/Programming/Languages/Haskell/Implementations/

 such as O'Haskell, Template Haskell, Haskell++, pH and Eden actual extensions
 (as opposed to Haskell bundled with certain libraries), are they worthwhile,
 and could they be done with macros? I intend to investigate this when I have
 the time, to further understand the strengths and weaknesses of macros.

>It's also possible to use infix operators to make the syntax look a bit
>prettier; many combinator (i.e., "HOF") libraries do this. Again, completely
>without macros.

Agreed, and thanks again for the detailed response.

Gary Baumgartner

>
>- Dirk


0
gfb (30)
11/1/2005 9:56:15 PM
On Tue, 1 Nov 2005, Gary Baumgartner wrote:

> I'd like to ask you a question if you have the answer offhand; are the
> extensions listed among the implementations of Haskell at say
>
>  http://dmoz.org/Computers/Programming/Languages/Haskell/Implementations/
>
> such as O'Haskell, Template Haskell, Haskell++, pH and Eden actual extensions
> (as opposed to Haskell bundled with certain libraries), are they worthwhile,
> and could they be done with macros? I intend to investigate this when I have
> the time, to further understand the strengths and weaknesses of macros.
>

There's a meaningful sense in which Template Haskell *is* macros. Some of 
the others involve changes to the type system - as such, macros alone 
aren't enough to implement them.

-- 
flippa@flippac.org

Sometimes you gotta fight fire with fire. Most 
of the time you just get burnt worse though.
0
flippa (196)
11/1/2005 10:48:20 PM
In article <dk8n7d$e9v$1@online.de>,
Joachim Durchholz  <jo@durchholz.org> wrote:
>Gary Baumgartner schrieb:
>> In article <dk867i$eqc$1@online.de>,
>> Joachim Durchholz  <jo@durchholz.org> wrote:
>> 
>>>Gary Baumgartner schrieb:
>>>
>>>>>HOFs are so massively expressive that, frankly, I don't feel any need 
>>>>>for macros, except to correct deficits in the language.
>>>>
>>>>Can you tell me which specific language you use that doesn't have these
>>>>deficits, or what you do until (if even!) they get corrected in a future
>>>>version?
>>>
>>>I usually live with the deficits of any language I happen to be using.
>>>
>>>Sorry for wording it as if I'd regularly fix language deficits using 
>>>macros.
>> 
>> You're actually very clear that you wouldn't.
>
>OK.
>
>> But without telling me what specific deficits you are willing to live with,
>>  at least which language's deficits are not worth addressing through macros,
>>  we can't get anywhere. I showed you why I don't like the lack of "until"
>>  in languages such as Java in situations where people use "while", and gave
>>  a macro addressing this .
>
>Um, well, if that's the case: I wouldn't consider the absence of "until" 
>a deficit. Rather, it's the absence of a loop construct that can have 
>the exit condition placed anywhere within the loop - now *that* would 
>really help for all those half-unrolled loops that we find in practice.

Knuth seemed to think that until, loop and a half, and named exits
 were all sufficiently worthwhile but sufficiently distinct that he
 wanted them in a language.

So here's the construct you want, in Knuth/Zahn's style:

;;; Loop with termination from within the body.
;
;     (repeat (exits x ...) body ...)
;      where x is a name or (name then expr expr ...)
;
;     Repeats the body, terminating when one of the exit names is called.
;      If the exit is called with an argument, it is the value of the loop.
;      If the exit name was specified by (name then expr expr ...) then
;      post-processing occurs, with the original value bound to the exit name.
;
;     Example: (repeat (exits (a then (+ a 1)) b)
;                (case (random 10) ((0) (a 3)) ((1) (b 5)) ((2) (b))))
;              => sometimes 4, sometimes 5,
;                  sometimes undefined (#f in current implementation),
;
(define-syntax repeat
  (syntax-rules (exits _exits)

    ((repeat (exits x ...) body ...)
     (repeat (_exits (x ...) ()) body ...))

    ((repeat (_exits ((x then e0 e1 ...) xs ...) (a ...)) body ...)
     (repeat (_exits (xs ...) (a ... (x e0 e1 ...))) body ...))

    ((repeat (_exits (x xs ...) (a ...)) body ...)
     (repeat (_exits (xs ...) (a ... (x))) body ...))

    ((repeat (_exits () ((x e ...) ...)) body ...)
     (call-with-current-continuation
       (lambda (exit)
         (let ((x
                (lambda args
                  (let ((x (and (not (null? args)) (car args))))
                    (exit (begin x e ...))))) ...)
           (letrec ((repeat (lambda () body ... (repeat)))) (repeat))))))))

>And this also very nicely illustrates why I don't think that macros 
>should be commonplace: if anybody is going to work around language 
>deficits, the first attempt at doing so will almost invariably be 
>faulty.

So they will have to do them inline, like mimicking

  until one-of a then c
               b then d
    body

with

  while not (a or b)
    body
  if a then c
       else d

or what I've often seen in actual code (notice the change to the if):

  while not a and not b
    body
  if a then d
       else c

So you're hoping that they do this right every time, and don't trust
 someone who is smart enough to capture this with a macro. If the macro
 is faulty because it isn't written properly, then you can fix it in
 one place, exactly the reason one writes a function. And if someone
 uses the macro improperly, at least you have a better idea of their
 intention than deciding whether someone who forgot to swap d and c
 in the if intended to do so.

> Language design is *hard*, and taking away can be as important 
>as adding.

I know. I've read "The Design and Evolution of C++" by the creator
 of C++, and followed the development of the C++ standard library.
 Many of the developers of the library found it just as hard.
 Current work on Scheme also demonstrates this.

>So I don't buy the advocacy that "macros can be used to add what the 
>language doesn't specify". That's great for language experimentation, 
>but a production language should have what's needed.
>With macros, I have to look at the various implementations. 
>Implementation differences are bad enough for standard libraries - why 
>would I want differences in macro libraries thrown in? This offers 
>flexibilities in places I don't appreciate, as a programmer.

Okay, so you are arguing for a sweet spot, since you're not advocating
 removing the ability to add "traditional" functions.

> > So either you need to tell me that you don't use
>>  Java at all, don't use Java in this way (in which case you need to tell me
>>  how you do it in such a way that is isn't "difficult for your colleagues to
>>  maintain"), why "until" isn't a reasonable addition *in the spirit of Java
>>  programming*, or what language(s) you do use.
>
>Anything that comes my way :-)
>
>Java happens to be on the list, but it never was a favorite. (I'm moving 
>towards SML at this time, but PHP/Perl projects keep interfering. I 
>don't think any amount of macro facilities is going to fix up *these* 
>languages... *sigh*)

I certainly could use a macro facility to make these languages bearable.
 (But I'm not advocating that macros are sufficient or the only approach
 one should take).

>
>>>Most of the time, such "fixes" would make the code more 
>>>difficult to maintain for my colleagues.
>> 
>> This is where we disagree. I believe you are making an arbitrary
>>  distinction, classifying certain changes as language changes,
>>  others as library additions, then arguing that "language" changes
>>  are bad because they need to be properly designed and documented,
>>  yet letting library additions off the hook. But unless you get
>>  specific we won't get anywhere.
>
>Admittedly, libraries can be intractable, too.
>
>It's just that macros offer new and interesting ways to vary (read: 
>break) things. I don't need a maximum in expressivity; I need a maximum 
>in scalability, robustness, freeness from surprises. I need a "boring" 
>language, where similar things are always done similarly

Agreed. We perhaps differ mainly on what things are similar.
 I see lots of similarities worth refactoring that are hard in many
 languages. In another branch of this discussion I agree that
 macros aren't the only way: but the criticisms of macros you have
 seem to apply to those approaches as well. I'd be happy to join
 the two branches if you want to tell me whether you believe:

   everything that can be done in Haskell is not too expressive
    but Scheme macros can go farther and I object to those aspects

 or

   some of those things in Haskell are too expressive

> - and while that's indeed boring, it also tends to make refactoring easier.
> (I'm a strong disbeliever in Perl's "there should be at least five different 
> ways to achieve the same thing" philosophy.)

There's a whole Patterns community, which identifies many patterns,
 even in the one-way-to-do-it languages, that can't be refactored away
 (though macros could do it) and must simply be documented every time.
 And student code for the first assignment of the first programming
 course at the University of Toronto, in Java, demonstrates that code
 of tens of lines is written in substantially different ways, and there's
 absolutely no hope of making it doable in only one way except maybe
 at the level of single statements.

I mentioned this before I think: macros give you more refactoring power.
More generally, expressivity *is* refactoring power.

Gary Baumgartner

>Regards,
>Jo


0
gfb (30)
11/1/2005 11:12:38 PM
"Joachim Durchholz" <jo@durchholz.org> wrote

> (I'm a 
> strong disbeliever in Perl's "there should be at least five different 
> ways to achieve the same thing" philosophy.)

I used to spread around the statement: you can kill yourself in, say,
4876487648 distinct ways. But there is essentially only one way to get
people born...

Jerzy Karczmarczuk


-- 
Posted via Mailgate.ORG Server - http://www.Mailgate.ORG
0
karczma (331)
11/1/2005 11:46:23 PM
Joachim Durchholz wrote:
>
> HOFs are so massively expressive that, frankly, I don't feel any need
> for macros, except to correct deficits in the language. (The problem is
> that everybody thinks that the deficits are elsewhere, so everybody uses
> a different set of macros, making it more difficult to understand other
> peoples' code. That's why I usually refrain from using macros unless
> such usage is so highly standardised that it could be made part of the
> language.)

What do you think of the idea of a language having macros
as a way to keep the core language as small as possible?
For example, one could have a language with general recursion,
but use macros to add while loops and for loops, say.

I see advantages to keeping the core language as tiny as
possible, but I also see advantages, from what let's call
a marketing standpoint, of being able to support familiar
constructs from other languages. Macros strike me as a way
to have both of these things.
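For what it's worth, in a language with HOFs some of those familiar
constructs can already be added as plain library functions rather than
macros - a sketch in Haskell (macros would only be needed to get
statement-like syntax):

```haskell
-- A while loop as an ordinary function over a state value:
-- repeat the step as long as the predicate holds.
whileS :: (a -> Bool) -> (a -> a) -> a -> a
whileS p step x
  | p x       = whileS p step (step x)
  | otherwise = x
```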


Marshall

0
11/2/2005 1:07:28 AM
Greg Buchholz <sleepingsquirrel@yahoo.com> wrote:
> Dirk Thierbach wrote:

>> Here's a Haskell version. Since your example is not purely functional, but
>> intends to use side-effects, we have to use monads.

> ;;; And here's a version in Scheme...

> (define (until conditional block)
>    (if (not (conditional))
>        (begin (block)
>               (until conditional block))))
> 
> (define i 9)
> 
> (until (lambda () (< i 1))
>       (lambda () (begin (display i) (set! i (- i 1)))))

Note that here one has to pack the arguments into functions. So in that
sense, it changes the original example, which is why I used a different 
approach (though I had this very idea first). Your approach (side-effects 
aside) is more similar to the "until" function that is already present in 
the Prelude.

And finally, one starts to wonder why this example should demonstrate the
superiority of macros (niceness of syntax aside), when the
implementation with HOFs is so straightforward in really all functional
languages. :-)

- Dirk

0
dthierbach2 (260)
11/2/2005 7:34:47 AM
Gary Baumgartner <gfb@cs.toronto.edu> wrote:
> But another poster has made sweeping statements that I don't believe are
> valid criticisms of Scheme macros. 

I didn't follow most of this discussion. Macros are somtimes nice to
have, but I think the criticism starts to appear when people start making
arguments like "Lisp/Scheme is the most powerful language because it
has so powerful macros that are better than everything else." (I am
exaggerating, of course :-).

> In some sense your ability to mimic my example makes Haskell *seem*
> as unsafe and difficult to maintain as he claims Scheme macros make
> Scheme; at the least we need to dig deeper.

One difference is that the HOFs can be typed (in fact, a type system
helps tremendously to deal with more complicated HOFs), so in this
sense they are quite safe.

> I'd like to ask you a question if you have the answer offhand; are
> the extensions listed among the implementations of Haskell [...]
> such as O'Haskell, Template Haskell, Haskell++, pH and Eden actual
> extensions (as opposed to Haskell bundled with certain libraries),
> are they worthwhile, and could they be done with macros?

Template Haskell *is* in some sense a macro preprocessor for Haskell.
Most of the others extend the type system in some way, which is
difficult to do with macros alone, and is simpler and cleaner to do
with a different approach.

Are they worthwhile? I guess they can be a lot of help in certain 
situations, but so far I haven't used any of them myself. 
I guess it depends strongly on what kind of applications you
need to write.

- Dirk
0
dthierbach2 (260)
11/2/2005 7:43:22 AM
Gary Baumgartner schrieb:
> In article <dk8n7d$e9v$1@online.de>,
> Joachim Durchholz  <jo@durchholz.org> wrote:
> 
>>Um, well, if that's the case: I wouldn't consider the absence of "until" 
>>a deficit. Rather, it's the absence of a loop construct that can have 
>>the exit condition placed anywhere within the loop - now *that* would 
>>really help for all those half-unrolled loops that we find in practice.
> 
> Knuth seemed to think that until, loop and a half, and named exits
>  were all sufficiently worthwhile but sufficiently distinct that he
>  wanted them in a language.

I value Knuth's work, but I wouldn't charge him with designing a 
language ;-)
Specifically, the algorithms that he gives in his Art of Programming 
series are given in an extremely low-level notation, and some of them 
are real spaghetti (take a look at the Chi-Square test if you want to 
see a particularly horrible example - I could make neither rhyme nor 
reason out of it).

> So here's the construct you want, in Knuth/Zahn's style:

I'll immediately believe you that you can do such a loop.

It's just that I'd have to live with code written using your "until", 
code written using somebody else's "until", and code written using your 
new "repeat". A basic construct like such a loop should have a uniform 
definition.

I admit it doesn't matter how that uniformity is achieved. If it's in 
the language, uniformity is by definition, if it's in a (macro or 
function) library, uniformity comes later via standardisation. 
(Hopefully - I have seen several attempts at standardisation fail.)

>>And this also very nicely illustrates why I don't think that macros 
>>should be commonplace: if anybody is going to work around language 
>>deficits, the first attempt at doing so will almost invariably be 
>>faulty.
> 
> So they will have to do them inline, like mimicing
> 
>   until one-of a then c
>                b then d
>     body
> 
> with
> 
>   while not (a or b)
>     body
>   if a then c
>        else d
> 
> or what I've often seen in actual code (notice the change to the if):
> 
>   while not a and not b
>     body
>   if a then d
>        else c
> 
> So you're hoping that they do this right every time, and don't trust
>  someone who is smart enough to capture this with a macro.

The best would be if the language designer got it right.

A good macro would be the second-best solution.

I agree that everybody coding half-unrolled loops is the worst solution. 
Macros do have their place if the language has deficits.

>>Language design is *hard*, and taking away can be as important 
>>as adding.
> 
> I know. I've read "The Design and Evolution of C++" by the creator
>  of C++, and followed the development of the C++ standard library.
>  Many of the developers of the library found it just as hard.
>  Current work on Scheme also demonstrates this.

:-)

>>Java happens to be on the list, but it never was a favorite. (I'm moving 
>>towards SML at this time, but PHP/Perl projects keep interfering. I 
>>don't think any amount of macro facilities is going to fix up *these* 
>>languages... *sigh*)
> 
> I certainly could use a macro facility to make these languages bearable.
>  (But I'm not advocating that macros are sufficient or the only approach
>  one should take).

Adding macros to Perl would simply add a sixth way to do everything ;-)

> I'd be happy to join
>  the two branches if you want to tell me whether you believe:
> 
>    everything that can be done in Haskell is not too expressive
>     but Scheme macros can go farther and I object to those aspects
> 
>  or
> 
>    some of those things in Haskell are too expressive

It's the former: Haskell seems to get expressivity just right (though 
it's always possible to disagree where exactly that "just right" point 
lies).

One of the things that I value most about Haskell is the guarantees it 
gives. One of them is side effect management: any code that may have 
side effects is clearly marked with the IO or State type. Side effects 
have bitten me so often, and the temptation to optimise by overwriting 
data has so often led me to bad code, that I'd really, really, really 
like to code in a language that makes writing side-effect-free code not 
only possible but also fun.

Macros don't help with that :-)

>>- and while that's indeed boring, it also tends to make refactoring easier.
>>(I'm a strong disbeliever in Perl's "there should be at least five different 
>>ways to achieve the same thing" philosophy.)
> 
> There's a whole Patterns community, which identifies many patterns
>  even in the one-way-to-do-it languages that can't be refactored
>  (but macros could) and must be simply documented every time.

Sure. It's one of the reasons why I left the OO bandwagon.

Note that most patterns are simple higher-order functions in FPLs. Or 
Haskell typeclasses - monads are a design pattern, yet there isn't much 
boilerplate code in the definitions of the various monads in the Haskell 
libraries.
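(A throwaway example of what I mean - the OO Strategy pattern
collapsing to a function argument; the helpers are made up purely for
illustration:)

```haskell
import Data.List (sortBy)
import Data.Ord (comparing, Down (..))

-- "Strategy" in an FPL: the varying behaviour is just a parameter.
-- Two sorting strategies, no interfaces, no boilerplate classes.
byLength, byLengthDesc :: [String] -> [String]
byLength     = sortBy (comparing length)           -- ascending by length
byLengthDesc = sortBy (comparing (Down . length))  -- descending by length

main :: IO ()
main = do
  print (byLength     ["ccc", "a", "bb"])  -- ["a","bb","ccc"]
  print (byLengthDesc ["ccc", "a", "bb"])  -- ["ccc","bb","a"]
```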

> I mentioned this before I think: macros give you more refactoring power.
> More generally, expressivity *is* refactoring power.

I'd agree if I hadn't seen the kind of refactoring that's possible using 
HOFs.
Maybe that's the main difference between our perspectives :-)

Regards,
Jo
0
jo427 (1164)
11/2/2005 9:26:28 AM
Marshall Spight schrieb:
> Joachim Durchholz wrote:
> 
>>HOFs are so massively expressive that, frankly, I don't feel any need
>>for macros, except to correct deficits in the language. (The problem is
>>that everybody thinks that the deficits are elsewhere, so everybody uses
>>a different set of macros, making it more difficult to understand other
>>peoples' code. That's why I usually refrain from using macros unless
>>such usage is so highly standardised that it could be made part of the
>>language.)
> 
> What do you think of the idea of a language having macros
> as a way to keep the core language as small as possible?
> For example, one could have a language with general recursion,
> but use macros to add while loops and for loops, say.

I'm highly sceptical.

The language is a way to standardise things. And standards are good - 
they make maintenance easier.

Of course, this breaks down if the standards are lacking.

> I see advantages to keeping the core language as tiny as
> possible, but I also see advantages, from what let's call
> a marketing standpoint, of being able to support familiar
> constructs from other languages. Macros strike me as a way
> to have both of these things.

Yes, but you end up with a modern PL/I.
PL/I was the classical "committee language": lots of people from 
different backgrounds, and features from all the backgrounds thrown 
together. The result was a mess. (This is what's generally said about 
PL/I: I don't know whether it's fact. But I'd expect a language to fail 
if its community uses macros to rebuild all the features from all the 
languages, possibly even with warts and all.)

A pointed response would be: would you really want to have Perl's 
features imported into such a language?

I do see the marketing standpoint. One of the elements that made Java 
popular was its use of curly braces (a choice that I personally detest 
- curlies and brackets are hellishly difficult to touch-type on a German 
keyboard).
However, I don't think one can go very far in that direction, nor is it 
necessary - just use curlies, that will make the C crowd "feel at home" 
even if the semantics is completely different. (Java's semantics is 
quite different from C's semantics, too. It's really just appearances.)

Regards,
jo
0
jo427 (1164)
11/2/2005 9:35:08 AM
Dirk Thierbach wrote:
> 
> Greg Buchholz <sleepingsquirrel@yahoo.com> wrote:
> > Dirk Thierbach wrote:

> > ;;; And here's a version in Scheme...
> 
> > (define (until conditional block)
> >    (if (not (conditional))
> >        (begin (block)
> >               (until conditional block))))
> >
> > (define i 9)
> >
> > (until (lambda () (< i 1))
> >       (lambda () (begin (display i) (set! i (- i 1)))))
> 
> Note that here one has to pack the arguments into functions. So in that
> sense, it changes the original example, which is why I used a different
> approach (though I had this very idea first). Your approach (side-effects
> aside) is more similar to the "until" function that is already present in
> the Prelude.
> 
> And finally, one starts to wonder why this example should demonstrate the
> superiority of macros (niceness of syntax aside), when the
> implementation with HOFs is so straightforward in virtually all functional
> languages. :-)

Because the HOF approach exposes internal implementation 
details that do not belong to the abstraction.  The macro 
does not have this defect.  

Cheers
Andre
0
andre9567 (120)
11/2/2005 4:55:22 PM
Andre wrote:
>
> Because the HOF approach exposes internal implementation
> details that do not belong to the abstraction.  The macro
> does not have this defect.

    But isn't the original poster going to come by and say something to
the effect of, "but at least the HOF example keeps the standard
evaluation regime intact, so I can do something like..."

(define i 9)
(until (if (char=? #\y (read-char))
            (lambda () (< i 1))
            (lambda () (< i 5)))
       (lambda () (begin (display i)
                         (set! i (- i 1)))))

"...and have it work as expected?"

0
11/2/2005 5:41:14 PM
Andre <andre@het.brown.edu> wrote:
> Because the HOF approach exposes internal implementation 
> details that do not belong to the abstraction.  

I don't follow that. In what way does the HOF approach expose the
internal implementation? By abusing side-effects? Or what idea is
behind this? Example?

- Dirk
0
dthierbach2 (260)
11/2/2005 7:08:51 PM
Joachim Durchholz wrote:
> wooks schrieb:
>> When people ask questions about why certain things are the way they
>> are, a very common answer is that it was inherited from C++.
> 
> Which is wrong most of the time.
> 
> Java has borrowed some of its syntax from C/C++.
> The semantics, however, is as different as you can get if you stay
> within the statically-typed OO paradigm.

I tend to agree. However, since the discussion was about learning
programming for beginners, I think syntax matters a lot, even superficial
things like using 'while ... do ... end' instead of 'while (...) {...}'.
The reason I think it is important is that it makes /reading/ the code
easier for beginners, since it is nearer to natural-language syntax (if you
know English, that is).

Another point is variable declaration: 'x: Integer' can be read as 'x, of
type Integer'. This is better than 'int x'.

And all those stupid semicolons. In Eiffel you can use them (to terminate a
statement) if it makes you feel safer; you don't have to. Not to speak of
distracting ambiguities such as the infamous 'dangling else'. I cannot
understand how anyone would propose to use such a language for teaching
beginners.

IMO, if people insist on teaching an imperative OO language to beginners,
Eiffel is a /much/ better choice than Java.

> The problem is: we don't know of better alternatives. At least not if
> they want to teach OO.

We do, see above. However, I would argue that an OO language is not very
suitable for a first course in programming.

>> No I don't know the answers to most of the questions I have asked, but
>> I do know that it is all syntactical mumbo-jumbo that is not core to an
>> understanding of how to process an array.
> 
> Arrays are a special type of objects. So no, they aren't outside the
> Object idea of Java, and yes, they are very much special and need very
> much special treatment and consideration.
> 
> That's relatively normal though. Mutable containers and subtyping rules
> surrounding them have always been a challenge, and there are very few
> languages that "got it right". None of them are in the mainstream. Besides,
> even the "got it right" solutions to the issue have drawbacks, it's a
> question of choosing the least evil.

Maybe Eiffel is not mainstream. But being mainstream is really irrelevant
for a first course in programming. In my first course we used Hope. Does
anyone know Hope? Well, I must say I had difficulties with this funny
'functional programming' style, and the syntax didn't really help. Also,
being designed (IIRC) for teaching, the language was somewhat crippled,
i.e. not usable for 'real programming'. I completely wrote off functional
programming for a very long time -- until I stumbled over Haskell a few
years ago and after lots of experience, trying out many paradigms and
languages. I think when I began studying, Haskell would have made me
appreciate FP much more, but then Haskell didn't really exist yet... not
H98 anyway; although -- thinking of monads -- maybe for beginners Haskell
isn't such a good idea in the end...

Ben
0
11/2/2005 10:18:37 PM
Joachim Durchholz <jo@durchholz.org> writes:

>> No, if Liskov Substitution Principle doesn't hold then subtyping is
>> a lie. Expect troubles unless the code where lack of LSP manifests is
>> limited in scope and is careful to not rely on the declared subtyping.
>
> Nope. I'm talking about incompatible *internals*.

That's why I said that false subtyping is acceptable when the
incompatibility is limited to internals, used only in a limited area
of code. This area must know what it is doing, and should not rely on
subtyping which doesn't hold on this level of abstraction. Outside
this area, where internals are not accessed directly, subtyping holds
and everything is safe.

Subtyping is declared globally in all languages I know, so you must
decide whether to claim it and have to be careful when you access
internals which break it, or not to claim it and risk duplicating some
code elsewhere or making it less modular. There is no middle ground.

> These two type can have the *exactly* same (hence compatible)
> semantics, be Liskov substitutable (even both ways), still be fully
> incompatible when it comes to representation.

If the representation is exposed, then it's a part of the semantics
and thus they are not the same nor substitutable.

If it's not exposed, dispatching doesn't apply code to objects it's
not prepared to handle.

If it's exposed in some parts of the library and hidden elsewhere,
then you must be careful to not rely on substitutability of these
types in the parts where the representation is exposed, no matter
whether it uses dynamic dispatch or typecase.

>>>It isn't extensible, of course. At least not modularly.
>> It thus loses the main benefit of modularity.
>
> There wasn't any modularity in the beginning.

There was. My framework of collections is extensible in a modular way.
My framework of numbers too (as of today, and there might still be some
rough edges left), although in this case it's quite hard; I don't
know any other extensible framework; those in Python and Ruby are not
extensible in a modular way. My definitions of equality and ordering
are extensible.

> Having a multiple-dispatched function with access to internals
> tightly couples the two types; pretending the two types are still
> independent is just a blatant lie.

Not at all. There doesn't have to be a separate implementation
for every pair of types if some combinations go through generic
conversions, but it's still useful to implement various combinations
explicitly, and it would be worse if recognizing these combinations
was applicable only to some builtin types.

> In other words, if two types participate in multiple dispatch
> anywhere, forcing them into the same module just acknowledges facts,
> it doesn't impose new problems.

False. It means that one of them knew about the other, but not
necessarily in both directions. Forcing them into the same module
would disallow extensions altogether.

> Note that this problem only happens if the multiple-dispatching
> function accesses internals of both types.

Which "problem"?

> A function that supposedly does multiple dispatch on its types and
> does *not* access internals isn't really multiple-dispatching at all.
> It's calling single-dispatching functions on either parameter, and
> there's no need to ever specialise on the parameters. (Other than
> optimisation, which is a separate issue.)

What if it doesn't access internals directly, but calls other
functions which do, yet which are themselves multidispatched too?
Dispatching the outer function makes it possible to choose the proper algorithm,
e.g. one which minimizes rounding errors and which returns an exact
result in as broad class of cases as possible.

> As said above, I don't think that dynamic dispatch without access to
> internals is particularly useful, other than for optimisation.

Even if this was true, optimization is useful.

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
0
qrczak (1266)
11/2/2005 10:55:20 PM
Benjamin Franksen schrieb:
> Joachim Durchholz wrote:
> 
>>wooks schrieb:
>>
>>>When people ask questions about why certain things are the way they
>>>are, a very common answer is that it was inherited from C++.
>>
>>Which is wrong most of the time.
>>
>>Java has borrowed some of its syntax from C/C++.
>>The semantics, however, is as different as you can get if you stay
>>within the statically-typed OO paradigm.
> 
> I tend to agree. However, since the discussion was about learning
> programming for beginners, I think syntax matters a lot, even superficial
> things like using 'while ... do ... end' instead of 'while (...) {...}'.
> The reason I think it is important is that it makes /reading/ the code
> easier for beginners, since it is nearer natural language syntax (if you
> know english, that is).

Fully agreed with that.

Another aspect is this: If syntax is easy to grasp, beginners can 
concentrate on semantics. It's a less steep learning curve - and 
particularly for those who're still learning how to program, that's 
invaluable.

> Another point is variable declaration: 'x: Integer' can be read as 'x, of
> type Integer'. This is better than 'int x'.

Hmm... I tend to think it's not *that* important whether it's "type 
name" or "name type". What's bad is that names are declared as

   some-complicated-deeply-nested-type-expression(((var))) innermost-type

Good thing that Java doesn't have nestable type expressions.

> And all those stupid semicolons. In Eiffel you can use them (to terminate a
> statement) if it makes you feel safer; you don't have to. Not to speak of
> distracting ambiguities such as the infamous 'dangling else'. I cannot
> understand how anyone would propose to use such a language for teaching
> beginners.
> 
> IMO, if people insist on teaching an imperative OO language to beginners,
> Eiffel is a /much/ better choice than Java.

Um... well... Eiffel isn't without some horrible problems either.

But these problems arise after a year or two. For beginners, the 
language is indeed the best choice - and its emphasis on semantic 
guarantees (as opposed to just giving a type to everything) helps.

Hopefully they'll use a fast and easy-to-learn compiler :-)
(In a job interview, I once mentioned my Eiffel experience and got a 
prompt reaction: "Wasn't that the slow, bug-ridden IDE that they threw 
at us at university?" Some Eiffel IDEs were indeed of that quality, 
hence useless for teaching...)

>>The problem is: we don't know of better alternatives. At least not if
>>they want to teach OO.
> 
> We do, see above.

Indeed.
And that with my personal Eiffel background... the only excuse that I 
have is that Eiffel is utterly nonexistent in the job market, and I kind 
of lost the memories.

> However, I would argue that an OO language is not very
> suitable for a first course in programming.

Now that's an entirely different can of worms :-)

>>>No I don't know the answers to most of the questions I have asked, but
>>>I do know that it is all syntactical mumbo-jumbo that is not core to an
>>>understanding of how to process an array.
>>
>>Arrays are a special type of objects. So no, they aren't outside the
>>Object idea of Java, and yes, they are very much special and need very
>>much special treatment and consideration.
>>
>>That's relatively normal though. Mutable containers and subtyping rules
>>surrounding them have always been a challenge, and there are very few
>>languages that "got it right". None of them are in the mainstream. Besides,
>>even the "got it right" solutions to the issue have drawbacks, it's a
>>question of choosing the least evil.
> 
> Maybe Eiffel is not mainstream. But being mainstream is really irrelevant
> for a first course in programming. In my first course we used Hope. Does
> anyone know Hope? Well, I must say I had difficulties with this funny
> 'functional programming' style, and the syntax didn't really help. Also,
> being designed (IIRC) for teaching, the language was somewhat crippled,
> i.e. not usable for 'real programming'. I completely wrote off functional
> programming for a very long time -- until I stumbled over Haskell a few
> years ago and after lots of experience, trying out many paradigms and
> languages. I think when I began studying, Haskell would have made me
> appreciate FP much more, but then Haskell didn't really exist yet... not
> H98 anyway; although -- thinking of monads -- maybe for beginners Haskell
> isn't such a good idea in the end...

I don't think that monads are a serious problem. It's just that those 
who know both category theory and FPLs can't explain the parallels 
between the two monad concepts.
My current understanding of a "monad" is that it's an associative way to 
chain up values. If the monad allows it, the values need not have the 
same types (but the monad will usually have to constrain the types in 
some way to be able to do something meaningful with them).

If my understanding is correct, monads are dead simple. They are just a 
recurring FPL design pattern - and here it's *really* design, because 
the term "monad" just classifies a small set of type and function 
signatures, and an equally small set of properties of the functions.

Using the IO monad is simple, too. (Understanding how it works is far 
more complicated. Actually I don't know how it works - I'd have to look 
a *lot* deeper into Haskell to find that out.)
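(Concretely, that small set is just `return` and `(>>=)` plus three
laws; here they are spot-checked at a few sample values for Maybe -
the functions f and g are my own toy examples, of course:)

```haskell
-- The signatures:  return :: Monad m => a -> m a
--                  (>>=)  :: Monad m => m a -> (a -> m b) -> m b
-- The properties are the three monad laws, checked here for Maybe:

f, g :: Int -> Maybe Int
f x = if even x then Just (x `div` 2) else Nothing
g x = Just (x + 1)

leftId, rightId, assoc :: Bool
leftId  = (return 3 >>= f) == f 3                -- left identity
rightId = (Just 3 >>= return) == Just 3          -- right identity
assoc   = ((Just 3 >>= f) >>= g)
          == (Just 3 >>= \x -> f x >>= g)        -- associativity

main :: IO ()
main = print (leftId && rightId && assoc)  -- True
```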

Regards,
Jo
0
jo427 (1164)
11/2/2005 11:13:44 PM
Gary Baumgartner wrote:
> Following up on my own post.
> 
> I just remembered we've been through some of this before.
>  When I mentioned that macros can remove repetition and improve safety
>  you agreed, adding that HOFs can be used to achieve the same effect.
> 
> I would be interested in seeing your HOF implementation of my macro
> "until",
>  to clarify your preferences.

Do you mean something like this (this is Haskell):

do_until :: IO a -> (a -> IO Bool) -> IO a
do_until actions cond = do
  x <- actions
  b <- cond x
  if b
    then return x
    else do_until actions cond

-- test
main = (putStrLn "Enter x to stop" >> getChar)
          `do_until` (return . (== 'x'))

Ben
0
11/2/2005 11:14:45 PM
"Jerzy Karczmarczuk" <karczma@info.unicaen.fr> writes:
> "Joachim Durchholz" <jo@durchholz.org> wrote
>> (I'm a strong disbeliever in Perl's "there should be at least five
>> different ways to achieve the same thing" philosophy.)
>
> I used to spread around the statement: you can kill yourself in, say
> 4876487648 distinct ways. But there is essentially one way to get
> people born...

Hehe.  That was nice.  Despite being false :)

0
keramida (464)
11/2/2005 11:57:19 PM
Joachim Durchholz wrote:
> Benjamin Franksen schrieb:
>> Joachim Durchholz wrote:
>>>Java has borrowed some of its syntax from C/C++.
>>>The semantics, however, is as different as you can get if you stay
>>>within the statically-typed OO paradigm.
>> 
>> I tend to agree. However, since the discussion was about learning
>> programming for beginners, I think syntax matters a lot, even superficial
>> things like using 'while ... do ... end' instead of 'while (...) {...}'.
>> The reason I think it is important is that it makes /reading/ the code
>> easier for beginners, since it is nearer natural language syntax (if you
>> know english, that is).
> 
> Fully agreed with that.
> 
> Another aspect is this: If syntax is easy to grasp, beginners can
> concentrate on semantics. It's a less steep learning curve - and
> particularly for those who're still learning how to program, that's
> invaluable.

Yes, that is exactly the reason why I think syntax matters for beginners.
For seasoned programmers these things are much less relevant, if at all.

>> Another point is variable declaration: 'x: Integer' can be read as 'x, of
>> type Integer'. This is better than 'int x'.
> 
> Hmm... I tend to think it's not *that* important whether it's "type
> name" or "name type". 

It's a minor question, yes. But all these minor issues add up...

> What's bad is that names are declared as 
> 
>    some-complicated-deeply-nested-type-expression(((var))) innermost-type
> 
> Good thing that Java doesn't have nestable type expressions.

Right. Although usually you can disentangle such horror declarations in C
with a few typedefs. What cannot be fixed, however, is that the name you
define is not in a syntactically distinguished position, sometimes making
it hard to spot. You could use C macros for that, of course...;-))

>> IMO, if people insist on teaching an imperative OO language to beginners,
>> Eiffel is a /much/ better choice than Java.
> 
> Um... well... Eiffel isn't without some horrible problems either.

Completely agreed.

> But these problems arise after a year or two. For beginners, the
> language is indeed the best choice - and its emphasis on semantic
> guarantees (as opposed to just giving a type to everything) helps.
> 
> Hopefully they'll use a fast and easy-to-learn compiler :-)
> (In a job interview, I once mentioned my Eiffel experience and got a
> prompt reaction: "Wasn't that the slow, bug-ridden IDE that they threw
> at us at university?" Some Eiffel IDEs were indeed of that quality,
> hence useless for teaching...)

Today they would probably (hopefully) use SmartEiffel. No IDE but pretty
fast & stable, AFAIR (I am not up to date).

>>>The problem is: we don't know of better alternatives. At least not if
>>>they want to teach OO.
>> 
>> We do, see above.
> 
> Indeed.
> And that with my personal Eiffel background... 

Of which I happen to know ;) ... (I think now and again you posted on the
SmartEiffel mailing list). I already wondered why you didn't mention it.

>> I think when I began studying, Haskell would have made me
>> appreciate FP much more, but then Haskell didn't really exist yet... not
>> H98 anyway; although -- thinking of monads -- maybe for beginners Haskell
>> isn't such a good idea in the end...
> 
> I don't think that monads are a serious problem. It's just that those
> who know both category theory and FPLs can't explain the parallels
> between the two monad concepts.

The problem with monads is that they are an extremely abstract and general
concept (that's why they come from category theory), which is fine, but not
so much for beginners. Let me give a mathematical analogy: The concept of
a topology is not hard to understand, in principle. However, you don't tell
the beginners what a topology is. You don't even tell them about metric
spaces. You just use |x - y| for real numbers x and y, and take care to
point out that this is a measure of the /distance/ between x and y on the
real line and that whenever you talk about convergence this primarily means
talking about distances. Later you generalize.

I can't imagine explaining monads to someone who just learns her first
programming language. But you need to understand them, in order to
understand how IO and other side-effects are even /possible/ in a pure
functional language. Well, maybe you don't need to understand the IO monad
to learn to do IO. OTOH, maybe some ML dialect is the better choice for
beginners.

Don't get me wrong, I think understanding monads and how they enable IO and
controlled side-effects in Haskell is extremely valuable. I only think you
need to be somewhat experienced to appreciate this.

> My current understanding of a "monad" is that it's an associative way to
> chain up values. If the monad allows it, the values need not have the
> same types (but the monad will usually have to constrain the types in
> some way to be able to do something meaningful with them).
> 
> If my understanding is correct, monads are dead simple. They are just a
> recurring FPL design pattern - and here it's *really* design, because
> the term "monad" just classifies a small set of type and function
> signatures, and an equally small set of properties of the functions.

A design pattern, yes. The bit about 'chaining up values' I don't
understand. As I see it, monads are an abstraction of the idea of a
'computation' (or 'action') that may return (=produce) some value. This
makes the fundamental operations quite clear: (return x) is the computation
that just returns x, and (m >>= f) is sequencing of a computation with a
function that returns a computation, where the output of m is the input
(=argument) of f. In Haskell, this abstraction can readily be expressed
using a type (constructor) class.

Of course, since the monad is an abstraction, it can be used in unforeseen
ways that do not seem to have much to do with 'computation' as such. But
that's what happens with all (good) abstractions, doesn't it? The main
point is that monadic values are values that (somehow) 'produce' other
values.

Ok, now that I have explained how I see it, I finally understand what you
mean by 'chaining up values associatively'. Yes, but this is not the whole
picture: monadic values need to /return something (else)/ and it is this
that makes them 'chainable' in an associative manner.

Want to explain this to a beginner who still struggles with understanding
what a loop (or recursion) is?

Ben
0
11/3/2005 1:02:42 AM
Joachim Durchholz wrote:
> Using the IO monad is simple, too. (Understanding how it works is far
> more complicated. Actually I don't know how it works - I'd have to look
> a *lot* deeper into Haskell to find that out.)

There is a fine paper called 'Tackling the Awkward Squad' by Simon
Peyton-Jones that contains, among else, a good explanation of the IO monad
and how it works. He even gives an operational semantics (and also explains
the notation in a way that I understood it for the first time).

Ben
0
11/3/2005 1:12:42 AM
Marshall Spight wrote:
> Joachim Durchholz wrote:
> 
>>HOFs are so massively expressive that, frankly, I don't feel any need
>>for macros, except to correct deficits in the language. (The problem is
>>that everybody thinks that the deficits are elsewhere, so everybody uses
>>a different set of macros, making it more difficult to understand other
>>peoples' code. That's why I usually refrain from using macros unless
>>such usage is so highly standardised that it could be made part of the
>>language.)
> 
> 
> What do you think of the idea of a language having macros
> as a way to keep the core language as small as possible?
> For example, one could have a language with general recursion,
> but use macros to add while loops and for loops, say.

I would say... Dylan.

> I see advantages to keeping the core language as tiny as
> possible, but I also see advantages, from what let's call
> a marketing standpoint, of being able to support familiar
> constructs from other languages. Macros strike me as a way
> to have both of these things.
> 
> 
> Marshall
0
noone3 (3603)
11/3/2005 2:38:45 AM
Marcin 'Qrczak' Kowalczyk schrieb:
> Joachim Durchholz <jo@durchholz.org> writes:
> 
> 
>>>No, if Liskov Substitution Principle doesn't hold then subtyping is
>>>a lie. Expect troubles unless the code where lack of LSP manifests is
>>>limited in scope and is careful to not rely on the declared subtyping.
>>
>>Nope. I'm talking about incompatible *internals*.
> 
> 
> That's why I said that false subtyping is acceptable when the
> incompatibility is limited to internals, used only in a limited area
> of code.

Huh? Internals don't go into the notion of subtyping.
The LSP essentially states that, wherever the supertype is requested, 
the subtype may go. That's exactly what a reimplementation does: it 
keeps the external interface identical (so the subtyping relationship 
even goes both ways), but does things entirely differently internally. 
That's perfectly valid in my book.

>>>>It isn't extensible, of course. At least not modularly.
>>>
>>>It thus loses the main benefit of modularity.
>>
>>There wasn't any modularity in the beginning.
> 
> There was. My framework of collections is extensible in a modular way.
> My framework of numbers too (as of today, and there might still be some
> rough edges left), although in this case it's quite hard; I don't
> know of any other extensible framework; those in Python and Ruby are not
> extensible in a modular way. My definitions of equality and ordering
> are extensible.

Numeric frameworks can easily be made extensible via conversion (convert 
to the most specific type that can represent both operands, then apply 
the operator). No multiple dispatch needed here, except possibly for 
optimisation. Of course, since numerics are usually done by the language 
designer, interference between independent third parties is usually not 
an issue.
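To make the conversion idea concrete, here is a toy two-level tower in
Haskell; the Number type and the unify function are inventions of mine
for illustration, not any real framework's API:

```haskell
-- A toy numeric tower: lift both operands to the most general
-- representation that can hold them, then apply the operator once.
data Number = I Integer | D Double deriving (Eq, Show)

unify :: Number -> Number
      -> (Integer -> Integer -> Integer)
      -> (Double  -> Double  -> Double)
      -> Number
unify (I a) (I b) opI _ = I (opI a b)             -- both integral: stay exact
unify a     b     _ opD = D (opD (toD a) (toD b)) -- otherwise: go to Double
  where
    toD (I n) = fromInteger n
    toD (D d) = d

addN, mulN :: Number -> Number -> Number
addN x y = unify x y (+) (+)
mulN x y = unify x y (*) (*)
```

Plain pattern matching on the pair of constructors suffices here; no
operation needs access to the other type's internals.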

Things get interesting once you wish to extend the framework to 
encompass things like vector spaces, which can be mutually 
incompatible. Is that the case in your framework?
If yes: when does your framework detect attempts to operate on 
incompatible data, say adding a two-dimensional and a three-dimensional 
vector?

Um... on second thinking, I recognise that conversion isn't always the 
Right Thing even with simple integers. Assume you have two integers, one 
from a ring modulo some number (i.e. wrap-around arithmetic), one from 
an unlimited-digits type. They can be added, but the compiler can't know 
whether the result is modulo N or unlimited-digits. I.e. the MxN matrix 
should have a "declare error" entry here.
How does your framework handle the situation?
How does your framework handle the situation if somebody adds a new kind 
of integers; say, "only even integers" for a silly example? How would 
that interact if somebody else defined a numeric "only numbers that are 
evenly divisible by three"?

>>Having a multiple-dispatched function with access to internals
>>tightly couples the two types; pretending the two types are still
>>independent is just a blatant lie.
> 
> Not at all. There doesn't have to be a separate implementation
> for every pair of types if some combinations go through generic
> conversions, but it's still useful to implement various combinations
> explicitly, and it would be worse if recognizing these combinations
> was applicable only to some builtin types.

Agreed for optimisations (with caveats).

Disagreed for anything else.

>>In other words, if two types participate in multiple dispatch
>>anywhere, forcing them into the same module just acknowledges facts,
>>it doesn't impose new problems.
> 
> False. It means that one of them knew about the other, but not
> necessarily in both directions. Forcing them in the same module
> would disallow extensions altogether.

Um... the problem isn't in the two types, it's in the extensions.

>>Note that this problem only happens if the multiple-dispatching
>>function accesses internals of both types.
> 
> Which "problem"?

Nonmodularity.

>>A function that supposedly does multiple dispatch on its types and
>>does *not* access internals isn't really multiple-dispatching at all.
>>It's calling single-dispatching functions on either parameter, and
>>there's no need to ever specialise on the parameters. (Other than
>>optimisation, which is a separate issue.)
> 
> What if it doesn't access internals directly, but calls other
> functions which do, yet which are themselves multidispatched too?

Then the same argument applies to that multidispatched function: it 
can't be modularly extended.

> Dispatching the outer function allows to choose the proper algorithm,
> e.g. one which minimizes rounding errors and which returns an exact
> result in as broad a class of cases as possible.

I don't follow.

>>As said above, I don't think that dynamic dispatch without access to
>>internals is particularly useful, other than for optimisation.
> 
> Even if this was true, optimization is useful.

How would you organise regression testing?

Regards,
Jo
0
jo427 (1164)
11/3/2005 9:17:48 AM
Benjamin Franksen wrote:
> IMO, if people insist on teaching an imperative OO language to beginners,
> Eiffel is a /much/ better choice than Java.

Oh yes, I forgot that in my list of languages that Java should have been :)

-- 
The road to hell is paved with good intentions.
0
u.hobelmann (1643)
11/3/2005 10:03:24 AM
Benjamin Franksen schrieb:
> Joachim Durchholz wrote:
> 
>>Another aspect is this: If syntax is easy to grasp, beginners can
>>concentrate on semantics. It's a less steep learning curve - and
>>particularly for those who're still learning how to program, that's
>>invaluable.
> 
> Yes, that is exactly the reason why I think syntax matters for beginners.
> For seasoned programmers these things are much less relevant, if at all.

Indeed - though I still see lots of holy wars over syntax. Maybe 
"seasoned" isn't the right word.

>>What's bad is that names are declared as 
>>
>>   some-complicated-deeply-nested-type-expression(((var))) innermost-type
>>
>>Good thing that Java doesn't have nestable type expressions.
> 
> Right. Although usually you can disentangle such horror declarations in C
> with a few typedefs. What cannot be fixed, however, is that the name you
> define is not in a syntactically distinguished position, sometimes making
> it hard to spot. You could use C macros for that, of course... ;-))

I suspect most C programmers will have a hard time reading that... ;-)))

I'm not sure that it's tractable using C macros though.

>>>IMO, if people insist on teaching an imperative OO language to beginners,
>>>Eiffel is a /much/ better choice than Java.
>>
>>Um... well... Eiffel isn't without some horrible problems either.
> 
> Completely agreed.
> 
>>But these problems arise after a year or two. For beginners, the
>>language is indeed the best choice - and its emphasis on semantic
>>guarantees (as opposed to just giving a type to everything) helps.
>>
>>Hopefully they'll use a fast and easy-to-learn compiler :-)
>>(In a job interview, I once mentioned my Eiffel experience and got a
>>prompt reaction: "Wasn't that the slow, bug-ridden IDE that they threw
>>at us at university?" Some Eiffel IDEs were indeed of that quality,
>>hence useless for teaching...)
> 
> Today they would probably (hopefully) use SmartEiffel. No IDE but pretty
> fast & stable, AFAIR (I am not up to date).

It was that two years ago. There's no reason why SmartEiffel should have 
lost that.

>>>>The problem is: we don't know of better alternatives. At least not if
>>>>they want to teach OO.
>>>
>>>We do, see above.
>>
>>Indeed.
>>And that with my personal Eiffel background... 
> 
> Of which I happen to know ;) ... (I think now and again you posted on the
> SmartEiffel mailing list). I already wondered why you didn't mention it.

Eiffel was one of those deep loves that turned bitter. Holes in the type 
system, and a language designer, first refusing to acknowledge the 
problem, then refusing to fix it ("it's a rare problem"), finally coming 
up with a fix that essentially disables dynamic dispatch as a design 
principle... no.

Similar story with repeated inheritance ("diamond inheritance" for the 
C++ folks), though that was more personal: I was the one who researched 
the problem, my arguments brushed aside, the issue finally acknowledged 
when I presented an irrefutable example, my solutions brushed aside, 
finally being presented with a "solution" that further restricted 
dynamic dispatch.

Eiffel is "almost there" in so many respects, yet the language designer 
refuses to go the last step - if at all, the language is going back.

>>>I think when I began studying, Haskell would have made me
>>>appreciate FP much more, but then Haskell didn't really exist yet... not
>>>H98 anyway; although -- thinking of monads -- maybe for beginners Haskell
>>>isn't such a good idea in the end...
>>
>>I don't think that monads are a serious problem. It's just that those
>>who know both category theory and FPLs can't explain the parallels
>>between the two monad concepts.
> 
> The problem with monads is that they are an extremely abstract and general
> concept (that's why they come from category theory), which is fine, but not
> so much for beginners. Let me give a mathematical analogon: The concept of
> a topology is not hard to understand, in principle. However, you don't tell
> the beginners what a topology is. You don't even tell them about metric
> spaces. You just use |x - y| for real numbers x and y, and take care to
> point out that this is a measure of the /distance/ between x and y on the
> real line and that whenever you talk about convergence this foremost means
> talking about distances. Later you generalize.
> 
> I can't imagine explaining monads to someone who just learns her first
> programming language. But you need to understand them, in order to
> understand how IO and other side-effects are even /possible/ in a pure
> functional language. Well, maybe you don't need to understand the IO monad
> to learn to do IO. OTOH, maybe some ML dialect is the better choice for
> beginners.
> 
> Don't get me wrong, I think understanding monads and how they enable IO and
> controlled side-effects in Haskell is extremely valuable. I only think you
> need to be somewhat experienced to appreciate this.

I suspect that being monadic isn't the core of Haskell's way of doing 
IO. It's more that you stick together a list of activities, and leave 
the dirty work of actually executing the list to the run-time system.
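That "list of activities" picture can be made quite literal: IO actions
are first-class values that can be stored and glued together, and merely
building them performs no IO at all. A sketch using only the Prelude:

```haskell
-- A list of three activities. Constructing this list does no IO.
greetings :: [IO ()]
greetings = [putStrLn "hello", putStrLn "world", putStrLn "!"]

-- Glue them into one composite action; the run-time system does the
-- dirty work only when this action is ultimately executed.
runGreetings :: IO ()
runGreetings = sequence_ greetings

-- The same gluing works for actions that return values.
collect :: IO [Int]
collect = sequence [pure 1, pure 2, pure 3]
```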

>>My current understanding of a "monad" is that it's an associative way to
>>chain up values. If the monad allows it, the values need not have the
>>same types (but the monad will usually have to constrain the types in
>>some way to be able to do something meaningful with them).
>>
>>If my understanding is correct, monads are dead simple. They are just a
>>recurring FPL design pattern - and here it's *really* design, because
>>the term "monad" just classifies a small set of type and function
>>signatures, and an equally small set of properties of the functions.
> 
> A design pattern, yes. The bit about 'chaining up values' I don't
> understand. As I see it, monads are an abstraction of the idea of a
> 'computation' (or 'action') that may return (=produce) some value.

Yes, that's what many people say.

Yet I don't see anything about computations in the monad laws. They mandate:

* A type (the "monad type" m)
* An operation to create a monad from a single value (return)
* An operation to connect two monads (>>=)
* A rule that the connect operation should be associative
* A convenience operator (>>) that chains two monads via >>=, discarding
   the first result
* A convenience operator for signalling errors.

(I'm skipping a few details here. The full description is at 
http://www.nomaware.com/monads/html/laws.html .)
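(The laws can be spelled out directly in Haskell. Here they are checked
on the Maybe monad for a few sample values; a spot check, of course, not
a proof:)

```haskell
-- Associativity:  (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)
assocHolds :: Maybe Int -> Bool
assocHolds m = ((m >>= f) >>= g) == (m >>= (\x -> f x >>= g))
  where
    f x = if x > 0 then Just (x * 2) else Nothing
    g x = Just (x + 1)

-- Left identity:  return x >>= f  ==  f x
leftId :: Int -> Bool
leftId x = (return x >>= f) == f x
  where f y = Just (y * 2)

-- Right identity:  m >>= return  ==  m
rightId :: Int -> Bool
rightId x = (Just x >>= return) == Just x
```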

I think people are taking the characteristics of two well-known monads 
(IO and State) and take them to be characteristics of *all* monads. The 
problem is compounded by the fact that all functions can be seen as 
operations... but that's not particularly helpful: I don't want to see 
everything as a computation, I want to see functions as value-to-value 
mappings!

> Of course, since the monad is an abstraction, it can be used in unforseen
> ways that do not seems to have much to do with 'computation' as such. But
> that's what happens with all (good) abstractions, doesn't it? The main
> point is that monadic values are values that (somehow) 'produce' other
> values.

I agree.

> Ok, now that I have explained how I see it, I finally understand what you
> mean by 'chaining up values associatively'. Yes, but this is not the whole
> picture: monadic values need to /return something (else)/ and it is this
> what makes them 'chainable' in an associative manner.

No, the monad laws specifically do *not* require that there's any way to 
get the values back out. IO in fact takes that route, or at least so it 
seems to me. Other monads like List and Maybe don't, they give you ways 
to get the values back out - but both pay a price: List elements must 
have uniform type, and Maybe cannot give you access to anything but the 
last value (which is usually what we want anyway, but Maybe itself is 
less powerful that way).
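The contrast is easy to demonstrate: Maybe and lists export escape
hatches, IO exports none (a sketch; only Prelude and Data.Maybe
functions are used):

```haskell
import Data.Maybe (fromMaybe)

-- Maybe lets the value back out (with a default for Nothing).
out1 :: Int
out1 = fromMaybe 0 (Just 5)

-- So do lists, e.g. by folding over the contained values.
out2 :: Int
out2 = sum [1, 2, 3]

-- IO does not: no standard function has type  IO a -> a,  so the only
-- way to use an IO value is to chain it into a larger IO action.
useIO :: IO Int -> IO Int
useIO m = m >>= \x -> pure (x + 1)
```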

> Want to explain this to a beginner who still struggles with understanding
> what a loop (or recursion) is?

The monad laws? Sure.
Haskell's concoct-an-action-list approach? Sure.
How input and output interact in Haskell? Um... well... not so sure 
anymore. But it should certainly be taught at some point, and the 
earlier, the better. (Maybe there's an easier way to explain it. I'm not 
sure enough about Haskell's IO details to set one up, but the above 
explanations - *if they are indeed correct* - are already a big step 
towards making monads explainable.)

Regards,
Jo
0
jo427 (1164)
11/3/2005 10:04:28 AM
Benjamin Franksen schrieb:
> Joachim Durchholz wrote:
> 
>>Using the IO monad is simple, too. (Understanding how it works is far
>>more complicated. Actually I don't know how it works - I'd have to look
>>a *lot* deeper into Haskell to find that out.)
> 
> There is a fine paper called 'Tackling the Awkward Squad' by Simon
> Peyton Jones that contains, among other things, a good explanation of the
> IO monad and how it works. He even gives an operational semantics (and also
> explains the notation in a way that made me understand it for the first time).

Ah yes. I should reread it with my current understanding of monads in mind.

Regards,
Jo
0
jo427 (1164)
11/3/2005 10:11:43 AM
Joachim Durchholz <jo@durchholz.org> writes:

>>>Nope. I'm talking about incompatible *internals*.
>> That's why I said that false subtyping is acceptable when the
>> incompatibility is limited to internals, used only in a limited area
>> of code.
>
> Huh? Internals don't go into the notion of subtyping.

If a difference in internals breaks code, they were not really internals.

I don't understand your complaint. You are saying that when you declare
two types as compatible, when in reality they are not, bad things happen.
Well, it's normal, and dynamic dispatch has nothing to do with it. Equally
bad things would happen if you applied non-dispatched functions to objects
they are not prepared to handle.

> Numeric frameworks can easily be made extensible via conversion
> (convert to the most specific type that can represent both operands,
> then apply the operator). No multiple dispatch needed here, except
> possibly for optimisation.

It's not that simple. When we add an integer to a floating point
number, most languages and most people expect the result to be a
floating point number, even if it could not represent the integer
exactly. Turning them into a vulgar fraction would be harmful because
their performance characteristic doesn't allow them to replace
floating point. An important aspect of fixed-size floating point is
that precision is cut in order to maintain a constant efficiency.

Dispatch alone is not sufficient. When a programmer asks to compute
the square root of 2, types of operands don't indicate the precision
in which he wants the result to be represented. I have a dynamically
scoped variable holding a conversion function for choosing the
representation of inexact reals, for cases it doesn't follow from
operands.

> Of course, since numerics are usually done by the language designer,
> interference between independent third parties is usually not an
> issue.

Indeed. But I did not want to commit to a specific floating point type
in the language design. While I did not implement arbitrary precision
floating point (like CLISP does), I want it to be implementable as
a library. The same applies to decimal fractions, if somebody decides
that ratios are not efficient enough (because keeping the numerator
and denominator relatively prime is wasteful if you only care about
denominators being powers of 10).

> Things get interesting once you wish to extend the framework to
> encompass incompatible things like vector spaces, which can be
> mutually incompatible. Is that the case in your framework?

I did not design with this in mind, but it should work well.

> If yes: when does your framework detect attempts to operate
> on incompatible data, say adding a two-dimensional and a
> three-dimensional vector?

If they have different types, there will probably be no method of the
given generic function for them, so you will get an error about a
generic function not applicable to some combination of types. If they
are not distinguished by types, the implementation of the operation
will have to detect this and throw an exception.

> Um... on second thinking, I recognise that conversion isn't always
> the Right Thing even with simple integers. Assume you have two
> integers, one from a ring modulo some number (i.e. wrap-around
> arithmetic), one from an unlimited-digits type.

They have a sufficiently incompatible behavior that the intended
meaning of mixing them in the same operation is unclear. If it was
done anyway, the implementor of these numbers would have to decide
how to behave. But I'm not sure whether they should be considered
numbers at all.

> How does your framework handle the situation?

If they are declared with INTEGER as a supertype (it's different
from INT which implies a specific representation), then by default
they will be converted to plain unlimited ints, unless specialized
differently. If they aren't given INTEGER as the supertype, then
mixing is an error, unless they are specialized differently.

How would you want them to behave? Then I could see whether it fits
in my framework.

> How does your framework handle the situation if somebody adds a new
> kind of integers; say, "only even integers" for a silly example? How
> would that interact if somebody else defined a numeric "only numbers
> that are evenly divisible by three"?

Depends on how they implement operations and conversions.

Besides methods of arithmetic operations for concrete types, there are
specializations for abstract supertypes like INTEGER, RATIONAL, REAL
and COMPLEX, which convert arguments to canonical representations of
the given domain (the one for REAL being settable dynamically). This
means that we will get some reasonable behavior if the given types
declare them as supertypes but don't implement an operation.

They can opt out from this by not declaring the supertypes, and in any
case they can implement specific operations differently. The framework
was designed with different representations of numbers in mind, i.e.
for values with varying precision and performance characteristic but
with a well-defined underlying mathematical model, when it's clear
which abstract value should be obtained but not clear how it should be
represented.

How would you design it differently?

>>>In other words, if two types participate in multiple dispatch
>>>anywhere, forcing them into the same module just acknowledges facts,
>>>it doesn't impose new problems.
>> False. It means that one of them knew about the other, but not
>> necessarily in both directions. Forcing them in the same module
>> would disallow extensions altogether.
>
> Um... the problem isn't in the two types, it's in the extensions.

I see no problem arising from generic functions which isn't inherent
to the world we are modeling.

>>>A function that supposedly does multiple dispatch on its types and
>>>does *not* access internals isn't really multiple-dispatching at all.
>>>It's calling single-dispatching functions on either parameter, and
>>>there's no need to ever specialise on the parameters. (Other than
>>>optimisation, which is a separate issue.)
>> What if it doesn't access internals directly, but calls other
>> functions which do, yet which are themselves multidispatched too?
>
> Then the same argument applies to that multidispatched function:
> it can't be modularly extended.

I don't follow.

>> Dispatching the outer function allows to choose the proper algorithm,
>> e.g. one which minimizes rounding errors and which returns an exact
>> result in as broad a class of cases as possible.
>
> I don't follow.

A multidispatched function doesn't have to access internals of its
arguments.

For example the function BeginsWith, which checks whether a sequence
> begins with the same elements as some other sequence, has a
specialization for both arguments of type FLAT_SEQUENCE. A flat
sequence is one which has an efficient size calculation and indexing;
this is an abstract supertype. This specialization checks the sizes
first, before proceeding with iteration through elements. It doesn't
rely on any internals of the arguments, it only relies on FLAT_SEQUENCE
being declared as a supertype of those sequences for which the size
can be efficiently obtained.
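A rough Haskell rendering of that idea (the class and function names are
invented here for illustration, not Marcin's actual code) shows that the
size shortcut needs only the abstract interface:

```haskell
-- An abstract interface for sequences with cheap size and indexing.
class FlatSequence f where
  size  :: f a -> Int
  index :: f a -> Int -> a

-- Illustrative instance (Haskell lists don't really have O(1) size,
-- but the interface, not the representation, is the point here).
newtype Flat a = Flat [a]

instance FlatSequence Flat where
  size  (Flat xs)   = length xs
  index (Flat xs) i = xs !! i

-- Check the sizes first, then compare element by element; nothing
-- here touches the representation of either argument directly.
beginsWith :: (FlatSequence f, FlatSequence g, Eq a)
           => f a -> g a -> Bool
beginsWith xs pre
  | size pre > size xs = False
  | otherwise = all (\i -> index xs i == index pre i) [0 .. size pre - 1]
```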

>>>As said above, I don't think that dynamic dispatch without access
>>>to internals is particularly useful, other than for optimisation.
>> Even if this was true, optimization is useful.
>
> How would you organise regression testing?

The same as for any other library which can't be tested exhaustively:
by giving it sample arguments diverse enough that we hope that all
interesting code paths have been covered.

How would you organize testing of a higher order function?

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
0
qrczak (1266)
11/3/2005 11:57:41 AM
On Tue, 1 Nov 2005 23:46:23 +0000 (UTC), "Jerzy Karczmarczuk"
<karczma@info.unicaen.fr> wrote:

>But there is essentially one way to get people born...

2 ways ... if you count cesarean.

George
--
for email reply remove "/" from address
0
George
11/3/2005 1:24:53 PM
Marcin 'Qrczak' Kowalczyk schrieb:
> Joachim Durchholz <jo@durchholz.org> writes:
> 
> 
>>>>Nope. I'm talking about incompatible *internals*.
>>>
>>>That's why I said that false subtyping is acceptable when the
>>>incompatibility is limited to internals, used only in a limited area
>>>of code.
>>
>>Huh? Internals don't go into the notion of subtyping.
> 
> If a difference in internals breaks code, they were not really internals.
> 
> I don't understand your complaint. You are saying that when you declare
> two types as compatible, when in reality they are not, bad things happen.
> Well, it's normal, and dynamic dispatch has nothing to do with it. Equally
> bad things would happen if you applied non-dispatched functions to objects
> they are not prepared to handle.

Likewise, I don't understand your position. I suspect we're using terms 
in slightly different senses.

Let me restate my perspective:

I'm starting from the classic statically-typed OO perspective, which 
upholds the following principles:
* A class defines a type.
* The LSP, i.e. that a subclass should define a subtype.
* Class boundaries and module boundaries coincide.
* Function calls are dispatched based on run-time type information.

I hold that at least some of these principles need to be modified, or 
modularity as such goes out of the window, if multiple dispatch is 
introduced.

>>Numeric frameworks can easily be made extensible via conversion
>>(convert to the most specific type that can represent both operands,
>>then apply the operator). No multiple dispatch needed here, except
>>possibly for optimisation.
> 
> It's not that simple. When we add an integer to a floating point
> number, most languages and most people expect the result to be a
> floating point number, even if it could not represent the integer
> exactly. Turning them into a vulgar fraction would be harmful because
> their performance characteristic doesn't allow them to replace
> floating point. An important aspect of fixed-size floating point is
> that precision is cut in order to maintain a constant efficiency.

Well, then floating-point and integer aren't in a subtype relationship, 
and they don't even compose.

Integer and floating-point arithmetic are simply different operations, 
and there should be no automatic selection anyway.

> Dispatch alone is not sufficient. When a programmer asks to compute
> the square root of 2, types of operands don't indicate the precision
> in which he wants the result to be represented. I have a dynamically
> scoped variable holding a conversion function for choosing the
> representation of inexact reals, for cases it doesn't follow from
> operands.

That depends on how you define the square root. Actually it's a 
two-parameter operation: the number of which we want the square root, 
and the precision that we want (machine epsilon or integral numbers, in 
that case). (There's also a difference in the *kind* of precision: 
relative or absolute. Again, we're back at two different operations, 
even if most programmers would conflate them.)

>>Of course, since numerics are usually done by the language designer,
>>interference between independent third parties is usually not an
>>issue.
> 
> Indeed. But I did not want to commit to a specific floating point type
> in the language design.

Seems like a wise decision to me.

>>Things get interesting once you wish to extend the framework to
>>encompass incompatible things like vector spaces, which can be
>>mutually incompatible. Is that the case in your framework?
> 
> I did not design with this in mind, but it should work well.
> 
>>If yes: when does your framework detect attempts to operate
>>on incompatible data, say adding a two-dimensional and a
>>three-dimensional vector?
> 
> If they have different types, there will probably be no method of the
> given generic function for them, so you will get an error about a
> generic function not applicable to some combination of types. If they
> are not distinguished by types, the implementation of the operation
> will have to detect this and throw an exception.

Hmm... how does the framework decide what combinations are compatible 
and which are not?

>>Um... on second thinking, I recognise that conversion isn't always
>>the Right Thing even with simple integers. Assume you have two
>>integers, one from a ring modulo some number (i.e. wrap-around
>>arithmetic), one from an unlimited-digits type.
> 
> They have a sufficiently incompatible behavior that the intended
> meaning of mixing them in the same operation is unclear. If it was
> done anyway, the implementor of these numbers would have to decide
> how to behave. But I'm not sure whether they should be considered
> numbers at all.

Numbers modulo some limit are numbers, sure enough. All the usual 
arithmetic laws hold. (There are a few extra ones: subtraction becomes a 
total function, which it isn't with natural numbers, and if the limit is 
a prime, division becomes "mostly invertible", i.e. all numbers except 
zero get an inverse. It's an interesting arithmetic, but an arithmetic 
nonetheless. It's also practically relevant because numbers modulo a 
power of two are particularly efficiently implementable on today's hardware.)
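(A toy version in Haskell, just to exhibit the arithmetic; the type name
is mine. With the prime modulus 7, all the ring laws hold and
subtraction is total:)

```haskell
-- Integers modulo 7: wrap-around arithmetic in which the usual ring
-- laws hold and subtraction is a total function.
newtype Mod7 = Mod7 Int deriving (Eq, Show)

mod7 :: Int -> Mod7
mod7 n = Mod7 (n `mod` 7)

instance Num Mod7 where
  Mod7 a + Mod7 b = mod7 (a + b)
  Mod7 a - Mod7 b = mod7 (a - b)
  Mod7 a * Mod7 b = mod7 (a * b)
  negate (Mod7 a) = mod7 (negate a)
  fromInteger     = mod7 . fromInteger
  abs             = id   -- not meaningful in a modular ring; stubbed
  signum          = id   -- likewise
```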

>>How does your framework handle the situation?
> 
> If they are declared with INTEGER as a supertype (it's different
> from INT which implies a specific representation), then by default
> they will be converted to plain unlimited ints, unless specialized
> differently. If they aren't given INTEGER as the supertype, then
> mixing is an error, unless they are specialized differently.
> 
> How would you want them to behave? Then I could see whether it fits
> in my framework.

I was rather aiming at what the default behavior of the framework is. If 
they don't mix by default, or are converted, then that sounds reasonable 
to me.

>>How does your framework handle the situation if somebody adds a new
>>kind of integers; say, "only even integers" for a silly example? How
>>would that interact if somebody else defined a numeric "only numbers
>>that are evenly divisible by three"?
> 
> Depends on how they implement operations and conversions.
> 
> Besides methods of arithmetic operations for concrete types, there are
> specializations for abstract supertypes like INTEGER, RATIONAL, REAL
> and COMPLEX, which convert arguments to canonical representations of
> the given domain (the one for REAL being settable dynamically). This
> means that we will get some reasonable behavior if the given types
> declare them as supertypes but don't implement an operation.
> 
> They can opt out from this by not declaring the supertypes, and in any
> case they can implement specific operations differently. The framework
> was designed with different representations of numbers in mind, i.e.
> for values with varying precision and performance characteristic but
> with a well-defined underlying mathematical model, when it's clear
> which abstract value should be obtained but not clear how it should be
> represented.
> 
> How would you design it differently?

I'd probably do automatic conversion wherever it's sound (I wouldn't 
even convert from integer to float since that can lose precision), and 
require explicit conversion elsewhere.

Ordinarily, +-*/ would be unlimited-precision integer operators, 
possibly with the OCaml variation that +. -. *. /. were there for 
floating-point arithmetic.
Then I'd add a way to "import" alternate definitions, so that those 
who're doing heavy-duty floating-point arithmetic could use an alternate 
definition where +-*/ would be floating-point operators (which in turn 
would mean that the integer operators are either unavailable or renamed 
to something else). Importing the alternative definitions should probably 
be possible on a per-block basis, so that iterating over an array 
doesn't require writing stuff like "i := IntPlus (i, 1)" :-)
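In Haskell the OCaml convention can be imitated directly: keep the plain
symbols for unlimited integers and give floating point its own "dotted"
operators, so nothing ever converts silently. A sketch (the operator
names follow OCaml, the definitions are my own):

```haskell
import Prelude hiding ((+), (*))
import qualified Prelude as P

infixl 6 +, +.
infixl 7 *, *.

-- Plain symbols: unlimited-precision integer arithmetic only.
(+), (*) :: Integer -> Integer -> Integer
(+) = (P.+)
(*) = (P.*)

-- Dotted symbols: floating-point arithmetic only.
(+.), (*.) :: Double -> Double -> Double
(+.) = (P.+)
(*.) = (P.*)
```

Applying +. to an Integer value is then a type error rather than a
silent conversion.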

>>>Dispatching the outer function allows to choose the proper algorithm,
>>>e.g. one which minimizes rounding errors and which returns an exact
>>>result in as broad class of cases as possible.
>>
>>I don't follow.
> 
> A multidispatched function doesn't have to access internals of its
> arguments.
> 
> For example the function BeginsWith, which checks whether a sequence
> begins with the same elements as some other sequence, has a
> specialization for both arguments of type FLAT_SEQUENCE. A flat
> sequence is one which has an efficient size calculation and indexing;
> this is an abstract supertype. This specialization checks the sizes
> first, before proceeding with iteration through elements. It doesn't
> rely on any internals of the arguments, it only relies on FLAT_SEQUENCE
> being declared as a supertype of those sequences for which the size
> can be efficiently obtained.

Hmm... sorry I can't follow that to the last ramification right now 
(lack of time, not lack of merit in the argument).

I think it's not necessary to do multiple dispatch in such a case, but 
I'd have to take a look how such a case would work out with both 
approaches. Unfortunately, I'm lacking the time to do that.

Let's put that aside as "can't be resolved right now". Sorry for the 
inconvenience.

>>>>As said above, I don't think that dynamic dispatch without access
>>>>to internals is particularly useful, other than for optimisation.
>>>
>>>Even if this was true, optimization is useful.
>>
>>How would you organise regression testing?
> 
> The same as for any other library which can't be tested exhaustively:
> by giving it sample arguments diverse enough that we hope that all
> interesting code paths have been covered.
> 
> How would you organize testing of a higher order function?

That's a different kind of problem.

For optimisations, you need to test each case separately, since each 
case is handled by a different code path. Each optimisation adds a code 
path that needs to be tested separately; if N optimisations may interact 
with M others, you need to test NxM cases.

Higher-order functions are "more linear", so a single test case to see 
whether it does the right thing at all should cover most cases. (If the 
HOF does direct recursion, I'd probably want to test the base case, the 
base+1 case, and a base+<some arbitrary number of iterations> case.)
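
That test plan for a directly recursive HOF can be sketched as follows,
in Python for concreteness (my_map is a hypothetical HOF under test):

```python
def my_map(f, lst):
    # Directly recursive higher-order function under test.
    if not lst:
        return []
    return [f(lst[0])] + my_map(f, lst[1:])

def test_my_map():
    inc = lambda x: x + 1
    assert my_map(inc, []) == []                             # base case
    assert my_map(inc, [1]) == [2]                           # base + 1
    assert my_map(inc, list(range(7))) == list(range(1, 8))  # base + n
```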

(Note I haven't done any serious regression testing with HOFs yet. The 
above is purely theoretical, and probably needs to be refined with 
practice. Suggestions welcome.)

Regards,
Jo
0
jo427 (1164)
11/3/2005 2:49:04 PM
Dirk Thierbach wrote:
> 
> Andre <andre@het.brown.edu> wrote:
> > Because the HOF approach exposes internal implementation
> > details that do not belong to the abstraction.
> 
> I don't follow that. In what way does the HOF approach expose the
> internal implementation? By abusing side-effects? Or what idea is
> behind this? Example?

I just meant that there are many ways of implementing a
do-until construct, many of which do not require the thunks needed
by the one HOF example (which imply the use of side effects, as you
observe), or the monadic types used in the other HOF example.  

A macro can be used to hide this detail, and can be justified by the 
same arguments used for any other kind of abstraction.  

These include:

- Modularity: being able to change the implementation 
  from e.g., thunks to monads without having to change all the use sites. 
- Optimization: The macro might construct thunks behind the 
  scenes, but does not have to.  A syntax that is agnostic with 
  respect to this detail is arguably better than a HOF that isn't,
  for a looping construct where performance might be important.  

Cheers
Andre
0
andre9567 (120)
11/3/2005 3:41:29 PM
Andre schrieb:
> - Modularity: being able to change the implementation 
>   from e.g., thunks to monads without having to change all the use sites. 

Ah, that's the "exposes internal implementation" criticism - it meant 
"exposing the internal implementation of the 'until' construct".

Well, I'm not sure that this is a serious issue in an FPL context. If 
all you have are closures, then there isn't much variation here. I think 
the variation that we have observed is more because "until" is an 
inherently imperative construct, something that isn't needed in 
side-effect-free FPL programming. You don't then need two cooperating 
closures.

A better example would be "iterate over a list and return some condensed 
information". For historical reasons, this is commonly called a "fold" 
and looks like this:

   fold op zero list

E.g. to get the sum of all elements of integer_list, you write

   fold (+) 0 integer_list

One could do that using macros, but it's not necessary: fold is a simple 
higher-order function (a three-liner including the type signature IIRC).
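
In a language without a built-in fold, the three-liner looks about like
this (a Python sketch; the left fold is the variant shown):

```python
import operator

def fold(op, zero, lst):
    # fold op zero [x1, x2, ...]  =  op(... op(op(zero, x1), x2) ...)
    acc = zero
    for x in lst:
        acc = op(acc, x)
    return acc
```

so fold(operator.add, 0, integer_list) is the sum of integer_list, as in
the example above.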

There exists a multitude of such operators. They can slice and dice 
lists in about any way imaginable. There are operations to split and 
merge lists, to transform them.
I don't need "until" if I put everything into lists and have these 
operators work on these :-)
OK, lists aren't suitable for every task in computation. They suck if 
you need to operate on many elements at once, or need to select a few 
items in them at random. There are other data structures that can do 
this, of course, and these also come with the appropriate HOFs for 
slicing and dicing.
Do I need macros? Not really: I have all the data structures I want, and 
the operations to transform them as needed.

> - Optimization: The macro might construct thunks behind the 
>   scenes, but does not have to.  A syntax that is agnostic with 
>   respect to this detail is arguably better than a HOF that isn't,
>   for a looping construct where performance might be important.  

FPL compilers are usually quite good at optimising this kind of closure 
("thunk" in your terminology) away.

Regards,
Jo
0
jo427 (1164)
11/3/2005 6:41:45 PM
Andre wrote:
> A macro can be used to hide this detail, and can be justified by the 
> same arguments used for any other kind of abstraction.  
> 
> These include:
> 
> - Modularity: being able to change the implementation 
>   from e.g., thunks to monads without having to change all the use sites. 
> - Optimization: The macro might construct thunks behind the 
>   scenes, but does not have to.  A syntax that is agnostic with 
>   respect to this detail is arguably better than a HOF that isn't,
>   for a looping construct where performance might be important.  

Good example.  I much prefer Lisp's
(dolist (x '(1 2 3))
   (format t "~d~%" x))

over Scheme's
(for-each (lambda (x) (display x) (newline))
	  '(1 2 3))

-- 
The road to hell is paved with good intentions.
0
u.hobelmann (1643)
11/3/2005 6:50:19 PM
Joachim Durchholz <jo@durchholz.org> writes:

> I'm starting from the classic statically-typed OO perspective, which
> upholds the following principles:
> * A class defines a type.
> * The LSP, i.e. that a subclass should define a subtype.
> * Class boundaries and module boundaries coincide.
> * Function calls are dispatched based on run-time type information.

I don't know a language-independent definition of a class.

LSP defines subtyping on the basis of substitutability. It can be
interpreted either as a definition of subtyping, or as a guideline
when to declare subtypes if they are declared explicitly. Subclassing
(inheritance of implementation) is a different tool; subclasses may
coincide with subtypes or not.

I hate languages where class boundaries and module boundaries must
coincide. For me modules are used as namespaces, splitting code into
files and separate compilation. (Some people believe that namespaces
should be decoupled from other uses of modules, but for me the ability
to selectively import and reexport names makes modules appropriate.)

Some functions dispatch their calls based on run-time type information.
Others do not.

My language Kogut reflects these views. I don't know whether it has
classes, it depends on the definition of a class. Subtyping, used for
dispatch, is declared explicitly, and it's the programmer's responsibility
to declare supertypes when it makes sense. When implementing a type,
you can delegate the behavior to another object or include "features"
(mixins), although I almost don't use this facility and prefer explicit
delegation through a field (with the functional style you have to change
many methods during wrapping anyway).

> Well, then floating-point and integer aren't in a subtype
> relationship, and they don't even compose.
>
> Integer and floating-point arithmetic are simply different
> operations, and there should be no automatic selection anyway.

I agree that they aren't in a subtype relationship, and disagree with
the rest. 2*x should return the number twice as big no matter whether
it's represented as an integer or as a floating point object.

> That depends on how you define the square root. Actually it's a
> two-parameter operation: the number of which we want the square
> root, and the precision that we want (machine epsilon or integral
> numbers, in that case).

Most of the time I don't want to specify the precision explicitly,
and with floating point there is little choice anyway. So my Sqrt
takes a single argument, like in all other languages.

A variable precision floating point type (currently unimplemented)
would receive the precision through a dynamic variable or use the
precision of the argument; I don't know which policy is better.
Often the same precision is used for several operations, so passing
it implicitly is more convenient. This doesn't need a new interface
of Sqrt.

>> If they have different types, there will probably be no method of
>> the given generic function for them, so you will get an error about
>> a generic function not applicable to some combination of types.
>> If they are not distinguished by types, the implementation of the
>> operation will have to detect this and throw an exception.
>
> Hmm... how does the framework decide what combinations are
> compatible and which are not?

I don't understand. It provides methods for types where the given
operation makes sense mathematically, and doesn't try to provide
cases where it makes no sense (like checking whether a ratio is even).
If only a part of a type is in the domain, for other values there
will be a runtime error (like division by zero).

It provides infinities which extend the domain of some operations
a bit. Inexact infinities are used to propagate overflow of the
representation, and exact infinities are primarily used to provide
a bound which is greater or less than anything else.

There are two kinds of operations with respect to default
implementations: some operations are defined by default through
conversion to different types, and should be provided by all types
which want their algorithm to be used (core operations); other
operations are defined in terms of core operations, e.g. Sqr x
is defined as x*x for types which don't provide their explicit
definition.
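
The core-vs-derived split can be sketched like this (Python for
concreteness; the Sqr-as-x*x default is from the post, the class names
are hypothetical):

```python
class Number:
    # Core operation: every numeric type must provide its own mul.
    def mul(self, other):
        raise NotImplementedError

    # Derived operation, defined in terms of core ones, like Sqr x = x*x;
    # a type with a faster squaring algorithm can simply override it.
    def sqr(self):
        return self.mul(self)

class Int(Number):
    def __init__(self, v): self.v = v
    def mul(self, other): return Int(self.v * other.v)
```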

>> They have a sufficiently incompatible behavior that the intended
>> meaning of mixing them in the same operation is unclear. If it was
>> done anyway, the implementor of these numbers would have to decide
>> how to behave. But I'm not sure whether they should be considered
>> numbers at all.
>
> Numbers modulo some limit are numbers, sure enough. All the usual
> arithmetic laws hold.

It doesn't hold that 2*x==0 implies x==0. Generally they give
different results for operations already defined for plain integers,
so if they are considered numbers, they would have to be different
from existing integers, i.e. "2 (modulo 5)" is not a representation
of 2 but of an entirely different abstract thing, while float 2.0
for example *is* a representation of 2.

> It's also practically relevant because numbers modulo a power of two
> is particularly efficiently implementable on today's hardware.)

In the current implementation of Kogut they would be less efficient
than the standard INT, because they could not use the unboxed
representation, which is already taken by small ints. The only case
where they could be more efficient is representing numbers just above
the small int range (signed machine word minus one bit), where
a heap-allocated fixed-size word is faster than a general bignum.
Of course in a hypothetical optimizing implementation this could be
different.

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
0
qrczak (1266)
11/3/2005 7:03:05 PM
Joachim Durchholz wrote:
> Andre schrieb:
>> - Modularity: being able to change the implementation   from e.g.,
>> thunks to monads without having to change all the use sites. 
> 
> Ah, that's the "exposes internal implementation" criticism - it meant
> "exposing the internal implementation of the 'until' construct".
> 
> Well, I'm not sure that this is a serious issue in an FPL context. If
> all you have are closures, then there isn't much variation here. I think
> the variation that we have observed is more because "until" is an
> inherently imperative construct, something that isn't needed in
> side-effect-free FPL programming. You don't then need two cooperating
> closures.
> 
> A better example would be "iterate over a list and return some condensed
> information". For historical reasons, this is commonly called a "fold"
> and looks like this:
> 
>   fold op zero list
> 
> E.g. to get the sum of all elements of integer_list, you write
> 
>   fold (+) 0 integer_list
> 
> One could do that using macros, but it's not necessary: fold is a simple
> higher-order function (a three-liner including the type signature IIRC).

What would the macro for fold look like?  I'm not sure what your example
is supposed to illustrate, but unless the list being folded over is
known statically, ie. a literal list of numbers, a macro can do little
work here.  A macro then is neither necessary nor sufficient as far as I
can tell.

> There exists a multitude of such operators. They can slice and dice
> lists in about any way imaginable. There are operations to split and
> merge lists, to transform them.
> I don't need "until" if I put everything into lists and have these
> operators work on these :-)
> OK, lists aren't suitable for every task in computation. They suck if
> you need to operate on many elements at once, or need to select a few
> items in them at random. There are other data structures that can do
> this, of course, and these also come with the appropriate HOFs for
> slicing and dicing.
> Do I need macros? Not really: I have all the data structures I want, and
> the operations to transform them as needed.

Try writing even streams in a strict language without making a mistake.
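
The classic pitfall is forgetting to memoize the promise, so the tail is
recomputed (and any effects re-run) on every access. A sketch of memoized
streams in a strict language (Python standing in; delay/force are the
traditional names):

```python
def delay(thunk):
    # A memoizing promise: force it twice, and the thunk runs only once.
    cell = []
    def force():
        if not cell:
            cell.append(thunk())
        return cell[0]
    return force

def cons_stream(head, tail_thunk):
    return (head, delay(tail_thunk))

def stream_head(s): return s[0]
def stream_tail(s): return s[1]()   # force the promise

def integers_from(n):
    return cons_stream(n, lambda: integers_from(n + 1))

def stream_take(s, n):
    out = []
    for _ in range(n):
        out.append(stream_head(s))
        s = stream_tail(s)
    return out
```

Dropping the memoization in delay still "works" for this example, which
is exactly why the mistake is so easy to make.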

David
0
11/3/2005 7:37:27 PM
Ulrich Hobelmann wrote:
> Good example.  I much prefer Lisp's
> (dolist (x '(1 2 3))
>   (format t "~d~%" x))
> 
> over Scheme's
> (for-each (lambda (x) (display x) (newline))
>       '(1 2 3))
> 

This example is exactly where a macro gives you nothing compared to its
procedural counterpart.  Well, except that I have to remember another
binder besides the universal one, lambda.  I have to remember the
evaluation rules for this special form rather than the universal one for
application.  There's no abstraction or any other benefit offered by
dolist over for-each.  Lastly, dolist has been relegated to second class
status whereas for-each is a first-class citizen.

Maybe there are other features of dolist that separate it from for-each,
but for this example the procedure clearly wins in my opinion.

Could you say *why* you much prefer one over the other?

David
0
11/3/2005 7:51:10 PM
Ulrich Hobelmann <u.hobelmann@web.de> writes:

> Good example.  I much prefer Lisp's
> (dolist (x '(1 2 3))
>    (format t "~d~%" x))
>
> over Scheme's
> (for-each (lambda (x) (display x) (newline))
> 	  '(1 2 3))

OTOH if the type of the sequence is not known statically, a macro
can't do much better than an equivalent of the second implementation.

And a macro usage is not necessarily shorter. In my Kogut this very
example looks like this: Each [1 2 3] WriteLine

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
0
qrczak (1266)
11/3/2005 8:17:39 PM
Marcin 'Qrczak' Kowalczyk wrote:
> Joachim Durchholz <jo@durchholz.org> writes:
>> Things get interesting once you wish to extend the framework to
>> encompass incompatible things like vector spaces, which can be
>> mutually incompatible. Is that the case in your framework?
> 
> I did not design with this in mind, but it should work well.
> 
>> If yes: when does your framework detect attempts to operate
>> on incompatible data, say adding a two-dimensional and a
>> three-dimensional vector?
> 
> If they have different types, there will probably be no method of the
> given generic function for them, so you will get an error about a
> generic function not applicable to some combination of types. If they
> are not distinguished by types, the implementation of the operation
> will have to detect this and throw an exception.

Looks as if this comes down to the old argument about the pros and cons of
static typing. My personal preference is a compiler error in such a case,
rather than an exception.

BTW, the whole problem can be circumvented simply by not allowing open
recursion. I think open recursion is too difficult to use and maintain,
because it hides a really important dependency. It is useful mostly in
languages that don't support closures directly. If you have closures, you
should make the dependency explicit by giving the method as a parameter,
instead of letting the user redefine it in a derived class.
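
The alternative sketched here — passing the would-be overridable method
as an explicit parameter instead of relying on open recursion — looks
like this in Python (all names hypothetical):

```python
# Open recursion: render depends on fmt, which a derived class may
# silently redefine; the dependency is hidden in the class hierarchy.
class Printer:
    def render(self, items):
        return ", ".join(self.fmt(x) for x in items)
    def fmt(self, x):
        return str(x)

# Closure-based alternative: the dependency is an explicit parameter.
def make_renderer(fmt=str):
    def render(items):
        return ", ".join(fmt(x) for x in items)
    return render
```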

Ben
0
11/3/2005 8:30:03 PM
Joachim Durchholz wrote:
 
> Do I need macros? Not really: I have all the data structures I want, and
> the operations to transform them as needed.

Here is some nice motivation for macros within the context 
of Scheme.  Especially section 35.5 and the DSL in chapter 36 are 
of interest: 

http://www.cs.brown.edu/people/sk/Publications/Books/ProgLangs/
0
andre9567 (120)
11/3/2005 9:09:41 PM
In article <20051102074322.4B0.3.NOFFLE@dthierbach.news.arcor.de>,
Dirk Thierbach  <dthierbach@usenet.arcornews.de> wrote:
>Gary Baumgartner <gfb@cs.toronto.edu> wrote:
>> But another poster has made sweeping statements that I don't believe are
>> valid criticisms of Scheme macros. 
>
>I didn't follow most of this discussion. Macros are somtimes nice to
>have, but I think the criticism starts to appear when people start making
>arguments like "Lisp/Scheme is the most powerful language because it
>has so powerful macros that are better than everything else." (I am
>exaggerating, of course :-).

I'll take this as an opportunity to set the record straight.

Here's how I got involved:

>In article <dio2nq$uk...@online.de>,
>Joachim Durchholz  <j...@durchholz.org> wrote:
>[...]
>>I have always been sceptical about self-definable syntax. It tends to
>>encourage code that nobody but the original macro author understands.
>
>Would you claim this about functions, datatypes or classes?
>What's so different about (my-function a ...) versus (my-macro a ...)?
>Don't you just see "my-function" or "my-macro" and look up its documentation?
>
>Gary Baumgartner

I eventually mentioned an until macro I had just written (for other reasons)
 to see how the criticisms applied, and make the discussion more concrete.
 That's all. That macro has since I think been taken out of context, though
 I certainly have learned from some of the responses to it, regardless of
 whether those responses were meant to address a strong claim never made
 for the macro, or indeed macros in general (in this thread, though it's
 hard to keep up).

I still think that certain people have rejected macros for the wrong reasons.
 That doesn't mean I'm claiming there are no valid reasons to reject them.
 But in my use of them I don't recall encountering a situation where I
 thought "oh, if only I/they hadn't been tempted by macros". If I think
 about it I might find cases where I thought a function would have been
 better, but then (and this has been the essence of my response all along)
 I've found cases where I thought someone made the wrong function, class, etc.

Gary Baumgartner
0
gfb (30)
11/3/2005 9:51:17 PM
David Van Horn wrote:
> Ulrich Hobelmann wrote:
>> Good example.  I much prefer Lisp's
>> (dolist (x '(1 2 3))
>>   (format t "~d~%" x))
>>
>> over Scheme's
>> (for-each (lambda (x) (display x) (newline))
>>       '(1 2 3))
>>
> 
> This example is exactly where a macro gives you nothing compared to its
> procedural counterpart.  Well, except that I have to remember another

Efficiency.  To make the Scheme version avoid a dynamic function call, 
FOR-EACH needs to be recognized by the compiler.  DOLIST is just a macro 
that expands to label-and-goto code.

> binder besides the universal one, lambda.  I have to remember the
> evaluation rules for this special form rather than the universal one for
> application.  There's no abstraction or any other benefit offered by
> dolist over for-each.  Lastly, dolist has been relegated to second class
> status whereas for-each is a first-class citizen.

That's right, but DOLIST's syntax is so easy that I gladly pay the 
price.  FOR-EACH is too verbose and distracts from the essentials, IMHO.

> Maybe there are other features of dolist that separate it from for-each,
> but for this example the procedure clearly wins in my opinion.
> 
> Could you say *why* you much prefer one over the other?

I find it more readable, more writeable, and it's probably faster.

-- 
The road to hell is paved with good intentions.
0
u.hobelmann (1643)
11/3/2005 10:51:25 PM
Marcin 'Qrczak' Kowalczyk wrote:
> Ulrich Hobelmann <u.hobelmann@web.de> writes:
> 
>> Good example.  I much prefer Lisp's
>> (dolist (x '(1 2 3))
>>    (format t "~d~%" x))
>>
>> over Scheme's
>> (for-each (lambda (x) (display x) (newline))
>> 	  '(1 2 3))
> 
> OTOH if the type of the sequence is not known statically, a macro
> can't do much better than an equivalent of the second implementation.

Well, if you can have a for-each function that works for all sequences, 
so could the dolist macro...

> And a macro usage is not necessarily shorter. In my Kogut this very
> example looks like this: Each [1 2 3] WriteLine

It's mostly shorter because it avoids the verbose lambda.  If you didn't 
have WriteLine, how would you write the "display, then newline" stuff in 
Kogut?

-- 
The road to hell is paved with good intentions.
0
u.hobelmann (1643)
11/3/2005 10:53:38 PM
Ulrich Hobelmann <u.hobelmann@web.de> writes:

>> OTOH if the type of the sequence is not known statically, a macro
>> can't do much better than an equivalent of the second implementation.
>
> Well, when you have a for-each function that works for all sequences,
> so could the dolist macro...

The point is, what would the macro expand to? It must either pack the
loop body in a closure and pass it to a different function depending
on the sequence type, or pack the iterator in an object and perform an
indirect call for obtaining each element, or jump between the iteration
and the loop body through continuations.

In each case the cost is similar, and there is no gain from iteration
being a macro except that some people might find the macro syntax nicer.

>> And a macro usage is not necessarily shorter. In my Kogut this very
>> example looks like this: Each [1 2 3] WriteLine
>
> It's mostly shorter because it avoids the verbose lambda.

Indeed, but I gave even explicit lambdas a concise syntax:
?parameters {body} in the general case.

For continuation passing style you can write ?parameters => body
without the closing brace, and with empty parameters the question
mark is not necessary.

If the body consists of a single function call with some arguments
passed from the lambda's parameters (in the same order), you can
write the application with passed parameters replaced by _, e.g.
Write _ "\n" being equivalent to ?s {Write s "\n"}. This is similar
to currying in languages which do curry (the provided arguments are
evaluated when the function is created).
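
That shorthand amounts to binding the trailing argument while leaving the
leading one open — the mirror image of ordinary currying. A Python
approximation (rpartial and write are hypothetical helpers):

```python
def rpartial(f, *bound):
    # Bind trailing arguments, leave the leading ones free:
    # rpartial(write, "\n") behaves like Kogut's  Write _ "\n".
    # The bound arguments are evaluated now, at creation time.
    return lambda *free: f(*free, *bound)

def write(*parts):
    return "".join(str(p) for p in parts)

write_line = rpartial(write, "\n")
```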

A lightweight syntax for lambdas means that more control structures
can be implemented with regular functions instead of with macros.

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
0
qrczak (1266)
11/3/2005 11:17:31 PM
Joachim Durchholz wrote:
> Benjamin Franksen schrieb:
>> Joachim Durchholz wrote:
>>>>>The problem is: we don't know of better alternatives. At least not if
>>>>>they want to teach OO.
>>>>
>>>>We do, see above.
>>>
>>>Indeed.
>>>And that with my personal Eiffel background...
>> 
>> Of which I happen know ;) ... (I think now and again you posted on the
>> SmartEiffel mailing list). I already wondered why you didn't mention it.
> 
> Eiffel was one of those deep loves that turned bitter. Holes in the type
> system, and a language designer, first refusing to acknowledge the
> problem, then refusing to fix it ("it's a rare problem"), finally coming
> up with a fix that essentially disables dynamic dispatch as a design
> principle... no.
>
> Similar story with repeated inheritance ("diamond inheritance" for the
> C++ folks), though that was more personal: I was the one who researched
> the problem, my arguments brushed aside, the issue finally acknowledged
> when I presented an irrefutable example, my solutions brushed aside,
> finally being presented with a "solution" that further restricted
> dynamic dispatch.
> 
> Eiffel is "almost there" in so many respects, yet the language designer
> refuses to go the last step - if at all, the language is going back.

Yes. It's a sad story. I was never as involved in it as you were but I got
disappointed with Eiffel's development, too, and my issues with it have
been similar.

> I suspect that being monadic isn't the core of Haskell's way of doing
> IO. It's more that you stick together a list of activities, and leave
> the dirty work of actually executing the list to the run-time system.

You say it: "stick together a list of activities". That's exactly what you
use a monad for. And note that there are (data) dependencies between
the activities, i.e. it is not just a simple list; what to do next and how
to do it can depend on what the earlier activities resulted in.

> Yet I don't see anything about computations in the monad laws. They
> mandate:
> 
> * A type (the "monad type" m)
> * An operation to create a monad from a single value (return)
> * An operation to connect two monads (>>=)
> * A rule that the connect operation should be associative
> * A convenience operator that is a combination of return and >>=
>    (>>)
> * A convenience operator for signalling errors.

First that doesn't explain the intuition behind the formalism. Why are the
laws and operations such and such? What is it that this formalism is an
abstraction of?

Second, the bind operation (>>=) does /not/ connect two monads. Instead it
connects a monad to a function that returns a(nother) monad. See further
below for why this distinction is important.

> I think people are taking the characteristics of two well-known monads
> (IO and State) and take them to be characteristics of *all* monads. 

I conjecture that every monad (in Haskell) has a (more or less natural)
interpretation as a computation (in the sense I explained in my last
posting).

> The 
> problem is compounded by the fact that all functions can be seen as
> operations... but that's not particularly helpful: I don't want to see
> everything as a computation, I want to see functions as value-to-value
> mappings!

Yes, sure. It can lead to misunderstandings for beginners or even
experienced people without prior exposure to FP. And nobody wants you to
see everything always as a computation. I am merely saying that computation
is the basic intuition behind and the standard model for monads, at least
in the area of programming. (Don't know much about category theory.)

And let us not forget that programming is ultimately about getting results.
It is nice to be able to do this in a more abstract and concise way (FP).
But a function in an FPL is /not/ the same as a function in mathematics.
The computational aspects are important to know about, even for beginners;
otherwise their programs may (eventually) terminate but not in time for
them to observe it actually happening ;-) It is nice to be able to reason
about programs in terms of their (denotational) semantics; yet, this is not
the same as running them on a computer.

>> Of course, since the monad is an abstraction, it can be used in unforseen
>> ways that do not seems to have much to do with 'computation' as such. But
>> that's what happens with all (good) abstractions, doesn't it? The main
>> point is that monadic values are values that (somehow) 'produce' other
>> values.
> 
> I agree
> 
>> Ok, now that I have explained how I see it, I finally understand what you
>> mean by 'chaining up values associatively'. Yes, but this is not the
>> whole picture: monadic values need to /return something (else)/ and it is
>> this what makes them 'chainable' in an associative manner.
> 
> No, the monad laws specifically do *not* require that there's any way to
> get the values back out. IO in fact takes that route, or at least so it
> seems to me.

Ok, it was wrong (or at least misleading) the way I formulated it. What I
meant when I said 'monadic values need to /return something (else)/' is
that either your program or else the 'system' must somehow be able to get
at the value(s) 'inside' the monad.

It may be that there are some monads that only allow this to happen 'behind
the scenes', but there must be some way, or else your monad is useless.
Why? Well, how else do you explain the bind operator? I already mentioned
above that it does /not/ chain two monadic values. Rather it chains a
monadic value with a /function/ that returns a monadic value. And the type
of bind states that the type of the values 'contained' in its first
argument is the same as the type of values expected by the function. So,
how does this function get its input (a value to be applied to)? Obviously
the bind operator must (somehow) extract it from its first argument.

This can be made more precise using a Wadler'ian "theorems for free" kind of
appeal to polymorphism: The bind operator doesn't know anything about the
type of values contained in the monads. That means it can't do anything to
produce such a value (other than bottom) out of nothing. It can chose not
to apply its second argument to anything, but then the result must be
trivial ('bottom' or 'return bottom'), because bind is also polymorphic in
the type of values contained in its result monad. Otherwise a non-bottom
value is needed for the function to be applied to. The only place where
such a value can come from is bind's first argument.

Of course this reasoning works only for monads that are implemented in
Haskell itself.
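
The argument is easy to see in a concrete instance: bind either
short-circuits trivially or takes the value out of its first argument to
feed the function. A Maybe-style sketch in Python (class and function
names hypothetical):

```python
class Maybe:
    def __init__(self, value=None, nothing=False):
        self.value, self.nothing = value, nothing

NOTHING = Maybe(nothing=True)

def ret(x):
    # 'return': wrap a plain value in the monad.
    return Maybe(x)

def bind(m, f):
    # bind cannot invent a value of the right type out of nothing:
    # it either produces the trivial result, or it extracts m's value
    # and applies f to it -- exactly the point made above.
    if m.nothing:
        return NOTHING
    return f(m.value)
```

Associativity then holds: bind(bind(m, f), g) agrees with
bind(m, lambda x: bind(f(x), g)).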

>> Want to explain this to a beginner who still struggles with understanding
>> what a loop (or recursion) is?
> 
> The monad laws? Sure.

So how do you justify them to beginners? Remember, we are talking about
those poor guys desperately asking you what the hell this 'compiler' thing
means telling them their program has a 'syntax error' although everything
'looks right'.

(Yes, there may be exceptional students who can understand and appreciate
monads in a first semester course, but I doubt you'll find many of that
caliber.)

> Haskell's concoct-an-action-list approach? Sure.
> How input and output interact in Haskell? Um... well... not so sure
> anymore. But it should certainly be taught at some point, and the
> earlier, the better. (Maybe there's an easier way to explain it. I'm not
> sure enough about Haskell's IO details to set one up, but the above
> explanations - *if they are indeed correct* - are already a big step
> towards making monads explainable.)

They are explainable just fine, IMO. Just not to the average beginner
student. I remember from my time at the university that many 2nd year
students couldn't really appreciate the 'higher' (=abstract) analysis.
There is a reason why you start teaching the more concrete stuff first and
only gradually abstract things.

Ben
0
11/4/2005 12:13:01 AM
Andre <andre@het.brown.edu> writes:

> I just meant that there are many ways of implementing a
> do-until construct, many of which do not require the thunks needed
> by the one HOF example (which imply the use of side effects, as you
> observe), or the monadic types used in the other HOF example.  
>
> A macro can be used to hide this detail, and can be justified by the 
> same arguments used for any other kind of abstraction.  
>
> These include:
>
> - Modularity: being able to change the implementation 
>   from e.g., thunks to monads without having to change all the use sites. 
> - Optimization: The macro might construct thunks behind the 
>   scenes, but does not have to.  A syntax that is agnostic with 
>   respect to this detail is arguably better than a HOF that isn't,
>   for a looping construct where performance might be important.  

I follow your argument, but I don't think it is compelling.
Regardless of how the construct is implemented, it is going to be the
case that the code within the construct is not going to be evaluated
in the standard manner.  The user is going to have to be aware of this
one way or another.

I find explicit thunks to be a reasonable way to deal with this.  They
restore the code to the standard evaluation model (at the expense of
introducing lexical closures), and the `thunk' syntax is easily and
obviously recognizable.

That said, I also think that macros are a fine tool and the fact that
some people may abuse them is not a good reason to avoid them.
0
jmarshall (140)
11/4/2005 4:32:50 AM
Marcin 'Qrczak' Kowalczyk wrote:
> Ulrich Hobelmann <u.hobelmann@web.de> writes:
> 
>>> OTOH if the type of the sequence is not known statically, a macro
>>> can't do much better than an equivalent of the second implementation.
>> Well, when you have a for-each function that works for all sequences,
>> so could the dolist macro...
> 
> The point is, what would the macro expand to? It must either pack the
> loop body in a closure and pass it to a different function depending
> on the sequence type, or pack the iterator in an object and perform an
> indirect call for obtaining each element, or jump between the iteration
> and the loop body through continuations.

No, each iteration is only a goto.  To find out the sequence type, the 
macro could start by checking the sequence type and then dispatch to one 
of maybe three different loop bodies, or it could use virtual sequence 
accessors (is-last-element?, next) with the associated cost, of course. 
IMO the programmer should declare the sequence type, though, for 
clarity; in that case you end up with just one fine compiled loop.

> In each case the cost is similar, and there is no gain from iteration
> being a macro except that some people might find the macro syntax nicer.

No, one version doesn't compile anything.  The loop has to call the 
lambda function on every iteration.  The macro version, however, creates 
a loop to compile, with the iteration body inline.  So the only jump is 
the test-and-jump-back in the loop.  Everything else is code that's 
directly executed.

> A lightweight syntax for lambdas means that more control structures
> can be implemented with regular functions instead of with macros.

Yes, but still not enough for my taste ;)

-- 
The road to hell is paved with good intentions.
0
u.hobelmann (1643)
11/4/2005 10:06:59 AM
David Van Horn schrieb:
> Joachim Durchholz wrote:
 >
 >>   fold (+) 0 integer_list
 >
> What would the macro for fold look like?  I'm not sure what your example
> is supposed to illustrate, but unless the list being folded over is
> known statically, ie. a literal list of numbers, a macro can do little
> work here.  A macro then is neither necessary nor sufficient as far as I
> can tell.

Indeed.

And that's exactly the point I'm trying to make: I don't see any need 
for macros. I can do everything I want with HOFs, provided defining and 
using them is easy.

As for usage: the (+) above is already a full closure: it's the + 
operator in isolation. That's similar to Lisp, where an atom can stand 
for the operation.

Defining a HOF is simple, too:

 > foldl            :: (a -> b -> a) -> a -> [b] -> a
 > foldl f z []     =  z
 > foldl f z (x:xs) =  foldl f (f z x) xs

The first line just defines the types of the parameters and the result. 
(First parameter is a function that takes an "a" and a "b", the second 
parameter is type "a", third a list of "b", the result again "a". "a" 
and "b" can be arbitrary types.)
In imperative pseudocode, this function is:

   foldl (fn, initial, list)
     result := initial
     foreach item in list loop
       result := fn (result, item)
     endloop
     return result

Note that the first line looks complicated, but it could be left out: 
the compiler is able to infer it. That's typing without the pain: no 
need to write type signatures at all, but the compiler will tell you if 
there's a problem. (The downside is that list elements must have a 
uniform type. There are ways around this, but it's a bit too much 
to explain in an already-too-long post.)

(One last remark: the above is Haskell code, but similar code is found 
in all statically-typed FPLs. Think map and friends on steroids *g*.)
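
To make the definition concrete, here are a couple of evaluations of the
fold just shown (standard Haskell; it's renamed myFoldl here only to
avoid clashing with the Prelude's foldl):

```haskell
-- Same definition as above, under a non-clashing name.
myFoldl :: (a -> b -> a) -> a -> [b] -> a
myFoldl f z []     = z
myFoldl f z (x:xs) = myFoldl f (f z x) xs

main :: IO ()
main = do
  print (myFoldl (+) 0 [1, 2, 3, 4])   -- 10: the imperative "sum" loop
  print (myFoldl (flip (:)) [] "abc")  -- "cba": the same loop reverses
```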


Back to the arguments:

Just because I'm not inventive enough to find an application for macros 
doesn't mean there aren't any, of course. However, I'm confident that 
there's no need for macros at the level of control structures, such as 
the "until" macro given. FPL programmers aren't interested in control 
structures, they have all they want in the form of HOFs from the 
standard libraries.

>>There exists a multitude of such operators. They can slice and dice
>>lists in about any way imaginable. There are operations to split and
>>merge lists, to transform them.
>>I don't need "until" if I put everything into lists and have these
>>operators work on these :-)
>>OK, lists aren't suitable for every task in computation. They suck if
>>you need to operate on many elements at once, or need to select a few
>>items in them at random. There are other data structures that can do
>>this, of course, and these also come with the appropriate HOFs for
>>slicing and dicing.
>>Do I need macros? Not really: I have all the data structures I want, and
>>the operations to transform them as needed.
> 
> Try writing even streams in a strict language without making a mistake.

I haven't, but others have.

In Haskell, I wouldn't need them: the language has lazy evaluation, and 
you simply define an infinite stream (that's fine with the language 
unless you try to access every element of the list, say by taking its 
length, or counting all the even numbers in an infinite list).
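
For instance, in plain Haskell (no library support needed) an infinite
list behaves exactly as described: fine to define, fine to take a prefix
of, nonterminating only if you demand all of it.

```haskell
-- An infinite stream of powers of two; laziness ensures only the
-- demanded prefix is ever computed.
powersOfTwo :: [Integer]
powersOfTwo = iterate (* 2) 1

main :: IO ()
main = do
  print (take 5 powersOfTwo)            -- [1,2,4,8,16]
  print (takeWhile (< 100) powersOfTwo) -- [1,2,4,8,16,32,64]
  -- but e.g. `length powersOfTwo` would never terminate
```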

In SML and Haskell, streams exist as standard library functions. I don't 
know too much about the specifics though; I'll have to rely on others to 
fill that in.

Regards,
Jo
0
jo427 (1164)
11/4/2005 10:26:44 AM
Followup-To: comp.lang.functional

Ulrich Hobelmann <u.hobelmann@web.de> writes:

>> The point is, what would the macro expand to? It must either pack the
>> loop body in a closure and pass it to a different function depending
>> on the sequence type, or pack the iterator in an object and perform an
>> indirect call for obtaining each element, or jump between the iteration
>> and the loop body through continuations.
>
> No, each iteration is only a goto.  To find out the sequence type, the
> macro could start with checking the sequence type and then do one of
> maybe three different loop bodies,

Duplicating the loop body risks exponential code explosion for nested
loops, and it's only suitable for a small fixed number of sequence types.

I know that CL and Scheme have a fixed number of sequence types, but
this is not enough for my taste.

> or it could use virtual sequence accessors (is-last-element?, next)
> with the associated cost, of course.

This is what I said (the second variant). Getting the next element
through an indirect/virtual/generic call is as (in)efficient as
evaluating the body by calling a function.

> IMO the programmer should declare the sequence type, though, for
> clarity, so in that case you end up with just one fine compiled
> loop.

I agree with the *ability* to declare it (although you can't do that
in my Kogut), but I don't want to be forced to do it. I want to say
"what", not "how". I wan't a nice clean API, not separate variants of
iteration over plain lists, lazy lists, arrays, byte arrays, boolean
arrays etc., and all their combinations in case of serial or parallel
iteration.

In Kogut each sequence type provides a constructor function with a
uniform interface: taking any number of collections, and populating
the newly created collection with elements taken from them. For
example:
   Array()             // a new empty array
   Array [10 20 30]    // populate with these elements
   Array someByteArray // convert a byte array to a generic array
   Array arr1 arr2     // concatenate two arrays
   Array arr           // clone an array
   Array (Fill 256 0)  // initialize with 256 zeros
   Array (Between "A" "Z") // initialize with capital letters
   Array (Fill (10 - Size a) Null) a // right-justify a to 10 elements
   Array (ReadLinesGenFrom f) // initialize with lines read from a file
and the same applies to other collection types. I can't imagine
a monomorphic interface for every combination.

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
0
qrczak (1266)
11/4/2005 10:56:14 AM
Marcin 'Qrczak' Kowalczyk schrieb:
> Joachim Durchholz <jo@durchholz.org> writes:
> 
> 
>>I'm starting from the classic statically-typed OO perspective, which
>>upholds the following principles:
>>* A class defines a type.
>>* The LSP, i.e. that a subclass should define a subtype.
>>* Class boundaries and module boundaries coincide.
>>* Function calls are dispatched based on run-time type information.
> 
> I don't know a language-independent definition of a class.

Co-encapsulated definitions of data and functions.

> LSP defines subtyping on the basis of substitutability. It can be
> interpreted either as a definition of subtyping, or as a guideline
> when to declare subtypes if they are declared explicitly.

Which is essentially the same thing.

 > Subclassing
> (inheritance of implementation) is a different tool, subclasses may
> coincide with subtypes or not.

Um... I think the core of LSP is this:
If syntax lets the compiler infer that a type is a subtype of some other 
given type, then that subtype property should also hold semantically.

> I hate languages where class boundaries and module boundaries must
> coincide.

Agreed.

 > For me modules are used as namespaces, splitting code into
> files and separate compilation. (Some people believe that namespaces
> should be decoupled from other uses of modules, but for me the ability
> to selectively import and reexport names makes modules appropriate.)

I agree that modules should also be namespaces.

However, the main use of modules is decoupling (initially termed 
"information hiding", but I think "decoupling" is a better term because 
it emphasises effect instead of mechanism).

>>Well, then floating-point and integer aren't in a subtype
>>relationship, and they don't even compose.
>>
>>Integer and floating-point arithmetic are simply different
>>operations, and there should be no automatic selection anyway.
> 
> I agree that they aren't in a subtype relationships, and disagree with
> the rest. 2*x should return the number twice as big no matter whether
> it's represented as an integer or as a floating point object.

It doesn't. Integers overflow, and floats do the same, but they don't 
overflow in the same manner, or at the same points. They are simply 
incompatible.

You *can* make them compose, either by introducing ranged types (a good 
idea anyway), or by introducing unlimited-size integers and 
unlimited-precision-to-the-left-of-the-decimal-dot floats.

You can have efficient machine numbers or subtyping/composability, but 
not both.

I don't think they *should* compose. Comparing floats for equality is 
problematic, comparing integers isn't.

>>That depends on how you define the square root. Actually it's a
>>two-parameter operation: the number of which we want the square
>>root, and the precision that we want (machine epsilon or integral
>>numbers, in that case).
> 
> Most of the time I don't want to specify the precision explicitly,
> and with floating point there is little choice anyway. So my Sqrt
> takes a single argument, like in all other languages.
> 
> A variable precision floating point type (currently unimplemented)
> would receive the precision through a dynamic variable or use the
> precision of the argument; I don't know which policy is better.

I didn't mean to advocate such a function - it would be an option, but 
extra work to implement and of no clear use.
What I meant is that integer square root and floating-point square root 
are different operations. They don't have the same semantics.

>>>If they have different types, there will probably be no method of
>>>the given generic function for them, so you will get an error about
>>>a generic function not applicable to some combination of types.
>>>If they are not distinguished by types, the implementation of the
>>>operation will have to detect this and throw an exception.
>>
>>Hmm... how does the framework decide what combinations are
>>compatible and which are not?
> 
> I don't understand. It provides methods for types where the given
> operation makes sense mathematically, and doesn't try to provide
> cases where it makes no sense (like checking whether a ratio is even).

How does the framework determine what operations make sense 
mathematically? Is that predetermined? If not: if a library programmer 
adds a new numeric type, how does the system know which combinations 
make sense and which don't: by analysing the code semantics? by relying 
on specifications in the library code?

> There are two kinds of operations with respect to default
> implementations: some operations are defined by default through
> conversion to different types, and should be provided by all types
> which want their algorithm to be used (core operations); other
> operations are defined in terms of core operations, e.g. Sqr x
> is defined as x*x for types which don't provide their explicit
> definition.

I've been thinking along the same lines.

>>>They have a sufficiently incompatible behavior that the intended
>>>meaning of mixing them in the same operation is unclear. If it was
>>>done anyway, the implementor of these numbers would have to decide
>>>how to behave. But I'm not sure whether they should be considered
>>>numbers at all.
>>
>>Numbers modulo some limit are numbers, sure enough. All the usual
>>arithmetic laws hold.
> 
> It doesn't hold that 2*x==0 implies x==0.

With an odd prime modulus this still holds ;-)
But sure enough there are differences. That's the whole point of the 
example.

 > Generally they give
> different results for operations already defined for plain integers,
> so if they are considered numbers, they would have to be different
> from existing integers, i.e. "2 (modulo 5)" is not a representation
> of 2 but of an entirely different abstract thing,

Agreed.

 > while float 2.0 for example *is* a representation of 2.

I disagree.

Adding maxint to 2 and to 2.0 will give different results (overflow in 
the integer case, and an exact or rounded result in the floating-point 
case).
(Assuming that 2 and 2.0 are both implemented using machine numbers.)
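
A quick Haskell illustration of that difference, using the fixed-width
machine Int (the exact wrapped value is platform-dependent, so only its
sign is checked):

```haskell
main :: IO ()
main = do
  let m = maxBound :: Int
  -- Int arithmetic wraps around silently: the sum comes out negative.
  print (m + 2 < 0)                       -- True
  -- Double rounds instead: the sum stays a huge positive number.
  print ((fromIntegral m + 2.0 :: Double) > 0)  -- True
```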


Regards,
Jo
0
jo427 (1164)
11/5/2005 12:05:46 PM
Benjamin Franksen schrieb:
> Joachim Durchholz wrote:
> 
>>I suspect that being monadic isn't the core of Haskell's way of doing
>>IO. It's more that you stick together a list of activities, and leave
>>the dirty work of actually executing the list to the run-time system.
> 
> You say it: "stick together a list of activities". That's exactly what you
> use a monad for.

I see it just the other way round:

A monad is a way to "stick together things".
The IO monad sticks together activities. The list monad sticks together 
list items. The Maybe monad sticks together function calls. I'm pretty 
sure that the function composition operator ((.) in Haskell) is a 
monad, too.

 > And note, that that there are (data) dependencies between
> the activities, i.e. it is not just a simple list; what to do next and how
> to do it can depend on what the earlier activities resulted in.

I suspect that the true story is that an IO value specifies all the 
reactions for all possible inputs. The lazy nature of Haskell makes sure 
that only a finite part of that value is ever computed, namely the part 
that matches the inputs.

>>Yet I don't see anything about computations in the monad laws. They
>>mandate:
>>
>>* A type (the "monad type" m)
>>* An operation to create a monad from a single value (return)
>>* An operation to connect two monads (>>=)
>>* A rule that the connect operation should be associative
>>* A convenience operator that is a combination of return and >>=
>>   (>>)
>>* A convenience operator for signalling errors.
> 
> First that doesn't explain the intuition behind the formalism. Why are the
> laws and operations such and such? What is it that this formalism is an
> abstraction of?

How do you know that the intuition is indeed the most general 
interpretation of the formalism? Operation sequences aren't particularly 
general after all. The sequence-of-operations intuition doesn't work for 
lists, for example - nobody in his right mind would see lists as 
operation sequences. (Once somebody tried to explain lists as "parallel 
computations" - that may have been technically correct, but it isn't a 
good intuition about lists IMHO.)

> Second, the bind operation (>>=) does /not/ connect two monads. Instead it
> connects a monad to a function that returns a(nother) monad. See further
> below for why this distinction is important.

Right - I confused >>= and >>.

The corrected list is:

* A type (the "monad type" m)
* return :: a -> m a (creates a monad from a single value)
* (>>) :: m a -> m b -> m b (chains two monads)
* (>>=) :: m a -> (a -> m b) -> m b
   (chains a monad to a function that returns another monad;
   (>>) can be derived from it by ignoring the value passed on)
* A rule that >> should be associative
   (that's slightly imprecise: actually the rule limits >>=,
   and I *think* it amounts to making >> associative)
* fail :: String -> m a (convenience operator for signalling errors)
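
Written out as standard Haskell, the list above is the Monad class as it
appears in Haskell 98 (reproduced here for reference only - compiling it
as-is would clash with the Prelude), with the laws as comments:

```haskell
class Monad m where
  return :: a -> m a
  (>>=)  :: m a -> (a -> m b) -> m b
  (>>)   :: m a -> m b -> m b
  m >> k = m >>= \_ -> k        -- default: sequence, ignoring the value
  fail   :: String -> m a

-- The monad laws:
--   return a >>= f   ==  f a                       (left identity)
--   m >>= return     ==  m                         (right identity)
--   (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)   (associativity)
-- For actions that ignore the passed value, the third law specialises
-- to (a >> b) >> c == a >> (b >> c), i.e. (>>) is associative.
```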

>>I think people are taking the characteristics of two well-known monads
>>(IO and State) and take them to be characteristics of *all* monads. 
> 
> I conjecture that every monad (in Haskell) has a (more or less natural)
> interpretation as a computation (in the sense I explained in my last
> posting).

That's the mental model that I disagree with. I think it specialises too 
much, and needs awkward bending to encompass things like List.

>>The 
>>problem is compounded by the fact that all functions can be seen as
>>operations... but that's not particularly helpful: I don't want to see
>>everything as a computation, I want to see functions as value-to-value
>>mappings!
> 
> Yes, sure. It can lead to misunderstandings for beginners or even
> experienced people without prior exposition to FP. And nobody wants you to
> see everything always as a computation. I am merely saying that computation
> is the basic intuition behind and the standard model for monads, at least
> in the area of programming. (Don't know much about category theory.)

It's a very common model, yes. It's just that I don't think that this is 
the best model.

> And let us not forget that programming is ultimately about getting results.
> It is nice to be able to do this in a more abstract and concise way (FP).
> But a function in an FPL is /not/ the same as a function in mathematics.

It is - that's the point of referential transparency!

> The computational aspects are important to know about, even for beginners;
> otherwise their programs may (eventually) terminate but not in time for
> them to observe it actually happening ;-)

Um, well, OK, I agree that the analogy breaks down in the face of 
computationally expensive functions.
However, that doesn't influence what's the best mental model for monads IMO.

>>>Ok, now that I have explained how I see it, I finally understand what you
>>>mean by 'chaining up values associatively'. Yes, but this is not the
>>>whole picture: monadic values need to /return something (else)/ and it is
>>>this what makes them 'chainable' in an associative manner.
>>
>>No, the monad laws specifically do *not* require that there's anyway to
>>get the values back out. IO in fact takes that route, or at least so it
>>seems to me.
> 
> Ok, it was wrong (or at least misleading) the way I formulated it. What I
> meant when I said 'monadic values need to /return something (else)/' is
> that either your program or else the 'system' must somehow be able to get
> at the value(s) 'inside' the monad.
> It may be that there are some monads that only allow this to happen 'behind
> the scenes', but there must be some way, or else your monad is useless.

Indeed.
I was talking from the program's perspective. IO doesn't give anything 
back into the program (its return type is (), the one-value unit type). Of 
course, to be useful, it must have some other effects (but none that are 
observable from within the program).
Actually if monads didn't have to encompass IO, they'd probably be 
defined using a return type. It's a *very* abstract abstraction in that 
it doesn't restrict any return values (except in the associativity law, 
which - I think - had to be worded in such an awkward way because there are 
no result values that can be compared for equality).

> Why? Well, how else do you explain the bind operator? I already mentioned
> above that it does /not/ chain two monadic values. Rather it chains a
> monadic value with a /function/ that returns a monadic value.

Indeed. I meant >> (sequence).

 > And the type
> of bind states that the type of the values 'contained' in its first
> argument is the same as the type of values expected by the function. So,
> how does this function get its input (a value to be applied to)? Obviously
> the bind operator must (somehow) extract it from its first argument.

Not necessarily. Throwing away the left parameter would be an entirely 
valid possibility (and that's indeed what Maybe occasionally does).

Of course, monads usually take advantage of that type dependency. But 
that's stuff that happens *inside* the monad, not observable from the 
outside. It's the stick-together operation: the sticks have different 
forms at their ends, and you can stick only matching sticks together - 
but once stuck together, you don't see any seam from the outside 
(there's no way to get at that function parameter from the outside).
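
Maybe's bind makes this concrete; written as a plain function (named
bindMaybe here only to avoid the Prelude instance), the Nothing case
discards the supplied function without ever applying it:

```haskell
bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing   -- the function f is never applied
bindMaybe (Just x) f = f x       -- value passed on, invisibly from outside

main :: IO ()
main = do
  print (Just 4  `bindMaybe` (\x -> Just (x + 1)))        -- Just 5
  print (Nothing `bindMaybe` (\x -> Just (x + 1 :: Int))) -- Nothing
```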

> This can be made more precise using a Wadler'ian "theorems for free" kind of
> appeal to polymorhism: The bind operator doesn't know anything about the
> type of values contained in the monads. That means it can't do anything to
> produce such a value (other than bottom) out of nothing. It can chose not
> to apply its second argument to anything, but then the result must be
> trivial ('bottom' or 'return bottom'),

Or Nothing, in the case of the Maybe monad.

 > because bind is also polymorphic in
> the type of values contained in its result monad. Otherwise a non-bottom
> value is needed for the function to be applied to. The only place where
> such a value can come from is bind's first argument.
> 
> Of course this reasoning works only for monads that are implemented in
> Haskell itself.

Indeed. That's why IO is useful despite always returning ().

>>>Want to explain this to a beginner who still struggles with understanding
>>>what a loop (or recursion) is?
>>
>>The monad laws? Sure.
> 
> So how do you justify them to beginners? Remember, we are talking about
> those poor guys desparately asking you what the hell this 'compiler' thing
> means telling them their program has a 'syntax error' although everything
> 'looks right'.

I suspect the errors are more in the specifics of IO.

The monad laws (as I understand them right now) have very little substance.
That's just like the associativity law: it's a very common law, but 
looking at "a op (b op c) = (a op b) op c" won't tell many people a lot 
about its consequences (it essentially means "you can leave out 
parentheses in series of 'op' calls if 'op' is known to be 
associative"). It's good to know if an operator is associative (provided 
you know all the consequences of associativity), but other than that, 
nobody really cares about associativity.
I assume that being monadic is just the same. People moan about monads, 
their eyes glazing over because they try to understand monads and IO at 
the same time. It's just like trying to understand associativity and 
function composition at the same time - relatively easy in isolation, 
but trying to grasp several very different abstractions at the same time 
is almost impossible.

> (Yes, there may be exceptional students who can understand and appreciate
> monads in a first semester course, but I doubt you'll find many of that
> caliber.)

Indeed.
I attribute that to being taught two nontrivial concepts at the same 
time (IO and monads). The problem is, of course, that IO in Haskell is 
essentially monadic, and it must be taught relatively early because 
students need to output things to do anything useful with their programs.
Well, sort of - the curriculum could lean on the interactive top-level 
for quite a 
while. As a (maybe somewhat extreme) example, I could imagine this 
curriculum:
* Expressions.
* Functions. Equational reasoning.
* Recursion.
* Monads. Use Maybe as an example.
* Monad transformers.
* IO

>>Haskell's concoct-an-action-list approach? Sure.
>>How input and output interact in Haskell? Um... well... not so sure
>>anymore. But it should certainly be taught at some point, and the
>>earlier, the better. (Maybe there's an easier way to explain it. I'm not
>>sure enough about Haskell's IO details to set one up, but the above
>>explanations - *if they are indeed correct* - are already a big step
>>towards making monads explainable.)
> 
> They are explainable just fine, IMO.

They aren't.

Take me for an example. With 20+ years of practical experience, I'm a 
seasoned programmer. I'm one of the few people with absolute 
confidence in their knowledge of what dynamic dispatch is. I have done 
a complete analysis of diamond inheritance, to the point that I can 
immediately identify the problems in any language design. This all goes 
*far* beyond the stock bread-and-butter programmer.
Yet I still see several murky corners in monads.

What's explainable (and explained, over and over) is the *use* of monads.

But I don't find many explanations of the *concept*. I have yet to find 
a person who can confidently tell me: "such-and-so is the essence of 
monads", in the way I can say confidently: "associativity is if you can 
disregard parentheses", or "string concatenation is the most general / 
least powerful model of associativity".

Maybe it's just that I'm asking deeper questions :-)
However, if I find the answers, if anybody comes and asks me "what's a 
monad", I'll be able to give an answer - without resorting to examples 
and handwaving. I think that's a quest well worth my time :-)

 > Just not to the average beginner
> student. I remember from my time at the university that many 2nd year
> students couldn't really appreciate the 'higher' (=abstract) analysis.
> There is a reason why you start teaching the more concrete stuff first and
> only gradually abstract things.

Yup.
It's just that teaching monads and IO at the same time is probably too 
many abstractions at once.
Or at least so I assume :-)

Regards,
Jo
0
jo427 (1164)
11/5/2005 1:09:13 PM
Marcin 'Qrczak' Kowalczyk schrieb:
> A lightweight syntax for lambdas means that more control structures
> can be implemented with regular functions instead of with macros.

That might indeed be the reason why I don't feel the need for macros in 
Haskell.

Regards,
Jo
0
jo427 (1164)
11/5/2005 1:15:09 PM
Joachim Durchholz <jo@durchholz.org> writes:

>> I don't know a language-independent definition of a class.
>
> Co-encapsulated definitions of data and functions.

Functions are data. So this is any non-atomic object data structure?

>> 2*x should return the number twice as big no matter whether it's
>> represented as an integer or as a floating point object.
>
> It doesn't. Integers overflow, and floats do the same, but they don't
> overflow in the same manner, and at the same points. They are simply
> incompatible.

Integers don't overflow unless the memory is tight. What do you mean
by "incompatible"? They can be used together in the same operation,
thus they are compatible.

> You *can* make them compose, either by introducing ranged types
> (a good idea anyway), or by introducing unlimited-size integers
> and unlimited-precision-to-the-left-of-the-decimal-dot floats.

No, they can also be composed by converting integers to floats,
losing information if the integer is too big (by either returning
a float infinity or signalling an error, depending on whether
trapping arithmetic overflow is enabled).

When floats are involved, an educated programmer should expect
that the result might be inaccurate.

A few weeks ago I needed and wrote a program which calculates
the integral of the absolute value of the difference of two functions
given by line segments, and some other calculations on functions given
by line segments. Sometimes there are only a few points, and they have
well-defined rational coordinates - in this case the result can be
computed exactly, because it involves only + - * / and comparisons.
In other cases the segments approximate a smooth curve and the result
is inherently approximate, since a good approximation needs lots of
points, using rational with huge denominators is pointless and floating
point arithmetic is appropriate.

Why do you want to disallow writing a single function which computes
the result exactly or inexactly, depending on whether arguments are
exact or not? Now the main algorithm just uses the + - * / operations,
and it works both for rationals mixed with integers and floats. And
even with floats mixed with rationals and integers: the beginning and
end are forced to (0,0) and (1,1), and I didn't have to conditionally
make it (0., 0.) and (1., 1.) instead.

> I don't think they *should* compose. Comparing floats for equality
> is problematic, comparing integers isn't.

Problematic but should be allowed. One of algorithms I mentioned above
transforms one function defined by line segments to another. If the
number of segments in the input is N, the number of segments in the
output is between N and 2*N-1, depending on whether differences
between x-coordinate and y-coordinate of certain points coincide or
not. They always coincide in the result, and applying it again even
doesn't change the function, while applying it to the result of a
related algorithm does change the function but doesn't change the
number of segments.

The program repeatedly compares which difference is larger, and treats
equality as a third case, even though with floating point it's only
approximate. If it didn't, it would always double the number of
segments, and would lose the property that applying it again doesn't
change the segments, by creating unnecessary zero-length segments.
They would be technically correct but pointless, and would only slow
down further processing.

> What I meant is that integer square root and floating-point square
> root are different operations. They don't have the same semantics.

If by integer square root you mean a function which returns an
approximation of the true square root (2 -> 1.4142135623730951)
unless the square root is an integer too (4 -> 2 exactly), then
I claim that it's more useful to group it in the same function
together with pure floating point sqrt, than force programmers
to use different operations depending on the type of the argument.

If you mean a sqrt truncated to an integer and represented as an int,
then it's indeed a separate function, because it yields very different
results for the same arguments, and it's not because of rounding
errors.

With the same reasoning having 3. / 2. == 1.5 and 3 / 2 == 1 in a
dynamically typed language is idiotic (Python is currently backing out
from this), but letting 3/2 be a ratio is OK, and letting 3/2 be 1.5
is acceptable if the given language doesn't intend to support exact
rational arithmetic at all.

Hmm, I think I will make the default rational type settable, which in
particular could be set to Float for those who don't want ratios to
arise from computation on integers and floats only. It applies only
to division and raising to a negative power.

> How does the framework determine what operations make sense
> mathematically?

A human (i.e. me) implemented only those which do.

> Is that predetermined?

Technically not, i.e. if a given combination of types doesn't have
a method at all (e.g. IsEven applied to FLOAT), then the programmer
can supply such definition. But they were not supplied *because* they
don't make sense mathematically, so trying to supply them is idiotic.

You can't replace existing definitions, e.g. change the definition of
division of standard integers.

Nothing is predetermined for new types or new operations.

> If not: if a library programmer adds a new numeric type, how does
> the system know which combinations make sense and which don't: by
> analysing the code semantics? by relying on specifications in the
> library code?

The system does not know it. The programmer who supplies a new type
should know it, and implement those operations which make sense for it.

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
0
qrczak (1266)
11/5/2005 2:50:27 PM
Marcin 'Qrczak' Kowalczyk schrieb:
> Joachim Durchholz <jo@durchholz.org> writes:
> 
>>>I don't know a language-independent definition of a class.
>>
>>Co-encapsulated definitions of data and functions.
> 
> Functions are data. So this is any non-atomic object data structure?

Hey, not in OO languages. Not usually anyway.

>>>2*x should return the number twice as big no matter whether it's
>>>represented as an integer or as a floating point object.
>>
>>It doesn't. Integers overflow, and floats do the same, but they don't
>>overflow in the same manner, and at the same points. They are simply
>>incompatible.
> 
> Integers don't overflow unless the memory is tight.

Hmm... with your emphasis on efficiency, I had thought you were talking 
about machine integers.

If it's infinite-size integers, then integers and floats are even more 
different, because integers don't overflow and floats do.

There's another semantic difference: floats can lose precision in ranges 
where integers (even machine integers, in most cases) don't.

 > What do you mean
> by "incompatible"? They can be used together in the same operation,
> thus they are compatible.

They have incompatible semantics. Running the "same" arithmetic 
operations on them can give different results.

E.g. assuming floats have a, say, 10-digit mantissa, 2 * (10^12 + 1) 
isn't the same as 2 * (10.0^12 + 1.0): the floating-point computation 
will return a different result.
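
With IEEE doubles the mantissa holds about 16 decimal digits rather
than 10, so the same effect shows up at 10^16; a quick Python check
(same idea, different magnitude):

```python
exact  = 2 * (10**16 + 1)           # arbitrary-precision integers
approx = 2 * (float(10**16) + 1.0)  # the + 1.0 is lost to rounding

print(exact)                 # 20000000000000002
print(approx)                # 2e+16
print(exact == int(approx))  # False: same "operation", different result
```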

> Why do you want to disallow writing a single function which computes
> the result exactly or inexactly, depending on whether arguments are
> exact or not? Now the main algorithm just uses the + - * / operations,
> and it works both for rationals mixed with integers and floats.

Untrue. The algorithm will have entirely different error bounds for 
floats on the one side, and integers/rationals on the other. It isn't 
even the same algorithm, despite having the same encoding in your language!

 > And
> even with floats mixed with rationals and integers: the beginning and
> end are forced to (0,0) and (1,1), and I didn't have to conditionally
> make it (0., 0.) and (1., 1.) instead.
> 
> 
>>I don't think they *should* compose. Comparing floats for equality
>>is problematic, comparing integers isn't.
> 
> Problematic but should be allowed.

Maybe. That difference is just a symptom that integers and floats are 
really quite different beasts, despite often being conflated.

>>How does the framework determine what operations make sense
>>mathematically?
> 
> A human (i.e. me) implemented only those which do.
> 
>>Is that predetermined?
> 
> Technically not, i.e. if a given combination of types doesn't have
> a method at all (e.g. IsEven applied to FLOAT), then the programmer
> can supply such definition. But they were not supplied *because* they
> don't make sense mathematically, so trying to supply them is idiotic.
> 
> You can't replace existing definitions, e.g. change the definition of
> division of standard integers.
> 
> Nothing is predetermined for new types or new operations.
> 
>>If not: if a library programmer adds a new numeric type, how does
>>the system know which combinations make sense and which don't: by
>>analysing the code semantics? by relying on specifications in the
>>library code?
> 
> The system does not know it. The programmer who supplies a new type
> should know it, and implement those operations which make sense for it.

Now there's the rub: If two programmers implement new arithmetic types, 
who's responsible for writing the operators that combine these new types?

(This is the (in)famous "binary operator problem", well-known from OO, 
but it really is a general problem. Actually independently of whether 
you're doing dynamic dispatch or not.)
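
A minimal Python sketch of the problem (both types hypothetical): each
author handles his own type and punts on everything else, so the mixed
case belongs to nobody:

```python
class Modular:                       # written by library author A
    def __init__(self, n): self.n = n
    def __add__(self, other):
        if isinstance(other, Modular):
            return Modular(self.n + other.n)
        return NotImplemented        # unknown type: punt

class Interval:                      # written, independently, by author B
    def __init__(self, lo, hi): self.lo, self.hi = lo, hi
    def __add__(self, other):
        if isinstance(other, Interval):
            return Interval(self.lo + other.lo, self.hi + other.hi)
        return NotImplemented

print((Modular(1) + Modular(2)).n)   # 3: within one library, fine
try:
    Modular(1) + Interval(2, 3)      # neither author wrote this case
except TypeError as e:
    print("unsupported:", e)
```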

Regards,
Jo
0
jo427 (1164)
11/6/2005 2:54:43 PM
Andre schrieb:
> Joachim Durchholz wrote:
>  
> 
>>Do I need macros? Not really: I have all the data structures I want, and
>>the operations to transform them as needed.
> 
> Here is some nice motivation for macros within the context 
> of Scheme.  Especially section 35.5 and the DSL in chapter 36 are 
> of interest: 
> 
> http://www.cs.brown.edu/people/sk/Publications/Books/ProgLangs/

Interesting but ultimately unconvincing.

The arguments he gives are:

1) Providing cosmetics
OK, this one can't be done using HOFs (not easily). You can't clean up 
after the language designer made a mess except with macros - the only 
option is that the language designer creates a "cosmetic" language :-)

2) Introducing binding constructs
The way out here is a language that has just one class of citizen. I.e. 
if everything is a value (including functions, modules and whatnot), you 
don't need macros to create constructs in novel ways.
See the Oz language for an example of this. Everything (including 
functions and OO classes) is defined as being created at run-time; the 
compiler just precomputes a lot of them.
Most FPLs don't go that route; from what I read, I assume that type 
inference issues are preventing some of that. (I think type inference is 
a bit overvalued - I'd rather err on the side of needing a few 
additional type annotations or run-time checks than on the side of 
having a less universal language. But that's just me.)

3) Altering order of evaluation
That's the weakest argument. He essentially says that this would require 
creating a closure for everything that needs to have its evaluation 
deferred, and (rephrasing it in my words) that would be too verbose for 
practical use. I think I can agree it would be too verbose in Scheme, 
but it certainly isn't in Haskell or SML!

4) Defining data languages
He holds that macros make it easy to set up large data structures.
In Haskell or Scheme, I'd write functions to set up the data structures, 
and assume the compiler would precompute them at compile time. (In 
languages without side effects, compilers can aggressively precompute: 
replacing a function call with its result is mostly a safe operation.)
In other words, the ordinary language is already a data language as 
needed. Applying macros for that purpose is just an optimisation 
(ironically, he's warning against exactly that idea, using macros as an 
optimisation vehicle...).

5) Automata building
Um... the usage of tail call optimisation to set up efficient automata 
in an FPL doesn't seem to be very macro-specific to me, so I don't 
really see how this justifies macros...
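
For what it's worth, the technique works in any language with
functions; a Python sketch (Python doesn't actually optimise tail
calls, so this only works for short inputs, but the structure is the
point):

```python
# A two-state automaton accepting strings of even length, written as
# mutually (tail-)recursive functions -- no macros involved.
def even_state(s):
    return True if not s else odd_state(s[1:])

def odd_state(s):
    return False if not s else even_state(s[1:])

print(even_state("abab"))  # True
print(even_state("aba"))   # False
```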


In all, I don't see many reasons for macros in general.

They may be useful in Scheme. I don't know enough about Scheme to have 
any fixed opinion on the matter; the best approximation of Scheme that I 
ever used was Interlisp, which is rather dated expertise (if one would 
like to consider that as "expertise" in the first place).

However, if I were to design a language, I'd leave macros out and make 
everything first-class values instead. I'd remove an entire language 
layer in one fell swoop, without losing anything too valuable - seems 
like a good trade-off to me!

If I were to choose a language, I also wouldn't make the presence or 
absence of a good macro system a criterion. The real criteria are 
elsewhere: conciseness, learning curve, scalability, ability to formally 
encode program properties (i.e. a static type system or better), library 
availability, etc. etc. etc. (This list isn't in any particular order.)

Regards,
Jo
0
jo427 (1164)
11/6/2005 3:51:57 PM
Joachim Durchholz <jo@durchholz.org> writes:

>>>>I don't know a language-independent definition of a class.
>>>
>>>Co-encapsulated definitions of data and functions.
>> Functions are data. So this is any non-atomic object data structure?
>
> Hey, not in OO languages. Not usually anyway.

So what is the definition?

I don't know whether Kogut has classes. I don't even know whether it's
OO.

> They have incompatible semantics. Running the "same" arithmetic
> operations on them can give different results.

That makes them different, not incompatible: they can still be used
together.

>> Why do you want to disallow writing a single function which computes
>> the result exactly or inexactly, depending on whether arguments are
>> exact or not? Now the main algorithm just uses the + - * / operations,
>> and it works both for rationals mixed with integers and floats.
>
> Untrue. The algorithm will have entirely different error bounds for
> floats on the one side, and integers/rationals on the other.

So what? It's still the same algorithm. The result is computed in
the same way (modulo the choice of implementations for the particular
operations - but the structure of the choice of operations is the
same), it means the same thing mathematically (only one is exact
and the other is approximate), and it's written in the same way.

> Now there's the rub: If two programmers implement new arithmetic
> types, who's responsible for writing the operators that combine these
> new types?

The person who cares about obtaining the results of such mixed
operations represented in a specific type.

-- 
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
0
qrczak (1266)
11/6/2005 4:30:17 PM
In comp.lang.functional Joachim Durchholz <jo@durchholz.org> wrote:
> Andre schrieb:
[...]
> > Here is some nice motivation for macros within the context 
> > of Scheme.  Especially section 35.5 and the DSL in chapter 36 are 
> > of interest: 
> > 
> > http://www.cs.brown.edu/people/sk/Publications/Books/ProgLangs/

> Interesting but ultimately unconvincing.

> The arguments he gives are:
[...]
> 2) Introducing binding constructs
> The way out here is a language that has just one class of citizen. I.e. 
> if everything is a value (including functions, modules and whatnot), you 
> don't need macros to create constructs in novel ways. [...]
[...]
> However, if I were to design a language, I'd leave macros out and make 
> everything first-class values instead. I'd remove an entire language 
> layer in one fell swoop, without losing anything too valuable - seems 
> like a good trade-off to me!
[...]

I think that you are confused. I would guess that you have misunderstood
what a "binding construct" is. Basically everything in Scheme already is
first-class, but it doesn't eliminate the occasional desire to introduce
special binding constructs. For an example of a special binding construct
look at the do-notation in Haskell (specifically the subnotation "v <-
e"). Another example would be a parser generator macro in Scheme that
allows you to name right hand sides for use in semantic actions (instead
of using positional references like in Yacc). Yet another example of a
special binding construct would be list comprehensions in Haskell (again
the <-'s). You might also find the loop macro of Olin Shiver's an
interesting example of a special binding construct. Various pattern
matching notations that you find in many languages are also binding
constructs.

-Vesa Karvonen
0
11/6/2005 5:13:25 PM
In article <dkl8qu$5iq$1@online.de>,
Joachim Durchholz  <jo@durchholz.org> wrote:
>Andre schrieb:
>> Joachim Durchholz wrote:
[...]
>In all, I don't see many reasons for macros in general.
>
>They may be useful in Scheme. I don't know enough about Scheme to have 
>any fixed opinion on the matter; the best approximation of Scheme that I 
>ever used was Interlisp, which is rather dated expertise (if one would 
>like to consider that as "expertise" in the first place).
>
>However, if I were to design a language, I'd leave macros out and make 
>everything first-class values instead. I'd remove an entire language 
>layer in one fell swoop, without losing anything too valuable - seems 
>like a good trade-off to me!

I'd be interested in seeing the concrete details of such a language,
 so we can judge whether this goal can be satisfied, including whether
 everyone agrees that the trade-off is at the right level.

Can you answer me something specific:

  is there such a language already (and then which one), or

  are we still waiting for it

If the latter, then I will simply leave it at this: In the meantime,
 I find Scheme with its macros worthwhile (not to the exclusion of
 some other languages).

>If I were to choose a language, I also wouldn't make the presence or 
>absence of a good macro system a criterion. The real criteria are 
>elsewhere:

Good. In other words, what I suggested in my first post to this thread:
 macros aren't as inherently bad as you claimed, and don't automatically
 preclude the "real criteria" from being met. You're welcome to dispute
 this, but if so I'd appreciate a concrete example of all this "code that
 nobody but the original macro author understands" that you mentioned there.
 If you don't have much expertise in Lisp and Scheme, I wonder where you've
 seen all this code.

If your point now has weakened to you not finding a use for them then I'll
 let it stand there. I too am interested in whether the various uses can
 be categorized and captured by a small number of `better' mechanisms, but
 I find the discussion in this thread too general and divergent for my
 taste, so won't pursue it here.

> conciseness, learning curve, scalability, ability to formally 
>encode program properties (i.e. a static type system or better), library 
>availability, etc. etc. etc. (This list isn't in any particular order.)

Sure, everyone wants that. It goes without saying.

Gary

PS: I'm tempted to trim the followup to comp.lang.scheme since this is
 about macros, but your lack of expertise in Scheme makes me reluctant.
 On the other hand, if I trim it to comp.lang.functional then I risk
 my response being interpreted as pushing macros on other functional
 languages. Your specific/concrete answers to my above questions will
 hopefully resolve this.


0
gfb (30)
11/6/2005 5:52:14 PM
>>>Joachim Durchholz wrote:

>>In all, I don't see many reasons for macros in general.
>>
>>They may be useful in Scheme. I don't know enough about Scheme to have 
>>any fixed opinion on the matter; the best approximation of Scheme that I 
>>ever used was Interlisp, which is rather dated expertise (if one would 
>>like to consider that as "expertise" in the first place).
>>
>>However, if I were to design a language, I'd leave macros out and make 
>>everything first-class values instead. I'd remove an entire language 
>>layer in one fell swoop, without losing anything too valuable - seems 
>>like a good trade-off to me!
....
>>If I were to choose a language, I also wouldn't make the presence or 
>>absence of a good macro system a criterion. The real criteria are 
>>elsewhere:

This is an understandable and common perspective, but there's a sense in 
which it ignores reality.  C and C++ have macros, albeit via a rather 
unsatisfactory system.  OCaml has a macro system in the form of Camlp4. 
   As for Haskell, I'll quote: "Existing large Haskell systems make 
extensive use of the C preprocessor, CPP.  Such use is 
problematic..."[*]  Macro systems like Template Haskell are addressing 
the latter problem.  Language like Java, which don't have macros in the 
language, rely heavily on various kinds of external code generation 
systems to achieve macro functionality in real projects.

All of these macro-like systems exist to serve a common requirement that 
many, perhaps most languages experience.  So while Scheme 
implementations may be unusual in the degree to which they integrate 
powerful macro support, it's wrong to conclude from this that other 
languages are somehow succeeding without macros, or some workaround for 
the lack of macros.  Most languages, in fact, are not.

The claim that eliminating macros can be done without "losing anything 
too valuable" remains to be demonstrated.

Anton

[*] http://lambda-the-ultimate.org/classic/message2463.html
0
anton58 (1240)
11/6/2005 7:59:49 PM
Gary Baumgartner schrieb:
> In article <dkl8qu$5iq$1@online.de>,
> Joachim Durchholz  <jo@durchholz.org> wrote:
> 
>>However, if I were to design a language, I'd leave macros out and make 
>>everything first-class values instead. I'd remove an entire language 
>>layer in one fell swoop, without losing anything too valuable - seems 
>>like a good trade-off to me!
> 
> I'd be interested in seeing the concrete details of such a language,
>  so we can judge whether this goal can be satisfied, including whether
>  everyone agrees that the trade-off is at the right level.
> 
> Can you answer me something specific:
> 
>   is there such a language already (and then which one), or

Haskell. *ML.

These languages do have their limits. I agree that the presence of 
special forms in Haskell (such as 'do', or list comprehension) shows 
that the language could have been designed smaller if there were a macro 
system.
OTOH one could argue that adding a good, complete macro system would 
have made the language far larger than these limited syntactic 
extensions do.
I'm undecided what's actually the case :-)

OT3H, yes I hope that some future language does it even better. I see 
that modules aren't first-class objects in SML (except for some 
extensions), and the module language enlarges the language massively anyway.

>>If I were to choose a language, I also wouldn't make the presence or 
>>absence of a good macro system a criterion. The real criteria are 
>>elsewhere:
> 
> Good. In other words, what I suggested in my first post to this thread:
>  macros aren't as inherently bad as you claimed, and don't automatically
>  preclude the "real criteria" from being met. You're welcome to dispute
>  this, but if so I'd appreciate a concrete example of all this "code that
>  nobody but the original macro author understands" that you mentioned there.
>  If you don't have much expertise in Lisp and Scheme, I wonder where you've
>  seen all this code.

Mostly from vivid imagination ;-)

Oh, and from reading Common Lisp language references. There's a whole 
lot of strange and wonderful gimmicks in the language, and I didn't like 
the look of that at all. I took macros to be largely responsible for 
that - but other things may have been involved, too.

Actually it's even a bit more concrete. I have been looking through 
various systems, including various Smalltalks. In general, understanding 
a system was more difficult if the system was more flexible. E.g. 
typical Smalltalk distributions are too far on the "too flexible" side 
of the line. Since any subroutine call could do essentially anything, 
there was little hope of concentrating on one aspect (say, a bunch of 
routines) of the system and being reasonably confident that the conclusions 
drawn are correct.
I agree that macros can be used in a sensible manner. I also agree that 
they may be a good thing to have in a Lisp system (not because I agree 
with the proposition, but because I don't know enough about contemporary 
Lisps to validate or dispute such a claim).
However, I think macros do have a tendency to disrupt the guarantees 
that a language gives. That's not a problem for languages that don't 
give you much in the way of guarantees (such as Lisp, which doesn't use 
static typing or other static analysis techniques to validate programs), 
it's more of a problem for languages that do give guarantees.


IOW what I meant to say above was this:
Macros do have advantages and disadvantages.
The advantages can range from "overwhelming" (for a language that's 
otherwise lacking, whether by intent or by bad design) to "unneeded" 
(for an otherwise excellently designed language).
Likewise, the disadvantages similarly range from "abysmal" (macro 
systems that disrupt language guarantees) to "largely irrelevant".
So it's not the decision for or against a macro system alone that makes 
a point for or against a language; it's the way how the macros work and 
how they are integrated into the language.

Oh, and if I were to design a language, I'd do it without macros anyway. 
I feel it's possible to live without them - assuming that really 
everything is a first-class value.
But, of course, that's just my personal idea, and I'd have to really 
design a language to prove or disprove that point :-)

> If your point now has weakened to you not finding a use for them then I'll
>  let it stand there. I too am interested in whether the various uses can
>  be categorized and captured by a small number of `better' mechanisms, but
>  I find the discussion in this thread too general and divergent for my
>  taste, so won't pursue it here.

OK. I can understand that.

> PS: I'm tempted to trim the followup to comp.lang.scheme since this is
>  about macros, but your lack of expertise in Scheme makes me reluctant.
>  On the other hand, if I trim it to comp.lang.functional then I risk
>  my response being interpreted as pushing macros on other functional
>  languages. Your specific/concrete answers to my above questions will
>  hopefully resolve this.

I think this is interesting to both newsgroups. I've seen occasional 
responses from participants from both camps.

Regards,
Jo
0
jo427 (1164)
11/6/2005 9:37:20 PM
Anton van Straaten wrote:

> All of these macro-like systems exist to serve a common requirement that 
> many, perhaps most languages experience.  So while Scheme 
> implementations may be unusual in the degree to which they integrate 
> powerful macro support, it's wrong to conclude from this that other 
> languages are somehow succeeding without macros, or some workaround for 
> the lack of macros.  Most languages, in fact, are not.
> 
> The claim that eliminating macros can be done without "losing anything 
> too valuable" remains to be demonstrated.

This is true.  But I believe that it is possible.  I've
even got some working code that I'm trying to fix, debug,
document, extend, and get into a releasable state.

It's a lisp dialect which has different function-call
semantics (definitely not a Common Lisp or Scheme).  The
expression in the function position is evaluated, and the
arguments are packaged up as promise-like objects and passed
to the function.  Each package contains both the source
form and the environment in which the source form is to be
evaluated; and the function may evaluate them (once or many
times) or it may unpack them, use the source forms to construct
modified expressions, and then evaluate *those* expressions
in the argument's environment.  Or whatever.  Basically, in
this dialect, functions are both first class and first order.

In this "omega lisp" dialect, there is no semantic distinction
between functions and macros, although at least for now
macro-like uses are considerably less efficient.  But you
can also store them in variables or use them as arguments
to higher-order functions, which makes it strictly
more expressive than other Lisps.

I started several years ago, trying to create a better way
to define syntax usable in separately compiled modules, and
I've been through several incarnations and refinements along
the way... mu functions, fexprs, old-style dynamic macros,
etc...  and all had problems I wasn't satisfied with.  In
particular, how to separately compile modules without knowing
whether a particular name from another module referred to a
function or to syntax, and what a "runtime macro" ought to do
about/with the environment from which it was called, how to
handle runtime importing of modules with their own syntax
definitions, etc.  Ultimately, I developed a call frame that
could be used with syntax *or* functions so that the compiler
didn't have to know ... and then realized that my functions
were now strictly redundant because my syntax definitions
were a strict superset of function definitions.  That's one
of those realizations where you throw out all the code you've
written, cackle gleefully, and start over with a better
design. It's amazing how much a breakthrough can look like
a setback in its effect on your code base.

Anyway, now I'm trying to make it more useful so it's a
full and usable language before I release it.  The difference
between a semantic testbed and a usable general language that's
reasonably integrated with its environment is like the
difference between a toy pedal car and a Porsche - I get a feel
for the enormity of the project the more I work on it and
sometimes get discouraged.

There's a lot of language design still to do.

				Bear







0
bear (1219)
11/7/2005 6:01:09 AM
Andre <andre@het.brown.edu> wrote:
> Dirk Thierbach wrote:
>> 
>> Andre <andre@het.brown.edu> wrote:
>> > Because the HOF approach exposes internal implementation
>> > details that do not belong to the abstraction.

>> I don't follow that. In what way does the HOF approach expose the
>> internal implementation? By abusing side-effects? Or what idea is
>> behind this? Example?

> I just meant that there are many ways of implementing a do-until
> construct,

If you keep it purely functional, then I don't think so. I'd say
there's really just one canonical way to do an until-loop. The 
only thing that changes is presentation and syntax.

> many of which do not require the thunks needed by the one HOF
> example (which imply the use of side effects, as you observe),

Maybe there's a misunderstanding here. It's the *idea* of an
until-loop which implies the use of side effects. HOFs and
side-effects are orthogonal concepts, they don't imply each other.
(The trick one has to use in a strict language like Lisp to defer
evaluation doesn't really count as "HOF" in my book. In a lazy
language, the situation is different in the first place.)
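
For concreteness, the thunk trick in a strict language looks something
like this Python sketch (`do_until` is a hypothetical name; an
until-loop only makes sense with side effects, hence the mutable cell):

```python
def do_until(body, cond):
    """body and cond are thunks: zero-argument callables whose
    evaluation the HOF, not the caller, controls."""
    while True:
        body()           # force the body thunk
        if cond():       # force the condition thunk
            return

cell = [0]
do_until(lambda: cell.__setitem__(0, cell[0] + 1),
         lambda: cell[0] >= 3)
print(cell[0])   # 3
```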

> or the monadic types used in the other HOF example.

I chose the Haskell example with monads because if you want
side-effects, then in Haskell you have no other choice but to use
monads. So, in Haskell, there is *no* other way, even with macros,
to implement an until-loop with side effects, but to use monads.

So the question to use monads or not depends in the first place on how
your language handles side effects, and not if you want to use macros
or HOFs.

Which is why I said that this is maybe not a particularly good example
to discuss the whole issue.

> A macro can be used to hide this detail, and can be justified by the 
> same arguments used for any other kind of abstraction.  

In Lisp, yes. In many other functional languages, no.

And in the end, as other people have observed, I'd argue it is actually
*better* to use one universal way to parametrize abstractions.

One thing that I really hate about many macros I have seen is the
unsystematic, ad-hoc way in which they treat their arguments. This
may all be very nice for a quick hack, but in the long run, I have
to remember all of those tricks and exceptions, I have to figure them
out in other people's code, and so on. If there is just one simple,
generally accepted way to do this, the burden on my poor brain is much
less :-) And I gladly pay a few nanoseconds execution time penalty
for all the time it costs me to look up the details again when 
programming, or the mistakes I make by misunderstanding them.

Additionally, being able to cleanly type the HOF makes debugging it
so much easier.

> These include:

> - Modularity: being able to change the implementation 
>  from e.g., thunks to monads without having to change all the use sites. 

As I said, that's something you just cannot do. It's a very
fundamental decision whether you handle side-effects implicitely by
pretending that one understands the order of evaluation (which can be
wrong sometimes even in strict languages; I have been surprised by
that once), or whether you have to make that order explicit by using
monads. If you want to change from one way to the other, you have to make
that change everywhere, regardless whether you use macros, or not.

(If it's not obvious to you why monads imply an ordering on
side effects, someone in this NG has written a nice introduction of
monads for Lispers some time ago. Google should be able to find it.)

> - Optimization: The macro might construct thunks behind the 
>  scenes, but does not have to.  A syntax that is agnostic with 
>  respect to this detail is arguably better than a HOF that isn't,
>  for a looping construct where performance might be important.  

Yes, one of the things that macros give you is *control* over
run-time vs. compile-time evaluation. Sometimes that is important.

But that doesn't expose internal implementation. Additionally, it
doesn't affect the original argument, because it's "only" an
optimization issue. And it's not so bad if you have a compiler
(like GHC) which automatically does compile-time evaluation. (In this
case, it would very probably inline the calls, anyway, so the result
is not different from using macros).

The initial argument did say something like "Macros are the best
things since sliced bread, they make a language sooo powerful, no
language without them is as good" (again, I am exaggerating :-)

And that's just not true (in the exaggerated form). Macros are
sometimes nice to have, but you can go a long way without them. And
while there's a number of reasons why it's more convenient to use
macros in Lisp, I have the impression that this tends to make
people a bit unnecessarily "preachy" about them, while at the same
time blinding them to alternatives. But maybe that's just my 
impression :-)

- Dirk



0
dthierbach2 (260)
11/7/2005 9:57:40 AM
On 2005-11-06, Vesa Karvonen <vesa.karvonen@cs.helsinki.fi> wrote:
> For an example of a special binding construct look at the do-notation
> in Haskell (specifically the subnotation "v <- e"). Another example
> would be a parser generator macro in Scheme that allows you to name
> right hand sides for use in semantic actions (instead of using
> positional references like in Yacc).

I have to chime in that I wrote a toy port of Parsec using exactly this
idea:

Haskell: do { p1; x <- p2; p3; return x }
becomes Lisp: (p& p1 (<- x p2) p3 (p^ x))

No doubt Haskell people feel that the builtin "do" notation cleanly
eliminates the need for a whole class of binding, sequencing,
backtracking, and control-flow macros, and they like having it
standardized. On the other hand, it's nice that you can get the same
effect in Lisp even though the language designers had probably never
heard of a monad. Personally I think Lisp macros are like a magic
wormhole that puts you just one step away from tons of widely-separated
language features... but wormholes are dangerous and some people would
prefer to have a fast, efficient public transportation system instead.
0
adrian-news (121)
11/7/2005 3:21:48 PM
Adrian Kubala <adrian-news@sixfingeredman.net> wrote:

> No doubt Haskell people feel that the builtin "do" notation cleanly
> eliminates the need for a whole class of binding, sequencing,
> backtracking, and control-flow macros, 

I don't think so. The "do" notation is just thin syntactic sugar.  For
short monadic expressions, I actually tend to not use it. E.g., in the
until-loop code I gave I don't use it in the implementation, only in the
usage example.

(For those who don't already know it, an expression like

  do { x <- f; y <- g; return z }

is equivalent to
  
  f >>= \x -> g >>= \y -> return z

Not much harder to read once you get used to it. And certainly not harder
to get used to than s-expressions :-) )
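
The same desugaring can be mimicked with plain closures in any
language; here is a much-simplified Maybe-style sketch in Python
(None plays Nothing, and "return" is just the value itself):

```python
def bind(m, f):
    # Maybe-style bind: None short-circuits, anything else is fed to f.
    return None if m is None else f(m)

# do { x <- 3; y <- 4; return (x + y) } desugars to nested closures:
result = bind(3, lambda x:
         bind(4, lambda y:
         x + y))
print(result)                          # 7
print(bind(None, lambda x: x + 1))     # None
```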

For longer monadic expressions, it saves a bit of typing, so it comes
handy, but that's it.

I recently found myself using a parser monad in Ocaml. I implemented
this just with infix functions, though I could have easily used the
Camlp4 preprocessor.

> and they like having it standardized. 

I think one of the arguments for a special "do" notation was to
make it easier for imperative programmers to use monads. Syntax 
"prettyfications" are certainly a good usage for macros and a 
preprocessor, but it's not essential, or "magic".

> On the other hand, it's nice that you can get the same effect in
> Lisp even though the language designers had probably never heard of
> a monad.

And you can get the same effect in Ocaml, which doesn't have native
monads, without macros. And probably in other languages as well.

> Personally I think Lisp macros are like a magic wormhole that puts
> you just one step away from tons of widely-separated language
> features... but wormholes are dangerous and some people would prefer
> to have a fast, efficient public transportation system instead.

That starts to sound again like advertising :-)

I think everyone can agree that it is sometimes nice to have macros,
but why do Lispers always try to sell them as such a great thing?
It's this sales pitch that probably puts many people off. 

- Dirk
0
dthierbach2 (260)
11/7/2005 5:29:01 PM
In article <F4Cbf.2416$te3.39120@typhoon.sonic.net>,
Ray Dillinger  <bear@sonic.net> wrote:
>Anton van Straaten wrote:
>
>> All of these macro-like systems exist to serve a common requirement that 
>> many, perhaps most languages experience.  So while Scheme 
>> implementations may be unusual in the degree to which they integrate 
>> powerful macro support, it's wrong to conclude from this that other 
>> languages are somehow succeeding without macros, or some workaround for 
>> the lack of macros.  Most languages, in fact, are not.
>> 
>> The claim that eliminating macros can be done without "losing anything 
>> too valuable" remains to be demonstrated.
>
>This is true.  But I believe that it is possible.  I've
>even got some working code that I'm trying to fix, debug,
>document, extend, and get into a releasable state.
>
>It's a lisp dialect which has different function-call
>semantics (definitely not a Common Lisp or Scheme).  The
>expression in the function position is evaluated, and the
>arguments are packaged up as promise-like objects and passed
>to the function.  Each package contains both the source
>form and the environment in which the source form is to be
>evaluated; and the function may evaluate them (once or many
>times) or it may unpack them, use the source forms to construct
>modified expressions, and then evaluate *those* expressions
>in the argument's environment.  Or whatever.  Basically, in
>this dialect, functions are both first class and first order.

You're doing exactly what I've meant to do. I got excited when
 you brought it up earlier this year in another thread.
 The discussion unfortunately only got as far as someone asking
 about (map lambda '(x) '(y)) and whether it ends up depending
 on the particular implementation of map.
 
[...]
>Anyway, now I'm trying to make it more useful so it's a
>full and usable language before I release it.  The difference
>between a semantic testbed and a usable general language that's
>reasonably integrated with its environment is like the
>difference between a toy pedal car and a porsche - I get a feel
>for the enormity of the project the more I work on it and
>sometimes get discouraged.
[...]

I'd be interested in just the specification, or even how the example
 above is handled (which would probably give me a feel for where
 you're going with the language).

Gary
0
gfb (30)
11/7/2005 6:51:42 PM
Dirk Thierbach wrote:

> If you keep it purely functional, then I don't think so. I'd say
> there is really just one canonical way to do an until-loop. The
> only thing that changes is presentation and syntax.

Let's be purely functional and take Clean, for example.  There it
might be most natural to use uniqueness types.  I could change the 
implementation - perhaps making it easier to port to Haskell - by 
wrapping things up in a monad, in which case the do-until interface 
would be different, even though there may be a canonical underlying
isomorphism.  A macro would allow me to make this change of implementation
without needing to change all the use sites throughout
my code base.  

By the way, why indeed does Haskell have 
the "do" notation?  Isn't "do" a macro, after all?  
Wouldn't it have been nice not to have had to wait for a new version 
of each of the compilers to have access to the "do" notation when 
it was initially introduced?  Is "do" going to be the only such 
notation that will ever be useful?  What if you come up with
a wonderful new DSL analogous to "do" for, say, arrows, that made 
thinking about them so much easier, and wanted to use it in writing 
and portably sharing programs?   

> Maybe there's a misunderstanding here. It's the *idea* of an
> until-loop which implies the use of side effects. HOFs and
> side-effects are orthogonal concepts, they don't imply each other.

My statement was that the thunk implementation required side 
effects.  I am aware that a monadic implementation does not.  

> I chose the Haskell example with monads because if you want
> side-effects, then in Haskell you have no other choice but to use
> monads. So, in Haskell, there is *no* other way, even with macros,
> to implement an until-loop with side effects, but to use monads.

That seems wrong.  You could explicitly pass a store as an extra 
argument, to give just one alternative.  

> And in the end, as other people have observed, I'd argue it is actually
> *better* to use one universal way to parametrize abstractions.

So which universal way is it going to be?  The monadic one, the store-
passing one, the CPS one ... ;-)

> One thing that I really hate about many macros I have seen is the
> unsystematic, ad-hoc way in which they treat their arguments. 

This would indeed be bad if it were true.  However, macros do not have 
arguments.  Asserting that they do is like asserting that the phrase
"i <- readIORef v" is an argument of "do" in the following:

  (do { i <- readIORef v; return (i == 5) })

However, I do not think many Haskellers would agree with such an assertion.

> (If it's not obvious to you why monads imply an ordering on
> side effects, someone in this NG has written a nice introduction of
> monads for Lispers some time ago. Google should be able to find it.)

No need.  I wrote one of those introductions ;-)
 
> > - Optimization: The macro might construct thunks behind the
> >  scenes, but does not have to.  A syntax that is agnostic with
> >  respect to this detail is arguably better than a HOF that isn't,
> >  for a looping construct where performance might be important.
> 
> Yes, one of the things that macros give you is *control* over
> run-time vs. compile-time evaluation. Sometimes that is important.

My point was not that the macro gave you that control.  It was that
the HOF explicitly committed you to extra, unnecessary steps in the 
evaluation model that might indeed be optimized away, but also 
might not.  The macro does not commit you to this.  
The fact that the idea of inlining optimizations even occurs to 
us at the use site comprises a conceptual barrier, however slight,
that has nothing to do with the actual semantics of the loop we 
are trying to write.  

> Macros are
> sometimes nice to have, but you can go a long way without them. And
> while there's a number of reasons why it's more convenient to use
> macros in Lisp, I have the impression that this tends to make
> people a bit unnecessarily "preachy" about them, while at the same
> time blinding them to alternatives. But maybe that's just my
> impression :-)

I agree that macros can be, and are, abused, as are HOFs and everything
else.  But stating, as some on this thread have done, that syntactic
abstraction is useless, when such a statement is made from lack of
understanding and actual experience with syntactic abstraction, is 
as ignorant as the oft-repeated claims in some circles that HOF-based
abstraction is useless.  

Cheers
Andre
0
andre9567 (120)
11/7/2005 7:00:50 PM
On 2005-11-07, Dirk Thierbach <dthierbach@usenet.arcornews.de> wrote:
> Adrian Kubala <adrian-news@sixfingeredman.net> wrote:
>> No doubt Haskell people feel that the builtin "do" notation cleanly
>> eliminates the need for a whole class of binding, sequencing,
>> backtracking, and control-flow macros, 
>
> I don't think so. The "do" notation is just thin syntactic sugar.

Yes, I should have clarified; they allow you to implement these macros
as HOFs without the syntax of HOFs that some people find ugly and/or
annoying. And it's really not the do notation itself, it's the design
pattern of monads which the do notation happens to make very
elegant-looking.

>> Personally I think Lisp macros are like a magic wormhole that puts
>> you just one step away from tons of widely-separated language
>> features... but wormholes are dangerous and some people would prefer
>> to have a fast, efficient public transportation system instead.
>
> That starts to sound again like advertising :-)
> I think everyone can agree that it is sometimes nice to have macros,
> but why do Lispers always try to sell them as such a great thing?
> It's this sales pitch that probably puts many people off. 

Maybe it's because we miss them so much when we have to use other
languages? What I meant by my analogy above was that people that dislike
macros seem to believe it would be better if we were all using the same
limited, well-designed set of syntactic constructs from the beginning,
and I agree. I just haven't seen that perfect syntax yet.
0
adrian-news (121)
11/7/2005 7:29:21 PM
Dirk Thierbach wrote:
 
> I don't think so. The "do" notation is just thin syntactic sugar.  For
> short monadic expressions, I actually tend to not use it.

> (For those who don't already know it, an expression like
> 
>   do { x <- f; y <- g; return z }
> 
> is equivalent to
> 
>   f >>= \x -> g >>= \y -> return z
> 
> Not much harder to read once you get used to it. And certainly not harder
> to get used to than s-expressions :-) )

This would have been more convincing if >>= and >> were not
themselves syntactic sugar :-)

Andre
0
andre9567 (120)
11/7/2005 7:34:32 PM
Andre wrote:
> Dirk Thierbach wrote:
> > (For those who don't already know it, an expression like
> >
> >   do { x <- f; y <- g; return z }
> >
> > is equivalent to
> >
> >   f >>= \x -> g >>= \y -> return z
>
> This would have been more convincing if >>= and >> were not
> themselves syntactic sugar :-)
>

    Actually, there's nothing special about >>= and >> in Haskell.
They are ordinary user definable infix functions...

import Prelude hiding ((>>=),(>>))

main = do print (4 >>= 5)
          print (2 >>  3)

a >>= b = a + b
x >>  y = x * y

0
11/7/2005 9:27:24 PM
Andre <andre@het.brown.edu> wrote:
> This would have been more convincing if >>= and >> were not
> themselves syntactic sugar :-)

But they are not syntactic sugar -- just ordinary infix functions.
As I said in this thread some time before, infix functions can help
a lot to make syntax prettier without using macros.
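A small illustration of that point - an ordinary user-defined infix function
giving pipeline-style syntax with no macros involved (the operator name `|>`
is my own choice for this sketch; Data.Function's `&` is similar):

```haskell
infixl 1 |>

-- Reverse application: feed a value through a chain of functions.
(|>) :: a -> (a -> b) -> b
x |> f = f x

main :: IO ()
main = print ([1, 2, 3] |> map (* 2) |> sum)  -- prints 12
```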

- Dirk

0
dthierbach2 (260)
11/7/2005 9:57:50 PM
Adrian Kubala <adrian-news@sixfingeredman.net> wrote:

>>> Personally I think Lisp macros are like a magic wormhole that puts
>>> you just one step away from tons of widely-separated language
>>> features... but wormholes are dangerous and some people would prefer
>>> to have a fast, efficient public transportation system instead.

>> That starts to sound again like advertising :-)
>> I think everyone can agree that it is sometimes nice to have macros,
>> but why do Lispers always try to sell them as such a great thing?
>> It's this sales pitch that probably puts many people off. 

> Maybe it's because we miss them so much when we have to use other
> languages? 

Maybe you would miss them less if more people would be more familiar
with the alternatives. 

> What I meant by my analogy above was that people that dislike
> macros 

I would be careful with "dislike". I certainly don't dislike macros.
Like everything else, macros are a tool. You use the right tool for
the job. Which tool is right depends a lot on the circumstances.  In
Lisp, macros are very convenient in many circumstances. In other
languages, other tools (like HOFs) are often more convenient.

> seem to believe it would be better if we were all using the same
> limited, well-designed set of syntactic constructs from the beginning,

I am not sure who does believe that. Having an extensible syntax is
nice. A preprocessor is the ultimate tool to extend the syntax (which
is why most languages have one. And even if they didn't, it would be
easy to program one). 

Nevertheless, in many cases one can go a long way with a different
approach, using infix functions, HOFs, etc. These tools are more
restricted than macros, but that makes their usage more uniform, while
they are (in many, but not all cases) equally powerful.

> and I agree. I just haven't seen that perfect syntax yet.

I don't think there is anything like perfect syntax. And I don't
think it's worth trying to create one. OTOH, I don't feel the need
to meddle with the syntax so much that nobody can read my programs
besides me. A sufficiently rich set of common syntactic constructs
is all that is needed in most cases. 

- Dirk

0
dthierbach2 (260)
11/7/2005 10:08:16 PM
Andre schrieb:
> 
> By the way, why indeed does Haskell have 
> the "do" notation?  Isn't "do" a macro, after all?  

Yes.

> Wouldn't it have been nice not to have had to wait for a new version 
> of each of the compilers to have access to the "do" notation when 
> it was initially introduced?

Well, it's the only macro that the Haskellers ever found worth adding to 
the language. Inventing a macro sublanguage just to define a single 
macro might have seemed overkill to them :-)

(There's a bit of white lying involved here - (a) I'm not really sure 
that "do" is the only syntactic sugar in Haskell, (b) I'm pretty sure 
that the Haskell people *first* decided against using a macro system, 
and *then* found reasons to introduce "do".)

 > Is "do" going to be the only such
> notation that will ever be useful?

Seems so. I don't see Haskellers clamoring for additional syntactic sugar.
I'm not even sure that a few well-designed HOFs couldn't replace the 
"do" notation. As somebody else noted, "do" isn't *that* useful anyway.

 > What if you come up with
> a wonderful new DSL analogous to "do" for, say, arrows, that made 
> thinking about them so much easier, and wanted to use it in writing 
> and portably sharing programs?   

Personally, I think that "do" more obscures than eases thinking about 
monads. It pretends a sequencing that isn't there - at least not in the 
way that the code indicates. It isn't even useful for all monads - 
nobody uses the "do" notation for lists, for example.

I think the FPL world is still recovering from the shock that the 
introduction of category theory was. People are *very* reluctant to try 
new concepts (such as Arrows); they haven't fully digested monads yet.

Regards,
Jo
0
jo427 (1164)
11/7/2005 10:24:03 PM
Andre <andre@het.brown.edu> wrote:
> Dirk Thierbach wrote:

>> If you keep it purely functional, then I don't think so. I'd say
>> there is really just one canonical way to do an until-loop. The
>> only thing that changes is presentation and syntax.

> Let's be purely functional and take Clean, for example.  There it
> might be most natural to use uniqueness types.  I could change the
> implementation - perhaps making it easier to port to Haskell - by
> wrapping things up in a monad, in which case the do-until interface
> would be different, even though there may be a canonical underlying
> isomorphism.  A macro would allow me to make this change of
> implementation without needing to change all the use sites
> throughout my code base.

I still don't think that could be done. Can you give an example
how a macro would translate the same expression either to a
uniqueness-type or a monadic implementation? Without changing the
call sites, and without having the monadic information at the
call sites in the first place?

(Interpreting a monad in terms of uniqueness types is trivial, of
course, and you don't need macros for that -- an appropriate typeclass
works fine).

> By the way, why indeed does Haskell have the "do" notation?

As I said, the argument seemed to have been that it makes it easier
for imperative programmers to understand the notation (in the same way
list comprehension makes it easier for mathematicians to write some
stuff).

Maybe someone who actually was involved when that was decided can
comment.

> Isn't "do" a macro, after all?  

It's thin syntactic sugar, and AFAIK it was debated if it was really
necessary.

> Wouldn't it have been nice not to have had to wait for a new version 
> of each of the compilers to have access to the "do" notation when 
> it was initially introduced?  

The point is not that syntactic sugar, or a complete macro system, is
not nice to have. The point is that macro advocates frequently claim
that macros are so insanely powerful that nothing can compete with
them. For many cases, that's just not true. For some cases, macros are
indeed great, and the only option -- but it really depends on your
application if you are going to hit such a case. So far, I have felt
the need to use macros only once, and that was with the object part of
OCaml, where HOFs cannot be used.

> What if you come up with a wonderful new DSL analogous to "do" for,
> say, arrows, that made thinking about them so much easier, and
> wanted to use it in writing and portably sharing programs?

You can implement Arrows just fine with the available languages
constructs (that's what the library does). If you think syntactic
sugar is important, then you (surprise!) just use a simple preprocessor.

Again, the point is that these kinds of usage are very superficial,
and don't contain a lot of "power". They are nice to have, but
are really only icing on the cake. The substance is elsewhere.

> My statement was that the thunk implementation required side 
> effects.  

Then maybe I don't understand the statement. Assuming that by
"thunk implementation", you mean the wrapping of some expression
inside a lambda, why does that *require* side effects? I'd rather
say that it "enables" side-effects by postponing the evaluation,
converting the strict evaluation order into a lazy one.

But maybe this point is not very important for this discussion.

> I am aware that a monadic implementation does not.  

The monadic implementation basically uses the same trick: You wrap
everything into a function that is executed at the appropriate time.
Hence, monads also work under strict evaluation.

>> I chose the Haskell example with monads because if you want
>> side-effects, then in Haskell you have no other choice but to use
>> monads. So, in Haskell, there is *no* other way, even with macros,
>> to implement an until-loop with side effects, but to use monads.

> That seems wrong.  You could explicitly pass a store as an extra 
> argument, to give just one alternative.  

Assuming that by store you mean a state expression, then you wouldn't
have real side effects, but just pass around the state in a purely
functional way, as in the until-function from the Prelude.
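The Prelude's `until` is exactly such a store-passing loop: the "store" is
just an ordinary argument threaded through the recursion. A minimal
re-implementation, for illustration:

```haskell
-- Same signature as Prelude.until; the state x is passed explicitly,
-- so no side effects and no monad are involved.
untilLoop :: (a -> Bool) -> (a -> a) -> a -> a
untilLoop p f x
  | p x       = x
  | otherwise = untilLoop p f (f x)

main :: IO ()
main = print (untilLoop (> 100) (* 2) 1)  -- prints 128
```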

Maybe an example to clarify? In Haskell, it's just not possible to use
side effects to assign to "variables" (i.e. references) outside a
monad.

>> One thing that I really hate about many macros I have seen is the
>> unsystematic, ad-hoc way in which they treat their arguments. 

> This would indeed be bad if it were true.  However, macros do not have 
> arguments.  

I really don't mind what you call them. If you don't like the word,
suggest another one. But it's clear what I meant, isn't it?

> My point was not that the macro gave you that control.  It was that
> the HOF explicitly committed you to extra, unnecessary steps in the 
> evaluation model that might indeed be optimized away, but also 
> might not.  The macro does not commit you to this.  

Yes, and I think I already agreed with this.

> The fact that the idea of inlining optimizations even occurs to 
> us at the use site comprises a conceptual barrier, however slight,
> that has nothing to do with the actual semantics of the loop we 
> are trying to write.  

Huh? Optimization and correctness should be completely unrelated.
There's no "conceptual barrier" I can see here.

>> Macros are sometimes nice to have, but you can go a long way
>> without them. And while there's a number of reasons why it's more
>> convenient to use macros in Lisp, I have the impression that this
>> tends to make people a bit unnecessarily "preachy" about them,
>> while at the same time blinding them to alternatives. But maybe
>> that's just my impression :-)

> I agree that macros can be, and are, abused, as are HOFs and everything
> else.  

The point is not that macros can be abused (as you say, everything can
be abused). The point is their evangelization above everything else by
some people.

> But stating, as some on this thread have done, that syntactic
> abstraction is useless, when such a statement is made from lack of
> understanding and actual experience with syntactic abstraction, is 
> as ignorant as the oft-repeated claims in some circles that HOF-based
> abstraction is useless.  

No kind of abstraction is "useless".

- Dirk
0
dthierbach2 (260)
11/7/2005 10:46:21 PM
Joachim Durchholz wrote:
>
> (There's a bit of white lying involved here - (a) I'm not really sure
> that "do" is the only syntactic sugar in Haskell

    See also, list comprehensions.  And...
http://haskell.org/hawiki/ThingsToAvoid#head-c0992f4c922f7b58b6930397ea7a6a2e0f30b48b

>
> I don't see Haskellers clamoring for additional syntactic sugar.

Template Haskell -- http://www.haskell.org/th/

> It isn't even useful for all monads -
> nobody uses the "do" notation for lists, for example.
>

    List comprehensions are probably more popular than "do" notation
for working exclusively on lists, but people certainly use "do" with
lists.  If for no other reason, using "do" makes the code work with
other monads, where list comprehensions only work for lists.  Anyway,
here's a little snippet using "do" and lists...

       do x <- [1,2,3]
          y <- [4,5,6]
          z <- [7,8,9]
          return (x,y*z)
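For comparison, the same computation written as a list comprehension
evaluates to the same 27 pairs, in the same order:

```haskell
-- "do" over the list monad and the comprehension are interchangeable here.
viaDo :: [(Int, Int)]
viaDo = do x <- [1, 2, 3]
           y <- [4, 5, 6]
           z <- [7, 8, 9]
           return (x, y * z)

viaComp :: [(Int, Int)]
viaComp = [ (x, y * z) | x <- [1, 2, 3], y <- [4, 5, 6], z <- [7, 8, 9] ]

main :: IO ()
main = print (viaDo == viaComp)  -- prints True
```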

0
11/7/2005 10:55:13 PM
Joachim Durchholz wrote:
> Andre schrieb:
>> By the way, why indeed does Haskell have the "do" notation?  Isn't 
>> "do" a macro, after all?  
> 
> Yes.
>> Wouldn't it have been nice not to have had to wait for a new version 
>> of each of the compilers to have access to the "do" notation when it 
>> was initially introduced?
> 
> Well, it's the only macro that the Haskellers ever found worth adding to 
> the language. Inventing a macro sublanguage just to define a single 
> macro might have seemed overkill to them :-)
> 
> (There's a bit of white lying involved here - (a) I'm not really sure 
> that "do" is the only syntactic sugar in Haskell, (b) I'm pretty sure 
> that the Haskell people *first* decided against using a macro system, 
> and *then* found reasons to introduce "do".)


   T. T. T.

   Put up in a place
   where it's easy to see
   the cryptic admonishment
        T. T. T.

   When you feel how depressingly
   slowly you climb,
   it's well to remember that
        Things Take Time.
   Things take time.
     -- Kumbel (Piet Hein)


This macro proposal from Tim Sheard and Simon Peyton Jones is
from 2002:

<http://research.microsoft.com/%7Esimonpj/papers/meta%2Dhaskell/meta-haskell.ps>



I have only skimmed this thread, so if Template Haskell has been 
mentioned before - just ignore this post.

-- 
Jens Axel Søgaard

0
usenet8944 (1130)
11/7/2005 11:01:42 PM
In comp.lang.functional Joachim Durchholz <jo@durchholz.org> wrote:
> Andre schrieb:
> > By the way, why indeed does Haskell have 
> > the "do" notation?  Isn't "do" a macro, after all?  

> Yes.

I wouldn't describe the do-notation as a macro. There are no macros in
Haskell. As a first approximation, I would describe the do-notation as
syntactic sugar for >> and >>=.

> > Wouldn't it have been nice not to have had to wait for a new version 
> > of each of the compilers to have access to the "do" notation when 
> > it was initially introduced?

> Well, it's the only macro that the Haskellers ever found worth adding to 
> the language. Inventing a macro sublanguage just to define a single 
> macro might have seemed overkill to them :-)

> (There's a bit of white lying involved here - (a) I'm not really sure 
> that "do" is the only syntactic sugar in Haskell, (b) I'm pretty sure 
> that the Haskell people *first* decided against using a macro system, 
> and *then* found reasons to introduce "do".)

I guess that's basically fair enough. The do-notation isn't the only form
of syntactic sugar in Haskell. Another example is list comprehensions,
which could be described roughly as syntactic sugar for map and filter.
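For the single-generator case, that rough description can be checked
directly (a sketch only - the full translation in the Report is more general):

```haskell
-- Comprehension with one generator and one guard...
evensDoubled :: [Int]
evensDoubled = [ 2 * x | x <- [1 .. 10], even x ]

-- ...and the corresponding map/filter pipeline.
desugared :: [Int]
desugared = map (2 *) (filter even [1 .. 10])

main :: IO ()
main = print (evensDoubled == desugared)  -- prints True
```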

>  > Is "do" going to be the only such
> > notation that will ever be useful?

> Seems so. I don't see Haskellers clamoring for additional syntactic sugar.

Depends on what you mean, but, for examples, you could google for "arrows"
and "notation" and I'm sure you'll find some proposals made by Haskellers.

-Vesa Karvonen
0
11/7/2005 11:03:11 PM
Dirk Thierbach <dthierbach@usenet.arcornews.de> wrote:
> [...] In Lisp, macros are very convenient in many circumstances. In
> other languages, other tools (like HOFs) are often more convenient.

First, I'd like to say that I'm not really trying to promote or advocate
macros here, but it seems that there is a lot of FUD flying around here.
Also, you can definitely do a lot of nice things with combinators and they
are preferable to syntactic sugar in many cases. For one thing,
combinators are first-class and macros (usually) aren't.

However, in my opinion, there certainly are situations where one would
want to be able to extend the syntax of languages like Haskell or ML
(whether Ocaml or SML or some other dialect). Unfortunately those
languages do not provide a macro system that would allow one to do it as
safely and as trivially as in Scheme/Lisp. What happens is that people
adapt their programming style to the language like the man who got his
suit from Levine the Genius Tailor.

> Nevertheless, in many cases one can go a long way with a different
> approach, using infix functions, HOFs, etc. These tools are more
> restricted than macros, but that makes their usage more uniform, while
> they are (in many, but not all cases) equally powerful.

AFAIK, they are strictly less powerful. You just can't introduce new
binding constructs using HOFs and infix functions.
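True in the strict sense: the closest a HOF can come is to accept the body
as an explicit lambda, so the binder a macro could have hidden stays visible
at the call site. A sketch (the names are illustrative only):

```haskell
-- A "binding construct" as a HOF: the caller writes the lambda that a
-- with-resource macro could have generated.
withResource :: r -> (r -> a) -> a
withResource r body = body r

main :: IO ()
main = print (withResource (42 :: Int) (\x -> x + 1))  -- prints 43
```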

> [...] to meddle with the syntax so [...] nobody can read my programs
> besides me.

I think that arguments of the above form are a red herring. Well designed
syntactic sugar can have an immense positive effect on readability and you
really don't need macros (or syntactic sugar) to write unreadable code.

> A sufficiently rich set of common syntactic constructs is all that is
> needed in most cases.

What is needed and what is desired are two different things.

-Vesa Karvonen
0
11/7/2005 11:51:44 PM
Gary Baumgartner wrote:

> You're doing exactly what I've meant to do. I got excited when
>  you brought it up earlier this year in another thread.
>  The discussion unfortunately only got as far as someone asking
>  about (map lambda '(x) '(y)) and whether it ends up depending
>  on the particular implementation of map.

Hmmm.  Okay, let me think; the packages that map sees are:
1: the symbol lambda and the calling environment.
2: the list (QUOTE X) and the calling environment.
3: the list (QUOTE Y) and the calling environment.

What Map does, is to evaluate its second and subsequent
arguments (in this case getting single-element lists)
and then construct N expressions e1 through eN using
the first argument-package directly and the Nth element
of the result of evaluating each of the other arguments.
After evaluating these expressions, it constructs a
list of their results and returns the list.

So it would in this case construct (<lambda-promise>
<symbol-x> <symbol-y>) and evaluate that in the
environment where MAP was called.  This would create
the call

(<lambda-promise> <x-promise> <y-promise>)

where the environment for evaluating lambda is fixed at
the time MAP is called and the environment for evaluating
X and Y is established during the execution of MAP and
will be the environment created by whatever binding
form encloses the call to MAP (in this case all three
are the same environment, but it wouldn't necessarily
always be true and they're not fixed at exactly the
same time).

Map should then return a list of functions, one element
long.  In practice, I think that one element would be
a fairly useless function.

Calling it would first evaluate <lambda-promise>, getting
the function <lambda>, then it would pass control to that.
<lambda> doesn't evaluate <x-promise>; it just extracts
the expression (in this case the symbol X) to use as its
formal argument list.  Lambda also doesn't evaluate
<y-promise>; instead it extracts the expression (in this
case the symbol Y) to use as its function specifier.

So the effect would be to create a function containing
a reference to an unbound variable Y, and following lexical
scope that Y would be inherited from the lexical contour
enclosing the call to Lambda - which although it winds up
being the same environment again was NOT determined by the
environment of the package given as second argument to
lambda.  It would, once again, be the lexical contour
enclosing the call to Map.

So the function created would evaluate and return the
current value of Y in the environment where map was called,
each and every time it was invoked.  The function is
essentially an implicit closure; if you store it in a
structure or whatever, its preserved reference to that
environment will keep the environment containing the
call to map from being garbage-collected.  So even after
the function containing MAP returns, its environment is
preserved on the heap and can be accessed through the
created function.

				Bear

(after a quick check; My current implementation has a bug
and returns the symbol Y. Darnit.  I'm convinced it should
work as I described it above, but I wouldn't have noticed
the mistake except for this question.  Good test case. The
interaction between two higher-order functions that both
don't evaluate all their arguments is tricky.)

0
bear (1219)
11/8/2005 5:12:43 AM
Ray Dillinger wrote:
> Gary Baumgartner wrote:
> 
>> You're doing exactly what I've meant to do. I got excited when
>>  you brought it up earlier this year in another thread.
>>  The discussion unfortunately only got as far as someone asking
>>  about (map lambda '(x) '(y)) and whether it ends up depending
>>  on the particular implementation of map.
> 
> Hmmm.  Okay, let me think; the packages that map sees are:
> 1: the symbol lambda and the calling environment.
> 2: the list (QUOTE X) and the calling environment.

(QUOTE (X)) and (QUOTE (Y)) if I'm not mistaken.

> 3: the list (QUOTE Y) and the calling environment.

-- 
The road to hell is paved with good intentions.
0
u.hobelmann (1643)
11/8/2005 6:02:46 AM
Vesa Karvonen <vesa.karvonen@cs.helsinki.fi> wrote:
> Dirk Thierbach <dthierbach@usenet.arcornews.de> wrote:
>> [...] In Lisp, macros are very convenient in many circumstances. In
>> other languages, other tools (like HOFs) are often more convenient.

> First, I'd like to say that I'm not really trying to promote or advocate
> macros here, but it seems that there is a lot of FUD flying around here.

I can agree with that :-)

> Also, you can definitely do a lot of nice things with combinators
> and they are preferable to syntactic sugar in many cases. For one
> thing, combinators are first-class and macros (usually) aren't.

Exactly.

> However, in my opinion, there certainly are situations where one would
> want to be able to extend the syntax of languages like Haskell or ML
> (whether Ocaml or SML or some other dialect). 

Of course.

> Unfortunately those languages do not provide a macro system that
> would allow one to do it as safely and as trivially as in
> Scheme/Lisp.

Template Haskell and Camlp4 do exist. It's not "as trivially" because
the abstract syntax trees are a bit more complex, but I don't see why
it shouldn't be "as safely".

> What happens is that people adapt their programming style to the
> language like the man who got his suit from Levine the Genius
> Tailor.

This is equally true with respect to macros, which is part of what
I am trying to criticize. As someone else said, everything can be abused.

>> Nevertheless, in many cases one can go a long way with a different
>> approach, using infix functions, HOFs, etc. These tools are more
>> restricted than macros, but that makes their usage more uniform, while
>> they are (in many, but not all cases) equally powerful.

> AFAIK, they are strictly less powerful. You just can't introduce new
> binding constructs using HOFs and infix functions.

Nobody said you could. But the existing binding constructs (which include
pattern matching) are sufficient in most cases, according to my experience. 

>> [...] to meddle with the syntax so [...] nobody can read my programs
>> besides me.

> I think that arguments of the above form are a red herring. Well designed
> syntactic sugar can have an immense positive effect on readability and you
> really don't need macros (or syntactic sugar) to write unreadable code.

Of course. Nevertheless, it's much easier to abuse macros to make code
unreadable. (That doesn't mean macros are bad, totally useless or something
like this, it just means one should be a bit careful, and actually
consider using a HOF, say, instead of just writing a macro.)
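[To make the HOF-instead-of-macro suggestion concrete: a minimal Python sketch, all names hypothetical, of a "with-resource"-style construct written as an ordinary higher-order function rather than as new syntax:]

```python
def with_resource(acquire, release, body):
    # what a "with-..." macro would expand into, but the body is
    # passed in as an ordinary function, so no syntax extension needed
    resource = acquire()
    try:
        return body(resource)
    finally:
        release(resource)  # runs even if body raises

events = []
result = with_resource(
    lambda: events.append("open") or "handle",  # acquire
    lambda r: events.append("close"),           # release
    lambda r: r.upper(),                        # body
)
```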

>> A sufficiently rich set of common syntactic constructs is all that is
>> needed in most cases.

> What is needed and what is desired are two different things.

Yes. But the whole argument was in response to the "macros are powerful
above everything" claim. I never questioned the *desirability* of
syntactic sugar.

- Dirk

0
dthierbach2 (260)
11/8/2005 8:25:55 AM
Dirk Thierbach schrieb:
> Adrian Kubala <adrian-news@sixfingeredman.net> wrote:
>>What I meant by my analogy above was that people that dislike
>>macros 
> 
> I would be careful with "dislike".

I have to chime in.

I have written my own macro interpreters, and one of them went far 
beyond the usual "string expansion with parameters" stuff.

Now, it's not macros per se that I dislike. It's having multiple levels 
of interpretation. Actually I feel that most of my misgivings about 
macros in Lisp stem from the way they compound a different problem: that 
of multiple interpretation levels.
I had no problems understanding "quote" and what it was good for. Yet I 
found myself confusing quote levels in practice in my Interlisp days. I 
also found there was no good way to annotate a function as to what quote 
level it expected in what parameter. Counting everything together, 
quoting-related errors cost me several weeks of bug-hunting.
Now I read that modern Lisps have a gazillion ways of quoting things. 
I'm horrified: every single one of those quote mechanisms was found to be 
useful in some way, else Lisp wouldn't have acquired these mechanisms - 
but how is a humble application programmer supposed to know which of a 
gazillion subtly different quote mechanisms is the right one to use? 
Even worse: in many cases he won't even see that he did something wrong, 
until he's replacing a value with a closure or something, at which point 
his code will fail subtly and unexpectedly. Maintenance nightmare time!

Macros are yet another mechanism for working on the meta level. I'm 
pretty sure that there's a "macro quote" mechanism. But even if there 
isn't, people now have to distinguish various quoting techniques *and* 
their interaction with macro application - and at that point, my eyes 
finally glaze over.

(It shouldn't surprise anybody that my macro languages *never* had 
anything but macro expansion. No additional evaluation mechanism, no 
thanks. Whether that was good design - ah well, that's an entirely 
different question *ggg*)

>>and I agree. I just haven't seen that perfect syntax yet.
> 
> I don't think there is anything like perfect syntax. And I don't
> think it's worth trying to create one.

Um... it's indeed worth trying to create one. If only to get personal 
insights into the problems associated with that goal :-)
Though I think that trying a better syntax design can even help others. 
E.g. take Pascal-style syntax. It was slightly improved with Modula-2, 
which is in a local optimum: you can't add or take away much without 
seriously breaking things or making the code unreadable. I always 
thought "pity that one needs so many keywords, but it helps code 
readability so let's stick with and advocate for that kind of syntax" (I 
have a profound dislike for syntax that's derived off line noise). Then 
I came across Haskell's syntax and found that all the things that need 
dozens of keywords in Pascal and Modula can be expressed with just a few 
indents and an occasional operator.
Moral: If thousands of syntax designers struggle to create a better way 
to do syntax, and just a single one succeeds, then it was worth the effort.

Regards,
Jo
0
jo427 (1164)
11/8/2005 8:59:46 AM
Vesa Karvonen schrieb:
> In comp.lang.functional Joachim Durchholz <jo@durchholz.org> wrote:
> 
>>Andre schrieb:
>>
>>>By the way, why indeed does Haskell have 
>>>the "do" notation?  Isn't "do" a macro, after all?  
> 
>>Yes.
> 
> I wouldn't describe the do-notation as a macro. There are no macros in
> Haskell. As a first approximation, I would describe the do-notation as
> syntactic sugar for >> and >>=.

I understood Andre to mean that "do would have been done using macros if 
Haskell had a macro system". Anything that can be classified as 
syntactic sugar is a macro in that sense.

>>>Wouldn't it have been nice not to have had to wait for a new version 
>>>of each of the compilers to have access to the "do" notation when 
>>>it was initially introduced?
> 
>>Well, it's the only macro that the Haskellers ever found worth adding to 
>>the language. Inventing a macro sublanguage just to define a single 
>>macro might have seemed overkill to them :-)
> 
>>(There's a bit of white lying involved here - (a) I'm not really sure 
>>that "do" is the only syntactic sugar in Haskell, (b) I'm pretty sure 
>>that the Haskell people *first* decided against using a macro system, 
>>and *then* found reasons to introduce "do".)
> 
> I guess that's basically fair enough. The do-notation isn't the only form
> of syntactic sugar in Haskell. Another example is list comprehensions,
> which could be described roughly as syntactic sugar for map and filter.
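[The rough equivalence can be seen directly in any language with comprehensions; a small Python illustration:]

```python
xs = range(10)

# a list comprehension ...
comp = [x * x for x in xs if x % 2 == 0]

# ... and its rough desugaring into filter and map
desugared = list(map(lambda x: x * x, filter(lambda x: x % 2 == 0, xs)))
```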

Yup, I overlooked these. (Haskell isn't what I'm using on a day-to-day 
basis.)

>> > Is "do" going to be the only such
>>
>>>notation that will ever be useful?
> 
>>Seems so. I don't see Haskellers clamoring for additional syntactic sugar.
> 
> Depends on what you mean, but, for examples, you could google for "arrows"
> and "notation" and I'm sure you'll find some proposals made by Haskellers.

Please dig some up :-)

Seriously, I haven't seen people moaning about a lack of syntactic sugar 
for arrows on comp.lang.functional. They still struggle with the concept 
of monads, let alone know what the "do" notation actually gives them, 
don't want to mess with arrows and are even less keen to see syntactic 
sugar for arrows!
(Advanced developers who're doing bleeding-edge research on arrows are 
an entirely different story. I'm pretty sure that SPJ's proposal for a 
macro language in Haskell is motivated by similar considerations. I just 
fear that they'll finally leave the last Joe Avg. Programmer behind. I 
usually have a full, clear grasp of the concepts I'm programming with, 
yet I found it difficult to keep the various interpretation levels apart 
in practice - how are average programmers supposed to do that? Many are 
doing what I call "experimental programming", i.e. fiddling around long 
enough until it works... I can't see how they'd ever successfully use a 
macro system, or Lisp's quote system.)

Regards,
Jo
0
jo427 (1164)
11/8/2005 9:11:14 AM
On 2005-11-08, Joachim Durchholz <jo@durchholz.org> wrote:
> Now I read that modern Lisps have a gazillion ways of quoting things. 
> I'm horrified: every single one of those quote mechanisms was found to be 
> useful in some way, else Lisp wouldn't have acquired these mechanisms

`quote' is one... what are the others you have in mind?

> but how is a humble application programmer supposed to know which of a 
> gazillion subtly different quote mechanisms is the right one to use? 
> Even worse: in many cases he won't even see that he did something wrong, 
> until he's replacing a value with a closure or something, at which point 
> his code will fail subtly and unexpectedly. Maintenance nightmare time!

People getting confused about what is evaluated and what is not in a
macro invocation is indeed a source of confusion -- but I don't think
there are ever issues with "different quote mechanisms".
0
adrian-news (121)
11/8/2005 9:23:14 PM
Adrian Kubala schrieb:
> On 2005-11-08, Joachim Durchholz <jo@durchholz.org> wrote:
> 
>>Now I read that modern Lisps have a gazillion ways of quoting things. 
>>I'm horrified: every single one of those quote mechanisms was found to be 
>>useful in some way, else Lisp wouldn't have acquired these mechanisms
> 
> `quote' is one... what are the others you have in mind?

I dimly remember that there were different quote mechanisms, each 
protecting the expression from being evaluated in another kind of context.
Hmm... maybe sometimes things needed to be quoted multiple times. Now 
that would be just a single quote mechanism, but with enough confusion 
potential for multiple kinds of quotes.

>>but how is a humble application programmer supposed to know which of a 
>>gazillion subtly different quote mechanisms is the right one to use? 
>>Even worse: in many cases he won't even see that he did something wrong, 
>>until he's replacing a value with a closure or something, at which point 
>>his code will fail subtly and unexpectedly. Maintenance nightmare time!
> 
> People getting confused about what is evaluated and what is not in a
> macro invocation is indeed a source of confusion

Oh, and not only macros - this is actually a source of confusion for any 
kind of function call that I do in Lisp.

Let me set up a hierarchy of languages: side-effect-free ones, 
side-effecting ones, and multi-level ones.

In a side-effect-free language (such as Haskell or the side-effect-free 
sublanguages of *ML), there's no need for a quote operator. Whether an 
expression is evaluated earlier or later is utterly uninteresting: it 
will give the same result. In fact an expression is just a different 
representation of its result, and semantically we don't care about that 
kind of difference, nor does the language give us a way to inspect the 
difference.

In a language with side-effects, we cannot entirely ignore the 
difference. It may make a difference when an expression is evaluated, 
because the context may have changed in the mean time. Quote-like 
operators can become a necessity.
Actually that's one of the reasons why I try to write code that's 
side-effect-free, regardless of the language I happen to use :-)

In a language with side effects and macros, the distinction becomes even 
more important. The form of an expression may affect how a macro 
processes it. The macro may (partially) evaluate the expression. It may 
rearrange or even rewrite the expression, and even change its semantics 
(if such a change was inadvertent, we have a particularly nasty bug, 
particularly if the change in semantics is subtle).

Macros as in Lisp seem like something immensely powerful and subtly 
dangerous to me.
I may be wrong :-)

Um... to check whether I'm wrong or not:
Would you charge an average programmer with writing macro code? Are 
there kinds of macro code you'd leave to him, kinds that you'd want to 
proofread, kinds that you'd never leave in the hands of an average 
programmer?
Would these categories change drastically if we're considering 
above-average programmers? Below-average programmers?

In other words: is writing macros a workable abstraction tool for the 
application programmer, or something for the library design expert?

Regards,
Jo
0
jo427 (1164)
11/9/2005 12:36:24 PM
Joachim Durchholz <jo@durchholz.org> writes:


> In a side-effect-free language (such as Haskell or the
> side-effect-free sublanguages of *ML), there's no need for a quote
> operator. Whether an expression is evaluated earlier or later is
> utterly uninteresting: it will give the same result. In fact an
> expression is just a different representation of its result, and
> semantically we don't care about that kind of difference, nor does the
> language give us a way to inspect the difference.

Not entirely true.  In strict languages like SML, there is a visible
difference between evaluating an expression and not evaluating it at
all (if the expression is nonterminating or might generate an error).
For example, you can't implement if-then-else as a function without in
some way "quoting" the expressions for the branches.  In SML, the
traditional way of doing this is to wrap an expression e into a
parameterless function, i.e., val qe = fn()=>e and "unquote" it by
applying this to no arguments (i.e., qe ()).
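[The same thunk trick carries over to any strict language; a minimal Python sketch (the name my_if is hypothetical) of "quoting" branches as parameterless functions:]

```python
def my_if(cond, then_thunk, else_thunk):
    # each branch is "quoted" as a parameterless function (a thunk);
    # "unquoting" is applying the chosen thunk to no arguments
    return then_thunk() if cond else else_thunk()

# the untaken branch would raise ZeroDivisionError if evaluated,
# but its thunk is never called
value = my_if(True, lambda: 42, lambda: 1 // 0)
```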

You can argue that errors are effects, but nontermination usually
isn't regarded as such.

        Torben

0
torbenm (37)
11/9/2005 3:24:32 PM
On 2005-11-09, Joachim Durchholz <jo@durchholz.org> wrote:
> I dimly remember that there were different quote mechanism, each 
> protecting the expression from being evaluated in another kind of context.
> Hmm... maybe sometimes things needed to be quoted multiple times. Now 
> that would be just a single quote mechanism, but with enough confusion 
> potential for multiple kinds of quotes.

This can be the case in macro-writing macros, which are indeed confusing
but only for the implementor of the macro -- the user of the macro
doesn't have to worry about this.

> In a side-effect-free language (such as Haskell or the side-effect-free 
> sublanguages of *ML), there's no need for a quote operator. Whether an 
> expression is evaluated earlier or later is utterly uninteresting

Ok, I think we should be clear that in modern Lisp, there is no need to
quote code unless you're implementing a macro. This is true of any macro
language -- you have to distinguish between the code being manipulated
and the code doing the manipulating.

You're right that side-effect-free languages have one less thing (what
order are these forms evaluated in) to confuse programmers, but this is
independent of quoting.

> The macro may (partially) evaluate the expression.

I've never seen such a macro and there'd have to be some very good
reason for it to do so.

> It may rearrange or even rewrite the expression, and even change its
> semantics (if such a change was inadvertent, we have a particularly
> nasty bug, particularly if the change in semantics is subtle).

This is definitely true.

> Macros as in Lisp seem like something immensely powerful and subtly 
> dangerous to me.

There are different kinds of macros, and the most common ones are easy
to write perfectly safely following a few simple rules. A hygienic macro
system like Scheme's makes macros practically foolproof (although it
does prevent you from doing a lot of useful things too).

> Um... to check whether I'm wrong or not:
> Would you charge an average programmer with writing macro code? Are 
> there kinds of macro code you'd leave to him, kinds that you'd want to 
> proofread, kinds that you'd never leave in the hands of an average 
> programmer?

Yes, and yes. I also wouldn't want an average programmer designing an OO
class hierarchy, or writing any Haskell whatsoever.
0
adrian-news (121)
11/9/2005 3:43:40 PM
Dirk Thierbach wrote:
> 
> Andre <andre@het.brown.edu> wrote:
> > This would have been more convincing if >>= and >> were not
> > themselves syntactic sugar :-)
> 
> But they are not syntactic sugar -- just ordinary infix functions.

Infix operations are not written using the primitive function application
syntax.  Their definition requires fixity and associativity declarations.
Given these, they are at least conceptually, and probably also in practice,
translated to ordinary function applications as part of compilation. 
In this sense, I would certainly regard infix usage as syntactic sugar for 
function application. 

Regards
Andre
0
andre9567 (120)
11/9/2005 5:03:35 PM
Joachim Durchholz wrote:
 
> Now, it's not macros per se that I dislike. It's having multiple levels
> of interpretation.  

> Macros are yet another mechanism for working on the meta level. I'm
> pretty sure that there's a "macro quote" mechanism.  

> (It shouldn't surprise anybody that my macro languages *never* had
> anything but macro expansion. No additional evaluation mechanism, no
> thanks. Whether that was good design - ah well, that's an entirely
> different question *ggg*)

Scheme SYNTAX-RULES macros follow exactly this design.  It is 
a simple language for specifying rewrite rules, completely different
from regular Scheme.  It is impossible to evaluate or interpret
any code during macro expansion.  Neither is there any quote mechanism
involved.  
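[As an illustration only - not Scheme's actual algorithm - here is a toy Python sketch of pattern-plus-template rewriting in the spirit of SYNTAX-RULES. Note that, unlike real SYNTAX-RULES, this naive version is *not* hygienic: the introduced tmp could capture a user variable.]

```python
# A rule is (pattern, template); pattern variables start with "?".
def expand(rule, form):
    pattern, template = rule
    bindings = {}

    def match(pat, frm):
        if isinstance(pat, str) and pat.startswith("?"):
            bindings[pat] = frm  # pattern variable matches anything
            return True
        if isinstance(pat, list) and isinstance(frm, list) and len(pat) == len(frm):
            return all(match(p, f) for p, f in zip(pat, frm))
        return pat == frm  # literals must match exactly

    def fill(tpl):
        if isinstance(tpl, str) and tpl.startswith("?"):
            return bindings[tpl]  # substitute the matched subform
        if isinstance(tpl, list):
            return [fill(t) for t in tpl]
        return tpl

    return fill(template) if match(pattern, form) else None

# (swap! a b) => (let ((tmp a)) (set! a b) (set! b tmp))
swap_rule = (["swap!", "?a", "?b"],
             ["let", [["tmp", "?a"]],
              ["set!", "?a", "?b"],
              ["set!", "?b", "tmp"]])
```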

Andre
0
andre9567 (120)
11/9/2005 5:11:12 PM
Dirk Thierbach wrote:

> Vesa Karvonen <vesa.karvonen@cs.helsinki.fi> wrote:
 
> > Also, you can definitely do a lot of nice things with combinators
> > and they are preferable to syntactic sugar in many cases. For one
> > thing, combinators are first-class and macros (usually) aren't.
> 
> Exactly.

It is actually a little ironic, in the light of the present discussion,
that Schemers (in my experience) tend to use /less/ syntactic
sugar than writers of ML or Haskell.  The absence of standardized 
pattern matching makes Schemers more likely to use accessor functions 
(arguably more modular in some cases).  The absence of standardized list 
comprehensions makes them more likely to use higher-order functions.

If non-schemers/lispers were more informed on how sparingly user
macros are actually used, I think they would worry much less about them.  

> > Unfortunately those languages do not provide a macro system that
> > would allow one to do it as safely and as trivially as in
> > Scheme/Lisp.
> 
> Template Haskell and Camlp4 do exist. It's not "as trivially" because
> the abstract syntax trees are a bit more complex, but I don't see why
> it shouldn't be "as safely".

Actually, the syntax trees are the least of the problems.  The much more 
difficult problem, which has everything to do with safety, is hygiene.
Last I checked, neither of these were hygienic, but I may be wrong.  

Regards
Andre
0
andre9567 (120)
11/9/2005 5:29:23 PM
Andre <andre@het.brown.edu> wrote:

> It is actually a little ironic, in the light of the present discussion,
> that Schemers (in my experience) tend to use /less/ syntactic
> sugar than writers of ML or Haskell. 

Unless you count usage of infix functions as syntactic sugar (which
I would think is a bit strange), then I wouldn't say they use "less"
syntactic sugar.

> The absence of standardized pattern

And now pattern matching is also syntactic sugar? :-)

> matching makes Schemers more likely to use accessor functions 
> (arguably more modular in some cases).  

Accessor functions are often inferior to pattern matching, because with
pattern matching, the compiler can check completeness. And if you want
accessor functions as an abstraction tool, nobody keeps you from writing
them.
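[The completeness point can be illustrated with a small Python sketch, using tagged tuples to stand in for an ML-style sum type; in ML or Haskell the compiler checks the cases for exhaustiveness at compile time, whereas here an unhandled tag only fails at run time:]

```python
def tree_sum(t):
    # destructure a tagged-tuple "sum type": ("leaf", n) | ("node", l, r)
    tag, *fields = t
    if tag == "leaf":
        (value,) = fields
        return value
    if tag == "node":
        left, right = fields
        return tree_sum(left) + tree_sum(right)
    # manual stand-in for the completeness check a compiler would do
    raise ValueError("unhandled case: " + tag)

tree = ("node", ("leaf", 1), ("node", ("leaf", 2), ("leaf", 3)))
```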

> The absence of standardized list comprehensions makes them more
> likely to use higher-order functions.

In my experience, list comprehensions are not very useful once they
get a bit more complicated.

> If non-schemers/lispers were more informed on how sparingly user
> macros are actually used, I think they would worry much less about them.  

I don't think anyone "worries" about them. Are you sure we are still
discussing the same topic? (I have repeated now the main point I want
to make so often that I won't do that again).

> Actually, the syntax trees are the least of the problems.  The much more 
> difficult problem, which has everything to do with safety, is hygiene.
> Last I checked, neither of these were hygienic, but I may be wrong.  

And HOFs don't have any problems with hygiene in the first place.

- Dirk

0
dthierbach2 (260)
11/9/2005 6:09:00 PM
Torben Ægidius Mogensen schrieb:
> Joachim Durchholz <jo@durchholz.org> writes:
> 
>>In a side-effect-free language (such as Haskell or the
>>side-effect-free sublanguages of *ML), there's no need for a quote
>>operator. Whether an expression is evaluated earlier or later is
>>utterly uninteresting: it will give the same result. In fact an
>>expression is just a different representation of its result, and
>>semantically we don't care about that kind of difference, nor does the
>>language give us a way to inspect the difference.
> 
> Not entirely true.  In strict languages like SML, there is a visible
> difference between evaluating an expression and not evaluating it at
> all (if the expression is nonterminating or might generate an error).

Being nonterminating is just a special case of an error, so these are 
really the same issue.

I don't think that erroneous programs are really part of this discussion 
anyway.

> For example, you can't implement if-then-else as a function without in
> some way "quoting" the expressions for the branches.  In SML, the
> traditional way of doing this is to wrap an expression e into a
> parameterless function, i.e., val qe = fn()=>e and "unquote" it by
> applying this to no arguments (i.e., (qe)).

Isn't if-then-else lazy, even in a strict language?

> You can argue that errors are effects,

They are indeed :-)

> but nontermination usually isn't regarded as such.

I would. With the exception of the unused branches of if-then-else or 
other built-in lazy constructs.
However, nontermination and other "errors" in unevaluated branches 
aren't errors anyway.

Regards,
Jo
0
jo427 (1164)
11/9/2005 8:49:41 PM
Adrian Kubala schrieb:
> On 2005-11-09, Joachim Durchholz <jo@durchholz.org> wrote:
> 
>>I dimly remember that there were different quote mechanism, each 
>>protecting the expression from being evaluated in another kind of context.
>>Hmm... maybe sometimes things needed to be quoted multiple times. Now 
>>that would be just a single quote mechanism, but with enough confusion 
>>potential for multiple kinds of quotes.
> 
> This can be the case in macro-writing macros, which are indeed confusing
> but only for the implementor of the macro -- the user of the macro
> doesn't have to worry about this.
> 
>>In a side-effect-free language (such as Haskell or the side-effect-free 
>>sublanguages of *ML), there's no need for a quote operator. Whether an 
>>expression is evaluated earlier or later is utterly uninteresting
> 
> Ok, I think we should be clear that in modern Lisp, there is no need to
> quote code unless you're implementing a macro. This is true of any macro
> language -- you have to distinguish between the code being manipulated
> and the code doing the manipulating.

I most definitely remember having wrestled with the quote operator.

Of course, that's ten years in the past (um, actually more like twenty 
years now), and a modern Lisp may have better ways to do what I did 
then. You can certainly hide quoting inside macros.

> You're right that side-effect-free languages have one less thing (what
> order are these forms evaluated in) to confuse programmers, but this is
> independent of quoting.
> 
>>The macro may (partially) evaluate the expression.
> 
> I've never seen such a macro and there'd have to be some very good
> reason for it to do so.

Indeed :-)

Though it may fully evaluate the expression. Or never at all. Or its 
behavior may be different depending on circumstances.
I'm not sure whether I like the sound of that. I'd certainly be wary of 
documentation under such circumstances.

Again, this is related to having side effects. Macros and/or quoting 
just magnify the problem to perceptible dimensions (programmers in 
macro-less languages usually aren't even aware that they are running 
into side effect problems).

>>Macros as in Lisp seem like something immensely powerful and subtly 
>>dangerous to me.
> 
> There are different kinds of macros, and the most common ones are easy
> to write perfectly safely following a few simple rules. A hygienic macro
> system like Scheme's makes macros practically foolproof (although it
> does prevent you from doing a lot of useful things too).

I could still happily live with a hygienic macro system. I take safety over 
expressive power any day (in case this isn't clear to everybody already 
*gg*).

>>Um... to check whether I'm wrong or not:
>>Would you charge an average programmer with writing macro code? Are 
>>there kinds of macro code you'd leave to him, kinds that you'd want to 
>>proofread, kinds that you'd never leave in the hands of an average 
>>programmer?
> 
> Yes, and yes. I also wouldn't want an average programmer designing an OO
> class hierarchy, or writing any Haskell whatsoever.

Not sure about having average programmers write Haskell. Lazy evaluation 
can have unexpected effects, but they might get used to that after a 
training period - so, don't leave them unattended in the first few months.

Agreed about the other points.

Regards,
Jo
0
jo427 (1164)
11/9/2005 8:55:45 PM
Joachim Durchholz <jo@durchholz.org> wrote:
[...]

> I dimly remember that there were different quote mechanism, each 
> protecting the expression from being evaluated in another kind of context.

You must be thinking about shell scripts (sh, etc).

[...]
> > People getting confused about what is evaluated and what is not in a
> > macro invocation is indeed a source of confusion

> Oh, and not only macros - this is actually a source of confusion for any 
> kind of function call that I do in Lisp.

Function calls in Scheme are basically like function calls in other
languages.

Ok, maybe I'm off base here, but how recent is your knowledge of Scheme
(and, say, Common Lisp)? If you just want to argue based on knowledge of
some obscure Lisp implementation dating back some 20 years, then that's
fine with me as long as you always clearly and repeatedly point out that
your assertions only apply to (or prejudice is based on) that particular
Lisp. AFAIK, the designs of early Lisps were indeed rather ad hoc and a
lot has changed since then.

> In a side-effect-free language (such as Haskell or the side-effect-free 
> sublanguages of *ML), there's no need for a quote operator.

This depends on what you mean by need. If you would have a Haskell or ML
like language with macros, then it would probably be desirable to be able
to define macros using (essentially) simple rewrite rules (like in the
Scheme syntax-rules and syntax-case macro systems). The (code) template
part of a rewrite rule essentially uses a form of (quasi)quotation.

IOW, the desirability of having a convenient quotation mechanism in a
macro system is orthogonal to the purity of the language.

> Whether an expression is evaluated earlier or later is utterly
> uninteresting:

Argh... Assuming that you are still basically talking about macros and not
just about the concept of quoting code (without having a macro system),
then I think that you are fundamentally confused about the essence of
macros.

Please understand that a macro system is a mechanism for extending the
syntax of a language. The purpose of a macro system is not to give a
mechanism for controlling the order of evaluation of "subexpressions".

[Quote and eval of some form (and no macro system) provide a reflection
mechanism rather than a syntactic extension mechanism.]

[...]
> In a language with side effects and macros, the distinction becomes even 
> more important. The form of an expression may affect how a macro 
> processes it. The macro may (partially) evaluate the expression. It may 
> rearrange or even rewrite the expression, and even change its semantics 
> (if such a change was inadvertent, we have a particularly nasty bug, 
> particularly if the change in semantics is subtle).

Please understand that a "function like" macro takes, as an argument, a
syntax object. The macro gives the meaning (semantics) to the syntax
object it receives by producing another syntax object. IOW, a macro
*defines* a new form of expression or a new production (in the grammar).

When you say that a macro may "change" the semantics of an expression you
are making a category mistake. A macro does not "change" the semantics of
an "expression". A macro *defines* the semantics of whatever syntax it is
given. (When interpreted strictly, with the assumption that you are
talking about macros, almost everything you said in your post is
nonsense.)

In general, if you have the code

  (some-macro . some-sexpr)

in Scheme, then the meaning (semantics) of some-sexpr is *defined* by
some-macro. The sexpr (above some-sexpr) that you pass to a macro does not
have semantics on its own. For example, in the expression

  (some-macro (+ 1 2))

or, perhaps more clearly, in the expression

  (some-macro . ((+ 1 2)))

the meaning of the sexpr ((+ 1 2)) is defined by some-macro. The grammar
of the language has (roughly) been extended (for the scope of the macro
binding) by a new production of the form

  expr -> '(' 'some-macro' ??? ')'

where ??? is defined by some-macro.

> Would you charge an average programmer with writing macro code?

Frankly, I don't know what an average programmer is. I think that your
question is a disguised tautology and a red herring.

I think that your thinking goes something like this:
1. I don't understand macros.
2. Therefore the average programmer can't understand macros.
3. I can't trust code that I don't understand.
4. Therefore I can't trust macros written by average programmers.

The way I interpret your question is:

  Would you charge a programmer who does not understand macros with
  writing macro code?

If that is what you are really asking then the answer should be quite
clear. One should not write code that one doesn't understand. This has
absolutely nothing to do with macros per se.

I've seen this same kind of red herring raised many times in conjunction
with just about everything like HOFs, monads, exceptions, lazy evaluation,
side-effects, static/dynamic typing, (unit) testing, etc.

-Vesa Karvonen
0
11/10/2005 12:01:04 AM
Andre wrote:
> Dirk Thierbach wrote:
> 
>>Template Haskell and Camlp4 do exist. It's not "as trivially" because
>>the abstract syntax trees are a bit more complex, but I don't see why
>>it shouldn't be "as safely".
> 
> Actually, the syntax trees are the least of the problems.  The much more 
> difficult problem, which has everything to do with safety, is hygiene.
> Last I checked, neither of these were hygienic, but I may be wrong.

Template Haskell is hygienic, unless you explicitly ask it not to be by
using a lower-level interface to syntax trees. I don't know about Camlp4.

-- 
David Hopwood <david.nospam.hopwood@blueyonder.co.uk>
0
11/10/2005 10:14:34 AM
Joachim Durchholz wrote:
> In other words: is writing macros a workable abstraction tool for the
> application programmer, or something for the library design expert?

If the conclusion were that it is something for the library design expert,
that would still justify support for macros in a language.

-- 
David Hopwood <david.nospam.hopwood@blueyonder.co.uk>
0
11/10/2005 10:18:15 AM
Joachim Durchholz <jo@durchholz.org> writes:
(snip)
> I dimly remember that there were different quote mechanisms, each protecting
> the expression from being evaluated in another kind of context.
> Hmm... maybe sometimes things needed to be quoted multiple times. Now that
> would be just a single quote mechanism, but with enough confusion potential
> for multiple kinds of quotes.
(snip)

I wonder if you're thinking of things like normal quoting ('), the
quotes you usually use in macros (` ,), function quoting (#'), etc.
Lispers may not call all these things quoting - I don't know - but
I do lump them into the same category when I think about them, so
maybe you did too. (I only know Common Lisp and a little elisp.)
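
For readers who haven't met these forms, a minimal sketch in Scheme (the
', ` and , forms behave the same way in Common Lisp; #' is CL's function
quoting and has no R5RS counterpart, since Scheme has a single namespace):

  '(+ 1 2)            ; quote: the literal list (+ 1 2), unevaluated
  `(+ 1 ,(* 2 3))     ; quasiquote/unquote: evaluates to (+ 1 6)
  `(f ,@(list 1 2) 3) ; unquote-splicing: evaluates to (f 1 2 3)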

-- Mark
0
Mark.Carroll (154)
11/10/2005 12:19:05 PM
Dirk Thierbach wrote:
 
> And now pattern matching is also syntactic sugar? :-)
 
From the Haskell report:

"Patterns appear in lambda abstractions, function definitions, pattern bindings, list
comprehensions, do expressions, and case
expressions. However, the first five of these ultimately translate into case expressions, so
defining the semantics of pattern
matching for case expressions is sufficient."

Case expressions themselves may be regarded as syntactic sugar
as described in figures 3.1 and 3.2 from the Haskell report.  
I give one example of the productions there.  The fact that 
Haskell compilers may optimize these is beside the point.  

  case v of { K p1 ...pn -> e; _ -> e' }
      = case v of {
           K x1 ...xn -> case x1 of {
                          p1 -> ... case xn of { pn -> e ; _ -> e' } ...
                          _  -> e' }
           _ -> e' }

  (at least one of p1, ..., pn is not a variable; x1, ..., xn are new
   variables)
 
In Lisp/Scheme, these productions would be the macro, so this kind
of semantics can be portably expressed in the language itself.  
Once again, the fact that some Scheme compilers would optimize this by 
various static methods is really beside the point.  
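
As a toy illustration of that point (the macro and all names here are
hypothetical, not taken from any post above), a production of this kind can
be written portably with syntax-rules:

  ;; Desugar a single-constructor case over tagged lists into plain ifs.
  ;; E.g. (simple-case '(point 1 2) ((point a b) (+ a b)) (else 0)) => 3.
  (define-syntax simple-case
    (syntax-rules (else)
      ((_ v ((tag x ...) e) (else e2))
       (let ((t v))                      ; evaluate the scrutinee once
         (if (and (pair? t) (eq? (car t) 'tag))
             (apply (lambda (x ...) e) (cdr t))
             e2)))))

Hygiene guarantees that the introduced t cannot capture variables in e or e2.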

Cheers
Andre
0
andre9567 (120)
11/10/2005 3:39:32 PM
Andre schrieb:
> Dirk Thierbach wrote:
>  
>>And now pattern matching is also syntactic sugar? :-)
> 
> From the Haskell report:
> 
> "Patterns appear in lambda abstractions, function definitions, pattern bindings, list
> comprehensions, do expressions, and case
> expressions. However, the first five of these ultimately translate into case expressions, so
> defining the semantics of pattern
> matching for case expressions is sufficient."
> 
> Case expressions themselves may be regarded as syntactic sugar
> as described in figures 3.1 and 3.2 from the Haskell report.  
> I give one example of the productions there.  The fact that 
> Haskell compilers may optimize these is beside the point.  

It's beside the point only if it's clear that macros can apply all 
optimisations that a compiler could. Anybody with insights into that?

Regards,
Jo
0
jo427 (1164)
11/10/2005 3:59:02 PM
In article <dkvqo6$p1p$1@online.de>, Joachim Durchholz wrote:
> 
> It's beside the point only if it's clear that macros can apply all
> optimisations that a compiler could. Anybody with insights into
> that?

Scheme-style macros cannot do this, because macros are transformations
of abstract syntax trees. A more elaborate macro system, where the
macros represented transformations of type derivations, potentially
could.

As I understand it, the Common Lisp process started to standardize
some of this stuff with the "&environment" argument to defmacro
arglists, but gave up when it became clear that they would have to
present an API for accessing an internal compiler analysis framework.


-- 
Neel Krishnaswami
neelk@cs.cmu.edu
0
neelk (298)
11/10/2005 8:51:00 PM
Neelakantan Krishnaswami wrote:
 
> Scheme-style macros cannot do this, because macros are transformations
> of abstract syntax trees. A more elaborate macro system, where the
> macros represented transformations of type derivations, potentially
> could.
 
I'm a little confused regarding what this has to do with optimization.  
I might mention that the Haskell report productions I mentioned claim 
to be type-preserving.

Regards
Andre
0
andre9567 (120)
11/10/2005 10:07:39 PM
Joachim Durchholz wrote:
 
> > Case expressions themselves may be regarded as syntactic sugar
> > as described in figures 3.1 and 3.2 from the Haskell report.
> > I give one example of the productions there.  The fact that
> > Haskell compilers may optimize these is beside the point.
> 
> It's beside the point only if it's clear that macros can apply all
> optimisations that a compiler could. Anybody with insights into that?
 
The point was whether these expressions could be understood as syntactic 
sugar or not.  Optimization was beside it :-)

Macros are certainly not the place for these kinds of optimizations.
While it is clear that standard compiler techniques should be able to do
quite a bit of optimization on the post-macro-expanded object code,
it is true that a compiler written to handle certain constructs specially
will usually generate better code.  But various Scheme compilers already
handle various standard macros specially, and do various types of
static type inference and optimizations, so there is no inherent
deficiency with this approach.  

Regards
Andre
0
andre9567 (120)
11/10/2005 10:20:02 PM
In article <4373C4AB.AF11B4DA@het.brown.edu>, Andre wrote:
> Neelakantan Krishnaswami wrote:
>  
>> Scheme-style macros cannot do this, because macros are transformations
>> of abstract syntax trees. A more elaborate macro system, where the
>> macros represented transformations of type derivations, potentially
>> could.
>  
> I'm a little confused regarding what this has to do with optimization.  
> I might mention that the Haskell report productions I mentioned claim 
> to be type-preserving.

You can't compute exhaustiveness or completeness without type
information, and you need that info to do optimization. I don't really
care about the optimization aspect, but I'm basically not interested
in pattern matchers that don't report exhaustiveness and completeness
information -- I think it's a design error in Haskell and ML that they
only issue warnings for bad sets of patterns.

-- 
Neel Krishnaswami
neelk@cs.cmu.edu
0
neelk (298)
11/10/2005 10:44:01 PM
On 2005-11-10, Mark T.B. Carroll <Mark.Carroll@Aetion.com> wrote:
> Joachim Durchholz <jo@durchholz.org> writes:
> (snip)
>> I dimly remember that there were different quote mechanisms, each protecting
>> the expression from being evaluated in another kind of context.
>> Hmm... maybe sometimes things needed to be quoted multiple times. Now that
>> would be just a single quote mechanism, but with enough confusion potential
>> for multiple kinds of quotes.
> (snip)
>
> I wonder if you're thinking of things like normal quoting ('), the
> quotes you usually use in macros (` ,), function quoting (#'), etc.
> Lispers may not call all these things quoting - I don't know - but
> I do lump them into the same category when I think about them, so
> maybe you did too. (I only know Common Lisp and a little elisp.)

and the unquoting things... , and ,@, IIRC.


-- 
Aaron Denney
-><-
0
wnoise1 (65)
11/11/2005 2:12:34 AM
Andre wrote:
> Macros are certainly not the place for these kinds of optimizations.
> While it is clear that standard compiler techniques should be able to do
> quite a bit of optimization on the post-macro-expanded object code,
> it is true that a compiler written to handle certain constructs specially
> will usually generate better code.  But various Scheme compilers already
> handle various standard macros specially, and do various types of
> static type inference and optimizations, so there is no inherent
> deficiency with this approach.  

I don't quite share that view.  The acrobatics modern compilers go through 
to extract information from low-level SSA-like code in order to optimize it 
are a lot of work, and often optimization isn't even possible (aliasing in C 
etc.).

OTOH, if you optimize stuff on each rewriting level (or rather, don't 
introduce redundancies in evaluation), the output will be quite good 
because no phase makes it dirty.

I have the choice to write (+ (* x 3) (* x 3)) or
(let ((y (* x 3))) (+ y y)).  The latter is both clearer and will be 
fast even without compiler technology (subexp elimination).  If you'd 
layer advanced constructs on top of R5RS Scheme, for instance, there are 
probably many points where it's easy to introduce some redundant 
subexpressions, but you could just as well produce minimally redundant 
code.  Can't get faster than that...  For stuff like OO you could 
optimize even more, because high-level code might have more information 
than the low-level optimizer, obviously.  That's why C++ has real 
compilers instead of naive translators to C.

-- 
The road to hell is paved with good intentions.
0
u.hobelmann (1643)
11/11/2005 9:59:38 AM
Ulrich Hobelmann wrote:
> 
> Andre wrote:
> > Macros are certainly not the place for these kinds of optimizations.
 
> I don't quite share that view.  The acrobatics modern compilers do to
> extract information from low-level SSA-like code to optimize that is a
> lot of work, and often optimization isn't even possible (aliasing in C
> etc.).
> 
> OTOH, if you optimize stuff on each rewriting level (or rather, don't
> introduce redundancies in evaluation), the output will be quite good
> because no phase makes it dirty.
 
You are right, and I overstated the case.  For example, pattern matching
is a well-understood problem with well-known, published algorithms 
available for eliminating redundancies.  It would indeed be better to 
generate good code at the macro rewrite level, where the problem
is well-delineated and can be specifically targeted, rather than 
leaving it to be handled somewhere down the pipeline in an ad hoc 
manner.   

Having said that, there is a level of optimizations that the macro
writer should not have to worry about.  Krishnamurthy's book, cited
earlier in the thread, touches upon this, but I remember reading a 
very nice paper about this precise issue once.  I thought it 
was called something like "The macro-writer's manifesto", but 
Google gives me nothing... :-(

Cheers
Andre
0
andre9567 (120)
11/11/2005 5:37:22 PM
Andre wrote:
> Having said that, there is a level of optimizations that the macro
> writer should not have to worry about.  Krishnamurthy's book, cited

Makes sense.

> earlier in the thread, touches upon this, but I remember reading a 
> very nice paper about this precise issue once.

Yes, I'd like to read that sometime.  Too little time :(

-- 
The road to hell is paved with good intentions.
0
u.hobelmann (1643)
11/11/2005 7:38:10 PM
Andre <andre@het.brown.edu> wrote:
> Dirk Thierbach wrote:

>> And now pattern matching is also syntactic sugar? :-)

[...]
> Case expressions themselves may be regarded as syntactic sugar
> as described in figures 3.1 and 3.2 from the Haskell report.  

I didn't mean that case expressions can be translated to patterns (or
the other way round), I meant pattern matching in itself.

> In Lisp/Scheme, these productions would be the macro, so this kind
> of semantics can be portably expressed in the language itself.  

Yes. So what? In Haskell, there are at least half a dozen constructs
that are really only slight syntactical variations. So far, we had
do-Notation, list comprehensions, and pattern matching vs. case-statements.
I could add a few more.

But if you remember the original point of this discussion (do you?),
this is completely irrelevant.

- Dirk
0
dthierbach2 (260)
11/12/2005 5:29:07 PM
Dirk Thierbach wrote:
> 
> Andre <andre@het.brown.edu> wrote:
> > Dirk Thierbach wrote:
> 
> >> And now pattern matching is also syntactic sugar? :-)
> 
> [...]
> > Case expressions themselves may be regarded as syntactic sugar
> > as described in figures 3.1 and 3.2 from the Haskell report.
> 
> I didn't mean that case expressions can be translated to patterns (or
> the other way round), I meant pattern matching in itself.

Me too.  The case productions mentioned translate composite
patterns into a simple set of base patterns.  

Regards
Andre
0
andre9567 (120)
11/12/2005 5:49:25 PM
Andre <andre@het.brown.edu> wrote:
> Me too.  The case productions mentioned translate composite
> patterns into a simple set of base patterns.  

But that's not necessarily how it is implemented. That's just one 
way to clearly define it for the sake of the standard.

- Dirk

0
dthierbach2 (260)
11/13/2005 9:18:56 AM
Scripsit Dirk Thierbach <dthierbach@usenet.arcornews.de>
> Andre <andre@het.brown.edu> wrote:

>> Me too.  The case productions mentioned translate composite
>> patterns into a simple set of base patterns.  

> But that's not necessarily how it is implemented. That's just one 
> way to clearly define it for the sake of the standard.

That is true for all kinds of syntactic sugar in all languages.

-- 
Henning Makholm             "I know how to apply such means to people's
                           bodies that I can warm or cool them as I wish,
                    and make them vomit if that is what I want, or induce
                            purging and much else of that sort."
0
henning (37)
11/13/2005 12:15:07 PM
Henning Makholm <henning@makholm.net> wrote:
> Scripsit Dirk Thierbach <dthierbach@usenet.arcornews.de>
>> Andre <andre@het.brown.edu> wrote:

>>> Me too.  The case productions mentioned translate composite
>>> patterns into a simple set of base patterns.  

>> But that's not necessarily how it is implemented. That's just one 
>> way to clearly define it for the sake of the standard.

> That is true for all kinds of syntactic sugar in all languages.

Not according to the way I understand "syntactic sugar".

But this discussion is now way off the original topic, and descending
into sophistry, so I'll say EOT.

- Dirk

0
dthierbach2 (260)
11/13/2005 2:23:40 PM
Neelakantan Krishnaswami wrote:
 
> You can't compute exhaustiveness or completeness without type
> information, and you need that info to do optimization. I don't really
> care about the optimization aspect, but I'm basically not interested
> in pattern matchers that don't report exhaustiveness and completeness
> information

It is relatively straightforward, in Lisp or Scheme, to write a set of 
macros (which I have done) that can check exhaustiveness, so that 
one can write, for example:
    
(define-sum-type boolean T F)
(define-sum-type pair pair)

(match x
  ((pair (pair (T) (F)) (T))   1)
  ((pair (pair (T) (T)) (T))   2)
  ((pair _              (F))   3)
  ((pair (pair (F)  _ ) (T))   4))

and it will expand fine, while a syntax error will be reported 
if any of the cases are left out.  In fact, when I was playing
with it, I found that the inexhaustiveness reporting was indeed 
quite useful in deeply nested examples like:

(define-sum-type tree make-node make-leaf)

(match x
  ((make-node (make-node (make-node x y) v) z)  1)
  ((make-node (make-node (make-leaf u)   v) z)  2)
  ((make-node (make-leaf u)                 z)  3)
  ((make-leaf y)                                4))

where otherwise it would be very easy to omit a case.  I can 
post the macros (not very long) if anyone is interested.  

Regards
Andre
0
andre9567 (120)
11/15/2005 2:41:41 PM

Hi Andre. I'd be interested in having a look at those macros. Thanks.

jao

Andre <andre@het.brown.edu> writes:

> It is relatively straightforward, in Lisp or Scheme, to write a set of
> macros (which I have done) that can check exhaustiveness, so that
> one can write, for example:
>
> (define-sum-type boolean T F)
> (define-sum-type pair pair)
>
> (match x
>   ((pair (pair (T) (F)) (T))   1)
>   ((pair (pair (T) (T)) (T))   2)
>   ((pair _              (F))   3)
>   ((pair (pair (F)  _ ) (T))   4))
>
> and it will expand fine, while a syntax error will be reported
> if any of the cases are left out.  In fact, when I was playing
> with it, I found that the inexhaustiveness reporting was indeed
> quite useful in deeply nested examples like:
>
> (define-sum-type tree make-node make-leaf)
>
> (match x
>   ((make-node (make-node (make-node x y) v) z)  1)
>   ((make-node (make-node (make-leaf u)   v) z)  2)
>   ((make-node (make-leaf u)                 z)  3)
>   ((make-leaf y)                                4))
>
> where otherwise it would be very easy to omit a case.  I can
> post the macros (not very long) if anyone is interested.
>
> Regards
> Andre

-- 
Any sufficiently advanced technology is indistinguishable from magic.
-Arthur C Clarke, science fiction writer (1917- )

0
jao1 (13)
11/15/2005 4:33:10 PM
Andre wrote:

> It is relatively straightforward, in Lisp or Scheme, to write a set of
> macros (which I have done) that can check exhaustiveness, so that
> one can write, for example:

Okay, here is the code.  It is rough but demonstrates the assertion.
It runs in MzScheme but, since it just uses define-macro, should be
quite easy to port to Lisp or other Schemes.

;;==============================================================
;;
;; Exhaustiveness checking for pattern matching (public domain).
;; Andre van Tonder.
;;
;;==============================================================

(define-macro (define-sum-type name . tags)
  (register (cons name tags))
  '(void))

(define-macro (match exp . clauses)
  (if (exhaustive? (map car clauses))
      ;; Here put the rewrites to object code:
      "Match ok"
      (raise-syntax-error #f "Match non-exhaustive" clauses)))

(begin-for-syntax

  ;; Support code for registering types:

  (define tags->types '())

  ;; type :: (name tag ...)

  (define (register type)
    (let ((tags (cdr type)))
      (for-each (lambda (tag)
                  (set! tags->types
                        (cons (cons tag type)
                              tags->types)))
                tags)))

  (define (type-of tag)
    (cond ((assoc tag tags->types) => cdr)
          (else (error "Undefined tag:" tag))))

  (define (tags-of type) (cdr type))

  ;; Patterns are tested for exhaustiveness by converting to simpler
  ;; patterns according to the following algorithm:
  ;;
  ;; Ignoring wildcards for now, to test if a list of patterns of
  ;; of the specific form:
  ;;
  ;; ((tag (tag* p* ...) p ...)
  ;;  ...                     )
  ;;
  ;; is exhaustive,
  ;;
  ;;    - we determine the type of the head tags denoted by |tag|, and
  ;;    - for each tag defined for this type, take the patterns
  ;;      starting with that tag, and recursively determine if the list
  ;;      of partially flattened tail patterns
  ;;
  ;;         (tag* p* ... p ...)
  ;;
  ;;      is exhaustive.
  ;;      If there are no patterns starting with a tag, the
  ;;      corresponding list is null, and the match is non-exhaustive
  ;;      by the first case in the logic below.
  ;;
  ;; Variables (wildcards) in tail patterns are flattened as follows:
  ;;
  ;;   (tag x p ...)  -->  (any+ p ...)
  ;;
  ;; where the right hand side may match a list of arbitrary length
  ;; ending with (p ...).  Any+ tags can stand for any constructor tag,
  ;; as reflected in the logic below.

  (define (exhaustive? pats)
    (cond ((null? pats)            #f)
          ((andmap null? pats)     #t)
          ((ormap wildcard? pats)  #t)
          ((andmap wild-tag? pats) (exhaustive? (map make-recur-pattern pats)))
          (else
           (let* ((tame-pats (filter not-wild-tag? pats))
                  (wild-pats (filter wild-tag? pats))
                  (tags      (map car tame-pats))
                  (type      (type-of (car tags))))
             (if (not (of-type? tags type))
                 (error "Tags do not belong to the same type:" tags)
                 (andmap (lambda (same-tag-pats)
                           (exhaustive?
                            (map make-recur-pattern
                                 (if (null? same-tag-pats)
                                     wild-pats
                                     (append same-tag-pats
                                             (map (lambda (wild-pat)
                                                    (make-congruent wild-pat
                                                                    (car same-tag-pats)))
                                                  wild-pats))))))
                         (partition-by (tags-of type) tame-pats)))))))

  (define (make-recur-pattern pat)
    (cond ((null? (cdr pat))    '())
          ((symbol? (cadr pat)) (cons 'any+ (cddr pat)))
          (else                 (append (cadr pat) (cddr pat)))))

  (define (partition-by tags ls)
    (map (lambda (tag)
           (filter (lambda (x)
                     (eq? tag (car x)))
                   ls))
         tags))

  (define (make-congruent wild-pat template)
    (cons 'any+
          (let loop ((n (- (length template)
                           (length wild-pat))))
            (if (= n 0)
                (cdr wild-pat)
                (cons '(any+)
                      (loop (- n 1)))))))

  (define (wild-tag? p)
    (and (pair? p)
         (eq? (car p) 'any+)))

  (define (not-wild-tag? p)
    (not (wild-tag? p)))

  (define (wildcard? p)
    (or (symbol? p)
        (and (wild-tag? p)
             (null? (cdr p)))))

  (define (of-type? tags type)
    (andmap (lambda (tag)
              (member tag (tags-of type)))
            tags))

  ) ; for-syntax


;;=======================================================
;;
;; Examples: Comment out any cases below to see it work:
;;
;;=======================================================

(define-sum-type boolean T F)

(define-sum-type pair pair)

(match x
  ((T) 1)
  ( _  2))

(match x
  ((pair (T) (F)) 1)
  ((pair (T) (T)) 2)
  ((pair (F) (T)) 3)
  ((pair (F) (F)) 4))

(match x
  ((pair (T) (T)) 1)
  ((pair  _   _ ) 2))

(match x
  ((pair (pair (T) (F)) (T))  1)
  ((pair (pair (T) (T)) (T))  2)
  ((pair _              (F))  3)
  ((pair (pair (F) _)   (T))  4))

(define-sum-type tree make-node make-leaf)

(match x
  ((make-node (make-node (make-node x y)) z) 1)
  ((make-node (make-node (make-leaf u))   z) 2)
  ((make-node (make-leaf (make-node x y)) z) 3)
  ((make-node (make-leaf (make-leaf u))   z) 4)
  ((make-leaf x)                             5))

0
andreuri2000 (148)
11/16/2005 8:56:51 PM
andreuri2000@yahoo.com writes:

> Andre wrote:
> 
> > It is relatively straightforward, in Lisp or Scheme, to write a
> > set of macros (which I have done) that can check exhaustiveness,
> > so that one can write, for example:
> 
> Okay, here is the code.  It is rough but demonstrates the assertion.
> It runs in MzScheme but, since it just uses define-macro, should be
> quite easy to port to Lisp or other Schemes.

Actually, your code does have a problem in MzScheme...  The thing is
that you're using persistent information at the syntax level, but that
level is completely gone when you compile a file.  For example, I
compiled your file, and loaded it into a fresh MzScheme, and then I
get:

  > (match x ((T) 1) ((F)  2))
  Undefined tag: T

The solution is to bind your constructors as syntax values that are
both transformers (macros) and values.  I did just that for a course
that I'm teaching, and we're making extensive use of this facility
(defining sums, and using pattern matching that raises an error for
redundant cases or inexhaustive matches).

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
0
eli666 (555)
11/17/2005 5:41:54 AM
Eli Barzilay wrote:

> I did just that for a course
> that I'm teaching, and we're making extensive use of this facility
> (defining sums, and using pattern matching that raises an error for
> redundant cases or inexhaustive matches).

Any possibility you could make this available to the general public?

Best regards
Andre
0
andre9567 (120)
11/17/2005 1:33:06 PM
Andre <andre@het.brown.edu> writes:

> Eli Barzilay wrote:
> 
> > I did just that for a course
> > that I'm teaching, and we're making extensive use of this facility
> > (defining sums, and using pattern matching that raises an error for
> > redundant cases or inexhaustive matches).
> 
> Any possibility you could make this available to the general public?

You can get it as a .plt file from
http://csu660.barzilay.org/csu660.plt, but it's pretty course-specific.
It's also very PLT-specific.  If you do get it, then the
file that you're looking for is datatype.ss.

-- 
          ((lambda (x) (x x)) (lambda (x) (x x)))          Eli Barzilay:
                  http://www.barzilay.org/                 Maze is Life!
0
eli666 (555)
11/17/2005 5:39:33 PM
Andre wrote:
 
> ... there is a level of optimizations that the macro
> writer should not have to worry about.  Krishnamurthy's book, cited
> earlier in the thread, touches upon this, but I remember reading a
> very nice paper about this precise issue once.  I thought it
> was called something like "The macro-writer's manifesto", but
> Google gives me nothing... :-(

I found the document.  Google for:

The Guaranteed Optimization Clause of the Macro-Writer's Bill of Rights
  - R. Kent Dybvig

Cheers
Andre
0
andre9567 (120)
11/23/2005 4:40:03 PM
In comp.lang.scheme Ulrich Hobelmann <u.hobelmann@web.de> wrote:
> Joachim Durchholz wrote:
> > The problem is: we don't know of better alternatives. At least not if 
> > they want to teach OO.
> 
> Excuse me?  For OO, there's Python, Ruby, Common Lisp, Smalltalk.  Why 
> are these in your opinion less appropriate than Java?  They are all 
> simpler, most/all of them offer an interactive environment (good for the 
> beginning user)...
> 

While it's commonly believed that an interactive environment is better for 
the beginner, that is yet another common myth that is wrong. 

-- 
	Sander

+++ Out of cheese error +++
0
sander569 (105)
11/27/2005 8:31:06 PM
Sander Vesik wrote:
> In comp.lang.scheme Ulrich Hobelmann <u.hobelmann@web.de> wrote:
>> Joachim Durchholz wrote:
>>> The problem is: we don't know of better alternatives. At least not if 
>>> they want to teach OO.
>> Excuse me?  For OO, there's Python, Ruby, Common Lisp, Smalltalk.  Why 
>> are these in your opinion less appropriate than Java?  They are all 
>> simpler, most/all of them offer an interactive environment (good for the 
>> beginning user)...
>>
> 
> While it's commonly believed that an interactive environment is better for 
> the beginner, that is yet another common myth that is wrong. 

Let's just call it a matter of taste.  I certainly like to be able to 
see values when I don't know the language too well yet.

-- 
The road to hell is paved with good intentions.
0
u.hobelmann (1643)
11/27/2005 10:33:33 PM
Sander Vesik wrote:

> While it's commonly believed that an interactive environment is better for 
> the beginner, that is yet another common myth that is wrong. 
> 

I don't know what you do; your site name, haldjas.folklore.ee, doesn't
say much. But I have taught programming for many years, and for [not only] me
this is not a myth: it certainly helps a lot. Are you issuing empty
statements just for fun/provocation?

Jerzy Karczmarczuk
0
karczma (331)
11/28/2005 8:33:27 AM