COMPGROUPS.NET

### What is Expressiveness in a Computer Language


in March, i posted an essay "What is Expressiveness in a Computer
Language", archived at:
http://xahlee.org/perl-python/what_is_expresiveness.html

I was informed then that there is an academic paper written on this
subject.

On the Expressive Power of Programming Languages, by Matthias
Felleisen, 1990.
http://www.ccs.neu.edu/home/cobbe/pl-seminar-jr/notes/2003-sep-26/expressive-slides.pdf

Has anyone read this paper? And, would anyone be interested in giving a
summary?

thanks.

Xah
xah@xahlee.org
∑ http://xahlee.org/


 0
Reply xah (484) 6/9/2006 5:04:47 AM


Xah Lee wrote:
> in March, i posted an essay "What is Expressiveness in a Computer
> Language”, archived at:
> http://xahlee.org/perl-python/what_is_expresiveness.html
>
> I was informed then that there is an academic paper written on this
> subject.
>
> On the Expressive Power of Programming Languages, by Matthias
> Felleisen, 1990.
> http://www.ccs.neu.edu/home/cobbe/pl-seminar-jr/notes/2003-sep-26/expressive-slides.pdf
>
> Has anyone read this paper? And, would anyone be interested in giving a
> summary?

The paper itself has a good summary.

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/

 0
Reply pc56 (3930) 6/9/2006 7:48:44 AM

Xah Lee wrote:
> in March, i posted an essay "What is Expressiveness in a Computer
> Language", archived at:
> http://xahlee.org/perl-python/what_is_expresiveness.html
>
> I was informed then that there is an academic paper written on this
> subject.
>
> On the Expressive Power of Programming Languages, by Matthias
> Felleisen, 1990.
> http://www.ccs.neu.edu/home/cobbe/pl-seminar-jr/notes/2003-sep-26/expressive-slides.pdf
>
> Has anyone read this paper? And, would anyone be interested in giving a
> summary?

The gist of the paper is this:  Some computer languages seem to be more
"expressive" than others.  But anything that can be computed in one
Turing complete language can be computed in any other Turing complete
language.  Clearly the notion of expressiveness isn't concerned with
ultimately computing the answer.

Felleisen's paper puts forth a formal definition of expressiveness in
terms of semantic equivalences of small, local constructs.  In his
definition, wholesale program transformation is disallowed, so you
cannot appeal to Turing completeness to claim program equivalence.

Expressiveness isn't necessarily a good thing.  For instance, in C, you
can express the addresses of variables by using pointers.  You cannot
express the same thing in Java, and most people consider this to be a
good idea.


 0

<I've removed the massive cross-posting - I wouldn't presume this message is
all that interesting to folks in those other NG's, and I'm sure they'd be
saying, "who the heck is Paul McGuire, and who gives a @#*$! what he thinks?">

"Joe Marshall" <eval.apply@gmail.com> wrote in message
news:1149863687.298352.45980@h76g2000cwa.googlegroups.com...
>
> Expressiveness isn't necessarily a good thing.  For instance, in C, you
> can express the addresses of variables by using pointers.  You cannot
> express the same thing in Java, and most people consider this to be a
> good idea.
>

For those who remember the bad old days of COBOL, its claim to fame was
that it was more like English prose, with the intent of making a
programming language that was as readable as English, assuming that this
was more "expressive", and not requiring as much of a mental mapping
exercise for someone trying to "read" a program. Even the language
terminology itself strived for this: statements were "sentences"; blocks
were "paragraphs".

The sentence model may have ended up being one of COBOL's Achilles'
heels - the placement of terminating periods for an IF THEN ELSE block
was crucial for disambiguating which ELSE went with which IF.
Unfortunately, periods are one of the least visible printed characters,
and an extra or missing period could cause hours of extra debugging.
(Of course, at the time of COBOL's inception, the only primary languages
to compare with were assembly or FORTRAN-60, so this idea wasn't totally
unfounded.)

-- Paul

 0
Reply ptmcg2 (617) 6/9/2006 3:07:23 PM

Joe Marshall wrote:
> Xah Lee wrote:
>> On the Expressive Power of Programming Languages, by Matthias
>> Felleisen, 1990.
>> http://www.ccs.neu.edu/home/cobbe/pl-seminar-jr/notes/2003-sep-26/expressive-slides.pdf
>
> The gist of the paper is this: Some computer languages seem to be
> more "expressive" than others.  But anything that can be computed in
> one Turing complete language can be computed in any other Turing
> complete language.  Clearly the notion of expressiveness isn't
> concerned with ultimately computing the answer.
> Felleisen's paper puts forth a formal definition of expressiveness in
> terms of semantic equivalences of small, local constructs.  In his
> definition, wholesale program transformation is disallowed so you
> cannot appeal to Turing completeness to claim program equivalence.

I suspect that the small, local transformations versus global
transformations is also to do with the practice of not saying the same
thing twice. Everything from subroutines to LISP macros also helps
here, increasing language expressiveness.

> Expressiveness isn't necessarily a good thing.  For instance, in C,
> you can express the addresses of variables by using pointers.  You
> cannot express the same thing in Java, and most people consider this
> to be a good idea.

Assuming the more-expressive feature does not preclude the
less-expressive one, good/bad depends on the programmer. I know *I*
can't be trusted with pointers ;-) , but I know many programmers
benefit greatly from them. Of course, knowing that the programmer
cannot do something does help the compiler stop you shooting yourself
in the foot.

--
Simon Richard Clarkstone:
s.r.cl?rkst?n?@durham.ac.uk / s?m?n.cl?rkst?n?@hotmail.com
"I have a spelling chequer / it came with my PC / it plainly marks for
my revue / Mistake's I cannot sea" ... by: John Brophy
(at: http://www.cfwf.ca/farmj/fjjun96/)

 0
Reply Simon 6/9/2006 3:51:33 PM

Xah Lee wrote:
[the usual off-topic trolling stuff]

Shit, da troll is back. Abuse reports need to go to abuse [] pacbell.net
and abuse [] swbell.net this time.

 0
Reply PofN 6/9/2006 6:28:46 PM

Xah Lee wrote:
> Has anyone read this paper? And, would anyone be interested in giving a
> summary?

Not you, of course. Too busy preparing the next diatribe against UNIX,
Perl, etc.
;)

 0
Reply Kaz 6/9/2006 6:32:18 PM

Joe Marshall wrote:
> Xah Lee wrote:
>
>> in March, i posted an essay "What is Expressiveness in a Computer
>> Language", archived at:
>> http://xahlee.org/perl-python/what_is_expresiveness.html
>>
>> I was informed then that there is an academic paper written on this
>> subject.
>>
>> On the Expressive Power of Programming Languages, by Matthias
>> Felleisen, 1990.
>> http://www.ccs.neu.edu/home/cobbe/pl-seminar-jr/notes/2003-sep-26/expressive-slides.pdf
>>
>> Has anyone read this paper? And, would anyone be interested in giving a
>> summary?
>
> The gist of the paper is this: Some computer languages seem to be
> more "expressive" than others.  But anything that can be computed in
> one Turing complete language can be computed in any other Turing
> complete language.  Clearly the notion of expressiveness isn't
> concerned with ultimately computing the answer.
>
> Felleisen's paper puts forth a formal definition of expressiveness in
> terms of semantic equivalences of small, local constructs.  In his
> definition, wholesale program transformation is disallowed so you
> cannot appeal to Turing completeness to claim program equivalence.
>
> Expressiveness isn't necessarily a good thing.  For instance, in C, you
> can express the addresses of variables by using pointers.  You cannot
> express the same thing in Java, and most people consider this to be a
> good idea.

Thanks for the summary.

Me, I would like to see a definition of expressiveness that would
exclude a programming mechanism from "things to be expressed". If the
subject is programmer productivity, well, I write programs to get some
behavior out of them, such as operating an ATM cash dispenser. If I
need to keep a list of transactions, I need to express the abstraction
"list" in some data structure or other, but below that level of
abstraction I am just hacking code, not expressing myself -- well, that
is the distinction for which I am arguing.
heck, in this case I will even give you as "thing to express" getting
back multiple values from a function. That comes up all the time, and
it can be an aggravation or a breeze. But then I would score C down
because it does not really return multiple values. One still has some
heavy lifting to do to fake the expressed thing. But I would still give
it an edge over Java because Java's fakery would have to be a composite
object -- one could not have a primary return value as the function
result and ancillary values "somewhere else".

kt

--
Cells: http://common-lisp.net/project/cells/

"I'll say I'm losing my grip, and it feels terrific."
  -- Smiling husband to scowling wife, New Yorker cartoon

 0
Reply Ken 6/9/2006 9:27:35 PM

hi Joe,

Joe Marshall wrote:
« Expressiveness isn't necessarily a good thing. For instance, in C,
you can express the addresses ... »

we gotta be careful here, because soon we gonna say binaries are the
most expressive. For instance, in assembly, you can express the
registers and stuff.

Expressiveness, with respect to — for lack of refined terms —
semantics, is a good thing, period. When discussing a language's
semantical expressiveness, it goes without saying that a "domain" is
understood, or needs to be defined. This is almost never mentioned
because it is very difficult. Put it in driveler's chant for better
understanding: we can't "compare apples with oranges".

Let me give an example. Let's say i invented a language, where, there's
no addition of numbers, but phaserfy and realify with respective
operators ph and re. So, in my language, to do 1+2, you write
"ph 1 re ph 2", which means, to phaserfy 1, and phaserfy 2, then
realify their results, which results in 3. Now, this language is the
most expressive, because it can deal with concepts of phaserfy and
realify that no other lang can.

This may seem ridiculous, but is in fact what a lot of imperative
languages do.
I won't go long here, but for instance, the addresses or references of
C and Perl are such. And in Java and a few other OOP langs, the
"iterator" and "enumerator" things are likewise immaterial.

As to OOP's iterator and enumerator things, and the general perspective
of extraneous concepts in languages, i'll have to write an essay in
detail some other day.

----

Thanks for the summary. Is there no one else who is able to read that
paper?

Xah
xah@xahlee.org
∑ http://xahlee.org/

> Xah Lee wrote:
> > in March, i posted an essay "What is Expressiveness in a Computer
> > Language", archived at:
> > http://xahlee.org/perl-python/what_is_expresiveness.html
> > ...
> > On the Expressive Power of Programming Languages, by Matthias
> > Felleisen, 1990.
> > http://www.ccs.neu.edu/home/cobbe/pl-seminar-jr/notes/2003-sep-26/expressive-slides.pdf

Joe Marshall wrote:
> The gist of the paper is this: Some computer languages seem to be
> more "expressive" than others.  But anything that can be computed in
> one Turing complete language can be computed in any other Turing
> complete language.  Clearly the notion of expressiveness isn't
> concerned with ultimately computing the answer.
>
> Felleisen's paper puts forth a formal definition of expressiveness in
> terms of semantic equivalences of small, local constructs.  In his
> definition, wholesale program transformation is disallowed so you
> cannot appeal to Turing completeness to claim program equivalence.
>
> Expressiveness isn't necessarily a good thing.  For instance, in C, you
> can express the addresses of variables by using pointers.  You cannot
> express the same thing in Java, and most people consider this to be a
> good idea.

 0
Reply Xah 6/14/2006 9:03:46 AM

"Joe Marshall" <eval.apply@gmail.com> writes:

> > On the Expressive Power of Programming Languages, by Matthias
> > Felleisen, 1990.
> > http://www.ccs.neu.edu/home/cobbe/pl-seminar-jr/notes/2003-sep-26/expressive-slides.pdf
>
> The gist of the paper is this: Some computer languages seem to be
> more "expressive" than others.  But anything that can be computed in
> one Turing complete language can be computed in any other Turing
> complete language.  Clearly the notion of expressiveness isn't
> concerned with ultimately computing the answer.
>
> Felleisen's paper puts forth a formal definition of expressiveness
> in terms of semantic equivalences of small, local constructs.  In
> his definition, wholesale program transformation is disallowed so
> you cannot appeal to Turing completeness to claim program
> equivalence.

I think expressiveness is more subtle than this. Basically, it boils
down to: "How quickly can I write a program to solve my problem?".
There are several aspects relevant to this issue, some of which are:

 - Compactness: How much do I have to type to do what I want?

 - Naturality: How much effort does it take to convert the concepts of
   my problem into the concepts of the language?

 - Feedback: Will the language provide sensible feedback when I write
   nonsensical things?

 - Reuse: How much effort does it take to reuse/change code to solve a
   similar problem?

Compactness is hard to measure. It isn't really about the number of
characters needed in a program, as I don't think one-character symbols
instead of longer keywords make a language more expressive. It is
better to count lexical units, but if there are too many different
predefined keywords and operators, this isn't reasonable either. Also,
the presence of opaque one-liners doesn't make a language expressive.
Additionally, as mentioned above, Turing-completeness (TC) allows you
to implement any TC language in any other, so above a certain size, the
choice of language doesn't affect size. But something like
(number of symbols in program)/log(number of different symbols) is not
too bad.
If programs are allowed to use standard libraries, the identifiers in
the libraries should be counted in the number of different symbols.

Naturality is very difficult to get a grip on, and it strongly depends
on the type of problem you want to solve. So it only makes sense to
talk about expressiveness relative to a set of problem domains. If this
set is small, domain-specific languages win hands down, so if you want
to compare expressiveness of general-purpose languages, you need a
large set of very different problems. And even with a single problem,
it is hard to get an objective measure of how difficult it is to map
the problem's concepts to those of the language. But you can normally
observe whether you need to overspecify the concept (i.e., you are
required to make arbitrary decisions when mapping from concept to
data), whether the mapping is onto (i.e., can you construct data that
isn't sensible in the problem domain) and how much redundancy your
representation has.

Feedback is a mixture of several things. Partly, it is related to
naturality, as a close match between problem concepts and language
concepts makes it less likely that you will express nonsense (relative
to the problem domain) that makes sense in the language. For example,
if you have to code everything as natural numbers, untyped pure lambda
calculus or S-expressions, there is a good chance that you can get
nonsense past the compiler. Additionally, it is about how difficult it
is to tie an observation about a computed result to a point in the
program.

Measuring reuse depends partly on what is meant by problems being
similar and also on whether you at the time you write the original code
can predict what types of problems you might later want to solve, i.e.,
if you can prepare the code for reuse. Some languages provide strong
mechanisms for reuse (templates, object hierarchies, etc.), but many of
those require that you can predict how the code is going to be reused.
So, maybe, you should measure how difficult it is to reuse a piece of
code that is _not_ written with reuse in mind. This reminds me a bit of
last year's ICFP contest, where part of the problem was adapting to a
change in specification after the code was written.

> Expressiveness isn't necessarily a good thing.  For instance, in C,
> you can express the addresses of variables by using pointers.  You
> cannot express the same thing in Java, and most people consider this
> to be a good idea.

I think this is pretty much covered by the above points on naturality
and feedback: Knowing the address of a value or object is an
overspecification unless the address maps back into something in the
problem domain.

On a similar note, is a statically typed language more or less
expressive than a dynamically typed language? Some would say less, as
you can write programs in a dynamically typed language that you can't
compile in a statically typed language (without a lot of encoding),
whereas the converse isn't true. However, I think this is misleading,
as it ignores the feedback issue: It takes longer for the average
programmer to get the program working in the dynamically typed
language.

Torben

 0
Reply torbenm 6/14/2006 1:42:25 PM

On 2006-06-14 09:42:25 -0400, torbenm@app-1.diku.dk (Torben Ægidius
Mogensen) said:

> It takes longer for the average
> programmer to get the program working in the dynamically typed
> language.

Though I agree with much of your post, I would say that many here find
the opposite to be true - it takes us longer to get a program working
in a statically typed language because we have to keep adding/changing
things to get the compiler to stop complaining and actually compile and
run a program which would be perfectly permissible in a dynamically
typed language such as common lisp - for example - heterogeneous lists
and forward references to as yet non-existent functions.
 0
Reply Raffael 6/14/2006 2:51:05 PM

Torben Ægidius Mogensen wrote:
> On a similar note, is a statically typed language more or less
> expressive than a dynamically typed language? Some would say less, as
> you can write programs in a dynamically typed language that you can't
> compile in a statically typed language (without a lot of encoding),
> whereas the converse isn't true. However, I think this is misleading,
> as it ignores the feedback issue: It takes longer for the average
> programmer to get the program working in the dynamically typed
> language.

From the point of view purely of expressiveness I'd say it's rather
different.

If a language can express constraints of one kind, that is an increase
in expressiveness. If a language requires constraints to be expressed
in one particular way, that's a decrease in expressiveness.

So I would say languages that can be statically typed and can be
dynamically typed are the most expressive. Languages that require
static typing, or are dynamic but cannot express static typing, are
less expressive.

This meets my experience of what is useful in practice too: static
typing for everything is painful for writing simple code. Pure dynamic
typing is painful when writing complex code because it makes impossible
a layer of error checking that could otherwise be useful.

 0
Reply Rob 6/14/2006 3:12:12 PM

Torben Ægidius Mogensen schrieb:
> For example,
> if you have to code everything as natural numbers, untyped pure lambda
> calculus or S-expressions, there is a good chance that you can get
> nonsense past the compiler.

Also past peer review and/or debugging runs. And, most importantly,
past your own mental review processes.

Regards,
Jo

 0
Reply Joachim 6/14/2006 6:55:40 PM

Raffael Cavallaro schrieb:
> On 2006-06-14 09:42:25 -0400, torbenm@app-1.diku.dk (Torben Ægidius
> Mogensen) said:
>
>> It takes longer for the average
>> programmer to get the program working in the dynamically typed
>> language.
> Though I agree with much of your post I would say that many here find
> the opposite to be true - it takes us longer to get a program working
> in a statically typed language because we have to keep adding/changing
> things to get the compiler to stop complaining and actually compile
> and run

I think Torben was assuming a language with type inference. You write
only those type annotations that really carry meaning (and some people
let the compiler infer even these).

> a program which would be perfectly permissible in a dynamically
> typed language such as common lisp - for example - heterogeneous lists
> and forward references to as yet non-existent functions.

Um... heterogenous lists are not necessarily a sign of expressiveness.
The vast majority of cases can be transformed to homogenous lists
(though these might then contain closures or OO objects).

As to references to nonexistent functions - heck, I never missed these,
not even in languages without type inference :-)

I don't hold that they are a sign of *in*expressiveness either. They
are just typical of highly dynamic programming environments such as
Lisp or Smalltalk.

Regards,
Jo

 0
Reply Joachim 6/14/2006 7:04:34 PM

Rob Thorpe schrieb:
>
> If a language can express constraints of one kind, that is an increase
> in expressiveness.

Agreed.

> If a language requires constraints to be expressed in one particular
> way, that's a decrease in expressiveness.

Unless alternatives would be redundant. Having redundant ways to
express the same thing doesn't make a language more or less expressive
(but programs written in it become more difficult to maintain).

> So I would say languages that can be statically typed and can be
> dynamically typed are the most expressive. Languages that require
> static typing, or are dynamic but cannot express static typing, are
> less expressive.

Note that this is a different definition of expressiveness. (The term
is very diffuse...)
I think Felleisen's paper defines something that should be termed
"conciseness". Whether there's a way to express constraints or other
static properties of the software is something different. I don't have
a good word for it, but "expressiveness" covers too much for my taste
to really fit.

Regards,
Jo

 0
Reply Joachim 6/14/2006 7:09:08 PM

Torben Ægidius Mogensen wrote:
> On a similar note, is a statically typed language more or less
> expressive than a dynamically typed language? Some would say less, as
> you can write programs in a dynamically typed language that you can't
> compile in a statically typed language (without a lot of encoding),
> whereas the converse isn't true.

It's important to get the levels right here: A programming language
with a rich static type system is more expressive at the type level,
but less expressive at the base level (for some useful notion of
expressiveness ;).

> However, I think this is misleading,
> as it ignores the feedback issue: It takes longer for the average
> programmer to get the program working in the dynamically typed
> language.

This doesn't seem to capture what I hear from Haskell programmers who
say that it typically takes quite a while to convince the Haskell
compiler to accept their programs. (They perceive this to be worthwhile
because of some benefits wrt correctness they claim to get in return.)

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/

 0
Reply Pascal 6/14/2006 8:18:03 PM

Joachim Durchholz <jo@durchholz.org> writes:

> Raffael Cavallaro schrieb:
>> a program which would be perfectly permissible in a dynamically
>> typed language such as common lisp - for example - heterogeneous
>> lists and forward references to as yet non-existent functions.
>
> Um... heterogenous lists are not necessarily a sign of
> expressiveness.
> The vast majority of cases can be transformed to
> homogenous lists (though these might then contain closures or OO
> objects).

In lisp, all lists are homogenous: lists of T.

--
__Pascal Bourguignon__               http://www.informatimago.com/

ADVISORY: There is an extremely small but nonzero chance that, through
a process known as "tunneling," this product may spontaneously
disappear from its present location and reappear at any random place in
the universe, including your neighbor's domicile. The manufacturer will
not be responsible for any damages or inconveniences that may result.

 0
Reply Pascal 6/14/2006 8:36:52 PM

In article <4fb97sF1if8l6U1@individual.net>, Pascal Costanza wrote:
> Torben Ægidius Mogensen wrote:
>
>> On a similar note, is a statically typed language more or less
>> expressive than a dynamically typed language? Some would say less, as
>> you can write programs in a dynamically typed language that you can't
>> compile in a statically typed language (without a lot of encoding),
>> whereas the converse isn't true.
>
> It's important to get the levels right here: A programming language
> with a rich static type system is more expressive at the type level,
> but less expressive at the base level (for some useful notion of
> expressiveness ;).

This doesn't seem obviously the case to me. If you have static
information about your program, the compiler can use this information
to automate a lot of grunt work away.

Haskell's system of typeclasses works this way. If you tell the
compiler how to print integers, and how to print lists, then when you
call a print function on a list of lists of integers, the compiler will
automatically figure out the right print function using your base
definitions. This yields an increase in Felleisen-expressiveness over a
dynamically typed language, because you would need to globally
restructure your program to achieve a similar effect.
More dramatic are the "polytypic" programming languages, which let you
automate even more by letting you write generic map, fold, and print
functions which work at every type.

> This doesn't seem to capture what I hear from Haskell programmers
> who say that it typically takes quite a while to convince the
> Haskell compiler to accept their programs. (They perceive this to be
> worthwhile because of some benefits wrt correctness they claim to
> get in return.)

This is true, if you are a novice learning the language, or if you are
an expert programming with good style. If you encode your invariants in
the types, then type errors will signal broken invariants. But:
learning how to use the type system to encode significantly complex
invariants (eg, that an abstract syntax tree representing an HTML
document actually satisfies all of the constraints on valid HTML) takes
experience to do well.

--
Neel Krishnaswami
neelk@cs.cmu.edu

 0
Reply Neelakantan 6/14/2006 10:57:58 PM

On 2006-06-14 15:04:34 -0400, Joachim Durchholz <jo@durchholz.org> said:

> Um... heterogenous lists are not necessarily a sign of expressiveness.
> The vast majority of cases can be transformed to homogenous lists
> (though these might then contain closures or OO objects).
>
> As to references to nonexistent functions - heck, I never missed
> these, not even in languages without type inference :-)
>
> I don't hold that they are a sign of *in*expressiveness either. They
> are just typical of highly dynamic programming environments such as
> Lisp or Smalltalk.

This is a typical static type advocate's response when told that users
of dynamically typed languages don't want their hands tied by a type
checking compiler:

"*I* don't find those features expressive so *you* shouldn't want them."
You'll have to excuse us poor dynamically typed language rubes - we
find these features expressive and we don't want to give them up just
to silence a compiler whose static type checks are of dubious value in
a world where user inputs of an often unpredictable nature can come at
a program from across a potentially malicious internet, making run-time
checks a practical necessity.

 0
Reply Raffael 6/15/2006 5:42:33 AM

On 2006-06-14 16:36:52 -0400, Pascal Bourguignon <pjb@informatimago.com>
said:

> In lisp, all lists are homogenous: lists of T.

CL-USER 123 > (loop for elt in (list #\c 1 2.0d0 (/ 2 3))
                    collect (type-of elt))
(CHARACTER FIXNUM DOUBLE-FLOAT RATIO)

i.e., "heterogenous" in the common lisp sense: having different dynamic
types, not in the H-M sense in which all lisp values are of the single
union type T.

 0
Reply Raffael 6/15/2006 5:58:24 AM

Neelakantan Krishnaswami wrote:
> In article <4fb97sF1if8l6U1@individual.net>, Pascal Costanza wrote:
>> Torben Ægidius Mogensen wrote:
>>
>>> On a similar note, is a statically typed langauge more or less
>>> expressive than a dynamically typed language? Some would say less, as
>>> you can write programs in a dynamically typed language that you can't
>>> compile in a statically typed language (without a lot of encoding),
>>> whereas the converse isn't true.
>> It's important to get the levels right here: A programming language
>> with a rich static type system is more expressive at the type level,
>> but less expressive at the base level (for some useful notion of
>> expressiveness ;).
>
> This doesn't seem obviously the case to me. If you have static
> information about your program, the compiler can use this information
> to automate a lot of grunt work away.
>
> Haskell's system of typeclasses work this way.
> If you tell the
> compiler how to print integers, and how to print lists, then when you
> call a print function on a list of list of integers, then the compiler
> will automatically figure out the right print function using your base
> definitions. This yields an increase in Felleisen-expressiveness over
> a dynamically typed language, because you would need to globally
> restructure your program to achieve a similar effect.
>
> More dramatic are the "polytypic" programming languages, which let you
> automate even more by letting you write generic map, fold, and print
> functions which work at every type.

Yes, but these decisions are taken at compile time, without running the
program.

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/

 0
Reply Pascal 6/15/2006 7:26:29 AM

Neelakantan Krishnaswami <neelk@cs.cmu.edu> writes:

> Haskell's system of typeclasses work this way. If you tell the
> compiler how to print integers, and how to print lists, then when you
> call a print function on a list of list of integers, then the compiler
> will automatically figure out the right print function using your base
> definitions. This yields an increase in Felleisen-expressiveness over
> a dynamically typed language, because you would need to globally
> restructure your program to achieve a similar effect.

Most uses of Haskell classes dispatch on types of function arguments.
The example you mention is easily achievable with dynamic typing:
instead of dispatching on static type information, you dispatch on the
actual type of the datum, and on types of individual list elements.

There are uses of typeclasses which would require explicit passing of
additional information, where the implementation is chosen based on
something other than argument types, e.g. when you want to print the
empty list differently based on the potential element type. Only in
these cases are they more expressive than dynamic typing.
> More dramatic are the "polytypic" programming languages, which let you
> automate even more by letting you write generic map, fold, and print
> functions which work at every type.

Actually this is even easier with dynamic typing.

> If you encode your invariants in the types, then type errors will
> signal broken invariants. But: learning how to use the type system to
> encode significantly complex invariants (eg, that an abstract syntax
> tree representing an HTML document actually satisfies all of the
> constraints on valid HTML) takes experience to do well.

I've once done exactly this. The types were generated automatically
based on the DTD. I'm not convinced that this is the best idea. HTML
tags 'ins' and 'del' may appear anywhere below 'body', under any other
element; they are not listed explicitly at each potential parent. This
complicates the types significantly. And they may be used either as
block elements or as inline elements, where what is their valid
contents depends on where they appear. This constraint is not expressed
in the DTD.

This would be simpler with a fully generic structure of elements which
doesn't try to express any HTML-specific constraints in types.

--
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/

 0
Reply qrczak (1265) 6/15/2006 9:28:41 AM

Neelakantan Krishnaswami <neelk@cs.cmu.edu>:

> More dramatic are the "polytypic" programming languages, which let you
> automate even more by letting you write generic map, fold, and print
> functions which work at every type.

Ok, can you give an example of (or a pointer to) a polytypic language?
Especially nice would be an example of writing generic map, fold, and
print functions with their corresponding (non-polytypic) Haskell or ML
equivalents.
-Chris

 0
Reply cfc (239) 6/15/2006 2:47:11 PM

Raffael Cavallaro <raffaelcavallaro@pas-d'espam-s'il-vous-plait-mac.com> writes:

> On 2006-06-14 16:36:52 -0400, Pascal Bourguignon <pjb@informatimago.com> said:
>
>> In lisp, all lists are homogenous: lists of T.
>
> CL-USER 123 > (loop for elt in (list #\c 1 2.0d0 (/ 2 3)) collect
>               (type-of elt))
> (CHARACTER FIXNUM DOUBLE-FLOAT RATIO)
>
> i.e., "heterogenous" in the common lisp sense: having different
> dynamic types, not in the H-M sense in which all lisp values are of
> the single union type T.

What's the difference? Dynamically typed values _are_ all members of a single tagged union type. The main difference is that the tags aren't always visible and that there are only a fixed, predefined number of them.

Torben

 0
Reply torbenm 6/16/2006 9:04:33 AM
That's the point: Bugs that in dynamically typed languages would require testing to find are found by the compiler in a statically typed language. So while it may take longer to get a program that gets past the compiler, it takes less time to get a program that works.

Torben

 0
Reply torbenm 6/16/2006 9:07:57 AM

Raffael Cavallaro wrote:
> On 2006-06-14 15:04:34 -0400, Joachim Durchholz <jo@durchholz.org> said:
>
>> Um... heterogenous lists are not necessarily a sign of expressiveness.
>> The vast majority of cases can be transformed to homogenous lists
>> (though these might then contain closures or OO objects).
>>
>> As to references to nonexistent functions - heck, I never missed
>> these, not even in languages without type inference :-)
>>
>> [[snipped - doesn't seem to relate to your answer]]
>
> This is a typical static type advocate's response when told that users
> of dynamically typed languages don't want their hands tied by a type
> checking compiler:
>
> "*I* don't find those features expressive so *you* shouldn't want them."

And this is a typical dynamic type advocate's response when told that static typing has different needs:

"*I* don't see the usefulness of static typing so *you* shouldn't want it, either."

No ad hominem arguments, please. If you find my position indefensible, give counterexamples. Give a heterogenous list that would be too awkward to live in a statically-typed language. Give a case of calling nonexistent functions that's useful. You'll get your point across far better that way.

Regards,
Jo

 0
Reply Joachim 6/16/2006 9:22:08 AM

Torben Ægidius Mogensen wrote:
> Raffael Cavallaro <raffaelcavallaro@pas-d'espam-s'il-vous-plait-mac.com> writes:
>
>> On 2006-06-14 16:36:52 -0400, Pascal Bourguignon <pjb@informatimago.com> said:
>>
>>> In lisp, all lists are homogenous: lists of T.
>> CL-USER 123 > (loop for elt in (list #\c 1 2.0d0 (/ 2 3)) collect
>>               (type-of elt))
>> (CHARACTER FIXNUM DOUBLE-FLOAT RATIO)
>>
>> i.e., "heterogenous" in the common lisp sense: having different
>> dynamic types, not in the H-M sense in which all lisp values are of
>> the single union type T.
>
> What's the difference? Dynamically typed values _are_ all members of
> a single tagged union type.

Yes, but that's mostly a meaningless statement in a dynamically typed language. In a dynamically typed language, you typically don't care about the static types.

> The main difference is that the tags
> aren't always visible and that there are only a fixed, predefined
> number of them.

Depending on the language, the number of "tags" is not fixed.

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/

 0
Reply Pascal 6/16/2006 9:47:09 AM

Torben Ægidius Mogensen wrote:
> Pascal Costanza <pc@p-cos.net> writes:
>
>> Torben Ægidius Mogensen wrote:
>>
>>> On a similar note, is a statically typed language more or less
>>> expressive than a dynamically typed language? Some would say less, as
>>> you can write programs in a dynamically typed language that you can't
>>> compile in a statically typed language (without a lot of encoding),
>>> whereas the converse isn't true.
>>
>> It's important to get the levels right here: A programming language
>> with a rich static type system is more expressive at the type level,
>> but less expressive at the base level (for some useful notion of
>> expressiveness ;).
>>
>>> However, I think this is misleading,
>>> as it ignores the feedback issue: It takes longer for the average
>>> programmer to get the program working in the dynamically typed
>>> language.
>>
>> This doesn't seem to capture what I hear from Haskell programmers who
>> say that it typically takes quite a while to convince the Haskell
>> compiler to accept their programs.
>> (They perceive this to be worthwhile because of some benefits wrt
>> correctness they claim to get in return.)
>
> That's the point: Bugs that in dynamically typed languages would
> require testing to find are found by the compiler in a statically
> typed language.

Yes. However, unfortunately statically typed languages also reject programs that don't have such bugs. It's a tradeoff whether you want to spend time to deal with them or not.

> So while it may take longer to get a program that gets
> past the compiler, it takes less time to get a program that works.

That's incorrect. See http://haskell.org/papers/NSWC/jfp.ps - especially Figure 3.

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/

 0
Reply Pascal 6/16/2006 9:59:25 AM

"Joachim Durchholz" <jo@durchholz.org> wrote in message news:e6tt7j$b41$1@online.de...
> Raffael Cavallaro wrote:
>> On 2006-06-14 15:04:34 -0400, Joachim Durchholz <jo@durchholz.org> said:
>>
>>> Um... heterogenous lists are not necessarily a sign of expressiveness.
>>> The vast majority of cases can be transformed to homogenous lists
>>> (though these might then contain closures or OO objects).
>>>
>>> As to references to nonexistent functions - heck, I never missed these,
>>> not even in languages without type inference :-)
>>>
>>> [[snipped - doesn't seem to relate to your answer]]
>
> Give a heterogenous list that would be too awkward to live in a
> statically-typed language.

Many lists are heterogenous, even in statically typed languages. For instance, Lisp code is made of lists, with several kinds of atoms and sub-lists. A car dealer will sell cars, trucks and equipment. In a statically typed language you would need to type the list on a common ancestor. What would then be the point of static typing, as you still need to type-check each element in order to process that list?
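Sacha's scenario can be made concrete with a hypothetical Python sketch (the names are invented for illustration): the dealer's inventory is a heterogeneous list, and each element is processed by dispatching on its runtime type rather than on a declared common ancestor.

```python
def describe(item):
    # Dispatch on the runtime type of each element; a statically typed
    # version would instead need a common ancestor or a sum type.
    if isinstance(item, bool):          # bool before int: bool subclasses int
        return "in stock" if item else "sold out"
    if isinstance(item, int):
        return "count: %d" % item
    if isinstance(item, float):
        return "tons: %.1f" % item
    return "model: %s" % item

inventory = ["sedan", 3, 2.5]  # a model name, a truck count, an equipment weight
print([describe(x) for x in inventory])
```

The statically typed rendering of the same list would be a sum type (or class hierarchy) with one constructor per case, i.e. essentially the same dispatch moved into the type declarations.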
Sure you can do this in a statically-typed language, you just need to make sure some relevant ancestor exists. In my experience you'll end up with the base object-class more often than not, and that's what I call dynamic typing.

> Give a case of calling nonexistent functions that's useful.

I might want to test some other parts of my program before writing this function. Or maybe my program will compile that function depending on user input. As long as I get a warning for calling a non-existing function, everything is fine.

Sacha

 0
Reply Sacha 6/16/2006 10:10:17 AM

Sacha wrote:
> "Joachim Durchholz" <jo@durchholz.org> wrote in message
> news:e6tt7j$b41$1@online.de...
>> Give a heterogenous list that would be too awkward to live in a
>> statically-typed language.
>
> Many lists are heterogenous, even in statically typed languages.
> For instance, Lisp code is made of lists, with several kinds of atoms
> and sub-lists. A car dealer will sell cars, trucks and equipment.
> In a statically typed language you would need to type the list on a
> common ancestor. What would then be the point of static typing, as you
> still need to type-check each element in order to process that list?
> Sure you can do this in a statically-typed
> language, you just need to make sure some relevant ancestor exists.
> In my experience
> you'll end up with the base object-class more often than not, and
> that's what I call dynamic typing.

In my experience you won’t. I almost never have a List<Object> (Java), and when I have one, I start thinking about how I can improve the code to get rid of it.

H.

--
Hendrik Maryns
==================
http://aouw.org
Ask smart questions, get good answers:
http://www.catb.org/~esr/faqs/smart-questions.html

 0
Reply Hendrik 6/16/2006 10:28:54 AM

Torben Ægidius Mogensen wrote:
> Pascal Costanza <pc@p-cos.net> writes:
>
>> Torben Ægidius Mogensen wrote:
>>
>>> On a similar note, is a statically typed language more or less
>>> expressive than a dynamically typed language? Some would say less, as
>>> you can write programs in a dynamically typed language that you can't
>>> compile in a statically typed language (without a lot of encoding),
>>> whereas the converse isn't true.
>>
>> It's important to get the levels right here: A programming language
>> with a rich static type system is more expressive at the type level,
>> but less expressive at the base level (for some useful notion of
>> expressiveness ;).
>>
>>> However, I think this is misleading,
>>> as it ignores the feedback issue: It takes longer for the average
>>> programmer to get the program working in the dynamically typed
>>> language.
>>
>> This doesn't seem to capture what I hear from Haskell programmers who
>> say that it typically takes quite a while to convince the Haskell
>> compiler to accept their programs. (They perceive this to be
>> worthwhile because of some benefits wrt correctness they claim to get
>> in return.)
> That's the point: Bugs that in dynamically typed languages would
> require testing to find are found by the compiler in a statically
> typed language. So while it may take longer to get a program that gets
> past the compiler, it takes less time to get a program that works.

In my experience the opposite is true for many programs. Having to actually know the precise type of every variable while writing the program is not necessary; it's a detail often not relevant to the core problem. The language should be able to take care of itself.

In complex routines it can be useful for the programmer to give types and for the compiler to issue errors when they are contradicted. But for most routines it's just an unnecessary chore that the compiler forces on the programmer.

 0
Reply Rob 6/16/2006 10:40:46 AM

Pascal Costanza <pc@p-cos.net> writes:
> Torben Ægidius Mogensen wrote:
>> So while it may take longer to get a program that gets
>> past the compiler, it takes less time to get a program that works.
>
> That's incorrect. See http://haskell.org/papers/NSWC/jfp.ps -
> especially Figure 3.

There are many other differences between these languages than static vs. dynamic types, and some of these differences are likely to be more significant. What you need to test is languages with similar features and syntax, except one is statically typed and the other dynamically typed.

And since these languages would be quite similar, you can use the same test persons: First let one half solve a problem in the statically typed language and the other half the same problem in the dynamically typed language, then swap for the next problem. If you let a dozen persons each solve half a dozen problems, half in the statically typed language and half in the dynamically typed language (using different splits for each problem), you might get a useful figure.
Torben

 0
Reply torbenm 6/16/2006 11:38:26 AM

"Rob Thorpe" <robert.thorpe@antenova.com> writes:
> Torben Ægidius Mogensen wrote:
>> That's the point: Bugs that in dynamically typed languages would
>> require testing to find are found by the compiler in a statically
>> typed language. So while it may take longer to get a program that gets
>> past the compiler, it takes less time to get a program that works.
>
> In my experience the opposite is true for many programs.
> Having to actually know the precise type of every variable while
> writing the program is not necessary, it's a detail often not relevant
> to the core problem. The language should be able to take care of
> itself.
>
> In complex routines it can be useful for the programmer to give types
> and for the compiler to issue errors when they are contradicted. But
> for most routines it's just an unnecessary chore that the compiler
> forces on the programmer.

Indeed. So use a language with type inference.

Torben

 0
Reply torbenm 6/16/2006 11:53:10 AM

Torben Ægidius Mogensen wrote:
> Bugs that in dynamically typed languages would
> require testing to find are found by the compiler in a statically
> typed language. So whil[e ]it may take [l]onger to get a program that[ ]
> gets past the compiler, it takes less time to get a program that works.

If it were that simple, I would say: compile time type inference is the only way to go.

--
Affijn, Ruud
"Gewoon is een tijger." ("Ordinary is a tiger.")

 0
Reply Dr 6/16/2006 12:16:07 PM

Torben Ægidius Mogensen wrote:
> Pascal Costanza <pc@p-cos.net> writes:
>
>> Torben Ægidius Mogensen wrote:
>>
>>> So while it may take longer to get a program that gets
>>> past the compiler, it takes less time to get a program that works.
>>
>> That's incorrect. See http://haskell.org/papers/NSWC/jfp.ps -
>> especially Figure 3.
>
> There are many other differences between these languages than static
> vs. dynamic types, and some of these differences are likely to be more
> significant.
> What you need to test is languages with similar features
> and syntax, except one is statically typed and the other dynamically
> typed.
>
> And since these languages would be quite similar, you can use the same
> test persons: First let one half solve a problem in the statically
> typed language and the other half the same problem in the dynamically
> typed language, then swap for the next problem. If you let a dozen
> persons each solve half a dozen problems, half in the statically typed
> language and half in the dynamically typed language (using different
> splits for each problem), you might get a useful figure.

...and until then claims about the influence of static type systems on the speed with which you can implement working programs are purely guesswork. That's the only point I need to make to show that your original unqualified statement, namely that it takes less time to get a program that works, is incorrect.

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/

 0
Reply Pascal 6/16/2006 2:48:28 PM

Torben Ægidius Mogensen wrote:
> There are several aspects relevant to this issue, some of which are:
> - Compactness: How much do I have to type to do what I want?
> - Naturality: How much effort does it take to convert the concepts of
>   my problem into the concepts of the language?
> - Feedback: Will the language provide sensible feedback when I write
>   nonsensical things?
> - Reuse: How much effort does it take to reuse/change code to solve a
>   similar problem?

I am fairly new to Haskell, but the compactness of the language and the way you can express a lot in a very small amount of real estate is very important. I used to program back in the 80's in Forth a lot... okay, I'm a dinosaur!, but "good" definitions were usually very short and sweet. Unicon/Icon that I used {still do!} in the imperative world: very compact.
I will give an example that covers compactness and reusability, and which, because of static typing, will give back mis-type info when you load a new "a or xs" into it to form a new function.

-- what is happening below is that a is being replaced with the curried function ((*) 3) and
-- xs with a list, [1..6], so when that definition of f is used it is type checked to see if the
-- elements in xs match the type of a; so if this were going to be compiled, it would be
-- checked and guaranteed to work.

Prelude> let f a xs = putStr $ foldr (++) "\n" $ map (((++) "\n") . show . a) xs
Prelude> :t f
f :: forall a b. (Show b) => (a -> b) -> [a] -> IO ()
Prelude> f ((*) 3) [1..6]

3
6
9
12
15
18
Prelude>

Another substitution of parameters, using the same definition of f: the polymorphic parameters allowed in Haskell add to the versatility and reusability angle:

Prelude> f sqrt [0.5,1.0..4]

0.7071067811865476
1.0
1.224744871391589
1.4142135623730951
1.5811388300841898
1.7320508075688772
1.8708286933869707
2.0

Same function f, now used with a different type:

[0.5,1.0..4] :: forall a. (Fractional a, Enum a) => [a]

I don't know, but this just makes programming fun, for me anyway, and if it is fun, it is expressive. I've heard this statement made about Ruby and Unicon, to name a few... some would say Python... but it really applies to the functional languages too: with all their strict typing, given the type inference mechanisms, it isn't usually that big a deal. If you just get into it, and can learn to take some constructive criticism from your compiler, well hey, it is a really patient teacher... you might get frustrated at times... but the compiler will happily remind you of the same type mis-matches until you get a handle on some concept, and never once complain...

Happy Programming to all!
--
gene

 0
Reply genea 6/16/2006 3:13:42 PM

On 2006-06-16 05:22:08 -0400, Joachim Durchholz <jo@durchholz.org> said:

> And this is a typical dynamic type advocate's response when told that
> static typing has different needs:
>
> "*I* don't see the usefulness of static typing so *you* shouldn't want
> it, either."

But I haven't made this sort of argument. I never said you shouldn't use static typing if you want to. There are indeed types of software where one wants the guarantees provided by static type checks. For example, software that controls irreplaceable or very expensive equipment such as spacecraft, or software that can kill people if it fails, such as software for aircraft or medical devices.

The problem for static typing advocates is that most software is not of this type. There is a very large class of software where user inputs are unpredictable and/or where input data comes from an untrusted source. In these cases run-time checks are going to be needed anyway, so the advantages of static type checking are greatly reduced - you end up doing run-time checks anyway, precisely the thing you were trying to avoid by doing static analysis. In software like this it isn't worth satisfying a static type checker, because you don't get much of the benefit anyway and it means forgoing such advantages of dynamic typing as being able to run and test portions of a program before other parts are written (forward references to as yet nonexistent functions).

Ideally one wants a language with switchable typing - static where possible and necessary, dynamic elsewhere. To a certain extent this is what common lisp does, but it requires programmer declarations. Some implementations try to move beyond this by doing type inference and alerting the programmer to potential static guarantees that the programmer could make that would allow the compiler to do a better job. In effect the argument comes down to which kind of typing one thinks should be the default.
Dynamic typing advocates think that static typing is the wrong default. The notion that static typing can prove program correctness is flawed - it can only prove that type constraints are not violated, but not necessarily that program logic is correct.

It seems to me that if we set aside that class of software where safety is paramount - mostly embedded software such as aircraft and medical devices - we are left mostly with efficiency concerns. The 80-20 rule suggests that most code doesn't really need the efficiency provided by static guarantees. So static typing should be invoked for that small portion of a program where efficiency is really needed, and dynamic typing should be the default elsewhere. This is how common lisp works - dynamic typing by default with static guarantees available where one needs them.

 0
Reply Raffael 6/16/2006 3:29:12 PM

On 2006-06-16 11:29:12 -0400, Raffael Cavallaro <raffaelcavallaro@pas-d'espam-s'il-vous-plait-mac.com> said:

> In software like this it isn't worth satisfying a static type checker
> because you don't get much of the benefit
> anywaytext�Dx�description�text�Dx�fromname
> as being able to run and test portions of a program before other parts
> are written (forward references to as yet nonexistent functions).

I don't know what bizarre key combination I accidentally hit here, but the original read:

In software like this it isn't worth satisfying a static type checker because you don't get much of the benefit anyway and it means forgoing such advantages of dynamic typing as being able to run and test portions of a program before other parts are written (forward references to as yet nonexistent functions).

 0
Reply Raffael 6/16/2006 3:37:45 PM

Joachim Durchholz wrote:
> Give a heterogenous list that would be too awkward to live in a
> statically-typed language.

Write a function that takes an arbitrary set of arguments and stores them into a structure allocated on the heap.

> Give a case of calling nonexistent functions that's useful.
See the Tcl "unknown" proc, used for interactive command expansion, dynamic loading of code on demand, etc.

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.

 0
Reply Darren 6/16/2006 4:45:46 PM

In article <87ejxq22eu.fsf@qrnik.zagroda>, Marcin 'Qrczak' Kowalczyk wrote:

> Neelakantan Krishnaswami <neelk@cs.cmu.edu> writes:
>
>> Haskell's system of typeclasses works this way. If you tell the
>> compiler how to print integers, and how to print lists, then when you
>> call a print function on a list of lists of integers, the compiler
>> will automatically figure out the right print function using your base
>> definitions. This yields an increase in Felleisen-expressiveness over
>> a dynamically typed language, because you would need to globally
>> restructure your program to achieve a similar effect.
>
> Most uses of Haskell classes dispatch on types of function arguments.
>
> The example you mention is easily achievable with dynamic typing:
> instead of dispatching on static type information, you dispatch on the
> actual type of the datum, and on types of individual list elements.

No, this is not the case. Consider the example of a read function, which takes a string and produces a value. There, you don't have anything to dispatch on. In Haskell, you can use the type of the context in which the result of the read is expected to figure out which function to dispatch to.

>> More dramatic are the "polytypic" programming languages, which let
>> you automate even more by letting you write generic map, fold, and
>> print functions which work at every type.
>
> Actually this is even easier with dynamic typing.

No, it's not. A function like (eg) map has a recursive definition which mirrors the shape of the input data it receives.
Eg, lists are generated by the grammar:

  list ::= nil | cons(elt, list)

and the map function is

  map f nil       = nil
  map f cons(h,t) = cons(f h, map f t)

Now, imagine backwards-lists:

  blist ::= nil | snoc(blist, elt)

  map f nil       = nil
  map f snoc(t,h) = snoc(map f t, f h)

Lists and backwards-lists differ only in the order of the arguments to cons/snoc, and this means that the order of the recursive calls in the two map functions must differ. If you want to write a single generic map function, it has to know which argument to apply f to, and which argument to apply the recursive call to. In a dynamically typed language, there's no way to tell at compile time which field is which, because a field can hold anything. A static type gives you this information; you can read it off the type, and use it to generate the map functions. This lets you synthesize the appropriate definition automatically.

>> If you encode your invariants in the types, then type errors will
>> signal broken invariants. But: learning how to use the type system to
>> encode significantly complex invariants (eg, that an abstract syntax
>> tree representing an HTML document actually satisfies all of the
>> constraints on valid HTML) takes experience to do well.
>
> I've once done exactly this. The types were generated automatically
> based on the DTD.
>
> I'm not convinced that this is the best idea.

I agree with you about this. It really depends on the quality of error messages your implementation provides.

--
Neel Krishnaswami
neelk@cs.cmu.edu

 0
Reply neelk (298) 6/16/2006 4:59:06 PM

Joachim Durchholz wrote:
> Give a heterogenous list that would be too awkward to live in a
> statically-typed language.

Printf()?

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.
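Neel's cons/snoc maps a few posts up can be transcribed into Python with tagged tuples (a hypothetical encoding, not from the thread). Nothing in the tuples themselves says which field is the element and which is the recursive tail, so the two map functions must be written separately; that missing information is exactly what Neel says a static type would supply.

```python
nil = ("nil",)

def cons(h, t):
    return ("cons", h, t)   # element first, tail second

def snoc(t, h):
    return ("snoc", t, h)   # tail first, element second

def map_cons(f, xs):
    # mirrors: map f cons(h,t) = cons(f h, map f t)
    if xs == nil:
        return nil
    _, h, t = xs
    return cons(f(h), map_cons(f, t))

def map_snoc(f, xs):
    # mirrors: map f snoc(t,h) = snoc(map f t, f h)
    if xs == nil:
        return nil
    _, t, h = xs
    return snoc(map_snoc(f, t), f(h))

print(map_cons(lambda x: x + 1, cons(1, cons(2, nil))))
```

At runtime both encodings are just 3-tuples, so a single "generic" map could not decide, from the data alone, which field to apply f to and which to recurse on.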
0 Reply Darren 6/16/2006 4:59:41 PM Darren New <dnew@san.rr.com> writes: > Joachim Durchholz wrote: >> Give a heterogenous list that would to too awkward to live in a >> statically-typed language. > > Printf()? Very good statically typed versions of printf exist. See, e.g., Danvy's unparsing combinators.   0 Reply Matthias 6/16/2006 6:10:34 PM Matthias Blume wrote: > Very good statically typed versions of printf exist. See, e.g., > Danvy's unparsing combinators. That seems to ignore the fact that the pattern is a string, which means that printf's first argument in Danvy's mechanism has to be a literal. You can't read the printf format from a configuration file (for example) to support separate languages. It doesn't look like the version of printf that can print its arguments in an order different from the order provided in the argument list is supported either; something like "%3$d"
or some such.

Second, what's the type of the argument that printf, sprintf, fprintf,
kprintf, etc all pass to the subroutine that actually does the
formatting? (Called vprintf, I think?)
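
For contrast, both concerns are easy to sketch in a dynamically typed
language; a minimal Python illustration (not Danvy's mechanism - the
helper name `vformat` is invented, playing the role vprintf plays for
printf):

```python
# 1. The format is plain data, so it can be read at run time, and {2}/{0}
#    reorder arguments much like POSIX "%3$d" directives do.
# 2. The worker's one argument type is simply "heterogeneous sequence".

def vformat(fmt, args):
    """vprintf-style worker: fmt is a runtime string, args a mixed list."""
    return fmt.format(*args)

fmt = "{2}: {0} of {1}"                       # could come from a config file
line = vformat(fmt, ["page", 10, "Index"])    # reorders: third argument first
```

The whole point of the typed approaches is to reject, at compile time,
exactly the mismatches this sketch only catches when `format` runs.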

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.

 0

I don't know anything about Perl and little about Python, so I skipped
those parts.

However, your Example: Symbolic Computation rings some bells.
The concept of a program that can write programs, especially one that
can manipulate and create variables, occurred to me long before I
heard about Lisp and its macros.
So the ability to express symbols is something natural in Lisp
that other languages suppress.

Graham stated this more clearly than I: programming in one language
makes you think in that language.

I've read a bunch of tips from C/C++/Java books like:
1. Avoid macros.
2. Avoid functions that do very complicated things while looking naive.

At the opposite end is Lisp, with its wishful thinking and almost
unlimited ways of thinking about and implementing your ideas.

Somebody said that Lisp is like a lever: it helps you achieve things
you were afraid even to think of doing in lesser languages.

Actually Lisp is more like a pulley:
a lot of strength for heavy lifting,
and enough rope to hang yourself.

The choice is all yours.
cheers
bobi


 0
Reply BLASESKI (158) 6/16/2006 8:28:51 PM

Darren New <dnew@san.rr.com> writes:

> Matthias Blume wrote:
>> Very good statically typed versions of printf exist.  See, e.g.,
>> Danvy's unparsing combinators.
>
> That seems to ignore the fact that the pattern is a string, which
> means that printf's first argument in Danvy's mechanism has to be a
> literal.

In Danvy's solution, the format argument is not a string.

> You can't read the printf format from a configuration file
> (for example) to support separate languages.

You don't need to do that if you want to support separate languages.
Moreover, reading the format string from external input is a good way
of opening your program to security attacks, since ill-formed data on
external media are then able to crash your program.

> It doesn't look like the
> version of printf that can print its arguments in an order different
> from the order provided in the argument list is supported either;
> something like "%3$d" or some such.

I am not familiar with the version of printf you are referring to, but
I am sure one could adapt Danvy's solution to support such a thing.

> Second, what's the type of the argument that printf, sprintf, fprintf,
> kprintf, etc all pass to the subroutine that actually does the
> formatting? (Called vprintf, I think?)

Obviously, a Danvy-style solution (see, e.g., the one in SML/NJ's
library) is not necessarily structured that way.  I don't see the
problem with typing, though.

Matthias

 0

Matthias Blume wrote:
> In Danvy's solution, the format argument is not a string.

That's what I said, yes.

>>You can't read the printf format from a configuration file
>>(for example) to support separate languages.

> You don't need to do that if you want to support separate languages.

That's kind of irrelevant to the discussion. We're talking about
collections of dynamically-typed objects, not the best mechanisms for
supporting I18N.

> Moreover, reading the format string from external input is a good way
> of opening your program to security attacks, since ill-formed data on
> external media are then able to crash your program.

Still irrelevant to the point.

> I am sure one could adapt Danvy's solution to support such a thing.

I'm not. It's consuming arguments as it goes, from what I understood of
the paper. It's translating, essentially, into a series of function
calls in argument order.

> Obviously, a Danvy-style solution (see, e.g., the one in SML/NJ's
> library) is not necessarily structured that way.  I don't see the
> problem with typing, though.

You asked for an example of a heterogeneous list that would be awkward in
a statically strongly-typed language. The arguments to printf() count,
methinks. What would the second argument to apply be if the first
argument is printf (since I'm reading this in the LISP group)?

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.

 0

Sacha schrieb:
>
> Many lists are heterogeneous, even in statically typed languages.
> For instance, Lisp code is made of lists, with several kinds of atoms
> and sub-lists..

Lisp isn't exactly a statically-typed language :-)

> A car dealer will sell cars, trucks and equipment..
> In a statically typed language you would need to type the list on a common
> ancestor...

Where's the problem with that?

BTW the OO way isn't the only way to set up a list from heterogeneous data.
In statically-typed FPL land, lists require homogeneous data types all
right, but the list elements aren't restricted to data - they can be
functions as well.
Now the other specialty of FPLs is that you can construct functions at
run-time - you take a function, fill some of its parameters and leave
others open - the result is another function. And since you'll iterate
over the list and will do homogeneous processing over it, you construct
the function so that it will do all the processing that you'll later need.

The advantage of the FPL way over the OO way is that you can use ad-hoc
functions. You don't need precognition to know which kinds of data
should be lumped under a common supertype - you simply write and/or
construct functions of a common type that will go into the list.
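
The FPL trick described above can be sketched even in a dynamically
typed language; a minimal Python version of the car-dealer example
(all names invented for illustration):

```python
# Instead of a list of heterogeneous records under a common supertype,
# store closures of one common type () -> str, each built by filling in
# the heterogeneous car/truck data up front. Iteration is then uniform.

def car_entry(model, doors):
    return lambda: f"car {model} ({doors} doors)"

def truck_entry(model, payload_tons):
    return lambda: f"truck {model} ({payload_tons}t payload)"

inventory = [car_entry("Beetle", 2), truck_entry("Actros", 18)]
listing = [entry() for entry in inventory]   # no per-element type tests
```

In an ML-family language, `inventory` would have the perfectly ordinary
homogeneous type `(unit -> string) list`, statically checked.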

> What would then be the point of static typing, as you still need to
> type-check each element in order to process that list?

Both OO and FPL construction allow static type checks.

> Sure you can do this in a statically-typed language, you just need to
> make sure some relevant ancestor exists. In my experience you'll end
> up with the base object-class more often than not, and that's what I
> call dynamic typing.

Not quite - the common supertype is more often than not actually useful.

However, getting the type hierarchy right requires a *lot* of
experimentation and fine-tuning. You can easily spend a year or more
(sometimes *much* more) with that (been there, done that). Even worse,
once the better hierarchy is available, you typically have to adapt all
the client code that uses it (been there, done that, too).

That's the problems in OO land. FPL land doesn't have these problems -
if the list type is just a function mapping two integers to another
integer, reworking the data types that went into the functions of the
list doesn't require those global changes.

>> Give a case of calling nonexistent functions that's useful.
>
> I might want to test some other parts of my program before writing this
> function.

That's unrelated to dynamic typing. All that's needed is an environment
that throws an exception once such an undefined function is called -
nothing prevents a statically-typed language from providing such an
environment. (Though, actually, C interpreters do exist.)

> Or maybe will my program compile that function depending on user input.

Hmm... do I really want this kind of power at the user's hand in the age
of malware?

> As long as i get a warning for calling a non-existing function, everything
> is fine.

That depends.
For software that's written to run once (or very few times), and where
somebody who's able to correct problems is always nearby, that's a
perfectly viable strategy.
For safety-critical software where problems must be handled within
seconds (or an even shorter period of time), you want to statically
ensure as many properties as you can. You'll take not just static
typing, you also want to ascertain value ranges and dozens of other
properties. (In Spark, an Ada subset, this is indeed done.)

Between those extremes, there's a broad spectrum.

 0

Raffael Cavallaro schrieb:
> There is a very large class of software where user inputs are
> unpredictable and/or where input data comes from an untrusted source. In
> these cases run-time checks are going to be needed anyway so the
> advantages of static type checking are greatly reduced - you end up
> doing run-time checks anyway, precisely the thing you were trying to
> avoid by doing static analysis.

There's still a large class of errors that *can* be excluded via type
checking.

> Ideally one wants a language with switchable typing - static where
> possible and necessary, dynamic elsewhere.

That has been my position for a long time now.

> To a certain extent this is
> what common lisp does but it requires programmer declarations. Some
> implementations try to move beyond this by doing type inference and
> alerting the programmer to potential static guarantees that the
> programmer could make that would allow the compiler to do a better job.

I think it's easier to start with a good (!) statically-typed language
and relax the checking, than to start with a dynamically-typed one and
retrofit static checks.

With the right restrictions, a language can make all kinds of strong
guarantees, and it can make it easy to construct software where static
guarantees abound. If the mechanisms are cleverly chosen, they interfere
just minimally with the programming process. (A classical example is
Hindley-Milner type inference systems. Typical reports from languages
with HM systems say that you can have it verify thousand-line programs
without a single type annotation in the code. That's actually far better
than you'd need - you'd *want* to document the types at least on the
major internal interfaces after all *grin*.)
With a dynamically-typed language, programming style tends to evolve in
directions that make it harder to give static guarantees.

> It seems to
> me that if we set aside that class of software where safety is paramount
> - mostly embedded software such as aircraft and medical devices - we are
> left mostly with efficiency concerns.

Nope. Efficiency has taken a back seat. Software is getting slower
(barely offset by increasing machine speed), and newer languages don't
even statically typecheck everything (C++, Java). (Note that the
impossibility to statically typecheck everything in OO languages doesn't
mean that it's impossible to do rigorous static checking in general.
FPLs have been quite rigorous about static checks; the only cases when
an FPL needs to dynamically typecheck its data structures is after
unmarshalling one from an untyped data source such as a network stream,
a file, or an IPC interface.)
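
The unmarshalling case can be made concrete with a small sketch (Python
here; the record shape and names are invented). The dynamic check
happens exactly once, at the untyped boundary:

```python
import json

def parse_point(raw):
    """Check untrusted input once; past this point the shape is guaranteed."""
    data = json.loads(raw)
    if not (isinstance(data, dict)
            and isinstance(data.get("x"), int)
            and isinstance(data.get("y"), int)):
        raise ValueError("ill-formed point")
    return (data["x"], data["y"])

point = parse_point('{"x": 3, "y": 4}')
```

In a statically checked program, everything downstream of `parse_point`
can rely on the pair-of-ints shape without further runtime tests.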

The prime factor nowadays seems to be maintainability.

And the difference here is this:
With dynamic typing, I have to rely on the discipline of the programmers
to document interfaces.
With static typing, the compiler will infer (and possibly document) at
least part of their semantics (namely the types).

> So static typing should be invoked for that small portion of a program
> where efficiency is really needed and that dynamic typing should be the
> default elswhere. This is how common lisp works - dynamic typing by
> default with static guarantees available where one needs them.

Actually static typing seems to become more powerful at finding errors
as the program size increases.
(Yes, that's a maintainability argument. Efficiency isn't *that*
important; since maintenance is usually the most important single
factor, squelching bugs even before testing is definitely helpful.)

Regards,
Jo

 0

Darren New schrieb:
> Joachim Durchholz wrote:
>> Give a heterogeneous list that would be too awkward to live in a
>> statically-typed language.
>
> Write a function that takes an arbitrary set of arguments and stores
> them into a structure allocated on the heap.

If the set of arguments is really arbitrary, then the software can't do
anything with it. In that case, the type is simply "opaque data block",
and storing it in the heap requires nothing more specific than that of
"opaque data block".
There's more to this. If we see a function with a parameter type of
"opaque data block", and there's no function available except copying
that data and comparing it for equality, then from simply looking at the
function's signature, we'll know that it won't inspect the data. More
interestingly, we'll know that funny stuff in the data might trigger
bugs in the code - in the context of a security audit, that's actually a
pretty strong guarantee, since the analysis can stop at the function's
interface and doesn't have to dig into the function's implementation.

>> Give a case of calling nonexistent functions that's useful.
>
> See the Tcl "unknown" proc, used for interactive command expansion,

Not related to dynamic typing, I fear - I can easily envision
alternatives to that in a statically-typed context.

Of course, you can't eliminate *all* run-time type checking. I already
mentioned unmarshalling data from an untyped source; another possibility
is run-time code compilation (highly dubious in a production system but
of value in a development system).

However, those are very specialized applications, easily catered for
by doing a dynamic type check plus a thrown exception in case the types
don't match. I still don't see a convincing argument for making dynamic
typing the standard policy.

Regards,
Jo

 0

On 2006-06-16 17:59:07 -0400, Joachim Durchholz <jo@durchholz.org> said:

> I think it's easier to start with a good (!) statically-typed language
> and relax the checking, than to start with a dynamically-typed one and
> retrofit static checks.
>
> With the right restrictions, a language can make all kinds of strong
> guarantees, and it can make it easy to construct software where static
> guarantees abound. If the mechanisms are cleverly chosen, they
> interfere just minimally with the programming process. (A classical
> example is Hindley-Milner type inference systems. Typical reports from
> languages with HM systems say that you can have it verify thousand-line
> programs without a single type annotation in the code. That's actually
> far better than you'd need - you'd *want* to document the types at
> least on the major internal interfaces after all *grin*.)
> With a dynamically-typed language, programming style tends to evolve in
> directions that make it harder to give static guarantees.

This is purely a matter of programming style. For explorative
programming, dynamic typing lets you add guarantees later rather than
having to make decisions about representation and have stubs for
everything right from the start. The
lisp programming style is arguably all about using heterogeneous lists
and forward references in the repl for everything until you know what
it is that you are doing, then choosing a more appropriate
representation and filling in forward references once the program gels.
Having to choose representation right from the start and needing
working versions (even if only stubs) of *every* function you call may
ensure type correctness, but many programmers find that it also ensures
that you never find the right program to code in the first place. This
is because you don't have the freedom to explore possible solutions
without having to break your concentration to satisfy the nagging of a
static type checker.
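
The same forward-reference style exists outside Lisp; a Python sketch
(function names invented): the module loads and the finished parts run,
while the not-yet-written helper only fails when actually reached:

```python
def subtotal(prices):
    return sum(prices)        # this part is testable right now

def total(prices):
    # 'discount' is a forward reference - not written yet. Loading this
    # module and exercising subtotal() works fine; only calling total()
    # raises NameError, like calling an undefined function from a REPL.
    return subtotal(prices) - discount(prices)
```

A conventional static checker would reject the whole module until
`discount` exists, which is exactly the trade-off under discussion.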


 0

This Thread started out with this....
What is Expressiveness in a Computer Language"
....
> Clearly the notion of  expressiveness isn't concerned with ultimately

It seems to me this thread ought to be moved to a new title, as it is
now about the virtues and vices of dynamic versus static typing,
am I alone in this perception... I guess that is just how it goes.. but
there is I think a lot more to expressiveness of a language than its
typing system, or lack thereof...   I find languages of both
statically typed and dynamically typed that are very good at say
writing self documenting code, and that seems to have more to do with
expressiveness than the type system..  I will admit that there has been
an interesting discussion spawned on type systems.. but seems to me it
ought to be under a different heading..

-- gene


 0
Reply yumagene (16) 6/17/2006 12:10:25 AM

genea wrote:
>
> It seems to me this thread ought to be moved to a new title, as it is
> now about the virtues and vices of dynamic versus static typing,

*All* threads cross-posted to multiple comp.lang.* newsgroups
end up as a discussion about the relative merits of static vs.
dynamic typing.

It is *useless* to resist.

Marshall


 0
Reply marshall.spight (580) 6/17/2006 12:52:12 AM

I'm of the mind that any programming language should be designed so that
it supports both dynamic and static typing methodologies. I've done more
than my fair share of hacking in Perl and ML. I can tell you the two
languages are so philosophically opposite in design, it's amazing that
they are both so useful.

However, back to my provocative title. The biggest and most important
reason everyone should sit down and program in a statically typed
language is that it fundamentally warps your mind.  It warps your mind
in a way that makes you think about problems more abstractly. It forces
you up front to do some basic design and makes you answer a lot of big
and little questions.

How much design you have to do up front depends on the particulars of
your language. I definitely feel like when I'm just exploring an area, I
prefer just to get coding and figure out how it should have been after
I'm done.

However, when I'm doing something where I'm pretty much clear about what
needs to get done and how, I start using a type system to sketch out
all the small details and the big decisions that need to be made.

It is an amazing feature that I can write down a complete interface for
an API directly in a typed language and code against it before providing
the implementation of it. Of course the big win is that  I spent some
time thinking about the interface and recording my thoughts and ideas in
a language supported artifact that I can constantly change as I continue
coding.
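
That interface-first workflow can be sketched in any language with
declared interfaces; a small Python version using `abc` (the Stack
example is invented):

```python
from abc import ABC, abstractmethod

class Stack(ABC):
    """The interface, written down and thought through first."""
    @abstractmethod
    def push(self, x): ...
    @abstractmethod
    def pop(self): ...

def drain(stack, items):
    """Client code, written against the interface before any implementation."""
    for x in items:
        stack.push(x)
    return [stack.pop() for _ in items]

class ListStack(Stack):
    """The implementation, supplied afterwards."""
    def __init__(self):
        self._xs = []
    def push(self, x):
        self._xs.append(x)
    def pop(self):
        return self._xs.pop()
```

In ML the signature would additionally pin down argument and result
types, which is the extra design pressure being praised here.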

<flame-bait>
Anyone who hasn't had their mind properly warped by static typing really
isn't a programmer. Static typing is all about abstraction. If you can't
deal with types then you can't abstract.
</flame-bait>

Whether your programming language should be statically typed by default
is another question for which these days I'm more agnostic about.

genea wrote:
> This Thread started out with this....
> What is Expressiveness in a Computer Language"
> ...
>> Clearly the notion of  expressiveness isn't concerned with ultimately
>
> It seems to me this thread ought to be moved to a new title, as it is
> now about the virtues and vices of dynamic versus static typing,
> am I alone in this perception... I guess that is just how it goes.. but
> there is I think a lot more to expressiveness of a language than its
> typing system, or lack thereof...   I find languages of both
> statically typed and dynamically typed that are very good at say
> writing self documenting code, and that seems to have more to do with
> expressiveness than the type system..  I will admit that there has been
> an interesting discussion spawned on type systems.. but seems to me it
> ought to be under a different heading..
>
> -- gene
>

 0
Reply danwang74 (206) 6/17/2006 8:54:17 AM

Raffael Cavallaro schrieb:
> On 2006-06-16 17:59:07 -0400, Joachim Durchholz <jo@durchholz.org> said:
>
>> I think it's easier to start with a good (!) statically-typed language
>> and relax the checking, than to start with a dynamically-typed one and
>> retrofit static checks.
>
> This is purely a matter of programming style. For explorative
> programming, dynamic typing lets you add guarantees later rather than
> having to make decisions about representation and have stubs for
> everything right from the start.

Sorry for being ambiguous - I meant to talk about language evolution.

I agree that static checking could (and probably should) be slightly
relaxed: compilers should still do all the diagnostics that current-day
technology allows, but any problems shouldn't abort the compilation.
It's always possible to generate code that will throw an exception as
soon as a problematic piece of code becomes actually relevant; depending
on the kind of run-time support, this might abort the program, abort
just the computation, or open an interactive facility to correct and/or
modify the program on the spot (the latter is the norm in highly dynamic
systems like those for Lisp and Smalltalk, and I consider this actually
useful).

I don't see static checking and explorative programming as opposites.
Of course, in practice, environments that combine these don't seem to
exist (except maybe in experimental or little-known state).

Regards,
Jo

 0

On 2006-06-17 07:03:19 -0400, Joachim Durchholz <jo@durchholz.org> said:

> I don't see static checking and explorative programming as opposites.
> Of course, in practice, environments that combine these don't seem to
> exist (except maybe in experimental or little-known state).

Right. Unfortunately the philosophical leanings of those who design
these two types of languages tend to show themselves as different
tastes in development style - for example, static type advocates don't
often want a very dynamic development environment that would allow a
program to run for testing even when parts of it aren't defined yet, and
dynamic type advocates don't want a compiler preventing them from doing
so because the program can't yet be proven statically correct. Dynamic
typing advocates don't generally want a compiler error for ambiguous
typing - for example, adding a float and an int - but static typing
advocates generally do. Of course there's little reason one couldn't
have a language that allowed the full range to be switchable so that
programmers could tighten up compiler warnings and errors as the
program becomes more fully formed. Unfortunately we're not quite there
yet. For my tastes something like sbcl*, with its type inference and
very detailed warnings and notes is as good as it gets for now. I can
basically ignore warnings and notes early on, but use them to allow the
compiler to improve the code it generates once the program is doing
what I want correctly.

[*] I don't mean to exclude other common lisp implementations that do
type inference here - I just happen to use sbcl.


 0

"Daniel C. Wang" <danwang74@gmail.com> writes:

> I'm of the mind that any programming language should be designed so
> that it supports both dynamic and static typing methodologies. I've
> done more than my fair share of hacking in Perl and ML. I can tell you
> the two languages are so philosophically opposite in design, it's
> amazing that they are both so useful.

dynamic:

(mapcar (lambda (x) (princ " ") (princ x))
        '(1 1.2 "abc" def (1 a #C(1 2)) #(1 a "b")))

static:

(mapcar (lambda (x) (declare (integer x)) (princ " ") (princ x))
        '(1 2 3))

-- Common Lisp, established 1994.

> <flame-bait>
> Anyone who hasn't had their mind properly warped by static typing
> really isn't a programmer. Static typing is all about abstraction. If
> you can't deal with types then you can't abstract.
> </flame-bait>

There are whole books about abstraction that don't even mention static
types, and use only scheme to program the examples:

http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-4.html
http://www.gustavus.edu/+max/concrete-abstractions.html

Abstraction can be achieved without static typing.

--
__Pascal Bourguignon__                     http://www.informatimago.com/

COMPONENT EQUIVALENCY NOTICE: The subatomic particles (electrons,
protons, etc.) comprising this product are exactly the same in every
measurable respect as those used in the products of other
manufacturers, and no claim to the contrary may legitimately be
expressed or implied.

 0
Reply pjb (7873) 6/17/2006 5:39:01 PM

In article <4ffulsF1ie6ifU1@individual.net>, Pascal Costanza wrote:
>
> ...and until then claims about the influence of static type systems
> on the speed with which you can implement working programs are
> purely guesswork. That's the only point I need to make to show that
> your original unqualified statement, namely that it takes less time
> to get a program that works, is incorrect.

No, they're more than guesswork. You can have perfectly good
explanations without p-values.

For example, I switched over from using Dylan and Scheme to using ML,
because I wanted to program in a very higher-order style. I found that
I couldn't effectively write these kinds of programs in a dynamically
typed setting, because I had trouble localizing errors. That is, when
you build up functions from combinators, the time at which you
construct an incorrect function can be very far from the place where
you actually use that function. And worse, the stack trace you get is
usually unhelpful, since it contains things like '<anonymous call>'
ten times over. However, almost all of my errors were caught at the
site of the error with static type checking, because the function
signatures didn't match up.

Conversely, someone who tells me that he implemented a lazy patching
service to a CL program on top of CHANGE-CLASS to add functionality to
a service that can't afford any downtime is also making a perfectly
intelligible non-statistical claim.

I mean, yeah, we have to rely on our judgement and experience to
evaluate the relative importance of such explanations, but really our
capacity to do that is what distinguishes us from the tools we build.

--
Neel Krishnaswami
neelk@cs.cmu.edu

 0
Reply neelk (298) 6/17/2006 6:15:26 PM

On Sat, 17 Jun 2006 18:15:26 +0000, Neelakantan Krishnaswami wrote:

> In article <4ffulsF1ie6ifU1@individual.net>, Pascal Costanza wrote:
>>
>> ...and until then claims about the influence of static type systems
>> on the speed with which you can implement working programs are
>> purely guesswork. That's the only point I need to make to show that
>> your original unqualified statement, namely that it takes less time
>> to get a program that works, is incorrect.
>
> No, they're more than guesswork. You can have perfectly good
> explanations without p-values.
>
> For example, I switched over from using Dylan and Scheme to using ML,
> because I wanted to program in a very higher-order style. I found that
> I couldn't effectively write these kinds of programs in a dynamically
> typed setting, because I had trouble localizing errors. That is, when
> you build up functions from combinators, the time at which you
> construct an incorrect function can be very far from the place where
> you actually use that function. And worse, the stack trace you get is
> usually unhelpful, since it contains things like '<anonymous call>'
> ten times over. However, almost all of my errors were caught at the
> site of the error with static type checking, because the function
> signatures didn't match up.
>

I also struggle with this in Common Lisp.  CLOS (especially funcallable
instances) can help, but the overhead of such objects is significant if
I am creating millions of short-lived closures.  It is not necessarily a
static vs. dynamic typing issue, but it does seem more expensive to
create and track (higher-order) function type information at run-time.  Is
this a fundamental limitation of dynamically typed languages or just
(common) lisp?

Matt
"You do not really understand something unless you can
explain it to your grandmother." — Albert Einstein.


 0

Pascal Bourguignon wrote:
{stuff deleted}
>
>> <flame-bait>
>> Anyone who hasn't had their mind properly warped by static typing
>> really isn't a programmer. Static typing is all about abstraction. If
>> you can't deal with types then you can't abstract.
>> </flame-bait>
>
> There are whole books about abstraction that don't even mention static
> types, and use only scheme to program the examples:
>
> http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-4.html
> http://www.gustavus.edu/+max/concrete-abstractions.html
>
> Abstraction can be achieved without static typing.

Yes, but that's like saying you can climb Mount Everest blindfolded and
with both arms tied behind your back. Nothing is stopping you, but
*why*?  The closure is not the end-all and be-all of abstraction.

 0
Reply danwang74 (206) 6/17/2006 8:29:23 PM

In article <pan.2006.06.17.19.54.24.884809@c.net>, Matthew D Swank wrote:
> On Sat, 17 Jun 2006 18:15:26 +0000, Neelakantan Krishnaswami wrote:
>>
>> For example, I switched over from using Dylan and Scheme to using
>> ML, because I wanted to program in a very higher-order style. I
>> found that I couldn't effectively write these kinds of programs in
>> a dynamically typed setting, because I had trouble localizing
>> errors. That is, when you build up functions from combinators, the
>> time at which you construct an incorrect function can be very far
>> from the place where you actually use that function.
>
> I also struggle with this in Common Lisp.  CLOS (especially
> funcallable instances) can help, but the overhead of such objects
> is significant if I am creating millions of short-lived closures.  It
> is not necessarily a static vs. dynamic typing issue, but it does
> seem more expensive to create and track (higher-order) function
> type information at run-time.  Is this a fundamental limitation of
> dynamically typed languages or just (common) lisp?

I think the main issue for development is that you want the blame for
the error to be in the right place -- the error message should tell
you what was the cause of the error, and not just where the error
happened. This is hard in a functional language, because combinators
delay the usage of a function.

Eg, if you have a compose function:

(define (compose f g)
  (lambda (x) (f (g x))))

and you write (compose map +), then the call to compose will succeed,
even though the returned function will fail when it is called (due to
map not getting enough arguments).

However, if you use a contract system like DrScheme has, then the
runtime will have enough info to successfully locate where a runtime
error happened. I haven't used DrScheme's contracts, but I bet that
would address many of those ease-of-development issues.
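
The delayed-blame problem is easy to reproduce in any dynamically typed
higher-order language; a minimal Python sketch:

```python
def compose(f, g):
    return lambda x: f(g(x))

broken = compose(abs, str)   # constructing the bad function succeeds silently

try:
    broken(5)                # the TypeError surfaces only here, at use time,
    failed_at_use = False    # far from where the mistake was actually made
except TypeError:
    failed_at_use = True
```

A contract on `compose`'s arguments would move the blame back to the
construction site, which is exactly what static types do for free.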

The main difficulty with contract checking, IMO, is that it can change
the space complexity of your program. Functional programs use
tail-call optimizations to use constant stack space. However, if you
need to check the post-condition of a call, then your recursive calls
can go from being in tail position to being in non-tail position,
which can shift space usage from O(1) to O(N). Eg:

(define (foo x) (returns: blah?) ;; totally made-up syntax
  ...
  (foo expr))

becomes

(define (foo x)
  ...
  (let ((result (foo expr)))
    (if (blah? result)
        result
        (raise-appropriate-error))))

I think this is fundamental to doing runtime postcondition checks, but
it's possible there's some clever trick I missed.
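
The effect can also be sketched with a hypothetical postcondition
decorator in Python (which never eliminates tail calls anyway, but the
sketch shows where the check forces each frame to stay live):

```python
def returns(pred):
    """Hypothetical contract combinator: check a postcondition on return."""
    def wrap(f):
        def checked(*args):
            result = f(*args)        # the recursive call is no longer in
            if not pred(result):     # tail position: 'checked' must wait
                raise AssertionError("postcondition failed")
            return result
        return checked
    return wrap

@returns(lambda r: r >= 0)
def countdown(n):
    return 0 if n == 0 else countdown(n - 1)   # each level keeps a frame
```

Every level of the recursion now holds a stack frame waiting to run its
check, so the space cost goes from O(1) to O(N) just as described.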

--
Neel Krishnaswami
neelk@cs.cmu.edu

 0
Reply neelk (298) 6/18/2006 4:00:26 AM

Neelakantan Krishnaswami schrieb:

> In article <pan.2006.06.17.19.54.24.884809@c.net>, Matthew D Swank wrote:
> > On Sat, 17 Jun 2006 18:15:26 +0000, Neelakantan Krishnaswami wrote:
> >>
> >> For example, I switched over from using Dylan and Scheme to using
> >> ML, because I wanted to program in a very higher-order style. I
> >> found that I couldn't effectively write these kinds of programs in
> >> a dynamically typed setting, because I had trouble localizing
> >> errors. That is, when you build up functions from combinators, the
> >> time at which you construct an incorrect function can be very far
> >> from the place where you actually use that function.
> >
> > I also struggle with this in Common Lisp.  CLOS (especially
> > funcallable instances) can help, but the overhead of such objects
> > is significant if I am creating millions of short-lived closures.  It
> > is not necessarily a static vs. dynamic typing issue, but it does
> > seem more expensive to create and track (higher-order) function
> > type information at run-time.  Is this a fundamental limitation of
> > dynamically typed languages or just (common) lisp?
>
> I think the main issue for development is that you want the blame for
> the error to be in the right place -- the error message should tell
> you what was the cause of the error, and not just where the error
> happened. This is hard in a functional language, because combinators
> delay the usage of a function.
>
> Eg, if you have a compose function:
>
> (define (compose f g)
>   (lambda (x) (f (g x))))
>
> and you write (compose map +), then the call to compose will succeed,
> even though the returned function will fail when it is called (due to
> map not getting enough arguments).
>
> However, if you use a contract system like DrScheme has, then the
> runtime will have enough info to successfully locate where a runtime
> error happened.

Several Common Lisp systems will point you to the error in the source.


 0
Reply joswig (506) 6/18/2006 10:19:17 AM

Neelakantan Krishnaswami wrote:
>
> The main difficulty with contract checking, IMO, is that it can change
> the space complexity of your program. Functional programs use
> tail-call optimizations to use constant stack space. However, if you
> need to check the post-condition of a call, then your recursive calls
> can go from being in tail position to being in non-tail position,
> which can shift space usage from O(1) to O(N). [...]
>
> I think this is fundamental to doing runtime postcondition checks, but
> it's possible there's some clever trick I missed.

Hmm... sort of. If you know how to go from a recursive call's parameters
back to the caller's parameters, you don't have to store them. Kind of
"tail return", that ;-)

Postconditions are a bit hairy anyway. Many functions use themselves
recursively for their postconditions; in these cases, simply running
postcondition checks will blow up the running time to exponential
complexity, since the recursive call inside the postcondition check will
itself have its postconditions checked.

If programming in a functional style, the postcondition of a function
often is already its implementation. IOW with a functional style, it's
enough to check postconditions on the outermost call of a recursion.
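That outermost-only discipline can be sketched with a depth counter (an editorial Python aside, with a made-up helper name): nested recursive calls skip the check and only the outermost result is verified, which avoids the blowup when the postcondition itself recurses.

```python
def check_outermost(pred):
    """Run pred on the result of the outermost call only, so a
    postcondition that itself recurses cannot blow up the running time."""
    def deco(f):
        depth = 0
        def wrapped(*args):
            nonlocal depth
            depth += 1
            try:
                result = f(*args)
            finally:
                depth -= 1
            # Only the outermost frame performs the check.
            if depth == 0 and not pred(result):
                raise ValueError(f"postcondition failed in {f.__name__}")
            return result
        return wrapped
    return deco

@check_outermost(lambda r: r > 0)
def fact(n):
    return 1 if n == 0 else n * fact(n - 1)

print(fact(6))   # prints 720
```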

Regards,
Jo

 0
Reply jo427 (1164) 6/18/2006 11:36:06 AM

Neelakantan Krishnaswami wrote:
> In article <4ffulsF1ie6ifU1@individual.net>, Pascal Costanza wrote:
>> ...and until then claims about the influence of static type systems
>> on the speed with which you can implement working programs are
>> purely guesswork. That's the only point I need to make to show that
>> your original unqualified statement, namely that it takes less time
>> to get a program that works, is incorrect.
>
> No, they're more than guesswork. You can have perfectly good
> explanations without p-values.
>
> For example, I switched over from using Dylan and Scheme to using ML,
> because I wanted to program in a very higher-order style. I found that
> I couldn't effectively write these kinds of programs in a dynamically
> typed setting, because I had trouble localizing errors. That is, when
> you build up functions from combinators, the time at which you
> construct an incorrect function can be very far from the place where
> you actually use that function. And worse, the stack trace you get is
> usually unhelpful, since it contains things like '<anonymous call>'
> ten times over. However, almost all of my errors were caught at the
> site of the error with static type checking, because the function
> signatures didn't match up.
>
> Conversely, someone who tells me that he implemented a lazy patching
> service to a CL program on top of CHANGE-CLASS to add functionality to
> a service that can't afford any downtime is also making a perfectly
> intelligible non-statistical claim.
>
> I mean, yeah, we have to rely on our judgement and experience to
> evaluate the relative importance of such explanations, but really our
> capacity to do that is what distinguishes us from the tools we build.

I was worried about Torben's original unqualified statement. I am not at
all against making informed decisions. (My impression wrt debugging is
that Common Lisp implementations are generally more helpful than Scheme
ones - but I could simply be not experienced enough with Scheme. I don't
know about Dylan in this regard.)

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/

 0
Reply pc56 (3930) 6/18/2006 6:02:22 PM

Matthew D Swank wrote:
> On Sat, 17 Jun 2006 18:15:26 +0000, Neelakantan Krishnaswami wrote:
>
>> In article <4ffulsF1ie6ifU1@individual.net>, Pascal Costanza wrote:
>>> ...and until then claims about the influence of static type systems
>>> on the speed with which you can implement working programs are
>>> purely guesswork. That's the only point I need to make to show that
>>> your original unqualified statement, namely that it takes less time
>>> to get a program that works, is incorrect.
>> No, they're more than guesswork. You can have perfectly good
>> explanations without p-values.
>>
>> For example, I switched over from using Dylan and Scheme to using ML,
>> because I wanted to program in a very higher-order style. I found that
>> I couldn't effectively write these kinds of programs in a dynamically
>> typed setting, because I had trouble localizing errors. That is, when
>> you build up functions from combinators, the time at which you
>> construct an incorrect function can be very far from the place where
>> you actually use that function. And worse, the stack trace you get is
>> usually unhelpful, since it contains things like '<anonymous call>'
>> ten times over. However, almost all of my errors were caught at the
>> site of the error with static type checking, because the function
>> signatures didn't match up.
>>
>
> I also struggle with this in Common Lisp.  CLOS (especially funcallable
> instances) can help, but the overhead of such objects is significant if
> I am creating millions of short-lived closures.  It is not necessarily a
> static vs. dynamic typing issue, but it does seem more expensive to
> create and track (higher-order) function type information at run-time.  Is
> this a fundamental limitation of dynamically typed languages or just
> (common) lisp?

Have you tried something like this?

(defmacro named-lambda (name (&rest lambda-list) &body body)
  (if *debug*
      `(flet ((,name ,lambda-list ,@body)) #',name)
      `(lambda ,lambda-list ,@body)))

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/

 0
Reply pc56 (3930) 6/18/2006 6:06:34 PM

On Sun, 18 Jun 2006 20:06:34 +0200, Pascal Costanza wrote:

> Matthew D Swank wrote:
>> On Sat, 17 Jun 2006 18:15:26 +0000, Neelakantan Krishnaswami wrote:
>>
>>> In article <4ffulsF1ie6ifU1@individual.net>, Pascal Costanza wrote:
>>>> ...and until then claims about the influence of static type systems
>>>> on the speed with which you can implement working programs are
>>>> purely guesswork. That's the only point I need to make to show that
>>>> your original unqualified statement, namely that it takes less time
>>>> to get a program that works, is incorrect.
>>> No, they're more than guesswork. You can have perfectly good
>>> explanations without p-values.
>>>
>>> For example, I switched over from using Dylan and Scheme to using ML,
>>> because I wanted to program in a very higher-order style. I found that
>>> I couldn't effectively write these kinds of programs in a dynamically
>>> typed setting, because I had trouble localizing errors. That is, when
>>> you build up functions from combinators, the time at which you
>>> construct an incorrect function can be very far from the place where
>>> you actually use that function. And worse, the stack trace you get is
>>> usually unhelpful, since it contains things like '<anonymous call>'
>>> ten times over. However, almost all of my errors were caught at the
>>> site of the error with static type checking, because the function
>>> signatures didn't match up.
>>>
>>
>> I also struggle with this in Common Lisp.  CLOS (especially funcallable
>> instances) can help, but the overhead of such objects is significant if
>> I am creating millions of short-lived closures.  It is not necessarily a
>> static vs. dynamic typing issue, but it does seem more expensive to
>> create and track (higher-order) function type information at run-time.  Is
>> this a fundamental limitation of dynamically typed languages or just
>> (common) lisp?
>
> Have you tried something like this?
>
> (defmacro named-lambda (name (&rest lambda-list) &body body)
>   (if *debug*
>       `(flet ((,name ,lambda-list ,@body)) #',name)
>       `(lambda ,lambda-list ,@body)))
>
>
> Pascal

That is useful in debugging. However, I was being a little imprecise in my
complaint.  The other major reason (previously omitted) that I wrap
functions in CLOS instances is so I can use generic functions.  What would
be nice is some limited deftype support in CLOS.  I realize that this is
almost the same thing as asking for predicate dispatch.

What I had in mind though, is something I'll call "tag dispatch": the
ability to define union types by simply adding objects to a weak container.

Something like:

(defmacro define-tag-type (name &optional qualifier)
  (let ((pred-name (gensym))
        (val-name (gensym)))
    `(let ((members (make-weak-collection)))
       (defun ,pred-name (val)
         (and (weak-collection-member val members) t))
       (deftype ,name ()
         (list 'satisfies ',pred-name))
       (defmethod make-instance ((class (eql ',name)) &rest args)
         (let ((,val-name (getf args :val)))
           ,@(if qualifier
                 `((unless (funcall ,qualifier ,val-name)
                     (error "~a is not a valid ~a" ,val-name ',name))
                   (add-if-not-member ,val-name members)
                   ,val-name)
                 `((add-if-not-member ,val-name members)
                   ,val-name))))
       ',name)))

And, then, be able to use these types in method qualifiers.

This is part of the reason I've been slowly making my way through AMOP.
However, weak containers aren't standard, or even well supported across
implementations.*

Matt

*Am I right in assuming that using standard containers (lists, hashes,
etc) will cause members of the collection never to be garbage collected?

--
"You do not really understand something unless you can
explain it to your grandmother." — Albert Einstein.



Matthew D Swank <akopa-is-very-much-like-my-mail-address@c.net> writes:
> That is useful in debugging. However, I was being a little imprecise in my
> complaint.  The other major reason (previously omitted) that I wrap
> functions in CLOS instances is so I can use generic functions.  What would
> be nice is some limited deftype support in CLOS.  I realize that this is
> almost the same thing as asking for predicate dispatch.
>
> What I had in mind though, is something I'll call "tag dispatch": the
> ability to define union types by simply adding objects to a weak container.
>
> Something like:
>
> (defmacro define-tag-type (name &optional qualifier)
>   (let ((pred-name (gensym))
>         (val-name (gensym)))
>     `(let ((members (make-weak-collection)))
>        (defun ,pred-name (val)
>          (and (weak-collection-member val members) t))
>        (deftype ,name ()
>          (list 'satisfies ',pred-name))
>        (defmethod make-instance ((class (eql ',name)) &rest args)
>          (let ((,val-name (getf args :val)))
>            ,@(if qualifier
>                  `((unless (funcall ,qualifier ,val-name)
>                      (error "~a is not a valid ~a" ,val-name ',name))
>                    (add-if-not-member ,val-name members)
>                    ,val-name)
>                  `((add-if-not-member ,val-name members)
>                    ,val-name))))
>        ',name)))
>
> And, then, be able to use these types in method qualifiers.
>
> This part of the reason I've been slowly making my way through AMOP.
> However, weak-containers aren't standard, or even well supported across
> implementations.*

Have a look at closer-weak at:
http://www.informatimago.com/develop/lisp/index.html#clext

> *Am I right in assuming that using standard containers (lists, hashes,
> etc) will cause members of the collection never to be garbage collected?

Yes, of course.
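To make that concrete (a Python aside of the editor, not part of the original exchange): a weak container lets its members be collected once no strong references remain, which a plain list or hash table would prevent.

```python
import gc
import weakref

class Thing:
    pass

strongly_held = Thing()
transient = Thing()

# A WeakSet holds its members only weakly, unlike a list or a dict.
members = weakref.WeakSet([strongly_held, transient])
print(len(members))            # prints 2

del transient                  # drop the last strong reference...
gc.collect()
print(len(members))            # prints 1: the weak set let it go
```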

--
__Pascal Bourguignon__                     http://www.informatimago.com/

The world will now reboot.  don't bother saving your artefacts.

 0
Reply pjb (7873) 6/19/2006 3:15:13 AM

Raffael Cavallaro <raffaelcavallaro@pas-d'espam-s'il-vous-plait-mac.com> writes:

> This is purely a matter of programming style. For explorative
> programming you want to provide guarantees later rather than having
> to make decisions about representation and have stubs for everything
> right from the start.

I think you are confusing static typing with having to write types
everywhere.  With type inference, you only have to write a minimum of
type information (such as datatype declarations), so you can easily do
explorative programming in such languages -- I don't see any advantage
of dynamic typing in this respect.

> The
> lisp programming style is arguably all about using heterogenous lists
> and forward references in the repl for everything until you know what
> it is that you are doing, then choosing a more appropriate
> representation and filling in forward references once the program
> gels. Having to choose representation right from the start and needing
> working versions (even if only stubs) of *every* function you call may
> ensure type correctness, but many programmers find that it also
> ensures that you never find the right program to code in the first
> place.

If you don't have definitions (stubs or complete) of the functions you
use in your code, you can only run it up to the point where you call
an undefined function.  So you can't really do much exploration until
you have some definitions.

I expect a lot of the exploration you do with incomplete programs
amounts to the feedback you get from type inference.

> This is because you don't have the freedom to explore possible
> solutions without having to break your concentration to satisfy the
> nagging of a static type checker.

I tend to disagree.  I have programmed a great deal in Lisp, Scheme,
Prolog (all dynamically typed) and SML and Haskell (both statically
typed).  And I don't find that I need more stubs etc. in the latter.
In fact, I do a lot of explorative programming when writing programs
in ML and Haskell.  And I find type inference very helpful in this, as
it guides the direction of the exploration, so it is more like a
safari with a local guide than a blindfolded walk in the jungle.

Torben


Torben Ægidius Mogensen wrote:
> "Rob Thorpe" <robert.thorpe@antenova.com> writes:
>
> > Torben Ægidius Mogensen wrote:
>
> > > That's the point: Bugs that in dynamically typed languages would
> > > require testing to find are found by the compiler in a statically
> > > typed language.  So while it may take longer to get a program that
> > > gets past the compiler, it takes less time to get a program that works.
> >
> > In my experience the opposite is true for many programs.
> > Having to actually know the precise type of every variable while
> > writing the program is not necessary, it's a detail often not relevant
> > to the core problem. The language should be able to take care of
> > itself.
> >
> > In complex routines it can be useful for the programmer to give types
> > and for the compiler to issue errors when they are contradicted.  But
> > for most routines it's just an unnecessary chore that the compiler
> > forces on the programmer.
>
> Indeed.  So use a language with type inference.

Well, for most purposes that's the same as dynamic typing since the
compiler doesn't require you to label the type of your variables.  I
occasionally use CMUCL and SBCL, which do type inference; it is useful
for improving generated code quality.  They can also warn the
programmer if they reuse a variable in a context implying that it's a
different type, which is useful.

I see type inference as an optimization of dynamic typing rather than a
generalization of static typing.  But I suppose you can see it that way
around.



"Rob Thorpe" <robert.thorpe@antenova.com> writes:

> Torben Ægidius Mogensen wrote:
> > "Rob Thorpe" <robert.thorpe@antenova.com> writes:
> >
> > > Torben Ægidius Mogensen wrote:
> >
> > Indeed.  So use a language with type inference.
>
> Well, for most purposes that's the same as dynamic typing since the
> compiler doesn't require you to label the type of your variables.

That's not really the difference between static and dynamic typing.
Static typing means that there exists a typing at compile-time that
guarantees against run-time type violations.  Dynamic typing means
that such violations are detected at run-time.  This is orthogonal to
strong versus weak typing, which is about whether such violations are
detected at all.  The archetypal weakly typed language is machine code
-- you can happily load a floating point value from memory, add it to
a string pointer and jump to the resulting value.  ML and Scheme are
both strongly typed, but one is statically typed and the other
dynamically typed.
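The strongly-but-dynamically-typed case can be shown in one line, using Python for concreteness (an editorial illustration, not part of the original post): the violation is well-defined and signalled, but only when the offending operation actually runs.

```python
# Strong typing: the mixed-type addition is a detected violation...
# Dynamic typing: ...but the detection happens at run time, when the
# expression is evaluated, not at compile time.
try:
    "a string" + 1
except TypeError as e:
    print("caught at run time:", type(e).__name__)
```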

Anyway, type inference for statically typed languages doesn't make them
any more dynamically typed.  It just moves the burden of assigning the
types from the programmer to the compiler.  And (for HM type systems)
the compiler doesn't "guess" at a type -- it finds the unique most
general type from which all other legal types (within the type system)
can be found by instantiation.

>  I
> occasionally use CMUCL and SBCL which do type inference, which is
> useful at improving generated code quality.  It also can warn the
> programmer if they if they reuse a variable in a context implying that
> it's a different type which is useful.
>
> I see type inference as an optimization of dynamic typing rather than a
> generalization of static typing.  But I suppose you can see it that way
> around.

Some compilers for dynamically typed languages will do a type analysis
similar to type inference, but they will happily compile a program
even if they can't guarantee static type safety.

Such "type inference" can be seen as an optimisation of dynamic
typing, as it allows the compiler to omit _some_ of the runtime type
checks.  I prefer the term "soft typing" for this, though, so as not
to confuse with static type inference.

Soft typing can give feedback similar to that of type inference in
terms of identifying potential problem spots, so in that respect it is
similar to static type inference, and you might get similar fast code
development.  You miss some of the other benefits of static typing,
though, such as a richer type system -- soft typing often lacks
features like polymorphism (it will find a set of monomorphic
instances rather than the most general type) and type classes.

Torben



Torben Ægidius Mogensen <torbenm@app-3.diku.dk> wrote:
> That's not really the difference between static and dynamic typing.
> Static typing means that there exists a typing at compile-time that
> guarantees against run-time type violations.  Dynamic typing means
> that such violations are detected at run-time.  This is orthogonal to
> strong versus weak typing, which is about whether such violations are
> detected at all.  The archetypal weakly typed language is machine code
> -- you can happily load a floating point value from memory, add it to
> a string pointer and jump to the resulting value.  ML and Scheme are
> both strongly typed, but one is statically typed and the other
> dynamically typed.

Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
interject my plea not to abuse the word "type" with a phrase like
"dynamically typed".  If anyone considers "untyped" to be pejorative,
as some people apparently do, then I'll note that another common term is
"type-free," which is marketing-approved but doesn't carry the
misleading connotations of "dynamically typed."  We are quickly losing
any rational meaning whatsoever to the word "type," and that's quite a
shame.

By way of extending the point, let me mention that there is no such
thing as a universal class of things that are called "run-time type
violations".  At runtime, there is merely correct code and incorrect
code.  To the extent that anything is called a "type" at runtime, this
is a different usage of the word from the usage by which we may define
languages as being statically typed (which means just "typed").  In
typed OO languages, this runtime usage is often called the "class", for
example, to distinguish it from type.

This cleaner terminology eliminates a lot of confusion.  For example, it
clarifies that there is no binary division between strongly typed
languages and weakly typed languages, since the division between a "type
error" and any other kind of error is arbitrary, depending only on
whether the type system in a particular language happens to catch that
error.  For example, type systems have been defined to try to catch unit
errors in scientific programming, or to catch out-of-bounds array
indices... yet these are not commonly called "type errors" only because
such systems are not in common use.

This also leads us to define something like "language safety" to
encapsulate what we previously would have meant by the phrase "strongly
dynamically typed language".  This also is a more general concept than
we had before.  Language safety refers to a language having well-defined
behavior for as many operations as feasible, so that it's less likely
that someone will do something spectacularly bad.  Language safety may
be achieved either by a type system or by runtime checks.  Again, it's
not absolute... I'm aware of no language that is completely determinate,
at least if it supports any kind of concurrency.

This isn't just a matter of preference in terminology.  The definitions
above (which are, in my experience, used widely by most non-academic
language design discussions) actually limit our understanding of
language design by pretending that certain delicate trade-offs such as
the extent of the type system, or which language behavior is allowed to
be non-deterministic or undefined, are etched in stone.  This is simply
not so.  If types DON'T mean a compile-time method for proving the
absence of certain program behaviors, then they don't mean anything at
all.  Pretending that there's a distinction at runtime between "type
errors" and "other errors" serves only to confuse things and
artificially limit which problems we are willing to conceive as being
solvable by types.

> Anyway, type inference for statically typed languages doesn't make them
> any more dynamically typed.

Indeed it does not.  Unless it weakens the ability of a compiler to
prove the absence of certain program behaviors (which type inference
does not), it doesn't move anything on the scale toward a type-free
language.

That being said, though, it is considered a feature of some programming
languages that the programmer is asked to repeat type information in a
few places.  The compiler may not need the information, but for
precisely the reason that the information is redundant, the compiler is
then able to check the consistency of the programmer in applying the
type.  I won't get into precisely how useful this is, but it is
nevertheless present as an advantage to outweigh the wordiness.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation


Chris Smith wrote:
> Torben Ægidius Mogensen <torbenm@app-3.diku.dk> wrote:
> > That's not really the difference between static and dynamic typing.
> > Static typing means that there exists a typing at compile-time that
> > guarantees against run-time type violations.  Dynamic typing means
> > that such violations are detected at run-time.  This is orthogonal to
> > strong versus weak typing, which is about whether such violations are
> > detected at all.  The archetypal weakly typed language is machine code
> > -- you can happily load a floating point value from memory, add it to
> > a string pointer and jump to the resulting value.  ML and Scheme are
> > both strongly typed, but one is statically typed and the other
> > dynamically typed.
>
> Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
> interject my plea not to abuse the word "type" with a phrase like
> "dynamically typed".  If anyone considers "untyped" to be pejorative,
> as some people apparently do, then I'll note that another common term is
> "type-free," which is marketing-approved but doesn't carry the
> misleading connotations of "dynamically typed."  We are quickly losing
> any rational meaning whatsoever to the word "type," and that's quite a
> shame.

I don't think dynamic typing is that nebulous.  I remember this being
discussed elsewhere some time ago; I'll post the same reply I did then
...

A language is statically typed if a variable has a property - called
its type - attached to it, and given its type it can only represent
values defined by a certain class.

A language is latently typed if a value has a property - called its
type - attached to it, and given its type it can only represent values
defined by a certain class.

Some people use dynamic typing as a word for latent typing, others use
it to mean something slightly different.  But for most purposes the
definition above works for dynamic typing also.

Untyped and type-free mean something else: they mean no type checking
is done.
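The two definitions can be seen side by side in Python, taking it as the latently typed example (an editorial illustration, not Rob's): the type tag travels with the value, while the variable itself is unconstrained.

```python
x = 3
print(type(x).__name__)    # prints int: the value 3 carries the tag
x = "three"                # the variable x is unconstrained...
print(type(x).__name__)    # prints str: the new value carries its own tag
```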



Chris Smith wrote:
> Torben Ægidius Mogensen <torbenm@app-3.diku.dk> wrote:
>> That's not really the difference between static and dynamic typing.
>> Static typing means that there exists a typing at compile-time that
>> guarantees against run-time type violations.  Dynamic typing means
>> that such violations are detected at run-time.  This is orthogonal to
>> strong versus weak typing, which is about whether such violations are
>> detected at all.  The archetypal weakly typed language is machine code
>> -- you can happily load a floating point value from memory, add it to
>> a string pointer and jump to the resulting value.  ML and Scheme are
>> both strongly typed, but one is statically typed and the other
>> dynamically typed.
>
> Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
> interject my plea not to abuse the word "type" with a phrase like
> "dynamically typed".  If anyone considers "untyped" to be pejorative,
> as some people apparently do, then I'll note that another common term is
> "type-free," which is marketing-approved but doesn't carry the
> misleading connotations of "dynamically typed."  We are quickly losing
> any rational meaning whatsoever to the word "type," and that's quite a
> shame.

The words "untyped" or "type-free" only make sense in a purely
statically typed setting. In a dynamically typed setting, they are
meaningless, in the sense that there are _of course_ types that the
runtime system respects.

Types can be represented at runtime via type tags. You could insist on
using the term "dynamically tagged languages", but this wouldn't change
a lot. Exactly _because_ it doesn't make sense in a statically typed
setting, the term "dynamically typed language" is good enough to
communicate what we are talking about - i.e. not (static) typing.

> By way of extending the point, let me mention that there is no such
> thing as a universal class of things that are called "run-time type
> violations".  At runtime, there is merely correct code and incorrect
> code.

No, there is more: There is safe and unsafe code (i.e., code that throws
exceptions or that potentially just does random things). There are also
runtime systems where you have the chance to fix the reason that caused
the exception and continue to run your program. The latter play very
well with dynamic types / type tags.

> To the extent that anything is called a "type" at runtime, this
> is a different usage of the word from the usage by which we may define
> languages as being statically typed (which means just "typed").  In
> typed OO languages, this runtime usage is often called the "class", for
> example, to distinguish it from type.

What type of person are you to tell other people what terminology to use? ;)

Ha! Here I used "type" in just another sense of the word. ;)

It is important to know the context in which you are discussing things.
For example, "we" Common Lispers use the term "type" as defined in
http://www.lispworks.com/documentation/HyperSpec/Body/26_glo_t.htm . You
cannot possibly argue that "our" use of the word "type" is incorrect
because in "our" context, when we talk about Common Lisp, the use of the
word "type" better be consistent with that definition. (You can say that
you don't like the definition, that it is unsound, or whatever, but that
doesn't change anything here.)

> This cleaner terminology eliminates a lot of confusion.  For example, it
> clarifies that there is no binary division between strongly typed
> languages and weakly typed languages, since the division between a "type
> error" and any other kind of error is arbitrary, depending only on
> whether the type system in a particular language happens to catch that
> error.  For example, type systems have been defined to try to catch unit
> errors in scientific programming, or to catch out-of-bounds array
> indices... yet these are not commonly called "type errors" only because
> such systems are not in common use.

What type system catches division by zero? That is, statically? Would
you like to program in such a language?

> This isn't just a matter of preference in terminology.  The definitions
> above (which are, in my experience, used widely by most non-academic
> language design discussions) actually limit our understanding of
> language design by pretending that certain delicate trade-offs such as
> the extent of the type system, or which language behavior is allowed to
> be non-deterministic or undefined, are etched in stone.  This is simply
> not so.  If types DON'T mean a compile-time method for proving the
> absence of certain program behaviors, then they don't mean anything at
> all.  Pretending that there's a distinction at runtime between "type
> errors" and "other errors" serves only to confuse things and
> artificially limit which problems we are willing to conceive as being
> solvable by types.

Just say "types" when you're amongst your own folks, and "static types"
when you're amongst a broader audience, and everything's fine. Instead
of focusing on terminology, just focus on the contents.

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/


Matthew D Swank wrote:
> On Sun, 18 Jun 2006 20:06:34 +0200, Pascal Costanza wrote:
>
>> Matthew D Swank wrote:
>>> On Sat, 17 Jun 2006 18:15:26 +0000, Neelakantan Krishnaswami wrote:
>>>
>>>> In article <4ffulsF1ie6ifU1@individual.net>, Pascal Costanza wrote:
>>>>> ...and until then claims about the influence of static type systems
>>>>> on the speed with which you can implement working programs are
>>>>> purely guesswork. That's the only point I need to make to show that
>>>>> your original unqualified statement, namely that it takes less time
>>>>> to get a program that works, is incorrect.
>>>> No, they're more than guesswork. You can have perfectly good
>>>> explanations without p-values.
>>>>
>>>> For example, I switched over from using Dylan and Scheme to using ML,
>>>> because I wanted to program in a very higher-order style. I found that
>>>> I couldn't effectively write these kinds of programs in a dynamically
>>>> typed setting, because I had trouble localizing errors. That is, when
>>>> you build up functions from combinators, the time at which you
>>>> construct an incorrect function can be very far from the place where
>>>> you actually use that function. And worse, the stack trace you get is
>>>> usually unhelpful, since it contains things like '<anonymous call>'
>>>> ten times over. However, almost all of my errors were caught at the
>>>> site of the error with static type checking, because the function
>>>> signatures didn't match up.
>>>>
>>> I also struggle with this in Common Lisp.  CLOS (especially funcallable
>>> instances) can help, but the overhead of such objects is significant if I
>>> am creating millions of short-lived closures.  It is not necessarily a
>>> static vs. dynamic typing issue, but it does seem more expensive to
>>> create and track (higher-order) function type information at run-time.  Is
>>> this a fundamental limitation of dynamically typed languages or just
>>> (common) lisp?
>> Have you tried something like this?
>>
>> (defmacro named-lambda (name (&rest lambda-list) &body body)
>>    (if *debug*
>>       `(flet ((,name ,lambda-list ,@body)) #',name)
>>       `(lambda ,lambda-list ,@body)))
>>
>>
>> Pascal
>
> That is useful in debugging. However, I was being a little imprecise in my
> complaint.  The other major reason (previously omitted) that I wrap
> functions in CLOS instances is so I can use generic functions.  What would
> be nice is some limited deftype support in CLOS.  I realize that this is
> almost the same thing as asking for predicate dispatch.
>
> What I had in mind though, is something I'll call "tag dispatch": the
> ability to define union types by simply adding objects to a weak container.
>
> Something like:
>
> (defmacro define-tag-type (name &optional qualifier)
>   (let ((pred-name (gensym))
>          (val-name (gensym)))
>     `(let ((members (make-weak-collection)))
>        (defun ,pred-name (val)
>          (and (weak-collection-member val members) t))
>        (deftype ,name ()
>          (list 'satisfies ',pred-name))
>        (defmethod make-instance ((class (eql ',name)) &rest args)
>          (let ((,val-name (getf args :val)))
>            ,@(if qualifier
>                  `((unless (funcall ,qualifier ,val-name)
>                      (error "~a is not a valid ~a" ,val-name ',name))
>                    ,val-name)
>                  `((add-if-not-member ,val-name members)
>                    ,val-name))))
>        ',name)))
>
> And, then, be able to use these types in method qualifiers.
>
> This is part of the reason I've been slowly making my way through AMOP.
> However, weak-containers aren't standard, or even well supported across
> implementations.*

That shouldn't be too hard to implement with the CLOS MOP. The hard part
with predicate dispatch is to determine an order in which predicates are
checked. For example, you can consider dispatching on classes as a
predicate dispatch on the predicate typep. What you get for free when
dispatching on classes is that the class hierarchy gives you a "natural"
order of specificity. Note that eql specializers are specified to always
be more specific than their classes - exactly to make it possible to
still rely on such a "natural" specificity.
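This value-before-class ordering can be mimicked outside CLOS. Here is a hypothetical Python sketch of a toy dispatcher (all names invented for illustration) in which exact-value entries play the role of eql specializers and are always consulted before class entries:

```python
# Toy dispatch tables: exact-value entries stand in for CLOS eql
# specializers; class entries stand in for class specializers.
value_methods = {0: lambda n: "zero"}
class_methods = {int: lambda n: "an integer"}

def describe(n):
    if n in value_methods:                 # value match wins...
        return value_methods[n](n)
    for cls, fn in class_methods.items():  # ...before any class match
        if isinstance(n, cls):
            return fn(n)
    raise TypeError(f"no applicable method for {n!r}")
```

Here describe(0) yields "zero" while describe(7) falls through to the integer method, mirroring the specified CLOS rule that an eql specializer beats the value's class.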

The problem with general predicate dispatch is that in general, you
cannot sort predicates in a meaningful way. (Consider predicates for
prime and odd numbers - due to the presence of 2 in the list of prime
numbers, prime is not "naturally" more specific than odd, only almost. ;)
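The prime/odd counterexample is easy to check concretely; a small Python sketch (naive trial-division primality test, purely for illustration):

```python
def is_prime(n):
    # naive trial division, good enough for small n
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_odd(n):
    return n % 2 == 1

# 2 is prime but even, so the set of primes is not contained in the set
# of odd numbers: neither predicate is more specific than the other.
outliers = [n for n in range(2, 20) if is_prime(n) and not is_odd(n)]
```

outliers comes out as [2] - the single value that keeps "prime" from being a subset of "odd".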

However, what you describe seems to be solvable by specifying that
members of collections are more specific than their respective classes
(similar to eql specializers). So it should be possible to make this
work, at least in principle.

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/

Reply pc56 (3930) 6/19/2006 4:30:06 PM

Rob Thorpe <robert.thorpe@antenova.com> wrote:
> A language is latently typed if a value has a property - called its
> type - attached to it, and given its type it can only represent values
> defined by a certain class.

I'm assuming you mean "class" in the general sense, rather than in the
sense of a specific construct of some subset of OO programming
languages.

Now I define a class of values called "correct" values.  I define these
to be those values for which my program will produce acceptable results.
Clearly there is a defined class of such values: (1) they are
immediately defined by the program's specification for those lines of
code that produce output; (2) if they are defined for the values that
result from any expression, then they are defined for the values that
are used by that expression; and (3) for any value for which correctness
is not defined by (1) or (2), we may define its "correct" values as the
class of all possible values.  Now, by your definition, any language
which provides checking of that property of correctness for values is
latently typed.  Of course, there are no languages that assign this
specific class of values; but ANY kind of correctness checking on values
that a language does (if it's useful at all) is a subset of the perfect
correctness checking system above.  Apparently, we should call all such
systems "latent type systems".  Can you point out a language that is not
latently typed?

I'm not trying to poke holes in your definition for fun.  I am proposing
that there is no fundamental distinction between the kinds of problems
that are "type problems" and those that are not.  Types are not a class
of problems; they are a class of solutions.  Languages that solve
problems in ways that don't assign types to variables are not typed
languages, even if those same problems may have been originally solved
by type systems.

> Untyped and type-free mean something else: they mean no type checking
> is done.

Hence, they don't exist, and the definitions being used here are rather
pointless.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation


Pascal Costanza <pc@p-cos.net> wrote:
> Types can be represented at runtime via type tags. You could insist on
> using the term "dynamically tagged languages", but this wouldn't change
> a lot. Exactly _because_ it doesn't make sense in a statically typed
> setting, the term "dynamically typed language" is good enough to
> communicate what we are talking about - i.e. not (static) typing.

Okay, fair enough.  It's certainly possible to use the same sequence of
letters to mean two different things in different contexts.  The problem
arises, then, when Torben writes:

: That's not really the difference between static and dynamic typing.
: Static typing means that there exists a typing at compile-time that
: guarantees against run-time type violations.  Dynamic typing means
: that such violations are detected at run-time.

This is clearly not using the word "type" to mean two different things
in different contexts.  Rather, it is speaking under the mistaken
impression that "static typing" and "dynamic typing" are varieties of
some general thing called "typing."  In fact, the phrase "dynamically
typed" was invented to do precisely that.  My argument is not really
with LISP programmers talking about types, by which they would mean
approximately the same thing Java programmers mean by "class."  My point
here concerns the confusion that results from the conception that there
is this binary distinction (or continuum, or any other simple
relationship) between a "statically typed" and a "dynamically typed"
language.

Torben's (and I don't mean to single out Torben -- the terminology is
used quite widely) classification of dynamic versus static type systems
depends on the misconception that there is some universal definition to
the term "type error" or "type violation" and that the only question is
how we address these well-defined things.  It's that misconception that
I aim to challenge.

> No, there is more: There is safe and unsafe code (i.e., code that throws
> exceptions or that potentially just does random things). There are also
> runtime systems where you have the chance to fix the reason that caused
> the exception and continue to run your program. The latter play very
> well with dynamic types / type tags.

Yes, I was oversimplifying.

> What type system catches division by zero? That is, statically?

I can define such a type system trivially.  To do so, I simply define a
type for integers, Z, and a subtype for non-zero integers, Z'.  I then
define the language such that division is only possible in an expression
that looks like << z / z' >>, where z has type Z and z' has type Z'.
The language may then contain an expression:

z == 0 ? t1 : t2

in which t1 is evaluated in the parent type environment, but t2 is
evaluated in the type environment augmented by (z -> Z'), the type of
the expression is the intersection type of t1 and t2 evaluated in those
type environments, and the evaluation rules are defined as you probably
expect.
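A dynamically checked analogue of this Z/Z' discipline (hypothetical names, with the check performed at run time rather than by a type checker) funnels every divisor through a certifying constructor, so a zero is rejected where it is created rather than where it is used:

```python
class NonZero:
    """Certifies at construction time that its payload is a non-zero int."""
    def __init__(self, n):
        if not isinstance(n, int) or n == 0:
            raise ValueError("a NonZero must wrap a non-zero integer")
        self.n = n

def div(z, zp):
    # division is only defined against a certified divisor, so the
    # z / z' expression itself can never divide by zero
    if not isinstance(zp, NonZero):
        raise TypeError("divisor must be a certified NonZero")
    return z // zp.n
```

div(10, NonZero(2)) returns 5, while NonZero(0) fails immediately; a static system of the kind described above would move that failure to compile time.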

> Would you like to program in such a language?

No.  Type systems for real programming languages are, of course, a
balance between rigor and usability.  This particular set of type rules
doesn't seem to exhibit a good balance.  Perhaps there is a way to
achieve it in a way that is more workable, but it's not immediately
obvious.

As another example, from Pierce's text "Types and Programming
Languages", Pierce writes: "Static elimination of array-bounds checking
is a long-standing goal for type system designers.  In principle, the
necessary mechanisms (based on dependent types) are well understood, but
packaging them in a form that balances expressive power, predictability
and tractability of typechecking, and complexity of program annotations
remains a significant challenge."  Again, this could quite validly be
described as a type error, just like division by zero or ANY other
program error... it's just that the type system that solves it doesn't
look appealing, so everyone punts the job to runtime checks (or, in some
cases, to the CPU's memory protection features and/or the user's ability
to fix resulting data corruption).

Why aren't these things commonly considered type errors?  There is only
one reason: there exists no widely used language which solves them with
types.  (I mean in the programming language type theory sense of "type";
since many languages "tag" arrays with annotations indicating their
dimensions, I guess you could say that we do solve them with types in
the LISP sense).

> Your problem doesn't exist. Just say "types" when you're amongst your
> own folks, and "static types" when you're amongst a broader audience,
> and everything's fine.

I think I've explained why that's not the case.  I don't have a
complaint about anyone speaking of types.  It's the confusion from
pretending that the two definitions are comparable that I'm pointing
out.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation


Chris Smith <cdsmith@twu.net> wrote in
news:MPG.1f007f6a483b5e7d9896c6@news.altopia.net:

> Rob Thorpe <robert.thorpe@antenova.com> wrote:
>> A language is latently typed if a value has a property - called its
>> type - attached to it, and given its type it can only represent
>> values defined by a certain class.
>
> Now I define a class of values called "correct" values.  I define
> these to be those values for which my program will produce acceptable
> results.  Clearly there is a defined class of such values: (1) they
> are immediately defined by the program's specification for those lines
> of code that produce output; ...

> I'm not trying to poke holes in your definition for fun.  I am
> proposing that there is no fundamental distinction between the kinds
> of problems that are "type problems" and those that are not.

That sounds like a lot to demand of a type system. It almost sounds like
it's supposed to test and debug the whole program. In general, defining the
exact set of values for a given variable that generate acceptable output
from your program will require detailed knowledge of the program and all
its possible inputs. That goes beyond simple typing. It sounds more like
contracts. Requiring an array index to be an integer is considered a typing
problem because it can be checked based on only the variable itself,
whereas checking whether it's in bounds requires knowledge about the array.
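The locality difference Dan points to can be made explicit: the first check below needs only the index value, the second also needs the array (a trivial Python sketch, invented names):

```python
def valid_index_type(i):
    # "typing" check: a property of the value alone
    return isinstance(i, int) and not isinstance(i, bool)

def valid_index_bounds(arr, i):
    # bounds check: needs knowledge about the array as well
    return 0 <= i < len(arr)
```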

--


Yet Another Dan wrote:
> Chris Smith <cdsmith@twu.net> wrote in
> news:MPG.1f007f6a483b5e7d9896c6@news.altopia.net:
>
>> Rob Thorpe <robert.thorpe@antenova.com> wrote:
>>> A language is latently typed if a value has a property - called its
>>> type - attached to it, and given its type it can only represent
>>> values defined by a certain class.
>> Now I define a class of values called "correct" values.  I define
>> these to be those values for which my program will produce acceptable
>> results.  Clearly there is a defined class of such values: (1) they
>> are immediately defined by the program's specification for those lines
>> of code that produce output; ...
>
>> I'm not trying to poke holes in your definition for fun.  I am
>> proposing that there is no fundamental distinction between the kinds
>> of problems that are "type problems" and those that are not.
>
> That sounds like a lot to demand of a type system. It almost sounds like
> it's supposed to test and debug the whole program. In general, defining the
> exact set of values for a given variable that generate acceptable output
> from your program will require detailed knowledge of the program and all
> its possible inputs. That goes beyond simple typing. It sounds more like
> contracts. Requiring an array index to be an integer is considered a typing
> problem because it can be checked based on only the variable itself,
> whereas checking whether it's in bounds requires knowledge about the array.
>

It's worse than that. The general question of whether a program will
terminate on a given input is undecidable for any language that is
Turing-machine equivalent, including Java.

Patricia

Reply pats (3556) 6/19/2006 6:24:24 PM

"Rob Thorpe" <robert.thorpe@antenova.com> writes:

> I don't think dynamic typing is that nebulous.  I remember this being
> discussed elsewhere some time ago, I'll post the same reply I did then
> ..
>
>
> A language is statically typed if a variable has a property - called
> its type - attached to it, and given its type it can only represent
> values defined by a certain class.

By this definition, all languages are statically typed (by making that
"certain class" the set of all values).  Moreover, this "definition",
when read the way you probably wanted it to be read, requires some
considerable stretch to accommodate existing static type systems such
as F_\omega.

Perhaps better: A language is statically typed if its definition
includes (or even better: is based on) a static type system, i.e., a
static semantics with typing judgments derivable by typing rules.
Usually typing judgments associate program phrases ("expressions") with
types given a typing environment.

> A language is latently typed if a value has a property - called its
> type - attached to it, and given its type it can only represent values
> defined by a certain class.

This "definition" makes little sense.  Any given value can obviously
only represent one value: itself.  "Dynamic types" are nothing more
than sets of values, often given by computable predicates.
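"Sets of values given by computable predicates" can be made concrete; in this sketch (Python, invented names), the predicate defines the dynamic type and "checking" is just a membership test at run time:

```python
def non_negative_int_p(x):
    # the predicate defines the "type": a set of values
    return isinstance(x, int) and not isinstance(x, bool) and x >= 0

def the(pred, x):
    # run-time check: assert membership in the set before use
    if not pred(x):
        raise TypeError(f"{x!r} is not a {pred.__name__}")
    return x
```

the(non_negative_int_p, 3) passes the value through; the(non_negative_int_p, -1) signals a run-time type error, much as a Lisp check-type would.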

> Untyped and type-free mean something else: they mean no type checking
> is done.

Look up "untyped lambda calculus".


Matthias Blume wrote:
> "Rob Thorpe" <robert.thorpe@antenova.com> writes:
>
>> I don't think dynamic typing is that nebulous.  I remember this being
>> discussed elsewhere some time ago, I'll post the same reply I did then
>> ..
>>
>>
>> A language is statically typed if a variable has a property - called
>> its type - attached to it, and given its type it can only represent
>> values defined by a certain class.
>
> By this definition, all languages are statically typed (by making that
> "certain class" the set of all values).  Moreover, this "definition",
> when read the way you probably wanted it to be read, requires some
> considerable stretch to accommodate existing static type systems such
> as F_\omega.
>
> Perhaps better: A language is statically typed if its definition
> includes (or even better: is based on) a static type system, i.e., a
> static semantics with typing judgments derivable by typing rules.
> Usually typing judgments associate program phrases ("expressions") with
> types given a typing environment.

How does your definition exclude the trivial type system in which the
only typing judgment states that every expression is acceptable?

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/


Chris Smith wrote:
> Pascal Costanza <pc@p-cos.net> wrote:
>> Types can be represented at runtime via type tags. You could insist on
>> using the term "dynamically tagged languages", but this wouldn't change
>> a lot. Exactly _because_ it doesn't make sense in a statically typed
>> setting, the term "dynamically typed language" is good enough to
>> communicate what we are talking about - i.e. not (static) typing.
>
> Okay, fair enough.  It's certainly possible to use the same sequence of
> letters to mean two different things in different contexts.  The problem
> arises, then, when Torben writes:
>
> : That's not really the difference between static and dynamic typing.
> : Static typing means that there exists a typing at compile-time that
> : guarantees against run-time type violations.  Dynamic typing means
> : that such violations are detected at run-time.
>
> This is clearly not using the word "type" to mean two different things
> in different contexts.  Rather, it is speaking under the mistaken
> impression that "static typing" and "dynamic typing" are varieties of
> some general thing called "typing."  In fact, the phrase "dynamically
> typed" was invented to do precisely that.  My argument is not really
> with LISP programmers talking about types, by which they would mean
> approximately the same thing Java programmers mean by "class."  My point
> here concerns the confusion that results from the conception that there
> is this binary distinction (or continuum, or any other simple
> relationship) between a "statically typed" and a "dynamically typed"
> language.

There is an overlap in the sense that some static type systems cover
only types as sets of values whose correct use could as well be checked
dynamically.

Yes, it's correct that more advanced static type systems can provide
more semantics than that (and vice versa).

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/


Yet Another Dan <goofball@vapornet.com> wrote:
> That sounds like a lot to demand of a type system. It almost sounds like
> it's supposed to test and debug the whole program. In general, defining the
> exact set of values for a given variable that generate acceptable output
> from your program will require detailed knowledge of the program and all
> its possible inputs. That goes beyond simple typing. It sounds more like
> contracts. Requiring an array index to be an integer is considered a typing
> problem because it can be checked based on only the variable itself,
> whereas checking whether it's in bounds requires knowledge about the array.

Thanks for proving my point.  As a matter of fact, as I just mentioned
in another response, array bounds checking is widely known to be
solvable via type systems, and doing so in a way that yields a usable
language is an active area of work in type theory for programming
languages.  My point has been that confusion over the meaning of a type
leads to limitations in what problems we think of as solvable by types.
Apparently, that is accurate at least in this case.

Once again: there is no such thing as a bug that is not a type error.
There are only bugs that aren't caught by type systems.  Because of
this, defining a general class of "typed languages" as those that catch
"type errors", regardless of mechanism, is meaningless.  All practical
languages (even machine code) catch some errors (for machine code, e.g.,
accesses to memory not mapped by the MMU), and therefore they catch some type errors, and
therefore there would be no languages that are not in that class of
typed languages (with trivial exceptions, such as the pure untyped
lambda calculus in which any normal form is considered to be correct
output.)

The only reasonable possibility for defining types is based on the
*mechanism* for catching these errors.  Several different groups of
people use the word "type" for several different such mechanisms.  While
acknowledging that this is valid, it remains confusing to talk about
"statically typed" versus "dynamically typed" languages, because the
terminology implies that "typed" has the same meaning in both cases,
which it does not.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation

Reply cdsmith (3863) 6/19/2006 7:07:13 PM

Pascal Costanza <pc@p-cos.net> wrote:
> How does your definition exclude the trivial type system in which the
> only typing judgment states that every expression is acceptable?

It is not necessary to exclude that trivial type system.  Since it is
useless, no one will implement it.  However, if pressed, I suppose one
would have to admit that that definition includes a type system that is
just useless.

I do, though, prefer Pierce's definition:

A type system is a tractable syntactic method for proving the
absence of certain program behaviors by classifying phrases
according to the kinds of values they compute.

(Benjamin Pierce, Types and Programming Languages, MIT Press, pg. 1)

Key words include:

- tractable: it's not sufficient to just evaluate the program

- syntactic: types are tied to the kinds of expressions in the language

- certain program behaviors: while perhaps confusing out of context,
there is nowhere in the book a specification of which program
behaviors may be prevented by type systems and which may not.  In
context, the word "certain" there is meant to make it clear that type
systems should be able to specifically identify which behaviors they
prevent, and not that there is some universal set.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation


Pascal Costanza <pc@p-cos.net> writes:

> Matthias Blume wrote:
>> "Rob Thorpe" <robert.thorpe@antenova.com> writes:
>>
>>> I don't think dynamic typing is that nebulous.  I remember this being
>>> discussed elsewhere some time ago, I'll post the same reply I did then
>>> ..
>>>
>>>
>>> A language is statically typed if a variable has a property - called
>>> its type - attached to it, and given its type it can only represent
>>> values defined by a certain class.
>> By this definition, all languages are statically typed (by making
>> that
>> "certain class" the set of all values).  Moreover, this "definition",
>> when read the way you probably wanted it to be read, requires some
>> considerable stretch to accommodate existing static type systems such
>> as F_\omega.
>> Perhaps better: A language is statically typed if its definition
>> includes (or even better: is based on) a static type system, i.e., a
>> static semantics with typing judgments derivable by typing rules.
>> Usually typing judgments associate program phrases ("expressions") with
>> types given a typing environment.
>
> How does your definition exclude the trivial type system in which the
> only typing judgment states that every expression is acceptable?

It does not.


Chris Smith wrote:
>
> Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
> interject my plea not to abuse the word "type" with a phrase like
> "dynamically typed".

Allow me to strenuously object.  The static typing community has its
own set of terminology and that's fine.  However, we Lisp hackers are
not used to this terminology.  It confuses us.  *We* know what we mean
by 'dynamically typed', and we suspect *you* do, too.

> This cleaner terminology eliminates a lot of confusion.

Hah!  Look at the archives.

> This isn't just a matter of preference in terminology.

No?

>  If types DON'T mean a compile-time method for proving the
> absence of certain program behaviors, then they don't mean anything at
> all.

Nonsense.



Joe Marshall <eval.apply@gmail.com> wrote:
>
> Chris Smith wrote:
> >
> > Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
> > interject my plea not to abuse the word "type" with a phrase like
> > "dynamically typed".
>
> Allow me to strenuously object.  The static typing community has its
> own set of terminology and that's fine.  However, we Lisp hackers are
> not used to this terminology.  It confuses us.  *We* know what we mean
> by 'dynamically typed', and we suspect *you* do, too.

I know what you mean by types in LISP.  The phrase "dynamically typed,"
though, was explicitly introduced as a counterpart to "statically
typed" in order to imply (falsely) that the word "typed" has related
meanings in those two cases.  Nevertheless, I did not really object,
since it's long since passed into common usage, until Torben attempted
to give what I believe are rather meaningless definitions to those
words, in terms of some mythical thing called "type violations" that he
seems to believe exist apart from any specific type systems.
(Otherwise, how could you define kinds of type systems in terms of when
they catch type violations?)

> > This cleaner terminology eliminates a lot of confusion.
>
> Hah!  Look at the archives.

I'm not sure what you mean here.  You would like me to look at the
archives of which of the five groups that are part of this conversation?
In any case, the confusion I'm referring to pertains to comparison of
languages, and it's already been demonstrated once in the half-dozen or
so responses to this branch of this thread.

> >  If types DON'T mean a compile-time method for proving the
> > absence of certain program behaviors, then they don't mean anything at
> > all.
>
> Nonsense.

Please accept my apologies for not making the context clear.  I tried to
clarify, in my response to Pascal, that I don't mean that the word
"type" can't have any possible meaning except for the one from
programming language type theory.  I should modify my statement as
follows:

An attempt to generalize the definition of "type" from programming
language type theory to eliminate the requirement that they are
syntactic in nature yields something meaningless.  Any concept of
"type" that is not syntactic is a completely different thing from
static types.

Basically, I start objecting when someone starts comparing "statically
typed" and "dynamically typed" as if they were both varieties of some
general concept called "typed".  They aren't.  Furthermore, these two
phrases were invented under the misconception that they are.  If you
mean something else by types, such as the idea that a value has a tag
indicating its range of possible values, then I tend to think it would
be less confusing to just say "type" and then clarify the meaning when it
comes into doubt, rather than adopting language that implies that those
types are somehow related to types from type theory.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation


Chris Smith wrote:
>
> Basically, I start objecting when someone starts comparing "statically
> typed" and "dynamically typed" as if they were both varieties of some
> general concept called "typed".  They aren't.  Furthermore, these two
> phrases were invented under the misconception that they are.  If you
> mean something else by types, such as the idea that a value has a tag
> indicating its range of possible values, then I tend to think it would
> be less confusing to just say "type" and then clarify the meaning when it
> comes into doubt, rather than adopting language that implies that those
> types are somehow related to types from type theory.

While I am quite sympathetic to this point, I have to say that
this horse left the barn quite some time ago.

Marshall

PS. Hi Chris!



Marshall <marshall.spight@gmail.com> wrote:
> While I am quite sympathetic to this point, I have to say that
> this horse left the barn quite some time ago.

I don't think so.  Perhaps it's futile to go scouring the world for uses
of the phrase "dynamic type" and eliminating them.   It's not useless to
point out when the term is used in a particularly confusing way, though,
as when it's implied that there is some class of "type errors" that is
strictly a subset of the class of "errors".  Terminology is often
confused for historical reasons, but incorrect statements eventually get
corrected.

> PS. Hi Chris!

Hi!  Where are you posting from these days?

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation


On Mon, 19 Jun 2006 05:15:13 +0200, Pascal Bourguignon wrote:

> Have a look at closer-weak at:
> http://www.informatimago.com/develop/lisp/index.html#clext

Thanks!

--
"You do not really understand something unless you can
explain it to your grandmother." — Albert Einstein.



Yet Another Dan sez:

> .... Requiring an array index to be an integer is considered a typing
> problem because it can be checked based on only the variable itself,
> whereas checking whether it's in bounds requires knowledge about the array.

You mean like
subtype MyArrayIndexType is INTEGER range 7 .. 11;
type MyArrayType is array (MyArrayIndexType) of MyElementType;

Dima
--
We're sysadmins. Sanity happens to other people.                  -- Chris King


Chris Smith wrote:
> Marshall <marshall.spight@gmail.com> wrote:
> > While I am quite sympathetic to this point, I have to say that
> > this horse left the barn quite some time ago.
>
> I don't think so.  Perhaps it's futile to go scouring the world for uses
> of the phrase "dynamic type" and eliminating them.   It's not useless to
> point out when the term is used in a particularly confusing way, though,
> as when it's implied that there is some class of "type errors" that is
> strictly a subset of the class of "errors".  Terminology is often
> confused for historical reasons, but incorrect statements eventually get
> corrected.

That's fair.

One thing that is frustrating to me is that I really want to build
an understanding of what dynamic typing is and what its
advantages are, but it's difficult to have a productive discussion
on the static vs. dynamic topic. Late binding of every function
invocation: how does that work, and what are the implications of that?

I have come to believe that the two actually represent
very different ways of thinking about programming. Which
goes a long way to explaining why there are such difficulties
communicating. Each side tries to describe to the other how
the tools and systems they use facilitate doing tasks that
don't exist in the other side's mental model.

> > PS. Hi Chris!
>
> Hi!  Where are you posting from these days?

I'm mostly on comp.databases.theory, but I also lurk
on comp.lang.functional, which is an awesome group, and
if you're reading Pierce then you might like it too.

Marshall



Chris Smith wrote:
> Marshall <marshall.spight@gmail.com> wrote:
> > While I am quite sympathetic to this point, I have to say that
> > this horse left the barn quite some time ago.
>
> I don't think so.  Perhaps it's futile to go scouring the world for uses
> of the phrase "dynamic type" and eliminating them.   It's not useless to
> point out when the term is used in a particularly confusing way, though,
> as when it's implied that there is some class of "type errors" that is
> strictly a subset of the class of "errors".  Terminology is often
> confused for historical reasons, but incorrect statements eventually get
> corrected.

Ok, so you (Chris Smith) object to the term "dynamic type".  From a
historical perspective it makes sense, particularly in the sense of
tags.  A dynamic type system involves associating tags with values and
expresses variant operations in terms of those tags, with certain
disallowed combinations checked (dynamically) at run-time.  A static
type system eliminates some set of tags on values by syntactic
analysis of annotations (types) written with or as part of the program
and detects some of the disallowed computations (statically) at compile
time.

Type errors are the errors caught by one's personal sense of which
annotations are expressible, computable, and useful.  Thus, each
person has a distinct sense of which errors can (and/or should) be
statically caught.

A personal example, I read news using Emacs, which as most readers in
these newsgroups will know is a dialect of lisp that includes
primitives to edit files.  Most of emacs is quite robust, perhaps due
to it being lisp.  However, some commands (reading news being one of
them) are exceptionally fragile and fail in the most undebuggable
ways, often just complaining about "nil being an invalid argument" (or
"stack overflow in regular expression matcher".)

This is particularly true, when I am composing lisp code.  If I write
some lisp code and make a typographical error, the code may appear to
run on some sample case and fail due to a path not explored in my
sample case when applied to real data.

I consider such an error to be a "type error" because I believe if I
used a languages that required more type annotations, the compiler
would have caught my typographical error (and no I'm not making a pun
on type and typographical).  Because I do a lot of this--it is easy
enough for me to conjure up a small lisp macro to make some edit, and
such macros are a primary tool in my toolkit--I wish I could make my
doing so more robust.  It would not bother me to have to type more.  I
simply make too many stupid errors to use emacs lisp as effectively as
I would like.

Now, this has nothing to do with real lisp programming, and so I do
not wish to offend those who do that.  However, I personally would
like a statically typed way of writing macros (and more importantly of
annotating some of the existing emacs code) to remove some of the
fragilities that I experience.  I'm not taking advantage of the
exploratory nature of lisp, except in the sense that the exploratory
nature has created lots of packages which mostly work most of the
time.

Now, I will return this group to its much more erudite discussion of
the issue.

Thank you,
-Chris

*****************************************************************************
Chris Clark                    Internet   :  compres@world.std.com
Compiler Resources, Inc.       Web Site   :  http://world.std.com/~compres
23 Bailey Rd                   voice      :  (508) 435-5016
Berlin, MA  01503  USA         fax        :  (978) 838-0263  (24 hours)
------------------------------------------------------------------------------



Chris Smith wrote:
> Joe Marshall <eval.apply@gmail.com> wrote:
> >
> > Chris Smith wrote:
> > >
> > > Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
> > > interject my plea not to abuse the word "type" with a phrase like
> > > "dynamically typed".
> >
> > Allow me to strenuously object.  The static typing community has its
> > own set of
> > terminology and that's fine.  However, we Lisp hackers are not used to
> > this terminology.
> > It confuses us.  *We* know what we mean by 'dynamically typed', and we
> > suspect *you* do, too.
>
> I know what you mean by types in LISP.  The phrase "dynamically typed,"
> though, was explicitly introduced as a counterpart to "statically
> typed" in order to imply (falsely) that the word "typed" has related
> meanings in those two cases.

They *do* have a related meaning.  Consider this code fragment:
(car "a string")

Obviously this code is 'wrong' in some way.  In static typing terms, we
could say that
we have a type error because the primitive procedure CAR doesn't
operate on strings.
Alternatively, we could say that since Lisp has one universal type (in
static type terms)
the code is correctly statically typed - that is, CAR is defined on all
input, but its definition is to raise a runtime exception when passed
a string.

But regardless of which way you want to look at it, CAR is *not* going
to perform its usual computation and it is *not* going to return a
value.  The reason behind this is that you cannot take the CAR of a
string.  A string is *not* a valid argument to CAR.  Ask anyone why and
they will tell you 'It's the wrong type.'

Both 'static typing' and 'dynamic typing' (in the colloquial sense) are
strategies to detect this sort of error.
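Joe's fragment can be sketched in Python (one of the groups on this
cross-post), where the check likewise happens only when the call is
evaluated; check_car here is a hypothetical stand-in for CAR, not a real
Lisp primitive:

```python
def check_car(x):
    # Hypothetical stand-in for Lisp's CAR: the argument's runtime tag
    # is inspected when the call is evaluated, not at compile time.
    if not isinstance(x, list):
        raise TypeError("car: wrong type, expected a list")
    return x[0]

print(check_car([1, 2, 3]))   # -> 1
# check_car("a string") raises TypeError, like (car "a string")
```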

> Nevertheless, I did not really object,
> since it's long since passed into common usage,

Exactly.  And you are far more likely to encounter this sort of usage
outside of a type theorist's convention.

> until Torben attempted
> to give what I believe are rather meaningless definitions to those
> words, in terms of some mythical thing called "type violations" that he
> seems to believe exist apart from any specific type systems.

It's hardly mythical.  (car "a string") is obviously an error and you
don't need a static type system to know that.

> > > This cleaner terminology eliminates a lot of confusion.
> >
> > Hah!  Look at the archives.
>
> I'm not sure what you mean here.  You would like me to look at the
> archives of which of the five groups that are part of this conversation?
> In any case, the confusion I'm referring to pertains to comparison of
> languages, and it's already been demonstrated once in the half-dozen or
> so responses to this branch of this thread.

I mean that this has been argued time and time again in comp.lang.lisp
and probably the other groups as well.  You may not like the fact that
we say that Lisp is dynamically typed, but *we* are not confused by
this usage.  In fact, we become rather confused when you say 'a
correctly typed program cannot go wrong at runtime' because we've seen
plenty of runtime errors from code that is 'correctly typed'.

> > >  If types DON'T mean a compile-time method for proving the
> > > absence of certain program behaviors, then they don't mean anything at
> > > all.
> >
> > Nonsense.
>
> Please accept my apologies for not making the context clear.  I tried to
> clarify, in my response to Pascal, that I don't mean that the word
> "type" can't have any possible meaning except for the one from
> programming language type theory.  I should modify my statement as
> follows:
>
>     An attempt to generalize the definition of "type" from programming
>     language type theory to eliminate the requirement that they are
>     syntactic in nature yields something meaningless.  Any concept of
>     "type" that is not syntactic is a completely different thing from
>     static types.

Agreed.  That is why there is the qualifier 'dynamic'.  This indicates
that it is a completely different thing from static types.

> Basically, I start objecting when someone starts comparing "statically
> typed" and "dynamically typed" as if they were both varieties of some
> general concept called "typed".  They aren't.

I disagree.  There is clearly a concept that there are different
varieties of data and they are not interchangable.  In some languages,
it is up to the programmer to ensure that mistakes in data usage do not
happen.  In other languages, the computer can detect such mistakes and
prevent them.  If this detection is performed by syntactic analysis
prior to running the program, it is static typing.  Some languages like
Lisp defer the detection until the program is run.  Call it what you
want, but here in comp.lang.lisp we tend to call it 'dynamic typing'.

>  Furthermore, these two
> phrases were invented under the misconception that that are.  If you
> mean something else by types, such as the idea that a value has a tag
> indicating its range of possible values, then I tend to think it would
> be less confusing to just say "type" and then clarify the meaning it
> comes into doubt, rather than adopting language that implies that those
> types are somehow related to types from type theory.

You may think it would be less confusing, but if you look at the
archives of comp.lang.lisp you would see that it is not.

We're all rubes here, so don't try to educate us with your
high-falutin' technical terms.



Joe Marshall wrote:
>
> They *do* have a related meaning.  Consider this code fragment:
> (car "a string")
> [...]
> Both 'static typing' and 'dynamic typing' (in the colloquial sense) are
> strategies to detect this sort of error.

The thing is though, that putting it that way makes it seems as
if the two approaches are doing the same exact thing, but
just at different times: runtime vs. compile time. But they're
not the same thing. Passing the static check at compile
time is universally quantifying the absence of the class
of error; passing the dynamic check at runtime is existentially
quantifying the absence of the error. A further difference is
the fact that in the dynamically typed language, the error is
found during the evaluation of the expression; in a statically
typed language, errors are found without attempting to evaluate
the expression.
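The quantifier difference can be seen in a small Python sketch (a
hypothetical example, not from the thread): one successful run only
witnesses the absence of the error for the inputs actually tried.

```python
def increment(x):
    return x + 1          # fine for numbers; a latent TypeError for strings

print(increment(1))       # this run succeeds: evidence about *this* input only
try:
    increment("oops")     # a different input still trips the runtime check
except TypeError:
    print("caught at evaluation time")
```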

I find everything about the differences between static and
dynamic to be frustratingly complex and subtle.

(To be clear, I do know that Joe understands these issues
quite well.)

So I kind of agree with Chris, insofar as I think the terminology
plays a role in obscuring rather than illuminating the differences.

On the other hand I agree with Joe in that:

> I mean that this has been argued time and time again in comp.lang.lisp
> and probably the other groups as well.  You may not like the fact that
> we say that Lisp is dynamically typed, but *we* are not confused by
> this usage.  In fact, we become rather confused when you say 'a
> correctly typed program cannot go wrong at runtime' because we've seen
> plenty of runtime errors from code that is 'correctly typed'.

Yes; as I said earlier, the horse has left the barn on this one.

The conversation I would *really* like to have is the one where we
discuss what all the differences are, functionally, between the two,
and what the implications of those differences are, without trying
to address which approach is "right" or "better", because those are
dependent on the problem domain anyway, and because I can
make up my own mind just fine about which one I prefer.

The comp.lang.functional and comp.lang.lisp people are probably
two of the smartest groups on usenet. (I do not consider myself
a member of either group.) You wouldn't *think* that conversation
would be *so* hard to have.

Marshall



Joe Marshall <eval.apply@gmail.com> wrote:
> They *do* have a related meaning.  Consider this code fragment:
> (car "a string")

My feeling is that this code is obviously wrong.  It is so obviously
wrong, in fact, that the majority of automated error detection systems,
if written for Lisp, would probably manage to identify it as wrong at
some point.  This includes the Lisp runtime.  So far, I haven't
mentioned types.

> A string is *not* a valid argument to CAR.  Ask anyone why and
> they will tell you 'It's the wrong type.'

Indeed, they will.  We're assuming, of course that they know a little
Lisp... otherwise, it may be entirely reasonable for someone to expect
that (car "a string") is 'a' and (cdr "a string") is " string"... but
I'll ignore that, even though I haven't yet convinced myself that it's
not relevant to why this is colloquially considered a type error.

I believe that in this sense, the 'anyone' actually means "type" in the
sense that you mean "type".  The fact that a static type system detects
this error is somewhat coincidental (except for the fact that, again,
any general error-detection scheme worth its salt probably detects this
error), and orthogonal to whether it is considered a type error by our
hypothetical 'anyone'.

> Both 'static typing' and 'dynamic typing' (in the colloquial sense) are
> strategies to detect this sort of error.

I will repeat that static typing is a strategy for detecting errors in
general, on the basis of tractable syntactic methods.  There are some
types of errors that are easier to detect in such a system than
others... but several examples have been given of problems solved by
static type systems that are not of the colloquial "It's the wrong
type" variety that you mention here.  The examples so far have included
detecting division by zero, or array bounds checking.  Other type
systems can check dimensionality (correct units).

Another particularly interesting example may be the following from
Ocaml:

let my_sqrt x = if x < 0.0 then None else Some(sqrt(x));;

Then, if I attempt to use my_sqrt in a context that requires a float,
the compiler will complain about a type violation, since the type of the
expression is "float option".  So this is a type error *in Ocaml*, but
it's not the kind of thing that gets intuitively classified as a type
error.  In fact, it's roughly equivalent to a NullPointerException at
runtime in Java, and few Java programmers would consider a
NullPointerException to be somehow "actually a type error" that the
compiler just doesn't catch.  In this case, when the error appears in
Ocaml, it appears to be "obviously" a type error, but that's only
because the type system was designed to catch some class of program
errors, of which this is a member.
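A rough rendering of the Ocaml example in Python, checked with a static
tool such as mypy (the tooling is an assumption, not part of the original
post): Optional[float] plays the role of "float option".

```python
import math
from typing import Optional

def my_sqrt(x: float) -> Optional[float]:
    # None stands in for Ocaml's None; Some(v) is just v.
    return None if x < 0.0 else math.sqrt(x)

# A checker like mypy rejects using the result where a plain float is
# required, e.g.  y: float = my_sqrt(2.0)  -- Optional[float] is not float.
r = my_sqrt(2.0)
if r is not None:       # explicit narrowing makes the use well-typed
    y: float = r
    print(y)
```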

> It's hardly mythical.  (car "a string") is obviously an error and you
> don't need a static type system to know that.

Sure.  The question is whether it means much to say that it's a "type
error".  So far, I'd agree with either of two statements, depending on
the usage of the word "type":

a) Yes, it means something, but Torben's definition of a static type
system was wrong, because static type systems are not specifically
looking for type errors.

or

b) No, "type error" just means "error that can be caught by the type
system", so it is circular and meaningless to use the phrase in defining
a kind of type system.

> I mean that this has been argued time and time again in comp.lang.lisp
> and probably the other groups as well.

My apologies, then.  It has not been discussed so often in any newsgroup
that I followed up until now, though Marshall has now convinced me to
read comp.lang.functional, so I might see these endless discussions from
now on.

> In fact, we become rather confused when you say 'a
> correctly typed program cannot go wrong at runtime' because we've seen
> plenty of runtime errors from code that is 'correctly typed'.

Actually, I become a little confused by that as well.  I suppose it
would be true of a "perfect" static type system, but I haven't seen one
of those yet.  (Someone did email me earlier today to point out that the
type system of a specification language called LOTOS supposedly is
perfect in that sense, that every correctly typed program is also
correct, but I've yet to verify this for myself.  It seems rather
difficult to believe.)

Unless I suddenly have some kind of realization in the future about the
feasibility of a perfect type system, I probably won't make that
statement that you say confuses you.

> >     An attempt to generalize the definition of "type" from programming
> >     language type theory to eliminate the requirement that they are
> >     syntactic in nature yields something meaningless.  Any concept of
> >     "type" that is not syntactic is a completely different thing from
> >     static types.
>
> Agreed.  That is why there is the qualifier 'dynamic'.  This indicates
> that it is a completely different thing from static types.

discussion.  I'm not sure we do agree, though, because I doubt we'd be
right here in this conversation if we did.

This aspect of being a "completely different thing" is why I objected to
Torben's statement of the form: static type systems detect type
violations at compile time, whereas dynamic type systems detect type
violations at runtime.  The problem with that statement is that "type
violations" means different things in the first and second half, and
that's obscured by the form of the statement.  It would perhaps be
clearer to say something like:

Static type systems detect some bugs at compile time, whereas
dynamic type systems detect type violations at runtime.

Here's one interesting consequence of the change.  It is easy to
recognize that static and dynamic type systems are largely orthogonal to
each other in the sense above.  Java, for example (since that's one of
the newsgroups on the distribution list for this thread), restricting
the field of view to reference types for simplicity's sake, clearly has
both a very well-developed static type system, and a somewhat well-
developed dynamic type system.  There are dynamic "type" errors that
pass the compiler and are caught by the runtime; and there are errors
that are caught by the static type system.  There is indeed considerable
overlap involved, but nevertheless, neither is made redundant.  In fact,
one way of understanding the headaches of Java 1.5 generics is to note
that the two different meanings of "type errors" are no longer in
agreement with each other!

> We're all rubes here, so don't try to educate us with your
> high-falutin' technical terms.

That's not my intention.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation


I <cdsmith@twu.net> wrote:
>     Static type systems detect some bugs at compile time, whereas
>     dynamic type systems detect type violations at runtime.

PS: In order to satisfy the Python group (among others not on the cross-
post list), we'd need to include "duck typing," which fits neither of
the two definitions above.  We'd probably need to modify the definition
of dynamic type systems, since most sources tend to classify duck typing
as a dynamic type system.  It's getting rather late, though, and I don't
intend to think about how to do that.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation


In article <1150765102.867144.12590@h76g2000cwa.googlegroups.com>,
"Marshall" <marshall.spight@gmail.com> wrote:

> The conversation I would *really* like to have is the one where we
> discuss what all the differences are, functionally, between the two,
> and what the implications of those differences are, without trying
> to address which approach is "right" or "better", because those are
> dependent on the problem domain anyway, and because I can
> make up my own mind just fine about which one I prefer.
>
> The comp.lang.functional and comp.lang.lisp people are probably
> two of the smartest groups on usenet. (I do not consider myself
> a member of either group.) You wouldn't *think* that conversation
> would be *so* hard to have.

It's hard to have because there's very little to say, which leaves the
academics without enough to do to try to justify their existence.  This
is the long and the short of it:

1.  There are mismatches between the desired behavior of code and its
actual behavior.  Such mismatches are generally referred to as "bugs" or
"errors" (and, occasionally, "features").

2.  Some of those mismatches can be detected before the program runs.
Some can be detected while the program runs.  And some cannot be
detected until after the program has finished running.

3.  The various techniques for detecting those mismatches impose varying
degrees of burden upon the programmer and the user.

That's it.  Everything else, including but not limited to quibbling over
the meaning of the word "type", is nothing but postmodernist claptrap.

IMHO of course.

rg


In comp.lang.functional Chris Smith <cdsmith@twu.net> wrote:
[...]
> Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
> interject my plea not to abuse the word "type" with a phrase like
> "dynamically typed".  If anyone considers "untyped" to be pejorative,
> as some people apparently do, then I'll note that another common term is
> "type-free," which is marketing-approved but doesn't carry the
> misleading connotations of "dynamically typed."  We are quickly losing
> any rational meaning whatsoever to the word "type," and that's quite a
> shame.
[...]

FWIW, I agree and have argued similarly on many occasions (both on the
net and person-to-person).  The widely used terminology (statically /
dynamically typed, weakly / strongly typed) is extremely confusing to
beginners and even to many with considerable practical experience.

-Vesa Karvonen


Chris Smith wrote:
> Rob Thorpe <robert.thorpe@antenova.com> wrote:
> > A language is latently typed if a value has a property - called its
> > type - attached to it, and given its type it can only represent values
> > defined by a certain class.
>
> I'm assuming you mean "class" in the general sense, rather than in the
> sense of a specific construct of some subset of OO programming
> languages.

Yes.

> Now I define a class of values called "correct" values.  I define these
> to be those values for which my program will produce acceptable results.
> Clearly there is a defined class of such values: (1) they are
> immediately defined by the program's specification for those lines of
> code that produce output; (2) if they are defined for the values that
> result from any expression, then they are defined for the values that
> are used by that expression; and (3) for any value for which correctness
> is not defined by (1) or (2), we may define its "correct" values as the
> class of all possible values.

>  Now, by your definition, any language
> which provides checking of that property of correctness for values is
> latently typed.

No, that isn't what I said.  What I said was:
"A language is latently typed if a value has a property - called its
type - attached to it, and given its type it can only represent values
defined by a certain class."

I said nothing about the values producing correct outputs, or being
correct inputs.  I only said that they have types.

What I mean is that each value in the language is defined by two pieces
of information, its contents X and its type T.

>  Of course, there are no languages that assign this
> specific class of values; but ANY kind of correctness checking on values
> that a language does (if it's useful at all) is a subset of the perfect
> correctness checking system above.  Apparently, we should call all such
> systems "latent type systems".

No, I'm speaking about type checking not correctness.

>  Can you point out a language that is not
> latently typed?

Easy, any statically typed language is not latently typed.  Values have
no type associated with them, instead variables have types.

If I tell a C program to print out a string but give it a number, it
will give an error telling me that the types mismatch at the point where
the variable holding the number is assigned to a variable that must
hold a string.  Similarly, if I have a lisp function that prints out a
string it will also fail when given a number, but it will fail at a
different point: it will fail when the type of the value is examined
and found to be incorrect.

> I'm not trying to poke holes in your definition for fun.  I am proposing
> that there is no fundamental distinction between the kinds of problems
> that are "type problems" and those that are not.  Types are not a class
> of problems; they are a class of solutions.

Exactly.  Which is why they are only tangentially associated with
correctness.
Typing is a set of rules put in place to aid correctness, but it is not
a system that attempts to create correctness itself.

>  Languages that solve
> problems in ways that don't assign types to variables are not typed
> languages, even if those same problems may have been originally solved
> by type systems.

Well, you can think of things that way.  But to the rest of the
computing world languages that don't assign types to variables but do
assign them to values are latently typed.

> > Untyped and type-free mean something else: they mean no type checking
> > is done.
>
> Hence, they don't exist, and the definitions being used here are rather
> pointless.

No they aren't, types of data exist even if there is no system in place
to check them.  Ask an assembly programmer whether his programs have
strings and integers in them and he will probably tell you that they do.
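The assembly programmer's situation can be imitated with Python's struct
module (a hypothetical illustration): the same bytes admit several
readings, and nothing in the data itself says which reading is correct.

```python
import struct

# Four bytes with no attached tag: the "type" lives only in how we read them.
raw = struct.pack("<I", 0x3F800000)
as_int = struct.unpack("<i", raw)[0]    # read as a signed 32-bit integer
as_float = struct.unpack("<f", raw)[0]  # read as an IEEE-754 float
print(as_int, as_float)                 # -> 1065353216 1.0
```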



Chris F Clark wrote:
>
> A static
> type system eliminates some set of tags on values by syntactic
> analysis of annotations (types) written with or as part of the program
> and detects some of the disallowed computations (statically) at compile
> time.

Explicit annotations are not a necessary ingredient of a type system,
nor is "eliminating tags" very relevant to its function.

- Andreas


Marshall wrote:

> The conversation I would *really* like to have is the one where we
> discuss what all the differences are, functionally, between the two,
> and what the implications of those differences are, without trying
> to address which approach is "right" or "better", because those are
> dependent on the problem domain anyway, and because I can
> make up my own mind just fine about which one I prefer.

My current take on this is that static typing and dynamic typing are
incompatible, at least in their "extreme" variants.

The simplest examples I have found are this:

- In a statically typed language, you can have variables that contain
only first-class functions at runtime that are guaranteed to have a
specific return type. Other values are rejected, and the rejection
happens at compile time.

In dynamically typed languages, this is impossible because you can never
be sure about the types of return values - you cannot predict the
future. This can at best be approximated.

- In a dynamically typed language, you can run programs successfully
that are not acceptable by static type systems. Here is an example in
Common Lisp:

; A class "person" with no superclasses and with the only field "name":
(defclass person ()
  (name))

; A test program:
(defun test ()
  (let ((p (make-instance 'person)))
    (eval (read))
    (slot-value p 'address)))

(slot-value p 'address) is an attempt to access the field 'address in
the object p. In many languages, the notation for this is p.address.

Although the class definition for person doesn't mention the field
address, the call to (eval (read)) allows the user to change the
definition of the class person and update its existing instances.
Therefore at runtime, the call to (slot-value p 'address) has a chance to
succeed.

(Even without the call to (eval (read)), in Common Lisp the call to
(slot-value p 'address) would raise an exception which gives the user a
chance to fix things and continue from the point in the control flow
where the exception was raised.)

I cannot imagine a static type system which has a chance to predict that
this program can successfully run without essentially accepting all
kinds of programs.
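A rough Python analogue of the Lisp example (the class and attribute
names are hypothetical): the class can be altered after the instance
exists, so the once-failing lookup succeeds.

```python
class Person:
    # Analogue of (defclass person () (name)): only a name field.
    def __init__(self):
        self.name = None

p = Person()
# Here p.address would raise AttributeError -- the slot-missing error.
# But the class can be changed while p is alive (say, via code fed to an
# (eval (read))-style prompt), and then the very same lookup succeeds:
Person.address = "unknown"
print(p.address)                        # -> unknown
```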

At least, development environments for languages like Smalltalk, Common
Lisp, Java, etc., make use of such program updates to improve
edit-compile-test cycles. However, it is also possible (and done in
practice) to use such updates to add new features to deployed systems.

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/


Rob Thorpe wrote:
>
> No, that isn't what I said.  What I said was:
> "A language is latently typed if a value has a property - called its
> type - attached to it, and given its type it can only represent values
> defined by a certain class."

"it [= a value] [...] can [...] represent values"?

> Easy, any statically typed language is not latently typed.  Values have
> no type associated with them, instead variables have types.

A (static) type system assigns types to (all) *expressions*. This
includes values as well as variables.

Don't confuse type assignment with type annotation (which many
mainstream languages enforce for, but also only allow for, variable
declarations).

- Andreas


Andreas Rossberg wrote:
> Rob Thorpe wrote:
> >
> > No, that isn't what I said.  What I said was:
> > "A language is latently typed if a value has a property - called its
> > type - attached to it, and given its type it can only represent values
> > defined by a certain class."
>
> "it [= a value] [...] can [...] represent values"?

???

> > Easy, any statically typed language is not latently typed.  Values have
> > no type associated with them, instead variables have types.
>
> A (static) type system assigns types to (all) *expressions*.

That's right most of the time yes, I probably should have said
expressions.  Though I can think of static typed languages where the
resulting type of an expression depends on the type of the variable it
is being assigned to.

> This includes values as well as variables.

Well I haven't programmed in any statically typed language where values
have types themselves.  Normally the language only knows that value
Z is of type Q because it's in a variable of type Q, or as you mention
an expression of type Q.  There are many things that could be
considered more than one type.  The integer 45 could be unsigned 45 or
signed 45 or even float 45 depending on the variable it's in, but
without context it doesn't have a precise type.

It does imply the type to some extent, though; you could imagine a
language where every value has a precise type.  So, you've found a hole
in my definition.

Maybe a better definition would be:

if (variables have types || expressions have types)
    <lang is statically typed>
else if (values have types)
    <lang is latently/dynamically typed>
else
    <lang is untyped>

That seems to fit usage quite well.

Even then there are problems.  Perl has static types for arrays, hashes
and scalars.  But scalars themselves can be integers, strings, etc.

> Don't confuse type assignment with type annotation (which many
> mainstream languages enforce for, but also only allow for, variable
> declarations).

Point taken.



Andreas Rossberg <rossberg@ps.uni-sb.de> writes:

>> "A language is latently typed if a value has a property - called its
>> type - attached to it, and given its type it can only represent values
>> defined by a certain class."

I thought the point was to separate the (bitwise) representation of a
value from its interpretation (which is its type).  In a static
system, the interpretation is derived from context, in a dynamic
system values must carry some form of tags specifying which
interpretation to use.

I think this applies - conceptually, at least - also to expressions?

My impression is that dynamic typers tend to include more general
properties in their concept of types (div by zero, sqrt of negatives,
etc).

-k
--
If I haven't seen further, it is by standing in the footprints of giants


Rob Thorpe wrote:
>>
>>>No, that isn't what I said.  What I said was:
>>>"A language is latently typed if a value has a property - called its
>>>type - attached to it, and given its type it can only represent values
>>>defined by a certain class."
>>
>>"it [= a value] [...] can [...] represent values"?
>
> ???

I just quoted, in condensed form, what you said above: namely, that a
value represents values - which I find a strange and circular definition.

>>A (static) type system assigns types to (all) *expressions*.
>
> That's right most of the time yes, I probably should have said
> expressions.  Though I can think of static typed languages where the
> resulting type of an expression depends on the type of the variable it
> is being assigned to.

Yes, but that's no contradiction. A type system does not necessarily
assign a unique type to every expression (consider subtyping,
polymorphism, etc.).

> Well I haven't programmed in any statically typed language where values
> have types themselves.

They all have - the whole purpose of a type system is to ensure that any
expression of type T always evaluates to a value of type T. So when you
look at type systems formally then you certainly have to assign types to
values, otherwise you couldn't prove any useful property about those
systems (esp. soundness).

- Andreas


Rob Thorpe <robert.thorpe@antenova.com> wrote:
>

Since you wrote that, I've come to understand that you meant something
specific by "property" which I didn't understand at first.  From my
earlier perspective, it was obvious that correctness was a property of a
value.  I now realize that you meant a property that's explicitly
associated with the value and plays a role in determining the behavior
of the language.  Sorry for the confusion.

> No, that isn't what I said.  What I said was:
> "A language is latently typed if a value has a property - called it's
> type - attached to it, and given it's type it can only represent values
> defined by a certain class."

No, to answer Andreas' concern, you would only need to say:

... if a value has a property - called it's type - attached to it,
and the language semantics guarantees that only values defined by a
certain class may have that same property attached.

> Easy, any statically typed language is not latently typed.

I'm actually not sure I agree with this at all.  I believe that
reference values in Java may be said to be latently typed.  This is the
case because each reference value (except null, which may be tested
separately) has an explicit property (called its "class", but surely the
word doesn't make any difference), such that the language semantics
guarantees that only a certain class of values may have that same
property, and the property is used to determine behavior of the language
in many cases (for example, in the case of type-based polymorphism, or
use of Java's instanceof operator).  Practically all class-based OO
languages are subject to similar consideration, as it turns out.

I'm unsure whether to consider explicitly stored array lengths, which
are present in most statically typed languages, to be part of a "type"
in this sense or not.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation


Andreas Rossberg wrote:
> Rob Thorpe wrote:
>>>
>>>> No, that isn't what I said.  What I said was:
>>>> "A language is latently typed if a value has a property - called it's
>>>> type - attached to it, and given it's type it can only represent values
>>>> defined by a certain class."
>>>
>>> "it [= a value] [...] can [...] represent values"?
>>
>> ???
>
> I just quoted, in condensed form, what you said above: namely, that a
> value represents values - which I find a strange and circular definition.
>

But you left out the most significant part: "given it's type it can only
represent values *defined by a certain class*" (my emphasis). In C-ish
notation:

unsigned int x;

means that x can only represent integer elements of
the set (class) of values [0, MAX_INT]. Negative numbers and non-integer
numbers are excluded, as are all sorts of other things.

You over-condensed.

DS

NB. This is not a comment on static, latent, derived or other typing,
merely on summarization.



David Squire wrote:
> Andreas Rossberg wrote:
>
>> Rob Thorpe wrote:
>>
>>>>
>>>>> No, that isn't what I said.  What I said was:
>>>>> "A language is latently typed if a value has a property - called it's
>>>>> type - attached to it, and given it's type it can only represent
>>>>> values
>>>>> defined by a certain class."
>>>>
>>>>
>>>> "it [= a value] [...] can [...] represent values"?
>>>
>>>
>>> ???
>>
>> I just quoted, in condensed form, what you said above: namely, that a
>> value represents values - which I find a strange and circular definition.
>
> But you left out the most significant part: "given it's type it can only
> represent values *defined by a certain class*" (my emphasis).

That qualification does not remove the circularity from the definition.

> In C-ish notation:
>
>     unsigned int x;
>
> means that x can only represent elements that are integers elements of
> the set (class) of values [0, MAX_INT]. Negative numbers and non-integer
> numbers are excluded, as are all sorts of other things.

I don't see how that example is relevant, since the above definition
does not mention variables.

- Andreas


Andreas Rossberg wrote:
> Rob Thorpe wrote:
> >>
> >>>No, that isn't what I said.  What I said was:
> >>>"A language is latently typed if a value has a property - called it's
> >>>type - attached to it, and given it's type it can only represent values
> >>>defined by a certain class."
> >>
> >>"it [= a value] [...] can [...] represent values"?
> >
> > ???
>
> I just quoted, in condensed form, what you said above: namely, that a
> value represents values - which I find a strange and circular definition.

Yes, but the point is, as the other poster mentioned: values defined by
a class.
For example, in lisp:
"xyz" is a string, #(1 2 3) is an array, '(1 2 3) is a list, 45 is a
fixed-point number.
Each item that can be stored has a type, no item can be stored without
one.  The types can be tested.  Most dynamic typed languages are like
that.

Compare this to an untyped language where types cannot generally be
tested.

> >>A (static) type system assigns types to (all) *expressions*.
> >
> > That's right most of the time yes, I probably should have said
> > expressions.  Though I can think of static typed languages where the
> > resulting type of an expression depends on the type of the variable it
> > is being assigned to.
>
> Yes, but that's no contradiction. A type system does not necessarily
> assign a unique type to every expression (consider subtyping,
> polymorphism, etc.).

That's a fair way of looking at it.

> > Well I haven't programmed in any statically typed language where values
> > have types themselves.
>
> They all have - the whole purpose of a type system is to ensure that any
> expression of type T always evaluates to a value of type T.

But it only guarantees this because the variables themselves have a
type; the values themselves do not.  Most of the time the fact that the
variable they are held in has a type implies that the value does too.
But the value itself has no type: in a C program, for example, I can take
the value from some variable and recast it in any way I feel, and the
language cannot correct any errors I make because there is no
information in the variable to indicate what type it is.

> So when you
> look at type systems formally then you certainly have to assign types to
> values, otherwise you couldn't prove any useful property about those
> systems (esp. soundness).

Yes, but indirectly.



"Rob Thorpe" <robert.thorpe@antenova.com> writes:

> But it only gaurantees this because the variables themselves have a
> type, the values themselves do not.

I think statements like this are confusing, because there are
different interpretations of what a "value" is.  I would say that the
integer '4' is a value, and that it has type Integer (for instance).
This value is different from 4 the Int16, or 4 the double-precision
floating point number.  From this viewpoint, all values in statically
typed languages have types, but I think you use 'value' to denote the
representation of a datum in memory, which is a different thing.

-k
--
If I haven't seen further, it is by standing in the footprints of giants


Rob Thorpe wrote:
>Andreas Rossberg wrote:
>>Rob Thorpe wrote:
>>
>>>>>"A language is latently typed if a value has a property - called it's
>>>>>type - attached to it, and given it's type it can only represent values
>>>>>defined by a certain class."
>>>>
>>>>"it [= a value] [...] can [...] represent values"?
>>>
>>>???
>>
>>I just quoted, in condensed form, what you said above: namely, that a
>>value represents values - which I find a strange and circular definition.
>
> Yes, but the point is, as the other poster mentioned: values defined by
> a class.

I'm sorry, but I still think that the definition makes little sense.
Obviously, a value simply *is* a value, it does not "represent" one, or
several ones, regardless how you qualify that statement.

>>>Well I haven't programmed in any statically typed language where values
>>>have types themselves.
>>
>>They all have - the whole purpose of a type system is to ensure that any
>>expression of type T always evaluates to a value of type T.
>
> But it only gaurantees this because the variables themselves have a
> type,

No, variables are insignificant in this context. You can consider a
language without variables at all (such languages exist, and they can
even be Turing-complete) and still have evaluation, values, and a
non-trivial type system.

> But the value itself has no type

You mean that the type of the value is not represented at runtime? True,
but that's simply because the type system is static. It's not the same
as saying it has no type.

> in a C program for example I can take
> the value from some variable and recast it in any way I feel and the
> language cannot correct any errors I make because their is no
> information in the variable to indicate what type it is.

Nothing in the C spec precludes an implementation from doing just that.
The problem with C rather is that its semantics is totally
underspecified. In any case, C is about the worst example to use when
discussing type systems. For starters, it is totally unsound - which is

- Andreas


Pascal Costanza <pc@p-cos.net> writes:

> - In a dynamically typed language, you can run programs successfully
>   that are not acceptable by static type systems.

This statement is false.

For every program that can run successfully to completion there exists
a static type system which accepts that program.  Moreover, there is
at least one static type system that accepts all such programs.

What you mean is that for static type systems that are restrictive
enough to be useful in practice there always exist programs which
(after type erasure in an untyped setting, i.e., by switching to a
different language) would run to completion, but which are rejected by
the static type system.

By the way, the parenthetical remark is important: If a language's
definition is based on a static type system, then there are *no*
programs in that language which are rejected by its type checker.
That's trivially so because strings that do not type-check are simply
not considered programs.

> Here is an example in Common Lisp:
>
> ; A class "person" with no superclasses and with the only field "name":
> (defclass person ()
>    (name))
>
> ; A test program:
> (defun test ()
>    (let ((p (make-instance 'person)))
>      (eval (read))
>      (slot-value p 'address)))
>
> (slot-value p 'address) is an attempt to access the field 'address in
> the object p. In many languages, the notation for this is p.address.
>
> Although the class definition for person doesn't mention the field
> address, the call to (eval (read)) allows the user to change the
> definition of the class person and update its existing
> instances. Therefore at runtime, the call to (slot-value p 'address)
> has a chance to succeed.

I am quite comfortable with the thought that this sort of evil would
get rejected by a statically typed language. :-)


David Squire <David.Squire@no.spam.from.here.au> writes:

> Andreas Rossberg wrote:
>> Rob Thorpe wrote:
>>>>
>>>>> No, that isn't what I said.  What I said was:
>>>>> "A language is latently typed if a value has a property - called it's
>>>>> type - attached to it, and given it's type it can only represent values
>>>>> defined by a certain class."
>>>>
>>>> "it [= a value] [...] can [...] represent values"?
>>>
>>> ???
>> I just quoted, in condensed form, what you said above: namely, that
>> a value represents values - which I find a strange and circular
>> definition.
>>
>
> But you left out the most significant part: "given it's type it can
> only represent values *defined by a certain class*" (my emphasis). In
> C-ish notation:
>
>      unsigned int x;
>
> means that x can only represent elements that are integers elements of
> the set (class) of values [0, MAX_INT]. Negative numbers and
> non-integer numbers are excluded, as are all sorts of other things.

This x is not a value.  It is a name of a memory location.

> You over-condensed.

Andreas condensed correctly.


Matthias Blume wrote:
> David Squire <David.Squire@no.spam.from.here.au> writes:
>
>> Andreas Rossberg wrote:
>>> Rob Thorpe wrote:
>>>>>> No, that isn't what I said.  What I said was:
>>>>>> "A language is latently typed if a value has a property - called it's
>>>>>> type - attached to it, and given it's type it can only represent values
>>>>>> defined by a certain class."
>>>>> "it [= a value] [...] can [...] represent values"?
>>>> ???
>>> I just quoted, in condensed form, what you said above: namely, that
>>> a value represents values - which I find a strange and circular
>>> definition.
>>>
>> But you left out the most significant part: "given it's type it can
>> only represent values *defined by a certain class*" (my emphasis). In
>> C-ish notation:
>>
>>      unsigned int x;
>>
>> means that x can only represent elements that are integers elements of
>> the set (class) of values [0, MAX_INT]. Negative numbers and
>> non-integer numbers are excluded, as are all sorts of other things.
>
> This x is not a value.  It is a name of a memory location.
>
>> You over-condensed.
>
> Andreas condensed correctly.

I should have stayed out of this. I had not realised that it had
degenerated to point-scoring off someone typing "value" when it is clear
from context that he meant "variable".

Bye.

DS


Rob Thorpe wrote:
> Yes, but the point is, as the other poster mentioned: values defined by
> a class.

A value can only represent one value, right? Or can a value have
multiple values?

> For example, in lisp:
> "xyz" is a string,

"xyz" cannot represent values from the class of strings. It can only
represent one value.

I think that's what the others are getting at.

>>They all have - the whole purpose of a type system is to ensure that any
>>expression of type T always evaluates to a value of type T.
>
> But it only gaurantees this because the variables themselves have a
> type, the values themselves do not.

Sure they do. 23.5e3 is a "real" in Pascal, and there's no variable there.

("hello" % "there") is illegal in most languages, because the modulo
operator doesn't apply to strings. How could this be determined at
compile time if "hello" and "there" don't have types?

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.


"Rob Thorpe" <robert.thorpe@antenova.com> writes:

> Andreas Rossberg wrote:
>> Rob Thorpe wrote:
>> >>
>> >>>No, that isn't what I said.  What I said was:
>> >>>"A language is latently typed if a value has a property - called it's
>> >>>type - attached to it, and given it's type it can only represent values
>> >>>defined by a certain class."
>> >>
>> >>"it [= a value] [...] can [...] represent values"?
>> >
>> > ???
>>
>> I just quoted, in condensed form, what you said above: namely, that a
>> value represents values - which I find a strange and circular definition.
>
> Yes, but the point is, as the other poster mentioned: values defined by
> a class.
> For example, in lisp:
> "xyz" is a string, #(1 2 3) is an array, '(1 2 3) is a list, 45 is a
> fixed-point number.
> Each item that can be stored has a type, no item can be stored without
> one.  The types can be tested.  Most dynamic typed languages are like
> that.

> Compare this to an untyped language where types cannot generally be
> tested.

You mean there are no predicates in untyped languages?

>> They all have - the whole purpose of a type system is to ensure that any
>> expression of type T always evaluates to a value of type T.
>
> But it only gaurantees this because the variables themselves have a
> type, the values themselves do not.

Of course they do.

>  Most of the time the fact that the
> variable they are held in has a type infers that the value does too.
> But the value itself has no type, in a C program for example I can take
> the value from some variable and recast it in any way I feel and the
> language cannot correct any errors I make because their is no
> information in the variable to indicate what type it is.

Casting in C takes values of one type to values of another type.

>> So when you
>> look at type systems formally then you certainly have to assign types to
>> values, otherwise you couldn't prove any useful property about those
>> systems (esp. soundness).
>
> Yes, but indirectly.

No, this is usually done very directly and very explicitly.


David Squire <David.Squire@no.spam.from.here.au> writes:

> Matthias Blume wrote:
>> David Squire <David.Squire@no.spam.from.here.au> writes:
>>
>>> Andreas Rossberg wrote:
>>>> Rob Thorpe wrote:
>>>>>>> No, that isn't what I said.  What I said was:
>>>>>>> "A language is latently typed if a value has a property - called it's
>>>>>>> type - attached to it, and given it's type it can only represent values
>>>>>>> defined by a certain class."
>>>>>> "it [= a value] [...] can [...] represent values"?
>>>>> ???
>>>> I just quoted, in condensed form, what you said above: namely, that
>>>> a value represents values - which I find a strange and circular
>>>> definition.
>>>>
>>> But you left out the most significant part: "given it's type it can
>>> only represent values *defined by a certain class*" (my emphasis). In
>>> C-ish notation:
>>>
>>>      unsigned int x;
>>>
>>> means that x can only represent elements that are integers elements of
>>> the set (class) of values [0, MAX_INT]. Negative numbers and
>>> non-integer numbers are excluded, as are all sorts of other things.
>> This x is not a value.  It is a name of a memory location.
>>
>>> You over-condensed.
>> Andreas condensed correctly.
>
> I should have stayed out of this. I had not realised that it had
> degenerated to point-scoring off someone typing "value" when it is
> clear from context that he meant "variable".

If he really had meant "variable" then he would have been completely wrong.

> Bye.

Bye.


Chris F Clark wrote:
> A static
> type system eliminates some set of tags on values by syntactic
> analysis of annotations (types) written with or as part of the program
> and detects some of the disallowed compuatations (staticly) at compile
> time.

> Explicit annotations are not a necessary ingredient of a type system,
> nor is "eliminating tags" very relevant to its function.

While this is true, I disagree at some level with the second part.  By
eliminating tags, I mean allowing one to perform "type safe"
computations without requiring the values to be tagged.  One could
argue that the tags were never there.  However, many of the
interesting polymorphic computations require either that the values
be tagged or that some other process assures that at each point one
can determine a priori which variant of the computation applies.

To me a static type system is one which does this a priori
determination.  A dynamic type system does not do it a priori and
instead includes explicit information in the values being computed to
select the correct variant computations.

In that sense, a static type system is eliminating tags, because the
information is pre-computed and not explicitly stored as a part of the
computation.  Now, you may not view the tag as being there, but in my
mind if there exists a way of performing the computation that requires
tags, the tag was there and that tag has been eliminated.

To put it another way, I consider the tags to be axiomatic.  Most
computations involve some decision logic that is driven by distinct
values that have previously been computed.  The separation of the
values which drive the computation one way versus another is a tag.
That tag can potentially be eliminated by some a priori computation.

In what I do, it is very valuable to move information from being
explicitly represented in the computed result into the tag, so that I
often have distinct "types" (i.e. tags) for an empty list, a list with
one element, a list with two elements, and a longer list.  In that
sense, I agree with Chris Smith's assertion that "static typing" is
about asserting general properties of the algorithm/data.  These
assertions are important to the way I am manipulating the data.  They
are part of my type model, but they may not be part of anyone else's,
and to me to pass an empty list to a function requiring a list with
two elements is a "type error", because it is something I expect the
type system to detect as incorrect.  The fact that no one else may
have that expectation does not seem relevant to me.

Now, to carry this farther, since I expect the type system to validate
that certain values are of certain types and only be used in certain
contexts, I am happy when it does not require certain "tags" to be
actually present in the data.  However, because other bits of code are
polymorphic, I do expect certain values to require tags.  In the end,
this is still a win for me.  I had certain data elements that in the
abstract had to be represented explicitly.  I have encoded that
information into the type system and in some cases the type system is
not using any bits in the computed representation to hold that
information.  Whenever that happens, I win and that solves one of the
problems that I need solved.

Thus, a type system is a way for me to express certain axioms about my
algorithm.  A dynamic type system encodes those facts as part of the
computation.  A static type system pre-calculates certain "theorems"
from my axioms and uses those theorems to allow my algorithm to be
computed without all the facts being stored as part of the computation.

Hopefully, this makes my point of view clear.

-Chris

*****************************************************************************
Chris Clark                    Internet   :  compres@world.std.com
Compiler Resources, Inc.       Web Site   :  http://world.std.com/~compres
23 Bailey Rd                   voice      :  (508) 435-5016
Berlin, MA  01503  USA         fax        :  (978) 838-0263  (24 hours)
------------------------------------------------------------------------------


Ketil Malde wrote:
> "Rob Thorpe" <robert.thorpe@antenova.com> writes:
>
> > But it only gaurantees this because the variables themselves have a
> > type, the values themselves do not.
>
> I think statements like this are confusing, because there are
> different interpretations of what a "value" is.  I would say that the
> integer '4' is a value, and that it has type Integer (for instance).
> This value is different from 4 the Int16, or 4 the double-precision
> floating point number.  From this viewpoint, all values in statically
> typed languages have types, but I think you use 'value' to denote the
> representation of a datum in memory, which is a different thing.

Well I haven't been consistent so far :)

But I mean the value as the semantics of the program itself sees it.
Which mostly means the datum in memory.



Matthias Blume wrote:
> "Rob Thorpe" <robert.thorpe@antenova.com> writes:
> > Andreas Rossberg wrote:
> >> Rob Thorpe wrote:
> >> >>
> >> >>>No, that isn't what I said.  What I said was:
> >> >>>"A language is latently typed if a value has a property - called it's
> >> >>>type - attached to it, and given it's type it can only represent values
> >> >>>defined by a certain class."
> >> >>
> >> >>"it [= a value] [...] can [...] represent values"?
> >> >
> >> > ???
> >>
> >> I just quoted, in condensed form, what you said above: namely, that a
> >> value represents values - which I find a strange and circular definition.
> >
> > Yes, but the point is, as the other poster mentioned: values defined by
> > a class.
> > For example, in lisp:
> > "xyz" is a string, #(1 2 3) is an array, '(1 2 3) is a list, 45 is a
> > fixed-point number.
> > Each item that can be stored has a type, no item can be stored without
> > one.  The types can be tested.  Most dynamic typed languages are like
> > that.
>
> Your "types" are just predicates.

You can call them predicates if you like.  Most people programming in
python, perl, or lisp will call them types though.

> > Compare this to an untyped language where types cannot generally be
> > tested.
>
> You mean there are no predicates in untyped languages?

Well, no there aren't.  That's what defines a language as untyped.

Of course you can make them with your own code, in assembler for
example, but that's outside the language.

> >> They all have - the whole purpose of a type system is to ensure that any
> >> expression of type T always evaluates to a value of type T.
> >
> > But it only gaurantees this because the variables themselves have a
> > type, the values themselves do not.
>
> Of course they do.

I think we're discussing this at cross-purposes.  In a language like C
or another statically typed language there is no information passed
with values indicating their type.  Have a look in a C compiler if you
don't believe me.  You store 56 in a location in memory, and in most
compilers all that will be stored is 56, nothing more.  The compiler
relies entirely on the types of the variables to know how to correctly
operate on the values.
associated with them.

> >  Most of the time the fact that the
> > variable they are held in has a type infers that the value does too.
> > But the value itself has no type, in a C program for example I can take
> > the value from some variable and recast it in any way I feel and the
> > language cannot correct any errors I make because their is no
> > information in the variable to indicate what type it is.
>
> Casting in C takes values of one type to values of another type.

No it doesn't. Casting reinterprets a value of one type as a value of
another type.
There is a difference.  If I cast an unsigned integer 2000000000 to a
signed integer in C on the machine I'm using then the result I will get
will not make any sense.

> >> So when you
> >> look at type systems formally then you certainly have to assign types to
> >> values, otherwise you couldn't prove any useful property about those
> >> systems (esp. soundness).
> >
> > Yes, but indirectly.
>
> No, this is usually done very directly and very explicitly.

No it isn't.
If I say "4 is an integer" that is a direct statement about 4.
If I say "X is an integer and X is 4, therefore 4 is an integer" that
is a slightly less direct statement about 4.

The difference in directness is only small, and much more indirectness
is needed to prove anything useful about a system formally.



Rob Thorpe wrote:
> The compiler
> relys entirely on the types of the variables to know how to correctly
> operate on the values.  The values themselves have no type information
> associated with them.

int x = (int) (20.5 / 3);

What machine code operations does the "/" there invoke? Integer
division, or floating point division? How did the variables involved in
the expression affect that?

>>Casting in C takes values of one type to values of another type.

> No it doesn't. Casting reinterprets a value of one type as a value of
> another type.

No it doesn't.
int x = (int) 20.5;
There's no point at which bits from the floating point representation
appear in the variable x.

int * x = (int *) 0;
There's nothing that indicates all the bits of "x" are zero, and indeed
in some hardware configurations they aren't.

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.


Andreas Rossberg wrote:
> Rob Thorpe wrote:
> >Andreas Rossberg wrote:
> >>Rob Thorpe wrote:
> >>
> >>>>>"A language is latently typed if a value has a property - called it's
> >>>>>type - attached to it, and given it's type it can only represent values
> >>>>>defined by a certain class."
> >>>>
> >>>>"it [= a value] [...] can [...] represent values"?
> >>>
> >>>???
> >>
> >>I just quoted, in condensed form, what you said above: namely, that a
> >>value represents values - which I find a strange and circular definition.
> >
> > Yes, but the point is, as the other poster mentioned: values defined by
> > a class.
>
> I'm sorry, but I still think that the definition makes little sense.
> Obviously, a value simply *is* a value, it does not "represent" one, or
> several ones, regardless how you qualify that statement.

You've clipped a lot of context.  I'll put some back in, I said:-

> > Yes, but the point is, as the other poster mentioned: values defined by
> > a class.
> > For example, in lisp:
> > "xyz" is a string, #(1 2 3) is an array, '(1 2 3) is a list, 45 is a
> > fixed-point number.
> > Each item that can be stored has a type, no item can be stored without
> > one.  The types can be tested.  Most dynamic typed languages are like
> > that.
> > Compare this to an untyped language where types cannot generally be
> > tested.

I think this should make it clear.  If I have a "xyz" in lisp I know it
is a string.
If I have "xyz" in an untyped language like assembler it may be
anything, two pointers in binary, an integer, a bitfield.  There is no
data at compile time or runtime to tell what it is, the programmer has
to remember.

(I'd point out this isn't true of all assemblers, there are some typed
assemblers)

> >>>Well I haven't programmed in any statically typed language where values
> >>>have types themselves.
> >>
> >>They all have - the whole purpose of a type system is to ensure that any
> >>expression of type T always evaluates to a value of type T.
> >
> > But it only gaurantees this because the variables themselves have a
> > type,
>
> No, variables are insignificant in this context. You can consider a
> language without variables at all (such languages exist, and they can
> even be Turing-complete) and still have evaluation, values, and a
> non-trivial type system.

Hmm.  You're right, ML is nowhere in my definition since it has no
variables.

> > But the value itself has no type
>
> You mean that the type of the value is not represented at runtime? True,
> but that's simply because the type system is static. It's not the same
> as saying it has no type.

Well, is it even represented at compile time?
The compiler doesn't know in general what values will exist at runtime,
it knows only what types variables have.  Sometimes it only has partial
knowledge and sometimes the programmer deliberately overrides it.  From
what knowledge it has, you could say it knows what types values will have.

> > in a C program for example I can take
> > the value from some variable and recast it in any way I feel and the
> > language cannot correct any errors I make because their is no
> > information in the variable to indicate what type it is.
>
> Nothing in the C spec precludes an implementation from doing just that.

True, that would be an interesting implementation.

> The problem with C rather is that its semantics is totally
> underspecified. In any case, C is about the worst example to use when
> discussing type systems. For starters, it is totally unsound - which is

Yes.  Unfortunately it's often necessary to break static type systems.

Regarding C, the problem is: what should we discuss instead that would
be understood in all these newsgroups we're discussing this in?



Chris Smith wrote:
> Joe Marshall <eval.apply@gmail.com> wrote:
> >
> > Agreed.  That is why there is the qualifier 'dynamic'.  This indicates
> > that it is a completely different thing from static types.
>
> discussion.  I'm not sure we do agree, though, because I doubt we'd be
> right here in this conversation if we did.

I think we do agree.

The issue of 'static vs. dynamic types' comes up about twice a year in
comp.lang.lisp.  It generally gets pretty heated, but eventually people
come to understand what the other person is saying (or they get bored
and drop out of the conversation - I'm not sure which).  Someone always
points out that the phrase 'dynamic types' really has no meaning in the
world of static type analysis.  (Conversely, the notion of a 'static
type' that is available at runtime has no meaning in the dynamic
world.)  Much confusion usually follows.

You'll get much farther in your arguments by explaining what you mean
in detail rather than attempting to force a unification of terminology.
You'll also get farther by remembering that many of the people here
have not had much experience with real static type systems.  The static
typing of C++ or Java is so primitive that it is barely an example of
static typing at all, yet these are the most common examples of
statically typed languages people typically encounter.



Matthias Blume wrote:
> Pascal Costanza <pc@p-cos.net> writes:
>
>> - In a dynamically typed language, you can run programs successfully
>>   that are not acceptable by static type systems.
>
> This statement is false.

The example I have given is more important than this statement.

> For every program that can run successfully to completion there exists
> a static type system which accepts that program.  Moreover, there is
> at least one static type system that accepts all such programs.
>
> What you mean is that for static type systems that are restrictive
> enough to be useful in practice there always exist programs which
> (after type erasure in an untyped setting, i.e., by switching to a
> different language) would run to completion, but which are rejected by
> the static type system.

No, that's not what I mean.

>> Here is an example in Common Lisp:
>>
>> ; A class "person" with no superclasses and with the only field "name":
>> (defclass person ()
>>    (name))
>>
>> ; A test program:
>> (defun test ()
>>    (let ((p (make-instance 'person)))
>>      (eval (read))
>>      (slot-value p 'address)))
>>
>> (slot-value p 'address) is an attempt to access the field 'address in
>> the object p. In many languages, the notation for this is p.address.
>>
>> Although the class definition for person doesn't mention the field
>> address, the call to (eval (read)) allows the user to change the
>> definition of the class person and update its existing
>> instances. Therefore at runtime, the call to (slot-value p 'address)
>> has a chance to succeed.
>
> I am quite comfortable with the thought that this sort of evil would
> get rejected by a statically typed language. :-)

This sort of feature is clearly not meant for you. ;-P
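For readers in the crossposted groups, the example can be sketched in
Python; this is only a loose analogue (assigning a class attribute is a
crude stand-in for CLOS class redefinition, which really does update
all existing instances):

```python
# Loose Python analogue of the CLOS example: the class definition
# mentions no 'address' field, yet the running program can add one,
# so an access that "should" fail has a chance to succeed at runtime.

class Person:
    def __init__(self):
        self.name = None

p = Person()

Person.address = None            # crude stand-in for redefining the class;
p.address = "1 Example Street"   # existing instances now accept the slot

print(p.address)                 # succeeds at runtime
```

The point survives the translation: no static analysis of the original
class definition alone can predict whether this access will succeed.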

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/


Pascal Costanza wrote:
> Matthias Blume wrote:
> > Pascal Costanza <pc@p-cos.net> writes:
> >> (slot-value p 'address) is an attempt to access the field 'address in
> >> the object p. In many languages, the notation for this is p.address.
> >>
> >> Although the class definition for person doesn't mention the field
> >> address, the call to (eval (read)) allows the user to change the
> >> definition of the class person and update its existing
> >> instances. Therefore at runtime, the call to (slot-value p 'address)
> >> has a chance to succeed.
> >
> > I am quite comfortable with the thought that this sort of evil would
> > get rejected by a statically typed language. :-)
>
> This sort of feature is clearly not meant for you. ;-P

To be fair, though, that kind of thing would only really be used while
debugging a program.
It's no different from adding a new member to a class while in the
debugger.

There are other places where you might add a slot to an object at
runtime, but they would be done in tidier ways.



Chris Smith wrote:

> > Easy, any statically typed language is not latently typed.
>
> I'm actually not sure I agree with this at all.  I believe that
> reference values in Java may be said to be latently typed. Practically
> all class-based OO
> languages are subject to similar consideration, as it turns out.

Quite probably true of GC-ed statically typed languages in general, at least up
to a point (and provided you are not using something like a tagless ML
implementation).  I think Rob is assuming a rather too specific implementation
of statically typed languages.

> I'm unsure whether to consider explicitly stored array lengths, which
> are present in most statically typed languages, to be part of a "type"
> in this sense or not.

If I understand your position correctly, wouldn't you be pretty much forced to
reject the idea of the length of a Java array being part of its type ?  If you
want to keep the word "type" bound to the idea of static analysis, then --
since Java doesn't perform any size-related static analysis -- the size of a
Java array cannot be part of its type.

That's assuming that you would want to keep the "type" connected to the actual
type analysis performed by the language in question.  Perhaps you would prefer
to loosen that and consider a different (hypothetical) language (perhaps
producing identical bytecode) which does do compile time size analysis.

But then you get into an area where you cannot talk of the type of a value (or
variable) without relating it to the specific type system under discussion.
Personally, I would be quite happy to go there -- I dislike the idea that a
value has a specific inherent type.

It would be interesting to see what a language designed specifically to support
user-defined, pluggable, and perhaps composable, type systems would look like.
Presumably the syntax and "base" semantics would be very simple, clean, and
unrestricted (like Lisp, Smalltalk, or Forth -- not that I'm convinced that any
of those would be ideal for this), with a defined result for any possible
sequence of operations.  The type-system(s) used for a particular run of the
interpreter (or compiler) would effectively reduce the space of possible
sequences.   For instance, one could have a type system which /only/ forbade
dereferencing null, or another with the job of ensuring that mutability
restrictions were respected, or a third which implemented access control...

But then, I don't see a semantically critical distinction between such space
reduction being done at compile time vs. runtime.  Doing it at compile time
could be seen as an optimisation of sorts (with downsides to do with early
binding etc).  That's particularly clear if the static analysis is /almost/
able to prove that <some sequence> is legal (by its own rules) but has to make
certain assumptions in order to construct the proof.  In such a case the
compiler might insert a few runtime checks to ensure that its assumptions were
valid, but do most of its checking statically.

There would /be/ a distinction between static and dynamic checks in such a
system, and it would be an important distinction, but not nearly as important
as the distinctions between the different type systems.  Indeed I can imagine
categorising type systems by /whether/ (or to what extent) a tractable static
implementation exists.

-- chris

P.S  Apologies Chris, btw, for dropping out of a conversation we were having on
this subject a little while ago -- I've now said everything that I /would/ have
said in reply to your last post if I'd got around to it in time...



Joe Marshall wrote:
{...}
> The issue of 'static vs. dynamic types' comes up about twice a year in
> comp.lang.lisp.  It generally gets pretty heated, but eventually people
> come to understand what the other person is saying (or they get bored
> and drop out of the conversation - I'm not sure which).  {...}

I think that the thing about "Language Expressiveness" is just so
elusive, as it is based on each programmer's way of thinking about
things, and also the general types of problems that that programmer is
dealing with on a daily basis.

There are folks out there like the Paul Grahams of the world, that do
wonderfully complex things in Lisp, eschew totally the facilities of
CLOS, the Lisp object system, and get the job done ... just because
they can hack and have a mind picture of what all the type matches are
"in their head".  I used to use Forth, where everything goes on a
stack, and it is up to the programmer to remember what the heck the
type of a thing was that was stored there ... maybe an address of a
string, another word {read: function}, or an integer ... NO TYPING AT
ALL, but can you be expressive in Forth?  You can if you are a good
Forth programmer.

NOW that being said, I think that the reason I like Haskell, a very
strongly typed language, is that because of its type system, the
language is able to do things like lazy evaluation, and even though it
is strongly typed, and has wonderful things like type classes, a
person can write wildly expressive code, and NEVER write a thing like:

fromtoby :: forall b a.
            (Num a, Enum a) =>
            a -> a -> a -> (a -> b) -> [b]

The above was generated by the Haskell compiler for me ... and it does
that all the time, without any fuss.  I just wrote the function and it
did the rest for me.  By the way, the function was a replacement for
the {for / to / by} construct of a language like C, and it was done in
one line.  THAT TO ME IS EXPRESSIVE: when I can build whole new
features into my language in just a few lines ... usually only one
line.

I think that is why guys who are good to great in dynamic or, if it
floats your boat, type-free languages like Lisp and Scheme find their
languages so expressive: they have found the macros or whatever other
facility to give them the power ... to extend their language to meet a
problem, or fit maybe closer to an entire class of problems ... giving
them the tool box.  Haskellers, folks using Ruby, Python, ML, OCaml,
Unicon ... even C or whatever ... by building either modules or
libraries, and understanding that whole attitude of tool building, can
be equally as expressive "for their own way of doing things".  Heck, I
don't use one size of hammer out in my workshop, and I sure as hell
don't on my box.

I find myself picking up Icon and its derivative Unicon to do a lot of
data-washing chores, as it allows functions as first-class objects ...
any type can be stored in a list ... tables ... like a hash ... with
any type as the data and the key ... you can do things like

i := 0

which will append a sequence of line numbers to the lines read in from
stdin ... pretty damn concise and expressive in my book.  Now for
other problems ... Haskell with its single-type lists ... which of
course can have tuples, which themselves have tuples in them with a
list as one element of that tuple ... etc. ... and you can build
accessors for all of that, for function application brought to bear on
one element of one tuple, allowing mappings of that function to all of
that particular element of ... well, you get the idea.

It is all about how you see things and how your particular mindset
is.  I would agree with someone that early on in the discussion said
the thing about "types warping your mind", and depending on the
individual it is either a good warp or a bad warp, but that is why
there is Ruby and Python, and Haskell and even Forth ... for a given
problem and a given programmer, any one of those, or even Cobol or
Fortran, might be the ideal tool ... if nothing else, based on that
person's familiarity or existing tool and code base.

enough out of me...
-- gene
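As an aside for the crossposted groups, the fromtoby function gene
describes might look like this in Python; the exact semantics
(inclusive upper bound, positive step) are my guess, since the post
only shows the inferred Haskell type:

```python
def fromtoby(start, stop, step, f):
    """A {for / to / by} loop packaged as an expression: apply f to
    start, start+step, ... up to and including stop."""
    result = []
    x = start
    while x <= stop:
        result.append(f(x))
        x += step
    return result

print(fromtoby(1, 10, 3, lambda x: x * x))   # [1, 16, 49, 100]
```

In the Haskell version, the polymorphic signature quoted above is
inferred by the compiler with no annotations from the programmer.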



Chris Uppal wrote:
> Personally, I would be quite happy to go there -- I dislike the idea that a
> value has a specific inherent type.

Interestingly, Ada defines a type as a collection of values. It works
quite well, when one consistantly applies the definition. For example,
it makes very clear the difference between a value of (type T) and a
value of (type T or one of its subtypes).

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.


"Rob Thorpe" <robert.thorpe@antenova.com> writes:

> I think we're discussing this at cross-purposes.  In a language like C
> or another statically typed language there is no information passed
> with values indicating their type.

You seem to be confusing "does not have a type" with "no type
information is passed at runtime".

> Have a look in a C compiler if you don't believe me.

Believe me, I have.

> No it doesn't. Casting reinterprets a value of one type as a value of
> another type.
> There is a difference.  If I cast an unsigned integer 2000000000 to a
> signed integer in C on the machine I'm using then the result I will get
> will not make any sense.

Which result are you getting?  What does it mean to "make sense"?
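Matthias's question is pointed: 2000000000 is less than 2^31 - 1, so on
a 32-bit two's-complement machine the reinterpretation leaves the value
unchanged.  A sketch of the bit-pattern reinterpretation using Python's
struct module (assuming the usual 32-bit two's-complement layout; C
itself leaves the out-of-range case implementation-defined):

```python
import struct

def reinterpret_u32_as_i32(u):
    """Reinterpret the bit pattern of a 32-bit unsigned value as a
    signed one, the way a C cast typically behaves on two's-complement
    hardware."""
    return struct.unpack('<i', struct.pack('<I', u))[0]

print(reinterpret_u32_as_i32(2000000000))   # 2000000000 -- still in signed range
print(reinterpret_u32_as_i32(3000000000))   # -1294967296 -- wraps around
```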


Rob Thorpe wrote:
> Pascal Costanza wrote:
>> Matthias Blume wrote:
>>> Pascal Costanza <pc@p-cos.net> writes:
>>>> (slot-value p 'address) is an attempt to access the field 'address in
>>>> the object p. In many languages, the notation for this is p.address.
>>>>
>>>> Although the class definition for person doesn't mention the field
>>>> address, the call to (eval (read)) allows the user to change the
>>>> definition of the class person and update its existing
>>>> instances. Therefore at runtime, the call to (slot-value p 'address)
>>>> has a chance to succeed.
>>> I am quite comfortable with the thought that this sort of evil would
>>> get rejected by a statically typed language. :-)
>> This sort of feature is clearly not meant for you. ;-P
>
> To be fair though that kind of thing would only really be used while
> debugging a program.
> Its no different than adding a new member to a class while in the
> debugger.
>
> There are other places where you might add a slot to an object at
> runtime, but they would be done in tidier ways.

Yes, but the question remains how a static type system can deal with this.

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/


Chris Uppal <chris.uppal@metagnostic.REMOVE-THIS.org> wrote:
> > I'm unsure whether to consider explicitly stored array lengths, which
> > are present in most statically typed languages, to be part of a "type"
> > in this sense or not.
>
> If I understand your position correctly, wouldn't you be pretty much forced to
> reject the idea of the length of a Java array being part of its type ?

I've since abandoned any attempt to be picky about use of the word
"type".  That was a mistake on my part.  I still think it's legitimate
to object to statements of the form "statically typed languages X, but
dynamically typed languages Y", in which it is implied that Y is
distinct from X.  When I used the word "type" above, I was adopting the
working definition of a type from the dynamic sense.  That is, I'm
considering whether statically typed languages may be considered to also
have dynamic types, and it's pretty clear to me that they do.

> If you
> want to keep the word "type" bound to the idea of static analysis, then --
> since Java doesn't perform any size-related static analysis -- the size of a
> Java array cannot be part of its type.

Yes, I agree.  My terminology has been shifting constantly throughout
this thread, which probably confuses people, but my hope is that this
is still superior to stubbornly insisting that I'm right. :)

> That's assuming that you would want to keep the "type" connected to the actual
> type analysis performed by the language in question.  Perhaps you would prefer
> to loosen that and consider a different (hypothetical) language (perhaps
> producing identical bytecode) which does do compile time size analysis.

In the static sense, I think it's absolutely critical that "type" is
defined in terms of the analysis done by the type system.  Otherwise,
you miss the definition entirely.  In the dynamic sense, I'm unsure; I
don't have any kind of deep understanding of what's meant by "type" in
this sense.  Certainly it could be said that there are somewhat common
cross-language definitions of "type" that get used.

> But then you get into an area where you cannot talk of the type of a value (or
> variable) without relating it to the specific type system under discussion.

Which is entirely what I'd expect in a static type system.

> Personally, I would be quite happy to go there -- I dislike the idea that a
> value has a specific inherent type.

Good! :)

> It would be interesting to see what a language designed specifically to support
> user-defined, pluggable, and perhaps composable, type systems would look like.
> Presumably the syntax and "base" semantics would be very simple, clean, and
> unrestricted (like Lisp, Smalltalk, or Forth -- not that I'm convinced that any
> of those would be ideal for this), with a defined result for any possible
> sequence of operations.  The type-system(s) used for a particular run of the
> interpreter (or compiler) would effectively reduce the space of possible
> sequences.   For instance, one could have a type system which /only/ forbade
> dereferencing null, or another with the job of ensuring that mutability
> restrictions were respected, or a third which implemented access control...

You mean in terms of a practical programming language?  If not, then
lambda calculus is used in precisely this way for the static sense of
types.

> But then, I don't see a semantically critical distinction between such space
> reduction being done at compile time vs. runtime.  Doing it at compile time
> could be seen as an optimisation of sorts (with downsides to do with early
> binding etc).  That's particularly clear if the static analysis is /almost/
> able to prove that <some sequence> is legal (by its own rules) but has to make
> certain assumptions in order to construct the proof.  In such a case the
> compiler might insert a few runtime checks to ensure that its assumptions were
> valid, but do most of its checking statically.

I think Marshall got this one right.  The two are accomplishing
different things.  In one case (the dynamic case) I am safeguarding
against negative consequences of the program behaving in certain non-
sensical ways.  In the other (the static case) I am proving theorems
about the impossibility of this non-sensical behavior ever happening.
You mention static typing above as an optimization to dynamic typing;
that is certainly one possible application of these theorems.

In some sense, though, it is interesting in its own right to know that
these theorems have been proven.  Of course, there are important doubts
to be had whether these are the theorems we wanted to prove in the first
place, and whether the effort of proving them was worth the additional
confidence I have in my software systems.

I acknowledge those questions.  I believe they are valid.  I don't know
the answers.  As an intuitive judgement call, I tend to think that
knowing the correctness of these things is of considerable benefit to
software development, because it means that I don't have as much to
think about at any one point in time.  I can validly make more
assumptions about my code and KNOW that they are correct.  I don't have
to trace as many things back to their original source in a different
module of code, or hunt down as much documentation.  I also, as a
practical matter, get development tools that are more powerful.
(Whether it's possible to create the same for a dynamically typed
language is a potentially interesting discussion; but as a practical
matter, no matter what's possible, I still have better development tools
for Java than for JavaScript when I do my job.)

In the end, though, I'm just not very interested in them at the moment.
For me, as a practical matter, choices of programming language are
generally dictated by more practical considerations.  I do basically all
my professional "work" development in Java, because we have a gigantic
existing software system written in Java and no time to rewrite it.  On
the other hand, I do like proving theorems, which means I am interested
in type theory; if that type theory relates to programming, then that's
great!  That's probably not the thing to say to ensure that my thoughts
are relevant to the software development "industry", but it's
nevertheless the truth.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation


Torben Ægidius Mogensen wrote:
> That's not really the difference between static and dynamic typing.
> Static typing means that there exist a typing at compile-time that
> guarantess against run-time type violations.  Dynamic typing means
> that such violations are detected at run-time.

Agreed.

> This is orthogonal to
> strong versus weak typing, which is about whether such violations are
> detected at all.  The archetypal weakly typed language is machine code
> -- you can happily load a floating point value from memory, add it to

I'd rather call machine code "untyped".
("Strong typing" and "weak typing" don't have a universally accepted
definition anyway, and I'm not sure that the terminology is helpful.)

> Anyway, type inference for statically typed languages doesn't make them
> any more dynamically typed.  It just moves the burden of assigning the
> types from the programmer to the compiler.  And (for HM type systems)
> the compiler doesn't "guess" at a type -- it finds the unique most
> general type from which all other legal types (within the type system)
> can be found by instantiation.

Hmm... I think this distinction doesn't cover all cases.

Assume a language that
a) defines that a program is "type-correct" iff HM inference establishes
that there are no type errors
b) compiles a type-incorrect program anyway, with an established,
rigorous semantics for such programs (e.g. by throwing exceptions as
appropriate).
The compiler might actually refuse to compile type-incorrect programs,
depending on compiler flags and/or declarations in the code.

Typed ("strongly typed") it is, but is it statically typed or
dynamically typed?
("Softly typed" doesn't capture it well enough - if it's declarations in
the code, then those parts of the code are statically typed.)

> You miss some of the other benefits of static typing,
> though, such as a richer type system -- soft typing often lacks
> features like polymorphism (it will find a set of monomorphic
> instances rather than the most general type) and type classes.

That's not a property of soft typing per se; it's a consequence of
tacking type inference onto a dynamically typed language that wasn't
designed for allowing strong type guarantees.

Regards,
Jo


Matthias Blume wrote:
> Perhaps better: A language is statically typed if its definition
> includes (or even better: is based on) a static type system, i.e., a
> static semantics with typing judgments derivable by typing rules.
> Usually typing judgments associate program phrases ("expressions") with
> types given a typing environment.

This is defining a single term ("statically typed") using three
undefined terms ("typing judgements", "typing rules", "typing environment").

Regards,
Jo


Chris F Clark wrote:
> In that sense, a static type system is eliminating tags, because the
> information is pre-computed and not explicitly stored as a part of the
> computation.  Now, you may not view the tag as being there, but in my
> mind if there exists a way of performing the computation that requires
> tags, the tag was there and that tag has been eliminated.

On a semantic level, the tag is always there - it's the type (and
definitely part of an axiomatic definition of the language).
Tag elimination is "just" an optimization.

> To put it another way, I consider the tags to be axiomatic.  Most
> computations involve some decision logic that is driven by distinct
> values that have previously been computed.  The separation of the
> values which drive the computation one way versus another is a tag.
> That tag can potentially be eliminated by some apriori computation.

Um... just as precomputing constants, I'd say.
Are the constants that went into a precomputed constant eliminated?
On the implementation level, yes. On the semantic/axiomatic level, no.
Or, well, maybe - since that's just an optimization, the compiler may
have decided not to precompute the constant at all.

(Agreeing with the snipped parts.)

Regards,
Jo


Joachim Durchholz <jo@durchholz.org> writes:

> Matthias Blume wrote:
>> Perhaps better: A language is statically typed if its definition
>> includes (or even better: is based on) a static type system, i.e., a
>> static semantics with typing judgments derivable by typing rules.
>> Usually typing judgments associate program phrases ("expressions") with
>> types given a typing environment.
>
> This is defining a single term ("statically typed") using three
> undefined terms ("typing judgements", "typing rules", "typing
> environment").

This was not meant to be a rigorous definition.  Also, I'm not going
to repeat the textbook definitions for those three standard terms
here.  Next thing you are going to ask me to define the meaning of the
word "is"...


Chris Smith wrote:
> Chris Uppal <chris.uppal@metagnostic.REMOVE-THIS.org> wrote:
>
>>>I'm unsure whether to consider explicitly stored array lengths, which
>>>are present in most statically typed languages, to be part of a "type"
>>>in this sense or not.
>>
>>If I understand your position correctly, wouldn't you be pretty much forced to
>>reject the idea of the length of a Java array being part of its type ?
>
> I've since abandoned any attempt to be picky about use of the word "type".

I think you should stick to your guns on that point. When people talk about
"types" being associated with values in a "latently typed" or "dynamically typed"
language, they really mean *tag*, not type.

It is remarkable how much of the fuzzy thinking that often occurs in the
discussion of type systems can be dispelled by insistence on this point (although
much of the benefit can be obtained just by using this terminology in your own
mind and translating what other people are saying to it). It's a good example of
the weak Sapir-Whorf hypothesis, I think.

--
David Hopwood <david.nospam.hopwood@blueyonder.co.uk>


Pascal Costanza wrote:
> Chris Smith wrote:
>
>> Knowing that it'll cause a lot of strenuous objection, I'll
>> nevertheless interject my plea not to abuse the word "type" with a
>> phrase like "dynamically typed".  If anyone considers "untyped" to be
>> perjorative, as some people apparently do, then I'll note that another
>> common term is "type-free," which is marketing-approved but doesn't
>> carry the misleading connotations of "dynamically typed."  We are
>> quickly losing any rational meaning whatsoever to the word "type," and
>> that's quite a shame.
>
> The words "untyped" or "type-free" only make sense in a purely
> statically typed setting. In a dynamically typed setting, they are
> meaningless, in the sense that there are _of course_ types that the
> runtime system respects.
>
> Types can be represented at runtime via type tags. You could insist on
> using the term "dynamically tagged languages", but this wouldn't change
> a lot. Exactly _because_ it doesn't make sense in a statically typed
> setting, the term "dynamically typed language" is good enough to
> communicate what we are talking about - i.e. not (static) typing.

Oh, but it *does* make sense to talk about dynamic tagging in a statically
typed language.

That's part of what makes the term "dynamically typed" harmful: it implies
a dichotomy between "dynamically typed" and "statically typed" languages,
when in fact dynamic tagging and static typing are (mostly) independent
features.

--
David Hopwood <david.nospam.hopwood@blueyonder.co.uk>


Quoth David Hopwood <david.nospam.hopwood@blueyonder.co.uk>:
> Pascal Costanza wrote:
> > Chris Smith wrote:
> >
> > Types can be represented at runtime via type tags. You could insist on
> > using the term "dynamically tagged languages", but this wouldn't change
> > a lot. Exactly _because_ it doesn't make sense in a statically typed
> > setting, the term "dynamically typed language" is good enough to
> > communicate what we are talking about - i.e. not (static) typing.
>
> Oh, but it *does* make sense to talk about dynamic tagging in a statically
> typed language.

Though I'm *seriously* reluctant to encourage this thread...

A prime example of this is Perl, which has both static and dynamic
typing. Variables are statically typed scalar/array/hash, and then
scalars are dynamically typed string/int/unsigned/float/ref.

> That's part of what makes the term "dynamically typed" harmful: it implies
> a dichotomy between "dynamically typed" and "statically typed" languages,
> when in fact dynamic tagging and static typing are (mostly) independent
> features.

Nevertheless, I see no problem in calling both of these 'typing'. They
are both means to the same end: causing a bunch of bits to be
interpreted in a meaningful fashion. The only difference is whether the
distinction is made at compile- or run-time. The above para had no
ambiguities...

Ben

--
Every twenty-four hours about 34k children die from the effects of poverty.
Meanwhile, the latest estimate is that 2800 people died on 9/11, so it's like
that image, that ghastly, grey-billowing, double-barrelled fall, repeated
twelve times every day. Full of children. [Iain Banks]  benmorrow@tiscali.co.uk


Pascal Costanza wrote:
> Rob Thorpe wrote:
>> Pascal Costanza wrote:
>>> Matthias Blume wrote:
>>>> Pascal Costanza <pc@p-cos.net> writes:
>>>>
>>>>> (slot-value p 'address) is an attempt to access the field 'address in
>>>>> the object p. In many languages, the notation for this is p.address.
>>>>>
>>>>> Although the class definition for person doesn't mention the field
>>>>> address, the call to (eval (read)) allows the user to change the
>>>>> definition of the class person and update its existing
>>>>> instances. Therefore at runtime, the call to (slot-value p 'address)
>>>>> has a chance to succeed.
>>>>
>>>> I am quite comfortable with the thought that this sort of evil would
>>>> get rejected by a statically typed language. :-)
>>>
>>> This sort of feature is clearly not meant for you. ;-P
>>
>> To be fair though that kind of thing would only really be used while
>> debugging a program.
>> Its no different than adding a new member to a class while in the
>> debugger.
>>
>> There are other places where you might add a slot to an object at
>> runtime, but they would be done in tidier ways.
>
> Yes, but the question remains how a static type system can deal with this.

It's not difficult in principle:

- for each class [*], define a function which converts an 'old' value of
that class to a 'new' value (the ability to do this is necessary anyway
to support some kinds of upgrade). A default conversion function may be
autogenerated if the class definition has changed only in minor ways.

- typecheck the new program and the conversion functions, using the old
type definitions for the argument of each conversion function, and the
new type definitions for its result.

- have the debugger apply the conversions to all values, and then resume
the program.

[*] or nearest equivalent in a non-OO language.
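A minimal Python sketch of this scheme (the class names and the None
default for the new slot are illustrative only):

```python
# Sketch of the upgrade scheme described above: a per-class conversion
# function maps 'old' values to 'new' ones; the debugger would apply it
# to all live instances before resuming the program.

class PersonV1:                      # the 'old' class: only a name
    def __init__(self, name):
        self.name = name

class PersonV2:                      # the 'new' class: name + address
    def __init__(self, name, address):
        self.name = name
        self.address = address

def convert_person(old):
    """Typechecked against the old definitions for its argument and the
    new definitions for its result; an autogenerated default fills the
    newly added slot."""
    return PersonV2(old.name, address=None)

live = [PersonV1("Alice"), PersonV1("Bob")]
live = [convert_person(p) for p in live]     # the 'debugger' step
print([(p.name, p.address) for p in live])   # [('Alice', None), ('Bob', None)]
```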

--
David Hopwood <david.nospam.hopwood@blueyonder.co.uk>


genea wrote:
> [...] NOW that being said, I think
> that the reason I like Haskell, a very strongly typed language, is that
> because of it's type system, the language is able to do things like
> lazy evaluation, [...]

Lazy evaluation does not depend on, nor is it particularly helped by
static typing (assuming that's what you mean by "strongly typed" here).

An example of a non-statically-typed language that supports lazy evaluation
is Oz. (Lazy functions are explicitly declared in Oz, as opposed to Haskell's
implicit lazy evaluation, but that's not because of the difference in type
systems.)
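Python generators make the same point: demand-driven evaluation in a
dynamically typed language, explicitly lazy in the style of Oz's
declared-lazy functions rather than Haskell's pervasive laziness:

```python
# Demand-driven evaluation in a dynamically typed language: the
# generator below describes an infinite sequence, but each value is
# only computed when something asks for it.

def naturals():
    n = 0
    while True:          # conceptually infinite...
        yield n
        n += 1

def take(k, iterator):
    # ...but only the first k elements are ever evaluated.
    return [next(iterator) for _ in range(k)]

print(take(5, naturals()))   # [0, 1, 2, 3, 4]
```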

--
David Hopwood <david.nospam.hopwood@blueyonder.co.uk>


Marshall wrote:
> Joe Marshall wrote:
>
>>They *do* have a related meaning.  Consider this code fragment:
>>(car "a string")
>>[...]
>>Both 'static typing' and 'dynamic typing' (in the colloquial sense) are
>>strategies to detect this sort of error.
>
>
> The thing is though, that putting it that way makes it seems as
> if the two approaches are doing the same exact thing, but
> just at different times: runtime vs. compile time. But they're
> not the same thing. Passing the static check at compile
> time is universally quantifying the absence of the class
> of error; passing the dynamic check at runtime is existentially
> quantifying the absence of the error. A further difference is
> the fact that in the dynamically typed language, the error is
> found during the evaluation of the expression; in a statically
> typed language, errors are found without attempting to evaluate
> the expression.
>
> I find everything about the differences between static and
> dynamic to be frustratingly complex and subtle.

Let me add another complex subtlety, then: the above description misses
an important point, which is that *automated* type checking is not the
whole story.  I.e. that compile time/runtime distinction is a kind of
red herring.

In fact, automated type checking is orthogonal to the question of the
existence of types.  It's perfectly possible to write fully typed
programs in a (good) dynamically-checked language.

In a statically-checked language, people tend to confuse automated
static checking with the existence of types, because they're thinking in
a strictly formal sense: they're restricting their world view to what
they see "within" the language.

Then they look at programs in a dynamically-checked language, and see
checks happening at runtime, and they assume that this means that the
program is "untyped".

It's certainly close enough to say that the *language* is untyped.  One
could also say that a program, as seen by the language, is untyped.

But a program as seen by the programmer has types: the programmer
performs (static) type inference when reasoning about the program, and
debugs those inferences when debugging the program, finally ending up
with a program which has a perfectly good type scheme.  It may be
messy compared to say an HM type scheme, and it's usually not proved to
be perfect, but that again is an orthogonal issue.

Mathematicians operated for thousands of years without automated
checking of proofs, so you can't argue that because a
dynamically-checked program hasn't had its type scheme proved correct,
that it somehow doesn't have types.  That would be a bit like arguing
that we didn't have Math until automated theorem provers came along.

These observations affect the battle over terminology in various ways.
I'll enumerate a few.

1. "Untyped" is really quite a misleading term, unless you're talking
about something like the untyped lambda calculus.  That, I will agree,
can reasonably be called untyped.

2.  "Type-free" as suggested by Chris Smith is equally misleading.  It's
only correct in a relative sense, in a narrow formal domain which
ignores the process of reasoning about types which is inevitably
performed by human programmers, in any language.

3.  A really natural term to refer to types which programmers reason
about, even if they are not statically checked, is "latent types".  It
captures the situation very well intuitively, and it has plenty of
precedent -- e.g. it's mentioned in the Scheme reports, R5RS and its
predecessors, going back at least a decade or so (haven't dug to check
when it first appeared).

4.  Type theorists like to say that "universal" types can be used in a
statically-typed language to subsume "dynamic types".  Those theorists
are right, but the term "dynamic type", with its inextricable association
with runtime checks, definitely gets in the way here.  It might be
enlightening to rephrase this: what's really happening is that universal
types allow you to embed a latently-typed program in a
statically-checked language.  The latent types don't go anywhere,
they're still latent in the program with universal types.  The program's
statically-checked type scheme doesn't capture the latent types.
Describing it in these terms clarifies what's actually happening.

5.  Dynamic checks are only part of the mechanism used to verify latent
types.  They shouldn't be focused on as being the primary equivalent to
static checks.  The closest equivalent to the static checks is a
combination of human reasoning and testing, in which dynamic checks play
an important but ultimately not a fundamental part.  You could debug a
program and get the type scheme correct without dynamic checks, it would
just be more difficult.

So, will y'all just switch from using "dynamically typed" to "latently
typed", and stop talking about any real programs in real programming
languages as being "untyped" or "type-free", unless you really are
talking about situations in which human reasoning doesn't come into
the whole issue at all.

Anton


David Hopwood wrote:
> Pascal Costanza wrote:
>> Rob Thorpe wrote:
>>> Pascal Costanza wrote:
>>>> Matthias Blume wrote:
>>>>> Pascal Costanza <pc@p-cos.net> writes:
>>>>>
>>>>>> (slot-value p 'address) is an attempt to access the field 'address in
>>>>>> the object p. In many languages, the notation for this is p.address.
>>>>>>
>>>>>> Although the class definition for person doesn't mention the field
>>>>>> address, the call to (eval (read)) allows the user to change the
>>>>>> definition of the class person and update its existing
>>>>>> instances. Therefore at runtime, the call to (slot-value p 'address)
>>>>>> has a chance to succeed.
>>>>> I am quite comfortable with the thought that this sort of evil would
>>>>> get rejected by a statically typed language. :-)
>>>> This sort of feature is clearly not meant for you. ;-P
>>> To be fair though that kind of thing would only really be used while
>>> debugging a program.
>>> It's no different than adding a new member to a class while in the
>>> debugger.
>>>
>>> There are other places where you might add a slot to an object at
>>> runtime, but they would be done in tidier ways.
>> Yes, but the question remains how a static type system can deal with
>
> It's not difficult in principle:
>
>  - for each class [*], define a function which converts an 'old' value of
>    that class to a 'new' value (the ability to do this is necessary anyway
>    to support some kinds of upgrade). A default conversion function may be
>    autogenerated if the class definition has changed only in minor ways.

Yep, this is more or less exactly how CLOS does it. (The conversion
function is called update-instance-for-redefined-class, and you can
provide your own methods on it.)

>  - typecheck the new program and the conversion functions, using the old
>    type definitions for the argument of each conversion function, and the
>    new type definitions for its result.

The problem here is: The program is already executing, so this typecheck
isn't performed at compile-time, in the strict sense of the word (i.e.,
before the program is deployed). It may still be a syntactic analysis,
but you don't get the kind of guarantees anymore that you typically
expect from a static type checker _before_ the program is started in the
first place.

(It's really important to understand that the idea is to use this for
deployed programs - albeit hopefully in a more structured fashion - and
not only for debugging. The example I have given is an extreme one that
you would probably not use as such in a "real-world" setting, but it
shows that there is a boundary beyond which static type systems cannot
be used in a meaningful way anymore, at least as far as I can tell.)
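
For comparison, here is the kind of runtime update under discussion, in a language where it is routine. This is my own minimal Python sketch (not the CLOS example from the thread): an instance created before the class gains a slot nevertheless answers for that slot afterwards.

```python
class Person:
    def __init__(self, name):
        self.name = name

p = Person("alice")         # instance exists before the "update"

# The update a user might type at an interactive prompt: give the class a
# new field with a default value.
Person.address = "unknown"

# The pre-existing instance now responds -- the analogue of
# (slot-value p 'address) succeeding after redefinition.
print(p.address)            # prints unknown
```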

>  - have the debugger apply the conversions to all values, and then resume
>    the program.

In CLOS, this conversion is defined as part of the language proper, but
this is mostly because Common Lisp doesn't make a sharp distinction
between debugging capabilities and "regular" language features. (I think
it's a good thing that there is no strong barrier against having
debugging capabilities in a deployed program.)

> [*] or nearest equivalent in a non-OO language.

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/


David Hopwood wrote:
>
> Oh, but it *does* make sense to talk about dynamic tagging in a statically
> typed language.

It even makes perfect sense to talk about dynamic typing in a statically
typed language - but keeping the terminology straight, this rather
refers to something like described in the well-known paper of the same
title (and its numerous follow-ups):

Martin Abadi, Luca Cardelli, Benjamin Pierce, Gordon Plotkin
Dynamic typing in a statically-typed language.
Proc. 16th Symposium on Principles of Programming Languages, 1989
/ TOPLAS 13(2), 1991

Note how this is totally different from simple tagging, because it deals
with real types at runtime.
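
A rough sketch of the idea (my own reconstruction, not code from the paper): a Dynamic value pairs a value with a representation of its type, and is consumed by an explicit typecase. Here Python strings stand in for the real, structured type expressions the paper uses.

```python
from dataclasses import dataclass

@dataclass
class Dynamic:
    type_rep: str      # in the paper this is a genuine type expression
    value: object

def typecase(d, cases, default):
    # Dispatch on the carried type representation; fall back if no case
    # matches, mirroring the paper's typecase-with-else construct.
    handler = cases.get(d.type_rep)
    return handler(d.value) if handler else default()

d = Dynamic("int", 3)
result = typecase(d, {"int": lambda n: n + 1,
                      "str": lambda s: len(s)},
                  lambda: None)
print(result)  # prints 4
```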

- Andreas


"Rob Thorpe" <robert.thorpe@antenova.com> writes:

> Andreas Rossberg wrote:
>
> > No, variables are insignificant in this context. You can consider a
> > language without variables at all (such languages exist, and they can
> > even be Turing-complete) and still have evaluation, values, and a
> > non-trivial type system.
>
> Hmm.  You're right, ML is nowhere in my definition since it has no
> variables.

That's not true.  ML has variables in the mathematical sense of
variables -- symbols that can be associated with different values at
different times.  What it doesn't have is mutable variables (though it
can get the effect of those by having variables be immutable
references to mutable memory locations).

What Andreas was alluding to was presumably FP-style languages where
functions or relations are built by composing functions or relations
without ever naming values.

Torben


Darren New wrote:

[me:]
> > Personally, I would be quite happy to go there -- I dislike the idea
> > that a value has a specific inherent type.
>
> Interestingly, Ada defines a type as a collection of values. It works
> quite well, when one consistently applies the definition.

I have never been very happy with relating type to sets of values (objects,
whatever).  I'm not saying that it's formally wrong (but see below), but it
doesn't fit with my intuitions very well -- most noticeably in that the sets
are generally unbounded so you have to ask where the (intensional) definitions
come from.

Two other notions of what "type" means might be interesting, both come from
attempts to create type-inference mechanisms for Smalltalk or related
languages.  Clearly one can't use the set-of-values approach for these purposes
;-)   One approach takes "type" to mean "set of classes"; the other takes a
finer-grained approach and takes it to mean "set of selectors" (where
"selector" is Smalltalk for "name of a method" -- or, more accurately, name of
a message).

But I would rather leave the question of what a type "is" open, and consider
that to be merely part of the type system.  For instance the hypothetical
nullability analysis type system I mentioned might have only three types
NULLABLE, ALWAYSNULL, and NEVERNULL.
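
Such a system is small enough to sketch in full. The following Python fragment (type names as in the paragraph above; the join rule is my own guess at the obvious one) treats the three types as a join-semilattice, merging facts where control paths meet:

```python
NEVERNULL, ALWAYSNULL, NULLABLE = "NEVERNULL", "ALWAYSNULL", "NULLABLE"

def join(a, b):
    # Least upper bound: identical facts are preserved, conflicting facts
    # widen to NULLABLE.
    return a if a == b else NULLABLE

# A variable that is nil on one branch and a real object on the other:
print(join(ALWAYSNULL, NEVERNULL))   # prints NULLABLE
```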

It's worth noting, too, that (in some sense) the type of an object can change
over time[*].  That can be handled readily (if not perfectly) in the informal
internal type system(s) which programmers run in their heads (pace the very
sensible post by Anton van Straaten today in this thread -- several branches
away), but cannot be handled by a type system based on sets-of-values (and is
also a counter-example to the idea that "the" dynamic type of an object/value
can be identified with its tag).

([*] if the set of operations in which it can legitimately partake changes.
That can happen explicitly in Smalltalk (using DNU proxies for instance if the
proxied object changes, or even using #becomeA:), but can happen anyway in less
"free" languages -- the State Pattern for instance, or even (arguably) in the
difference between an empty list and a non-empty list).

-- chris



Anton van Straaten wrote:

> But a program as seen by the programmer has types: the programmer
> performs (static) type inference when reasoning about the program, and
> debugs those inferences when debugging the program, finally ending up
> with a program which has a perfectly good type scheme.  It may be
> messy compared to say an HM type scheme, and it's usually not proved to
> be perfect, but that again is an orthogonal issue.

I like this way of looking at it.

-- chris



David Hopwood wrote:

> When people talk
> about "types" being associated with values in a "latently typed" or
> "dynamically typed" language, they really mean *tag*, not type.

I don't think that's true.  Maybe /some/ people do confuse the two, but I am
certainly a counter-example ;-)

The tag (if any) is part of the runtime machinery (or, if not, then I don't
understand what you mean by the word), and while that is certainly a reasonable
approximation to the type of the object/value, it is only an approximation,
and -- what's more -- is only an approximation to the type as yielded by one
specific (albeit abstract, maybe even hypothetical) type system.

If I send #someMessage to a proxy object which has not had its referent set
(and assuming the default value, presumably some variant of nil, does not
understand #someMessage), then that's just as much a type error as sending
#someMessage to a variable holding a nil value.  If I then assign the referent
of the proxy to some object which does understand #someMessage, then it is not
a type error to send #someMessage to the proxy.  So the type has changed, but
nothing in the tag system of the language implementation has changed.

-- chris



Rob Thorpe wrote:
>
> I think this should make it clear.  If I have a "xyz" in lisp I know it
> is a string.
> If I have "xyz" in an untyped language like assembler it may be
> anything, two pointers in binary, an integer, a bitfield.  There is no
> data at compile time or runtime to tell what it is, the programmer has
> to remember.

You have to distinguish between values (at the level of language
semantics) and their low-level representation (at the implementation
level). In a high-level language, the latter should be completely
immaterial to the semantics, and hence not interesting for the discussion.

>>No, variables are insignificant in this context. You can consider a
>>language without variables at all (such languages exist, and they can
>>even be Turing-complete) and still have evaluation, values, and a
>>non-trivial type system.
>
> Hmm.  You're right, ML is nowhere in my definition since it has no
> variables.

Um, it has. Mind you, it has no /mutable/ variables, but that was not
even what I was talking about.

>>>But the value itself has no type
>>
>>You mean that the type of the value is not represented at runtime? True,
>>but that's simply because the type system is static. It's not the same
>>as saying it has no type.
>
> Well, is it even represented at compile time?
> The compiler doesn't know in general what values will exist at runtime,
> it knows only what types variables have.  Sometimes it only has partial
> knowledge and sometimes the programmer deliberately overrides it.  From
> what knowledge it has, you could say it knows what types values will have.

Again, variables are insignificant. From the structure of an expression
the type system derives the type of the resulting value. An expression
may contain variables, and then the type system generally must know (or
be able to derive) their types too, but that's a separate issue. Most
values are anonymous. Nevertheless their types are known.

> Unfortunately it's often necessary to break static type systems.

You're definitely using the wrong static language then. ;-)

- Andreas


Chris Smith wrote:

> > It would be interesting to see what a language designed specifically to
> > support user-defined, pluggable, and perhaps composable, type systems
> > would look like. [...]
>
> You mean in terms of a practical programming language?  If not, then
> lambda calculus is used in precisely this way for the static sense of
> types.

Good point.  I was actually thinking about what a practical language might look
like, but -- hell -- why not start with theory for once ? ;-)

> I think Marshall got this one right.  The two are accomplishing
> different things.  In one case (the dynamic case) I am safeguarding
> against negative consequences of the program behaving in certain non-
> sensical ways.  In the other (the static case) I am proving theorems
> about the impossibility of this non-sensical behavior ever happening.

And so conflating the two notions of type (-checking) as a kind of category
error ?  If so then I see what you mean, and it's a useful distinction, but am
unconvinced that it's /so/ helpful a perspective that I would want to exclude
other perspectives which /do/ see the two as more-or-less trivial variants on
the same underlying idea.

> I acknowledge those questions.  I believe they are valid.  I don't know
> the answers.  As an intuitive judgement call, I tend to think that
> knowing the correctness of these things is of considerable benefit to
> software development, because it means that I don't have as much to
> think about at any one point in time.  I can validly make more
> assumptions about my code and KNOW that they are correct.  I don't have
> to trace as many things back to their original source in a different
> module of code, or hunt down as much documentation.  I also, as a
> practical matter, get development tools that are more powerful.

Agreed that these are all positive benefits of static declarative (more or
less) type systems.

But then (slightly tongue-in-cheek) shouldn't you be agitating for Java's type
system to be stripped out (we hardly /need/ it since the JVM does latent typing
anyway), leaving the field free for more powerful or more specialised static
analysis ?

> (Whether it's possible to create the same for a dynamically typed
> language is a potentially interesting discussion; but as a practical
> matter, no matter what's possible, I still have better development tools
> for Java than for JavaScript when I do my job.)

Acknowledged.  Contrary-wise, I have better development tools in Smalltalk than
I ever expect to have in Java -- in part (only in part) because of the late
binding in Smalltalk and its lack of insistence on declared types from an
arbitrarily chosen type system.

> On
> the other hand, I do like proving theorems, which means I am interested
> in type theory; if that type theory relates to programming, then that's
> great!  That's probably not the thing to say to ensure that my thoughts
> are relevant to the software development "industry", but it's
> nevertheless the truth.

Saying it will probably win you more friends in comp.lang.functional than it
loses in comp.lang.java.programmer ;-)

-- chris



Andreas Rossberg schrieb:
> Rob Thorpe wrote:
>> Hmm.  You're right, ML is nowhere in my definition since it has no
>> variables.
>
> Um, it has. Mind you, it has no /mutable/ variables, but that was not
> even what I was talking about.

Indeed. A (possibly nonexhaustive) list of program entities that (can)
have a type would comprise mutable variables, immutable variables (i.e.
constants and parameter names), and functions, or rather their results.

Regards,
Jo


Matthias Blume schrieb:
> Joachim Durchholz <jo@durchholz.org> writes:
>
>> Matthias Blume schrieb:
>>> Perhaps better: A language is statically typed if its definition
>>> includes (or even better: is based on) a static type system, i.e., a
>>> static semantics with typing judgments derivable by typing rules.
>>> Usually typing judgments associate program phrases ("expressions") with
>>> types given a typing environment.
>> This is defining a single term ("statically typed") using three
>> undefined terms ("typing judgements", "typing rules", "typing
>> environment").
>
> This was not meant to be a rigorous definition.

Rigorous or not, introducing additional undefined terms doesn't help
with explaining a term.

> Also, I'm not going to repeat the textbook definitions for those
> three standard terms here.

These terms certainly aren't standard for Perl, Python, Java, or Lisp,
and they aren't even standard for topics covered on comp.lang.functional
(which includes dynamically-typed languages after all).

Regards,
Jo


Pascal Costanza schrieb:
> (It's really important to understand that the idea is to use this for
> deployed programs - albeit hopefully in a more structured fashion - and
> not only for debugging. The example I have given is an extreme one that
> you would probably not use as such in a "real-world" setting, but it
> shows that there is a boundary beyond which static type systems cannot
> be used in a meaningful way anymore, at least as far as I can tell.)

As soon as the running program can be updated, the distinction between
"static" (compile time) and "dynamic" (run time) blurs.
You can still erect a definition for such a case, but it needs to refer
to the update process, and hence becomes language-specific. In other
words, language-independent definitions of dynamic and static typing
won't give any meaningful results for such languages.

I'd say it makes more sense to talk about what advantages of static vs.
dynamic typing can be applied in such a situation.
E.g. one interesting topic would be the change in trade-offs: making
sure that a type error cannot occur becomes much more difficult
(particularly if the set of available types can change during an
update), so static typing starts to lose some of its appeal; OTOH a good
type system can give you a lot of guarantees even in such a situation,
even if it might have to revert to the occasional run-time type check,
so static checking still has its merits.

Regards,
Jo


> So, will y'all just switch from using "dynamically typed" to "latently
> typed", and stop talking about any real programs in real programming
> languages as being "untyped" or "type-free", unless you really are
> talking about situations in which human reasoning doesn't come into
> the whole issue at all.

I agree with most of what you say except regarding "untyped".

In machine language or most assembly the type of a variable is
something held only in the mind of the programmer writing it, and
nowhere else.  In latently typed languages though the programmer can
ask what the type of a particular value is.  There is a vast
difference to writing code in the latter kind of language to writing
code in assembly.

I would suggest that at least assembly should be referred to as
"untyped".



Chris Uppal wrote:
>
> I have never been very happy with relating type to sets of values (objects,
> whatever).

Indeed, this view is much too narrow. In particular, it cannot explain
abstract types, which is *the* central aspect of decent type systems.
There were papers observing this as early as 1970. A type system should
rather be seen as a logic, stating invariants about a program. This can
include operations supported by values of certain types, as well as more
advanced properties, e.g. whether something can cause a side-effect, can
diverge, can have a deadlock, etc.

(There are also theoretic problems with the types-as-sets view, because
sufficiently rich type systems can no longer be given direct models in
standard set theory. For example, first-class polymorphism would run
afoul of the axiom of foundation.)

> It's worth noting, too, that (in some sense) the type of an object can change
> over time[*].

No. Since a type expresses invariants, this is precisely what may *not*
happen. If certain properties of an object may change then the type of
the object has to reflect that possibility. Otherwise you cannot
legitimately call it a type.

Taking your example of an uninitialised reference, its type is neither
"reference to nil" nor "reference to object that understands message X",
it is in fact the union of both (at least). And indeed, languages with
slightly more advanced type systems make things like this very explicit
(in ML for example you have the option type for that purpose).
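
The point about option types can be made concrete. Here is a hypothetical Python rendering (names invented; Python's Optional annotation is only advisory, where ML's option type would be enforced by the checker): the possibly-unset referent is part of the type, so the nil case must be handled explicitly rather than failing at send time.

```python
from typing import Optional

class Greeter:
    def some_message(self):
        return "hello"

def send_some_message(referent: Optional[Greeter]) -> str:
    # The option type makes the uninitialised case part of the type, so it
    # must be dealt with here, not discovered when the message is sent.
    if referent is None:
        return "<no referent>"
    return referent.some_message()

print(send_some_message(None))       # prints <no referent>
print(send_some_message(Greeter()))  # prints hello
```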

- Andreas


Joachim Durchholz wrote:
> Pascal Costanza schrieb:
>> (It's really important to understand that the idea is to use this for
>> deployed programs - albeit hopefully in a more structured fashion -
>> and not only for debugging. The example I have given is an extreme one
>> that you would probably not use as such in a "real-world" setting, but
>> it shows that there is a boundary beyond which static type systems
>> cannot be used in a meaningful way anymore, at least as far as I can
>> tell.)
>
> As soon as the running program can be updated, the distinction between
> "static" (compile time) and "dynamic" (run time) blurs.
> You can still erect a definition for such a case, but it needs to refer
> to the update process, and hence becomes language-specific. In other
> words, language-independent definitions of dynamic and static typing
> won't give any meaningful results for such languages.
>
> I'd say it makes more sense to talk about what advantages of static vs.
> dynamic typing can be applied in such a situation.
> E.g. one interesting topic would be the change in trade-offs: making
> sure that a type error cannot occur becomes much more difficult
> (particularly if the set of available types can change during an
> update), so static typing starts to lose some of its appeal; OTOH a good
> type system can give you a lot of guarantees even in such a situation,
> even if it might have to revert to the occasional run-time type check,
> so static checking still has its merits.

I am not opposed to this view. The two examples I have given for things
that are impossible in static vs. dynamic type systems were
intentionally extreme to make the point that you have to make a choice,
that you cannot just blindly throw (instances of) both approaches
together. Static type systems potentially change the semantics of a
language in ways that cannot be captured by dynamically typed languages
anymore, and vice versa.

There is, of course, room for research on performing static type checks
in a running system, for example immediately after or before a software
update is applied, or maybe even on separate type checking on software
increments such that guarantees for their composition can be derived.
However, I am not aware of a lot of work in that area, maybe because the
static typing community is too focused on compile-time issues.

Personally, I also don't think that's the most interesting issue in that
area, but that's of course only a subjective opinion.

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/


Joachim Durchholz <jo@durchholz.org> writes:

> Matthias Blume schrieb:
>> Joachim Durchholz <jo@durchholz.org> writes:
>>
>>> Matthias Blume schrieb:
>>>> Perhaps better: A language is statically typed if its definition
>>>> includes (or even better: is based on) a static type system, i.e., a
>>>> static semantics with typing judgments derivable by typing rules.
>>>> Usually typing judgments associate program phrases ("expressions") with
>>>> types given a typing environment.
>>> This is defining a single term ("statically typed") using three
>>> undefined terms ("typing judgements", "typing rules", "typing
>>> environment").
>> This was not meant to be a rigorous definition.
>
> Rigorous or not, introducing additional undefined terms doesn't help
> with explaining a term.

I think you missed my point.  My point was that a language is
statically typed IF IT IS DEFINED THAT WAY, i.e., if it has a static
type system that is PART OF THE LANGUAGE DEFINITION.  The details are
up to each individual definition.

>> Also, I'm not going to repeat the textbook definitions for those
>> three standard terms here.
>
> These terms certainly aren't standard for Perl, Python, Java, or Lisp,

Indeed.  That's because these languages are not statically typed.


Chris Uppal schrieb:
> Chris Smith wrote:
>> I think Marshall got this one right.  The two are accomplishing
>> different things.  In one case (the dynamic case) I am safeguarding
>> against negative consequences of the program behaving in certain non-
>> sensical ways.  In the other (the static case) I am proving theorems
>> about the impossibility of this non-sensical behavior ever happening.
>
> And so conflating the two notions of type (-checking) as a kind of category
> error ?  If so then I see what you mean, and it's a useful distinction, but am
> unconvinced that it's /so/ helpful a perspective that I would want to exclude
> other perspectives which /do/ see the two as more-or-less trivial variants on
> the same underlying idea.

Just think of all the unit tests that you don't have to write.

Regards,
Jo


Matthias Blume schrieb:
> Joachim Durchholz <jo@durchholz.org> writes:
>
>> Matthias Blume schrieb:
>>> Joachim Durchholz <jo@durchholz.org> writes:
>>>
>>>> Matthias Blume schrieb:
>>>>> Perhaps better: A language is statically typed if its definition
>>>>> includes (or even better: is based on) a static type system, i.e., a
>>>>> static semantics with typing judgments derivable by typing rules.
>>>>> Usually typing judgments associate program phrases ("expressions") with
>>>>> types given a typing environment.
>>>> This is defining a single term ("statically typed") using three
>>>> undefined terms ("typing judgements", "typing rules", "typing
>>>> environment").
>>> This was not meant to be a rigorous definition.
>> Rigorous or not, introducing additional undefined terms doesn't help
>> with explaining a term.
>
> I think you missed my point.  My point was that a language is
> statically typed IF IT IS DEFINED THAT WAY, i.e., if it has a static
> type system that is PART OF THE LANGUAGE DEFINITION.  The details are
> up to each individual definition.

Well, that certainly makes more sense to me.

Regards,
Jo


Pascal Costanza schrieb:
> Static type systems potentially change the semantics of a
> language in ways that cannot be captured by dynamically typed languages
> anymore, and vice versa.

Very true.

I suspect that's also why adding type inference to a
dynamically-typed language doesn't give you all the benefits of static
typing: the added-on type system is (usually) too weak to express really
interesting guarantees, usually because the language's semantics isn't
tailored towards making the inference steps easy enough.

Conversely, I suspect that adding dynamic typing to statically-typed
languages tends to miss the most interesting applications, mostly
because all the features that can "simply be done" in a
dynamically-typed language have to be retrofitted to the
statically-typed language on a case-by-case basis.

In both cases, the language designers often don't know the facilities of
the opposed camp well enough to really assess the trade-offs they are doing.

> There is, of course, room for research on performing static type checks
> in a running system, for example immediately after or before a software
> update is applied, or maybe even on separate type checking on software
> increments such that guarantees for their composition can be derived.
> However, I am not aware of a lot of work in that area, maybe because the
> static typing community is too focused on compile-time issues.

I think it's mostly because it's intimidating.

The core semantics of an ideal language fits on a single sheet of paper,
to facilitate proofs of language properties. Type checking
dynamically-loaded code probably wouldn't fit on that sheet of paper.
(The non-core semantics is then usually a set of transformation rules
that map the constructs that the programmer sees to constructs of the
core language.)

Regards,
Jo


Chris Smith wrote:
>
>  When I used the word "type" above, I was adopting the
> working definition of a type from the dynamic sense.  That is, I'm
> considering whether statically typed languages may be considered to also
> have dynamic types, and it's pretty clear to me that they do.

I suppose this statement has to be evaluated on the basis of a
definition of "dynamic types." I don't have a firm definition for
that term, but my working model is runtime type tags. In which
case, I would say that among statically typed languages,
Java does have dynamic types, but C does not. C++ is
somewhere in the middle.

Marshall



Joachim Durchholz wrote:
>
> Hmm... I think this distinction doesn't cover all cases.
>
> Assume a language that
> a) defines that a program is "type-correct" iff HM inference establishes
> that there are no type errors
> b) compiles a type-incorrect program anyway, with an established
> rigorous semantics for such programs (e.g. by throwing exceptions as
> appropriate).
> The compiler might actually refuse to compile type-incorrect programs,
> depending on compiler flags and/or declarations in the code.
>
> Typed ("strongly typed") it is, but is it statically typed or
> dynamically typed?

I think what this highlights is the fact that our existing terminology
is not up to the task of representing all the possible design
choices we could make. Some parts of dynamic vs. static
are mutually exclusive; some parts are orthogonal. Maybe
we have reached the point where trying to cram everything
into one of two possible ways of doing things isn't going
to cut it any more.

Could it be that the US two-party system has influenced our
thinking?</joke>

Marshall



Joachim Durchholz wrote:
>
> On a semantic level, the tag is always there - it's the type (and
> definitely part of an axiomatic definition of the language).
> Tag elimination is "just" an optimization.

I see what you're saying, but the distinction is a bit fine for me.
If the language has no possible mechanism to observe the
it-was-only-eliminated-as-an-optimization tag, in what sense
is it "always there"? E.g., the C programming language.

Marshall



David Hopwood wrote:
>
> Oh, but it *does* make sense to talk about dynamic tagging in a statically
> typed language.
>
> That's part of what makes the term "dynamically typed" harmful: it implies
> a dichotomy between "dynamically typed" and "statically typed" languages,
> when in fact dynamic tagging and static typing are (mostly) independent
> features.

That's really coming home to me in this thread: the terminology is *so*
bad. I have noticed this previously in the differences between
structural and nominal typing; many typing issues associated with this
distinction are falsely labeled as static-vs-dynamic issues, since so
many statically typed languages are nominally typed.

We need entirely new, finer grained terminology.

Marshall



Marshall schreef:

> "dynamic types." I don't have a firm definition for
> that term, but my working model is runtime type tags. In which
> case, I would say that among statically typed languages,
> Java does have dynamic types, but C does not. C++ is
> somewhere in the middle.

C has union.

--
Affijn, Ruud

"Gewoon is een tijger."



Chris Uppal wrote:
> It's worth noting, too, that (in some sense) the type of an object can change
> over time[*].  That can be handled readily (if not perfectly) in the informal
> internal type system(s) which programmers run in their heads (pace the very
> sensible post by Anton van Straaten today in this thread -- several branches
> away), but cannot be handled by a type system based on sets-of-values (and is
> also a counter-example to the idea that "the" dynamic type of an object/value
> can be identified with its tag).
>
> ([*] if the set of operations in which it can legitimately partake changes.
> That can happen explicitly in Smalltalk (using DNU proxies for instance if the
> proxied object changes, or even using #becomeA:), but can happen anyway in less
> "free" languages -- the State Pattern for instance, or even (arguably) in the
> difference between an empty list and a non-empty list).

Dynamic changes in object behaviour are not incompatible with type systems based
on sets of values (e.g. semantic subtyping). There are some tricky issues in
making such a system work, and I'm not aware of any implemented language that
does it currently, but in principle it's quite feasible.

For a type system that can handle dynamic proxying, see
<http://www.doc.ic.ac.uk/~scd/FOOL11/FCM.pdf>.

--
David Hopwood <david.nospam.hopwood@blueyonder.co.uk>


Andreas Rossberg schrieb:
> Chris Uppal wrote:
>
>> It's worth noting, too, that (in some sense) the type of an object can
>> change over time[*].
>
> No. Since a type expresses invariants, this is precisely what may *not*
> happen.

No. A type is a set of allowable values, allowable operations, and
constraints on the operations (which are often called "invariants" but
they are invariant only as long as the type is invariant).

I very much agree that the association of a type with a value or a name
should be constant over their lifetime - but that doesn't follow from
the definition of "type", it follows from general maintainability
considerations (quite strong ones actually).

Regards,
Jo


Nice post! One question:

Anton van Straaten wrote:
>
> 3.  A really natural term to refer to types which programmers reason
> about, even if they are not statically checked, is "latent types".  It
> captures the situation very well intuitively, and it has plenty of
> precedent -- e.g. it's mentioned in the Scheme reports, R5RS and its
> predecessors, going back at least a decade or so (haven't dug to check
> when it first appeared).

Can you be more explicit about what "latent types" means?
I'm sorry to say it's not at all natural or intuitive to me.
Are you referring to the types in the programmers head,
or the ones at runtime, or what?

Marshall



Chris Uppal wrote:
> doesn't fit with my intuitions very well -- most noticeably in that the sets
> are generally unbounded

Errr, not in Ada.  Indeed, not in any machine I know of with a limited
amount of memory.
Andreas Rossberg wrote:
> Indeed, this view is much too narrow. In particular, it cannot explain
> abstract types, which is *the* central aspect of decent type systems.

Well, it's Ada's view. I didn't say it was right for theoretical
languages or anything like that. As far as I know, LOTOS is the only
language that *actually* uses abstract data types - you have to use the
equivalent of #include to bring in the integers, for example. Everything
else uses informal rules to say how types work.

But Ada's definition gives you a very nice way of talking about things
like whether integers that overflow are the same type as integers that
don't overflow, or whether an assignment of an integer to a positive is
legal, or adding a CountOfApples to a CountOfOranges is legal, or
whether passing a "Dog" object to an "Animal" function parameter makes
sense in a particular context.

Indeed, the ability to declare a new type that has the exact same
underlying representation and isomorphically identical operations but
not be the same type is something I find myself often missing in
languages. It's nice to be able to say "this integer represents vertical
pixel count, and that represents horizontal pixel count, and you don't
get to add them together."

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.


Joachim Durchholz wrote:
>>
>>> It's worth noting, too, that (in some sense) the type of an object
>>> can change over time[*].
>>
>> No. Since a type expresses invariants, this is precisely what may
>> *not* happen.
>
> No. A type is a set of allowable values, allowable operations, and
> constraints on the operations (which are often called "invariants" but
> they are invariant only as long as the type is invariant).

The purpose of a type system is to derive properties that are known to
hold in advance. A type is the encoding of these properties. A type
varying over time is an inherent contradiction (or another abuse of the
term "type").

- Andreas


Darren New <dnew@san.rr.com> writes:

> [ ... ] As far as I know, LOTOS is the only
> language that *actually* uses abstract data types - you have to use
> the equivalent of #include to bring in the integers, for
> example. Everything else uses informal rules to say how types work.

There are *tons* of languages that "actually" facilitate abstract data
types, and some of these languages are actually used by real people.


Darren New wrote:
>
> As far as I know, LOTOS is the only
> language that *actually* uses abstract data types

Maybe I don't understand what you mean with ADT here, but all languages
with a decent module system support ADTs in the sense it is usually
understood, see ML for a primary example. Classes in most OOPLs are

> Indeed, the ability to declare a new type that has the exact same
> underlying representation and isomorphically identical operations but
> not be the same type is something I find myself often missing in
> languages. It's nice to be able to say "this integer represents vertical
> pixel count, and that represents horizontal pixel count, and you don't
> get to add them together."

Not counting C/C++, I don't know when I last worked with a typed
language that does *not* have this ability... (which is slightly

- Andreas


In comp.lang.functional Anton van Straaten <anton@appsolutions.com> wrote:
[...]

This static vs dynamic type thing reminds me of one article written by
Bjarne Stroustrup where he notes that "Object-Oriented" has become a
synonym for "good".  More precisely, it seems to me that both camps
(static & dynamic) think that "typed" is a synonym for having
"well-defined semantics" or being "safe" and therefore feel the need
to be able to speak of their language as "typed" whether or not it
makes sense.

> Let me add another complex subtlety, then: the above description misses
> an important point, which is that *automated* type checking is not the
> whole story.  I.e. that compile time/runtime distinction is a kind of
> red herring.

I agree.  I think that instead of "statically typed" we should say
"typed" and instead of "(dynamically|latently) typed" we should say
"untyped".

> In a statically-checked language, people tend to confuse automated
> static checking with the existence of types, because they're thinking in
> a strictly formal sense: they're restricting their world view to what
> they see "within" the language.

That is not unreasonable.  You see, you can't have types unless you
have a type system.  Types without a type system are like answers
without questions - it just doesn't make any sense.

> Then they look at programs in a dynamically-checked language, and see
> checks happening at runtime, and they assume that this means that the
> program is "untyped".

Not in my experience.  Either a *language* specifies a type system or
not.  There is little room for confusion.  Well, at least unless you
equate "typing" with being "well-defined" or "safe" and go to great
lengths to convince yourself that your program has "latent types" even
without specifying a type system.

> It's certainly close enough to say that the *language* is untyped.

Indeed.  Either a language has a type system and is typed or has no
type system and is untyped.  I see very little room for confusion
here.  In my experience, the people who confuse these things are
people from the dynamic/latent camp who wish to see types everywhere
because they confuse typing with safety or having well-defined
semantics.

> But a program as seen by the programmer has types: the programmer
> performs (static) type inference when reasoning about the program, and
> debugs those inferences when debugging the program, finally ending up
> with a program which has a perfectly good type scheme.  It may be
> messy compared to say an HM type scheme, and it's usually not proved to
> be perfect, but that again is an orthogonal issue.

There is a huge hole in your argument above.  Types really do not make
sense without a type system.  To claim that a program has a type
scheme, you must first specify the type system.  Otherwise it just
doesn't make any sense.

> Mathematicians operated for thousands of years without automated
> checking of proofs, so you can't argue that because a
> dynamically-checked program hasn't had its type scheme proved correct,
> that it somehow doesn't have types.  That would be a bit like arguing
> that we didn't have Math until automated theorem provers came along.

No - not at all.  First of all, mathematics has matured quite a bit
since the early days.  I'm sure you've heard of the axiomatic method.
However, what you are missing is that to prove that your program has
types, you first need to specify a type system.  Similarly, to prove
something in math you start by specifying [fill in the rest].

> 1. "Untyped" is really quite a misleading term, unless you're talking
> about something like the untyped lambda calculus.  That, I will agree,
> can reasonably be called untyped.

Untyped is not misleading.  "Typed" is not a synonym for "safe" or
"having well-defined semantics".

> So, will y'all just switch from using "dynamically typed" to "latently
> typed"

I won't (use "latently typed").  At least not without further
qualification.

-Vesa Karvonen


Rob Thorpe wrote:
> Chris Smith wrote:
> > > Torben Ægidius Mogensen <torbenm@app-3.diku.dk> wrote:
> > > That's not really the difference between static and dynamic typing.
> > > Static typing means that there exist a typing at compile-time that
> > > guarantess against run-time type violations.  Dynamic typing means
> > > that such violations are detected at run-time.  This is orthogonal to
> > > strong versus weak typing, which is about whether such violations are
> > > detected at all.  The archetypal weakly typed language is machine code
> > > -- you can happily load a floating point value from memory, add it to
> > > a string pointer and jump to the resulting value.  ML and Scheme are
> > > both strongly typed, but one is statically typed and the other
> > > dynamically typed.
> >
> > Knowing that it'll cause a lot of strenuous objection, I'll nevertheless
> > interject my plea not to abuse the word "type" with a phrase like
> > "dynamically typed".  If anyone considers "untyped" to be perjorative,
> > as some people apparently do, then I'll note that another common term is
> > "type-free," which is marketing-approved but doesn't carry the
> > misleading connotations of "dynamically typed."  We are quickly losing
> > any rational meaning whatsoever to the word "type," and that's quite a
> > shame.
>
> I don't think dynamic typing is that nebulous.  I remember this being
> discussed elsewhere some time ago, I'll post the same reply I did then
> ..
> A language is statically typed if a variable has a property - called
> it's type - attached to it, and given it's type it can only represent
> values defined by a certain class.
>
> A language is latently typed if a value has a property - called it's
> type - attached to it, and given it's type it can only represent values
> defined by a certain class.
>
> Some people use dynamic typing as a word for latent typing, others use
> it to mean something slightly different.  But for most purposes the
> definition above works for dynamic typing also.
>
> Untyped and type-free mean something else: they mean no type checking
> is done.

Since people have found some holes in this definition I'll have another
go:-

First, a definition: general expressions (gexprs) are variables
(mutable or immutable), expressions, and the entities that functions return.

A statically typed language has a parameter associated with each gexpr,
called its type.  The code may test the type of a gexpr.  The language
will check whether the gexprs of an operator/function have types that
match what is required, to some criterion of sufficiency.  It will emit
an error/warning when they don't.  It will do so universally.

A latently typed language has a parameter associated with each value,
called its type.  The code may test the type of a value.  The language
may check whether the gexprs of an operator/function have types that
match what is required, to some criterion of sufficiency.  It will not
necessarily do so universally.

An untyped language is one that does not possess either a static or
latent type system.  In an untyped language gexprs possess no type
information, and neither do values.

--

These definitions still have problems; they don't say anything about
languages that sit between the various categories, for example.  I don't
know where an HM type system would fit within them.



In comp.lang.functional Andreas Rossberg <rossberg@ps.uni-sb.de> wrote:
> Darren New wrote:
[...]
> > Indeed, the ability to declare a new type that has the exact same
> > underlying representation and isomorphically identical operations but
> > not be the same type is something I find myself often missing in
> > languages. It's nice to be able to say "this integer represents vertical
> > pixel count, and that represents horizontal pixel count, and you don't
> > get to add them together."

> Not counting C/C++, I don't know when I last worked with a typed
> language that does *not* have this ability... (which is slightly

Would Java count?

-Vesa Karvonen


Vesa Karvonen wrote:
>
>>>Indeed, the ability to declare a new type that has the exact same
>>>underlying representation and isomorphically identical operations but
>>>not be the same type is something I find myself often missing in
>>>languages. It's nice to be able to say "this integer represents vertical
>>>pixel count, and that represents horizontal pixel count, and you don't
>
>>Not counting C/C++, I don't know when I last worked with a typed
>>language that does *not* have this ability... (which is slightly
>
> Would Java count?

Yes, you are right. And there certainly are more in the OO camp.

But honestly, I do not remember when I last had to actively work with
one of them, including Java... :-)

- Andreas


Vesa Karvonen wrote:
> In comp.lang.functional Anton van Straaten <anton@appsolutions.com> wrote:
> > Let me add another complex subtlety, then: the above description misses
> > an important point, which is that *automated* type checking is not the
> > whole story.  I.e. that compile time/runtime distinction is a kind of
> > red herring.
>
> I agree.  I think that instead of "statically typed" we should say
> "typed" and instead of "(dynamically|latently) typed" we should say
> "untyped".
>
> > In a statically-checked language, people tend to confuse automated
> > static checking with the existence of types, because they're thinking in
> > a strictly formal sense: they're restricting their world view to what
> > they see "within" the language.
>
> That is not unreasonable.  You see, you can't have types unless you
> have a type system.  Types without a type system are like answers
> without questions - it just doesn't make any sense.
>
> > Then they look at programs in a dynamically-checked language, and see
> > checks happening at runtime, and they assume that this means that the
> > program is "untyped".
>
> Not in my experience.  Either a *language* specifies a type system or
> not.  There is little room for confusion.  Well, at least unless you
> equate "typing" with being "well-defined" or "safe" and go to great
> lengths to convince yourself that your program has "latent types" even
> without specifying a type system.

The question is: What do you mean by "type system"?
Scheme and Lisp both clearly define how types work in their
specifications; others may do too, I don't know.
Of course, you may not consider that a type system if you take "type
system" to mean a static type system.

> > It's certainly close enough to say that the *language* is untyped.
>
> Indeed.  Either a language has a type system and is typed or has no
> type system and is untyped.  I see very little room for confusion
> here.  In my experience, the people who confuse these things are
> people from the dynamic/latent camp who wish to see types everywhere
> because they confuse typing with safety or having well-defined
> semantics.

No.  It's because the things that we call latent types are used for the
same purpose that programmers of statically typed languages use static
types for.

Statically typed programmers ensure that the value of some expression
is of some type by having the compiler check it.  Programmers of
latently typed languages check, if they think it's important, by asking
what the type of the result is.

The objection here is that advocates of statically typed languages seem
to be claiming the word "type" as their own, and asking that others use
their definitions of typing, which are really specific to their
subjects of interest. This doesn't help advocates of statically typed
languages, latently typed languages, or anyone else.  It doesn't help because
why they would.  All that may happen is that users of statically typed
languages change the words they use.  This would confuse me, for one. I
would much rather understand what ML programmers, for example, are
saying and that's hard enough as it is.

There's also my other objection, if you consider latently typed
languages untyped, then what is assembly?



Matthias Blume wrote:
> There are *tons* of languages that "actually" facilitate abstract data
> types, and some of these languages are actually used by real people.

I don't know of any others in actual use. Could you name a couple?

Note that I don't consider things like usual OO languages (Eiffel,
Smalltalk, etc) to have "abstract data types".

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.


Andreas Rossberg wrote:
> Maybe I don't understand what you mean with ADT here, but all languages
> with a decent module system support ADTs in the sense it is usually
> understood, see ML for a primary example.

OK.  Maybe some things like ML and Haskell and such that I'm not
intimately familiar with do, now that you mention it, yes.

> Classes in most OOPLs are essentially beefed-up ADTs as well.

Err, no. There's nothing really abstract about them. And their values
are mutable. So while one can think about them as an ADT, one actually
has to write code to (for example) calculate or keep track of how many
entries are on a stack, if you want that information.

> Not counting C/C++, I don't know when I last worked with a typed
> language that does *not* have this ability... (which is slightly

Java? C#? Icon? Perl? (Hmmm... Pascal does, IIRC.) I guess you just work
with better languages than I do. :-)

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.


Darren New wrote:
>
>> Maybe I don't understand what you mean with ADT here, but all
>> languages with a decent module system support ADTs in the sense it is
>> usually understood, see ML for a primary example.
>
> OK.  Maybe some things like ML and Haskell and such that I'm not
> intimately familiar with do, now that you mention it, yes.

>> Classes in most OOPLs are essentially beefed-up ADTs as well.
>
> Err, no. There's nothing really abstract about them. And their values
> are mutable. So while one can think about them as an ADT, one actually
> has to write code to (for example) calculate or keep track of how many
> entries are on a stack, if you want that information.

Now you lost me completely. What has mutability to do with it? And the
stack?

AFAICT, ADT describes a type whose values can only be accessed by a
certain fixed set of operations. Classes qualify for that, as long as
they provide proper encapsulation.

>> Not counting C/C++, I don't know when I last worked with a typed
>> language that does *not* have this ability... (which is slightly
>
> Java? C#? Icon? Perl? (Hmmm... Pascal does, IIRC.)  I guess you just work
> with better languages than I do. :-)

OK, I admit that I exaggerated slightly. Although currently I'm indeed
able to mostly work with the more pleasant among languages. :-)

(Btw, Pascal did not have it either, AFAIK)

- Andreas


Dr.Ruud wrote:
> Marshall schreef:
>
> > "dynamic types." I don't have a firm definition for
> > that term, but my working model is runtime type tags. In which
> > case, I would say that among statically typed languages,
> > Java does have dynamic types, but C does not. C++ is
> > somewhere in the middle.
>
> C has union.

That's not the same thing.  The value of a union in C can be any of a
set of specified types.  But the program cannot find out which, and the
language doesn't know either.

With C++ and Java dynamic types the program can test to find the type.



Andreas Rossberg wrote:
> AFAICT, ADT describes a type whose values can only be accessed by a
> certain fixed set of operations.

No. AFAIU, an ADT defines the type based on the operations. The stack
holding the integers 1 and 2 is the value (push(2, push(1, empty()))).
There's no "internal" representation. The values and operations are
defined by preconditions and postconditions.

Both a stack and a queue could be written in most languages as "values
that can only be accessed by a fixed set of operations" having the same
possible internal representations and the same function signatures.
They're far from the same type, because they're not abstract. The part
you can't see from outside the implementation is exactly the part that
defines how it works.

Granted, it's a common mistake to write that some piece of C++ code is
an ADT.

For example, an actual ADT for a stack (and a set) is shown on
Note that the axioms for the stack type completely define its operation
and semantics. There is no other "implementation" involved. And in LOTOS
(which uses ACT.1 or ACT.ONE (I forget which) as its type language), this is
actually how you'd write the code for a stack, and then the compiler
would crunch a while to figure out a corresponding implementation.

I suspect "ADT" is by now so corrupted that it's useful to use a
different term, such as "algebraic type" or some such.

> Classes qualify for that, as long as they provide proper encapsulation.

Nope.

> OK, I admit that I exaggerated slightly. Although currently I'm indeed
> able to mostly work with the more pleasant among languages. :-)

Yah. :-)

> (Btw, Pascal did not have it either, AFAIK)

I'm pretty sure in Pascal you could say

Type Apple = Integer; Orange = Integer;
and then vars of type Apple and Orange were not interchangeable.

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.


Rob Thorpe schreef:
> Dr.Ruud:
>> Marshall:

>>> "dynamic types." I don't have a firm definition for
>>> that term, but my working model is runtime type tags. In which
>>> case, I would say that among statically typed languages,
>>> Java does have dynamic types, but C does not. C++ is
>>> somewhere in the middle.
>>
>> C has union.
>
> That's not the same thing.

That is your opinion. In the context of this discussion I don't see any
problem with putting C's union under "dynamic types".

> The value of a union in C can be any of a
> set of specified types.  But the program cannot find out which, and
> the language doesn't know either.
>
> With C++ and Java dynamic types the program can test to find the type.

When such a test is needed for the program with the union, it has it.

--
Affijn, Ruud

"Gewoon is een tijger."



Joachim Durchholz <jo@durchholz.org> wrote:
> Assume a language that
> a) defines that a program is "type-correct" iff HM inference establishes
> that there are no type errors
> b) compiles a type-incorrect program anyway, with an established
> rigorous semantics for such programs (e.g. by throwing exceptions as
> appropriate).

So the compiler now attempts to prove theorems about the program, but
once it has done so it uses the results merely to optimize its runtime
behavior and then throws the results away.  I'd call that not a
statically typed language, then.  The type-checking behavior is actually
rather irrelevant both to the set of valid programs of the language, and
to the language semantics (since the same could be accomplished without
the type checking).  It is only relevant to performance.  Obviously, the
language probably qualifies as dynamically typed for most common
definitions of that term, but I'm not ready to accept one definition and
claim to understand it, yet, so I'll be cautious about classifying the
language.

> The compiler might actually refuse to compile type-incorrect programs,
> depending on compiler flags and/or declarations in the code.

Then those compiler flags would cause the compiler to accept a different
language, and that different language would be a statically typed
language (by which I don't mean to exclude the possibility of its also
being dynamically typed).

> Typed ("strongly typed") it is, but is it statically typed or
> dynamically typed?

So my answer is that it's not statically typed in the first case, and is
statically typed in the second case, and it intuitively appears to be
dynamically typed at least in the first, and possibly in the second as
well.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation


Marshall <marshall.spight@gmail.com> wrote:
> I think what this highlights is the fact that our existing terminology
> is not up to the task of representing all the possible design
> choices we could make. Some parts of dynamic vs. static
> are mutually exclusive; some parts are orthogonal.

Really?  I can see that in a strong enough static type system, many
dynamic typing features would become unobservable and therefore would be
pragmatically excluded from any probable implementations... but I don't
see any other kind of mutual exclusion between the two.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation


Marshall wrote:
>
> That's really coming home to me in this thread: the terminology is *so*
> bad. I have noticed this previously in the differences between
> structural and nominal typing; many typing issues associated with this
> distinction are falsely labeled as static-vs-dynamic issues, since so
> many statically typed languages are nominally typed.
>
> We need entirely new, finer grained terminology.

Agreed.  That's why I've been biting my tongue and avoiding posting.
The discussion is going along the lines of the blind men and the
elephant.  I've seen about seven different definitions of what a 'type'
is, and most of the arguments seem to come from people misunderstanding
the other person's definition.  I think that *most* of the people
arguing here would agree with each other (possibly in detail) if only
they understood each other.

Static type aficionados have a specialized jargon that has precise
meaning for a number of the terms being used.  People that aren't into
that field of computer science use the same terms in a much looser
sense.  But static type aficionados are definitely in the minority in
comp.lang.lisp, and probably in a few of the other comp.langs as well.

What we need is an FAQ entry for how to talk about types with people
who are technically adept, but non-specialists.  Or alternatively, an
FAQ on how to explain the term 'dynamic typing' to a type theorist.



Andreas Rossberg wrote:
> Chris Uppal wrote:
> >
> > I have never been very happy with relating type to sets of values (objects,
> > whatever).
>
> Indeed, this view is much too narrow. In particular, it cannot explain
> abstract types, which is *the* central aspect of decent type systems.

What prohibits us from describing an abstract type as a set of values?

> There were papers observing this as early as 1970.

References?

> (There are also theoretic problems with the types-as-sets view, because
> sufficiently rich type systems can no longer be given direct models in
> standard set theory. For example, first-class polymorphism would run
> afoul of the axiom of foundation.)

There is no reason why we must limit ourselves to "standard set theory"
any more than we have to limit ourselves to standard type theory.
Both are progressing, and set theory seems to me to be a good
choice for a foundation. What else would you use?

(Agree with the rest.)

Marshall



Chris Uppal wrote:
> David Hopwood wrote:
>
>> When people talk about "types" being associated with values in a "latently typed"
>> or "dynamically typed" language, they really mean *tag*, not type.
>
> I don't think that's true.  Maybe /some/ people do confuse the two, but I am
> certainly a counter-example ;-)
>
> The tag (if any) is part of the runtime machinery (or, if not, then I don't
> understand what you mean by the word), and while that is certainly a reasonably
> approximation to the type of the object/value, it is only an approximation,
> and -- what's more -- is only an approximation to the type as yielded by one
> specific (albeit abstract, maybe even hypothetical) type system.

Yes. I should perhaps have mentioned that people sometimes mean "protocol"
rather than "tag" or "type" (a protocol being the set of messages that an object
can respond to, roughly speaking).
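
For illustration, the "set of messages an object can respond to" can be
sketched in Python (a hedged sketch; the helper name `protocol` is mine,
not standard terminology):

```python
def protocol(obj):
    """The set of messages an object can respond to, roughly speaking."""
    return {name for name in dir(obj) if not name.startswith('_')}

# Strings respond to #upper; integers do not.
print('upper' in protocol("hello"))
print('upper' in protocol(42))
```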

> If I send #someMessage to a proxy object which has not had its referent set
> (and assuming the default value, presumably some variant of nil, does not
> understand #someMessage), then that's just as much a type error as sending
> #someMessage to a variable holding a nil value.

It's an error, certainly. People usually call it a type error. But does that
terminology actually make sense?

Typical programming languages have many kinds of semantic error that can occur
at run-time: null references, array index out of bounds, assertion failures,
failed casts, "message not understood", ArrayStoreExceptions in Java,
arithmetic overflow, divide by zero, etc.

Conventionally, some of these errors are called "type errors" and some are
not. But there seems to be little rhyme or reason to this categorization, as
far as I can see. If in a particular language, both array index bounds errors
and "message not understood" can occur at run-time, then there's no objective
reason to call one a type error and the other not. Both *could* potentially
be caught by a type-based analysis in some cases, and both *are not* caught
by such an analysis in that language.

A more consistent terminology would reserve "type error" for errors that
occur when a typechecking/inference algorithm fails, or when an explicit
type coercion or typecheck fails.

According to this view, the only instances where a run-time error should be
called a "type error" are:

- a failed cast, or no match for any branch of a 'typecase' construct.
Here the construct that fails is a coercion of a value to a specific type,
or a check that it conforms to that type, and so the term "type error"
makes sense.

- cases where a typechecking/inference algorithm fails at run-time (e.g.
when dynamically loaded code is typechecked).

In other cases, just say "run-time error".
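
Under this proposed terminology the distinction can be made concrete. A
hedged Python sketch (the helper `as_int` is illustrative, not from any
post):

```python
def as_int(x):
    # An explicit typecheck: failure here is a "type error" in the
    # proposed terminology, because a check against a specific type fails.
    if not isinstance(x, int):
        raise TypeError(f"expected int, got {type(x).__name__}")
    return x

xs = [1, 2, 3]
try:
    xs[10]               # out of bounds: just a "run-time error"
except IndexError as e:
    print("run-time error:", e)

try:
    as_int("hello")      # failed explicit typecheck: a "type error"
except TypeError as e:
    print("type error:", e)
```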

> If I then assign the referent
> of the proxy to some object which does understand #someMessage, then it is not
> a type error to send #someMessage to the proxy.  So the type has changed, but
> nothing in the tag system of the language implementation has changed.

In the terminology I'm suggesting, the object has no type in this language
(assuming we're talking about a Smalltalk-like language without any type system
extensions). So there is no type error, and no inconsistency.

Objects in this language do have protocols, so this situation can be described
as a change to the object's protocol, which changes whether a given message
causes a protocol error.

--
David Hopwood <david.nospam.hopwood@blueyonder.co.uk>


Dr.Ruud wrote:
> Marshall schreef:
>
> > "dynamic types." I don't have a firm definition for
> > that term, but my working model is runtime type tags. In which
> > case, I would say that among statically typed languages,
> > Java does have dynamic types, but C does not. C++ is
> > somewhere in the middle.
>
> C has union.

But it does not have tagged unions, so my point is unaffected.
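
For what it's worth, the difference can be shown with a small sketch of a
tagged union, where the tag travels with the value and can be inspected at
run time, unlike a C union, where the bits alone don't say how to interpret
them (Python used only for illustration; the functions are hypothetical):

```python
# A tagged union represented as a (tag, value) pair.
def make_int(n):    return ("int", n)
def make_float(x):  return ("float", x)

def describe(tagged):
    # Dispatch on the run-time tag, which a plain C union lacks.
    tag, value = tagged
    if tag == "int":
        return f"integer {value}"
    elif tag == "float":
        return f"float {value}"
    raise TypeError(f"unknown tag: {tag}")

print(describe(make_int(7)))      # integer 7
print(describe(make_float(2.5)))  # float 2.5
```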

Marshall



Chris Smith wrote:
> Marshall <marshall.spight@gmail.com> wrote:
> > I think what this highlights is the fact that our existing terminology
> > is not up to the task of representing all the possible design
> > choices we could make. Some parts of dynamic vs. static
> > are mutually exclusive; some parts are orthogonal.
>
> Really?  I can see that in a strong enough static type system, many
> dynamic typing features would become unobservable and therefore would be
> pragmatically excluded from any probable implementations... but I don't
> see any other kind of mutual exclusion between the two.

Well, it strikes me that some of what the dynamic camp likes
is the actual *absence* of declared types, or the necessity
of having them. At the very least, requiring types vs. not requiring
types is mutually exclusive.

But again, my dynamic kung fu is very weak, so I may not know
what I'm talking about when I represent the dynamic side.

Marshall



Rob Thorpe wrote:
> Vesa Karvonen wrote:
>
>>In comp.lang.functional Anton van Straaten <anton@appsolutions.com> wrote:
>>
>>>Let me add another complex subtlety, then: the above description misses
>>>an important point, which is that *automated* type checking is not the
>>>whole story.  I.e. that compile time/runtime distinction is a kind of
>>>red herring.
>>
>>I agree.  I think that instead of "statically typed" we should say
>>"typed" and instead of "(dynamically|latently) typed" we should say
>>"untyped".
[...]
>>>It's certainly close enough to say that the *language* is untyped.
>>
>>Indeed.  Either a language has a type system and is typed or has no
>>type system and is untyped.  I see very little room for confusion
>>here.  In my experience, the people who confuse these things are
>>people from the dynamic/latent camp who wish to see types everywhere
>>because they confuse typing with safety or having well-defined
>>semantics.
>
> No.  It's because the things that we call latent types we use for the
> same purpose that programmers of statically typed languages use static
> types for.
>
> Statically typed programmers ensure that the value of some expression
> is of some type by having the compiler check it.  Programmers of
> latently typed languages check, if they think it's important, by asking
> what the type of the result is.
>
> The objection here is that advocates of statically typed languages seem
> to be claiming the "type" as their own word, and asking that others use
> their definitions of typing, which are really specific to their
> subjects of interest.

As far as I can tell, the people who advocate using "typed" and "untyped"
in this way are people who just want to be able to discuss all languages in
a unified terminological framework, and many of them are specifically not
advocates of statically typed languages.

--
David Hopwood <david.nospam.hopwood@blueyonder.co.uk>


Chris F Clark schrieb:
> In that sense, a static type system is eliminating tags, because the
> information is pre-computed and not explicitly stored as a part of the
> computation.  Now, you may not view the tag as being there, but in my
> mind if there exists a way of performing the computation that requires
> tags, the tag was there and that tag has been eliminated.

Joachim Durchholz replied:
> On a semantic level, the tag is always there - it's the type (and
> definitely part of an axiomatic definition of the language).
> Tag elimination is "just" an optimization.

I agree the tag is always there in the abstract.

However, for the work I do the optimization of the tag at runtime is
important, and we specifically change things into types when we know
the system can do that optimization, because then we are getting the
system to do the work we would have to do and validating that the job
is done correctly.  So, I care that the tag is eliminated in practice
(and remains in theory--I have to have both).

In the end, I'm trying to fit things which are "too big" and "too
slow" into much less space and time, using types to help me reliably
make my program smaller and faster is just one trick.  It's a really
great and non-obvious one though, and one I'm glad I learned.  Any
algebra I can learn that helps me solve my problems better is
appreciated.

However, I also know that my way of thinking about it is fringe.  Most
people don't think that the purpose of types is to help one write
reliably tighter code.

Still, knowing about dynamic typing (tagging) and static typing helped
me understand this trick.  Thus, conflating the two meanings may at
some level be confusing.  However, for me, they aided understanding
something that I needed to learn.

-Chris


Marshall wrote:
> Chris Smith wrote:
>>Marshall <marshall.spight@gmail.com> wrote:
>>
>>>I think what this highlights is the fact that our existing terminology
>>>is not up to the task of representing all the possible design
>>>choices we could make. Some parts of dynamic vs. static
>>>are mutually exclusive; some parts are orthogonal.
>>
>>Really?  I can see that in a strong enough static type system, many
>>dynamic typing features would become unobservable and therefore would be
>>pragmatically excluded from any probable implementations... but I don't
>>see any other kind of mutual exclusion between the two.
>
> Well, it strikes me that some of what the dynamic camp likes
> is the actual *absence* of declared types, or the necessity
> of having them.

So why aren't they happy with something like, say, Alice ML, which is
statically typed, but has a "dynamic" type and type inference? I mean
this as a serious question.

> At the very least, requiring types vs. not requiring
> types is mutually exclusive.

Right, but it's pretty well established that languages that don't
require type *declarations* can still be statically typed.

--
David Hopwood <david.nospam.hopwood@blueyonder.co.uk>


Pascal Costanza wrote:
> There is, of course, room for research on performing static type checks
> in a running system, for example immediately after or before a software
> update is applied, or maybe even on separate type checking on software
> increments such that guarantees for their composition can be derived.
> However, I am not aware of a lot of work in that area, maybe because the
> static typing community is too focused on compile-time issues.

Not everyone is. For instance, Don Stewart has been enormously successful in
deploying such a system for Haskell (very much a statically typed language)
in a practically usable way. It is called hs-plugins (see
http://www.cse.unsw.edu.au/~dons/hs-plugins/), a framework for run-time
compilation and loading of Haskell code, giving different levels of security.
Far from being a purely academic exercise, there are interesting
applications, including yi, an extensible editor, and lambdabot, an IRC
bot, both available from the above site.

Cheers,
Ben


Marshall <marshall.spight@gmail.com> wrote:
> Well, it strikes me that some of what the dynamic camp likes
> is the actual *absence* of declared types, or the necessity
> of having them. At the very least, requiring types vs. not requiring
> types is mutually exclusive.

So you're saying, then, that while static typing and dynamic typing are
not themselves mutually exclusive, there are people whose concerns run
as much in the "not statically typed" direction as in the "dynamically
typed" direction?  I agree that this is undoubtedly true.  That (not
statically typed) seems to be what gets all the attention, as a matter
of fact.  Most programmers in modern languages assume, though, that
there will be some kind of safeguard against writing bad code with
unpredictable consequences, so in practice "not statically typed"
correlates strongly with "dynamically typed".

Nevertheless, the existence of languages that are clearly "both"
suggests that they should be considered separately to at least some
extent.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation


David Hopwood <david.nospam.hopwood@blueyonder.co.uk> wrote:
> Typical programming languages have many kinds of semantic error that can occur
> at run-time: null references, array index out of bounds, assertion failures,
> failed casts, "message not understood", ArrayStoreExceptions in Java,
> arithmetic overflow, divide by zero, etc.
>
> Conventionally, some of these errors are called "type errors" and some are
> not. But there seems to be little rhyme or reason to this categorization, as
> far as I can see. If in a particular language, both array index bounds errors
> and "message not understood" can occur at run-time, then there's no objective
> reason to call one a type error and the other not. Both *could* potentially
> be caught by a type-based analysis in some cases, and both *are not* caught
> by such an analysis in that language.

Incidentally, yes!  Filtering out the terminology stuff [as hard as this
may be to believe on USENET where half the world seems to be trolls, I
really was not so aware when I originally posted of how some communities
use terminology and where the motivations come from], this was my
original point.  In the static sense, there is no such thing as a type
error; only an error that's caught by a type system.  I don't know if
the same can be said of dynamic types.  Some people here seem to be
saying that there is a universal concept of "type error" in dynamic
typing, but I've still yet to see a good precise definition (nor a good
precise definition of dynamic typing at all).

In either case it doesn't make sense, then, to compare how static type
systems handle type errors versus how dynamic systems handle type
errors.  That's akin to comparing how many courses are offered at a
five star restaurant versus how many courses are offered by the local
university.  (Yes, that's an exaggeration, of course.  The word "type"
in the static/dynamic typing world at least has something closer to a
common root.)

> A more consistent terminology would reserve "type error" for errors that
> occur when a typechecking/inference algorithm fails, or when an explicit
> type coercion or typecheck fails.

I am concerned as to whether that actually would turn out to have any
meaning.

If we are considering array length bounds checking by type systems (and
just to confuse ourselves, by both static and dynamic type systems),
then is the error condition that gets raised by the dynamic system a
type error?  Certainly, if the system keeps track of the fact that this
is an array of length 5, and uses that information to complain when
someone tries to treat the array as a different type (such as an array
of length >= 7, for example), certainly that's a type error, right?
Does the reference to the seventh index constitute an "explicit" type
coercion?  I don't know.  It seems pretty explicit to me, but I suspect
some may not agree.

The same thing would certainly be a type error in a static system, if
indeed the static system solved the array bounds problem at all.
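
To make the question concrete: a run-time check that a list conforms to
the "type" of arrays of length >= n reads very much like an explicit
coercion. A speculative Python sketch (the helper name is mine):

```python
def as_length_at_least(xs, n):
    # An explicit run-time "coercion" of xs to the type
    # "sequence of length >= n"; failure here is plausibly a type error.
    if len(xs) < n:
        raise TypeError(f"expected length >= {n}, got {len(xs)}")
    return xs

arr = [0, 1, 2, 3, 4]
try:
    as_length_at_least(arr, 7)[6]     # fails the "length >= 7" check
except TypeError as e:
    print("type error:", e)

print(as_length_at_least(arr, 3)[2])  # passes the check, prints 2
```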

While this effort to salvage the term "type error" in dynamic languages
is interesting, I fear it will fail.  Either we'll all have to admit
that "type" in the dynamic sense is a psychological concept with no
precise technical definition (as was at least hinted by Anton's post
earlier, whether intentionally or not) or someone is going to have to
propose a technical meaning that makes sense, independently of what is
meant by "type" in a static system.

> In the terminology I'm suggesting, the object has no type in this language
> (assuming we're talking about a Smalltalk-like language without any type system
> extensions).

I suspect you'll see the Smalltalk version of the objections raised in
response to my post earlier.  In other words, whatever terminology you
think is consistent, you'll probably have a tough time convincing
Smalltalkers to stop saying "type" if they did before.  If you exclude
"message not understood" as a type error, then I think you're excluding
type errors from Smalltalk entirely, which contradicts the psychological
understanding again.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation


Rob Thorpe <robert.thorpe@antenova.com> wrote:
+---------------
| > So, will y'all just switch from using "dynamically typed" to "latently
| > typed", and stop talking about any real programs in real programming
| > languages as being "untyped" or "type-free", unless you really are
| > talking about situations in which human reasoning doesn't come into play?
|
| I agree with most of what you say except regarding "untyped".
|
| In machine language or most assembly the type of a variable is
| something held only in the mind of the programmer writing it, and
| nowhere else.  In latently typed languages though the programmer can
| ask what the type of a particular value is.  There is a vast
| difference to writing code in the latter kind of language to writing
| code in assembly.
|
| I would suggest that at least assembly should be referred to as
| "untyped".
+---------------

Another language which has *neither* latent ("dynamic") nor
manifest ("static") types is (was?) BLISS[1], in which, like
assembler, variables are "just" addresses[2], and values are
"just" a machine word of bits.

However, while in BLISS neither variable nor values are typed,
operators *are* "typed"; that is, each operator specifies how
it will treat its input machine word(s) and how the machine word(s)
of bits it produces should be interpreted. So "+" is (mod 2^wordsize)
with rounding (as opposed to "FADD", which truncates), and so on.
So this (legal but non-sensical!) BLISS:

x := .y FMPR (.x - 13);

would, in C, have to be written roughly like this:

((void*)x) = (void*)((float)(*(void*)y) * (float)((int)(*(void*)x) - 13));

On the PDP-10, at least, both of them would generate this assembler code:

move  t1, x
subi  t1, 13
fmpr  t1, y
movem t1, x

So is BLISS "typed" or not?  And if so, what is that kind of typing called?
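
One way to mimic the BLISS situation, where a value is just a machine word
of bits and only the operator assigns an interpretation, is with Python's
struct module (my sketch, not Rob Warnock's):

```python
import struct

# A "machine word": 32 raw bits, here produced from the float 2.5.
word = struct.pack("<f", 2.5)

# The same bits, read back by a "float-typed" operator vs an
# "integer-typed" one -- the word itself carries no type.
as_float = struct.unpack("<f", word)[0]
as_int = struct.unpack("<i", word)[0]

print(as_float)  # 2.5
print(as_int)    # 1075838976 (0x40200000, the bits of 2.5 read as an int)
```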

-Rob

[1] "Basic Language for the Implementation of Systems Software",
see <http://en.wikipedia.org/wiki/BLISS>. Created at CMU,
added-to by DEC, used by CMU, DEC, and a few others in
the 70's-80's.

[2] Well, approximately. A BLISS variable is, conceptually at least,
really a "byte-pointer" -- a triple of a word address, a byte-size,
and a byte-position-within-word -- even on target architectures
other than the DEC PDP-10 [which had hardware byte-pointer types].
The compiler (even on the PDP-10) optimizes away LDB/DPB accesses where it can.

-----
Rob Warnock			<rpw3@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607



Marshall <marshall.spight@gmail.com> wrote:
+---------------
| Anton van Straaten wrote:
| > 3.  A really natural term to refer to types which programmers reason
| > about, even if they are not statically checked, is "latent types".  It
| > captures the situation very well intuitively, and it has plenty of
| > precedent -- e.g. it's mentioned in the Scheme reports, R5RS and its
| > predecessors, going back at least a decade or so (haven't dug to check
| > when it first appeared).
|
| Can you be more explicit about what "latent types" means?
| I'm sorry to say it's not at all natural or intuitive to me.
| Are you referring to the types in the programmers head,
| or the ones at runtime, or what?
+---------------

Here's what the Scheme Standard has to say:

http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-4.html
1.1  Semantics
...
Scheme has latent as opposed to manifest types. Types are assoc-
iated with values (also called objects) rather than with variables.
(Some authors refer to languages with latent types as weakly typed
or dynamically typed languages.) Other languages with latent types
are APL, Snobol, and other dialects of Lisp. Languages with manifest
types (sometimes referred to as strongly typed or statically typed
languages) include Algol 60, Pascal, and C.

To me, the word "latent" means that when handed a value of unknown type
at runtime, I can look at it or perform TYPE-OF on it or TYPECASE or
something and thereby discover its actual type at the moment[1], whereas
"manifest" means that types[2] are lexically apparent in the code.

-Rob

[1] I added "at the moment", since I remembered that in Common Lisp
one may change the type of a value at runtime, specifically, a
CLOS instance may change type "out from under you" if someone
performs a CHANGE-CLASS on it or redefines its CLASS definition.
[Though maybe the latter is more a change of the *type* itself
rather than a change of the *object's* type per se.]

[2] Usually of variables or locations, but sometimes of expressions.

-----
Rob Warnock			<rpw3@rpw3.org>
627 26th Avenue			<URL:http://rpw3.org/>
San Mateo, CA 94403		(650)572-2607



Rob Warnock <rpw3@rpw3.org> wrote:
> Another language which has *neither* latent ("dynamic") nor
> manifest ("static") types is (was?) BLISS[1], in which, like
> assembler, variables are "just" addresses[2], and values are
> "just" a machine word of bits.

I'm unsure that it's correct to describe any language as having no
latent typing, in the sense that's being used in this thread.  It might
be more correct to say "no specified latent typing" and/or "no latent
typing beyond what is provided by the execution environment, including
the CPU, virtual memory system, etc." as appropriate.  I am aware of no
hardware environment that really accepts all possible values for all
possible operations without the potential of somehow signaling a type
violation.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation


David Hopwood wrote:
> Marshall wrote:
>> Chris Smith wrote:
>>> Marshall <marshall.spight@gmail.com> wrote:
>>>
>>>> I think what this highlights is the fact that our existing terminology
>>>> is not up to the task of representing all the possible design
>>>> choices we could make. Some parts of dynamic vs. static
>>>> are mutually exclusive; some parts are orthogonal.
>>> Really?  I can see that in a strong enough static type system, many
>>> dynamic typing features would become unobservable and therefore would be
>>> pragmatically excluded from any probable implementations... but I don't
>>> see any other kind of mutual exclusion between the two.
>> Well, it strikes me that some of what the dynamic camp likes
>> is the actual *absence* of declared types, or the necessity
>> of having them.
>
> So why aren't they happy with something like, say, Alice ML, which is
> statically typed, but has a "dynamic" type and type inference? I mean
> this as a serious question.

Note: I haven't yet worked with such a language, but here is my take anyway.

A statically typed language requires you to think about two models of
your program at the same time: the static type model and the dynamic
behavioral model. A static type system ensures that these two
_different_ (that's important!) perspectives are always in sync. This is
especially valuable in settings where you know your domain well and want
to rely on feedback by your compiler that you haven't made any mistakes
in encoding your knowledge. (A static type system based on type
inferencing doesn't essentially change the requirement to think in two
models at the same time.)

A dynamically typed language is especially well suited when you don't
(yet) have a good idea about your domain and you want to use programming
especially to explore that domain. Some static typing advocates claim
that static typing is still suitable for exploring domains because of
incomplete knowledge, but the disadvantages are a) that you still have
to think about two models at the same time when you don't even have
_one_ model ready and b) that you cannot just run your incomplete
program to see what it does as part of your exploration.

A statically typed language with a dynamic type treats dynamic typing as
the exception, not as the general approach, so this doesn't help a lot
in the second setting (or so it seems to me).

A language like Common Lisp treats static typing as the exception, so
you can write a program without static types / type checks, but later on
add type declarations as soon as you get a better understanding of your
domain. Common Lisp implementations like CMUCL or SBCL even include
static type inference to aid you here, which gives you warnings but
still allows you to run a program even in the presence of static type
errors. I guess the feedback you get from such a system is probably not
"strong" enough to be appreciated by static typing advocates in the
first setting (where you have a good understanding of your domain).
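
The Common Lisp workflow described here, run first and declare types
later, has a rough analogue in languages with optional annotations. A
hedged Python sketch (an external checker such as mypy is mentioned only
as an example; the annotations do not stop the program from running):

```python
# Start exploratory and untyped...
def area(w, h):
    return w * h

# ...later, add type declarations as understanding of the domain improves.
# As with CMUCL/SBCL warnings, a separate checker could flag a mismatch,
# but the program still runs regardless of the annotations.
def area_typed(w: float, h: float) -> float:
    return w * h

print(area(3, 4))           # 12
print(area_typed("ab", 2))  # "abab": runs despite violating the annotations
```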

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/


Benjamin Franksen wrote:
> Pascal Costanza wrote:
>> There is, of course, room for research on performing static type checks
>> in a running system, for example immediately after or before a software
>> update is applied, or maybe even on separate type checking on software
>> increments such that guarantees for their composition can be derived.
>> However, I am not aware of a lot of work in that area, maybe because the
>> static typing community is too focused on compile-time issues.
>
> Not everyone is. For instance, Don Stewart has been enormously successful in
> deploying such a system for Haskell (very much a statically typed language)
> in a practically usable way. It is called hs-plugins (see
> http://www.cse.unsw.edu.au/~dons/hs-plugins/), a framework for run-time
> compilation and loading of Haskell code, giving different levels of security.
> Far from being a purely academic exercise, there are interesting
> applications, including yi, an extensible editor, and lambdabot, an IRC
> bot, both available from the above site.

Thanks for the link, I will check this out.

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/


Chris Smith wrote:

> While this effort to salvage the term "type error" in dynamic languages
> is interesting, I fear it will fail.  Either we'll all have to admit
> that "type" in the dynamic sense is a psychological concept with no
> precise technical definition (as was at least hinted by Anton's post
> earlier, whether intentionally or not) or someone is going to have to
> propose a technical meaning that makes sense, independently of what is
> meant by "type" in a static system.

What about this: you get a type error when the program attempts to
invoke an operation on values that are not appropriate for this operation.

Examples: adding numbers to strings; determining the string-length of a
number; applying a function on the wrong number of parameters; applying
a non-function; accessing an array with out-of-bound indexes; etc.
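
All of these can be observed directly in a latently typed language; here
is a Python rendering of the list (mine, not Pascal's). Note that Python
itself classes the out-of-bounds case as IndexError rather than TypeError,
which is exactly the terminological line discussed above:

```python
examples = [
    lambda: 1 + "two",              # adding numbers to strings
    lambda: len(42),                # "string-length" of a number
    lambda: (lambda x: x)(1, 2),    # wrong number of parameters
    lambda: 5(3),                   # applying a non-function
    lambda: [1, 2, 3][10],          # out-of-bounds index
]

for thunk in examples:
    try:
        thunk()
    except TypeError as e:
        print("TypeError:", e)
    except IndexError as e:
        print("IndexError:", e)
```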

>> In the terminology I'm suggesting, the object has no type in this language
>> (assuming we're talking about a Smalltalk-like language without any type system
>> extensions).
>
> I suspect you'll see the Smalltalk version of the objections raised in
> response to my post earlier.  In other words, whatever terminology you
> think is consistent, you'll probably have a tough time convincing
> Smalltalkers to stop saying "type" if they did before.  If you exclude
> "message not understood" as a type error, then I think you're excluding
> type errors from Smalltalk entirely, which contradicts the psychological
> understanding again.

Sending a message to an object that does not understand that message is
a type error. The "message not understood" machinery can be seen either
as a way to escape from this type error in case it occurs and allow the
program to still do something useful, or to actually remove (some)
potential type errors. Which view you take probably depends on what your
concrete implementation of "message not understood" does. For example,
if it simply forwards the message to another object that is known to be
able to respond to it, then you remove a potential type error; however,
if it pops up a dialog box to ask the user how to continue from here, it
is still a type error, but just gives you a way to deal with it at runtime.
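
The forwarding reading of "message not understood" can be sketched with
Python's `__getattr__`, a rough analogue of Smalltalk's doesNotUnderstand:
(illustrative code; the class name is mine):

```python
class Forwarder:
    """Remove a potential type error by forwarding unknown messages
    to a delegate that is known to respond to them."""
    def __init__(self, delegate):
        self._delegate = delegate

    def __getattr__(self, name):
        # Invoked only when the message is not otherwise understood.
        return getattr(self._delegate, name)

f = Forwarder("hello")
print(f.upper())  # HELLO: the "not understood" message was forwarded
```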

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/


> A statically typed language requires you to think about two models of
> your program at the same time: the static type model and the dynamic
> behavioral model. A static type system ensures that these two
> _different_ (that's important!) perspectives are always in sync.

I have trouble understanding your use of the wording "model of a
program". If it is a system that behaves according to the rules of your
program, then the model is just the program itself. If it is a formal
system that makes statements about properties of your program, then the
static type system is a simplified model that is suitable for automatic
analysis, and your runtime model is in most cases nonexistent. Can you
give a definition of a "model of a program"? Can you explain why Lisp
doesn't have two (SBCL does do a lot of typechecking and gives type
errors)?

> This is
> especially valuable in settings where you know your domain well and want
> to rely on feedback by your compiler that you haven't made any mistakes
> in encoding your knowledge. (A static type system based on type
> inferencing doesn't essentially change the requirement to think in two
> models at the same time.)

It is also valuable when you don't know your domain very well and you
want to rely on feedback by your compiler that you haven't made any
mistakes in encoding your limited knowledge.

> A dynamically typed language is especially well suited when you don't
> (yet) have a good idea about your domain and you want to use programming
> especially to explore that domain. [...]

In the sense that you can start writing code without the compiler
pointing out all but the most glaring holes in your program, I agree.
Most of your arguments aren't very convincing, and the truth is that I
have seen Lisp programmers using the debugger to find out that you can't
add a number and a hashtable. The static view was not there, and the
dynamic view must have been too complicated.

Immanuel



Marshall wrote:
>Andreas Rossberg wrote:
>>Chris Uppal wrote:
>>
>>>I have never been very happy with relating type to sets of values (objects,
>>>whatever).
>>
>>Indeed, this view is much too narrow. In particular, it cannot explain
>>abstract types, which is *the* central aspect of decent type systems.
>
> What prohibits us from describing an abstract type as a set of values?

If you identify an abstract type with the set of underlying values then
it is equivalent to the underlying representation type, i.e. there is no
abstraction.

>>There were papers observing this as early as 1970.
>
> References?

This is 1973, actually, but most relevant:

James Morris
Types Are Not Sets.
Proc. 1st ACM Symposium on Principles of Programming Languages, 1973

>>(There are also theoretic problems with the types-as-sets view, because
>>sufficiently rich type systems can no longer be given direct models in
>>standard set theory. For example, first-class polymorphism would run
>>afoul of the axiom of foundation.)
>
> There is no reason why we must limit ourselves to "standard set theory"
> any more than we have to limit ourselves to standard type theory.
> Both are progressing, and set theory seems to me to be a good
> choice for a foundation. What else would you use?

I'm no expert here, but Category Theory is a preferred choice in many
areas of PLT.

- Andreas


Rob Warnock wrote:
>
> Here's what the Scheme Standard has to say:
>
>     http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-4.html
>     1.1  Semantics
>     ...
>     Scheme has latent as opposed to manifest types. Types are assoc-
>     iated with values (also called objects) rather than with variables.
>     (Some authors refer to languages with latent types as weakly typed
>     or dynamically typed languages.) Other languages with latent types
>     are APL, Snobol, and other dialects of Lisp. Languages with manifest
>     types (sometimes referred to as strongly typed or statically typed
>     languages) include Algol 60, Pascal, and C.

Maybe this is the original source of the myth that static typing is all
about assigning types to variables?

With all my respect to the Scheme people, I'm afraid this paragraph is
pretty off, no matter where you stand. Besides the issue just mentioned
it equates "manifest" with static types. I understand "manifest" to mean
"explicit in code", which of course is nonsense - static typing does not
require explicit types. Also, I never heard "weakly typed" used in the
way they suggest - in my book, C is a weakly typed language (= typed,
but grossly unsound).

> To me, the word "latent" means that when handed a value of unknown type
> at runtime, I can look at it or perform TYPE-OF on it or TYPECASE or
> something and thereby discover its actual type at the moment[1], whereas
> "manifest" means that types[2] are lexically apparent in the code.

Mh, I'd say typecase is actually a form of reflection, which is yet a
different issue. Moreover, there are statically typed languages with
typecase (e.g. Modula-3, and several more modern ones) or related
constructs (consider instanceof).
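A typecase-style dispatch of the kind Andreas mentions can be sketched with Python's runtime reflection (the function and its cases are illustrative):

```python
# Sketch of a "typecase"-style dispatch via runtime reflection,
# analogous to Modula-3's TYPECASE or Java's instanceof.

def describe(value):
    if isinstance(value, bool):      # test bool first: bool is a subtype of int
        return "boolean"
    elif isinstance(value, int):
        return "integer"
    elif isinstance(value, str):
        return "string"
    else:
        return "something else"
```

The point stands either way: such checks inspect values at runtime, and statically typed languages with `TYPECASE` or `instanceof` offer the same kind of inspection, which is why it is better regarded as reflection than as a defining feature of dynamic typing.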

- Andreas


ilitzroth@gmail.com wrote:
>> A statically typed language requires you to think about two models of
>> your program at the same time: the static type model and the dynamic
>> behavioral model. A static type system ensures that these two
>> _different_ (that's important!) perspectives are always in sync.
>
> I have trouble understanding your use of the wording "model of a
> program". If it is a system that behaves according to the rules of
> your program, then
> If it is a formal system that makes statements about properties of
> your program, then the static type system is a simplified model that
> is suitable for automatic analysis, and your runtime model is in most
> cases nonexistent.
> Can you give a definition of a "model of a program"? Can you explain
> why
> Lisp doesn't have two (SBCL does do a lot of typechecking and gives
> type errors)?

I wasn't talking about models that the language implementation may or
may not have, but the models that I as a programmer must have in order
to convince the compiler to let my program run.

Consider a simple expression like 'a + b': In a dynamically typed
language, all I need to have in mind is that the program will attempt to
add two numbers. In a statically typed language, I additionally need to
know that there must be a guarantee that a and b will always hold numbers.

In a trivial example like this, this doesn't hurt a lot, but can be
problematic as soon as the program size grows.
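The two obligations can be made concrete in Python; the annotated variant is a sketch of the extra static commitment (a checker such as mypy would enforce the annotations, which are inert at runtime):

```python
# Dynamic view: the only commitment is that '+' will be attempted at
# runtime, on whatever values happen to arrive.
def total(a, b):
    return a + b

r1 = total(1, 2)        # numbers work
r2 = total("ab", "cd")  # so does anything else supporting '+'

# Static view: in addition, I must guarantee *before running* that both
# arguments are always numbers, at every call site in the program. The
# annotations below express that extra obligation.
def total_typed(a: float, b: float) -> float:
    return a + b
```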

>> This is
>> especially valuable in settings where you know your domain well and want
>> to rely on feedback by your compiler that you haven't made any mistakes
>> in encoding your knowledge. (A static type system based on type
>> inferencing doesn't essentially change the requirement to think in two
>> models at the same time.)
>
> It is also valuable when you don't know your domain very well and you
> want to rely on feedback by your compiler that you haven't made any
> mistakes  in encoding your limited knowledge

I have more or less used exactly the same words in the paragraph that
followed the one you cited from my previous posting, and I have already

>> A dynamically typed language is especially well suited when you don't
>> (yet) have a good idea about your domain and you want to use programming
>> especially to explore that domain.
>
> In the sense that you can start writing code without the compiler
> pointing out all but the most glaring holes in your program, I agree.

I don't know what language environments you are used to, but the Common
Lisp compilers I use always point out the most glaring holes in my
programs. But maybe I just have trouble understanding your use of the
wording "most glaring holes". Can you give a definition of "most glaring
holes"? ;)

> Most of your arguments aren't very convincing, and the truth is that
> I have seen Lisp programmers using the debugger to find out that you
> can't add a number and a hashtable. The static view was not there and
> the dynamic view must have been too complicated so

We have all seen less-than-average programmers who would fail in all
kinds of languages. What they do is typically not very illuminating.

My goal is not to convince anyone, my goal is to illustrate for those
who are interested in getting a possibly different perspective.

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/


"Pascal Costanza" <pc@p-cos.net> wrote in message
news:4fv081F1jh4ifU1@individual.net...
> A statically typed language requires you to think about two models of
> your program at the same time: the static type model and the dynamic
> behavioral model. A static type system ensures that these two
> _different_ (that's important!) perspectives are always in sync. This is
> especially valuable in settings where you know your domain well and want
> to rely on feedback by your compiler that you haven't made any mistakes
> in encoding your knowledge. (A static type system based on type
> inferencing doesn't essentially change the requirement to think in two
> models at the same time.)

I think this may be true in your line of research, where you are looking
at very abstract ways of representing algorithms.

I used to use common lisp for exploratory programming, but having read an
article from another person who used to use common lisp a lot, and later

In the same way that common lisp gives you a good notation for experimenting
with algorithms, the Haskell type system/notation gives you a good notation
for experimenting with constraints/structure and meaning.

I think also missing in the discussion of types so far, is the use of types
to give explicit meanings to values (and variables / parameters) (I would
argue that in a lot of cases in common lisp these are also there, just
implicitly). Once something is explicit I find it easier to manipulate and
reason with.

In the type of exploratory programming I have done, generally the parameters
of functions represent some aspect of a model. e.g. number of apples.

What you can do using the Haskell type system is to explore the model at
the meaning/structural level, without having to look at the behavioral
model at all.

Also when I want to look at the behavioral model, I simply use types that
represent the meaning of each of the input parameters. The static type
model then doesn't really get in the way.

(I think common lisp has the advantage of being easy to learn. In Haskell it
took about 3 months before the static type system 'got out the way' and I
didn't have to think about it, when I didn't want to).

Rene.


Reply Rene_de_Visser1 (16) 6/22/2006 11:09:18 AM

Pascal Costanza wrote:
>
> Consider a simple expression like 'a + b': In a dynamically typed
> language, all I need to have in mind is that the program will attempt to
> add two numbers. In a statically typed language, I additionally need to
> know that there must be a guarantee that a and b will always hold numbers.

I'm confused. Are you telling that you just write a+b in your programs
without trying to ensure that a and b are in fact numbers??

- Andreas


Andreas Rossberg wrote:
> Pascal Costanza wrote:
>>
>> Consider a simple expression like 'a + b': In a dynamically typed
>> language, all I need to have in mind is that the program will attempt
>> to add two numbers. In a statically typed language, I additionally
>> need to know that there must be a guarantee that a and b will always hold
>> numbers.
>
> I'm confused. Are you telling that you just write a+b in your programs
> without trying to ensure that a and b are in fact numbers??

Basically, yes.

Note that this is a simplistic example. Consider, instead, sending a
message to an object, or calling a generic function, without ensuring
that there will be applicable methods for all possible cases. When I get
a "message not understood" exception, I can then decide whether that
kind of object shouldn't be a receiver in the first place, or else
whether I should define an appropriate method. I don't want to be forced
to decide this upfront, because either I don't want to be bothered, or
maybe I simply can't because I don't understand the domain well enough
yet, or maybe I want to keep a hook to be able to update the program
appropriately while it is running.
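Pascal's scenario can be sketched in Python (the `Document` class and `word_count` message are hypothetical): the "message not understood" error arrives at runtime, and only then do we decide to define the missing method, even in a running program.

```python
# Hypothetical sketch: sending a "message" an object does not yet
# understand, then deciding afterwards -- in the running program -- to
# define the missing method rather than committing upfront.

class Document:
    def __init__(self, text):
        self.text = text

doc = Document("hello world")

try:
    doc.word_count()                 # "message not understood"
    understood_before = True
except AttributeError:
    understood_before = False

# Decide, after seeing the error, that Document should handle this message:
Document.word_count = lambda self: len(self.text.split())

understood_after = doc.word_count()  # the same object now answers: 2
```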

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/


Andreas Rossberg <rossberg@ps.uni-sb.de> writes:

> Pascal Costanza wrote:
>> Consider a simple expression like 'a + b': In a dynamically typed
>> language, all I need to have in mind is that the program will
>> attempt to add two numbers. In a statically typed language, I
>> additionally need to know that there must be a guarantee that a and b
>> will always hold numbers.
>
> I'm confused. Are you telling that you just write a+b in your programs
> without trying to ensure that a and b are in fact numbers??

Of course.

(defun + (&rest args) `(+ ,@args))
(defun * (&rest args) `(* ,@args))

(let ((var 'x) (init 'b) (slop 'a))
  (+ init (* slop var)))
--> (+ B (* A X))

--
__Pascal Bourguignon__                     http://www.informatimago.com/

Nobody can fix the economy.  Nobody can be trusted with their finger
on the button.  Nobody's perfect.  VOTE FOR NOBODY.


Pascal Costanza <pc@p-cos.net> writes:

> Andreas Rossberg wrote:
>> Pascal Costanza wrote:
>>>
>>> Consider a simple expression like 'a + b': In a dynamically typed
>>> language, all I need to have in mind is that the program will
>>> attempt to add two numbers. In a statically typed language, I
>>> additionally need to know that there must be a guarantee that a and b
>>> will always hold numbers.
>> I'm confused. Are you telling that you just write a+b in your
>> programs without trying to ensure that a and b are in fact numbers??
>
> Basically, yes.
>
> Note that this is a simplistic example. Consider, instead, sending a
> message to an object, or calling a generic function, without ensuring
> that there will be applicable methods for all possible cases. When I
> get a "message not understood" exception, I can then decide whether
> that kind of object shouldn't be a receiver in the first place, or
> else whether I should define an appropriate method. I don't want to be
> forced to decide this upfront, because either I don't want to be
> bothered, or maybe I simply can't because I don't understand the
> domain well enough yet, or maybe I want to keep a hook to be able to
> update the program appropriately while it is running.

Moreover, a good proportion of the program and a good number of
algorithms don't even need to know the type of the objects they
manipulate.

For example, sort doesn't need to know what type the objects it sorts
are.  It only needs to be given a function that is able to compare the
objects.
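A minimal Python sketch of that point (names are illustrative): the sort needs only a comparison function, never the element type itself.

```python
from functools import cmp_to_key

# Sketch: a sort that never inspects the element type; it only needs a
# comparison function (negative/zero/positive, like C's qsort comparator).

def generic_sort(items, compare):
    return sorted(items, key=cmp_to_key(compare))

# The same sort handles strings-by-length or numbers, unchanged:
by_length = generic_sort(["pear", "fig", "banana"],
                         lambda a, b: len(a) - len(b))
descending = generic_sort([3, 1, 2], lambda a, b: b - a)
```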

Only a few "primitive" functions need specific types.

So basically, you've got a big black box of application code in the
middle that doesn't care what type of values it gets, and you've got a
few input values of a specific type, a few processing functions
needing a specific type and returning a specific type, and a few
output values that are expected to be of a specific type.  At any time,
you may change the type of the input values, and ensure that the
needed processing functions will be able to handle this new input
type, and the output gets mapped to the expected type.

Why should adding a few functions or methods, and providing input
values of a new type be rejected from a statically checked  point of
view by a compiled program that would be mostly bit-for-bit the same
with or without this new type?

Of course, in the process of so modifying the program, we may get some
dynamically detected type errors that we would correct as Pascal
indicated.

--
__Pascal Bourguignon__                     http://www.informatimago.com/

"Specifications are for the weak and timid!"


Pascal Bourguignon <pjb@informatimago.com> writes:

> Moreover, a good proportion of the program and a good number of
> algorithms don't even need to know the type of the objects they
> manipulate.
>
> For example, sort doesn't need to know what type the objects it sorts
> are.  It only needs to be given a function that is able to compare the
> objects.

Of course, some statically typed languages handle this sort of thing
routinely.

> Only a few "primitive" functions need specific types.

Your sort function from above also has a specific type -- a type which
represents the fact that the objects to be sorted must be acceptable
input to the comparison function.
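That constraint can be written down even in Python's gradual typing syntax (a sketch for a checker like mypy; the annotations are inert at runtime): the single type variable `T` ties the element type to the comparison's argument type.

```python
from typing import Callable, List, TypeVar

T = TypeVar("T")

# The polymorphic *type* of a generic sort: it accepts any element type T,
# provided the caller supplies a comparison defined on that same T. Using
# one type variable in both positions captures exactly the constraint that
# the elements "must be acceptable input to the comparison function".
def sort_with(items: List[T], less: Callable[[T, T], bool]) -> List[T]:
    out = list(items)
    # A simple insertion sort, driven entirely by 'less'.
    for i in range(1, len(out)):
        j = i
        while j > 0 and less(out[j], out[j - 1]):
            out[j], out[j - 1] = out[j - 1], out[j]
            j -= 1
    return out
```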

> So basically, you've got a big black box of applicaition code in the
> middle that doesn't care what type of value they get, and you've got a
> few input values of a specific type, a few processing functions
> needing a specific type and returning a specific type, and a few
> output values that are expected to be of a specific type.  At anytime,
> you may change the type of the input values, and ensure that the
> needed processing functions will be able to handle this new input
> type, and the output gets mapped to the expected type.

...or you type-check your "black box" and make sure that no matter how
you will ever change the type of the inputs (in accordance with the
interface type of the box) you get a valid program.



Pascal Costanza <pc@p-cos.net> writes:

> Chris Smith wrote:
>
>> While this effort to salvage the term "type error" in dynamic
>> languages is interesting, I fear it will fail.  Either we'll all
>> have to admit that "type" in the dynamic sense is a psychological
>> concept with no precise technical definition (as was at least hinted
>> by Anton's post earlier, whether intentionally or not) or someone is
>> going to have to propose a technical meaning that makes sense,
>> independently of what is meant by "type" in a static system.
>
> A type error occurs when you attempt to invoke an operation on values
> that are not appropriate for this operation.
>
> Examples: adding numbers to strings; determining the string-length of
> a number; applying a function on the wrong number of parameters;
> applying a non-function; accessing an array with out-of-bound indexes;
> etc.
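Each example in that list can be observed concretely in a dynamically typed language; a small Python sketch (the helper `fails` is my own):

```python
# A helper that reports whether a thunk raises the given exception; each
# of the listed examples is dynamically rejected at runtime.
def fails(thunk, exc):
    try:
        thunk()
        return False
    except exc:
        return True

errors = [
    fails(lambda: 1 + "a", TypeError),               # number plus string
    fails(lambda: len(42), TypeError),               # "string-length" of a number
    fails(lambda: (lambda x: x)(1, 2), TypeError),   # wrong number of arguments
    fails(lambda: 5(3), TypeError),                  # applying a non-function
    fails(lambda: [1, 2, 3][10], IndexError),        # out-of-bounds access
]
```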

Yes, the phrase "runtime type error" is actually a misnomer.  What one
usually means by that is a situation where the operational semantics
is "stuck", i.e., where the program, while not yet arrived at what's
considered a "result", cannot make any progress because the current
configuration does not match any of the rules of the dynamic
semantics.

The reason why we call this a "type error" is that such situations are
precisely the ones we want to statically rule out using sound static
type systems.  So it is a "type error" in the sense that the static
semantics was not strong enough to rule it out.

> Sending a message to an object that does not understand that message
> is a type error. The "message not understood" machinery can be seen
> either as a way to escape from this type error in case it occurs and
> allow the program to still do something useful, or to actually remove
> (some) potential type errors.

I disagree with this.  If the program keeps running in a defined way,
then it is not what I would call a type error.  It definitely is not
an error in the sense I described above.


Pascal Costanza <pc@p-cos.net> wrote:
> A type error occurs when you attempt to invoke an operation on values
> that are not appropriate for this operation.
>
> Examples: adding numbers to strings; determining the string-length of a
> number; applying a function on the wrong number of parameters; applying
> a non-function; accessing an array with out-of-bound indexes; etc.

Hmm.  I'm afraid I'm going to be picky here.  I think you need to
clarify what is meant by "appropriate".  If you mean "the operation will
not complete successfully" as I suspect you do, then we're closer... but
this little snippet of Java (HORRIBLE, DO NOT USE!) confuses the matter
for me:

int i = 0;

try
{
    while (true) process(myArray[i++]);
}
catch (IndexOutOfBoundsException e) { }

That's an array index from out of bounds that not only fails to be a
type error, but also fails to be an error at all!  (Don't get confused
by Java's having a static type system for other purposes... we are
looking at array indexing here, which Java checks dynamically.  I would
have used a dynamically typed language, if I could have written this as
quickly.)
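Chris's snippet translates directly into a dynamically typed language; here is the same (equally horrible) idiom sketched in Python, where the out-of-bounds access is likewise neither a type error nor, in context, an error at all, just control flow:

```python
# A deliberate out-of-bounds access used as loop termination: the dynamic
# bounds check fires, the exception is swallowed, and every element has
# been processed exactly once.
processed = []

def process(item):
    processed.append(item)

my_array = [10, 20, 30]
i = 0
try:
    while True:
        process(my_array[i])
        i += 1
except IndexError:
    pass
```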

I'm also unsure how your definition above would apply to languages that
do normal order evaluation, in which (at least in my limited brain) it's
nearly impossible to break down a program into sequences of operations
on actual values.  I suppose, though, that they do eventually happen
with primitives at the leaves of the derivation tree, so the definition
would still apply.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation


Rob Warnock wrote:
> Marshall <marshall.spight@gmail.com> wrote:
> >
> > Can you be more explicit about what "latent types" means?
> > I'm sorry to say it's not at all natural or intuitive to me.
> > Are you referring to the types in the programmers head,
> > or the ones at runtime, or what?
>
> Here's what the Scheme Standard has to say:
>
>     http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-4.html
>     1.1  Semantics
>     ...
>     Scheme has latent as opposed to manifest types. Types are assoc-
>     iated with values (also called objects) rather than with variables.
>     (Some authors refer to languages with latent types as weakly typed
>     or dynamically typed languages.) Other languages with latent types
>     are APL, Snobol, and other dialects of Lisp. Languages with manifest
>     types (sometimes referred to as strongly typed or statically typed
>     languages) include Algol 60, Pascal, and C.
>
> To me, the word "latent" means that when handed a value of unknown type
> at runtime, I can look at it or perform TYPE-OF on it or TYPECASE or
> something and thereby discover its actual type at the moment[1], whereas
> "manifest" means that types[2] are lexically apparent in the code.

Hmmm. If I read the R5RS text correctly, it is simply doing the
either/or thing that often happens with "static/dynamic" only
using different terms. I don't see any difference between
"latent" and "dynamic." Also, this phrase "types associated with
values instead of variables" that I'm starting to see a lot is
beginning to freak me out: the implication is that other languages
have types associated with variables and not values, which
doesn't describe anything I can think of.
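What the phrase is usually taken to mean can at least be demonstrated: in Python the type travels with the value, while in C it is fixed to the variable (the C side shown only as a comment):

```python
# In Python the type travels with the value; a variable is just a name
# that may be rebound to values of different types.
x = 1
t1 = type(x).__name__    # the *value* 1 carries the type int
x = "one"
t2 = type(x).__name__    # rebinding x changes nothing about either value

# In C, by contrast, the type is attached to the variable:
#     int x = 1;
#     x = "one";    /* rejected at compile time: x may only hold ints */
```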

In your followup paragraph, you've contrasted runtime type
introspection vs. explicit type declarations, which seem
orthogonal to me. (Not that you said they weren't.)

Marshall



Pascal Costanza wrote:
>
> A statically typed language requires you to think about two models of
> your program at the same time: the static type model and the dynamic
> behavioral model. A static type system ensures that these two
> _different_ (that's important!) perspectives are always in sync. This is
> especially valuable in settings where you know your domain well and want
> to rely on feedback by your compiler that you haven't made any mistakes
> in encoding your knowledge. (A static type system based on type
> inferencing doesn't essentially change the requirement to think in two
> models at the same time.)
>
> A dynamically typed language is especially well suited when you don't
> (yet) have a good idea about your domain and you want to use programming
> especially to explore that domain. Some static typing advocates claim
> that static typing is still suitable for exploring domains because of
> the compiler's feedback about the preliminary encoding of your
> incomplete knowledge, but the disadvantages are a) that you still have
> to think about two models at the same time when you don't even have
> _one_ model ready and b) that you cannot just run your incomplete
> program to see what it does as part of your exploration.
>
> A statically typed language with a dynamic type treats dynamic typing as
> the exception, not as the general approach, so this doesn't help a lot
> in the second setting (or so it seems to me).
>
> A language like Common Lisp treats static typing as the exception, so
> you can write a program without static types / type checks, but later on
> add type declarations as soon as you get a better understanding of your
> domain. Common Lisp implementations like CMUCL or SBCL even include
> static type inference to aid you here, which gives you warnings but
> still allows you to run a program even in the presence of static type
> errors. I guess the feedback you get from such a system is probably not
> "strong" enough to be appreciated by static typing advocates in the
> first setting (where you have a good understanding of your domain).

I am sceptical of the idea that when programming in a dynamically
typed language one doesn't have to think about both models as well.
I don't have a good model of the mental process of working
in a dynamically typed language, but how could that be the case?
Do you just run the program over
and over, mechanically correcting the code each time you discover
a type error? In other words, if you're not thinking of the type model,
are you using the runtime behavior of the program as an assistant,
the way I use the static analysis of the program as an assistant?

I don't accept the idea about pairing the appropriateness of each
system according to whether one is doing exploratory programming.
I do exploratory programming all the time, and I use the static type
system as an aide in doing so. Rather I think this is just another
manifestation of the differences in the mental processes between
static typed programmers and dynamic type programmers, which
we are beginning to glimpse but which is still mostly unknown.

Oh, and I also want to say that of all the cross-posted mega threads
on static vs. dynamic typing, this is the best one ever. Most info;
least flames. Yay us!

Marshall



Andreas Rossberg wrote:

[me:]
> > It's worth noting, too, that (in some sense) the type of an object can
> > change over time[*].
>
> No. Since a type expresses invariants, this is precisely what may *not*
> happen. If certain properties of an object may change then the type of
> the object has to reflect that possibility. Otherwise you cannot
> legitimately call it a type.

Well, it seems to me that you are /assuming/ a notion of what kinds of logic
can be called type (theories), and I don't share your assumptions.  No offence
intended.

Actually I would go a little further than that.  Granted that whatever logic
one wants to apply in order to prove <whatever> about a program execution is
abstract -- and so timeless -- that does not (to my mind) imply that it must be
/static/.  However, even if we grant that additional restriction, that doesn't
imply that the analysis itself must not be cognisant of time.  I see no reason,
even in practice, why a static analysis should not be able to see that the set
of acceptable operations (for some definition of acceptable) for some
object/value/variable can be different at different times in the execution.  If
the analysis is rich enough to check that the temporal constraints are [not]
satisfied, then I don't see why you should want to use another word than "type"
to describe the results of its analysis.
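Chris's time-varying sets of acceptable operations are essentially what the typestate literature studies; a hypothetical Python sketch (the class and its states are my own):

```python
# Sketch of a "typestate": the set of acceptable operations on an object
# differs at different times in the execution. Here the temporal
# constraint is checked at runtime; the argument is that a sufficiently
# rich static analysis could verify the same constraint before running.

class Connection:
    def __init__(self):
        self.state = "closed"

    def open(self):
        if self.state != "closed":
            raise RuntimeError("already open")
        self.state = "open"

    def send(self, data):
        if self.state != "open":          # acceptable only after open()
            raise RuntimeError("not open")
        return len(data)

conn = Connection()
try:
    conn.send("hi")                       # wrong "typestate" -- rejected
    early_ok = True
except RuntimeError:
    early_ok = False

conn.open()
sent = conn.send("hi")                    # acceptable now
```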

-- chris



I wrote:

> It would be interesting to see what a language designed specifically to
> support user-defined, pluggable, and perhaps composable, type systems
> would look like.

Since writing that I've come across some thoughts by Gilad Bracha (a name known
to Java and Smalltalk enthusiasts alike) here:

http://blogs.sun.com/roller/page/gbracha?entry=a_few_ideas_on_type

and a long, and occasionally interesting, related thread on LtU:

http://lambda-the-ultimate.org/node/1311

Not much discussion of concrete language design, though.

-- chris



Chris Smith wrote:

>  Some people here seem to be
> saying that there is a universal concept of "type error" in dynamic
> typing, but I've still yet to see a good precise definition (nor a good
> precise definition of dynamic typing at all).

I think we're agreed (you and I anyway, if not everyone in this thread) that we
don't want to talk of "the" type system for a given language.  We want to allow
a variety of verification logics.  So a static type system is a logic which can
be implemented based purely on the program text without making assumptions
about runtime events (or making maximally pessimistic assumptions -- which comes to
the same thing really).  I suggest that a "dynamic type system" is a
verification logic which (in principle) has available as input not only the
program text, but also the entire history of the program execution up to the
moment when the to-be-checked operation is invoked.

I don't mean to imply that an operation /must/ not be checked until it is
invoked (although a particular logic/implementation might not do so).  For
instance an out-of-bound array access might be rejected:
  - when, in the surrounding code, it first became [...]
  - when the array was first passed to a function which [...]
  - ...and so on...

Note that not all errors that I would want to call type errors are necessarily
caught by the runtime -- it might go happily ahead never realising that it had
just violated one of the constraints of one of the logics I use to reason about
the program.  What's known as an undetected bug -- but just because the runtime
doesn't see it, doesn't mean that I wouldn't say I'd made a type error.  (The
same applies to any specific static type system too, of course.)

But the checks the runtime does perform (whatever they are, and whenever they
happen), do between them constitute /a/ logic of correctness.  In many highly
dynamic languages that logic is very close to being maximally optimistic, but
it doesn't have to be (e.g. the runtime type checking in the JVM is pretty
pessimistic in many cases).

Anyway, that's more or less what I mean when I talk of dynamically typed
language and their dynamic type systems.

> I suspect you'll see the Smalltalk version of the objections raised in
> response to my post earlier.  In other words, whatever terminology you
> think is consistent, you'll probably have a tough time convincing
> Smalltalkers to stop saying "type" if they did before.  If you exclude
> "message not understood" as a type error, then I think you're excluding
> type errors from Smalltalk entirely, which contradicts the psychological
> understanding again.

Taking Smalltalk /specifically/, there is a definite sense in which it is
typeless -- or trivially typed -- in that, in that language, there are no[*]
operations which are forbidden[**], and none which might not be invoked
deliberately (e.g. I have code which deliberately reads off the end of a
container object -- just to make sure I raise the "right" error for that
container, rather than raising my own error).  But, on the other hand, I do
still want to talk of type, and type system, and type errors even when I
program Smalltalk, and when I do I'm thinking about "type" in something like
the above sense.

-- chris

[*] I can't think of any offhand -- there may be a few.

[**] Although there are operations which are not possible, reading another
object's instvars directly for instance, which I suppose could be taken to
induce a non-trivial (and static) type logic.



Pascal Costanza wrote:
>
> Consider a simple expression like 'a + b': In a dynamically typed
> language, all I need to have in mind is that the program will attempt to
> add two numbers. In a statically typed language, I additionally need to
> know that there must be a guarantee that a and b will always hold numbers.

I still don't really see the difference.

I would not expect that the dynamic programmer will be
thinking that this code will have two numbers most of the
time but sometimes not, and fail. I would expect that in both
static and dynamic, the thought is that that code is adding
two numbers, with the difference being the static context
gives one a proof that this is so. In this simple example,
the static case is better, but this is not free, and the cost
of the static case is evident elsewhere, but maybe not
illuminated by this example.

This thread's exploration of the mindset of the two kinds
of programmers is difficult. It is actually quite difficult,
(possibly impossible) to reconstruct mental states
though introspection. Nonetheless I don't see any
other way to proceed. Pair programming?

> My goal is not to convince anyone, my goal is to illustrate for those
> who are interested in getting a possibly different perspective.

Yes, and thank you for doing so.

Marshall



Pascal Bourguignon wrote:
>
> For example, sort doesn't need to know what type the objects it sorts
> are.  It only needs to be given a function that is able to compare the
> objects.

Sure. That's why any decent type system supports polymorphism of this
sort. (And some of them can even infer which comparison function to pass
for individual calls, so that the programmer does not have to bother.)

- Andreas


Joe Marshall wrote:

> What we need is an FAQ entry for how to talk about types with people
> who are technically adept, but non-specialists.  Or alternatively, an
> FAQ of how to explain the term 'dynamic typing' to a type theorist.

You could point people at
"a regular series on object-oriented type theory, aimed
specifically at non-theoreticians."
which was published on/in JoT from:
http://www.jot.fm/issues/issue_2002_05/column5
to
http://www.jot.fm/issues/issue_2005_09/column1

Only 20 episodes ! (But #3 seems to be missing.)

Actually the first one has (in section four) a quick and painless overview of
several kinds of type theory.  I haven't read the rest (yet, and maybe never
;-)

-- chris



Chris Uppal wrote:
>
>>>It's worth noting, too, that (in some sense) the type of an object can
>>>change over time[*].
>>
>>No. Since a type expresses invariants, this is precisely what may *not*
>>happen. If certain properties of an object may change then the type of
>>the object has to reflect that possibility. Otherwise you cannot
>>legitimately call it a type.
>
> Well, it seems to me that you are /assuming/ a notion of what kinds of logic
> can be called type (theories), and I don't share your assumptions.  No offence
> intended.

OK, but can you point me to any literature on type theory that makes a
different assumption?

> I see no reason,
> even in practice, why a static analysis should not be able to see that the set
> of acceptable operations (for some definition of acceptable) for some
> object/value/variable can be different at different times in the execution.

Neither do I. But what is wrong with a mutable reference-to-union type,
as I suggested? It expresses this perfectly well.

- Andreas


Rob Warnock wrote:
> Another language which has *neither* latent ("dynamic") nor
> manifest ("static") types is (was?) BLISS[1], in which, like
> assembler, variables are "just" addresses[2], and values are
> "just" a machine word of bits.

360-family assembler, yes. 8086-family assembler, not so much.

--
John W. Kennedy
"The blind rulers of Logres
Nourished the land on a fallacy of rational virtue."
-- Charles Williams.  "Taliessin through Logres: Prelude"


David Hopwood wrote:
> Rob Thorpe wrote:
> > Vesa Karvonen wrote:
> >
> >>In comp.lang.functional Anton van Straaten <anton@appsolutions.com> wrote:
> >>
> >>>Let me add another complex subtlety, then: the above description misses
> >>>an important point, which is that *automated* type checking is not the
> >>>whole story.  I.e. that compile time/runtime distinction is a kind of
> >>>red herring.
> >>
> >>I agree.  I think that instead of "statically typed" we should say
> >>"typed" and instead of "(dynamically|latently) typed" we should say
> >>"untyped".
> [...]
> >>>It's certainly close enough to say that the *language* is untyped.
> >>
> >>Indeed.  Either a language has a type system and is typed or has no
> >>type system and is untyped.  I see very little room for confusion
> >>here.  In my experience, the people who confuse these things are
> >>people from the dynamic/latent camp who wish to see types everywhere
> >>because they confuse typing with safety or having well-defined
> >>semantics.
> >
> > No.  It's because the things that we call latent types we use for the
> > same purpose that programmers of static typed languages use static
> > types for.
> >
> > Statically typed programmers ensure that the value of some expression
> > is of some type by having the compiler check it.  Programmers of
> > latently typed languages check, if they think it's important, by asking
> > what the type of the result is.
> >
> > The objection here is that advocates of statically typed language seem
> > to be claiming the "type" as their own word, and asking that others use
> > their definitions of typing, which are really specific to their
> > subjects of interest.
>
> As far as I can tell, the people who advocate using "typed" and "untyped"
> in this way are people who just want to be able to discuss all languages in
> a unified terminological framework, and many of them are specifically not
> advocates of statically typed languages.

It's easy to create a reasonable framework. My earlier posts show simple
ways of looking at it that could be further refined; I'm sure there are
others who have already done this.

The real objection to this was that latently/dynamically typed
languages have a place in it.  But some of the advocates of statically
typed languages wish to lump these languages together with assembly
language as "untyped" in an attempt to label them as unsafe.


 0

Rob Thorpe wrote:
>
> It's easy to create a reasonable framework.

Luca Cardelli has given the most convincing one in his seminal tutorial
"Type Systems", where he identifies "typed" and "safe" as two orthogonal
dimensions and gives the following matrix:

       | typed | untyped
-------+-------+----------
safe   | ML    | Lisp
unsafe | C     | Assembler

Now, jargon "dynamically typed" is simply untyped safe, while "weakly
typed" is typed unsafe.
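In concrete terms (a sketch of my own, with Python standing in for the "untyped safe" cell): an ill-typed operation in a safe untyped language is trapped at run time with a well-defined error, rather than producing the undefined behaviour you get in C or assembler.

```python
# "Untyped safe" in Cardelli's sense: no static type checking, but every
# ill-typed operation is trapped at run time with a well-defined error.

def add(a, b):
    return a + b          # no static guarantee about a and b

print(add(1, 2))          # works: 3
try:
    add(1, "two")         # ill-typed, but safely trapped, never undefined
except TypeError as e:
    print("trapped:", e)
```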

> The real objection to this was that latently/dynamically typed
> languages have a place in it.  But some of the advocates of statically
> typed languages wish to lump these languages together with assembly
> language as "untyped" in an attempt to label them as unsafe.

No, see above. And I would assume that that is how most proponents of
the typed/untyped dichotomy understand it.

- Andreas

 0

John W. Kennedy wrote:
> 360-family assembler, yes. 8086-family assembler, not so much.

And Burroughs B-series, not at all. There was one "ADD" instruction, and
it looked at the data in the addresses to determine whether to add ints
or floats. :-)

--
Darren New / San Diego, CA, USA (PST)
My Bath Fu is strong, as I have
studied under the Showerin' Monks.

 0

Matthias Blume <find@my.address.elsewhere> writes:

> Pascal Bourguignon <pjb@informatimago.com> writes:
>
>> Moreover, a good proportion of the program and a good number of
>> algorithms don't even need to know the type of the objects they
>> manipulate.
>>
>> For example, sort doesn't need to know what type the objects it sorts
>> are.  It only needs to be given a function that is able to compare the
>> objects.
>
> Of course, some statically typed languages handle this sort of thing
> routinely.
>
>> Only a few "primitive" functions need specific types.
>
> Your sort function from above also has a specific type -- a type which
> represents the fact that the objects to be sorted must be acceptable
> input to the comparison function.

Well, not exactly.  sort is a higher level function. The type of its
arguments is an implicit parameter of the sort function.

(sort "Hello World"  (function char<=))
--> " HWdellloor"

(sort '(52 12 42 37) (function <=))
--> (12 37 42 52)

(sort (list (make-instance 'person               :name "Pascal")
            (make-instance 'unit                 :name "Pascal")
            (make-instance 'programming-language :name "Pascal"))
      (lambda (a b) (string<= (class-name (class-of a))
                              (class-name (class-of b)))))
--> (#<PERSON #x205763FE>
     #<PROGRAMMING-LANGUAGE #x205765BE>
     #<UNIT #x205764DE>)

In Common Lisp, sort is specified to take a parameter of type SEQUENCE
= (or vector list) (if a list, it should be a proper list), and a
function taking two arguments (of any type) and returning a
generalized boolean (anything can be returned; NIL is false, anything
else is true).

So you could say that:

(sort (sequence element-type)
      (function (element-type element-type) boolean))
--> (sequence element-type)

but element-type is not a direct parameter of sort, and can change from
call to call, even at the same call site:

(mapcar (lambda (s) (sort s (lambda (a b) (<= (sxhash a) (sxhash b)))))
        (list (vector 52 12 42 37)
              (list   52 12 42 37)
              (list "abc" 'def (make-instance 'person :name "Zorro") 76)))
--> (#(12 37 42 52)
     (12 37 42 52)
     (76 #<PERSON #x2058D496> DEF "abc"))
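The same point can be sketched with Python's built-in sorted (a rough analogue of my own, not an exact translation of the Lisp above): the element type is implicit, pinned down only by the comparison, and can differ even at a single call site.

```python
# sorted() is generic over element type, like CL's SORT: the element
# type is an implicit parameter, fixed only by how elements compare.

print(sorted("Hello World"))        # sorts characters
print(sorted([52, 12, 42, 37]))     # sorts integers

# One call site, a different element type on each iteration:
print([sorted(s) for s in ([52, 12, 42, 37], ["b", "a", "c"])])
```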

>> So basically, you've got a big black box of application code in the
>> middle that doesn't care what type of value they get, and you've got a
>> few input values of a specific type, a few processing functions
>> needing a specific type and returning a specific type, and a few
>> output values that are expected to be of a specific type.  At anytime,
>> you may change the type of the input values, and ensure that the
>> needed processing functions will be able to handle this new input
>> type, and the output gets mapped to the expected type.
>
> ...or you type-check your "black box" and make sure that no matter how
> you will ever change the type of the inputs (in accordance with the
> interface type of the box) you get a valid program.

When?  At run-time?  All the modifications I spoke of can be done at
run-time in Lisp.

--
__Pascal Bourguignon__                     http://www.informatimago.com/
The mighty hunter
Returns with gifts of plump birds,

 0

Pascal Bourguignon <pjb@informatimago.com> writes:

>
>> Pascal Bourguignon <pjb@informatimago.com> writes:
>>
>>> Moreover, a good proportion of the program and a good number of
>>> algorithms don't even need to know the type of the objects they
>>> manipulate.
>>>
>>> For example, sort doesn't need to know what type the objects it sorts
>>> are.  It only needs to be given a function that is able to compare the
>>> objects.
>>
>> Of course, some statically typed languages handle this sort of thing
>> routinely.
>>
>>> Only a few "primitive" functions need specific types.
>>
>> Your sort function from above also has a specific type -- a type which
>> represents the fact that the objects to be sorted must be acceptable
>> input to the comparison function.
>
> Well, not exactly.

What do you mean by "not exactly"?

>  sort is a higher level function. The type of its
> arguments is an implicit parameter of the sort function.

What do you mean by "higher-level"? Maybe you meant "higher-order" or
"polymorphic"?

[ rest snipped ]

You might want to look up "System F".

>>> So basically, you've got a big black box of application code in the
>>> middle that doesn't care what type of value they get, and you've got a
>>> few input values of a specific type, a few processing functions
>>> needing a specific type and returning a specific type, and a few
>>> output values that are expected to be of a specific type.  At anytime,
>>> you may change the type of the input values, and ensure that the
>>> needed processing functions will be able to handle this new input
>>> type, and the output gets mapped to the expected type.
>>
>> ...or you type-check your "black box" and make sure that no matter how
>> you will ever change the type of the inputs (in accordance with the
>> interface type of the box) you get a valid program.
>
> When?

When what?

 0

Marshall wrote:
>
> In this simple example,
> the static case is better, but this is not free, and the cost
> of the static case is evident elsewhere, but maybe not
> illuminated by this example.

Ugh, please forgive my ham-fisted use of the word "better."
Let me try again:

In this simple example, the static case provides you with
a guarantee of type safety, but this is not free, and the
cost of the static case may be evident elsewhere, even
if not illuminated by this example.

Marshall


 0

"Marshall" <marshall.spight@gmail.com> writes:

> Pascal Costanza wrote:
>>
>> Consider a simple expression like 'a + b': In a dynamically typed
>> language, all I need to have in mind is that the program will attempt to
>> add two numbers. In a statically typed language, I additionally need to
>> know that there must be a guarantee that a and b will always hold numbers.
>
> I still don't really see the difference.
>
> I would not expect that the dynamic programmer will be
> thinking that this code will have two numbers most of the
> time but sometimes not, and fail. I would expect that in both
> static and dynamic, the thought is that that code is adding
> two numbers, with the difference being the static context
> gives one a proof that this is so. In this simple example,
> the static case is better, but this is not free, and the cost
> of the static case is evident elsewhere, but maybe not
> illuminated by this example.
>
> This thread's exploration of the mindset of the two kinds
> of programmers is difficult. It is actually quite difficult
> (possibly impossible) to reconstruct mental states
> through introspection. Nonetheless I don't see any
> other way to proceed. Pair programming?

Well, this is a question of data flow.  As I explained, there's a whole
body of functions that don't concretely process the data they get.
But of course, eventually you must write a function that does some
concrete processing on the data it gets.  That's when you consider the
type of the values. Such functions may be generic functions with
methods dispatching on the actual type of the parameters, or you may
encounter some TYPECASE or COND inside the function before calling
non-abstract "primitives" that work only on some specific type.
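A rough Python stand-in (my own sketch) for such a generic function dispatching on the actual run-time type of its argument, as CLOS methods or a TYPECASE inside the function would:

```python
# Generic function whose methods dispatch on the run-time type of the
# first argument; the default method stays fully generic.
from functools import singledispatch

@singledispatch
def describe(x):                 # default method: fully generic
    return "something"

@describe.register(int)
def _(x):                        # method specialised on integers
    return "an integer"

@describe.register(str)
def _(x):                        # method specialised on strings
    return "a string"

print(describe(42))              # an integer
print(describe("hi"))            # a string
print(describe(3.5))             # something (falls back to the default)
```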

--
__Pascal Bourguignon__                     http://www.informatimago.com/

"I have challenged the entire quality assurance team to a Bat-Leth
contest.  They will not concern us again."

 0

Matthias Blume <find@my.address.elsewhere> writes:

> Pascal Bourguignon <pjb@informatimago.com> writes:
>
>>
>>> Pascal Bourguignon <pjb@informatimago.com> writes:
>>>
>>>> Moreover, a good proportion of the program and a good number of
>>>> algorithms don't even need to know the type of the objects they
>>>> manipulate.
>>>>
>>>> For example, sort doesn't need to know what type the objects it sorts
>>>> are.  It only needs to be given a function that is able to compare the
>>>> objects.
>>>
>>> Of course, some statically typed languages handle this sort of thing
>>> routinely.
>>>
>>>> Only a few "primitive" functions need specific types.
>>>
>>> Your sort function from above also has a specific type -- a type which
>>> represents the fact that the objects to be sorted must be acceptable
>>> input to the comparison function.
>>
>> Well, not exactly.
>
> What do you mean by "not exactly"?
>
>>  sort is a higher level function. The type of its
>> arguments is an implicit parameter of the sort function.
>
> What do you mean by "higher-level"? Maybe you meant "higher-order" or
> "polymorphic"?

Yes, that's what I wanted to say.

> [ rest snipped ]
>
> You might want to look up "System F".
> [...]
>>> ...or you type-check your "black box" and make sure that no matter how
>>> you will ever change the type of the inputs (in accordance with the
>>> interface type of the box) you get a valid program.
>>
>> When?
>
> When what?

When will you type-check the "black box"?

A function such as:

(defun f (x y)
  (if (g x)
      (h x y)
      (i y x)))

in the context of a given program could be statically type-inferred as
taking an integer and a string as arguments.  If the compiler did this
inference, it could perhaps generate code specific to these types.

But it's always possible that at run-time new functions and new
function calls are generated, such as:

(let ((x "two"))
  (eval `(defmethod g ((self ,(type-of x))) t))
  (eval `(defmethod h ((x ,(type-of x)) (y string))
           (,(intern (format nil "DO-SOMETHING-WITH-A-~A" (type-of x))) x)
           (do-something-with-a-string y)))
  (funcall (compile nil `(lambda () (f ,x "Hi!")))))

Will you execute the whole type inference on the whole program "black
box" every time you define a new function?  Will you recompile all the
"black box" functions to take into account the new type the arguments
can now have?

This wouldn't be too efficient.  Let's just say that by default, all
arguments and variables are of type T, so the type checking is trivial,
and the generated code is, by default, totally generic.

Only the few concrete, low-level functions need to know the types of
their arguments and variables.  In these functions, either the lisp
compiler will do the type inference (starting from the predefined
primitives), or the programmer will declare the types to inform the
compiler what to expect.

(defun do-something-with-a-string (x)
  (declare (string x))
  ...)

(defun do-something-with-a-integer (x)
  (declare (integer x))
  ...)

....
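A rough Python analogue of these declarations (my sketch, with hypothetical names; Python's annotations are advisory, closer in spirit to Lisp's declare than to a static type system, so the assertions make the checks explicit):

```python
# By default everything is of the universal type; annotations narrow it
# for the few concrete, low-level functions, and a checker (or an
# explicit assertion, as here) can enforce them.

def do_something_with_a_string(x: str) -> str:
    assert isinstance(x, str), "declared string"   # like (declare (string x))
    return x.upper()

def do_something_with_an_integer(x: int) -> int:
    assert isinstance(x, int), "declared integer"  # like (declare (integer x))
    return x + 1

print(do_something_with_a_string("hi"))   # HI
print(do_something_with_an_integer(41))   # 42
```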

--
__Pascal Bourguignon__                     http://www.informatimago.com/

PUBLIC NOTICE AS REQUIRED BY LAW: Any use of this product, in any
manner whatsoever, will increase the amount of disorder in the
universe. Although no liability is implied herein, the consumer is
warned that this process will ultimately lead to the heat death of
the universe.

 0

Chris Smith wrote:
> Joachim Durchholz <jo@durchholz.org> wrote:
> > Assume a language that
> > a) defines that a program is "type-correct" iff HM inference establishes
> > that there are no type errors
> > b) compiles a type-incorrect program anyway, with an established
> > rigorous semantics for such programs (e.g. by throwing exceptions as
> > appropriate).
>
> So the compiler now attempts to prove theorems about the program, but
> once it has done so it uses the results merely to optimize its runtime
> behavior and then throws the results away.  I'd call that not a
> statically typed language, then.

You're assuming that type-correctness is an all-or-nothing property
(well, technically it *is*, but bear with me).  What if the compiler is
unable to prove a theorem about the entire program, but *can* prove a
theorem about a subset of the program.  The theorems would naturally be
conditional, e.g. 'Provided the input is an integer, the program is
type-safe', or time-bounded, e.g. 'Until the program attempts to invoke
function FOO, the program is type-safe.'

Of course, we could encode that by restricting the type of the input
and everything would be copacetic, but suppose there is a requirement
that floating point numbers are valid input.  For some reason, our
program is not type-safe for floats, but as a developer who is working
on the integer math routines, I have no intention of running that code.
The compiler cannot prove that I won't perversely enter a float, but
it can prove that if I enter an integer everything is type-safe.  I can
therefore run, debug, and use a subset of the program.
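A minimal sketch of this situation (my own example, hypothetical names): the float path is broken, but nothing stops me exercising the integer path, and the runtime's net fires only at the moment the broken code actually executes.

```python
# The float branch is broken (it calls a method that doesn't exist),
# but the integer branch runs fine; the runtime traps the error only
# when the broken fragment is actually reached.

def process(n):
    if isinstance(n, int):
        return n * 2                 # working fragment
    else:
        return n.hex().uppercase()   # broken: str has .upper(), not .uppercase()

print(process(21))                   # 42 -- the working subset runs
try:
    process(1.5)                     # the safety net fires here, not before
except AttributeError as e:
    print("caught at the last moment:", e)
```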

That's the important point:  I want to run broken code.  I want to run
as much of the working fragments as I can, and I want a 'safety net' to
prevent me from performing undefined operations, but I want the safety
net to catch me at the *last* possible moment.  I'm not playing it safe
and staying where the compiler can prove I'll be ok.  I'm living
dangerously and wandering near the edge where the compiler can't quite
prove that I'll fail.

Once I've done the major amount of exploratory programming, I may very
well want to tighten things up.  I'd like to have the compiler prove
that some mature library is type-safe and that all callers to the
library use it in a type-correct manner.  I'd like the compiler to say
Hey, did you know that if the user enters a floating point number
instead of his name that your program will crash and burn?', but I
don't want that sort of information until I'm pretty sure about what I
want the program to do in the normal case.


 0

Chris Uppal <chris.uppal@metagnostic.REMOVE-THIS.org> wrote:
> I think we're agreed (you and I anyway, if not everyone in this thread) that we
> don't want to talk of "the" type system for a given language.  We want to allow
> a variety of verification logics.  So a static type system is a logic which can
> be implemented based purely on the program text without making assumptions
> about runtime events (or making maximally pessimistic assumptions -- which comes
> to the same thing really).  I suggest that a "dynamic type system" is a
> verification logic which (in principle) has available as input not only the
> program text, but also the entire history of the program execution up to the
> moment when the to-be-checked operation is invoked.

I am trying to understand how the above statement about dynamic types
actually says anything at all.  So a dynamic type system is a system of
logic by which, given a program and a path of program execution up to
this point, verifies something.  We still haven't defined "something",
though.  We also haven't defined what happens if that verification
fails.  One or the other or (more likely) some combination of the two
must be critical to the definition in order to exclude silly
applications of it.  Presumably you want to exclude from your definition
a dynamic "type system" which verifies that a value is non-negative,
and if so executes the block of code following "then"; and otherwise,
executes the block of code following "else".  Yet I imagine you don't
want to exclude ALL systems that allow the programmer to execute
different code when the verification fails (think exception handlers)
versus succeeds, nor exclude ALL systems where the condition is that a
value is non-negative.

In other words, I think that everything so far is essentially just
defining a dynamic type system as equivalent to a formal semantics for a
programming language, in different words that connote some bias toward
certain ways of looking at possibilities that are likely to lead to
incorrect program behavior.  I doubt that will be an attractive
definition to very many people.

> Note that not all errors that I would want to call type errors are necessarily
> caught by the runtime -- it might go happily ahead never realising that it had
> just violated one of the constraints of one of the logics I use to reason about
> the program.  What's known as an undetected bug -- but just because the runtime
> doesn't see it, doesn't mean that I wouldn't say I'd made a type error.  (The
> same applies to any specific static type system too, of course.)

In static type system terminology, this quite emphatically does NOT
apply.  There may, of course, be undetected bugs, but they are not type
errors.  If they were type errors, then they would have been detected,
unless the compiler is broken.

If you are trying to identify a set of dynamic type errors, in a way
that also applies to statically typed languages, then I will read on.

> But the checks the runtime does perform (whatever they are, and whenever they
> happen), do between them constitute /a/ logic of correctness.  In many highly
> dynamic languages that logic is very close to being maximally optimistic, but
> it doesn't have to be (e.g. the runtime type checking in the JVM is pretty
> pessimistic in many cases).
>
> Anyway, that's more or less what I mean when I talk of dynamically typed
> language and their dynamic type systems.

So my objections, then, are in the first paragraph.

> [**] Although there are operations which are not possible, reading another
> object's instvars directly for instance, which I suppose could be taken to
> induce a non-trivial (and static) type logic.

In general, I wouldn't consider a syntactically incorrect program to
have a static type error.  Type systems are, in fact, essentially a tool
to separate concerns; specifically, to remove type-correctness concerns
from the grammars of programming languages.  By doing so, we are able at
least to considerably simplify the grammar of the language, and perhaps
also to increase the "tightness" of the verification without risking
making the language grammar context-sensitive.  (I'm unsure about the
second part of that statement, but I can think of no obvious theoretical
reason to assume that combining a type system with a regular context-
free grammar would yield another context-free grammar.  Then again,
formal languages are not my strong point.)

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation

 0

Pascal Costanza wrote:
> Consider a simple expression like 'a + b': In a dynamically typed
> language, all I need to have in mind is that the program will attempt to
> add two numbers. In a statically typed language, I additionally need to
> know that there must a guarantee that a and b will always hold numbers.

"Marshall" <marshall.spight@gmail.com> replied:
> I would not expect that the dynamic programmer will be
> thinking that this code will have two numbers most of the
> time but sometimes not, and fail. I would expect that in both
> static and dynamic, the thought is that that code is adding
> two numbers, with the difference being the static context
> gives one a proof that this is so.

I don't think the problem is down at the atom level.  It's really
at the structure (tuple) level, if we view 'a + b' as applying the
function + to some non-atomic objects a and b.  The question is how
well we specify what a and b are.  From my limited exposure to
programs that use dynamic typing, I would surmise that developers who
use dynamic typing have very broad and open definitions of the "types"
of a and b, and they don't want the type system complaining that they
have nailed the definitions down quite yet.  And, if you read some of
the arguments in this topic, one assumes that they want to even be
able to correct the "types" of a and b at run-time, when they find out
that they have made a mistake without taking the system down.

So, for a concrete example, let b be a "file" type (either text file
or directory) and we get a whole system up and running with those two
types.  But, we discover a need for "symbolic links".  So, we want to
change the file type to have 3 sub-types.  In a dynamic type system,
because the file "type" is loose (it's operationally defined by what
function one applies to b), this isn't an issue.  As each function is
recoded to deal with the 3rd sub-type, the program becomes more
functional.  If we need to demo the program before we have worked out
how symbolic links work for some operation, it is not a problem.  Run
the application, but don't exercise that combination of type and
operation.

In many ways, this can also be done in languages with type inference.
I don't understand the process by which one does it, so I can't
explain it.  Perhaps someone else will--please....

Back to the example, the file type is not an atomic type, but
generally some structure/tuple/list/record/class with members or
fields.  A function which renames files by looking only at the name
works without change, because the new subtype has the same name field
and uses it the same way.  A spell-check function which works on text
files using the "contents" field might need to be recoded for symbolic
links if the contents field for that subtype is "different" (e.g. the
name of the target).  The naive spell check function might appear to
work, but really do the wrong thing (i.e. check whether the target file
name is made of legal words).  Thus, the type problems are not at the
atomic level (where contents is a string), but at the structure level,
where one needs to know which fields have which meanings for which
subtypes.
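A sketch of the scenario above (my own illustration, hypothetical class and field names): File and Directory are up and running; SymbolicLink is added later. rename() only touches the name field, so it works unchanged; the naive spell check reads the contents field and silently does the wrong thing for a link, whose contents is the target's name.

```python
# File and Directory were the original two types; SymbolicLink is the
# third subtype added after the system was already running.

class File:
    def __init__(self, name, contents=""):
        self.name, self.contents = name, contents

class Directory(File):
    pass

class SymbolicLink(File):
    def __init__(self, name, target):
        super().__init__(name, contents=target)  # contents = target's name!

def rename(f, new_name):        # same name field, same use: works unchanged
    f.name = new_name

def naive_spell_check(f):       # appears to work, but wrong for links
    return f.contents.split()

link = SymbolicLink("shortcut", "/tmp/real_file")
rename(link, "shortcut2")                # fine without any recoding
print(link.name)                         # shortcut2
print(naive_spell_check(link))           # "spell-checks" the target path!
```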

-Chris

 0

Andreas Rossberg wrote:
> Joachim Durchholz wrote:
>>>
>>>> It's worth noting, too, that (in some sense) the type of an object
>>>> can change over time[*].
>>>
>>> No. Since a type expresses invariants, this is precisely what may
>>> *not* happen.
>>
>> No. A type is a set of allowable values, allowable operations, and
>> constraints on the operations (which are often called "invariants" but
>> they are invariant only as long as the type is invariant).
>
> The purpose of a type system is to derive properties that are known to

That's just one of many possible purposes (a noble one, and the most
preeminent one in FPLs I'll agree any day, but it's still *not the
definition of a type*).

> A type is the encoding of these properties. A type
> varying over time is an inherent contradiction (or another abuse of the
> term "type").

No. It's just a matter of definition, essentially.
E.g. in Smalltalk and Lisp, it does make sense to talk of the "type" of
a name or a value, even if that type may change over time.
I regard it as a highly dubious practice to have things change their
types over their lifetime, but if there are enough other constraints,
type constancy may indeed have to take a back seat.

Regards,
Jo

 0

Andreas Rossberg wrote:
> (Btw, Pascal did not have it either, AFAIK)

Indeed.
Some Pascal dialects have it.

Regards,
Jo

 0

In article <1150998222.352746.65520@i40g2000cwc.googlegroups.com>, Joe
Marshall wrote:
>
> That's the important point:  I want to run broken code.  I want to run
> as much of the working fragments as I can, and I want a safety net' to
> prevent me from performing undefined operations, but I want the safety
> net to catch me at the *last* possible moment.  I'm not playing it safe
> and staying where the compiler can prove I'll be ok.  I'm living
> dangerously and wandering near the edge where the compiler can't quite
> prove that I'll fail.

Hi Joe,

How do you write programs? Specifically, how do you write and debug
higher-order programs that involve lots of combinators (eg, code
that's partially CPS-converted, or in state-passing style, and also
uses maps and folds)?

The reason I ask is that I see that there are Scheme programmers that
manage to do this successfully. However, I switched to ML because I
just couldn't get that kind of code right without having type errors
to guide me.

Since people like you and Matthias and Shriram obviously *can* write
this kind of code, I'm curious what your strategies are.

--
Neel Krishnaswami
neelk@cs.cmu.edu

 0
Reply neelk (298) 6/22/2006 8:28:47 PM


> Chris Smith wrote:
>>I suspect you'll see the Smalltalk version of the objections raised in
>>response to my post earlier.  In other words, whatever terminology you
>>think is consistent, you'll probably have a tough time convincing
>>Smalltalkers to stop saying "type" if they did before.  If you exclude
>>"message not understood" as a type error, then I think you're excluding
>>type errors from Smalltalk entirely, which contradicts the psychological
>>understanding again.
>
Chris Uppal wrote:

>
> Taking Smalltalk /specifically/, there is a definite sense in which it is
> typeless -- or trivially typed -- in that in that language there are no[*]
> operations which are forbidden[**],

Come on, Chris U.  One has to distinguish an attempt to invoke an
operation from its actually being carried out.  There is nothing in
Smalltalk to stop one attempting to invoke any "operation" on any
object.  But one can only actually carry out operations on objects that
implement them.  Modulo doesNotUnderstand: (which is important, but
avoidable), Smalltalk is in fact strongly-typed, but not statically
strongly-typed.

--
_______________,,,^..^,,,____________________________
Eliot Miranda              Smalltalk - Scene not herd


 0

Xah Lee wrote:
> in March, i posted a essay “What is Expressiveness in a Computer
> Language”, archived at:
> http://xahlee.org/perl-python/what_is_expresiveness.html
>
> I was informed then that there is a academic paper written on this
> subject.
>
> On the Expressive Power of Programming Languages, by Matthias
> Felleisen, 1990.
> http://www.ccs.neu.edu/home/cobbe/pl-seminar-jr/notes/2003-sep-26/expressive-slides.pdf
>
> Has anyone read this paper? And, would anyone be interested in giving a
> summary?
>
> thanks.
>
>    Xah
>    xah@xahlee.org
>  ∑ http://xahlee.org/
>
Watching this thread grow, it appears to me that at least one thing
becomes evident here:

Xah's unwillingness to learn from bad past experience contaminates
others (who are still posting to his trolling threads).

Here is another try to rescue those who are still innocent enough not to
know what I am speaking about:

Citation from http://www.xahlee.org/Netiquette_dir/_/art_trolling.html :
"""
What I want this document to focus on is how to create entertaining
trolls. I have drawn on the expertise of the writer's of some of
Usenet's finest and best remembered trolls. Trolls are for fun. The
object of recreational trolling is to sit back and laugh at all those
gullible idiots that will believe *anything*.
[...]
Remember that you have two audiences. The people who are going to get
the maximum enjoyment out of your post are other trollers. You need to
keep in contact with them through both your troll itself and the way you
direct its effect. It is trollers that you are trying to entertain so be
creative - trollers don't just want a laugh from you they want to see
good trolls so that they can also learn how to improve their own in the
never ending search for the perfect troll.
[...]
Section 6    Following-Up
Try not to follow-up to your own troll. The troll itself quickly becomes
forgotten in the chaos and if you just sit back you can avoid being
blamed for causing it. Remember, if you do follow up you are talking to
an idiot. Treat them with the ill-respect they deserve.
"""

Claudio Grondi (a past 'gullible idiot' who learned to enjoy the fun of
being the audience)

 0

Claudio Grondi <claudio.grondi@freenet.de> wrote:
> Watching this thread grow, it appears to me that at least one thing
> becomes evident here:
>
> Xah's unwillingness to learn from bad past experience contaminates
> others (who are still posting to his trolling threads).
>
> Here is another try to rescue those who are still innocent enough not to
> know what I am speaking about:

I am enjoying the discussion.  I think several other people are, too.
(At least, I hope so!)  It matters not one whit to me who started the
thread, or the merits of the originating post.  Why does it matter to
you?

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation

 0

Claudio Grondi wrote:
> Watching this thread grow, it appears to me that at least one thing
> becomes evident here:
>
> Xah's unwillingness to learn from bad past experience contaminates
> others (who are still posting to his trolling threads).

This is actually one of the most interesting threads I have read in a
long time. If you ignore the evangelism, there is a lot of high-quality
information and first-hand experience you couldn't find in a dozen books.

 0

Eliot Miranda wrote:
> can only actually carry-out operations on objects that implement them.

Except that every operation is implemented by every object in Smalltalk.
Unless you specify otherwise, the default implementation of every
method is to send doesNotUnderstand: to the receiver.  (I don't recall
whether the class of nil has a special rule for this or whether it
implements doesNotUnderstand and invokes the appropriate "don't send
messages to nil" method.)

There are a number of Smalltalk extensions, such as
multiple-inheritance, that rely on implementing doesNotUnderstand.
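A rough analogy can be sketched in modern Javascript (the lingua franca
used elsewhere in this thread) using a Proxy to route any message the
object doesn't implement to a fallback handler; the helper names here
are invented purely for illustration, and this is only an analogy, not
how Smalltalk itself works:

```javascript
// Invented helper: wrap an object so that any missing method is routed
// to a doesNotUnderstand handler, in the spirit of Smalltalk's default.
function withDoesNotUnderstand(target) {
  return new Proxy(target, {
    get(obj, name) {
      if (name in obj) return obj[name];
      // Return a function so the "message send" still proceeds.
      return (...args) => obj.doesNotUnderstand
        ? obj.doesNotUnderstand(name, args)
        : `doesNotUnderstand: ${String(name)}`;
    }
  });
}

const account = withDoesNotUnderstand({
  balance: 10,
  doesNotUnderstand(selector, args) {
    return `no method ${String(selector)} for ${args.length} arg(s)`;
  }
});

console.log(account.balance);     // 10
console.log(account.withdraw(5)); // "no method withdraw for 1 arg(s)"
```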

--
Darren New / San Diego, CA, USA (PST)
Native Americans used every part
of the buffalo, including the wings.


Timo Stamm schreef:

> This is actually one of the most interesting threads I have read in a
> long time. If you ignore the evangelism, there is a lot of
> high-quality information and first-hand experience you couldn't find
> in a dozen books.

Much of what is talked about, is in these articles (and their links)
http://www.mindview.net/WebLog/log-0066
http://en.wikipedia.org/wiki/Dynamic_typing

--
Affijn, Ruud

"Gewoon is een tijger."



Rob Thorpe wrote:
> David Hopwood wrote:
>
>>As far as I can tell, the people who advocate using "typed" and "untyped"
>>in this way are people who just want to be able to discuss all languages in
>>a unified terminological framework, and many of them are specifically not
>
> It's easy to create a reasonable framework. My earlier posts show simple
> ways of looking at it that could be further refined, I'm sure there are
> others who have already done this.
>
> The real objection to this was that latently/dynamically typed
> languages have a place in it.

You seem to very keen to attribute motives to people that are not apparent
from what they have said.

> But some of the advocates of statically
> typed languages wish to lump these languages together with assembly
> language as "untyped" in an attempt to label them as unsafe.

A common term for languages which have defined behaviour at run-time is
"memory safe". For example, "Smalltalk is untyped and memory safe."
That's not too objectionable, is it?

(It is actually more common for statically typed languages to fail to be
memory safe; consider C and C++, for example.)

--
David Hopwood <david.nospam.hopwood@blueyonder.co.uk>


Marshall wrote:
> Can you be more explicit about what "latent types" means?
> I'm sorry to say it's not at all natural or intuitive to me.
> Are you referring to the types in the programmers head,
> or the ones at runtime, or what?

Sorry, that was a huge omission.  (What I get for posting at 3:30am.)

The short answer is that I'm most directly referring to "the types in
the programmer's head".  A more complete informal summary is as follows:

Languages with latent type systems typically don't include type
declarations in the source code of programs.  The "static" type scheme
of a given program in such a language is thus latent, in the English
dictionary sense of the word, of something that is present but
undeveloped.  Terms in the program may be considered as having static
types, and it is possible to infer those types, but it isn't necessarily
easy to do so automatically, and there are usually many possible static
type schemes that can be assigned to a given program.
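A tiny Javascript sketch of that last point: the same definition
supports several defensible static type schemes, and nothing in the
source text picks one out.

```javascript
// The identity function: one program, many possible latent type schemes.
const id = x => x;

// Depending on how id is used, a programmer might assign it any of:
//   id : number -> number
//   id : string -> string
//   id : a -> a            (the most general scheme)
console.log(id(5));      // 5
console.log(id("five")); // "five"
```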

Programmers infer and reason about these latent types while they're
writing or reading programs.  Latent types become manifest when a
programmer writes them down, e.g. as a comment documenting a function's
type.

(As has already been noted, this definition may seem at odds with the
definition given in the Scheme report, R5RS, but I'll address that in a
separate post.)

There's a close connection between latent types in the sense I've
described, and the "tagged values" present at runtime.  However, as type
theorists will tell you, the tags used to tag values at runtime, as e.g.
a number or a string or a FooBar object, are not the same thing as the
sort of types which statically-typed languages have.

A simple example of the distinction can be seen in the type of a
function.  Using Javascript as a lingua franca:

function timestwo(x) { return x * 2 }

In a statically-typed language, the type of a function like this might
be something like "number -> number", which tells us three things: that
timestwo is a function; that it accepts a number argument; and that it
returns a number result.

But if we ask Javascript what it thinks the type of timestwo is, by
evaluating "typeof timestwo", it returns "function".  That's because the
value bound to timestwo has a tag associated with it which says, in
effect, "this value is a function".

But "function" is not a useful type.  Why not?  Because if all you know
is that timestwo is a function, then you have no idea what an expression
like "timestwo(foo)" means.  You couldn't write working programs, or
read them, if all you knew about functions was that they were functions.
As a type, "function" is incomplete.

By my definition, though, the latent type of timestwo is "number ->
number".  Any programmer looking at the function can figure out that
this is its type, and programmers do exactly that when reasoning about a
program.

(Aside: technically, you can pass timestwo something other than a
number, but then you get NaN back, which is usually not much use except
to generate errors.  I'll ignore that here; latent typing requires being
less rigorous about some of these issues.)
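The points above can be checked directly in any Javascript
implementation:

```javascript
function timestwo(x) { return x * 2 }

// The runtime tag only says "this is a function" -- it says nothing
// about what the function accepts or returns.
console.log(typeof timestwo); // "function"

// The latent type "number -> number" shows up in actual use:
console.log(timestwo(5));     // 10

// Passing a non-number isn't rejected; it just yields NaN.
console.log(timestwo("foo")); // NaN
```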

So, where do tagged values fit into this?  Tags help to check types at
runtime, but that doesn't mean that there's a 1:1 correspondence between
tags and types.  For example, when an expression such as "timestwo(5) *
3" is evaluated, three checks occur that are relevant to the type of
timestwo:

1. Before the function call takes place, a check ensures that timestwo
is a function.

2. Before the multiplication in "x * 2", a check ensures that x is a number.

3. When timestwo returns, before the subsequent multiplication by 3, a
check ensures that the return type of timestwo is a number.

These three checks correspond to the three pieces of information
contained in the function type signature "number -> number".

However, these dynamic checks still don't actually tell us the type of a
function.  All they do is check that in a particular case, the values
involved are compatible with the type of the function.  In many cases,
the checks may infer a signature that's either more or less specific
than the function's type, or they may infer an incomplete signature --
e.g., the return type doesn't need to be checked when evaluating "arr[i]
= timestwo(5)".
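Spelled out in Javascript, with each implicit tag check written as an
explicit assertion -- a sketch of what the checks amount to, not of how
any real engine implements them:

```javascript
function timestwo(x) { return x * 2 }

// Evaluating "timestwo(5) * 3":
console.assert(typeof timestwo === "function"); // 1. callee is a function
const arg = 5;
console.assert(typeof arg === "number");        // 2. "x * 2" needs a number
const result = timestwo(arg);
console.assert(typeof result === "number");     // 3. result usable in "* 3"
console.log(result * 3); // 30
```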

I used a function just as an example.  There are many other cases where
a value's tag doesn't match the static (or latent) type of the terms
through which it flows.  A simple example is an expression such as:

(flag ? 5 : "foo")

Here, the latent type of this expression could be described as "number |
string".  There won't be a runtime tag anywhere which represents that
type, though, since the language implementation never deals with the
actual type of expressions, except in those degenerate cases where the
type is so simple that it happens to be a 1:1 match to the corresponding
tag.  It's only the programmer that "knows" that this expression has
that type.
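In runnable form:

```javascript
// Latent type of the result: number | string.  No single runtime tag
// ever represents that union; each particular value carries only its
// own tag.
function pick(flag) { return flag ? 5 : "foo"; }

console.log(typeof pick(true));  // "number"
console.log(typeof pick(false)); // "string"
```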

Anton


Pascal Bourguignon wrote:
> Pascal Costanza <pc@p-cos.net> writes:
>>Andreas Rossberg wrote:
>>>Pascal Costanza wrote:
>>>
>>>>Consider a simple expression like 'a + b': In a dynamically typed
>>>>language, all I need to have in mind is that the program will
>>>>attempt to add two numbers. In a statically typed language, I
>>>>additionally need to know that there must be a guarantee that a and b
>>>>will always hold numbers.
>>>
>>>I'm confused. Are you telling that you just write a+b in your
>>>programs without trying to ensure that a and b are in fact numbers??
>>
>>Basically, yes.
>>
>>Note that this is a simplistic example. Consider, instead, sending a
>>message to an object, or calling a generic function, without ensuring
>>that there will be applicable methods for all possible cases. When I
>>get a "message not understood" exception, I can then decide whether
>>that kind of object shouldn't be a receiver in the first place, or
>>else whether I should define an appropriate method. I don't want to be
>>forced to decide this upfront, because either I don't want to be
>>bothered, or maybe I simply can't because I don't understand the
>>domain well enough yet, or maybe I want to keep a hook to be able to
>>update the program appropriately while it is running.
>
> Moreover, a good proportion of the program and a good number of
> algorithms don't even need to know the type of the objects they
> manipulate.
>
> For example, sort doesn't need to know what type the objects it sorts
> are.  It only needs to be given a function that is able to compare the
> objects.

But this is true also in a statically typed language with parametric
polymorphism.
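The sort example can be seen directly in Javascript: the built-in sort
never inspects element types itself, it only applies whatever
comparison function it is given (the data below is invented for
illustration):

```javascript
// sort is generic over element type: it only needs a comparator.
const byAge = (a, b) => a.age - b.age;

const people = [
  { name: "B", age: 30 },
  { name: "A", age: 20 },
];

console.log(people.sort(byAge).map(p => p.name)); // [ 'A', 'B' ]
```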

[...]
> Why should adding a few functions or methods, and providing input
> values of a new type be rejected from a statically checked  point of
> view by a compiled program that would be mostly bit-for-bit the same
> with or without this new type?

It usually wouldn't be -- adding methods in a typical statically typed
OO language is unlikely to cause type errors (unless there is a naming
conflict, in some cases). Nor would adding new types or new functions.

(*Using* new methods without declaring them would cause an error, yes.)

--
David Hopwood <david.nospam.hopwood@blueyonder.co.uk>


Pascal Bourguignon wrote:
> But it's always possible at run-time that new functions and new
> function calls be generated such as:
>
> (let ((x "two"))
>   (eval `(defmethod g ((self ,(type-of x))) t))
>   (eval `(defmethod h ((x ,(type-of x)) (y string))
>            (,(intern (format nil "DO-SOMETHING-WITH-A-~A" (type-of x))) x)
>            (do-something-with-a-string y)))
>   (funcall (compile nil `(let ((x ,x)) (lambda () (f x "Hi!"))))))
>
> Will you execute the whole type-inference on the whole program "black
> box" every time you define a new function?  Will you recompile all the
> "black box" functions to take into account the new type the arguments
> can be now?

Yes, why not?

> This wouldn't be too efficient.

It's rare, so it doesn't need to be efficient. 'eval' is inherently
inefficient, anyway.
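The same phenomenon exists in Javascript: code constructed at run time
(here via the Function constructor, a rough analogue of eval)
introduces definitions and calls that no ahead-of-time analysis of the
original source could have seen:

```javascript
// Build a function from a string at run time.
const double = new Function("x", "return x * 2;");

// No static analysis of the program text prior to this point knew that
// a "number -> number" function would come into existence here.
console.log(double(21)); // 42
```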

--
David Hopwood <david.nospam.hopwood@blueyonder.co.uk>


Neelakantan Krishnaswami wrote:
> Marshall wrote:
> >
> > That's the important point:  I want to run broken code.  I want to run
> > as much of the working fragments as I can, and I want a 'safety net' to
> > prevent me from performing undefined operations, but I want the safety
> > net to catch me at the *last* possible moment.  I'm not playing it safe
> > and staying where the compiler can prove I'll be ok.  I'm living
> > dangerously and wandering near the edge where the compiler can't quite
> > prove that I'll fail.
>
> Hi Joe,
>
> How do you write programs? Specifically, how do you write and debug
> higher-order programs that involve lots of combinators (eg, code
> that's partially CPS-converted, or in state-passing style, and also
> uses maps and folds)?
>
> The reason I ask is that I see that there are Scheme programmers that
> manage to do this successfully. However, I switched to ML because I
> just couldn't get that kind of code right without having type errors
> to guide me.
>
> Since people like you and Matthias and Shriram obviously *can* write
> this kind of code, I'm curious what your strategies are.

That's a hard question.  It never occurred to me to remember or
document the process of creating the code.  I'll try to think about it.

I've been thinking it would be really interesting to give the same
problem to two different groups of people that have different opinions
on the static/dynamic thing and see how the approaches differ.  The
problem would have to be meaty enough to highlight the differences.


Reply eval.apply (328) 6/22/2006 10:44:04 PM

Vesa Karvonen wrote:
> In comp.lang.functional Anton van Straaten <anton@appsolutions.com> wrote:
> [...]
>
> This static vs dynamic type thing reminds me of one article written by
> Bjarne Stroustrup where he notes that "Object-Oriented" has become a
> synonym for "good".  More precisely, it seems to me that both camps
> (static & dynamic) think that "typed" is a synonym for having
> "well-defined semantics" or being "safe" and therefore feel the need
> to be able to speak of their language as "typed" whether or not it
> makes sense.

I reject this comparison.  There's much more to it than that.  The point
is that the reasoning which programmers perform when working with a
program in a latently-typed language bears many close similarities to
the purpose and behavior of type systems.

This isn't an attempt to jump on any bandwagons, it's an attempt to
characterize what is actually happening in real programs and with real
programmers.  I'm relating that activity to type systems because that is
what it most closely relates to.

Usually, formal type theory ignores such things, because of course
what's in the programmer's head is outside the domain of the formal
definition of an untyped language.  But all that means is that formal
type theory can't account for the entirety of what's happening in the
case of programs in untyped languages.

Unless you can provide some alternate theory of the subject that's
better than what I'm offering, it's not sufficient to complain "but that
goes beyond (static/syntactic) type theory".  Yes, it goes beyond it.
There are reasons to connect it to type theory, and if you can't see
those reasons, you need to look more closely at what programmers
actually do.

> I agree.  I think that instead of "statically typed" we should say
> "typed" and instead of "(dynamically|latently) typed" we should say
> "untyped".

The problem with "untyped" is that there are obvious differences in
typing behavior between the untyped lambda calculus and, say, a language
like Scheme (and many others).  Like all latently-typed languages,
Scheme includes, in the language, mechanisms to tag values in a way that
supports checks which help the programmer to ensure that the program's
behavior matches the latent type scheme that the programmer has in mind.
See my other recent reply to Marshall for a more detailed explanation
of what I mean.

I'm suggesting that if a language classifies and tags values in a way
that supports the programmer in static reasoning about the behavior of
terms, that calling it "untyped" does not capture the entire picture,
even if it's technically accurate in a restricted sense (i.e. in the
sense that terms don't have static types that are known within the
language).

Let me come at this from another direction: what do you call the
classifications into number, string, vector etc. that a language like
Scheme does?  And when someone writes a program which includes the
following lines, how would you characterize the contents of the comment:

; third : integer -> integer
(define (third n) (quotient n 3))

In my experience, answering these questions without using the word
"type" results in a lot of silliness.  And if you do use "type", then
you're going to have to adjust the rest of your position significantly.

>>In a statically-checked language, people tend to confuse automated
>>static checking with the existence of types, because they're thinking in
>>a strictly formal sense: they're restricting their world view to what
>>they see "within" the language.
>
>
> That is not unreasonable.  You see, you can't have types unless you
> have a type system.  Types without a type system are like answers
> without questions - it just doesn't make any sense.

The first point I was making is that *automated* checking has very
little to do with anything, and conflating static types with automated
checking tends to lead to a lot of confusion on both sides of the
static/dynamic fence.

But as to your point, latently typed languages have informal type
systems.  Show me a latently typed language or program, and I can tell
you a lot about its type system or type scheme.

Soft type inferencers demonstrate this by actually defining a type
system and inferring type schemes for programs.  That's a challenging
thing for an automated tool to do, but programmers routinely perform the
same sort of activity on an informal basis.

>>But a program as seen by the programmer has types: the programmer
>>performs (static) type inference when reasoning about the program, and
>>debugs those inferences when debugging the program, finally ending up
>>with a program which has a perfectly good type scheme.  It may be
>>messy compared to say an HM type scheme, and it's usually not proved to
>>be perfect, but that again is an orthogonal issue.
>
>
> There is a huge hole in your argument above.  Types really do not make
> sense without a type system.  To claim that a program has a type
> scheme, you must first specify the type system.  Otherwise it just
> doesn't make any sense.

Again, the type system is informal.  What you're essentially saying is
that only things that are formally defined make sense.  But you can't
wish dynamically-checked languages out of existence.  So again, how
would you characterize these issues in dynamically-checked languages?

Saying that it's just a matter of well-defined semantics doesn't do
anything to address the details of what's going on.  I'm asking for a
more specific account than that.

>>Mathematicians operated for thousands of years without automated
>>checking of proofs, so you can't argue that because a
>>dynamically-checked program hasn't had its type scheme proved correct,
>>that it somehow doesn't have types.  That would be a bit like arguing
>>that we didn't have Math until automated theorem provers came along.
>
>
> No - not at all.  First of all, mathematics has matured quite a bit
> since the early days.  I'm sure you've heard of the axiomatic method.
> However, what you are missing is that to prove that your program has
> types, you first need to specify a type system.  Similarly, to prove
> something in math you start by specifying [fill in the rest].

I agree, to make the comparison perfect, you'd need to define a type
system.  But that's been done in various cases.  So is your complaint
simply that most programmers are working with informal type systems?

However, I think that you want to suggest that those programmers are not
working with type systems at all.

This reminds me of a comedy skit which parodied the transparency of
Superman's secret identity: Clark Kent is standing in front of Lois Lane
and removes his glasses for some reason.  Lois looks confused and says
"where did Clark go?"  Clark quickly puts his glasses back on, and Lois
breathes a sigh of relief, "Oh, there you are, Clark".

The problem we're dealing with in this case is that anything that's not
formally defined is essentially claimed to not exist.  It's lucky that
this isn't really the case, otherwise much of the world around us would
vanish in a puff of formalist skepticism.

We're discussing systems that operate on an informal basis: in this
case, the reasoning about the classification of values which flow
through terms in a dynamically-checked language.  If you can produce a
useful formal model of those systems that doesn't omit significant
information, that's great, and I'm all ears.

However, claiming that e.g. using a universal type is a complete model
of what's happening misses the point: it doesn't account at all for the
reasoning process I've just described.

>>1. "Untyped" is really quite a misleading term, unless you're talking
>>about something like the untyped lambda calculus.  That, I will agree,
>>can reasonably be called untyped.
>
>
> Untyped is not misleading.  "Typed" is not a synonym for "safe" or
> "having well-defined semantics".

Again, your two suggested replacements don't come close to capturing
what I'm talking about.  Without better alternatives, "type" is the
closest appropriate term.  I'm qualifying it with the term "latent",
precisely to indicate that I'm not talking about formally-defined types.

I'm open to alternative terminology or ways of characterizing this, but
they need to address issues that exist outside the boundaries of formal
type systems, so simply applying terms from formal type theory is not
usually sufficient.

>>So, will y'all just switch from using "dynamically typed" to "latently
>>typed"
>
>
> I won't (use "latently typed").  At least not without further
> qualification.

This and my other recent post give a fair amount of qualification, so
let me know if you need anything else to be convinced. :)

But to be fair, I'll start using "untyped" if you can come up with a
satisfactory answer to the two questions I asked above, just before I
used the word "silliness".

Anton


Rob Thorpe wrote:
>>So, will y'all just switch from using "dynamically typed" to "latently
>>typed", and stop talking about any real programs in real programming
>>languages as being "untyped" or "type-free", unless you really are
>>talking about situations in which human reasoning doesn't come into
>>the whole issue.
>
>
> I agree with most of what you say except regarding "untyped".
>
> In machine language or most assembly the type of a variable is
> something held only in the mind of the programmer writing it, and
> nowhere else.  In latently typed languages though the programmer can
> ask what they type of a particular value is.  There is a vast
> difference to writing code in the latter kind of language to writing
> code in assembly.

The distinction you describe is pretty much exactly what I was getting
at.  I may have put it poorly.

> I would suggest that at least assembly should be referred to as
> "untyped".

Yes, I agree.

While we're talking about "untyped", I want to expand on something I
wrote in a recent reply to Vesa Karvonen: I accept that "untyped" has a
technical definition which means that a language doesn't statically
assign types to terms.  But this doesn't capture the distinction between
"truly" untyped languages, and languages which tag their values to
support the programmer's ability to think in terms of latent types.

The point of avoiding the "untyped" label is not because it's not
technically accurate (in a limited sense) -- it's because in the absence
of other information, it misses out on important aspects of the nature
of latently typed languages.  My reply to Vesa goes into more detail
about this.

Anton


Andreas Rossberg wrote:
> Rob Warnock wrote:
>
>>
>> Here's what the Scheme Standard has to say:
>>
>>     http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-4.html
>>     1.1  Semantics
>>     ...
>>     Scheme has latent as opposed to manifest types. Types are
>>     associated with values (also called objects) rather than with
>>     variables.
>>     (Some authors refer to languages with latent types as weakly typed
>>     or dynamically typed languages.) Other languages with latent types
>>     are APL, Snobol, and other dialects of Lisp. Languages with manifest
>>     types (sometimes referred to as strongly typed or statically typed
>>     languages) include Algol 60, Pascal, and C.
>
>
> Maybe this is the original source of the myth that static typing is all
> about assigning types to variables...
>
> With all my respect to the Scheme people, I'm afraid this paragraph is
> pretty off, no matter where you stand. Besides the issue just mentioned
> it equates "manifest" with static types. I understand "manifest" to mean
> "explicit in code", which of course is nonsense - static typing does not
> require explicit types. Also, I never heard "weakly typed" used in the
> way they suggest - in my book, C is a weakly typed language (= typed,
> but grossly unsound).

That text goes back at least 20 years, to R3RS in 1986, and possibly
earlier, so part of what it represents is simply changing use of
terminology, combined with an attempt to put Scheme in context relative
to multiple languages and terminology usages.  The fact that we're still
discussing this now, and haven't settled on terminology acceptable to
all sides, illustrates the problem.

The Scheme report doesn't actually say anything about how latent types
relate to static types, in the way that I've recently characterized the
relationship here.  However, I thought this was a good place to explain
how I got to where I am on this subject.

There's been a lot of work done on soft type inference in Scheme, and
other kinds of type analysis.  The Scheme report talks about latent
types as meaning "types are associated with values".  While this might
cause some teeth-grinding amongst type theorists, it's meaningful enough
when you look at simple cases: cases where the type of a term is an
exact match for the type tags of the values that flow through that term.

When you get to more complex cases, though, most type inferencers for
Scheme assign traditional static-style types to terms.  If you think
about it from that perspective, it's a natural connection to make that
what the inferencer is doing is recovering types that are latent in the
source.

Once that connection is made, it's obvious that the tags associated with
values are not the whole story: that the conformance of one or more
values to a "latent type" may be checked by a series of tag checks, in
different parts of a program (i.e. before, during and after the
expression in question is evaluated).  I gave a more detailed
description of how latent types relate to tags in an earlier reply to
Marshall (Spight, not Joe).

Anton


Joe Marshall wrote:
>
> That's the important point:  I want to run broken code.

I want to make sure I understand. I can think of several things
you might mean by this. It could be:
1) I want to run my program, even though I know parts of it
are broken, because I think there are parts that are not broken
and I want to try them out.
2) I want to run my program, even though it is broken, and I
want to run right up to a broken part and trap there, so I can
use the runtime facilities of the language to inspect what's
going on.

> I want to run
> as much of the working fragments as I can, and I want a 'safety net' to
> prevent me from performing undefined operations, but I want the safety
> net to catch me at the *last* possible moment.

This statement is interesting, because the conventional wisdom (at
least as I'm used to hearing it) is that it is best to catch bugs
at the *first* possible moment. But I think maybe we're talking
about different continua here. The last last last possible moment
is after the software has shipped to the customer, and I'm pretty
sure that's not what you mean. I think maybe you mean something
more like 2) above.

Marshall



Timo Stamm wrote:
>
> This is actually one of the most interesting threads I have read in a
> long time. If you ignore the evangelism, there is a lot if high-quality
> information and first-hand experience you couldn't find in a dozen books.

Hear hear! This is an *excellent* thread. The evangelism is at
rock-bottom, and the open exploration of other people's way of thinking
is at what looks to me like an all-time high.

Marshall



Anton van Straaten wrote:
> Vesa Karvonen wrote:
> >
> > This static vs dynamic type thing reminds me of one article written by
> > Bjarne Stroustrup where he notes that "Object-Oriented" has become a
> > synonym for "good".  More precisely, it seems to me that both camps
> > (static & dynamic) think that "typed" is a synonym for having
> > "well-defined semantics" or being "safe" and therefore feel the need
> > to be able to speak of their language as "typed" whether or not it
> > makes sense.
>
> I reject this comparison.  There's much more to it than that.

I agree that there's more to it than that.  I also agree, however, that
Vesa's observation is true, and is a big part of the reason why it's
difficult to discuss this topic.  I don't recall who said what at this
point, but earlier today someone else posted -- in this same thread --
the idea that static type "advocates" want to classify some languages as
untyped in order to put them in the same category as assembly language
programming.  That's something I've never seen, and I think it's far
from the goal of pretty much anyone; but clearly, *someone* was
concerned about it.  I don't know if much can be done to clarify this
rhetorical problem, but it does exist.

The *other* bit that's been brought up in this thread is that the word
"type" is just familiar and comfortable for programmers working in
dynamically typed languages, and that they don't want to change their
vocabulary.

The *third* thing that's brought up is that there is a (to me, somewhat
vague) conception going around that the two really ARE varieties of the
same thing.  I'd like to pin this down more, and I hope we get there,
but for the time being I believe that this impression is incorrect.  At
the very least, I haven't seen a good way to state any kind of common
definition that withstands scrutiny.  There is always an intuitive word
involved somewhere which serves as an escape hatch for the author to
retain some ability to make a judgement call, and that of course
sabotages the definition.  So far, that word has varied through all of
"type", "type error", "verify", and perhaps others... but I've never
seen anything that allows someone to identify some universal concept of
typing (or even the phrase "dynamic typing" in the first place) in a way
that doesn't appeal to intuition.

> The point
> is that the reasoning which programmers perform when working with a
> program in a latently-typed language bears many close similarities to
> the purpose and behavior of type systems.

Undoubtedly, some programmers sometimes perform reasoning about their
programs which could also be performed by a static type system.  This is
fairly natural, since static type systems specifically perform tractable
analyses of programs (Pierce even uses the word "tractable" in the
definition of a type system), and human beings are often (though not
always) best-served by trying to solve tractable problems as well.

> There are reasons to connect
> it to type theory, and if you can't see those reasons, you need to
> look more closely at what programmers actually do.

Let me pipe up, then, as saying that I can't see those reasons; or at
least, if I am indeed seeing the same reasons that everyone else is,
then I am unconvinced by them that there's any kind of rigorous
connection at all.

> I'm suggesting that if a language classifies and tags values in a way
> that supports the programmer in static reasoning about the behavior of
> terms, that calling it "untyped" does not capture the entire picture,
> even if it's technically accurate in a restricted sense (i.e. in the
> sense that terms don't have static types that are known within the
> language).

It is, nevertheless, quite appropriate to call the language "untyped" if
I am considering static type systems.  I seriously doubt that this usage
in any way misleads anyone into assuming the absence of any mental
processes on the part of the programmer.  I hope you agree.  If not,
then I think you significantly underestimate a large category of people.

> The first point I was making is that *automated* checking has very
> little to do with anything, and conflating static types with automated
> checking tends to lead to a lot of confusion on both sides of the
> static/dynamic fence.

I couldn't disagree more.  Rather, when you're talking about static
types (or just "types" in most research literature that I've seen), then
the realm of discussion is specifically defined to be the very set of
errors that are automatically caught and flagged by the language
translator.  I suppose that it is possible to have an unimplemented type
system, but it would be unimplemented only because someone hasn't felt
the need nor gotten around to it.  Being implementABLE is a crucial part
of the definition of a static type system.

I am beginning to suspect that you're making the converse of the error I
made earlier in the thread.  That is, you may be saying things regarding
the psychological processes of programmers and such that make sense when
discussing dynamic types, and in any case I haven't seen any kind of
definition of dynamic types that is more acceptable yet; but it's
completely irrelevant to static types.  Static types are not fuzzy -- if
they were fuzzy, they would cease to be static types -- and they are not
a phenomenon of psychology.  To try to redefine static types in this way
not only ignores the very widely accepted basis of an entire field of
existing literature, but also leads to false ideas such as that there is
some specific definable set of problems that type systems are meant to
solve.

> I agree, to make the comparison perfect, you'd need to define a type
> system.  But that's been done in various cases.

I don't think that has been done, in the case of dynamic types.  It has
been done for static types, but much of what you're saying here is in
contradiction to the definition of a type system in that sense of the
word.

> The problem we're dealing with in this case is that anything that's not
> formally defined is essentially claimed to not exist.

I see it as quite reasonable when there's an effort by several
participants in this thread to either imply or say outright that static
type systems and dynamic type systems are variations of something
generally called a "type system", and given that static type systems are
quite formally defined, that we'd want to see a formal definition for a
dynamic type system before accepting the proposition that they are of a
kind with each other.  So far, all the attempts I've seen to define a
dynamic type system seem to reduce to just saying that there is a well-
defined semantics for the language.

I believe that's unacceptable for several reasons, but the most
significant of them is this.  It's not reasonable to ask anyone to
accept that static type systems gain their essential "type system-ness"
from the idea of having well-defined semantics.  From the perspective of
a statically typed language, this looks like a group of people getting
together and deciding that the real "essence" of what it means to be a
type system is... and then naming something that's so completely non-
essential that we don't generally even mention it in lists of the
benefits of static types, because we have already assumed that it's true
of all languages except C, C++, and assembly language.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation


Anton van Straaten wrote:
> Marshall wrote:
> > Can you be more explicit about what "latent types" means?
> > I'm sorry to say it's not at all natural or intuitive to me.
> > Are you referring to the types in the programmers head,
> > or the ones at runtime, or what?
>
> Sorry, that was a huge omission.  (What I get for posting at 3:30am.)

Thanks for the excellent followup.

> The short answer is that I'm most directly referring to "the types in

In the database theory world, we speak of three levels: conceptual,
logical, physical. In a dbms, these might roughly be compared to
business entities described in a requirements doc, the tables and
columns in the dbms, and the b-tree indices the dbms uses for
performance.

So when you say "latent types", I think "conceptual types."

It sounds like the Haskell and Lisp programmers are using conceptual
types (as best I can tell.) And also, the
conceptual/latent types are not actually a property of the
program; they are a property of the programmer's mental
model of the program.

It seems we have languages:
with or without static analysis
with or without runtime type information (RTTI or "tags")
with or without (runtime) safety
with or without explicit type annotations
with or without type inference

Wow. And I don't think that's a complete list, either.

I would be happy to abandon "strong/weak" as terminology
because I can't pin those terms down. (It's not clear what

> A more complete informal summary is as follows:
>
> Languages with latent type systems typically don't include type
> declarations in the source code of programs.  The "static" type scheme
> of a given program in such a language is thus latent, in the English
> dictionary sense of the word, of something that is present but
> undeveloped.  Terms in the program may be considered as having static
> types, and it is possible to infer those types, but it isn't necessarily
> easy to do so automatically, and there are usually many possible static
> type schemes that can be assigned to a given program.
>
> Programmers infer and reason about these latent types while they're
> writing or reading programs.  Latent types become manifest when a

Uh, oh, a new term, "manifest." Should I worry about that?

> (As has already been noted, this definition may seem at odds with the
> definition given in the Scheme report, R5RS, but I'll address that in a
> separate post.)
>
> There's a close connection between latent types in the sense I've
> described, and the "tagged values" present at runtime.  However, as type
> theorists will tell you, the tags used to tag values at runtime, as e.g.
> a number or a string or a FooBar object, are not the same thing as the
> sort of types which statically-typed languages have.
>
> A simple example of the distinction can be seen in the type of a
> function.  Using Javascript as a lingua franca:
>
>    function timestwo(x) { return x * 2 }
>
> In a statically-typed language, the type of a function like this might
> be something like "number -> number", which tells us three things: that
> timestwo is a function; that it accepts a number argument; and that it
> returns a number result.
>
> But if we ask Javascript what it thinks the type of timestwo is, by
> evaluating "typeof timestwo", it returns "function".  That's because the
> value bound to timestwo has a tag associated with it which says, in
> effect, "this value is a function".

Well, darn. It strikes me that that's just a decision the language
designers made, *not* to record complete RTTI. (Is it going to be
claimed that there is an *advantage* to having only incomplete RTTI?
It is a serious question.)
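
For what it's worth, the incompleteness is easy to see directly; a
quick sketch (my own function names, purely illustrative):

```javascript
// Two functions with very different latent types...
function timestwo(x) { return x * 2; }        // number -> number
function greet(name) { return "hi " + name; } // string -> string

// ...but the runtime tag records only "function" for both:
console.log(typeof timestwo);  // prints "function"
console.log(typeof greet);     // prints "function"
// The argument and result types - the informative part of the
// signature - are not recorded in the tag at all.
```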

> But "function" is not a useful type.  Why not?  Because if all you know
> is that timestwo is a function, then you have no idea what an expression
> like "timestwo(foo)" means.  You couldn't write working programs, or
> read them, if all you knew about functions was that they were functions.
>   As a type, "function" is incomplete.

Yes, function is a parameterized type, and they've left out the
parameter values.

> By my definition, though, the latent type of timestwo is "number ->
> number".  Any programmer looking at the function can figure out that
> this is its type, and programmers do exactly that when reasoning about a
> program.

Gotcha.

> (Aside: technically, you can pass timestwo something other than a
> number, but then you get NaN back, which is usually not much use except
> to generate errors.  I'll ignore that here; latent typing requires being
> less rigorous about some of these issues.)
>
> So, where do tagged values fit into this?  Tags help to check types at
> runtime, but that doesn't mean that there's a 1:1 correspondence between
> tags and types.  For example, when an expression such as "timestwo(5) *
> 3" is evaluated, three checks occur that are relevant to the type of
> timestwo:
>
> 1. Before the function call takes place, a check ensures that timestwo
> is a function.
>
> 2. Before the multiplication in "x * 2", a check ensures that x is a number.
>
> 3. When timestwo returns, before the subsequent multiplication by 3, a
> check ensures that the return type of timestwo is a number.
>
> These three checks correspond to the three pieces of information
> contained in the function type signature "number -> number".
>
> However, these dynamic checks still don't actually tell us the type of a
> function.  All they do is check that in a particular case, the values
> involved are compatible with the type of the function.  In many cases,
> the checks may infer a signature that's either more or less specific
> than the function's type, or they may infer an incomplete signature --
> e.g., the return type doesn't need to be checked when evaluating "arr[i]
> = timestwo(5)".
>
> I used a function just as an example.  There are many other cases where
> a value's tag doesn't match the static (or latent) type of the terms
> through which it flows.  A simple example is an expression such as:
>
>      (flag ? 5 : "foo")
>
> Here, the latent type of this expression could be described as "number |
> string".  There won't be a runtime tag anywhere which represents that
> type, though, since the language implementation never deals with the
> actual type of expressions, except in those degenerate cases where the
> type is so simple that it happens to be a 1:1 match to the corresponding
> tag.  It's only the programmer that "knows" that this expression has
> that type.
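
The three checks described in the quoted passage can be simulated
explicitly; a sketch with a hypothetical helper (note that for check 2,
real Javascript silently yields NaN rather than throwing):

```javascript
function timestwo(x) { return x * 2; }

// Emulating the dynamic checks behind "timestwo(5) * 3":
function checkedCall(f, arg) {
  // check 1: before the call, f must carry the "function" tag
  if (typeof f !== "function") throw new TypeError("not a function");
  // check 2: before "x * 2", the argument must carry the "number" tag
  // (real JS doesn't throw here - it just produces NaN)
  if (typeof arg !== "number") throw new TypeError("not a number");
  const result = f(arg);
  // check 3: before the subsequent "* 3", the result must be a number
  if (typeof result !== "number") throw new TypeError("bad result");
  return result;
}

console.log(checkedCall(timestwo, 5) * 3);  // prints 30
```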

Hmmm. Another place where the static type isn't the same thing as
the runtime type occurs in languages with subtyping.

Question: if a language *does* record complete RTTI, and the
language does *not* have subtyping, could we then say that
the runtime type information *is* the same as the static type?

Marshall



Joe Marshall wrote:
>
> That's a hard question.  It never occurred to me to remember or
> document the process of creating the code.  I'll try to think about it.
>
> I've been thinking it would be really interesting to give the same
> problem to two different groups of people that have different opinions
> on the static/dynamic thing and see how the approaches differ.  The
> problem would have to be meaty enough to highlight the differences.

Yes. Actually it would be best to do this with observers who are,
say, cognitive psychologists. The inner monologue we experience
is *not* the same thing as our actual cognitive process; sometimes
they can be quite different.

Marshall


Reply marshall.spight (580) 6/23/2006 2:39:03 AM

Chris Smith <cdsmith@twu.net> wrote:
> I see it as quite reasonable when there's an effort by several
> participants in this thread to either imply or say outright that static
> type systems and dynamic type systems are variations of something
> generally called a "type system" [...]

I didn't say that right.  Obviously, no one is making all that great an
EFFORT to say anything.  Typing is not too difficult, after all.  What I
meant is that there's an argument being made to that effect.

--
Chris Smith - Lead Software Developer / Technical Trainer
MindIQ Corporation


Chris Smith wrote:
> I don't recall who said what at this
> point, but earlier today someone else posted -- in this same thread --
> the idea that static type "advocates" want to classify some languages as
> untyped in order to put them in the same category as assembly language
> programming.  That's something I've never seen, and I think it's far
> from the goal of pretty much anyone; but clearly, *someone* was
> concerned about it.  I don't know if much can be done to clarify this
> rhetorical problem, but it does exist.

For the record, I'm not concerned about that problem as such.  However,
I do think that characterizations of dynamically typed languages from a
static type perspective tend to oversimplify, usually because they
ignore the informal aspects which static type systems don't capture.

Terminology is a big part of this, so there are valid reasons to be
careful about how static type terminology and concepts are applied to
languages which lack formally defined static type systems.

> The *other* bit that's been brought up in this thread is that the word
> "type" is just familiar and comfortable for programmers working in
> dynamically typed languages, and that they don't want to change their
> vocabulary.

What I'm suggesting is actually a kind of bridge between the two
positions.  The dynamically typed programmer tends to think in terms of
values having types, rather than variables.  What I'm pointing out is
that even those programmers reason about something much more like static
types than they might realize; and that there's a connection between
that reasoning and static types, and also a connection to the tags
associated with values.

If you wanted to take the word "type" and have it mean something
reasonably consistent between the static and dynamic camps, what I'm
suggesting at least points in that direction.  Obviously, nothing in the
dynamic camp is perfectly isomorphic to a real static type, which is why
I'm qualifying the term as "latent type", and attempting to connect it
to both static types and to tags.

> The *third* thing that's brought up is that there is a (to me, somewhat
> vague) conception going around that the two really ARE varieties of the
> same thing.  I'd like to pin this down more, and I hope we get there,
> but for the time being I believe that this impression is incorrect.  At
> the very least, I haven't seen a good way to state any kind of common
> definition that withstands scrutiny.  There is always an intuitive word
> involved somewhere which serves as an escape hatch for the author to
> retain some ability to make a judgement call, and that of course
> sabotages the definition.  So far, that word has varied through all of
> "type", "type error", "verify", and perhaps others... but I've never
> seen anything that allows someone to identify some universal concept of
> typing (or even the phrase "dynamic typing" in the first place) in a way
> that doesn't appeal to intuition.

It's obviously going to be difficult to formally pin down something that
is fundamentally informal.  It's fundamentally informal because if
reasoning about the static properties of terms in DT languages were
formalized, it would essentially be something like a formal type system.

However, there are some pretty concrete things we can look at.  One of
them, which as I've mentioned elsewhere is part of what led me to my
position, is to look at what a soft type inferencer does.  It takes a
program in a dynamically typed language, and infers a static type scheme
for it (obviously, it also defines an appropriate type system for the
language.)  This has been done in both Scheme and Erlang, for example.

How do you account for such a feat in the absence of something like
latent types?  If there's no static type-like information already
present in the program, how is it possible to assign a static type
scheme to a program without dramatically modifying its source?

I think it's reasonable to look at a situation like that and conclude
that even DT programs contain information that corresponds to types.
Sure, it's informal, and sure, it's usually messy compared to an
explicitly defined equivalent.  But the point is that there is
"something" there that looks so much like static types that it can be
identified and formalized.
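
To make the idea less abstract, here's a toy static inference over a
miniature expression language (entirely my own construction, and far
simpler than a real soft typer for Scheme or Erlang, but the principle
is the same: types are recovered from the program text alone):

```javascript
// Expressions: {op:"num", v} | {op:"str", v} | {op:"mul", l, r}
function infer(e) {
  switch (e.op) {
    case "num": return "number";
    case "str": return "string";
    case "mul": {
      // multiplication requires number operands and yields a number
      if (infer(e.l) !== "number" || infer(e.r) !== "number")
        return "error";
      return "number";
    }
  }
}

// Static inference, no evaluation involved:
console.log(infer({ op: "mul",
                    l: { op: "num", v: 2 },
                    r: { op: "num", v: 3 } }));  // prints "number"
```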

> Undoubtedly, some programmers sometimes perform reasoning about their
> programs which could also be performed by a static type system.

I think that's a severe understatement.  Programmers always reason about
things like the types of arguments, the types of variables, the return
types of functions, and the types of expressions.  They may not do
whole-program inference and proof in the way that a static type system
does, but they do it locally, all the time, every time.

BTW, I notice you didn't answer any of the significant questions which I
posed to Vesa.  So let me pose one directly to you: how should I rewrite
the first sentence in the preceding paragraph to avoid appealing to an
admittedly informal notion of type?  Note, also, that I'm using the word
in the sense of static properties, not in the sense of tagged values.

>>There are reasons to connect
>>it to type theory, and if you can't see those reasons, you need to be
>
>
> Let me pipe up, then, as saying that I can't see those reasons; or at
> least, if I am indeed seeing the same reasons that everyone else is,
> then I am unconvinced by them that there's any kind of rigorous
> connection at all.

For now, I'll stand on what I've written above.  When I see if or how
that doesn't convince you, I can go further.

> It is, nevertheless, quite appropriate to call the language "untyped" if
> I am considering static type systems.

I agree that in the narrow context of considering to what extent a
dynamically typed language has a formal static type system, you can call
it untyped.  However, what that essentially says is that formal type
theory doesn't have the tools to deal with that language, and you can't
go much further than that.  As long as that's what you mean by untyped,
I'm OK with it.

> I seriously doubt that this usage
> in any way misleads anyone into assuming the absence of any mental
> processes on the part of the programmer.  I hope you agree.

I didn't suggest otherwise (or didn't mean to).  However, the term
"untyped" does tend to lead to confusion, to a lack of recognition of
the significance of all the static information in a DT program that is
outside the bounds of a formal type system, and the way that runtime tag
checks relate to that static information.

One misconception that occurs is the assumption that all or most of the
static type information in a statically-typed program is essentially
nonexistent in a dynamically-typed program, or at least is no longer
statically present.  That can easily be demonstrated to be false, of
course, and I'm not arguing that experts usually make this mistake.

> If not,
> then I think you significantly underestimate a large category of people.

If you think there's no issue here, I think you significantly
overestimate a large category of people.  Let's declare that line of
argument a draw.

>>The first point I was making is that *automated* checking has very
>>little to do with anything, and conflating static types with automated
>>checking tends to lead to a lot of confusion on both sides of the
>>static/dynamic fence.
>
>
> I couldn't disagree more.  Rather, when you're talking about static
> types (or just "types" in most research literature that I've seen), then
> the realm of discussion is specifically defined to be the very set of
> errors that are automatically caught and flagged by the language
> translator.  I suppose that it is possible to have an unimplemented type
> system, but it would be unimplemented only because someone hasn't felt
> the need nor gotten around to it.  Being implementABLE is a crucial part
> of the definition of a static type system.

I agree with the latter sentence.  However, it's nevertheless the case
that it's common to confuse "type system" with "compile-time checking".
This doesn't help reasoning in debates like this, where the existence
of type systems in languages that don't have automated static checking
is being examined.

> I am beginning to suspect that you're making the converse of the error I
> made earlier in the thread.  That is, you may be saying things regarding
> the psychological processes of programmers and such that make sense when
> discussing dynamic types, and in any case I haven't seen any kind of
> definition of dynamic types that is more acceptable yet; but it's
> completely irrelevant to static types.  Static types are not fuzzy -- if
> they were fuzzy, they would cease to be static types -- and they are not
> a phenomenon of psychology.  To try to redefine static types in this way
> not only ignores the very widely accepted basis of an entire field of
> existing literature, but also leads to false ideas such as that there is
> some specific definable set of problems that type systems are meant to
> solve.

I'm not trying to redefine static types.  I'm observing that there's a
connection between the static properties of untyped programs, and static
types; and attempting to characterize that connection.

You need to be careful about being overly formalist, considering that in
real programming languages, the type system does have a purpose which
has a connection to informal, fuzzy things in the real world.  If you
were a pure mathematician, you might get away with claiming that type
systems are just a self-contained symbolic game which doesn't need any
connections beyond its formal ruleset.

Think of it like this: the more ambitious a language's type system is,
the fewer uncaptured static properties remain in the code of programs in
that language.  However, there are plenty of languages with rather weak
static type systems.  In those cases, code has more static properties
that aren't captured by the type system.  I'm pointing out that in many
of these cases, those properties resemble types, to the point that it
can make sense to think of them and reason about them as such, applying
the same sort of reasoning that an automated type inferencer applies.

If you disagree, then I'd be interested to hear your answers to the two
questions I posed to Vesa, and the related one I posed to you above,
about what else to call these things.

>>I agree, to make the comparison perfect, you'd need to define a type
>>system.  But that's been done in various cases.
>
>
> I don't think that has been done, in the case of dynamic types.

I was thinking of the type systems designed for soft type inferencers;
as well as those cases where e.g. a statically-typed subset of an
untyped language is defined, as in the case of PreScheme.

But in such cases, you end up with a situation where a program in these
systems, while in some sense statically typed, is also a valid untyped
program.  There's
also nothing to stop someone familiar with such things programming in a
type-aware style - in fact, books like Felleisen et al's "How to Design
Programs" encourage that, recommending that functions be annotated with

;; product : (listof number) -> number

;; copy : N X -> (listof X)

You also see something similar in e.g. many Erlang programs.  In these
cases, reasoning about types is done explicitly by the programmer, and
documented.

What would you call the descriptions in those comments?  Once you tell
me what I should call them other than "type" (or some qualified variant
such as "latent type"), then we can compare terminology and decide which
is more appropriate.
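
For comparison, the same annotation style transfers directly to an
untyped language like Javascript (a sketch; the adaptation of the
HtDP-style comment is mine):

```javascript
// product : (listof number) -> number
// The latent type lives in the comment; the language never checks it.
function product(xs) {
  return xs.reduce(function (acc, x) { return acc * x; }, 1);
}

console.log(product([2, 3, 4]));  // prints 24
```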

> It has
> been done for static types, but much of what you're saying here is in
> contradiction to the definition of a type system in that sense of the
> word.

Which is why I'm using a qualified version of the term.

>>The problem we're dealing with in this case is that anything that's not
>>formally defined is essentially claimed to not exist.
>
>
> I see it as quite reasonable when there's an effort by several
> participants in this thread to either imply or say outright that static
> type systems and dynamic type systems are variations of something
> generally called a "type system", and given that static type systems are
> quite formally defined, that we'd want to see a formal definition for a
> dynamic type system before accepting the proposition that they are of a
> kind with each other.

A complete formal definition of what I'm talking about may be impossible
in principle, because if you could completely formally define it, you'd
have a static type system.

If that makes you throw up your hands, then all you're saying is that
you're unwilling to deal with a very real phenomenon that has obvious
connections to type theory, examples of which I've given above.  That's
your choice, but at the same time, you have to give up exclusive claim
to any variation of the word "type".

Terms are used in a context, and it's perfectly reasonable to call
something a "latent type" or even a "dynamic type" in a certain context
and point out connections between those terms and their cousins (or,
if you insist, their completely unrelated namesakes), static types.

> So far, all the attempts I've seen to define a
> dynamic type system seem to reduce to just saying that there is a well-
> defined semantics for the language.

That's a pretty strong claim, considering you have so far ducked the
most important questions I raised in the post you replied to.

> I believe that's unacceptable for several reasons, but the most
> significant of them is this.  It's not reasonable to ask anyone to
> accept that static type systems gain their essential "type system-ness"
> from the idea of having well-defined semantics.

The definition of static type system is not in question.  However,
realistically, as I pointed out above, you have to acknowledge that type
systems exist in, and are inextricably connected to, a larger, less
formal context.  (At least, you have to acknowledge that if you're
interested in programs that do anything related to the real world.)  And
outside the formally defined borders of static type systems, there are
static properties that bear a pretty clear relationship to types.

Closing your eyes to this and refusing to acknowledge any connection
doesn't achieve anything.  In the absence of some other account of the
phenomena in question, (an informal version of) types turn out to be a
pretty convenient way to deal with the situation.

> From the perspective of
> a statically typed language, this looks like a group of people getting
> together and deciding that the real "essence" of what it means to be a
> type system is...

There's a sense in which one can say that yes, the informal types I'm
referring to have an interesting degree of overlap with static types;
and also that static types do, loosely speaking, have the "purpose" of
formalizing the informal properties I'm talking about.

But I hardly see why such an informal characterization should bother
you.  It doesn't affect the definition of static type.  It's not being
held up as a foundation for type theory.  It's simply a way of dealing
with the realities of programming in a dynamically-checked language.

There are some interesting philosophical issues there, to be sure
(although only if you're willing to stray outside the formal), but you
don't have to worry about those unless you want to.

> and then naming something that's so completely non-
> essential that we don't generally even mention it in lists of the
> benefits of static types, because we have already assumed that it's true
> of all languages except C, C++, and assembly language.

This is based on the assumption that all we're talking about is
"well-defined semantics".  However, there's much more to it than that.
I need to hear your characterization of the properties I've described
before I can respond.

Anton


Marshall wrote:
>>The short answer is that I'm most directly referring to "the types in
>
>
> In the database theory world, we speak of three levels: conceptual,
> logical, physical. In a dbms, these might roughly be compared to
> business entities described in a requirements doc, the tables and
> columns in the dbms, and the b-tree indices the dbms uses for
> performance.
>
> So when you say "latent types", I think "conceptual types."

That sounds plausible, but of course at some point we need to pick a
term and attempt to define it.  What I'm attempting to do with "latent
types" is to point out and emphasize their relationship to static types,
which do have a very clear, formal definition.  Despite some people's
skepticism, that definition gives us a lot of useful stuff that can be
applied to what I'm calling latent types.

> are using conceptual types (as best I can tell.)

Well, a big difference is that the Haskell programmers have language
implementations that are clever enough to tell them, statically, a very
precise type for every term in their program.  Lisp programmers usually
don't have that luxury.  And in the Haskell case, the conceptual type
and the static type match very closely.

There can be differences, e.g. a programmer might have knowledge about
the input to the program or some programmed constraint that Haskell
isn't capable of inferring, that allows them to figure out a more
precise type than Haskell can.  (Whether that type can be expressed in
Haskell's type system is a separate question.)

In that case, you could say that the conceptual type is different than
the inferred static type.  But most of the time, the human is reasoning
about pretty much the same types as the static types that Haskell
infers.  Things would get a bit confusing otherwise.

> And also, the
> conceptual/latent types are not actually a property of the
> program;

That's not exactly true in the Haskell case (or other languages with
static type inference), assuming you accept the static/conceptual
equivalence I've drawn above.

Although static types are often not explicitly written in the program in
such languages, they are unambiguously defined and automatically
inferrable.  They are a static property of the program, even if in many
cases those properties are only implicit with respect to the source
code.  You can ask the language to tell you what the type of any given
term is.

> they are a property of the programmer's mental
> model of the program.

That's more accurate.  In languages with type inference, the programmer
still has to figure out what the implicit types are (or ask the language
to tell her).

You won't get any argument from me that this figuring out of implicit
types in a Haskell program is quite similar to what a Lisp, Scheme, or
Python programmer does.  That's one of the places the connection between
static types and what I call latent types is the strongest.

(However, I think I just heard a sound as though a million type
theorists howled in unison.[*])

[*] most obscure inadvertent pun ever.

> It seems we have languages:
> with or without static analysis
> with or without runtime type information (RTTI or "tags")
> with or without (runtime) safety
> with or without explicit type annotations
> with or without type inference
>
> Wow. And I don't think that's a complete list, either.

Yup.

> I would be happy to abandon "strong/weak" as terminology
> because I can't pin those terms down. (It's not clear what

I wasn't following the discussion earlier, but I agree that strong/weak
don't have strong and unambiguous definitions.

> Uh, oh, a new term, "manifest." Should I worry about that?

Well, people have used the term "manifest type" to refer to a type
that's explicitly apparent in the source, but I wouldn't worry about it.
I just used the term to imply that at some point, the idea of "latent
type" has to be converted to something less latent.  Once you explicitly
identify a type, it's no longer latent to the entity doing the identifying.

>>But if we ask Javascript what it thinks the type of timestwo is, by
>>evaluating "typeof timestwo", it returns "function".  That's because the
>>value bound to timestwo has a tag associated with it which says, in
>>effect, "this value is a function".
>
>
> Well, darn. It strikes me that that's just a decision the language
> designers made, *not* to record complete RTTI.

No, there's more to it.  There's no way for a dynamically-typed language
to figure out that something like the timestwo function I gave has the
type "number -> number" without doing type inference, by examining the
source of the function, at which point it pretty much crosses the line
into being statically typed.

> (Is it going to be claimed that
> there is an *advantage* to having only incomplete RTTI? It is a
> serious question.)

More than an advantage, it's difficult to do it any other way.  Tags are
associated with values.  Types in the type theory sense are associated
with terms in a program.  All sorts of values can flow through a given
term, which means that types can get complicated (even in a nice clean
statically typed language).  The only way to reason about types in that
sense is statically - associating tags with values at runtime doesn't
get you there.

This is the sense in which the static type folk object to the term
"dynamic type" - because the tags/RTTI are not "types" in the type
theory sense.

Latent types as I'm describing them are intended to more closely
correspond to static types, in terms of the way in which they apply to
terms in a program.

> Hmmm. Another place where the static type isn't the same thing as
> the runtime type occurs in languages with subtyping.

Yes.

> Question: if a language *does* record complete RTTI, and the
> language does *not* have subtyping, could we then say that
> the runtime type information *is* the same as the static type?

You'd still have a problem with function types, as I mentioned above,
as well as expressions like the conditional one I gave:

(flag ? 5 : "foo")

The problem is that as the program is running, all it can ever normally
do is tag a value with the tags obtained from the path the program
followed on that run.  So RTTI isn't, by itself, going to determine
anything similar to a static type for such a term, such as "string |
number".

One way to think about "runtime" types is as being equivalent to certain
kinds of leaves on a statically-typed program syntax tree.  In an
expression like "x = 3", an inferring compiler might determine that 3 is
of type "integer".  The tagged values which RTTI uses operate at this
level: the level of individual values.

However, as soon as you start dealing with other kinds of nodes in the
syntax tree -- nodes that don't represent literal values, or compound
nodes (that have children) -- the possibility arises that the type of
the overall expression will be more complex than that of a single value.
At that point, RTTI can't do what static types do.

Even in the simple "x = 3" case, a hypothetical inferring compiler might
notice that elsewhere in the same function, x is treated as a floating
point number, perhaps via an expression like "x = x / 2.0".   According
to the rules of its type system, our language might determine that x has
type "float", as opposed to "number" or "integer".  It might then either
treat the original "3" as a float, or supply a conversion when assigning
it to "x".  (Of course, some languages might just give an error and
force you to be more explicit, but bear with me for the example - it's
after 3:30am again.)

Compare this to the dynamically-typed language: it sees "3" and,
depending on the language, might decide it's a "number" or perhaps an
"integer", and tag it as such.  Either way, x ends up referring to that
tagged value.  So what type is x?  All you can really say, in the RTTI
case, is that x is a number or an integer, depending on the tag the
value has.  There's no way it can figure out that x should be a float at
that point.

Of course, further down when it runs into the code "x = x / 2.0", x
might end up holding a value that's tagged as a float.  But that only
tells you the value of x at some point in time, it doesn't help you with
the static type of x, i.e. a type that either encompasses all possible
values x could have during its lifetime, or alternatively determines how
values assigned to x should be treated (e.g. cast to float).
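In Python, for example (just a sketch of the general point):

```python
x = 3
t_before = type(x).__name__   # "int": the tag on the current value

x = x / 2.0
t_after = type(x).__name__    # "float": a new tag on a new value

# The tags describe values at particular moments; nothing at runtime
# represents "x has type float for its whole lifetime".
print(t_before, t_after)      # prints "int float"
```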

BTW, it's worth noting at this point, since it's implied by the last
paragraph, that a static type is only an approximation to the values
that a term can have during the execution of a program.  Static types
can (often!) be too conservative, describing possible values that a
particular term couldn't actually ever have.  This gives another hint as
to why RTTI can't be equivalent to static types.  It's only ever dealing
with the concrete values right in front of it, it can't see the bigger
static picture.

Anton


Neelakantan Krishnaswami wrote:
> How do you write programs? Specifically, how do you write and debug
> higher-order programs that involve lots of combinators (eg, code
> that's partially CPS-converted, or in state-passing style, and also
> uses maps and folds)?

Writing the types down helps.  (Assuming I'm allowed to call them
types.)  For debugging, something like PLT's contracts help, but
ordinary assertions can be useful too.

I definitely see a benefit to being able to rely on static inference
when dealing with complex higher-order procedures.  But if I find myself
getting lost in higher-orderness in Scheme, I usually consider that a
sign that I'm being too low-level, and that I need some encapsulation of
the abstractions.  E.g. I rely heavily on operators like fold, but once
you start nesting folds, that soon starts to beg for refactoring, and a
bit of naming.
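For example (a Python sketch of the kind of refactoring I mean):

```python
from functools import reduce

rows = [[1, 2, 3], [4, 5], [6]]

# Nested folds in one expression: correct, but soon hard to read.
total = reduce(lambda acc, row: reduce(lambda a, x: a + x, row, acc),
               rows, 0)

# The same computation after a bit of naming:
def row_sum(row):
    return reduce(lambda a, x: a + x, row, 0)

total_named = reduce(lambda acc, row: acc + row_sum(row), rows, 0)

print(total, total_named)  # prints "21 21"
```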

Macros can also help.  You can encapsulate higher-order patterns in a
macro so that you really don't have to think about it much when using
them.  For a nice example of that, look at module or unit systems that
are defined as macros.  You really wouldn't want to express some of
those directly as lambdas.

(Whether you would in Haskell or ML, I'm not sure.  But I'd be suspicious.)

Anton

Reply anton58 (1238) 6/23/2006 8:21:15 AM

David Hopwood wrote:
> Rob Thorpe wrote:
> > David Hopwood wrote:
> >
> >>As far as I can tell, the people who advocate using "typed" and "untyped"
> >>in this way are people who just want to be able to discuss all languages in
> >>a unified terminological framework, and many of them are specifically not
> >>advocates of statically typed languages.
> >
> > Its easy to create a reasonable framework. My earlier posts show simple
> > ways of looking at it that could be further refined, I'm sure there are
> > others who have already done this.
> >
> > The real objection to this was that latently/dynamically typed
> > languages have a place in it.
>
> You seem to very keen to attribute motives to people that are not apparent
> from what they have said.

The term "dynamically typed" is well used and understood.  The term
untyped is generally associated with languages that as you put it "have
no memory safety", it is a pejorative term.  "Latently typed" is not
well used unfortunately, but more descriptive.

Most of the arguments above describe a static type system, then follow
by saying that this is what "type system" should mean, and finish by
saying everything else should be considered untyped. This seems to me
to be an effort to associate dynamically typed languages with this
pejorative term.

> > But some of the advocates of statically
> > typed languages wish to lump these languages together with assembly
> > language as "untyped" in an attempt to label them as unsafe.
>
> A common term for languages which have defined behaviour at run-time is
> "memory safe". For example, "Smalltalk is untyped and memory safe."
> That's not too objectionable, is it?

Memory safety isn't the whole point, it's only half of it.  Typing
itself is the point. Regardless of memory safety, if you do a
calculation in a latently typed language, you can find out the type of
the resulting object.



In comp.lang.functional Anton van Straaten <anton@appsolutions.com> wrote:
[...]
> I reject this comparison.  There's much more to it than that.  The point
> is that the reasoning which programmers perform when working with a
> program in a latently-typed language bears many close similarities to
> the purpose and behavior of type systems.

> This isn't an attempt to jump on any bandwagons, it's an attempt to
> characterize what is actually happening in real programs and with real
> programmers.  I'm relating that activity to type systems because that is
> what it most closely relates to.
[...]

I think that we're finally getting to the bottom of things.  While reading
your responses something became very clear to me: latent-typing and latent-
types are not a property of languages.  Latent-typing, also known as
informal reasoning, is something that all programmers do as a normal part
of programming.  To say that a language is latently-typed is to make a
category mistake, because latent-typing is not a property of languages.

A programmer, working in any language, whether typed or not, performs
informal reasoning.  I think it is fair to say that there is a
correspondence between type theory and such informal reasoning.  The
correspondence is like the correspondence between informal and formal
math.  *But* , informal reasoning (latent-typing) is not a property of
languages.

An example of a form of informal reasoning that (practically) every
programmer does daily is termination analysis.  There are type systems
that guarantee termination, but I think it is fair to say that it is not
yet understood how to make a practical general-purpose language whose
type system would guarantee termination (or at least I'm not aware of such
a language).  It should also be clear that termination analysis need not
be done informally.  Given a program, it may be possible to formally prove
that it terminates.
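For instance, the everyday informal termination argument looks like this (Python used just for illustration):

```python
def sum_list(xs):
    # Informal termination reasoning: every recursive call receives a
    # strictly shorter list, and the empty list is the base case, so
    # the recursion must bottom out.
    if not xs:
        return 0
    return xs[0] + sum_list(xs[1:])

print(sum_list([1, 2, 3, 4]))  # prints 10
```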

I'm now more convinced than ever that "(latently|dynamically)-typed
language" is an oxymoron.  The terminology really needs to be fixed.

-Vesa Karvonen


Anton van Straaten wrote:
>
> Languages with latent type systems typically don't include type
> declarations in the source code of programs.  The "static" type scheme
> of a given program in such a language is thus latent, in the English
> dictionary sense of the word, of something that is present but
> undeveloped.  Terms in the program may be considered as having static
> types, and it is possible to infer those types, but it isn't necessarily
> easy to do so automatically, and there are usually many possible static
> type schemes that can be assigned to a given program.
>
> Programmers infer and reason about these latent types while they're
> writing or reading programs.  Latent types become manifest when a

I very much agree with the observation that every programmer performs
"latent typing" in his head (although Pascal Costanza seems to have
the opposite opinion).

But I also think that "latently typed language" is not a meaningful
characterisation. And for the very same reason! Since any programming
activity involves latent typing - naturally, even in assembler! - it
cannot be attributed to any language in particular, and is hence
useless to distinguish between them. (Even untyped lambda calculus
would not be a counter-example. If you really were to program in it,
you certainly would think along lines like "this function takes two
Church numerals and produces a third one".)
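For instance, that latent reasoning about Church numerals can be carried out with plain untyped lambdas (sketched here in Python, standing in for the untyped lambda calculus):

```python
# Church numerals encoded as plain, untyped lambdas.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

# Decode a Church numeral to an ordinary integer.
to_int = lambda n: n(lambda k: k + 1)(0)

two = succ(succ(zero))
print(to_int(plus(two)(two)))  # prints 4

# Nothing in the encoding records the latent type
# "numeral -> numeral -> numeral" of plus; that lives in our heads.
```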

I hear you when you define latently typed languages as those that
support the programmer's latently typed thinking by providing dynamic
tag checks. But in the very same post (and others) you also explain at
length why these tags are far from being actual types. This seems a bit
contradictory.

As Chris Smith points out, these dynamic checks are basically a
necessity for a well-defined operational semantics. You need them
whenever you have different syntactic classes of values, but lack a
type system to preclude interference. They are just an encoding for
differentiating these syntactic classes. Their connection to types is
rather coincidental.

- Andreas



Andreas Rossberg wrote:
> Chris Uppal wrote:
> >
> > > > It's worth noting, too, that (in some sense) the type of an object
> > > > can change over time[*].
> > >
> > > No. Since a type expresses invariants, this is precisely what may
> > > *not* happen. If certain properties of an object may change then the
> > > type of
> > > the object has to reflect that possibility. Otherwise you cannot
> > > legitimately call it a type.
> >
> > Well, it seems to me that you are /assuming/ a notion of what kinds of
> > logic can be called type (theories), and I don't share your
> > assumptions.  No offence intended.
>
> OK, but can you point me to any literature on type theory that makes a
> different assumption?

'Fraid not.  (I'm not a type theorist -- for all I know there may be lots, but
my suspicion is that they are rare at best.)

But perhaps I shouldn't have used the word theory at all.  What I mean is that
there is one or more logic of type (informal or not -- probably informal) with
respect to which the object in question has changed its categorisation.  If no
existing type /theory/ (as devised by type theorists) can handle that case,
then that is a deficiency in the set of existing theories -- we need newer and
better ones.

But, as a sort of half-way, semi-formal, example: consider the type environment
in a Java runtime.  The JVM does formal type-checking of classfiles as it loads
them.  In most ways that checking is static -- it's treating the bytecode as
program text and doing a static analysis on it before allowing it to run (and
rejecting what it can't prove to be acceptable by its criteria).  However, it
isn't /entirely/ static because the collection of classes varies at runtime in
a (potentially) highly dynamic way.  So it can't really examine the "whole"
text of the program -- indeed there is no such thing.  So it ends up with a
hybrid static/dynamic type system -- it records any assumptions it had to make
in order to find a proof of the acceptability of the new code, and if (sometime
in the future) another class is proposed which violates those assumptions, then
that second class is rejected.

> > I see no reason,
> > even in practise, why a static analysis should not be able to see that
> > the set of acceptable operations (for some definition of acceptable)
> > for some object/value/variable can be different at different times in
> > the execution.
>
> Neither do I. But what is wrong with a mutable reference-to-union type,
> as I suggested? It expresses this perfectly well.

Maybe I misunderstood what you meant by union type.  I took it to mean that the
type analysis didn't "know" which of the two types was applicable, and so would
reject both (or maybe accept both?).  E.g. if at instant A some object, obj,
was in a state where it responds to #aMessage, but not #anotherMessage; and
at instant B it is in a state where it responds to #anotherMessage but not
#aMessage.  In my (internal and informal) type logic, I make the following
judgements:

In code which will be executed at instant A
obj aMessage.                "type correct"
obj anotherMessage.       "type incorrect"

In code which will be executed at instant B
obj aMessage.                 "type incorrect"
obj anotherMessage.        "type correct"

I don't see how a logic with no temporal element can arrive at all four
of those judgements, whatever it means by a union type.

-- chris



Vesa Karvonen wrote:
....
> An example of a form of informal reasoning that (practically) every
> programmer does daily is termination analysis.  There are type systems
> that guarantee termination, but I think that is fair to say that it is not
> yet understood how to make a practical general purpose language, whose
> type system would guarantee termination (or at least I'm not aware of such
> a language).  It should also be clear that termination analysis need not
> be done informally.  Given a program, it may be possible to formally prove
> that it terminates.

To make the halting problem decidable one would have to do one of two
things: Depend on memory size limits, or have a language that really is
less expressive, at a very deep level, than any of the languages
mentioned in the newsgroups header for this message.

A language for which the halting problem is decidable must also be a
language in which it is impossible to simulate an arbitrary Turing
machine (TM). Otherwise, one could decide the notoriously undecidable TM
halting problem by generating a language X program that simulates the TM
and deciding whether the language X program halts.
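To illustrate the premise of that argument, here is a minimal Turing-machine simulator (a Python sketch, not from the original post; any language that can express this much is subject to the reduction):

```python
def run_tm(transitions, tape):
    """Simulate a single-tape Turing machine until it reaches 'halt'."""
    cells = dict(enumerate(tape))
    state, pos = "q0", 0
    while state != "halt":
        sym = cells.get(pos, "_")
        state, write, move = transitions[(state, sym)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A toy machine that flips bits until it hits a blank, then halts:
flip = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("halt", "_", "R"),
}
print(run_tm(flip, "0110"))  # prints "1001_"
```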

One way out might be to depend on the boundedness of physical memory. A
language with a fixed maximum memory size cannot simulate an arbitrary
TM. However, the number of states for a program is so great that a
method that depends on its finiteness, but would not work for an
infinite memory model, is unlikely to be practical.

Patricia


Rene_de_Visser@hotmail.com wrote:
> "Pascal Costanza" <pc@p-cos.net> wrote in message
> news:4fv081F1jh4ifU1@individual.net...
>> A statically type language requires you to think about two models of
>> your program at the same time: the static type model and the dynamic
>> behavioral model. A static type system ensures that these two
>> _different_ (that's important!) perspectives are always in sync. This is
>> especially valuable in settings where you know your domain well and want
>> to rely on feedback by your compiler that you haven't made any mistakes
>> in encoding your knowledge. (A static type system based on type
>> inferencing doesn't essentially change the requirement to think in two
>> models at the same time.)
>
> I think this may be true in your line of research, where you are looking at
> very abstract
> ways of representing algorithms.
>
> I used to use common lisp for exploratory programming, but having read an
> article from another person who used to use common lisp a lot, and later
>
> In the same way that common lisp gives you a good notation for experimenting
> with algorithms, the Haskell type system/notation gives you a good notation
> for experimenting with constraints/structure and meaning.
>
> I think also missing in the discussion of types so far, is the use of types
> to give explicit meanings to values (and variables / parameters) (I would
> argue that in a lot of cases in common lisp these are also there, just
> implicitly). Once something is explicit I find it easier to manipulate and
> reason with.

I cannot really relate to what you're saying here. Where can I find
examples of what you're describing? (Papers, books, ...?)

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/

Reply pc56 (3930) 6/23/2006 11:20:47 AM

Matthias Blume wrote:
> Pascal Costanza <pc@p-cos.net> writes:
>
>> Chris Smith wrote:
>>
>>> While this effort to salvage the term "type error" in dynamic
>>> languages is interesting, I fear it will fail.  Either we'll all
>>> have to admit that "type" in the dynamic sense is a psychological
>>> concept with no precise technical definition (as was at least hinted
>>> by Anton's post earlier, whether intentionally or not) or someone is
>>> going to have to propose a technical meaning that makes sense,
>>> independently of what is meant by "type" in a static system.
>> invoke an operation on values that are not appropriate for this
>> operation.
>>
>> Examples: adding numbers to strings; determining the string-length of
>> a number; applying a function on the wrong number of parameters;
>> applying a non-function; accessing an array with out-of-bound indexes;
>> etc.
>
> Yes, the phrase "runtime type error" is actually a misnomer.  What one
> usually means by that is a situation where the operational semantics
> is "stuck", i.e., where the program, while not yet arrived at what's
> considered a "result", cannot make any progress because the current
> configuration does not match any of the rules of the dynamic
> semantics.
>
> The reason why we call this a "type error" is that such situations are
> precisely the ones we want to statically rule out using sound static
> type systems.  So it is a "type error" in the sense that the static
> semantics was not strong enough to rule it out.
>
>> Sending a message to an object that does not understand that message
>> is a type error. The "message not understood" machinery can be seen
>> either as a way to escape from this type error in case it occurs and
>> allow the program to still do something useful, or to actually remove
>> (some) potential type errors.
>
> I disagree with this.  If the program keeps running in a defined way,
> then it is not what I would call a type error.  It definitely is not
> an error in the sense I described above.

If your view of a running program is that it is a "closed" system, then
you're right. However, maybe there are several layers involved, so what
appears to be a well-defined behavior from the outside may still be
regarded as a type error internally.

A very obvious example of this is when you run a program in a debugger.
There are two levels involved here: the program signals a type error,
but that doesn't mean that the system as a whole is stuck. Instead, the
debugger takes over and offers ways to deal with the type error. The
very same program run without debugging support would indeed simply be
stuck in the same situation.

So to rephrase: It depends on whether you use the "message not
understood" machinery as a way to provide well-defined behavior for the
base level, or rather as a means to deal with an otherwise unanticipated
situation. In the former case it extends the language to remove certain
type errors, in the latter case it provides a kind of debugging facility
(and it indeed may be a good idea to deploy programs with debugging
facilities, and not only use debugging tools at development time).

This is actually related to the notion of reflection, as coined by Brian
C. Smith. In a reflective architecture, you distinguish between various
interpreters, each of which interprets the program at the next level. A
debugger is a program that runs at a different level than a base program
that it debugs. However, the reflective system as a whole is "just" a
single program seen from the outside (with one interpreter that runs the
whole reflective tower). This distinction between the internal and the
external view of a reflective system was already made by Brian Smith.

Pascal



Chris Smith wrote:
> Pascal Costanza <pc@p-cos.net> wrote:
>> invoke an operation on values that are not appropriate for this operation.
>>
>> Examples: adding numbers to strings; determining the string-length of a
>> number; applying a function on the wrong number of parameters; applying
>> a non-function; accessing an array with out-of-bound indexes; etc.
>
> Hmm.  I'm afraid I'm going to be picky here.  I think you need to
> clarify what is meant by "appropriate".

No, I cannot be a lot clearer here. What operations are appropriate for
what values largely depends on the intentions of a programmer. Adding a
number to a string is inappropriate, no matter how a program behaves
when this actually occurs (whether it continues to execute the operation
blindly, throws a continuable exception, or just gets stuck).
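For instance, here is one of those behaviors in a tag-checked language (Python, used only for illustration): the operation is rejected at run time with a catchable exception, regardless of what the programmer intended.

```python
try:
    "foo" + 3  # adding a number to a string
    caught = None
except TypeError as e:
    # The runtime tags don't match what '+' accepts here.
    caught = type(e).__name__

print(caught)  # prints "TypeError"
```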

> If you mean "the operation will
> not complete successfully" as I suspect you do, then we're closer...

No, we're not. You're giving a purely technical definition here, that
may or may not relate to the programmer's (or "designer's")
understanding of the domain.

Pascal



Marshall wrote:

> I am sceptical of the idea that when programming in a dynamically
> typed language one doesn't have to think about both models as well.
> I don't have a good model of the mental process of working
> in a dynamically typed language, but how could that be the case?
> (I'm not asking rhetorically.) Do you then run your program over
> and over, mechanically correcting the code each time you discover
> a type error? In other words, if you're not thinking of the type model,
> are you using the runtime behavior of the program as an assistant,
> the way I use the static analysis of the program as an assistant?

Yes.

> I don't accept the idea about pairing the appropriateness of each
> system according to whether one is doing exploratory programming.
> I do exploratory programming all the time, and I use the static type
> system as an aide in doing so. Rather I think this is just another
> manifestation of the differences in the mental processes between
> static typed programmers and dynamic type programmers, which
> we are beginning to glimpse but which is still mostly unknown.

Probably.

> Oh, and I also want to say that of all the cross-posted mega threads
> on static vs. dynamic typing, this is the best one ever. Most info;
> least flames. Yay us!

Yay! :)

Pascal



Marshall wrote:
> Pascal Costanza wrote:
>> Consider a simple expression like 'a + b': In a dynamically typed
>> language, all I need to have in mind is that the program will attempt to
>> add two numbers. In a statically typed language, I additionally need to
>> know that there must be a guarantee that a and b will always hold numbers.
>
> I still don't really see the difference.
>
> I would not expect that the dynamic programmer will be
> thinking that this code will have two numbers most of the
> time but sometimes not, and fail. I would expect that in both
> static and dynamic, the thought is that that code is adding
> two numbers, with the difference being the static context
> gives one a proof that this is so.

There is a third option: it may be that at the point where I am writing
this code, I simply don't worry yet about whether a and b will always be
numbers. In case something other than numbers pops up, I can then make a
decision about how to proceed from there.

> In this simple example,
> the static case is better, but this is not free, and the cost
> of the static case is evident elsewhere, but maybe not
> illuminated by this example.

Yes, maybe the example is not the best one. This kind of example,
however, occurs quite often when programming in an object-oriented
style, where you don't know yet what objects will and will not respond
to a message / generic function. Even in the example above, it could be
that you can give an appropriate definition for + for objects other than
numbers.
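In Python, for example, '+' can be given an appropriate definition for objects other than numbers (Vec here is just a made-up example class):

```python
class Vec:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        # '+' now has a meaning for Vec objects, not just numbers.
        return Vec(self.x + other.x, self.y + other.y)

v = Vec(1, 2) + Vec(3, 4)
print(v.x, v.y)  # prints "4 6"
```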

Pascal



Vesa Karvonen wrote:
> I think that we're finally getting to the bottom of things.  While reading
> your reponses something became very clear to me: latent-typing and latent-
> types are not a property of languages.  Latent-typing, also known as
> informal reasoning, is something that all programmers do as a normal part
> of programming.  To say that a language is latently-typed is to make a
> category mistake, because latent-typing is not a property of languages.
>
> A programmer, working in any language, whether typed or not, performs
> informal reasoning.  I think that is fair to say that there is a
> correspondence between type theory and such informal reasoning.  The
> correspondence is like the correspondence between informal and formal
> math.  *But* , informal reasoning (latent-typing) is not a property of
> languages.

Well, it's obviously the case that latent types as I've described them
are not part of the usual semantics of dynamically typed languages.  In
other messages, I've mentioned types like "number -> number" which have
no meaning in a dynamically typed language.  You can only write them in
comments (unless you implement some kind of type handling system), and
language implementations aren't aware of such types.

OTOH, a programmer reasoning explicitly about such types, writing them
in comments, and perhaps using assertions to check them has, in a sense,
defined a language.  Having done that, and reasoned about the types
in his program, he manually erases them, "leaving" code written in the
original dynamically-typed language.  You can think of it as though it
were generated code, complete with comments describing types, injected
during the erasure process.

So, to address your category error objection, I would say that latent
typing is a property of latently-typed languages, which are typically
informally-defined supersets of what we know as dynamically-typed languages.

I bet that doesn't make you happy, though.  :D

Still, if that sounds a bit far-fetched, let me start again at ground
level, with a Haskell vs. Scheme example:

let double x = x * 2

vs.:

(define (double x) (* x 2))

Programmers in both languages do informal reasoning to figure out the
type of 'double'.  I'm assuming that the average Haskell programmer
doesn't write out a proof whenever he wants to know the type of a term,
and doesn't have a compiler handy.

But the Haskell programmer's informal reasoning takes place in the
context of a well-defined formal type system.  He knows what the "type
of double" means: the language defines that for him.

The type-aware Scheme programmer doesn't have that luxury: before he can
talk about types, he has to invent a type system, something to give
meaning to an expression such as "number -> number".  Performing that
invention gives him types -- albeit informal types, a.k.a. latent types.

In the Haskell case, the types are a property of the language.  If
you're willing to acknowledge the existence of something like latent
types, what are they a property of?  Just the amorphous informal cloud
which surrounds dynamically-typed languages?  Is that a satisfactory
explanation of these two quite similar examples?

I want to mention two other senses in which latent types become
connected to real languages.  That doesn't make them properties of the
formal semantics of the language, but the connection is a real one at a
different level.

The first is that in a language without a rich formal type system,
informal reasoning outside of the formal type system becomes much more
important.  Conversely, in Haskell, even if you accept the existence of
latent types, they're close enough to static types that it's hardly
necessary to consider them.  This is why latent types are primarily
associated with languages without rich formal type systems.

The second connection is via tags: these are part of the definition of a
dynamically-typed language, and if the programmer is reasoning
explicitly about latent types, tags are a useful tool to help ensure
that assumptions about types aren't violated.  So this is a connection
between a feature in the semantics of the language, and these
extra-linguistic latent types.
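A small sketch of that discipline (Python; the comment-and-assert style is just one way to do it):

```python
# Latent type, written down informally by the programmer:
#   double : number -> number
def double(x):
    # A tag check helps ensure the latent-type assumption isn't violated.
    assert isinstance(x, (int, float)), "double expects a number"
    return x * 2

print(double(21))  # prints 42
```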

> An example of a form of informal reasoning that (practically) every
> programmer does daily is termination analysis.  There are type systems
> that guarantee termination, but I think that is fair to say that it is not
> yet understood how to make a practical general purpose language, whose
> type system would guarantee termination (or at least I'm not aware of such
> a language).  It should also be clear that termination analysis need not
> be done informally.  Given a program, it may be possible to formally prove
> that it terminates.

Right.  And this is partly why talking about latent types, as opposed to
the more general "informal reasoning", makes sense: because latent types
are addressing the same kinds of things that static types can capture.
Type-like things.

> I'm now more convinced than ever that "(latently|dynamically)-typed
> language" is an oxymoron.  The terminology really needs to be fixed.

I agree that fixing is needed.  The challenge is to do it in a way that
accounts for, rather than simply ignores, the many informal correlations
to formal type concepts that exist in dynamically-typed languages.
Otherwise, the situation won't improve.

Anton


Marshall wrote:
> Joe Marshall wrote:
>> That's the important point:  I want to run broken code.
>
> I want to make sure I understand. I can think of several things
> you might mean by this. It could be:
> 1) I want to run my program, even though I know parts of it
> are broken, because I think there are parts that are not broken
> and I want to try them out.
> 2) I want to run my program, even though it is broken, and I
> want to run right up to a broken part and trap there, so I can
> use the runtime facilities of the language to inspect what's
> going on.
>
>
>> I want to run
>> as much of the working fragments as I can, and I want a safety net to
>> prevent me from performing undefined operations, but I want the safety
>> net to catch me at the *last* possible moment.
>
> This statement is interesting, because the conventional wisdom (at
> least as I'm used to hearing it) is that it is best to catch bugs
> at the *first* possible moment. But I think maybe we're talking
> about different continua here. The last last last possible moment
> is after the software has shipped to the customer, and I'm pretty
> sure that's not what you mean. I think maybe you mean something
> more like 2) above.

Nowadays, we have more options wrt what it means to "ship" code. It
could be that your program simply runs as a (web) service to which you
have access even after the customer has started to use the program. See
http://www.paulgraham.com/road.html for a good essay on this idea.

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/


Eliot Miranda wrote:

[me:]
> > Taking Smalltalk /specifically/, there is a definite sense in which it
> > is typeless -- or trivially typed -- in that, in that language, there
> > are no[*] operations which are forbidden[**],
>
> Come on, Chris U.   One has to distinguish an attempt to invoke an
> operation from it being carried out.  There is nothing in Smalltalk to
> stop one attempting to invoke any "operation" on any object.  But one
> can only actually carry out operations on objects that implement them
> (which is important, but avoidable); Smalltalk is in fact
> strongly-typed, but not statically strongly-typed.

What are you doing /here/, Eliot?  This is Javaland; Smalltalk is thatta
way ->

;-)

But this discussion has been all about /whether/ it is ok to apply the notion
of (strong) typing to what runtime-checked languages do.   We all agree that
the checks happen, but the question is whether it is
reasonable/helpful/legitimate to extend the language of static checking to the
dynamic case.  (I'm on the side which says yes, but good points have been made
against it).

The paragraph you quoted doesn't represent most of what I have been saying --
it was just a side-note looking at one small aspect of the issue from a
different angle.

-- chris



Anton van Straaten wrote:

> In that case, you could say that the conceptual type is different than
> the inferred static type.  But most of the time, the human is reasoning
> about pretty much the same types as the static types that Haskell
> infers.  Things would get a bit confusing otherwise.

Or any mechanised or formalised type system, for any language.  If a system
doesn't match pretty closely with at least part of the latent type systems (in
your sense) used by the programmers, then that type system is useless.

(I gather that it took, or maybe is still taking, theorists a while to get to
grips with the informal type logics which were obvious to working OO
programmers.)

-- chris



David Hopwood wrote:

> > But some of the advocates of statically
> > typed languages wish to lump these languages together with assembly
> > language as "untyped" in an attempt to label them as unsafe.
>
> A common term for languages which have defined behaviour at run-time is
> "memory safe". For example, "Smalltalk is untyped and memory safe."
> That's not too objectionable, is it?

I find it too weak, as if to say: "well, ok, it can't actually corrupt memory
as such, but the program logic is still apt to go all over the shop"...

-- chris



Vesa Karvonen wrote:
> In comp.lang.functional Anton van Straaten <anton@appsolutions.com> wrote:
> [...]
>> I reject this comparison.  There's much more to it than that.  The point
>> is that the reasoning which programmers perform when working with a
>> program in a latently-typed language bears many close similarities to
>> the purpose and behavior of type systems.
>
>> This isn't an attempt to jump on any bandwagons, it's an attempt to
>> characterize what is actually happening in real programs and with real
>> programmers.  I'm relating that activity to type systems because that is
>> what it most closely relates to.
> [...]
>
> I think that we're finally getting to the bottom of things.  While reading
> your responses something became very clear to me: latent-typing and latent-
> types are not a property of languages.  Latent-typing, also known as
> informal reasoning, is something that all programmers do as a normal part
> of programming.  To say that a language is latently-typed is to make a
> category mistake, because latent-typing is not a property of languages.

I disagree with you and agree with Anton. Here, it is helpful to
understand the history of Scheme a bit: parts of its design are a
reaction to what Schemers perceived as having failed in Common Lisp (and
other previous Lisp dialects).

One particularly illuminating example is the treatment of nil in Common
Lisp. That value is a very strange beast in Common Lisp because it
stands for several concepts at the same time: most importantly the empty
list and the boolean false value. Its type is also "interesting": it is
both a list and a symbol at the same time. It is also "interesting" that
its quoted value is equivalent to the value nil itself. This means that
the following two forms are equivalent:

(if nil 42 4711)
(if 'nil 42 4711)

Both forms evaluate to 4711.

It's also the case that taking the car or cdr (first or rest) of nil
doesn't give you an error, but simply returns nil as well.

The advantage of this design is that it allows you to express a lot of
code in a very compact way.
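
As a hypothetical illustration of that compactness (a toy Python model of the Common Lisp behavior, not code from the thread): because car and cdr of nil simply return nil, accessors compose without any empty-list checks.

```python
# Toy model (hypothetical) of Common Lisp's nil-punning: NIL is both
# the empty list and boolean false, and car/cdr of NIL return NIL
# instead of signalling an error.
class _Nil:
    def __bool__(self):
        return False      # NIL is false, as in Common Lisp
    def __repr__(self):
        return "NIL"

NIL = _Nil()

def cons(a, d):
    return (a, d)

def car(x):
    return NIL if x is NIL else x[0]

def cdr(x):
    return NIL if x is NIL else x[1]

# Compact code: second() needs no empty-list checks at all.
def second(lst):
    return car(cdr(lst))

print(second(cons(1, cons(2, NIL))))  # 2
print(second(cons(1, NIL)))           # NIL
print(second(NIL))                    # NIL
```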

The disadvantage is that it is mostly impossible to have a typed view of
nil, at least one that clearly disambiguates all the cases. There are
also other examples where Common Lisp conflates different types, and
sometimes only for special cases. [1]

Now compare this with the Scheme specification, especially this section:
http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-6.html#%25_sec_3.2

This clearly deviates strongly from Common Lisp (and other Lisp
dialects). The emphasis here is on a clear separation of all the types
specified in the Scheme standard, without any exception. This is exactly
what makes it straightforward in Scheme to have a latently typed view of
programs, in the sense that Anton describes. So latent typing is a
property that can at least be enabled / supported by the design of a
dynamically typed language.

Pascal

[1] Yet Common Lisp allows you to write beautiful code, more often than
not especially _because_ of these "weird" conflations, but that's a
different topic.

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/


Patricia Shanahan wrote:
> Vesa Karvonen wrote:
> ...
>> An example of a form of informal reasoning that (practically) every
>> programmer does daily is termination analysis.  There are type systems
>> that guarantee termination, but I think it is fair to say that it is
>> not yet understood how to make a practical general purpose language
>> whose type system would guarantee termination (or at least I'm not
>> aware of such a language).  It should also be clear that termination
>> analysis need not be done informally.  Given a program, it may be
>> possible to formally prove that it terminates.
>
> To make the halting problem decidable one would have to do one of two
> things: Depend on memory size limits, or have a language that really is
> less expressive, at a very deep level, than any of the languages
> mentioned in the newsgroups header for this message.

Not quite. See http://en.wikipedia.org/wiki/ACL2
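
Termination proofs of the kind ACL2 mechanizes typically rest on exhibiting a measure that strictly decreases on every recursive call. A hypothetical Python sketch of such an argument (illustrative only, not ACL2 code):

```python
# Hypothetical illustration: gcd terminates because the measure b is a
# non-negative integer that strictly decreases on every recursive call
# (a % b < b when b > 0), so the recursion must reach b == 0.
def gcd(a, b):
    if b == 0:
        return a
    return gcd(b, a % b)   # measure decreases: a % b < b

print(gcd(48, 36))  # 12
```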

Pascal

--
3rd European Lisp Workshop
July 3 - Nantes, France - co-located with ECOOP 2006
http://lisp-ecoop06.bknr.net/

Chris Smith wrote:

[me:]
> > I think we're agreed (you and I anyway, if not everyone in this thread)
> > that we don'