


mathematical/formal foundations of computing?

2 independent issues here:

1. I'm still searching for material re. "the mathematical/formal
foundation[s] of computing" to help move programming away from an
art towards a science.  Patterns seem to be a heuristic step in that
direction?

Here's some text re. the language Joy:
http://www.latrobe.edu.au/philosophy/phimvt/joy/j00ovr.html
> Overview of the language JOY
> Two other functional languages are the combinatory logic of Curry and
> the FP language of Backus. They are not based on the lambda calculus,
> they eliminate abstraction completely and hence do not have to deal
> with substitution and environments. As a result these languages can be
> manipulated using simple algebraic techniques. [...] Backus
> acknowledges a debt to combinatory logic, but his aim was to produce a
> variable free notation that is amenable to simple algebraic
> manipulation by people. Such ease should produce clearer and more
> reliable programs. [...] Paulson remarked that "Programming and pure
> mathematics are difficult to combine into one formal framework". Joy
> attempts this task.

2. the second issue is about how NOT to write tutorials.
The correct way is to chain backwards from the goal.
I.e. put the 'bottom line' first, so the reader knows if he wants
to invest the labour to 'build forward' towards the goal.

I've been reading pages of the details of Joy, without knowing
if & when & how it will claim to allow manipulation of Joy code
in a formal, mathematical way.

Q - does anybody know if Joy can help me reach that goal?

I don't want to learn another language which is a hybrid of Forth, Lisp
& Pop-11 just to repeat "hello world: 2+3=5".

The correct way to present the capabilities which I/we seek is:

1. In order to simplify/formalise/be-able-to-prove-XYZ, we
  need to have the 'format ABC'.

2. This is achieved by transforming .....

3. On the basis of proven rules & manipulations ...

I.e. don't start with pages of rules & manipulations, where the
   reader doesn't know if the effort will take him towards his goal.


== Chris Glur.

news7585 (171)
2/22/2007 2:50:49 PM

 news@absamail.co.za wrote:

> I don't want to learn another language which is a hybrid of Forth, Lisp
> & Pop-11 just to repeat "hello world: 2+3=5".

Pop-11 is /already/ viewable as a hybrid of Forth and Lisp.

-- 
Chris "electric hedgehog" Dollin
"People are part of the design. It's dangerous to forget that." /Star Cops/

chris.dollin (1683)
2/22/2007 3:00:42 PM
On Feb 22, 9:50 am, n...@absamail.co.za wrote:
> 1. I'm still searching for material re. "the mathematical/formal
> foundation[s] of computing" to help move programming away from an
> art towards a science.  Patterns seem to be a heuristic step in that
> direction?

This may be the boy scout who helped the little old lady across the
street who did not want to cross the street.

The target of moving some aspects of programming from an art (i.e.,
craft) to a science is not to move all of programming to a science,
but to clear clutter from the palette to provide more leverage to the
craft of design.

It's like the move from pen to typewriter to word processor ... the
goal is not to eliminate the writer from the act of writing, it is to
reduce the effort of recording and editing the written word so that
attention can be focused on the actual composition.

agila61 (3956)
2/22/2007 3:05:05 PM
Responding to News...

> 1. I'm still searching for material re. "the mathematical/formal
> foundation[s] of computing" to help move programming away from an
> art towards a science.  Patterns seem to be a heuristic step in that
> direction?

I don't think the mathematics of computing is the way to move software 
development from art to science. That's already been done with the 
hardware computational models and the graph/set theory that underlies 
all 3GLs. The computing domain is already quite rigorously defined.

Today the art lies in mapping the informally defined customer space into 
the formally defined computing space and the black art lies in project 
management of all the processes that /surround/ the intellectual 
activities of creating software.

Forty-odd years ago I saw estimates that it would take 1000 developers 
10 years to develop a 1 MLOC application. Today a 1 MLOC application is 
considered small and a 100 KLOC application is a toy. So I think we have 
pretty much made a science of creating applications.

One way that is manifested is the advent of practical general purpose 
4GLs and semantic model frameworks like MDA in the late '90s. When we 
compile such computing-independent 4GLs we are well on the way to 
completely automating the entire computing space. One simply can't have 
translation-based development if the computing space was not already 
defined rigorously.

Mathematics and other scientific approaches will undoubtedly play a part 
in making requirements specification and software project management 
sciences. But I think those approaches will be quite different from
those underlying the computing space. So I don't think looking at 
computer languages like Joy is likely to move art to science quickly.


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
hsl@pathfindermda.com
Pathfinder Solutions
http://www.pathfindermda.com
blog: http://pathfinderpeople.blogs.com/hslahman
"Model-Based Translation: The Next Step in Agile Development".  Email
info@pathfindermda.com for your copy.
Pathfinder is hiring: 
http://www.pathfindermda.com/about_us/careers_pos3.php.
(888)OOA-PATH



h.lahman (3600)
2/22/2007 4:31:25 PM
> 1. I'm still searching for material re. "the mathematical/formal
> foundation[s] of computing" to help move programming away from an
> art towards a science.

ever tried algorithms?

ever tried Google with keyword: Lisp (where functions are data)

also, http://www.cs.mu.oz.au/research/mercury/

>  Patterns seem to be a heuristic step in that direction?


you mean Design Patterns?


[snip]

i can not understand that, as i am a programming newbie.

geek.arnuld (542)
2/22/2007 4:52:15 PM
news@absamail.co.za schrieb:
> 1. I'm still searching for material re. "the mathematical/formal
> foundation[s] of computing" to help move programming away from an
> art towards a science.

First, programming is a craft, not an art (though it *can* be art, just 
as craft can be art).
Second, mathematics alone isn't going to help programming move along. It 
won't help with getting customer requirements into something that can be 
programmed (mathematics could even be an obstacle here), and it won't 
help with getting bugs out of programs (which is a question of human 
error, and quality testing: again, the obstacles here are more human 
than technological, and mathematics isn't going to help with that).

That doesn't mean that mathematics is unimportant in programming, just 
as an architect needs mathematics to do his work when calculating loads 
and stability. However, it's just a tool, not an end to be pursued.

That said, "functional programming languages" are "more mathematical" in 
some sense, and they do have properties that make it easier to avoid the 
technical bugs, and a portion of the customer requirements bugs, too. 
That's not because it's mathematics, it's because being nearer to 
mathematics simplifies the semantics of these languages...

> Patterns seem to be a heuristic step in that direction?

No. Patterns are just like tricks of the trade: helpful if you know what 
you're doing, nonsensical if you don't.

> Here's some text re. the language Joy:

Language rationales list what the designers had in mind.
I have observed that few if any languages actually do as the designer 
intended. Most underperform; a few overperform (Lisp and PHP being the 
cases where the difference is most marked). Fewer perform as expected 
(Haskell, I think).

> 2. the second issue is about how NOT to write tutorials.
> The correct way is to chain-backwards from the goal.
> Ie. put the 'bottom line' first, so the reader knows if he wants
> to invest the labour to 'build forward' towards the goal.

An interesting thought, though I can't tell whether that style will be 
successful. It's good to determine whether a language (or other 
technology) fits your needs, but it doesn't help with actually learning 
it, so I fear this style won't become very popular.

Regards,
Jo
jo427 (1164)
2/22/2007 7:23:27 PM
Joachim Durchholz <jo@durchholz.org> writes:
> > 2. the second issue is about how NOT to write tutorials.
> > The correct way is to chain-backwards from the goal.
> > Ie. put the 'bottom line' first, so the reader knows if he wants
> > to invest the labour to 'build forward' towards the goal.
> 
> An interesting thought, though I can't tell whether that style will be
> successful. It's good to determine whether a language (or other
> technology) fits your needs, but it doesn't help with actually
> learning it, so I fear this style won't become very popular.

Reading this group makes me feel like the underwear gnomes from South
Park:

   1. Abstract category theory, GADT, Curry-Howard isomorphism, etc.
   2. ???
   3. Software!

I've bought into it enough to be spending time banging my head against
learning Haskell, but it would help a lot if step 2 were explained
better.
phr.cx (5493)
2/22/2007 8:25:21 PM
news@absamail.co.za wrote:
> 2 independent issues here:
> 
> 1. I'm still searching for material re. "the mathematical/formal
> foundation[s] of computing" to help move programming away from an
> art towards a science.  Patterns seem to be a heuristic step in that

That direction is a dead end. It is no more possible to create a 
mathematical/formal foundation[s] of computing than it is to create one 
of bridge or cathedral design. To be sure, mathematics is a useful tool 
in the process, but we would have no Ponte Vecchio or Chartres if it were
the foundation.

The foundation is craft. Math builds on that foundation.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
jya (12871)
2/22/2007 9:15:33 PM
In article <7x7iuarlce.fsf@ruckus.brouhaha.com>, Paul Rubin wrote:
> 
> Reading this group makes me feel like the underwear gnomes from South
> Park:
> 
>    1. Abstract category theory, GADT, Curry-Howard isomorphism, etc.
>    2. ???
>    3. Software!
> 
> I've bought into it enough to be spending time banging my head
> against learning Haskell, but it would help a lot if step 2 were
> explained better.

Step 2 is:

2. Algebraic structures == insanely effective design patterns for libraries
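(To make that slogan concrete: a minimal Haskell sketch of one such
structure, the Monoid -- an associative operation plus an identity
element. Library code is written against the structure, so every
instance gets functions like mconcat for free.)

    import Data.Monoid (Sum(..))

    -- Summing by reusing the generic mconcat, via the Sum wrapper.
    total :: [Int] -> Int
    total = getSum . mconcat . map Sum

    main :: IO ()
    main = do
      print (mconcat [[1,2],[3],[4,5]] :: [Int])  -- [1,2,3,4,5]
      print (total [1,2,3,4])                     -- 10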



-- 
Neel R. Krishnaswami
neelk@cs.cmu.edu
neelk (298)
2/22/2007 9:30:31 PM
Joachim Durchholz wrote:

> news@absamail.co.za schrieb:
>> 1. I'm still searching for material re. "the mathematical/formal
>> foundation[s] of computing" to help move programming away from an
>> art towards a science.

Which gets me wondering why he hasn't done a search on formal methods with
Google to start with before worrying us here.

> First, programming is a craft, not an art (though it *can* be art, just
> as craft can be art)

I'm with you on that front. It is also a craft that requires a degree of
skill and intelligence to achieve a satisfactory end product.

> Second, mathematics alone isn't going to help programming move along. It
> won't help with getting customer requirements into something that can be
> programmed (mathematics could even be an obstacle here), and it won't
> help with getting bugs out of programs (which is a question of human
> error, and quality testing: again, the obstacles here are more human
> than technological, and mathematics isn't going to help with that).

I have seen some places where maths (formal methods) has been a help in
extracting sense out of complex customer requirements in order to re-write
the requirements in a way that made a great deal more sense to system
developers. It is a process best left to those with a distinctly weird bent
of mathematical mind. On the other hand, there are limitations to the use
of formal methods and they are of no use at all in some situations.

> That doesn't mean that mathematics is unimportant in programming, just
> as an architect needs mathematics to do his work when calculating loads
> and stability. However, it's just a tool, not an end to be pursued.

Sometimes a highly specialised tool that needs to be wielded by experts.
 
> That said, "functional programming languages" are "more mathematical" in
> some sense, and they do have properties that make it easier to avoid the
> technical bugs, and a portion of the customer requirements bugs, too.
> That's not because it's mathematics, it's because being nearer to
> mathematics simplifies the semantics of these languages...

Have heard a proponent of Haskell and Gofer talk of Forth as a language
that is half way between procedural and functional and "quite nice for some
things".

-- 
********************************************************************
Paul E. Bennett ....................<email://peb@amleth.demon.co.uk>
Forth based HIDECS Consultancy .....<http://www.amleth.demon.co.uk/>
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-811095
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
peb (807)
2/22/2007 10:45:57 PM
Paul E. Bennett wrote:
> Have heard a proponent of Haskell and Gopher talk of Forth as a language
> that is half way between procedural and functional and "quite nice for some
> things".

In what sense is Forth even remotely a functional language?  Careful: 
I'm not asking about functional *extensions* to Forth.  I'm asking about 
Forth itself.

Functional programming languages avoid state and eschew mutable data.
Forth is (like any imperative programming language) explicit about
state and freely mutates data.

Or put another way, languages like Joy show what Forth is missing if one 
wanted to do functional programming.


nntp4274 (973)
2/22/2007 11:13:12 PM
Neelakantan Krishnaswami <neelk@cs.cmu.edu> writes:
> Step 2 is:
> 2. Algebraic structures == insanely effective design patterns for libraries

Interesting, but similar library functions are showing up in languages
like C++ and Python, without all that theory, or am I missing something?

It also seems to me that the mathematical hair in the Haskell stuff
that I look at comes more from mathematical logic than from algebra.
Certainly all the stuff about types, which is pretty new to me (I did
get hold of Pierce's TAPL book and have started to look at it).  I
can't help wondering how much of this approach is really necessary.
phr.cx (5493)
2/23/2007 6:06:21 AM
news@absamail.co.za wrote:
> 2 independent issues here:
> 
> 1. I'm still searching for material re. "the mathematical/formal
> foundation[s] of computing" to help move programming away from an
> art towards a science.

You'll probably not have much luck in that.  Programming consists of
making computers do what people want them to.  People are capable
of wanting new things that break any formalism you come up with,
over and over and over again.  That is why it is partially an art.

Now, understanding how computers work and what is possible can be
a science.  But computer science is not programming.

> Patterns seem to be a heuristic step in that direction ?

I read an interesting perspective on patterns the other day, which
is that patterns are standard ways of dealing with weaknesses in
computer languages.  Or to describe it a different way, patterns
are all about writing down abstractions that the language doesn't
support so that you can do them the same way and not have to
reinvent them.  If the language (or library, whatever) already supported
the abstraction, you wouldn't need the pattern; you would just ask the
language to do it.
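(A minimal Haskell sketch of that idea: what the patterns literature
calls "Strategy" collapses into an ordinary function argument once
functions are first-class.)

    import Data.List (sortBy)
    import Data.Ord (comparing)

    -- The comparison "strategy" is just a value handed to sortBy.
    byLength :: [String] -> [String]
    byLength = sortBy (comparing length)

    -- byLength ["ccc","a","bb"]  evaluates to  ["a","bb","ccc"]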

I'm not sure whether I agree with this or not, but it raises an
interesting point:  once we have added some abstractions into a
language, we think of more abstractions.  We codify those as part
of the language, or we codify them as part of a book on design
patterns.  But either way, we are codifying them.  And why is that
important?  Because once we are done codifying them, we move on
to the next thing.

The point here is that there will always be a next thing.  No matter
how many abstractions you think of and nail down as a science, there
will be more that can be formed.

Does anyone ever seriously suggest that we ought to turn "coming up
with new theorems in math" into a science?  What about coming up
with new axioms for mathematical systems that are interesting or
useful?  Does anyone suggest we should make that into a scientific
process?  No, because the point is that it requires high-level
analysis.  The analysis *is* the work.

   - Logan
lshaw-usenet (927)
2/23/2007 7:39:28 AM
Paul Rubin wrote:
> Reading this group makes me feel like the underwear gnomes from South
> Park:
> 
>    1. Abstract category theory, GADT, Curry-Howard isomorphism, etc.
>    2. ???
>    3. Software!
> 
> I've bought into it enough to be spending time banging my head against
> learning Haskell, but it would help a lot if step 2 were explained
> better.

I recommend banging your head against functional programming some more.
At some point, you gain some enlightenment out of it.  I've even found
myself gravitating towards writing in short fragments of functional style
in C++ (constructors wrapped around constructors wrapped around
constructors...) and Perl (map() instead of for loop, etc.).

It's not the be-all and end-all of software (because *nothing* is or
ever will be), but there is something very satisfying about it.

   - Logan
lshaw-usenet (927)
2/23/2007 7:44:40 AM
Logan Shaw <lshaw-usenet@austin.rr.com> writes:
> Does anyone ever seriously suggest that we ought to turn "coming up
> with new theorems in math" into a science?  

Sure, Hilbert's formalist program of the 1920s, and prior to that,
Leibniz's in the 18th century.  Hilbert's program fell apart when they
found out about Gödel's incompleteness theorem, but for a while they
were trying to do precisely what you describe.  Gödel himself wrote to
von Neumann in the 1950's asking basically about the complexity of
proving decidable theorems, leaving aside the undecidable ones; this
foresaw the development of NP-completeness theory.

I don't understand this foundations-of-computing stuff much myself,
but foundations of math is a perfectly respectable subject that
everyone should know something about.  We have a much more precise
notion of what a mathematical theorem is today, than we had 100 years
ago.  And it's only been recently that we've had full-blown formal
proofs of substantial theorems.  We have a traditional "proof" that's
so complicated that nobody is sure it's correct (Hales' proof of
Kepler's conjecture) so we have the (ongoing) Flyspeck project to
settle the matter once and for all.  And then we have godawful
software like Windows that crashes all the time; can we use (e.g.)
constructive type theory to make programs that verifiably do what
they're supposed to?  I'm a wide-eyed newbie but this stuff seems much
more promising than the insanity that goes on in most of the software
world today.
phr.cx (5493)
2/23/2007 8:00:39 AM
Logan Shaw <lshaw-usenet@austin.rr.com> writes:
> > I've bought into it enough to be spending time banging my head
> > against learning Haskell...

> I recommend banging your head against functional programming some more.
> At some point, you gain some enlightenment out of it.  I've even found
> myself gravitating towards writing in short fragments of functional style
> in C++ (constructors wrapped around constructors wrapped around
> constructors...) and Perl (map() instead of for loop, etc.).

I'm not having much trouble with the functional part (it's familiar from
Lisp and Scheme).  It's the type systems that are confusing me.  I've
borrowed a 600 page book about type systems (Pierce, TAPL) which I
hope will help, but I see that it's volume 1 of a two-volume work.
Sigh.
phr.cx (5493)
2/23/2007 8:08:28 AM
> On Feb 23, 12:39 pm, Logan Shaw <lshaw-use...@austin.rr.com> wrote:

> Now, understanding how computers work and what is possible can be
> a science.  But computer science is not programming.

what ?

then what is programming?

and how will you define computer science?

algorithms ?

i really want to understand what you are talking about. i want to
learn.


> I read an interesting perspective on patterns the other day, which
> is that patterns are standard ways of dealing with weaknesses in
> computer languages.  Or to describe it a different way, patterns
> are all about writing down abstractions that the language doesn't
> support so that you can do them the same way and not have to
> reinvent them.  If the language (or library, whatever) already supported
> the abstraction, you wouldn't need the pattern; you would just ask the
> language to do it.
>
> I'm not sure whether I agree with this or not, but it raises an
> interesting point:  once we have added some abstractions into a
> language, we think of more abstractions.  We codify those as part
> of the language, or we codify them as part of a book on design
> patterns.  But either way, we are codifying them.  And why is that
> important?  Because once we are done codifying them, we move on
> to the next thing.
>
> The point here is that there will always be a next thing.  No matter
> how many abstractions you think of and nail down as a science, there
> will be more that can be formed.

Logan, it reminds me of Bruce Lee. this is exactly how he developed
"jeet kuan do"  after geting dissatisfied with Wing-Chun and other
expert-styles.


-- arnuld
http://arnuld.blogspot.com

geek.arnuld (542)
2/23/2007 8:31:04 AM
Paul Rubin schrieb:
> I can't help wondering how much of this approach is really necessary.

I'm not sure that you understood that the term "algebraic structures" 
means just those sum types (unions-of-structs in C parlance), and 
possibly the associated machinery (pattern matching in particular).

Of course, "algebraic structure" has less concepts than 
"union-of-structs", and "union-of-structs" doesn't capture that the 
entire thing is typesafe.
Propose a less intimidating terminology if you wish :-)
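(A minimal Haskell sketch of the union-of-structs reading: one sum
type, two "structs", and pattern matching as the associated machinery.)

    data Shape = Circle Double          -- one alternative: a radius
               | Rect   Double Double   -- another: width and height

    -- The compiler checks that every alternative is handled.
    area :: Shape -> Double
    area (Circle r) = pi * r * r
    area (Rect w h) = w * h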

Regards,
Jo
jo427 (1164)
2/23/2007 10:32:58 AM
Paul Rubin schrieb:
> And then we have godawful software like Windows that crashes all the
> time; can we use (e.g.) constructive type theory to make programs
> that verifiably do what they're supposed to?

Insofar as you can formalize a requirement, yes.

E.g. one approach that actually made it into practice is Spark (Ada with 
those parts stripped that are difficult to reason about, plus a proof 
checker).

Problems with this approach are:
a) Formal requirements can be longer than the programs that implement 
them, and then you end up debugging the requirements. This makes the 
method unsuitable for some tasks.
b) As soon as intangibles like aesthetics or almost-intangibles like 
ergonomics come into play, there's no set of formal rules that you can
check against, so such requirements are impossible to verify with a 
formalism.
c) You need a way to deal with situations where the formalism says 
something and reality says something different. E.g. if a file read 
operation fails because the disk was disconnected, or some random
bit in RAM flipped, or whatever; and the formal requirements didn't 
consider that case. You can largely ignore the issue for application 
programs, but you cannot for operating systems and safety-critical 
software. However, capturing such situations in a formalism is extremely 
hard; in practice, people usually make sure that the system detects such 
a failure and falls back to a restricted but safe mode of operation. 
(Things get *really* hairy if there is no such mode, e.g. in nuclear 
plants.)

Regards,
Jo
jo427 (1164)
2/23/2007 10:43:39 AM
arnuld schrieb:
>> On Feb 23, 12:39 pm, Logan Shaw <lshaw-use...@austin.rr.com> wrote:
> 
>> Now, understanding how computers work and what is possible can be
>> a science.  But computer science is not programming.
> 
> what ?
> 
> then what is programming?
> 
> and how will  you define computer science?

Computer science is scientific research about programming.

The relationship between computer science and programming is similar to 
that between metallurgy and mechanical engineering.

> algorithms ?

Study of the properties of algorithms is CS.
Implementing them is programming.

>> The point here is that there will always be a next thing.  No matter
>> how many abstractions you think of and nail down as a science, there
>> will be more that can be formed.
> 
> Logan, it reminds me of Bruce Lee. this is exactly how he developed
> "jeet kuan do"  after geting dissatisfied with Wing-Chun and other
> expert-styles.

The personal styles developed by martial arts experts are biased towards 
the specific strengths and weaknesses of the developer. It is impossible 
to appreciate the advantage of a specific style unless you have spent 
years practicing it, at which point your opinion will be heavily biased.

I see a parallel to learning a concrete programming language here. 
Different schools of sometimes violent opposition, each founded by an 
"enlightened master".

Computer science is different. New knowledge about type systems is 
always an improvement. (That's because it's just knowledge, not practice.)

Regards,
Jo
jo427 (1164)
2/23/2007 10:52:47 AM
John Passaniti wrote:
> In what sense is Forth even remotely a functional language?  Careful:
> I'm not asking about functional *extensions* to Forth.  I'm asking about
> Forth itself.

Yes, leave @ and ! out (plus MOVE and so on), and you get a functional subset
of Forth. As long as you pass everything on the stack, Forth is purely
functional. If you miss something, it's data types (e.g. a string stack or
a way to construct lists in a "purely functional" way).

-- 
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
bernd.paysan (2418)
2/23/2007 11:09:24 AM
On Feb 23, 9:08 am, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
> Logan Shaw <lshaw-use...@austin.rr.com> writes:
> > > I've bought into it enough to be spending time banging my head
> > > against learning Haskell...
> > I recommend banging your head against functional programming some more.
> > At some point, you gain some enlightenment out of it.  I've even found
> > myself gravitating towards writing in short fragments of functional style
> > in C++ (constructors wrapped around constructors wrapped around
> > constructors...) and Perl (map() instead of for loop, etc.).
>
> I'm not having much trouble with the functional part (it's familiar from
> Lisp and Scheme).  It's the type systems that are confusing me.

How?

> I've
> borrowed a 600 page book about type systems (Pierce, TAPL) which I
> hope will help, but I see that it's volume 1 of a two-volume work.
> Sigh.

This is as if you bought a work about thermodynamics in order to
better understand the engine of your car, and also in order to drive
better.
This is not to say that TAPL isn't a great book.

For me, the type discipline has been a blessing in all languages that
have some form of strong typing. Unfortunately, the ALGOL descendants
require us to always write down the obvious, sometimes twice, as e.g.
in Java:
    List<Map<String, Integer>> foo = new LinkedList<Map<String, Integer>>();
AFAIK, C# has (or will have) something called local type inference,
which would allow one to drop at least part of the clutter.
But, in functional languages, it is just wonderful, when the compiler
says something like: Look, here you use "foo bar" as a list, but in
the definition of foo, you don't compute a list. This is surely not
what you intended, ohh great master.

quetzalcotl (241)
2/23/2007 11:17:26 AM
On Thu, 22 Feb 2007 08:50:49 -0600, news@absamail.co.za wrote:
>2 independent issues here:
>
>1. I'm still searching for material re. "the mathematical/formal
>foundation[s] of computing" to help move programming away from an
>art towards a science.  Patterns seem to be a heuristic step in that
>direction?
>
Are most aspects of the formal mathematical theory of computing all
that relevant to the day-to-day process of coming up with practical
programs which will handle the data they are likely to be given in
reasonable time and space?

For example, consider:

http://en.wikipedia.org/wiki/NP-complete

How is this likely to make practical computing more of a science?
teest (20)
2/23/2007 3:40:38 PM
Tester schrieb:
> http://en.wikipedia.org/wiki/NP-complete
> 
> How is this likely to make practical computing more of a science?

NP-completeness and undecidability just delineate what a knowledgeable
programmer will simply refuse to do, on grounds that it can't be done.

(Similar to an engineer who will refuse to build a dam made of bubblegum.)

Most of the time, issues other than the limits of what can be computed
in reasonable space and time are more prevalent, such as bridging the
gap between requirements and programs, debugging, hardening against 
attacks, occasionally performance improvements, and adapting software to 
changing requirements.

Regards,
Jo
jo427 (1164)
2/23/2007 4:32:31 PM
On Feb 23, 11:32 am, Joachim Durchholz <j...@durchholz.org> wrote:
> Tester schrieb:
>
> >http://en.wikipedia.org/wiki/NP-complete
>
> > How is this likely to make practical computing more of a science?
>
> NP-completeness and undecidability just delineate what a knowledgeable
> programmer will simply refuse to do, on grounds that it can't be done.

No, that's a red herring. For some particular NP-complete problem you
might find that a solution for N=100 would take the current entire
world computing resources a million years to solve. But if your
customer wants a solution for N=50 and you can get your N=50 solution
in 250 milliseconds on an old PC, why refuse the job?

NP-completeness deserves at most a warning to the customer about
future upgrade problems.

> (Similar to an engineer who will refuse to build a dam made of bubblegum.)

Doesn't it depend on the job? A small dam that needs only a short
lifespan might feasibly be made of bubblegum, though some other
material might fit the needs better and/or cheaper.

jethomas5 (1449)
2/23/2007 4:47:45 PM
"J Thomas" <jethomas5@gmail.com> writes:

> On Feb 23, 11:32 am, Joachim Durchholz <j...@durchholz.org> wrote:
>> Tester schrieb:
>>
>> >http://en.wikipedia.org/wiki/NP-complete
>>
>> > How is this likely to make practical computing more of a science?
>>
>> NP-completeness and undecidability just delineate what a knowledgeable
>> programmer will simply refuse to do, on grounds that it can't be done.
>
> No, that's a red herring. For some particular NP-complete problem you
> might find that a solution for N=100 would take the current entire
> world computing resources a million years to solve. But if your
> customer wants a solution for N=50 and you can get your N=50 solution
> in 250 milliseconds on an old PC, why refuse the job?
>
> NP-completeness deserves at most a warning to the customer about
> future upgrade problems.

Ah - no. Some 15 years ago I had a customer who wanted a custom
database (and UI) which would be loaded with ~2000 records of
potential customers. The first thing he then did was to load it with
all ~20000 data sets from a CD directory (kind of an industry guide).
Since the index building algorithm was anything but efficient (we took
some shortcuts because, due to limited memory, we could not just sort in
memory), the first build of the index of the larger database took more
than 2 days.

So scaling is not only an upgrade problem: Often your customer is a
bit vague about the problem size and you should probably write scaling
behaviour into the specs.

>> (Similar to an engineer who will refuse to build a dam made of bubblegum.)
>
> Doesn't it depend on the job? A small dam that needs only a short
> lifespan might feasibly be made of bubblegum, though some other
> material might fit the needs better and/or cheaper.

Right. But without theory you might just use bubble gum for every dam,
since you don't know about the principal restrictions of using bubble
gum.

My answer to the OP, BTW, would have been Jo's.

And, @tester: where do you read this thread? I don't want to continue
cross posting.

Regards -- Markus


2/23/2007 5:13:02 PM
J Thomas wrote:
> On Feb 23, 11:32 am, Joachim Durchholz <j...@durchholz.org> wrote:
>> Tester schrieb:
>>
>>> http://en.wikipedia.org/wiki/NP-complete
>>> How is this likely to make practical computing more of a science?
>> NP-completeness and undecidability just delineate what a knowledgeable
>> programmer will simply refuse to do, on grounds that it can't be done.
> 
> No, that's a red herring. For some particular NP-complete problem you
> might find that a solution for N=100 would take the current entire
> world computing resources a million years to solve. But if your
> customer wants a solution for N=50 and you can get your N=50 solution
> in 250 milliseconds on an old PC, why refuse the job?
> 
> NP-completeness deserves at most a warning to the customer about
> future upgrade problems.
> 
>> (Similar to an engineer who will refuse to build a dam made of bubblegum.)
> 
> Doesn't it depend on the job? A small dam that needs only a short
> lifespan might feasibly be made of bubblegum, though some other
> material might fit the needs better and/or cheaper.

For the effective equivalent of a dam made of bubblegum, look into the 
history of D.B. Steinman's Peace River Bridge on the ALCAN highway. He 
promised five years of service before collapse. It provided seven.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
jya (12871)
2/23/2007 5:47:11 PM
Joachim Durchholz <jo@durchholz.org> writes:
> > I can't help wondering how much of this approach is really necessary.
> I'm not sure that you understood that the term "algebraic structures"
> means just those sum types (unions-of-structs in C parlance), and
> possibly the associated machinery (pattern matching in particular).

I see, thanks.  I thought it meant the collection of higher order
functions in libraries that are easily composable so you end up
writing programs that are sort of like commutative diagrams.  

The part I'm having trouble with is the connection between
theoretical, foundational stuff and actual programming.  In most other
programming language communities there's a vast disconnect between
foundations and practice, just like automobile engineering usually
doesn't concern itself with elementary particle physics.  

> Of course, "algebraic structure" has less concepts than
> "union-of-structs", and "union-of-structs" doesn't capture that the
> entire thing is typesafe.
> Propose a less intimidating terminology if you wish :-)

This is pretty cool, and I think it's more like C++ templates than
union-of-structs. 
phr.cx (5493)
2/23/2007 7:02:34 PM
"Ingo Menger" <quetzalcotl@consultant.com> writes:
> >  It's the type systems that are confusing me.
> How?

It's a lot of new stuff to try to internalize.  As a simple example,
I don't understand why (Maybe a) is a monad.  Past that, there's
a bunch of concepts like "higher kinded polymorphism" where I don't
even know what the words mean.

I also have trouble figuring out what's wrong with a program based on
error messages from Hugs.  I don't know if that's from my own
inexperience or because the error messages actually aren't very good.
However it seems like the compiler faces a difficult problem, type
inference in the presence of type classes, so it doesn't know the
initial concrete type of anything.  It's as if it tries to solve some
big system of equations based on type info from 17 different places in
the program, finds there's no solution, and announces that the program
must have an error, but is not so helpful in saying where the error is.

> But, in functional languages, it is just wonderful, when the compiler
> says something like: Look, here you use "foo bar" as a list, but in
> the definition of foo, you don't compute a list. This is surely not
> what you intended, ohh great master.

That's nice in theory but the error messages (so far) are often hard
to understand.  All I can tell is that the compiler was complaining
about -something-.  Maybe it's easier in ML than in Haskell, because
there aren't type classes.
phr.cx (5493)
2/23/2007 7:31:05 PM
Paul Rubin schrieb:
> That's nice in theory but the error messages (so far) are often hard
> to understand.

That's a typical newcomer problem in Haskell.
It seems that you get used to finding the real sources of the errors,
similar to how, when you get a syntax error, you almost automatically look
at the line *before* the one that the error was reported in.


Regards,
Jo
jo427 (1164)
2/23/2007 7:52:45 PM
Paul Rubin wrote:
> Joachim Durchholz <jo@durchholz.org> writes:

[...]

>> Of course, "algebraic structure" has less concepts than
>> "union-of-structs", and "union-of-structs" doesn't capture that the
>> entire thing is typesafe.
>> Propose a less intimidating terminology if you wish :-)
> 
> This is pretty cool, and I think it's more like C++ templates than
> union-of-structs. 

This is the closest I've ever seen C++ come to capturing the concept. 
And it still isn't quite there:

www.oonumerics.org/tmpw01/alexandrescu.pdf

-thant



adm (205)
2/23/2007 8:34:46 PM
Paul Rubin <http://phr.cx@nospam.invalid> wrote:
> It's a lot of new stuff to try to internalize.  As a simple example,
> I don't understand why (Maybe a) is a monad.  Past that, there's
> a bunch of concepts like "higher kinded polymorphism" where I don't
> even know what the words mean.

(Maybe a) happens to be a monad because it obeys the Monad laws.
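For reference, the laws in question are:

    return a >>= f   ==  f a                       -- left identity
    m >>= return     ==  m                         -- right identity
    (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)   -- associativity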

> That's nice in theory but the error messages (so far) are often hard
> to understand.  All I can tell is that the compiler was complaining
> about -something-.  Maybe it's easier in ML than in Haskell, because
> there aren't type classes.

Sometimes it's helpful to start putting type annotations on your
functions to push the error around until it gets to the point where
you've made the actual mistake (as against the point where the
compiler finally notices that it has two irreconcilable types and
gives up, which can of course be somewhere else entirely.)

Phil

-- 
http://www.kantaka.co.uk/ .oOo. public key: http://www.kantaka.co.uk/gpg.txt
phil-news (48)
2/23/2007 8:49:30 PM
Joachim Durchholz <jo@durchholz.org> writes:
> It seems that you get used to finding the real sources of the errors,
> similar to when you get a syntax error, you almost automatically look
> at the line *before* the one that the error was reported in.

Thanks, I'll keep that in mind.  For amusement purposes, here's a
syntax error that confused me.  I did manage to fix it by trial and
error, but I'm still not absolutely clear on the exact workings of the
rule that it broke.  I'd be interested to know if you can spot the
error without trying to compile the program.  In my case, even with
the error message I was not able to figure out what was wrong except
through a bunch of experiments.

main :: IO ()
main = do
   putStr "Enter num1: "
   n1 <- read1 "n1: "
   putStr "enter operation: "
   op <- getLine
   n2 <- read1 "Enter num2: "
   let func = case op of
       "+" -> Just (+);
       "-" -> Just (-);
       "*" -> Just (*);
       "/" -> Just (/);
       "**" -> Just (**);
       _ -> Nothing

   case func of 
     Nothing -> print "unknown operator"
     Just op -> print (n1 `op` n2)

   main   -- loop
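(For the record, one fix is to indent the case alternatives past the
column where `func` starts, so that the layout rule opens a fresh
block for them; a sketch of the repaired binding:)

   let func = case op of
                "+"  -> Just (+)
                "-"  -> Just (-)
                "*"  -> Just (*)
                "/"  -> Just (/)
                "**" -> Just (**)
                _    -> Nothing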
phr.cx (5493)
2/23/2007 9:08:10 PM
Phil Armstrong <phil-news@kantaka.co.uk> writes:
> (Maybe a) happens to be a monad because it obeys the Monad laws.

Well why does it even support the >>= operator?  It could be that
I'm simply confused about how typeclasses work, but I thought something
like that couldn't happen by accident.
0
phr.cx (5493)
2/23/2007 9:27:48 PM
In article <7xzm75qug2.fsf@ruckus.brouhaha.com>, Paul Rubin wrote:
> Neelakantan Krishnaswami <neelk@cs.cmu.edu> writes:
>> Step 2 is:
>> 2. Algebraic structures == insanely effective design patterns for libraries
> 
> Interesting, but similar library functions are showing up in
> languages like C++ and Python, without all that theory, or am I
> missing something?

I don't think this is quite true. Many of the simpler constructions
show up, but as a general tendency the more complex ones do not. I
think this is because you don't get enough language support to use
them effectively. Either you don't get enough typechecking support
to catch stupid but hard-to-debug errors (e.g., in Python), or the type
annotations you must write grow without bound (e.g., Java or C++).

> It also seems to me that the mathematical hair in the Haskell stuff
> that I look at comes more from mathematical logic than from algebra.
> Certainly all the stuff about types, which is pretty new to me (I
> did get hold of Pierce's TAPL book and have started to look at it).
> I can't help wondering how much of this approach is really
> necessary.

If you're a mathematically minded programmer, you'll start noticing
all the algebraic structures that appear in your program -- for
example, you'll recognize that time intervals form a vector space,
that integers form a ring, that strings form a monoid, and so on. So
you'll start organizing your APIs to emphasize the algebraic
properties of each type, and then you'll start to hack.

At that point, you'll notice that your code is mostly transformations
from one type to another, and most of them have homomorphic structure.
At this point, the natural thing (for a mathematician) is to start
using some category theory to organize all these types and mappings
between them.

This turns out to connect very naturally to type theory, because the
typed lambda calculus has a very elegant categorical semantics. So now
your programs end up very tightly connected to the semantic, algebraic
concepts you had: your programs are the constructive witnesses to the
existential statements in the math.

It all kind of happens just by following your nose, and I think that's
really, really cool. 

-- 
Neel R. Krishnaswami
neelk@cs.cmu.edu
neelk (298)
2/24/2007 12:00:04 AM
Paul Rubin <http://phr.cx@NOSPAM.invalid> writes:

> Phil Armstrong <phil-news@kantaka.co.uk> writes:
>> (Maybe a) happens to be a monad because it obeys the Monad laws.
>
> Well why does it even support the >>= operator?  

Largely because it makes some sort of sense for it to.

> It could be that I'm simply confused about how typeclasses work, but I
> thought something like that couldn't happen by accident.

Indeed it doesn't: somewhere in the standard libraries there'll be an
instance definition for Maybe being an instance of Monad.

Monads are often about chaining computations together in one way or
another. The Maybe monad basically says, if one part of the computation
fails (returning Nothing), don't bother with the rest of it, just return
Nothing for the whole thing.
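(Roughly, that instance definition -- a sketch of what the Prelude
provides -- looks like this:)

    instance Monad Maybe where
        return         = Just
        Nothing  >>= _ = Nothing
        (Just x) >>= f = f x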

Mark.

-- 
Functional programming vacancy at http://www.aetion.com/
0
Mark.Carroll (154)
2/24/2007 12:03:03 AM
Joachim Durchholz wrote:
{stuff deleted}
> Insofar as you can formalize a requirement, yes.
> 
> E.g. one approach that actually made it into practice is Spark (Ada with 
> those parts stripped that are difficult to reason about, plus a proof 
> checker).

BTW I've heard the Spark people give a talk; they were basically
verifying that their software implemented a particular military 
encryption algorithm correctly.

The ACL2 folk have verified that both hardware and software stacks
implement various other security-related systems correctly.
danwang74 (207)
2/24/2007 4:03:53 AM
> On Feb 23, 3:52 pm, Joachim Durchholz <j...@durchholz.org> wrote:
> arnuld schrieb:
>
> >> On Feb 23, 12:39 pm, Logan Shaw <lshaw-use...@austin.rr.com> wrote:
>
> >> Now, understanding how computers work and what is possible can be
> >> a science.  But computer science is not programming.
>
> > what ?
>
> > then what is programming?
>
> > and how will  you define computer science?
>
> Computer science is scientific research about programming.
>
> The relationship between computer science and programming is similar to
> that between metallurgy and mechanical engineering.
>
> > algorithms ?
>
> Study of the property of algorithms is CS.
> Implementing them is programming.


ok

> >> The point here is that there will always be a next thing.  No matter
> >> how many abstractions you think of and nail down as a science, there
> >> will be more that can be formed.
>
> > Logan, it reminds me of Bruce Lee. this is exactly how he developed
> > "jeet kuan do"  after geting dissatisfied with Wing-Chun and other
> > expert-styles.
>
> The personal styles developed by martial arts experts are biased towards
> the specific strengths and weaknesses of the developer.


Bruce Lee's "Jeet Kune Do" style is *no style*. you adapt yourself
according to the present situation; you develop a new style every time
you use Jeet Kune Do. Jeet Kune Do means: how one can express himself,
totally and completely, without any restrictions imposed by all other
styles, Wing Chun e.g.

> It is impossible
> to appreciate the advantage of a specific style unless you have spent
> years practicing it, at which point your opinion will be heavily biased.

you can't practice Jeet Kune Do for years because, as i said, if you
spend years practicing Bruce Lee's style, then you are just
practicing a new style every day

> I see a parallel to learning a concrete programming language here.
> Different schools of sometimes violent opposition, each founded by an
> "enlightened master".
>
> Computer science is different. New knowledge about type systems is
> always an improvement. (That's because it's just knowledge, not practice.)
>
> Regards,
> Jo


geek.arnuld (542)
2/24/2007 4:46:49 AM
> On Feb 23, 3:52 pm, Joachim Durchholz <j...@durchholz.org> wrote:

> I see a parallel to learning a concrete programming language here.
> Different schools of sometimes violent opposition, each founded by an
> "enlightened master".

yes, it is. IMVHO, Martial Arts, Hacking and Music Composition are
very closely related, as per my experience. you can check the article
on my blog on this.

--arnuld
http://arnuld.blogspot.com

geek.arnuld (542)
2/24/2007 4:48:47 AM
On Feb 24, 9:48 am, "arnuld" <geek.arn...@gmail.com> wrote:
> > On Feb 23, 3:52 pm, Joachim Durchholz <j...@durchholz.org> wrote:
> > I see a parallel to learning a concrete programming language here.
> > Different schools of sometimes violent opposition, each founded by an
> > "enlightened master".
>
> yes, it is. IMVHO, Martial Arts, Hacking and Music Composition are
> very closely related, as per my experience. you can check the article
> on my blog on this.
>

to be clear, here is what i meant by "yes, it is": it means the next
sentence is right:

 "i see a parallel to learning a concrete programming language"

geek.arnuld (542)
2/24/2007 4:51:44 AM
Paul Rubin <http://phr.cx@nospam.invalid> wrote:
> Phil Armstrong <phil-news@kantaka.co.uk> writes:
>> (Maybe a) happens to be a monad because it obeys the Monad laws.

> Well why does it even support the >>= operator?  

For some Monads, one can think about (>>=) as some sort of "sequencing".
Then in an expression like "F >>= \x -> G[x]" (by G[x] I mean that x is
a free variable in the expression G), it means "first do F, bind the
result of F to x, if possible, and then do G". 

For the Maybe monad, the result can be either "Just v" or "Nothing".
So the obvious thing to do is to bind v to x in the first case, and then
do G, and in the second case just "stop" and return "Nothing" for the
whole thing. (Exercise: What should return do?)

This is like error handling: If an error happens in F, you stop, otherwise
you continue with G. That's the reason the Maybe monad is sometimes also 
called the error monad. (Exercise: What do you have to do if not only
want to flag an error, but also have an extra argument (say, a string)
indicating what sort of error happened? What well-known datatype from
the library corresponds to that, and what new monad do you get this
way?)

It's not always so simple, the list monad for example does something 
a bit more complex.
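(A small usage sketch matching the "F >>= \x -> G[x]" shape above,
with Data.Map lookups as the steps that can fail; addXY is a
hypothetical helper, not a library function:)

    import qualified Data.Map as M

    addXY :: M.Map String Int -> Maybe Int
    addXY m = M.lookup "x" m >>= \x ->
              M.lookup "y" m >>= \y ->
              return (x + y)

    -- addXY (M.fromList [("x",2),("y",3)])  evaluates to  Just 5
    -- addXY (M.fromList [("x",2)])          evaluates to  Nothing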

> It could be that I'm simply confused about how typeclasses work, but
> I thought something like that couldn't happen by accident.

Of course it doesn't happen by accident :-) You must prove to the
compiler that Maybe is indeed an instance of the Monad typeclass,
which is what the library does in the corresponding instance
declaration. Just look it up, and see if you can match the
implementation to the effect described above.

- Dirk

[Unnecessary NGs removed from f'up]
dthierbach2 (260)
2/24/2007 7:47:09 AM
"Paul Rubin" wrote:
> I'm not having much trouble with the functional part (it's familiar from
> Lisp and Scheme).  It's the type systems that are confusing me.  I've
> borrowed a 600 page book about type systems (Pierce, TAPL) which I
> hope will help, but I see that it's volume 1 of a two-volume work.
> Sigh.

You might be interested in the lectures notes for a class Pierce taught last
fall.
http://www.cis.upenn.edu/~bcpierce/home.html

See Old Course Materials: Software Foundations, Schedule

(I downloaded the lectures and exams with solutions, because I plan to read
TAPL after EoPL and ToPL - if I live that long - and I am not sure how long
the links will stay around.)

Marlene


2/24/2007 8:04:59 AM
"Marlene Miller" <marlenemiller@worldnet.att.net> writes:
> You might be interested in the lectures notes for a class Pierce taught last
> fall.
> http://www.cis.upenn.edu/~bcpierce/home.html

Thanks, interesting.  "A Gentle Introduction to Haskell" is also
looking useful.  I think I looked at it when I was first getting
started and couldn't make sense of it then, but it's easier now.
phr.cx (5493)
2/24/2007 8:11:11 AM
On 23 Feb., 20:31, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
> "Ingo Menger" <quetzalc...@consultant.com> writes:

> > But, in functional languages, it is just wonderful, when the compiler
> > says something like: Look, here you use "foo bar" as a list, but in
> > the definition of foo, you don't compute a list. This is surely not
> > what you intended, ohh great master.
>
> That's nice in theory but the error messages (so far) are often hard
> to understand.  All I can tell is that the compiler was complaining
> about -something-.  Maybe it's easier in ML than in Haskell, because
> there aren't type classes.

Not really. It's - unfortunately - a property of the basic type
inference algorithm. AFAIK, there is some research going on to make
things better in this respect. Your notion of a set of equations that
can't be solved is not so far off the mark.
It helps sometimes to keep in mind that type inference has a left to
right bias (although not necessarily a top to bottom one), so, for
example, in

   foo xs = head xs && xs

the compiler will most probably infer xs :: [Bool] (from the
definition of head and the appearance of (head xs) as first argument
of &&) and then it will bitterly complain that it can't unify Bool and
[Bool]. But note that if it'd do it the other way around, assuming
first that xs must be Bool, the result would be no better. In any
case, the compiler arrives at some conclusion about the type a certain
variable should have, and then it will stick to that, even if all
other uses clearly suggest another type. Often, this will be
counterintuitive to the human mind (which has its own notion about
this matter).
Note that this example is quite simple, it is perhaps not a good idea
to propose that the compiler should print out a detailed log about
what assumptions it has made, and how they were justified. A single
error message could easily become 1000 lines and more this way.
As others have pointed out, whenever you have problems with the type
system, you should explicitly annotate your definitions. Not only
will the error messages get more understandable, but the discipline of
thinking every moment about the question: "Well, what will the type of
this expression, this function, this variable be?" will most probably
prevent you from making type errors in the first place.
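For instance, reusing the example above (a small sketch):

   -- With an explicit signature the compiler checks the body against
   -- our stated intent rather than against whatever it inferred first:
   foo :: [Bool] -> Bool
   foo xs = head xs && last xs   -- both operands of (&&) are now Bool

   -- The original  foo xs = head xs && xs  is of course still rejected,
   -- but the message now points at the second operand of (&&), since
   -- the signature already pins xs down to [Bool].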
Last but not least: No matter how bad the messages from the type
inference algorithm are, they are certainly more informative than
  Segmentation fault, core dumped
which is what you get when you run programs containing type errors
written in languages that allow you to confuse a boolean and a list of
booleans.

0
quetzalcotl (241)
2/24/2007 1:20:13 PM
Paul Rubin wrote:

> I also have trouble figuring out what's wrong with a program based on
> error messages from Hugs.  I don't know if that's from my own
> inexperience or because the error messages actually aren't very good.

It would be nice if there was some sort of switch to tell the system "I
know it's wrong, but I want to run it anyway", perhaps implemented as
some simple meta-interpreter.  Beginners could then /watch/ it go wrong
in the debugger (I assume there's a halfway decent debugger available?)
which is usually enlightening.

For all I know, the feature may already be there.

    -- chris
0
chris.uppal (3980)
2/24/2007 6:17:07 PM
On Sat, 24 Feb 2007, Chris Uppal wrote in comp.programming:
> Paul Rubin wrote:
>> I also have trouble figuring out what's wrong with a program based on
>> error messages from Hugs.  I don't know if that's from my own
>> inexperience or because the error messages actually aren't very good.
>
> It would be nice if there was some sort of switch to tell the system "I
> know it's wrong, but I want to run it anyway", perhaps implemented as
> some simple meta-interpreter.  Beginners could then /watch/ it go wrong
> in the debugger (I assume there's a halfway decent debugger available?)
> which is usually enlightening.

   I believe this thread is discussing languages which are high-level
enough that "run it anyway" is a meaningless request. If the program
is malformed, you /can't/ "run it anyway"; there's no legal program
present to run!

   Some interpreted languages (Turbo Pascal 3.0, Perl) will let you
start running a malformed program, bailing only when they get to the
unrecoverable error. However, those languages are far closer to the
machine than ML or Haskell, and don't do lots of inferences about
type, so they can afford to "ignore" large parts of the program at
first. With type inference, you need to figure out all the types in
advance, whether they're on executable codepaths or not.

   And with a compiled language, the compiler /must/ understand everything 
in the program, because it has to translate it to machine code. There's
no way for the compiler to "ignore" parts of the program it doesn't
understand; everything must be assigned a meaning.

HTH,
-Arthur
0
ajonospam (382)
2/24/2007 7:28:50 PM
Arjan <arjan@example.com> writes:
> Have you read Edsger Dijkstra's "A discipline of programming"
> or David Gries' "The science of programming"?

Those sound hopelessly out of date.  I looked at Dijkstra's book
a long time ago but I think current approaches are much different.

Someone on #haskell recommended this a while back:

        http://www-2.cs.cmu.edu/~rwh/plbook/

It looks excellent and I hope to read it someday.
0
phr.cx (5493)
2/24/2007 7:52:21 PM
Paul Rubin wrote:
> Someone on #haskell recommended this a while back:
> 
>         http://www-2.cs.cmu.edu/~rwh/plbook/

Thank you for sharing this! It seems to discuss lots of things I am 
currently hoping to learn.

-- 
Scalad - a salad of Scala abstractions:
http://users.utu.fi/hvkhut/scalad/scalad.htm
0
hvkhut (5)
2/24/2007 8:18:39 PM
"Arthur J. O'Dwyer" <ajonospam@andrew.cmu.edu> writes:

>   Some interpreted languages (Turbo Pascal 3.0, Perl) will let you
> start running a malformed program, bailing only when they get to the
> unrecoverable error.

I used Turbo Pascal 3.0 back in, um, perhaps 1988 or so, and I
don't remember it working that way.  It compiled the whole
program, then ran it.  Is there some newer product also called
Turbo Pascal 3.0 that works differently?
-- 
"In the PARTIES partition there is a small section called the BEER.
 Prior to turning control over to the PARTIES partition,
 the BIOS must measure the BEER area into PCR[5]."
--TCPA PC Specific Implementation Specification
0
blp (3955)
2/24/2007 8:23:20 PM
In message <1172155799.675959@vasbyt.isdsl.net>
          news@absamail.co.za wrote:

> 2 independent issues here:
> 
> 1. I'm still searching for material re. "the mathematical/formal
> foundation[s] of computing" to help move programming away from an 
> art towards a science.  Patterns seem to be a heuristic step in that 
> direction ?

Have you read Edsger Dijkstra's "A discipline of programming"
or David Gries' "The science of programming"?



Regards,
Arjan
0
arjan1 (9)
2/24/2007 8:28:35 PM
Chris Uppal wrote:
> Paul Rubin wrote:
> 
>>I also have trouble figuring out what's wrong with a program based on
>>error messages from Hugs.  I don't know if that's from my own
>>inexperience or because the error messages actually aren't very good.
> 
> It would be nice if there was some sort of switch to tell the system "I
> know it's wrong, but I want to run it anyway", [...]

To the extent that this is possible, it can be done by replacing the term
that caused the compilation error with "undefined".
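
A minimal sketch (the names here are made up):

   -- Suppose the ill-typed term was  1 + "one".  Replacing it with
   -- undefined lets the rest of the module compile and run:
   foobar :: Integer
   foobar = undefined   -- was: 1 + "one"

   main :: IO ()
   main = print foobar  -- runs up to the point where foobar is
                        -- demanded, then fails with: Prelude.undefined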

-- 
David Hopwood <david.nospam.hopwood@blueyonder.co.uk>
0
2/24/2007 9:08:14 PM
David Hopwood <david.nospam.hopwood@blueyonder.co.uk> writes:
> > It would be nice if there was some sort of switch to tell the system "I
> > know it's wrong, but I want to run it anyway", [...]
> To the extent that this is possible, it can be done by replacing the term
> that caused the compilation error with "undefined".

I could imagine compiling it with Lispish semantics, i.e. sticking the
inferred type into the runtime object:

   (*) :: Integer -> Integer -> Integer   -- ordinary multiplication
   foobar = "foo" * "bar"                 -- attempt to multiply two strings

You'd get a runtime error when you try to actually evaluate foobar.
What good that would do, I don't know.
0
phr.cx (5493)
2/24/2007 9:17:08 PM
Joachim Durchholz wrote:
> Paul Rubin wrote:
> > Neelakantan Krishnaswami <neelk@cs.cmu.edu> writes:
> >> Step 2 is:
> >> 2. Algebraic structures == insanely effective design patterns for
> >> libraries
> > 
> > Interesting, but similar library functions are showing up in languages
> > like C++ and Python, without all that theory, or am I missing something?
> > 
> > It also seems to me that the mathematical hair in the Haskell stuff
> > that I look at comes more from mathematical logic than from algebra.
> >> I can't help wondering how much of this approach is really necessary.
> 
> I'm not sure that you understood that the term "algebraic structures"
> means just those sum types (unions-of-structs in C parlance), and
> possibly the associated machinery (pattern matching in particular).
> 
> Of course, "algebraic structure" has fewer concepts than
> "union-of-structs", and "union-of-structs" doesn't capture that the
> entire thing is typesafe.
> Propose a less intimidating terminology if you wish :-)

I think you misunderstood 'algebraic structure', especially as an 'insanely
effective design pattern for libraries'. The term here refers to the set of
algebraic laws that are satisfied by the operations exported from the
library. In most cases the interface to such a library /must/ make the
exported data types abstract so that the laws can be guaranteed to hold.
The so-called 'algebraic data types' you are referring to are merely the
basic building blocks, one could say they are '0-th order' algebraic
structures, that is, the laws that apply to them are the ones guaranteed by
the language alone.

The following paper was a real eye-opener for me. It explains how the right
algebraic structure leads to a simpler, yet more effective interface. It
also explains how to use such laws internally to tweak performance while
maintaining correctness.

homepages.inf.ed.ac.uk/wadler/papers/prettier/prettier.pdf
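
To give a flavour of what that means, here is a hedged sketch (names
hypothetical -- this is not Wadler's actual interface):

   -- An abstract Doc type whose exported operations satisfy algebraic
   -- laws; keeping the representation hidden is what guarantees them.
   module Doc (Doc, text, line, (<+>), render) where

   newtype Doc = Doc String

   text :: String -> Doc
   text = Doc

   line :: Doc
   line = Doc "\n"

   (<+>) :: Doc -> Doc -> Doc       -- associative, with  text ""  as
   Doc a <+> Doc b = Doc (a ++ b)   -- its unit: a monoid

   render :: Doc -> String
   render (Doc s) = s

   -- Laws clients may rely on (and the implementor may exploit):
   --   (x <+> y) <+> z   ==  x <+> (y <+> z)
   --   text s <+> text t ==  text (s ++ t)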

Cheers
Ben
0
2/24/2007 9:18:29 PM
On Sat, 24 Feb 2007 14:23:20 -0600, Ben Pfaff wrote
(in article <87hctbz4nb.fsf@blp.benpfaff.org>):

> "Arthur J. O'Dwyer" <ajonospam@andrew.cmu.edu> writes:
> 
>> Some interpreted languages (Turbo Pascal 3.0, Perl) will let you
>> start running a malformed program, bailing only when they get to the
>> unrecoverable error.
> 
> I used Turbo Pascal 3.0 back in, um, perhaps 1988 or so, 

I used it earlier than that, probably 3 or 4 years at least.

> and I don't remember it working that way.  

Neither do I.

> It compiled the whole program, then ran it.  

Right, no interpreter at all.  No idea what he's talking about.


-- 
Randy Howard (2reply remove FOOBAR)
"The power of accurate observation is called cynicism by those 
 who have not got it."  - George Bernard Shaw





0
randyhoward (4848)
2/24/2007 9:22:43 PM
Paul Rubin wrote:
> For amusement purposes, here's a
> syntax error that confused me.  I did manage to fix it by trial and
> error, but I'm still not absolutely clear on the exact workings of the
> rule that it broke.  I'd be interested to know if you can spot the
> error without trying to compile the program.  In my case, even with
> the error message I was not able to figure out what was wrong except
> through a bunch of experiments.
> 
> main :: IO ()
> main = do
>    putStr "Enter num1: "
>    n1 <- read1 "n1: "
>    putStr "enter operation: "
>    op <- getLine
>    n2 <- read1 "Enter num2: "
>    let func = case op of
>        "+" -> Just (+);
>        "-" -> Just (-);
>        "*" -> Just (*);
>        "/" -> Just (/);
>        "**" -> Just (**);
>        _ -> Nothing
> 
>    case func of 
>      Nothing -> print "unknown operator"
>      Just op -> print (n1 `op` n2)
> 
>    main   -- loop

I suppose it's something to do either with the indentation of the (first)
case branches or with the semicolons separating them (or both). The
semicolons are not necessary (since you are using layout anyway) and may
interfere with the 'let'. I'd leave them out. And the case branches ("+" ->
Just (+) etc...) should be indented at least one space further than the
start of the let item (starting with "func = case op of").

Haskell's layout rule really has its subtleties, especially in connection
with do-notation. There are some pitfalls that almost all beginners fall
into.
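
In other words, something like the following layout goes through (a
trimmed sketch -- I've dropped the read1 helper and the loop to keep
it self-contained):

   main :: IO ()
   main = do
     putStr "enter operation: "
     op <- getLine
     -- alternatives indented past the column of "func", no semicolons:
     let func = case op of
                  "+" -> Just (+)
                  "-" -> Just (-)
                  _   -> Nothing
     case func of
       Nothing -> putStrLn "unknown operator"
       Just f  -> print (3 `f` 4 :: Double)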

Cheers
Ben
0
2/24/2007 9:38:36 PM
Phil Armstrong wrote:
> Paul Rubin <http://phr.cx@nospam.invalid> wrote:
>> It's a lot of new stuff to try to internalize.  As a simple example,
>> I don't understand why (Maybe a) is a monad.  Past that, there's
>> a bunch of concepts like "higher kinded polymorphism" where I don't
>> even know what the words mean.
> 
> (Maybe a) happens to be a monad because it obeys the Monad laws.

More precisely, (Maybe a) /can be viewed as/ a monad by giving suitable
definitions of (>>=) and (return). Here, 'suitable' is of course meant to
entail that the monad laws are obeyed, but to be useful the monad should be
non-trivial, too.

Cheers
Ben
0
2/24/2007 9:45:05 PM
Benjamin Franksen <benjamin.franksen@bessy.de> writes:
> I suppose it's something to do either with the indentation of the (first)
> case branches or with the semicolons separating them (or both).

Oh woops, I put in the semicolons while trying to debug the problem
and then I forgot to take them out.  And yes, the problem was that the
case branches were not indented far enough.  It amazed me.  Coming
from Python, I was used to significant indentation, but Haskell's
indentation rules are more complex and seem to make less sense, and
they aren't described in the tutorial I looked at.

In Python, if you indent a line further to the right than the previous
line, the lexical scanner calls that an "indent" and if you indent it
further to the left than the previous line, the scanner calls it a
"dedent".  Python's "indent" and "dedent" are equivalent to begin/end
or left and right curly braces in other languages.  But the amount of
additional indentation is not significant.  Only the direction matters.
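
For contrast, in Haskell the exact column is what matters (a small
sketch):

   f x = a + b
     where
       a = x * 2   -- this first binding fixes the layout column
       b = x + 1   -- a sibling must start in exactly the same column;
                   -- indenting it further would continue the "a" binding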
0
phr.cx (5493)
2/24/2007 10:16:25 PM
Arthur J. O'Dwyer schrieb:
> With type inference, you need to figure out all the types in
> advance, whether they're on executable codepaths or not.

Sorry, that's plain wrong.
You can always assign an Error type to those names where type inference 
didn't give a meaningful result, and run the code anyway. Just make sure 
you have run-time checks in place when assigning to or from that name, 
that's all you need to do to get a perfectly running program (which may 
terminate with a type error, of course, but hey! - that's what was 
requested).
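
Something like the following hypothetical sketch (all names invented):

   -- Give the name that failed to type-check an "error type", and
   -- check at each use at run time instead:
   data Checked a = Ok a | TypeError String

   use :: Checked Int -> Int
   use (Ok n)        = n
   use (TypeError m) = error ("deferred type error: " ++ m)

   -- the compiler would substitute something like:
   bad :: Checked Int
   bad = TypeError "couldn't unify Bool with Int"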

> And with a compiled language, the compiler /must/ understand 
> everything in the program, because it has to translate it to machine 
> code.

That's even more wrong. You can do type checks with machine code just fine.

There's one thing you need to do: have the type information available at 
runtime. If you leave that out, then the run-time system will have 
trouble checking types (unsurprisingly).
I'm not sure how much of an impact on run-time system design that has. 
It would probably be part of making the run-time data structures 
printable (and without that "just running the code" doesn't make much 
sense, you want to see the code in action), so it's probably no 
additional overhead anyway.

Regards,
Jo
0
jo427 (1164)
2/24/2007 10:32:50 PM
On Feb 23, 12:13 pm, Markus E Leypold
<development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-leypold.de> wrote:
> "J Thomas" <jethom...@gmail.com> writes:
> > On Feb 23, 11:32 am, Joachim Durchholz <j...@durchholz.org> wrote:

> >> NP-completeness and undecidability just delineate what a knowledgeable
> >> programmer will simply refuse to do, on grounds that it can't be done.
>
> > No, that's a red herring. For some particular NP-complete problem you
> > might find that a solution for N=100 would take the current entire
> > world computing resources a million years to solve. But if your
> > customer wants a solution for N=50 and you can get your N=50 solution
> > in 250 milliseconds on an old PC, why refuse the job?
>
> > NP-completeness deserves at most a warning to the customer about
> > future upgrade problems.
>
> Ah - no. Some 15 years ago I had a customer who wanted a custom
> database (and UI) which would be loaded with ~2000 records of
> potential customers. The first thing he then did, was to load it with
> all ~20000 data sets from a CD directory (kind of an industry guide).
> Since the index building algorithm was all but efficient (we took some
> short cuts because due to limited memory we could not just sort in
> memory), the first build of the index of the larger database took more
> than 2 days.
>
> So scaling is not only an upgrade problem: Often your customer is a
> bit vague about the problem size and you should probably write scaling
> behaviour in the specs.

Yes, but at this point I believe that NP-complete is a red herring.
What the customer cares about is at what N the resources become an
issue, and at what N the solution becomes impractical.

As I understand it, proving a problem is NP-complete (or that it
isn't) will not in either case answer those questions. To know that
scaling issues will at some point make the solution completely
impractical doesn't at all say whether it's impractical for the
customer's uses. And vice versa, it can scale gently and be too
unwieldy even at the scale the customer needs now.

So it looks to me like finding out whether it's NP-complete is solving
the wrong problem. What you need to know is something else. Some of
the details you use to decide NP-completeness might be used for that
other question, though.

> >> (Similar to an engineer who will refuse to build a dam made of bubblegum.)
>
> > Doesn't it depend on the job? A small dam that needs only a short
> > lifespan might feasibly be made of bubblegum, though some other
> > material might fit the needs better and/or cheaper.
>
> Right. But without theory you might just use bubble gum for every dam,
> since you don't know about the principal restrictions of using bubble
> gum.

If you don't understand your materials you can expect problems. On the
other hand, civil engineering has traditionally operated with a very
large component of precedent. If a theoretician decides that a
particular procedure is unsafe, but it has been in widespread use
without incident, he's likely to generate lots of comments about
proofs that bumblebees can't fly. On the other hand if a design is
constructed with traditional safety margins much reduced because
theory says they're not needed, it's likely to get a lot of interest
while the engineers wait for it to fall down.

Theory is a useful supplement to practice. When you operate outside
your area of expertise it's all you have. And this is probably why
people are generally so hesitant to do significant projects outside
their areas of expertise.

0
jethomas5 (1449)
2/25/2007 4:00:19 AM
On Feb 24, 4:38 pm, Benjamin Franksen <benjamin.frank...@bessy.de>
wrote:
> Paul Rubin wrote:

> > For amusement purposes, here's a
> > syntax error that confused me.  I did manage to fix it by trial and
> > error, but I'm still not absolutely clear on the exact workings of the
> > rule that it broke.  I'd be interested to know if you can spot the
> > error without trying to compile the program.  In my case, even with
> > the error message I was not able to figure out what was wrong except
> > through a bunch of experiments.

> I suppose it's something to do either with the indentation of the (first)
> case branches or with the semicolons separating them (or both). The
> semicolons are not necessary (since you are using layout anyway) and may
> interfere with the 'let'. I'd leave them out. And the case branches ("+" ->
> Just (+) etc...) should be indented at least one space further than the
> start of the let item (starting with "func = case op of").
>
> Haskell's layout rule really has its subtleties, especially in connection
> with do-notation. There are some pitfalls that almost all beginners fall
> into.

This is why I say it's important for Forth to become more intuitive
for beginners. Each little trap that beginners predictably fall into
is a barrier, something that lengthens the learning curve.

Of course it's hard to expect the experts to change their habits for
beginners, when we have so many more experts than beginners. But
still....

0
jethomas5 (1449)
2/25/2007 4:11:20 AM
J Thomas wrote:

> proofs that bumblebees can't fly. On the other hand if a design is
> constructed with traditional safety margins much reduced because
> theory says they're not needed, it's likely to get a lot of interest
> while the engineers wait for it to fall down.

That is not the reason why structures fall down. Initially new
structural designs are built with a large safety margin. Subsequently
this safety margin is not reduced dramatically because of theory, but
is rather gradually lowered little by little for practical reasons
(cost, aesthetics, etc.) as people gain confidence in the design.
Eventually the safety margin is gone and the structure falls down.

See Henry Petroski's "Design Paradigms" for a thorough explanation
of this phenomenon.

-- 
mail1dotstofanetdotdk
0
breese (255)
2/25/2007 11:26:16 AM
J Thomas schrieb:
> On Feb 23, 11:32 am, Joachim Durchholz <j...@durchholz.org> wrote:
>> Tester schrieb:
>>
>>> http://en.wikipedia.org/wiki/NP-complete
>>> How is this likely to make practical computing more of a science?
>> NP-completeness and undecidability just delineate what a knowledgeable
>> programmer will simply refuse to do, on grounds that it can't be done.
> 
> No, that's a red herring.

Isn't :-)

Seriously, you're right that algorithms for NP-complete problems are 
occasionally useful. Even undecidable problems are tackled (some 
varieties of dependent type systems do that).

However, NP-complete and undecidable problems drastically reduce your 
design space, so the first reaction of a programmer asked to do such a 
thing will be a search for alternatives.

I.e. CS defines bounds. You can overstep them, but you pay a steep price.

Regards,
Jo
0
jo427 (1164)
2/25/2007 12:12:48 PM
<Paul Rubin <http://phr.cx@NOSPAM.invalid>> wrote:
> And then we have godawful
> software like Windows that crashes all the time; can we use (e.g.)
> constructive type theory to make programs that verifiably do what
> they're supposed to?

No.  People claim that they do this, but all they really prove is that 
one extrapolation of the requirement into a formal language (the 
program) is equivalent to another (the input to the validator, which is 
often confusingly called the "specification" or something like that, in 
order to distract you from the fact that it can be every bit as likely 
to contain bugs as the program).  Sometimes, the fact that both 
techniques, which are traditionally somewhat radically different from 
each other, agree can be useful information.  It probabilistically 
supports the proposition that the program is correct.  Any claim beyond 
this -- that such a method actually proves that the program contains no 
bugs or does "what it's supposed to do", for example -- is misleading 
and exaggerated.

At a lower level, there are many commonly used proof systems that prove 
the absence of specific program behaviors.  They are called type 
systems, and a good number of programming languages have them built in.

-- 
Chris Smith
0
cdsmith (3862)
2/25/2007 5:03:41 PM
On Feb 22, 3:13 pm, John Passaniti <n...@JapanIsShinto.com> wrote:
> In what sense is Forth even remotely a functional language?

> Functional programming languages avoid state and eschew
> mutable data.
>   Forth is (like any imperative programming language)
> explicit about state and freely mutates data.

I would suggest that functional programming languages are not nice
because they avoid those things; rather, they are nice for other
reasons, but those other reasons happen to require a lack of state and
mutable data.

For example, most people would identify "referential transparency" as
an important part of functional programming. Forth has quite strong
referential transparency, even when you use the parts of it that do
horrible things like mutate global state.

> Or put another way, languages like Joy show what Forth is
> missing if one wanted to do functional programming.

Hardly. Do you know Joy? I do. Joy is a toy language; Forth isn't
missing anything functional that it has (well, Joy does have way
better list manipulation, but that's not functional). Forth is a
mature and useful language that accidentally falls into the same
theoretical category as Joy was deliberately constructed for.

Forth doesn't fit the category perfectly, but it's an amazingly close
fit. Color Forth, by the way, happens to fit the category even more
closely.

-Billy

0
2/25/2007 7:47:21 PM
Arthur J. O'Dwyer wrote:

[me:]
> > It would be nice if there was some sort of switch to tell the
> > system "I know it's wrong, but I want to run it anyway", perhaps
> > implemented as some simple meta-interpreter.

>   I believe this thread is discussing languages which are high-level
> enough that "run it anyway" is a meaningless request. If the program
> is malformed, you can't "run it anyway"; there's no legal program
> present to run!

I don't think that's true at all.

Functional languages (real ones, I mean, not LISP) all have trivial
variations on lambda calculus as their runtime semantics (the
NON-trivial variation is in evaluation order, which is vitally
important, but irrelevant here).  It is simple to create an "untyped"
implementation of that concept.  For instance the earliest functional
programming language implementation that I, personally, know of
(remember that I don't count LISP) was Turner's SASL -- an untyped,
lazy, pure functional programming language (for his PhD in 1979).

His SASL interpreter was done in C, which meant he had to implement GC,
etc (3.5 K lines of code in total).  But these days nobody would follow
that path -- they'd probably write it /in/ Haskell, and the thing would
be trivial.

    -- chris
0
chris.uppal (3980)
2/25/2007 9:28:52 PM
Chris Uppal <chris.uppal@metagnostic.REMOVE-THIS.org> wrote:
> Functional languages (real ones, I mean, not LISP)

> For instance the earliest functional
> programming langage implementation that I, personally, know of
> (remember that I don't count LISP)

I'm curious why you don't count LISP as a functional language.  I can't 
stand LISP myself, but I don't see how you'd call it non-functional.  
The mutation features are not much different from ML's mutation features 
except that they aren't so clearly delineated so that mutation can 
pollute programs.  Other than that, the only significant difference I 
see between LISP and the pure lambda calculus is that function 
parameters are lists rather than being curried.

Is it one of those things, or something else?

-- 
Chris Smith
0
cdsmith (3862)
2/26/2007 1:47:09 AM
On Feb 24, 11:16 pm, Paul Rubin <http://phr...@NOSPAM.invalid> wrote:
>Coming
> from Python, I was used to significant indentation, but Haskell's
> indentation rules are more complex and seem to make less sense, and
> theyaren't described in the tutorial I looked at.

They *are* described, however, in the Haskell report (in the
appendix).


0
quetzalcotl (241)
2/26/2007 9:55:02 AM
Chris Smith wrote:

[me:]
> > Functional languages (real ones, I mean, not LISP)
....
> I'm curious why you don't count LISP as a functional language.  I
> can't stand LISP myself, but I don't see how you'd call it
> non-functional.

That Lisp has an important functional aspect is beyond dispute.  But
for me (and, as far as I know, for most people outside the US[*]) Lisp
is not a member of that category of languages known as Functional
Programming Languages.

([*] At least that's how it was when I last used a functional
programming language in anger -- years ago.)

Why ?  A matter of history, a matter of emphasis.

History: it never has been -- Lisp has always been in a category by
itself; functional programming is a separate, albeit related,
category. <shrug/>

Emphasis: Lisp is to a large degree about lists.  Remove lists from
Lisp and you are left with nothing.  Functional Programming languages
are about higher-order functions; remove lists from them and you are
left with an FP language with reduced expressiveness.  Applicative
evaluation order.  Currying.  Pattern matching.  They all add up to a
style and feel which is common to all functional languages, and which
is not shared by Lisp.  (I know there are exceptions to all the above
-- ML has the "wrong" evaluation order, for instance -- it's a "family
resemblence" thing, not a list of formal criteria).

OTOH, Lisp has a feel and a collection of features of its own, which is
not shared by functional programming languages.

I see no benefit in conflating the two.

    -- chris
0
chris.uppal (3980)
2/26/2007 1:17:10 PM
"Chris Uppal" <chris.uppal@metagnostic.REMOVE-THIS.org> writes:
>Remove lists from Lisp and you are left with nothing.

  (Try to post this on comp.lang.lisp, and see what happens.)

                           ~

  A functional language /enables/ functional programming,
  which means:

    - functions are values and at the same time can be
      applied (to an argument)

    - it has function literals for a wide class of functions

    - functions can be returned from functions

    - evaluation of arguments can be suppressed somewhat,
      so as to implement "if" or "cond" as a function

    - it has a garbage collector

    - it has a constructor for some kind of container
      (like a list)

  A /pure/ functional language /enforces/ functional programming.

    - no inbuilt notion of state or statements or an
      execution sequence in time, only a notation for
      values being specified in terms of other values.
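
  A small Haskell sketch of the first three points above:

    compose :: (b -> c) -> (a -> b) -> (a -> c)  -- a function as a value
    compose f g = \x -> f (g x)                  -- built from a literal and
                                                 -- returned as a result

    main :: IO ()
    main = print (compose (+ 1) (* 2) 10)        -- prints 21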

0
ram (2986)
2/26/2007 1:31:42 PM
In article <xn0f2x2j980wlzp000@news.gradwell.net>,
 "Chris Uppal" <chris.uppal@metagnostic.REMOVE-THIS.org> wrote:

> Chris Smith wrote:
> 
> [me:]
> > > Functional languages (real ones, I mean, not LISP)
> ...
> > I'm curious why you don't count LISP as a functional language.  I
> > can't stand LISP myself, but I don't see how you'd call it
> > non-functional.
> 
> That Lisp has an important functional aspect is beyond dispute.  But
> for me (and, as far as I know, for most people outside the US[*]) Lisp
> is not a member of that category of languages known as Functional
> Programming Languages.
> 
> ([*] At least that's how it was when I last used a functional
> programming language in anger -- years ago.)
> 
> Why ?  A matter of history, a matter of emphasis.
> 
> History: it never has been -- Lisp has always been in a category by
> itself, functional, programming is a separate, albeit related,
> category. <shrug/>
> 
> Emphasis: Lisp is to a large degree about lists.

I'd say you are stuck even before 1958.

Lisp is a family of programming languages.
LISP (all caps) is typically used to denote some ancient dialect of Lisp 
btw.

In the family of Lisp typical dialects are Scheme and Common Lisp.
Both are not pure FPLs. Both support a functional programming
style. Scheme a bit more than Common Lisp. But Common Lisp also
carries the functional programming style over to  object-oriented
programming (which is not provided in Scheme by default).
In Common Lisp, functions are objects, methods
are objects, there are meta-classes for those and so on.
A higher-order functional programming style is quite
common (!) in Common Lisp - even with CLOS. More so in Scheme.

It is true that Scheme and Common Lisp are quite different from Haskell, 
SML and some other FPLs.

>  Remove lists from
> Lisp and you are left with nothing.  Functional Programming languages
> are about higher order functions,

I use higher-order functions in Lisp all day. Thank you. It is
hard to write any Lisp code ignoring higher-order functions.

> remove lists from them and you are
> left with an FP language with reduced expressiveness.  Applicative
> evaluation order.  Currying.  Pattern matching.

Pattern matching has nothing to do with FP. (like it might be
that some cars have automatic transmission for convenience,
but a car is perfectly a car with manual transmission).

>  They all add up to a
> style and feel which is common to all functional languages, and which
> is not shared by Lisp.

Programming in Haskell is quite different from programming in SML. 
Laziness, Monads, Type Classes, Syntax (significant whitespace), etc.
Erlang is different again. J? Mercury? Oz? Clean?


>  (I know there are exceptions to all the above
> -- ML has the "wrong" evaluation order, for instance -- it's a "family
> resemblance" thing, not a list of formal criteria).
> 
> OTOH, Lisp has a feel and a collection of features of its own, which is
> not shared by functional programming languages.
> 
> I see no benefit in conflating the two.

FP is as vague a term as OOP is nowadays. It is almost useless.
Inclusion or exclusion is mostly political.

> 
>     -- chris
0
joswig8642 (2203)
2/26/2007 2:06:03 PM
In article <Lisp-20070226142541@ram.dialup.fu-berlin.de>,
 ram@zedat.fu-berlin.de (Stefan Ram) wrote:

> "Chris Uppal" <chris.uppal@metagnostic.REMOVE-THIS.org> writes:
> >Remove lists from Lisp and you are left with nothing.
> 
>   (Try to post this on comp.lang.lisp, and see what happens.)
> 
>                            ~
> 
>   A functional language /enables/ functional programming,
>   which means:
> 
>     - functions are values and at the same time can be
>       applied (to an argument)
> 
>     - it has function literals for a wide class of functions
> 
>     - functions can be returned from functions
> 
>     - evaluation of arguments can be suppressed somewhat,
>       so as to implement �if� or �cond� as a function
> 
>     - it has a garbage collector
> 
>     - it has a constructor for some kind of container
>       (like a list)
> 
>   A /pure/ functional language /enforces/ functional programming.
> 
>     - no inbuilt notion of state or statements or an
>       execution sequence in time, only a notation for
>       values being specified in terms of other values.

Which makes Haskell 'pure'. But not SML. Which I would say
is a much more fundamental difference than, say,  having
pattern matching or not.
Probably I'm not telling anything new. ;-) But maybe it's new for C.U.?
0
joswig8642 (2203)
2/26/2007 2:09:51 PM
"Chris Uppal" <chris.uppal@metagnostic.REMOVE-THIS.org> writes:

> Arthur J. O'Dwyer wrote:
>
> [me:]
>> > It would be nice if there was some sort of switch to tell the
>> > system "I know it's wrong, but I want to run it anyway", perhaps
>> > implemented as some simple meta-interpreter.
>
>>   I believe this thread is discussing languages which are high-level
>> enough that "run it anyway" is a meaningless request. If the program
>> is malformed, you can't "run it anyway"; there's no legal program
>> present to run!
>
> I don't think that's true at all.
>
> Functional languages (real ones, I mean, not LISP) all have trivial
> variations on lambda calculus as their runtime semantics (the
> NON-trivial variation is in evaluation order, which is vitally
> important, but irrelevant here).  It is simple to create an "untyped"
> implementation of that concept.  For instance the earliest functional
> programming language implementation that I, personally, know of
> (remember that I don't count LISP) was Turner's SASL -- an untyped,
> lazy, pure functional programming language (for his PhD in 1979).
>
> His SASL interpreter was done in C, which meant he had to implement GC,
> etc (3.5 K lines of code in total).  But these days nobody would follow
> that path -- they'd probably write it /in/ Haskell, and the thing would
> be trivial.
>
>     -- chris


And how does that relate to

  "If the program is malformed, you can't "run it anyway"; there's no
   legal program present to run!"

?

Regards -- Markus




0
2/26/2007 4:13:53 PM
"J Thomas" <jethomas5@gmail.com> writes:

> On Feb 24, 4:38 pm, Benjamin Franksen <benjamin.frank...@bessy.de>
> wrote:
>> Paul Rubin wrote:
>
>> > For amusement purposes, here's a
>> > syntax error that confused me.  I did manage to fix it by trial and
>> > error, but I'm still not absolutely clear on the exact workings of the
>> > rule that it broke.  I'd be interested to know if you can spot the
>> > error without trying to compile the program.  In my case, even with
>> > the error message I was not able to figure out what was wrong except
>> > through a bunch of experiments.
>
>> I suppose it's something to do either with the indentation of the (first)
>> case branches or with the semicolons separating them (or both). The
>> semicolons are not necessary (since you are using layout anyway) and may
>> interfere with the 'let'. I'd leave them out. And the case branches ("+" ->
>> Just (+) etc...) should be indented at least one space further than the
>> start of the let item (starting with "func = case op of").
>>
>> Haskell's layout rule really has its subtleties, especially in connection
>> with do-notation. There are some pitfalls that almost all beginners fall
>> into.
>
> This is why I say it's important for Forth to become more intuitive
                                       ^^^^^
I completely agree with that. You have my support,

Regards -- Markus

0
2/26/2007 4:18:38 PM
"J Thomas" <jethomas5@gmail.com> writes:

> On Feb 23, 12:13 pm, Markus E Leypold
> <development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-leypold.de> wrote:
>> "J Thomas" <jethom...@gmail.com> writes:
>> > On Feb 23, 11:32 am, Joachim Durchholz <j...@durchholz.org> wrote:
>
>> >> NP-completeness and undecidability just delineate what a knowledgeable
>> >> programmer will simply refuse to do, on grounds that it can't be done.
>>
>> > No, that's a red herring. For some particular NP-complete problem you
>> > might find that a solution for N=100 would take the current entire
>> > world computing resources a million years to solve. But if your
>> > customer wants a solution for N=50 and you can get your N=50 solution
>> > in 250 milliseconds on an old PC, why refuse the job?
>>
>> > NP-completeness deserves at most a warning to the customer about
>> > future upgrade problems.
>>
>> Ah - no. Some 15 years ago I had a customer who wanted a custom
>> database (and UI) which would be loaded with ~2000 records of
>> potential customers. The first thing he then did, was to load it with
>> all ~20000 data sets from a CD directory (kind of an industry guide).
>> Since the index building algorithm was all but efficient (we took some
>> short cuts because due to limited memory we could not just sort in
>> memory), the first build of the index of the larger database took more
>> than 2 days.
>>
>> So scaling is not only an upgrade problem: Often your customer is a
>> bit vague about the problem size and you should probably write scaling
>> behaviour in the specs.
>
> Yes, but at this point I believe that NP-complete is a red herring.
> What the customer cares about is at what N the resources become an
> issue, and at what N the solution becomes impractical.

The OP's problem was "what the **** is all that theory good for". Well:
You won't be able to discuss "upgrade problems" or "scaling" or even
see that there is a problem (and will stay here) if you don't know
about NP-completeness etc.

>
> As I understand it, to prove a problem is NP-complete or prove it
> isn't, will not in either case answer those questions. To know that
> scaling issues will at some point make the solution completely
> impractical doesn't at all say whether it's impractical for the
> customer's uses. And vice versa, it can scale gently and be too
> unwieldy even at the scale the customer needs now.
>
> So it looks to me like finding out whether it's NP-complete is solving
> the wrong problem. What you need to know is something else. Some of

? Whoever said that?

> the details you use to decide NP-completely might be used for that
> other question, though.

>> >> (Similar to an engineer who will refuse to build a dam made of bubblegum.)
>>
>> > Doesn't it depend on the job? A small dam that needs only a short
>> > lifespan might feasibly be made of bubblegum, though some other
>> > material might fit the needs better and/or cheaper.
>>
>> Right. But without theory you might just use bubble gum for every dam,
>> since you don't knoe about the principal restrictions of using bubble
>> gum.
>
> If you don't understand your materials you can expect problems. On the

Same applies to CS and algorithms and their complexity.

> other hand, civil engineering has traditionally operated with a very
> large component of precedent. If a theoretician decides that a
> particular procedure is unsafe, but it has been in widespread use
> without incident, he's likely to generate lots of comments about
> proofs that bumblebees can't fly. On the other hand if a design is
> constructed with traditional safety margins much reduced because
> theory says they're not needed, it's likely to get a lot of interest
> while the engineers wait for it to fall down.

> Theory is a useful supplement to practice. When you operate outside

Supplement? It's actually the _fundament_ of practice. If not, in
chemistry we'd still be searching how to make gold,
because "it might just work this time".

> your area of expertise it's all you have. And this is probably why
> people are generally so hesitant to do significant projects outside
> their areas of expertise.


Regards -- Markus



0
2/26/2007 4:24:18 PM
Rainer Joswig <joswig@lisp.de> writes:

> FP is as vague a term as OOP is nowadays. It is almost useless.
> Inclusion or exclusion is mostly political.

A better term -- one I heard first from Andrew Appel (although I am
not sure where it originated) -- is "value-oriented programming",
which hits the nail on the head much better.

Whether a language is functional is (to me) a matter of degree:
Haskell is certainly functional (unless you program in the IO monad,
in which case it is still functional, but more so only in name, and
less so in spirit).

ML is a bit less functional yet, since you are forced to program in
the IO monad all the time.  Still, most of its data structures are
pure, and there is a significant and useful subset of the language
where programmers, compilers, and program analysis tools can safely
rely on freedom from side effects.

Scheme is even less functional, although it is still quite close to
the ML.  However, there are significantly fewer "pure" constructs.
Almost anything involving structured data has side effects.  (CONS in
Scheme has a side effect, and even LAMBDA does, for crying out loud!)

Common Lisp and other Lisp 2s is still further removed, as they give
up on the notion of functions being "just ordinary data" by making a
distinction between the rules of evaluation in operator position and
other positions.

That's my take on it, anyway.

> Pattern matching has nothing to do with FP. (like it might be
> that some cars have automatic transmission for convenience,
> but a car is perfectly a car with manual transmission).

To me, as the elimination form for sums, pattern matching is an
important aspect of /typed/ functional programming.
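
For instance (a tiny sketch):

   data Shape = Circle Double | Rect Double Double

   area :: Shape -> Double        -- the match is how a value of a sum
   area (Circle r) = pi * r * r   -- type is consumed: one equation per
   area (Rect w h) = w * h        -- summand, and the compiler can
                                  -- check exhaustiveness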

Matthias
0
find19 (1244)
2/27/2007 3:34:49 AM
In article <m2hct82rza.fsf@hanabi.local>,
 Matthias Blume <find@my.address.elsewhere> wrote:

> Rainer Joswig <joswig@lisp.de> writes:
> 
> > FP is as vague a term as OOP is nowadays. It is almost useless.
> > Inclusion or exclusion is mostly political.
> 
> A better term -- one I heard first from Andrew Appel (although I am
> not sure where it originated) -- is "value-oriented programming",
> which hits the nail on the head much better.

A Lisp programmer knows the value of everything, but the cost of 
nothing. -- Alan Perlis
 
> Whether a language is functional is (to me) a matter of degree:
> Haskell is certainly functional (unless you program in the IO monad,
> in which case it is still functional, but more so only in name, and
> less so in spirit).

The not-side-effect-free outside world makes a lot of problems. ;-)

> ML is a bit less functional yet, since you are forced to program in
> the IO monad all the time.  Still, most of its data structures are
> pure, and there is a significant and useful subset of the language
> where programmers, compilers, and program analysis tools can safely
> rely on freedom from side effects.

How practical is it in 'reality'?
Say, you don't (re)use hash-tables and arrays in 'typical' ML
programs?

> Scheme is even less functional, although it is still quite close to
> the ML.  However, there are significantly fewer "pure" constructs.
> Almost anything involving structured data has side effects.  (CONS in
> Scheme has a side effect, and even LAMBDA does, for crying out loud!)

What is the LAMBDA or CONS side effect?

> Common Lisp and other Lisp 2s is still further removed, as they give
> up on the notion of functions being "just ordinary data" by making a
> distinction between the rules of evaluation in operator position and
> other positions.

For my practical programming purposes, this is mostly a
complication, but not a limitation.

> 
> That's my take on it, anyway.
> 
> > Pattern matching has nothing to do with FP. (like it might be
> > that some cars have automatic transmission for convenience,
> > but a car is perfectly a car with manual transmission).
> 
> To me, as the elimination form for sums, pattern matching is an
> important aspect of /typed/ functional programming.

If you drive a car with automatic transmission all the time, you
start to think that it is necessary to have it.
If you remove or add pattern matching, it does not make a
programming language more or less 'functional', IMHO.


> 
> Matthias
0
joswig8642 (2203)
2/27/2007 8:15:45 AM
> "Chris Uppal" <chris.uppal@metagnostic.REMOVE-THIS.org> writes:
> > Functional languages (real ones, I mean, not LISP) all have trivial
> > variations on lambda calculus as their runtime semantics (the
> > NON-trivial variation is in evaluation order, which is vitally
> > important, but irrelevant here).  It is simple to create an "untyped"
> > implementation of that concept.

Markus E Leypold wrote:
> And how does that relate to
> 
>   "If the program is malformed, you can't "run it anyway"; there's no
>    legal program present to run!"
> 
> ?

Simple.  The error messages from Hugs are almost certainly type errors.  
In that case, they are complaining that something can go wrong, but the 
program still has well-defined semantics.  If one has trouble 
identifying the meaning of the type error, one could just run the 
program and try to watch something go wrong, and thereby identify the 
meaning of the error.

That is assuming we're talking about type errors; but in practice, we 
almost always are.  Syntax errors are comparatively rare (except for 
beginners when they have to do with Haskell's bizarre indentation rules) 
and easy to fix.  I think you can assume Chris's comment was intended 
for type errors only.

-- 
Chris Smith
0
cdsmith (3862)
2/27/2007 3:27:04 PM
Markus E Leypold wrote:
> "J Thomas" <jethomas5@gmail.com> writes:

   ...

>> Theory is a useful supplement to practice. When you operate outside
> 
> Supplement? It's actually the _fundament_ of practice. If not, in
> chemistry we'd still be searching how to make gold,
> because "it might just work this time".

Theory developed to explain practice. Practice came first. I recall a 
well written eleventh-century treatise titled something like "How to 
Build a Thirty-Foot Bridge with Twelve-Foot Timbers". (Wood was becoming 
scarce in England around then.) No theory, but instructions I could 
follow. The heuristics were confined to footings and piers.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
2/27/2007 3:29:19 PM
Chris Smith <cdsmith@twu.net> writes:

>> "Chris Uppal" <chris.uppal@metagnostic.REMOVE-THIS.org> writes:
>> > Functional languages (real ones, I mean, not LISP) all have trivial
>> > variations on lambda calculus as their runtime semantics (the
>> > NON-trivial variation is in evaluation order, which is vitally
>> > important, but irrelevant here).  It is simple to create an "untyped"
>> > implementation of that concept.
>
> Markus E Leypold wrote:
>> And how does that relate to
>> 
>>   "If the program is malformed, you can't "run it anyway"; there's no
>>    legal program present to run!"
>> 
>> ?
>
> Simple.  The error messages from Hugs are almost certainly type errors.  
> In that case, they are complaining that something can go wrong, but the 
> program still has well-defined semantics.  

No, I don't think so. The set of all well-typed language constructs
contains the set of all programs with well-defined semantics. The
language definitions of Haskell and the ML languages I know of are such
that there is no such thing as a "badly typed program". "Well typed"
is a requirement for "being a program". If it doesn't type, there is
only some text that could be parsed into tokens (it adheres to the
lexis of the language), and could perhaps even be used to construct a
syntax tree (it adheres to the syntax of the language), but since it
couldn't be typed it's not a program. The semantics is only defined on
(well-typed) programs.

BTW it is exactly your kind of thinking -- that "well typed",
"compiled", "can be run", "has well defined semantics" are disjoint
concepts -- that makes C such a mess (at least for people who haven't
yet developed reflexes to stay well clear of areas of doubtful
reputation).


> If one has trouble identifying the meaning of the type error, one
> could just run the program and try to watch something go wrong, and
> thereby identify the meaning of the error.

Hardly. Doesn't sound like a good idea. Constructing a test case to
find the "error" would be tantamount to understanding the cause of
the type error in the first place. No need to run the non-program then.

> That is assuming we're talking about type errors; but in practice, we 
> almost always are.  

> Syntax errors are comparatively rare (except for 
> beginners when they have to do with Haskell's bizarre indentation rules) 
> and easy to fix.  I think you can assume Chris's comment was intended 
> for type errors only.

I know. I still think it a really bad idea. Esp. from a sociological
point of view: we know how warnings from the C compiler are usually
treated by people who want to get their stuff done before the
deadline: --no-warn-all-errors (...).

Regards -- Markus

0
2/27/2007 3:49:13 PM
Chris Smith wrote:

> That is assuming we're talking about type errors; but in practice, we 
> almost always are.  Syntax errors are comparatively rare (except for 
> beginners when they have to do with Haskell's bizarre indentation
> rules) and easy to fix.  I think you can assume Chris's comment was
> intended for type errors only.

It was -- in fact it had not even occurred to me that the OP might be
getting any /other/ kind of error from HUGS...

(Not an unreasonable assumption, given the OP's stated difficulty with
the type system -- but if I'd realised I was making that assumption
then I'd have been more explicit.)

    -- chris

0
chris.uppal (3980)
2/27/2007 4:05:29 PM
In article <joswig-23FF7D.09154527022007@news-europe.giganews.com>, Rainer 
Joswig wrote:
> 
> How practical is it in 'reality'?  Say, you don't (re)use
> hash-tables and arrays in 'typical' ML programs?

For mappings, I normally use a purely functional balanced tree of some
kind rather than a hash table.
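
For concreteness, a minimal sketch of that style using Haskell's
Data.Map (a balanced-tree map; the functional map libraries for ML
work the same way):

   import qualified Data.Map as Map

   main :: IO ()
   main = do
     let m0 = Map.fromList [("one", 1), ("two", 2)]
         m1 = Map.insert "three" 3 m0     -- returns a *new* map
     print (Map.member "three" m0)        -- False: m0 is untouched
     print (Map.member "three" m1)        -- True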

I use arrays very rarely, and in most of those cases I only need the
constant time lookup, so I don't ever modify an array after its
creation. In fact, I honestly can't remember the last time I needed to
modify an array after creation. (For example, in a state machine
implementation I might end with an array to hold the transition table,
which never gets modified after creation.)

This is real in the sense that it's how I program, but I dunno if that's
realistic enough for you. :)

> What is the LAMBDA or CONS side effect?

Scheme has the eq? primitive which tests for object identity, and cons
and lambda allocate memory in the heap. That means that 

  (let ((x (cons 1 2))
        (y (cons 1 2)))
    (cons x y))

and 

  (let ((x (cons 1 2)))
    (cons x x))

are observably different. (In fact, Scheme cons cells are mutable via
set-car! and set-cdr!, but eq? is enough to make allocation visible. 
Ocaml has the same problem/feature as Scheme, via its == operator, but
SML does not.)

-- 
Neel R. Krishnaswami
neelk@cs.cmu.edu
0
neelk (298)
2/27/2007 4:13:01 PM
In article <MPG.204dfbb4fbe8f08c989823@news.altopia.net>, Chris Smith wrote:
> 
> Simple.  The error messages from Hugs are almost certainly type
> errors.  In that case, they are complaining that something can go
> wrong, but the program still has well-defined semantics.  If one has
> trouble identifying the meaning of the type error, one could just
> run the program and try to watch something go wrong, and thereby
> identify the meaning of the error.

With some effort, you could make this true for ML, but you definitely
cannot for Haskell.  This is because Haskell relies on typing to
resolve overloading, and so without a type you don't necessarily have
a program to run.
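
A concrete sketch of the problem (the binding names are mine):

   -- read :: Read a => String -> a  -- which parsing code runs depends
   -- on the type the context demands:
   n :: Int
   n = read "123"     -- fine: the signature selects the Int instance

   -- x = read "123"  -- rejected: ambiguous type; until the instance
   --                 -- is resolved there is simply no code to execute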


-- 
Neel R. Krishnaswami
neelk@cs.cmu.edu
0
neelk (298)
2/27/2007 4:26:27 PM
Jerry Avins wrote:
>> Supplement? It's actually the _fundament_ of practice. If not, in
>> chemistry we'd still be searching how to make gold,
>> because "it might just work this time".
> 
> Theory developed to explain practice. Practice came first. I recall a 
> well written eleventh-century treatise titled something like "How to 
> Build a Thirty-Foot Bridge with Twelve-Foot Timbers". (Wood was becoming 
> scarce in England around then.) No theory, but instructions I could 
> follow. The heuristics were confined to footings and piers.
> 
> Jerry

Nice example, but how many trial bridges have collapsed to get to that 
invention? IMO theory helps us to save time and costs - often just by 
counter-proving wrong assumptions. And sometimes you can reach new 
results by following the "mathematical mechanics" in a theory. Not 
everybody is a Stephen Hawking who has to rotate quantum physics in his 
mind without a helpful sheet of scribbling paper.

Andreas
-------
MinForth: http://minforth.net.ms
0
akk1 (50)
2/27/2007 8:31:26 PM
Neelakantan Krishnaswami wrote:

> In article <MPG.204dfbb4fbe8f08c989823@news.altopia.net>, Chris Smith
> wrote:
> > 
> > Simple.  The error messages from Hugs are almost certainly type
> > errors.  In that case, they are complaining that something can go
> > wrong, but the program still has well-defined semantics.  If one has
> > trouble identifying the meaning of the type error, one could just
> > run the program and try to watch something go wrong, and thereby
> > identify the meaning of the error.
> 
> With some effort, you could make this true for ML, but you definitely
> cannot for Haskell.  This is because Haskell relies on typing to
> resolve overloading, and so without a type you don't necessarily have
> a program to run.

So that resolution would have to happen at runtime.  And if it didn't
resolve "correctly" (in the sense of "how it would have done at
compile-time if the type checker had worked") then /that/ is the error
which is discovered at runtime.

Some languages need the type system before they can even /parse/ the
language (e.g. Avail, and  maybe Maude?).  But I don't think Haskell is
in that rather bizarre group, is it ?

    -- chris
0
chris.uppal (3980)
2/27/2007 8:44:55 PM
On Feb 27, 3:31 pm, Andreas Kochenburger <a...@privat.de> wrote:
> Jerry Avins wrote:
> >> Supplement? It's actually the _fundament_ of practice. If not, in
> >> chemistry we'd still be searching how to make gold,
> >> because "it might just work this time".
>
> > Theory developed to explain practice. Practice came first. I recall a
> > well written eleventh-century treatise titled something like "How to
> > Build a Thirty-Foot Bridge with Twelve-Foot Timbers". (Wood was becoming
> > scarce in England around then.) No theory, but instructions I could
> > follow. The heuristics were confined to footings and piers.
>
> > Jerry
>
> Nice example, but how many trial bridges have collapsed to get to that
> invention? IMO theory helps us to save time and costs - often just by
> counter-proving wrong assumptions. And sometimes you can reach new
> results by following the "mathematical mechanics" in a theory. Not
> everybody is a Stephen Hawking who has to rotate quantum physics in his
> mind without a helpful sheet of scribbling paper.

Correct theory helps us save time and costs.

Very few bridges get built to test how correct the theory is.

Meanwhile, there are engineers who spend their entire professional
lives re-using the rules they've learned that work within their area
of expertise. Try to get them to work outside their expertise and they
quite properly send you to somebody else. I haven't noticed so much of
that in programming outside SEI's CMM where organisations get the
highest ratings if they can manage to do the same project over and
over again. The software you're best qualified to write is the same
software you've already written a dozen times.

0
jethomas5 (1449)
2/27/2007 9:07:16 PM
"J Thomas" <jethomas5@gmail.com> writes:

> On Feb 27, 3:31 pm, Andreas Kochenburger <a...@privat.de> wrote:
>> Jerry Avins wrote:
>> >> Supplement? It's actually the _fundament_ of practice. If not, in
>> >> chemistry we'd still be searching how to make gold,
>> >> because "it might just work this time".
>>
>> > Theory developed to explain practice. Practice came first. I recall a
>> > well written eleventh-century treatise titled something like "How to
>> > Build a Thirty-Foot Bridge with Twelve-Foot Timbers". (Wood was becoming
>> > scarce in England around then.) No theory, but instructions I could
>> > follow. The heuristics were confined to footings and piers.
>>
>> > Jerry
>>
>> Nice example, but how many trial bridges have collapsed to get to that
>> invention? IMO theory helps us to save time and costs - often just by
>> counter-proving wrong assumptions. And sometimes you can reach new
>> results by following the "mathematical mechanics" in a theory. Not
>> everybody is a Stephen Hawking who has to rotate quantum physics in his
>> mind without a helpful sheet of scribbling paper.
>
> Correct theory helps us save time and costs.
>
> Very few bridges get built to test how correct the theory is.
>
> Meanwhile, there are engineers who spend their entire professional
> lives re-using the rules they've learned that work within their area

They haven't (usually) learned those rules by trial and error, though,
but by memorizing theory or the condensed rules for application of
theory during their professional education.

> of expertise. Try to get them to work outside their expertise and they
> quite properly send you to somebody else. I haven't noticed so much of
> that in programming outside SEI's CMM where organisations get the
> highest ratings if they can manage to do the same project over and
> over again. The software you're best qualified to write is the same
> software you've already written a dozen times.


Regards -- Markus

0
2/27/2007 9:25:47 PM

"Chris Uppal" <chris.uppal@metagnostic.REMOVE-THIS.org> writes:

> Neelakantan Krishnaswami wrote:
>
>> In article <MPG.204dfbb4fbe8f08c989823@news.altopia.net>, Chris Smith
>> wrote:
>> > 
>> > Simple.  The error messages from Hugs are almost certainly type
>> > errors.  In that case, they are complaining that something can go
>> > wrong, but the program still has well-defined semantics.  If one has
>> > trouble identifying the meaning of the type error, one could just
>> > run the program and try to watch something go wrong, and thereby
>> > identify the meaning of the error.
>> 
>> With some effort, you could make this true for ML, but you definitely
>> cannot for Haskell.  This is because Haskell relies on typing to
>> resolve overloading, and so without a type you don't necessarily have
>> a program to run.
>
> So that resolution would have to happen at runtime.  And if it didn't
> resolve "correctly" (in the sense of "how it would have done at
> compile-time if the type checker had worked") then /that/ is the error
> which is discovered at runtime.
>
> Some languages need the type system before they can even /parse/ the
> language (e.g. Avail, and maybe Maude?).  But I don't think Haskell is
> in that rather bizarre group, is it?

I think it's a useless exercise. Sort of making a kind of Scheme from
a statically typed language. As I already wrote: It will not help
testing or finding the cause for type errors.

Regards -- Markus

0
2/27/2007 9:27:34 PM
On Tue, 27 Feb 2007 10:29:19 -0500, Jerry Avins <jya@ieee.org> wrote:

>Theory developed to explain practice. Practice came first.

And practice is extended by applying theory. Otherwise, it's _all_ just
trial and error.

Steve Schafer
Fenestra Technologies Corp.
http://www.fenestra.com/
0
steve2740 (70)
2/27/2007 9:57:40 PM
Chris Uppal <chris.uppal@metagnostic.remove-this.org> wrote in article <xn0f2ys4z9vb9mr000@news.gradwell.net> in comp.lang.functional:
> So that resolution would have to happen at runtime.  And if it didn't
> resolve "correctly" (in the sense of "how it would have done at
> compile-time if the type checker had worked") then /that/ is the error
> which is discovered at runtime.

Could you please describe how runtime resolution would work for the
expression

    negate (read "123")

?  Here "negate" negates a number, and the overloaded function "read"
converts a string to a value (which could be a number or a Boolean).
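
A minimal Haskell sketch of the ambiguity (the names below are
illustrative; note also that Haskell's numeric defaulting rules can
silently pick Integer in some contexts, which is again a compile-time
mechanism, not a runtime one):

    -- read is overloaded on its result type, negate on its argument:
    --   read   :: Read a => String -> a
    --   negate :: Num a => a -> a

    -- The composition never pins down the type variable:
    ambiguous :: (Read a, Num a) => a
    ambiguous = negate (read "123")

    -- Only an annotation or surrounding context resolves the
    -- overloading, and the choice changes which parser runs:
    asInteger :: Integer
    asInteger = negate (read "123")   -- -123
    asDouble  :: Double
    asDouble  = negate (read "123")   -- -123.0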

-- 
Edit this signature at http://www.digitas.harvard.edu/cgi-bin/ken/sig
I think that it is much more likely that the reports of flying saucers
are the result of the known irrational characteristics of terrestrial
intelligence rather than the unknown rational efforts of extraterrestrial
intelligence. -Richard Feynman
0
ccshan1 (27)
2/27/2007 10:15:44 PM
In article <slrneu8m4d.8fv.neelk@gs3106.sp.cs.cmu.edu>,
 Neelakantan Krishnaswami <neelk@cs.cmu.edu> wrote:

> In article <joswig-23FF7D.09154527022007@news-europe.giganews.com>, Rainer 
> Joswig wrote:
> > 
> > How practical is it in 'reality'?  Say, you don't (re)use
> > hash-tables and arrays in 'typical' ML programs?
> 
> For mappings, I normally use a purely functional balanced tree of some
> kind rather than a hash table.
> 
> I use arrays very rarely, and in most of those cases I only need the
> constant time lookup, so I don't ever modify an array after its
> creation. In fact, I honestly can't remember the last time I needed to
> modify an array after creation. (For example, in a state machine
> implementation I might end with an array to hold the transition table,
> which never gets modified after creation.)
> 
> This is real in the sense that it's how I program, but I dunno if that's
> realistic enough for you. :)

Ha, that's hard to say from here. ;-)
From my personal experience, I find it
difficult to work without mutable hashtables and arrays,
even though I could. Practical experience with 'production'
Lisp systems shows that manual code optimizations, to reuse
arrays for example, improve runtime performance.

> 
> > What is the LAMBDA or CONS side effect?
> 
> Scheme has the eq? primitive which tests for object identity, and cons
> and lambda allocate memory in the heap. That means that 
> 
>   (let ((x (cons 1 2))
>         (y (cons 1 2)))
>     (cons x y))
> 
> and 
> 
>   (let ((x (cons 1 2)))
>     (cons x x))
>
> are observably different.

Referential transparency or side effects? Isn't there a difference?

> (In fact, Scheme cons cells are mutable via
> set-car! and set-cdr!, but eq? is enough to make allocation visible.

Finite memory is already enough to make allocation visible.
 
> Ocaml has the same problem/feature as Scheme, via its == operator, but
> SML does not.)
0
joswig8642 (2203)
2/27/2007 10:26:03 PM
Rainer Joswig <joswig@lisp.de> writes:

> In article <m2hct82rza.fsf@hanabi.local>,
>  Matthias Blume <find@my.address.elsewhere> wrote:
>
>> Rainer Joswig <joswig@lisp.de> writes:
>> 
>> > FP is so vague as OOP is nowadays. It is almost useless.
>> > Inclusion or not inclusion is mostly political.
>> 
>> A better term -- one I heard first from Andrew Appel (although I am
>> not sure where it originated) -- is "value-oriented programming",
>> which hits the nail on the head much better.
>
> A Lisp programmer knows the value of everything, but the cost of 
> nothing. -- Alan Perlis

Indeed.  But that's not where the phrase originated, I think.  (It may
have been inspired by this quote, though.)

>> Whether a language is functional is (to me) a matter of degree:
>> Haskell is certainly functional (unless you program in the IO monad,
>> in which case it is still functional, but more so only in name, and
>> less so in spirit).
>
> The not-side-effect-free outside world makes a lot of problems. ;-)

No.  The beauty is that Haskell lets you choose effectful programming
when it is deemed necessary for some reason -- and backs this up with
a type system that clearly identifies pure parts of the code.
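
A minimal sketch of that separation (the names are illustrative):

    -- Pure: the type promises the result depends only on the argument.
    double :: Int -> Int
    double x = x + x

    -- Effectful: the IO in the type is the explicit marker; pure code
    -- cannot call this without the effect showing up in its own type.
    readAndDouble :: IO Int
    readAndDouble = do
      line <- getLine
      return (double (read line))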

>> ML is a bit less functional yet, since you are forced to program in
>> the IO monad all the time.  Still, most of its data structures are
>> pure, and there is a significant and useful subset of the language
>> where programmers, compilers, and program analysis tools can safely
>> rely on freedom from side effects.
>
> How practical is it in 'reality'?
> Say, you don't (re)use hash-tables and arrays in 'typical' ML
> programs?

Others have already answered.  I personally use immutable data
structures in ML all the time -- together with a judicious sprinkling
of mutable ones.  There is no reason for CONS (or any other datatype
constructor) to allocate mutable state.  Same for tuples and other
records, or closures.  When stateful computations are needed, this can
be requested explicitly.  Not every language construct has to be
infected by it.

>> Scheme is even less functional, although it is still quite close to
>> the ML.  However, there are significantly fewer "pure" constructs.
>> Almost anything involving structured data has side effects.  (CONS in
>> Scheme has a side effect, and even LAMBDA does, for crying out loud!)
>
> What is the LAMBDA or CONS side effect?

Cons allocates mutable state.  Lambda allocates a closure (which is
not mutable, but which has identity).

>> Common Lisp and other Lisp 2s are still further removed, as they give
>> up on the notion of functions being "just ordinary data" by making a
>> distinction between the rules of evaluation in operator position and
>> other positions.
>
> For my practical programming purposes, this is mostly a
> complication, but not a limitation.

Sure.  When I program in assembler, the complete lack of high-level
language features is also mostly a complication but not a
limitation. :-)

>> 
>> That's my take on it, anyway.
>> 
>> > Pattern matching has nothing to do with FP. (like it might be
>> > that some cars have automatic transmission for convenience,
>> > but a car is perfectly a car with manual transmission).
>> 
>> To me, as the elimination form for sums, pattern matching is an
>> important aspect of /typed/ functional programming.
>
> If you drive a car with automatic transmission all the time, you
> start to think that it is necessary to have it.

Speak for yourself.  This statement is not true for me.

> If you remove or add pattern matching, it does not make a
> programming language more or less 'functional', IMHO.

I realize that we may be talking about two different (albeit related)
issues here:

I was talking about matching against simple (non-nested) patterns.
The corresponding "case" expression is commonly used as the
elimination form of sum types.  I don't see any other (equally simple,
yet fundamentally different, and still safe) way of providing the
same functionality.  (Church-encoding "case" does not count.)

As for nested patterns -- I completely agree that they are nice to
have, but not essential.
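
A minimal sketch of the kind of thing I mean (the Shape type below is
only illustrative):

    -- A sum type ...
    data Shape = Circle Double | Rect Double Double

    -- ... and its elimination form: one simple, non-nested pattern
    -- per summand, checked for exhaustiveness by the compiler.
    area :: Shape -> Double
    area s = case s of
      Circle r -> pi * r * r
      Rect w h -> w * h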

Matthias
0
find19 (1244)
2/27/2007 10:29:07 PM
On Feb 27, 4:25 pm, Markus E Leypold
<development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-leypold.de> wrote:
> "J Thomas" <jethom...@gmail.com> writes:

> >> Nice example, but how many trial bridges have collapsed to get to that
> >> invention?
>
> > Correct theory helps us save time and costs.
>
> > Very few bridges get built to test how correct the theory is.
>
> > Meanwhile, there are engineers who spend their entire professional
> > lives re-using the rules they've learned that work within their area
>
> They haven't (usually) learned those rules by trial and error, though,
> but by memorizing theory or the condensed rules for application of
> theory during their professional education.

I am not a civil engineer, so maybe I shouldn't pontificate too much
about how their programs are structured. Still, it looks to me like
it's a very different domain. Whenever an important bridge fails, they
have a complex inquest where they try to figure out why it happened,
and which safety margins need to be increased. They do a lot of theory
but it's in response to actual data. A lot of their theory isn't very,
well, theoretical.

When software developers have a large project fail, how often is there
a major investigation to find out just why it happened? How often are
the results of the investigation published? I occasionally see the
factoid that more than half of all large software projects fail,
though of course "large software project" is left undefined, and if
the size is increased sufficiently it becomes trivially true -- of
course more than half of the largest software projects fail. But if
more than half of the largest bridge projects failed....

So anyway, proceeding as if this factoid meant something, the obvious
conclusion is that we know how to build bridges but -- theory aside --
we don't know how to run large software projects. It's a different
domain.

And one of the differences is that engineers know how to keep small
failures from morphing into big failures. They know how to make 27
cubic meters of concrete even more stable than 9 cubic meters of
concrete. Within wide limits, a bigger beam is stronger than a smaller
beam. But software people don't know how to make 10,000 lines of code
more stable than 1000 LOC, and in general a bigger program is not more
robust than a smaller program.

It might be a better comparison if every bridge had to be made
entirely of glass. Tighten a glass nut on a glass bolt too tight and
something cracks. A glass ballast might get a tiny crack that expands
conchoidally until the whole thing splits. A car breaking through the
glass guardrails on an upper deck might spin as it falls and hit the
lower deck hard enough to break it all the way across....  Lots of
minor-looking flaws that can turn into fatal flaws.

And then it would also be more like software if the engineers simply
accepted that things go wrong sometimes.

"Er, well, it was an accident."
"Ho! Vell, zee? An exident. Vell, vun ting ve Jagerkin *understend* is
dat crezy exidents *happen*, right boyz?"
"Hoo boy, yez."
"Dot's de truth."

If software was like civil engineering, it would be a rare disaster
when a software project failed. And it would be a scandal when a bug
fix was needed. We would have a large body of shared experience, with
a large collection of tools that work reliably. But somehow software
is a lot more like military R&D contracting.

0
jethomas5 (1449)
2/27/2007 10:48:10 PM
In article <m17iu38cb0.fsf@hana.uchicago.edu>,
 Matthias Blume <find@my.address.elsewhere> wrote:

....

> >> Scheme is even less functional, although it is still quite close to
> >> the ML.  However, there are significantly fewer "pure" constructs.
> >> Almost anything involving structured data has side effects.  (CONS in
> >> Scheme has a side effect, and even LAMBDA does, for crying out loud!)
> >
> > What is the LAMBDA or CONS side effect?
> 
> Cons allocates mutable state.  Lambda allocates a closure (which is
> not mutable, but which has identity).

I asked that in another posting. Are we talking about side effects
or referential transparency? Whether CONS allocates mutable state
or not does not mean that CONS has side effects. The side effect
happens if I (avoiding the 'you' so I can speak of myself)
change the CONS cell later. As long as I don't do that, my
programs are side effect free - the usage of the CONS function
won't change that.

> >> Common Lisp and other Lisp 2s are still further removed, as they give
> >> up on the notion of functions being "just ordinary data" by making a
> >> distinction between the rules of evaluation in operator position and
> >> other positions.
> >
> > For my practical programming purposes, this is mostly a
> > complication, but not a limitation.
> 
> Sure.  When I program in assembler, the complete lack of high-level
> language features is also mostly a complication but not a
> limitation. :-)

I don't think this is on the same level. Having higher-order
functions, but using a slightly different notation/mechanism,
doesn't really complicate the use of higher-order functions
that much. In Lisp 2 higher-order functions are used all the time.
Lots of them are part of the language standard.
In assembler I haven't seen them often (I should check
the assembler of my Lisp machine), though
I haven't looked at x86/PowerPC/... assembler much lately.

....
0
joswig8642 (2203)
2/27/2007 11:31:21 PM
Rainer Joswig <joswig@lisp.de> writes:

> Whether CONS allocates mutable state or not does not mean that CONS
> has side effects.

That is not the way I (and many others) use the term "effect".

> The side effect happens if I (avoiding the 'you' so I can speak of
> myself) change the CONS cell later.

That would be another side effect.

0
find19 (1244)
2/27/2007 11:51:23 PM
J Thomas wrote:

> We would have a large body of shared experience, with
> a large collection of tools that work reliably. 

There are some good books of best practices, like "The Pragmatic
Programmer" by Andrew Hunt and David Thomas, and if you study computer
science, you'll learn things like how to prove software correct
mathematically, or how to identify equivalence classes of a problem to
design test cases which cover most of the possible program states. And
there are tools which support these concepts.
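
One concrete instance of such tool support, for what it's worth:
property-based testing tools generate test inputs across many input
classes automatically. A minimal sketch using Haskell's QuickCheck
(the property itself is only illustrative):

    import Test.QuickCheck

    -- Reversing twice is the identity on every list; QuickCheck
    -- generates (and shrinks) the test cases for us.
    prop_doubleReverse :: [Int] -> Bool
    prop_doubleReverse xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_doubleReverse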

But I think for many projects the problem is that it is more important
to release fast, with the idea that errors can be fixed later, and too
many customers have learned to accept this. And many programmers don't
know the theoretical background well enough. Of course, you can't
mathematically prove big programs in reasonable time, but knowing the
concepts helps you to write better programs anyway.

-- 
Frank Buss, fb@frank-buss.de
http://www.frank-buss.de, http://www.it4-systems.de
0
fb (1533)
2/28/2007 12:09:40 AM
On 28 Feb., 00:51, Matthias Blume <f...@my.address.elsewhere> wrote:
> Rainer Joswig <jos...@lisp.de> writes:
> > Whether CONS allocates mutable state or not does not mean that CONS
> > has side effects.
>
> That is not the way I (and many others) use the term "effect".

How should the above sentence help me understand your point?

> > The side effect happens if I (avoiding the 'you' so I can speak of
> > myself) change the CONS cell later.
>
> That would be another side effect.


0
joswig (506)
2/28/2007 12:34:34 AM
Andreas Kochenburger wrote:
> Jerry Avins wrote:
>>> Supplement? It's actually the _fundament_ of practice. If not, in
>>> chemistry we'd still be searching how to make gold,
>>> because "it might just work this time".
>>
>> Theory developed to explain practice. Practice came first. I recall a 
>> well written eleventh-century treatise titled something like "How to 
>> Build a Thirty-Foot Bridge with Twelve-Foot Timbers". (Wood was 
>> becoming scarce in England around then.) No theory, but instructions I 
>> could follow. The heuristics were confined to footings and piers.
>>
>> Jerry
> 
> Nice example, but how many trial bridges have collapsed to get to that 
> invention? IMO theory helps us to save time and costs - often just by 
> counter-proving wrong assumptions. And sometimes you can reach new 
> results by following the "mathematical mechanics" in a theory. Not 
> everybody is a Stephen Hawking who has to rotate quantum physics in his 
> mind without a helpful sheet of scribbling paper.

There were plenty of collapses after theory ruled also. Also in England 
-- Great Britain by that time -- a great deal was learned about metal 
fatigue from the frequent collapse of cast-iron railroad bridges, and of 
course there was the Firth of Tay bridge (and more recently, 
Galloping Gertie over the Tacoma Narrows). Theory is being refined all 
the time, and that's good, but the foundation of good design is 
observing disasters and near disasters, then using whatever means 
available -- usually physics and math -- to minimize the chance of it 
happening again. The cathedral builders knew nothing about the "middle 
ninth" rule for compression stress in piers, but they raised spires on 
the pier tops to achieve it. Using the weight of the spires to close 
gaps in the masonry shifted the line of action into that region (later 
generations, though they had the theory to understand the spires' 
purpose, nevertheless assumed they were decorative). Theory is good. 
Observation is good. I'm an engineer. I make mistakes. I try to learn 
from them.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
2/28/2007 3:21:40 AM
joswig@corporate-world.lisp.de writes:

> On 28 Feb., 00:51, Matthias Blume <f...@my.address.elsewhere> wrote:
>> Rainer Joswig <jos...@lisp.de> writes:
>> > Whether CONS allocates mutable state or not does not mean that CONS
>> > has side effects.
>>
>> That is not the way I (and many others) use the term "effect".
>
> How should the above sentence help me understand your point?

It is not a point that I am making but merely a matter of terminology.

You seem to make a distinction between what you call "not
referentially transparent" and "has an effect".  I don't.

A computation is free of effects if I can run it twice without being
able to distinguish (now or later) the two results (in any way
whatsoever).  Applications of CONS in Lisp or Scheme do not have this
property, application of :: in ML or : in Haskell do.
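
Put in Haskell terms (a sketch; pure data has no observable identity,
so there is no analogue of Scheme's eq? that could tell two
applications of (:) apart):

    xs, ys :: [Int]
    xs = 1 : 2 : []
    ys = 1 : 2 : []

    -- xs == ys is True, equality is structural, and no Haskell
    -- context can distinguish "which allocation" a list came from,
    -- so running (1 : 2 : []) twice is unobservable.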
0
find19 (1244)
2/28/2007 3:40:39 AM
Steve Schafer wrote:
> On Tue, 27 Feb 2007 10:29:19 -0500, Jerry Avins <jya@ieee.org> wrote:
> 
>> Theory developed to explain practice. Practice came first.
> 
> And practice is extended by applying theory. Otherwise, it's _all_ just
> trial and error.

Theory isn't always pertinent, or insight doesn't leap to mind to get 
the proper theory applied. Phlogiston was a theory useful for guiding 
some experiments but ultimately misleading. Nobody supposed that a beam 
could fail in diagonal shear until slanty cracks appeared in what 
practice predicted should be a safe design. The theory of web crippling 
is known but not used in practice. Follow the rules in the handbook and 
it'll come out safe. Structural builders learned to avoid brittle 
materials in tension or bending. Very little in theory suggests why we 
should. We turn to theory when things go wrong. Usually, we turn to the 
handbook.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
2/28/2007 5:05:18 AM
Jerry Avins <jya@ieee.org> writes:

> Steve Schafer wrote:
>> On Tue, 27 Feb 2007 10:29:19 -0500, Jerry Avins <jya@ieee.org> wrote:
>>
>>> Theory developed to explain practice. Practice came first.
>> And practice is extended by applying theory. Otherwise, it's _all_
>> just
>> trial and error.
>
> Theory isn't always pertinent, or insight doesn't leap to mind to get
> the proper theory applied. Phlogiston was a theory useful for guiding

People, why do you all despise theory so? The OP (at the beginning of
this thread) asked: what is all this CS theory good for, it doesn't
seem to be applicable. A number of people tried to show him that
theory delineates the areas of things you can reasonably _expect_ to
be able to do and those you can't. Theory also helps you to understand
previous failures on a better basis than "John failed at that, perhaps
you/I will fail too".

My position is that theory is the foundation of understanding and
organizing (a) empirical data (which certainly stems from practice, but
which needs to be sorted into a framework to be lifted from the status
of purely circumstantial evidence), and (b) the extrapolation of a
general law from a limited number of points of empirical data to cases
never tried (we don't need to "try" certain types of construction to be
fairly sure they will fail).

Sometimes theory is wrong, certainly (especially in engineering
....). But that doesn't subtract an iota from the fact that theory
permeates every word and action of practice, if it is not just aimless
pottering.

You couldn't even talk about why you failed in practice: e.g. "stress"
is a concept stemming from a theoretical framework. Without it you
wouldn't have appropriate expressions to describe what you see or
experience.

Yes, I admit, theory has been _derived_ from practice (so in a sense
it "came first", as one poster formulated). But whereas practice
doesn't permeate theory, theory is the only framework within which
experience is even possible.

Again: Theory is not a _supplement_ to practice (a formulation which
implies you can ignore it without too many negative consequences), but
the foundation of practice. The relationship is admittedly a bit
recursive (since practice stimulates and supports theory) but not
symmetrical.

I have the impression that most posters to this thread have forgotten
that they didn't become engineers by just starting "to build bridges",
but first by learning how others have built bridges and by doing
calculations for hypothetical bridges that would never be built.
Theory, not practice.

I'd never trust an engineer who couldn't calculate (who is
mathematically illiterate). Would you? But any calculation is theory,
not practice. You don't work with stone, concrete, metal or whatever;
instead you apply rules that say when something will work and when it
will fail, and derive from that what you have to do so that it doesn't
fail. Again: theory.

And yes, applied theory is still theory, not practice.

I rest my case.


> some experiments but ultimately misleading. Nobody supposed that a
> beam could fail in diagonal shear until slanty cracks appeared in what
> practice predicted should be a safe design. The theory of web
> crippling is known but not used in practice. Follow the rules in the
> handbook and it'll come out safe. Structural builders learned to avoid
> brittle materials in tension or bending. Very little in theory
> suggests why we should. We turn to theory when things go
> wrong. Usually, we turn to the handbook.

The handbook is not practice either, but a set of simplified rules: Theory.

Regards -- Markus
0
2/28/2007 12:17:40 PM
Steve Schafer wrote:
> On Tue, 27 Feb 2007 10:29:19 -0500, Jerry Avins <jya@ieee.org> wrote:
> 
>> Theory developed to explain practice. Practice came first.
> 
> And practice is extended by applying theory. Otherwise, it's _all_ just
> trial and error.

There's a shorter way to say it than I thought of last night. It _is_ 
all trial and error. _Somebody_ has to try out the theory to see if it 
works and if not, why not.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
2/28/2007 12:31:13 PM
On Wed, 28 Feb 2007 07:31:13 -0500, Jerry Avins <jya@ieee.org> wrote:

>There's a shorter way to say it than I thought of last night. It _is_ 
>all trial and error. _Somebody_ has to try out the theory to see if it 
>works and if not, why not.

You're using some very nonstandard definitions.

Theory predicts what will happen when you try something; you use it to
_direct_ your trials. If you build a bridge and it falls down, and you
look at the remains and say to yourself, "Hmmm, I'll bet that if I beef
up those two beams, it will work," then you're using theory. _Any_ time
you do a trial based on a prediction of its performance, you're using
theory. The only way to _not_ use theory is to randomly try stuff until
something works.

Steve Schafer
Fenestra Technologies Corp.
http://www.fenestra.com/
0
steve2740 (70)
2/28/2007 1:12:11 PM
Steve Schafer wrote:
> On Wed, 28 Feb 2007 07:31:13 -0500, Jerry Avins <jya@ieee.org> wrote:
> 
>> There's a shorter way to say it than I thought of last night. It _is_ 
>> all trial and error. _Somebody_ has to try out the theory to see if it 
>> works and if not, why not.
> 
> You're using some very nonstandard definitions.
> 
> Theory predicts what will happen when you try something; you use it to
> _direct_ your trials. If you build a bridge and it falls down, and you
> look at the remains and say to yourself, "Hmmm, I'll bet that if I beef
> up those two beams, it will work," then you're using theory. _Any_ time
> you do a trial based on a prediction of its performance, you're using
> theory. The only way to _not_ use theory is to randomly try stuff until
> something works.

Theory was too weak to support the approach you describe until well 
after Newton's contributions to math and physics. Many excellent 
structures were built before that time. How?

jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
2/28/2007 1:26:54 PM
Steve Schafer <steve@fenestra.com> writes:

> On Wed, 28 Feb 2007 07:31:13 -0500, Jerry Avins <jya@ieee.org> wrote:
>
>>There's a shorter way to say it than I thought of last night. It _is_ 
>>all trial and error. _Somebody_ has to try out the theory to see if it 
>>works and if not, why not.
>
> You're using some very nonstandard definitions.
>
> Theory predicts what will happen when you try something; you use it to
> _direct_ your trials. If you build a bridge and it falls down, and you
> look at the remains and say to yourself, "Hmmm, I'll bet that if I beef
> up those two beams, it will work," then you're using theory. _Any_ time
> you do a trial based on a prediction of its performance, you're using
> theory. The only way to _not_ use theory is to randomly try stuff until
> something works.

Thanks. Exactly my point(s).

Regards -- Markus


0
2/28/2007 1:40:56 PM
On Feb 28, 8:26 am, Jerry Avins <j...@ieee.org> wrote:
> Steve Schafer wrote:
> > On Wed, 28 Feb 2007 07:31:13 -0500, Jerry Avins <j...@ieee.org> wrote:

> > Theory predicts what will happen when you try something; you use it to
> > _direct_ your trials. If you build a bridge and it falls down, and you
> > look at the remains and say to yourself, "Hmmm, I'll bet that if I beef
> > up those two beams, it will work," then you're using theory. _Any_ time
> > you do a trial based on a prediction of its performance, you're using
> > theory. The only way to _not_ use theory is to randomly try stuff until
> > something works.
>
> Theory was too weak to support the approach you describe until well
> after Newton's contributions to math and physics. Many excellent
> structures were built before that time. How?

By stretching the idea of what theory is until it fits pretty much
anything anybody does.

But anyway, my point isn't that you can make things that work without
a sophisticated or correct theory about why they work. That's a given.
My point is that large engineering projects usually don't fall down,
but large software projects usually do fall down. This suggests that
our theory as currently understood is not adequate.

Tenth century builders did have sophisticated theory to fall back on.
It was called astrology. It may have acted to reduce construction
accidents -- I don't know, I haven't seen the data. Certainly if the
project astrologer says not to work on Monday or there's likely to be
an accident, and you work on Monday anyway and sure enough there's an
accident, you're likely to believe he knows what he's talking about.

So is modern computer science more like pre-Newtonian physics, or is
it more like astrology? It may take decades of experience to find out.

But here's one reason that things might seem to go slower today -- we
are doing things far more in parallel. If we started on average one
large software project a year, we'd pay a lot more attention to it.
We'd put a lot more effort into analysing the failures. The ratio of
theory to experience would go up. And yet with so little data the
total amount of theory might stay fairly small. As it is we have more
projects than we can pay attention to, and more theories than we can
pay attention to. It's hard to pick out the good ones, there's just
too much there.

For whatever it's worth, my own theory about project development is
based heavily on John Gall.

"A complex system that works is invariably found to have evolved from
a simple system that works. A complex system designed from scratch
never works and cannot be patched up to make it work. You have to
start over, beginning with a working simple system."

I think he overstates the case a little, what he says is more like 95%
or 99% true. But it's still the way to bet.

0
jethomas5 (1449)
2/28/2007 2:45:29 PM
"J Thomas" <jethomas5@gmail.com> writes:

> On Feb 28, 8:26 am, Jerry Avins <j...@ieee.org> wrote:
>> Steve Schafer wrote:
>> > On Wed, 28 Feb 2007 07:31:13 -0500, Jerry Avins <j...@ieee.org> wrote:
>
>> > Theory predicts what will happen when you try something; you use it to
>> > _direct_ your trials. If you build a bridge and it falls down, and you
>> > look at the remains and say to yourself, "Hmmm, I'll bet that if I beef
>> > up those two beams, it will work," then you're using theory. _Any_ time
>> > you do a trial based on a prediction of its performance, you're using
>> > theory. The only way to _not_ use theory is to randomly try stuff until
>> > something works.
>>
>> Theory was too weak to support the approach you describe until well
>> after Newton's contributions to math and physics. Many excellent
>> structures were built before that time. How?
>
> By stretching the idea of what theory is until it fits pretty much
> anything anybody does.

You seem to have rather exaggerated ideas about what "theory"
means. "Theory" does not need tensor calculus.

BTW, I dislike the insinuation of intellectual dishonesty on the part
of your opponents which I think I can read in your statement
("stretching"). We don't argue as we do in order to "win" against you,
but rather because this is our definition and understanding of the term
"theory".

>
> But anyway, my point isn't that you can make things that work without
> a sophisticated or correct theory about why they work. That's a given.
> My point is that large engineering projects usually don't fall down,
> but large software projects usually do fall down. This suggests that
> our theory as currently understood is not adequate.
>
> Tenth century builders did have sophisticated theory to fall back on.
> It was called astrology. It may have acted to reduce construction
  ^^^^^^^^^^^^^^^^^^^^^^^^

Nonsense. You should really read up on your history of science and
technique.

> accidents -- I don't know, I haven't seen the data. Certainly if the
> project astrologer says not to work on Monday or there's likely to be
> an accident, and you work on Monday anyway and sure enough there's an
> accident, you're likely to believe he knows what he's talking about.

Ah, a hypothesis, probably with an idea of cause and effect behind it
=> voilà, theory!

You seem to attribute some magic to "sophisticated theory" (like
integral calculus), as if it were _fundamentally_ different from the
processes of abstraction, generalization and reasoning that intelligent
homo sapiens applies from day to day. There is no fundamental
difference.

> So is modern computer science more like pre-Newtonian physics, or is

You know that Newtonian physics is wrong :-)?

Regards -- Markus

0
2/28/2007 3:08:10 PM
J Thomas wrote:
> But anyway, my point isn't that you can make things that work without
> a sophisticated or correct theory about why they work. That's a given.
> My point is that large engineering projects usually don't fall down,

They don't? Maybe you just don't see the ones that do. The people who
designed the Airbus A380 even got the cable tree length wrong. Yes, you
can fix that after you discover that the original cables won't reach
where they should. Also, engineering projects often work with an
extremely conservative approach, and try to change as little as
possible - until they run into problems. A380 again: until the weight
gets too high, so they need other, less proven materials.

> but large software projects usually do fall down. This suggests that
> our theory as currently understood is not adequate.

Or that people don't apply common knowledge. Remember, one of the first
well-documented large software efforts, the OS/360, generated a book: "The
Mythical Man-Month". This book is about 30 years old now. Even today, it's
hard to find a manager who has read the book, or even knows the most basic
concepts of that book. Therefore, deadlines are always "political", i.e.
completely unrealistic, and feature creep does the rest. The tar pit is
still the same.

> For whatever it's worth, my own theory about project development is
> based heavily on John Gall.
> 
> "A complex system that works is invariably found to have evolved from
> a simple system that works. A complex system designed from scratch
> never works and cannot be patched up to make it work. You have to
> start over, beginning with a working simple system."

There's a shorter word for it: "Integration hell". If you don't start with a
simple, working system, and integrate other parts as they come along,
you'll face integration hell, and that means the project is doomed.

BTW: From the sheer size of object code, something like a Linux distribution
or Windows Vista is an order of magnitude larger than the human genome. So
we've produced something that contains apparently more complexity than a
human being, and it sort of works (at least the Linux distribution). And
even the human being just "sort of works", as well - there's ample
opportunity to improve the human being.

-- 
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
0
bernd.paysan (2418)
2/28/2007 4:55:48 PM
>
> Forth doesn't fit the category perfectly, but it's an amazingly close
> fit. Color Forth, by the way, happens to fit the category even more
> closely.
>
> -Billy

In which way is ColorForth more functional than standard Forth?

Forgive my ignorance, I only have a vague idea of what "functional
programming" really means, but can we say that functional programming
can be applied to any language, just like object-oriented can? If not,
what kind of features must a programming language offer in order to
support functional programming?

 Kind regards,
  Astrobe


0
astrobe (261)
2/28/2007 5:06:35 PM
billy wrote:
>> Functional programming languages avoid state and eschew
>> mutable data.
>>   Forth is (like any imperative programming language)
>> explicit about state and freely mutates data.
> 
> I would suggest that functional programming languages are not nice
> because they avoid those things; rather, they are nice for other
> reasons, but those other reasons happen to require a lack of state and
> mutable data.

Whatever.  I'm not making a value judgment on what is "nice."  I'm 
pointing out a couple key ideas seen in most functional languages that 
Forth doesn't have.

> For example, most people would identify "referential transparency" as
> an important part of functional programming. Forth has quite strong
> referential transparency, even when you use the parts of it that do
> horrible things like mutate global state.

Forth is referentially transparent only in the most trivial sense. 
Certainly a word like this:

	: square dup * ;

is referentially transparent.  But the second you do anything involving 
flow-control based directly (or indirectly) on variables or changing 
state, you are no longer referentially transparent.  Or put another way, 
while you might be able to demonstrate plenty of Forth words that are 
referentially transparent, it's not a unique property of Forth:

	int square(int x) { return x*x; }

This is just as referentially transparent as the Forth definition.

So to claim Forth is a functional language because some constructs are 
referentially transparent is to claim that C is a functional language 
for the same reason.  And once you go down that road, the argument gets 
progressively sillier.
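
To make that contrast concrete in a language that tracks it, here is a
minimal Haskell sketch (the names are illustrative): the moment flow
control depends on mutable state, the type stops looking like a pure
function.

    import Data.IORef

    -- Referentially transparent: equal inputs, equal outputs, always.
    square :: Int -> Int
    square x = x * x

    -- Not referentially transparent: two calls with the same argument
    -- return different values. The IO in the type records exactly that.
    next :: IORef Int -> IO Int
    next ref = do
      n <- readIORef ref
      writeIORef ref (n + 1)
      return n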

>> Or put another way, languages like Joy show what Forth is
>> missing if one wanted to do functional programming.
> 
> Hardly. Do you know Joy? I do. Joy is a toy language; Forth isn't
> missing anything functional that it has (well, Joy does have way
> better list manipulation, but that's not functional). Forth is a
> mature and useful language that accidentally falls into the same
> theoretical category as Joy was deliberately constructed for.

Again, whatever.  I'm not here to debate your arbitrary definition of 
what a "toy" language is.  What I am debating is that Joy (because it is 
a purely functional language with no state) is much closer to the core 
notions of functional programming than a language like Forth.  Can Forth 
be used in a functional style?  Sure.  And so can languages like C, with 
the same expected conceptual impedance mismatch.

Joy has *better* list manipulation than Forth?  Well, yes.  Since Forth 
has no list manipulation, you could say that Joy's is better.

> Forth doesn't fit the category perfectly, but it's an amazingly close
> fit. Color Forth, by the way, happens to fit the category even more
> closely.

I think what we're seeing here is you have an incredibly loose 
definition of "functional language."  The problem with loose definitions 
is that at some point, they cease to have meaning.  That's bad if you're 
trying to have meaningful discussion, but great if you're going out of 
your way to present Forth as something it isn't.
0
nntp4274 (973)
2/28/2007 7:31:44 PM
J Thomas wrote:

> I am not a civil engineer, so maybe I shouldn't pontificate too much
> about how their programs are structured. Still, it looks to me like
> it's a very different domain. Whenever an important bridge fails, they
> have a complex inquest where they try to figure out why it happened,
> and which safety margins need to be increased. They do a lot of theory
> but it's in response to actual data. A lot of their theory isn't very,
> well, theoretical.
> 
> When software developers have a large project fail, how often is there
> a major investigation to find out just why it happened? How often are
> the results of the investigation published? 

There have been a few investigations into what were apparently software
failures, and it seems it is always a management failure. Even Ariane 5
was a management failure. Could any engineering failure be a management
failure?

> I occasionally see the 
> factoid that more than half of all large software projects fail,
> though of course "large software project" is left undefined, and if
> the size is increased sufficiently it becomes trivially true -- of
> course more than half of the largest software projects fail. But if
> more than half of the largest bridge projects failed....
>
> So anyway, proceeding as if this factoid meant something, the obvious
> conclusion is that we know how to build bridges but -- theory aside --
> we don't know how to run large software projects. It's a different
> domain.

Maybe that is the nub of the problem. We seem to forget that large software
projects are an integrated collection of smaller software projects. So, we
need to be good at managing the production of decent small software projects.

> And one of the differences is that engineers know how to keep small
> failures from morphing into big failures. They know how to make 27
> cubic meters of concrete even more stable than 9 cubic meters of
> concrete. Within wide limits, a bigger beam is stronger than a smaller
> beam. But software people don't know how to make 10,000 lines of code
> more stable than 1000 LOC, and in general a bigger program is not more
> robust than a smaller program.

As one who uses Forth for high-integrity systems, the size of my
programmes is rarely more than 3,000 lines of active source (discounting
the comment lines). Could I write much larger programmes than that and
still maintain the integrity? Probably, and I doubt it would require a
change of my methods, but it would definitely need quite a bit more
time.
 
> It might be a better comparison if every bridge had to be made
> entirely of glass. Tighten a glass nut on a glass bolt too tight and
> something cracks. A glass ballast might get a tiny crack that expands
> conchoidally until the whole thing splits. A car breaking through the
> glass guardrails on an upper deck might spin as it falls and hit the
> lower deck hard enough to break it all the way across....  Lots of
> minor-looking flawss that can turn into fatal flaws.

Which is why, in software engineering, one of the tenets of design is
the need for high cohesion and a minimum of coupling. Like any other
engineering effort we should also have plenty of opportunities for
decent, rigorous, technical reviews. While these alone are not enough,
they are the backbone of running a resilient design process.
 
> And then it would also be more like software if the engineers simply
> accepted that things go wrong sometimes.
> 
> "Er, well, it was an accident."
> "Ho! Vell, zee? An exident. Vell, vun ting ve Jagerkin *understend* is
> dat crezy exidents *happen*, right boyz?"
> "Hoo boy, yez."
> "Dot's de truth."
> 
> If software was like civil engineering, it would be a rare disaster
> when a software project failed. And it would be a scandal when a bug
> fix was needed. We would have a large body of shared experience, with
> a large collection of tools that work reliably. But somehow software
> is a lot more like military R&D contracting.

There was a recent civil engineering project that, whilst it was not an
all out failure, did give cause for concern. This was the Millennium
Bridge across the River Thames in London. The public were very
disconcerted by the way the whole bridge wobbled when lots of people
were crossing it. The solution was to add some dampers to the bridge
structure.

In a software project, the act of adding more software can often be
problematic in itself. We have to refine the structures, trim down the
code if possible, and review the resultant code all over again. Or do
we? Taking a component oriented approach, we could do ourselves much
more of a favour by building up trust in the components, even
certifying those components, when we use them to build a project. Forth
is very amenable to allowing this component oriented approach to
software construction. It is achievable in other languages too. It does
require discipline to employ properly, and the components, if
constructed and documented properly, become re-useable in other
projects.

So, the mainstay of this argument is that managing a software engineering
project requires meticulous attention to detail at every stage and enough
auditable, reviewed design effort to assure all concerned that it is
properly constructed before it is run.

-- 
********************************************************************
Paul E. Bennett ....................<email://peb@amleth.demon.co.uk>
Forth based HIDECS Consultancy .....<http://www.amleth.demon.co.uk/>
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-811095
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
0
peb (807)
2/28/2007 9:26:01 PM
I'm probably going to regret responding to this thread, but....

"Paul E. Bennett" <peb@amleth.demon.co.uk> writes:

> J Thomas wrote:
>
>> But software people don't know how to make 10,000 lines of code
>> more stable than 1000 LOC, and in general a bigger program is not more
>> robust than a smaller program.

This is not strictly true.  In particular, I write compilers for a
living.  The most recent compiler I wrote (as part of a team of 5
people) was about .75 M lines.  It took us about 5 years to write it.
Last time I looked, we had ~2k bugs total, only about 700 of those
bugs got to the next level of users (the people using that compiler to
design chips), and only 3 of the bugs affected external customers --
i.e. the customers who were using the compiler to simulate the chip
and write software for it before the chip existed.

Now, it may be a fairly specialized field, but writing programs of
that size with relatively few bugs is not "art"--it's science.  We
know how to do it.  We can replicate that success fairly consistently.

Much of the science is building things that have mostly regular
structures with a few special cases.  You use tools that help you
create the regular structure, and good tools even allow you to specify
the exception cases, and incorporate them automatically into the
structure.

Now, I'm sure some critics will claim that this just means that the
specification was actually much smaller than the lines of code I
counted.  To that I counter, I quoted the lines of code at the level
where we "understood" the code, where we would discuss it and "debug"
it.  Yes, there were parts of the code that had smaller
specifications, but those specifications were "too terse" and even
their authors did not "work" at that level (or at least not
consistently).

To bring this back to FP, another "secret" of the science is using a
primarily functional style, where you build new structures to
represent new results, rather than modifying old structures in place.

I have a particular example of that that is quite illustrative. The
compiler has two target outputs--a high level output and a low level
output, which had different rules.  For a while I tried making the
internal trees obey both sets of rules by modifying them in place, once
I knew what the target was.  That was a disaster because the rules for
doing both were too complex for me to solve simultaneously, especially
when doing in-place rearrangements.  Thus, at some point, I simply
divorced the two outputs, and made the tree correct for one or the
other.  That simplification, and avoiding the in-place rewriting, made
the program much simpler and thus more correct.
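
To illustrate the style (a minimal sketch -- the Expr type and the
pass are hypothetical, not the compiler described above): each target
gets its own pass, and each pass builds a fresh tree, so neither has
to preserve the other's invariants.

    data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

    -- The "low level" pass might fold constants. It returns a new
    -- tree and leaves its input untouched, so a different "high
    -- level" tree can still be produced from the same original.
    forLowLevel :: Expr -> Expr
    forLowLevel (Add a b) =
      case (forLowLevel a, forLowLevel b) of
        (Lit x, Lit y) -> Lit (x + y)
        (a', b')       -> Add a' b'
    forLowLevel (Mul a b) =
      case (forLowLevel a, forLowLevel b) of
        (Lit x, Lit y) -> Lit (x * y)
        (a', b')       -> Mul a' b'
    forLowLevel e = e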

-Chris
0
cfc (239)
3/1/2007 12:07:52 AM
On Feb 27, 7:40 pm, Matthias Blume <f...@my.address.elsewhere> wrote:
>
> A computation is free of effects if I can run it twice without being
> able to distinguish (now or later) the two results (in any way
> whatsoever).  Applications of CONS in Lisp or Scheme do not have this
> property, application of :: in ML or : in Haskell do.

What exactly constitutes "the results"? If I change your term
"computation" to "function", then is it just the return value of the
function? Is it the state of the universe? Or is it the return value
plus some specific subset of the universe? If the latter, any function
may be considered free of effects or not, depending on how we choose
the subset.

If "the results" are only the return value of the function, then
what you are describing is more normally termed purity.


Marshall

0
3/1/2007 3:37:41 AM
>>>>> "J" == J Thomas <J> writes:

  J> I am not a civil engineer, so maybe I shouldn't pontificate too
  J> much about how their programs are structured. Still, it looks to
  J> me like it's a very different domain. Whenever an important
  J> bridge fails, they have a complex inquest where they try to
  J> figure out why it happened, and which safety margins need to be
  J> increased. They do a lot of theory but it's in response to actual
  J> data. A lot of their theory isn't very, well, theoretical.

<aside>
The issue of failing bridges is real for people living in Stockholm,
at least. One key bridge was closed for heavy traffic nearly a decade
ago, and the construction of the replacement bridge next to it
had to be put on hold for a few years because unexplained movement
was detected (two inquests were made, and came to opposite
conclusions, so a third inquest was needed to determine that it was
safe to continue.) To further aggravate the traffic situation, a
pontoon crane (Lodbrok), while being towed, rammed another key 
bridge (the stretch of road most heavily trafficked in Sweden), 
forcing a closure of two lanes for two months. 

Apart from this, things had been reasonably quiet on the bridge front
since 1980, when the Norwegian tanker Star Clipper sailed into the
Almö bridge, causing it to collapse. This led to the death of 8
people, as they drove over the edge at night, and a whole community
on the island Tjörn was isolated from the mainland for a year and a half.
</aside>

  J> So anyway, proceeding as if this factoid meant something, the
  J> obvious conclusion is that we know how to build bridges but --
  J> theory aside -- we don't know how to run large software
  J> projects. It's a different domain.

  J> And one of the differences is that engineers know how to keep
  J> small failures from morphing into big failures. They know how to
  J> make 27 cubic meters of concrete even more stable than 9 cubic
  J> meters of concrete. Within wide limits, a bigger beam is stronger
  J> than a smaller beam. But software people don't know how to make
  J> 10,000 lines of code more stable than 1000 LOC, and in general a
  J> bigger program is not more robust than a smaller program.

From a layman's perspective (re bridge building), I've assumed that 
one key difference between bridge building and software engineering
is that when building a bridge, one has a wealth of past experience
going back at least to the Roman Empire. Also, the functional 
requirements have stayed mostly unchanged about that long (allow
for transport of people and vehicles from point A to point B).

Not to aggravate anyone, I realise that there has been tremendous
room for innovation, and that there are lots of intrinsic challenges,
caused by the environment (tides, winds, etc.) and also by unexpected
load (soldiers marching in step over the bridge of Angers...) 
I also assume that there is some pressure to not build "just another 
bridge", but to come up with some new designs once in a while.
I understand that the Tacoma Narrows Bridge was a sleek new
design, partly driven by cost pressure. Still, the effect
(aeroelastic flutter - a term that wasn't invented until 7 years
later) that led to its collapse had never been seen before, and 
the disaster was unexpected.

(While it may sound terrible, having as a core requirement that
"if your product fails, people may die" tends to have a sobering
effect on many aspects of engineering. The hacker adage "besides the
fact that it didn't work, what do you think?" doesn't seem quite
as funny when "didn't work" can be substituted with "people died"...)

Shifting to an area that I actually know something about... (:

I've been in the (fortunate?) position to work in a fairly 
long-running project where we've been both doing more or less
the same thing over and over for about a decade, but also
relatively often faced entirely new challenges, where the 
technology has been new to us, and both requirements and 
existing specifications have been a moving target. In all
cases, we benefit from having a well practised team, with
a tried methodology. Still, it is much more difficult to 
predict the cost and odds of success when we're in 
uncharted territory. When the task is familiar, we can 
predict the outcome pretty accurately.

(In the following, "we" is meant generally - not necessarily,
or particularly, our projects. This is of course a standard
disclaimer, but I've witnessed this in many different organisations.)

Still, we're often at a loss if asked exactly _how_ it 
works - the software systems are (possibly) complex enough
that we cannot describe, or fully understand, the total
system. We're reduced to describing the different parts, and
then verifying experimentally whether the whole system, when
put together, behaves roughly as expected.

There are some forcings in play as well. Since we cannot 
manage the complexity using a small team (it takes a fairly
large team full time just to keep up with the ever changing 
requirements), we have to add enough people that just managing
the project becomes a project in its own right. With a large
team, the average skill of the individual member tends to 
drop, and the difficulties in coordinating and spreading 
the information cause the average level of insight into 
the problem to drop as well (even if the individuals are 
smart enough to understand more). Furthermore, the *really*
smart people tend to get aggravated by the difficulties and
overhead of working in a large project, and perhaps fail
to obtain as much information as they should ("it shouldn't
be this hard").

I think a very common problem is short-sightedness.
The mountain of complexity and uncertain requirements can 
be tackled over time, and if we were allowed to begin
designing systems several years before they were requested
by our customers, we'd be able to build an intuitive 
understanding, and have time to run through many simplification
steps. But this proves to be fiendishly difficult in industry,
because intuition is not easily expressed in project plans
and strategy documents. Sometimes it works, but the successes
are difficult to repeat - and even more difficult to 
institutionalize.

BR,
Ulf W
-- 
Ulf Wiger, Senior Specialist,
   / / /   Architecture & Design of Carrier-Class Software
  / / /    Team Leader, Software Characteristics
 / / /     Ericsson AB, IMS Gateways
0
etxuwig (64)
3/1/2007 9:19:04 AM
Markus E Leypold wrote:

> You know that Newtonian physics is wrong :-)?

It isn't. It has only a limited scope of validity.

Regards
Stephan
0
nospam41 (160)
3/1/2007 12:31:48 PM
On Feb 28, 10:08 am, Markus E Leypold
<development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-leypold.de> wrote:
> "J Thomas" <jethom...@gmail.com> writes:

> >> Theory was too weak to support the approach you describe until well
> >> after Newton's contributions to math and physics. Many excellent
> >> structures were built before that time. How?
>
> > By stretching the idea of what theory is until it fits pretty much
> > anything anybody does.
>
> You seem to have rather exaggerated ideas about what "theory"
> means. "Theory" does not need tensor calculus.

We started out discussing computer science theory. My opinion is that
a lot of it is not directly useful but may be indirectly useful, that
some of it is directly useful, and that even so it is still generally
inadequate.

You have expanded this to theory about how to hit a rabbit with a
rock, and all the ideas that a programmer with no computer science
background at all might come up with.

> BTW, I dislike the insinuation of intellectual dishonesty in your
> opponents, which I think I can read in your statement
> ("stretching"). We don't argue like we do, to "win" against you, but
> rather because this is our definition and understanding of the term
> "theory".

I don't mean to be disrespectful, but your theory about theories seems
to me particularly academic.

> > But anyway, my point isn't that you can make things that work without
> > a sophisticated or correct theory about why they work. That's a given.
> > My point is that large engineering projects usually don't fall down,
> > but large software projects usually do fall down. This suggests that
> > our theory as currently understood is not adequate.
>
> > Tenth century builders did have sophisticated theory to fall back on.
> > It was called astrology. It may have acted to reduce construction
>
>   ^^^^^^^^^^^^^^^^^^^^^^^^
>
> Nonsense. You should really read up on your history of science and
> technique.

Are you saying that astrology wasn't developed then? People who're
interested in astrology have told me it goes back at least to
babylonia and ancient egypt. A large body of theory with a very long
history, used to predict lots of things.

I tend to believe that it is generally less useful than computer
science theory, though both sets of theory do have the benefit of
making their users feel knowledgeable and competent.

> You seem to attribute some magic to "sophisticated theory" (like
> integral calculus) as if it was _fundamentally_ different from the
> process of abstraction, generalization and reasoning intelligent
> homo sapiens applies from day to day. There is no fundamental
> difference.

There are fundamental differences. If I keep my eye on the goal and I
try out methods to get there, the theory I develop is limited and
parts of it won't generalise at all, but it's very goal-directed. When
I miss my rabbit my stomach rumbles. If I spend years learning a set
of complex theories I will tend to believe in them and not much
question them. If I didn't believe they were generally useful I'd hate
to spend years of my life learning them. This is as true of astrology
as it is for physics.

On the other hand, what would happen if you threw yarrow stalks and
consulted the I Ching during a software project? The cryptic message
unfolding might inspire your trained intuition to spot problems early.
But I can't imagine billing for that.

> > So is modern computer science more like pre-Newtonian physics, or is
>
> You know that Newtonian physics is wrong :-)?

No doubt quantum mechanics is wrong too. People use whatever physics
they've learned to solve the problems where experience has shown them
it gets adequate answers. Or when they have a big investment in the
theory, they'll assume the theory gives adequate answers and they'll
blame any failure on some outside influence.

0
jethomas5 (1449)
3/1/2007 1:43:19 PM
Stephan Kuhagen <nospam@domain.tld> writes:

> Markus E Leypold wrote:
>
>> You know that Newtonian physics is wrong :-)?
>
> It isn't. It has only a limited scope of validity.

I know. Actually I've been studying physics. Therefore the smiley.

Regards -- Markus

0
3/1/2007 2:38:16 PM
"J Thomas" <jethomas5@gmail.com> writes:

> On Feb 28, 10:08 am, Markus E Leypold
> <development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-leypold.de> wrote:
>> "J Thomas" <jethom...@gmail.com> writes:
>
>> >> Theory was too weak to support the approach you describe until well
>> >> after Newton's contributions to math and physics. Many excellent
>> >> structures were built before that time. How?
>>
>> > By stretching the idea of what theory is until it fits pretty much
>> > anything anybody does.
>>
>> You seem to have rather exaggerated ideas about what "theory"
>> means. "Theory" does not need tensor calculus.
>
> We started out discussing computer science theory. My opinion is that
> a lot of it is not directly useful but may be indirectly useful, while
> some of it is directly useful, but it is still generally inadequate.
>
> You have expanded this to theory about how to hit a rabbit with a
> rock, and all the ideas that a programmer with no computer science
> background at all might come up with.


There is, in my opinion, no general difference there, only in degree
and area of application. The OP asked "What is (this) theory good
for?" and also got a number of answers that suggested the practice is
more important. I think my answer, that, if one neglects theory, one
will be chasing red herrings and dead ends in one's practice -- this
answer still stands and is not restricted to computer science.

BTW, it wasn't me who started with dams made of bubblegum.


>> BTW, I dislike the insinuation of intellectual dishonesty in your
>> opponents, which I think I can read in your statement
>> ("stretching"). We don't argue like we do, to "win" against you, but
>> rather because this is our definition and understanding of the term
>> "theory".
>
> I don't mean to be disrespectful, but your theory about theories seems
> to me particularly academic.


Have it your, "man of practice". Since "academic" to me is not a swear
word and also doesn't imply that it it is not applicable in practice,
I can even consider that as a compliment, though it doubt it was meant
as one.

>
>> > But anyway, my point isn't that you can make things that work without
>> > a sophisticated or correct theory about why they work. That's a given.
>> > My point is that large engineering projects usually don't fall down,
>> > but large software projects usually do fall down. This suggests that
>> > our theory as currently understood is not adequate.
>>
>> > Tenth century builders did have sophisticated theory to fall back on.
>> > It was called astrology. It may have acted to reduce construction
>>
>>   ^^^^^^^^^^^^^^^^^^^^^^^^
>>
>> Nonsense. You should really read up on your history of science and
>> technique.


> Are you saying that astrology wasn't developed then? People who're
> interested in astrology have told me it goes back at least to
> babylonia and ancient egypt. A large body of theory with a very long
> history, used to predict lots of things.

"People interested in astrology" today are probably cranks. As far as
"the middle ages (or whatever) had astrology as (only) science" goes,
I suggest that you don't rely on bad movies too much for your
knowledge of history.

> I tend to believe that it is generally less useful than computer
> science theory, though both sets of theory do have the benefit of
> making their users feel knowledgeable and competent.

Brrr.


>> You seem to attribute some magic to "sophisticated theory" (like
>> integral calculus) as if it was _fundamentally_ different from the
>> process of abstraction, generalization and reasoning intelligent
>> homo sapiens applies from day to day. There is no fundamental
>> difference.
>
> There are fundamental differences. If I keep my eye on the goal and I
> try out methods to get there, the theory I develop is limited and
> parts of it won't generalise at all, but it's very goal-directed. When
> I miss my rabbit my stomach rumbles. If I spend years learning a set
> of complex theories I will tend to believe in them and not much
> question them. If I didn't believe they were generally useful I'd hate
> to spend years of my life learning them. This is as true of astrology
> as it is for physics.

Again I suggest reading Hobbes, Kant and Popper for a solid foundation
in the theory of science. A bit of modern empirical psychology would
also not be bad as a supplement.

As far as the rabbit goes: if you only start thinking when you're
hungry, you'll probably die. The ideas that rabbits can be nourishing
and that animals die when you throw stones at them perhaps have to be
conceived before you become hungry.

Actually I do not understand your counter-argument at all, since
physics is also very limited in scope and astrology is simply bullshit
(please come out and get me, astrology trolls, I say it again:
ASTROLOGY IS BULLSHIT, thanks).

So I really fail to see the general difference between the world view
(not the theory of catching rabbits, that is only a part of it) a cave
man has and physics. Both abstract "reality" into a set of rules and
assertions, and both sets of rules allow one to construct recipes if you
want to achieve certain goals (catching rabbits or splitting atoms).

> On the other hand, what would happen if you threw yarrow stalks and
> consulted the I Ching during a software project? The cryptic message
> unfolding might inspire your trained intuition to spot problems early.
> But I can't imagine billing for that.

?

>> > So is modern computer science more like pre-Newtonian physics, or is
>>
>> You know that Newtonian physics is wrong :-)?

> No doubt quantum mechanics is wrong too. People use whatever physics

Certainly. It's even self-contradictory.

> they've learned to solve the problems where experience has shown them
> it gets adequate answers. Or when they have a big investment in the
> theory, they'll assume the theory gives adequate answers and they'll
> blame any failure on some outside influence.

So?

Regards -- Markus

0
3/1/2007 2:55:52 PM
Op Thu, 01 Mar 2007 15:55:52 +0100 schreef Markus E Leypold:

> "People interested in astrology" today are probably cranks. As far as
> "the middle ages (or whatever) had astrology as (only) science" goes,
> I suggest that you don't rely on bad movies too much for your
> knowledge of history.
> 
Today, they are cranks, no doubt about it. But when astronomy began in the
sixteenth and seventeenth century (long after the Middle Ages), astrology
was a respected science. Newton and Kepler were foremost astrologers;
they only later became astronomers. The paradigm shift did not come as
a lightning bolt, but more like a change of climate.
-- 
Coos
0
chforth (1145)
3/1/2007 3:37:53 PM
Ulf Wiger wrote:

   ...

> I understand that the Tacoma Narrows Bridge was a sleek new
> design, partly driven by cost pressure. Still, the effect
> (aeroelastic flutter - a term that wasn't invented until 7 years
> later) that led to its collapse had never been seen before, and 
> the disaster was unexpected.

The disaster was predicted before the bridge was completed. David B. 
Steinman, then an upstart engineer who had already incurred the 
displeasure of the established bridge-building community by setting out 
on his own instead of signing on as an apprentice, got into a 
letters-to-the-editor feud with Leon S. Moisseiff, Gertie's designer and 
a member of that New York community. It's bad form to criticize a 
colleague's work, and the people who designed the George Washington 
Bridge and probably knew that Steinman was right kept out of it. The 
Bronx-Whitestone Bridge was nearly a sister to Galloping Gertie*. 
Steinman concluded that, given the strength and direction of the winds 
over Long Island Sound there, it would remain safe. (Gertie's collapse 
had catapulted Steinman into bridge-building stardom. He didn't 
disappoint.)

WW II broke out before the Bronx-Whitestone Bridge could be stiffened, 
and it served as built until nearly 1950. I once rode a bicycle across 
it on a moderately windy day and during one lurch both wheels left the 
pavement. Eventually, a stiffening truss was added (replacing the 
walkway, to the dismay of many) and the bridge calmed down.

> (While it may sound terrible, having as a core requirement that
> "if your product fails, people may die" tends to have a sobering
> effect on many aspects of engineering. The hacker adage "besides the
> fact that it didn't work, what do you think?" doesn't seem quite
> as funny when "didn't work" can be substituted with "people died"...)

Even more effective would be "If your product fails, _you_ may die."
:-) It's sobering to work on a product where "system crash" means broken 
glass and twisted metal and maybe blood on the floor. Been there.

   ...

Jerry
____________________________________
* I prefer the nickname because there is a present-day Tacoma Narrows 
Bridge which is obviously not the same.
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
3/1/2007 4:43:02 PM
Coos Haak <chforth@hccnet.nl> writes:

> Op Thu, 01 Mar 2007 15:55:52 +0100 schreef Markus E Leypold:
>
>> "People interested in astrology" today are probably cranks. As far as
>> "the middle ages (or whatever) had astrology as (only) science" goes,
>> I suggest that you don't rely on bad movies too much for your
>> knowledge of history.
>> 
> Today, they are cranks, no doubt about it. But when astronomy began in the

Oh yes. That's why I wrote "today". JT was talking about "People
who're interested in astrology have told me it goes back at least to
babylonia and ancient egypt". That where people of today -- but I
admit, that I assumed, perhaps wrongly, that their "interest in
astrology" was not historical, but as prcatictioners or believers.


> sixteenth and seventeenth century (long after the Middle Ages), astrology
> was a respected science. Newton and Kepler were foremost astrologers;
> they only later became astronomers. The paradigm shift did not come as a

:-). Yes. I've been alluding to that in another post.

> lightning bolt, but more like a change of climate.

Exactly my point as far as history of science goes: There is no clean
separation between "true science" and pre-science. Both are attempts to
deal with reality and abstract rules for cause and effect.

Regards -- Markus

0
3/1/2007 5:06:47 PM
"Marshall" <marshall.spight@gmail.com> writes:

> On Feb 27, 7:40 pm, Matthias Blume <f...@my.address.elsewhere> wrote:
>>
>> A computation is free of effects if I can run it twice without being
>> able to distinguish (now or later) the two results (in any way
>> whatsoever).  Applications of CONS in Lisp or Scheme do not have this
>> property, application of :: in ML or : in Haskell do.
>
> What exactly constitutes "the results?" If I change your term
> "computation"
> to "function" then is it just the return value of the function? Is it
> the
> state of the universe?

The latter.

> Or is it the return value plus some specific
> subset of the universe? If the latter, any function may be considered
> free of effects or not depending on how we choose the subset.

Yes, that would be ambiguous.

> If "the results" are only the return value of the function, then
> what you are describing is more normally termed purity.

I'm not doing that.

Look at, e.g., the meaning of a term under a denotational
semantics, and let's consider only store effects for the moment.
(This is enough for the examples of CONS and LAMBDA that I gave.)

The meaning of a term is commonly described as a function mapping an
environment and a store to a value and a new store.

    E[[e]] : env * store -> val * store

We say the expression is pure (aka. free of store effects) if the new
and old stores coincide, i.e., if

  \forall rho : env . \forall sigma : store .
     \exists v : val . E[[e]](rho,sigma) = (v,sigma)

CONS and LAMBDA in Scheme are not pure since they extend the domain of
sigma.  Of course, this is a different kind of effect than that of
SET-CAR! or SET-CDR!, since these do not extend the domain of sigma
but rather map certain previously mapped locations to new values.
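
To make the two kinds of effect concrete, here is a toy store-passing
model in Haskell -- a sketch of the semantics above, with illustrative
names (Loc, Cell, cons, setCar) of my own choosing:

  import qualified Data.Map as M

  type Loc   = Int
  data Val   = Num Int | Ref Loc  deriving (Eq, Show)
  data Cell  = Pair Val Val       deriving (Eq, Show)
  type Store = M.Map Loc Cell     -- sigma: locations to cons cells

  -- A literal is pure: the store comes back exactly as it went in.
  lit :: Int -> Store -> (Val, Store)
  lit n sigma = (Num n, sigma)

  -- CONS extends the domain of sigma: nothing already in the store
  -- changes, but the result store differs from the input store, so
  -- by the definition above CONS is not free of store effects.
  cons :: Val -> Val -> Store -> (Val, Store)
  cons a d sigma = (Ref l, M.insert l (Pair a d) sigma)
    where l = M.size sigma        -- fresh, since we never delete

  -- SET-CAR! keeps the domain the same but remaps a previously
  -- mapped location to a new cell.
  setCar :: Loc -> Val -> Store -> (Val, Store)
  setCar l a sigma = case M.lookup l sigma of
    Just (Pair _ d) -> (Ref l, M.insert l (Pair a d) sigma)
    Nothing         -> error "setCar: dangling location"

Evaluating cons (Num 1) (Num 2) M.empty grows the store's domain from
{} to {0}, while lit 5 M.empty hands back M.empty untouched.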

Matthias
0
find19 (1244)
3/1/2007 5:18:43 PM
Markus E Leypold wrote:
> Jerry Avins <jya@ieee.org> writes:
> 
>> Steve Schafer wrote:
>>> On Tue, 27 Feb 2007 10:29:19 -0500, Jerry Avins <jya@ieee.org> wrote:
>>>
>>>> Theory developed to explain practice. Practice came first.
>>> And practice is extended by applying theory. Otherwise, it's _all_
>>> just
>>> trial and error.
>> Theory isn't always pertinent, or insight doesn't leap to mind to get
>> the proper theory applied. Phlogiston was a theory useful for guiding
> 
> People, why do you all despise theory so? The OP (at the beginning of
> this thread) asked: What is all this CS theory good for, it doesn't
> seem to be applicable. A number of people tried to show him that
> theory delineates the areas of things you reasonably can _expect_ to
> be able to do and those you can't. Theory also helps you to understand
> previous failures on a better basis than "John failed in that, perhaps
> you/I will fail too".

I certainly don't despise theory, but I don't assign it the primacy that 
you seem to. I took courses that qualify me to design structures in 
timber, concrete, and steel, but the only houses I actually built came 
before them. (Those houses have been standing for nearly 60 years.) 
Surely, it won't surprise you that most house carpenters know about 
horizontal shear only intuitively.

> My position is that theory is the foundation of understanding and
> organizing (a) empirical data (which certainly stems from practice but
> which needs to be sorted into a framework to be lifted from the status
> of purely circumstantial evidence) and (b) allows one to extrapolate a
> common law from a limited number of points of empirical data and
> extrapolate to cases never tried (we don't need to "try" certain types
> of construction to be fairly sure they will fail).

No, but we do need to try certain constructions to be fairly sure they 
_won't_ fail. How far would you accept "Theory predicts that it must 
work, so there's no need to test it"? Before you laugh at that notion, 
consider the concrete slab floor in an office building. Was it ever 
tested? Probably not. It isn't theory that attests to its adequacy, 
but rather codified past practice.

> ...). But that doesn't subtract an iota from the fact that theory
> permeates every word and action of practice if it is not just aimless
> pottering.

The "theory" involved here may be no more than "I have an idea ...." or 
"My theory is that if we dig a deeper hole, the pier might support more 
weight.")

> You couldn't even talk about why you failed in practice: E.g. "Stress"
> is a concept stemming from a theoretical framework. You wouldn't have
> appropriate expressions to describe what you see or experienced.

I once explained to a structural field engineer why his company's plan 
to cut a large hole in a floor girder near the wall would cause 
collapse, but that a similar hole mid span might work. I knew nothing of 
stress, bending moment, shear, or any of the other terms that might have 
simplified the explanation (and impressed him with my erudition!), but 
he eventually understood and acted accordingly. Was that theory as you 
understand the word?

> Yes, I admit, theory has been _derived_ from practice (so in a sense
> it "came first" as one poster formulated). But whereas practice
> doesn't permeate theory, theory is the only framework within which
> experience is even possible. 
> 
> Again: Theory is not a _supplement_ to practice (a formulation which
> implies you can ignore it without too many negative consequences), but
> the foundation of practice. The relationship is admittedly a bit
> recursive (since practice stimulates and supports theory) but not
> symmetrical.

Theory is ignored all the time in construction. A builder simply follows 
the code. One code for a roof of a certain span mandates 2x6 rafters on 
16" centers. An architect friend wanted to build a roof entirely of 2x4 
rafters with no spacing between them. Such a roof is much stronger, but 
not allowed by the code. Studs go on 16" centers, even though theory 
proves that 24" centers is adequate and there is practice to confirm 
that. And so on. Nobody designs steel girder-to-column connections for 
ordinary buildings. An appropriate one is chosen from a table or spit 
out by a computer program.

> I have the impression that most posters to this thread have
> forgotten that they didn't become engineers by just starting "to build
> bridges" but first by learning how others have built bridges and by
> doing calculations for hypothetical bridges that would never be
> built. Theory, not practice.

I suppose I was on my way to becoming an engineer when I built a 
pull-toy dump truck out of two cheese boxes when I was two. My father 
helped: he screwed on the hinges and drilled the checkers I used for 
wheels. Building my own toys continued. My start in electronics was 
building a HiFi system (using a soldering iron heated by the Bunsen 
burner from my chemistry set). That was just an extension of house 
wiring, though. Eventually, I learned enough to fix those sorts of 
things when they didn't work. I was a serviceman before I became a 
respected technician, and became an EE after that. For me, practice has 
always preceded theory and enlightened it.

> I'd never trust an engineer who couldn't calculate (is mathematically
> illiterate). Would you? But any calculation is theory, not
> practice. You don't work with stone, concrete, metal or whatever, but
> instead apply rules to say when something will work and when it will
> fail and derive from that what you have to do that it doesn't
> fail. Again: Theory. 

And when it fails anyway, what then?

> And yes, applied theory is still theory, not practice.

Just as it takes practice to drive a nail straight, it takes practice 
to know how to apply theory.  Do you recall Feynman's Brazilian optics 
student who knew the theory of plane parallel plates, but couldn't 
recognize a window pane as an instance of it?  You don't learn theory 
out of books alone.

> I rest my case.

>>    ... We turn to theory when things go
>> wrong. Usually, we turn to the handbook.
> 
> The handbook is not practice either, but a set of simplified rules: Theory.

The handbook is codified practice. It is certainly enlightened by 
theory, but not limited to it. Nothing in theory requires that the 
deflection of a ceiling joist caused by live load be limited to a fixed 
fraction of its span (1/120 if memory serves), but the handbook does. 
That's practice (to avoid plaster cracking) not theory.
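
(To put a made-up number on that rule: for a 12-foot -- 144-inch --
joist span, a 1/120 limit would allow at most 144/120 = 1.2 inches of
live-load deflection. The span is invented for illustration; the
fraction is as remembered above.)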

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
3/1/2007 6:08:08 PM
J Thomas wrote:
> On Feb 28, 10:08 am, Markus E Leypold
> <development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-leypold.de> wrote:
>> "J Thomas" <jethom...@gmail.com> writes:
....
>>> But anyway, my point isn't that you can make things that work without
>>> a sophisticated or correct theory about why they work. That's a given.
>>> My point is that large engineering projects usually don't fall down,
>>> but large software projects usually do fall down. This suggests that
>>> our theory as currently understood is not adequate.
>>> Tenth century builders did have sophisticated theory to fall back on.
>>> It was called astrology. It may have acted to reduce construction
>>   ^^^^^^^^^^^^^^^^^^^^^^^^
>>
>> Nonsense. You should really read up on your history of science and
>> technique.
> 
> Are you saying that astrology wasn't developed then? People who're
> interested in astrology have told me it goes back at least to
> babylonia and ancient egypt. A large body of theory with a very long
> history, used to predict lots of things.

Astrology certainly has a long history, but it wasn't a theoretical 
basis for medieval building design.  Medieval architects and builders 
had a considerable body of knowledge about what things worked and what 
didn't, which they learned through apprenticeships and other direct 
contact.  Architects were able to innovate brilliantly -- probably by 
using block models to test things such as arch construction.  What they 
*didn't* do is write lengthy academic papers on their theories:  some of 
them were regarded as 'trade secrets', and in general they were outside 
any academic tradition that encouraged committing designs to paper.

Cheers,
Elizabeth

-- 
==================================================
Elizabeth D. Rather   (US & Canada)   800-55-FORTH
FORTH Inc.                         +1 310-491-3356
5155 W. Rosecrans Ave. #1018  Fax: +1 310-978-9454
Hawthorne, CA 90250
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
==================================================
0
eratherXXX (903)
3/1/2007 6:42:09 PM
In article <xczwt21nwxj.fsf@cbe.ericsson.se>, Ulf Wiger wrote:
> 
> From a layman's perspective (re bridge building), I've assumed that
> one key difference between bridge building and software engineering
> is that when building a bridge, one has a wealth of past experience
> going back at least to the Roman Empire. Also, the functional
> requirements have stayed mostly unchanged about that long (allow for
> transport of people and vehicles from point A to point B).

I've tended to assume that the difference is that software is much
more nonlinear than anything else that people routinely build -- a
single-bit error can kill a program, but a single-grain-of-sand error
is not going to kill a bridge. That is, real engineering works over 
much more continuous domains than software engineering does.

If you want continuity/robustness in software, you have to explicitly
build it in yourself. 
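
As a toy illustration of what building it in explicitly can look like
(a sketch of my own, not from any real system): replicate a computation
-- say, run it on three machines -- and take a majority vote, so that a
single corrupted result is outvoted instead of being fatal. In Haskell:

  -- Majority vote over three replicas of the same computation.
  -- One bad value is outvoted; total disagreement is detected
  -- instead of silently propagating.
  vote3 :: Eq a => a -> a -> a -> Maybe a
  vote3 x y z
    | x == y || x == z = Just x     -- x agrees with another replica
    | y == z           = Just y     -- x was the odd one out
    | otherwise        = Nothing    -- no majority: fail detectably

  main :: IO ()
  main = print (vote3 42 42 43)     -- Just 42: one flipped bit survives

This is the software analogue of what hardware people call triple
modular redundancy; the continuity doesn't come for free, it has to
be paid for explicitly.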

> I've been in the (fortunate?) position to work in a fairly 
> long-running project where we've been both doing more or less
> the same thing over and over for about a decade, but also
> relatively often faced entirely new challenges, where the 
> technology has been new to us, and both requirements and 
> existing specifications have been a moving target. In all
> cases, we benefit from having a well practised team, with
> a tried methodology. Still, it is much more difficult to 
> predict the cost and odds of success when we're in 
> uncharted territory. When the task is familiar, we can 
> predict the outcome pretty accurately.

This is a really dumb question: why do you do familiar tasks?

-- 
Neel R. Krishnaswami
neelk@cs.cmu.edu
0
neelk (298)
3/1/2007 7:28:50 PM
Elizabeth D Rather <eratherXXX@forth.com> writes:

> J Thomas wrote:
>> On Feb 28, 10:08 am, Markus E Leypold
>> <development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-leypold.de> wrote:
>>> "J Thomas" <jethom...@gmail.com> writes:
> ...
>>>> But anyway, my point isn't that you can make things that work without
>>>> a sophisticated or correct theory about why they work. That's a given.
>>>> My point is that large engineering projects usually don't fall down,
>>>> but large software projects usually do fall down. This suggests that
>>>> our theory as currently understood is not adequate.
>>>> Tenth century builders did have sophisticated theory to fall back on.
>>>> It was called astrology. It may have acted to reduce construction
>>>   ^^^^^^^^^^^^^^^^^^^^^^^^
>>>
>>> Nonsense. You should really read up on your history of science and
>>> technique.
>> Are you saying that astrology wasn't developed then? People who're
>> interested in astrology have told me it goes back at least to
>> babylonia and ancient egypt. A large body of theory with a very long
>> history, used to predict lots of things.
>
> Astrology certainly has a long history, but it wasn't a theoretical
> basis for medieval building design.  

Yes. That was the point I wanted to make.

> Medieval architects and builders had a considerable body of
> knowledge about what things worked and what didn't, which they
> learned through apprenticeships and other direct contact.

> Architects were able to innovate brilliantly -- probably by
> using block models to test things such as arch construction.

> What they *didn't* do is write lengthy academic papers on their
> theories: some of them were regarded as 'trade secrets', and in
> general they were outside any academic tradition that encouraged
> committing designs to paper.

And this too.

(It is, for a non-native speaker, sometimes difficult to put such
considerations into non-aggressive, elegant English at 2:00 at night :-( ).

Regards -- Markus

0
3/1/2007 7:30:44 PM
Jerry Avins <jya@ieee.org> writes:

> Just as it takes practice to drive a nail straight, it takes practice
> to know how to apply theory.  Do you recall Feynman's Brazilian optics
> student who knew the theory of plane parallel plates, but couldn't
> recognize a window pane as an instance of it?  

I actually know about that event very well. My memory tells me it was
not "plane parallel plates" but the phenomenon that certain crystals
emit light when split along certain planes. I've forgotten what the
process is called, but I remember very well at least one example
where it occurs, since I'm probably one of the very few people in
Germany who actually went into a store after reading that, bought a
package of rock candy and convinced themselves in the darkness of their
(then) windowless bathroom that it actually emits light when crunched
with a pair of pliers.

> You don't learn theory
> out of books alone.

Well, I certainly haven't said that ...

More about that later / another day. I still do not agree with you in
general, but that is probably because you define theory much more
narrowly than I do (I think knowing about "horizontal shear", or even
about "force", is already theory; you seem to allow it to count as
theory only when (numerical) calculations are done).

Regards -- Markus

0
3/1/2007 7:40:43 PM
Markus E Leypold wrote:

   ...

> ASTROLOGY IS BULLSHIT, thanks).

Are you absolutely sure? Psychology theory tells us that people's 
personalities are heavily influenced by their early experiences. The 
term "formative" comes to mind. Especially in rural settings -- most of 
the world was rural not so long ago -- a toddler's experiences would be 
heavily influenced by the season in which it became ambulatory. In 
summer, moving freely between indoors and out while in winter, the 
transition requiring a lengthy procedure putting on or taking off heavy 
garments. There are certainly other seasonal influences on early 
experience, and can you say with confidence that they don't, on the 
average, shape kid's psyches in consistent ways? Has anyone done a 
Northern-Southern hemisphere correlation? Would it surprise you if such 
a study revealed common traits six months apart?

I don't believe much of that, but I won't rule it out. I have learned 
not to dismiss observations for which there is no theoretic grounding. I 
remember being told that I only imagined the ball lightning I had seen.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
3/1/2007 9:49:14 PM
On Mar 1, 1:42 pm, Elizabeth D Rather <erather...@forth.com> wrote:
> J Thomas wrote:
> > On Feb 28, 10:08 am, Markus E Leypold
> > <development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-leypold.de> wrote:
> >> "J Thomas" <jethom...@gmail.com> writes:
> ...
> >>> But anyway, my point isn't that you can make things that work without
> >>> a sophisticated or correct theory about why they work. That's a given.
> >>> My point is that large engineering projects usually don't fall down,
> >>> but large software projects usually do fall down. This suggests that
> >>> our theory as currently understood is not adequate.
> >>> Tenth century builders did have sophisticated theory to fall back on.
> >>> It was called astrology. It may have acted to reduce construction
> >>   ^^^^^^^^^^^^^^^^^^^^^^^^
>
> >> Nonsense. You should really read up on your history of science and
> >> technique.
>
> > Are you saying that astrology wasn't developed then? People who're
> > interested in astrology have told me it goes back at least to
> > babylonia and ancient egypt. A large body of theory with a very long
> > history, used to predict lots of things.
>
> Astrology certainly has a long history, but it wasn't a theoretical
> basis for medieval building design.  Medieval architects and builders
> had a considerable body of knowledge about what things worked and what
> didn't, which they learned through apprenticeships and other direct
> contact.  Architects were able to innovate brilliantly -- probably by
> using block models to test things such as arch construction.

Yes, they had all that. And they also had astrology to help them with
their scheduling.

0
jethomas5 (1449)
3/1/2007 9:50:19 PM
Jerry Avins <jya@ieee.org> writes:

> Markus E Leypold wrote:
>
>    ...
>
>> ASTROLOGY IS BULLSHIT, thanks).
>
> Are you absolutely sure? Psychology theory tells us that people's
> personalities are heavily influenced by their early experiences. The
> term "formative" comes to mind. Especially in rural settings -- most
> of the world was rural not so long ago -- a toddler's experiences
> would be heavily influenced by the season in which it became
> ambulatory. In summer, it moves freely between indoors and out, while
> in winter the transition requires a lengthy procedure of putting on or
> taking off heavy garments. There are certainly other seasonal
> influences on early experience, and can you say with confidence that
> they don't, on the average, shape kids' psyches in consistent ways?

Yes. Sorry about that. I don't even want to argue about it, but as a
hint at the direction my arguments would take: There are other even
more varying influences and I don't see why the seasonal influence
above all others would take so much precedence. 

> Has anyone done a Northern-Southern hemisphere correlation? Would it
> surprise you if such a study revealed common traits six months apart?

Yes.

> I don't believe much of that, but I won't rule it out. I have learned

I will.

> not to dismiss observations for which there is no theoretic

There are no such observations.

> grounding. I remember being told that I only imagined the ball
> lightning I had seen.

Regards -- Markus

0
3/1/2007 10:09:47 PM
"J Thomas" <jethomas5@gmail.com> writes:

> On Mar 1, 1:42 pm, Elizabeth D Rather <erather...@forth.com> wrote:
>> J Thomas wrote:
>> > On Feb 28, 10:08 am, Markus E Leypold
>> > <development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-leypold.de> wrote:
>> >> "J Thomas" <jethom...@gmail.com> writes:
>> ...
>> >>> But anyway, my point isn't that you can make things that work without
>> >>> a sophisticated or correct theory about why they work. That's a given.
>> >>> My point is that large engineering projects usually don't fall down,
>> >>> but large software projects usually do fall down. This suggests that
>> >>> our theory as currently understood is not adequate.
>> >>> Tenth century builders did have sophisticated theory to fall back on.
>> >>> It was called astrology. It may have acted to reduce construction
>> >>   ^^^^^^^^^^^^^^^^^^^^^^^^
>>
>> >> Nonsense. You should really read up on your history of science and
>> >> technique.
>>
>> > Are you saying that astrology wasn't developed then? People who're
>> > interested in astrology have told me it goes back at least to
>> > babylonia and ancient egypt. A large body of theory with a very long
>> > history, used to predict lots of things.
>>
>> Astrology certainly has a long history, but it wasn't a theoretical
>> basis for medieval building design.  Medieval architects and builders
>> had a considerable body of knowledge about what things worked and what
>> didn't, which they learned through apprenticeships and other direct
>> contact.  Architects were able to innovate brilliantly -- probably by
>> using block models to test things such as arch construction.
>
> Yes, they had all that. And they also had astrology to help them with
> their scheduling.

I doubt that. If -- building churches -- there was any scheduling
apart from "as fast as possible" or "as soon as the wood arrives", it
was for technical reasons.

Early churches were also built by monks. It was AFAIK praying and
working all day except Sunday, and I doubt there was any free time
based on _astrological reasoning_, or any astrological reasoning that
led to one thing being done before another. Indeed, a lot of what those
people did was done for technical reasons most of the time.

As a short example: The material for the bricks needed to be prepared
3 years in advance (clay needed to be frozen in winter and processed in
spring, at least 3 times over, before it was homogeneous enough to keep
for >800 years -- BTW: at Bad Doberan, stones replaced in the 1800s are
now in a worse state than those 800 years old). Or: They had a lot of
problems stabilizing a foundation (AFAIR it cost them half
of their funds to create a foundation that would hold up to the
groundwater) -- of course these things need to be done at the proper
time of the year, when it is a bit drier than at other times. Wood:
Everyone owning a woodlot or a sawmill could tell you that you shouldn't
cut timber in spring or summer.

And so on. For your information: I'm talking about the middle
ages. There was no astrology steering things, but rather a good
understanding of how things work (and very often of why they work).

Regards -- Markus




0
3/1/2007 10:21:52 PM
J Thomas wrote:
> On Mar 1, 1:42 pm, Elizabeth D Rather <erather...@forth.com> wrote:
>> J Thomas wrote:
>>> On Feb 28, 10:08 am, Markus E Leypold
>>> <development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-leypold.de> wrote:
>>>> "J Thomas" <jethom...@gmail.com> writes:
>> ...
>>>>> But anyway, my point isn't that you can make things that work without
>>>>> a sophisticated or correct theory about why they work. That's a given.
>>>>> My point is that large engineering projects usually don't fall down,
>>>>> but large software projects usually do fall down. This suggests that
>>>>> our theory as currently understood is not adequate.
>>>>> Tenth century builders did have sophisticated theory to fall back on.
>>>>> It was called astrology. It may have acted to reduce construction
>>>>   ^^^^^^^^^^^^^^^^^^^^^^^^
>>>> Nonsense. You should really read up on your history of science and
>>>> technique.
>>> Are you saying that astrology wasn't developed then? People who're
>>> interested in astrology have told me it goes back at least to
>>> babylonia and ancient egypt. A large body of theory with a very long
>>> history, used to predict lots of things.
>> Astrology certainly has a long history, but it wasn't a theoretical
>> basis for medieval building design.  Medieval architects and builders
>> had a considerable body of knowledge about what things worked and what
>> didn't, which they learned through apprenticeships and other direct
>> contact.  Architects were able to innovate brilliantly -- probably by
>> using block models to test things such as arch construction.
> 
> Yes, they had all that. And they also had astrology to help them with
> their scheduling.

You are way overestimating its influence.  I have read fairly detailed 
accounts of the building of Salisbury Cathedral, for example, and saw no 
mention of astrology as a scheduling issue.  Priorities included 
weather, church calendar (can't conflict with planned celebrations or 
other events), and availability of material and workers.

Cheers,
Elizabeth

-- 
==================================================
Elizabeth D. Rather   (US & Canada)   800-55-FORTH
FORTH Inc.                         +1 310-491-3356
5155 W. Rosecrans Ave. #1018  Fax: +1 310-978-9454
Hawthorne, CA 90250
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
==================================================
0
eratherXXX (903)
3/1/2007 11:29:55 PM
Elizabeth D Rather <eratherXXX@forth.com> writes:

> J Thomas wrote:
>> On Mar 1, 1:42 pm, Elizabeth D Rather <erather...@forth.com> wrote:
>>> J Thomas wrote:
>>>> On Feb 28, 10:08 am, Markus E Leypold
>>>> <development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-leypold.de> wrote:
>>>>> "J Thomas" <jethom...@gmail.com> writes:
>>> ...
>>>>>> But anyway, my point isn't that you can make things that work without
>>>>>> a sophisticated or correct theory about why they work. That's a given.
>>>>>> My point is that large engineering projects usually don't fall down,
>>>>>> but large software projects usually do fall down. This suggests that
>>>>>> our theory as currently understood is not adequate.
>>>>>> Tenth century builders did have sophisticated theory to fall back on.
>>>>>> It was called astrology. It may have acted to reduce construction
>>>>>   ^^^^^^^^^^^^^^^^^^^^^^^^
>>>>> Nonsense. You should really read up on your history of science and
>>>>> technique.
>>>> Are you saying that astrology wasn't developed then? People who're
>>>> interested in astrology have told me it goes back at least to
>>>> babylonia and ancient egypt. A large body of theory with a very long
>>>> history, used to predict lots of things.
>>> Astrology certainly has a long history, but it wasn't a theoretical
>>> basis for medieval building design.  Medieval architects and builders
>>> had a considerable body of knowledge about what things worked and what
>>> didn't, which they learned through apprenticeships and other direct
>>> contact.  Architects were able to innovate brilliantly -- probably by
>>> using block models to test things such as arch construction.
>> Yes, they had all that. And they also had astrology to help them with
>> their scheduling.
>
> You are way overestimating its influence.  I have read fairly detailed
> accounts of the building of Salisbury Cathedral, for example, and saw
> no mention of astrology as a scheduling issue.  Priorities included
> weather, church calendar (can't conflict with planned celebrations or
> other events), and availability of material and workers.

I've recently become quite interested in how the Münster at Bad
Doberan was built. Can you recommend any particular sources about
Salisbury Cathedral?

Regards --  Markus

0
3/2/2007 12:57:15 AM
"Neelakantan Krishnaswami" <neelk@cs.cmu.edu> wrote in message 
news:slrneueabi.e3h.neelk@gs3106.sp.cs.cmu.edu...
> In article <xczwt21nwxj.fsf@cbe.ericsson.se>, Ulf Wiger wrote:

>> I've been in the (fortunate?) position to work in a fairly
>> long-running project where we've been both doing more or less
>> the same thing over and over for about a decade, but also
>> relatively often faced entirely new challenges, where the
>> technology has been new to us, and both requirements and
>> existing specifications have been a moving target. In all
>> cases, we benefit from having a well practised team, with
>> a tried methodology. Still, it is much more difficult to
>> predict the cost and odds of success when we're in
>> uncharted territory. When the task is familiar, we can
>> predict the outcome pretty accurately.
>
> This is a really dumb question: why do you do familiar tasks?

Because they are profitable!

Note that human development, particularly the development of societies, 
advanced rapidly with the concept of 'division of labour'.  This had members 
of the society specializing in particular skills and relying on other 
members of the society to do other tasks.  Effectively they did a task that 
became familiar to them; they did it more effectively as a result, and the 
society as a whole benefited: it was profitable!

Division of Labour/specialization also afforded the craftsman the 
opportunity to experiment and develop new knowledge about their task - but 
doing the 'familiar task' was what provided the foundation upon which this 
could be built!

-- 
Stuart 


0
Stuart
3/2/2007 9:35:20 AM
Jerry Avins wrote:

> Markus E Leypold wrote:
> 
>    ...
> 
>> ASTROLOGY IS BULLSHIT, thanks).
> 
> Are you absolutely sure? Psychology theory tells us that people's
> personalities are heavily influenced by their early experiences. The
> term "formative" comes to mind. Especially in rural settings -- most of
> the world was rural not so long ago -- a toddler's experiences would be
> heavily influenced by the season in which it became ambulatory. In
> summer, it moves freely between indoors and out, while in winter the
> transition requires a lengthy procedure of putting on or taking off heavy
> garments. There are certainly other seasonal influences on early
> experience, and can you say with confidence that they don't, on the
> average, shape kids' psyches in consistent ways? Has anyone done a
> Northern-Southern hemisphere correlation?

Astrology as we know it here wasn't developed in a setting with
significant winter and summer. It was developed in Mesopotamia, where
seasons are much less important than here. Chinese astrology, on the
other hand, was developed in a climate with similar seasons, and there
the zodiac changes on a yearly basis, with a 12 year cycle (and a longer,
60 year cycle), and planets aren't important. The 60 year cycle probably
has a foundation (complete cut off from the previous generation), and some
countries have significant events on the per-60-year ticks, e.g. France:
1789, 1848, 1968.

Now try matching your theory with this yearly zodiac developed in an
environment with more seasonal influence than the origin of the monthly
zodiac. Well, for me the Chinese model works slightly better: I definitely
share more properties with the Chinese dog than with the Mesopotamian
pisces.

-- 
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
0
bernd.paysan (2418)
3/2/2007 10:02:05 AM
On Mar 2, 5:02 am, Bernd Paysan <bernd.pay...@gmx.de> wrote:
> Jerry Avins wrote:
> > Markus E Leypold wrote:
>
> >> ASTROLOGY IS BULLSHIT, thanks).
>
> > Are you absolutely sure? Psychology theory tells us that people's
> > personalities are heavily influenced by their early experiences. The
> > term "formative" comes to mind. Especially in rural settings -- most of
> > the world was rural not so long ago -- a toddler's experiences would be
> > heavily influenced by the season in which it became ambulatory.
>
> Astrology as we know it here wasn't developed in a setting with
> significant winter and summer. It was developed in Mesopotamia, where
> seasons are much less important than here. Chinese astrology, on the
> other hand, was developed in a climate with similar seasons, and there
> the zodiac changes on a yearly basis, with a 12 year cycle (and a longer,
> 60 year cycle), and planets aren't important. The 60 year cycle probably
> has a foundation (complete cut off from the previous generation), and some
> countries have significant events on the per-60-year ticks, e.g. France:
> 1789, 1848, 1968.

You are both trying to come up with theoretical reasons why there
might be something to one astrological system or another. I believe
this is misguided.

As Bernd points out, there are multiple astrological systems, and
professional astrologers of all systems get customers and repeat
customers, seemingly independent of their astrological system.

There are multiple variants of European astrology that take into
account different things. Some are precessed from the (claimed)
Babylonian model and some are not. Indian astrology is very different
in detail. If you were doing Fourier analysis and you got a nice
pattern, and the same pattern fit just as well if the data was shifted
by a sixth of a cycle, what would you think?

What I think is that astrological theory is bullshit. So astrology's
success is due to something other than its theory. Astrology is
successful *despite* its bullshit theory.

The average professional astrologer with an affluent clientele makes
more money than the average software developer working in industry.
Customers tend to be far more satisfied with the deliverables they
receive. Cost overruns and schedule delays are unknown in astrology. I
contend that if they do happen to deliver accurate projections (on
which I have no reliable data), it is probably in spite of their theory.
And it's quite likely that what creates their customer satisfaction is
something else entirely.

And yet many astrologers definitely believe in their theories. I have
met two professional astrologers who bet hundreds of thousands of
dollars in one case and over a million dollars in the other case on
the stock market -- hard-earned money from their astrology businesses
-- based on their astrological theories of how the stock market should
go. I am convinced that they believe. And when they lost money, they
*refined their theories*.

If I am right that astrology's tremendous success comes in spite of
its theoretical basis, (and I freely admit I could be wrong), what
does that say about the inevitable relationship between theory and
practice?

0
jethomas5 (1449)
3/2/2007 10:50:30 AM
Bernd Paysan wrote:

> Astrology as we know it here wasn't developed in a setting with
> significant winter and summer. It was developed in Mesopotamia,
> where seasons are much less important than here.

Where's "here" ?   Usenet is worldwide.

BTW, another seasonal influence on person's development is the
environment he or she experiences in the womb.  The mother's health
(seasonal infections, etc) and nutrition during that time can have a
critical influence on the growing child.

Also, even if you are right that the roots of Western astrology lie in
Mesopotamia, there has been rather a lot of time for "drift" in the
contents of the theory.   More than enough time for the theory's claims
to adjust to match local truth if it was going to.

    -- chris
0
chris.uppal (3980)
3/2/2007 3:53:02 PM
On Mar 2, 5:02 am, Bernd Paysan <bernd.pay...@gmx.de> wrote:
> Jerry Avins wrote:
> > Markus E Leypold wrote:
>
> >> ASTROLOGY IS BULLSHIT, thanks).
>
> > Are you absolutely sure? Psychology theory tells us that people's
> > personalities are heavily influenced by their early experiences. The
> > term "formative" comes to mind. Especially in rural settings -- most of
> > the world was rural not so long ago -- a toddler's experiences would be
> > heavily influenced by the season in which it became ambulatory.
>
> Astrology as we know it here hasn't been developed in a setting with
> significant winter and summer. It has been developed in Mesopotamia, where
> seasons are much less important than here. The Chinese astrology on the
> other hand has been developed in a climate with similar seasons, and there
> the zodiac changes on a yearly basis, with a 12 year cycle (and a longer,
> 60 year cycle), and planets aren't important. The 60 year cycle probably
> has a foundation (complete cut off from the previous generation), and some
> countries have significant events on the per-60-year ticks, e.g. France:
> 1789, 1848, 1968.

You are both trying to come up with theoretical reasons why there
might be something to one astrological system or another. I believe
this is misguided.

As Bernd points out, there are multiple astrological systems, and
professional astrologers of all systems get customers and repeat
customers, seemingly independent of their astrological system.

There are multiple variants of european astrology that take into
account different things. Some are precessed from the (claimed)
babylonian model and some are not. Indian astrology is very different
in detail. If you were doing fourier analysis and you got a nice
pattern, and the same pattern fit just as well if the data was shifted
by a sixth of a cycle, what would you think?

What I think is that astrological theory is bullshit. So astrology's
success is due to something other than its theory. Astrology is
successful *despite* its bullshit theory.

The average professional astrologer with an affluent clientele makes
more money than the average software developer working in industry.
Customers tend to be far more satisfied with the deliverables they
receive. Cost overruns and schedule delays are unknown in astrology. I
contend that if it happens they deliver accurate projections (which I
have no reliable data on), it is probably in spite of their theory.
And it's quite likely that what creates their customer satisfaction is
something else entirely.

And yet many astrologers definitely believe in their theories. I have
met two professional astrologers who bet hundreds of thousands of
dollars in one case and over a million dollars in the other case on
the stock market -- hard-earned money from their astrology businesses
-- based on their astrological theories of how the stock market should
go. I am convinced that they believe. And when they lost money, they
*refined their theories*.

If I am right that astrology's tremendous success comes in spite of
its theoretical basis, (and I freely admit I could be wrong), what
does that say about the inevitable relationship between theory and
practice?

0
jethomas5 (1449)
3/2/2007 4:02:07 PM
On Mar 2, 5:02 am, Bernd Paysan <bernd.pay...@gmx.de> wrote:
> Jerry Avins wrote:
> > Markus E Leypold wrote:
>
> >    ...
>
> >> ASTROLOGY IS BULLSHIT, thanks).
>
> > Are you absolutely sure? Psychology theory tells us that people's
> > personalities are heavily influenced by their early experiences. The
> > term "formative" comes to mind. Especially in rural settings -- most of
> > the world was rural not so long ago -- a toddler's experiences would be
> > heavily influenced by the season in which it became ambulatory. In
> > summer, moving freely between indoors and out while in winter, the
> > transition requiring a lengthy procedure putting on or taking off heavy
> > garments. There are certainly other seasonal influences on early
> > experience, and can you say with confidence that they don't, on the
> > average, shape kid's psyches in consistent ways? Has anyone done a
> > Northern-Southern hemisphere correlation?
>
> Astrology as we know it here hasn't been developed in a setting with
> significant winter and summer. It has been developed in Mesopotamia, where
> seasons are much less important than here. The Chinese astrology on the
> other hand has been developed in a climate with similar seasons, and there
> the zodiac changes on a yearly basis, with a 12 year cycle (and a longer,
> 60 year cycle), and planets aren't important. The 60 year cycle probably
> has a foundation (complete cut off from the previous generation), and some
> countries have significant events on the per-60-year ticks, e.g. France:
> 1789, 1848, 1968.
>
> Now try matching your theory with this yearly zodiac developed in an
> environment with more seasonal influence than the origin of the monthly
> zodiac. Well, for me the Chinese model works slightly better: I definitely
> share more properties with the Chinese dog than with the Mesopotamian
> pisces.
>
> --
> Bernd Paysan
> "If you want it done right, you have to do it yourself"http://www.jwdt.com/~paysan/


0
jethomas5 (1449)
3/2/2007 4:04:23 PM
On Mar 2, 5:02 am, Bernd Paysan <bernd.pay...@gmx.de> wrote:

> Jerry Avins wrote:
> > Markus E Leypold wrote:

> >> ASTROLOGY IS BULLSHIT, thanks).

> > Are you absolutely sure? Psychology theory tells us that people's
> > personalities are heavily influenced by their early experiences. The
> > term "formative" comes to mind. Especially in rural settings -- most of
> > the world was rural not so long ago -- a toddler's experiences would be
> > heavily influenced by the season in which it became ambulatory.

> Astrology as we know it here hasn't been developed in a setting with
> significant winter and summer. It has been developed in Mesopotamia, where
> seasons are much less important than here. The Chinese astrology on the
> other hand has been developed in a climate with similar seasons, and there
> the zodiac changes on a yearly basis, with a 12 year cycle (and a longer,
> 60 year cycle), and planets aren't important. The 60 year cycle probably
> has a foundation (complete cut off from the previous generation), and some
> countries have significant events on the per-60-year ticks, e.g. France:
> 1789, 1848, 1968.

You are both trying to come up with theoretical reasons why there
might be something to one astrological system or another. I believe
this is misguided.

As Bernd points out, there are multiple astrological systems, and
professional astrologers of all systems get customers and repeat
customers, seemingly independent of their astrological system.

There are multiple variants of european astrology that take into
account different things. Some are precessed from the (claimed)
babylonian model and some are not. Indian astrology is very different
in detail. If you were doing fourier analysis and you got a nice
pattern, and the same pattern fit just as well if the data was shifted
by a sixth of a cycle, what would you think?

What I think is that astrological theory is bullshit. So astrology's
success is due to something other than its theory. Astrology is
successful *despite* its bullshit theory.

The average professional astrologer with an affluent clientele makes
more money than the average software developer working in industry.
Customers tend to be far more satisfied with the deliverables they
receive. Cost overruns and schedule delays are unknown in astrology. I
contend that if they do happen to deliver accurate predictions (something
I have no reliable data on), it is probably in spite of their theory.
And it's quite likely that what creates their customer satisfaction is
something else entirely.

And yet many astrologers definitely believe in their theories. I have
met two professional astrologers who bet hundreds of thousands of
dollars in one case and over a million dollars in the other case on
the stock market -- hard-earned money from their astrology businesses
-- based on their astrological theories of how the stock market should
go. I am convinced that they believe. And when they lost money, they
*refined their theories*.

If I am right that astrology's tremendous success comes in spite of
its theoretical basis (and I freely admit I could be wrong), what
does that say about the inevitable relationship between theory and
practice?

0
jethomas5 (1449)
3/2/2007 4:10:29 PM
J Thomas wrote:

   ...

> You are both trying to come up with theoretical reasons why there
> might be something to one astrological system or another. I believe
> this is misguided.
> 
> As Bernd points out, there are multiple astrological systems, and
> professional astrologers of all systems get customers and repeat
> customers, seemingly independent of their astrological system.

Don't get me wrong. Astrological theory is bullshit. Bernd points out 
that there are several incompatible versions, and I know at least two 
people who espouse them all. Logic? Who needs it?!

There are lots of bullshit theories that in fact deal with real 
phenomena. As a kid, I was warned that mixing milk and seafood might 
bring on "acute indigestion"*, a bullshit diagnosis for coronary 
thrombosis not all that long ago. That whacko notion didn't mean that 
heart attacks don't happen. My remarks to Markus only asked if there 
might not be some ordinary phenomenon that occasionally served to 
reinforce and encourage the bullshit.

Most people aren't content to learn something. They need to explain it, 
whether with fact or invention. Many of those inventions become the 
bullshit theories. That they explain something real gives them staying 
power. Is religion in that class? There are many competing religious 
scenarios. Can they all be valid? Logic? Who needs it?!

Jerry
____________________________________
* Clam chowder, anyone?
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
3/2/2007 6:04:13 PM
Jerry Avins wrote:
> Don't get me wrong. Astrological theory is bullshit. Bernd points out 
> that there are several incompatible versions, and I know at least two 
> people who espouse them all. Logic? Who needs it?!
> 
> There are lots of bullshit theories that in fact deal with real 
> phenomena.

I guess that's what makes them attractive to people who like simple 
answers (btw I am not referring to the late US foreign policy ;-)

For instance, Feng Shui claims to be based on an abstruse astrological 
theory and "ancient wisdom". However, if you follow the rules in the 
planning of buildings and interior architecture you can get pleasant 
results. Illogically this is then used as "proof by example" of the 
horrible theory.

Dunno what that has to do with software engineering, but let it be...

Andreas
-------
MinForth http://minforth.net.ms/
0
akk5412 (334)
3/2/2007 6:26:47 PM
In article <45e7ec8c$1_1@glkas0286.greenlnk.net>, Stuart wrote:
> "Neelakantan Krishnaswami" <neelk@cs.cmu.edu> wrote in message 
> news:slrneueabi.e3h.neelk@gs3106.sp.cs.cmu.edu...
>> In article <xczwt21nwxj.fsf@cbe.ericsson.se>, Ulf Wiger wrote:
> 
>>> I've been in the (fortunate?) position to work in a fairly
>>> long-running project where we've been both doing more or less
>>> the same thing over and over for about a decade, but also
>>> relatively often faced entirely new challenges, where the
>>> technology has been new to us, and both requirements and
>>> existing specifications have been a moving target. In all
>>> cases, we benefit from having a well practised team, with
>>> a tried methodology. Still, it is much more difficult to
>>> predict the cost and odds of success when we're in
>>> uncharted territory. When the task is familiar, we can
>>> predict the outcome pretty accurately.
>>
>> This is a really dumb question: why do you do familiar tasks?
> 
> Because they are profitable!

This was originally a response I cancelled because I didn't have
enough time to write it properly, but it was sent out by accident. I do
familiar tasks too, and so do most of the people I know, but I still don't
understand why this is so.

Kind of the point of a computer is that you can write programs and
build abstractions. If the task is familiar, that means that there has
been a failure to construct an abstraction. It seems like a really
solid software team ought to either be doing things they *don't*
understand, because they've automated the things they do understand,
or building said automation.

But that's not what happens in real life. Why?

-- 
Neel R. Krishnaswami
neelk@cs.cmu.edu
0
neelk (298)
3/2/2007 11:28:34 PM
J Thomas wrote:

> And yet many astrologers definitely believe in their theories. I have
> met two professional astrologers who bet hundreds of thousands of
> dollars in one case and over a million dollars in the other case on
> the stock market -- hard-earned money from their astrology businesses
> -- based on their astrological theories of how the stock market should
> go. I am convinced that they believe. And when they lost money, they
> *refined their theories*.

Well, that's the only rational thing to do!

It's called the scientific method: make a prediction based on a theory,
test the prediction, and if it turns out incorrect then change the
theory.

The only justification for rejecting a theory altogether (without
something better to replace it) would be if it more-or-less
continually produced wrong predictions or no predictions at all.  But,
remember that these mystics of yours were making successful predictions
all the time -- that's how they had made the money they invested (and
probably much more) in the first place.

(No, I don't believe there's any interesting truth in astrology either.
But within /their/ world-view they were acting sensibly and perhaps
even optimally...)

     -- chris
0
chris.uppal (3980)
3/2/2007 11:30:04 PM
Markus E Leypold <development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-
leypold.de> wrote:
> "J Thomas" <jethom...@gmail.com> writes:
> > Markus E Leypold <development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-leypold.de> wrote:
> >> "J Thomas" <jethom...@gmail.com> writes:

> > We started out discussing computer science theory. My opinion is that
> > a lot of it is not directly useful but may be indirectly useful, while
> > some of it is directly useful, but it is still generally inadequate.
>
> > You have expanded this to theory about how to hit a rabbit with a
> > rock, and all the ideas that a programmer with no computer science
> > background at all might come up with.
>
> There is, in my opinion, no general difference there, only in degree
> and area of application. The OP asked "What is (this) theory good
> for?" and also got a number of answers that suggested the practice is
> more important. I think my answer, that, if one neglects theory, one
> will be chasing red herrings and dead ends in one's practice -- this
> answer still stands and is not restricted to computer science.

Sure, and you generalised to the point that you claim I'm doing theory
any time I pay attention. So there's a quantitative slope here. On the
one extreme you do things at random while paying no attention
whatsoever and you get whatever result you happen to get, with no
memory of how you got it. At the other extreme, you go to the library
to learn theory and you never come out. The ideal is somewhere in
between, but where?

> >> BTW, I dislike the insinuation of intellectual dishonesty in your
> >> opponents which I think I can read in your statement
> >> ("stretching"). We don't argue like we do, to "win" against you, but
> >> rather because this is our definition and understanding of the term
> >> "theory".
>
> > I don't mean to be disrespectful, but your theory about theories seems
> > to me particularly academic.
>
> Have it your way, "man of practice". Since "academic" to me is not a swear
> word and also doesn't imply that it is not applicable in practice,
> I can even consider that as a compliment, though I doubt it was meant
> as one.

I don't think of "academic" as a swear word either. Just, when a "man
of practice" goes out to learn some new theory, he's likely to hope he
can apply it within a month or so. "I just want to get my work done."
Lots of academic stuff isn't *immediately* useful. Lots of it is like
the acorns that the squirrels bury and forget. A hundred years from
now a big part of the forest will be those acorns, grown up. But they
don't provide any squirrel a meal this winter.

> >> > But anyway, my point isn't that you can make things that work without
> >> > a sophisticated or correct theory about why they work. That's a given.
> >> > My point is that large engineering projects usually don't fall down,
> >> > but large software projects usually do fall down. This suggests that
> >> > our theory as currently understood is not adequate.
>
> >> > Tenth century builders did have sophisticated theory to fall back on.
> >> > It was called astrology. It may have acted to reduce construction
>
> >>   ^^^^^^^^^^^^^^^^^^^^^^^^
>
> >> Nonsense. You should really read up on your history of science and
> >> technology.
> > Are you saying that astrology wasn't developed then? People who're
> > interested in astrology have told me it goes back at least to
> > babylonia and ancient egypt. A large body of theory with a very long
> > history, used to predict lots of things.
>
> "People interested in astrology" today are probably cranks.

There are professional astrologers with advanced degrees from
prestigious universities (in India). Some astrologers make a lot of
money and have a lot of satisfied customers. It's unheard of for
astrology projects to come in behind schedule and over budget. Clearly
these people are doing something right that software developers
haven't learned to do.  ;)

> As far as the rabbit goes: If you only start thinking when you're
> hungry, you'll probably die. The idea that rabbits can be nourishing
> and that animals die when you throw stones at them has perhaps to be
> conceived before you become hungry.

There's some reason to think that some of our earlier ancestors
started out mostly gathering plus eating "very slow game". If you
start out by eating carrion then you have plenty of time to generalise
to live rabbits.

> Actually I do not understand your counter-argument at all, since
> physics is also very limited in scope and astrology is simple bull
> shit (please come out and get me, astrology trolls, I say it again:
> ASTROLOGY IS BULLSHIT, thanks).

Let me try to explain my point again, then. You say that theory is
important. Now, I have no compelling experience (controlled tests,
double-blind trials, etc) to say whether astrological theory works or
not. But my prejudice says it shouldn't work because I see no
plausible theoretical reason why it would. And yet, many professional
astrologers are quite successful. They have satisfied repeat
customers. The ones with well-off clients make considerable money.
Some of them pay well for custom software to meet their special needs.
So let's assume they have a complex formal theory that they use, and
that theory is complete bullshit and yet they're successful despite
it. Note that what they provide their customers that their customers
value, may be something other than accurately predicting the future.
But still, they succeed and they succeed *in spite of* their theory.
That would tell me that having a complex formal theory is not always
useful. So, is computer science theory more like engineering theory
(which most people agree is practical and useful) or is it more like
astrology?

Obviously, we shouldn't trust professional computer scientists to tell
us how useful it is any more than we'd trust professional astrologers.
Ideally we should listen to their customers and associates. A
testimonial might go something like:

"I was working away at this problem and it was slow going, something
kept going wrong. And then this guy with the CS degree came by and
looked at what I was doing and he told me it was undecidable. I said,
'Huh?'. And he showed me how there was no possible way to do it. Wow,
what a relief! I went to the contact man from the customer and I
understood it well enough to explain it to him. He said he couldn't
change the specs. So I went back to Eddy and we worked out a way to
change the specs to something I could actually do. And I went back to
the contact guy and showed it to him. He said it wasn't in my contract
to change the specs, my job was to implement the specs. But I kept at
him and after awhile he checked with the guys who could actually
change things and they agreed to change the specs! What a relief! I
got it done quick and sent it out. Eddy's great. And his CS degree is
great too. I'm thinking about arranging things so I can go back to
school and get a CS degree myself."

> >> > So is modern computer science more like pre-Newtonian physics, or is
>
> >> You know that Newtonian physics is wrong :-)?
> > No doubt quantum mechanics is wrong too. People use whatever physics
>
> Certainly. It's even self-contradictory.

So what? People who know how to use QM know which rules to apply
where. In the short run it doesn't matter if the rules contradict each
other, if you know how to use them to get the right answers.

However, people who're interested in QM have told me that QM is right
because they've gotten answers correct to 20 decimal places. I also
listened to computational guys who talked about the approximations and
fudge factors needed to solve any but the very simplest QM problems. A
statistician told me, "Give me 15 parameters and I can draw an
elephant. With 16 I can make him wag his tail." So when they say QM is
correct I smile and back away slowly. It isn't worth my time to learn
enough QM to decide whether they're right. Just like learning that
much astrology.

> > they've learned to solve the problems where experience has shown them
> > it gets adequate answers. Or when they have a big investment in the
> > theory, they'll assume the theory gives adequate answers and they'll
> > blame any failure on some outside influence.
>
> So?

So when they have a giant investment in their theories, they become
extremely unreliable at deciding the value of their knowledge. Don't
listen to them about that, listen to their customers.

Not to say it's useless. Just, don't ask your generals whether you
need a war.

0
jethomas5 (1449)
3/3/2007 12:27:47 AM
On Thu, 01 Mar 2007 18:06:47 +0100, Markus E Leypold wrote:

> Coos Haak <chforth@hccnet.nl> writes:
> 
>> On Thu, 01 Mar 2007 15:55:52 +0100, Markus E Leypold wrote:
>>
>>> "People interested in astrology" today are probably cranks. As far as
>>> "the middle ages (or whatever) had astrology as (only) science" goes,
>>> I suggest that you don't rely on bad movies too much for your
>>> knowledge of history.
>>> 
>> Today, they are cranks, no doubt about it. But when astronomy began in the
> 
> Oh yes. That's why I wrote "today". JT was talking about "People
> who're interested in astrology have told me it goes back at least to
> Babylonia and ancient Egypt". Those were people of today -- but I
> admit that I assumed, perhaps wrongly, that their "interest in
> astrology" was not historical, but as practitioners or believers.
> 
> 
>> sixteenth and seventeenth century (long after the Middle Ages), astrology
>> was a respected science. Newton and Kepler were first and foremost
>> astrologers; they only later became astronomers. The paradigm shift
>> did not come as a
> 
> :-). Yes. I've been alluding to that in another post.
> 
>> lightning bolt, but more like a change of climate.
> 
> Exactly my point as far as history of science goes: There is no clean
> separation between "true science" and pre-science. Both are attempts to
> deal with reality and abstract rules for cause and effect.
> 
As I understand now, we agree. Thanks.
-- 
Coos
0
chforth (1145)
3/3/2007 12:33:01 AM
On Fri, 02 Mar 2007 01:57:15 +0100, Markus E Leypold wrote:

> 
> I've recently become quite interested in how the Münster at Bad
> Doberan was built. Can you especially recommend any sources about the
> Salisbury Cathedral?

From memory (I could have read wikipedia/google ;-) this summer, 32 years
ago: for some reason Old Sarum, the old town (Norman, I think), was
abandoned, and in the new town the cathedral was built in a very short
time, some 25-35 years, and - important - by one single architect. So it
has one predominant style, different from nearly all churches in England
and other countries at the time.
-- 
Coos
0
chforth (1145)
3/3/2007 12:41:53 AM
Markus E Leypold <development-2006-8ecbb5cc8aREMOVETHIS@ANDTHATm-e-leypold.de> writes:
> I've recently become quite interested in how the Münster at Bad
> Doberan was built. Can you especially recommend any sources about the
> Salisbury Cathedral?

Ken Follett had a big novel called "Pillars of the Earth" (.de edition
"Die Saeulen der Erde") which was about the adventures of some monks
building a cathedral (and oh, heh, it looks like he's just written a
sequel to it).  I remember his web site (ken-follett.com) formerly
said some things about the references he used, but I don't see that
there now.  The info might be in the introduction to the book.  Anyway
you might enjoy the novel, but keep in mind that it's fiction,
not a technical work.
0
phr.cx (5493)
3/3/2007 1:04:41 AM
Coos Haak wrote:
> Op Fri, 02 Mar 2007 01:57:15 +0100 schreef Markus E Leypold:
> 
>> I've recently become quite interested in how the Münster at Bad
>> Doberan was built. Can you especially recommend any sources about the
>> Salisbury Cathedral?
> 
> From memory (I could have read wikipedia/google ;-) this summer, 32 years
> ago: for some reason Old Sarum, the old town (Norman, I think), was
> abandoned, and in the new town the cathedral was built in a very short
> time, some 25-35 years, and - important - by one single architect. So it
> has one predominant style, different from nearly all churches in England
> and other countries at the time.

The most thorough coverage from an architectural perspective is Sir 
Banister Fletcher's A History of Architecture (Boston: Butterworths, 
1994).  It's quite expensive on Amazon, but maybe you can find a library 
copy (which I did).  The Wikipedia article is also quite good, though.

Yes, it was built in 38 years, a real record.

Cheers,
Elizabeth

-- 
==================================================
Elizabeth D. Rather   (US & Canada)   800-55-FORTH
FORTH Inc.                         +1 310-491-3356
5155 W. Rosecrans Ave. #1018  Fax: +1 310-978-9454
Hawthorne, CA 90250
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
==================================================
0
eratherXXX (903)
3/3/2007 1:10:53 AM
Markus E Leypold wrote:
> Jerry Avins <jya@ieee.org> writes:
> 
>> Just as it takes practice to drive a nail straight, it takes practice
>> to know how to apply theory.  Do you recall Feynman's Brazilian optics
>> student who knew the theory of plane parallel plates, but couldn't
>> recognize a window pane as an instance of it?  
> 
> I actually know about that event very well. My memory tells me it was
> not "parallel planes" but the phenomenon that certain crystals emit
> light when split along certain planes. I've forgotten what the
> process is called, but I remember very well at least one example
> where it occurs, since I'm one of the probably very few people in
> Germany who actually went into a store after reading that, bought a
> package of rock candy and convinced themselves in the darkness of their
> (then) windowless bathroom that it actually emits light when crunched
> with a pair of pliers.

I believe we have different anecdotes in mind. The phenomenon you 
remember is closely related to triboluminescence, a phenomenon I 
discovered to my initial horror when unrolling masking tape in a 
darkroom. (I quickly learned that it didn't noticeably affect the film.) 
To try it, wait to get dark adapted and look at the parting line as 
masking (or most other rubber-adhesive) tape is peeled back.

>> You don't learn theory
>> out of books alone.
> 
> Well, I certainly haven't said that ...
> 
> More about that later / another day. I still do not agree with you in
> general, but that is probably because you define theory much more
> narrowly than I do (I think knowing about "horizontal shear", even
> about "force", is already theory; you seem to allow it to be theory
> only when (numerical) calculations are done).

My definition is looser than that. I would almost go with "useful 
explanation". But because it must be falsifiable, a theory is not useful 
if it can't predict.

I'm a decent mechanic and a passing fair machinist. I learn much by 
watching experts and asking questions when the occasion permits. I once 
saw a difficult  intermittent cut being made in stainless steel in a 
lathe. I watched until the cut was finished and the lathe stopped, then 
asked the machinist if I might come closer and examine the bit. He 
agreed, and I saw what I presumed was the source of his success. The 
grinding of the bit required a fairly complex operation, not easily done 
off hand. I thought of another way to achieve the important part of the 
grind with less difficulty. When I asked the machinist why he ground his 
bit that particular way, he could only say that it gave better results 
than all other ways. When I asked him about the simpler shape that I 
believed would do the same thing, he said he never tried it. Among 
machinists, grinding tool bits is an art with little or no theory. They 
do good work nonetheless.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
3/3/2007 2:07:38 AM
Markus E Leypold wrote:

>>  ... I have learned
> 
> I will.
> 
>> not to dismiss observations for which there is no theoretic
> 
> There are no such observations.
> 
>> grounding. I remember being told that I only imagined the ball
>> lightning I had seen.

What was the theory behind administering elixir of willow root to reduce 
pain and fever? I say either none, or a worthless mystical one. 
Empirical observation of effectiveness is all the justification needed.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
3/3/2007 2:19:03 AM
"J Thomas" <jethomas5@gmail.com> writes:

> Markus E Leypold <development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-
> leypold.de> wrote:
>> "J Thomas" <jethom...@gmail.com> writes:
>> > Markus E Leypold <development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-leypold.de> wrote:
>> >> "J Thomas" <jethom...@gmail.com> writes:
>
>> > We started out discussing computer science theory. My opinion is that
>> > a lot of it is not directly useful but may be indirectly useful, while
>> > some of it is directly useful, but it is still generally inadequate.
>>
>> > You have expanded this to theory about how to hit a rabbit with a
>> > rock, and all the ideas that a programmer with no computer science
>> > background at all might come up with.
>>
>> There is, in my opinion, no general difference there, only in degree
>> and area of application. The OP asked "What is (this) theory good
>> for?" and also got a number of answers that suggested the practice is
>> more important. I think my answer, that, if one neglects theory, one
>> will be chasing red herrings and dead ends in one's practice -- this
>> answer still stands and is not restricted to computer science.
>
> Sure, and you generalized to the point that you claim I'm doing theory
> any time I pay attention. So there's a quantitative slope here. On the
> one extreme you do things at random while paying no attention
> whatsoever and you get whatever result you happen to get, with no
> memory of how you got it. At the other extreme, you go to the library
> to learn theory and you never come out. The ideal is somewhere in
> between, but where?

Why I'm even having that discussion is beyond me ... -- somebody in
this thread seemed to deny the usability of theory and someone else
asserted that "practice was there first" implying a primacy of
practice over theory. I and others have been trying to shed some more
light on the relationship between practice and theory from what I'd
characterize as the point of view of the theory of science (which,
interestingly enough, is not the opposite of the practice of science, but
the science of how science works).

About the "right" eightfold way, the proper balance of the scientific life
style, about when to stay in the library and when to go and be a man
of action^W practice, I cannot say much. I suppose everyone has to
decide that for himself; I can decide very well for myself but not for
others.

After all "never coming out of the library" would also depend on the
opening times, except perhaps if you have your own (library, I mean).

> I don't think of "academic" as a swear word either. Just, when a "man
> of practice" goes out to learn some new theory, he's likely to hope he
> can apply it within a month or so. "I just want to get my work done."

Which is a bit unrealistic. There is a saying that you have to
practice something for 10 years to get really good at it. Expecting to be
able to play an instrument after a month is ridiculous, especially
concerning _practice_.

> Lots of academic stuff isn't *immediately* useful. Lots of it is like
> the acorns that the squirrels bury and forget. A hundred years from
> now a big part of the forest will be those acorns, grown up. But they
> don't provide any squirrel a meal this winter.

I've no pity for people who don't have the patience to wait until such
things bear their fruit. Those are the same people that never interested
themselves in mathematics because it couldn't be immediately applied,
and who now complain that certain insights are not accessible to
them. The very same people that would eat all the corn now (and not keep
any seed) because the new corn wouldn't be ripe next month.

Probably also the same people that wouldn't pay into an old-age fund,
because they won't be old next year.

You see, humans do a lot of things that only pay in the long run. That
made the species so successful: The ability to plan for a vague and
misty tomorrow that will perhaps never happen like this. Still we
plan, scheme, hoard our treasure. The same applies to science.

I repeat: as somebody who always learned by trying to practice what I
found in theory, I have no, absolutely no pity for the attitude that
what I can't eat today must be worthless to keep for tomorrow.

(You can hear from that that I had a tad too many students in my university
time who couldn't be bothered to understand (esp. mathematical)
theory and then found everything too bloody hard -- but of course that
was our fault, because we chose difficult exercises, not theirs because
they missed out on the mathematical foundation and then of course
couldn't see any light anywhere, so to say.)

>> >> > But anyway, my point isn't that you can make things that work without
>> >> > a sophisticated or correct theory about why they work. That's a given.
>> >> > My point is that large engineering projects usually don't fall down,
>> >> > but large software projects usually do fall down. This suggests that
>> >> > our theory as currently understood is not adequate.
>>
>> >> > Tenth century builders did have sophisticated theory to fall back on.
>> >> > It was called astrology. It may have acted to reduce construction
>>
>> >>   ^^^^^^^^^^^^^^^^^^^^^^^^
>>
>> >> Nonsense. You should really read up on your history of science and
>> >> technology.
>> > Are you saying that astrology wasn't developed then? People who're
>> > interested in astrology have told me it goes back at least to
>> > Babylonia and ancient Egypt. A large body of theory with a very long
>> > history, used to predict lots of things.
>>
>> "People interested in astrology" today are probably cranks.

> There are professional astrologers with advanced degrees from
> prestigious universities

Muhahaha. Sorry. I repeat myself: Cranks.

> (in India). Some astrologers make a lot of
> money and have a lot of satisfied customers. It's unheard of for
> astrology projects to come in behind schedule and over budget. Clearly
> these people are doing something right that software developers
> haven't learned to do.  ;)

Certainly. They don't deliver a product. They just take the
money. Some software vendors are pretty near to that ideal
already. Just wait a sec .. we've all been waiting for, say, WinFS,
how long?

>> As far as the rabbit goes: If you only start thinking when you're
>> hungry, you'll probably die. The idea that rabbits can be nourishing
>> and that animals die when you throw stones at them has perhaps to be
>> conceived before you become hungry.

> There's some reason to think that some of our earlier ancestors
> started out mostly gathering plus eating "very slow game". If you
> start out by eating carrion then you have plenty of time to generalize
> to live rabbits.

Exactly. You generalize. You make _a theory_. You don't get hungry and
then just throw stones at rabbits because it comes into your mind. 

Anyway that rabbit example is stupid and not very scientific. If you
still want to make something from it, please reformulate your
caveman's rabbit paradigm. :-).

>> Actually I do not understand your counter argument at all, since
>> physics is also very limited in scope and astrology is simple bull
>> shit (please come out and get me, astrology trolls, I say it again:
>> ASTROLOGY IS BULLSHIT, thanks).
>
> Let me try to explain my point again, then. You say that theory is
> important. Now, I have no compelling experience (controlled tests,
> double-blind trials, etc) to say whether astrological theory works or
> not. But my prejudice says it shouldn't work because I see no
> plausible theoretical reason why it would. And yet, many professional
> astrologers are quite successful. They have satisfied repeat
> customers. The ones with well-off clients make considerable money.
> Some of them pay well for custom software to meet their special needs.
> So let's assume they have a complex formal theory that they use, and
> that theory is complete bullshit and yet they're successful despite
> it. Note that what they provide their customers that their customers
> value, may be something other than accurately predicting the future.
> But still, they succeed and they succeed *in spite of* their theory.
> That would tell me that having a complex formal theory is not always
> useful. So, is computer science theory more like engineering theory
> (which most people agree is practical and useful) or is it more like
> astrology?

Can't you have successful bullshit? After all there are successful
bank robbers. That doesn't make it a profession you can get entered in
your passport.

> Obviously, we shouldn't trust professional computer scientists to tell
> us how useful it is any more than we'd trust professional astrologers.

> Ideally we should listen to their customers and associates. A
> testimonial might go something like:
>
> "I was working away at this problem and it was slow going, something
> kept going wrong. And then this guy with the CS degree came by and
> looked at what I was doing and he told me it was undecidable. I said,
> 'Huh?'. And he showed me how there was no possible way to do it. Wow,
> what a relief! I went to the contact man from the customer and I
> understood it well enough to explain it to him. He said he couldn't
> change the specs. So I went back to Eddy and we worked out a way to
> change the specs to something I could actually do. And I went back to
> the contact guy and showed it to him. He said it wasn't in my contract
> to change the specs, my job was to implement the specs. But I kept at
> him and after awhile he checked with the guys who could actually
> change things and they agreed to change the specs! What a relief! I
> got it done quick and sent it out. Eddy's great. And his CS degree is
> great too. I'm thinking about arranging things so I can go back to
> school and get a CS degree myself."

What do you want to tell us? By analogy, Dirac was wrong and the
Positron doesn't exist, because I've never ever read anywhere
testimonials of thankful positron users and customers.


>> >> > So is modern computer science more like pre-Newtonian physics, or is
>>
>> >> You know that Newtonian physics is wrong :-)?
>> > No doubt quantum mechanics is wrong too. People use whatever physics
>>
>> Certainly. It's even self-contradictory.

> So what? People who know how to use QM know which rules to apply
> where. In the short run it doesn't matter if the rules contradict each
> other, if you know how to use them to get the right answers.

But it does matter in the long run. Scientists look for obvious "fault
lines" in existing theories to understand how the theory must be
changed to connect seamlessly to other theories that it presently
contradicts or can't integrate.

> However, people who're interested in QM have told me that QM is right
> because they've gotten answers correct to 20 decimal places. I also

That's also why they have such problems with quantum gravity ... 

And by the way: I'm one of those interested people. I've studied it
(physics, I mean).

> listened to computational guys who talked about the approximations and
> fudge factors needed to solve any but the very simplest QM problems. 

So? That is a numerical problem. Not one of the correctness of the
theory. That you cannot evaluate an elliptic integral in closed form
doesn't mean that the result is not precisely defined. The same, BTW,
applies to pi.

> A
> statistician told me, "Give me 15 parameters and I can draw an
> elephant. With 16 I can make him wag his tail." 

> So when they say QM is correct I smile and back away slowly. It
> isn't worth my time to learn enough QM to decide whether they're
> right. Just like learning that much astrology.

Your choice. 

:-)

:-))

>> > they've learned to solve the problems where experience has shown them
>> > it gets adequate answers. Or when they have a big investment in the
>> > theory, they'll assume the theory gives adequate answers and they'll
>> > blame any failure on some outside influence.
>>
>> So?
>
> So when they have a giant investment in their theories, they become
> extremely unreliable at deciding the value of their knowledge. Don't
> listen to them about that, listen to their customers.

That is a rather stupid attitude. Science does not have customers in
this sense. And where it does, they don't understand a ****
thing. Actually I hear a lot of those practical people out on the
street are still a bit annoyed that those slacker scientists haven't
invented a car motor that works with tap water: Anybody could see that
this would be one mightily useful invention.

(Joking, but only half).

> Not to say it's useless. Just, don't ask your generals whether you
> need a war.

A different thing altogether. I won't deny that there _are_ problems
with the way the scientific institutions work (I'm not really qualified to
comment on that, but I have lots of opinions and some rather nasty
insights). But all in all I don't see society as a whole fit to judge
science -- not before they either stop using the results of science
or brush up a bit on logic, reasoning and plain common sense.

Regards -- Markus
0
3/3/2007 6:09:24 AM
Jerry Avins <jya@ieee.org> writes:

> Markus E Leypold wrote:
>
>>>  ... I have learned
>> I will.
>>
>>> not to dismiss observations for which there is no theoretic
>> There are no such observations.
>>
>>> grounding. I remember being told that I only imagined the ball
>>> lightning I had seen.
>
> What was the theory behind administering elixir of willow root to

The theory was that elixir of willow root would help everyone under
specific circumstances. That was deduced from a number of cases where
it helped. It helped Joe, Jim and Jill. But did we conclude "Joe, Jim
and Jill get better with willow root?" -- No, it was generalized:
Everyone gets better.

That process is called: "Making a theory".

> reduce pain and fever? I say either none, or a worthless mystical
> one. Empirical observation of effectiveness is all the justification
> needed.


Regards -- Markus

0
3/3/2007 6:16:55 AM
Jerry Avins <jya@ieee.org> writes:

> Markus E Leypold wrote:
>> Jerry Avins <jya@ieee.org> writes:
>>
>>> Just as it takes practice to drive a nail straight, it takes practice
>>> to know how to apply theory.  Do you recall Feynman's Brazilian optics
>>> student who knew the theory of plane parallel plates, but couldn't
>>> recognize a window pane as an instance of it?
>> I actually know about that event very well. My memory tells me it was
>> not "parallel planes" but the phenomenon that certain crystals emit
>> light when split along certain planes. I've forgotten what the
>> process is called, but I remember very well at least one example
>> where it occurs, since I'm one of the probably very few people in
>> Germany who actually went into a store after reading that, bought a
>> package of rock candy and convinced themselves in the darkness of their
>> (then) windowless bathroom that it actually emits light when crunched
>> with a pair of pliers.
>
> I believe we have different anecdotes in mind. The phenomenon you

I don't think so: RF talks about the state of the Brazilian university
system there.

> remember is closely related to triboluminescence, a phenomenon I

Not sure about triboluminescence. Doesn't triboluminescence have something
to do with static electricity (I'm too lazy to look it up now)? The
phenomenon I'm referring to does not: the light is directly emitted
when the crystal lattice is split.

> discovered to my initial horror when unrolling masking tape in a
> darkroom. (I quickly learned that it didn't noticeably affect the

:-)

> film.) To try it, wait to get dark adapted and look at the parting
> line as masking (or most other rubber-adhesive) tape is peeled back.

I already did that as a child. We had a largish number of techniques and
tricks to make things glow in the dark (most of them forgotten now),
but gluing transparent tape to a smooth surface and pulling it off slowly
was one of them :-).


>>> You don't learn theory
>>> out of books alone.
>> Well, I certainly haven't said that ...

>> More about that later / another day. I still do not agree with you in
>> general, but that is probably because you define theory much more
>> narrowly than I do (I think knowing about "horizontal shear", even
>> about "force", is already theory; you seem to allow it to be theory
>> only when (numerical) calculations are done).

> My definition is looser than that. I would almost go with "useful
> explanation". But because it must be falsifiable, a theory is not
> useful if it can't predict.

Ah, I see, you're talking about a theory in the strict positivist
sense and also want it to have predictive power.

Well - I'll think about it. 

> I'm a decent mechanic and a passing fair machinist. I learn much by
> watching experts and asking questions when the occasion permits. I
> once saw a difficult  intermittent cut being made in stainless steel
> in a lathe. I watched until the cut was finished and the lathe
> stopped, then asked the machinist if I might come closer and examine
> the bit. He agreed, and I saw what I presumed was the source of his
> success. The grinding of the bit required a fairly complex operation,
> not easily done off hand. I thought of another way to achieve the
> important part of the grind with less difficulty. When I asked the
> machinist why he ground his bit that particular way, he could only say
> that it gave better results than all other ways. When I asked him
> about the simpler shape that I believed would do the same thing, he
> said he never tried it. Among machinists, grinding tool bits is an art
> with little or no theory. They do good work nonetheless.

The usefulness of such traditions notwithstanding -- I think those
anecdotes tend to inflate their impact. A useful tidbit of practice or
empirical observation often stays a dead end until it begins to fit into
some theory where it can be properly extrapolated and exploited.

Regards -- Markus
0
3/3/2007 6:27:09 AM
Dirk Thierbach <dthierbach@usenet.arcornews.de> writes:
> For some Monads, one can think about (>>=) as some sort of "sequencing".
> Then in an expression like "F >>= \x -> G[x]" (by G[x] I mean that x is
> a free variable in the expression G), it means "first do F, bind the
> result of F to x, if possible, and then do G". 

It is starting to glimmer on me that the \x->G[x] in the above means
something other than function application in the usual sense.  Maybe this
is how the mysterious operation of lifting manages to work.  Am I on the
right track?

Thanks.

0
phr.cx (5493)
3/3/2007 6:45:16 AM
Paul Rubin <http://phr.cx@nospam.invalid> wrote:
> Dirk Thierbach <dthierbach@usenet.arcornews.de> writes:
>> For some Monads, one can think about (>>=) as some sort of "sequencing".
>> Then in an expression like "F >>= \x -> G[x]" (by G[x] I mean that x is
>> a free variable in the expression G), it means "first do F, bind the
>> result of F to x, if possible, and then do G". 

> It is starting to glimmer on me that the \x->G[x] in the above means
> something other than function application in the usual sense. 

I just meant it as a "placeholder" for some expression. For example,

  -- (getArgs needs: import System.Environment)
  main = do
    s <- getArgs
    print (head s)

can also be written as

  main = getArgs >>= \s -> print (head s)

so F is "getArgs", and G[s] is "print (head s)". 
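
In general, each "x <- F" binding works the same way (schematically --
the full desugaring rule in the Haskell report has a few more cases):

  -- do { x <- f; g }   is shorthand for   f >>= \x -> g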

> Maybe this is how the mysterious operation of lifting manages to work.  

There's nothing mysterious in lifting :-) Try to think of it in the
following way: There are two different worlds in Haskell, the pure
functional world (without side effects), and the imperative world of
the IO monad (with side effects). Those two don't mix. They cannot mix,
because in the pure functional world, order of evaluation doesn't
matter, and the result is the same for every evaluation. But in the
imperative world, order does matter, and the result can be different
for different runs (for example, if you give different arguments to
the commandline, getArgs will return different values).
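
Concretely, it is the types that keep the two worlds apart (a small
sketch using the functions mentioned above):

  getArgs :: IO [String]   -- an action; two runs may yield different values
  head    :: [a] -> a      -- pure; same result for the same input, always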

You can access the imperative world only "inside" the IO monad ("inside"
a do-construct). You can use functions from the purely functional
world inside this monad in a number of ways: You can use them to 
process values returned by imperative "statements", once they are bound
to variables. You can also turn a pure function into an "imperative"
function using liftM. Just look at the type:

liftM :: Monad m => (a -> b) -> (m a -> m b)

This means that liftM takes a pure function as argument, and returns
a function that works on monadic values. So you have "lifted" this
function "inside" the monad.
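
As a small illustration (the name firstArg is made up for this example;
liftM itself comes from Control.Monad):

  import Control.Monad (liftM)
  import System.Environment (getArgs)

  -- head is pure; liftM head :: IO [String] -> IO String
  firstArg :: IO String
  firstArg = liftM head getArgs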

Does that make things clearer? As I said, it's a good exercise to try
to implement liftM yourself. Look at the type signature, and then you'll
discover that there's really just one way to write the code for it.
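
(For reference, one way that exercise can come out -- a sketch, equivalent
to the library definition up to naming:

  liftM :: Monad m => (a -> b) -> (m a -> m b)
  liftM f mx = mx >>= \x -> return (f x)

That is: run mx, bind its result to x, and return f x -- the "sequencing"
reading of (>>=) made concrete.)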

- Dirk



0
dthierbach2 (260)
3/3/2007 8:24:16 AM
Markus E Leypold <development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-
leypold.de> wrote:
> "J Thomas" <jethom...@gmail.com> writes:

> Why I'm even having that discussion is beyond me ...

I've been feeling the same way. We're clearly talking past each other,
failing to get each other's points. It's as if we're accidentally
trolling each other. I don't know the term for somebody who gets
trolled, but from another context it's like a couple of straight men
ad-libbing with no comedian....

> -- somebody in
> this thread seemed to deny the usability of theory and someone else
> asserted that "practice was there first" implying a primacy of
> practice over theory. I and others have been trying to shed some more
> light on the relationship between practice and theory from what I'd
> characterize as the point of view of the theory of science (which,
> interestingly enough, is not the opposite of the practice of science, but
> the science of how science works).

Interesting. I hadn't noticed theory-of-science people doing
controlled experiments etc. What I've noticed from that is more
literary. I saw people looking at history and trying to make theories
about what it means, and asserting the truth of their theories from
historical anecdotes. What I've seen has looked far more like literary
theory applied to science, than science applied to science.

But that aside, I think I agree with you basically. My claim is that we
get theory that's carefully calibrated against practice (like
engineering theory) and then we get theory that isn't (like
astrology). Not that all theory is useless and not that practice is
primary, but that the interaction of theory and practice is essential
for "good" results. Without that, the theories only get judged in an
evolutionary sense, by how well they satisfy customers (and by how
well their repeat customers do relative to customers who try something
else).

When you appear to argue the primacy of theory, and when you argue
that it's impossible to avoid doing theory, you aren't arguing about
how to do science at all. Surely we agree that it's easy to do theory
in ways that aren't useful.

> > I don't think of "academic" as a swear word either. Just, when a "man
> > of practice" goes out to learn some new theory, he's likely to hope he
> > can apply it within a month or so. "I just want to get my work done."
>
> Which is a bit unrealistic. There is a saying that you have to
> practice something for 10 years to get really good at it. Expecting to be
> able to play an instrument after a month is ridiculous, especially
> concerning _practice_.

Sure. But again there's a matter of degree. Say you have somebody who
has an eclectic set of skills and he's doing pretty well at a variety
of projects. He has a binary tree and he finds he might benefit from
balancing it. So he learns enough about binary trees to balance his
tree, and implements it in less than a day, and then he gets to see
whether it actually helps him. He's gotten a useful trick that was
probably worth the time it took him to learn it. There's the chance
he'll find out that his application doesn't actually need a balanced
binary tree, but he can't be sure of that ahead of time.

On the other hand, he might find that with ten years of concentrated
study he can become a real master of object-oriented programming. With
just a month of study he'll be just good enough to shoot himself in
the foot. How much of an investment should he put into it?

There's a place for masters in each specialty (or, tautologically, for
each specialty there's a place). And there's a place for people
who just want to get their work done. If a project can be done by a
small number of goal-oriented people, that's good. If the project
requires a large number of specialists who don't communicate well with
specialists in other disciplines, then sadly that's what it takes.

> > Lots of academic stuff isn't *immediately* useful. Lots of it is like
> > the acorns that the squirrels bury and forget. A hundred years from
> > now a big part of the forest will be those acorns, grown up. But they
> > don't provide any squirrel a meal this winter.
>
> I've no pity for people who don't have the patience to wait until such
> things bear their fruit.

Those people can be useful in their place. Some people run tree
nurseries and plant orchards, other people follow the harvests and
pick fruit. Both are useful, and they might not understand each other.

> You see, humans do a lot of things that only pay in the long run. That
> made the species so successful: The ability to plan for a vague and
> misty tomorrow that will perhaps never happen like this. Still we
> plan, scheme, hatch our treasure. The same applies to science.

Sure. It takes a balance. Suppose that you hear various acquaintances
complaining that their cats scratch their furniture. You might
recommend to each of them that they spend ten years studying behavior
modification and then when they are masters of it they can train their
cats. Or you might see a market opportunity: you can spend ten years
learning cat psychology and then offer to come live in each client's
house for two weeks and train their cats. Or you might offer them
practical advice that may not solve their problem. Get scratching
posts. Spray the furniture with evil-smelling odors. Give away the
cat. Perhaps you'll get so interested in cat psychology that you
forget your original reason and spend your time publishing papers that
may be useful sometime within the next hundred years. It takes all
kinds of people to make an interesting society.

> I repeat: as somebody who always learned by trying to practice what I
> found in theory, I have no, absolutely no pity for the attitude that
> what I can't eat today must be worthless to keep for tomorrow.

I agree. And still, there's a lot more available to learn than any one
person can learn, and we have to pick and choose. Somehow I must learn
the things I need to know, well enough at least to get by.

> >> "People interested in astrology" today are probably cranks.
> > There are professional astrologers with advanced degrees from
> > prestigious universities
>
> Muhahaha. Sorry. I repeat myself: Cranks.

If you say that without controlled experiments, then what are you but
somebody who claims without evidence that your theory is better than
their theory? Not that I disagree, I tend to also believe without
evidence that your theory is better than their theory....

> > (in India). Some astrologers make a lot of
> > money and have a lot of satisfied customers. It's unheard of for
> > astrology projects to come in behind schedule and over budget. Clearly
> > these people are doing something right that software developers
> > haven't learned to do.  ;)
>
> Certainly. They don't deliver a product. They just take the
> money. Some software vendors are pretty near to that ideal
> already. Just wait a sec .. we've all been waiting for, say, WinFS,
> how long?

Their customers are satisfied and willingly come back to pay them
more. If they just took the money their customers would call the
police or sue them or whatever.

Similarly, Microsoft filled a need and satisfied customers. When the
software industry was full of incompatibilities, Microsoft provided an
almost-adequate standard. Not IMHO a good one, but one that many
businesses were willing to pay for. They were ready to pay well for
things that could be made to fit together, that failed in mostly
predictable ways, rather than accept a hodgepodge of applications and
operating systems etc. And they've tended to get dissatisfied not
because of the technical inadequacy so much as because they saw MS
hooking them into a planned-obsolescence upgrade cycle, where MS
products became incompatible with older MS products and it was a big
inconvenience.

Pretty much anybody who's been involved in a standards effort will
agree that if one company can establish a standard and make it stick,
that's worth money.

I can't tell you for sure what value astrology has for customers. I
don't think it's accurate predictions. Figure out what they're really
selling and you might find a better way to provide that value.

> >> Actually I do not understand your counter argument at all, since
> >> physics is also very limited in scope and astrology is simple bull
> >> shit (please come out and get me, astrology trolls, I say it again:
> >> ASTROLOGY IS BULLSHIT, thanks).
>
> > Let me try to explain my point again, then. You say that theory is
> > important. Now, I have no compelling experience (controlled tests,
> > double-blind trials, etc) to say whether astrological theory works or
> > not. But my prejudice says it shouldn't work because I see no
> > plausible theoretical reason why it would. ....

> > But still, they succeed and they succeed *in spite of* their theory.
> > That would tell me that having a complex formal theory is not always
> > useful. So, is computer science theory more like engineering theory
> > (which most people agree is practical and useful) or is it more like
> > astrology?
>
> Can't you have successful bullshit? After all there are successful
> bank robbers. That doesn't make it a profession you can get entered in
> your passport.

Yes. Sometimes there's a market for BS. It tends to be a demanding
market that accepts only the best BS, not an easy market to break
into. How much of CS is BS?

> > Obviously, we shouldn't trust professional computer scientists to tell
> > us how useful it is any more than we'd trust professional astrologers.
> > Ideally we should listen to their customers and associates.
>
> What do you want to tell us? By analogy, Dirac was wrong and the
> Positron doesn't exist, because I've never ever read anywhere
> testimonials of thankful positron users and customers.

Is it BS? You can't judge by internal consistency; any particular
school of astrology can have that. Try a different example: how much
of English lit criticism is BS? From my perspective, a lot. But you
can get an advanced degree in English lit from prestigious
universities in the USA. How about political science? Here's something
where they occasionally do controlled experiments and do statistics on
their results. They're starting to get a clear sense of how effective
push-polling is and how to make it more effective, for example.
They're making a scientific study of how to do successful BS. But I'm
convinced a lot of political science itself is BS.

So, science can help you tell what's BS, with controlled experiments.
How much of computer science has had those controlled experiments?
Ideally, when we get a new approach to programming (that may take 10
years to become fully proficient in) we should do controlled trials.
Ideally we would do double-blind experiments, where nobody including
the programmers knows whether they're doing OO etc until after the
experiment is over.  ;)  I say that in the absence of these
experiments we have the possibility for a lot of BS to slip in.


> >> > No doubt quantum mechanics is wrong too. People use whatever physics
>
> >> Certainly. It's even self-contradictory.
> > So what? People who know how to use QM know which rules to apply
> > where. In the short run it doesn't matter if the rules contradict each
> > other, if you know how to use them to get the right answers.
>
> But it does matter in the long run. Scientists look for obvious "fault
> lines" in existing theories to understand how the theory must be
> changed to connect seamlessly to other theories that it presently
> contradicts or can't integrate.

Yes. So I can learn QM to solve particular problems that help me get
my work done. Or I can learn it to aid the long-run advance of
science. (Or both.) If I have the leisure or the profession to advance
science, then that's fine. If I just need to solve a few problems I'm
probably better off to find a QM expert and ask him for answers, and
see how much he charges.

> > However, people who're interested in QM have told me that QM is right
> > because they've gotten answers correct to 20 decimal places. I also
>
> That's also why they have such problems with quantum gravity ...

I consider these particular people cranks. But of course there are QM
practitioners who aren't cranks.

> And by the way: I'm one of those interested people. I've studied it
> (physics, I mean).

;) I don't think you're a crank for learning about it. I'd do it
myself if I didn't have more pressing priorities.

> > listened to computational guys who talked about the approximations and
> > fudge factors needed to solve any but the very simplest QM problems.
>
> So? That is a numerical problem. Not one of the correctness of the
> theory. That you cannot evaluate an elliptic integral in closed form
> doesn't mean that the result is not precisely defined. The same, BTW,
> applies to pi.

Well, if there are a lot of fudge factors then I wouldn't consider
correct answers a validation of the theory. Adjust the unknowns until
the correct answer is found. Then announce that you got the correct
answer so the theory must be correct. Bogus.

Though the other way around, if you couldn't find any way to adjust
the fudge factors to get a correct answer, *would* show the theory was
incomplete.

> > So when they say QM is correct I smile and back away slowly. It
> > isn't worth my time to learn enough QM to decide whether they're
> > right. Just like learning that much astrology.
>
> Your choice.
>
> :-)


Sure. You have to pick your topics.

> > So when they have a giant investment in their theories, they become
> > extremely unreliable at deciding the value of their knowledge. Don't
> > listen to them about that, listen to their customers.
>
> That is a rather stupid attitude. Science does not have customers in
> this sense. And where it has, they don't understand a ****
> thing.

How do we decide value? Internal consistency is not enough.

> > Not to say it's useless. Just, don't ask your generals whether you
> > need a war.
>
> A different thing altogether. I won't deny that there _are_ problems
> with way the scientific institutions work (I'm not really qualified to
> comment on that, but I've lots of opinion and some rather nasty
> insights). But all in all I don't see society as a whole fit to judge
> on science -- not before they either stop using the results of science
> or brush up a bit on logic, reasoning and plain common sense.

The choice of how much of society's resources to put into science is a
social choice, and big parts of it are made by politicians. :( If we
were to base it on less-irrational reasoning, we would resort to the
science of economics. :( :(

In economic terms, funding science is like betting in a lottery except
that the payout can be much larger than the total amount bet. And we
don't have much idea how large the payouts will be or how long it will
take them to come. And we don't have very good ideas how to allocate
the funding among sciences.

So for example, some microbiologists were studying viruses that attack
bacteria. These viruses had no medical value -- different strains of
the same species of bacteria tended to be immune to different viruses
so we couldn't stockpile them to cure diseases, plus our own immune
systems attacked the viruses. They studied the methods the bacteria
used to protect themselves from the viruses, which also had no
immediate practical use. And eventually they found ways to use those
methods as tools, which led to genetic engineering, which maybe hasn't
yet paid back the money that's been poured into it but it has a whole
lot of potential.

I don't think anybody's competent to allocate research money, but
somehow it has to be done. As to any particular scientific theory,
judge it by agreement with data and by internal consistency. And if
somebody wants to say that astrology is worthless or acupuncture is
worthless because their theories don't have experimental evidence,
then I say before we make that judgement we need to do the experiments
which show the theories don't fit the evidence. They may be BS. They
are quite likely BS. But it's just my clinical intuition saying so
until I get evidence.

0
jethomas5 (1449)
3/3/2007 1:31:41 PM
Markus E Leypold wrote:

> The theory was that elixir of willow root would help everyone under
> specific circumstances. That was deduced from a number of cases where
> it helped. It helped Joe, Jim and Jill. But did we conclude "Joe, Jim
> and Jill get better with willow root?" -- No, it was generalized:
> Everyone gets better.
> 
> That process is called: "Making a theory".

Note that that sort of theory has nothing whatever to do with the sort
of theory that formal method and computing theory enthusiasts are
urging programmers to make more use of.  Indeed it is much closer to
the kind of reasoning that they abhor and want us to replace.

    -- chris
0
chris.uppal (3980)
3/3/2007 2:47:53 PM
On Mar 3, 8:31 am, "J Thomas" <jethom...@gmail.com> wrote:
> Interesting. I hadn't noticed theory-of-science people doing
> controlled experiments etc. What I've noticed from that is more
> literary. I saw people looking at history and trying to make theories
> about what it means, and asserting the truth of their theories from
> historical anecdotes. What I've seen has looked far more like literary
> theory applied to science, than science applied to science.

Controlled experiment is not the defining feature of science, but
rather the search for cause-and-effect explanations. When controlled
experiments are possible, they are an excellent tool in that search,
but in many contexts they are not possible. Consider, for example,
astronomy, where results from experimental sciences help inform
theories of stellar evolution, but if we were to wait until we could
perform a controlled experiment on the factors leading to a supernova,
we would still be waiting for our first tested hypothesis.

0
agila61 (3956)
3/3/2007 3:41:52 PM
"J Thomas" <jethomas5@gmail.com> writes:

> Markus E Leypold <development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-
> leypold.de> wrote:
>> "J Thomas" <jethom...@gmail.com> writes:
>
>> Why I'm even having that discussion is beyond me ...
>
> I've been feeling the same way. We're clearly talking past each other,
> failing to get each other's points. It's as if we're accidentally
> trolling each other. I don't know the term for somebody who gets

I don't see it as bad as that, and the last paragraphs of your post
were certainly enlightening on the topic of where you draw your
experience from and why your definitions differ from mine :-).

> trolled, but from another context it's like a couple of straight-men

Trollee?

> ad-libbing with no comedian....


>> -- somebody in
>> this thread seemed to deny the usability of theory and someone else
>> asserted that "practice was there first" implying a primacy of
>> practice over theory. I and others have been trying to shed some more
>> light on the relationship between practice and theory from what I'd
>> characterize as the point of view of the theory of science (this,
>> interestingly enough, is not the opposite of the practice of science, but
>> the science of how science works).
>
> Interesting. I hadn't noticed theory-of-science people doing
> controlled experiments etc. What I've noticed from that is more

Well -- for one thing philosophy is a science (though not a natural
science), as is history: both don't do experiments but still
observe. Theory of science does the same, mostly. Then, as
neighbouring disciplines there are experimental psychology and
educational sciences (both do experiments to assess the effects and
effectiveness of methods).

> literary. I saw people looking at history and trying to make theories
> about what it means, and asserting the truth of their theories from
> historical anecdotes. 

Historical anecdotes are still observations. 

BTW: I know another "science" which doesn't do controlled experiments:
Economics. (Though they do uncontrolled experiments (in the worst sense
of the word: out of control) where economic theory meets politics.)

Actually only a few scientific disciplines have controlled experiments
in the full sense of the word: physics, chemistry, ...

Even mathematics has no experiments ... :-).

So science comes in all colours, actually, and I consider your approach too narrow.

More about that another time: I've got a party to go to in half an hour :-).

Regards -- Markus

PS: Shouldn't we shift this thread to another group? Or is it still
    entertaining for the bystanders? Wondering ...


0
3/3/2007 4:13:51 PM

"Chris Uppal" <chris.uppal@metagnostic.REMOVE-THIS.org> writes:

> Markus E Leypold wrote:
>
>> The theory was that elixir of willow root would help everyone under
>> specific circumstances. That was deduced from a number of cases where
>> it helped. It helped Joe, Jim and Jill. But did we conclude "Joe, Jim
>> and Jill get better with willow root?" -- No, it was generalized:
>> Everyone gets better.
>> 
>> That process is called: "Making a theory".
>
> Note that that sort of theory has nothing whatever to do with the sort
> of theory that formal method and computing theory enthusiasts are
> urging programmers to make more use of.  Indeed it is much closer to
> the kind of reasoning that they abhor and want us to replace.

No. We are still talking about the relationship between theory and
practice. So it applies. That there are also more and less reliable
reasoning has nothing to do with it.

And: There is a huge number of enthusiasts (like me) who're not urging
people to use strict formal theory, but actually only to even _start_
_reasoning_ about programs (using such means and methods as naive
logic and natural language) instead of looking at the code and saying
"yeah, I see it is right, can't say why, but if you're good, you can
see it's right". Absence of implementation errors can be reasoned out
(at least in key parts of a system) and need not be based on "we
looked sharply and didn't see any bugs".

Regards -- Markus





>
>     -- chris
0
3/3/2007 4:20:49 PM
Markus E Leypold wrote:

   ...

> The usefulness of such traditions notwithstanding -- I think those
> anecdotes tend to inflate their impact. A useful tidbit of practice,
> an empirical observation, often stays a dead end until it begins to fit in
> some theory where it can be properly extrapolated and exploited.

Eventually exploited in some generalization or extension. In the 
meantime, they continue to be put to good use with or without 
theoretical grounding. Like salicylic acid. (Only in the past few years 
have we learned much that is coherent about how aspirin works in the body.)

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
3/3/2007 5:17:31 PM
Markus E Leypold wrote:


   ...

>           ... -- somebody in
> this thread seemed to deny the usability of theory and someone else
> asserted that "practice was there first" implying a primacy of
> practice over theory. 

I wrote that practice was there first. I didn't mean that it's superior 
(except see below) but that in the overwhelming number of instances, a 
theory of something arose after it was already being done. If that's 
primacy, so be it.

Practice takes precedence over theory in an important way. A theory is 
useless unless some set of outcomes can show it to be false. Those 
outcomes are part of practice. Practice can demolish theory, but not the 
other way round. Those who say that something can't exist because there 
is no known explanation are arrogant fools. (Ball lightning and 
continental drift come to mind. So do astrology and reincarnation, but I 
reject those.)

I don't call the supposition that something will work, a theory. My 
notion of theory is restricted to how it works. That's probably the 
difference behind this discussion of ours, which I've enjoyed.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
3/3/2007 5:52:38 PM
On Thu, 01 Mar 2007 13:31:48 +0100
Stephan Kuhagen <nospam@domain.tld> wrote:

> Markus E Leypold wrote:
> 
> > You know that Newtonian physics is wrong :-)?
> 
> It isn't. It has only a limited scope of validity.

	Bzzt wrong - it is wrong under all conditions; it's just that under
many conditions the errors are small enough not to matter.

-- 
C:>WIN                                      |   Directable Mirror Arrays
The computer obeys and wins.                | A better way to focus the sun
You lose and Bill collects.                 |    licences available see
                                            |    http://www.sohara.org/
0
steveo (475)
3/4/2007 7:47:54 AM
Markus E Leypold wrote:

> >> The theory was that elixir of willow root would help everyone under
> >> specific circumstances. That was deduced from anumber of cases
> where >> it helped. It helped Joe, Jim and Jill. But did we conclude
> "Joe, Jim >> and Jill get better with willow root?" -- No, it was
> generalized:  >> Everyone gets better.
> >> 
> >> That process is called: "Making a theory".
> > 
> > Note that that sort of theory has nothing whatever to do with the
> > sort of theory that formal method and computing theory enthusiasts
> > are urging programmers to make more use of.  Indeed it is much
> > closer to the kind of reasoning that they abhor and want us to
> > replace.
> 
> No. We are still talking about the relationship between theory and
> practice. So it applies. That there are also more and less reliable
> reasoning has nothing to do with it.

It seems to me that the true analogue to "willow-style" theorising in
today's programming isn't something formal (and "foundationish") like
consideration of fixpoints, and nor is it informal reasoning like
knowing that "this loop must terminate because XYZ", but is the kind
of "voodoo" practices that no one (MF enthusiast or not) likes to see.

Things like: "I'm getting random crashes so I add 'synchronized' to
all the methods".  "I don't know what that message means, but it goes
away if you give the -xyz flag to the compiler".  "I fixed that
data-corruption problem by declaring my static data in a different
order".  And so on...

The level of theorising involved is, at worst (or maybe I mean, at
best): "it worked last time so it's probably worth trying again".
Through: "it has usually worked in the past, so I suspect that there is
some kind of universal law there, although I don't currently have a
suggestion for why there should be such a law.  Unless shown otherwise,
I shall act on the assumption that the law is true".  Up to (or do I
mean down to?) : "it works because willow absorbs the cooling nature of
the water it grows near, and so acts to cool the inflamation".  I leave
it to you to create a plausible equivalent of that last ad-hoc theory in
the mind of the guys scattering 'synchronised' around at random.

In all three cases, the theory has some predictive power (and may even
be useful, in the sense of "it worked").  But the predictive power is
limited to "what happened last time will probably happen again" --
which is the /weakest/ prediction that can be made.  The three cases
differ in what kinds of explanatory power (insight) they offer; two of
them make no real attempt to provide an explanation, the last offers an
explanation of sorts, but because it is completely ad-hoc it doesn't
tap into a wider web of mutually supporting concepts, understanding, and
predictions, and so offers no real insight.

I am, BTW, assuming that the person concocting the "willow is cooling"
theory wasn't seeing it as a special case of a wider-ranging (even if
wrong) theory about heating and cooling substances, and hot and cold
categories of illness.  If it was part of such a wider theory, then I
wouldn't call it ad-hoc, and I wouldn't compare it to modern voodoo
programming.

    -- chris
0
chris.uppal (3980)
3/4/2007 4:56:20 PM
Steve O'Hara-Smith schrieb:
> On Thu, 01 Mar 2007 13:31:48 +0100
> Stephan Kuhagen <nospam@domain.tld> wrote:
> 
>> Markus E Leypold wrote:
>>
>>> You know that Newtonian physics is wrong :-)?
>> It isn't. It has only a limited scope of validity.
> 
> 	Bzzt wrong - it is wrong under all conditions; it's just that under
> many conditions the errors are small enough not to matter.

Bzzt wrong - you're confusing "validity" and "truth" here.

If you're after truth, none of the existing theories can claim that. 
They are all approximations to reality, some better, some worse.

"Validity" would be the area where the inevitable errors are so small 
that they don't affect the end result.
For reasonably low speeds and masses, Newtonian physics is valid. For 
high masses and/or speeds, it becomes invalid.
(One of the important arts in physics is to know which theory is valid 
for what situation, and when to develop a new theory.)

Regards,
Jo
0
jo427 (1164)
3/4/2007 7:25:12 PM

Jerry Avins <jya@ieee.org> writes:

> Markus E Leypold wrote:
>
>
>    ...
>
>>           ... -- somebody in
>> this thread seemed to deny the usability of theory and someone else
>> asserted that "practice was there first" implying a primacy of
>> practice over theory.
>
> I wrote that practice was there first. I didn't mean that it's
> superior (except see below) but that in the overwhelming number of

I simply deny the word "overwhelming" here. Say: electrical generator,
laser, positrons, particle accelerator, transistor, NMR, geo-scan
(material analysis using X-ray resonance), etc. -- almost everything
APART from bridges and houses had a theory as a precursor that wasn't
perfect but rather suggested how it could be done.

So taking examples from the domain of building has been rather
misleading so far in this thread.

> instances, a theory of something arose after it was already being
> done. If that's primacy, so be it.


> Practice takes precedence over theory in an important way. A theory is
> useless unless some set of outcomes can show it to be false. Those
> outcomes are part of practice. Practice can demolish theory, but not
> the other way round. 

Unfortunately practice cannot evolve without SOME theory of SOME
kind. You deniers of theory miss that even handing down the "rules of
practice" to the next generation implies using terms and abstractions
that in themselves constitute some (perhaps minimal, but nonetheless
real) form of theory.

> Those who say that something can't exist because
> there is no known explanation are arrogant fools. 

But do those denying the _primacy_ of practice, like me, really say
that? They don't. But continental drift, even to formulate the
phenomenon, requires some background in geology. You don't
just say: "Hey, I don't know what 'continents' are and how they are
built, but why couldn't they be here today and there tomorrow --
wouldn't that be cool?"

> (Ball lightning and
> continental drift come to mind. So do astrology and reincarnation, but
> I reject those.)

> I don't call the supposition that something will work, a theory. My
> notion of theory is restricted to how it works. That's probably the
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Which certainly is wrong in my view: e.g., the phenomena of the 19th
century theory of electricity were described accurately, and the
measurements and derived laws were often quite accurate
(Coulomb's Law, Ohm's Law, e.g.), but the explanations of the
mechanisms behind them were almost completely bogus (electrical fluids
and so on).

So their theory was right and useful in my view, but worked and was
wrong in yours. Ooops :-(.

> difference behind this discussion of ours, which I've enjoyed.

Thanks :-).

Regards -- Markus

0
3/4/2007 9:26:35 PM
Steve O'Hara-Smith <steveo@eircom.net> writes:

> On Thu, 01 Mar 2007 13:31:48 +0100
> Stephan Kuhagen <nospam@domain.tld> wrote:
>
>> Markus E Leypold wrote:
>> 
>> > You know that Newtonian physics is wrong :-)?
>> 
>> It isn't. It has only a limited scope of validity.
>
> 	Bzzt wrong - it is wrong under all conditions; it's just that under
> many conditions the errors are small enough not to matter.

:-) Depends perhaps on Steve's notion/definition of validity ...

Regards -- Markus

0
3/4/2007 9:28:10 PM
Joachim Durchholz <jo@durchholz.org> writes:

> Steve O'Hara-Smith schrieb:
>> On Thu, 01 Mar 2007 13:31:48 +0100
>> Stephan Kuhagen <nospam@domain.tld> wrote:
>>
>>> Markus E Leypold wrote:
>>>
>>>> You know that Newtonian physics is wrong :-)?
>>> It isn't. It has only a limited scope of validity.
>> 	Bzzt wrong - it is wrong under all conditions; it's just that under
>> many conditions the errors are small enough not to matter.
>
> Bzzt wrong - you're confusing "validity" and "truth" here.

In that sense there is almost certainly no truth.

Regards -- Markus
0
3/4/2007 9:28:56 PM
"Chris Uppal" <chris.uppal@metagnostic.REMOVE-THIS.org> writes:

> Markus E Leypold wrote:
>
>> >> The theory was that elixir of willow root would help everyone under
>> >> specific circumstances. That was deduced from a number of cases where
>> >> it helped. It helped Joe, Jim and Jill. But did we conclude "Joe, Jim
>> >> and Jill get better with willow root?" -- No, it was generalized:
>> >> Everyone gets better.
>> >> 
>> >> That process is called: "Making a theory".
>> > 
>> > Note that that sort of theory has nothing whatever to do with the
>> > sort of theory that formal method and computing theory enthusiasts
>> > are urging programmers to make more use of.  Indeed it is much
>> > closer to the kind of reasoning that they abhor and want us to
>> > replace.
>> 
>> No. We are still talking about the relationship between theory and
>> practice. So it applies. That there are also more and less reliable
>> reasoning has nothing to do with it.
>
> It seems to me that the true analogue to "willow-style" theorising in
> today's programming isn't  something formal (and "foundationish") like
> consideration of fixpoints, and nor is it informal reasoning like
> knowing that "this loop must terminate because XYZ.", but is the kind
> of "voodoo" practices that no one (MF enthusiast or not) likes to see.

You snipped the better and more significant part of my reply and then
contradicted me. This is not nice! 

:-).

And the willow-style theorizing in program development is not what you
say -- voodoo -- but rather what we all know as important heuristic
data points: "Method XYZ on ethernet regularly gets only N Kb/s
throughput" and so on.


Regards -- Markus

0
3/4/2007 9:32:51 PM
"Chris Uppal" <chris.uppal@metagnostic.REMOVE-THIS.org> writes:

> Markus E Leypold wrote:
>
>> >> The theory was that elixir of willow root would help everyone under
>> >> specific circumstances. That was deduced from a number of cases where
>> >> it helped. It helped Joe, Jim and Jill. But did we conclude "Joe, Jim
>> >> and Jill get better with willow root?" -- No, it was generalized:
>> >> Everyone gets better.
>> >> 
>> >> That process is called: "Making a theory".
>> > 
>> > Note that that sort of theory has nothing whatever to do with the
>> > sort of theory that formal method and computing theory enthusiasts
>> > are urging programmers to make more use of.  Indeed it is much
>> > closer to the kind of reasoning that they abhor and want us to
>> > replace.
>> 
>> No. We are still talking about the relationship between theory and
>> practice. So it applies. That there are also more and less reliable
>> reasoning has nothing to do with it.
>
> It seems to me that the true analogue to "willow-style" theorising in
> today's programming isn't  something formal (and "foundationish") like
> consideration of fixpoints, and nor is it informal reasoning like
> knowing that "this loop must terminate because XYZ.", but is the kind
> of "voodoo" practices that no one (MF enthusiast or not) likes to see.
>
> Things like: "I'm getting random crashes so I add 'synchronized' to
> all the methods".  "I don't know what that message means, but it goes
> away if you give the -xyz flag to the compiler".  "I fixed that
> data-corruption problem by declaring my static data in a different
> order".  And so on...
>
> The level of theorising involved is, at worst (or maybe I mean, at
> best): "it worked last time so it's probably worth trying again".
> Through: "it has usually worked in the past, so I suspect that there is
> some kind of universal law there, although I don't currently have a
> suggestion for why there should be such a law.  Unless shown otherwise,
> I shall act on the assumption that the law is true".  Up to (or do I
> mean down to?) : "it works because willow absorbs the cooling nature of
> the water it grows near, and so acts to cool the inflamation".  I leave
> it to you to create a plausible equivalent of that last ad-hoc theory in
> the mind of the guys scattering 'synchronised' around at random.
>
> In all three cases, the theory has some predictive power (and may even
> be useful, in the sense of "it worked").  But the predictive power is
> limited to "what happened last time will probably happen again" --

The problem (and the crucial point in the formation of a theory) is: What
actually happened last time?

Did

  1. Joe become better after application of willow root.

  2. A _man_ become better after application of willow root.

  3. A human being become better ...

  4. A human being become better after _some_ root extract.

  5. Somebody get better after getting something at the time of the new moon.


All those might describe the former data point accurately but
predict quite different outcomes for

  a. Give water to Jill at the new moon
  b. Give willow extract to John.
  ...

"What happened the last time" is NOT a theory. A theory is to extract
what you consider the causative factors in what happened the last
time: And requires a larger (perhaps not so well defined, but still
some) framework about what forces form the world an are the causes for
interaction between things: Chemistry and electricty, "magnetic force"
(very popular in the 19th century also as "animal magnetism"), even
good and and bad spells / influences poisons and potions etc, the
"juices" of Hippocrates, the 4 elements (also prevalent in the
pre-12th century healing theories and also not totally wrong in their
unifying function).


> which is the /weakest/ prediction that can be made.  The three cases
> differ in what kinds of explanatory power (insight) they offer; two of
> them make no real attempt to provide an explanation, the last offers an
> explanation of sorts, but because it is completely ad-hoc it doesn't
> tap into a wider web of mutually supporting concepts, understanding, and
> predictions, and so offers no real insight.


> I am, BTW, assuming that the person concocting the "willow is cooling"
> theory wasn't seeing it as a special case of a wider-ranging (even if

I think you assume wrongly. There have AFAICS always been attempts to
fit observations into larger frameworks of cause and effect. And I
often have the suspicion that only the fringes (where the validity of
those frameworks broke down) led to superstitions. If nothing else,
quasi-magic theories of herb lore provided a common framework for
handing down and memorizing (i.e. compressing) empirical observations
on the application, usability and effects of herbs and the
manufacturing of healing potions + lotions.

One shouldn't denigrate the pre-technical age: it was, usually, less
dark than people seem to think, people then were just as smart, and
BTW: how many civilized people do you know who could manufacture even
a knife from nothing (don't buy metal, make it!), even in decades? So
"we" are not much smarter: we just use better technology handed down
to us, mostly w/o understanding a thing about it.

> wrong) theory about heating and cooling substances, and hot and cold
> categories of illness.  If it was part of such a wider theory, then I
> wouldn't call it ad-hoc, and I wouldn't compare it to modern voodoo
> programming.

Regards -- Markus
0
3/4/2007 9:48:43 PM
Jerry Avins <jya@ieee.org> writes:

> Markus E Leypold wrote:
>
>    ...
>
>> The usefulness of such traditions notwithstanding -- I think those
>> anektodes tend to inflate their impact. A useful tidbit of practice,
>> empirical obeservation often stay a dead end until it begins to fit in
>> some theory whre it can be properly extrapolated and exploited.
>
> Eventually exploited in some generalization or extension. In the
> meantime, they continue to be put to good use with or without
> theoretical grounding. 

Yes, certainly, we have been over that a number of times. But how
often does that happen? The laser (e.g.) was a theoretical construct first
and was developed later. There never was a practical laser which
was only later explained by scientists.

> Like salicylic acid. (Only in the past few years have we learned
> much coherent about how aspirin works in the body.)

Certainly this too. But again, it only became available to the
masses AFTER the relevant substance was discovered to be the causative
factor and could be synthesized. Before that there was only
willow root -- and there were hardly enough willows for all people,
the process of extracting anything useful was rather expensive (at
least in terms of time), and the quality of the product was poor, therefore
the product unreliable. Only the discovery and synthesis of the
pure substance unveiled the true value of the wisdom lying in
the application of willow extract.

This practically supports my point of view: you wouldn't even quote this
example if this morsel of practical experience hadn't been taken up by
science.

Regards -- Markus


0
3/4/2007 10:08:14 PM
>>>>> "M.L." == Markus E Leypold <Markus> writes:

  M.L.> One shouldn't denigrate the pre-technical age: It was,
  M.L.> usually, less dark than people seem to think, people then were
  M.L.> just as smart and BTW: How many civilized people do you know
  M.L.> who could manufacture even a knife from nothing (don't buy metal,
  M.L.> make it!), even in decades? So "we" are not much smarter: We
  M.L.> just use better technology handed down to us, mostly w/o
  M.L.> understanding a thing about it.

Indeed. The problem with forming an advanced theory is verifying
it experimentally. Ancient Greece seems to be a bit special in that
philosophy, and developing theories for the sake of argument, was
a cultural pastime. We can search the works of ancient philosophers
and find the seeds of much advanced knowledge - for example,
Democritus reasoned about the indivisible atom and remote worlds
(thinking that the Milky Way was actually a collection of stars
and worlds). I believe he also reasoned that the earth was not
the center of all things (a view which, if expressed in different
times, could well have led to his death.) He also seems to have 
touched upon genetics, anthropology, and a wealth of other things.

Of course, for each ancient theory that proved reasonably sound
(sometimes thousands of years later), there were certainly a
great number of faulty ones - a predictable result of developing
theory based on assumptions that cannot be experimentally 
confirmed. I assume that they were not necessarily less
brilliant than the theories of Democritus; they were all engaging
in inspired reasoning based on guesswork.

This thread could perhaps benefit from all involved (who
haven't already done so) reading "A Treatise of Human Nature"
by David Hume. It is in the public domain, and can be read
on-line. For those interested in the development of theory,
it is time well spent.

http://etext.library.adelaide.edu.au/h/hume/david/h92t/

This is considered one of the most important works in the
history of philosophy(*), and one of his contributions is 
that he argues that humans can never attain true 
knowledge - never really prove anything. To my knowledge 
(but I'm a layman in the field), his word still stands, 
but others have suggested ways to view science and 
knowledge that may still make the scientific process
worthwhile.

"Now since nothing is ever present to the mind but 
perceptions, and since all ideas are derived from 
something antecedently present to the mind; it 
follows, that it is impossible for us so much as 
to conceive or form an idea of any thing specifically 
different from ideas and impressions. Let us fix our 
attention out of ourselves as much as possible: Let 
us chase our imagination to the heavens, or to the 
utmost limits of the universe; we never really 
advance a step beyond ourselves, nor can conceive 
any kind of existence, but those perceptions, which 
have appeared in that narrow compass. This is the 
universe of the imagination, nor have we any idea 
but what is there produced."
(D. Hume, "A treatise of Human Nature", Sect vi)


(*) Hume was 26 when he wrote it in 1740.

BR,
Ulf W
-- 
Ulf Wiger, Senior Specialist,
   / / /   Architecture & Design of Carrier-Class Software
  / / /    Team Leader, Software Characteristics
 / / /     Ericsson AB, IMS Gateways
0
etxuwig (64)
3/5/2007 10:39:04 AM
Markus E Leypold wrote:
> Jerry Avins <jya@ieee.org> writes:
> 
>> Markus E Leypold wrote:
>>
>>    ...
>>
>>> The usefulness of such traditions notwithstanding -- I think those
>>> anecdotes tend to inflate their impact. A useful tidbit of practice,
>>> an empirical observation, often stays a dead end until it begins to fit in
>>> some theory where it can be properly extrapolated and exploited.
>> Eventually exploited in some generalization or extension. In the
>> meantime, they continue to be put to good use with or without
>> theoretical grounding. 
> 
> Yes, certainly, we have been over that for a number of times. But how
> often does that happen? Laser (e.g.) was a theoretical construct first
> and has been developed later. There never was a practical laser which
> was only later explained by scientists.
> 
>> Like salicylic acid. (Only in the past few years have we learned
>> much coherent about how aspirin works in the body.)
> 
> Certainly this too. But again it has been only available for the
> masses AFTER discovering the relevant substance as the causative
> factor and after being able to synthesize it. Before there was only
> willow root -- and there were hardly enough willows for all people
> and the process of extracting anything useful was rather expensive (at
> least in terms of time) and the quality of the product poor, therefore
> the product unreliable. Only the discovery of and the synthesis of the
> pure substance quasi unveiled the true value of the wisdom lying in
> the application of willow extract.
> 
> Practically supports my point of view: You wouldn't even quote this
> example if this morsel of practical experience hadn't been taken on by
> science.

There's plenty of salicylate to go around. Mint grows like a weed. It 
*is* a weed. We still use wintergreen oil in liniments. Cinchona bark 
was a commercial product long before synthesized alternatives to quinine 
became available, and long before the cause of malaria (literally, "bad 
air") was known. Some inventions grow out of prior theory. I think that 
they are in the minority.

Jerry


-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
3/5/2007 11:12:05 PM
Jerry Avins <jya@ieee.org> writes:

> Markus E Leypold wrote:
>> Jerry Avins <jya@ieee.org> writes:
>>
>>> Markus E Leypold wrote:
>>>
>>>    ...
>>>
>>>> The usefulness of such traditions notwithstanding -- I think those
>>>> anektodes tend to inflate their impact. A useful tidbit of practice,
>>>> empirical obeservation often stay a dead end until it begins to fit in
>>>> some theory whre it can be properly extrapolated and exploited.
>>> Eventually exploited in some generalization or extension. In the
>>> meantime, they continue to be put to good use with or without
>>> theoretical grounding.
>> Yes, certainly, we have been over that for a number of times. But how
>> often does that happen? Laser (e.g.) was a theoretical construct first
>> and has been developed later. There never was a practical laser which
>> was only later explained by scientists.
>>
>>> Like salicylic acid. (Only in the past few years have we learned
>>> much coherent about how aspirin works in the body.)
>> Certainly this too. But again it has been only available for the
>> masses AFTER discovering the relevant substance as the causative
>> factor and after being able to synthesize it. Before there was only
>> willow root -- and there were hardly enough willows for all people
>> and the process of extracting anything useful was rather expensive (at
>> least in terms of time) and the quality of the product poor, therefore
>> the product unreliable. Only the discovery of and the synthesis of the
>> pure substance quasi unveiled the true value of the wisdom lying in
>> the application of willow extract.
>> Practically supports my point of view: You wouldn't even quote this
>> example if this morsel of practical experience hadn't been taken on by
>> science.
>
> There's plenty of salicylate to go around. Mint grows like a weed. It

Mint? I was of the opinion it was willow which contains salicylic acid
naturally.

> *is* a weed. We still use wintergreen oil in liniments. Cinchona bark
> was a commercial product long before synthesized alternatives to
> quinine became available, and long before the cause of malaria
> (literally, "bad air") was known. Some inventions grow out of prior
> theory. I think that they are in the minority.

That is the point where we disagree: semiconductors, lasers, most modern
technology come readily to my mind. Even the steam engine didn't get
very far before the theoretical principles were better understood.

Medicine, by the way, is a bad example, since there (AFAIK) was never
any practice of medicine without theory (even if this theory was bad
or incomplete from a modern point of view). Our ancestors weren't as
incurious as you seem to consider them, and the questions "why does it
work" and "how can I make it work" were, I think, always very
prevalent.

Regards -- Markus

0
3/6/2007 12:05:57 AM
Markus E Leypold wrote:
> 
> Jerry Avins <jya@ieee.org> writes:

   ...

>> I don't call the supposition that something will work, a theory. My
>> notion of theory is restricted to how it works. That's probably the
>   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> 
> Which certainly is wrong in my view: E.g. - The phenomena described in
> the 19th century theory of electricity have been described accurately,
> their measurements and the derived laws given have often been quite
> accurate (Coulomb's Law, Ohm's Law e.g.), but the explanations of the
> mechanisms behind have been almost completely bogus (electrical fluids
> and so on).
> 
> So their theory was right and useful in my view, but worked and was
> wrong in yours. Ooops :-(.

Not at all. Electrical fluid was a theory, just not a valid one. That 
plunging a new red-hot sword into a prisoner made it stronger and able 
to take a keener edge was an observation. That the sword was improved 
because it absorbed the prisoner's valor was a theory. The theory had to be 
modified when the smiths ran out of prisoners and tried goats instead, 
and they worked too. After all, there is no valor in a goat.

>> difference behind this discussion of ours, which I've enjoyed.
> 
> Thanks :-).

Welcome!

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
3/6/2007 2:17:02 AM
Markus E Leypold wrote:
 > Jerry Avins <jya@ieee.org> writes:

   ...

 >> There's plenty of salicylate to go around. Mint grows like a weed. It
 >
 > Mint? I was of the opinion it was willow which contains salicylic acid
 > naturally.

Oil of wintergreen is methyl salicylate.

   ...

>>                          Some inventions grow out of prior
>> theory. I think that they are in the minority.
> 
> That is the point where we disagree: Semiconductors, laser most modern
> technology come readily into my mind. Even the steam engine didn't get
> very far before the theoretical principles were better understood.

Carnot came long after Watt and Newcomen before him.

> Medicine by the way is a bad example, since there (AFAIK) was never
> any practice of medicine without theory (even if this theory was bad
> or incomplete from a modern point of view). Our ancestors weren't as
> incurious as you seem to consider them and the question "why does it
> work" and "how can I make it work" was, I think, always very
> prevalent.

Too prevalent, I think. It led to many of the bullshit theories behind 
practices that could have better proceeded without their impediment.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
3/6/2007 4:08:14 AM
Jerry Avins wrote:
....
> 
> Not at all. Electrical fluid was a theory, just not a valid one. That 
> plunging a new red-hot sword into a prisoner made it stronger and able 
> to take a keener edge was an observation. That the sword was improved 
> because it absorbed the prisoner's valor was a theory. The theory had to be 
> modified when the smiths ran out of prisoners and tried goats instead, 
> and they worked too. After all, there is no valor in a goat.

Dunno, I've met some goats that were pretty nervy.

Cheers,
Elizabeth

-- 
==================================================
Elizabeth D. Rather   (US & Canada)   800-55-FORTH
FORTH Inc.                         +1 310-491-3356
5155 W. Rosecrans Ave. #1018  Fax: +1 310-978-9454
Hawthorne, CA 90250
http://www.forth.com

"Forth-based products and Services for real-time
applications since 1973."
==================================================
0
eratherXXX (903)
3/6/2007 5:44:15 AM
Jerry Avins <jya@ieee.org> writes:

> Markus E Leypold wrote:
>> Jerry Avins <jya@ieee.org> writes:
>
>    ...
>
>>> I don't call the supposition that something will work, a theory. My
>>> notion of theory is restricted to how it works. That's probably the
>>   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>> Which certainly is wrong in my view: E.g. - The phenomena described
>> in
>> the 19th century theory of electricity have been described accurately,
>> their measurements and the derived laws given have often been quite
>> accurate (Coulomb's Law, Ohm's Law e.g.), but the explanations of the
>> mechanisms behind have been almost completely bogus (electrical fluids
>> and so on).
>> So their theory was right and useful in my view, but worked and was
>> wrong in yours. Ooops :-(.
>
> Not at all. Electrical fluid was a theory, just not a valid one. That
> plunging a new red-hot sword into a prisoner made it stronger and able
> to take a keener edge was an observation. That the sword was improved
> because it absorbed the prisoner's valor was a theory. The theory had to be
> modified when the smiths ran out of prisoners and tried goats instead,
> and they worked too. After all, there is no valor in a goat.


You missed my point, I think: Ohm's law, to give an example, does not
explain "how it works". It just says: things obey this rule. 

I consider restricting the "notion of theory" "to how it works" as too
restrictive.

Regards -- Markus

0
3/6/2007 8:56:25 AM
Jerry Avins <jya@ieee.org> writes:

> Markus E Leypold wrote:
>  > Jerry Avins <jya@ieee.org> writes:
>
>    ...
>
>  >> There's plenty of salicylate to go around. Mint grows like a weed. It
>  >
>  > Mint? I was of the opinion it was willow which contains salicylic acid
>  > naturally.
>
> Oil of wintergreen is methyl salicylate.

OK.

>>>                          Some inventions grow out of prior
>>> theory. I think that they are in the minority.
>> That is the point where we disagree: Semiconductors, laser most
>> modern
>> technology come readily into my mind. Even the steam engine didn't get
>> very far before the theoretical principles were better understood.
>
> Carnot came long after Watt and Newcomen before him.

Obviously we also disagree on what "very far" means ...

Carnot only "came" 30 years after Watt.

Furthermore I hear that Watt even learned German to be able to read
certain writings on caloric theory. And even more, I'm convinced he was
very good at simple Newtonian mechanics.


>
>> Medicine by the way is a bad example, since there (AFAIK) was never
>> any practice of medicine without theory (even if this theory was bad
>> or incomplete from a modern point of view). Our ancestors weren't as
>> incurious as you seem to consider them and the question "why does it
>> work" and "how can I make it work" was, I think, always very
>> prevalent.
>
> Too prevalent, I think. It led to many of the bullshit theories behind
> practices that could have better proceeded without their impediment.

As I said: I disagree. Trial ("proceeding practice") without theory to
guide it is just aimless experimentation.

Regards -- Markus
0
3/6/2007 9:33:41 AM
Markus E Leypold wrote:

   ...

> As I said: I disgree. Trial ("proceeding practice") without theory to
> guide it, is just aimless experimentation.

Metallurgy got along remarkably well before there was any theory of 
matter that could enlighten it. Bronze was discovered, not designed. 
Even smelting ore to make metal wasn't founded on theory. No theory 
helped the ancients learn to make soap. Was the notion that amethyst 
wards off intoxication a theory? You may think so, but I don't.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
3/6/2007 2:45:44 PM
Markus E Leypold wrote:
> Jerry Avins <jya@ieee.org> writes:
> 
>> Markus E Leypold wrote:
>>> Jerry Avins <jya@ieee.org> writes:
>>    ...
>>
>>>> I don't call the supposition that something will work, a theory. My
>>>> notion of theory is restricted to how it works. That's probably the
>>>   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>> Which certainly is wrong in my view: E.g. - The phenomena described
>>> in
>>> the 19th century theory of electricity have been described accurately,
>>> their measurements and the derived laws given have often been quite
>>> accurate (Coulomb's Law, Ohm's Law e.g.), but the explanations of the
>>> mechanisms behind have been almost completely bogus (electrical fluids
>>> and so on).
>>> So their theory was right and useful in my view, but worked and was
>>> wrong in yours. Ooops :-(.
>> Not at all. Electrical fluid was a theory, just not a valid one. That
>> plunging a new red-hot sword into a prisoner made it stronger and able
>> to take a keener edge was an observation. That the sword was improved
>> because it absorbed the prisoner's valor was a theory. The theory had to be
>> modified when the smiths ran out of prisoners and tried goats instead,
>> and they worked too. After all, there is no valor in a goat.
> 
> 
> You missed my point, I think: Ohm's law, to give an example, does not
> explain "how it works". It just says: Things obey this rule. 

Right. You missed my point. We call it "Ohm's Law", not "Ohm's Theory". 
It is a stepping stone to Maxwell's Theory, an observation but not a 
theory in itself.

> I consider restricting the "notion of theory" "to how it works" as too
> restrictive.

I consider extending it to "Hey! It works!" too general.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
3/6/2007 2:57:07 PM
Jerry Avins wrote:
> Markus E Leypold wrote:
>> Jerry Avins <jya@ieee.org> writes:
>>
>>> Markus E Leypold wrote:
>>>> Jerry Avins <jya@ieee.org> writes:
>>>    ...
>>>
>>>>> I don't call the supposition that something will work, a theory. My
>>>>> notion of theory is restricted to how it works. That's probably the
>>>>   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>>> Which certainly is wrong in my view: E.g. - The phenomena described
>>>> in
>>>> the 19th century theory of electricity have been described accurately,
>>>> their measurements and the derived laws given have often been quite
>>>> accurate (Coulomb's Law, Ohm's Law e.g.), but the explanations of the
>>>> mechanisms behind have been almost completely bogus (electrical fluids
>>>> and so on).
>>>> So their theory was right and useful in my view, but worked and was
>>>> wrong in yours. Ooops :-(.
>>> Not at all. Electrical fluid was a theory, just not a valid one. That
>>> plunging a new red-hot sword into a prisoner made it stronger and able
>>> to take a keener edge was an observation. That the sword was improved
>>> because it absorbed the prisoner's valor was a theory. The theory had to be
>>> modified when the smiths ran out of prisoners and tried goats instead,
>>> and they worked too. After all, there is no valor in a goat.
>>
>>
>> You missed my point, I think: Ohm's law, to give an example, does not
>> explain "how it works". It just says: Things obey this rule. 
> 
> Right. You missed my point. We call it "Ohm's Law", not "Ohm's Theory". 
> It is a stepping stone to Maxwell's Theory, an observation but not a 
> theory in itself.

One confusion here is that we use the word "law" in two different ways 
in physics. They correspond to the words "axiom" and "function" in 
mathematics, two very different notions. "Newton's Laws" are the axioms 
from which Newton's theory of dynamics may be derived.

"Ohm's Law", on the other hand, is simply a functional relationship that 
any particular configuration of matter may or may not follow. It's 
essentially the definition of a resistor, not a fundamental law. It's 
also unrelated to Maxwell's equations.
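
In symbols (textbook forms, supplied here only for reference):
Maxwell's equations constrain the fields whatever the current is
doing, while Ohm's law is a separate constitutive relation that a
given piece of matter may or may not obey:

    \nabla \times \mathbf{H} = \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t}
        \qquad \text{(Ampere-Maxwell: holds for any current } \mathbf{J}\text{)}

    \mathbf{J} = \sigma \mathbf{E}
        \qquad \text{(Ohm: only where conduction happens to be ohmic)}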

> 
>> I consider restricting the "notion of theory" "to how it works" as too
>> restrictive.
> 
> I consider extending it to "Hey! It works!" too general.
> 
> Jerry


-- 
John Doty, Noqsi Aerospace, Ltd.
--
Specialization is for robots.
0
jpd1 (1586)
3/6/2007 3:25:35 PM
Elizabeth D Rather wrote:

   ...

> Dunno, I've met some goats that were pretty nervy.

I suspect you have a few old goats in mind.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
3/6/2007 6:14:46 PM
John Doty wrote:

   ...

> "Ohm's Law", on the other hand, is simply a functional relationship that 
> any particular configuration of matter may or may not follow. It's 
> essentially the definition of a resistor, not a fundamental law. It's 
> also unrelated to Maxwell's equations.

Ohm's law /is/ related to Maxwell's equations in that Maxwell cites it 
as part of the empirical evidence on which his equations are based. In 
"Treatise", Maxwell systematically catalogs the various known behaviors 
of electricity and magnetism, including Ohm's Law, Coulomb's Law, the 
contributions of Ampere, Faraday, Oersted, and others. He weaves those 
observations into a coherent whole by adding displacement current, then 
with those -- I called them stepping stones in what I snipped -- in 
place, he derives his equations (with mathematical help from Green, 
Gauss, and others).

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
3/6/2007 6:27:36 PM
Jerry Avins wrote:
> John Doty wrote:
> 
>   ...
> 
>> "Ohm's Law", on the other hand, is simply a functional relationship 
>> that any particular configuration of matter may or may not follow. 
>> It's essentially the definition of a resistor, not a fundamental law. 
>> It's also unrelated to Maxwell's equations.
> 
> Ohm's law /is/ related to Maxwell's equations in that Maxwell cites it 
> as part of the empirical evidence on which his equations are based. In 
> "Treatise", Maxwell systematically catalogs the various known behaviors 
> of electricity and magnetism, including Ohm's Law, Coulomb's Law, the 
> contributions of Ampere, Faraday, Oersted, and others. He weaves those 
> observations into a coherent whole by adding displacement current, then 
> with those -- I called them stepping stones in what I snipped -- in 
> place, he derives his equations (with mathematical help from Green, 
> Gauss, and others).

Weird. Mathematically, Ohm's Law has nothing to do with Maxwell's 
equations. Maxwell's equations work just fine whether conduction is 
ohmic or not. The other stuff is key, of course.

But the role of theory in practice can get pretty tangled. Fleming and 
DeForest were both pretty good physicists, but neither understood what 
was going on in the vacuum tubes they invented. For them, theory seems 
to have inspired empirical investigation, but it didn't predict what the 
investigation would find.

Similarly, Brattain, Bardeen, and Shockley had a concept that we now 
call the field-effect transistor. Their theory was good as far as it 
went, but what they actually found in their experiments was the bipolar 
transistor. FETs require mastery of surface effects that were poorly 
understood in the 1940's.

-- 
John Doty, Noqsi Aerospace, Ltd.
--
Specialization is for robots.
0
jpd1 (1586)
3/6/2007 6:48:13 PM
Jerry Avins <jya@ieee.org> writes:

> John Doty wrote:
>
>    ...
>
>> "Ohm's Law", on the other hand, is simply a functional relationship
>> that any particular configuration of matter may or may not
>> follow. It's essentially the definition of a resistor, not a
>> fundamental law. It's also unrelated to Maxwell's equations.
>
> Ohm's law /is/ related to Maxwell's equations in that Maxwell cites it
> as part of the empirical evidence on which his equations are based. 

Are you sure? Ohm's Law is a property of matter, or better, a result of
the way in which charges can move in matter. Maxwell's equations on the
other hand only tell about electric and magnetic fields and don't say
anything about the cause of currents (movement of charges). Indeed the
current term (and the charge, together forming a 4-dimensional vector)
is what couples the equations governing the movement of matter
(mechanics) to the dynamics of electromagnetic fields. Actually I
should have written "relativistic mechanics", since the combined
equations are where it becomes evident that classical mechanics is
incompatible with electrodynamics -- one is invariant under Galilei
transformations, the other under Lorentz transformations.

Special relativity is basically the search for a theory which is
invariant under Lorentz transformations, but for low relative
velocities approximates Newtonian mechanics. It was, by the way, not
Einstein who found those (Lorentz and Minkowski were there before;
Einstein's merit was to interpret the description in terms of
experiment -- BY THE WAY: another instance where theory definitely
came before practice :-), sort of)
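
(For reference, the two transformations being contrasted, in one
spatial dimension and in their standard forms:

    x' = x - vt, \qquad t' = t  \qquad \text{(Galilei)}

    x' = \gamma (x - vt), \qquad t' = \gamma \left( t - \frac{v x}{c^2} \right),
        \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}  \qquad \text{(Lorentz)}

Newton's equations of motion keep their form under the first,
Maxwell's equations under the second.)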

Regards -- Markus

0
3/6/2007 7:15:54 PM
John Doty <jpd@whispertel.LoseTheH.net> writes:

> Jerry Avins wrote:
>> John Doty wrote:
>>   ...
>>
>>> "Ohm's Law", on the other hand, is simply a functional relationship
>>> that any particular configuration of matter may or may not
>>> follow. It's essentially the definition of a resistor, not a
>>> fundamental law. It's also unrelated to Maxwell's equations.
>> Ohm's law /is/ related to Maxwell's equations in that Maxwell cites
>> it as part of the empirical evidence on which his equations are
>> based. In "Treatise", Maxwell systematically catalogs the various
>> known behaviors of electricity and magnetism, including Ohm's Law,
>> Coulomb's Law, the contributions of Ampere, Faraday, Oersted, and
>> others. He weaves those observations into a coherent whole by adding
>> displacement current, then with those -- I called them stepping
>> stones in what I snipped -- in place, he derives his equations (with
>> mathematical help from Green, Gauss, and others).
>
> Weird. Mathematically, Ohm's Law has nothing to do with Maxwell's
> equations. Maxwell's equations work just fine whether conduction is
> ohmic or not. The other stuff is key, of course.
>
> But the role of theory in practice can get pretty tangled. Fleming and

I never have denied that. But if you can't talk about what you're
doing, you're basically lost in science.

> DeForest were both pretty good physicists, but neither understood what
> was going on in the vacuum tubes they invented. For them, theory seems
> to have inspired empirical investigation, but it didn't predict what
> the investigation would find.

That happens. Still, theory governs investigations. Sometimes you find
something different. Sometimes you disprove the original theory,
sometimes you understand nothing, sometimes your apparatus is too bad
(but you hoped anyway), and sometimes you find something similar (and
realize only later that it was different).


> Similarly, Brattain, Bardeen, and Shockley had a concept that we now
> call the field-effect transistor. Their theory was good as far as it
> went, but what they actually found in their experiments was the
> bipolar transistor. FETs require mastery of surface effects that were
> poorly understood in the 1940's.

:-)

Regards -- Markus



0
3/6/2007 7:19:11 PM
John Doty wrote:

   ...

> But the role of theory in practice can get pretty tangled. Fleming and 
> DeForest were both pretty good physicists, but neither understood what 
> was going on in the vacuum tubes they invented. For them, theory seems 
> to have inspired empirical investigation, but it didn't predict what the 
> investigation would find.

Their vacuum tubes were based on the Edison effect, which is what 
thermionic emission was first called. Edison discovered it as the cause 
of premature burnout in light bulbs and learned how to suppress it by 
using nitrogen fill to decrease an electron's mean free path. Once he 
found how to sidestep its nuisance value, he stopped thinking about it.

Fleming and DeForest didn't ignore it. They knew about electron 
emission because Edison told them. What they didn't know about was space 
charge near the cathode. Geissler tubes were old hat by then, so 
progress was fairly rapid. Space charge was discovered because electron 
ballistics calculations that ignored it failed. It was an ad-hoc 
fudge-factor fix that turns out to be real.

> Similarly, Brattain, Bardeen, and Shockley had a concept that we now 
> call the field-effect transistor. Their theory was good as far as it 
> went, but what they actually found in their experiments was the bipolar 
> transistor. FETs require mastery of surface effects that were poorly 
> understood in the 1940's.

The first field-effect transistors were built in the 30s, but they 
weren't anything to write home about because zone melting to make pure 
semiconductors hadn't been developed. Brattain, Bardeen, and Shockley 
were the fine experimentalists who developed that. Their first result 
wasn't a bipolar transistor, but point contact. Why? because that was an 
easy extension of the cats-whisker crystal. The day that point-contact 
transistors were announced to the lay world -- I read a description in 
the New York Times on the subway ride to high school -- I made one for 
myself. I used the pellet in a surplus 1N21B radar diode and two cat's 
whiskers. I had a working oscillator before bed time. Knowledge is power 
(fleapower in that case).

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
3/6/2007 9:40:17 PM
Jerry Avins wrote:
> John Doty wrote:
> 
>   ...
> 
>> But the role of theory in practice can get pretty tangled. Fleming and 
>> DeForest were both pretty good physicists, but neither understood what 
>> was going on in the vacuum tubes they invented. For them, theory seems 
>> to have inspired empirical investigation, but it didn't predict what 
>> the investigation would find.
> 
> Their vacuum tubes were based on the Edison effect, which is what 
> thermionic emission was first called. Edison discovered it as the cause 
> of premature burnout in light bulbs and learned how to suppress it by 
> using nitrogen fill to decrease an electron's mean free path. Once he 
> found how to sidestep its nuisance value, he stopped thinking about it.
> 
> Fleming and DeForest didn't ignore it. They knew about electron 
> emission because Edison told them.

They didn't know it was electrons. They knew there was a current. And 
thermionic emission couldn't be understood quantitatively until Einstein 
introduced the "work function" concept (as part of the explanation of 
the photoelectric effect), and it took time to put the pieces together.
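
(Quantitatively -- textbook material, not part of the original post --
thermionic emission follows the Richardson-Dushman law

        J = A T^2 e^{-W / (k T)}

where W is the work function; without W, the strong exponential
temperature dependence of the emitted current density J has no
explanation.)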

> What they didn't know about was space 
> charge near the cathode. Geissler tubes were old hat by then, so 
> progress was fairly rapid. Space charge was discovered because electron 
> ballistics calculations that ignored it failed. It was an ad-hoc 
> fudge-factor fix that turns out to be real.

 From what I've read, DeForest tried to explain his audions using the 
current-as-electrified-fluid idea, which of course doesn't work. It took 
a bunch of work by a number of physicists and engineers to figure it out.

I don't know why you consider space charge a "fudge factor". It follows 
naturally from the *collective* electron ballistics.

> 
>> Similarly, Brattain, Bardeen, and Shockley had a concept that we now 
>> call the field-effect transistor. Their theory was good as far as it 
>> went, but what they actually found in their experiments was the 
>> bipolar transistor. FETs require mastery of surface effects that were 
>> poorly understood in the 1940's.
> 
> The first field-effect transistors were built in the 30s, but they 
> weren't anything to write home about because zone melting to make pure 
> semiconductors hadn't been developed.

It also wasn't clear just what they were, and they weren't reproducible.

> Brattain, Bardeen, and Shockley 
> were the fine experimentalists who developed that. Their first result 
> wasn't a bipolar transistor, but point contact.

Sorry, my mistake. But the physics is the same (different from a FET). 
Injection of minority carriers into a base from an emitter, collected by 
a collector.

> Why? because that was an 
> easy extension of the cats-whisker crystal. The day that point-contact 
> transistors were announced to the lay world -- I read a description in 
> the New York Times on the subway ride to high school -- I made one for 
> myself. I used the pellet in a surplus 1N21B radar diode and two cat's 
> whiskers. I had a working oscillator before bed time. Knowledge is power 
> (fleapower in that case).

Cool.

-- 
John Doty, Noqsi Aerospace, Ltd.
--
Specialization is for robots.
0
jpd1 (1586)
3/7/2007 2:55:33 AM
John Doty wrote:
> Jerry Avins wrote:
>> John Doty wrote:
>>
>>   ...
>>
>>> But the role of theory in practice can get pretty tangled. Fleming 
>>> and DeForest were both pretty good physicists, but neither understood 
>>> what was going on in the vacuum tubes they invented. For them, theory 
>>> seems to have inspired empirical investigation, but it didn't predict 
>>> what the investigation would find.
>>
>> Their vacuum tubes were based on the Edison effect, which is what 
>> thermionic emission was first called. Edison discovered it as the 
>> cause of premature burnout in light bulbs and learned how to suppress 
>> it by using nitrogen fill to decrease an electron's mean free path. 
>> Once he found how to sidestep its nuisance value, he stopped thinking 
>> about it.
>>
>> Fleming and DeForest didn't ignore it. They knew about electron 
>> emission because Edison told them.
> 
> They didn't know it was electrons. They knew there was a current. And 
> thermionic emission couldn't be understood quantitatively until Einstein 
> introduced the "work function" concept (as part of the explanation of 
> the photoelectric effect), and it took time to put the pieces together.

 From what I read, Edison deduced it was electrons because of the way his 
filaments always burned out on the same side showed him which way the 
current in the vacuum went.

>> What they didn't know about was space charge near the cathode. 
>> Geissler tubes were old hat by then, so progress was fairly rapid. 
>> Space charge was discovered because electron ballistics calculations 
>> that ignored it failed. It was an ad-hoc fudge-factor fix that turns 
>> out to be real.
> 
>  From what I've read, DeForest tried to explain his audions using the 
> current-as-electrified-fluid idea, which of course doesn't work. It took 
> a bunch of work by a number of physicists and engineers to figure it out.

I understood that he'd have known better if he had paid more attention 
to Edison. Then again, Edison should have realized that he had invented 
a rectifier. Do you remember Tungar bulbs?

> I don't know why you consider space charge a "fudge factor". It follows 
> naturally from the *collective* electron ballistics.

A fudge factor in the sense that it was first postulated to make the 
ballistics calculations work out. (It helped that it's real.) A sort of 
Tombaugh effect: discovering the cause of observed departure from 
calculation. According to an old timer at RCA's Harrison tube plant, the 
data fitted a fictitious cathode much larger than the physical one 
before the notion of space charge rationalized that.

   ...

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
3/7/2007 3:35:57 AM
Jerry Avins wrote:
> John Doty wrote:
>> Jerry Avins wrote:
>>> John Doty wrote:
>>>
>>>   ...
>>>
>>>> But the role of theory in practice can get pretty tangled. Fleming 
>>>> and DeForest were both pretty good physicists, but neither 
>>>> understood what was going on in the vacuum tubes they invented. For 
>>>> them, theory seems to have inspired empirical investigation, but it 
>>>> didn't predict what the investigation would find.
>>>
>>> Their vacuum tubes were based on the Edison effect, which is what 
>>> thermionic emission was first called. Edison discovered it as the 
>>> cause of premature burnout in light bulbs and learned how to suppress 
>>> it by using nitrogen fill to decrease an electron's mean free path. 
>>> Once he found how to sidestep its nuisance value, he stopped thinking 
>>> about it.
>>>
>>> Fleming and DeForest didn't ignore it. They knew about electron 
>>> emission because Edison told them.
>>
>> They didn't know it was electrons. They knew there was a current. And 
>> thermionic emission couldn't be understood quantitatively until 
>> Einstein introduced the "work function" concept (as part of the 
>> explanation of the photoelectric effect), and it took time to put the 
>> pieces together.
> 
>  From what I read, Edison deduced it was electrons because of the way his 
> filaments always burned out on the same side showed him which way the 
> current in the vacuum went.

He could tell it was negative current, but the concept of the electron 
was not available in 1880.

> 
>>> What they didn't know about was space charge near the cathode. 
>>> Geissler tubes were old hat by then, so progress was fairly rapid. 
>>> Space charge was discovered because electron ballistics calculations 
>>> that ignored it failed. It was an ad-hoc fudge-factor fix that turns 
>>> out to be real.
>>
>>  From what I've read, DeForest tried to explain his audions using the 
>> current-as-electrified-fluid idea, which of course doesn't work. It 
>> took a bunch of work by a number of physicists and engineers to figure 
>> it out.
> 
> I understood that he'd have known better if he had paid more attention 
> to Edison.

I don't think so. Maybe he should have paid attention to J. J. Thomson's 
work, though, along with statistical mechanics. But that's hindsight. He 
clearly hurt himself by aggressively trying to claim the audion idea as 
his sole property when it was partly derived from Edison and Fleming 
and also needed the attention of others to become a seriously useful device.

> Then again, Edison should have realized that he had invented 
> a rectifier. Do you remember Tungar bulbs?

The name is familiar, a dim echo from my youth. But I don't remember 
precisely what they were.

> 
>> I don't know why you consider space charge a "fudge factor". It 
>> follows naturally from the *collective* electron ballistics.
> 
> A fudge factor in the sense that it was first postulated to make the 
> ballistics calculations work out. (It helped that it's real.) A sort of 
> Tombaugh effect: discovering the cause of observed departure from 
> calculation. According to an old timer at RCA's Harrison tube plant, the 
> data fitted a fictitious cathode much larger than the physical one 
> before the notion of space charge rationalized that.

I thought Langmuir had this all worked out in the teens, before RCA 
existed. And what you're saying doesn't make sense to me. Space charge 
*reduces* the current flow below that expected from the thermionic 
emission capability of the cathode.
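
(Langmuir's result is what is now called the Child-Langmuir law --
standard textbook material, added for reference: for a planar diode
with electrode spacing d and anode voltage V, the space-charge-limited
current density is

        J = (4 \epsilon_0 / 9) \sqrt{2 e / m_e} \; V^{3/2} / d^2,

independent of the cathode's emission capability, provided the
emission exceeds this limit.)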

-- 
John Doty, Noqsi Aerospace, Ltd.
--
Specialization is for robots.
0
jpd1 (1586)
3/7/2007 4:52:49 AM
John Doty wrote:
> Jerry Avins wrote:
>> John Doty wrote:
>>> Jerry Avins wrote:

   ...

>>  From what I read, Edison deduced it was electrons because of the way his 
>> filaments always burned out on the same side showed him which way the 
>> current in the vacuum went.
> 
> He could tell it was negative current, but the concept of the electron 
> was not available in 1880.
> 
>>
>>>> What they didn't know about was space charge near the cathode. 
>>>> Geissler tubes were old hat by then, so progress was fairly rapid. 
>>>> Space charge was discovered because electron ballistics calculations 
>>>> that ignored it failed. It was an ad-hoc fudge-factor fix that turns 
>>>> out to be real.
>>>
>>>  From what I've read, DeForest tried to explain his audions using the 
>>> current-as-electrified-fluid idea, which of course doesn't work. It 
>>> took a bunch of work by a number of physicists and engineers to 
>>> figure it out.
>>
>> I understood that he'd have known better if he had paid more attention 
>> to Edison.
> 
> I don't think so. Maybe he should have paid attention to J. J. Thomson's 
> work, though, along with statistical mechanics. But that's hindsight. He 
> clearly hurt himself by aggressively trying to claim the audion idea as 
> his sole property when it was partly derived from Edison and Fleming 
> and also needed the attention of others to become a seriously useful 
> device.

Yes. After commercial radio had become a precursor of TV's "vast 
wasteland" and divided time into 15-minute segments he called "cubits", 
DeForest wrote a parody of Hiawatha: "This is DeForest's Prime Evil" 
decrying the use to which "his" invention had been put. Not bad for 
doggerel, IIRC.

>> Then again, Edison should have realized that he had invented a 
>> rectifier. Do you remember Tungar bulbs?
> 
> The name is familiar, a dim echo from my youth. But I don't remember 
> precisely what they were.

 
http://www.powerstream.com/1922/battery_1922_WITTE/batteryfiles/chapter11.htm, 
fig. 49. A thermionic (thoriated tungsten?) rectifier with a mogul base 
and a volume of more than a pint.

   ...

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
3/7/2007 5:34:39 AM
Markus E Leypold wrote:

   ...

> Special relativity is basically the search for a theory which is
> invariant under Lorentz transformations, but which for low relative
> velocities approximates Newtonian mechanics. It was, by the way,
> Einstein who found it (Lorentz and Minkowski were there before him;
> Einstein's merit was to interpret the description in terms of
> experiment -- BY THE WAY: another instance where theory definitely
> came before practice :-), sort of)

Special relativity was a successful attempt to explain the experimental 
evidence gathered by Michelson and Morley. Practice came first.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
3/7/2007 11:47:14 PM
In comp.lang.functional Jerry Avins <jya@ieee.org> wrote:
> Markus E Leypold wrote:
> 
>   ...
> 
>> Special relativity is basically the search for a theory which is
>> invariant under Lorentz transformations, but which for low relative
>> velocities approximates Newtonian mechanics. It was, by the way,
>> Einstein who found it (Lorentz and Minkowski were there before him;
>> Einstein's merit was to interpret the description in terms of
>> experiment -- BY THE WAY: another instance where theory definitely
>> came before practice :-), sort of)
> 
> Special relativity was a successful attempt to explain the experimental 
> evidence gathered by Michelson and Morley. Practice came first.
> 
> Jerry

Check out aias.us for a theory that merges general relativity with
electromagnetism.
0
daf9612 (31)
3/8/2007 2:14:23 AM
Jerry Avins <jya@ieee.org> writes:

>> I consider restricting the "notion of theory" "to how it works" as too
>> restrictive.
>
> I consider extending it to "Hey! It works!" too general.

Not "hey, it works", but "it always happens like this" or "it always
works, if ...". "How it works", i.e. an explanation of the mechanism
woul in my opinion be too special. I.e. planetary mechanics is
certainly scientific, but still we do not (really) know how gravity
works.

Regards -- Markus

0
3/8/2007 7:56:45 AM
Jerry Avins <jya@ieee.org> writes:

> Markus E Leypold wrote:
>
>    ...
>
>> Special relativity is basically the search for a theory which is
>> invariant under Lorentz transformations, but which for low relative
>> velocities approximates Newtonian mechanics. It was, by the way,
>> Einstein who found it (Lorentz and Minkowski were there before him;
>> Einstein's merit was to interpret the description in terms of
>> experiment -- BY THE WAY: another instance where theory definitely
>> came before practice :-), sort of)
>
> Special relativity was a successful attempt to explain the
> experimental evidence gathered by Michelson and Morley. Practice came
> first.

Yes, this too. But the experiments of Michelson and Morley only
indicated the absence of ether. And that only after the problem of a
special reference frame had actually been recognized as a problem (the
experiments were looking for specific effects only after it was
noticed that the laws of electrodynamics were not invariant under
Galilei transformations). Relativity is, though, much more than the
constancy of the speed of light. E = mc^2, time dilation, etc. -- all
that was not even KNOWN as an experimental result, and certainly not
"practically applied" in any apparatus, but only looked for AFTER
being predicted by theory.
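
(In formulas -- textbook material, not part of the original post --
the predictions in question are E = m c^2 for mass-energy equivalence
and \Delta t = \gamma \Delta t_0 for time dilation, with
\gamma = 1 / \sqrt{1 - v^2 / c^2}; both were on paper long before they
were measured, in nuclear binding energies and in the stretched
lifetimes of fast cosmic-ray muons, respectively.)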

Indeed, the physical theories of the 20th century are the big success
of theoretical science. It is probably the last place where you could
succeed in establishing the primacy of practice.

Regards -- Markus
0
3/8/2007 8:05:52 AM
In article <DLidnepHg4-e0nLYnZ2dnUVZ_qzinZ2d@rcn.net>,
Jerry Avins  <jya@ieee.org> wrote:
>Markus E Leypold wrote:
>
>   ...
>
>> Special relativity is basically the search for a theory which is
>> invariant under Lorentz transformations, but which for low relative
>> velocities approximates Newtonian mechanics. It was, by the way,
>> Einstein who found it (Lorentz and Minkowski were there before him;
>> Einstein's merit was to interpret the description in terms of
>> experiment -- BY THE WAY: another instance where theory definitely
>> came before practice :-), sort of)
>
>Special relativity was a successful attempt to explain the experimental
>evidence gathered by Michelson and Morley. Practice came first.

I think that is a strange way to put it. Special relativity is not
explaining that the velocity of light is constant. It is a theory
based on the postulate that the light velocity is constant.

>
>Jerry

Groetjes Albert
--
-- 
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- like all pyramid schemes -- ultimately falters.
albert@spe&ar&c.xs4all.nl &=n http://home.hccnet.nl/a.w.m.van.der.horst
0
albert37 (3001)
3/9/2007 6:39:37 PM
Albert van der Horst wrote:
> In article <DLidnepHg4-e0nLYnZ2dnUVZ_qzinZ2d@rcn.net>,
> Jerry Avins  <jya@ieee.org> wrote:
>> Markus E Leypold wrote:
>>
>>   ...
>>
>>> Special relativity is basically the search for a theory which is
>>> invariant under Lorentz transformations, but which for low relative
>>> velocities approximates Newtonian mechanics. It was, by the way,
>>> Einstein who found it (Lorentz and Minkowski were there before him;
>>> Einstein's merit was to interpret the description in terms of
>>> experiment -- BY THE WAY: another instance where theory definitely
>>> came before practice :-), sort of)
>> Special relativity was a successful attempt to explain the experimental
>> evidence gathered by Michelson and Morley. Practice came first.
> 
> I think that is a strange way to put it. Special relativity is not
> explaining that the velocity of light is constant. It is a theory
> based on the postulate that the light velocity is constant.

Indeed. Michelson and Morley obtained results that gave us the hard 
choice of believing either that the earth is stationary with respect to 
the ether, or that the ether is irrelevant because the velocity of light 
is the same for all unaccelerated observers. Special relativity 
attempted to answer the question, "How could that be?"

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
0
jya (12871)
3/9/2007 9:28:31 PM
Albert van der Horst wrote:
> Special relativity is not
> explaining that the velocity of light is constant. It is a theory
> based on the postulate that the light velocity is constant.

That postulate was actually one of three possible interpretations of the 
Michelson-Morley experiment (space isn't constant, time isn't constant, 
or the velocity of light is constant).

So the experiment preselected the choices; the postulate merely picked 
among them.
The formulae then showed that space and time aren't constant either, so 
in hindsight there wasn't even a postulate: all of SR was a direct 
consequence of the Michelson-Morley experiment.

So in this case, practice indeed came first.

Well, it didn't. The Michelson-Morley experiment was an attempt to 
verify the existence of the ether, which had been a purely theoretical 
construct - which, in turn, had been postulated to explain other 
experiments.

So the answer to the question whether practice or theory come first is: 
"mu". Experiment and theory alternate: experiment inspires theory, which 
in turn induces experiment.
Asking which of them is "more primary" in whatever sense is like asking 
whether the chicken or the egg came first.

(IMHO etc.)

Regards,
Jo
0
jo427 (1164)
3/9/2007 9:40:42 PM
Joachim Durchholz wrote:

> Asking which of them is "more primary" in whatever sense is like asking 
> whether the chicken or the egg came first.

And we all know it was the egg, because of the dinosaurs etc. :)

-- Barry

-- 
http://barrkel.blogspot.com/
0
3/9/2007 10:02:37 PM
Albert van der Horst wrote:
> In article <DLidnepHg4-e0nLYnZ2dnUVZ_qzinZ2d@rcn.net>,
> Jerry Avins  <jya@ieee.org> wrote:
>> Markus E Leypold wrote:
>>
>>   ...
>>
>>> Special relativity is basically the search for a theory which is
>>> invariant under Lorentz transformations, but which for low relative
>>> velocities approximates Newtonian mechanics. It was, by the way,
>>> Einstein who found it (Lorentz and Minkowski were there before him;
>>> Einstein's merit was to interpret the description in terms of
>>> experiment -- BY THE WAY: another instance where theory definitely
>>> came before practice :-), sort of)
>> Special relativity was a successful attempt to explain the experimental
>> evidence gathered by Michelson and Morley. Practice came first.
> 
> I think that is a strange way to put it. Special relativity is not
> explaining that the velocity of light is constant. It is a theory
> based on the postulate that the light velocity is constant.

That's Einstein's approach to developing the theory. But if you read "On 
the electrodynamics of moving bodies" you see that his stated motivation 
is to explain the equality of the effects of a moving magnet on a 
stationary coil and a stationary magnet on a moving coil.

And then there's Poincaré's approach, in which the speed of light 
varies, but the measurement apparatus is distorted in such a way that it 
appears to be the same. Same predictions, different viewpoint.

>
>> Jerry
> 
> Groetjes Albert
> --


-- 
John Doty, Noqsi Aerospace, Ltd.
--
Specialization is for robots.
0
John
3/10/2007 3:50:17 AM
"Elizabeth D Rather" <eratherXXX@forth.com> wrote in message 
news:12ue7k3erkvo95b@news.supernews.com...
>J Thomas wrote:
>> On Feb 28, 10:08 am, Markus E Leypold
>> <development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-leypold.de> wrote:
>>> "J Thomas" <jethom...@gmail.com> writes:
> ...
>>>> But anyway, my point isn't that you can make things that work without
>>>> a sophisticated or correct theory about why they work. That's a given.
>>>> My point is that large engineering projects usually don't fall down,
>>>> but large software projects usually do fall down. This suggests that
>>>> our theory as currently understood is not adequate.
>>>> Tenth century builders did have sophisticated theory to fall back on.
>>>> It was called astrology. It may have acted to reduce construction
>>>   ^^^^^^^^^^^^^^^^^^^^^^^^
>>>
>>> Nonsense. You should really read up on your history of science and
>>> technique.
>>
>> Are you saying that astrology wasn't developed then? People who're
>> interested in astrology have told me it goes back at least to
>> babylonia and ancient egypt. A large body of theory with a very long
>> history, used to predict lots of things.
>
> Astrology certainly has a long history, but it wasn't a theoretical basis 
> for medieval building design.  Medieval architects and builders had a 
> considerable body of knowledge about what things worked and what didn't, 
> which they learned through apprenticeships and other direct contact. 
> Architects were able to innovate brilliantly -- probably by using block 
> models to test things such as arch construction.  What they *didn't* do is 
> write lengthy academic papers on their theories:  some of them were 
> regarded as 'trade secrets', and in general they were outside any academic 
> tradition that encouraged committing designs to paper.
Astrology and its scientific treatment, astronomy, are fellow travelers. 
While scientists know that there's nothing special about, say, Leo, a 
traveler named Leroy comes, visits, takes tea.
--
LS 


0
Lane
3/10/2007 4:30:10 AM
Albert van der Horst <albert@spenarnc.xs4all.nl> writes:

> In article <DLidnepHg4-e0nLYnZ2dnUVZ_qzinZ2d@rcn.net>,
> Jerry Avins  <jya@ieee.org> wrote:
>>Markus E Leypold wrote:
>>
>>   ...
>>
>>> Special relativity is basically the search for a theory which is
>>> invariant under Lorentz transformations, but which for low relative
>>> velocities approximates Newtonian mechanics. It was, by the way,
>>> Einstein who found it (Lorentz and Minkowski were there before him;
>>> Einstein's merit was to interpret the description in terms of
>>> experiment -- BY THE WAY: another instance where theory definitely
>>> came before practice :-), sort of)
>>
>>Special relativity was a successful attempt to explain the experimental
>>evidence gathered by Michelson and Morley. Practice came first.
>
> I think that is a strange way to put it. Special relativity is not
> explaining that the velocity of light is constant. It is a theory
> based on the postulate that the light velocity is constant.

I think we should distinguish between practice and observation as well
as between postulates/axioms, models and scientific theories.

In this case a theoretical problem came first (mechanics and
electrodynamics couldn't be unified, and the qualities of star light as
observed on earth were not really consistent with the ether
hypothesis). So people set out to confirm the existence of ether
experimentally and found nothing. Now a new/corrected theory was
needed.

That nicely illustrates the interplay of theory and practice (or
better, observation) in science. It might be different in engineering,
which probably explains this discussion to a certain extent.

Regards -- Markus


0
Markus
3/10/2007 10:58:57 AM
"Markus E Leypold" 
<development-2006-8ecbb5cc8aREMOVETHIS@ANDTHATm-e-leypold.de> wrote in 
message news:krabyl4b66.fsf@hod.lan.m-e-leypold.de...
>
> Albert van der Horst <albert@spenarnc.xs4all.nl> writes:
>
>> In article <DLidnepHg4-e0nLYnZ2dnUVZ_qzinZ2d@rcn.net>,
>> Jerry Avins  <jya@ieee.org> wrote:
>>>Markus E Leypold wrote:
>>>
>>>   ...
>>>
>>>> Special relativity is basically the search for a theory which is
>>>> invariant under Lorentz transformations, but which for low relative
>>>> velocities approximates Newtonian mechanics. It was, by the way,
>>>> Einstein who found it (Lorentz and Minkowski were there before him;
>>>> Einstein's merit was to interpret the description in terms of
>>>> experiment -- BY THE WAY: another instance where theory definitely
>>>> came before practice :-), sort of)
>>>
>>>Special relativity was a successful attempt to explain the experimental
>>>evidence gathered by Michelson and Morley. Practice came first.
The much more serious challenge came from the experimentalist Planck. 
Anyone who looks at Lorentzian transformations realizes that special 
relativity was waiting for Albert with a bow on it.  Minkowski's lazy dog 
was the workhorse for general relativity.
--
LS


0
Lane
3/11/2007 3:03:20 AM
"Lane Straatman" <invalid@invalid.net> writes:

> "Markus E Leypold" 
> <development-2006-8ecbb5cc8aREMOVETHIS@ANDTHATm-e-leypold.de> wrote in 
> message news:krabyl4b66.fsf@hod.lan.m-e-leypold.de...
>>
>> Albert van der Horst <albert@spenarnc.xs4all.nl> writes:
>>
>>> In article <DLidnepHg4-e0nLYnZ2dnUVZ_qzinZ2d@rcn.net>,
>>> Jerry Avins  <jya@ieee.org> wrote:
>>>>Markus E Leypold wrote:
>>>>
>>>>   ...
>>>>
>>>>> Special relativity is basically the search for a theory which is
>>>>> invariant under Lorentz transformations, but which for low relative
>>>>> velocities approximates Newtonian mechanics. It was, by the way,
>>>>> Einstein who found it (Lorentz and Minkowski were there before him;
>>>>> Einstein's merit was to interpret the description in terms of
>>>>> experiment -- BY THE WAY: another instance where theory definitely
>>>>> came before practice :-), sort of)
>>>>
>>>>Special relativity was a successful attempt to explain the experimental
>>>>evidence gathered by Michelson and Morley. Practice came first.

> The much more serious challenge came from the experimentalist Planck.

Planck? Relativity?

Regards -- Markus

0
Markus
3/11/2007 2:36:35 PM
"Markus E Leypold" <development-2006-8ecbb5cc8aREMOVETHIS@ANDTHATm-e-leypold.de> 
wrote in message news:v3649njns4.fsf@hod.lan.m-e-leypold.de...
>
> They haven't (usually) learned those rules by trial and error, though,
> but by memorizing theory or the condensed rules for application of
> theory during their professional education.
>
I like to consider this issue as a little bit beyond theory, even
though there are theoretical concerns that can be useful.  In most
engineering, we have a body of settled knowledge that includes
good metrics for developing designs to a high level of predictability.

Software engineering is not yet at the level of predictability enjoyed
by most mature engineering disciplines.  There is some settled
knowledge (e.g., Big O), that can be used for design metrics, but
not yet a large body of agreed-upon settled knowledge.

Even so, there is an emerging set of principles that are proving
helpful in the dependable development of software.   As the
principles are used for creating solid designs, they become part
of the "body of knowledge."

In most engineering, the "principle of least surprise" is well-established.
We strive to make that a guiding principle in software engineering, but
we still have a long way to go.   A researcher, on the other hand, is
often delighted by surprise since it can lead to new theories and
new avenues of research.  An artist is thrilled by surprise and uses
it to open new perspectives on the world or aesthetics.

I wrote a paper a few years ago on the "Surprise Tolerance Continuum"
for Software Engineering Notes (ACM - SEN) where I explore the
notion of surprise across several disciplines.   I concluded, in that
paper, that an abiding characteristic of good engineering is a low
tolerance for surprise. This is neither a novel nor a new finding.  The
difference in my paper was putting surprise on a continuum of
creational disciplines.

Theory leads to research, which leads to knowledge, which, if we
are astute enough, leads to new principles.   Those principles, as
they coalesce into settled knowledge, provide many of the foundations
for serious engineering.

Software engineering, according to the Garmisch report (Naur 1969), is
the attempt to apply engineering principles and methods to the creation
of software.   The IEEE Computer Society publications echo this idea.

Our challenge, in software engineering, is to discover the settled knowledge
and corresponding principles that work best in the "attempt to apply
engineering principles and methods."    This is an on-going challenge.  It
is many years away from satisfying those of us, including members of
this forum, who continue to pursue it.

The challenge, in software engineering, is all the more difficult because
there is not yet a solid concept of software physics.   Kolence (CACM 1977)
pursued the idea of software physics, but his work fell short of what we
really need.   Indeed, we may never find a well-grounded physics of
software similar to that in other engineering disciplines.   If so, that will
be a real impediment to the progress toward a sound rationale for
software engineering as an engineering discipline.

Richard Riehle



0
adaworks
4/2/2007 2:28:49 AM
"Jerry Avins" <jya@ieee.org> wrote in message 
news:y5ydnbbc4NnYaHnYnZ2dnUVZ_riknZ2d@rcn.net...
>
> There were plenty of collapses after theory ruled also.
Most members of this forum might already know this story, but I
am reminded of it by the discussion.

Roman generals tested their bridges by having those who designed
and built them stand beneath as the chariots and legions crossed over.

Which leads to the Q&A session in which some programmers were
being asked whether they would board an airplane running software
they had written.

First programmer.  Yes. I would board it because I know I am a really
good programmer and the risk is low.

Second programmer.   No.  I don't trust software to fly a plane even
if I did write it.

Third programmer.  I will happily board the plane because I know that,
if it has to fly using my software, it will never get off the ground.

Richard Riehle 


0
adaworks
4/2/2007 2:35:44 AM
adaworks@sbcglobal.net wrote:

> 
> "Jerry Avins" <jya@ieee.org> wrote in message
> news:y5ydnbbc4NnYaHnYnZ2dnUVZ_riknZ2d@rcn.net...
>>
>> There were plenty of collapses after theory ruled also.
> Most members of this forum might already know this story, but I
> am reminded of it from the discussion.
> 
> Roman generals tested their bridges by having those who designed
> and built them stand beneath as the chariots and legions crossed over.
> 
> Which leads to the Q&A session in which some programmers were
> being asked whether they would board an airplane running software
> they had written.
> 
> First programmer.  Yes. I would board it because I know I am a really
> good programmer and the risk is low.
> 
> Second programmer.   No.  I don't trust software to fly a plane even
> if I did write it.
> 
> Third programmer.  I will happily board the plane because I know that,
> if it has to fly using my software, it will never get off the ground.

Much as I am happy using the District Line and Jubilee Line on London
Underground as I have had a hand in the work on train and track equipment
for them.

-- 
********************************************************************
Paul E. Bennett ....................<email://peb@amleth.demon.co.uk>
Forth based HIDECS Consultancy .....<http://www.amleth.demon.co.uk/>
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-811095
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..
********************************************************************
0
Paul
4/2/2007 9:48:28 AM
On Apr 2, 3:28 am, <adawo...@sbcglobal.net> wrote:

> "Markus E Leypold" <development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-leypold.de>
> wrote in messagenews:v3649njns4.fsf@hod.lan.m-e-leypold.de...

>> They haven't (usually) learned those rules by trial and error, though,
>> but by memorizing theory or the condensed rules for application of
>> theory during their professional education.

> I like to consider this issue as a little bit beyond theory, even
> though there are theoretical concerns that can be useful. In most
> engineering, we have a body of settled knowledge that includes
> good metrics for developing designs to a high level of predictability.

> Software engineering is not yet at the level of predictability enjoyed
> by most mature engineering disciplines. There is some settled
> knowledge (e.g., Big O), that can be used for design metrics, but
> not yet a large body of agreed-upon settled knowledge.

In the "Professional Software Development" book, the author addresses
this aspect with some interesting information. He discusses two
coarse surveys regarding the body of knowledge circa the seminal
NATO conference, and at the end of last century.

The questions are regarding the size of the body of knowledge, and
how much of it would be relevant 10 yrs from the time of asking.

The suggestion is that in three decades the size has grown
considerably. And more significantly, the sense of what percentage
will still be relevant 10 yrs hence is also very much greater.

Perhaps in three decades things have not advanced as much as you
would wish, but for such an embryonic discipline (70+ yrs theoretical,
60+ yrs practical) the body of knowledge is hardly crawling along
either.


[ rest of posting snipped, but read and acknowledged ]


Regards,
Steven Perryman

0
ggroups5 (201)
4/2/2007 5:17:35 PM
On Feb 28, 12:31 pm, John Passaniti <n...@JapanIsShinto.com> wrote:
> billy wrote:
> >> Functional programming languages avoid state and eschew
> >> mutable data.
> >>   Forth is (like any imperative programming language)
> >> explicit about state and freely mutates data.

> > I would suggest that functional programming languages are not nice
> > because they avoid those things; rather, they are nice for other
> > reasons, but those other reasons happen to require a lack of state and
> > mutable data.

> I'm
> pointing out a couple key ideas seen in most functional languages that
> Forth doesn't have.

With all due respect, you're not. You're actually pointing out things
that Forth *has* and (your definition of) functional languages *do not
have*. That's directly the opposite of what you claim to be doing.

A language is made useful by its capabilities, not by its
incapabilities. Java did not improve on C++ by getting rid of the
'delete' operator; it improved on it by adding automatic garbage
collection. (Whether Java as a whole is an improvement over C++ I
leave to another thread -- I use both, and have no interest in
resolving that debate.)

Forth has state and mutable data; yes. But the only parts of Forth's
state that you MUST use are the global stack and dictionary, and the
stack can be modeled concatenatively (i.e. as though each word in a
definition represented a function taking a stack as input and
producing a stack as output), while the dictionary is rarely used in a
dynamic way (i.e. at runtime), and so does not complicate static
analysis.
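
To make that concrete, here is a minimal sketch of the concatenative
reading in Haskell (Haskell only because it is the functional language
mentioned below; the names Stack, dup, mul and square are invented for
this sketch, not taken from any existing library):

        -- Each Forth word is modeled as a pure function from stack to stack.
        type Stack = [Int]

        dup :: Stack -> Stack              -- Forth DUP
        dup (x:xs) = x : x : xs
        dup []     = error "stack underflow"

        mul :: Stack -> Stack              -- Forth *
        mul (x:y:xs) = (x * y) : xs
        mul _        = error "stack underflow"

        -- ": square dup * ;" becomes composition of the words.
        square :: Stack -> Stack
        square = mul . dup

        main :: IO ()
        main = print (square [5])          -- prints [25]

In this reading, juxtaposing Forth words is just composing functions,
which is what keeps the static analysis tractable.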

Furthermore, the notation used in Forth is inherently ordered, exactly
like the monad notation used in Haskell, so as with Haskell, side
effects can be analyzed and dealt with properly. The language
"Enchilada", by the way, takes this to the extreme (although, of
course, I'm not claiming that Enchilada is a derivative of Forth).
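
The parallel with monadic sequencing can be sketched the same way
(again illustrative Haskell, not from any existing Forth library; it
needs the mtl package for Control.Monad.State):

        import Control.Monad.State

        -- The same computation as an explicitly ordered sequence of
        -- stack actions: statement order is evaluation order, just as
        -- word order is in Forth.
        push :: Int -> State [Int] ()
        push x = modify (x :)

        dupS, mulS :: State [Int] ()
        dupS = modify (\(x:xs)   -> x : x : xs)    -- partial; fine for a sketch
        mulS = modify (\(x:y:xs) -> (x * y) : xs)

        main :: IO ()
        main = print (execState (push 5 >> dupS >> mulS) [])  -- prints [25]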

> > For example, most people would identify "referential transparency" as
> > an important part of functional programming. Forth has quite strong
> > referential transparency, even when you use the parts of it that do
> > horrible things like mutate global state.

> Forth is referentially transparent only in the most trivial sense.
> Certainly a word like this:
>         : square dup * ;
> Is referentially transparent.  But the second you do anything involving
> flow-control based directly (or indirectly) on variables or changing
> state, you are no longer referentially transparent.

Keep in mind that Forth was never designed to be a functional
language. Chuck would have laughed. BUT... The conventional,
recommended practice in Forth is to avoid those things you describe
(as well as some others), because Forth is easier and more fun to work
with when you don't do that. Why? Because when you don't do those
things, Forth is functional.

> Or put another way,
> while you might be able to demonstrate plenty of Forth words that are
> referentially transparent, it's not a unique property of Forth:
>         int square(int x) { return x*x; }
> This is equally referentially transparent to the Forth definition.

Absolutely not; a lambda-centered language can't compete with a point-
free language. (Point-free, for onlookers, means roughly that it
doesn't need to name data.) Any word or sequence of words in the Forth
definition you gave can be factored out and treated as a function of
its own; the same is not true of the C function, since the function
parameters are internal to the function. Put it another way: "x" is
local to the function, so you can't simply copy text from inside a
function and expect it to work anywhere else.

Forth is (or usually is) point-free. That's a BIG win.
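
To illustrate the difference, compare a pointed and a point-free
rendering (a Haskell sketch, with names invented for this example):

        -- Pointed style: the parameter x is local, so the body "x * x"
        -- cannot be lifted out and reused on its own.
        squareP :: Int -> Int
        squareP x = x * x

        -- Point-free style: no named data, only composed functions.
        -- Each composed segment, here (subtract 1) and (* 2), is itself
        -- a function that can be factored out, like a run of Forth words.
        doubleMinusOne :: Int -> Int
        doubleMinusOne = subtract 1 . (* 2)

        main :: IO ()
        main = print (squareP 5, doubleMinusOne 5)  -- prints (25,9)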

> So to claim Forth is a functional language because some constructs are
> referentially transparent is to claim that C is a functional language
> for the same reason.  And once you go down that road, the argument gets
> progressively sillier.

That's a fine slippery-slope argument. But it works just as invalidly
in the other direction: I could say "to claim Forth is NOT a functional
language because some constructs are NOT referentially transparent..."
Your argument has to win on its own merits, not merely because you
*think* there's a slippery slope.

Of course, a slippery-slope argument isn't a fallacy; your problem is
that you simply assume the existence of the slope, and its
slipperiness, and the catastrophe waiting at the bottom. Prove it!

-Billy

0
4/3/2007 4:25:12 PM
adaworks@sbcglobal.net wrote:
> I like to consider this issue as a little bit beyond theory, even
> though there are theoretical concerns that can be useful.  In most
> engineering, we have a body of settled knowledge that includes
> good metrics for developing designs to a high level of predictability.
> 
> Software engineering is not yet at the level of predictability enjoyed
> by most mature engineering disciplines.  There is some settled
> knowledge (e.g., Big O), that can be used for design metrics, but
> not yet a large body of agreed-upon settled knowledge.

I agree with you in principle, Richard, but I think you may be 
waiting for the wrong train, or one that won't arrive in our lifetime.

Like mech and elec engineering in their infancy, SwE has many 
hobbyists and dilettantes, but there the analogy breaks down. Like 
so many analogies involving software, comparisons with traditional 
engineering are too often simplistic, misleading and even 
counterproductive.

One of the big differences is the number of software mungers who 
consider themselves 'artists' or at least expert craftsmen, and 
resist almost any form of rigour and/or discipline. These apparent 
descendants of Ned Ludd don't even need to mount an attack on the 
stocking frames - they simply carry on in the warmth and comfort of 
their cottages, confident that their looms will continue to make 
them a living. They've been proven right so far, and while they 
limit themselves to software in the small, they'll probably continue 
to do so.

Another is the fact that in traditional engineering, most of the 
hobbyists were well educated for their time, and lusted after 
external knowledge. So were most of their patrons and clients, who 
were very wary of engineering wannabees. We can have a quiet laugh 
about this one regarding current software development.

Finally and most importantly, academe (such as it was at the time) 
offered and contributed almost nothing in the early days, and 
subsequently had almost no influence on the discipline - in fact, 
most of the breakthroughs and processes came from practitioners and 
independents (like Ollie Heaviside in elec eng). By contrast, one of 
the problems with SwE is the sheer number of so called researchers 
coming out with ridiculous proposals and theories which most 
informed practitioners can reject at a glance. The clutter caused by 
this rubbish severely dilutes the advancement of the discipline, and 
I have some sympathy for the hackers who reject all such offerings 
on principle.

So I don't expect too many changes in the near future.

Regarding the topic, the 'formalists' have been going strong for 20+ 
years now, with very little penetration. In general SwE they 
contribute more to the problem than the solution, due to the 
cluttering effect referred to above. The fact that some of their 
theory applies to perhaps 1% of software developed is unfortunate 
simply because it encourages them and their proponents to keep 
making the silly claims of general applicability they've made all 
this time.

Andrew
-- 
Andrew Gabb
email: agabb@tpgi.com.au       Adelaide, South Australia
phone: +61 8 8342-1021
-----
0
Andrew
4/7/2007 4:43:56 PM
Hi Andrew,

Good reply.

Richard Riehle
"Andrew Gabb" <agabb@tpgi.com.au> wrote in message 
news:57q02tF2d8ev8U1@mid.individual.net...
> adaworks@sbcglobal.net wrote:
>> I like to consider this issue as a little bit beyond theory, even
>> though there are theoretical concerns that can be useful.  In most
>> engineering, we have a body of settled knowledge that includes
>> good metrics for developing designs to a high level of predictability.
>>
>> Software engineering is not yet at the level of predictability enjoyed
>> by most mature engineering disciplines.  There is some settled
>> knowledge (e.g., Big O), that can be used for design metrics, but
>> not yet a large body of agreed-upon settled knowledge.
>
> I agree with you in principle, Richard, but I think you may be waiting for the 
> wrong train, or one that won't arrive in our lifetime.
>
> Like mech and elec engineering in their infancy, SwE has many hobbyists and 
> dilettantes, but there the analogy breaks down. Like so many analogies 
> involving software, comparisons with traditional engineering are too often 
> simplistic, misleading and even counterproductive.
>
> One of the big differences is the number of software mungers who consider 
> themselves 'artists' or at least expert craftsmen, and resist almost any form 
> of rigour and/or discipline. These apparent descendants of Ned Ludd don't even 
> need to mount an attack on the stocking frames - they simply carry on in the 
> warmth and comfort of their cottages, confident that their looms will continue 
> to make them a living. They've been proven right so far, and while they limit 
> themselves to software in the small, they'll probably continue to do so.
>
> Another is the fact that in traditional engineering, most of the hobbyists 
> were well educated for their time, and lusted after external knowledge. So 
> were most of their patrons and clients, who were very wary of engineering 
> wannabees. We can have a quiet laugh about this one regarding current software 
> development.
>
> Finally and most importantly, academe (such as it was at the time) offered and 
> contributed almost nothing in the early days, and subsequently had almost no 
> influence on the discipline - in fact, most of the breakthroughs and processes 
> came from practitioners and independents (like Ollie Heaviside in elec eng). 
> By contrast, one of the problems with SwE is the sheer number of so called 
> researchers coming out with ridiculous proposals and theories which most 
> informed practitioners can reject at a glance. The clutter caused by this 
> rubbish severely dilutes the advancement of the discipline, and I have some 
> sympathy for the hackers who reject all such offerings on principle.
>
> So I don't expect too many changes in the near future.
>
> Regarding the topic, the 'formalists' have been going strong for 20+ years 
> now, with very little penetration. In general SwE they contribute more to the 
> problem than the solution, due to the cluttering effect referred to above. The 
> fact that some of their theory applies to perhaps 1% of software developed is 
> unfortunate simply because it encourages them and their proponents to keep 
> making the silly claims of general applicability they've made all this time.
>
> Andrew
> -- 
> Andrew Gabb
> email: agabb@tpgi.com.au       Adelaide, South Australia
> phone: +61 8 8342-1021
> ----- 


0
adaworks
4/9/2007 6:21:26 AM
Steve,

Agree with most of what you write.

It seems we have a long way to go before we can
claim any ground in the engineering territory.

Richard
================================================
<ggroups@bigfoot.com> wrote in message 
news:1175534255.887906.133930@o5g2000hsb.googlegroups.com...
> On Apr 2, 3:28 am, <adawo...@sbcglobal.net> wrote:
>
>> "Markus E Leypold" 
>> <development-2006-8ecbb5cc8aREMOVET...@ANDTHATm-e-leypold.de>
>> wrote in messagenews:v3649njns4.fsf@hod.lan.m-e-leypold.de...
>
>>> They haven't (usually) learned those rules by trial and error, though,
>>> but by memorizing theory or the condensed rules for application of
>>> theory during their professional education.
>
>> I like to consider this issue as a little bit beyond theory, even
>> though there are theoretical concerns that can be useful. In most
>> engineering, we have a body of settled knowledge that includes
>> good metrics for developing designs to a high level of predictability.
>
>> Software engineering is not yet at the level of predictability enjoyed
>> by most mature engineering disciplines. There is some settled
>> knowledge (e.g., Big O), that can be used for design metrics, but
>> not yet a large body of agreed-upon settled knowledge.
>
> In the "Professional Software Development" book, the author addresses
> this aspect with some interesting information. He discusses two
> coarse surveys regarding the body of knowledge circa the seminal
> NATO conference, and at the end of last century.
>
> The questions are regarding the size of the body of knowledge, and
> how much of it would be relevant 10 yrs from the time of asking.
>
> The suggestion is that in three decades that the size has grown
> considerably. And more significantly, that the feeling of the
> percentage that will be relevant 10 yrs hence is also very much
> greater.
>
> Perhaps in three decades things have not advanced as much as you
> would wish, but for such an embryonic discipline (70+ yrs theoretical,
> 60+ yrs practical) the body of knowledge is hardly crawling along
> either.
>
>
> [ rest of posting snipped, but read and acknowledged ]
>
>
> Regards,
> Steven Perryman
>
> 


0
adaworks2 (748)
4/9/2007 6:22:40 AM
adaworks@sbcglobal.net wrote:

> Agree with most of what you write.

I went and had another look at this book.
Although the survey was still coarse, the actual info was more
encouraging.

1. The suggestion is that at the time of the NATO conference in the late 
1960s, around 20% of the body of knowledge was deemed to be stable and
the "half-life" (relevance to industry etc) was deemed to be about 10 yrs.

2. For the same survey in 2003, the stable part was deemed to be around 50%
of the body of knowledge, and the half-life had increased to 30 yrs.

The implication was that there is a lot that a s/w engineer can be taught
(as a student etc) that is actually going to be *relevant over their entire
professional life-time*, in contrast with prog langs etc. that rise
and fall in popularity.


The book did balance the above with classifying the body of knowledge in
terms of what is general practice, what is practiced in specific areas
(think of schedulability in hard real-time systems etc), what is research
etc.


 > It seems we have a long way to go before we can
 > claim any ground in the engineering territory.

Perhaps so.
But perhaps the growth rate of the body of knowledge and its increasing
half-life compare much better historically with other engineering
disciplines (even if it feels pedestrian to you :-) ) .


Regards,
Steven Perryman
0
S
4/9/2007 11:18:40 AM