


Why is OO Popular?

Hi, I was wondering if anyone could tell me why object oriented
approaches to programming are becoming very popular.

Thanks

Ian
wilvers1 (4)
4/26/2004 3:46:11 PM

Ian Roberts wrote:
> Hi, I was wondering if anyone could tell me why object oriented
> approaches to programming are becoming very popular.
> 
> Thanks
> 
> Ian

In my opinion OO programming is popular mainly for two reasons:

1. Design is closer to the way humans think.
Classes can be understood as sets of entities, just as types can be,
but the feature that is "almost" unique to classes is "set inclusion"
through inheritance.
This is closer to the way humans think, in sets of things, and there is
no reason why two sets should always have an empty intersection
(generally, types in an Algol-like language are disjoint).


2. Extensibility of certain families of class diagrams.
By using the only design pattern: interpreter
[by "only" I mean that all the design patterns I have seen use the 
abstract class + many sub-classes hierarchy of the interpreter pattern; 
it is a kind of "root" for all the other design patterns]
one can achieve extensibility of the set represented by the abstract 
class; for example, for the set of instructions of a language:

abstract class Instruction {

   abstract void execute();
}

class Assignment extends Instruction {
   void execute() { /* ... */ }
}
...

The number of sub-classes is not really important; one can add any new 
subclass without changing the other ones.
This allows one to extend the functionality of a library without modifying 
its code, especially because clients only use references to Instruction and 
not to the special cases represented by the sub-classes.
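
For concreteness, here is a minimal Java sketch of that claim (the Loop 
subclass and the Interpreter client are invented for the example): client 
code that holds only Instruction references keeps working, unchanged, when 
a new subclass is added.

import java.util.List;

abstract class Instruction {
    abstract void execute();
}

class Assignment extends Instruction {
    void execute() { /* perform the assignment */ }
}

// Added later, in its own file; nothing else changes.
class Loop extends Instruction {
    void execute() { /* run the loop body */ }
}

class Interpreter {
    // The client sees only the abstract type, so it is closed
    // against modification but open to new Instruction subclasses.
    void run(List<Instruction> program) {
        for (Instruction i : program) {
            i.execute();
        }
    }
}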
4/26/2004 8:00:45 PM
Ian Roberts wrote:

> Hi, I was wondering if anyone could tell me why object oriented
> approaches to programming are becoming very popular.

Because virtual methods permit one to change called code without changing
calling code. Adding features without changing a design permits projects to
iterate their versions.
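
As a hedged sketch of that point (the Formatter names are made up for this 
example), the calling code below is written once against a base class; a 
new feature arrives as an override, and render() is never edited:

class Formatter {
    // Called code: an ordinary (virtual) method.
    String format(String s) { return s; }
}

class HtmlFormatter extends Formatter {
    // A feature added later, without touching Report.render.
    @Override
    String format(String s) { return "<p>" + s + "</p>"; }
}

class Report {
    // Calling code: depends only on the base class.
    static String render(Formatter f, String body) {
        return f.format(body);
    }
}

Report.render(new HtmlFormatter(), "hello") then yields "<p>hello</p>" 
without any change to Report.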

-- 
  Phlip
    http://www.xpsd.org/cgi-bin/wiki?TestFirstUserInterfaces


phlip_cpp (3852)
4/26/2004 11:24:55 PM
On 26 Apr 2004 08:46:11 -0700, wilvers1@vodafone.net (Ian Roberts)
wrote:
>Hi, I was wondering if anyone could tell me why object oriented
>approaches to programming are becoming very popular.

Compared to sex and drugs and rock and roll, object-oriented
programming has hardly any followers at all.

J.

4/27/2004 2:51:30 AM
On 26 Apr 2004 08:46:11 -0700, wilvers1@vodafone.net (Ian Roberts)
wrote:

>Hi, I was wondering if anyone could tell me why object oriented
>approaches to programming are becoming very popular.

It's been around long enough that schools started teaching it to
students who became programmers and now are a majority in the
industry.

OK, that was cynical.  Probably true, but cynical.  

You might better have asked what the benefits of OO are.  OO is a set
of tools and language constructs that allow programmers to manage the
interdependencies of source code while at the same time providing a
convenient mode for expressing abstract concepts.
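
One hedged reading of "managing interdependencies" in Java (all names here 
are invented for the sketch): the high-level class compiles against an 
interface only, so the low-level detail can be replaced without touching 
its client.

// The high-level policy depends only on this abstraction.
interface MessageSink {
    void write(String message);
}

class Auditor {
    private final MessageSink sink;

    Auditor(MessageSink sink) { this.sink = sink; }

    void record(String event) { sink.write("AUDIT: " + event); }
}

// A low-level detail; swapping it for a file or network sink
// requires no change to Auditor.
class ConsoleSink implements MessageSink {
    public void write(String message) { System.out.println(message); }
}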


-----
Robert C. Martin (Uncle Bob)
Object Mentor Inc.
unclebob @ objectmentor . com
800-338-6716

"Distinguishing between the author
and the writing is the essence of civilized debate."
           -- Daniel Parker
unclebob2 (2724)
4/27/2004 5:29:48 AM
> > Hi, I was wondering if anyone could tell me why object oriented
> > approaches to programming are becoming very popular.
> > 
> > Thanks
> > 
> > Ian
> 
> In my opinion OO programming is popular mainly for two reasons:
> 
> 1. Design is closer to the way humans think.

Bull! It might map to the way SOME people think, but not
me and not all.

> Classes can be understood as sets of entities, just as types can be,
> but the feature that is "almost" unique to classes is "set inclusion"
> through inheritance.
> This is closer to the way humans think, in sets of things, and there is
> no reason why two sets should always have an empty intersection
> (generally, types in an Algol-like language are disjoint).

I hate it when guessaholics tell me how I think, or
worse yet, how I *should* think. I wish to
see your psychology degree, BTW.

A bigger problem is that every individual thinks
differently.

> 
> 
> 2. Extensibility of certain families of class diagrams.
> By using the only design pattern: interpreter
> [by "only" I mean that all the design patterns I have seen use the 
> abstract class + many sub-classes hierarchy of the interpreter pattern; 
> it is a kind of "root" for all the other design patterns]
> one can achieve extensibility of the set represented by the abstract 
> class; for example, for the set of instructions of a language:
> 
> abstract class Instruction {
> 
>    abstract void execute();
> }
> 
> class Assignment extends Instruction {
>    void execute() { /* ... */ }
> }
> ...
> 
> The number of sub-classes is not really important; one can add any new 
> subclass without changing the other ones.
> This allows one to extend the functionality of a library without modifying 
> its code, especially because clients only use references to Instruction and 
> not to the special cases represented by the sub-classes.

Sounds great in theory, but the real world usually does not change 
in a tree-wise way. The base class becomes a fragile bugaboo
that everybody hates but is afraid to alter for fear of
breaking a bunch of existing children. It becomes like that
old COBOL code that nobody wants to touch.

As far as why OO is "popular", it may not be as popular as
some claim. In practice, OO is popular for interfaces
to low-level and external services, but usually creates
a big mess when one tries to use it for business modeling.
Thus, it is popular for the former, but wavering for the
latter.

-T-
oop.ismad.com
topmind (2124)
4/27/2004 5:50:58 AM
wilvers1@vodafone.net (Ian Roberts) wrote in 
news:85ea541e.0404260746.43a4bb6f@posting.google.com:

> Hi, I was wondering if anyone could tell me why object oriented
> approaches to programming are becoming very popular.
> 
> Thanks
> 
> Ian

Because it promises to perform better than previous mainstream approaches 
and it's become mainstream enough to be palatable to the industry.

Besides, when applied properly, it does perform better.
4/27/2004 8:05:09 AM
In article <85ea541e.0404260746.43a4bb6f@posting.google.com>, wilvers1
@vodafone.net says...
> Hi, I was wondering if anyone could tell me why object oriented
> approaches to programming are becoming very popular.
> 
> Thanks
> 
> Ian
> 

It raises the abstraction level another notch.
Machine Code -> Assembly -> Procedural -> OOP
Each of these levels is used depending on the nature of the programming 
task.  With computing resources the way they are, OOP offers a higher 
level of abstraction without worrying about cost.
Try programming a large complex system at each level and you'll see why 
OOP is popular.

(ignore email reply, hit wrong button)

-- 
Thanks
 Solari
solari (4)
4/27/2004 9:25:27 AM
On Tue, 27 Apr 2004 10:25:27 +0100, Solari <solari@blueyonder.co.uk>
wrote:

>
>It raises the abstraction level another notch.
>Machine Code -> Assembly -> Procedural -> OOP

IMO it is: 

Machine Code -> Assembler -> Spaghetti -> Structured -> OOP ->
Declarative

>Try programming a large complex system at each level and you'll see why 
>OOP is popular.

And why OOP is primitive.

Regards 
alfredo (205)
4/27/2004 10:14:44 AM
On Mon, 26 Apr 2004 22:00:45 +0200, Mathieu Roger
<mathieu.roger@imag.fr> wrote:

>Classes can be understood as sets of entities, just as types can be,
>but the feature that is "almost" unique to classes is "set inclusion"
>through inheritance.

Do you mean subtyping?

>This is closer to the way humans think, in sets of things, and there is
>no reason why two sets should always have an empty intersection
>(generally, types in an Algol-like language are disjoint).

Yes, Algol does not support subtyping.

>2. Extensibility of certain families of class diagrams.
>By using the only design pattern: interpreter
>[by "only" I mean that all the design patterns I have seen use the 
>abstract class + many sub-classes hierarchy of the interpreter pattern; 
>it is a kind of "root" for all the other design patterns]
>one can achieve extensibility of the set represented by the abstract 
>class; for example, for the set of instructions of a language:
>
>abstract class Instruction {
>
>   abstract void execute();
>}
>
>class Assignment extends Instruction {
>   void execute() { /* ... */ }
>}

This looks like delegation to me.

>The number of sub-classes is not really important; one can add any new 
>subclass without changing the other ones.
>This allows one to extend the functionality of a library without modifying 
>its code, especially because clients only use references to Instruction and 
>not to the special cases represented by the sub-classes.

This is nice, but not very impressive.


Regards
  Alfredo
alfredo (205)
4/27/2004 3:35:39 PM
wilvers1@vodafone.net (Ian Roberts) wrote in message news:<85ea541e.0404260746.43a4bb6f@posting.google.com>...
> Hi, I was wondering if anyone could tell me why object oriented
> approaches to programming are becoming very popular.
> 
> Thanks
> 
> Ian

And what are the other approaches?

Tsolak Petrosian
tsolakp (97)
4/27/2004 3:35:45 PM
On 26 Apr 2004 22:50:58 -0700, topmind@technologist.com (Topmind)
wrote:

>As far as why OO is "popular", it may not be as popular as
>some claim. In practice, OO is popular for interfaces
>to low-level and external services, but usually creates
>a big mess when one tries to use it for business modeling.

But many people like to create big messes and to maximize the
development and maintenance costs. OO is perfect to do that with
Business Information Systems.

Regards
 
alfredo (205)
4/27/2004 3:39:45 PM
On 27 Apr 2004 08:35:45 -0700, tsolakp@yahoo.com (Tsolak Petrosian)
wrote:

>wilvers1@vodafone.net (Ian Roberts) wrote in message news:<85ea541e.0404260746.43a4bb6f@posting.google.com>...
>> Hi, I was wondering if anyone could tell me why object oriented
>> approaches to programming are becoming very popular.
>> 
>> Thanks
>> 
>> Ian
>
>And what are the other approaches?

Structured, spaghetti, declarative, etc.


Regards
  Alfredo
alfredo (205)
4/27/2004 4:10:41 PM
> 
> >
> >It raises the abstraction level another notch.
> >Machine Code -> Assembly -> Procedural -> OOP
> 
> IMO it is: 
> 
> Machine Code -> Assembler -> Spaghetti -> Structured -> OOP ->
> Declarative
> 
> >Try programming a large complex system in each level then youll see why 
> >OOP is popular.
> 
> And why OOP is primitive.
> 
> Regards

The pattern is:

[Insert your enemy's paradigm] -> Slime Molds -> Machine Code 
  -> Assembler -> Goto's  
  -> Structured -> [insert your favorite paradigm]

-T-
topmind (2124)
4/27/2004 5:53:56 PM
"Topmind" <topmind@technologist.com> wrote in message
news:4e705869.0404262150.3ceaf835@posting.google.com...
> > > Hi, I was wondering if anyone could tell me why object oriented
> > > approaches to programming are becoming very popular.
> > >
> > > Thanks
> > >
> > > Ian
> >
> > In my opinion OO programming is popular mainly for two reasons :
> >
> > 1. Design is closer to the way humain think.
>
> Bull! It might map to the way SOME people think, but not
> me and not all.

We could reword it to take people like you into account: OO is closer to the
way humans ought to think.


Shayne Wissler


4/27/2004 7:20:39 PM
On Tue, 27 Apr 2004 19:20:39 GMT, "Shayne Wissler"
<thalesNOSPAM000@yahoo.com> wrote:


>We could reword it to take people like you into account: OO is closer to the
>way humans ought to think.

Closer than what?

Logic programming is closer to the way humans ought to think than OO.

OO is rather fuzzy and sloppy.


Regards
  Alfredo
alfredo (205)
4/27/2004 8:53:06 PM
"Alfredo Novoa" <alfredo@ncs.es> wrote in message
news:408ec7ea.1123175@news.wanadoo.es...
> On Tue, 27 Apr 2004 19:20:39 GMT, "Shayne Wissler"
> <thalesNOSPAM000@yahoo.com> wrote:
>
>
> >We could reword it to take people like you into account: OO is closer to
> >the way humans ought to think.
>
> Closer than what?

Procedural.

> Logic programming is closer to the way humans ought to think than OO.

Nope.

> OO is rather fuzzy and sloppy.

Depends on who's writing it.


Shayne Wissler


4/27/2004 9:49:18 PM
Shayne Wissler wrote:
> "Alfredo Novoa" <alfredo@ncs.es> wrote in message
> news:408ec7ea.1123175@news.wanadoo.es...
> 
>>On Tue, 27 Apr 2004 19:20:39 GMT, "Shayne Wissler"
>><thalesNOSPAM000@yahoo.com> wrote:
>>
>>>We could reword it to take people like you into account: OO is closer to
>>> the way humans ought to think.
>>
>>Closer than what?
> Procedural.
>>Logic programming is closer to the way humans ought to think than OO.
> Nope.
>>OO is rather fuzzy and sloppy.
> Depends on who's writing it.

I don't think anybody quite knows how the human mind works.
There are theories, but none proven yet.

Our idea of sets and OOP make us think this system could
and perhaps should restrain our wild hare-brained human
approaches to something another wild hare-brained human
might be able to read and manage. In addition there is
the write-once practice which attempts to standardize
as much as possible, rather than re-creating the wheel
every time a new app. is written.

Declarative? I don't know how to write words to declare
what the final app. ought to be. It doesn't seem possible
to me. But, I haven't really gotten into those languages.
Maybe I just don't think that way.  :-)
hathawa2 (78)
4/28/2004 12:18:03 AM
"Mark S. Hathaway" <hathawa2@marshall.edu> wrote in message
news:408ebffa@alpha.wvnet.edu...
> Shayne Wissler wrote:
> > "Alfredo Novoa" <alfredo@ncs.es> wrote in message
> > news:408ec7ea.1123175@news.wanadoo.es...
> >
> >>On Tue, 27 Apr 2004 19:20:39 GMT, "Shayne Wissler"
> >><thalesNOSPAM000@yahoo.com> wrote:
> >>
> >>>We could reword it to take people like you into account: OO is closer
> >>>to the way humans ought to think.
> >>
> >>Closer than what?
> > Procedural.
> >>Logic programming is closer to the way humans ought to think than OO.
> > Nope.
> >>OO is rather fuzzy and sloppy.
> > Depends on who's writing it.
>
> I don't think anybody quite knows how the human mind works.
> There are theories, but none proven yet.

No one knows how the brain produces the mind. Which is quite irrelevant to
the issue of its proper employment--an issue that quite a lot is known
about, by at least some people.


Shayne Wissler


4/28/2004 12:34:59 AM
Shayne Wissler wrote:

> "Topmind" <topmind@technologist.com> wrote in message
> news:4e705869.0404262150.3ceaf835@posting.google.com...
>>>In my opinion OO programming is popular mainly for two reasons :
>>>
>>>1. Design is closer to the way humain think.
>>
>>Bull! It might map to the way SOME people think, but not
>>me and not all.
> 
> We could reword it to take people like you into account: OO is closer to the
> way humans ought to think.
> 
> Shayne Wissler

Actually, I think OO is the way people actually think.

Let's look at a simple example of a chair. People think of a chair as a 
thing. The thing has some properties to it (it likely has 4 legs, is 
meant to be sat on, etc).

When we learn about our first chair we treat it as a unique object. When 
we see our second chair we see it has similar properties to the first 
chair and we begin to form an understanding of chairness (the type chair).

If I said I have a unique chair I built and asked if a person could sit 
on it, what would you say? Odds are you know that chair-like things are 
meant to be sat on, so my unique chair can be sat on.

Another example is a Person. If I asked you whether my friend (whom you have 
never met) has blood, what would you say? Odds are you would say they 
have blood because you understand what a person is and you know a 
person has blood.

A human's ability to communicate with other people isn't based on the 
ability to send the complete information they know; rather, we both 
have an understanding of basic classes of things and we use the same 
symbolic representation to define them, which allows us to communicate.

If I say "Joe is sitting in a chair." I really haven't said much, but you 
know what I'm saying because we both have a shared understanding of a 
chair. We both know that Joe is the name of a person, so Joe is a person. 
Joe is a male's name, so Joe is male. We both know that people have blood, 
so Joe has blood. To understand my sentence we have to understand a 
great deal that isn't communicated. We can communicate because of 
classifications we both understand.

People understand and interact with the world based on classes of things.

This allows me to use a computer I have never seen before because I know 
that computers behave like other computers. I can walk on ground I have 
never been on before because I understand the slopes and texture of 
materials of ground I have never been on, but they have the same 
properties as other ground I have been on.

A computer is a classification of device, slopes are classifications of 
the ground, texture is a classification of a material. My understanding 
of these classifications, and many other classifications, allows me to 
function as a person.


Quotes from "Design Principles Behind Smalltalk" - Byte mag Aug 1981

"In designing a language for use with computers, we do not have to look 
far to find helpful hints. Everything we know about how people think and 
communicate is applicable. The mechanisms of human thought and 
communication have been engineered for millions of years, and we should 
respect them as being of sound design. Moreover, since we must work with 
this design for the next million years, it will save time if we make our 
computer models compatible with the mind, rather than the other way around."

"The mind observes a vast universe of experience, both immediate and 
recorded. One can derive a sense of oneness with the universe simply by 
letting this experience be, just as it is. However, if one wishes to 
participate, literally to take a part, in the universe, one must draw 
distinctions. In so doing one identifies an object in the universe, and 
simultaneously all the rest becomes not-that-object. Distinction by 
itself is a start, but the process of distinguishing does not get any 
easier. Every time you want to talk about "that chair over there", you 
must repeat the entire processes of distinguishing that chair. This is 
where the act of reference comes in: we can associate a unique 
identifier with an object, and, from that time on, only the mention of 
that identifier is necessary to refer to the original object. "

"Classification is the objectification of nessness. In other words, when 
a human sees a chair, the experience is taken both literally as "that 
very thing" and abstractly as "that chair-like thing". Such abstraction 
results from the marvelous ability of the mind to merge "similar" 
experience, and this abstraction manifests itself as another object in 
the mind, the Platonic chair or chairness. "

The complete article is available here:
http://users.ipa.net/%7edwighth/smalltalk/byte_aug81/design_principles_behind_smalltalk.html

Jeff Brooks
jeff_brooks (199)
4/28/2004 12:51:07 AM
In article <85ea541e.0404260746.43a4bb6f@posting.google.com> (Mon, 26 Apr
2004 08:46:11 -0700), Ian Roberts wrote:

> Hi, I was wondering if anyone could tell me why object oriented
> approaches to programming are becoming very popular.

Have you asked your instructor for his or her opinion?

hamilcar (369)
4/28/2004 1:50:42 AM
"Mark S. Hathaway" <hathawa2@marshall.edu> wrote:

> Our idea of sets and OOP make us think this system could
> and perhaps should restrain our wild hare-brained human
> approaches to something another wild hare-brained human
> might be able to read and manage.

The superior ability of the OO conceptual modelling paradigm to reduce
complexity, in most cases, in comparison to most other conceptual
paradigms chiefly stems from the closer correspondence the OO
modelling paradigm has with the way the world operates.

Responding to those who argue that we don't really know how humans
reason, it's more that:
	a) the world operates according to Z
	b) humans notice the world in operation according to Z
	c) most humans find that for the most complex circumstances in
		various domains, understanding and leverage are maximized when
		employing the Z conceptual modelling paradigm,

Believe me (actually do a proper scientific study) it's as obvious and
straightforward as, "abc".

Elliott
-- 
Not approaching OO as modelling execution
of physical machines, per the creators of OO
is like not having a software engineering soul.
universe3 (375)
4/28/2004 6:12:00 AM
Hamilcar Barca <hamilcar@never.mind> wrote:

> In article <85ea541e.0404260746.43a4bb6f@posting.google.com> (Mon, 26 Apr
> 2004 08:46:11 -0700), Ian Roberts wrote:
> 
> > Hi, I was wondering if anyone could tell me why object oriented
> > approaches to programming are becoming very popular.
> 
> Have you asked your instructor for his or her opinion?

A real, leading OO enthusiast, huh?

Not.  And definitely "not" all 4 ways.  Haha.  :- }

If RCM doesn't think OO is, or is becoming ever more popular, he
ought'a just state it honestly and explain concretely why he thinks
in contradistinction to what most sw engineers recognize--why he sees
and thinks errantly.

Elliott
-- 
Not approaching OO as modelling execution
of physical machines, per the creators of OO
is like not having a software engineering soul.
universe3 (375)
4/28/2004 6:23:15 AM
On Wed, 28 Apr 2004 02:12:00 -0400, Universe
<universe@tAkEcovadOuT.net> wrote:
>Responding to those who argue that we don't really know how humans
>reason, it's more that:
>	a) the world operates according to Z

example: gravity

>	b) humans notice the world in operation according to Z

example: Newton bonked by apple

>	c) most humans find that for the most complex circumstances in
>		various domains, understanding and leverage are maximized when
>		employing the Z conceptual modelling paradigm,

example: v = 16*t^2

         d) OOP is better at capturing these observations and
                calculations.  

example:  apple1.v = myframe.gravitationalaccelerationconstant * 
             (myframe.makelocaltime(universe.absolutetime()) -
                dropevent.inittime) ^ 2

It's only (d) that is in question.

>Believe me (actually do a proper scientific study) it's as obvious and
>straightforward as, "abc".

We'd love to study you, but lack the funds.

J.

4/28/2004 3:29:23 PM
Shayne Wissler wrote:
> "Topmind" <topmind@technologist.com> wrote in message
> news:4e705869.0404262150.3ceaf835@posting.google.com...
> 
>>>>Hi, I was wondering if anyone could tell me why object oriented
>>>>approaches to programming are becoming very popular.
>>>>
>>>>Thanks
>>>>
>>>>Ian
>>>
>>>In my opinion OO programming is popular mainly for two reasons :
>>>
>>>1. Design is closer to the way humain think.
>>
>>Bull! It might map to the way SOME people think, but not
>>me and not all.
> 
> 
> We could reword it to take people like you into account: OO is closer to the
> way humans ought to think.

I think it would be sad if humans thought like computers - I don't think 
this is what you intend?  Of course we use intuition in ways that are 
not easily formulated as an OO program.

To speculate on how humans think, we can look to human language.  There 
must be a correspondence between human thought and human language, some 
would even say they are inseparable.  In natural language, you have 
nouns, verbs and adjectives, which correspond to objects, messages and 
members.  Therefore, there is at least some correspondence between OO 
language, human language, and therefore human thought.

OO does not have a monopoly on nouns, verbs and adjectives however.  In 
a functional language, a function is perhaps a verb, while values are 
nouns.  In a logic language, terms are nouns, while predicates are verbs 
or adjectives.

Another way of assessing the "naturalness" of a language is to look at 
the degree of abstraction.  Our brains cope with the real world well, 
this is what they have evolved to do best.  If we can make a program 
obey real-world rules, then presumably our brains can manipulate it better.

Logic programming is "better" IMO at modeling the real world, e.g.

	mother(sara, james).
	eats(james, chips).
	colour(chips, yellow).
	loves(A, B) :- mother(A, B) ; father(A, B).

however to implement algorithms using logic programming is IMO less easy 
than in functional/procedural/OO languages.  So this isn't the whole 
story either.

I think OO works, because it organises programs into objects, which our 
brains treat as real-world objects.  OO is an approach that emphasises 
the "noun" part of language, while other paradigms emphasise the "verb". 
  Our brains can cope with hundreds of nouns, whilst hundreds of verbs 
are difficult to visualize.

This is just speculation however, I only speak for myself.

Calum
calum.bulk (228)
4/28/2004 3:47:09 PM
alfredo@ncs.es (Alfredo Novoa) wrote in message news:<408e85be.22461417@news.wanadoo.es>...
> On 27 Apr 2004 08:35:45 -0700, tsolakp@yahoo.com (Tsolak Petrosian)
> wrote:
> 
> >wilvers1@vodafone.net (Ian Roberts) wrote in message news:<85ea541e.0404260746.43a4bb6f@posting.google.com>...
> >> Hi, I was wondering if anyone could tell me why object oriented
> >> approaches to programming are becoming very popular.
> >> 
> >> Thanks
> >> 
> >> Ian
> >
> >And what are the other approaches?
> 
> Structured, spaghetti, declarative, etc.
> 
> 
> Regards
>   Alfredo


Can't we do what can be done in "Structured, spaghetti, declarative, etc." without 
hackerish code?

Tsolak Petrosian
tsolakp (97)
4/28/2004 3:58:49 PM
On 2004-04-27, Shayne Wissler <thalesNOSPAM000@yahoo.com> wrote:
> We could reword it to take people like you into account: OO is closer to the
> way humans ought to think.
>

Though if you're not careful, you may find yourself changing to think in
terms of OO!

 - Richard

-- 
   _/_/_/  _/_/_/  _/_/_/ Richard dot Corfield    at    ntlworld dot com
  _/  _/    _/    _/      
 _/_/      _/    _/               Time is a one way street,
_/  _/  _/_/    _/_/_/                  Except in the Twilight Zone.
rcnews2 (30)
4/28/2004 8:20:23 PM
Responding to Roger...

> In my opinion OO programming is popular mainly for two reasons:
> 
> 1. Design is closer to the way humans think.

I have to disagree strongly here.  The human abilities for abstraction and 
pattern recognition enable OO to describe complex problem spaces.  But 
I don't think OO is methodologically very intuitive.

One would expect that if OO paralleled human thought, one would be able 
to construct OO applications faster than using other approaches. 
However, most data strongly suggests that one can build applications 
faster with functional or even procedural methodologies.


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
hsl@pathfindermda.com
Pathfinder Solutions  -- Put MDA to Work
http://www.pathfindermda.com
(888)-OOA-PATH




h.lahman (3600)
4/28/2004 11:34:24 PM
Responding to Roberts...

> Hi, I was wondering if anyone could tell me why object oriented
> approaches to programming are becoming very popular.

Because it works, even when misused.  Data collected during the SA/SD/SP 
era indicated that modifying existing code tended to require 30-50 times 
more effort than the initial development.  It still takes more effort to 
maintain code with OO but we are down to single digits.

[Anecdotal data point.  Where I worked before retiring we ran 
experiments before committing to OO; an initial 50 KLOC pilot and a 1 
MLOC rewrite.  On the pilot they changed the requirements in a major way 
just before release.  We estimated six months using traditional 
estimation data, and we turned the changes around in four weeks.  Even we could 
hardly believe it.  We measured nearly an order of magnitude drop in 
maintenance time on the MLOC application.  In addition, the maintenance 
staff dropped from eight full timers to one person half time.]

*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
hsl@pathfindermda.com
Pathfinder Solutions  -- Put MDA to Work
http://www.pathfindermda.com
(888)-OOA-PATH




h.lahman (3600)
4/28/2004 11:48:04 PM
wilvers1@vodafone.net (Ian Roberts) wrote in message news:<85ea541e.0404260746.43a4bb6f@posting.google.com>...
> Hi, I was wondering if anyone could tell me why object oriented
> approaches to programming are becoming very popular.

Certain software engineering principles (eg, abstraction,
dependency isolation) are better supported by OO than by
earlier incarnations of the procedural paradigm such as
modular programming.
grahamPerk (86)
4/29/2004 1:15:05 AM
On Wed, 28 Apr 2004 08:58:49 -0700, Tsolak Petrosian wrote:

> 
> Can't we do what can be done in "Structured, spaghetti, declarative, etc." without 
> hackerish code?
> 

In Structured and declarative, yes, we can.  

If you are truly dedicated to the spaghetti paradigm, bad is good, and the
worse the better.


droby2 (108)
4/29/2004 1:18:10 AM
On Wed, 28 Apr 2004 15:29:23 GMT, JXStern <JXSternChangeX2R@gte.net>
wrote:

>On Wed, 28 Apr 2004 02:12:00 -0400, Universe
><universe@tAkEcovadOuT.net> wrote:
>>Responding to those who argue that we don't really know how humans
>>reason, its more that:
>>	a) the world operates according to Z
>
>example: gravity
>
>>	b) humans notice the world in operation according to Z
>
>example: Newton bonked by apple
>
>>	c) most humans find that for the most complex circumstances in
>>		various domains, understanding and leverage are maximized when
>>		employing the Z conceptual modelling paradigm,
>
>example: v = 16*t^2
>
>         d) OOP is better at capturing these observations and
>                calculations.  
>
>example:  apple1.v = myframe.gravitationalaccelerationconstant * 
>             (myframe.makelocaltime(universe.absolutetime()) -
>                dropevent.inittime) ^ 2

Two corrections: 2 is actually

   universe.singletons.two.getValue ()

and infix notation is deprecated: "*" should be "Multiply", a method
of gravitationalaccelerationconstant etc.

>It's only (d) that is in question.

I find it interesting that Elliott writes about OOP[rogramming] as
something better than, what, scientific methods? It is a breathtaking
prospect to replace philosophy with programming!?? Was it just a typo,
Elliott?

>>Believe me (actually do a proper scientific study) it's as obvious and
>>straightforward as, "abc".
>
>We'd love to study you, but lack the funds.

--
Regards,
Dmitry Kazakov
www.dmitry-kazakov.de
mailbox2 (6357)
4/29/2004 7:58:31 AM
Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote in message news:<dac190134o2h4fbu9njv3gvh9r76c0dlsl@4ax.com>...
> On Wed, 28 Apr 2004 15:29:23 GMT, JXStern <JXSternChangeX2R@gte.net>
> wrote:
> 
> >On Wed, 28 Apr 2004 02:12:00 -0400, Universe
> ><universe@tAkEcovadOuT.net> wrote:
> >>Responding to those who argue that we don't really know how humans
> >>reason, its more that:
> >>	a) the world operates according to Z
> >
> >example: gravity
> >
> >>	b) humans notice the world in operation according to Z
> >
> >example: Newton bonked by apple
> >
> >>	c) most humans find that for the most complex circumstances in
> >>		various domains, understanding and leverage are maximized when
> >>		employing the Z conceptual modelling paradigm,
> >
> >example: v = 16*t^2
> >
> >         d) OOP is better at capturing these observations and
> >                calculations.  
> >
> >example:  apple1.v = myframe.gravitationalaccelerationconstant * 
> >             (myframe.makelocaltime(universe.absolutetime()) -
> >                dropevent.inittime) ^ 2
> 
> Two corrections: 2 is actually
> 
>    universe.singletons.two.getValue ()
> 
> and infix notation is deprecated: "*" should be "Multiply", a method
> of gravitationalaccelerationconstant etc.
> 
> >It's only (d) that is in question.
> 
You obviously know nothing about OO.  You're focusing on irrelevant
implementation details.  What you really need is

public interface GravityCalculator {
    public interface Exporter {
        void addGravitationalAccelerationConstant(String constant);
    }
    public interface Importer {
        String provideGravitationalAccelerationConstant();
    }
    double calculateGravity(TimeInterval t);
}

public class GravityCalculatorImpl implements GravityCalculator {
    private final double gravitationalAccelerationConstant;

    public GravityCalculatorImpl(Importer importer) {
        this.gravitationalAccelerationConstant = 
            Double.parseDouble(
                importer.provideGravitationalAccelerationConstant());
    }

    public void export(Exporter exporter) {
        exporter.addGravitationalAccelerationConstant(
         Double.toString(gravitationalAccelerationConstant));
    }

    public double calculateGravity(TimeInterval t) {
        //  Implementation details go here
        return 0.0;  // placeholder so the method compiles
    }
}

Then you could write something like

GravityCalculator.Importer importer = new GravityCalculator.Importer() {
    public String provideGravitationalAccelerationConstant() {
        return "16";
    }
};

GravityCalculator calculator = new GravityCalculatorImpl(importer);
double gravity = calculator.calculateGravity(t);

You might want to have a look at
http://java.sun.com/docs/books/tutorial/collections/interfaces/map.html
for an explanation of why you need to do things this way.

Regards,
Daniel Parker
4/29/2004 5:56:35 PM
"Daniel Parker" <danielaparker@hotmail.com> wrote in message
news:33feb190.0404290956.9c6d3d1@posting.google.com...
>
> You might want to have a look at
> http://java.sun.com/docs/books/tutorial/collections/interfaces/map.html
> for an explanation of why you need to do things this way.
>
Sorry, I meant
http://www.javaworld.com/javaworld/jw-01-2004/jw-0102-toolbox.html

Regards,
Daniel Parker


Daniel
4/29/2004 10:20:06 PM
On 29 Apr 2004 10:56:35 -0700, danielaparker@hotmail.com (Daniel
Parker) wrote:
....
>> and infix notation is deprecated: "*" should be "Multiply", a method
>> of gravitationalaccelerationconstant etc.
....
>You obviously know nothing about OO.  You're focusing on irrelevant
>implementation details.  What you really need is
[OO implementation of entire universe snipped]


.... and that, boys, girls, OT, ladies and gentlemen, Earthers, aliens,
and philosophers, is why OO is so darned popular.

LOL

J.


4/29/2004 10:35:21 PM
Daniel Parker wrote:

> You obviously know nothing about OO.  You're focusing on irrelevant
> implementation details.  What you really need is
> 
> public interface GravityCalculator {
>     public interface Exporter {
>         void addGravitationalAccelerationConstant(String constant);
>     }
>     public interface Importer {
>         String provideGravitationalAccelerationConstant();
>     }
>     double calculateGravity(TimeInterval t);
> }
> 
> public class GravityCalculatorImpl implements GravityCalculator {
>     private final double gravitationalAccelerationConstant;
> 
>     public GravityCalculatorImpl(Importer importer) {
>         this.gravitationalAccelerationConstant = 
>             Double.parseDouble(
>                 importer.provideGravitationalAccelerationConstant());
>     }
> 
>     public void export(Exporter exporter) {
>         exporter.addGravitationalAccelerationConstant(
>          Double.toString(gravitationalAccelerationConstant));
>     }
> 
>     public double calculateGravity(TimeInterval t) {
>         //  Implementation details go here
>         return 0.0;  // placeholder so the method compiles
>     }
> }
> 
> Then you could write something like
> 
> GravityCalculator.Importer importer = new GravityCalculator.Importer() {
>     public String provideGravitationalAccelerationConstant() {
>         return "16";
>     }
> };
> 
> GravityCalculator calculator = new GravityCalculatorImpl(importer);
> double gravity = calculator.calculateGravity(t);
> 
> You might want to have a look at
> http://java.sun.com/docs/books/tutorial/collections/interfaces/map.html
> for an explanation of why you need to do things this way.
> 
> Regards,
> Daniel Parker

You may not know this, but the code you used as an example is the reason 
that people don't like OO. I think you're proving the case against OO.

I looked at the article you posted (the fixed one) and a few developers 
had some comments on the code which I feel are interesting. There were a 
few "I agree with it" and a few "I disagree" messages as well. Most of 
it was a mini flame war which I won't bother posting.

Quotes:

"So, instead of "name=employee.getName();", we have 
"employee.export(myBuilder);name=myBuilder.getName();" Same difference - 
just hiding the getter (and setter) behind a pointless (and 
maintenance-unfriendly) layer of indirection. " - anonymous

"It seems with each subsequent article, more and more time is spent 
disputing readers who disagreed with something in Allen's previous article. 
Rather than presenting new topics, this Javaworld forum seems to be 
constantly clarifying, justifying, or altering the previous article to 
prove that Allen was right and those who doubt or disagree were wrong. I 
wonder if we all just responded "You're right Allen, and anyone who 
disagrees with you is an inexperienced procedural-thinking hacker" Allen 
would be happy and JavaWorld might publish something on a different 
topic. Frankly, I'm a little tired of seeing the same article over and 
over." - anonymous

Just as a note: JavaWorld doesn't post articles anymore.

Here are some of my own personal thoughts on the article, and your code:

The author of the article seems to think that because it is possible to 
misuse getters/setters, all getters/setters are evil. The problem with this 
argument is that just because there is a bad way to use getters/setters 
doesn't mean there isn't a good way to use them. Using getters/setters 
doesn't have to expose the implementation.
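
For instance (a hedged illustration, not taken from the article), a getter 
can report a value without revealing how the class stores it:

class Temperature {
    // Stored internally in kelvin; callers never learn that.
    private double kelvin;

    double getCelsius() { return kelvin - 273.15; }

    void setCelsius(double celsius) { kelvin = celsius + 273.15; }
}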

To solve the getter/setter problem he uses an exporter/importer that 
exports/imports to/from an interface to edit the properties, but this does 
nothing to eliminate the problem of exposing implementations (the 
importer/exporter could be used to export the implementation). Applying his 
logic to his own article, exporters/importers are evil because they can 
be used to export the implementation details of a class, so don't use them.

It appears that in your code you're trying to hide the type passed in, but 
your own code requires a String. It then converts it to a double so that 
it can use it. Your code could also parse the String into an int by 
providing a different implementation of a GravityCalculator. Your code 
just changes the type you're coupled to; it doesn't change the fact that 
you are still coupled to a type.

I think what you really wanted to do is pass in an Object. That way the 
constant could be a String, a Double, an Integer, etc. The 
implementation could be modified to use the appropriate type.

It appears you're trying to make a statically typed language act like a 
dynamically typed language with a lot of extra interfaces to abstract 
the datatype away.

In a dynamically typed language such as Smalltalk you can just pass in 
an Integer, a Double, or your own type. As long as the type you pass in 
knows how to respond to the appropriate messages (like *, +, -, / or 
whatever your calculation requires) it will work without adding any new 
classes.

Static languages are coupled to types. Static languages require type 
definitions so that the compiler can verify the types at compile time.

In conclusion, I think the author's claim that getters/setters are evil, 
followed by a complex solution that doesn't solve the problem he is 
trying to fix, only detracts from the "Why is OO popular" debate. Showing 
such code will only scare away people learning OO because they will 
think that it is proper OO code.

If you want to program in a way that doesn't have strong dependencies 
on types, then I suggest you program in a dynamic language instead of 
creating elaborate methods to try to simulate it in a statically typed 
language. You will be much happier.

A few languages you may want to check out are Python, Smalltalk, and Ruby.

Jeff Brooks
jeff_brooks (199)
4/30/2004 1:33:19 AM
Jeff Brooks:

> ... Actually, I think OO is the way people actually think.

There appears to be a relationship, yes.
However, people are capable of thinking in ways that are much
richer than the results obtained by orienting on Objects alone.


> Lets look at a simple example of a chair. People think of a chair as a thing. The thing has
> some properties to it (it likely has 4 legs, is
> meant to be sat on, etc).

People *can* think of a chair as a thing.
It is often, but not always wise to do this.


> When we learn about our first chair we treat it as a unique object. When we see our second
> chair we see it has similar properties to the first chair and we begin to form an
> understanding of chairness (the type chair).
>
> If I said I have a unique chair I built and asked if a person could sit
> on it, what would you say? Odds are you know that chair-like things are
> meant to be sat on, so my unique chair can be sat on.

Perhaps - but playing these odds can lead to unexpected results.

Suppose that in addition to a shared understanding of "properties",
we *presume* a shared understanding of "inheritance" and "hierarchy".

One might be tempted to apply something like
the "Liskov Substitution Principle" in arriving
at the expectation alluded to above.

Would you be surprised if I suggested that more information
was required to know whether I should agree?

I fully expect that, on reflection, you would see the problem.

We might detour a bit into a discussion of principles, and beliefs,
and chatter a bit about notions of class, and of delegation and
prototypes, of encapsulation and name spaces and bindings,
and of future performance and cached values, of identity
and persistence and referential integrity.

We might then return to the original question,
and realizing that "everything is an object"
is not to be taken *literally*, agree that we
really do need something more to describe
how people think - something akin to
*perspective*.

And so we might now remember to ask:

  " How long did it take you to build this chair? "
or
  " What makes this chair of yours unique? "
or, even
  " How many legs does this chair have?

before we decide

  " if a person could sit on it."


> Another example is a Person. If I asked you whether my friend (whom you have
> never met) has blood, what would you say?

"If we prick him, does he not bleed?"


....snip...

> To understand my sentence we have to understand a great deal that isn't communicated. We can
> communicate because of

not because of, but rather "only to the extent of"


> classifications we both understand.

and/or we are both utilizing "in the current context".


>  My understanding of these classifications, and many other classifications, allows me to
> function as a person.

Well said.


Regards,

-cstb

===

.... remainder snipped, including the rather excellent

> Quotes from "Design Principles Behind Smalltalk" - Byte mag Aug 1981

> The complete article is available here:
> http://users.ipa.net/%7edwighth/smalltalk/byte_aug81/design_principles_behind_smalltalk.html

jas9383 (60)
4/30/2004 4:24:52 AM
> >>>In my opinion OO programming is popular mainly for two reasons :
> >>>
> >>>1. Design is closer to the way humain think.
> >>
> >>Bull! It might map to the way SOME people think, but not
> >>me and not all.
> > 
> > We could reword it to take people like you into account: OO is closer to the
> > way humans ought to think.

"Ought to" ha ha ha. You God, me mortal.

> > 
> > Shayne Wissler
> 
> Actually, i think OO is the way people actually think.
> 
> Lets look at a simple example of a chair. People think of a chair as a 
> thing. The thing has some properties to it (it likely has 4 legs, is 
> meant to be sat on, etc).


Databases do the same thing. In fact, databases were purposely 
designed to manage large quantities of attributes. OO was
not. OO was originally designed to simulate physical
tugboat actions. Toot toot.


> 
> When we learn about our first chair we treat it as a unique object. When 
> we see our second chair we see it has similar properties to the first 
> chair and we begin to form an understanding of chairness (the type chair).

Yes, but if you actually start putting together traits and
attributes on paper and cataloging things, you will see that
the world is not really hierarchical. Philosophers
have known this for a long time. Set theory is better than
subtype theory for classifications. Sets can beat up
hierarchies and send them crying to their mommy. No contest.
(Sorry, I'm in a chest-thumping mood.)

-T-
topmind (2124)
4/30/2004 5:20:19 AM
On 29 Apr 2004 10:56:35 -0700, danielaparker@hotmail.com (Daniel
Parker) wrote:

>You obviously know nothing about OO.  You're focusing on irrelevant
>implementation details.  What you really need is
>
>public interface GravityCalculator {
>    public interface Exporter {
>        void addGravitationalAccelerationConstant(String constant);
>    }
>    public interface Importer {
>        String provideGravitationalAccelerationConstant();
>    }
>    double calculateGravity(TimeInterval t);
>}

Firstly, you took it too seriously.

Secondly, what many refer to as OO is in many cases just an artefact of the
programming languages they have become accustomed to. That is not related to
our Elliott, who, as somebody said, is rather a poet. So his poetical
views are somewhat consistent, being sometimes completely wrong. (:-))

Returning to your example of "good" OO, I would like to ask you very
simple questions:

1. What physical sense does the type "double" have? Namely, is gravitation
a double? [hint: it is not] That's for "implementation details".

2. Explain why a *constant* has to be "added", and what logical sense
does "adding" constants have? Importing constants? Should the WTO be involved? I
mean, when I import a GravitationalAccelerationConstant from USA, what
about customs duty?

>You might want to have a look at
>http://java.sun.com/docs/books/tutorial/collections/interfaces/map.html
>for an explanation of why you need to do things this way.

BTW->GetValue ("1") = I am not against OO.

BTW->GetValue ("2") = Getters/setters nightmare is a consequence of
OOPLs deficiency.

BTW->GetValue ("3") = Elliott wrote that the world talks OO.

--
Regards,
Dmitry Kazakov
www.dmitry-kazakov.de
mailbox2 (6357)
4/30/2004 8:17:12 AM
On 29 Apr 2004 22:20:19 -0700, topmind@technologist.com (Topmind)
wrote:

>Set theory is better than
>subtype theory for classifications.

And current OO languages don't support subtyping properly.

In OO, ellipse is a subclass of circle, but we all know that circle is
a subtype of ellipse.


Regards
  Alfredo


alfredo (205)
4/30/2004 10:01:38 AM
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
news:8414901ct50d6mo9nbhk1vhapsf28di2fn@4ax.com...
> On 29 Apr 2004 10:56:35 -0700, danielaparker@hotmail.com (Daniel
> Parker) wrote:
>
> >You obviously know nothing about OO.  You're focusing on irrelevent
> >implementation details.  What you really need is
> >
Responding to Dmitry
>
> Firstly, you took it too seriosly.
>
and also to Jeff Brooks

> You may not know this, but the code you used as an example is the reason
> that people don't like OO. I think you're proving the case against OO.
> I looked at the article you posted (the fixed one) and ... There were a
> few "I agree with it"

Tell me you guys are kidding, please?  If _anyone_ mistook my answer for
anything other than a joke, you are going to confirm JXStern's worst
nightmares.

Daniel


Daniel
4/30/2004 12:20:46 PM
alfredo@ncs.es (Alfredo Novoa) wrote in message news:<40922322.328191@news.wanadoo.es>...
> 
> And current OO languages don't support subtyping properly.
> 
> In OO, ellipse is a subclass of circle, but we all know that circle is
> a subtype of ellipse.
> 

See the Journal of Object Technology (www.jot.fm) for an ongoing
series of articles on the Theory of Classification.  It's up to part
12 right now, and a few of the parts covered type theory with respect
to subtyping, etc.  The articles are geared toward non-specialists,
but they're still pretty complex if you don't know the genre that
well.

And an OO ellipse is not a circle.  Claiming so breaks the Liskov
Substitution Principle.  Mathematically, a circle is a type of
ellipse... But only in math.

Steve
4/30/2004 4:44:18 PM
Daniel Parker wrote:
> Tell me you guys are kidding, please?  If _anyone_ mistook my answer for
> anything other than a joke, you are going to confirm JXStern's worst
> nightmares.
> 
> Daniel

No, I wasn't kidding. I was just worried that someone wanting to learn 
OO would look at this newsgroup, see your code, and never want to look 
at OO again.

By showing why your code has problems, beginners who can't tell the 
difference between good OO and bad OO won't run away screaming.

Jeff Brooks
jeff_brooks (199)
4/30/2004 8:53:08 PM
Alfredo Novoa wrote:
> On Mon, 26 Apr 2004 22:00:45 +0200, Mathieu Roger
> <mathieu.roger@imag.fr> wrote:
> 
> 
>>Classes can be understood as sets of entities, just as types can be
>>but the feature that is "almost" unique to classes is "set inclusion"
>>through inheritance
> 
> 
> Do you mean subtyping?

Yes, but limited to special cases

> 
> 
>>This is closer to the way human think, sets of things and there is no 
>>reason why two sets should always have a void intersection (generally
>>to types in a algol like language are disjoints)
> 
> 
> Yes, Algol does not support subtyping.


I meant that in procedural languages types are disjoint, but in OO one
can have non-disjoint types; it is more general, though less general than
logic.


> 
> 
>>2. Extensibility of certain families of class diagrams.
>>By using the only design pattern : interpretor
>>[by "only" I mean that all the design patterns I have seen use the 
>>abstract class + many sub-classes hierarchy of th interpretor pattern, 
>>it is a kind of "root" for all the other design patterns]
>>one can achieve the extensibility of the set represented by the abstract 
>>class, example for the set of instruction of a language
>>
>>abstract class instruction {
>>
>>  abstract execute ;
>>}
>>
>>concrete class assignment : instruction {
>>  execute ;
>>}
> 
> 
> This looks delegation to me.

That was not really what I meant; let me take another example:

abstract class Expression {

   abstract int evaluate();
}


class Plus extends Expression {

   int evaluate() {
      return left.evaluate() + right.evaluate();
   }

   Expression left;
   Expression right;
}

it relies on something that is not OO: recursion


> 
>>the number of sub-classes is not really important, one can add any new 
>>subclass whithout changing the other ones :
>>it allow to extend the functionnalities of a library whithout modifying 
>>the code, especially because you only use references to instructions and 
>>not to the special cases represented by the sub classes
> 
> 
> This is nice, but not very impressive.
> 

It is not very impressive: the ability to extend the set of object types 
comes at the price of difficulty in extending the set of operations (in a 
procedural language it is roughly the inverse). This trade-off is often 
called the expression problem; a sketch of the inverse follows below.
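
For contrast, a hedged sketch of the procedural inverse (names invented 
for the example): with a tagged record and a switch, adding a new 
operation is just one more function, but adding a new kind of node forces 
an edit in every existing switch.

enum Kind { LITERAL, PLUS }

class Node {
    Kind kind;
    int value;          // used when kind == LITERAL
    Node left, right;   // used when kind == PLUS
}

class Ops {
    // A new operation (prettyPrint, typeCheck, ...) is simply
    // another function like this one; nothing existing changes.
    static int evaluate(Node n) {
        switch (n.kind) {
            case LITERAL: return n.value;
            case PLUS:    return evaluate(n.left) + evaluate(n.right);
            // ...but a new Kind (MINUS, TIMES, ...) means editing
            // every switch like this one.
            default: throw new IllegalStateException("unknown kind");
        }
    }
}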
4/30/2004 9:51:26 PM
"Jeff Brooks" <jeff_brooks@nospam.com> wrote in message
news:U0zkc.306226$Pk3.41950@pd7tw1no...
> Daniel Parker wrote:
> > Tell me you guys are kidding, please?  If _anyone_ mistook my answer for
> > anything other than a joke, you are going to confirm JXStern's worst
> > nightmares.
> >
> > Daniel
>
> No, I wasn't kidding. I was just worried that someone wanting to learn
> OO would look at this newsgroup, see your code, and never want to look
> at OO again.
>
> By showing why your code has problems, beginners who can't tell the
> difference between good OO and bad OO won't run away screaming.
>
I see, well no doubt the excellent JXStern, having read your post, will now
have a clear understanding of good and bad OO, and will reconsider his
position.

Daniel


Daniel
4/30/2004 11:28:35 PM
cstb wrote:

> Jeff Brooks:
> 
>>... Actually, i think OO is the way people actually think.
> 
> There appears to be a relationship, yes.
> However, people are capable of thinking in ways that are much
> richer than the results obtained by orienting on Objects alone.

Are you sure?

The research that created Smalltalk was done by looking at how very 
young children interact with and think about the world, and they based the GUI 
+ Smalltalk on that (the first GUI was made by this research group). 
This research helps us understand the basics of understanding, 
because they looked at the primitive thoughts of people.

For example:

To interact with a thing you have to identify it. Children can identify 
things by pointing at them even if they don't know the word for it. This 
resulted in a pointing device being created, called the mouse, which would 
allow people to point at what they want to use. All access to objects in 
Smalltalk is done via references, so there is a uniform way to "point" 
at an object in code.

Children can't read well, but they can identify shapes and understand 
how to move things. So they concluded making an interface based on 
shapes and moving them is more natural than text interfaces. This 
resulted in shapes and the ability to pick them up and put them where 
you want them. Some of the shapes were icons, windows, etc.

Children understand certain things behave in certain ways. Things that 
behave in similar ways are easier for children to learn.

Allowing things in a computer that are different to behave in similar 
ways allows people to learn them faster and those things feel more 
natural. This is why an action like opening a document is done in the same 
way no matter what type of document it is in a GUI. The concept of 
allowing different things to behave in similar ways affects both the GUI 
and the programming language.

I think people can think in more complex ways as they get older but that 
doesn't mean they don't think in an object oriented way. Children don't 
understand logic, but they understand objects and classifications. I 
think people can learn logic, but I think we understand it by 
understanding things, and classifications.

Another way of putting it is we can program different types of languages 
using object oriented languages. That doesn't mean that OO isn't at the 
core of the new languages. Children first understand 
objects/classifications, so I think it is likely the basis of our 
understanding.

>>When we learn about our first chair we treat it as a unique object. When we see our second
>>chair we see it has similar properties to the first chair and we begin to form an
>>understanding of chairness (the type chair).
>>
>>If I said i have a unique chair i built and asked if a person could sit
>>on it what would you say? Odds are you know that chair like things are
>>meant to be sat on so my unique chair can be sat on.
> 
> Perhaps - but playing these odds can lead to unexpected results.
> 
> Suppose that in addition to a shared understanding of "properties",
> we *presume* a shared understanding of "inheritance" and "hierarchy".

I think we have a shared understanding of types and how they mix. People 
try to formalize them for programming purposes while the brain is much 
more dynamic. Our understanding of programming inheritance may be 
different, but I doubt our basic understanding of types is different.

If we both understand "big", and "chair" we should both understand "big 
chair" as a combination of those two things. We also understand "brown 
cows", "big ugly rock", "a broken mouse without a scroll wheel". These 
are ideas that are combinations of other things. We combine them in the 
same way because if we didn't we couldn't communicate with each other 
because our language is based on combinations of things.

My personal view on OO in computers is it represents a very static type 
of OO while our brains are a more dynamic OO. Our brains have the 
ability to deal with ambiguity without crashing. If you found out that 
not all chairs can be sat on (like a model chair that is too small to sit on) 
your brain doesn't suddenly fail even though every other chair you know 
of can be sat on. Having one instance that doesn't behave like other 
instances of the same type violates strict type rules in most 
programming languages.

> One might be tempted to apply something like
> the "Liskov Substitution Principle" in arriving
> at the expectation alluded to above.

I think the Liskov rule is a step to formalize OO usage because we 
haven't figured out how to have computers respond to ambiguity. I don't 
think the brain has this problem.

> Would you be surprised if I suggested that more information
> was required to know whether I should agree?
> 

No, but I would be surprised if you couldn't read my post and understand 
the little bit of knowledge I did send.

> I fully expect that, on reflection, you would see the problem.
> 
> We might detour a bit into a discussion of principles, and beliefs,
> and chatter a bit about notions of class, and of delegation and
> prototypes, of encapsulation and name spaces and bindings,
> and of future performance and cached values, of identity
> and persistance and referential integrity.
> 
> We might then return to the original question,
> and realizing that "everything is an object"
> is not to be taken *literally*, agree that we
> really do need something more to describe
> how people think - something akin to
> *perspective*.
> 

I think perspective is just based on our own personal knowledge. We both 
understand what a chair is, but if you grew up with a more artistic 
family your first chair may have had a single leg shaped like an 
upside-down funnel.

So your view of chair may be "likely has 1 or 4 legs", while mine is 
"likely has 4 legs". These differences in our knowledge are what I view 
as perspective. We both know different things, so we have a slightly 
different view of the same things, because "chair" means slightly 
different things to each of us.

> And so we might now remember to ask:
> 
>   " How long did it take you to build this chair? "
> or
>   " What makes this chair of yours unique? "
> or, even
>   " How many legs does this chair have?
> 
> before we decide
> 
>   " if a person could sit on it."

I agree, but I think we both understand generally that "chairs are meant 
to be sat on" even if we both have different criteria for sitting on it.

Also, having different criteria doesn't change that we both understand 
chair as a classification.

> .... remainder snipped, including the rather excellent
> 
>>Quotes from "Design Principles Behind Smalltalk" - Byte mag Aug 1981
> 
>>The complete article is available here:
>>http://users.ipa.net/%7edwighth/smalltalk/byte_aug81/design_principles_behind_smalltalk.html

Seriously, that document I included a link to is just amazing. It really 
shows why OO is the way it is. I think everyone should read it even if 
they don't like dynamic programming languages.

Jeff Brooks
0
jeff_brooks (199)
4/30/2004 11:31:34 PM

Alfredo Novoa wrote:

> ... And current OO languages don't support subtyping properly.

In what sense do you mean this?

> In OO ellipse is a subclass of circle, but all we know that circle is
> a subtype of ellipse.

Borrowing from an argument attributed to former President Clinton,
(albeit in a different context) -
   "It depends what the definition of 'is' is".

Neither OO subclassing nor the theory of subtypes
suggests anything about which view is correct -
because either view *can* be correct.

We must move away from the realm of abstraction
and consider concrete implementations to know
whether the choice of position in the subclass hierarchy
results in a particular "subtype" relationship.

If we choose to implement Ellipse as a subclass of Circle,
and we want the relation "Circle is-a-subtype-of Ellipse",
then we must ensure that wherever an Ellipse is the expected
receiver, we may instead supply a Circle.  Therefore, any
method defined on Ellipse must also be defined on Circle.
Therefore, we cannot add any methods to Ellipse - we
can only override implementations inherited from Circle.
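
To sketch that constraint in Java-like code (the class names and the
choice of accessors are mine, purely for illustration):

class Circle {
    protected double a;               // the single semi-axis (the radius)
    double semiMajor() { return a; }
    double semiMinor() { return a; }  // both axes coincide for a circle
}

class Ellipse extends Circle {
    protected double b;               // the second semi-axis
    double semiMinor() { return b; }  // override only; no new methods, so
                                      // every method defined on Ellipse is
                                      // also defined on Circle
}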

In the remaining three cases (swapping either the positions
in the subclass hierarchy or positions in the subtype relation),
we'll get a different set of constraints, but we can always
derive a "correct" implementation.  All four implementations
will differ, but only in ways which we did not restrict.

For example, in another case, we will be forced to include
what might be considered a "redundant" instance variable.
If we expect to have lots of instances at the same time,
this might not be a very good implementation choice.
Yet if we do not expect to have many instances at
the same time, or if we in fact have oodles of memory
on the target machine, it may be a good choice for
some other reason we do care about.

Yet another case will result in a definition of 'ellipse'
which cannot express the entire range of ellipses
we might think of as 'mathematically elliptical'.
(Of course, this is *always* the case, but we tend
to ignore this fact, because the holes are less obvious,
and less likely to be in conflict with our actual intent).

Making beneficial design choices for a given context
is often more about "skilled art" than "skilled craft".
In certain areas of endeavor, this is referred to as
"Architecture", and while schools that teach it
can be found grouped within a "College of Engineering",
the variation in results has more to do with the "art",
and less to do with the specific engineering knowledge employed.

The discipline of computer science is young, and still
in development, which might be a reason these distinctions
aren't as sharply delineated as they are elsewhere.

-cstb

0
jas9383 (60)
4/30/2004 11:38:54 PM
On 30 Apr 2004 09:44:18 -0700, stevenwurster@lycos.com (Steven
Wurster) wrote:


>See the Journal of Object Technology (www.jot.fm) for an ongoing
>series of articles on the Theory of Classification.

Thanks for the link..

>And an OO ellipse is not a circle. 

In the typical OO designs it is, and squares are not rectangles.

> Claiming so breaks the Liskov
>Substitution Principle.

How?

>  Mathematically, a circle is a type of
>ellipse... But only in math.

Subtyping is only math. OO does not match math, thus what OO does
is not subtyping.


Regards
  Alfredo

0
alfredo (205)
5/1/2004 12:13:54 AM
Topmind wrote:

>>When we learn about our first chair we treat it as a unique object. When 
>>we see our second chair we see it has similar properties to the first 
>>chair and we begin to form an understanding of chairness (the type chair).
> 
> Yes, but if you actually start putting together traits and
> attributes on paper and cataloging things, you will see that
> the world is not really hierarchical. Philosophers
> have known this for a long time. Set theory is better than
> subtype theory for classifications. Sets can beat up
> hierarchies and send them crying to their mommy. No contest.
> (Sorry, I'm in a chest-thumping mood.)
> 
> -T-

Not all OO classification is hierarchical. There is multiple inheritance 
as well, which allows for the same kinds of subclassing as combinations 
of sets.

Even with single inheritance one can define something that combines two 
unrelated classes. If one has a concept of a Person and a 
BillingAddress, one can create a new type called Customer that is a 
combination of the two ideas.

For example (Java):

class Person
{
	private String name;

	void setName(String name) { this.name = name; }
	String getName() { return name; }
}

class BillingAddress
{
	private String address;

	void setAddress(String address) { this.address = address; }
	String getAddress() { return address; }
}

class Customer
{
	private BillingAddress billingAddress = new BillingAddress();
	private Person person = new Person();

	// These delegate to billingAddress, and person
	void setName(String name) { person.setName(name); }
	String getName() { return person.getName(); }
	void setAddress(String address) { billingAddress.setAddress(address); }
	String getAddress() { return billingAddress.getAddress(); }
}


The problem with the above code is: how do you use a Customer as 
the two base types?

Smalltalk uses dynamic typing, so objects in different parts of the 
hierarchy can be used in the same way if they respond to the same 
messages. So you can just pass in a Customer and use it like a 
BillingAddress, a Person, or a Customer.

In Java you have to make interfaces for BillingAddress and Person. 
Customer is an interface that inherits from both Person and 
BillingAddress (Java supports multiple inheritance for interfaces).
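
For example (a sketch; method names taken from the code above, the
rest assumed):

interface Person
{
	String getName();
}

interface BillingAddress
{
	String getAddress();
}

// java allows an interface to inherit from many interfaces
interface Customer extends Person, BillingAddress
{
}

class CustomerImpl implements Customer
{
	private String name;
	private String address;

	public String getName() { return name; }
	public String getAddress() { return address; }
}

Now a CustomerImpl can be passed wherever a Person or a BillingAddress
is expected.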

The world isn't hierarchical. Languages like Java deal with this with 
interfaces; Smalltalk deals with this via dynamic typing. I'm not sure 
what you think sets can do that existing programming languages can't do.

Can you give an example of classifications that can't be represented 
in Java or in Smalltalk, but can be represented with sets?

Jeff Brooks
0
jeff_brooks (199)
5/1/2004 12:42:41 AM
Jeff Brooks wrote:
>> Shayne Wissler wrote:

>>> "Topmind" <topmind@technologist.com> wrote in message
>>> news:4e705869.0404262150.3ceaf835@posting.google.com...

 >>>> someone wrote:
 >>>>
>>>> In my opinion OO programming is popular mainly for two reasons :
>>>>
>>>> 1. Design is closer to the way humain think.

>>> Bull! It might map to the way SOME people think, but not
>>> me and not all.

>> We could reword it to take people like you into account:
>> OO is closer to the way humans ought to think.

> Actually, i think OO is the way people actually think.

Well, there we have a range of opinions and no consensus.
There is also a question of whether by "design" is meant
the process of designing software, the way an OO system
is organized (the product), or the mental process by which
some application software is designed. These can all be
quite different.

> Lets look at a simple example of a chair. People think of a chair as a 
> thing. The thing has some properties to it (it likely has 4 legs, is 
> meant to be sat on, etc).
> 
> When we learn about our first chair we treat it as a unique object. When 
> we see our second chair we see it has similar properties to the first 
> chair and we begin to form an understanding of chairness (the type chair).
> 
> If I said i have a unique chair i built and asked if a person could sit 
> on it what would you say? Odds are you know that chair like things are 
> meant to be sat on so my unique chair can be sat on.

Weren't the first chairs some natural thing, like a fallen log,
which a person simply sat on? Then we began to design (plagiarizing
nature) our own fancier chairs to have similar qualities.

But, in the one instance we simply used the log by sitting, in the
other instance we had to design something and then the final product,
a man-made chair, had some functionality and structure. All these
are different.

> Another example is a Person. If i asked you if my friend (that you have 
> never met) has blood what would you say? Odds are you would say they 
> have blood because you understand a what a person is and you know a 
> person has blood.
> 
> A humans ability to communicate with other people isn't based on their 
> ability to send the complete information they know but rather we both 
> have an understanding of basic classes of things and we use the same 
> symbolic representation to define them which allows us to communicate.
> 
> If i say "Joe is sitting in a chair." i really haven't said much but you 
> know what i'm saying because we both have a shared understanding of a 
> chair. We both know that Joe is a name of a person so Joe is a person. 
> Joe is a males names so Joe is male. We both know that people have blood 
> so Joe has blood. To understand my sentence we have to understand a 
> great deal that isn't communicated. We can communicate because of 
> classifications we both understand.
> 
> People understand and interact with the world based classes of things.

But, we also are limited by our senses and don't completely sense
all the "world" around us. And, our interaction with specific
things is often on our own human terms, rather than based on
their class-like qualities. We might throw a baseball, but we
can also crush one with a steamroller or smash one down on
someone's head (not recommended). Which of these is ball-like?
Which of these is utility chosen by the human? What class of
object is manipulated by humans -- all of them.

> This allows me to use a computer i have never seen before because I know 
> that computers behave like other computers. I can walk on ground i have 
> never been on before because i understand slopes, and texture of 
> materials of ground i have never been on but they have the same 
> properties of other ground i have been on.

The object being manipulated is as much the human body as it is
the ground/beach/mountain, etc.

> A computer is a classification of device, slopes are classifications of 
> the ground, texture is a classification of a material. My understanding 
> of these classifications, and many other classifications, allows me to 
> function as a person.

What understanding does one have to have of some new object,
say a chunk of something rock-like, which later turns out to
be from outer space, in order to put it in one's mouth and
chew on it?  :-)   Is that a recognition of its class or
of someone's idea that they should test its hardness to see
if it's gold or something else?

> Quotes from "Design Principles Behind Smalltalk" - Byte mag Aug 1981
> 
> "In designing a language for use with computers, we do not have to look 
> far to find helpful hints. Everything we know about how people think and 
> communicate is applicable. The mechanisms of human thought and 
> communication have been engineered for millions of years, and we should 
> respect them as being of sound design. Moreover, since we must work with 
> this design for the next million years, it will save time if we make our 
> computer models compatible with the mind, rather than the other way 
> around."
> 
> "The mind observes a vast universe of experience, both immediate and 
> recorded. One can derive a sense of oneness with the universe simply by 
> letting this experience be, just as it is. However, if one wishes to 
> participate, literally to take a part, in the universe, one must draw 
> distinctions.

But, like subjects of scholarship, such differentiating is only
the result of our human limitation and is not inherent in the
subject matter. I know this is getting to the philosophical/
spiritual level of discussion but that's just the nature of
things.

> In so doing one identifies an object in the universe, and 
> simultaneously all the rest becomes not-that-object. Distinction by 
> itself is a start, but the process of distinguishing does not get any 
> easier. Every time you want to talk about "that chair over there", you 
> must repeat the entire processes of distinguishing that chair. This is 
> where the act of reference comes in: we can associate a unique 
> identifier with an object, and, from that time on, only the mention of 
> that identifier is necessary to refer to the original object. "

That there are distinct objects is peculiar and yet seems very real.
That there would seem to be distinct objects to some other creature
of this world isn't certain. Take for example a small living
creature which can float through the air like dust and is pushed
around by air or dust or rain or when it lands on 'solid ground'.
To that kind of creature the world would perhaps seem to be all of
one material and it simply moves through that, if it recognizes
itself as distinct from the whole at all. Perhaps, like a fetus
in the womb, it doesn't recognize distinct differences at all.

> "Classification is the objectification of nessness. In other words, when 
> a human sees a chair, the experience is taken both literally an "that 
> very thing" and abstractly as "that chair-like thing". Such abstraction 
> results from the marvelous ability of the mind to merge "similar" 
> experience, and this abstraction manifests itself as another object in 
> the mind, the Platonic chair or chairness. "
> 
> The complete article is available here:
> http://users.ipa.net/%7edwighth/smalltalk/byte_aug81/design_principles_behind_smalltalk.html 
> 
> 
> Jeff Brooks

I don't argue against OO, just against the idea that OO is somehow a
perfect example of how humans relate to the world or how we think.
Let's not imbue OO with more than it really is, a tool.
0
hathawa2 (78)
5/1/2004 4:20:22 PM
Jeff Brooks wrote:

>> cstb wrote:
>> 
>>> Jeff Brooks:
>>>
>>> ... Actually, i think OO is the way people actually think.
>>
>> There appears to be a relationship, yes.
>> However, people are capable of thinking in ways that are much
>> richer than the results obtained by orienting on Objects alone.
> 
> Are you sure?
> 
> The research that created Smalltalk was done by looking at how very 
> young children interact and think about the world and they based the GUI 
> + Smalltalk on that (the first gui was made by this research group). 
> This research allows us to understand the basics of understanding 
> because they looked at the primitive thoughts of people.
> 
> For example:
> 
> To interact with a thing you have to identify it. Children can identify 
> things by pointing at them even if they don't know the word for it. This 
> resulted in a pointing device being created called the mouse which would 
> allow people to point at what they want to use. All access to objects in 
> Smalltalk are done via references so there is a uniform way to "point" 
> at an object in code.
> 
> Children can't read well, but they can identify shapes and understand 
> how to move things. So they concluded making an interface based on 
> shapes and moving them is more natural than text interfaces. This 
> resulted in shapes and the ability to pick them up and put them where 
> you want them. Some of the shapes were icons, windows, etc.
> 
> Children understand certain things behave in certain ways. Things that 
> behave in similar ways are easier for children to learn.
> 
> Allowing things in a computer that are different to behave in similar 
> ways allows people to learn them faster and those things feel more 
> natural. This is why actions like opening a document is done in the same 
> way no matter what type of document it is in a GUI. The concept of 
> allowing different things to behave in similar ways effects both the GUI 
> and the programming language.

What you're describing makes me think an OO program is
designed for a human being to execute, rather than a computer.
But, what about the process of designing an app/OO program?
Is that process natural for humans? Maybe the maintenance
programmer would have an easier time reviewing the universe
of that program, discerning the objects and utilizing them,
but to "play God" and create new objects is certainly not
a natural human practice. Though we've begun to do just
that in the last several thousand years it's not at all
clear that OO-thinking is related to the human creative
process.

> I think people can think in more complex ways as they get older but that 
> doesn't mean they don't think in an object oriented way. Children don't 
> understand logic, but they understand objects and classifications. I 
> think people can learn logic but i think we understand it by 
> understanding things, and classifications.
> 
> Another way of putting it is we can program different types of languages 
> using object oriented languages. That doesn't mean that OO isn't at the 
> core of the new languages. Children first understand an 
> objects/classifications so i think it is likely the basis of our 
> understanding.

....

>>> Quotes from "Design Principles Behind Smalltalk" - Byte mag Aug 1981
>>
>>
>>> The complete article is available here:
>>> http://users.ipa.net/%7edwighth/smalltalk/byte_aug81/design_principles_behind_smalltalk.html 
>>>
> 
> 
> Seriously, that document i included a link to is just amazing. It really 
> shows why OO is they way it is. I think everyone should read it even if 
> they don't like like dynamic programming languages.
> 
> Jeff Brooks
0
hathawa2 (78)
5/1/2004 4:43:31 PM
|
|Yes, but if you actually start putting together traits and
|attributes on paper and cataloging things, you will see that
|the world is not really hierarchical. Philosophers
|have known this for a long time. Set theory is better than
|subtype theory for classifications. Sets can beat up
|hierarchies and send them crying to their mommy. No contest.
|(Sorry, I'm in a chest-thumping mood.)

But you are definitely right, to the bone. 

Back to the original question: OO is popular because of C++, i.e. it is 
the only extension of C (and to a lesser degree of Pascal) produced by 
major software companies like Borland or MS. It allowed people to 
stay compatible with earlier code, and to progress in the level of 
abstraction slowly. That is a very successful marketing combination, 
and programmers accepted it with a lot of enthusiasm. I remember I 
struggled a lot before I realized that it was not my lack of 
understanding of OO, but OO itself, that was the reason for the problems. 

It would have been much better if major compiler vendors had been 
influenced by SETL instead of Smalltalk. But, if a play on words is 
allowed here: if one is 30 years ahead of his time, he cannot expect 
that his time will come next year.


-- 
Kazimir Majorinc 
0
5/1/2004 7:47:32 PM
|
|Can you give an example of classifications that can't be represented 
|in Java or in Smalltalk, but can be represented with sets?
|
|Jeff Brooks
|

How can you make an object o change its class membership during 
runtime, so that in one moment o is an element of c1, c2, c5 and in 
another it is an element of c4 and c6 (and generally, any combination 
is allowed)? And such that every class ci has its own member function 
f, with f(o) defined as 

f(o) = min{ ci.f(o) | o is an element of ci at the moment f is computed }

using only classes for the membership relation? Any OO language.
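
To make the point concrete, here is roughly what one is forced to 
write, with membership kept in explicit runtime sets instead of 
classes (a Java sketch; all names are invented for illustration):

import java.util.*;

interface Ci {
	double f(Object o);
}

class Membership {
	private final Map<Object, Set<Ci>> members = new HashMap<Object, Set<Ci>>();

	void join(Object o, Ci ci) {
		Set<Ci> s = members.get(o);
		if (s == null) members.put(o, s = new HashSet<Ci>());
		s.add(ci);
	}

	void leave(Object o, Ci ci) {
		Set<Ci> s = members.get(o);
		if (s != null) s.remove(ci);
	}

	// f(o) = min{ ci.f(o) | o is an element of ci right now }
	double f(Object o) {
		double min = Double.POSITIVE_INFINITY;
		Set<Ci> s = members.get(o);
		if (s != null)
			for (Ci ci : s) min = Math.min(min, ci.f(o));
		return min;
	}
}

The membership relation lives in the map, not in the class hierarchy.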

-- 
Kazimir Majorinc 
0
5/1/2004 9:52:11 PM
alfredo@ncs.es (Alfredo Novoa) wrote in message news:<4092e989.4349984@news.wanadoo.es>...
> 
> >And an OO ellipse is not a circle. 
> 
> In the typical OO designs it is, and squares are not rectangles.

In a lot of the "stereotypical" examples you'll see for OO designs,
ellipses are circles and squares are rectangles.  Unfortunately, these
designs are wrong (based on breakage of the LSP, see below).  And to
make matters worse, it seems that these examples are given by people
who don't seem to know what the LSP is, nor what the Open Closed
Principle is.  At least, they never seemed to get mentioned in these
examples.  It's a shame, as it gives people the wrong impression about
OO designs.


> > Claiming so breaks the Liskov Substitution Principle.
> 
> How?

We'll use the square/rectangle example.  If square inherits from
rectangle, then we know the standard problem that the sides of a
rectangle can be changed independently, but that's not true for a
square.  Because of that, a square cannot be treated like a rectangle
by clients, who only know the interface provided by rectangle, and
think they are getting a rectangle when they are actually getting a
square at run-time (which is fine when the LSP is upheld).

Looking at these side-setting routines for rectangle, we see that
their contracts tell us that when we attempt to change the length of a
side, the change is successful, and the new length is what we
requested.  But square breaks that by (most likely) setting the other
side equal to the new one that we just changed.  But this breaks the
contract, because clients of rectangle aren't expecting that other
side to change.  So, this kind of inheritance breaks the LSP, which of
course says that descendant classes must be able to pass for
ancestors.
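
A quick sketch of the breakage (class and method names are mine, just
for illustration):

class Rectangle {
    protected int width, height;

    void setWidth(int w)  { width = w; }
    void setHeight(int h) { height = h; }
    int area() { return width * height; }
}

class Square extends Rectangle {
    // preserves the square invariant, but breaks rectangle's contract:
    // setting one side silently changes the other
    void setWidth(int w)  { width = w; height = w; }
    void setHeight(int h) { width = h; height = h; }
}

class Client {
    static int stretch(Rectangle r) {
        r.setWidth(4);
        r.setHeight(5);
        return r.area();  // rectangle's contract implies 20,
                          // but a Square passed in here yields 25
    }
}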


> Subtyping is only math. OO does not match with math thus what OO does
> is not subtyping.

OO does not claim to match math.  Calling a descendant class a subtype
is misleading.  In fact, in a large number of cases, descendants might
be considered supertypes, as they tend to add features.  In reality,
descendants are simply specializations of their ancestors, not
subtypes.  Many people, especially those who teach OO incorrectly (as
mentioned above), make this mistake.  Descendants are not expected to
cover "less" cases, as doing so breaks the LSP.

Steve
0
5/2/2004 1:47:34 AM
> 
> The world isn't hierarchical. Languages like Java deal with this with 
> interfaces; Smalltalk deals with this via dynamic typing. I'm not sure 
> what you think sets can do that existing programming languages can't do.

It is not a matter of "can't do", because they are all Turing
Equivalent. It is a matter of human convenience (maintainability).

Your "customer" example is essentially a navigational database
hard-wired into app code. App code is a sh*tty place to store
relationship schemas IMO. Maybe you like it and like navigational
techniques (for some odd reason), but I don't. They happily died in
the 70's thanks to Dr. Codd, but OO zealots resurrected it from the
dead for OO and OODBMS. Does this mean goto's are coming back?
(Navigational is the Goto of data structures.)

-T-
0
topmind (2124)
5/2/2004 5:11:33 AM
Steven Wurster wrote:

> > >And an OO ellipse is not a circle.
> >
> > In the typical OO designs it is, and squares are not rectangles.
>
> In a lot of the "stereotypical" examples you'll see for OO designs,
> ellipses are circles and squares are rectangles.  Unfortunately, these
> designs are wrong (based on breakage of the LSP, see below).

LSP depends on behavior. Some behaviors for circle/ellipse break LSP if done
wrong.

But the project's domain dictates which behaviors the shapes get, which in
turn dictates what inheritance - if any - the shapes get.

-- 
  Phlip
    http://www.xpsd.org/cgi-bin/wiki?TestFirstUserInterfaces


0
phlip_cpp (3852)
5/2/2004 5:17:08 AM
|dead for OO and OODBMS. Does this mean goto's are coming back?
|(Navigational is the Goto of data structures.)
|

Even goto's are OK. Without goto's only very simple program control 
flow can be organized in code, and for anything complex, labyrinth-like 
control flow, one must make one's own pocket emulator of goto's 
with a list, where indexes of the list emulate labels and some counter 
emulates the number of the line in the program. For example, try to write 
a program for a game like Monopoly, you know: roll the dice, if the result 
is 7 do xxx and go three fields back, if the result is 9 do yyy and go 7 
fields forward. With goto's it is trivial. Without goto's it is not trivial 
any more; not very hard, but certainly harder and less readable than 
with gotos. 

There are similarities between the SP movement and the OO movement: 
both tried to elevate simple and sometimes useful, but relatively 
weak, tools (loops, objects) to the level of "the way humans think", 
and both ignored significant portions of mathematics (Turing 
machines and set theory, respectively) that cannot be described 
adequately without writing a pocket emulator for oneself. 

OO is, of course, worse, because it is much more complex, and it is 
not easy to see where it leads. Some people think that it leads 
nowhere, but many still struggle and think that if they do not 
understand how to model the circle and ellipse relation, the problem 
might be in them, not in the paradigm itself. They should ask 
themselves: if people do not understand how to model ellipse and 
circle, then what will happen with really complex problems? 


-- 
Kazimir Majorinc 
0
5/2/2004 11:50:47 AM
"Phlip" <phlip_cpp@yahoo.com> wrote in message news:<ov%kc.764$np3.460@newssvr15.news.prodigy.com>...
> 
> LSP depends on behavior. Some behaviors for circle/ellipse break LSP if done
> wrong.
> 
> But the project's domain dictates which behaviors the shapes get, which in
> turn dictates what inheritance - if any - the shapes get.

That's true, and I totally forgot about it.  If you make your shapes
immutable, then the chances of violating LSP go way down, assuming
there's any kind of inheritance.
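
Something like this, say (a sketch, names made up):

class Rectangle {
    private final double width, height;

    Rectangle(double width, double height) {
        this.width = width;
        this.height = height;
    }

    double area() { return width * height; }
}

class Square extends Rectangle {
    Square(double side) { super(side, side); }
}

With no mutators there is nothing a client can call that would break
the square's invariant, so a Square can safely pass for a Rectangle.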

Steve
0
5/2/2004 11:56:58 AM
I'm not familiar with set-based languages so I have to make some 
assumptions about what you're asking (I'm not 100% convinced I understand 
the terminology you're using in the context of set languages).

Please correct me if I'm wrong.

Majorinc wrote:
> |
> |Can you give an example of classifications that can't be represented 
> |in Java or in Smalltalk, but can be represented with sets?
> |
> |Jeff Brooks
> |
> 
> How can you make an object o change its class membership during 
> runtime, 

By class membership do you mean "the class of the object"? Are you 
asking if 'o' can change its class at runtime?

In both Smalltalk, and Self this can be done.

People tend to find dynamic inheritance confusing, so they typically 
don't change the class hierarchy, or the class of an object, at runtime 
except during development.

Both Smalltalk and Self have a runtime that is constantly, well, 
running. The user uses editors, or code, to change the structure of the 
class hierarchy (instance hierarchy in Self) at runtime to create 
software in these languages. All class/instance structures in 
Smalltalk/Self can be updated at runtime in both languages.

Or did you mean class membership to refer to methods/variables of the 
class? Both Smalltalk and Self can change these at runtime as well.

> so that in one moment o is an element of c1, c2, c5 

To me an element is a part of something (like a specific item in an 
array). Perhaps this means something different in a set language.

In OO you can reference the same object from many other objects. Is this 
what you're talking about?

Does being an element of something change the behavior of the element in 
some way in set-based languages? Does it change the behavior of the set 
containing the element?

> and in another it is 
> an element of c4 and c6 (and generally, any combination is allowed)? 
> And such that every class ci has its own member function f, with f(o) 
> defined as 
> 
> f(o) = min{ ci.f(o) | o is an element of ci at the moment f is computed }

In OO, if there is a class "ci" and you pass in "o", "o" can become a 
part of "ci" if the method chooses to do that, but it doesn't have to.

I doubt that's what you mean.

How is your code different from this:
	set.add("test"); // the string "test" becomes an element of set.

Jeff Brooks
0
jeff_brooks (199)
5/2/2004 9:51:40 PM
On Fri, 30 Apr 2004 16:38:54 -0700, cstb <jas@cruzio.com> wrote:

>Neither OO subclassing or the theory of subtypes
>suggests anything about which view is correct -
>because either view *can* be correct.

Circle is a subtype of ellipse and the contrary is false. All circles
are ellipses, but not all ellipses are circles.

Integer is also a subtype of rational, and rational a subtype of real, etc.

For example:

var a: real;
a := 2.5;
a := a - 0.5;
if a is integer then 
  WriteLn('Correct')
else
  WriteLn('OO is mad');

A good language should show 'Correct'.
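
In Java one can only emulate that test as a predicate on the value,
not as a type test (a sketch):

class IsIntegerDemo {
	public static void main(String[] args) {
		double a = 2.5;
		a = a - 0.5;
		// "a is integer" emulated by checking the value
		System.out.println(a == Math.rint(a) ? "Correct" : "OO is mad");
	}
}

Which rather proves the point: the language has no notion that the
value 2.0 also has the type integer.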

>We must move away from the realm of abstraction
>and consider concrete implementations to know
>whether the choice of position in the subclass hierarchy
>results in a particular "subtype" relationship.

No, subtyping is only an abstract issue.

>If we choose to implement ellipse as a subclass of circle,
>and we want the relation "Circle is-a-subtype-of Ellipse",

Then it is clear that subclassing and subtyping are very different
things.

>then we must ensure that wherever an Ellipse is the expected
>receiver, we may instead supply a circle.  Therefore, any
>method defined on ellipse must also be defined on Circle.

Like SetSemiAxisLength(), which is nonsensical for a circle.

>Therefore, we cannot add any methods to Ellipse - we
>can only override implementations inherited from Circle.

Then we cannot create a good model of circle and ellipse. Ellipse
should have methods not present in circle like: rotate(...) and
SetSemiAxisLength(...).

var e: ellipse;
e := ellipse(10,10,2,5);
var c: circle;
c := e; // error;
SemiAxisB(e) := 2;
c := e; // OK;
Radius(c) := 10; //OK
c.rotate(30); //error
e.rotate(30) ; // OK
e := c;  //OK
e := circle(10, 10, 10); // OK
SemiAxisB(e) := 2; // OK
c := e; // error


>In the remaining three cases (swapping either the positions
>in the subclass hierarchy or positions in the subtype relation),
>we'll get a different set of constraints, but we can always
>derive a "correct" implementation.

The only model that matches reality is: circle is a subtype of
ellipse.

OO is unable to model this simple reality. But you can derive
"correct" implementations of flawed models.

>For example, in another case, we will be forced to include
>what might be considered a "redundant" instance variable.
>If we expect to have lots of instances at the same time,
>this might not be a very good implementation choice.

You are assuming we are using a typical OO language. If you are forced
to do that, it is because the language is flawed.

>Yet another case will result in a definition of 'ellipse'
>which cannot express the entire range of ellipses
>we might think of as 'mathematically elliptical'.

But then it is not ellipse. Then you are not solving the problem, only
changing it.

>Making beneficial design choices for a given context
>is often more about "skilled art" than "skilled craft".

We are forced to look for workarounds, because OO is not able to model
reality. 

>The discipline of computer science is young, and still
>in development,

Indeed, that's my point! There are many things to fix in OO and OO is
not the end of the road. But many OO practitioners think that OO is
the perfect silver bullet, and all criticism and research is heresy.

For instance in OO one method belongs to a single class only, but in
the real world an operator might belong to several types.


Regards
  Alfredo
0
alfredo (205)
5/2/2004 9:54:26 PM
On Sun, 2 May 2004 13:50:47 +0200, Majorinc, Kazimir
<kazimir@chem.pmf.notcombuthr> wrote:

>OO is, of course, worse, because it is much more complex, and it is 
>not easy to see where it leads. Some people think that it leads 
>nowhere, but many still struggle and think that if they do not 
>understand how to model the circle and ellipse relation, the problem 
>might be in them, not in the paradigm itself.

They are wrong because the problem is in the pointer based inheritance
model.

> They should ask 
>themselves: if people do not understand how to model ellipse and 
>circle, then what will happen with really complex problems? 

The model fails at representing circle and ellipse; then what will happen
with really complex problems?


Regards

0
alfredo (205)
5/2/2004 10:10:18 PM
Mark S. Hathaway wrote:
> Jeff Brooks wrote:
>> Lets look at a simple example of a chair. People think of a chair as a 
>> thing. The thing has some properties to it (it likely has 4 legs, is 
>> meant to be sat on, etc).
>>
>> When we learn about our first chair we treat it as a unique object. 
>> When we see our second chair we see it has similar properties to the 
>> first chair and we begin to form an understanding of chairness (the 
>> type chair).
>>
>> If I said i have a unique chair i built and asked if a person could 
>> sit on it what would you say? Odds are you know that chair like things 
>> are meant to be sat on so my unique chair can be sat on.
> 
> 
> Weren't the first chairs some natural thing, like a fallen log,
> which a person simply sat on? Then we began to design (plagiarizing
> nature) our own fancier chairs to have similar qualities.
> 

No, logs are not chairs but they can be sat on.

> But, in the one instance we simply used the log by sitting, in the
> other instance we had to design something and then the final product,
> a man-made chair, had some functionality and structure. All these
> are different.

I'm not arguing that there aren't different things that have similar 
properties.

The point is people make classifications of things. We then use these 
classifications both to identify objects and to make assumptions about 
new objects based on the classifications.

An example of this is if you see a chair you have never seen before you 
will assume you can sit on it because you understand chairs can be sat on.

An example of this failing is showing kids that big things can't be 
picked up. Then when they see another big object they assume they can't 
pick it up even if it is just an empty box. If you make the empty box look 
like metal, an adult will assume they can't pick it up either, even if it 
is just a painted empty box.

>> Another example is a Person. If i asked you if my friend (that you 
>> have never met) has blood what would you say? Odds are you would say 
>> they have blood because you understand a what a person is and you know 
>> a person has blood.
>>
>> A humans ability to communicate with other people isn't based on their 
>> ability to send the complete information they know but rather we both 
>> have an understanding of basic classes of things and we use the same 
>> symbolic representation to define them which allows us to communicate.
>>
>> If i say "Joe is sitting in a chair." i really haven't said much but 
>> you know what i'm saying because we both have a shared understanding 
>> of a chair. We both know that Joe is a name of a person so Joe is a 
>> person. Joe is a males names so Joe is male. We both know that people 
>> have blood so Joe has blood. To understand my sentence we have to 
>> understand a great deal that isn't communicated. We can communicate 
>> because of classifications we both understand.
>>
>> People understand and interact with the world based classes of things.
> 
> But, we also are limited by our senses and don't completely sense
> all the "world" around us.

So? That doesn't mean we don't classify the things we do sense.

> And, our interaction with specific
> things is often on our own human terms, rather than based on
> their class-like qualities. We might throw a baseball, but we
> can also crush one with a steamroller or smash on down on
> someone's head (not recommended). Which of these is ball-like?
> Which of these is utility chosen by the human? What class of
> object is manipulated by humans -- all of them.

I'm not sure what you're trying to say here. You seem to think that we 
don't think OO, but your argument appears to support my views.

>> This allows me to use a computer i have never seen before because I 
>> know that computers behave like other computers. I can walk on ground 
>> i have never been on before because i understand slopes, and texture 
>> of materials of ground i have never been on but they have the same 
>> properties of other ground i have been on.
> 
> The object being manipulated is as much the human body as it is
> the ground/beach/mountain, etc.
> 

True, we even classify ourselves as objects.

>> A computer is a classification of device, slopes are classifications 
>> of the ground, texture is a classification of a material. My 
>> understanding of these classifications, and many other 
>> classifications, allows me to function as a person.
> 
> What understanding does one have to have of some new object,
> say a chunk of something rock-like, which later turns out to
> be from outerspace, in order to put it in one's mouth and
> chew on it?  :-)   Is that a recognition of it's class or
> of someone's idea that they should test it's hardness to see
> if it's gold or something else?
>

Well, to communicate you classified the object as "rock like".

To me something that is "rock like" isn't chewable, so I wouldn't try to 
chew on it because I know rocks are harder than my teeth. I don't think 
rocks are interesting so unless the rock looked really different from 
other rocks I would probably just ignore it. If it was different and it 
was small I might pick it up to look at it.

So my classification of rock defines my behavior with rocks. If the rock 
was different from other rocks (like it actually was something that 
could be eaten) but it looked like other rocks, I wouldn't treat it any 
differently than any other rock, so I would just ignore it and never find 
out that it could be eaten.

Young children that are still learning about things may not understand 
"rock like", so they might put it in their mouths. In this case I 
would try to take it away from them, and I would be very surprised if it 
was soft, because it doesn't match my classification of rock.

>> Quotes from "Design Principles Behind Smalltalk" - Byte mag Aug 1981
>>
>> "The mind observes a vast universe of experience, both immediate and 
>> recorded. One can derive a sense of oneness with the universe simply 
>> by letting this experience be, just as it is. However, if one wishes 
>> to participate, literally to take a part, in the universe, one must 
>> draw distinctions.
> 
> But, like subjects of scholarship, such differentiating is only
> the result of our human limitation and is not inherent in the
> subject matter. I know this is getting to the philosophical/
> spiritual level of discussion but that's just the nature of
> things.

I'm debating how humans think of things which is defined by the 
limitations of humans. We wouldn't need to draw distinctions if we could 
understand everything at once.

>> In so doing one identifies an object in the universe, and 
>> simultaneously all the rest becomes not-that-object. Distinction by 
>> itself is a start, but the process of distinguishing does not get any 
>> easier. Every time you want to talk about "that chair over there", you 
>> must repeat the entire processes of distinguishing that chair. This is 
>> where the act of reference comes in: we can associate a unique 
>> identifier with an object, and, from that time on, only the mention of 
>> that identifier is necessary to refer to the original object. "
> 
> That there are distinct objects is peculiar and yet seems very real.
> That there would seem to be distinct objects to some other creature
> of this world isn't certain. Take for example a small small living
> creature which can float through the air like dust and is pushed
> around by air or dust or rain or when it lands on 'solid ground'.
> To that kind of creature the world would perhaps seem to be all of
> one material and it simply moves through that, if it recognizes
> itself as distinct from the whole at all. Perhaps, like a fetus
> in the womb, it doesn't recognize distinct differences at all.

What does that have to do with the way humans think?

>> "Classification is the objectification of nessness. In other words, 
>> when a human sees a chair, the experience is taken both literally an 
>> "that very thing" and abstractly as "that chair-like thing". Such 
>> abstraction results from the marvelous ability of the mind to merge 
>> "similar" experience, and this abstraction manifests itself as another 
>> object in the mind, the Platonic chair or chairness. "
>>
>> The complete article is available here:
>> http://users.ipa.net/%7edwighth/smalltalk/byte_aug81/design_principles_behind_smalltalk.html 
> 
> I don't argue against OO, just that OO is somehow a perfect
> example of how humans relate to the world or how we think.
> Let's not imbue OO with more than it really is, a tool.

"In designing a language for use with computers, we do not have to look 
far to find helpful hints. Everything we know about how people think and 
communicate is applicable. The mechanisms of human thought and 
communication have been engineered for millions of years, and we should 
respect them as being of sound design. Moreover, since we must work with 
this design for the next million years, it will save time if we make our 
computer models compatible with the mind, rather than the other way around."

OO is based on human understanding of how people think.

Jeff Brooks
0
jeff_brooks (199)
5/2/2004 11:16:36 PM
Mark S. Hathaway wrote:

> What you're describing makes me think an OO program is
> designed for a human being to execute, rather than a computer.

OO is designed around what is known about human thought. Programs are 
built to be run on a computer, and understandable to a human.

> But, what about the process of designing an app/OO program?

To design something we must design it in terms we understand. Designing 
things may not be natural but the result must be understood.

> Is that process natural for humans? Maybe the maintenance
> programmer would have an easier time reviewing the universe
> of that program, discerning the objects and utilizing them,
> but to "play God" and create new objects is certainly not
> a natural human practice. Though we've begun to do just
> that in the last several thousand years it's not at all
> clear that OO-thinking is related to the human creative
> process.

Personally I don't think being creative is a process (it can be part of 
a process). People can generate new ideas, but the result still fits into 
the way people think.

Jeff Brooks
0
jeff_brooks (199)
5/2/2004 11:31:20 PM
stevenwurster@lycos.com (Steven Wurster) wrote in message news:<d853834.0405011747.7816184d@posting.google.com>...
> alfredo@ncs.es (Alfredo Novoa) wrote in message news:<4092e989.4349984@news.wanadoo.es>...
> > 
> > >And an OO ellipse is not a circle. 
> > 
> > In the typical OO designs it is, and squares are not rectangles.
> 
> In a lot of the "stereotypical" examples you'll see for OO designs,
> ellipses are circles and squares are rectangles.

"Ellipses are circles" is horribly wrong and "squares are rectangles" is
correct, but it is contrary to the stereotypical examples.

> Unfortunately, these
> designs are wrong (based on breakage of the LSP, see below).

Circles are ellipses breaks LSP
Circles are ellipses
--------------------------------
LSP is broken QED

> And to
> make matters worse, it seems that these examples are given by people
> who don't seem to know what the LSP is

LSP is only a wrong principle.

> We'll use the square/rectangle example.  If square inherits from
> rectangle, then we know the standard problem that the sides of a
> rectangle can be changed independently

Like most OO practitioners you are confusing values and variables all
the time.

Rectangles are values and values are immutable, but you can place a
square value in a rectangle variable or a square value in a square
variable.

If you have a square value in a rectangle variable then you can
replace the square value with a rectangle value, but you are not
changing the sides of the square; you are replacing one value with
another.

var r: rectangle;
r := square(5); // there is a square value in r
r.height := 10; // there is a rectangle value in r
if r is square then
  WriteLn('OO is mad');

r.height := 10; // is only a shorthand for r := rectangle(10, r.width);
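
A sketch of the same idea in Java (names invented; the "with" method
stands in for the assignment shorthand above):

class Rect {
	final double width, height;

	Rect(double width, double height) {
		this.width = width;
		this.height = height;
	}

	boolean isSquare() { return width == height; }

	Rect withHeight(double h) { return new Rect(width, h); }
}

class Demo {
	public static void main(String[] args) {
		Rect r = new Rect(5, 5);          // a square value in a rectangle variable
		System.out.println(r.isSquare()); // true
		r = r.withHeight(10);             // now a non-square rectangle value
		System.out.println(r.isSquare()); // false: the variable was updated,
		                                  // no value was ever mutated
	}
}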

>, but that's not true for a
> square.  Because of that, a square cannot be treated like a rectangle
> by clients

You are confusing values and variables. A square value can be treated
like a rectangle value because it is a rectangle value, but a square
variable can not be treated as a rectangle variable because the square
variable is more restricted.

There are two kinds of substitutability: value substitutability and
variable substitutability. Something forgotten by the LSP.

>, who only know the interface provided by rectangle, and
> think they are getting a rectangle when they are actually getting a
> sqaure at run-time (which is fine when the LSP is upheld).

They think they are getting a rectangle and they are getting a
rectangle. Squares ARE rectangles. Squares have rect angles.

> Looking at these side-setting routines for rectangle, we see that
> their contracts tell us that when we attempt to change the length of a
> side, the change is successful, and the new length is what we
> requested. But square breaks that by (most likely) setting the other
> side equal to the new one that we just changed. But this breaks the
> contract, because clients of rectangle aren't expecting that other
> side to change.

If the client of rectangle receives a rectangle that is also a square,
and he changes only one side, then he will have a rectangle that is not
a square. No problem here.

var r: rectangle;
r := rectangle(5, 5);
if not (r is square) then
  WriteLn('OO is mad')
r.height := 10;
if not (r is square) then
  WriteLn('Correct');

> So, this kind of inheritance breaks the LSP, which of
> course says that descendant classes must be able to pass for
> ancestors.

That's why LSP is a broken principle.

> > Subtyping is only math. OO does not match with math thus what OO does
> > is not subtyping.
> 
> OO does not claim to match math.

Then it should not claim to match the real world.

>  Calling a descendant class a subtype
> is misleading.

I completely agree. Subclassing is related to delegation and not to
subtyping.

> In fact, in a large number of cases, descendants might
> be considered supertypes, as they tend to add features.

And it is contrary to reality. OO subclassing is not good for
modeling the real world.

> In reality,
> descendants are simply specializations of their ancestors, not
> subtypes.

A rectangle is a generalization of a square and not a specialization.

Square is a specialization of rectangle.

With a good language it could be defined as:

type square is rectangle where height = width;
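
Reusing the Rect value class sketched above, the constraint can at
least be checked where values are created (again, names are mine):

class Square extends Rect {
	Square(double side) { super(side, side); }
}

class Shapes {
	// "type square is rectangle where height = width", approximated
	// by classifying each new value
	static Rect makeRect(double width, double height) {
		return (width == height) ? new Square(width) : new Rect(width, height);
	}
}

Here makeRect(5, 5) yields a square value and makeRect(5, 10) a plain
rectangle value, which is as close as a typical OO language lets us
get to the declaration above.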

>  Many people, especially those who teach OO incorrectly (as
> mentioned above), make this mistake.  Descendants are not expected to
> cover "less" cases, as doing so breaks the LSP.

But if you teach correctly something that is incorrect, the situation
is not a lot better.


Regards
  Alfredo
0
alfredo (205)
5/2/2004 11:36:41 PM
Majorinc wrote:

> I remember I struggled a lot before I realized that it was not my lack
> of understanding of OO, but OO itself, that was the reason for the problems. 

What are the problems with OO? It might help to give an example of 
something that OO is bad at, but set languages are good at.

The example shouldn't be abstract or people won't understand it.

Jeff Brooks
0
jeff_brooks (199)
5/2/2004 11:41:48 PM
Topmind wrote:

>>The world isn't hierarchical. Languages like Java deal with this with 
>>interfaces; Smalltalk deals with this via dynamic typing. I'm not sure 
>>what you think sets can do that existing programming languages can't do.
> 
> It is not a matter of "can't do", because they are all Turing
> Equivalent. It is a matter of human convenience (maintainability).

How are set-based programs easier to maintain?

> Your "customer" example is essentially a navigational database
> hard-wired into app code. App code is a sh*tty place to store
> relationship schemas IMO. Maybe you like it and like navigational
> techniques (for some odd reason), but I don't. They happily died in
> the 70's thanks to Dr. Codd, but OO zealots resurrected it from the
> dead for OO and OODBMS. Does this mean goto's are coming back?
> (Navigational is the Goto of data structures.)

I assume by relationship schema you are referring to the types of 
instance variables. For example:

class T
{
	A a;
	B b;
}

Do you think specifying the types of the variables a and b defines the 
relationships to other types?

There are tons of OO languages that don't do that. Just to name a few: 
Smalltalk, Self, Ruby, Python, etc.

Not all OO is C++ and Java.

Jeff Brooks
0
jeff_brooks (199)
5/2/2004 11:57:04 PM
Majorinc wrote:

> Even goto's are OK. Without goto's only very simple program control 
> flow can be organized in code, and for anything complex, labyrinth-like 
> control flow, one must make one's own pocket emulator of goto's 
> with a list, where indexes of the list emulate labels and some counter 
> emulates the number of the line in the program. For example, try to write 
> a program for a game like Monopoly, you know: roll the dice, if the result 
> is 7 do xxx and go three fields back, if the result is 9 do yyy and go 7 
> fields forward. With goto's it is trivial. Without goto's it is not trivial 
> any more; not very hard, but certainly harder and less readable than 
> with gotos. 

Java
----
int roll = dice.roll();

if (roll == 7) {
	xxx();
	moveBack(3);
}
else if (roll == 9) {
	yyy();
	moveForward(7);
}

Why isn't that readable?

> There are similarities between the SP movement and the OO movement: 
> both tried to elevate simple and sometimes useful, but relatively 
> weak, tools (loops, objects) to the level of "the way humans think", 
> and both ignored significant portions of mathematics (Turing 
> machines and set theory, respectively) that cannot be described 
> adequately without writing a pocket emulator for oneself. 

By pocket emulator I assume you mean an emulator that emulates a device 
like an instruction set for a pocket calculator.

I came up with this:

Smalltalk
---------
| code instructions stack end |
code := ...
stack := Stack new.
end := false.

instructions := Dictionary new.
instructions
	at: 1 put: [ stack push ];
	at: 2 put: [ stack pop ];
	"cont..."
	at: 100 put: [ end := true ].

[ end ] whileFalse: [
	(instructions at: code currentInstruction) value.
	code nextInstruction.
].


It seems simple to me.

> OO is, of course, worse, because it is much more complex, and it is 
> not easy to see where it leads. Some people think that it leads 
> nowhere, but many still struggle and think that if they do not 
> understand how to model the circle and ellipse relation, the problem 
> might be in them, not in the paradigm itself. They should ask 
> themselves: if people do not understand how to model ellipse and 
> circle, then what will happen with really complex problems? 

Just because some people think that OO leads nowhere doesn't mean it's 
true. There are lots of programs that successfully use OO.

Not all OO languages are complex. Look at Smalltalk or Python as examples.

OO developers model classes as they understand them. These may not be 
correct in the mathematical sense, but that doesn't mean the software 
doesn't work.

A lot of people like to point out that programming languages ignore 
mathematics and say the languages are bad for doing so. If following 
mathematics were required to write software then most of the existing 
languages couldn't be used to create software. Obviously, this 
isn't the case.

The only way using mathematics as the basis of a language would be 
easier for people to understand is if people understood mathematics. I 
don't think this is the case because most developers don't seem to know 
why people complain that programming languages are not mathematically 
correct.

You mean a subclass is really a superclass? Code compiles, it runs, it 
works, it's fine.

Jeff Brooks
0
jeff_brooks (199)
5/3/2004 1:39:31 AM
On 2 May 2004 16:36:41 -0700, alfredo@ncs.es (Alfredo Novoa) wrote:

>There are two kinds of substitutability: value substitutability and
>variable substitutability. Something forgotten by the LSP.

No. There are many kinds of substitutability. Note that what you call
"variable substitutability" is not atomic. It consists of
in-substitutability (=your value substitutability) and
out-substitutability.

>> So, this kind of inheritance breaks the LSP, which of
>> course says that descendant classes must be able to pass for
>> ancestors.
>
>That's why LSP is a broken principle.

It is not broken, it is just difficult to maintain. The idea of
substitutability is right. What is broken is the fiction of ensuring
absolute substitutability independent of any context.

>> In reality,
>> descendants are simply specializations of their ancestors, not
>> subtypes.
>
>A rectangle is a generalization of a square and not a specialization.
>
>Square is a specialization of rectangle.
>
>With a good language it could be defined as:
>
>type square is rectangle where height = width;

Ah, but here the problem starts. You have imposed an [arbitrary]
constraint, but knowing this constraint tells us nothing about which
programs written for rectangles will work for squares. OK, for sure it
is known that all in-methods (=value substitutability) would. But that
is not very interesting. Which out-methods will? A mathematician would
not care. He would develop a theory of squares. A programmer cannot do
that; he will reuse. So he tries to approach the problem from the other
side: what kind of type relations would preserve substitutability?
Behold, LSP is born!

--
Regards,
Dmitry Kazakov
www.dmitry-kazakov.de
0
mailbox2 (6357)
5/3/2004 8:46:22 AM
On Sun, 02 May 2004 21:54:26 GMT, alfredo@ncs.es (Alfredo Novoa)
wrote:

>On Fri, 30 Apr 2004 16:38:54 -0700, cstb <jas@cruzio.com> wrote:
>
>>Neither OO subclassing or the theory of subtypes
>>suggests anything about which view is correct -
>>because either view *can* be correct.
>
>Circle is a subtype of ellipse and the contrary is false.

Wrong. Values of the type "Circle" do not have the type "Ellipse". So
"Circle" is not contained in "Ellipse".

1. You can make a circle object equivalent to an ellipse object by creating
a mapping circle->ellipse. Only the existence of this mapping would make
circle a subtype. But note that one could also create a mapping
ellipse->circle. So being a subtype is an artefact which has nothing
to do with reality.

2. If the type "Circle" models mathematical circles, while "Ellipse"
models mathematical ellipses, then there could be mappings one to
another induced by subset relation. But even then this has nothing to
do with subtyping relation between them.

>>Yet another case will result in a definition of 'ellipse'
>>which cannot express the entire range of ellipses
>>we might think of as 'mathematically elliptical'.
>
>But then it is not ellipse. Then you are not solving the problem, only
>changing it.

Yes, you cannot model mathematical ellipses precisely. As a matter of
fact, the cardinality of the set of ellipses does not allow this.

>>Making beneficial design choices for a given context
>>is often more about "skilled art" than "skilled craft".
>
>We are forced to look for workarounds, because OO is not able to map
>the reality. 

Because any finite, deterministic system is unable to do it. It is not
an OO fault.

>For instance in OO one method belongs to a single class only, but in
>the real world an operator might belong to several types.

Hmm, nothing in OO forbids multiple dispatch. There are OO languages
which have it.
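(CLOS, for example, dispatches generic functions on all of their
arguments.) In a single-dispatch language the usual emulation is double
dispatch. A minimal Java sketch, with hypothetical shape classes and
stubbed-out results:

// Double dispatch: Java selects a method on the receiver only, so the
// second dispatch is encoded as a call back on the other argument.
interface Shape {
    String collide(Shape other);         // first dispatch, on 'this'
    String collideWithCircle(Circle c);  // second-dispatch targets
    String collideWithBox(Box b);
}

class Circle implements Shape {
    public String collide(Shape other)        { return other.collideWithCircle(this); }
    public String collideWithCircle(Circle c) { return "circle/circle"; }
    public String collideWithBox(Box b)       { return "box/circle"; }
}

class Box implements Shape {
    public String collide(Shape other)        { return other.collideWithBox(this); }
    public String collideWithCircle(Circle c) { return "circle/box"; }
    public String collideWithBox(Box b)       { return "box/box"; }
}

Both run-time types now select the behaviour, at the cost of every new
Shape touching every existing one -- the explosion of overrides
mentioned later in the thread.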

--
Regards,
Dmitry Kazakov
www.dmitry-kazakov.de
0
mailbox2 (6357)
5/3/2004 9:04:23 AM
Alfredo Novoa wrote:

>>Neither OO subclassing or the theory of subtypes
>>suggests anything about which view is correct -
>>because either view *can* be correct.
> 
> Circle is a subtype of ellipse and the contrary is false. All circles
> are ellipses, but not all ellipses are circles.

Perhaps from a mathematical viewpoint, but OO isn't based on mathematics.

OO and mathematics model things differently. Just because you like the 
mathematical model doesn't mean the mathematical model is correct for OO.

> OO is unable to model this simple reality. But you can derive
> "correct" implementations of flawed models.

Mathematics can't map reality.

There are mathematical models to predict the weather but none of them 
are accurate. Math can't even accurately predict the path a feather will 
take when dropped outside.

>>The discipline of computer science is young, and still
>>in development,
> 
> Indeed, that's my point! There are many things to fix in OO and OO is
> not the end of the road. But many OO practitioners think that OO is
> the perfect silver bullet, and all criticism and research is heresy.

Re-read the above paragraph but replace OO with mathematics.

Jeff Brooks


All models are wrong; some models are useful.
	George Box

As well ask whether the metric system is true and the avoirdupois system 
is false; whether Cartesian coordinates are true and polar coordinates 
are false. One geometry can not be more true than another; it can only 
be more convenient. Geometry is not true, it is advantageous.
	Zen and the Art of Motorcycle Maintenance - 1974

There are many methods for predicting the future. For example, you can 
read horoscopes, tea leaves, tarot cards, or crystal balls. 
Collectively, these methods are known as "nutty methods." Or you can put 
well-researched facts into sophisticated computer models, more commonly 
referred to as "a complete waste of time."
	Scott Adams (1957 - ), The Dilbert Future
0
jeff_brooks (199)
5/3/2004 9:15:08 AM
On Mon, 03 May 2004 11:04:23 +0200, Dmitry A. Kazakov
<mailbox@dmitry-kazakov.de> wrote:

>>Circle is a subtype of ellipse and the contrary is false.
>
>Wrong. Values of the type "Circle" do not have the type "Ellipse". So
>"Circle" is not contained in "Ellipse".

Types are sets, the ellipse set contains the circle set. The members
of the circle set are members of the ellipse set.

Values may have many types and all the values of the type circle have
the type ellipse. You are biased by the current languages.

For instance 1 may have many types: real, rational, integer, natural,
etc.

>1. You can make circle object equivalent to ellipse object by creating
>a mapping circle->ellipse. Only existence of this mapping would make
>circle a subtype. But note that one could also create a mapping
>ellipse->circle. So being a subtype is an artefact which has nothing
>to do with the reality.

I don't see any sense here, a circle value is always an ellipse value.

Object is a very bad term because sometimes it means value, sometimes
means variable and sometimes means "thing" like in "everything is an
object".

>2. If the type "Circle" models mathematical circles, while "Ellipse"
>models mathematical ellipses, then there could be mappings one to
>another induced by subset relation. But even then this has nothing to
>do with subtyping relation between them.

Types are sets, subtypes subsets, the membership relation has all to
do with subtyping.

>Yes, you cannot model mathematical ellipses precisely.

That's my point. OO is failing here and the solution is specialization
by constraint.

>>We are forced to look for workarounds, because OO is not able to map
>>the reality. 
>
>Because any finite, deterministic system is unable to do it. It is not
>an OO fault.

Specialization by constraint is able to do it.

>>For instance in OO one method belongs to a single class only, but in
>>the real world an operator might belong to several types.
>
>Hmm, nothing in OO forbids multiple dispatch. There are OO languages
>which has it.

Which one?

I agree that nothing in OO forbids that, but I don't know any OO
language that allows one to define methods that belong to several classes.


Regards
  Alfredo
0
alfredo (205)
5/3/2004 12:34:29 PM
On Mon, 03 May 2004 09:15:08 GMT, Jeff Brooks <jeff_brooks@nospam.com>
wrote:

>> Circle is a subtype of ellipse and the contrary is false. All circles
>> are ellipses, but not all ellipses are circles.
>
>Perhaps from a mathematical view point but OO isn't based on mathematics.

Nor in common sense.

>OO, and mathematics model things differently. Just because you like the 
>mathematical model doesn't mean the mathematical model is correct for OO.
>
>> OO is unable to model this simple reality. But you can derive
>> "correct" implementations of flawed models.
>
>Mathematics can't map reality.

Astonishing statement!

>>>The discipline of computer science is young, and still
>>>in development,
>> 
>> Indeed, that's my point! There are many things to fix in OO and OO is
>> not the end of the road. But many OO practitioners think that OO is
>> the perfect silver bullet, and all criticism and research is heresy.
>
>Re-read the above paragraph but replace OO with mathematics.

Mathematics is as old as mankind.

Regards
  Alfredo
0
alfredo (205)
5/3/2004 12:38:20 PM
alfredo@ncs.es (Alfredo Novoa) wrote in message news:<e4330f45.0405021536.596a3957@posting.google.com>...
> 
> Circles are ellipses breaks LSP
> Circles are ellipses
> --------------------------------
> LSP is broken QED

Well, that's not true.  See Philip's post, and my reply to it. 
Immutability is the problem.  Plus one bad example does not disprove
the rule.


> Like most OO practitioners you are confusing values and variables all
> the time.

No, I wasn't confusing them, I simply forgot about the client
perspective, which Philip reminded me of.  Plus remember that I never
said I had circle and ellipse in an inheritance relationship.


> There are two kinds of substitutability: value substitutability and
> variable substitutability. Something forgotten by the LSP.

It's not forgotten by the LSP.  Immutable classes have a better chance
of supporting the LSP than those with "mutator" routines.  Of course,
it all depends on what clients can and cannot do with an object.
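To make the immutability point concrete, a minimal Java sketch (the
mutable classes are hypothetical, not anyone's proposed design):

// Why mutators endanger the LSP: the invariant of MutableCircle (a == b)
// can be destroyed through the interface it inherits, so clients of
// MutableEllipse that call the mutator cannot safely be handed circles.
class MutableEllipse {
    double a, b;                                   // semi-axes
    void setAxes(double a, double b) { this.a = a; this.b = b; }
}

class MutableCircle extends MutableEllipse {
    MutableCircle(double r) { a = r; b = r; }
    // nothing here can stop a client's setAxes(3, 4) from making this
    // "circle" non-circular; drop the mutator and the hazard disappears
}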


> > So, this kind of inheritance breaks the LSP, which of
> > course says that descendant classes must be able to pass for
> > ancestors.
> 
> That's why LSP is a broken principle.

LSP is not broken.  This is the definition of inheritance.  You might
not like it, and you might not see where it can be used, but it is the
definition.


> > OO does not claim to match math.
> 
> Then it should not claim to match the real world.

It doesn't.  You and Topmind seem to think that people claim it does. 
I've *never* seen that claim.  Plus not claiming to match math has
nothing to do with whether or not something claims to match the real
world (whatever that is).


> > In fact, in a large number of cases, descendants might
> > be considered supertypes, as they tend to add features.
> 
> And it is contrary to the reality. OO subclasing is not good for
> modeling the real world.

That's fine, because we don't model the real world.  We model some
domain, and how we wish to view that domain.  Descendant classes are
specialization, nothing more and nothing less.  Note that I do not use
the term "subclass", because it is wrong, as I said before.

Steve
0
5/3/2004 12:48:28 PM
On 3 May 2004 05:48:28 -0700, stevenwurster@lycos.com (Steven Wurster)
wrote:

>> Circles are ellipses breaks LSP
>> Circles are ellipses
>> --------------------------------
>> LSP is broken QED
>
>Well, that's not true.  See Philip's post, and my reply to it. 
>Immutability is the problem.

Pointer based inheritance is the problem. Specialization by constraint
is the solution.

>  Plus one bad example does not disprove
>the rule.

One bad example disproves any rule, but there are infinitely many examples.

If one model fails with trivial examples it will fail a lot more with
complex examples.

>> Like most OO practitioners you are confusing values and variables all
>> the time.
>
>No, I wasn't confusing them, I simply forgot about the client
>perspective, which Philip reminded me of.  Plus remember that I never
>said I had circle and ellipse in an inheritance relationship.

But what I want to prove is that OO inheritance is not subtyping. OO
inheritance is related to pointer based delegation.

>> There are two kinds of substitutability: value substitutability and
>> variable substitutability. Something forgotten by the LSP.
>
>It's not forgotten by the LSP.  Immutable classes have a better chance
>of supporting the LSP than those with "mutator" routines.

Classes are immutable by definition; classes are constant sets of
values with a set of associated operators.

>  Of course,
>it all depends on what clients can and cannot do with an object.

>LSP is not broken.  This is the definition of inheritance.

No, this is one possible definition of inheritance, one which has many
problems. Inheritance is not a formal term and it does not have a
precise meaning. But this is not the case for subtyping.

> You might
>not like it, and you might not see where it can be used, but it is the
>definition.

You are biased by the current primitive OO languages. It is not the
only possible definition, but it is the only one you can find in the
market currently. There are better ways to define inheritance, such as
basing inheritance on subtyping.

>> > OO does not claim to match math.
>> 
>> Then it should not claim to match the real world.
>
>It doesn't.  You and Topmind seem to think that people claim it does. 
>I've *never* seen that claim.  Plus not claiming to match math has
>nothing to do with whether or not something claims to match the real
>world (whatever that is).

To match the real world is the purpose of any computing model, and
even more the objective of languages like Simula and Smalltalk.

>> > In fact, in a large number of cases, descendants might
>> > be considered supertypes, as they tend to add features.
>> 
>> And it is contrary to the reality. OO subclasing is not good for
>> modeling the real world.
>
>That's fine, because we don't model the real world.  We model some
>domain, and how we wish to view that domain.

Define domain.

A domain is a real-world problem you want to solve. By the way, it is
also a fuzzy and confusion-prone term, like most OO terms.

In math a domain is a set of individual constants.

> Descendant classes are
>specialization, nothing more and nothing less.  Note that I do not use
>the term "subclass", because it is wrong, as I said before.

Then which term do you use for a class that inherits from another
class?


Regards
  Alfredo
0
alfredo (205)
5/3/2004 2:13:14 PM
On Mon, 03 May 2004 12:34:29 GMT, alfredo@ncs.es (Alfredo Novoa)
wrote:

>On Mon, 03 May 2004 11:04:23 +0200, Dmitry A. Kazakov
><mailbox@dmitry-kazakov.de> wrote:
>
>>>Circle is a subtype of ellipse and the contrary is false.
>>
>>Wrong. Values of the type "Circle" do not have the type "Ellipse". So
>>"Circle" is not contained in "Ellipse".
>
>Types are sets,

Yes.

>the ellipse set contains the circle set. The members
>of the circle set are mebers of the ellipse set.

No, types are sets of values and operations on them. The set of
operations valid for circles is not a subset of those of ellipses.
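Concretely, a minimal Java sketch with hypothetical immutable types:

// radius() makes sense for circles but not for general ellipses, so the
// circle operation set is not a subset of the ellipse one; and the
// inherited stretchX() is not closed over circles either.
class Ellipse {
    final double a, b;                              // semi-axes
    Ellipse(double a, double b) { this.a = a; this.b = b; }
    Ellipse stretchX(double factor) { return new Ellipse(a * factor, b); }
}

class Circle extends Ellipse {
    Circle(double r) { super(r, r); }
    double radius() { return a; }  // an operation ellipses do not offer
    // stretchX(2.0) on a Circle yields an Ellipse that is no longer a
    // circle, so declaring it to return Circle would be a type error
}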

>Values may have many types and all the values of the type circle have
>the type ellipse. You are biased by the current languages.
>
>For instance 1 may have many types: real, rational, integer, natural,
>etc.

1 has no type, it is a symbol. Then the set of real numbers is not a
type. You can build a type over this set, which does not make them
equivalent.

>>1. You can make circle object equivalent to ellipse object by creating
>>a mapping circle->ellipse. Only existence of this mapping would make
>>circle a subtype. But note that one could also create a mapping
>>ellipse->circle. So being a subtype is an artefact which has nothing
>>to do with the reality.
>
>I don't see any sense here, a circle value is always an ellipse value.

A [program] object cannot be a circle or an ellipse. You can only say
that there is a mapping between some objects in a program and some
geometrical objects called circles.

>Object is a very bad term because sometimes it means value, sometimes
>means variable and sometimes means "thing" like in "everything is an
>object".

It does not matter. We could use subprograms to model circles, in the
sense that a circle would have a corresponding subprogram. We could
take types, meta-types, whatever. Then mutability is not the thing
which kills the LSP.

>>2. If the type "Circle" models mathematical circles, while "Ellipse"
>>models mathematical ellipses, then there could be mappings one to
>>another induced by subset relation. But even then this has nothing to
>>do with subtyping relation between them.
>
>Types are sets, subtypes subsets, the membership relation has all to
>do wih subtyping.

See above. [LSP] subtyping (by constraining/specialization) says not
only that the set of subtype values can be mapped to a subset of the
values of the base, but also that the set of operations [and valid
programs] defined on the subtype is a subset of those of the base. It
is plainly wrong for circles vs. ellipses. Moreover, if we formally
defined what a type is, and what an LSP subtype could be, we would see
that no non-trivial type may have any LSP subtype.

>>Yes, you cannot model mathematical ellipses precisely.
>
>That's my point. OO is failing here and the solution is specialization
>by constraint.

Equivalence classes, I would say. Look at floating point numbers,
they are not a specialization of reals.

>>>We are forced to look for workarounds, because OO is not able to map
>>>the reality. 
>>
>>Because any finite, deterministic system is unable to do it. It is not
>>an OO fault.
>
>Specialization by constraint is able to do it.

It is a powerful technique for creating new types, but do not
underestimate the others (generalization, non-nested domain sets, etc.).

>>>For instance in OO one method belongs to a single class only, but in
>>>the real world an operator might belong to several types.
>>
>>Hmm, nothing in OO forbids multiple dispatch. There are OO languages
>>which has it.
>
>Which one?

Google for "multiple dispatch programming languages."

>I agree about that nothing in OO forbids that but I don't know any OO
>language that allows do define methods that belong to several classes.

There are many problems with a consistent implementation of MD.
Managing a geometrically exploding number of overrides is a heavy
burden for a programmer. It will be solved, some day. The theory is
just not mature.

--
Regards,
Dmitry Kazakov
www.dmitry-kazakov.de
0
mailbox2 (6357)
5/3/2004 2:57:15 PM
On Mon, 03 May 2004 10:46:22 +0200, Dmitry A. Kazakov
<mailbox@dmitry-kazakov.de> wrote:

>On 2 May 2004 16:36:41 -0700, alfredo@ncs.es (Alfredo Novoa) wrote:
>
>>There are two kinds of substitutability: value substitutability and
>>variable substitutability. Something forgotten by the LSP.
>
>No. There are many kinds of substitutability.

Which ones?

> Note that what you call
>"variable substitutability" is not atomic. It consists of
>in-substitutability (=your value substitutability) and
>out-substitutability.

No, variable substitutability is variable substitutability and nothing
more. It is about when a variable can be substituted by another
variable with a different type.

>>That's why LSP is a broken principle.
>
>It is not broken, it is just difficult to maintain. The idea of
>substitutability is right.

The idea is right of course, but it is based on the assumption of a
pointer-based inheritance model which makes it impossible. If we base
inheritance on subtyping and specialization by constraint then we can
have flawless substitutability.

>What is broken, is a fiction to ensure
>absolute substitutability independent on any context.

Then we should look for absolute substitutability independent of any
context. It is easy if we base inheritance on specialization by
constraint.

I am surprised by the OO folks' relentless resistance to progress.

>>With a good language it could be defined as:
>>
>>type square is rectangle where height = width;
>
>Ah, but here the problem starts. You have imposed a [arbitrary]
>constraint

It is not arbitrary, it is precisely the condition that makes a
rectangle a square.

>, but knowing this constraint tells nothing about which
>programs written for rectangles will work for squares.

Programs that do not know the constraint do not know what a square is,
and they will treat squares as if they were non-square rectangles.

> OK, for sure it
>is known that all in-methods (=value substitutability) would. But that
>is not very interesting. Which out-methods will?

All, of course. Programs written for rectangles don't have square
variables. I am afraid you have not caught the idea of specialization
by constraint yet.

>A programmer cannot do
>it, he will reuse. 

But subtyping is not about code reuse. This is one of the greatest
mistakes of OO. 

Programming languages should make a clear distinction between
inheritance and delegation, and not botch things up by trying to mix
both concepts.



Regards
  Alfredo
0
alfredo (205)
5/3/2004 3:07:23 PM
On Sun, 02 May 2004 23:31:20 GMT, Jeff Brooks <jeff_brooks@nospam.com>
wrote:

>OO is designed around what is known about human thought.

Where did you get that notion?  OO was created by two programmers who
were messing around with ALGOL.  They created a syntax element that
created procedure stack frames on the heap and did not destroy them
when the procedure returned.  Those data structures became objects.

OO was an accident of dynamic memory scope combined with block
structure.  Long after the fact some folks started talking about human
thought and "real world models" and the like.  



-----
Robert C. Martin (Uncle Bob)
Object Mentor Inc.
unclebob @ objectmentor . com
800-338-6716

"Distinguishing between the author
and the writing is the essence of civilized debate."
           -- Daniel Parker
0
unclebob2 (2724)
5/3/2004 4:06:11 PM
On Mon, 03 May 2004 12:34:29 GMT, alfredo@ncs.es (Alfredo Novoa)
wrote:

>
>Types are sets, the ellipse set contains the circle set. The members
>of the circle set are mebers of the ellipse set.

Would you store the data for Circles in a table designed for Ellipses?



-----
Robert C. Martin (Uncle Bob)
Object Mentor Inc.
unclebob @ objectmentor . com
800-338-6716

"Distinguishing between the author
and the writing is the essence of civilized debate."
           -- Daniel Parker
0
unclebob2 (2724)
5/3/2004 4:11:46 PM
"Robert C. Martin" <unclebob@objectmentor.com> wrote in message
news:oarc9052482fdpk92refvi7mns2bmvo5sl@4ax.com...
> On Sun, 02 May 2004 23:31:20 GMT, Jeff Brooks <jeff_brooks@nospam.com>
> wrote:
>
> >OO is designed around what is known about human thought.
>
> Where did you get that notion?  OO was created by two programmers who
> were messing around with ALGOL.  They created a syntax element that
> created procedure stack frames on the heap and did not destroy them
> when the procedure returned.  Those data structures became objects.
>
> OO was an accident of dynamic memory scope combined with block
> structure.  Long after the fact some folks started talking about human
> thought and "real world models" and the like.

You are very far off base here. Every programming language reflects, to some
degree or other, the way the language designer thinks about programs.
Therefore it reflects to some degree or other the way he thinks in general.

It goes the other direction too. A programming language can shape the way
you think about programs. The most creative of us are not restricted to this
influence, but many can only think in terms that the original language
designer had set.


Shayne Wissler


0
5/3/2004 4:20:10 PM
Alfredo Novoa wrote:

> Circles are ellipses breaks LSP
> Circles are ellipses
> --------------------------------
> LSP is broken QED

The LSP doesn't say anything about an "ISA" relationship. It's a definition
for the term "subtype".

Take care, Ilja


0
preuss (368)
5/3/2004 4:26:30 PM
On Mon, 03 May 2004 16:57:15 +0200, Dmitry A. Kazakov
<mailbox@dmitry-kazakov.de> wrote:

>>Types are sets,
>
>Yes.
>
>>the ellipse set contains the circle set. The members
>>of the circle set are members of the ellipse set.
>
>No, types are sets of values and operations on them. The set of
>operations valid for circles is not a subset of ones of ellipses.

I was talking about values not about operators, but what you said is
true of course.

I meant that the values which are members of the circle's value set
are members of the ellipse's value set.

>>Values may have many types and all the values of the type circle have
>>the type ellipse. You are biased by the current languages.
>>
>>For instance 1 may have many types: real, rational, integer, natural,
>>etc.
>
>1 has no type, it is a symbol.

I was not talking about the char '1'; I was talking about the value 1,
and all values have a type or types.

> Then the set of real numbers is not a
>type.

An evident non sequitur.

The set of real numbers plus the set of operators which act on real
numbers is a type, but it is often supposed that every value carries
its operators with it, so they are not mentioned explicitly.

> You can build a type over this set, which does not make them
>equivalent.

It depends on whether you consider operators an integral part of values
or not. If you don't, you could say that the set of real numbers is only
a domain.

But nothing changes. The value represented by the symbol '1' may have
many types.

>>I don't see any sense here, a circle value is always an ellipse value.
>
>A [program] object cannot be a circle or an ellipse.

Why not?

It is obvious that we can represent circle values in a program, and
object might mean value (object might mean anything, it is a
meaningless term).

> You can only say,
>that there is a mapping between some objects in a program and some
>geometrical objects called circles.

What does object mean here?

Fuzzy terms make communication and reasoning difficult.

> We could use subprograms to model circles, in the
>sense that a circle would have a corresponding subprogram.

Circles are logical entities and programs are concrete
implementations.

> We could
>take types, meta-types, whatsoever. Then mutability is not the thing
>which kills LSP.

No, it is pointer based subclassing.

>>Types are sets, subtypes subsets, the membership relation has all to
>>do wih subtyping.
>
>See above. [LSP] subtyping (by constraining/specialization) tells that
>not only the set of subtype values can be mapped to a subset of values
>of the base

There is no map; the values are the same.

Integer 1 is the same value as real 1.0

Remember the primary school set theory about sets and contained sets.

>, but also that the set of operations [and valid programs]

Don't mix programs with operators.

>defined on the subtype is a subset of ones of the base. It is plainly
>wrong for circles vs. ellipses.

It is plainly wrong always. A subtype has a subset of the values of
the base type, but nobody said that it has also a subset of the
operators.

>Moreover, if we would formally define
>what a type is, and what a LSP subtype could be, we would see that no
>non-trivial type may have any LSP subtype.

"LSP subtype" is not the same as subtype like the German Democratic
Republic was not democratic.

>>>Yes, you cannot model mathematical ellipses precisely.
>>
>>That's my point. OO is failing here and the solution is specialization
>>by constraint.
>
>Classes of equivalence, I would say. Look at floating point numbers,
>they are not a specialization of reals.

Floating point number is not a type, but integers are a specialization
of reals and rationals.

>>Specialization by constraint is able to do it.
>
>It is a powerful technique of creation new types, but do not
>underestimate others (generalization, not-nested domain sets etc).

Generalization by "relaxation" is simply the other side of
specialization by constraint.

Non-overlapping types are simply what you have when you are not using
subtyping. That is of course required for any usable language.

>>Which one?
>
>Google for "multiple dispatch programming languages."

Thanks. But I am afraid it is not exactly what I am looking for.

>>I agree about that nothing in OO forbids that but I don't know any OO
>>language that allows do define methods that belong to several classes.
>
>There are many problems with a consistent implementation of MD.

The problem is how to define the visibility of the internal parts of
the type implementation.

We need to grant access to the private parts of the type
implementation to the "friend" operators which are defined outside of
the type definition.


Regards
  Alfredo
0
alfredo (205)
5/3/2004 5:00:21 PM
alfredo@ncs.es (Alfredo Novoa) wrote in message news:<40956d4c.177915@news.wanadoo.es>...
> 
> Integer is also subtype of rational and rational subtype of real, etc.
> 
> For example:
> 
> var a: real;
> a := 2.5;
> a := a - 0.5;
> if a is integer then 
>   WriteLn('Correct')
> else
>   WriteLn('OO is mad');
> 
> A good language should show 'Correct'.
> 

This has absolutely *nothing* to do with OO.  In fact, it has a whole
lot to do with the underlying architecture, and the IEEE specs. 
Especially if the original number (2.5 in this example) cannot be
expressed as a finite value in the internal representation (i.e.
binary for most systems).  Some languages or libraries do provide
support for rational numbers and/or fractions, but the performance
penalties often outweigh the benefits.

Also note that 2 is a real number, not just an integer.
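A minimal Java sketch of this point (just an illustration; whether "a is
integer" is even testable depends entirely on the representation): 2.5
and 0.5 happen to be exact in binary floating point, so 2.5 - 0.5 is
exactly 2.0, while 0.1 is not exact and the same test fails after ten
additions of it.

// Math.rint rounds to the nearest integral double, so the comparison
// x == Math.rint(x) tests "x currently holds an integer value".
public class FloatDemo {
    public static void main(String[] args) {
        double a = 2.5 - 0.5;
        System.out.println(a == Math.rint(a));   // true: exactly 2.0

        double b = 0.0;
        for (int i = 0; i < 10; i++) b += 0.1;   // "should" be 1.0
        System.out.println(b == Math.rint(b));   // false: b is 0.9999999999999999
    }
}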

Steve
0
5/3/2004 5:11:42 PM
Shayne Wissler wrote:

>"Robert C. Martin" <unclebob@objectmentor.com> wrote in message
>news:oarc9052482fdpk92refvi7mns2bmvo5sl@4ax.com...
>  
>
>>On Sun, 02 May 2004 23:31:20 GMT, Jeff Brooks <jeff_brooks@nospam.com>
>>wrote:
>>
>>    
>>
>>>OO is designed around what is known about human thought.
>>>      
>>>
>>Where did you get that notion?  OO was created by two programmers who
>>were messing around with ALGOL.  They created a syntax element that
>>created procedure stack frames on the heap and did not destroy them
>>when the procedure returned.  Those data structures became objects.
>>
>>OO was an accident of dynamic memory scope combined with block
>>structure.  Long after the fact some folks started talking about human
>>thought and "real world models" and the like.
>>    
>>
>
>You are very far off base here. Every programming language reflects, to some
>degree or other, the way the language designer thinks about programs.
>Therefore it reflects to some degree or other the way he thinks in general.
>
>It goes the other direction too. A programming language can shape the way
>you think about programs. The most creative of us are not restricted to this
>influence, but many can only think in terms that the original language
>designer had set.
>  
>
That sounds like it ought to be correct, since I don't think it's limited 
to computer languages.  Not everything written in one language can be 
accurately translated into another.  The Bible is a good example, as are 
the poems of Dante.  If Dante had been familiar with English he might have 
written "The Inferno" differently than he did.  The poetry would have 
been different.  Of course, even being familiar with English he might have 
conceived his writings first in his native Italian (he was Italian, 
wasn't he?) before transposing them into English.

Similar, perhaps, to how someone who thinks in C may conceive of their 
programs in their head using what they're familiar with from C before 
translating them into Java--and thereby diluting the final product of 
anything that could be thought inspirational from Java.  I remember a 
friend of mine telling me that after moving to Mexico it took a long 
time before he stopped converting conversations into and out of English 
before speaking.  Some time after becoming fluent in Spanish (no longer 
relying on near-real-time English translations in his head) did he 
eventually have his first dream in Spanish.  That, he thought, was a big 
step.

How long, do you think, does it take people to make a similar transition 
going from procedural to object?  Is going from C to C++ and to Java 
more of a migration of dialect than language?  Many of the curly 
languages seem more similar than not--similar approaches to program 
design (variables, iterating, parameters in order, etc.) but with a 
different vocabulary (new keywords, new libraries).

If you're hoping to write OO your results may be better if you think in 
OO.  If you want to write procedural your results may be better if you 
think procedural.  A program written with a mix of thought from both 
areas may not be as enjoyable to read (or use?) as one written in one.


-- 
..tom
remove email address' dashes for replies
opensource middleware at <http://isectd.sourceforge.net>
<http://gagne.homedns.org/~tgagne/>
0
tgagne (596)
5/3/2004 5:39:36 PM
On Mon, 03 May 2004 11:06:11 -0500, Robert C. Martin
<unclebob@objectmentor.com> wrote:
>>OO is designed around what is known about human thought.
>
>Where did you get that notion.  OO was created by two programmers who
>were messing around with ALGOL.  They created a syntax element that
>created procedure stack frames on the heap and did not destroy them
>when the procedure returned.  Those data structures became objects.

Is that true, that Simula only created instances on the stack?

>OO was an accident of dynamic memory scope combined with block
>structure.  Long after the fact some folks started talking about human
>thought and "real world models" and the like.  

Well, Simula was 1967 and Smalltalk got underway circa 1974, and much
of how we talk about OO comes thru Smalltalk (tho it seems we've lost
the habit of referring to messages ...).  The specific claim that OO
"is how people think" or "is how the real world is," hmm, make me get
out the Goldberg & Robson book, hey?  Make me *find* the G&R book, ...
Addison Wesley 1983.

Nope, they (in the preface) talk about Smalltalk as a language and
environment and way to manage complexity.  And it really is a badly
written book.  But it's about the only OO source I have handy that
reaches back before about 1985.

Hey, I hadn't seen this online before:
Xerox Loops Truckin' demo, c. 1983
http://www2.parc.com/istl/members/stefik/truckin.htm
Not terribly informative, but it brings back some memories of the time
I spent with Xerox ...

Anyway, short of quoting from the Simula paper, which I don't have at
hand (though I know it's been cited hereabouts before), your "messing
around" theory seems reasonable.

But I wonder.  I have here a dusty 600 page book about Smalltalk, and
it does talk about stack frames and memory management and whatnot, but
to what end?  What is interesting here is that it is supposed to be
completely obvious why these programming language styles and entities
are worth pursuing.  Back in those dim days, I suppose it was supposed
to be equally obvious why *any* programming language entity was worth
pursuing: because *any* programming language, in being a logical
exercise, was "how people think" and/or "how the real world is."
Latterly, it seems we decided to reserve this intuitive credit only to
object-oriented systems.  Quite interesting, to me, at least.

J.




0
5/3/2004 5:50:34 PM
alfredo@ncs.es (Alfredo Novoa) wrote in message news:<40964e27.13265014@news.wanadoo.es>...
> 
> Pointer based inheritance is the problem. Specialization by constraint
> is the solution.

That is, Design by Contract.  Which helps maintain the LSP, by the
way.
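For instance (a minimal sketch using plain Java assertions as stand-ins
for real contracts; the Parser classes are hypothetical): the contract
form of the LSP says an override may demand no more than the ancestor
(preconditions may only weaken) and promise no less (postconditions may
only strengthen).

// Run with java -ea to enable the assertions.
class Parser {
    int tokenCount(String s) {
        assert s != null;                        // precondition
        int n = s.isEmpty() ? 0 : s.trim().split("\\s+").length;
        assert n >= 0;                           // postcondition
        return n;
    }
}

class LenientParser extends Parser {
    @Override
    int tokenCount(String s) {
        if (s == null) return 0;                 // weaker precondition: null tolerated
        return super.tokenCount(s);              // same postcondition preserved
    }
}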

> 
> >  Plus one bad example does not disprove the rule.
> 
> One bad example disprove any rule, but there are infinite examples.

No, I meant the example is a bad one.  It fails LSP, true, but I've
got plenty of bad examples that fail LSP.  We need a good example that
fails LSP.  One where the true behavior is expected, and assumed by
all (or shown by contract), given inheritance rules.


> But what I want to prove is that OO inheritance is not subtyping. OO
> inheritance is related to pointer based delegation.

Pointer based delegation is a means to an end.  It isn't needed for
inheritance.  And *I* never said OO inheritance was subtyping.  In
fact, I said it was not.  I'm not sure who did, and I don't feel like
looking through past posts.


> No, this is one possible definition of inheritance. One which has many
> problems. Inheritence is not a formal term and it does not have a
> precise meaning. But this is not the case of subtyping.

This is the definition of inheritance in the OO domain, period. 
Descendants *are* substitutable for their ancestors, given that
nothing else has broken.


> You are biased by the current primitive OO languages, it is not the
> only possible definition, but it is the only you can find in the
> market currently. There are better ways to define inheritance like to
> base inheritance in subtyping.

See that link I gave you for more on this.


> Define domain.
> 
> A domain is a real world problem you want to solve. By the way it is
> also a fuzzy and confusion prone term, like most OO terms.

Most OO terms are not confusing.  People might disagree on their
minuscule differences, and often tend to make mountains out of
molehills, to use the cliche.  But to me, those often come from people
who aren't even sure what it is they are talking about.


> Then which term do you use for a class that inherits from another
> class?

Descendant.  The class being inherited from is known as the ancestor.

Steve
0
5/3/2004 6:47:21 PM
"Thomas Gagn�" <tgagne@wide-open-west.com> wrote in message
news:-pOdnek4BuhIHgvdRVn-vw@wideopenwest.com...

> How long, do you think, does it take people to make a similar transition
> going from procedural to object?  Is going from C to C++ and to Java
> more of a migration of dialect than language?  Many of the curly
> languages seem more similar than not--similar approaches to program
> design (variables, iterating, parameters in order, etc.) but with a
> different vocabulary (new keywords, new libraries).
>
> If you're hoping to write OO your results may be better if you think in
> OO.  If you want to write procedural your results may be better if you
> think procedural.  A program written with a mix of thought from both
> areas may not be as enjoyable to read (or use?) as one written in one.

I don't think of it in these terms. I think the best procedural programmers
will naturally do well with OO, because OO is just a natural extension of
procedural. A good procedural programmer will realize that there's nothing
special about the built-in data types, that they are there only because
someone put them there (perhaps to support a CPU that had them only because
someone put them there), and will therefore want the ability to create his
own data types as appropriate to the context. Coming from a C and Fortran
background, when I first learned OO it wasn't so much an experience of
something new, but of better support for the way I already thought of how to
program.

There are other people, like some of the anti-OO zealots that hang out here,
who take what others have given them (i.e. procedural) and regard it as a
dispensation from heaven or something, absorbing it as a sponge does water,
without understanding or evaluation, and so cannot comprehend OO. They see
it as a kind of blasphemy against their own religion. But these people would
have been stuck in assembly if that's what they'd learned first.


Shayne Wissler


0
5/3/2004 7:11:32 PM
"Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message
news:yzAjc.32058$YP5.2531509@attbi_s02...
>
> "Alfredo Novoa" <alfredo@ncs.es> wrote in message
> news:408ec7ea.1123175@news.wanadoo.es...
> > On Tue, 27 Apr 2004 19:20:39 GMT, "Shayne Wissler"
> > <thalesNOSPAM000@yahoo.com> wrote:
> >
> >
> > >We could reword it to take people like you into account: OO is closer
> > >to the way humans ought to think.

Ought to? For what reason?

> > Closer than what?
>
> Procedural.

That's funny, since some of the worst procedural code I've yet seen has been
in OO languages, albeit spread out (in an ad hoc way) over many classes.
Just because it's spread across methods in classes doesn't mean it's not
procedural.

> > Logic programming is closer to the way humans ought to think than OO.
>
> Nope.

Yup. Well, unless that word "logic" is somehow divorced from "programming",
in which case I really hope we don't wind up on the same team...

Logic and functional both give optimization power to the computer, which is
(presumably) where we'd like to go if we actually want to benefit from both
advances in hardware and software, and from the benefits of (drumroll
please) logic.

> > OO is rather fuzzy and sloppy.
>
> Depends on who's writing it.

As is pointed out in another message, human beings create and influence
languages, and in turn are influenced (and created?) by language. So since
(not if) it's so easy to go horribly wrong in Java and other OO languages, a
reasonable inference is that the languages give the programmer too much
rope, and from experience I can say that there's no commensurate benefit
(other than making the language designer's life easier, which I don't care
about).

- Eric


0
ekaun (22)
5/3/2004 7:27:28 PM
"Universe" <universe@tAkEcovadOuT.net> wrote in message
news:jqhu80p0tbhr5ov90k5pflblrkqigtbqb1@4ax.com...
> "Mark S. Hathaway" <hathawa2@marshall.edu> wrote:
>
> > Our idea of sets and OOP make us think this system could
> > and perhaps should restrain our wild hare-brained human
> > approaches to something another wild hare-brained human
> > might be able to read and manage.
>
> The superior ability of the OO conceptual modelling paradigm to reduce
> complexity, in most cases, in comparison to most other conceptual
> paradigms

What others have you looked at? And what type of complexity are you reducing
with OO?

> chiefly stems from the closer correspondence the OO
> modelling paradigm has with the way the world operates.

Hmmm. How precisely do you distinguish the way the world operates from what
we notice about the world? And how exactly does the way the world operates
relate to computers, which use 0s and 1s... unless you're caught in a
Matrix-style argument of some sort?

Hint: we're developing systems to be of use to the world. To be useful, they
must be basically correct. We define correctness via (after all is said and
done) facts and predicates and assertions and such - logical statements. We
have a very successful history with math and logic. None of this implies
that our systems must in some way "look like" (a very fuzzy phrase) the
"real world" (another one).

> Responding to those who argue that we don't really know how humans
> reason, its more that:
> a) the world operates according to Z
> b) humans notice the world in operation according to Z

How do you distinguish A and B?

> c) most humans find that for the most complex circumstances in
> various domains, understanding and leverage are maximized when
> employing the Z conceptual modelling paradigm,

A large leap - in fact, a groundless assertion. What is a "modelling
paradigm", anyway? Do you mean that because the world "contains objects",
that an "object-based modelling paradigm" is the best? Read some Hume, just
for kicks.

> Believe me (actually do a proper scientific study) it's as obvious and
> straightforward as, "abc".

What if you don't use this alphabet?

- erk


0
ekaun (22)
5/3/2004 7:32:54 PM
"Eric Kaun" <ekaun@yahoo.com> wrote in message
news:A2xlc.737$dd.268@newssvr33.news.prodigy.com...

> > > >We could reword it to take people like you into account: OO is closer
> > > >to the way humans ought to think.
>
> Ought to? For what reason?

Because concepts are not arbitrary.

> > > Closer than what?
> >
> > Procedural.
>
> That's funny, since some of the worst procedural code I've yet seen has
> been in OO languages, albeit spread out (in an ad hoc way) over many classes.
> Just because it's spread across methods in classes doesn't mean it's not
> procedural.

OO is a superset of procedural. It is closer to how we think because it is
flexible enough to embody more of what we think (that is, for those of us
who do actually think).

> > > Logic programming is closer to the way humans ought to think than OO.
> >
> > Nope.
>
> Yup. Well, unless that word "logic" is somehow divorced from
> "programming",
> in which case I really hope we don't wind up on the same team...
>
> Logic and functional both give optimization power to the computer, which
> is (presumably) where we'd like to go if we actually want to benefit from
> both
> advances in hardware and software, and from the benefits of (drumroll
> please) logic.

Thankfully this viewpoint is constrained to fringe groups.

> > > OO is rather fuzzy and sloppy.
> >
> > Depends on who's writing it.
>
> As is pointed out in another message, human beings create and influence
> languages, and in turn are influenced (and created?) by language. So since

That we are "created" by language is another (thankfully) fringe notion.

> (not if) it's so easy to go horribly wrong in Java and other OO languages,
> a reasonable inference is that the languages give the programmer too much
> rope, and from experience I can say that there's no commensurate benefit
> (other than making the language designer's life easier, which I don't care
> about).

"From experience" == "I don't have a damn clue."


Shayne Wissler


0
5/3/2004 7:39:05 PM
"Jeff Brooks" <jeff_brooks@nospam.com> wrote in message
news:%dDjc.288974$Ig.234395@pd7tw2no...
> [much snipping]
>
> Actually, i think OO is the way people actually think.

I don't think you're right. Do you think that I'm correct in not thinking
that you're right, or am I thinking about it incorrectly? Did my
think(Object subject) method throw an exception?

> Lets look at a simple example of a chair. People think of a chair as a
> thing. The thing has some properties to it (it likely has 4 legs, is
> meant to be sat on, etc).
>
> When we learn about our first chair we treat it as a unique object. When
> we see our second chair we see it has similar properties to the first
> chair and we begin to form an understanding of chairness (the type chair).
>
> If I said i have a unique chair i built and asked if a person could sit
> on it what would you say? Odds are you know that chair like things are
> meant to be sat on so my unique chair can be sat on.

Yes, but it all depends on the axis of abstraction. Couldn't chairness mean
4-legged? How about "having a seat"? Is resting on the ground significant?

> Another example is a Person. If i asked you if my friend (that you have
> never met) has blood what would you say? Odds are you would say they
> have blood because you understand a what a person is and you know a
> person has blood.

And the vast majority of our common-sense observations have nothing to do
with the system we need to create.

> A humans ability to communicate with other people isn't based on their
> ability to send the complete information they know but rather we both
> have an understanding of basic classes of things and we use the same
> symbolic representation to define them which allows us to communicate.

Agreed, and that has nothing to do with a computer, nor does it imply that
we need to express our systems with the initial set of common-sense
observations we make. Manipulative ability, and correctness, and power, and
ability to automate all play a part.

> People understand and interact with the world based classes of things.

Really? I hardly ever touch real-world things in my job. They're mostly
abstractions.

In most domains there are many abstractions, plenty of confusion between
abstractions and concrete things, much more confusion regarding workflows
and processes and generally anything asynchronous, oh and then there's
technical architecture which is orthogonal, etc.

> This allows me to use a computer i have never seen before because I know
> that computers behave like other computers.

Hmmm. Really? The O/S, virtual machines, languages and all that are
essentially all the same thing and thus you can use them all at will? Wow.

> Quotes from "Design Principles Behind Smalltalk" - Byte mag, Aug 1981
>
> "In designing a language for use with computers, we do not have to look
> far to find helpful hints. Everything we know about how people think and
> communicate is applicable. The mechanisms of human thought and
> communication have been engineered for millions of years, and we should
> respect them as being of sound design.

Human digestion is an equally respectable model, isn't it? Fecal movement is
surely just as solid a basis for design, based on the above comments. What a
load of crap.

> Moreover, since we must work with
> this design for the next million years, it will save time if we make our
> computer models compatible with the mind, rather that the other way
around."

This may simply be false, since this implies that language has no impact on
the way the mind works. Evidently we just have to settle for how the
language designers were thinking when they invented the language... or at
least reached their deadline.

> "The mind observes a vast universe of experience, both immediate and
> recorded. One can derive a sense of oneness with the universe simply by
> letting this experience be, just as it is. However, if one wishes to
> participate, literally to take a part, in the universe, one must draw
> distinctions. In so doing one identifies an object in the universe,

No, not every distinction is about an "object"; or if it is, then it's an
object in a not-necessarily-OO sense of the term.

> and
> simultaneously all the rest becomes not-that-object. Distinction by
> itself is a start, but the process of distinguishing does not get any
> easier. Every time you want to talk about "that chair over there", you
> must repeat the entire processes of distinguishing that chair. This is
> where the act of reference comes in: we can associate a unique
> identifier with an object, and, from that time on, only the mention of
> that identifier is necessary to refer to the original object. "

This paragraph identifies the two major flaws in OO: the insistence upon
rooting each function in a single distinguished class (making for arbitrary
assignments which then, in a language like Java, inherently produces package
dependencies), and the notion that the Object ID is significant in some
sense (it's not).

> "Classification is the objectification of nessness. In other words, when
> a human sees a chair, the experience is taken both literally an "that
> very thing" and abstractly as "that chair-like thing". Such abstraction
> results from the marvelous ability of the mind to merge "similar"
> experience, and this abstraction manifests itself as another object in
> the mind, the Platonic chair or chairness. "

We also have the marvelous ability to recognize phenomena whose source (and
"target") are non-obvious (or even undefined), to abstract "relationships"
into relations, etc. But apparently those things are all subservient to
"objects".

I'm not condemning type theory (the only adequate support for OO). I'm
condemning the mantra:
"Everything-is-an-object-except-when-it's-not-in-which-case-we-invent-a-patt
ern-to-cover-our-inadequate-language."

> The complete article is available here:
>
> http://users.ipa.net/%7edwighth/smalltalk/byte_aug81/design_principles_behind_smalltalk.html

The above excerpts convince me that it just ain't worth the time. I really
thought the computing industry would be a little further along by now, but
it's LOGO from top to bottom (hey! maybe the whole universe is that way!)

- erk


0
ekaun (22)
5/3/2004 7:57:55 PM
On Mon, 03 May 2004 16:20:10 GMT, "Shayne Wissler"
<thalesNOSPAM000@yahoo.com> wrote:

>
>"Robert C. Martin" <unclebob@objectmentor.com> wrote in message
>news:oarc9052482fdpk92refvi7mns2bmvo5sl@4ax.com...
>> On Sun, 02 May 2004 23:31:20 GMT, Jeff Brooks <jeff_brooks@nospam.com>
>> wrote:
>>
>> >OO is designed around what is known about human thought.
>>
>> Where did you get that notion?  OO was created by two programmers who
>> were messing around with ALGOL.  They created a syntax element that
>> created procedure stack frames on the heap and did not destroy them
>> when the procedure returned.  Those data structures became objects.
>>
>> OO was an accident of dynamic memory scope combined with block
>> structure.  Long after the fact some folks started talking about human
>> thought and "real world models" and the like.
>
>You are very far off base here. Every programming language reflects, to some
>degree or other, the way the language designer thinks about programs.
>Therefore it reflects to some degree or other the way he thinks in general.
>
>It goes the other direction too. A programming language can shape the way
>you think about programs. The most creative of us are not restricted to this
>influence, but many can only think in terms that the original language
>designer had set.

I have no problem with that.  However, if you accept that then *all*
programming paradigms are designed around the way humans think, and OO
is not special in that regard.  I have no problem with that either. 

My problem comes when someone claims that OO is closer to human
thought patterns than other forms of programming.  My problem comes in
when someone claims that OO is better *because* it is closer to human
thought patterns than other forms of programming.


-----
Robert C. Martin (Uncle Bob)
Object Mentor Inc.
unclebob @ objectmentor . com
800-338-6716

"Distinguishing between the author
and the writing is the essence of civilized debate."
           -- Daniel Parker
0
unclebob2 (2724)
5/3/2004 8:53:59 PM
"Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message news:<yzAjc.32058$YP5.2531509@attbi_s02>...

Sorry for the delay, but my news server loses lots of messages :(

> > >We could reword it to take people like you into account: OO is closer
> > >to the way humans ought to think.
> >
> > Closer than what?
> 
> Procedural.

You probably mean classic structured programming, but there are far
more advanced computational models than the one used in classic
structured programming.

> > Logic programming is closer to the way humans ought to think than OO.
> 
> Nope.

Do you know what logic programming is?

> > OO is rather fuzzy and sloppy.
> 
> Depends on who's writing it.

Who is not?


Regards
  Alfredo
0
alfredo (205)
5/3/2004 8:58:05 PM
Robert C. Martin wrote:

> My problem comes when someone claims that
> OO is closer to human thought patterns
> than other forms of programming.  
> My problem comes in when someone claims that OO is better
> *because* it is closer to human thought patterns
> than other forms of programming.

I have a problem with statements like
"C is closer to 'the machine'
  than other high level programming languages."

These "topologies" and those of the "Flat-Landers"
are found nowhere else in science or mathematics.

0
5/3/2004 9:09:55 PM
"Robert C. Martin" <unclebob@objectmentor.com> wrote in message
news:08cd90triusjj84qpe5q3fpgnbqeuk2og6@4ax.com...

> >You are very far off base here. Every programming language reflects, to
> >some degree or other, the way the language designer thinks about programs.
> >Therefore it reflects to some degree or other the way he thinks in
> >general.
> >
> >It goes the other direction too. A programming language can shape the way
> >you think about programs. The most creative of us are not restricted to
> >this influence, but many can only think in terms that the original
> >language designer had set.
>
> I have no problem with that.  However, if you accept that then *all*
> programming paradigms are designed around the way humans think, and OO
> is not special in that regard.  I have no problem with that either.
>
> My problem comes when someone claims that OO is closer to human
> thought patterns than other forms of programming.  My problem comes in
> when someone claims that OO is better *because* it is closer to human
> thought patterns than other forms of programming.

Why do you have a problem with that claim? Is it because you think that
there are no better or worse ways of thinking, that one way or another is
just a matter of personal preference, that a witchdoctor is no better at
thinking than a scientist?


Shayne Wissler


0
5/3/2004 9:11:19 PM
"Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message
news:Xzylc.22462$kh4.1292492@attbi_s52...

> Why do you have a problem with that claim? Is it because you think that
> there are no better or worse ways of thinking, that one way or another is
> just a matter of personal preference, that a witchdoctor is no better at
> thinking than a scientist?

Whoops--I meant to say "a scientist is no better at thinking than a
witchdoctor".


Shayne Wissler


0
5/3/2004 9:12:42 PM
On 3 May 2004 11:47:21 -0700, stevenwurster@lycos.com (Steven Wurster)
wrote:

>> One bad example disprove any rule, but there are infinite examples.
>
>No, I meant the example is a bad one.

What is bad about "circles are ellipses"?

>  It fails LSP, true, but I've
>got plenty of bad examples that fail LSP.  We need a good example that
>fails LSP.

It seems that you define a good example as one that does not fail
the LSP.

>> But what I want to prove is that OO inheritance is not subtyping. OO
>> inheritance is related to pointer based delegation.
>
>Pointer based delegation is a means to an end.

It seems just the contrary. OO inheritance is an attempt to justify
the previously implemented pointer-based delegation. But OO
inheritance lacks any sound theoretical basis like subtyping. It is an
ad hoc approach made a posteriori.

>  It isn't needed for
>inheritance.  And *I* never said OO inheritance was subtyping.  In
>fact, I said it was not.  I'm not sure who did, and I don't feel like
>looking through past posts.

It is just what I said at the beginning of the thread, and I received a
lot of objections.

>> No, this is one possible definition of inheritance. One which has many
>> problems. Inheritence is not a formal term and it does not have a
>> precise meaning. But this is not the case of subtyping.
>
>This is the definition of inheritance in the OO domain

In the OO value set? :)

The OO community is not the owner of the term "inheritance". We might
call subtyping "inheritance" if we want.

>, period. 
>Descendants *are* substitutable for their ancestors, given that
>nothing else has broken.

Liskov's paper was titled: "Family Values: A Behavioral Notion of
Subtyping"

http://citeseer.ist.psu.edu/liskov94family.html

And you agree with me that what it describes can't be called
subtyping.

If this is not a flaw, what is? 

The paper talks about subtypes and supertypes all the time. It's all
about subtyping.

The funniest thing is that with the LSP a circle can not be
substituted by the same circle defined as an ellipse. It is a
violation of mathematical substitutability: if a = b then a can be
substituted by b.

>> A domain is a real world problem you want to solve. By the way it is
>> also a fuzzy and confusion prone term, like most OO terms.
>
>Most OO terms are not confusing.  People might disagree on their
>miniscule differences, and often tend to make mountains out of
>molehills, to use the cliche.

Object sometimes means variable, sometimes means value, sometimes
means type, sometimes means thing, etc. Those are huge differences.

Class sometimes means type, sometimes means type definition, sometimes
means type implementation, sometimes means entity, etc.

It is similar for almost all OO terms.

OO terminology is chaotic. I have seen lots of discussions arise from
different interpretations of the OO terms, and each author uses the
terms as he likes.

> But to me, those often come from people
>who aren't even sure what it is they are talking about.

Fuzziness is exploited by tricksters who do not know what they are
talking about to cheat the gullible, who know even less. I see that in
every product presentation.

>> Then which term do you use for a class that inherits from another
>> class?
>
>Descendant.  The class being inherited from is known as the ancestor.

A fuzzier term.


Regards
  Alfredo
0
alfredo (205)
5/3/2004 10:10:42 PM
On Mon, 3 May 2004 18:26:30 +0200, "Ilja Preuß" <preuss@disy.net>
wrote:

>Alfredo Novoa wrote:
>
>> Circles are ellipses breaks LSP
>> Circles are ellipses
>> --------------------------------
>> LSP is broken QED
>
>The LSP doesn't say anything about an "ISA" relationship. It's a definition
>for the term "subtype".

Do you think that the "ISA" relationship does not have anything to do
with type membership?

Regards
  Alfredo

0
alfredo (205)
5/3/2004 10:12:11 PM
On Mon, 03 May 2004 11:11:46 -0500, Robert C. Martin
<unclebob@objectmentor.com> wrote:

>On Mon, 03 May 2004 12:34:29 GMT, alfredo@ncs.es (Alfredo Novoa)
>wrote:
>
>>
>>Types are sets, the ellipse set contains the circle set. The members
>>of the circle set are mebers of the ellipse set.
>
>Would you store the data for Circles in a table designed for Ellipses?

We should clearly distinguish between model and implementation.

But I don't see big problems in storing a few circles in a table
designed for ellipses. It is like storing an integer value like 1.0
in a big table designed for reals, or a byte value in a table designed
for int32 values.

If you have to store lots of circles then it would be better to have
a physical table designed for circles and to present both tables as a
union at the logical level.
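
A sketch of what I mean, in Java-like pseudocode (the names are mine,
just for illustration):

class EllipseRow {
    final double semiMajor, semiMinor;   // the "table" is designed for ellipses
    EllipseRow(double a, double b) { semiMajor = a; semiMinor = b; }
    // A circle fits with no loss: the radius goes in both axes.
    static EllipseRow fromCircle(double r) { return new EllipseRow(r, r); }
    boolean isCircle() { return semiMajor == semiMinor; }
}

The logical union of a dedicated circle table with this one is then
just a view that maps radius to (radius, radius).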

BTW I am currently working on this kind of implementation problem for
the DBMS I am implementing (in which circles are ellipses :). 


Regards
  Alfredo
0
alfredo (205)
5/3/2004 10:28:36 PM
On 3 May 2004 10:11:42 -0700, stevenwurster@lycos.com (Steven Wurster)
wrote:

>> Integer is also subtype of rational and rational subtype of real, etc.
>> 
>> For example:
>> 
>> var a: real;
>> a := 2.5;
>> a := a - 0.5;
>> if a is integer then 
>>   WriteLn('Correct')
>> else
>>   WriteLn('OO is mad');
>> 
>> A good language should show 'Correct'.
>> 
>
>This has absolutely *nothing* to do with OO.

Do you think that subtyping has nothing to do with OO?

Curious!

> In fact, it has a whole
>lot to do with the underlying architecture, and the IEEE specs. 
>Especially if the original number (2.5 in this example) cannot be
>expressed as a finite value in the internal representation (i.e.
>binary for most systems).

Frankly, I would not invest in an architecture that cannot express
2.5 with accuracy :-)

Seriously, it is an implementation problem, not a model problem. For
instance we could raise an exception if we cannot decide whether a
real is an integer or not, or we could use arbitrary-precision
arithmetic only when it is needed.

> Some languages or libraries do provide
>support for rational numbers and/or fractions, but the performance
>penalties often outweigh the benefits.

Not always. Sometimes arbitrary-precision arithmetic is a requirement.
I know people who would reject any system that does not implement
arbitrary-precision arithmetic.

>Also note that 2 is a real number, not just an integer.

Of course, 2 is member of several types.

if 2 is real then
  WriteLn('Correct');

if 2 is integer then
  WriteLn('Correct');

if 2 is even then
  WriteLn('Correct');

etc.

Regards
  Alfredo
0
alfredo (205)
5/3/2004 10:47:44 PM
"JXStern" <JXSternChangeX2R@gte.net> wrote in message
news:7c0d905fs3f0v3scocre7jbhbee44ffv1o@4ax.com...
> On Mon, 03 May 2004 11:06:11 -0500, Robert C. Martin
> <unclebob@objectmentor.com> wrote:
> >>OO is designed around what is known about human thought.
> >
> >Where did you get that notion.  OO was created by two programmers
who
> >were messing around with ALGOL.  They created a syntax element that
> >created procedure stack frames on the heap and did not destroy them
> >when the procedure returned.  Those data structures became objects.
>
> Is that true, that Simula only created instances on the stack?

No, RCM spouts baloney in support of his narrow hacker view and
practices.

From Kristen Nygaard's web site:
http://heim.ifi.uio.no/~kristen/FORSKNINGSDOK_MAPPE/F_OO_start.html

"How Object-Oriented Programming Started

by Ole-Johan Dahl and Kristen Nygaard,
Dept. of Informatics, University of Oslo

SIMULA I (1962-65) and Simula 67 (1967) are the two first
object-oriented languages. Simula 67 introduced most of the key
concepts of object-oriented programming: both objects and classes,
subclasses (usually referred to as inheritance) and virtual
procedures, combined with safe referencing and mechanisms for bringing
into a program collections of program structures described under a
common class heading (prefixed blocks).

The Simula languages were developed at the Norwegian Computing Center,
Oslo, Norway by Ole-Johan Dahl and Kristen Nygaard. Nygaard's work in
Operational Research in the 1950s and early 1960s created the need for
precise tools for the description and simulation of complex
man-machine systems. In 1961 the idea emerged for developing a
language that both could be used for system description (for people)
and for system prescription (as a computer program through a
compiler). Such a language had to contain an algorithmic language, and
Dahl's knowledge of compilers became essential...."

Elliott
--
http://www.univercenet.net
Madsen, together with Nygaard, helped create Beta, the successor to
Simula, the first OOPL.  Madsen continues to write the
simulationist-centered truth about OO.  He continues to draw the line
in the sand for genuine, domain-modelling-oriented OO software science
and engineering against the establishment's maximum-ROI-oriented,
anti-simulationist, pragmatism-based faux OO, as represented by
XP/Alliance.  His site:
http://www.daimi.au.dk/~olm/


0
universe2 (613)
5/3/2004 10:48:27 PM
Alfredo Novoa wrote:

>>>Circle is a subtype of ellipse and the contrary is false. All circles
>>>are ellipses, but not all ellipses are circles.
>>
>>Perhaps from a mathematical view point but OO isn't based on mathematics.
> 
> Nor in common sense.

Mathematics isn't based on common sense. If you're going to claim that
one model is more correct than another model, you should make sure
your model doesn't have the same flaw that you're faulting the
opposing model with.

>>Mathematics cant map reality.
> 
> Astonishing statement!

It's a true statement.

> Mathematics is as old as mankind.

That's a false statement.

Jeff Brooks
0
jeff_brooks (199)
5/3/2004 11:00:04 PM
JXStern wrote:

> Well, Simula was 1967 and Smalltalk got underway circa 1974, and much
> of how we talk about OO comes thru Smalltalk (tho it seems we've lost
> the habit of referring to messages ...).  The specific claim that OO
> "is how people think" or "is how the real world is," hmm, make me get
> out the Goldberg & Robson book, hey?  Make me *find* the G&R book, ...
> Addison Wesley 1983.
> 
> Nope, they (in the preface) talk about Smalltalk as a language and
> environment and way to manage complexity.  And it really is a badly
> written book.  But it's about the only OO source I have handy that
> reaches back before about 1985.

Here is an article from Byte magazine in 1981, written by Daniel
Ingalls. He was one of the people on the project that made Smalltalk
and the original GUI. In the article he talks about the design of
Smalltalk.

Design principles behind smalltalk - byte 1981
http://users.ipa.net/%7edwighth/smalltalk/byte_aug81/design_principles_behind_smalltalk.html

I don't think OO "is how the real world is" but I do think people 
understand the world by classifications of things.

Jeff Brooks
0
jeff_brooks (199)
5/3/2004 11:35:18 PM
On Mon, 03 May 2004 23:35:18 GMT, Jeff Brooks <jeff_brooks@nospam.com>
wrote:
>Here is an article that was included in byte magazine in 1981 written by 
>Daniel Ingalls. He is one of the people on the project that made 
>Smalltalk and the original GUI. In the article he talks about the design 
>of Smalltalk.
>
>Design principles behind smalltalk - byte 1981
>http://users.ipa.net/%7edwighth/smalltalk/byte_aug81/design_principles_behind_smalltalk.html
>
>I don't think OO "is how the real world is" but I do think people 
>understand the world by classifications of things.

Righto.  I probably have the article lying around here somewhere, but
the pointer is much handier.

So, by 1981, people were starting to use the spacey language about the
world and the mind, but had not really formalized it yet. 

 :)

"We have said that a computer system should provide models that are
compatible with those in the mind. Therefore:
Objects: A computer language should support the concept of "object"
and provide a uniform means for referring to the objects in its
universe."

The article posits Smalltalk as a tool for creating your own language
as a tool for, um, whatever.  That is one of the ideas kicked around in
Smalltalk, and in a more technical book, "The Art of the Metaobject
Protocol".  Another citation from this article:

"Classification: A language must provide a means for classifying
similar objects, and for adding new classes of objects on equal
footing with the kernel classes of the system."

I disagree with this on many grounds, and I don't think current styles
and trends are much in agreement with this, either.

Thanks for the pointer.

J.

0
5/3/2004 11:53:28 PM
Why do programming languages belong closer to science than the 
humanities?  Is it heresy to suggest or consider that program design 
(including language design) is more similar to artwork (or craft work as 
some prefer) than science?

E. Robert Tisdale wrote:

> Robert C. Martin wrote:
>
>> My problem comes when someone claims that
>> OO is closer to human thought patterns
>> than other forms of programming.  My problem comes in when someone 
>> claims that OO is better
>> *because* it is closer to human thought patterns
>> than other forms of programming.
>
>
> I have a problem with statements like
> "C is closer to 'the machine'
>  than other high level programming languages."
>
> These "topologies" and those of the "Flat-Landers"
> are found nowhere else in science or mathematics.
>

-- 
..tom
remove email address' dashes for replies
opensource middleware at <http://isectd.sourceforge.net>
<http://gagne.homedns.org/~tgagne/>
0
tgagne (596)
5/4/2004 2:09:41 AM
JXStern wrote:

> <snip>
>
>Another citation from this article:
>
>"Classification: A language must provide a means for classifying
>similar objects, and for adding new classes of objects on equal
>footing with the kernel classes of the system."
>
>I disagree with this on many grounds, and I don't think current styles
>and trends are much in agreement with this, either.
>
>
>  
>
What do you disagree with;

   1. a language must provide a means for classifying similar objects and
   2. for adding new classes of objects on equal footing with the kernel
      classes of the system?

I'm also curious what current styles or trends you think are in 
opposition to this, and whether those trends are moving forward or 
backward.

-- 
..tom
remove email address' dashes for replies
opensource middleware at <http://isectd.sourceforge.net>
<http://gagne.homedns.org/~tgagne/>
0
tgagne (596)
5/4/2004 2:18:40 AM
Robert C. Martin wrote:

> My problem comes when someone claims that OO is closer to human
> thought patterns than other forms of programming.  My problem comes in
> when someone claims that OO is better *because* it is closer to human
> thought patterns than other forms of programming.

Perhaps instead of listing your problems it would be better to post your 
viewpoint and provide supporting arguments for it, or provide counter 
arguments against my viewpoint.

Jeff Brooks
0
jeff_brooks (199)
5/4/2004 3:53:07 AM
> >>The world isn't hierarchical. Languages like Java deal with this with 
> >>interfaces, Smalltalk deals with this via dynamic typing. I'm not sure 
> >>what you think sets can do that existing programming languages can't do.
> > 
> > It is not a matter of "can't do", because they are all Turing
> > Equivalent. It is a matter of human convenience (maintainability).
> 
> How are set-based programs easier to maintain?
> 
> > Your "customer" example is essentially a navigational database
> > hard-wired into app code. App code is a sh*tty place to store
> > relationship schemas IMO. Maybe you like it and like navigational
> > techniques (for some odd reason), but I don't. They happily died in
> > the 70's thanks to Dr. Codd, but OO zealots resurrected it from the
> > dead for OO and OODBMS. Does this mean goto's are coming back?
> > (Navigational is the Goto of data structures.)
> 
> I assume by relationship schema you are refering to the types of 
> instance variables. For example:
> 
> class T
> {
> 	A a;
> 	B b;
> }
> 
> Do you think specifying the types of the variables a, and b defines the 
> relationships to other types?

I am not sure what you are asking. However, it is not just trees
versus sets, but WHERE to store noun relationships. I prefer to manage
those in a database, not app code. It is easier to study and more
consistent from developer-to-developer. It is a lot quicker to
dump/print schemas than to scour source code for relationship info.

-T-
0
topmind (2124)
5/4/2004 4:15:14 AM
Alfredo Novoa wrote:
> On Mon, 3 May 2004 18:26:30 +0200, "Ilja Preuß" <preuss@disy.net>
> wrote:
>
>> Alfredo Novoa wrote:
>>
>>> Circles are ellipses breaks LSP
>>> Circles are ellipses
>>> --------------------------------
>>> LSP is broken QED
>>
>> The LSP doesn't say anything about an "ISA" relationship. It's a
>> definition for the term "subtype".
>
> Do you think that the "ISA" relationship does not have anything to do
> with type membership?

Yes, that's what I am thinking.

Take care, Ilja


0
it3974 (470)
5/4/2004 6:59:44 AM
"Jeff Brooks" <jeff_brooks@nospam.com> wrote in message
news:qlBkc.306859$Pk3.256340@pd7tw1no...
> cstb wrote:
>
> > Jeff Brooks:
> >
> >>... Actually, i think OO is the way people actually think.
> >
> > There appears to be a relationship, yes.
> > However, people are capable of thinking in ways that are much
> > richer than the results obtained by orienting on Objects alone.
>
> Are you sure?
>
> The research that created Smalltalk was done by looking at how very
> young children interact and think about the world and they based the GUI
> + Smalltalk on that (the first gui was made by this research group).
> This research allows us to understand the basics of understanding
> because they looked at the primitive thoughts of people.

This is just great - having advanced philosophy, math, logic, linguistics,
and science over thousands of years, it's time to base our computer systems
on the interactions of young children.

Should we introduce CPU nap times as well?

> For example:
>
> To interact with a thing you have to identify it. Children can identify
> things by pointing at them even if they don't know the word for it. This
> resulted in a pointing device being created called the mouse which would
> allow people to point at what they want to use. All access to objects in
> Smalltalk is done via references so there is a uniform way to "point"
> at an object in code.

And this is inadequate. A HUGE amount of code is devoted to "finding"
objects when you DON'T have a pointer handy - in other words, queries
against the mass of objects, based on various attributes. If all you have is
pointers, how do you distinguish the values they point to (directly or
indirectly)? Of course the values matter - why then waste the time
traversing networks of pointers, when the computer should be able to find
what you want based on your logical criteria?
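
For example, with a hypothetical Customer class (nothing from this
thread, purely illustrative), the pointer-only style forces you to
write the search by hand:

import java.util.*;

class Customer {
    final String name;
    final double balance;
    Customer(String name, double balance) {
        this.name = name;
        this.balance = balance;
    }
}

class Query {
    // Hand-written traversal: scan every object, test the attribute.
    static List<Customer> inDebt(Collection<Customer> all) {
        List<Customer> result = new ArrayList<Customer>();
        for (Customer c : all)
            if (c.balance > 0)
                result.add(c);
        return result;
    }
    // versus stating the criterion once, declaratively:
    //   SELECT * FROM customers WHERE balance > 0
}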

> Children can't read well, but they can identify shapes and understand
> how to move things. So they concluded making an interface based on
> shapes and moving them is more natural than text interfaces.

Natural is not always better. Toilets are unnatural, as are clothes. I
wouldn't want to do without either, and would certainly not want my
coworkers to give them up.

> Allowing things in a computer that are different to behave in similar
> ways allows people to learn them faster and those things feel more
> natural.

So everything has the "doYerThang(Object whatever)" method?

> This is why actions like opening a document is done in the same
> way no matter what type of document it is in a GUI. The concept of
> allowing different things to behave in similar ways effects both the GUI
> and the programming language.
>
> I think people can think in more complex ways as they get older but that
> doesn't mean they don't think in an object oriented way.

And it doesn't mean they do. We got along well for many years without OO,
and in fact failed to fully explore functional and logical programming (and
even relations) in any commercially significant way.

> Children don't
> understand logic, but they understand objects and classifications. I
> think people can learn logic but i think we understand it by
> understanding things, and classifications.

No, that's not true. Think predicates, and much opens up to you.

> Another way of putting it is we can program different types of languages
> using object oriented languages. That doesn't mean that OO isn't at the
> core of the new languages.

Yes, it does. What is it Guy Steele said about LISP? "If you program in X,
you have X programs. If you program in LISP, you have any language you
like." Sorry, can't cough it up right now, but that's the spirit of it.

- erk



0
ekaun (22)
5/4/2004 1:03:01 PM
"Steven Wurster" <stevenwurster@lycos.com> wrote in message
news:d853834.0405011747.7816184d@posting.google.com...
>
> We'll use the square/rectangle example.  If square inherits from
> rectangle, then we know the standard problem that the sides of a
> rectangle can be changed independently, but that's not true for a
> square.  Because of that, a square cannot be treated like a rectangle
> by clients, who only know the interface provided by rectangle, and
> think they are getting a rectangle when they are actually getting a
> sqaure at run-time (which is fine when the LSP is upheld).

Mutation is the problem here - specifically the brand of value/variable
confusion that objects introduce. If Rectangle has no setters (e.g. you
simply select or construct it), you don't have a problem. It's when mutators
are inherited indiscriminately by subclasses that problems result. As far as
I know, in math they don't run into problems with mutating existing
"objects" - that's been introduced by a processor-emulation mentality (von
Neumann, as far as I know - I'm not a computing historian, alas).

> Looking at these side-setting routines for rectangle, we see that
> their contracts tell us that when we attempt to change the length of a
> side, the change is successful, and the new length is what we
> requested.  But square breaks that by (most likely) setting the other
> side equal to the new one that we just changed.  But this breaks the
> contract, because clients of rectangle aren't expecting that other
> side to change.  So, this kind of inheritance breaks the LSP, which of
> course says that descendant classes must be able to pass for
> ancestors.

Only the inheritance of mutators breaks LSP. If you think instead of simply
selecting a new value for a variable, you'll gain much control over such
nonsense...
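
A sketch of the setter-free version (illustrative names, not anyone's
library):

class Rectangle {
    final double width, height;
    Rectangle(double w, double h) { width = w; height = h; }
    // "Changing" a side just selects a new value:
    Rectangle withWidth(double w) { return new Rectangle(w, height); }
    double area() { return width * height; }
}

class Square extends Rectangle {
    Square(double side) { super(side, side); }
    // withWidth is inherited unchanged and honestly returns a
    // Rectangle: stretching a square yields a non-square rectangle.
}

With no contract saying "the other side stays put", there is nothing
for Square to break.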

> OO does not claim to match math.  Calling a descendant class a subtype
> is misleading.  In fact, in a large number of cases, descendants might
> be considered supertypes, as they tend to add features.

That's not true at all. Adding features doesn't make something a supertype -
rather, subtypes can have additional "features". Think of set membership -
all values of type SUB are members of the set defined by type SUPER. SUB's
predicate further restricts SUPER's predicate - any values for which SUB's
predicate evaluates true also satisfy SUPER's.

> In reality,
> descendants are simply specializations of their ancestors, not
> subtypes.

What does that mean? I suspect the answer is nothing. Subtyping is the only
valid use I can think of for object inheritance...

> Many people, especially those who teach OO incorrectly (as
> mentioned above), make this mistake.  Descendants are not expected to
> cover "less" cases, as doing so breaks the LSP.

What do you mean "less" cases? Please clarify.

- erk



0
ekaun (22)
5/4/2004 1:03:01 PM
"Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message
news:tdxlc.22334$Ik.1620682@attbi_s53...
>
> "Eric Kaun" <ekaun@yahoo.com> wrote in message
> news:A2xlc.737$dd.268@newssvr33.news.prodigy.com...
>
> > > > > We could reword it to take people like you into account: OO is closer to the way humans ought to think.
> >
> > Ought to? For what reason?
>
> Because concepts are not arbitrary.

Thanks - that explains nothing. You mean no concepts are arbitrary? Is this
some neo-Platonism?

> OO is a superset of procedural.

Not really. Most procedural languages have the ability to define a function,
and have that function invoked based on declared or runtime types. That
function doesn't have to be located in an object class, and while it can
have a namepsace, doesn't have to (which has good and bad aspects). No
superset relationship there.

> It is closer to how we think because it is
> flexible enough to embody more of what we think (that is, for those of us
> who do actually think).

Ah, I see. I don't agree with you, and therefore I am not actually thinking.
Cool.

> > > > Logic programming is closer to the way humans ought to think than OO.
> > >
> > > Nope.
> >
> > Yup. Well, unless that word "logic" is somehow divorced from
> "programming",
> > in which case I really hope we don't wind up on the same team...
> >
> > Logic and functional both give optimization power to the computer, which is
> > (presumably) where we'd like to go if we actually want to benefit from both
> > advances in hardware and software, and from the benefits of (drumroll
> > please) logic.
>
> Thankfully this viewpoint is constrained to fringe groups.

Why thankfully? Oh, and feel free to write more than terse 1-sentence
responses. I won't complain.

> > As is pointed out in another message, human beings create and influence
> > languages, and in turn are influenced (and created?) by language. So since
>
> That we are "created" by language is another (thankfully) fringe notion.

Can you define "fringe"? Is all fringe thinking bad? I thought that
historically we've derived a lot of value from "fringe" thinkers. Not that
all of them have value, but generally I prefer to evaluate the contents of a
thought, rather than dismissing based on whether it's labeled "fringe" or
not. I suppose, however, that I'm wrong.

> > (not if) it's so easy to go horribly wrong in Java and other OO languages, a
> > reasonable inference is that the languages give the programmer too much
> > rope, and from experience I can say that there's no commensurate benefit
> > (other than making the language designer's life easier, which I don't care
> > about).
>
> "From experience" == "I don't have a damn clue."

Well, then forget "from experience" and replace with "from theory". Or
anything else you like.

Choppy responses.
Difficult to analyze.
Easy to sound wise.
Without saying anything.

- erk


0
ekaun (22)
5/4/2004 1:13:24 PM
"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> wrote in message
news:jgjc9011mm5cmk63hnst3a2vj19ih3fq17@4ax.com...
> On Mon, 03 May 2004 12:34:29 GMT, alfredo@ncs.es (Alfredo Novoa)
> wrote:
>
> >On Mon, 03 May 2004 11:04:23 +0200, Dmitry A. Kazakov
> ><mailbox@dmitry-kazakov.de> wrote:
> >
> >>>Circle is a subtype of ellipse and the contrary is false.
> >>
> >>Wrong. Values of the type "Circle" do not have the type "Ellipse". So
> >>"Circle" is not contained in "Ellipse".
> >
> >Types are sets,
>
> Yes.
>
> >the ellipse set contains the circle set. The members
> >of the circle set are mebers of the ellipse set.
>
> No, types are sets of values and operations on them. The set of
> operations valid for circles is not a subset of ones of ellipses.

You're thinking about it backwards. The set of operations valid for ellipses
is a SUBSET of the ones valid for circles, while the set of all ellipse
values is a SUPERSET of the set of all circle values. That is a result
(debatable) of circle being a subtype. A subtype tends to have more
operations, along with more constraints - the constraints tend to make the
additional operations possible. A subtype cannot possibly have fewer
operations, or it violates its supertype's defining predicate, and thus
isn't in a subset relationship. In Java, for example, if you have a subclass
that throws an OperationNotSupportedException in its implementation of a
superclass method, that's a serious problem - that's not a subtype.

> [SNIP]
>
> [LSP] subtyping (by constraining/specialization) tells that
> not only the set of subtype values can be mapped to a subset of values
> of the base, but also that the set of operations [and valid programs]
> defined on the subtype is a subset of ones of the base.

No, it doesn't say that, and that's patently wrong.

> There are many problems with a consistent implementation of MD.
> Managing a geometrically exploding number of overrides is a heavy
> burden for a programmer. It will be solved, some day. The theory is
> just not mature.

Agreed.

- erk


0
ekaun (22)
5/4/2004 1:23:44 PM
"Jeff Brooks" <jeff_brooks@nospam.com> wrote in message
news:nphlc.350119$oR5.75668@pd7tw3no...
> [SNIP]
> Just because some people think that OO leads nowhere doesn't mean it's
> true. There are lots of programs that successfully use OO.

It really depends on how you define "successfully" - much OO code is bad
procedural code, and when people abstract badly (and it's easy to do, since
the abstractions in the solution space are not always directly implied by
the abstractions in the problem space, which are difficult to properly
identify), it's a far worse mess than bad procedural code. In my opinion. I
can refactor a long spaghetti procedure, but networks of spaghetti objects?
BLECH!

> Not all OO languages are complex. Look at Smalltalk, or Python as examples.

There are natural complexities to object languages, of course. The biggest
areas of confusion: identity and equivalence, value vs. variable (i.e.
aliasing in mutation, and the attendant "deep copy" sorts of confusion),
proper subtyping relationships (defining and obeying superclass predicates),
etc.

> OO developers model classes as they understand them. These may not be
> correct in the matematical sense but that doesn't mean the software
> doesn't work.

It just means that there's no basis for the modeling other than intuition,
and furthermore the worst OO languages force everything to be an object,
when everything is obviously not. Look at functional languages for the power
of "unobjectified" functions. Then look at Haskell or ML to see the power of
strong type inference (strong typing with less pain, basically what
dynamically-typed languages claim without safety). Then see whether all of
your desired logic fits so cleanly into OO.

> A lot of people like to point out that programming languages ignore
> mathematics and say the languages are bad for doing so. If following
> mathematics was required to write software then most of the existing
> languages wouldn't be able to be used to create software. Obviously, this
> isn't the case.

That's the problem - it CAN be used. To paraphrase Chris Rock: You can drive
a truck with your feet if you want to, but that doesn't make it a good idea.
Lots of things can be done - we're discussing what SHOULD be done.

Math and logic are a sound basis, as is Occam's Razor. I'm not a zealot, but
if you have a better basis, propose it. Saying that math is insufficient,
but replacing it with just any idea that sounds OK at the moment, is a bad
way to proceed. I'm not saying objects are entirely bad, but the fact that
they mix value and variable semantics adds complexity, complexity we may not
need. There may be attendant benefits (as Robert Martin said, dependency
management), but we have to be aware of what the downside is.

> The only way using mathematics as the basis of a language would be
> easier for people to understand is if people understood mathematics.

So in other words, the cure to mathematical ignorance is to abandon it? Like
it or not, we're using logic. It may be clear or not, may be consistent or
not, may be overly complex or not, but it's there. Ignorance is a poor
excuse. Math doesn't have to be complex - in fact, its aim is simplicity.
Spaghetti code is a result of an overly simplistic approach which ultimately
yields excessive complexity, far beyond what a better approach would have.
This suggests we need a bit more discipline up front (uh oh, the XPers are
screaming now...)

- erk


0
ekaun (22)
5/4/2004 1:40:09 PM
Assembly was created to save programmers from writing machine code.  As 
a language it represented how the machine worked but not how the humans 
thought--except what the programmer (an instance of Human) may have 
thought of the machine.  C is fun--but is still more about how machines 
work than how humans think.  for() loops, primitive types, etc. are 
all artifacts of the machines we work with and not how human brains work 
in the world.

Object Oriented programming (credit can be given to anyone you want) is 
the first paradigm to attempt putting humans first and machines second.  
The burden of execution is tilted towards the machine--making it work 
harder so programmers don't have to.  Now instead of having primitives 
like integers of various bit sizes, floating-point mantissas or BCD 
libraries representing money we can actually create a money type that 
knows how to do things money does.  That, thankfully, has little to do 
with the machine and everything about the programmer (as a 
representative human) and their world (or domain).
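
Something like this toy sketch (purely illustrative, not any real
library):

import java.math.BigDecimal;

class Money {
    private final BigDecimal amount;   // exact decimal, no float drift
    private final String currency;

    Money(BigDecimal amount, String currency) {
        this.amount = amount;
        this.currency = currency;
    }

    Money plus(Money other) {
        if (!currency.equals(other.currency))
            throw new IllegalArgumentException("currency mismatch");
        return new Money(amount.add(other.amount), currency);
    }

    Money times(BigDecimal factor) {   // e.g. 1.06 for 6% interest
        return new Money(amount.multiply(factor), currency);
    }
}

The machine representation disappears behind operations that mean
something in the domain.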

OO languages, being as complete as they are, still allow people to write 
C-like code that reads as though it were written to make the compiler's 
life easier (cite bad C++ and Java examples--we've all seen them and 
have written some of them).  This, unfortunately, hasn't made 
programming a cake-walk.  As difficult as it is for some of us to write 
literature describing our world, about the same percentage of us have 
difficulty designing objects that exist in the domain at hand.  It 
remains a challenge to be peculiarly observant about the domain to glean 
a topology.  Perhaps a certain measure of intuition is required.

OO programmers are engaged in a philosophical struggle about modeling 
their world--the same struggle begun by Plato when he attempted to 
describe forms (classes).  I'm unsure if Plato found anything scientific 
or mathematical about forms, but here we are 2500 years later (or so, 
wasn't he approx 400 BC?) and we're still describing forms.

JXStern wrote:

> <snip>
>
>But I wonder.  I have here a dusty 600 page book about Smalltalk, and
>it does talk about stack frames and memory management and whatnot, but
>to what end?  What is interesting here is that it is supposed to be
>completely obvious why these programming language styles and entities
>are worth pursuing.  Back in those dim days, I suppose it was supposed
>to be equally obvious why *any* programming language entity was worth
>pursuing: because *any* programming language, in being a logical
>exercise, was "how people think" and/or "how the real world is."
>Laterly, it seems we decided to reserve this intuitive credit only to
>object-oriented systems.  Quite interesting, to me, at least.
>
>J.
>
>
>
>
>  
>

-- 
..tom
remove email address' dashes for replies
opensource middleware at <http://isectd.sourceforge.net>
<http://gagne.homedns.org/~tgagne/>
0
tgagne (596)
5/4/2004 1:45:17 PM
"Robert C. Martin" <unclebob@objectmentor.com> wrote in message
news:0qrr80lied0pjcatnd781cju6vpo8k47jb@4ax.com...
> On 26 Apr 2004 08:46:11 -0700, wilvers1@vodafone.net (Ian Roberts)
> wrote:
>
> >Hi, I was wondering if anyone could tell me why object oriented
> >approaches to programming are becoming very popular.
>
> It's been around long enough that schools started teaching it to
> students who became programmers and now are a majority in the
> industry.
>
> OK, that was cynical.  Probably true, but cynical.
>
> You might better have asked what the benefits of OO are.  OO is a set
> of tools and language constructs that allow programmers to manage the
> interdependencies of source code while at the same time providing a
> convenient mode for expressing abstract concepts.

I agree, but most (all?) OO languages only express type abstractions.
Functional abstractions, a la Lisp, Scheme, ML, etc., are another vector for
abstraction. Some of those are very powerful, so while I certainly
understand and agree with type abstractions, I don't agree that systems
require only type abstractions. And relational constraints, for example, are
another type of abstraction - I guess it can only really be called
relational.

OO code written well (and in a less-limited language) can be rigorous - my
beef is that the C++ family has given developers too much rope to bungee
themselves with, and has attempted at the same time to stay "close to the
metal", a ridiculous notion. And in the case of dynamically typed languages,
well, the notion that type isn't there is nonsense... an object's type can
be seen as the union of all the messages passed to it at runtime, and to
leave that poorly controlled and poorly-documented (yes, even with unit
tests) seems to me an artifact of inadequate type inference in C++ and its
ilk.

Just my ramblings.

- erk


0
ekaun (22)
5/4/2004 1:47:57 PM
Eric Kaun wrote:

>
>
>This is just great - having advanced philosophy, math, logic, linguistics,
>and science over thousands of years, it's time to base our computer systems
>on the interactions of young children.
>
>Should we introduce CPU nap times as well?
>
>  
>
No, but it sounds like programmer nap times are a good idea!

How much of the science have you just identified that actually damages 
your argument?  Philosophy strives for simple explanations to complex 
phenomena.  Math tries to reduce expressions to their simplest forms.  
Logic is simple.  Linguistics are complicated, but we all appreciate the 
utility of a simple, well crafted statement over a more complicated 
one.  Science is always looking for the simple explanation.  Occam's 
razor.  e=mc2, etc.

As far as humans are concerned, children are about as simple as you can 
get--and thankfully haven't had 12+ years of crap telling them what 
things are.  They interact naturally without thinking about what they're 
thinking about.

To follow your argument, to catch a ball I would have to calculate its 
trajectory, its starting speed, air density and friction, its mass, the 
effect of gravity, gross motor skills, and any number of things just to 
catch a ball thrown at me.  Perhaps some of us do (or try to which is 
why some aren't athletically inclined) but I suspect not.  Children do 
all kinds of things using much simpler models than the complicated ones 
some insist must exist and be understood.  The evidence is to the 
contrary (IMO).

Want to study how brains work?  Study the simple ones.  Want to study 
intelligence?  Observe the development of a child from birth through the 
first 12 months and measure what it is they've discovered without the 
benefits of philosophy, math, logic, linguistics and science. 

Using your argument nothing would be developed (including children) 
because of the lack of a prerequisite education.

Or, I may have missed your whole point because I lack the 
prerequisites.  Time to roll a ball to my 9-month old to see if he's 
conquered motor skills, friction, acceleration/deceleration, 
objects-in-motion, etc.

> <snip>
>
>
>
>  
>

-- 
..tom
remove email address' dashes for replies
opensource middleware at <http://isectd.sourceforge.net>
<http://gagne.homedns.org/~tgagne/>
0
tgagne (596)
5/4/2004 2:02:52 PM
"Thomas Gagn�" <tgagne@wide-open-west.com> wrote in message
news:z6ednX-9z7MLPwrdRVn-gQ@wideopenwest.com...
> Eric Kaun wrote:
>
> >This is just great - having advanced philosophy, math, logic, linguistics,
> >and science over thousands of years, it's time to base our computer systems
> >on the interactions of young children.
> >
> >Should we introduce CPU nap times as well?
> >
> >
> >
> No, but it sounds like programmer nap times are a good idea!

zzzzzz...

> To follow your argument, to catch a ball I would have to calculate its
> trajectory, its starting speed, air density and friction, its mass, the
> affect of gravity, gross motor skills, and any number of things just to
> catch a ball thrown at me.

That's not the argument I was making (as you probably guessed).

> Perhaps some of us do (or try to which is
> why some aren't athletically inclined) but I suspect not.  Children do
> all kinds of things using much simpler models

I don't see a model aspect to catching a ball, or at least don't see that
the unconscious complexities of human brain function (motor response) apply
to the argument at hand.

> than those that propose
> more complicated ones must exist and be understood.  The evidence is to
> the contrary (IMO).

The human brain and body encapsulate those more complicated models. That
work has been done for us, though we continue to try to understand it. In
modeling businesses that we're inventing, that's not the case - we're
creating the abstractions, the mechanisms.

> Want to study how brains work?  Study the simple ones.  Want to study
> intelligence?  Observe the development of a child from birth through the
> first 12 months and measure what it is they've discovered without the
> benefits of philosophy, math, logic, linguistics and science.

Agreed.

> Using your argument nothing would be developed (including children)
> because of the lack of a prerequisite education.

Again, not the argument I was making - I was overly terse, and didn't
explain.

> Or, I may have missed your whole point because I lack the
> prerequisites.  Time to roll a ball to my 9-month old to see if he's
> conquered motor skills, friction, acceleration/deceleration,
> objects-in-motion, etc.

No, I wasn't trying to be condescending. I was simply suggesting that
although the conceptual basis of OO is intuitively appealing, we have
as a species discovered more powerful metaphors for many aspects of logic
and math (types are an important part of this, but not the only part). Math
arose from simple needs, but our understanding is now very different, and
we've accomplished much. I was suggesting simply that we take advantage of
this progress in our computer languages and systems.

- erk


0
ekaun (22)
5/4/2004 2:16:49 PM
alfredo@ncs.es (Alfredo Novoa) wrote in news:40922322.328191
@news.wanadoo.es:

> In OO ellipse is a subclass of circle, but all we know that circle is
> a subtype of ellipse.

"we all know"? In which sense a circle is a subtype of ellipse? Better 
said, what's your definition of subtype?

In any declarative type theory, types and subtypes are what one declares 
them to be.

0
5/4/2004 2:53:12 PM
Thomas Gagné <tgagne@wide-open-west.com> wrote:
> Object Oriented programming (credit an be given to anyone you want) is 
> the first paradigm to attempt putting humans first and machines second.  
> The burden of execution is tilted towards the machine--making it work 
> harder so programmers don't have to.

Hardly.  All high-level languages since the very earliest FORTRAN days 
have been compromises of human effort vs. efficient use of the hardware.  
I suppose the same can be said of writing in assembler instead of 
toggling in 0's and 1's by hand.

> Now instead of having primitives like integers of various bit sizes, 
> floating-point mantissas or BCD libraries representing money we can 
> actually create a money type that knows how to do things money does.

Money doesn't know how to do anything.  People know how to do things 
with money.  I can lay a dollar bill on the table and tell it to add 6% 
interest to itself all day long and nothing will happen.

> OO languages, being as complete as they are, still allow people to write 
> C-like code that reads as though it were written to make the compiler's 
> life easier (cite bad C++ and Java examples--we've all seen them and 
> have written some of them).

Surely you don't think C++ and Java are a representative sample of OO 
languages?  They represent one (albeit popular) niche in the OO world, 
and a bad one at that, IMHO.
0
roy (2295)
5/4/2004 2:53:37 PM
On Tue, 04 May 2004 09:45:17 -0400, Thomas Gagné
<tgagne@wide-open-west.com> wrote:
>Assembly was created to save programmers from writing machine code.  As 
>a language it represented how the machine worked but not how the humans 
>thought--except what the programmer (an instance of Human) may have 
>thought of the machine.

You could argue that it was made to be more similar to how humans
thought, or at least communicated, by using familiar English-like
words, like "ADD" instead of "1010101", not to mention "HoursWorked"
instead of "100101010101".

>Object Oriented programming (credit an be given to anyone you want) is 
>the first paradigm to attempt putting humans first and machines second.  

See above.  

>The burden of execution is tilted towards the machine--making it work 
>harder so programmers don't have to.  

Aw, how about, y'know, Fortran and Cobol and stuff, where you can
write in English-like syntax and the mapping to machine code is quite
variable, especially with optimizing compilers?

>Now instead of having primitives 
>like integers of various bit sizes, floating-point mantissas or BCD 
>libraries representing money we can actually create a money type that 
>knows how to do things money does.  That, thankfully, has little to do 
>with the machine and everything about the programmer (as a 
>representative human) and their world (or domain).

If you want to make this point, I think you want to talk about more
complex types, like Employee, even if there is no canonical form for
it.

Which immediately gets you to the question of whether OO can really
ever deliver on its promises, until and unless it *is* extended to
something using much higher-level and standardized objects.

>OO Programmers are engaged in a philosophical stuggle about modeling 
>their world--the same stuggle begun by Plato when he attempted to 
>describe forms (classes).  I'm unsure if Plato found anything scientific 
>or mathmatical about forms, but here we are 2500 years later (or so, 
>wasn't he approx 400BC?) and we're still describing forms.

The thing about Plato is that (a) he thought there was a canonical
description of the world and of each object in it, at least ideally
(though the real world had so many faults that just what use the ideal
really was, was never all that clear), (b) the form, whether we here
below heaven really ever had access to it or not, had real causal
power.  Plato was not modelling the world, the world was modelling
heaven, and badly.  Aristotle, now, was the emperor of taxonomy, he
wanted everything in a hierarchy of classifications, but this too was
not really modelling the world, just labelling it.

The antithesis to Platonism is probably nominalism, in which there is
no ideal or perfect form for anything.  Each object needs to be
described, but similarities between objects, even such common
properties as "red", are not themselves classes.  There have been OO
systems that took this approach, where every instance is also a new
class definition (don't ask me to name any, please, but maybe someone
can chime in here).  And there are a dozen or so major varieties of
nominalism.  I rather like some of them.  So, I wouldn't put too much
emphasis on the idea that the common OO practice of differentiating
classes and instances and the form-al modelling involved, is necessary
or sufficient as a basis for further progress in OO.

J.

0
5/4/2004 3:20:12 PM
Roy Smith wrote:

> <snip>
>
>
>  
>
>>Now instead of having primitives like integers of various bit sizes, 
>>floating-point mantissas or BCD libraries representing money we can 
>>actually create a money type that knows how to do things money does.
>>    
>>
>
>Money doesn't know how to do anything.  People know how to do things 
>with money.  I can lay a dollar bill on the table and tell it to add 6% 
>interest to itself all day long and nothing will happen.
>
>  
>
>>OO languages, being as complete as they are, still allow people to write 
>>C-like code that reads as though it were written to make the compiler's 
>>life easier (cite bad C++ and Java examples--we've all seen them and 
>>have written some of them).
>>    
>>
>
>  
>
You're right.  And I thought about that (really) driving to an 
appointment today.  I should have picked something more interesting like 
a stock, a bond, a portfolio, an event, a calendar, a score card, ...

>Surely you don't think C++ and Java are a representative sample of OO 
>languages?  They represent one (albeit popular) niche in the OO world, 
>and a bad one at that, IMHO.
>  
>
No, but they are very popular and many people can relate to them.  I 
avoid both when possible.

-- 
..tom
remove email address' dashes for replies
opensource middleware at <http://isectd.sourceforge.net>
<http://gagne.homedns.org/~tgagne/>
0
tgagne (596)
5/4/2004 3:46:21 PM
On Tue, 4 May 2004 14:53:12 +0000 (UTC), Cristiano Sadun
<cristianoTAKEsadunTHIS@OUThotmail.com> wrote:
> alfredo@ncs.es (Alfredo Novoa) wrote in news:40922322.328191
> @news.wanadoo.es:
>
>> In OO ellipse is a subclass of circle, but all we know that circle is
>> a subtype of ellipse.
>
> "we all know"? In which sense a circle is a subtype of ellipse? Better 
> said, what's your definition of subtype?
>
> In any declarative type theory, types and subtypes are what one declares 
> them to be.
>

It's only a "problem" because "Everybody Knows" a circle is a "kind of"
an ellipse--mathematically.  But the mathematical model of the conic
section doesn't match the presentation of the problem:

It's always presented in a context where the constraints of the subclass
violate the assumptions of its superclass.

But while often instructive for beginners in OOP--I certainly found it
useful--there are many mathematical contexts where "If the eccentricity
is zero, label it a circle" is perfectly valid.

A simple example is passing a circle through a coordinate transformation
that distorts it into an ellipse.  Autocad does this automatically
without blinking or complaining about bad inheritance because the
"programming model" matches the real object--rotating a circular disc
about one axis so its apparent shape changes.

The "trick" is in not getting lost in the names of things.  Most of the
time our "Second Year Algebra, First Year Analytic Geometry" ideas
about circles and ellipses are inappropriate as models.  Sometimes they
are appropriate.  It's not always easy to tell the difference.
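
A sketch of that kind of transformation (illustrative only, not
Autocad's actual model):

class Conic {
    final double rx, ry;                // semi-axes along x and y
    Conic(double rx, double ry) { this.rx = rx; this.ry = ry; }
    static Conic circle(double r) { return new Conic(r, r); }
    // A non-uniform scale maps x -> sx*x, y -> sy*y, so the circle
    // x^2 + y^2 = r^2 becomes an axis-aligned ellipse:
    Conic scaled(double sx, double sy) { return new Conic(rx * sx, ry * sy); }
    boolean isCircle() { return rx == ry; }
}

Conic.circle(1.0).scaled(2.0, 1.0) reports isCircle() == false--same
object, new label.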

0
U
5/4/2004 4:09:13 PM
"Eric Kaun" <ekaun@yahoo.com> wrote in message
news:UFMlc.565$5n5.272@newssvr32.news.prodigy.com...
> "Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message
> news:tdxlc.22334$Ik.1620682@attbi_s53...
> >
> > "Eric Kaun" <ekaun@yahoo.com> wrote in message
> > news:A2xlc.737$dd.268@newssvr33.news.prodigy.com...
> >
> > > > > > We could reword it to take people like you into account: OO is closer to the way humans ought to think.
> > >
> > > Ought to? For what reason?
> >
> > Because concepts are not arbitrary.
>
> Thanks - that explains nothing.

Sure it does. But since you think that concepts are arbitrary, I might
as well be randomly pushing keys on my keyboard as explaining something to
you. Holding concepts as arbitrary is tantamount to being totally closed to
any kind of rational argument. (Although people of this school usually like
to pretend that they are intellectual, quoting Popper and such).

> > That we are "created" by language is another (thankfully) fringe notion.
>
> Can you define "fringe"? Is all fringe thinking bad? I thought that
> historically we've derived a lot of value from "fringe" thinkers. Not that

From some fringe thinkers. But we won't get any value out of your particular
fringe.

> all of them have value, but generally I prefer to evaluate the contents of a
> thought, rather than dismissing based on whether it's labeled "fringe" or
> not. I suppose, however, that I'm wrong.

Thankfully we don't need to evaluate every silly thought that enters
anyone's head.

> > "From experience" == "I don't have a damn clue."
>
> Well, then forget "from experience" and replace with "from theory". Or
> anything else you like.

Yeah, concepts are arbitrary and you can just shuffle them around until
someone likes the way they look. Sorry, not interested.


Shayne Wissler


0
5/4/2004 4:15:56 PM
"Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message
news:0lPlc.31191$kh4.1539944@attbi_s52...
>
> "Eric Kaun" <ekaun@yahoo.com> wrote in message
> news:UFMlc.565$5n5.272@newssvr32.news.prodigy.com...
> > "Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message
> > news:tdxlc.22334$Ik.1620682@attbi_s53...
> > >
> > > "Eric Kaun" <ekaun@yahoo.com> wrote in message
> > > news:A2xlc.737$dd.268@newssvr33.news.prodigy.com...
> > >
> > > > > > > We could reword it to take people like you into account: OO is closer to the way humans ought to think.
> > > >
> > > > Ought to? For what reason?
> > >
> > > Because concepts are not arbitrary.
> >
> > Thanks - that explains nothing.
>
> Sure it does. But since you think that concepts are arbitrary,

I don't. I'm not disagreeing. It's just insufficient as an explanation or a
response.

> then I might
> as well be randomly pushing keys on my keyboard as explaining something to
> you. Holding concepts as arbitrary is tantamount to being totally closed
to
> any kind of rational argument.

I agree.

> > Can you define "fringe"? Is all fringe thinking bad? I thought that
> > historically we've derived a lot of value from "fringe" thinkers. Not that
>
> From some fringe thinkers. But we won't get any value out of your particular
> fringe.

Which fringe is that? That objects aren't the alpha and omega of software
design?

> > all of them have value, but generally I prefer to evaluate the contents of a
> > thought, rather than dismissing based on whether it's labeled "fringe" or
> > not. I suppose, however, that I'm wrong.
>
> Thankfully we don't need to evaluate every silly thought that enters
> anyone's head.

Which silly thought are you referring to?

> > > "From experience" == "I don't have a damn clue."
> >
> > Well, then forget "from experience" and replace with "from theory". Or
> > anything else you like.
>
> Yeah, concepts are arbitrary and you can just shuffle them around until
> someone likes the way they look. Sorry, not interested.

Neither am I. Why do you keep bringing it up?


0
ekaun (22)
5/4/2004 6:40:16 PM
"Eric Kaun" <ekaun@yahoo.com> wrote in message
news:ksRlc.613$pi6.600@newssvr32.news.prodigy.com...
> "Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message
> news:0lPlc.31191$kh4.1539944@attbi_s52...
> >
> > "Eric Kaun" <ekaun@yahoo.com> wrote in message
> > news:UFMlc.565$5n5.272@newssvr32.news.prodigy.com...
> > > "Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message
> > > news:tdxlc.22334$Ik.1620682@attbi_s53...
> > > >
> > > > "Eric Kaun" <ekaun@yahoo.com> wrote in message
> > > > news:A2xlc.737$dd.268@newssvr33.news.prodigy.com...
> > > >
> > > > > > > > We could reword it to take people like you into account: OO is closer to the way humans ought to think.
> > > > >
> > > > > Ought to? For what reason?
> > > >
> > > > Because concepts are not arbitrary.
> > >
> > > Thanks - that explains nothing.
> >
> > Sure it does. But since you think that concepts are arbitrary,
>
> I don't. I'm not disagreeing. It's just insufficient as an explanation or a
> response.

Why should I arbitrarily be restricted to the set of data types that a given
procedural language decides to confer to me? What is so special about a
32-bit int that it deserves a special status in the language?


Shayne Wissler


0
5/4/2004 7:39:42 PM
Alfredo:

> Of course, 2 is member of several types.

You are confusing "type" and "set".
0
laurent (379)
5/4/2004 7:44:54 PM
Thomas Gagné <tgagne@wide-open-west.com> wrote:

> Eric Kaun wrote:
> 
> >
> >
> >This is just great - having advanced philosophy, math, logic, linguistics,
> >and science over thousands of years, it's time to base our computer systems
> >on the interactions of young children.
> >
> >Should we introduce CPU nap times as well?
> >
> >  
> >

> No, but it sounds like programmer nap times are a good idea!

> As far as humans are concerned, children are about as simple as you can 
> get--and thankfully haven't had 12+ years of crap telling them what 
> things are.  They interact naturally without thinking about what they're 
> thinking about.

> Want to study how brains work?  Study the simple ones.  Want to study 
> intelligence?  Observe the development of a child from birth through the 
> first 12 months and measure what it is they've discovered without the 
> benefits of philosophy, math, logic, linguistics and science. 
> 
> Using your argument nothing would be developed (including children) 
> because of the lack of a prerequisite education.

And actually such study of children's developing cognition has been
done, especially by cognitive psychologists like Piaget.  And a central
conclusion is that object recognition is a major stage in the
development of early childhood consciousness.

I spoke of Piaget, children, and objects in the late '80s and early '90s
on comp.object.  Booch refers to Piaget and objects in the 2nd edition
of his OO Analysis and Design when discussing the sociological and
mental origins of the object concept, view and ancillary topics.

Elliott
-- 
Circle, Triangle, Square variation on a theme is the
cell, and gen/spec anchored in commonality is the
lifeblood.

Notable on the part of some XP'ers is their aversion
to this kind of OO basics 101, single interface, Circle,
Triangle, Square variation on a theme (the commonality)
through a single interface.
0
universe3 (375)
5/4/2004 7:49:29 PM
Jeff Brooks wrote:
> Mark S. Hathaway wrote:
> 
>> Jeff Brooks wrote:
>>
>>> Lets look at a simple example of a chair. People think of a chair as 
>>> a thing. The thing has some properties to it (it likely has 4 legs, 
>>> is meant to be sat on, etc).
>>>
>>> When we learn about our first chair we treat it as a unique object. 
>>> When we see our second chair we see it has similar properties to the 
>>> first chair and we begin to form an understanding of chairness (the 
>>> type chair).
>>>
>>> If I said i have a unique chair i built and asked if a person could 
>>> sit on it what would you say? Odds are you know that chair like 
>>> things are meant to be sat on so my unique chair can be sat on.
>>
>> Weren't the first chairs some natural thing, like a fallen log,
>> which a person simply sat on? Then we began to design (plagiarizing
>> nature) our own fancier chairs to have similar qualities.
>>
> 
> No, logs are not chairs but they can be sat on.

I say a log can be a chair. Is not the human-provided
label of something at the essence of OO?

>> But, in the one instance we simply used the log by sitting, in the
>> other instance we had to design something and then the final product,
>> a man-made chair, had some functionality and structure. All these
>> are different.
> 
> I'm not arguing that there aren't different things that have similar 
> properties.
> 
> The point is people make classifications of things. We then use these 
> classifications to both identify objects, and we make assumptions about 
> new objects based on the classifications.
> 
> An example of this is if you see a chair you have never seen before you 
> will assume you can sit on it because you understand chairs can be sat on.

There's discussion in the linguistics community about how humans
are limited by our languages. The idea is that we can only think about
and do things for which we have words. I tend
to think a human has some built-in sense of objects and words are
our labels of both objects and actions and everything we experience
as unique and differentiated from everything else (including feelings).
But, to say these objects somehow exist with pre-existing functionality
or relationships to humans is absurd. We recognize an object,
sometimes even create them, label them and assign our uses for them.
We can sit on a computer chip if we like; call it a chair too.

> An example of this failing is showing kids that big things can't be 
> picked up. Then when they see another big object they assume they can't 
> pick it up even if it is just an empty box. If you make the empty box look 
> like metal, an adult will assume they can't pick it up either even if it is 
> just a painted empty box.

Like the Steven Wright joke about having a balsa wood house:
when the kids in the neighborhood would bother him he'd just
pick up his house and point it at them.   LOL

He turns a house into a warning device of some weird kind.
Of course, everything in Wright's joke world is very weird.

>>> Another example is a Person. If i asked you if my friend (that you 
>>> have never met) has blood what would you say? Odds are you would say 
>>> they have blood because you understand what a person is and you 
>>> know a person has blood.
>>>
>>> A human's ability to communicate with other people isn't based on 
>>> their ability to send the complete information they know but rather 
>>> we both have an understanding of basic classes of things and we use 
>>> the same symbolic representation to define them which allows us to 
>>> communicate.
>>>
>>> If i say "Joe is sitting in a chair." i really haven't said much but 
>>> you know what i'm saying because we both have a shared understanding 
>>> of a chair. We both know that Joe is a name of a person so Joe is a 
>>> person. Joe is a male's name so Joe is male. We both know that people 
>>> have blood so Joe has blood. To understand my sentence we have to 
>>> understand a great deal that isn't communicated. We can communicate 
>>> because of classifications we both understand.
>>>
>>> People understand and interact with the world based on classes of things.
>>
>> But, we also are limited by our senses and don't completely sense
>> all the "world" around us.
> 
> So? That doesn't mean we don't classify the things we do sense.

No, it just means we don't have the entire Universe under our
thumbs (by labeling it all); it means we only have an
internal universe (our mental construct), and that is what
we can manipulate and put to our purposes. We define the object;
we don't just identify it and recognize it and its functionalities.

>> And, our interaction with specific
>> things is often on our own human terms, rather than based on
>> their class-like qualities. We might throw a baseball, but we
>> can also crush one with a steamroller or smash one down on
>> someone's head (not recommended). Which of these is ball-like?
>> Which of these is utility chosen by the human? What class of
>> object is manipulated by humans -- all of them.
> 
> I'm not sure what you're trying to say here. You seem to think that we 
> don't think OO but your argument appears to support my views.

We don't "think OO"! Nobody quite knows how we do think.
Remember also that what we're consciously doing is only
the tip of the mental iceberg. Most of what we do is very
subconscious. Remember all the habits you've developed
during your life and how difficult those things were when
you had to consciously apply yourself to doing them? I
remember learning to tie my lace-up shoes. It was difficult
without prior knowledge. But, once learned it's only a
matter of remembering and doing.

I don't think a lot about what I'm going to write here
on this forum, but when I start typing... out come real
words, spelled correctly, and my thoughts, all beginning
in the subconscious, appear on the computer screen.
Where's the OO in that?

>>> This allows me to use a computer i have never seen before because I 
>>> know that computers behave like other computers. I can walk on ground 
>>> i have never been on before because i understand slopes, and texture 
>>> of materials of ground i have never been on but they have the same 
>>> properties of other ground i have been on.
>>
>> The object being manipulated is as much the human body as it is
>> the ground/beach/mountain, etc.
>>
> True, we even classify ourselves as objects.

only in a most abstract way

>>> A computer is a classification of device, slopes are classifications 
>>> of the ground, texture is a classification of a material. My 
>>> understanding of these classifications, and many other 
>>> classifications, allows me to function as a person.
>>
>> What understanding does one have to have of some new object,
>> say a chunk of something rock-like, which later turns out to
>> be from outer space, in order to put it in one's mouth and 
>> chew on it?  :-)   Is that a recognition of its class or
>> of someone's idea that they should test its hardness to see
>> if it's gold or something else?
>>
> Well, to communicate you classified the object as "rock like".

to show it's something new, with no true identity known to me
and without a proper label or known qualities

> To me something that is "rock like" isn't chewable so i wouldn't try to 
> chew on it because i know rocks are harder then my teeth. I don't think 
> rocks are interesting so unless the rock looked really different from 
> other rocks I would probably just ignore it. If it was different and it 
> was small I might pick it up to look at it.
> 
> So my classification of rock defines my behavior with rocks. If the rock 
> was different from other rocks (like it actually was something that 
> could be eaten) but it looked like other rocks I wouldn't treat it any 
> differently then any other rock so I would just ignore it and never find 
> out that it could be eaten.
> 
> Young children that are still learning about things may not understand 
> "rock like" so they will might put it in their mouths. In this case I 
> would try to take it away from them and i would be very surprised it was 
> soft because it doesn't match my classification of rock.

I have a cousin who, as a young girl, had a penchant for eating coal.
It turned out there was something in the coal which she actually
needed. Her label of the rock might've been 'strange food'. Does
that mean you couldn't use the coal in a fire to generate heat or
as a door-stop to keep a door open to let a breeze through to cool
your room? Utility is as often assigned as recognized (inherent in
the object).

>>> Quotes from "Design Principals behind Smalltalk" - byte mag Aug 1981
>>>
>>> "The mind observes a vast universe of experience, both immediate and 
>>> recorded. One can derive a sense of oneness with the universe simply 
>>> by letting this experience be, just as it is. However, if one wishes 
>>> to participate, literally to take a part, in the universe, one must 
>>> draw distinctions.
>>
>> But, like subjects of scholarship, such differentiating is only
>> the result of our human limitation and is not inherent in the
>> subject matter. I know this is getting to the philosophical/
>> spiritual level of discussion but that's just the nature of
>> things.
> 
> I'm debating how humans think of things which is defined by the 
> limitations of humans. We wouldn't need to draw distinctions if we could 
> understand everything at once.

Perhaps, perhaps not. There is also a question of whether
we really need to "understand" things, whether we have a
label of it or any understanding of its utility.

Imagine an ape-man leaving the jungles of sub-saharan Africa
and journeying onto the plains or towards the eastern shore
and an ocean. How many new things might that being encounter
which are distinctly separate from the background, yet
unnamed and their qualities unknown? Does that being have
to understand those objects? How would that being manage
to exist in such an unknown environment?

Now, imagine the same being, encountering the ocean and
walking into the cool waters, being washed clean of dust
and grime. The inherent nature of the waters provides a
functionality which only has to be recognized.

Next, the being sees another, perhaps dangerous, thing
coming toward it and picks up a big rock from nearby
and throws it at the danger. Is the rock known? Is it
understood, its utility known? Has that ape-man ever
used such a rock for self-defense before? Isn't the
utility of the rock in this case simply assigned by
the being, rather than the rock ever previously having
the inherent purpose or utility of providing defense?

An object has inherent qualities and functionality,
but we can also assign new purposes to it. Is that OO?

>>> In so doing one identifies an object in the universe, and 
>>> simultaneously all the rest becomes not-that-object. Distinction by 
>>> itself is a start, but the process of distinguishing does not get any 
>>> easier. Every time you want to talk about "that chair over there", 
>>> you must repeat the entire process of distinguishing that chair. 
>>> This is where the act of reference comes in: we can associate a 
>>> unique identifier with an object, and, from that time on, only the 
>>> mention of that identifier is necessary to refer to the original 
>>> object. "
>>
>> That there are distinct objects is peculiar and yet seems very real.
>> That there would seem to be distinct objects to some other creature
>> of this world isn't certain. Take for example a small living
>> creature which can float through the air like dust and is pushed
>> around by air or dust or rain or when it lands on 'solid ground'.
>> To that kind of creature the world would perhaps seem to be all of
>> one material and it simply moves through that, if it recognizes
>> itself as distinct from the whole at all. Perhaps, like a fetus
>> in the womb, it doesn't recognize distinct differences at all.
> 
> What does that have to do with the way humans think?

That we relate to the World via the sieve that is our perceptions,
not via some pure, perfect understanding of the unfiltered
universe.

>>> "Classification is the objectification of nessness. In other words, 
>>> when a human sees a chair, the experience is taken both literally as 
>>> "that very thing" and abstractly as "that chair-like thing". Such 
>>> abstraction results from the marvelous ability of the mind to merge 
>>> "similar" experience, and this abstraction manifests itself as 
>>> another object in the mind, the Platonic chair or chairness. "
>>>
>>> The complete article is available here:
>>> http://users.ipa.net/%7edwighth/smalltalk/byte_aug81/design_principles_behind_smalltalk.html 
>>
>> I don't argue against OO, just against the idea that OO is somehow
>> a perfect example of how humans relate to the world or how we think.
>> Let's not imbue OO with more than it really is: a tool.
> 
> "In designing a language for use with computers, we do not have to look 
> far to find helpful hints. Everything we know about how people think and 
> communicate is applicable. The mechanisms of human thought and 
> communication have been engineered for millions of years, and we should 
> respect them as being of sound design. Moreover, since we must work with 
> this design for the next million years, it will save time if we make our 
> computer models compatible with the mind, rather than the other way 
> around."
> 
> OO is based on human understanding of how people think.

a very, very small part of human understanding, not how we think.
Nobody knows how humans think.
0
hathawa2 (78)
5/4/2004 9:34:30 PM
Jeff Brooks wrote:

> Mark S. Hathaway wrote:
> 
>> What you're describing makes me think an OO program is
>> designed for a human being to execute, rather than a computer.
> 
> OO is designed around what is known about human thought. Programs are 
> built to be run on a computer, and understandable to a human.

Human languages are complex. The English language has several
hundred thousand words, with new ones being created regularly.
We use about ten thousand of those words regularly. Each word,
if one looks in a dictionary, is defined by using other words
which are in turn defined by other words.

Can you imagine the babel of computer program 'words' which
would exist if there were no central unified dictionary we would
all use? This would not be understandable.

>> But, what about the process of designing an app/OO program?
>
> To design something we must design it in terms we understand. Designing 
> things may not be natural but the result must be understood.

Yet, it's the designing of programs which we're spending the
majority of our time on. Shouldn't that be somewhat natural?

>> Is that process natural for humans? Maybe the maintenance
>> programmer would have an easier time reviewing the universe
>> of that program, discerning the objects and utilizing them,
>> but to "play God" and create new objects is certainly not
>> a natural human practice. Though we've begun to do just
>> that in the last several thousand years it's not at all
>> clear that OO-thinking is related to the human creative
>> process.
> 
> Personally i don't think being creative is a process (it can be part of 
> a process). People can generate new ideas but the result still fits into 
>  the way people think.

I think you don't truly know how you think and you should think
about how people think.   :-)

Look into cognitive theory or something. There have been lots
of great books written on the subject in the last couple of
decades. I'm not the greatest expert on it, but I think you'll
be surprised that it's not what you think either.

0
hathawa2 (78)
5/4/2004 9:40:05 PM
"Mark S. Hathaway" <hathawa2@marshall.edu> wrote:
> Human languages are complex. The English language has several
> hundred thousand words, with new ones being created regularly.
> We use about ten thousand of those words regularly. Each word,
> if one looks in a dictionary, is defined by using other words
> which are in turn defined by other words.

Importent all the more (double-plus, mehaps?), me can brake rules of 
grammer english (speling two!) still understand me, can you?
0
roy (2295)
5/4/2004 9:48:19 PM
Universe wrote:

> Thomas Gagné <tgagne@wide-open-west.com> wrote:
> 
>>Eric Kaun wrote:
>>
>>>This is just great - having advanced philosophy, math, logic, linguistics,
>>>and science over thousands of years, it's time to base our computer systems
>>>on the interactions of young children.
....
>>Want to study how brains work?  Study the simple ones.  Want to study 
>>intelligence?  Observe the development of a child from birth through the 
>>first 12 months and measure what it is they've discovered without the 
>>benefits of philosophy, math, logic, linguistics and science. 
> 
> And in fact such studies of children's developing cognition have been done,
> especially by cognitive psychologists like Piaget.  And a central
> conclusion is that object recognition is a major stage in the
> development of early childhood consciousness.  
> 
> I spoke of Piaget, children, and objects in the late '80s and early '90s
> on comp.object.  Booch refers to Piaget and objects in the 2nd edition
> of his OO Analysis and Design when discussing the sociological and
> mental origins of the object concept, view and ancillary topics.

And yet consciousness, childhood or adult, isn't necessarily
where THINKING occurs. One theory has it that consciousness
is just where the results of thought 'bubble up' or are 'sent to' the
part of the brain where they can be presented as consciousness,
perhaps relating in some way to the same kinds of thought
results being sent to my fingertips as I type this sentence.
Are my fingers conscious of the thoughts? No, but somehow
the 'mind' does have consciousness. Is that consciousness
the entirety of my thinking? No.

If our computing world of objects were predefined and limited
then we could write programs much more easily. But the task
of creating so many new types is definitely not a natural
human task.

0
hathawa2 (78)
5/4/2004 11:39:08 PM
"Mark S. Hathaway" <hathawa2@marshall.edu> wrote:

> Jeff Brooks wrote:
> > Mark S. Hathaway wrote:
> > 
> >> Jeff Brooks wrote:
> >>
> >>> Let's look at a simple example of a chair. People think of a chair as 
> >>> a thing. The thing has some properties to it (it likely has 4 legs, 
> >>> is meant to be sat on, etc).
> >>>
> >>> When we learn about our first chair we treat it as a unique object. 
> >>> When we see our second chair we see it has similar properties to the 
> >>> first chair and we begin to form an understanding of chairness (the 
> >>> type chair).
> >>>
> >>> If I said i have a unique chair i built and asked if a person could 
> >>> sit on it what would you say? Odds are you know that chair like 
> >>> things are meant to be sat on so my unique chair can be sat on.
> >>
> >> Weren't the first chairs some natural thing, like a fallen log,
> >> which a person simply sat on? Then we began to design (plagiarizing
> >> nature) our own fancier chairs to have similar qualities.
> >>
> > 
> > No, logs are not chairs but they can be sat on.
> 
> I say a log can be a chair. Is not the human provided
> label of something at the essence of OO?

The essence of the chair definition is arguably "something to
sit on".

Most things can sub-optimally be many other things.  Context
determines which aspect of a thing is key at any one time and place.

Elliott
-- 
Theory Leads, Practice Verifies.
0
universe3 (375)
5/5/2004 3:39:44 AM
"Mark S. Hathaway" <hathawa2@marshall.edu> wrote:

> Jeff Brooks wrote:
> > Mark S. Hathaway wrote:
> > 
> >> Jeff Brooks wrote:

> > The point is people make classifications of things. We then use these 
> > classifications to both identify objects, and we make assumptions about 
> > new objects based on the classifications.

Oh so right on target!

Classification is essential to proper analysis and synthesis -
thinking.

Better classification was a key to creating OO modelling, which reduced
complexity better than anything had at the time:

> "How Object-Oriented Programming Started
> 
> by Ole-Johan Dahl and Kristen Nygaard,
> Dept. of Informatics, University of Oslo
> 
> SIMULA I (1962-65) and Simula 67 (1967) are the two first
> object-oriented languages. Simula 67 introduced most of the key
> concepts of object-oriented programming: both objects and classes,
> subclasses (usually referred to as inheritance) and virtual
> procedures, combined with safe referencing and mechanisms for bringing
> into a program collections of program structures described under a
> common class heading (prefixed blocks).
> 
> The Simula languages were developed at the Norwegian Computing Center,
> Oslo, Norway by Ole-Johan Dahl and Kristen Nygaard. Nygaard's work in
> Operational Research in the 1950s and early 1960s created the need for
> precise tools for the description and simulation of complex
> man-machine systems. In 1961 the idea emerged for developing a
> language that both could be used for system description (for people)
> and for system prescription (as a computer program...

> There's discussion in the linguistics community about how humans
> are limited by our languages. The idea is that we can only think
> and do things for which we have words to label and think. 

??

Certainly we can do things with things for which we have no label.  At
least most of us.

> I tend
> to think a human has some built-in sense of objects and words are
> our labels of both objects and actions and everything we experience
> as unique and differentiated from everything else (including feelings).

> But, to say these objects somehow exist with pre-existing functionality
> or relationships to humans is absurd.

Obviously until we confront an object there is little or nil in the
way of "relationships to humans".  But as scientists we must accept
that objects exist even if we are not aware of them.  Just as when a
tree falls in a forest there IS a sound, human presence or not.

That is the heart of being *objective*.

Something most XP'ers do not firmly grasp, or strive to operate on the
basis of.

> > I'm not sure what you're trying to say here. You seem to think that we 
> > don't think OO but your argument appears to support my views.
 
> We don't "think OO"! Nobody quite knows how we do think.

You don't.  Many cognitive psychologists and others have varying
degrees of accurate knowledge of how we think.

Both in the physical and mental realms and how they interact.

> Remember also that what we're consciously doing is only
> the tip of the mental iceberg. Most of what we do is very
> subconscious.

So how does that deny the significance of the OO paradigm?

>  Remember all the habits you've developed
> during your life and how difficult those things were when
> you had to consciously apply yourself to doing them? I
> remember learning to tie my lace-up shoes. It was difficult
> without prior knowledge. But, once learned it's only a
> matter of remembering and doing.

Ditto: So how does that deny the significance of the OO paradigm?

> I don't think a lot about what I'm going to write here
> on this forum, but when I start typing... out come real
> words, spelled correctly, and my thoughts, all beginning
> in the subconscious, appear on the computer screen.

Ahhhhh...

Naww, that'd be too easy.

> Where's the OO in that?

Right, likely no OO there in the way you mentate.  <g>

But really, you need to define the OO in "that" - in what terms?  Physical,
chemical, symbolic, etc.

Symbolically, every noun is an object.  Sentences often detail how
nouns behave, interact and collaborate to achieve goals.

Why is this difficult for hackers and neo-hackers?

Overthinking by applying subjective rubbish trips up many who are not
outright opportunists, like some other vocal vets on comp.object.

Elliott
-- 
Not approaching OO as modelling execution
of physical machines, per the creators of OO
is like not having a software engineering soul.
0
universe3 (375)
5/5/2004 4:34:03 AM
Jeff Brooks wrote:

> Personally i don't think being creative is a process (it can be part of 
> a process). People can generate new ideas but the result still fits into 
>  the way people think.

Very much agreed.

Elliott
Circle, Triangle, Square variation on a theme is the
cell, and gen/spec anchored in commonality are the
lifeblood of OO design.

GoF Bridge and Strategy patterns are canonical OO
structural design patterns that have been formed on the
basis of the OO cell and lifeblood, while simultaneously
the 2 patterns extend use of the OO cell and lifeblood
for use at more complex levels of OO system design.

Quite amazing - fantastic!

Together the OO cell - abstraction - and the OO lifeblood -
polymorphism - as parts, may be combined in multiple
ways to comprise varying OO design kernel aggregations.


Each design kernel features a relatively different
structural topology.  These alternate OO design kernel 
aggregations are mixed and matched to form the foundation
of most significant OO system designs.

GoF Bridge and Strategy are classic examples of such
OO system design kernels.  The need for these 2 design
kernels is so ubiquitous and so frequent, they have taken
on the status of OO design *patterns*.
0
universe3 (375)
5/5/2004 5:01:29 AM
"Mark S. Hathaway" <hathawa2@marshall.edu> wrote in message
news:40939b6b@alpha.wvnet.edu...
> Jeff Brooks wrote:
>
> >> cstb wrote:
> >>
> >>> Jeff Brooks:
> >>>
> >>> ... Actually, i think OO is the way people actually think.
> >>
> >> There appears to be a relationship, yes.
> >> However, people are capable of thinking in ways that are much
> >> richer than the results obtained by orienting on Objects alone.
> >
> > Are you sure?
> >
> > The research that created Smalltalk was done by looking at how very
> > young children interact and think about the world and they based the GUI
> > + Smalltalk on that (the first gui was made by this research group).
> > This research allows us to understand the basics of understanding
> > because they looked at the primitive thoughts of people.
> >
> > For example:
> >
> > To interact with a thing you have to identify it. Children can identify
> > things by pointing at them even if they don't know the word for it. This
> > resulted in a pointing device being created called the mouse which would
> > allow people to point at what they want to use. All access to objects in
> > Smalltalk is done via references so there is a uniform way to "point"
> > at an object in code.
> >
> > Children can't read well, but they can identify shapes and understand
> > how to move things. So they concluded making an interface based on
> > shapes and moving them is more natural than text interfaces. This
> > resulted in shapes and the ability to pick them up and put them where
> > you want them. Some of the shapes were icons, windows, etc.
> >
> > Children understand certain things behave in certain ways. Things that
> > behave in similar ways are easier for children to learn.
> >
> > Allowing things in a computer that are different to behave in similar
> > ways allows people to learn them faster and those things feel more
> > natural. This is why actions like opening a document is done in the same
> > way no matter what type of document it is in a GUI. The concept of
> > allowing different things to behave in similar ways effects both the GUI
> > and the programming language.
>
> What you're describing makes me think an OO program is
> designed for a human being to execute, rather than a computer.
> But, what about the process of designing an app/OO program?
> Is that process natural for humans? Maybe the maintenance
> programmer would have an easier time reviewing the universe
> of that program, discerning the objects and utilizing them,
> but to "play God" and create new objects is certainly not
> a natural human practice. Though we've begun to do just
> that in the last several thousand years it's not at all
> clear that OO-thinking is related to the human creative
> process.
>
> > I think people can think in more complex ways as they get older but that
> > doesn't mean they don't think in an object oriented way. Children don't
> > understand logic, but they understand objects and classifications. I
> > think people can learn logic but i think we understand it by
> > understanding things, and classifications.
> >
> > Another way of putting it is we can program different types of languages
> > using object oriented languages. That doesn't mean that OO isn't at the
> > core of the new languages. Children first understand an
> > objects/classifications so i think it is likely the basis of our
> > understanding.
>

I think the main reason OO is popular is that the tasks modern computers are
being asked to do most often require network architecture.  Applications are
becoming more and more distributed and the object paradigm has very good
support for highly distributed architectures (at least compared to
functional and procedural).  If the buying public was asking for huge
quantities of symbolic algebra programs I'm sure functional would be "the
most popular" paradigm.  If large business didn't require integration of
information coming in from both internal and external sources around the
> world in any number of data formats, I'm sure procedural/relational would be
more than adequate for any conceivable business task.  P/R was good in the
days when everything was more centralized and of a smaller scale.  It is
not, IMO, the best paradigm for global network computing, and neither is
functional.

On a more philosophical note, I also see the increasing popularity of
'complex systems', where you have to almost embrace complexity to an extent.
The power of this type of system comes from the parallel _interactions_ of
spatially separated nodes/objects/cells/atoms (or whatever).  In fact, one
can almost think of complex systems (from models of vibrations through
crystals to the internet) as types of asynchronous message-passing systems.
While OO may not have been built with this in mind, its built-in support for
decentralized network computing is what I see as driving market interest in
the paradigm.

Oh, and another market driver, IMO, is the fact that objects lend themselves
well to diagrams (UML).  The idea of 'software blueprints' probably
sounds like a pretty good idea to a lot of CEOs.  Everyone complains about
the software development process being more of an art than a science.  Using
UML may lend, at least the illusion of, increased process structure.



l8r, Mike N. Christoff



0
mchristoff (248)
5/5/2004 7:01:18 AM
"U-CDK_CHARLES\\Charles" <"Charles Krug"@cdksystems.com> wrote in 
news:JePlc.8875$vz5.970@nwrdny01.gnilink.net:

> It's only a "problem" because "Everybody Knows" a circle is a "kind of"
> an ellipse--mathematically.  But the mathematical model of the conic
> section doesn't match the presentation of the problem:
> 

Yes, I too was under the impression he had a geometric bias.
0
5/5/2004 7:33:14 AM
"Mark S. Hathaway" <hathawa2@marshall.edu> wrote in
news:40939b6b@alpha.wvnet.edu: 

> What you're describing makes me think an OO program is
> designed for a human being to execute, rather than a computer.

Actually - to tie in with the other thread branch - when Nygaard and Dahl 
thought about objects, they had exactly both in mind, or so they say 
(http://heim.ifi.uio.no/~kristen/FORSKNINGSDOK_MAPPE/F_OO_start.html)

> But, what about the process of designing an app/OO program?
> Is that process natural for humans? Maybe the maintenance
> programmer would have an easier time reviewing the universe
> of that program, discerning the objects and utilizing them,
> but to "play God" and create new objects is certainly not
> a natural human practice. 

Well, it seems to me that it is. As u said, we've spent the last ten 
thousand years creating new objects in the real world. I don't think 
we'd have a problem with the idea of creating new objects in the software 
universe.

Actually, I've often found that while OO is difficult to understand and 
learn for people with a strong procedural experience, it's actually 
easier than learning procedural decomposition for absolute beginners. Of 
course, that's just an empirical observation.

What actually might make a difference is the idea of being able to 
engineer *perfect* objects - I mean, perfect from the start - which used 
to be pervasive in design and is still very strong (see all the questions 
about the "right" thing to do). 

Let me elaborate a bit: in the real world, u do a prototype, you use it 
some, if it's useful other people use it, then it gets refined, a new 
version comes in, someone has the idea of applying a new technology to it, 
and so on. Creating objects and playing god is an evolutionary process.
The cars and even hammers of nowadays are the result of that kind of 
evolution.

In software, on the contrary, we expect our objects to be "done" and 
"right", even in a given context. Which is doable for small and simple 
objects, but not for complex functionality: our creative process can't do 
that - we can tackle complexity by improving a bit at a time, in small 
steps, at every step guided by forces that determine what is "right" or 
"wrong" - and eliminate objects which aren't "right".

In software, such forces are less evident and constraining - which might 
be one reason why doing objects is easy, but doing 'em well is not.
0
5/5/2004 11:16:50 AM

Michael N. Christoff wrote:

> <snip>
>
>>What you're describing makes me think an OO program is
>>designed for a human being to execute, rather than a computer.
>>But, what about the process of designing an app/OO program?
>>Is that process natural for humans? Maybe the maintenance
>>programmer would have an easier time reviewing the universe
>>of that program, discerning the objects and utilizing them,
>>but to "play God" and create new objects is certainly not
>>a natural human practice. Though we've begun to do just
>>that in the last several thousand years it's not at all
>>clear that OO-thinking is related to the human creative
>>process.
>>
>>    
>>
".. but to 'play God' and create new objects is certainly not a natural 
human practice."

You must be kidding!  We create things all the time.  Find three things 
on your desk someone didn't create.  We call these people "inventors."  
Some are more successful at it than others.  Some of my favorite inventions:

   1. The BBQ grill
   2. A shaker to put your seasonings into
   3. "adult" beverages
   4. oh, and computers.

-- 
..tom
remove email address' dashes for replies
opensource middleware at <http://isectd.sourceforge.net>
<http://gagne.homedns.org/~tgagne/>
0
tgagne (596)
5/5/2004 12:41:03 PM
On Tue, 4 May 2004 21:44:54 +0200, Laurent Bossavit
<laurent@dontspambossavit.com> wrote:

>Alfredo:
>
>> Of course, 2 is member of several types.
>
>You are confusing "type" and "set".

Why?

Types are sets, but not all sets are types.
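
To illustrate that distinction, a minimal Java sketch (names
hypothetical): "the even integers" is a perfectly good set, but most
type systems cannot express it directly as a type, so you have to
carve it out of int with a wrapper that checks membership at runtime.

// A set that is not a type: the even integers.
// The membership condition can't be stated statically in Java,
// so it is enforced at construction time instead.
public final class EvenInt {
    private final int value;

    public EvenInt(int value) {
        if (value % 2 != 0)
            throw new IllegalArgumentException(value + " is not even");
        this.value = value;
    }

    public int toInt() { return value; }
}

This is also roughly what defining a relational "domain" amounts to
when the DBMS only supplies the basic types.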

Regards
  Alfredo

0
alfredo (205)
5/5/2004 2:32:46 PM
"Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message
news:2kSlc.33533$I%1.2039065@attbi_s51...
> Why should I arbitrarily be restricted to the set of data types that a
given
> procedural language decides to confer to me? What is so special about a
> 32-bit int that it deserves a special status in the language?

Nothing at all, and in fact I should be able to define my own integer type
and swap it for "int" if I feel like it (there may be some problems), as
well as more basic business types. I'm not disagreeing with the usefulness
(I'd say need, actually) of that idea. Relational theory even requires such
a thing - defining types (domains) beyond the basic system-supplied ones.

I just disagree with the extrapolation from that to the notion that such
objects are then sufficient as the sole architectural principle for a
system, which leads to object graphs, code to maintain the object graphs,
mappings from object graphs to some data store, aliasing (pointer) problems,
and the transfer of logic from a more flexible and natural predicate form
into complex assertions about graphs. All of that is a far cry from simply
defining types.

- Eric


0
ekaun (22)
5/5/2004 4:02:35 PM
"Universe" <universe@tAkEcovadOuT.net> wrote in message
news:l7tg905ljinmlfncud1rlbnghdm7ui9l49@4ax.com...
> GoF Bridge and Strategy patterns are canonical OO
> structural design patterns that have been formed on the
> basis of the OO cell and life blood, while simultaneously
> the 2 patterns extend use of the OO cell and life blood
> for use at more complex levels of OO system design.
>
> Quite amazing - fantastic!
>
> Together the OO cell - abstraction and OO life blood -
> polymorphism, as parts, may be combined in multiple
> ways to comprise varying OO design kernel aggregations kernels.

Note, however, that the GoF design patterns (and many others), useful
as they are, are grounded in language failings (or at least point the
way toward higher-level languages). I won't go into many details because
someone else (Peter Norvig) already has (http://norvig.com/design-patterns).
It's really incorrect to claim that OO "enables" patterns when in fact those
patterns are addressed elegantly by other languages, without requiring manual
composition.

> Each design kernel features a relatively different
> structural topology.  These altetnate OO design kernel
> aggregations are mixed and matched to form the foundation
> of most significant OO system designs.

I agree that the patterns are useful. I disagree that OO designers should
have to manually implement them to get useful work done.

> GoF Bridge and Strategy are classic examples of such
> OO system design kernels.  The need for these 2 design
> kernels is so ubiquitous and so frequent,they have taken
> on the status of OO design *patterns*

Strategy and Command simply cover for a lack of first-class functions.
Contrast with Lisp.
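
To make that concrete, a small Java sketch (names hypothetical): the
classic Strategy machinery first, then the same variation expressed
as a function value.

// Classic Strategy: an interface plus one named class per behaviour.
interface PricingStrategy {
    double price(double base);
}

class HolidayDiscount implements PricingStrategy {
    public double price(double base) { return base * 0.8; }
}

class Checkout {
    private final PricingStrategy strategy;
    Checkout(PricingStrategy strategy) { this.strategy = strategy; }
    double total(double base) { return strategy.price(base); }
}

// With first-class functions (a lambda, in Java 8 and later) the
// "pattern" collapses to passing a function value:
//
//     Checkout c = new Checkout(base -> base * 0.8);
//
// which is what Lisp has always let you write directly.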

- erk


0
ekaun (22)
5/5/2004 4:30:42 PM
"Michael N. Christoff" <mchristoff@sympatico.caREMOVETHIS> wrote in message
news:eh0mc.26812$3Q4.694288@news20.bellglobal.com...
> I think the main reason OO is popular is that the tasks modern computers
are
> being asked to do most often require network architecture.

Agreed.

> Applications are
> becoming more and more distributed and the object paradigm has very good
> support for highly distributed architectures (at least compared to
> functional and procedural).

I disagree. The most ubiquitous and useful components in Java are servlets
and JSPs, neither of which are in any special way "object-oriented".
Component-based and service-oriented architecture don't mandate objects. Web
services don't mandate objects. Objects are useful when you're dealing with
an object-oriented language (obviously), and when you have language
homogeneity. With Web services, we reduce interfaces to hierarchical
messages, and all data to strings, to cope with heterogeneous systems. Where
is the object-oriented paradigm? The CORBA protocols, more OO than any other
that I can think of right now, are arguably dead (or at least buried in a
niche). The most active protocols are not OO in the least. And the work
required to "bake" objects from the byte streams moving around are for the
convenience of processing in the language of the target component, not for
any other reason.

> If the buying public was asking for huge
> quantities of symbolic algebra programs I'm sure functional would be "the
> most popular" paradigm.

The public doesn't know to ask for that, and the development community
doesn't either. Don't mistake acceptability (especially after a long period
of growth in the computing industry, which brings in many more people and
requires spoon-fed education) for technical credentials. People are willing
to do a lot of stupid things.

And functional <> symbolic algebra.

> If large business didn't require integration of
> information coming in from both internal and external sources around the
> world in any number of data formats, I'm sure procedural/relational would
be
> more than adequate for any conceivable business task.

And it is - at least relational, not procedural. And from the point of view
of integrating services / interfaces across non-OO protocols and
heterogeneous systems, procedural is just as good as OO. What makes OO
better in such a case? It's commonly used in distributed computing because OO
is popular - not because it's ideally suited for it.

> P/R was good in the
> days when everything was more centralized and of a smaller scale.  It is
> not, IMO, the best paradigm for global network computing, and neither is
> functional.

Functional is much better, but OO is no better.

> On a more philosopical note, I also see the increasing popularity of
> 'complex systems', where you have to almost embrace complexity to an
extent.

Embracing complexity is to be avoided. Simplicity should be embraced.
Complexity should only be tolerated where you have no choice in the matter.
To embrace complexity is to spread your legs for any "design" which happens
by.

> The power of this type of system comes from the parallel _interactions_ of
> spatially separated nodes/objects/cells/atoms (or whatever).

There are models for such things - read Dijkstra and Hoare (e.g. CSP).
Unfortunately, they're not used, and as an industry we seem to believe that
we can cope with complexity by abandoning models and symbols of any sort.

> In fact, one
> can almost think of complex systems (from models of vibrations through
> crystals to the internet) as types of asynchronous message-passing
systems.

Yes, and these are well-studied by a few, and ignored by most. I agree with
your observations, but to assume that we're in any way addressing them using
OO is nonsense. OO is around because it's around and useful for some things,
but like XML, it's now the proverbial hammer for the computing nail.

> While OO may not have been built with this in mind, its built-in support
for
> decentralized network computing is what I see as driving market interest
in
> the paradigm.

It doesn't have such built-in support. That's insane. In what way does an OO
language inherently support asynchronous processing and distributed failure
modes?

> Oh, and another market driver, IMO, is the fact that objects lend
themselves
> well to diagrams (UML).

True.

> The idea of 'software blueprints' probably
> sounds like a pretty good idea to a lot of CEOs.

Unfortunately UML (and even OCL) are far inferior for their stated purpose
to material architectural blueprints. They're hand-waving, and UML for the
most part simply stops us all from arguing about whether an object class
should be a box or a cloud. Since we use boxes for everything anyway, does
it make a difference? And does anyone even use OCL? Check out Alloy, from
Daniel Jackson at MIT, for a strong attempt at a useful specification
(blueprint) language.

> Everyone complains about
> the software development process being more of an art than a science.
Using
> UML may lend, at least the illusion of, increased process structure.

Illusion is the right word. Don't get me wrong - it can be useful for human
communication. But like everything in the RUP (also from Rational), it adopts a
kitchen-sink approach, stuffs everything on (and inside) the pizza, and then
expects coherence to emerge. At best, it's a toolkit, not an integrated
approach to anything.

- erk


0
ekaun (22)
5/5/2004 4:45:05 PM
Thomas Gagné wrote:

> Michael N. Christoff wrote:
> 
>> <snip>
>>
 >>> Mark Hathaway wrote:
 >>>
>>> What you're describing makes me think an OO program is
>>> designed for a human being to execute, rather than a computer.
>>> But, what about the process of designing an app/OO program?
>>> Is that process natural for humans? Maybe the maintenance
>>> programmer would have an easier time reviewing the universe
>>> of that program, discerning the objects and utilizing them,
>>> but to "play God" and create new objects is certainly not
>>> a natural human practice. Though we've begun to do just
>>> that in the last several thousand years it's not at all
>>> clear that OO-thinking is related to the human creative
>>> process.
> 
> ".. but to 'play God' and create new objects is certainly not a natural 
> human practice."
> 
> You must be kidding!  We create things all the time.  Find three things 
> on your desk someone didn't create.  We call these people "inventors."  
> Some are more successful at it than others.  Some of my favorite 
> inventions:
> 
>   1. The BBQ grill
>   2. A shaker to put your seasonings into
>   3. "adult" beverages
>   4. oh, and computers.

Aside from people in the computer world how many inventors
do you know of? As has been said, creating is easy, but
doing it right (like those products you listed) isn't so
easy.

For example, if you were to buy an off the shelf library
of some kind, would you use any part of it without wanting
to know every detail about how it worked? Would you trust
any black-box class library or would you want to look at
the details to ensure it was 'done right'?

If it's so easy to create, then why is the discussion about
circles and rectangles so confusing and continuing?

I think it's a marvel that we've evolved to be able to create
as much and as well as we do, but in the long history of mankind
this is just the last fraction of a second of the historical day.
I don't recall from history that the everyday life of man changed
much prior to 1600.

Wasn't the more natural way for people to behave just to repeat
what they knew and what was 'passed down from one generation to
the next'?  Reusing known objects in known ways was the norm.


0
hathawa2 (78)
5/5/2004 5:22:15 PM
Mark S. Hathaway wrote:

> Thomas Gagné wrote:
>
>
> Aside from people in the computer world how many inventors
> do you know of? As has been said, creating is easy, but
> doing it right (like those products you listed) isn't so
> easy.

I would think that composers, painters, architects, engineers, and chefs 
are all examples of people who through the course of their everyday 
lives create things from scratch.

>
> For example, if you were to buy an off the shelf library
> of some kind, would you use any part of it without wanting
> to know every detail about how it worked? Would you trust
> any black-box class library or would you want to look at
> the details to ensure it was 'done right'?

We buy off-the-shelf operating systems, install GNU utilities, and 
purchase applications all the time without looking under the covers to 
ensure they were done right.  Look at all the things going haywire in IE 
and Outlook.  People complain but they keep using them.

>
> If it's so easy to create, then why is the discussion about
> circles and rectangles so confusing and continuing?

Rectangular pegs and elliptical holes?

>
> <snip>
>
> Wasn't the more natural way for people to behave just to repeat
> what they knew and what was 'passed down from one generation to
> the next'?  Reusing known objects in known ways was the norm.

The brain is an amazing pattern-matching engine that is able to observe 
and predict without equal.  This is how we're able to catch things that 
are thrown to us without complicated physics and mathematical 
calculations (some of us anyway).  It is also the same principle that 
allows some to look at what is, imagine what could (or should) be and 
fashion it from available parts to suit their needs.

Invention happens so frequently that it has become invisible to 
us.  I can flip my burgers on the grill using any device able to exert 
enough friction and leverage to overcome the steak's inertia without 
breaking.  I've used spatulas, tongs, screwdrivers, knives, my fingers 
(ouch!), and I'm sure a few other devices and combinations of all the 
above.  All of this reuse, imaginative or not, was from recognizing the 
patterns of what worked and employing them on devices otherwise 
unintended for the application.

-- 
..tom
remove email address' dashes for replies
opensource middleware at <http://isectd.sourceforge.net>
<http://gagne.homedns.org/~tgagne/>
0
tgagne (596)
5/5/2004 6:53:38 PM

Topmind wrote:
>>The world isn't hierarchical. Languages like java deal with this with 
>>interfaces, Smalltalk deals with this via dynamic typing. I'm not sure 
>>what you think sets can do that existing programming languages can't do.
> 
> 
> It is not a matter of "can't do", because they are all Turing
> Equivalent. It is a matter of human convenience (maintainability).
> 
> Your "customer" example is essentially a navigational database
> hard-wired into app code. App code is a sh*tty place to store
> relationship schemas IMO. Maybe you like it and like navigational
> techniques (for some odd reason), but I don't. They happily died in
> the 70's thanks to Dr. Codd, but OO zealots resurrected it from the
> dead for OO and OODBMS. Does this mean goto's are coming back?
> (Navigational is the Goto of data structures.)

Hi T,

How about an example?

Kurt
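
For illustration, one minimal Java sketch of the contrast being drawn
(Customer/Order names hypothetical): the navigational style walks a
hard-wired object graph, while the relational style states the
question declaratively and leaves the access path to the DBMS.

// Navigational style: the Customer -> Order relationship is
// hard-wired into the application's object graph, and the code
// must traverse it reference by reference.
import java.util.ArrayList;
import java.util.List;

class Order { double total; }

class Customer {
    List<Order> orders = new ArrayList<Order>();

    double openBalance() {
        double sum = 0;
        for (Order o : orders)   // traversal = navigation
            sum += o.total;
        return sum;
    }
}

// Relational style: the relationship lives in the schema, and the
// same question is one declarative statement (schema hypothetical):
//
//     SELECT SUM(total) FROM orders WHERE customer_id = ?
//
// There is no traversal code to maintain when new access paths
// are needed.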

0
5/5/2004 8:43:49 PM
Thomas Gagné wrote:
> Mark S. Hathaway wrote:
> 
>> Thomas Gagné wrote:
>>
>> Aside from people in the computer world how many inventors
>> do you know of? As has been said, creating is easy, but
>> doing it right (like those products you listed) isn't so
>> easy.
> 
> I would think that composers, painters, architects, engineers, and chefs 
> are all examples of people who through the course of their everyday 
> lives create things from scratch.

Most, if not all of those, are not creating something new
every day. They're making something from a known recipe or
design. The creative aspect is in there, but it's not the
most common thing those folks do. Ask a chef how well his
brand new recipe turns out. I think they'd usually tell
you it will take a little fine-tuning. Ask an engineer
whether they've created anything really new or if they
just reuse known ideas and techniques.

As for the abstract creations of composers and painters,
as much as we enjoy those things, the ones who aren't
ignored are the ones who create 'new' product based on
very familiar old ideas, ones the audience already knows
and enjoys.

>> For example, if you were to buy an off the shelf library
>> of some kind, would you use any part of it without wanting
>> to know every detail about how it worked? Would you trust
>> any black-box class library or would you want to look at
>> the details to ensure it was 'done right'?

> We buy off-the-shelf operating systems, install GNU utilities, and 
> purchase applications all the time without looking under the covers to 
> ensure they were done right.  Look at all the things going haywire in IE 
> and Outlook.  People complain but they keep using them.

I said 'class library', something you would then reuse in your
own app (the one with your name and reputation on the outside).
You can complain if someone else's software doesn't work, but
if it has your name on it you'd think twice, wouldn't you?

>> If it's so easy to create, then why is the discussion about
>> circles and rectangles so confusing and continuing?
> 
> Rectangular pegs and elliptical holes?

:-)

>> Wasn't the more natural way for people to behave just to repeat
>> what they knew and what was 'passed down from one generation to
>> the next'?  Reusing known objects in known ways was the norm.
> 
> The brain is an amazing pattern-matching engine that is able to observe 
> and predict without equal.  This is how we're able to catch things that 
> are thrown to us without complicated physics and mathematical 
> calculations (some of us anyway).  It is also the same principle that 
> allows some to look at what is, imagine what could (or should) be and 
> fashion it from available parts to suit their needs.

Now you're getting closer to what I've been writing recently --
that the object's classification and use is defined as much,
if not more, by the human than by its inherent qualities.

Having to restrain ourselves to using an object in some very
narrow ways, ways we can't change, is confining and uncomfortable.
Of course, we can do that, but if the set of objects/classes
isn't broad enough then we have to go back to scratch and
bake that cake without a recipe. We have to take that chance
that our new creation will be wrong, untested, uncritiqued
and not capable of being used with anybody else's objects.

> Invention happens so frequently that it has become invisible to 
> us.  I can flip my burgers on the grill using any device able to exert 
> enough friction and leverage to overcome the steak's inertia without 
> breaking.  I've used spatulas, tongs, screwdrivers, knives, my fingers 
> (ouch!), and I'm sure a few other devices and combinations of all the 
> above.  All of this reuse, imaginative or not, was from recognizing the 
> patterns of what worked and employing them on devices otherwise 
> unintended for the application.

So, you're a fan of dynamic languages, without so much type-checking?
:-)

0
hathawa2 (78)
5/6/2004 12:44:55 AM
Mark S. Hathaway wrote:

> Thomas Gagné wrote:
>
>> I would think that composers, painters, architects, engineers, and 
>> chefs are all examples of people who through the course of their 
>> everyday lives create things from scratch.
>
>
> Most, if not all of those, are not creating something new
> every day. They're making something from a known recipe or
> design. The creative aspect is in there, but it's not the
> most common thing those folks do. Ask a chef how well his
> brand new recipe turns out. I think they'd usually tell
> you it will take a little fine-tuning. Ask an engineer
> whether they've created anything really new or if they
> just reuse known ideas and techniques.

Every day, programmers make stuff from recipes or reuse known ideas and 
techniques.  In fact, I think programmers need to get better at it.

>
> As for the abstract creations of composers and painters,
> as much as we enjoy those things, the ones who aren't
> ignored are the ones who create 'new' product based on
> very familiar old ideas, ones the audience already knows
> and enjoys.

Programmers designing user interfaces use idioms the audience is already 
familiar with--it makes their application easier to use (or listen to).  
By being familiar it is more easily embraced and hopefully finds success 
in the marketplace (pop music, pop GUIs, pop languages). 

> <snip>
>
>> We buy off-the-shelf operating systems, install GNU utilities, and 
>> purchase applications all the time without looking under the covers 
>> to ensure they were done right.  Look at all the things going haywire 
>> in IE and Outlook.  People complain but they keep using them.
>
>
> I said 'class library', something you would then reuse in your
> own app (the one with your name and reputation on the outside).
> You can complain if someone else's software doesn't work, but
> if it has your name on it you'd think twice, wouldn't you?

Is this why programmers don't reuse software?  They don't trust each 
other?  I reuse as much code as possible and hope to get it from 
reliable sources.  When a problem develops it damages the credibility of 
the source--which may be an author or a website.

>
> <snip>
>
> Having to restrain ourselves to using an object in some very
> narrow ways, ways we can't change, is confining and uncomfortable.
> Of course, we can do that, but if the set of objects/classes
> isn't broad enough then we have to go back to scratch and
> bake that cake without a recipe. We have to take that chance
> that our new creation will be wrong, untested, uncritiqued
> and not capable of being used with anybody else's objects.

Like Java's class libraries? 

Languages that don't allow the tweaking of their libraries are 
frustrating.  For a program/class author to presume their class is 
perfect (final) and should not (or cannot) be modified or extended is 
the epitome of either arrogance or short-sightedness.

>
>> Invention happens so frequently that it has become invisible 
>> to us.  I can flip my burgers on the grill using any device able to 
>> exert enough friction and leverage to overcome the steak's inertia 
>> without breaking.  I've used spatulas, tongs, screwdrivers, knives, 
>> my fingers (ouch!), and I'm sure a few other devices and combinations 
>> of all the above.  All of this reuse, imaginative or not, was from 
>> recognizing the patterns of what worked and employing them on devices 
>> otherwise unintended for the application.
>
>
> So, you're a fan of dynamic languages, without so much type-checking?

Do you know me from somewhere or did you really get that from the 
burger-flipping?

-- 
..tom
remove email address' dashes for replies
opensource middleware at <http://isectd.sourceforge.net>
<http://gagne.homedns.org/~tgagne/>
0
tgagne (596)
5/6/2004 2:49:43 AM
Thomas Gagné <tgagne@wide-open-west.com> wrote:

> I would think that composers, painters, architects, engineers, and chefs 
> are all examples of people who through the course of their everyday 
> lives create things from scratch.

Yes, very true.

Elliott
The universe with life exists because
the absence of same is meaningless
without the presence of same.

Non-existence implies existence &
Existence implies non-existence.

"Why?" can not be asked against itself.
"Why?" can only be asked and have
meaning against the actuality of "Why not?".
0
universe3 (375)
5/6/2004 6:01:12 AM
On Fri, 30 Apr 2004 23:51:26 +0200, Mathieu Roger
<mathieu.roger@imag.fr> wrote:

>>>Classes can be understood as sets of entities, just as types can be
>>>but the feature that is "almost" unique to classes is "set inclusion"
>>>through inheritance
>> 
>> Do you mean subtyping?
>
>Yes, but limited to special cases

Why?

Why not use subtyping whenever you need it?

>> Yes, Algol does not support subtyping.
>
>
>I meant that in procedural languages, types are disjoint

Java and C++ are procedural languages.

> but in OO one
>can have non disjoint types, it is more general, but less general than
>logic

I would say that in OO one can have non disjoint types but without
logical consistency.
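
A small Java sketch of that non-disjointness (names hypothetical):
one value inhabiting two overlapping type-sets at once, which
disjoint Algol-style types cannot express.

// Two overlapping "sets" expressed as interfaces...
interface Sittable { void sitOn(); }
interface Burnable { void burn(); }

// ...and one value that is a member of both, so the intersection
// of the types Sittable and Burnable is non-empty.
class Log implements Sittable, Burnable {
    public void sitOn() { System.out.println("used as a chair"); }
    public void burn()  { System.out.println("used as firewood"); }
}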

>> This is nice, but not very impressive.
>> 
>
>It is not very impressive: the ability to extend the set of objects must 
>be paid for with difficulty in extending the set of methods (in a 
>procedural language it is something like the inverse)

What I meant is that this is not enough to say that "the book" of
Computing Science is finished and we should not spend more time
researching how to improve computer languages.
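
The trade-off quoted above is what the literature calls the
expression problem; a minimal Java sketch of it (names hypothetical):

// Easy in OO: a new variant is one new class; nothing else changes.
abstract class Shape { abstract double area(); }

class Circle extends Shape {
    double r;
    double area() { return Math.PI * r * r; }
}

class Square extends Shape {      // added later: no edits elsewhere
    double side;
    double area() { return side * side; }
}

// Hard in OO: a new *operation* (say, perimeter()) means editing
// Shape and every one of its subclasses. In a procedural language
// the difficulty is inverted: new operations are cheap, new
// variants are expensive.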

Regards
  Alfredo


0
alfredo (205)
5/6/2004 10:02:53 AM
On Sat, 1 May 2004 21:47:32 +0200, Majorinc, Kazimir
<kazimir@chem.pmf.notcombuthr> wrote:

> I remember I 
>struggled a lot before I realized that it was not my lack of understanding
>of OO, but OO itself, that was the reason for the problems. 

It was the same for me.

>It would be much better if major compiler vendors had been
>influenced by Setl instead of Smalltalk. But, if a play on words is
>allowed here, if one is 30 years ahead of his time, he cannot expect
>that his time will come next year.

I completely agree. Nygaard and Dahl were about 30 years ahead of
their time, but their time was the early 60's.

It seems to me that Codd was 60 years or so ahead of his time.


Regards
  Alfredo
0
alfredo (205)
5/6/2004 2:34:51 PM
On Wed, 28 Apr 2004 16:47:09 +0100, Calum <calum.bulk@ntlworld.com>
wrote:

>To speculate on how humans think, we can look to human language.  There 
>must be a correspondence between human thought and human language, some 
>would even say they are inseparable.  In natural language, you have 
>nouns, verbs and adjectives, which correspond to objects, messages and 
>members.  Therefore, there is at least some correspondence between OO 
>language, human language, and therefore human thought.
>
>OO does not have a monopoly on nouns, verbs and adjectives however.  In 
>a functional language, a function is perhaps a verb, while values are 
>nouns.  In a logic language, terms are nouns, while predicates are verbs 
>or adjectives.

Good point! OO knows a little about nouns, but almost nothing
about phrases.

OO is only yet another little step, not the end.

>Logic programming is "better" IMO at modeling the real world, e.g.

Of course, but it is still in a primitive stage due to the conformism
of the masses, among other things.

>however to implement algorithms using logic programming is IMO less easy 
>than in functional/procedural/OO languages.  So this isn't the whole 
>story either.

Algorithms are low level stuff, but logic and procedural programming
can be used together.

Regards
  Alfredo
0
alfredo (205)
5/6/2004 2:34:52 PM
Thomas Gagné wrote:
> Mark S. Hathaway wrote:
> 
>> Thomas Gagn� wrote:
>>
>>> I would think that composers, painters, architects, engineers, and 
>>> chefs are all examples of people who through the course of their 
>>> everyday lives create things from scratch.
>>
>> Most, if not all of those, are not creating something new
>> everyday. They're making something from a known recipe or
>> design. ... Ask an engineer
>> whether they've created anything really new or if they
>> just reuse known ideas and techniques.
> 
> Every day, programmers make stuff from recipes or reuse known ideas and 
> techniques.  In fact, I think programmers need to get better at it.

True, and the programming language(s) that person uses
can either help or hinder the process.

>> As for the abstract creations of composers and painters,
>> as much as we enjoy those things the ones who aren't
>> ignored are the ones who create 'new' product based on
>> very familiar old ideas, ones the audience already knows
>> and enjoys.
> 
> Programmers designing user interfaces use idioms the audience is already 
> familiar with--it makes their application easier to use (or listen to).  
> By being familiar it is more easily embraced and hopefully finds success 
> in the marketplace (pop music, pop GUIs, pop languages).

That's what I've been saying we're pretty good at, reusing
known things which work. But, let's see someone create a
new widget library...

>> <snip>
>>
>>> We buy off-the-shelf operating systems, install GNU utilities, and 
>>> purchase applications all the time without looking under the covers 
>>> to ensure they were done right.  Look at all the things going haywire 
>>> in IE and Outlook.  People complain but they keep using them.
>>
>> I said 'class library', something you would then reuse in your
>> own app. (the one with your name and reputation on the outside).
>> You can complain if someone else's software doesn't work, but
>> if it has your name on it you'd think twice, wouldn't you?
> 
> Is this why programmers don't reuse software?  They don't trust each 
> other?  I reuse as much code as possible and hope to get it from 
> reliable sources.  When a problem develops it damages the credibility of
> the source--which may be an author or a website.

You must be a very trusting soul. I hope it doesn't come
back to bite you on the arse.

>> <snip>
>>
>> Having to restrain ourselves to using an object in some very
>> narrow ways, ways we can't change, is confining and uncomfortable.
>> Of course, we can do that, but if the set of objects/classes
>> isn't broad enough then we have to go back to scratch and
>> bake that cake without a recipe. We have to take that chance
>> that our new creation will be wrong, untested, uncritiqued
>> and not capable of being used with anybody else's objects.
> 
> Like Java's class libraries?

I haven't used Java, so I don't know. Are they sealed?

> Languages that don't allow the tweaking of their libraries are 
> frustrating.  For a program/class author to presume their class is 
> perfect (final) and should not (or can not) be modified or extended is 
> the epitome of either arrogance or short-sightedness.

Yet we assume that at some point a class will be perfected,
or at least close enough to perfect that we don't have to
fret over it or rewrite it specifically for this or that app.

>>> This happens so frequently, invention, that it has become invisible 
>>> to us.  I can flip my burgers on the grill using any device able to 
....
>> So, you're a fan of dynamic languages, without so much type-checking?
> 
> Do you know me from somewhere or did you really get that from the 
> burger-flipping?

I don't know you from Adam.  :-)

What languages do you favor? Let me guess, just for fun...
Smalltalk, of course. Lisp to a lesser extent. C++ because
you have to. Oh, and you probably have some experience with
Ada.

How'd I do?


0
hathawa2 (78)
5/6/2004 3:55:42 PM
"Eric Kaun" <ekaun@yahoo.com> wrote in message
news:ve8mc.1382$_E1.534@newssvr15.news.prodigy.com...
> "Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message
> news:2kSlc.33533$I%1.2039065@attbi_s51...
> > Why should I arbitrarily be restricted to the set of data types that a given
> > procedural language decides to confer to me? What is so special about a
> > 32-bit int that it deserves a special status in the language?
>
> Nothing at all, and in fact I should be able to define my own integer type
> and swap it for "int" if I feel like it (there may be some problems), as
> well as more basic business types. I'm not disagreeing with the usefulness
> (I'd say need, actually) of that idea. Relational theory even requires such
> a thing - defining types (domains) beyond the basic system-supplied ones.

So when I define my type, why shouldn't I be allowed to:
    1. Use other types to build it from;
    2. Provide my own implementation of operations;
    3. Specify the type in terms of difference from other types (inheritance);
    4. Use a derived type as if it were the base type (substitutability)?

> I just disagree with the extrapolation from that to the notion that such
> objects are then sufficient as the sole architectural principle for a
> system, which leads to object graphs, code to maintain the object graphs,
> mappings from object graphs to some data store, aliasing (pointer) problems,
> and the transfer of logic from a more flexible and natural predicate form
> into complex assertions about graphs. All of that is a far cry from simply
> defining types.

I don't think your objection is central to object-oriented programming.
Really it's an argument of a hierarchical database vs. a relational. This is
a related issue but I think not important to this discussion.


Shayne Wissler


0
5/6/2004 4:22:58 PM
"Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message
news:CDtmc.32919$_41.2704138@attbi_s02...
> So when I define my type, why shouldn't I be allowed to:
>     1. Use other types to build it from;

Certainly - its internals aren't part of any model, and therefore have no
relevance. Delegation is invisible to the external world.

>     2. Provide my own implementation of operations;

Not a problem.

>     3. Specify the type in terms of difference from other types (inheritance);

This is where you have a problem, because few languages provide any support
for this - in particular, they provide no support for expressing a type as a
difference, without simultaneously allowing violations of substitutability.
In particular, there's no clear expression of the difference, other than
implementation. In contrast, look at Tutorial D, or any other language which
specifies subtyping via constraints.

>     4. Use a derived type as if it were the base type (substitutability)?

This is fine, except without language support for #3 above, you can't say
much about this, except that "type D says it's also of type C".
Substitutability in this way is critical to proper typing, yet without any
language support at all, it's a sheer matter of faith - the compiler trusts
the programmer, and offers no assistance in helping to check such type
definitions.
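
To make the gap concrete, here is a rough Java approximation of the
classic ellipse/circle case (the names and constraint are illustrative
only): the defining predicate can live in a constructor, but the language
cannot state it as part of the type, so nothing checks other code against it.

class Ellipse {
    final double a, b;    // the two semi-axes
    Ellipse(double a, double b) {
        if (a <= 0 || b <= 0)
            throw new IllegalArgumentException("axes must be positive");
        this.a = a;
        this.b = b;
    }
}

class Circle extends Ellipse {
    // a circle is an ellipse satisfying the constraint a == b;
    // the constructor enforces it once, but the language cannot
    // express "Circle = Ellipse where a = b" as a type definition
    Circle(double r) { super(r, r); }
}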

> > I just disagree with the extrapolation from that to the notion that such
> > objects are then sufficient as the sole architectural principle for a
> > system, which leads to object graphs, code to maintain the object
graphs,
> > mappings from object graphs to some data store, aliasing (pointer)
> problems,
> > and the transfer of logic from a more flexible and natural predicate
form
> > into complex assertions about graphs. All of that is a far cry from
simply
> > defining types.
>
> I don't think your objection is central to object-oriented programming.
> Really it's an argument of a hierarchical database vs. a relational. This
is
> a related issue but I think not important to this discussion.

I was talking about OO programming, not databases.

- Eric


0
ekaun (22)
5/6/2004 6:41:12 PM
"Eric Kaun" <ekaun@yahoo.com> wrote in message
news:bFvmc.373$jd.267@newssvr33.news.prodigy.com...
> "Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message
> news:CDtmc.32919$_41.2704138@attbi_s02...
> > So when I define my type, why shouldn't I be allowed to:
> >     1. Use other types to build it from;
>
> Certainly - its internals aren't part of any model, and therefore have no
> relevance. Delegation is invisible to the external world.
>
> >     2. Provide my own implementation of operations;
>
> Not a problem.
>
> > >     3. Specify the type in terms of difference from other types (inheritance);
> >
> > This is where you have a problem, because few languages provide any support
> > for this - in particular, they provide no support for expressing a type as a
> > difference, without simultaneously allowing violations of substitutability.

So your complaint is that the language won't do your thinking for you? If
that's your only complaint about OO then I don't see what you are
complaining about.

> In particular, there's no clear expression of the difference, other than
> implementation.

Wrong. The entire body of the derived class is precisely an expression of
the difference.

> In contrast, look at Tutorial D, or any other language which
> specifies subtyping via constraints.

Feel free to post the details of a mechanism that you find superior to OO.

> > >     4. Use a derived type as if it were the base type (substitutability)?
>
> This is fine, except without language support for #3 above, you can't say
> much about this, except that "type D says it's also of type C".
> Substitutability in this way is critical to proper typing, yet without any
> > language support at all, it's a sheer matter of faith - the compiler trusts
> the programmer, and offers no assistance in helping to check such type
> definitions.

Again, your complaint against OO is tantamount to claiming that the program
doesn't write itself. You are being silly.

> > > I just disagree with the extrapolation from that to the notion that such
> > > objects are then sufficient as the sole architectural principle for a
> > > system, which leads to object graphs, code to maintain the object graphs,
> > > mappings from object graphs to some data store, aliasing (pointer) problems,
> > > and the transfer of logic from a more flexible and natural predicate form
> > > into complex assertions about graphs. All of that is a far cry from simply
> > > defining types.
> >
> > I don't think your objection is central to object-oriented programming.
> > Really it's an argument of a hierarchical database vs. a relational. This is
> > a related issue but I think not important to this discussion.
>
> I was talking about OO programming, not databases.

Regardless, I don't see your complaint as being central to OO as is normally
conceived (although I will admit to being in favor of object hierarchies,
most OO languages don't really support them). The defining characteristics
of OO pertain to my list above.


Shayne Wissler


0
5/6/2004 6:54:22 PM
"Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message news:<CDtmc.32919$_41.2704138@attbi_s02>...

> > I just disagree with the extrapolation from that to the notion that such
> > objects are then sufficient as the sole architectural principle for a
> > system, which leads to object graphs, code to maintain the object graphs,
> > mappings from object graphs to some data store, aliasing (pointer) problems,
> > and the transfer of logic from a more flexible and natural predicate form
> > into complex assertions about graphs. All of that is a far cry from simply
> > defining types.
> 
> I don't think your objection is central to object-oriented programming.
> Really it's an argument of a hierarchical database vs. a relational. This is
> a related issue but I think not important to this discussion.

Almost all programs have to manage some data and the relational
approach is the best way to manage data by far. But current OO
languages don't support relational variables.


Regards
  Alfredo
0
alfredo (205)
5/6/2004 11:09:33 PM
"Alfredo Novoa" <alfredo@ncs.es> wrote in message
news:e4330f45.0405061509.7a96e047@posting.google.com...
> "Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message
news:<CDtmc.32919$_41.2704138@attbi_s02>...
>
> > > I just disagree with the extrapolation from that to the notion that such
> > > objects are then sufficient as the sole architectural principle for a
> > > system, which leads to object graphs, code to maintain the object graphs,
> > > mappings from object graphs to some data store, aliasing (pointer) problems,
> > > and the transfer of logic from a more flexible and natural predicate form
> > > into complex assertions about graphs. All of that is a far cry from simply
> > > defining types.
> >
> > I don't think your objection is central to object-oriented programming.
> > Really it's an argument of a hierarchical database vs. a relational. This is
> > a related issue but I think not important to this discussion.
>
> Almost all programs have to manage some data and the relational
> approach is the best way to manage data by far. But current OO
> languages don't support relational variables.

What general purpose programming language does?


Shayne Wissler


0
5/6/2004 11:45:05 PM
"Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message
news:yRvmc.42349$Ik.2733098@attbi_s53...
>
> "Eric Kaun" <ekaun@yahoo.com> wrote in message
> news:bFvmc.373$jd.267@newssvr33.news.prodigy.com...
> > "Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message
> > news:CDtmc.32919$_41.2704138@attbi_s02...
> > > So when I define my type, why shouldn't I be allowed to:
> > >     1. Use other types to build it from;
> >
> > Certainly - its internals aren't part of any model, and therefore have no
> > relevance. Delegation is invisible to the external world.
> >
> > >     2. Provide my own implementation of operations;
> >
> > Not a problem.
> >
> > >     3. Specify the type in terms of difference from other types (inheritance);
> >
> > This is where you have a problem, because few languages provide any support
> > for this - in particular, they provide no support for expressing a type as a
> > difference, without simultaneously allowing violations of substitutability.
>
> So your complaint is that the language won't do your thinking for you? If
> that's your only complaint about OO then I don't see what you are
> complaining about.

You made a large logical leap here, minus the logic. I'm talking about
expressing logical type differences as something other than procedural code
(which is what you do in OO languages, though you do break it up into
methods which are much like subroutines). Remember, all that OO code you're
writing could be just as well written in direct machine code - why don't you use
that? Why let a higher-level language do your thinking for you?

> > In particular, there's no clear expression of the difference, other than
> > implementation.
>
> Wrong. The entire body of the derived class is precisely an expression of
> the difference.

Correct - a poor expression. Contrast with any logic at all. The body of the
class is procedural code. One of the major goals of computing has been (or
should have been) to elevate our mode of expression, or else (once again, to
keep beating the drum) we'd still be doing assembler. I agree OO is much
higher-level than assembler. But to assume we've reached our destination is
just juvenile.

> > In contrast, look at Tutorial D, or any other language which
> > specifies subtyping via constraints.
>
> Feel free to post the details of a mechanism that you find superior to OO.

I'm not going to post details when they're publicly available in books on
Tutorial D (The Third Manifesto, and to some extent dbdebunk.com), books on
Lisp and PROLOG and Haskell and Scheme and ML and Ocaml and ...

You use the right word, though: mechanism. As long as we're addicted like
junkies to staying "close to the metal", we'll feel compelled to express
ourselves like a processor - another case of the master in bondage to the
servant.

> > >     4. Use a derived type as if it were the base type (substitutability)?
> >
> > This is fine, except without language support for #3 above, you can't say
> > much about this, except that "type D says it's also of type C".
> > Substitutability in this way is critical to proper typing, yet without any
> > language support at all, it's a sheer matter of faith - the compiler trusts
> > the programmer, and offers no assistance in helping to check such type
> > definitions.
>
> Again, your complaint against OO is tantamount to claiming that the program
> doesn't write itself. You are being silly.

Maybe I am silly, but I like having the computer do some of the crap work
for me. If you don't, well, enjoy your assembler - wait, I mean machine
code. You wouldn't want to cheat and have that machine code write itself,
would you? I mean, that's a direct expression of your intentions, right?

There are many more expressions of a program than procedural (oops - I mean
OO) code. Examine the following quote, from a man far smarter than both of
us put together:

"Progress is possible only if we train ourselves to think about programs
without thinking of them as pieces of executable code." - E.W. Dijkstra

> > I was talking about OO programming, not databases.
>
> Regardless, I don't see your complaint as being central to OO as is normally
> conceived (although I will admit to being in favor of object hierarchies,
> most OO languages don't really support them).

?

Every OO language that I know does. For example, typically if you have an
order processing system, you'll have an Order object, with LineItem objects
contained in it, and each LineItem containing (perhaps) PartialShipment
objects. Most OO systems proceed by instantiating and modifying such
hierarchies. What did you mean by "object hierarchies"?
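
For reference, a bare-bones Java version of that hierarchy (the field
names are invented for illustration):

import java.util.ArrayList;
import java.util.List;

class PartialShipment {
    int quantityShipped;
}

class LineItem {
    String partNumber;
    List<PartialShipment> shipments = new ArrayList<PartialShipment>();
}

class Order {
    String orderId;
    List<LineItem> lineItems = new ArrayList<LineItem>();
}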

- erk


0
ekaun (22)
5/7/2004 1:59:39 PM
"Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message
news:56Amc.44349$I%1.2884758@attbi_s51...
> What general purpose programming language does?

No common ones, although Tutorial D is a possible one, or D4, Alphora's
implementation of it. It's a shame, too - they're really incredibly useful,
and would supplant the hierarchical search through objectspace that much of
OO requires...

- erk


0
ekaun (22)
5/7/2004 2:01:31 PM
"Eric Kaun" <ekaun@yahoo.com> wrote in message
news:fDMmc.1990$iu6.1849@newssvr15.news.prodigy.com...

> > So your complaint is that the language won't do your thinking for you? If
> > that's your only complaint about OO then I don't see what you are
> > complaining about.
>
> You made a large logical leap here, minus the logic. I'm talking about
> expressing logical type differences as something other than procedural code
> (which is what you do in OO languages, though you do break it up into
> methods which are much like subroutines). Remember, all that OO code you're
> writing could be just as well written in direct machine code - why don't you use
> that? Why let a higher-level language do your thinking for you?

Actually you're the one making the illogical leap. I think you are after
something that either can't exist or would be overly constraining if it did.
When we express these differences I think it's inherent in the fact that the
computer can't predict what we are going to write that we might go "out of
bounds".

> > > In particular, there's no clear expression of the difference, other than
> > > implementation.
> >
> > Wrong. The entire body of the derived class is precisely an expression of
> > the difference.
>
> Correct - a poor expression. Contrast with any logic at all. The body of the
> class is procedural code. One of the major goals of computing has been (or
> should have been) to elevate our mode of expression, or else (once again, to
> keep beating the drum) we'd still be doing assembler. I agree OO is much
> higher-level than assembler. But to assume we've reached our destination is
> just juvenile.

I don't assume we've reached it, but on the other hand you are being so vague and
non-specific that I think you must be talking about your woozy wishing and
not about anything that's implementable.

> > > In contrast, look at Tutorial D, or any other language which
> > > specifies subtyping via constraints.
> >
> > Feel free to post the details of a mechanism that you find superior to OO.
>
> I'm not going to post details when they're publicly available in books on
> Tutorial D (The Third Manifesto, and to some extent dbdebunk.com), books on
> Lisp and PROLOG and Haskell and Scheme and ML and Ocaml and ...

We are talking about a specific aspect of OO that you think could be done
better. I'm not about to go read up on some obscure language and try to
guess at what part of it you think would implement that aspect better. If
all you are going to do is whine then I'm not interested; there are plenty
of pretentious whiners out there, and they're all boring.


Shayne Wissler


0
5/7/2004 3:26:40 PM
"Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message
news:QUNmc.46085$kh4.2545320@attbi_s52...
> Actually you're the one making the illogical leap. I think you are after
> something that either can't exist or would be overly constraining if it did.
> When we express these differences I think it's inherent in the fact that the
> computer can't predict what we are going to write that we might go "out of
> bounds".

No, it's not. I agree that the computer can't stop us from going out of
bounds (insert pseudo-Goedelian argument here), but computer languages
generally claim to aspire to being more expressive than their forebears. The
question is what you're expressing. While OO gives you a nice facility for
implementing types, it's less clear on defining types.

Take, for example, Java vs. Java with iContract, a design-by-contract
library. iContract lets you express what invariants a method requires for
invocation, and guarantees following execution. This lets you ensure that
your method actually does what you expect - more to the point, it makes you
think about what you're doing. Test-first programming is somewhat similar,
in that if you write the code as simply as you can from the standpoint of an
outsider using your code, you have something to test your implementation
against.

So it's a matter of being able to clearly state your expectations (via
logical expressions), and then (in an automated way!) test your
implementation against the specification. It's roughly the same motivation
as you have for modularizing: in addition to code reuse, it gives you
smaller parcels that are easier to reason about.
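
The general shape of such a contract, sketched with plain Java assertions
rather than iContract's own notation (the Account class is invented for
illustration; assertions run only when enabled with java -ea):

class Account {
    private long balanceCents;

    void withdraw(long amountCents) {
        // preconditions: what the method requires for invocation
        assert amountCents > 0 : "amount must be positive";
        assert amountCents <= balanceCents : "sufficient funds required";

        long before = balanceCents;
        balanceCents -= amountCents;

        // postcondition: what the method guarantees following execution
        assert balanceCents == before - amountCents;
    }
}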

> > I agree OO is much
> > higher-level than assembler. But to assume we've reached our destination is
> > just juvenile.
>
> I don't assume we've reached it, but on the other hand you are being so vague and
> non-specific that I think you must be talking about your woozy wishing and
> not about anything that's implementable.

I've given you several references and languages. Alloy is an executable
(checkable) specification language based on relations.

If you're looking for a language with strong commercial acceptance already,
though, you're not going to find it, and that's my point: the industry has
pursued one path and ignored many others. I hope that will change. You can
see some swing in that direction by looking at products like XDoclet for
Java, as well as the increased use of scripting languages and template
engines, which beyond generating code allow you to maintain only a
declarative specification. Any "framework" you mention has declarative
(usually XML) config files that essentially constitute a new language.

Take XSLT - rather than writing code to parse and then create DOMs, you can
express their differences as equations (heinous though XSLT is).

So I guess I have 2 points:

a. We need good language support for specification, not just implementation.
OO has some of this (interfaces and hierarchies for types), but lacks much
more (design by contract, enforced encapsulation to prevent subclasses from
violating superclass predicates, etc.)

b. We need more declarative languages, which (much like compiler history)
lets us offload the question of how and concentrate on what we're saying.
Few human assembly coders today can match the optimizations coded into the
best compilers, which consolidate man-centuries of experience. Same with
DBMSs to avoid hard-coded data structures and access paths (unless you're
doing XML, of course).

> We are talking about a specific aspect of OO that you think could be done
> better.

Honestly, I forget which aspect we were talking about. I'll suggest that OO
is sufficient for type implementation, but not for rigorous type design, nor
for functional aspects of a system (not all of which should be crammed into
object methods, since they often involve more than one object and the choice
can become quite arbitrary).

> I'm not about to go read up on some obscure language and try to
> guess at what part of it you think would implement that aspect better.

The languages I've listed aren't that obscure, but if you're relying on
commercial success to guide you... then it will be nothing but objects all
the way down, for some time. Wait - how about Aspect-Oriented? That's
trendy.

> If all you are going to do is whine then I'm not interested; there are plenty
> of pretentious whiners out there, and they're all boring.

Sounds like a quote from The Young Ones...

I thought we were having a discussion - sorry if I whined. I was trying to
communicate some well-grounded complaints. I don't claim to have all the
answers, but find it quite useful to step outside the bounds of OO once in a
while.

- erk


0
ekaun (22)
5/7/2004 5:47:56 PM
"Eric Kaun" <ekaun@yahoo.com> wrote in message
news:gZPmc.69$t85.66@newssvr32.news.prodigy.com...
> "Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message
> news:QUNmc.46085$kh4.2545320@attbi_s52...
> > Actually you're the one making the illogical leap. I think you are after
> > something that either can't exist or would be overly constraining if it did.
> > When we express these differences I think it's inherent in the fact that the
> > computer can't predict what we are going to write that we might go "out of
> > bounds".
>
> No, it's not. I agree that the computer can't stop us from going out of
> bounds (insert pseudo-Goedelian argument here), but computer languages
> generally claim to aspire to being more expressive than their forebears. The
> question is what you're expressing. While OO gives you a nice facility for
> implementing types, it's less clear on defining types.
>
> Take, for example, Java vs. Java with iContract, a design-by-contract
> library. iContract lets you express what invariants a method requires for
> invocation, and guarantees following execution. This lets you ensure that
> your method actually does what you expect - more to the point, it makes you
> think about what you're doing. Test-first programming is somewhat similar,
> in that if you write the code as simply as you can from the standpoint of an
> outsider using your code, you have something to test your implementation
> against.
>
> So it's a matter of being able to clearly state your expectations (via
> logical expressions), and then (in an automated way!) test your
> implementation against the specification. It's roughly the same motivation
> as you have for modularizing: in addition to code reuse, it gives you
> smaller parcels that are easier to reason about.

1. I still don't see what your complaint has to do with critiquing OO.
Eiffel has DBC and it's OO.

2. DBC is just as error-prone as any kind of coding, humans make mistakes
and no language or technique is going to guarantee otherwise.

> > If all you are going to do is whine then I'm not interested; there are plenty
> > of pretentious whiners out there, and they're all boring.
>
> Sounds like a quote from The Young Ones...
>
> I thought we were having a discussion - sorry if I whined. I was trying to
> communicate some well-grounded complaints. I don't claim to have all the
> answers, but find it quite useful to step outside the bounds of OO once in a
> while.

I thought we were discussing how OO as such was flawed, but instead you seem
to be talking about ways of refining existing OO implementations. A fine
topic but not the one I thought we were on.


Shayne Wissler


0
5/7/2004 7:55:34 PM
"Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message
news:VQRmc.46318$0H1.4385502@attbi_s54...
> 1. I still don't see what your complaint has to do with critiquing OO.
> Eiffel has DBC and it's OO.

You're right in that my critique is primarily when OO is the only structure
available (e.g. in Java, C++, and I think Smalltalk, though I'm not
certain).

> 2. DBC is just as error-prone as any kind of coding, humans make mistakes
> and no language or technique is going to guarantee otherwise.

Right, but we can protect ourselves from our own mistakes. If we didn't make
the mistakes, we wouldn't need protection. And comparing what you did with
what you intended to do (a clear statement of it - something executable) is
a good way to protect yourself FROM yourself (and others).

> I thought we were discussing how OO as such was flawed, but instead you seem
> to be talking about ways of refining existing OO implementations. A fine
> topic but not the one I thought we were on.

You're probably right - I might have drifted. I suggest that objects are
insufficient - not flawed for their main purpose (types) but insufficient
for writing systems, and inferior to relations (properly implemented
according to the model), and no better than functional (which abstracts
along the functional (of course) axis rather than the data type axis).

- erk


0
ekaun (22)
5/7/2004 8:16:32 PM
> Topmind wrote:
> >>The world isn't hierarchical. Languages like Java deal with this with 
> >>interfaces, Smalltalk deals with this via dynamic typing. I'm not sure 
> >>what you think sets can do that existing programming languages can't do.
> > 
> > 
> > It is not a matter of "can't do", because they are all Turing
> > Equivalent. It is a matter of human convenience (maintainability).
> > 
> > Your "customer" example is essentially a navigational database
> > hard-wired into app code. App code is a sh*tty place to store
> > relationship schemas IMO. Maybe you like it and like navigational
> > techniques (for some odd reason), but I don't. They happily died in
> > the 70's thanks to Dr. Codd, but OO zealots resurrected it from the
> > dead for OO and OODBMS. Does this mean goto's are coming back?
> > (Navigational is the Goto of data structures.)
> 
> Hi T,

Hey, long time no battle.

> 
> How about an example?

Example of what? Customer database? Goto-like pointers?

> 
> Kurt

-T-
0
topmind (2124)
5/7/2004 11:03:20 PM

Topmind wrote:
>>Topmind wrote:
>>
>>>>The world isn't hierarchical. Languages like Java deal with this with 
>>>>interfaces, Smalltalk deals with this via dynamic typing. I'm not sure 
>>>>what you think sets can do that existing programming languages can't do.
>>>
>>>
>>>It is not a matter of "can't do", because they are all Turing
>>>Equivalent. It is a matter of human convenience (maintainability).
>>>
>>>Your "customer" example is essentially a navigational database
>>>hard-wired into app code. App code is a sh*tty place to store
>>>relationship schemas IMO. Maybe you like it and like navigational
>>>techniques (for some odd reason), but I don't. They happily died in
>>>the 70's thanks to Dr. Codd, but OO zealots resurrected it from the
>>>dead for OO and OODBMS. Does this mean goto's are coming back?
>>>(Navigational is the Goto of data structures.)
>>
>>Hi T,
> 
> 
> Hey, long time no battle.

Yes, haven't you missed me? :-)

> 
> 
>>How about an example?
> 
> 
> Example of what? Customer database? Goto-like pointers?

I think you were criticizing some sample OO code denoting Customers
that are Persons and have an address.
If I understand you correctly, what you are saying is that you, in your 
code, are able to avoid expressing, for instance, the relationship 
between customers and their address.
If my assumption is correct, could you please show me an example?


Kurt


0
5/8/2004 3:09:38 PM
> >>>>The world isn't hierarchical. Languages like Java deal with this with 
> >>>>interfaces, Smalltalk deals with this via dynamic typing. I'm not sure 
> >>>>what you think sets can do that existing programming languages can't do.
> >>>
> >>>
> >>>It is not a matter of "can't do", because they are all Turing
> >>>Equivalent. It is a matter of human convenience (maintainability).
> >>>
> >>>Your "customer" example is essentially a navigational database
> >>>hard-wired into app code. App code is a sh*tty place to store
> >>>relationship schemas IMO. Maybe you like it and like navigational
> >>>techniques (for some odd reason), but I don't. They happily died in
> >>>the 70's thanks to Dr. Codd, but OO zealots resurrected it from the
> >>>dead for OO and OODBMS. Does this mean goto's are coming back?
> >>>(Navigational is the Goto of data structures.)
> >>
> >>Hi T,
> > 
> > 
> > Hey, long time no battle.
> 
> Yes, haven't you missed me? :-)
> 
> > 
> > 
> >>How about an example?
> > 
> > 
> > Example of what? Customer database? Goto-like pointers?
> 
> I think you were criticizing some sample OO code denoting Customers
> that are Persons and have an address.
> If I understand you correctly, what you are saying is that you, in your 
> code, are able to avoid expressing, for instance, the relationship 
> between customers and their address.
> If my assumption is correct, could you please show me an example?
> 
> Kurt

Those relationships belong in the database, not hard-wired
into app code. I don't like digging through source code
to try to extract the schema among all that stuff.

-T-
0
topmind (2124)
5/9/2004 6:09:37 AM

Topmind wrote:
>>>Example of what? Customer database? Goto-like pointers?
>>
>>I think you were criticizing some sample OO code denoting Customers
>>that are Persons and have an address.
>>If I understand you correctly, what you are saying is that you, in your 
>>code, are able to avoid expressing, for instance, the relationship 
>>between customers and their address.
>>If my assumption is correct, could you please show me an example?
>>
>>Kurt
> 
> 
> Those relationships belong in the database, not hard-wired
> into app code. I don't like digging through source code
> to try to extract the schema among all that stuff.

Well then, I would expect you to be able to illustrate what you mean
with an example. Show me how you would avoid coding the fact that 
customers have addresses.
If you prefer a different example, go ahead, just propose it.

Kurt


0
5/9/2004 6:50:48 PM
On 8 May 2004 23:09:37 -0700, topmind@technologist.com (Topmind)
wrote:

>Those relationships belong in the database, not hard-wired
>into app code.

And those relationships should not be represented using pointers.

Regards
  Alfredo
0
alfredo (205)
5/10/2004 12:27:07 AM
"Eric Kaun" <ekaun@yahoo.com> wrote in message
news:lS8mc.1392$qU1.392@newssvr15.news.prodigy.com...
> "Michael N. Christoff" <mchristoff@sympatico.caREMOVETHIS> wrote in
message
> news:eh0mc.26812$3Q4.694288@news20.bellglobal.com...
> > I think the main reason OO is popular is that the tasks modern computers are
> > being asked to do most often require network architecture.
>
> Agreed.
>
> > Applications are
> > becoming more and more distributed and the object paradigm has very good
> > support for highly distributed architectures (at least compared to
> > functional and procedural).
>
> I disagree. The most ubiquitous and useful components in Java are servlets
> and JSPs, neither of which are in any special way "object-oriented".
> Component-based and service-oriented architecture don't mandate objects.

Come on.  You're saying OO has no better support for components than
functional languages or structured languages?  And note: modules <>
components.

> Web
> services don't mandate objects.
> Objects are useful when you're dealing with
> an object-oriented language (obviously), and when you have language
> homogeneity. With Web services, we reduce interfaces to hierarchical
> messages, and all data to strings, to cope with heterogeneous systems. Where
> is the object-oriented paradigm? The CORBA protocols, more OO than any other
> that I can think of right now, are arguably dead (or at least buried in a
> niche). The most active protocols are not OO in the least. And the work
> required to "bake" objects from the byte streams moving around are for the
> convenience of processing in the language of the target component, not for
> any other reason.
>

I disagree.  Encapsulated objects with interfaces (note I didn't mention
inheritance) are the foundation of distributed systems.  Whether you
implement the idea of "autonomous nodes in a network with private local data
that communicate by passing messages defined by interfaces" with functional
or whatever is not the point.  The point is that OO has direct support for
this basic idea and the other options do not.

> > If the buying public was asking for huge
> > quantities of symbolic algebra programs I'm sure functional would be "the
> > most popular" paradigm.
>
> The public doesn't know to ask for that, and the development community
> doesn't either. Don't mistake acceptability (especially after a long period
> of growth in the computing industry, which brings in many more people and
> requires spoon-fed education) with technical credentials. People are willing
> to do a lot of stupid things.
>

So you're essentially saying "no one knows".  So how do you know that OO is
not the best model?  Note: I'm not saying it is (although I _believe_ it
is).

> And functional <> symbolic algrebra.
>

Of course, but functional languages are extremely popular (and very well
suited) for writing symbolic algebra apps.

> > If large business didn't require integration of
> > information coming in from both internal and external sources around the
> > world in any number of data formats, I'm sure procedural/relational would be
> > more than adequate for any conceivable business task.
>
> And it is - at least relational, not procedural. And from the point of view
> of integrating services / interfaces across non-OO protocols and
> heterogeneous systems, procedural is just as good as OO. What makes OO
> better in such a case? It's commonly used in distributed computing because OO
> is popular - not because it's ideally suited for it.
>

** -
I totally disagree.  The idea of 'objects', be they actual implementations
found in a particular OOPL or more abstract notions, are the foundation of
non-shared memory message-passing distributed architectures.  They are the
simplest way to think about such architectures as far as I'm concerned, since
there is almost a 1-1 mapping from object to network node, from network
message to object method invocation, and from local private state to
encapsulation.
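
That mapping, reduced to a minimal Java sketch (the Node and Message names
are invented for illustration):

interface Message { }                 // a network message...

interface Node {
    void receive(Message m);          // ...maps to a method invocation
}

class CounterNode implements Node {
    private int received;             // local private state = encapsulation
    public void receive(Message m) { received++; }
}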

> > P/R was good in the
> > days when everything was more centralized and of a smaller scale.  It is
> > not, IMO, the best paradigm for global network computing, and neither is
> > functional.
>
> Functional is much better, but OO is no better.
>
> > On a more philosophical note, I also see the increasing popularity of
> > 'complex systems', where you have to almost embrace complexity to an extent.
>
> Embracing complexity is to be avoided. Simplicity should be embraced.
> Complexity should only be tolerated where you have no choice in the matter.
> To embrace complexity is to spread your legs for any "design" which happens
> by.
>

A design should be as simple as possible, but no simpler.  Often the
simplest solution is complex.

> > The power of this type of system comes from the parallel _interactions_ of
> > spatially separated nodes/objects/cells/atoms (or whatever).
>
> There are models for such things - read Dijkstra and Hoare (e.g. CSP).
> Unfortunately, they're not used, and as an industry we seem to believe that
> we can cope with complexity by abandoning models and symbols of any sort.
>

Dijkstra?  That stuff, while very important, is out of date.  ie: although
Dijkstra may have been the first to mention the idea of self-stabilizing
systems, I would not recommend people read him to get the state of the art
in this area.  About models:  Almost all papers in distributed computing
rely on either combinatorial graph theoretic models, or more recently,
higher dimensional geometric models that employ techniques from algebraic
topology.  I would agree that they are moving away from purely logic-based
models (like the various predicate calculi for concurrency etc...).  Symbols
are still very important in distributed computing, and are even making a
come-back in general dynamical systems theory through the use of statistical
methods.

> > In fact, one
> > can almost think of complex systems (from models of vibrations through
> > crystals to the internet), as types of asynchronous message-passing systems.
>
> Yes, and these are well-studied by a few, and ignored by most. I agree with
> your observations, but to assume that we're in any way addressing them using
> OO is nonsense. OO is around because it's around and useful for some things,
> but like XML, it's now the proverbial hammer for the computing nail.
>

Again, I believe that the 'object concept' is the simplest way to model such
systems.  One can immediately see how we can take a network of nodes and
turn it into collaboration between objects.

> > While OO may not have been built with this in mind, its built-in support for
> > decentralized network computing is what I see as driving market interest in
> > the paradigm.
>
> It doesn't have such built-in support. That's insane. In what way does an OO
> language inherently support asynchronous processing and distributed failure
> modes?
>

See **.

> > Oh, and another market driver, IMO, is the fact that objects lend themselves
> > well to diagrams (UML).
>
> True.
>
> > The idea of 'software blueprints' probably
> > sounds like a pretty good idea to a lot of CEOs.
>
> Unfortunately UML (and even OCL) are far inferior for their stated purpose
> than material architectural blueprints. They're hand-waving, and UML for the
> most part simply stops us all from arguing about whether an object class
> should be a box or a cloud.

Ha ha! :)

> Since we use boxes for everything anyway, does
> it make a difference? And does anyone even use OCL? Check out Alloy, from
> Daniel Jackson at MIT, for a strong attempt at a useful specification
> (blueprint) language.
>

I'll check it out.

> > Everyone complains about
> > the software development process being more of an art than a science. Using
> > UML may lend, at least the illusion of, increased process structure.
>
> Illusion is the right word. Don't get me wrong - it can be useful for human
> communication. But like everything in the RUP (also from Rational), it adopts a
> kitchen-sink approach, stuffs everything on (and inside) the pizza, and then
> expects coherence to emerge. At best, it's a toolkit, not an integrated
> approach to anything.
>

I tend to agree.  However, I think the basic premise has merit, even if it's
not a silver bullet.



l8r, Mike N. Christoff



0
mchristoff (248)
5/10/2004 12:54:48 AM
"Michael N. Christoff" <mchristoff@sympatico.caREMOVETHIS> wrote in message
news:UqAnc.5291$FH5.207930@news20.bellglobal.com...
> Come on.  You're saying OO has no better support for components than
> functional languages or structured languages?  And note: modules <>
> components.

I understand that, and stick by my assertion. I'll change my tune if you
give me a good argument - in what way does OO provide better support for
components? My argument that it doesn't is based on the typical need for the
external component interface to support different languages (meaning that
you can't just hand out serialized Java objects, for example), and that most
components provide stateless interfaces which amount to little more than a
collection of conceptually-related functions.

Of course, since component is such a fuzzy and overloaded term, I could be
wrong.

> > The most active protocols are not OO in the least. And the work
> > required to "bake" objects from the byte streams moving around are for
the
> > convenience of processing in the language of the target component, not
for
> > any other reason.
> >
>
> I disagree.  Encapsulated objects with interfaces (note I didn't mention
> inheritance) are the foundation of distributed systems.

Just because the various components are implemented using OO doesn't speak
to the importance of OO in distributed systems - all it means is that the
most common languages are OO ones. That says nothing about whether or not OO
supports components and distributed systems better than non-OO paradigms.

> Whether you
> implement the idea of "autonomous nodes in a network with private local data
> that communicate by passing messages defined by interfaces" with functional
> or whatever is not the point.

Yes, that's exactly the point.

> The point is that OO has direct support for this basic idea and the other
> options do not.

What direct support? Point to it, name it, describe it, whatever.

> > The public doesn't know to ask for that, and the development community
> > doesn't either. Don't mistake acceptability (especially after a long period
> > of growth in the computing industry, which brings in many more people and
> > requires spoon-fed education) with technical credentials. People are willing
> > to do a lot of stupid things.
>
> So you're essentially saying "no one knows".  So how do you know that OO is
> not the best model?  Note: I'm not saying it is (although I _believe_ it
> is).

I was obviously exaggerating, but

> > And it is - at least relational, not procedural. And from the point of view
> > of integrating services / interfaces across non-OO protocols and
> > heterogeneous systems, procedural is just as good as OO. What makes OO
> > better in such a case? It's commonly used in distributed computing because OO
> > is popular - not because it's ideally suited for it.
> >
>
> ** -
> I totally disagree.  The idea of 'objects', be they actual implementations
> found in a particular OOPL or more abstract notions, are the foundation of
> non-shared memory message-passing distributed architectures.  They are the
> simplest way to think about such architectures as far as I'm concerned, since
> there is almost a 1-1 mapping from object to network node,

Wrong - are you really serious? Half the J2EE patterns are based on ways of

> from network
> message to object method invocation, and from local private state to
> encapsulation.

So you use exclusively stateful component frameworks? Most of us use
stateless, and when you're in stateless territory, there ain't no
encapsulation of state. In HTTP the state is passed back and forth - in rich
GUI - to - EJB architectures, the state is client-side. There are
exceptions, of course, but if object state were such a natural fit to
components, you wouldn't have the vast majority of J2EE users hiding their
entity beans behind stateless session bean facades.

> > Embracing complexity is to be avoided. Simplicity should be embraced.
> > Complexity should only be tolerated where you have no choice in the matter.
> > To embrace complexity is to spread your legs for any "design" which happens
> > by.
> >
>
> A design should be as simple as possible, but no simpler.  Often the
> simplest solution is complex.

I disagree, but this is slippery to argue either way. The complexities arise
out of a focus on procedural (even within OO) and product-focused designs,
and out of insufficient design.

> Dijkstra?  That stuff, while very important, is out of date.

hahahahahahahaha - you kill me.

> ie: although
> Dijkstra may have been the first to mention the idea of self-stabilizing
> systems, I would not recommend people read him to get the state of the art
> in this area.

Couldn't disagree more - the state of the art these days is precisely that:
art without logical foundation. It's a feel-good collection of practices
that he formalized years ago, but because formalism is "out", it doesn't
sell.

> About models:  Almost all papers in distributed computing
> rely on either combinatorial graph theoretic models, or more recently,
> higher dimensional geometric models that employ techniques from algebraic
> topology.

True - these are researchers looking to impress, and of course their work
will make its way in some fashion into mainstream products, albeit in a
mangled form, since industry tends to take research, adopt some of its
lingo, do a half-assed implementation, and then pile on sellable features
that sound good but would be easier if they hadn't abandoned the foundation.
It's happened with process models, with relational, with industrial
engineering in general, with every methodology, etc. etc.

> I would agree that they are moving away from purely logic-based
> models (like the various predicate calculi for concurrency etc...). Symbols
> are still very important in distributed computing, and are even making a
> come-back in general dynamical systems theory through the use of statistical
> methods.

Statistics? Fun. We haven't even mastered the deterministic and static
models yet.

My point is this: there's a gulf between common practice and research, and
in particular solid research from years ago never makes its way into
products, and the focus on products infiltrates everyone's brain, resulting
in an ability to comprehend design only through product comparisons.

> > Yes, and these are well-studied by a few, and ignored by most. I agree with
> > your observations, but to assume that we're in any way addressing them using
> > OO is nonsense. OO is around because it's around and useful for some things,
> > but like XML, it's now the proverbial hammer for the computing nail.
>
> Again, I believe that the 'object concept' is the simplest way to model such
> systems.  One can immediately see how we can take a network of nodes and
> turn it into collaboration between objects.

That's assuming you know what the nodes are - you're presupposing a
component design (or object - depends on the architectural tiers you're
mentioning). Prior to that, there are many ways to slice it, and many
possibilities that get ignored because of a premature commitment to objects.

And besides, I seldom do designs with networks of nodes. They're around, of
course, but design at such a high level presupposes the existence of member
systems that meet certain criteria for interoperability. And those criteria
aren't OO criteria.

> > Illusion is the right word. Don't get me wrong - it can be useful for human
> > communication. But like everything in the RUP (also from Rational), it adopts a
> > kitchen-sink approach, stuffs everything on (and inside) the pizza, and then
> > expects coherence to emerge. At best, it's a toolkit, not an integrated
> > approach to anything.
> >
>
> I tend to agree.  However, I think the basic premise has merit, even if it's
> not a silver bullet.

Right, nothing is. I would just like to see some formal notation actually
get automated - that's what computers should be good for. You state the
invariants, the engine enforces 'em - it's a basic notion every framework is
built on.

- erk


0
ekaun (22)
5/11/2004 3:44:50 PM
> >>>Example of what? Customer database? Goto-like pointers?
> >>
> >>I think you were criticizing some sample OO code denoting Customers
> >>that are Persons and have an address.
> >>If I understand you correctly, what you are saying is that you, in your 
> >>code, are able to avoid expressing, for instance, the relationship 
> >>between customers and their address.
> >>If my assumption is correct, could you please show me an example?
> >>
> >>Kurt
> > 
> > 
> > Those relationships belong in the database, not hard-wired
> > into app code. I don't like digging through source code
> > to try to extract the schema among all that stuff.
> 
> Well then, I would expect you to be able to illustrate what you mean
> with an example. Show me how you would avoid coding the fact that 
> customers have addresses.
> If you prefer a different example, go ahead, just propose it.
> 
> Kurt

select * from customers C, addresses A where C.addrRef = A.addrID

(This assumes one-to-one, but it could also be many-to-many)

The fact that there is a relationship may be reflected in some of the
code at a small, local scale; but the code design itself does not
reflect it. In other words, there are no "pointers" in the code that
links the two and no coupling between two different
modules/objects/classes/thingies to represent such a link. We avoid
doing noun modeling in the app code.
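
For contrast, the hard-wired navigational style being criticized would
look something like this hypothetical Java sketch, with the relationship
frozen into the class design as a pointer:

class Address {
    String street, city;
}

class Customer {
    String name;
    Address address;    // the schema, hard-wired into app code;
                        // client code navigates: customer.address.city
}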
   
-T-
0
topmind (2124)
5/12/2004 10:02:39 PM

Topmind wrote:
>>>>>Example of what? Customer database? Goto-like pointers?
>>>>
>>>>I think you were criticizing some sample OO code denoting Customers
>>>>that are Persons and have an address.
>>>>If I understand you correctly, what you are saying is that you, in your 
>>>>code, are able to avoid expressing, for instance, the relationship 
>>>>between customers and their address.
>>>>If my assumption is correct, could you please show me an example?
>>>>
>>>>Kurt
>>>
>>>
>>>Those relationships belong in the database, not hard-wired
>>>into app code. I don't like digging through source code
>>>to try to extract the schema among all that stuff.
>>
>>Well then, I would expect you to be able to illustrate what you mean
>>with an example. Show me how you would avoid coding the fact that 
>>customers have addresses.
>>If you prefer a different example, go ahead, just propose it.
>>
>>Kurt
> 
> 
> select * from customers C, addresses A where C.addrRef = A.addrID
> 
> (This assumes one-to-one, but it could also be many-to-many)
> 
> The fact that there is a relationship may be reflected in some of the
> code at a small, local scale; but the code design itself does not
> reflect it. In other words, there are no "pointers" in the code that
> links the two and no coupling between two different
> modules/objects/classes/thingies to represent such a link. We avoid
> doing noun modeling in the app code.

What is the difference between a "pointer" and a key reference?

Anyway, with your coding style you would probably embed the statement 
above everywhere you needed a customer's address.

In contrast, even a mediocre OO programmer would have the decency to 
hide away the fact that customers and their addresses are stored in 
different places.
In other words, the relationship would be hidden. In your case it would not.
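
What "hiding it away" might look like, as a minimal JDBC sketch (the
CustomerRepository name, the custID and street columns, and the plumbing
are all invented for illustration):

import java.sql.*;

class CustomerRepository {
    private final Connection conn;
    CustomerRepository(Connection conn) { this.conn = conn; }

    // Callers ask for a customer's street address; the join - and the
    // fact that addresses live in a separate table - stays in here.
    String findStreet(int customerId) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
            "select A.street from customers C, addresses A " +
            "where C.addrRef = A.addrID and C.custID = ?");
        ps.setInt(1, customerId);
        ResultSet rs = ps.executeQuery();
        return rs.next() ? rs.getString("street") : null;
    }
}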

Kurt

0
5/13/2004 7:12:33 PM
topmind@technologist.com (Topmind) wrote in
news:4e705869.0405121402.4677416a@posting.google.com: 

> select * from customers C, addresses A where C.addrRef = A.addrID

This is similar to using untyped pointers (void * in C++, or Object
references in Java, say).

In OO you can implement sets. Hence, all you can do with SQL you can do
with OO (performance and persistence considerations require some more
discussion, of course, but that's beside the point here). If the
language has RTTI, you can implement * almost directly (without RTTI,
client code must either know in advance the structure of the set
elements, or make use of some protocol for obtaining a description of it
at runtime - incidentally, exactly what any RDBMS does today to support
*).

For example, in an OO context, you might have separate customer and
address sets and leave the relationship to the client code. 

That would *not* in general be a good thing. The reason is the same one
for which I avoid * in SQL and like to have explicit foreign keys in my
tables (and constraints as well, when I can): stronger typing, which
helps spot errors when future changes happen (there's little more
annoying to me than working on a legacy database that has grown over
several years without any referential integrity whatsoever - and usually
even with cryptic field and table names).

Note that with your approach you have to do *exactly* that: go wading
through the client code to discover the schema (the query you propose is
an example of such client code).

That's because you have two possibilities: either
you don't declare an explicit foreign key (no explicit relationship) or
you do. If you do (for example in the Customer table), you obviously
have a "pointer" - a hard-coded reference in the Customer entity
referring to another entity (similar to a typed pointer). If you don't,
there's nothing in the schema that tells you that customer and address
are related - so the only way to find the complete set of relationships
is to go into all the client code.
0
5/14/2004 9:51:41 AM
Cristiano Sadun <cristianoTAKEsadunTHIS@OUThotmail.com> wrote in message news:<Xns94E978A99988FSadun@212.45.188.38>...
> topmind@technologist.com (Topmind) wrote in
> news:4e705869.0405121402.4677416a@posting.google.com: 
>  
> In OO you can implement sets. Hence, all you can do with SQL you can do
> with OO 

That's interesting.  What set operations do your favorite OO languages
support?  Can you give us an example of how you use them?
> 
> That's because you have two possibilities: either
> you don't declare an explicit foreign key (no explicit relationship) or
> you do. If you do (for example in the Customer table), you obviously
> have a "pointer" - a hard-coded reference in the Customer entity
> referring to another entity (similar to a typed pointer). If you don't,
> there's nothing in the schema that tells you that customer and address
> are related - so the only way to find the complete set of relationships
> is to go into all the client code.

A foreign key relationship is an integrity constraint on data, a
pointer defines a navigation path between entities.  Do you understand
the distinction?

Regards,
Daniel Parker
0
5/14/2004 7:30:30 PM
> >>>>>Example of what? Customer database? Goto-like pointers?
> >>>>
> >>>>I think you were criticizing some sample OO code denoting Customers
> >>>>that are Persons and have an address.
> >>>>If I understand you correctly, what you are saying is that you, in your 
> >>>>code, are able to avoid expressing, for instance, the relationship 
> >>>>between customers and their address.
> >>>>If my assumption is correct, could you please show me an example?
> >>>>
> >>>>Kurt
> >>>
> >>>
> >>>Those relationships belong in the database, not hard-wired
> >>>into app code. I don't like digging through source code
> >>>to try to extract the schema among all that stuff.
> >>
> >>Well then, I would expect you to be able to illustrate what you mean
> >>with an example. Show me how you would avoid coding the fact that 
> >>customers have addresses.
> >>If you prefer a different example, go ahead, just propose it.
> >>
> >>Kurt
> > 
> > 
> > select * from customers C, addresses A where C.addrRef = A.addrID
> > 
> > (This assumes one-to-one, but it could also be many-to-many)
> > 
> > The fact that there is a relationship may be reflected in some of the
> > code at a small, local scale; but the code design itself does not
> > reflect it. In other words, there are no "pointers" in the code that
> > links the two and no coupling between two different
> > modules/objects/classes/thingies to represent such a link. We avoid
> > doing noun modeling in the app code.
> 
> What is the difference between a "pointer" and a key reference?

The custom application developer does not implement them. There are
little or no actual pointers for such in the app code.

> 
> Anyway, with your coding style you would probably embed the statement 
> above everywhere you needed a customer's address.

No. I use functions, stored procedures, and/or views 
to reduce redundancy if need be.
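
For instance, a single procedural function can wrap the join shown
earlier - a sketch only, assuming a JDBC Connection and the
customers/addresses schema from the query above (the custID column and
the function name are made up):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class CustomerQueries {
    // One shared definition of the join; callers never repeat the SQL.
    static ResultSet customerWithAddress(Connection db, int custId)
            throws SQLException {
        PreparedStatement ps = db.prepareStatement(
            "select * from customers C, addresses A " +
            "where C.addrRef = A.addrID and C.custID = ?");
        ps.setInt(1, custId);
        return ps.executeQuery();
    }
}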

> 
> In contrast, even a mediocre OO programmer would have the decency to 
> hide away the fact that customers and their addresses are stored in 
> different places.
> In other words, the relationship would be hidden. In your case it would not.

Huh? I don't understand. If you build classes with pointers to 
each other, nothing is hidden away from the app developer. 

What exactly is a "different place"? Queries request information,
not RAM addresses. If you want to hide the join syntax (for
whatever reason), again you can use functions, stored procedures, 
and/or views. 

> 
> Kurt

-T-
0
topmind (2124)
5/14/2004 8:04:35 PM

Topmind wrote:

>>>select * from customers C, addresses A where C.addrRef = A.addrID
>>>
>>>(This assumes one-to-one, but it could also be many-to-many)
>>>
>>>The fact that there is a relationship may be reflected in some of the
>>>code at a small, local scale; but the code design itself does not
>>>reflect it. In other words, there are no "pointers" in the code that
>>>links the two and no coupling between two different
>>>modules/objects/classes/thingies to represent such a link. We avoid
>>>doing noun modeling in the app code.
>>
>>What is the difference between a "pointer" and a key reference?
> 
> 
> The custom application developer does not implement them. There are
> little or no actual pointers for such in the app code.

A pointer is an address, an index into an array. Do the mental exercise 
of replacing the memory array by an associative array, keyed on customer 
ID. Now tell me, what is the difference between looking up an address 
using an array index, and doing it using a numeric ID?
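
In Java terms the thought experiment looks like this (a fragment, not a
complete program; Address, customerId and customersAddress are
hypothetical, and java.util.Map/HashMap are assumed imported):

Map<Integer, Address> byId = new HashMap<Integer, Address>();
byId.put(customerId, customersAddress);   // "write through" the ID
Address a = byId.get(customerId);         // "dereference" the ID

The only thing that changed relative to a raw pointer is the key space:
a map key instead of a memory offset.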

>>Anyway, with your coding style you would probably embed the statement 
>>above everywhere you needed a customer's address.
> 
> 
> No. I use functions, stored procedures, and/or views 
> to reduce redundancy if need be.
> 
> 
>>In contrast, even a mediocre OO programmer would have the decency to 
>>hide away the fact that customers and their addresses are stored in 
>>different places.
>>In other words, the relationship would be hidden. In your case it would not.
> 
> 
> Huh? I don't understand. If you build classes with pointers to 
> each other, nothing is hidden away from the app developer. 
> 
> What exactly is a "different place"? Queries request information,
> not RAM addresses. If you want to hide the join syntax (for
> whatever reason), again you can use functions, stored procedures, 
> and/or views. 

Address address = customer.address();

Where is the "pointer"?


Kurt

0
5/14/2004 10:26:42 PM
> >>>select * from customers C, addresses A where C.addrRef = A.addrID
> >>>
> >>>(This assumes one-to-one, but it could also be many-to-many)
> >>>
> >>>The fact that there is a relationship may be reflected in some of the
> >>>code at a small, local scale; but the code design itself does not
> >>>reflect it. In other words, there are no "pointers" in the code that
> >>>links the two and no coupling between two different
> >>>modules/objects/classes/thingies to represent such a link. We avoid
> >>>doing noun modeling in the app code.
> >>
> >>What is the difference between a "pointer" and a key reference?
> > 
> > 
> > The custom application developer does not implement them. There are
> > little or no actual pointers for such in the app code.
> 
> A pointer is an address, an index into an array. Do the mental exercise 
> of replacing the memory array by an associative array, keyed on customer 
> ID. Now tell me, what is the difference between looking up an address 
> using an array index, and doing it using a numeric ID?

An ID is an attribute, just like any other attribute. And, the
code does not necessarily have to see or work with specific
ID's to do many of the tasks, such as joins.

> 
> >>Anyway, with your coding style you would probably embed the statement 
> >>above everywhere you needed a customer's address.
> > 
> > 
> > No. I use functions, stored procedures, and/or views 
> > to reduce redundancy if need be.
> > 
> > 
> >>In contrast, even a mediocre OO programmer would have the decency to 
> >>hide away the fact that customers and their addresses are stored in 
> >>different places.
> >>In other words, the relationship would be hidden. In your case it would not.
> > 
> > 
> > Huh? I don't understand. If you build classes with pointers to 
> > each other, nothing is hidden away from the app developer. 
> > 
> > What exactly is a "different place"? Queries request information,
> > not RAM addresses. If you want to hide the join syntax (for
> > whatever reason), again you can use functions, stored procedures, 
> > and/or views. 
> 
> Address address = customer.address();
> 
> Where is the "pointer"?

In the customer class. By the way, what do you do if a customer
can have multiple addresses such as billing address, mailing
address, etc?

> 
> 
> Kurt

-T-
0
topmind (2124)
5/15/2004 6:39:58 AM

Topmind wrote:
>>>>>select * from customers C, addresses A where C.addrRef = A.addrID
>>>>>
>>>>>(This assumes one-to-one, but it could also be many-to-many)
>>>>>
>>>>>The fact that there is a relationship may be reflected in some of the
>>>>>code at a small, local scale; but the code design itself does not
>>>>>reflect it. In other words, there are no "pointers" in the code that
>>>>>links the two and no coupling between two different
>>>>>modules/objects/classes/thingies to represent such a link. We avoid
>>>>>doing noun modeling in the app code.
>>>>
>>>>What is the difference between a "pointer" and a key reference?
>>>
>>>
>>>The custom application developer does not implement them. There are
>>>little or no actual pointers for such in the app code.
>>
>>A pointer is an address, an index into an array. Do the mental exercise 
>>of replacing the memory array by an associative array, keyed on customer 
>>ID. Now tell me, what is the difference between looking up an address 
>>using an array index, and doing it using a numeric ID?
> 
> 
> An ID is an attribute, just like any other attribute. And, the
> code does not necessarily have to see or work with specific
> ID's to do many of the tasks, such as joins.
> 

ID's are artificial attributes. In general, ID's are nothing but 
portable pointers.


> 
>>>>Anyway, with your coding style you would probably embed the statement 
>>>>above everywhere you needed a customer's address.
>>>
>>>
>>>No. I use functions, stored procedures, and/or views 
>>>to reduce redundancy if need be.
>>>
>>>
>>>
>>>>In contrast, even a mediocre OO programmer would have the decency to 
>>>>hide away the fact that customers and their addresses are stored in 
>>>>different places.
>>>>In other words, the relationship would be hidden. In your case it would not.
>>>
>>>
>>>Huh? I don't understand. If you build classes with pointers to 
>>>each other, nothing is hidden away from the app developer. 
>>>
>>>What exactly is a "different place"? Queries request information,
>>>not RAM addresses. If you want to hide the join syntax (for
>>>whatever reason), again you can use functions, stored procedures, 
>>>and/or views. 
>>
>>Address address = customer.address();
>>
>>Where is the "pointer"?
> 
> 
> In the customer class. By the way, what do you do if a customer
> can have multiple addresses such as billing address, mailing
> address, etc?
> 


Not necessarily. In fact, for situations like the above, I would be very 
tempted to retrieve the address on demand.
Above all, since I have an interface, I can choose the implementation.

2nd question:

Depends, but one choice could be

Address bill_address = customer.billing_address();
Address mail_address = customer.mailing_address();
....

Another choice would be

Address bill_address = customer.address("Billing");


It is likely that, for instance, the billing address defaults to the
mailing address if a specific billing address hasn't been registered.
That is when interfaces come in handy.
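
One possible shape of that, as a sketch (Address here is only a
placeholder type): the accessor hides whether a distinct billing address
was ever registered.

class Address { /* street, city, ... */ }

class Customer {
    private Address mailing;
    private Address billing;   // null until a billing address is registered

    Address mailing_address() { return mailing; }

    // Callers never see the defaulting rule, only the interface.
    Address billing_address() {
        return (billing != null) ? billing : mailing;
    }
}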

Regards,

Kurt

0
5/15/2004 9:17:33 AM
> >>>>>select * from customers C, addresses A where C.addrRef = A.addrID
> >>>>>
> >>>>>(This assumes one-to-one, but it could also be many-to-many)
> >>>>>
> >>>>>The fact that there is a relationship may be reflected in some of the
> >>>>>code at a small, local scale; but the code design itself does not
> >>>>>reflect it. In other words, there are no "pointers" in the code that
> >>>>>links the two and no coupling between two different
> >>>>>modules/objects/classes/thingies to represent such a link. We avoid
> >>>>>doing noun modeling in the app code.
> >>>>
> >>>>What is the difference between a "pointer" and a key reference?
> >>>
> >>>
> >>>The custom application developer does not implement them. There are
> >>>little or no actual pointers for such in the app code.
> >>
> >>A pointer is an address, an index into an array. Do the mental exercise 
> >>of replacing the memory array by an associative array, keyed on customer 
> >>ID. Now tell me, what is the difference between looking up an address 
> >>using an array index, and doing it using a numeric ID?
> > 
> > 
> > An ID is an attribute, just like any other attribute. And, the
> > code does not necessarily have to see or work with specific
> > ID's to do many of the tasks, such as joins.
> > 
> 
> ID's are artificial attributes. In general, ID's are nothing but 
> portable pointers.

Perhaps. But that is how one gets a language-neutral attribute
manager (database), something OO is not very good at.

> 
> > 
> >>>>Anyway, with your coding style you would probably embed the statement 
> >>>>above everywhere you needed a customer's address.
> >>>
> >>>
> >>>No. I use functions, stored procedures, and/or views 
> >>>to reduce redundancy if need be.
> >>>
> >>>
> >>>
> >>>>In contrast, even a mediocre OO programmer would have the decency to 
> >>>>hide away the fact that customers and their addresses are stored in 
> >>>>different places.
> >>>>In other words, the relationship would be hidden. In your case it would not.
> >>>
> >>>
> >>>Huh? I don't understand. If you build classes with pointers to 
> >>>each other, nothing is hidden away from the app developer. 
> >>>
> >>>What exactly is a "different place"? Queries request information,
> >>>not RAM addresses. If you want to hide the join syntax (for
> >>>whatever reason), again you can use functions, stored procedures, 
> >>>and/or views. 
> >>
> >>Address address = customer.address();
> >>
> >>Where is the "pointer"?
> > 
> > 
> > In the customer class. By the way, what do you do if a customer
> > can have multiple addresses such as billing address, mailing
> > address, etc?
> > 
> 
> 
> Not necessarily. In fact, for situations like the above, I would be very 
> tempted to retrieve the address on demand.
> Above all, since I have an interface, I can choose the implementation.

Like I said, one can make interfaces without using OO. I generally
take a YAGNI approach to it.

> 
> 2nd question:
> 
> Depends, but one choice could be
> 
> Address bill_address = customer.billing_address();
> Address mail_address = customer.mailing_address();
> ...
> 
> Another choice would be
> 
> Address bill_address = customer.address("Billing");
> 

You are simply reinventing a custom query language when 
the DBMS already provides one. That ain't reuse.

> 
> It is likely that, for instance, the billing address defaults to the 
> mailing address if a specific billing address hasn't been registered.
> That is when interfaces come in handy.

Tables could do the same thing directly if RDBMSs implemented
"view columns", but unfortunately most don't.
One could also use table views, but it is
a bit more work. Or, just leave the original column alone
and add a second one. But anyhow, one probably needs to visit
every usage to determine which address a given piece of
code requires rather than assume the default is the right
choice. Otherwise, the receptionist may end up being flooded
with billing questions and the like.

> 
> Regards,
> 
> Kurt

-T-
0
topmind (2124)
5/15/2004 6:38:31 PM

Topmind wrote:
>>>>>>>select * from customers C, addresses A where C.addrRef = A.addrID
>>>>>>>

>>ID's are artificial attributes. In general, ID's are nothing but 
>>portable pointers.
> 
> 
> Perhaps. But that is how one gets a language-neutral attribute
> manager (database), something OO is not very good at.

Gibberish and more gibberish.
The DBMS's you can mention implement only one QL. Would you call that 
language-neutrality?

>>>>>>Anyway, with your coding style you would probably embed the statement 
>>>>>>above everywhere you needed a customer's address.
>>>>>
>>>>>
>>>>>No. I use functions, stored procedures, and/or views 
>>>>>to reduce redundancy if need be.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>>In contrast, even a mediocre OO programmer would have the decency to 
>>>>>>hide away the fact that customers and their addresses are stored in 
>>>>>>different places.
>>>>>>In other words, the relationship would be hidden. In your case it would not.
>>>>>
>>>>>
>>>>>Huh? I don't understand. If you build classes with pointers to 
>>>>>each other, nothing is hidden away from the app developer. 
>>>>>
>>>>>What exactly is a "different place"? Queries request information,
>>>>>not RAM addresses. If you want to hide the join syntax (for
>>>>>whatever reason), again you can use functions, stored procedures, 
>>>>>and/or views. 
>>>>
>>>>Address address = customer.address();
>>>>
>>>>Where is the "pointer"?
>>>
>>>
>>>In the customer class. By the way, what do you do if a customer
>>>can have multiple addresses such as billing address, mailing
>>>address, etc?
>>>
>>
>>
>>Not necessarily. In fact, for situations as the above, I would be very 
>>tempted to retrieve the address on demand.
>>Above all, since I have an interface, I can choose the implementation.
> 
> 
> Like I said, one can make interfaces without using OO. I generally
> take a YAGNI approach to it.
> 
> 
>>2nd question:
>>
>>Depends, but one choice could be
>>
>>Address bill_address = customer.billing_address();
>>Address mail_address = customer.mailing_address();
>>...
>>
>>Another choice would be
>>
>>Address bill_address = customer.address("Billing");
>>
> 
> 
> You are simply reinventing a custom query language when 
> the DBMS already provides one. That ain't reuse.
> 

How do you know I am re-inventing anything without seeing the 
implementation of the method?

> 
>>It is likely that, for instance, the billing address defaults to the 
>>mailing address if a specific billing address hasn't been registered.
>>That is when interfaces come in handy.
> 
> 
> Tables could do the same thing directly if RDBMSs implemented
> "view columns", but unfortunately most don't.
> One could also use table views, but it is
> a bit more work. Or, just leave the original column alone
> and add a second one. But anyhow, one probably needs to visit
> every usage to determine which address a given piece of
> code requires rather than assume the default is the right
> choice. Otherwise, the receptionist may end up being flooded
> with billing questions and the like.

And what does this have to do with whatever it was we were discussing?

Kurt

0
5/15/2004 8:55:07 PM
On Sat, 15 May 2004 09:17:33 GMT, "Kurt M. Alonso"
<kurt.magnus.alonso@mailbox.swipnet.se> wrote:

>ID's are artificial attributes. In general, ID's are nothing but 
>portable pointers.

ID's are values not pointers.


0
alfredo (205)
5/16/2004 12:03:09 PM

Alfredo Novoa wrote:
> On Sat, 15 May 2004 09:17:33 GMT, "Kurt M. Alonso"
> <kurt.magnus.alonso@mailbox.swipnet.se> wrote:
> 
> 
>>ID's are artificial attributes. In general, ID's are nothing but 
>>portable pointers.
> 
> 
> ID's are values not pointers.

C language pointers are also values that become memory pointers when
you apply certain operations to them.

Do the following thought experiment and tell me the outcome:

(0) Assume that you are using the C language and that
     Address *a = &my_customer_address;

(1) printf("Street = %s\n", a->street);
     /* is 'a' a pointer? */

(2) a[0] = my_customer_address;
     printf("Street = %s\n", a[0].street);
     /* Ooops, 'a' was not a pointer but an array */
     /* are we still using pointers? */

(3) a[customer.id] = my_customer_address;
     printf("Street = %s\n", a[0].street);
     /* Are we still doing pointers? */

(4) Associative_array *arr = assocArray_new();
     assocArray_add(arr, customer.id, &my_customer_address);

     Address *a = assocArray_get(arr, customer.id);
     printf("Street = %s\n", a->street);
     /* Are we using pointers? */


(5) Now, instead of DBMS sequence generated id's use rowid's.
     Are they 'values' or pointers? If the latter, what is the difference?
     Both were machine generated.


Kurt

0
5/16/2004 7:30:01 PM
Your C version only means something to C (and possibly a tightly bound
language), not outside of it. We need employee ID's etc. anyhow. Might
as well not reinvent the wheel inside a language.
     
-T-
0
topmind (2124)
5/17/2004 3:04:42 AM
>  
> >>ID's are artificial attributes. In general, ID's are nothing but 
> >>portable pointers.
> > 
> > Perhaps. But that is how one gets a language-neutral attribute
> > manager (database), something OO is not very good at.
> 
> Gibberish and more gibberish.
> The DBMS's you can mention implement only one QL. Would you call that 
> language-neutrality?

How would you remedy this if given a chance? (I am thinking of
an approach, but want to hear yours first.)

[...]
> > 
> >>It is likely that, for instance, the billing address defaults to the 
> >>mailing address if a specific billing address hasn't been registered.
> >>That is when interfaces come in handy.
> > 
> > 
> > Tables could do the same thing directly if RDBMSs implemented
> > "view columns", but unfortunately most don't.
> > One could also use table views, but it is
> > a bit more work. Or, just leave the original column alone
> > and add a second one. But anyhow, one probably needs to visit
> > every usage to determine which address a given piece of
> > code requires rather than assume the default is the right
> > choice. Otherwise, the receptionist may end up being flooded
> > with billing questions and the like.
> 
> And what does this have to do with whatever it was we were discussing?

We could make a virtual/calculated column that could reference whatever
you wanted it to reference. However, I question the wisdom
of doing that globally.

> 
> Kurt

-T-
0
topmind (2124)
5/17/2004 3:16:03 AM
danielaparker@hotmail.com (Daniel Parker) wrote in
news:33feb190.0405141130.7a643d63@posting.google.com: 

>> In OO you can implement sets. Hence, all you can do with SQL you can do
>> with OO 
> 
> That's interesting.  What set operations do your favorite OO languages
> support?  Can you give us an example of how you use them?

Insertion, removal and selection, what else? Obviously they aren't
implemented at language level - but as I said, one can implement them
with no trouble, by defining and using classes and operations. SQL
engines are implemented like that, after all.

For example, I can define or use a Set class, which exposes a query
method taking a condition object. I can add an element to that set. I can
impose that all elements have the same structure. I can delete an element
from the set. With RTTI, I can even discover the structure of the
elements at runtime without compile-time knowledge. And so on.
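
A sketch of such a Set class, under stated assumptions (Condition and
TypedSet are invented names, not a library API):

import java.util.ArrayList;
import java.util.List;

interface Condition<T> {
    boolean matches(T element);   // plays the role of a "where" clause
}

class TypedSet<T> {
    private final List<T> elements = new ArrayList<T>();

    void add(T e) {                            // insertion
        if (!elements.contains(e)) elements.add(e);
    }

    void remove(T e) { elements.remove(e); }   // removal

    // Selection: return every element satisfying the condition.
    List<T> select(Condition<T> c) {
        List<T> result = new ArrayList<T>();
        for (T e : elements)
            if (c.matches(e)) result.add(e);
        return result;
    }
}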

What RDBMSs do is optimize the storage off core, and provide an
explicitly set-oriented query language with more or less well-defined
semantics.

>> That's because you have two possibilities: either
>> you don't declare an explicit foreign key (no explicit relationship)
>> or you do. If you do (for example in the Customer table), you
>> obviously have a "pointer" - a hard-coded reference in the Customer
>> entity referring to another entity (similar to a typed pointer). If you
>> don't, there's nothing in the schema that tells you that customer and
>> address are related - so the only way to find the complete set of
>> relationships is to go into all the client code.
> 
> A foreign key relationship is an integrity constraint on data, a
> pointer defines a navigation path between entities.  Do you understand
> the distinction?

I do, thank you. :) But that distinction is irrelevant from the coupling
perspective. For two entities to be coupled - which is what we are
talking about (the original poster said "those relationships belong in
the database, not hard-wired into app code. I don't like digging through
source code to try to extract the schema among all that stuff") - it is
enough for one table to name the other and its internal parts (which is
exactly what an integrity constraint does) - for example, if you change
the second table's structure or name, you'll have effects on the first.
That's coupling, as I described above.
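
The same coupling-by-naming is easy to see in code form (a trivial
sketch with invented types):

// Customer names Address directly, so renaming or restructuring
// Address forces a change here - the naming itself is the coupling,
// whether or not any navigation ever happens.
class Address { String street; }
class Customer { Address address; }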
0
5/18/2004 8:34:38 AM

Topmind wrote:
> Your C version only means something to C (and possibly a tightly bound
> language), not outside of it.
>    

"Pointer" is a meaningful concept in the C programming language and a 
few others. As you well know there are many mainstream languages that 
have dispensed with "pointers".

Kurt



0
5/19/2004 7:13:38 PM
> > Your C version only means something to C (and possibly a tightly bound
> > language), not outside of it.
> >    
> 
> "Pointer" is a meaningful concept in the C programming language and a 
> few others. As you well know there are many mainstream languages that 
> have dispensed with "pointers".

Objects referencing other objects is a form of "pointers". 

> 
> Kurt

-T-
0
topmind (2124)
5/24/2004 4:30:10 AM

Topmind wrote:
>>>Your C version only means something to C (and possibly a tightly bound
>>>language), not outside of it.
>>>   
>>
>>"Pointer" is a meaningful concept in the C programming language and a 
>>few others. As you well know there are many mainstream languages that 
>>have dispensed with "pointers".
> 
> 
> Objects referencing other objects is a form of "pointers". 


You mean like foreign keys in RDBMSs?



Kurt

0
5/24/2004 7:07:24 PM
> >>>Your C version only means something to C (and possibly a tightly bound
> >>>language), not outside of it.
> >>>   
> >>
> >>"Pointer" is a meaningful concept in the C programming language and a 
> >>few others. As you well know there are many mainstream languages that 
> >>have dispensed with "pointers".
> > 
> > 
> > Objects referencing other objects is a form of "pointers". 
> 
> 
> You mean like foreign keys in RDBMSs?
> 

To some extent, but note how multiple languages
can make use of that "pointer". This is something that
OO has a hard time with.

> 
> 
> Kurt

-T-
0
topmind (2124)
5/28/2004 7:23:24 PM

Topmind wrote:
>>>>>Your C version only means something to C (and possibly a tightly bound
>>>>>language), not outside of it.
>>>>>  
>>>>
>>>>"Pointer" is a meaningful concept in the C programming language and a 
>>>>few others. As you well know there are many mainstream languages that 
>>>>have dispensed with "pointers".
>>>
>>>
>>>Objects referencing other objects is a form of "pointers". 
>>
>>
>>You mean like foreign keys in RDBMSs?
>>
> 
> 
> To some extent, but note how multiple languages
> can make use of that "pointer". This is something that
> OO has a hard time with.

What multiple languages? SQL and ... SQL?

Kurt

0
5/31/2004 1:37:06 PM
> >>>>>Your C version only means something to C (and possibly a tightly bound
> >>>>>language), not outside of it.
> >>>>>  
> >>>>
> >>>>"Pointer" is a meaningful concept in the C programming language and a 
> >>>>few others. As you well know there are many mainstream languages that 
> >>>>have dispensed with "pointers".
> >>>
> >>>
> >>>Objects referencing other objects is a form of "pointers". 
> >>
> >>
> >>You mean like foreign keys in RDBMSs?
> >>
> > 
> > 
> > To some extent, but note how multiple languages
> > can make use of that "pointer". This is something that
> > OO has a hard time with.
> 
> What multiple languages? SQL and ... SQL?

SQL is a protocol/interface. If 2+ OO languages talked to
each other, it would also be via a protocol, such 
as an API most likely. (SQL is not the ideal
relational language/protocol IMO.)

> 
> Kurt

-T-
0
topmind (2124)
6/2/2004 5:22:42 AM

Topmind wrote:
>>>>>>>Your C version only means something to C (and possibly a tightly bound
>>>>>>>language), not outside of it.
>>>>>>> 
>>>>>>
>>>>>>"Pointer" is a meaningful concept in the C programming language and a 
>>>>>>few others. As you well know there are many mainstream languages that 
>>>>>>have dispensed with "pointers".
>>>>>
>>>>>
>>>>>Objects referencing other objects is a form of "pointers". 
>>>>
>>>>
>>>>You mean like foreign keys in RDBMSs?
>>>>
>>>
>>>
>>>To some extent, but note how multiple languages
>>>can make use of that "pointer". This is something that
>>>OO has a hard time with.
>>
>>What multiple languages? SQL and ... SQL?
> 
> 
> SQL is a protocol/interface. If 2+ OO languages talked to
> each other, it would also be via a protocol, such 
> as an API most likely. (SQL is not the ideal
> relational language/protocol IMO.)
> 

Sorry Topmind, but I think SQL stands for Structured Query Language.
At least that is what I've been told.


Kurt

0
6/2/2004 7:20:08 PM
> >>>To some extent, but note how multiple languages
> >>>can make use of that "pointer". This is something that
> >>>OO has a hard time with.
> >>
> >>What multiple languages? SQL and ... SQL?
> > 
> > 
> > SQL is a protocol/interface. If 2+ OO languages talked to
> > each other, it would also be via a protocol, such 
> > as an API most likely. (SQL is not the ideal
> > relational language/protocol IMO.)
> > 
> 
> Sorry Topmind, but I think SQL stands for Structured Query Language.
> At least that is what I've been told.

Is there a "hard" difference between a language and a protocol?
If so, spill it!

> 
> 
> Kurt

-T-
0
topmind (2124)
6/3/2004 8:41:58 PM

Topmind wrote:
>>>>>To some extent, but note how multiple languages
>>>>>can make use of that "pointer". This is something that
>>>>>OO has a hard time with.
>>>>
>>>>What multiple languages? SQL and ... SQL?
>>>
>>>
>>>SQL is a protocol/interface. If 2+ OO languages talked to
>>>each other, it would also be via a protocol, such 
>>>as an API most likely. (SQL is not the ideal
>>>relational language/protocol IMO.)
>>>
>>
>>Sorry Topmind, but I think SQL stands for Structured Query Language.
>>At least that is what I've been told.
> 
> 
> Is there a "hard" difference between a language and a protocol?
> If so, spill it!
> 

From popular dictionaries:

Language -
(1) a formal system of signs and symbols (as FORTRAN or a calculus in 
logic) including rules for the formation and transformation of 
admissible expressions
(2) the set of symbolic instruction codes usually in binary form that is 
used to represent operations and data in a machine (as a computer)

Protocol -
In computer science, a set of rules or procedures for transmitting data 
between electronic devices, such as computers


Kurt

0
6/4/2004 6:45:56 PM
> >>>>>To some extent, but note how multiple languages
> >>>>>can make use of that "pointer". This is something that
> >>>>>OO has a hard time with.
> >>>>
> >>>>What multiple languages? SQL and ... SQL?
> >>>
> >>>
> >>>SQL is a protocol/interface. If 2+ OO languages talked to
> >>>each other, it would also be via a protocol, such 
> >>>as an API most likely. (SQL is not the ideal
> >>>relational language/protocol IMO.)
> >>>
> >>
> >>Sorry Topmind, but I think SQL stands for Structured Query Language.
> >>At least that is what I've been told.
> > 
> > 
> > Is there a "hard" difference between a language and a protocol?
> > If so, spill it!
> > 
> 
>  From popular dictionaries:
> 
> Language -
> (1) a formal system of signs and symbols (as FORTRAN or a calculus in 
> logic) including rules for the formation and transformation of 
> admissible expressions
> (2) the set of symbolic instruction codes usually in binary form that is 
> used to represent operations and data in a machine (as a computer)
> 
> Protocol -
> In computer science, a set of rules or procedures for transmitting data 
> between electronic devices, such as computers
> 

I suspect your "altnernative" to SQL will fall under "language" also,
using those definitions.

> 
> Kurt

-T-
0
topmind (2124)
6/5/2004 2:23:58 AM

Topmind wrote:
>>>>>>>To some extent, but note how multiple languages
>>>>>>>can make use of that "pointer". This is something that
>>>>>>>OO has a hard time with.
>>>>>>
>>>>>>What multiple languages? SQL and ... SQL?
>>>>>
>>>>>
>>>>>SQL is a protocol/interface. If 2+ OO languages talked to
>>>>>each other, it would also be via a protocol, such 
>>>>>as an API most likely. (SQL is not the ideal
>>>>>relational language/protocol IMO.)
>>>>>
>>>>
>>>>Sorry Topmind, but I think SQL stands for Structured Query Language.
>>>>At least that is what I've been told.
>>>
>>>
>>>Is there a "hard" difference between a language and a protocol?
>>>If so, spill it!
>>>
>>
>> From popular dictionaries:
>>
>>Language -
>>(1) a formal system of signs and symbols (as FORTRAN or a calculus in 
>>logic) including rules for the formation and transformation of 
>>admissible expressions
>>(2) the set of symbolic instruction codes usually in binary form that is 
>>used to represent operations and data in a machine (as a computer)
>>
>>Protocol -
>>In computer science, a set of rules or procedures for transmitting data 
>>between electronic devices, such as computers
>>
> 
> 
> I suspect your "alternative" to SQL will fall under "language" also,
> using those definitions.

Would you be kind enough to tell me what you are talking about?

Kurt

0
6/5/2004 10:57:32 PM
> >>>>>>>To some extent, but note how multiple languages
> >>>>>>>can make use of that "pointer". This is something that
> >>>>>>>OO has a hard time with.
> >>>>>>
> >>>>>>What multiple languages? SQL and ... SQL?
> >>>>>
> >>>>>
> >>>>>SQL is a protocol/interface. If 2+ OO languages talked to
> >>>>>each other, it would also be via a protocol, such 
> >>>>>as an API most likely. (SQL is not the ideal
> >>>>>relational language/protocol IMO.)
> >>>>>
> >>>>
> >>>>Sorry Topmind, but I think SQL stands for Structured Query Language.
> >>>>At least that is what I've been told.
> >>>
> >>>
> >>>Is there a "hard" difference between a language and a protocol?
> >>>If so, spill it!
> >>>
> >>
> >> From popular dictionaries:
> >>
> >>Language -
> >>(1) a formal system of signs and symbols (as FORTRAN or a calculus in 
> >>logic) including rules for the formation and transformation of 
> >>admissible expressions
> >>(2) the set of symbolic instruction codes usually in binary form that is 
> >>used to represent operations and data in a machine (as a computer)
> >>
> >>Protocol -
> >>In computer science, a set of rules or procedures for transmitting data 
> >>between electronic devices, such as computers
> >>
> > 
> > 
> > I suspect your "alternative" to SQL will fall under "language" also,
> > using those definitions.
> 
> Would you be kind enough to tell me what you are talking about?
> 
> Kurt

Assuming we are not using raw RAM addressing, *something* has
to link 2 different systems/languages. I propose a relational
language of some sort for most cases, and you propose ____????

-T-
0
topmind (2124)
6/15/2004 7:05:24 AM

Topmind wrote:
>>>>>>>>>To some extent, but note how multiple languages
>>>>>>>>>can make use of that "pointer". This is something that
>>>>>>>>>OO has a hard time with.
>>>>>>>>
>>>>>>>>What multiple languages? SQL and ... SQL?
>>>>>>>
>>>>>>>
>>>>>>>SQL is a protocol/interface. If 2+ OO languages talked to
>>>>>>>each other, it would also be via a protocol, such 
>>>>>>>as an API most likely. (SQL is not the ideal
>>>>>>>relational language/protocol IMO.)
>>>>>>>
>>>>>>
>>>>>>Sorry Topmind, but I think SQL stands for Structured Query Language.
>>>>>>At least that is what I've been told.
>>>>>
>>>>>
>>>>>Is there a "hard" difference between a language and a protocol?
>>>>>If so, spill it!
>>>>>
>>>>
>>>>From popular dictionaries:
>>>>
>>>>Language -
>>>>(1) a formal system of signs and symbols (as FORTRAN or a calculus in 
>>>>logic) including rules for the formation and transformation of 
>>>>admissible expressions
>>>>(2) the set of symbolic instruction codes usually in binary form that is 
>>>>used to represent operations and data in a machine (as a computer)
>>>>
>>>>Protocol -
>>>>In computer science, a set of rules or procedures for transmitting data 
>>>>between electronic devices, such as computers
>>>>
>>>
>>>
>>>I suspect your "alternative" to SQL will fall under "language" also,
>>>using those definitions.
>>
>>Would you be kind enough to tell me what you are talking about?
>>
>>Kurt
> 
> 
> Assuming we are not using raw RAM addressing, *something* has
> to link 2 different systems/languages. I propose a relational
> language of some sort for most cases, and you propose ____????

I don't propose anything at all. But SQL is still not a protocol,
only a query language.
SQL*Net is a protocol. SQL is not.
OpenClient is a protocol. Transact-SQL is not.

Kurt

0
6/15/2004 8:46:00 PM
Reply:

Similar Artilces:

documentation of [oo::object new] and [oo::object create] methods...
Hi Donal, The documentation of [oo::object] page does not seem to provide the syntax = and details of these two methods. It simply says "see oo::class for more de= tails". And it also asks to refer [oo::objdefine] - which is fine for furth= er configuration of the created object. However, won't [oo::object new] and [oo::object create] methods need to be = documented separately in [oo::object] page? Best Regards, nagu. On 27/10/2013 20:11, nagu wrote: > The documentation of [oo::object] page does not seem to provide the > syntax and details of these two methods. It simply says "see > oo::class for more details". And it also asks to refer > [oo::objdefine] - which is fine for further configuration of the > created object. > > However, won't [oo::object new] and [oo::object create] methods need > to be documented separately in [oo::object] page? Those are formally methods defined on the [oo::class] class, which [oo::object] is an instance of. The operations on a class object are defined in that class. Yes, [oo::class] is a subclass of [oo::object]; this is definitely a little brain-boggling if you think about it too much (the two classes are brought into being simultaneously during the initialization of TclOO). To show you exactly what I mean, check out what introspection says: % info class methods oo::class create new % info class methods oo::object destroy See? Donal ...

For each object in object
I'm trying to do this in VO: Have tried ForEachElement( oEvalObj ) CLASS AbstractCollection:VOCOM32.AbstractCollection:ForEachElement without any luck. My code vo is working but goBeregning:BeregnRefusjon(oRegning) are generating some new objects. I can't get these objects in vo. Any suggestions? "VB code For Each objKode In objRegning.Koder If Not objKode.Gyldig Then strStatus = "Ugyldig!" ElseIf objKode.Stjernekode Then strStatus = "Stjernekode" ElseIf objKode.Aggregert Then If...

Re: Why is OO popular?
Perhaps we'll have to have a less definitional system and a more dynamic one, where we simply create a 'blank' object and in our using of it the functionality and states are grown on, composed, added. This certainly wouldn't prevent inheritance; not by any means, but it would shift the focus more to composition. I think the more people reuse classes which have been thoroughly tested then apps. are more likely to be correct. I suspect most people are much more comfortable and capable of USING objects/classes than in creating their own. This means most people are capable within the procedural framework, but with some added capabilities which would naturally come from having not just int, double, char, but also a Smalltalk class system. On Mon, 10 May 2004 14:06:48 -0400, "Mark S. Hathaway" <hathawa2@marshall.edu> wrote: >Perhaps we'll have to have a less definitional system >and a more dynamic one, where we simply create a 'blank' >object and in our using of it the functionality and >states are grown on, composed, added. Yep, I've wanted something like that from day one. But it's tricky to come up with any kind of rigorous basis for it. J. Responding to Hathaway... > Perhaps we'll have to have a less definitional system > and a more dynamic one, where we simply create a 'blank' > object and in our using of it the functionality and > states are grown on, composed, added. In a sense...

an (object, object) -> object mapping, please?
I wonder, what are the common ways of implementing an (object, object) -> object mapping in Scheme (of various flavours)? Say, I have a set of `shop' objects, a set of `item' ones, and then a mapping, like: (shop, item) -> stock. With an option to get an (item -> stock) mapping for any given `shop'. One of the solutions (in standard Scheme) would be to embed a mapping into either each `shop' or `item' (or both), and then, e. g.: (define (get-stock shop item) (assq item (shop-items shop))) or, conversely: (define (get-stock shop item) (assq shop (...

Oo, Oo, Oo! Buy a Surface NOW
Microsoft cuts Surface Pro tablet prices by $100 <http://news.cnet.com/8301-10805_3-57596910-75/microsoft-cuts-surface-pro -tablet-prices-by-$100/> Discounts come just days after the tech titan reveals that marketing the tablets has cost more than the revenue they have brought in. ** Apple's going bust any day now!!! :-D In article <fmoore-51DDB4.11293405082013@mx05.eternal-september.org>, Fred Moore <fmoore@gcfn.org> wrote: > Microsoft cuts Surface Pro tablet prices by $100 > <http://news.cnet.com/8301-10805_3-57596910-75/microsoft-cuts-sur...

Definition of Business Object/Domain Object/Transaction Object
Could anyone please give me a definition? -- Steven Woody steven@lczmsoft.com ...

Object.object.method()
Hello, obviously it's a time for a noobie question. What the following statement means? Object.object.method() Sample code: window = Gtk::Window.new(Gtk::Window::TOPLEVEL) area = Gtk::DrawingArea.new() area.window.set_cursor(Gdk::Cursor.new(Gdk::Cursor::PENCIL)) I'm confused. Can anyone enlight me? Thanks. -- Posted via http://www.ruby-forum.com/. On Sun, Oct 3, 2010 at 11:29 AM, Johan Soderholm <teisto@surfy.net> wrote: > Hello, obviously it's a time for a noobie question. What the following > statement means? > > Object.object.method...

Oo, Oo, Oo, I Can't Wait To Do THIS!
How to add emoji icons to file names in OS X <http://reviews.cnet.com/8301-13727_7-57546042-263/how-to-add-emoji-icons -to-file-names-in-os-x/> The closing paragraph is a masterpiece of understatement: 'Unfortunately since the emoticon fonts are Unicode-based, they will not work in some services that do not support Unicode. If you frequently use the OS X Terminal, then you will find that adding symbols to file names will have them appear as question marks in the Terminal, making them difficult to identify and manage.' I just LOVE the triumph of form over substance! After all, what could go wrong, go wrong, go wrong...? -- Nothing is foolproof to a sufficiently talented fool In article <fmoore-267DC3.13374507112012@news.eternal-september.org>, Fred Moore <fmoore@gcfn.org> wrote: > The closing paragraph is a masterpiece of understatement: > > 'Unfortunately since the emoticon fonts are Unicode-based, they will not > work in some services that do not support Unicode. If you frequently use > the OS X Terminal, then you will find that adding symbols to file names > will have them appear as question marks in the Terminal, making them > difficult to identify and manage.' They don't have transparent backgrounds, but they work in iTerm2 (in Lion). -- .... do not cover a warm kettle or your stock may sour. -- Julia Child On 11-07-2012 13:37, Fred Moore wrote: > How to ...

Re: Why is OO popular? #4
"Universe" <universe@tAkEcovadOut.net> wrote in message news:... > "JXStern" <JXSternChangeX2R@gte.net> wrote in message > > <h.lahman@verizon.net> wrote: > > >It's a paradigm issue. In OO we abstract /existing/ problem space > > >entities with /existing/ properties rather than constructing /new/ > > >entities by composition just because it is convenient to do so. > > Doesn't address the issues at all. > > > > Say you're doing a farm system. You create a class "cow", and you > > use it for all the four-legged things standing out in the field. "You", huh? x.) How? y.) Analysis done? z.) An overall architecture done based upon "y)"? > > You confirm the design and solution, thing works in beta, you roll it > > out, it works fine. 1) "You" and 2) for god knows what and whose requirements. > > Turns out, you not being a farm boy, that many of > > those four-legged thingies are, after all, horses. What did the abstraction accomplish in the program? What did the program accomplish? How well in tech respect? Was any client happy, sad, so-so, to this point? > > It doesn't even > > matter for six months, but then the customer finds a need to track > > some properties only horses have. > > > > So, did you do a good job back then, or not? Again: a) What did...

objects as indexes of objects
I have this: <div id="a"> a </div> <div id="b"> b </div> <script type="text/javascript"> var X = new Object(); var a = document.getElementById('a'); var b = document.getElementById('b'); X[a] = 'A'; if(X[a]){alert('object a')} // I get the expected alert if(X[b]){alert('object b')} // There is no X[b] defined but I still get an alert. Why? </script> I'm initializing an instance for various objects on a page and I want to keep track of them. How do I do that? Jeff On Wednesday, March 9, 2011 3:56:59 PM UTC+1, Jeff Thies wrote: .... > <script type="text/javascript"> > var X = new Object(); > > var a = document.getElementById('a'); > var b = document.getElementById('b'); > > X[a] = 'A'; > > if(X[a]){alert('object a')} // I get the expected alert > > if(X[b]){alert('object b')} // There is no X[b] defined but I still get > an alert. Why? > > > </script> > > I'm initializing an instance for various objects on a page and I want to > keep track of them. How do I do that? Per the spec PropertyName can either be a number, or a string - since you are using an object, this will be coerced to a string by calling `toString`, which most likely returns something like '[object Object]'. Since both `a` and `b` resolve into this ...

Are Delphi's objects always pointers to objects? (comparing Delphi objects and C++ objects)
I'm trying to convert some C++ code to Delphi but I'm a bit confused on how objects (and object pointers) are different between the two languages. In Delphi it seems like objects are all pointers to objects. The reason I'm asking is because of the C++ "->" operator (or whatever you would classify that symbol as). I'm confused on how this translates to Delphi. For example, consider the following code: var MyObj1, MyObj2: TMyObject; begin MyObj1 := TMyObject.Create; // instantiate object (allocate memory) and have MyObj1 point to the memory ...

How do I tell an object to free up an owned object from thta object itself?
I have built a server application that uses a server object (MyServer) that creates a TServerSocket to manage connections. Every time a connection is made a ClientHandler object is created in the OnClientConnect event and it has a Socket property that is assigned the socket the TServerSocket supplies in the event. Now it seems like my ClientHandler object is all on its own with respect to the communication. When the client disconnects it fires the OnSocketEvent event on the socket with SocketEvent set to seDisconnect. I am decoding this in the ClientHandler so I can handle some cleanu...

OO matlab
Do we have to do explicit memory cleanup in matlab eg. %%%%%%%%%%%%%%%% classdef ContainerModule < Module methods function obj = ContainerModule(Fs) obj.bpfhandle = OSCModule(); end function delete(obj) % here delete(obj.bpfhandle); end %%%%%%%%%%%%%%%% when the object is destroyed isn't the member object delete() method automatically called? thanks <krindikzulmann@gmail.com> wrote in message news:0beda853-2849-4199-9986-2543f502f482@d7g2000prl.googlegroups.com... > Do we have to do explicit memory cleanup in matlab > No. See below http://www.mathworks.co...

Re: Why is OO popular? #3
"Universe" <universe@tAkEcovadOut.net> wrote in message news:... > "JXStern" <JXSternChangeX2R@gte.net> wrote in message > > <h.lahman@verizon.net> wrote: > > >It's a paradigm issue. In OO we abstract /existing/ problem space > > >entities with /existing/ properties rather than constructing /new/ > > >entities by composition just because it is convenient to do so. > > Doesn't address the issues at all. > > > > Say you're doing a farm system. You create a class "cow", and you > > use it for all the four-legged things standing out in the field. "You", huh? x.) How? y.) Analysis done? z.) An overall architecture done based upon "y)"? > > You confirm the design and solution, thing works in beta, you roll it > > out, it works fine. 1) "You" and 2) for god knows what and whose requirements. > > Turns out, you not being a farm boy, that many of > > those four-legged thingies are, after all, horses. What did the abstraction accomplish in the program? What did the program accomplish? How well in tech respect? Was any client happy, sad, so-so, to this point? > > It doesn't even > > matter for six months, but then the customer finds a need to track > > some properties only horses have. > > > > So, did you do a good job back then, or not? Again: a) What did...

Does Object extend Object?
The javadoc on the class Object is as follows: Class Object is the root of the class hierarchy. Every class has Object as a superclass. All objects, including arrays, implement the methods of this class. Does this mean that Object implicitly extends Object? It passes all the inheritance tests such as implementing the methods of the parent class, the "is a" relationship etc. So does it in fact extend itself or should it say "Every OTHER class has Object as a superclass? Thanks for your thoughts! <william.lichtenberger@gmail.com> wrote: > The javadoc on the...

Question about objects in objects.
I have been writing a game and some of my objects have objecs as members of that I have a graphic with a color or unit with a coord structure. If I have a: class unit{ color cl; int attackFactor; ..... ..... When I create the object, I create an empty color but I have a color constructor: color::color(int r, int g, int b); I want to create the unit unit::unit(int af, int r, int g, int b); attackFactor =af; color(r,g,b); My unit objects will have quite a bit of data so I want to load all the information with istream from a text file. How can I use constructor for the m...

Objects containing objects
I have a class that takes in an object in its constructor, and stores it as a member variable as follows: # INSIDE CONSTRUCTOR .... $self->{MY_OBJECT} = shift; .... I can make calls to this member object's methods inside the constructor, and they work fine: .... print $self->{MY_OBJECT}->myMethod(); .... I have a getter method in this top-level object that returns the member object: sub getMyObject { my $self = shift; return $self->{MY_OBJECT}; } However, when other code retrieves this object using the getter method, it is unable to make method calls against ...

cast object to object
So I have an object of class (user defined) Dave() and Dave2() This may seem totally assuming, but I have a string that I get. It will either be "Dave" or "Dave2". Is there a way to if string=="Dave" (Dave)string.theMethod() else (Dave2)string.theMethod() end I know, again, I am taking a lot for granted, but in Python I was able to do this sort of thing... And of course in Java... -- Posted via http://www.ruby-forum.com/. On 12/3/2010 9:27 AM, David E. wrote: > So I have an object of class (user defined) Dave() and Dave2() >...

INVALID objects
Hello, Can ANY object in Oracle have the status "invalid"? Or is this only applicable to source code objects (Stored Procedures, Packages, Types etc...) -- With regards, Martijn Tonies Database Workbench - developer tool for InterBase, Firebird, MySQL & MS SQL Server. Upscene Productions http://www.upscene.com can but shouldn't they should usually be recompiled by running appropriate scripts check metalink for any particulars, and if they have bugs. - On Mon, 25 Oct 2004 14:41:05 +0200, "Martijn Tonies" <m.tonies@upscene_nospam_.com> wrote: >Hel...

object and object types
"Ulrich Eckhardt" <doomster@knuut.de> wrote in message news:8omqe9FiotU1@mid.uni-berlin.de... > Paul wrote: >> A class is an object type as the C++ standards clearly state: > > Yes, a class is a type, just like an enumeration. A type is not an object. > An object type is also not an object, but a type. Instantiating it will > yield an object. > > Simple thing, at least for those willing to understand. > Consider the following Cat frisky; frisky is an object yet frisky is also an object type. frsiky is region of storage yet frisky is also an identifier. frisky is also a Cat type. An object is an object type, that is why it's called an object. An integer is an integer type , this is why we call it an integer etc etc. There is no instantiation here, yet there is an object. The object declared here is defined by the definition of its type, in the class Cat. If Cat has an Eat() member function then frisky has an Eat() member function. If Cat has a Meow() member function then so does frisky. And so on. The term object is not simply a block of memory. Whether the inconsistent standards state this or not, which they don't. Paul wrote: > Consider the following > > Cat frisky; > > frisky is an object yet frisky is also an object type. Wrong. > frsiky is region of storage yet frisky is also an identifier. Wrong due to oversimplification. > ...

objects build up from other objects
I've stood up to the challenge to show how ciforth's mini-OO (with a current object) can be used to build larger objects. REGRESS is a new invention; it is like the { <- } of the Hopkins test suite. In my system it aborts if the test fails, but you can see it as a stack diagram by example. Once you have a class Point (a defining word), you also get ^Point (contains the pointer to the current Point) and BUILD-Point (lays down a data structure in memory and leaves its address). There is no current object per se, only a current Point, current Rectangle, etc. There are no fields. Offsets ...
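The mechanism above is ciforth-specific; as a rough, language-neutral analogue in C++ (Point and Rectangle are named in the post, everything else is hypothetical), building larger objects out of smaller ones is plain composition:

struct Point {
    int x, y;
};

// A Rectangle is built up from two Points.
struct Rectangle {
    Point corner;     // one sub-object...
    Point opposite;   // ...and another
};

int main() {
    Rectangle r{{0, 0}, {4, 3}};
    return r.opposite.x - r.corner.x;   // width: 4
}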

OO Problem Base Object
Hi, I am new to object oriented programming and am having a problem with a child class. My class hierarchy is of Assets; I have one type of Asset: Stocks. When trying to instantiate the Stock child class I get the following error: >> s=stock('XYZ',100,25); 23 s=class(s,'stock', a); ??? Error using ==> class Base Object for class stock constructor is not valid. Error in ==> stock.stock at 23 s=class(s,'stock', a); This error occurs in the child class constructor. I have attached a copy of this at the end of this message. What does this error ...
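The error itself is MATLAB-specific — class(s,'stock',a) requires a to already be a valid parent asset object — but the underlying rule is general: a child constructor must first construct a valid parent. A hedged C++ analogue (class names from the question; the members and value formula are assumptions):

#include <string>

class asset {
public:
    asset(const std::string& descriptor, double value)
        : descriptor_(descriptor), value_(value) {}
private:
    std::string descriptor_;
    double value_;
};

class stock : public asset {
public:
    // The base subobject is constructed first, and must be constructed
    // validly, before the derived part is built.
    stock(const std::string& ticker, double shares, double price)
        : asset(ticker, shares * price), shares_(shares), price_(price) {}
private:
    double shares_, price_;
};

int main() {
    stock s("XYZ", 100, 25);
    (void)s;
    return 0;
}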

Re: Why is OO popular? #2
I've read several times that OO resembles, or is most like, the way people think. Is this really important?

Which kind of transportation is most like the way we walk? A car has four wheels (generally) and no legs. A bicycle has two supports, but they're wheels, not feet. A unicycle has only one support and isn't very fast. An airplane has no legs and yet transports many people faster than any one of them could run.

What kind of chair is most like the way we sit? A stool with four long legs is fairly stable, but usually has an unpadded flat seat. A La-Z-Boy is very soft and often reclines, lending itself to snoozes.

What kind of food most resembles the way we eat? Is bread the perfect food because it fills our mouth and is chewy? Is ice cream better because it's soft and will form-fit our mouths? Is a bagel inappropriate because it has a hole in the middle?

Back to programming languages... Isn't it more likely that a great programming language would be like a great tool: easily used by us to do the task(s) at hand? Using that idea, we can reverse-engineer the tool. What needs to be done? Is the process of designing and writing code of any importance? If so, then what is involved that goes beyond current OO languages?

Let's assume OOPLs do a lot of 'what needs to be done'. What in the ease of designing and writing code is missing? What really irks us about OO languages?

On Tue, 11 May 2004 12:32:12 -0400, "Mark S. Hathawa...
