OOP/OOD Philosophy

Hi,


I understand the mechanics of objects; I use objects, polymorphism,
even patterns.  I understand some of OOD, but I'm trying to grasp the
bigger picture, how to "think in OOD".  I hope an example will help:

Let's say I want a program that given A produces B.  I have 2 ways of
analyzing this.  I can either formally specify how B relates to A, or I
can use the OOD approach in which I identify nouns, relationships,
arity, patterns, etc...

The first approach is much more direct and is very reliable, so why
would I use the OOD approach?

Or am I comparing apples to oranges?  Is OOD for designing
applications, or is it for designing an application pattern (framework?
architecture?) that is then customized to provide the desired
application (the justification being that such a design is much more
extensible)? Do I need to stop thinking "program" and instead think
"architecture" as the primary purpose and then customize it to provide
the "program" as the secondary purpose?

This is why I need to understand the philosophy.  I want to understand
how to "think OOD".  I don't care about specific design techniques
unless they help illustrate this shift in thinking.  What can you tell
me about this?  What references (online and printed are fine) can you
point me to?  I'd love something that contrasts the two methodologies
and provides examples to drive it home.  Something that explains and
justifies OOD from a more philosophical perspective.

Thanks in advance.

xmp333 (26)
6/29/2005 3:24:50 PM

xmp333@yahoo.com wrote:
> Hi,
> 
Hi, I am interested in this topic too and, although I think I am not the 
best person to answer you, here is my modest opinion.

> 
> I understand the mechanics of objects; I use objects, polymorphism,
> even patterns.  I understand some of OOD, but I'm trying to grasp the
> bigger picture, how to "think in OOD".  I hope an example will help:
> 
> Let's say I want a program that given A produces B.  I have 2 ways of
> analyzing this.  I can either formally specify how B relates to A, or I
> can use the OOD approach in which I identify nouns, relationships,
> arity, patterns, etc...
> 
Here you are saying you know how to program in an OO language, but 
you are not sure you understand the main purpose of OO design.

> The first approach is much more direct and is very reliable, so why
> would I use the OOD approach?
> 
OOD helps you to organize and represent information. Everyone 
makes a mental design to solve a problem and then programs the 
solution. In OO terms, one first designs an OO solution and then programs 
it in an OO language (or not :) ).

> Or am I comparing apples to oranges?  Is OOD for designing
> applications, or is it for designing an application pattern (framework?
> architecture?) 

A very rough description could be:
The architecture is the big picture of the problem: how you organize the 
entire problem, application, or whatever.
Then you need to design how each part of that architecture will work. Here the 
patterns (as Wikipedia says) are "standard solutions to common 
problems in software design".

> that is then customized to provide the desired
> application (the justification being that such a design is much more
> extensible)? Do I need to stop thinking "program" and instead think
> "architecture" as the primary purpose and then customize it to provide
> the "program" as the secondary purpose?
> 
Design is more abstract than programming; you can design things that are 
impossible to translate directly into code (then you need to 
"normalize" them before programming).

> This is why I need to understand the philosophy.  I want to understand
> how to "think OOD".  I don't care about specific design techniques
> unless they help illustrate this shift in thinking.  What can you tell
> me about this?  What references (online and printed are fine) can you
> point me to?  I'd love something that contrasts the two methodologies
> and provides examples to drive it home.  Something that explains and
> justifies OOD from a more philosophical perspective.
> 
I don't know if this is the best example but:

At work I am developing an application that follows a "pipe and filter" 
architecture. I didn't know it followed that architecture until some 
kind person on this newsgroup told me. I was reinventing the wheel :(

Also, I was designing the application (more concretely, the data model) 
in an OO way. When I had a reasonably good design 
(using some design patterns) I went on to normalize my model and adapt it 
to the concrete OO programming language. After a couple of changes to my 
design (and new normalizations) it was a great success.

I think it would have been impossible to do if I hadn't stopped to think about OOD first.

> Thanks in advance.
> 

I hope it will be useful for you.

-- 
-----------------------------------------------------
Antonio Santiago Pérez
( email: santiago<<at>>grahi.upc.edu       )
(   www: http://www.grahi.upc.edu/santiago )
(   www: http://asantiago.blogsite.org     )
-----------------------------------------------------
GRAHI - Grup de Recerca Aplicada en Hidrometeorologia
Universitat Politècnica de Catalunya
-----------------------------------------------------
santiago5456 (105)
6/29/2005 4:30:25 PM
xmp333@yahoo.com wrote:
> Hi,
> 
> 
> I understand the mechanics of objects; I use objects, polymorphism,
> even patterns.  I understand some of OOD, but I'm trying to grasp the
> bigger picture, how to "think in OOD".  I hope an example will help:
> 
> Let's say I want a program that given A produces B.  I have 2 ways of
> analyzing this.  I can either formally specify how B relates to A, or I
> can use the OOD approach in which I identify nouns, relationships,
> arity, patterns, etc...
> 
> The first approach is much more direct and is very reliable, so why
> would I use the OOD approach?
> 
> Or am I comparing apples to oranges?  Is OOD for designing
> applications, or is it for designing an application pattern (framework?
> architecture?) that is then customized to provide the desired
> application (the justification being that such a design is much more
> extensible)? Do I need to stop thinking "program" and instead think
> "architecture" as the primary purpose and then customize it to provide
> the "program" as the secondary purpose?
> 
> This is why I need to understand the philosophy.  I want to understand
> how to "think OOD".  I don't care about specific design techniques
> unless they help illustrate this shift in thinking.  What can you tell
> me about this?  What references (online and printed are fine) can you
> point me to?  I'd love something that contrasts the two methodologies
> and provides examples to drive it home.  Something that explains and
> justifies OOD from a more philosophical perspective.
> 
> Thanks in advance.
> 

This is a large area of discussion; it might be best to take a look at a few 
books or web resources and then ask specific questions.

The book 'Object Thinking' by David West, although lengthy, is very good 
at showing the differences in 'thinking' about problems from a 
procedural and an OO point of view, and then goes on to cover the 
history, philosophy and politics of OO.

http://www.microsoft.com/MSPress/books/6820.asp

and googling 'thinking in objects' brings up lots of refs/blogs/etc.

http://www.google.co.uk/search?biw=1280&hl=en&q=thinking+in+objects&btnG=Google+Search&meta=

HTH

Andrew
news248 (706)
6/29/2005 5:46:53 PM
Antonio Santiago wrote:
> Hi, I am interested on this topic too and, although I think I am not the
> best person to answer you, here is modest opinion.

That's fine with me :).  Thanks for responding.


> Here you are saying you know how to programming in an OO language, but
> you are not sure to understand the main purpose of OO design.

Kind of.  I'm trying to get an idea of what OOD is supposed to give me
as opposed to non-OOD.  But confusing this is the possibility that I
may be looking at OOD to design the wrong thing.


> OOD helps you to organize and represents the information. All people
> makes a mental design to resolve a problem and then programs the
> solution. In OO terms one first designs an OO solution and then programs
> it in an OO language (or not :) ).

But non-OOD is also design.  Non-OOD focuses more on deriving the
algorithms, while OOD focuses more on the data, although the two
intersect.  One can derive data from the algorithms/relations and vice
versa.  So, given two different ways of doing something, where the former
is clearer and more direct, why use the latter?  If it's an
issue of flexibility/re-usability, what's the thinking behind that?


> A very, very bad description could be:
> The architecture is the big picture of the problem. How organize the
> entery problem, application, anything.
> Then you need to design how every architecture will be. Here the
> patterns (like wikipedia says) are "standard solutions to common
> problems in software design".

Here's an example.  Let's say we have a cash register program with the
options to add, subtract, print a receipt or clear the total.

Non-OO analysis:
=========
Op = current operation invoked
Price = Price Entered
X' = value change for X for next state
Total = running total
==========
Then, the state of the system (assume it waits for input) is:
==========
(Op = Start ^ Op' = Clear) v
(Op = Add ^ Total' = Total + Price) v
(Op = Sub ^ Total' = Total - Price) v
(Op = Clear ^ Total' = 0) v
(Op = Print ^ Printed(Total))
==========

This is an informal predicate calculus model that shows the state of
the system at any time, given various inputs.  The ' represents
mutability in a more mathematical way.  This analysis gets to the point
and is easy to code and automatically test.  In addition, a procedural
or "OOP" program could be written from this.  For instance, the state
can be an object that will invoke the proper functionality from a
uniform interface, or a series of procedure calls.
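To make that concrete, here's a rough Java sketch of the direct
translation (purely illustrative -- the names are mine, and the switch
just mirrors the disjuncts above):

// Rough, illustrative translation of the state table above (names are mine).
public class CashRegister {
    public enum Op { START, ADD, SUB, CLEAR, PRINT }

    private double total = 0;

    public void apply(Op op, double price) {
        switch (op) {
            case START:                                    // Op = Start => behave like Clear
            case CLEAR: total = 0; break;                  // Op = Clear => Total' = 0
            case ADD:   total += price; break;             // Total' = Total + Price
            case SUB:   total -= price; break;             // Total' = Total - Price
            case PRINT: System.out.println(total); break;  // Printed(Total)
        }
    }

    public double getTotal() { return total; }
}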

Given I'm not knowledgeable about OOD, please forgive a possible
butchering of OOD, but here's how I can see an OOD approach to the
problem.  First, I identify the nouns in the system:

User
CashRegister
Total
Operation
Price

Now the relationships.  The User interacts with the CashRegister; in
response, the CashRegister creates the appropriate Operation and allows
access to the Price.  Price and Total are just numbers.  Furthermore,
since there are several types of Operations, Operation is an abstract
base class with Add, Sub, Print, and Clear as children sharing a
consistent interface to allow for more streamlined code.  So, we get
the following (assuming a non-event model and garbage collection for
simplicity):

loop
   CashRegister.Interact
   op = CashRegister.Op
   op.DoOp(Total, CashRegister.Price())

However, this is a bit of a kludge.  Clear and Print need only one
parameter, but take 2 in order to conform with the interface.  In
addition, Print gets mutable access to the Total even though it doesn't
need it, which is unsafe.  Furthermore, the analysis up to this point
was not as clear (or IMO as verifiable) as the previous one.
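Spelled out in rough Java (names invented, just to make the problem
visible), the kludge looks something like this:

// Sketch of the uniform-interface version.  Clear and Print are forced
// to take both parameters just to conform to the interface, and Print
// receives the mutable Total even though it only needs to read it.
class Total { double value; }

interface Operation {
    void doOp(Total total, double price);
}

class Add   implements Operation { public void doOp(Total t, double p) { t.value += p; } }
class Sub   implements Operation { public void doOp(Total t, double p) { t.value -= p; } }
class Clear implements Operation { public void doOp(Total t, double p) { t.value = 0; } }  // price unused
class Print implements Operation {
    public void doOp(Total t, double p) {       // price unused, t only read
        System.out.println(t.value);
    }
}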

On the other hand, maybe I was solving the wrong problem.  Maybe I need
to look at the pattern of this program and build the architecture then
customize it.  In this case, what I'm trying to build is a type of
machine that accepts operations, parameters, and can maintain a state
that is the result of the previous operations.  Analyzing it this way,
we get:

User
Machine
Params
OutputState
Operation

Params are what the Machine returns as data -- they are
instruction/data pairs (where data can be an additional collection).
OutputState can store any number of outputs and allows read/write
access.  Otherwise, the semantics are the same:

[Assume output is an instance of OutputState]
loop
   Machine.Interact
   plist = Machine.Params
   foreach i in plist.Size
      Operation op = Factory.CreateOp(plist[i].Op)
      op.DoOp(plist[i].Data, output)

This is the generalized pattern, and by deriving different Operations,
Factories, OutputStates, and even Machines we can simulate a wide
variety of machines -- perhaps even primitive operating systems.  We
can accomplish things like history lists, screen writes, etc, all with
the same basic framework, because we solved a general problem.  Now,
all future machine-like tasks will consist solely of deriving the
appropriate classes.

In fact, this whole thing could be made a method of a machine class:

Machine.Run(factory, output)

Then people simply derive from this class, over-ride Interact (and
anything else they want), and provide the necessary implementations.
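To make the generalization concrete, something like this is what I have
in mind, in rough Java (every name and signature here is invented for
the sake of the example):

import java.util.List;

// Invented names throughout -- just a sketch of the shape of the framework.
interface Operation { void doOp(Object data, OutputState output); }
interface OperationFactory { Operation createOp(String opName); }
interface OutputState { void put(String key, Object value); Object get(String key); }

// One instruction/data pair handed back by Interact.
class Param {
    final String op;
    final Object data;
    Param(String op, Object data) { this.op = op; this.data = data; }
}

abstract class Machine {
    // Template method: the loop is fixed, the variable parts come from subclasses.
    public void run(OperationFactory factory, OutputState output) {
        while (interact()) {
            for (Param p : params()) {
                Operation op = factory.createOp(p.op);
                op.doOp(p.data, output);
            }
        }
    }

    protected abstract boolean interact();    // gather the next batch of input
    protected abstract List<Param> params();  // the instruction/data pairs just gathered
}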

Comments?



> that is then customized to provide the desired
> > application (the justification being that such a design is much more
> > extensible)? Do I need to stop thinking "program" and instead think
> > "architecture" as the primary purpose and then customize it to provide
> > the "program" as the secondary purpose?
> >
> Design is more abstract than programming, you can design thins that are
> impossible to translate directly to the code (then you need to
> "normalize" things before programming it).

Right, which is what I'm seeing from both cases.  An analysis of the
problem simply states relations and some of these relational statements
can't even be checked by code (take relations involving quantifications
over infinite sets or involving convenience functions that don't exist
in the implementation language).  What differs here is what we are
solving.  Are we solving the problem at hand, or do we choose to solve
a generalization one of whose instances is the problem at hand,
ostensibly for more flexibility?



> > This is why I need to understand the philosophy.  I want to understand
> > how to "think OOD".  I don't care about specific design techniques
> > unless they help illustrate this shift in thinking.  What can you tell
> > me about this?  What references (online and printed are fine) can you
> > point me to?  I'd love something that contrasts the two methodologies
> > and provides examples to drive it home.  Something that explains and
> > justifies OOD from a more philosophical perspective.
> >
> I don't know if this is the best example but:
>
> At work I am developing an application that follows a "pipe and filter"
> architecture. I dont knew it follows it since some good person on this
> news channels says me. I was reinventing the wheel :(
>
> Also, I was designing the application (more concretaly the data model)
> in an OO way. When I had a possible good design
> (using some design patterns) I passed to normalize my model and adapt it
> to the concrete OO programming language. After a couple of changes on my
> desing (and new normalizations) it was a great hit.
>
> I think it would be impossible to do if I dont stop and think on OOD for it.

Could you provide some more details?  How did you analyze the problem
using OOD?  How would you have analyzed it using non-OOD?


>
> > Thanks in advance.
> >
> 
> I hope it will be useful for you.


It was, thank you very much.

plan9ch7 (1)
6/29/2005 6:24:13 PM
On 29 Jun 2005 08:24:50 -0700, xmp333@yahoo.com wrote:

>Hi,
>
>
>I understand the mechanics of objects; I use objects, polymorphism,
>even patterns.  I understand some of OOD, but I'm trying to grasp the
>bigger picture, how to "think in OOD".  I hope an example will help:

There has been an enormous amount of debate and discussion about this
topic.  Some folks believe that OO is an extremely high level
technique used to make models of the world.  Others believe that OO is
a mechanism for structuring source code.  I fall into the latter camp.

To me, the whole notion that OO is a philosophy is silly.  OO is a set
of tools that help programmers structure their code better.  In
particular, it helps them invert key dependencies.

In my view the OO-ness of a system can be identified by tracing the
dependencies between the modules.  If the high level policy modules
depend on (directly call) lower level modules, which then call even
lower level modules, then the program is procedural and not object
oriented.

However, if the high level policy modules are independent, or depend
solely on polymorphic interfaces that the lower level modules
implement, then the system is OO.  

In other words, it all has to do with the *direction* of the
dependencies.
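As a tiny illustration (the names are invented, not from any real
system) -- the high-level policy owns an interface and the low-level
detail implements it, so the source dependency points upward:

// Illustrative names only.
interface MessageSink { void send(String text); }   // owned by the policy

class Alarm {                                       // high-level policy
    private final MessageSink sink;
    Alarm(MessageSink sink) { this.sink = sink; }
    void trigger() { sink.send("alarm!"); }         // calls through the abstraction
}

class ConsoleSink implements MessageSink {          // low-level detail
    public void send(String text) { System.out.println(text); }
}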

To learn more about this see the following article:

http://www.objectmentor.com/resources/articles/Principles_and_Patterns.PDF


-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
unclebob2 (2724)
6/29/2005 9:51:29 PM
plan9ch7@gmail.com wrote:
> Antonio Santiago wrote:
> 
>>Hi, I am interested in this topic too and, although I think I am not the
>>best person to answer you, here is my modest opinion.
> 
> 
> That's fine with me :).  Thanks for responding.
> 
> 
> 
>>Here you are saying you know how to program in an OO language, but
>>you are not sure you understand the main purpose of OO design.
> 
> 
> Kind of.  I'm trying to get an idea of what OOD is supposed to give me
> as opposed to non-OOD.  But confusing this is the possibility that I
> may be looking at OOD to design the wrong thing.
> 
> 
OOD only gives you a new way to orient your ideas, just as there is a very 
different point of view between procedural and OO programming languages 
(supposing we don't use OO languages only in a procedural way). 
Which is best? It depends on your needs. (The best answer in the 
computer-science world is "it depends" :) )

> 
>>OOD helps you to organize and represent information. Everyone
>>makes a mental design to solve a problem and then programs the
>>solution. In OO terms, one first designs an OO solution and then programs
>>it in an OO language (or not :) ).
> 
> 
> But non-OOD is also design.  Non-OOD focuses more on deriving the
> algorithms, while OOD focuses more on the data, although the two
> intersect.  One can derive data from the algorithms/relations and vice
> versa.  So, given two different ways of doing something, where the former
> is clearer and more direct, why use the latter?  If it's an
> issue of flexibility/re-usability, what's the thinking behind that?
> 
Yes, you can design either way. See the differences between a DFS in 
"structured design" and a UML class diagram. Which is best? Again, 
it depends.


> 
>>A very rough description could be:
>>The architecture is the big picture of the problem: how you organize the
>>entire problem, application, or whatever.
>>Then you need to design how each part of that architecture will work. Here the
>>patterns (as Wikipedia says) are "standard solutions to common
>>problems in software design".
> 
> 
> Here's an example.  Let's say we have a cash register program with the
> options to add, subtract, print a receipt or clear the total.
> 
> Non-OO analysis:
> =========
> Op = current operation invoked
> Price = Price Entered
> X' = value change for X for next state
> Total = running total
> ==========
> Then, the state of the system (assume it waits for input) is:
> ==========
> (Op = Start ^ Op' = Clear) v
> (Op = Add ^ Total' = Total + Price) v
> (Op = Sub ^ Total' = Total - Price) v
> (Op = Clear ^ Total' = 0) v
> (Op = Print ^ Printed(Total))
> ==========
> 
> This is an informal predicate calculus model that shows the state of
> the system at any time, given various inputs.  The ' represents
> mutability in a more mathematical way.  This analysis gets to the point
> and is easy to code and automatically test.  In addition, a procedural
> or "OOP" program could be written from this.  For instance, the state
> can be an object that will invoke the proper functionality from a
> uniform interface, or a series of procedure calls.
> 
> Given I'm not knowledgeable about OOD, please forgive a possible
> butchering of OOD, but here's how I can see an OOD approach to the
> problem.  First, I identify the nouns in the system:
> 
> User
> CashRegister
> Total
> Operation
> Price
> 
> Now the relationships.  The User interacts with the CashRegister; in
> response, the CashRegister creates the appropriate Operation and allows
> access to the Price.  Price and Total are just numbers.  Furthermore,
> since there are several types of Operations, Operation is an abstract
> base class with Add, Sub, Print, and Clear as children sharing a
> consistent interface to allow for more streamlined code.  So, we get
> the following (assuming a non-event model and garbage collection for
> simplicity):
> 
> loop
>    CashRegister.Interact
>    op = CashRegister.Op
>    op.DoOp(Total, CashRegister.Price())
> 
> However, this is a bit of a kludge.  Clear and Print need only one
> parameter, but take 2 in order to conform with the interface.  In
> addition, Print gets mutable access to the Total even though it doesn't
> need it, which is unsafe.  Furthermore, the analysis up to this point
> was not as clear (or IMO as verifiable) as the previous one.
> 
> On the other hand, maybe I was solving the wrong problem.  Maybe I need
> to look at the pattern of this program and build the architecture then
> customize it.  In this case, what I'm trying to build is a type of
> machine that accepts operations, parameters, and can maintain a state
> that is the result of the previous operations.  Analyzing it this way,
> we get:
> 
> User
> Machine
> Params
> OutputState
> Operation
> 
> Params are what the Machine returns as data -- they are
> instruction/data pairs (where data can be an additional collection).
> OutputState can store any number of outputs and allows read/write
> access.  Otherwise, the semantics are the same:
> 
> [Assume output is an instance of OutputState]
> loop
>    Machine.Interact
>    plist = Machine.Params
>    foreach i in plist.Size
>       Operation op = Factory.CreateOp(plist[i].Op)
>       op.DoOp(plist[i].Data, output)
> 
> This is the generalized pattern, and by deriving different Operations,
> Factories, OutputStates, and even Machines we can simulate a wide
> variety of machines -- perhaps even primitive operating systems.  We
> can accomplish things like history lists, screen writes, etc, all with
> the same basic framework, because we solved a general problem.  Now,
> all future machine-like tasks will consist solely of deriving the
> appropriate classes.
> 
> In fact, this whole thing could be made a method of a machine class:
> 
> Machine.Run(factory, output)
> 
> Then people simply derive from this class, over-ride Interact (and
> anything else they want), and provide the necessary implementations.
> 
> Comments?
> 
Yes, the problem isn't very difficult, so why make a difficult 
solution? Why not create a simple CashRegister class with a "total" 
attribute and operations like clear, add, sub, and getTotal?
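Something like this, in Java (only a quick sketch; the names are invented):

// A quick sketch of the simple solution -- nothing more than the problem needs.
public class CashRegister {
    private double total = 0;

    public void clear()           { total = 0; }
    public void add(double price) { total += price; }
    public void sub(double price) { total -= price; }
    public double getTotal()      { return total; }
    // the receipt can be printed by whoever holds the register,
    // e.g. System.out.println(register.getTotal());
}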

> 
> 
> 
>>that is then customized to provide the desired
>>
>>>application (the justification being that such a design is much more
>>>extensible)? Do I need to stop thinking "program" and instead think
>>>"architecture" as the primary purpose and then customize it to provide
>>>the "program" as the secondary purpose?
>>>
>>
>>Design is more abstract than programming; you can design things that are
>>impossible to translate directly into code (then you need to
>>"normalize" them before programming).
> 
> 
> Right, which is what I'm seeing from both cases.  An analysis of the
> problem simply states relations and some of these relational statements
> can't even be checked by code (take relations involving quantifications
> over infinite sets or involving convenience functions that don't exist
> in the implementation language).  What differs here is what we are
> solving.  Are we solving the problem at hand, or do we choose to solve
> a generalization one of whose instances is the problem at hand,
> ostensibly for more flexibility?
> 
Regarding the "infinite sets", I want to say that some time ago I went to 
a class on functional programming (I'm a newbie at this) and I liked very 
much its way of approaching problems. For example, I can create a 
function that returns an infinite set of numbers and "connect" it to a 
function that multiplies its values by 2. Is such a functional program 
impossible to write in an OO language? No, it can be done, just in a 
different way; that is the difference.

> 
> 
> 
>>>This is why I need to understand the philosophy.  I want to understand
>>>how to "think OOD".  I don't care about specific design techniques
>>>unless they help illustrate this shift in thinking.  What can you tell
>>>me about this?  What references (online and printed are fine) can you
>>>point me to?  I'd love something that contrasts the two methodologies
>>>and provides examples to drive it home.  Something that explains and
>>>justifies OOD from a more philosophical perspective.
>>>
>>
>>I don't know if this is the best example but:
>>
>>At work I am developing an application that follows a "pipe and filter"
>>architecture. I didn't know it followed that architecture until some
>>kind person on this newsgroup told me. I was reinventing the wheel :(
>>
>>Also, I was designing the application (more concretely, the data model)
>>in an OO way. When I had a reasonably good design
>>(using some design patterns) I went on to normalize my model and adapt it
>>to the concrete OO programming language. After a couple of changes to my
>>design (and new normalizations) it was a great success.
>>
>>I think it would have been impossible to do if I hadn't stopped to think about OOD first.
> 
> 
> Could you provide some more details?  How did you analyze the problem
> using OOD?  How would you have analyzed it using non-OOD?
> 
Ok, first I'll explain the problem and then we can try to sketch a solution 
in an OOD and a non-OOD way.
At work we (me and my colleagues :) ) work with meteorological radar data. I 
won't go into what information radar data contains because that is another 
war :D (it is pretty "complete" and can also be modeled with OO classes 
more simply than with structures and simple types).

Also, a group of colleagues develops algorithms to filter, correct and 
improve the received radar data; here I call them A1, A2, A3, ...
Every algorithm (they are procedures or functions) has some input 
parameters, like the data to handle and some parameters to configure the 
behaviour of the algorithm.

The idea is to develop an application that gives us the ability to create 
any desired configuration of which algorithms must be executed on the 
initially received radar data. For example:

A1- Input: Needs a radar file name.
     Do: Knows how to read a radar data file.
     Output: Returns a set of output params.

A2- Input: Needs 2D data and some configuration params (not relevant) to 
work.
     Do: Corrects some radar data.
     Output: Corrected 2D data and some parameters.

A3, A4, ...: All with the same philosophy. An algorithm needs a set of 
inputs, does something and returns a set of output data that can be 
connected as inputs of other algorithms.

The idea is like programming in a procedural way, but we can decide at 
runtime which output params will be input params to other algorithms.


Design solutions:
-----------------

Non-OOD: I don't even want to imagine how I could analyze my work 
application in a non-OOD way :) It would be a real headache.

OOD: I have a set of classes like "Algorithm" and "Data". An 
"Algorithm" object has 0..* input "Data" objects and 0..* output "Data" 
objects related to it. Also, the same "Data" object can be an input for 
one algorithm and an output for another.

I'm not saying there is no non-OOD solution, but the OOD approach 
feels much closer to me than the other.
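A very small sketch of the idea in Java (the class and method names here 
are only illustrative, not the real ones from our application):

import java.util.ArrayList;
import java.util.List;

// Illustrative only -- the real classes in our application are different.
class Data {
    final String name;
    Object value;                 // whatever an algorithm produces or consumes
    Data(String name) { this.name = name; }
}

abstract class Algorithm {
    final List<Data> inputs  = new ArrayList<Data>();   // 0..* input Data
    final List<Data> outputs = new ArrayList<Data>();   // 0..* output Data

    abstract void execute();      // A1, A2, A3, ... override this: read inputs, fill outputs
}

// The runtime configuration is just an ordering of algorithms whose Data
// objects have been wired together (an output of one used as an input of another).
class Pipeline {
    private final List<Algorithm> steps = new ArrayList<Algorithm>();
    void add(Algorithm a) { steps.add(a); }
    void run() { for (Algorithm a : steps) a.execute(); }
}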

> 
> 
>>>Thanks in advance.
>>>
>>
>>I hope it will be useful for you.
> 
> 
> 
> It was, thank you very much.
> 

Bye Mr plan9ch7.

-- 
-----------------------------------------------------
Antonio Santiago Pérez
( email: santiago<<at>>grahi.upc.edu       )
(   www: http://www.grahi.upc.edu/santiago )
(   www: http://asantiago.blogsite.org     )
-----------------------------------------------------
GRAHI - Grup de Recerca Aplicada en Hidrometeorologia
Universitat Politècnica de Catalunya
-----------------------------------------------------
santiago5456 (105)
6/30/2005 7:36:48 AM
Robert C. Martin wrote:
> On 29 Jun 2005 08:24:50 -0700, xmp333@yahoo.com wrote:
[snip]
> In my view the OO-ness of a system can be identified by tracing the
> dependencies between the modules.  If the high level policy modules
> depend on (directly call) lower level modules, which then call even
> lower level modules, then the program is procedural and not object
> oriented.

I'm a big fan of dependency inversion and utilising it has helped many
projects that I've worked on. I was going to write that I didn't really
consider it *the* de facto OO-ometer, but I think that I might, just in
a slightly different way.

Like you, I also view OO as simply a way to organise code efficiently.
I've seen many truly horrid architectures which have arisen from
OOA/OOD, from noun spotting etc. And I guess I see OO as a strategy
that minimises code by allowing us to take an interface, and some
behaviour, and vary the behaviour independent of that interface.

Of course, the DIP is just an example of this, but I feel the DIP is at
its most potent across assembly (package) boundaries, where it would
indeed be an effective OO-ometer. Inside an assembly, I think the DIP
can *often* be left unused without plunging into procedural code.

Or perhaps any polymorphic activity suggests an inverted dependency.
Not sure, I'd have to meditate on it. 

Regards,

Tim Haughton

6/30/2005 8:01:45 AM
> >Hi,
> >
> >
> >I understand the mechanics of objects; I use objects, polymorphism,
> >even patterns.  I understand some of OOD, but I'm trying to grasp the
> >bigger picture, how to "think in OOD".  I hope an example will help:
>
> There has been an enormous amount of debate and discussion about this
> topic.  Some folks believe that OO is an extremely high level
> technique used to make models of the world.  Others believe that OO is
> a mechanism for structuring source code.  I fall into the latter camp.

I fall into both camps (though "philosophy" is a little strong); I'd rather
call it a school.

>
> To me, the whole notion that OO is a philosophy is silly.

do you think there are any philosophical notions behind SE...or simply
that OO is not (a distinctive) one?

> OO is a set
> of tools that help programmers structure their code better.  In
> particular, it helps them invert key dependencies.

I'll try to ignore my main objection to this point, for both our
sanity, or at least just brush past it.....if I start to go on...then
feel free to tell me to 'shut the .... up'.

OK, it is possible to invert the physical dependencies between sets of
classes (modules) by moving entities between them ...we should at least
agree here......are you claiming that this is the key characteristic
value of OO?

>
> In my view the OO-ness of a system can be identified by tracing the
> dependencies between the modules.

you are?

> If the high level policy modules
> depend on (directly call) lower level modules, which then call even
> lower level modules, then the program is procedural and not object
> oriented.

this is utterly bizarre....you are claiming that the OO-ness of a
system is not dependent on the logical model (i.e. 'class',
'interface'), but on the allocation of a class or interface to a
physical deployment entity (module)...and further that even if an
application uses classes, interfaces, encapsulation, abstraction,
polymorphism, it can be considered (strictly) functional in certain
deployments........I disagree.

i.e.

interface IA
{
}

class CA : IA
{
}

interface IB
{
}

class CB1
{
 IA a = new CA();
}

class CB2
{
}

so I can vary the OO'ness of the above code, not by changing the code,
but by how I allocate each entity to a module?!?!?

If I put it all in 1 module....its not OO?

>
> However, if the high level policy modules are independent, or depend
> solely on polymorphic interfaces that the lower level modules
> implement, then the system is OO.

So if a physical module (set of classes) depends on 1 non-polymorphic
interface and n (>0) polymorphic ones, it's not OO??!?!?!?!?!?

Would you characterise a C program where the dependencies between
'modules' were via function pointers as OO? (I actually would
sympathise with an answer like 'a bit' but the lack of something
corresponding to an object would worry me)

>
> In other words, it all has to do with the *direction* of the
> dependencies.

aaahhhhhhh.....I'm my own grandpa.....

>
> To learn more about this see the following article:
>
> http://www.objectmentor.com/resources/articles/Principles_and_Patterns.PDF
>
>

luckily I'm going on holiday for a week from tomorrow, I've been good
up to now, I've let loads of DIPy stuff go, but this one was just too
much for me, in one stroke you seem to be trying to characterise OO as
a mechanism for implementing your 'DIP' thing.....if I believed in DIP
that would be a step too far, but as I completely reject it as a
mirage....it's two steps too far.

Nicholls.Mark (1061)
6/30/2005 9:38:48 AM
> There has been an enormous amount of debate and discussion about this
> topic.  Some folks believe that OO is an extremely high level
> technique used to make models of the world.  Others believe that OO is
> a mechanism for structuring source code.  I fall into the latter camp.

So OO keeps data and operations together which avoids name clashes and
serves as a grouping method for the readers of the code?  However, this
would preclude any need for OO design as one can use procedural design
and simply group the data and operations together in a class.


> To me, the whole notion that OO is a philosophy is silly.  OO is a set
> of tools that help programmers structure their code better.  In
> particular, it helps them invert key dependencies.

By inverting key dependencies, what do you mean?  Or did I get it
below?


> In my view the OO-ness of a system can be identified by tracing the
> dependencies between the modules.  If the high level policy modules
> depend on (directly call) lower level modules, which then call even
> lower level modules, then the program is procedural and not object
> oriented.
>
> However, if the high level policy modules are independent, or depend
> solely on polymorphic interfaces that the lower level modules
> implement, then the system is OO.

I'm a bit confused.  By this are you saying that objects capture the
algorithmic pattern, applying parameterized objects in lieu of fixed
calls?  For example:

Non-OO by your definition:
==========================
class x {
   method y (z) {
      for each i in z
           display(i)
   }
}

OO by your definition:
======================
class x {
    method y (z, a) {
        for each i in z
             a.display(i)
    }
}

Instead of calling a lower level display, the class calls a display
method of a parameter passed in, which now means the algorithm is
parameterized; method y no longer displays all the elements in z, but
rather applies a user-defined algorithm (the display interface) over z.
This means that it's a more abstract algorithm and is more flexible as
more can be done with it by deriving different classes that fulfill the
display interface.  Is this correct?

I'll visit the link shortly.  Thank you.

xmp333 (26)
6/30/2005 1:43:23 PM
> OOD only gives you a new way to orient your ideas,  like there is a very
> different point of view betwen procedural and OO programming languages
> (supossing we don't use OO prog. languages only in a procedural way).
> Which is the best? Depends on your needs. (The best answer in
> computer-science worl is "depends on" :) )

Right; I'm not assuming that OO is the best tool out there.  I work on
a large number of small projects, and for many of them procedural code
is just fine.  However, I want to make sure I also understand the OO
mind-set.  If it were just about using classes as the basic module and
enforcing data integrity through methods, then it's not a paradigm
shift; in fact, it's the realization of the procedural ideal.  So to
me, there has to be a different way of viewing things.



> > But non-OOD is also design.  Non-OOD focuses more on deriving the
> > algorithms, while OOD focuses more on the data, although the two
> > intersect.  One can derive data from the algorithms/relations and vice
> > versa.  So, given two different ways of doing something, and a former
> > way that is clearer and more direct, why use the latter?  If it's an
> > issue of flexibility/re-usability, what's the thinking behind that?
> >
> Yes, you can design either way. See the differences between a DFS in
> "structured design" and a UML class diagram. Which is best? Again,
> it depends.

Right, the question becomes this: what are the advantages of OO design
over non-OO design?  Or at least, what are the trade-offs in each case?
The most obvious trade-off is directness.  Non-OO design cuts straight
to the heart of the matter.  However, does OO design offer something to
make up for its more vague and circuitous route?



[Cash Register Example]
> Yes, the problem isn't very difficult, so why make a difficult
> solution? Why not create a simple CashRegister class with a "total"
> attribute and operations like clear, add, sub, and getTotal?

Because such a solution is essentially procedural, even if it uses
objects.  In addition, it makes OO analysis pointless, because we would
have spent more time analyzing a problem, only to come to the same
solution that a more direct (predicate calculus) analysis would have
yielded; in addition, the predicate calculus analysis could have led to
the same object model.  If we use the guidelines that data is private
and guarded by access methods along with the requisite guards then we
have most of that design.  Grouping data and their operations together
is another general principle that can be used to get the rest of that
model.  So in the end, we're looking at procedural analysis, along with
some code derivation guidelines to derive an object-based model.  No OO
analysis there.


> > Right, which is what I'm seeing from both cases.  An analysis of the
> > problem simply states relations and some of these relational statements
> > can't even be checked by code (take relations involving quantifications
> > over infinite sets or involving convenience functions that don't exist
> > in the implementation language).  What differs here is what we are
> > solving.  Are we solving the problem at hand, or do we choose to solve
> > a generalization one of whose instances is the problem at hand,
> > ostensibly for more flexibility?
> >
> Regarding the "infinite sets", I want to say that some time ago I went to
> a class on functional programming (I'm a newbie at this) and I liked very
> much its way of approaching problems. For example, I can create a
> function that returns an infinite set of numbers and "connect" it to a
> function that multiplies its values by 2. Is such a functional program
> impossible to write in an OO language? No, it can be done, just in a
> different way; that is the difference.

Functional languages can support infinity through lazy evaluation;
however, this support is limited.  They generate the values on an
as-needed basis -- but this doesn't fulfill the needs of quantification
over infinite sets, which requires evaluation of the entire infinite
set -- an impossible concept, but one that is useful as an analysis
tool.  For example, if I quantify:

(All x such that x in Integer and Odd(x) = True)

Assuming the mathematical definition of Integer (and not the
computer-limited one) this requires evaluation of each element of an infinite
set.  This is something that can never be implemented on a computer,
but it is a useful analysis tool for the user.
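To illustrate the "as-needed" point with a toy Java sketch (names
invented; this shows the lazy-generation idea, not the quantification
itself):

import java.util.Iterator;

// Toy illustration: the odd integers generated lazily, one at a time.
// The "set" is never materialized; only the finitely many elements we ask
// for ever exist, so the universal quantification itself is never evaluated.
class OddNumbers implements Iterator<Long> {
    private long next = 1;
    public boolean hasNext() { return true; }            // conceptually infinite
    public Long next()       { long n = next; next += 2; return n; }
    public void remove()     { throw new UnsupportedOperationException(); }
}

class Demo {
    public static void main(String[] args) {
        Iterator<Long> odds = new OddNumbers();
        for (int i = 0; i < 5; i++)
            System.out.println(odds.next());             // 1 3 5 7 9 -- a finite prefix only
    }
}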

[Radar Example]
Ok, I think I see what you're doing. You have a series of algorithms
(or objects) that feed their output into other algorithms/objects,
hence the pipeline.  The outputs and inputs are objects, and I assume
you are using polymorphism in order to supply a unified interface?
This also means you could build a collection of the algorithms you want
for a given task and customize processing chains by supplying these
collections when needed, and deriving new ones to introduce new
processing functionality?


Thanks.

xmp333 (26)
6/30/2005 2:00:00 PM

Tim Haughton wrote:
[Snip]
> Of course, the DIP is just an example of this, but I feel the DIP is at
> its most potent across assembly (package) boundaries, where it would
> indeed be an effective OO-ometer. Inside an assembly, I think the DIP
> can *often* left unused without plunging into procedural code.


What's DIP?

xmp333 (26)
6/30/2005 2:02:00 PM
xmp333@yahoo.com wrote:
> Tim Haughton wrote:
> [Snip]
> > Of course, the DIP is just an example of this, but I feel the DIP is at
> > its most potent across assembly (package) boundaries, where it would
> > indeed be an effective OO-ometer. Inside an assembly, I think the DIP
> > can *often* left unused without plunging into procedural code.
>
>
> What's DIP?

Answering my own question here.  Just read the link provided by Mr.
Robert Martin and I see what it stands for. 

Thanks.

xmp333 (26)
6/30/2005 5:29:25 PM
Responding to Xmp333....

> I understand the mechanics of objects; I use objects, polymorphism,
> even patterns.  I understand some of OOD, but I'm trying to grasp the
> bigger picture, how to "think in OOD".  I hope an example will help:

First, note that OO development traditionally has three stages:

OO Analysis (OOA): a design that resolves the customer's functional 
requirements abstractly in the customer's terms and in a manner that is 
independent of particular computing environments.

OO Design (OOD): a design that elaborates the OOA solution to resolve 
nonfunctional requirements.  Since nonfunctional requirements 
necessarily depend on the specific computing environment, this solution 
explicitly includes computing environment issues.  But it does so at a 
relatively high level of abstraction.  IOW, it is a strategic view of 
the full solution.

OO Programming (OOP): a design that elaborates the OOD solution at a 
detailed level using an OOPL.  IOW, OOP is about implementing a complete 
tactical solution at the 3GL level.

All three stages involve the traditional notion of 'software design' but 
the perspectives are quite different.  The stages were formalized to 
deal with a problem that plagued traditional development processes: the 
gap between the customer view and the computing view can be enormous. 
The systematic progression from

requirements -> OOA model -> OOD model -> OOP model -> executable

was designed to put stepping stones in that gap.  It was systematic because 
each stage was fairly well focused (at least relative to its precursor, 
Structured Programming).  Everything to the left of the executable is a 
specification of the solution to its right.  Similarly, everything to 
the right of requirements is a solution to the specification on its 
left.  So the level of abstraction decreases moving to the right.  At 
each stage the developer provides well-defined intellectual content. 
(For the last stage to produce an executable, this is now automated by 
3GL compilers, linkers, and loaders.)

[Note that practicing IID with very small scale increments does not 
change the nature of the stages.  Whether one is dealing with DoD 
projects with a cast of thousands or small feature enhancements in a 
four-man team, the basic activities still follow the same pattern.]

This dichotomy between specification and solution was one of the 
important things that the OO paradigm brought to the software 
development table.  Which segues to...

> 
> Let's say I want a program that given A produces B.  I have 2 ways of
> analyzing this.  I can either formally specify how B relates to A, or I
> can use the OOD approach in which I identify nouns, relationships,
> arity, patterns, etc...

The OO approach always begins with abstracting some problem space. 
Usually it is the customer's but it can be specific parts of the 
computing space (e.g., networking and interoperability issues for 
distributed processing, RDBs, etc.).  That problem space abstraction is 
done primarily during OOA.  One selects the problem space entities 
(concrete or conceptual) that are relevant to the problem in hand.  One 
also selects their intrinsic properties that are relevant to the problem 
in hand.

Tools like noun-identification in a requirements spec are just that: 
tools to assist in the fundamental intellectual activity of problem 
space abstraction.  So there really is no OR in your approach.  The 
formal OOA/D methodologies provide a systematic approach to problem 
space abstraction that happens to use a variety of mechanical 
techniques.  So your quest to "think OO" is really a quest to understand 
an OOA/D methodology.

In the end, you abstract what the problem space /is/ and express that 
abstraction in a very particular way.  Identifiable entities map to 
objects, intrinsic characteristics are expressed in terms of knowledge 
(what the entity knows) and behavior (what the entity does) 
responsibilities, logical connections between entities are mapped to 
relationships, and interactions between entities to solve the problem 
are expressed in terms of collaboration messages.  The solution is 
defined when the developer connects the dots between intrinsic, 
self-contained behavior responsibilities by deciding who sends messages 
across which relationship paths to whom and deciding when they are sent.

That's the high-level, purist view of OOA.  As the developer migrates 
through the stages the level of abstraction decreases and one becomes 
more and more concerned with the details.  By the time one reaches OOP 
one is almost in a different world where the dominant notions are things 
like type systems, interfaces, and procedure calls.  Nonetheless, the 
fundamental structure remains that defined in the OOA; there is just a 
whole lot more obscuring detail.
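For a 3GL-level feel of that mapping, a deliberately tiny, invented 
example (not from any methodology text): one knowledge responsibility, 
one behavior responsibility, a relationship, and a collaboration message 
sent across it.

class Owner {
    private final String name;                 // knowledge: what the Owner knows
    Owner(String name) { this.name = name; }
    void notifyBarking(Dog dog) {              // behavior invoked by a collaboration message
        System.out.println(name + " hears " + dog.getName() + " barking");
    }
}

class Dog {
    private final String name;                 // knowledge responsibility
    private final Owner owner;                 // relationship to another entity
    Dog(String name, Owner owner) { this.name = name; this.owner = owner; }
    String getName() { return name; }
    void bark() { owner.notifyBarking(this); } // message sent across the relationship
}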

> 
> The first approach is much more direct and is very reliable, so why
> would I use the OOD approach?

The short answer is that one uses the OO paradigm to achieve better 
maintainability in the application in the face of volatile requirements 
over time.  If one just wants rapid, intuitive computational solutions 
one should be doing functional programming.

Your question, though, is more about how to proceed within the context 
of the OO paradigm.  As I indicated above, there really is no OR.  The 
key is to recognize that one needs to make an investment in OOA, OOD, 
AND OOP.  How one does that depends upon the process and/or methodology 
one uses, which can range from the OOP-based agile processes at one end 
of the spectrum to the model-based agile processes at the other end.

[Even the OOP-based agile processes do OOA/D in various forms, so one 
doesn't need UML to do it.  They use things like CRCs and whatnot to 
record OOA/D semi-formally.  They also employ a huge suite of 
refactoring practices that effectively distill OO/D into a bunch of 
cookbook guidelines to manipulate the 3GL code.]

So the real answer to your question is that you need to do three things. 
  First, acquire a fundamental knowledge of what OOA/D is about. 
Regardless of what methodology and process you eventually adopt, you 
need the ABCs first.  [The Books section of my blog has some suggestions 
on OOA/D books.]  Second, select an A&D methodology and follow it. 
Third, select a development process as a framework for applying the A&D 
methodology and follow it.  A lot of smart people with lots of 
experience developed the OO methodologies and processes so take 
advantage of that and use them religiously.  (It takes a /lot/ of 
experience to get to the point of being qualified to second guess the 
gurus.)

The differences between methodologies and processes are minor compared 
to the step function between ad hoc and systematic OO development.  So 
long as you follow /some/ accepted methodology and process religiously, 
things will tend to turn out well.


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
hsl@pathfindermda.com
Pathfinder Solutions  -- Put MDA to Work
http://www.pathfindermda.com
blog: http://pathfinderpeople.blogs.com/hslahman
(888)OOA-PATH



h.lahman (3600)
6/30/2005 7:18:24 PM
On 30 Jun 2005 02:38:48 -0700, "Mark Nicholls"
<Nicholls.Mark@mtvne.com> wrote:

>> >Hi,
>> >
>> >
>> >I understand the mechanics of objects; I use objects, polymorphism,
>> >even patterns.  I understand some of OOD, but I'm trying to grasp the
>> >bigger picture, how to "think in OOD".  I hope an example will help:
>>
>> There has been an enormous amount of debate and discussion about this
>> topic.  Some folks believe that OO is an extremely high level
>> technique used to make models of the world.  Others believe that OO is
>> a mechanism for structuring source code.  I fall into the latter camp.
>
>I fall into both (though philosophy is a little strong), I'd rather
>call it a school.
>
>>
>> To me, the whole notion that OO is a philosophy is silly.
>
>do you think there are any philosophical notions behind SE...or simply
>that OO is not (a distintive) one?

Whether there are philosophies of software development is irrelevant;
though I think there may be.  My point is that OO has often been
touted as a "grand overarching philosophy" having more to do with
life, the universe, and everything, than with software.  

OO is certainly a different way of thinking about software.  From that
point of view it is a kind of philosophy.  But it is a way of thinking
about software at the structural level; not the grand "analysis" level
(whatever that word happens to mean.)

>OK, it is possible to invert the physical dependencies between sets of
>classes (modules) by moving entities between them ...we should at least
>agree here......are you claiming that this is the key characteristic
>value of OO?

Yes.
>
>> If the high level policy modules
>> depend on (directly call) lower level modules, which then call even
>> lower level modules, then the program is procedural and not object
>> oriented.
>
>this is utterly bizarre....you are claiming that the OO ness of a
>system is not dependent on the logical model (i.e. 'class',
>'interface'), but on the allocation of a class or interface to a
>physical deployment entity (module)...and further that even if an
>application uses classes, interfaces, encapsulation, abstraction,
>polymorphism, it can be considered (strictly) functional in certain
>deployments........I disagree.

Not quite. There is a logical component.  Higher level policies are
decoupled from lower level policies by having both depend on
interfaces or abstract classes. 
>
>i.e.
>
>interface IA
>{
>}
>
>class CA : IA
>{
>}
>
>interface IB
>{
>}
>
>class CB1
>{
> IA a = new CA();
>}
>
>class CB2
>{
>}
>
>so I can vary the OO'ness of the above code, not by changing the code,
>but by how I allocate each entity to a module?!?!?

Certainly.  If the allocation of classes to modules does not
effectively decouple those modules, then you don't have an OO
solution.  The fact that classes and interfaces are used is
irrelevant, if those classes and interfaces are not used to create an
OO structure.

>If I put it all in 1 module....its not OO?

It's not so much a matter of whether it's in one module or not.  It's a
matter of whether or not there is an obvious and convenient fracture
zone that could be used to separate the modules.  

>> However, if the high level policy modules are independent, or depend
>> solely on polymorphic interfaces that the lower level modules
>> implement, then the system is OO.
>
>So if a physical module (set of classes) depends on 1 non polymorphic
>interfaces and n (>0) polymorphic ones, its not OO??!?!?!?!?!?

OO-ness is not binary, it is a continuum.  To the extent that a module
depends on concretions, it is not OO.  To the extent that it depends
on abstractions it is.    

>Would you characterise a C program where the dependencies between
>'modules' were via function pointers as OO? 

Yes, so long as the decoupling created by the function pointers
allowed high level policy to be separated from lower level details.

>(I actually would
>sympathise with an answer like 'a bit' but the lack of something
>corresponding to an object would worry me)

The function pointers are the methods of an interface.  Typically
those pointers will be held in some kind of data structure.  That data
structure is an object.

Consider, for example, the FILE data type in C.  Deep within it there
are function pointers to the read, write, open, close, seek methods.
FILEs are objects.



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
unclebob2 (2724)
6/30/2005 8:51:53 PM
xmp333@yahoo.com wrote:
> Hi,
>
>
> I understand the mechanics of objects; I use objects, polymorphism,
> even patterns.  I understand some of OOD, but I'm trying to grasp the
> bigger picture, how to "think in OOD".  I hope an example will help:
>

There is no definitive "OOD think" or "OOA"; we can only more or less
agree on the definitions of "object based" and "object oriented".

But there are some guys who share their personal brand of thinking
while travelling towards an OO program.

Peter Coad has some pretty good books. In "Object Oriented Programming"
he walks through several examples.

Kent Beck's "Test driven development" is not about "OOD think" but he
does share his brand of thinking with very simple examples.

> Let's say I want a program that given A produces B.  I have 2 ways of
> analyzing this.  I can either formally specify how B relates to A, or I
> can use the OOD approach in which I identify nouns, relationships,
> arity, patterns, etc...
>

Or you can follow one of the many other paths.

In my early OOP days I tried that approach; sometimes it was OK, but I
mostly went astray badly.

> The first approach is much more direct and is very reliable, so why
> would I use the OOD approach?
>

If your method works then stick with it. It helps to have several
methods up your sleeve, but I don't necessarily recommend the "OOD" one.

Some people find CRC cards handy.

> Or am I comparing apples to oranges?  Is OOD for designing
> applications, or is it for designing an application pattern (framework?
> architecture?) that is then customized to provide the desired
> application (the justification being that such a design is much more
> extensible)? Do I need to stop thinking "program" and instead think
> "architecture" as the primary purpose and then customize it to provide
> the "program" as the secondary purpose?
>

You've described "application frameworks", and I made a serious effort to
use "OOD think" to create such frameworks, but it made each project 10
times harder.

Trying to build applications by first building a general framework (in
the hope of reuse) is like trying to climb Mount Everest on your first
hike.

Remember, for code to be reusable it must first be usable ;-)

I found it better to just focus on "what I needed now"; ironically,
that path led to lots of usable (and reusable) material.

After building so many systems I could now develop such frameworks but
I don't bother because everything changes so fast, it would all go out
of date too fast.

> This is why I need to understand the philosophy.  I want to understand
> how to "think OOD".  I don't care about specific design techniques
> unless they help illustrate this shift in thinking.  What can you tell
> me about this?  What references (online and printed are fine) can you
> point me to?  I'd love something that contrasts the two methodologies
> and provides examples to drive it home.  Something that explains and
> justifies OOD from a more philosophical perspective.
>

I've mentioned Peter Coad and Kent Beck; there is also "Thinking in
Java" by Bruce Eckel, which is pretty popular (and free for download at his
site).

I've only ever used his book to grab code snippets so I personally
cannot vouch for any "thinking" part.

> Thanks in advance.

Cheers.

0
alvin321 (175)
7/1/2005 5:11:04 AM
On 30 Jun 2005 06:43:23 -0700, xmp333@yahoo.com wrote:

>> There has been an enormous amount of debate and discussion about this
>> topic.  Some folks believe that OO is an extremely high level
>> technique used to make models of the world.  Others believe that OO is
>> a mechanism for structuring source code.  I fall into the latter camp.
>
>So OO keeps data and operations together which avoids name clashes and
>serves as a grouping method for the readers of the code?  

Yes, that's one way.  A more important structuring mechanism is the
decoupling that polymorphism allows.  Given two modules:

       |A|------>|B|

It is possible, by using polymorphism, to invert the source code
dependency without changing the control flow:

       |A|<------|B|

Consider:

package A;
import B.Y;
public class X {
  private Y y;
  public X(Y y) {
    this.y = y;
  }

  public void f() {
    y.g();
  }
}
-------
package B;
public class Y {
  public void g() {
  }
}
-------
public class Main {
  public static void main(String args[]) {
    B.Y y = new B.Y();
    A.X x = new A.X(y);
    x.f();
  }
}

This shows the module (in this case I am equating a module with a
package) dependencies going from A to B.

I can completely invert this dependency without changing the control
flow as follows:


package A;

public interface XServer {
  public void g();
}

public class X {
  private XServer server;
  public X(XServer server) {
    this.server = server;
  }

  public void f() {
    server.g();
  }
}
-------
package B;
import A.XServer;
public class Y implements XServer {
  public void g() {
  }
}
-------
public class Main {
  public static void main(String args[]) {
    B.Y y = new B.Y();
    A.X x = new A.X(y);
    x.f();
  }
}

This ability to invert key module dependencies allows me to inspect
every module dependency in the system and adjust its direction,
without changing the way the system works. This is very powerful.

>However, this
>would preclude any need for OO design as one can use procedural design
>and simply group the data and operations together in a class.

You are speaking about OO Design as though it were some specific
activity or event.  I prefer to think about OO design as being a part
of overall software design.  We design our software using OO
principles as *part* of our toolkit of design techniques.

>> In my view the OO-ness of a system can be identified by tracing the
>> dependencies between the modules.  If the high level policy modules
>> depend on (directly call) lower level modules, which then call even
>> lower level modules, then the program is procedural and not object
>> oriented.
>>
>> However, if the high level policy modules are independent, or depend
>> solely on polymorphic interfaces that the lower level modules
>> implement, then the system is OO.
>
>I'm a bit confused.  By this are you saying that objects capture the
>algorithmic pattern, applying parameterized objects in lieu of fixed
>calls?  

No, the DIP paper explains it in detail.

http://www.objectmentor.com/resources/articles/dip.pdf


-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/1/2005 4:01:24 PM
On Thu, 30 Jun 2005 19:18:24 GMT, "H. S. Lahman"
<h.lahman@verizon.net> wrote:

>Responding to Xmp333....
>
>> I understand the mechanics of objects; I use objects, polymorphism,
>> even patterns.  I understand some of OOD, but I'm trying to grasp the
>> bigger picture, how to "think in OOD".  I hope an example will help:
>
>First, note that OO development traditionally has three stages:

[OOA, OOD, OOP]

Actually, this is not quite as traditional as you might think at
first.  

OOP came first with languages like Simula, Smalltalk, Obj-C, and C++.

OOD came later with books like Booch, Rumbaugh, and Wirfs-Brock.  There
were, and still are, different schools of thought about what OOD is.
Among these are the so-called European and American schools.  The
European school uses OO as a way to create an expressive Domain
Specific Language with which to express the application.  The American
school thinks of OO as a way to organize and manage source code.  Both
schools have merit.  There are other schools as well, including the
SMOO school which is quite different.

OOA came later still, and actually hasn't really come yet.  Nobody
knows exactly (or even inexactly) what OOA is.  There are a number of
books and papers written about it, but they don't agree.  There is not
even a set of cogent schools of thought.  OOA is a term that we bandy
about with authority, but have no real definition for.

One has to wonder where the {Analysis, Design, Programming} triplet
came from.  The triplet permeates our industry to the extent that no
new technique can be created without it immediately being trifurcated.

Consider Dijkstra's Structured Programming.  Within a few years there
was Structured Design and Structured Analysis.  Interestingly enough
SA and SD had *nothing* to do with SP.  They were completely different
things.  The use of the word "Structured" in front of A and D was a
brilliant marketing ploy because the word "Structured" had already
been made synonymous with "good".

When did we first believe that A, D, and P were separate activities
with separate artifacts and deliverables?  When did we first come to
the conclusion that: 

>requirements -> OOA model -> OOD model -> OOP model -> executable

Remarkably, this thought was formalized in a very famous paper written
in 1970 by Dr. Winston Royce, entitled "Managing the Development of
Large Software Systems".  This paper is often called "The Father of
the Waterfall".  The remarkable part is that the paper firmly
denounces the practice in favor of an approach in which A, D, and P
are done iteratively.

This iterative transformation means that we go from A to D to P in a
matter of minutes, repeating the process many times per day,
delivering working software every week.

>This dichotomy between specification and solution was one of the 
>important things that the OO paradigm brought to the software 
>development table.  

The dichotomy was really brought to the table by Structured Analysis
and Structured Design.  Though there were earlier rumblings, these
disciplines were the first to really formalize the different artifacts
for analysis and design.  

Even so, I believe that the intention of the original SA/SD folks
(especially DeMarco) was for lightweight iterative practices.
However, in light of the heavy acceptance of Waterfall (ironically
credited to Winston Royce's paper, which said not to do it), people
interpreted SA/SD in a staged, artifact-driven way.


-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/1/2005 4:52:34 PM
A number of observations come to mind in this discussion.

First, that the word object is a conceptual framework of meanings in
Computer Science.

Software design objects are a family of concepts often complemented by
an iconic notation.  The notation is used to experiment with and refine
software designs.  The design of an information model uses different
families of objects than functional designs use and so on.

Different from this is OOP and OOP notations.  These notations,
although overlapping with design, are largely inventories of
programming components and their particulars.

Not all Computer Scientists are successful with design objects.  To be
successful, the individual using them must be able to think in very
abstract ways.  Not everyone can and even when they can, corporate
policy and practice often make the effort impossible.

This frustrating truth is largely responsible for the Extreme
*whathaveyou-usually-Programming* phenomena.  With a straight face,
these proponents assert that if design is not egalitarian and if
companies don't respect it then -snip, snip- out with it except for
perfunctory lip-service.

One cannot glibly 'think' in OOD, there isn't any such thing. OOD is
very hard work, time-consuming, expensive and easy to derail (just have
bottom-up activity happening in the background that pre-empts the
designers).

Another common mistake is the literalization of the word 'procedural'
as an either/or alternative to OOP.  OOP is equally procedural as a
temporal phenomenon.  The consecutive operations are bundled
differently and obviously have their own mechanics.

Architecture is a whole other related subject.  Again, design
discussions having to do with architecture too easily get entangled in
OOD and programming quagmires.  Architects often have to spoon feed and
baby talk their way through corporate conversations.

The theory for all of this is the theory of language, the use of iconic
notations to conceptually talk about and build very complex software
frameworks with (generally speaking).  Philosophy is coincidental.

0
Krasicki1 (73)
7/2/2005 4:01:21 AM
krasicki wrote:

> Not all Computer Scientists are successful with design objects.  To be
> successful, the individual using them must be able to think in very
> abstract ways.  Not everyone can and even when they can, corporate
> policy and practice often make the effort impossible.

I agree that it takes work, but I don't think that deep abstraction is 
involved.  To me, object design is more like concretization.  The steps 
are: 1) think of a thing that can solve a problem for you, 2) think of a 
way to ask it to solve the problem, 3) go inside the thing and solve
the problem.  It's just a little different from the procedural mindset,
which is: 1) think of a way to solve the problem, 2) solve the problem.
The abstraction in OO is really all about thinking about a way to ask
for a solution rather than leaping into a solution.

> This frustrating truth is largely responsible for the Extreme
> *whathaveyou-usually-Programming* phenomenons.  With a straight face,
> these proponents assert that if design is not egalitarian and if
> companies don't respect it then -snip, snip- out with it except for
> perfunctory lip-service.

How many XP teams have you worked with?

> One cannot glibly 'think' in OOD, there isn't any such thing. OOD is
> very hard work, time-consuming, expensive and easy to derail (just have
> bottom-up activity happening in the background that pre-empts the
> designers).

It is like anything else.  Hard when you start, but easier when you 
acclimate to it.  I "think in OO" but I've been doing it for a long time.


Michael Feathers
author, Working Effectively with Legacy Code (Prentice Hall 2005)
www.objectmentor.com
0
mfeathers2 (74)
7/2/2005 12:25:48 PM
<xmp333@yahoo.com> wrote in message 
news:1120140120.479993.305730@o13g2000cwo.googlegroups.com...
>
>
> Tim Haugton wrote:
> [Snip]
>> Of course, the DIP is just an example of this, but I feel the DIP is at
>> its most potent across assembly (package) boundaries, where it would
>> indeed be an effective OO-ometer. Inside an assembly, I think the DIP
>> can *often* left unused without plunging into procedural code.
>
>
> What's DIP?
>

Dependency Injection Pattern

http://www.martinfowler.com/articles/injection.html

-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
-- 


0
nickmalik (325)
7/2/2005 1:56:54 PM
"Robert C. Martin" <unclebob@objectmentor.com> wrote in message 
news:0jk7c1tj3hhc2oua6pe60ulk719pr41sd1@4ax.com...
>
> Whether there are philosophies of software development is irrelevant;
> though I think there may be.  My point is that OO has often been
> touted as a "grand overarching philosophy" having more to do with
> life, the universe, and everything, than with software.

There are some folks who still claim that the world is flat.  We don't talk 
about them as though they were a significant part of scientific thought.  In 
my opinion, it is fair to include "grand OO philosophers" in this category. 
Anyone who says that OO is a grand philosophy is ignorant of both software 
engineering and philosophy.  Kant, Camus, Sartre... now that's philosophy. 
We are on the same page on this one.

>
> OO is certainly a different way of thinking about software.  From that
> point of view it is a kind of philosophy.  But it is a way of thinking
> about software at the structural level; not the grand "analysis" level
> (whatever that word happens to mean.)
>

Once again, we agree.  OO gives you a few more interesting techniques that 
are considerably more difficult to do in procedural languages.

>>OK, it is possible to invert the physical dependencies between sets of
>>classes (modules) by moving entities between them ...we should at least
>>agree here......are you claiming that this is the key characteristic
>>value of OO?
>
> Yes.

Are you sure you haven't mixed OOD with AOP?  I would agree with you if your 
statement had been "a key characteristic of AOP is the inversion of control 
and the injection of dependencies."  This is an innovation on top of OO.  It 
is true that OO enabled it. However, I agree with the prior poster's 
sentiment that an OO program is not characterized by any single pattern, 
even a good one.

>>so I can vary the OO'ness of the above code, not by changing the code,
>>but by how I allocate each entity to a module?!?!?
>
> Certainly.  If the allocation of classes to modules does not
> effectively decouple those modules, then you don't have an OO
> solution.

I would call this "definition parsing."  You do have an OO solution in that 
it uses OO mechanisms to accomplish its goal.  You may or may not have a 
"good" OO solution, with the subjective being the key thing I'm pointing 
out.  The orientation towards objects is all that is required to be OO.  Not 
the injection of dependencies.  That came much later.

In fact, if you look up the Wikipedia definition of Aspect Oriented 
Programming, you will see that the definition's author considers AOP to be 
"not OO" but in fact a successor to OO development.

I agree that you don't achieve the goals of good development by splashing a 
pile of objects against your problem.  Object Orientation is no silver 
bullet.  You have to carefully analyze the situation and separate your 
interface from your implementation, with the goals of reducing coupling and 
increasing cohesion.  I would go a step further and state that a better OO 
program can be built by using Commonality Variability Analysis (CVA) (Jim 
Coplien's idea).  However, CVA will not, naturally, lead to AOP.  That 
cognitive leap wasn't obvious.

On the other hand, NMock and Reflection *do* naturally lead folks to AOP. 
I find these innovations in OOP to be much more of an indicator of forward 
movement towards AOP than the fundamental OO concepts of inheritance and 
polymorphism.

-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
-- 


0
nickmalik (325)
7/2/2005 2:16:46 PM
Nick Malik [Microsoft] wrote:
> <xmp333@yahoo.com> wrote in message 
> news:1120140120.479993.305730@o13g2000cwo.googlegroups.com...
> 
>>
>>Tim Haugton wrote:
>>[Snip]
>>
>>>Of course, the DIP is just an example of this, but I feel the DIP is at
>>>its most potent across assembly (package) boundaries, where it would
>>>indeed be an effective OO-ometer. Inside an assembly, I think the DIP
>>>can *often* left unused without plunging into procedural code.
>>
>>What's DIP?
>> 
> Dependency Injection Pattern
> 
> http://www.martinfowler.com/articles/injection.html
> 

Not in this case.  Bob was referring to the Dependency Inversion 
Principle: http://www.objectmentor.com/resources/articles/dip.pdf

It's funny, I hadn't noticed the acronyms were the same before :)


Michael Feathers
author, Working Effectively with Legacy Code (Prentice Hall 2005)
www.objectmentor.com
0
mfeathers2 (74)
7/2/2005 2:17:08 PM
>>>
>>>>Of course, the DIP is just an example of this, but I feel the DIP is at
>>>>its most potent across assembly (package) boundaries, where it would
>>>>indeed be an effective OO-ometer. Inside an assembly, I think the DIP
>>>>can *often* left unused without plunging into procedural code.
>>>
>>>What's DIP?
>>>
>> Dependency Injection Pattern
>>
>
> Not in this case.  Bob was referring to the Dependency Inversion 
> Principle: http://www.objectmentor.com/resources/articles/dip.pdf
>
> It's funny, I hadn't noticed the acronyms were the same before :)
>
>
> Michael Feathers


My apologies.  I typed my reply before checking out his link.  I hadn't 
noticed the acronym similarity either.

Come to think of it, the two are related.  One could say that the Dependency 
Injection Pattern is a pattern-level implementation of the Dependency Inversion 
Principle.
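
Roughly, in code (every name below is invented just for illustration): the 
high-level class depends only on an abstraction (the Inversion principle), 
and the concrete implementation is handed to it from outside (the Injection 
pattern).

public interface IMessageSender            // abstraction the high-level code depends on
{
    void Send(string text);
}

public class OrderNotifier                 // high-level policy: knows only the interface
{
    private readonly IMessageSender sender;

    public OrderNotifier(IMessageSender sender)   // the concrete sender is injected
    {
        this.sender = sender;
    }

    public void OrderShipped(string orderId)
    {
        sender.Send("Order " + orderId + " has shipped.");
    }
}

public class ConsoleSender : IMessageSender       // low-level detail implements the abstraction
{
    public void Send(string text)
    {
        System.Console.WriteLine(text);
    }
}

public class Wiring
{
    public static void Main()
    {
        // The wiring happens out here, not inside OrderNotifier, so the
        // source dependency points from the detail toward the abstraction.
        OrderNotifier notifier = new OrderNotifier(new ConsoleSender());
        notifier.OrderShipped("1234");
    }
}

Swap ConsoleSender for anything else that implements IMessageSender and 
OrderNotifier never changes.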

-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
-- 


0
nickmalik (325)
7/2/2005 2:35:39 PM
Hello Plan 9,

(Is that a reference to "Plan 9 from Outer Space," perhaps?  Ahhh... a fan 
of bad SF Cinema :-)


>> OOD helps you to organize and represents the information. All people
>> makes a mental design to resolve a problem and then programs the
>> solution. In OO terms one first designs an OO solution and then programs
>> it in an OO language (or not :) ).
>
> But non-OOD is also design.  Non-OOD focuses more on deriving the
> algorithms, while OOD focuses more on the data, although the two
> intersect.  One can derive data from the algorithms/relations and vice
> versa.  So, given two different ways of doing something, and a former
> way that is clearer and more direct, why use the latter?  If it's an
> issue of flexibility/re-usability, what's the thinking behind that?

Really, the goal is not so much to reuse things as to separate the things 
that change at different times, to make them easier to change.  We start with 
the limitations of people and create languages that those people can use. 
If you watch the evolution away from OO and towards things like AOP and 
lightweight frameworks, it is part of an ongoing process towards the 
separation of "things that change rarely" from "things that change 
frequently."

The obvious first efforts were the function libraries that would come with a 
language.  We all knew that there had to be a way to produce the square root 
of a number.  That mechanism is older than computer science (although many 
implementations exist).  The fundamental definition doesn't change very 
often, so it is easy to place something like SQRT() in a math library and be 
assured of its longevity.  That's procedural.

What OO gave us was a way to abstract that thinking a bit more... to look at 
the activities of our applications and find those activities that are, 
themselves, fundamental and rarely changing.  If we allow those activities 
to operate on interfaces, and not on actual items, we can separate these 
fundamental activities (rarely changing) from the implemented objects 
(frequently changing).

In this way, we earn reuse, but not by seeking it.  We are seeking ease of 
maintenance, and ease of understanding.
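
A tiny sketch of that split, with names invented just for illustration: the 
activity below is written against an interface and changes rarely; the 
concrete items are the frequently changing part.

public interface IPriced                   // the stable part: what the activity needs to know
{
    double Price { get; }
}

public class Totaler                       // the activity itself; it changes rarely
{
    public static double Sum(IPriced[] items)
    {
        double total = 0;
        foreach (IPriced item in items)
            total += item.Price;
        return total;
    }
}

public class Book : IPriced                // the frequently changing part: new kinds of item
{
    public double Price { get { return 12.50; } }
}

public class Song : IPriced
{
    public double Price { get { return 0.99; } }
}

public class TotalerDemo
{
    public static void Main()
    {
        IPriced[] basket = { new Book(), new Song() };
        System.Console.WriteLine(Totaler.Sum(basket));   // prints the basket total
    }
}

New kinds of priced things can be added without touching Totaler at all.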

>
> Here's an example.  Let's say we have a cash register program with the
> options to add, subtract, print a receipt or clear the total.
>
> Non-OO analysis:
> =========
> Op = current operation invoked
> Price = Price Entered
> X' = value change for X for next state
> Total = running total
> ==========
> Then, the state of the system (assume it waits for input) is:
> ==========
> (Op = Start ^ Op' = Clear) v
> (Op = Add ^ Total' = Total + Price) v
> (Op = Sub ^ Total' = Total - Price) v
> (Op = Clear ^ Total' = 0) v
> (Op = Print ^ Printed(Total))
> ==========

Ah, predicate calculus. I haven't done this in years.  For a while, I was 
pretty good in ML and later in Prolog.  However, you'll have to forgive my 
rustiness in the notations you used.  They are not directly familiar, even 
though I believe that I understand what you are trying to say.

This is a very logical approach to the state of a single object.  Your 
example, however, is not typical.  Most applications are not like a cash 
register.

A cash register is a machine that holds state for a single long-running 
transaction.  The state is a series of transactions against inventory.  Even 
in this very simple description, your model is too light, in that you have 
to represent, somehow, the inventory aspect of modern cash registers.  If 
you do not, your example devolves into an adding machine and a cash drawer.

So let's evolve an adding machine into a cash register...

/// note: the following code is considerably simplified.

class PurchaseTicket
{
    public double RunningTotal = 0;

    public void AddToReciept(Item MyItem, int Quantity, Outputter PrintOutputter)
    {
        RunningTotal += MyItem.Price * Quantity;
        PrintOutputter.PrintLine("{0}\t{1} @ {2}\n", MyItem.Description, Quantity, MyItem.Price);
    }

    public void DeductFromReciept(Item MyItem, int Quantity, Outputter PrintOutputter)
    {
        RunningTotal -= MyItem.Price * Quantity;
        PrintOutputter.PrintLine("{0}\t{1} @ {2} Credit\n", MyItem.Description, Quantity, MyItem.Price);
    }
}

There is no 'clear' in that the PurchaseTicket only exists for a single 
customer.  When the next customer comes, a new ticket is created.  If an 
entire transaction is started over, the exact same logic applies.

Some would argue that an item should print itself.  I disagree. The ticket 
would know the format of the output.  There is a grey area here.  The point 
is that placement of the "knowledge" (what does output look like) needs to 
make sense to a developer.  That way, when they crack open the code a year 
later, they can find it fairly quickly.

We have a dependency on the notion of an Outputter, and it has the 
interesting method of PrintLine().  Other than that, the code above has no 
way of knowing (or caring) if the Outputter is actually an interface and 
that the object passed in is simply one that implements that interface.

Importantly, we can add inventory functions fairly readily because we use 
the Item object to contain information about the thing we are adding to our 
ticket.  The interface, above, doesn't change very much.  One thing that 
does change: we can raise an error if we attempt to remove things from the 
ticket that were never in there in the first place:

class PurchaseTicket
{
    public double RunningTotal = 0;

    private List<Item> TicketList = new List<Item>();

    public void AddToReciept(Item MyItem, int Quantity, Outputter PrintOutputter)
    {
        RunningTotal += MyItem.Price * Quantity;
        PrintOutputter.PrintLine("{0}\t{1} @ {2}\n", MyItem.Description, Quantity, MyItem.Price);
        for (int i = 0; i < Quantity; i++)
            TicketList.Add(MyItem);
    }

    public void DeductFromReciept(Item MyItem, int Quantity, Outputter PrintOutputter)
    {
        if (TicketList.CountOf(MyItem) < Quantity)
            throw new ApplicationException("Item does not exist in this quantity in the ticket");

        for (int i = 0; i < Quantity; i++)
            TicketList.Remove(MyItem);

        RunningTotal -= MyItem.Price * Quantity;
        PrintOutputter.PrintLine("{0}\t{1} @ {2} Credit\n", MyItem.Description, Quantity, MyItem.Price);
    }

    public void CommitOnPayment(InventoryManager Iman)
    {
        foreach (Item TicketItem in TicketList)
            Iman.RemoveFromInventory(TicketItem);
    }
}

In this example, I added a new dependency.  We are now coupled to the 
definitions of a List.  That List has a couple of interesting methods, like 
CountOf, Add, and Remove. I don't know what other methods it has, nor should 
I care.  There are a lot of things I could say about the List, but they 
would be off topic.

The point is that, by using OO programming, I've encapsulated the idea of a 
list of items.  That list maintains a running total of items that need to 
be removed from inventory when the CommitOnPayment method is called.

I can very easily implement this program using Item as a concrete class. The 
neat thing about OO is that, later, I can decide to use DIP (either 
definition :-) and change Item to an interface.  I can implement the Item 
interface in another part of the application in any way that I'd like, and 
the code in this part would not change at all.

This is an example of the Liskov Substitution Principle.  [paraphrased - 
badly] Any subtype of a type can be substituted for any other subtype as 
long as the code refers to the type.
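
Here is a small, self-contained sketch of that step (my own code, not part of 
the example above; I'm only assuming that Item exposes the Price and 
Description members the ticket code already uses):

public interface Item
{
    double Price { get; }
    string Description { get; }
}

public class ScannedProduct : Item         // one implementation: the price is stored
{
    private readonly string description;
    private readonly double price;

    public ScannedProduct(string description, double price)
    {
        this.description = description;
        this.price = price;
    }

    public double Price { get { return price; } }
    public string Description { get { return description; } }
}

public class WeighedProduce : Item         // another implementation: the price is computed
{
    private readonly double pricePerKilo;
    private readonly double kilos;

    public WeighedProduce(double pricePerKilo, double kilos)
    {
        this.pricePerKilo = pricePerKilo;
        this.kilos = kilos;
    }

    public double Price { get { return pricePerKilo * kilos; } }
    public string Description { get { return kilos + " kg of produce"; } }
}

public class ItemDemo
{
    public static void Main()
    {
        // Code written against Item works with either implementation.
        Item[] items = { new ScannedProduct("Milk", 1.80), new WeighedProduce(2.50, 1.2) };
        foreach (Item item in items)
            System.Console.WriteLine(item.Description + ": " + item.Price);
    }
}

The PurchaseTicket code above would take either of these without changing a 
line.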

>
> Given I'm not knowledgable about OOD, please forgive a possible
> butchering of OOD, but here's how I can see an OOD approach to the
> problem.  First, I identify the nouns in the system:
>
> User
> CashRegister
> Total
> Operation
> Price

You are on thin ice already.

>
> Now the relationships.  The User interacts with the CashRegister; in
> response, the CashRegister creates the appropriate Operation and allows
> access to the Price.  Price and Total are just numbers.  Furthermore,
> since there are several types of Operations, Operation is an abstract
> base class with Add, Sub, Print, and Clear as children sharing a
> consistent interface to allow for more streamlined code.

Interesting analysis.  Not a good one.

Ask yourself the question: in my system, do I have competing needs?  If I do 
not, then use the simplest possible implementation that I can.  If I do, 
then look for what is in common between them, and what is variable.  Pull 
the variations down in the inheritance tree, and push the commonality up. 
Prefer composition over inheritance.  Make your interfaces open for 
extension but closed for modification (google the "Open Closed Principle").

I would NOT suggest that the operations Add, Sub, Print, and Clear are 
variations.  In fact, on your list of items, I'd say that they are quite 
common to the concept of a PurchaseTicket (as described above).  Certainly, 
you could add other types of purchase tickets (say... something that is 
electronically transmitted rather than being individually scanned).  In that 
case, I'd create an interface from PurchaseTicket, and move my code above to 
a concrete child of that interface.  The calling code would be 
none-the-wiser, but I'd be able to create as many different types of 
purchase ticket as my customer needs, while limiting the changes to the 
fundamental notions of a purchase ticket.
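
To show the shape of that move, here is a deliberately simplified, 
self-contained sketch; the names are mine and the interface is slimmed way 
down from the class above.

public interface IPurchaseTicket
{
    void Add(string description, double price, int quantity);
    double RunningTotal { get; }
}

public class ScannedPurchaseTicket : IPurchaseTicket    // the hand-scanned ticket
{
    private double runningTotal;

    public void Add(string description, double price, int quantity)
    {
        runningTotal += price * quantity;
        System.Console.WriteLine("{0}\t{1} @ {2}", description, quantity, price);
    }

    public double RunningTotal { get { return runningTotal; } }
}

public class TransmittedPurchaseTicket : IPurchaseTicket    // added later, for electronic orders
{
    private double runningTotal;

    public void Add(string description, double price, int quantity)
    {
        runningTotal += price * quantity;    // no printing; the sender keeps its own copy
    }

    public double RunningTotal { get { return runningTotal; } }
}

public class TicketDemo
{
    public static void Main()
    {
        // The calling code is none the wiser about which kind of ticket it gets.
        IPurchaseTicket[] tickets = { new ScannedPurchaseTicket(), new TransmittedPurchaseTicket() };
        foreach (IPurchaseTicket ticket in tickets)
        {
            ticket.Add("Milk", 1.80, 2);
            System.Console.WriteLine(ticket.RunningTotal);
        }
    }
}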


> So, we get
> the following (assuming a non-event model and garbage collection for
> simplicity):
>
> loop
>   CashRegister.Interact
>   op = CashRegister.Op
>   op.DoOp(Total, CashRegister.Price())
>
> However, this is a bit of a kludge.  Clear and Print need only one
> parameter, but take 2 in order to conform with the interface.  In
> addition, Print gets mutable access to the Total even though it doesn't
> need it, which is unsafe.  Furthermore, the analysis up to this point
> was not as clear (or IMO as verifiable) as the previous one.

I'd say: look for what you WANT to encapsulate.  Why in the world would you 
want to encapsulate this operation at this time?  You have stated no 
business need for this encapsulation.  Certainly, you can encapsulate 
operations.  In fact, the decorator, command, strategy and 
chain-of-responsibility patterns all focus on different approaches to the 
problem of encapsulating an operation.  However, your comments imply that 
you would START there, and I, for the life of me, can't see any reason to do 
so.
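
For reference, "encapsulating an operation" in, say, the Command style looks 
roughly like the sketch below (generic names of my own; this shows the shape, 
not a recommendation to start the cash register there).

public interface ICommand
{
    void Execute();
}

public class PrintReceiptCommand : ICommand
{
    private readonly string receiptText;

    public PrintReceiptCommand(string receiptText)
    {
        this.receiptText = receiptText;
    }

    public void Execute()
    {
        System.Console.WriteLine(receiptText);
    }
}

public class CommandDemo
{
    public static void Main()
    {
        // The caller queues and runs operations without knowing what they do.
        ICommand[] pending = { new PrintReceiptCommand("Total: 12.50") };
        foreach (ICommand command in pending)
            command.Execute();
    }
}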

>
> On the other hand, maybe I was solving the wrong problem.  Maybe I need
> to look at the pattern of this program and build the architecture then
> customize it.

Yech.

Use someone else's architecture.  Most OO systems have frameworks that they 
operate in.  Use that.  Build only what you need.  Abstract only what you 
need to abstract.

As far as building your own architecture: YAGNI ("You Ain't Gonna Need It").


> In this case, what I'm trying to build is a type of
> machine that accepts operations, parameters, and can maintain a state
> that is the result of the previous operations.  Analyzing it this way,
> we get:

No.  You are trying to build a cash register.  Your "procedural" example 
made no notion of a machine with abstract operations.  Why add requirements 
the moment you enter the OO world?

>
> User
> Machine
> Params
> OutputState
> Operation
>
> Params are what the Machine returns as data -- they are
> instruction/data pairs (where data can be an additional collection).
> OutputState can store any number of outputs and allows read/write
> access.  Otherwise, the semantics are the same:
>
> [Assume output is an instance of OutputState]
> loop
>   Machine.Interact
>   plist = Machine.Params
>   foreach i in plist.Size
>      Operation op = Factory.CreateOp(plist[i].Op)
>      op.DoOp(plist[i].Data, output)
>

That is the most unreadable bit of code I've seen in a long time.  I believe 
one of the other posters put up a good quote: "to be reusable, you have to 
first be usable."  That bit is neither.

You have certainly hit on one of the problems with OO analysis when it 
abstracts the wrong things: you can obfuscate the code so wildly as to make 
it completely unmaintainable.  At that point, you've completely defeated the 
purpose of Object Oriented development.

When you look at something like the snip above, your "gut" should say: "this 
code smells bad" and you should look for opportunities to refactor it.

> This is the generalized pattern, and by deriving different Operations,
> Factories, OutputsStates, and even Machines we can simulate a wide
> variety of machines -- perhaps even primitive operating systems.

Why would we want to?  Was this a requirement of the cash register?  Once 
again, encapsulate what you need to encapsulate, when you need it, and not 
before.  OO is a balance.  You can go too far (hint: you have).

> We can accomplish things like history lists, screen writes, etc, all with
> the same basic framework, because we solved a general problem.

My code (above) accomplishes the exact same things, and you get the added 
benefit of being able to read it.

> Now,
> all future machine-like tasks will consist solely of deriving the
> appropriate classes.
>
> In fact, this whole thing could be made a method of a machine class:
>
> Machine.Run(factory, output)
>
> Then people simply derive from this class, over-ride Interact (and
> anything else they want), and provide the necessary implementations.
>
> Comments?
>

Don't ever write code that I have to maintain.  :-)

>
>
>> Design is more abstract than programming, you can design thins that are
>> impossible to translate directly to the code (then you need to
>> "normalize" things before programming it).
>
> Right, which is what I'm seeing from both cases.  An analysis of the
> problem simply states relations and some of these relational statements
> can't even be checked by code (take relations involving quantifications
> over infinite sets or involving convenience functions that don't exist
> in the implementation language).  What differs here is what we are
> solving.  Are we solving the problem at hand, or do we choose to solve
> a generalization one of whose instances is the problem at hand,
> ostensibly for more flexibility?

We solve the problem at hand, using mechanisms that can be generalized WHEN 
we need them (and not before).

We don't do anything "ostensibly" for flexibility.  We write flexible code 
because it is actually a second nature to do so.  This is "object thinking".

>
>> > This is why I need to understand the philosophy.  I want to understand
>> > how to "think OOD".  I don't care about specific design techniques
>> > unless they help illustrate this shift in thinking.  What can you tell
>> > me about this?  What references (online and printed are fine) can you
>> > point me to?  I'd love something that contrasts the two methodologies
>> > and provides examples to drive it home.  Something that explains and
>> > justifies OOD from a more philosophical perspective.

I'm going to recommend a very readable book called "Design Patterns 
Explained" by Shalloway and Trott.  Make sure to get the second edition. 
There is an extended section on Commonality Variability Analysis.  CVA was 
introduced by Jim Coplien but his original work went out of print, so you'll 
need to get it second hand (somewhat).  The nice thing about this book is 
that it is written from the standpoint of an evolution in thinking.  The 
author describes "aha" moments and how they led to a different approach in 
the ways to solve problems.

I hope this helps.

-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
-- 


0
nickmalik (325)
7/2/2005 3:48:23 PM
"Michael Feathers" <mfeathers@objectmentor.com> wrote in message 
news:42C6A1E4.1060508@objectmentor.com...
> Nick Malik [Microsoft] wrote:
>> <xmp333@yahoo.com> wrote in message 
>> news:1120140120.479993.305730@o13g2000cwo.googlegroups.com...
>>
>>>
>>>Tim Haugton wrote:
>>>[Snip]
>>>
>>>>Of course, the DIP is just an example of this, but I feel the DIP is at
>>>>its most potent across assembly (package) boundaries, where it would
>>>>indeed be an effective OO-ometer. Inside an assembly, I think the DIP
>>>>can *often* left unused without plunging into procedural code.
>>>
>>>What's DIP?
>>>
>> Dependency Injection Pattern
>>
>> http://www.martinfowler.com/articles/injection.html
>>
>
> Not in this case.  Bob was referring to the Dependency Inversion 
> Principle: http://www.objectmentor.com/resources/articles/dip.pdf
>
> It's funny, I hadn't noticed the acronyms were the same before :)
>
>
> Michael Feathers
> author, Working Effectively with Legacy Code (Prentice Hall 2005)
> www.objectmentor.com 


0
nickmalik (325)
7/2/2005 3:48:39 PM
Hello Robert Martin,

My other response to your post was errant in some ways.  I had assumed that, 
when you used the acronym "DIP", you meant the "Dependency Injection 
Pattern" as described by Martin Fowler.  Michael Feathers pointed out that 
you were referring to a much older, but related, concept, the "Dependency 
Inversion Principle."  (same acronym)

So some of my comments like "that came much later" don't apply.  I believe 
that my fundamental stand is solid, in that OO does not define the right way 
to practice it, but I cannot say that my words are all that coherent in 
retrospect.  Sorry for the confusion.

-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
--
"Robert C. Martin" <unclebob@objectmentor.com> wrote in message 
news:0jk7c1tj3hhc2oua6pe60ulk719pr41sd1@4ax.com...
> On 30 Jun 2005 02:38:48 -0700, "Mark Nicholls"
> <Nicholls.Mark@mtvne.com> wrote:
>
>>> >Hi,
>>> >
>>> >
>>> >I understand the mechanics of objects; I use objects, polymorphism,
>>> >even patterns.  I understand some of OOD, but I'm trying to grasp the
>>> >bigger picture, how to "think in OOD".  I hope an example will help:
>>>
>>> There has been an enormous amount of debate and discussion about this
>>> topic.  Some folks believe that OO is an extremely high level
>>> technique used to make models of the world.  Others believe that OO is
>>> a mechanism for structuring source code.  I fall into the latter camp.
>>
>>I fall into both (though philosophy is a little strong), I'd rather
>>call it a school.
>>
>>>
>>> To me, the whole notion that OO is a philosophy is silly.
>>
>>do you think there are any philosophical notions behind SE...or simply
>>that OO is not (a distinctive) one?
>
> Whether there are philosophies of software development is irrelevant;
> though I think there may be.  My point is that OO has often been
> touted as a "grand overarching philosophy" having more to do with
> life, the universe, and everything, than with software.
>
> OO is certainly a different way of thinking about software.  From that
> point of view it is a kind of philosophy.  But it is a way of thinking
> about software at the structural level; not the grand "analysis" level
> (whatever that word happens to mean.)
>
>>OK, it is possible to invert the physical dependencies between sets of
>>classes (modules) by moving entities between them ...we should at least
>>agree here......are you claiming that this is the key characteristic
>>value of OO?
>
> Yes.
>>
>>> If the high level policy modules
>>> depend on (directly call) lower level modules, which then call even
>>> lower level modules, then the program is procedural and not object
>>> oriented.
>>
>>this is utterly bizarre....you are claiming that the OO ness of a
>>system is not dependent on the logical model (i.e. 'class',
>>'interface'), but on the allocation of a class or interface to a
>>physical deployment entity (module)...and further that even if an
>>application uses classes, interfaces, encapsulation, abstraction,
>>polymorphism, it can be considered (strictly) functional in certain
>>deployments........I disagree.
>
> Not quite. There is a logical component.  Higher level policies are
> decoupled from lower level policies by having both depend on
> interfaces or abstract classes.
>>
>>i.e.
>>
>>interface IA
>>{
>>}
>>
>>class CA : IA
>>{
>>}
>>
>>interface IB
>>{
>>}
>>
>>class CB1
>>{
>> IA a = new CA();
>>}
>>
>>class CB2
>>{
>>}
>>
>>so I can vary the OO'ness of the above code, not by changing the code,
>>but by how I allocate each entity to a module?!?!?
>
> Certainly.  If the allocation of classes to modules does not
> effectively decouple those modules, then you don't have an OO
> solution.  The fact that classes and interfaces are used is
> irrelevant, if those classes and interfaces are not used to create an
> OO structure.
>
>>If I put it all in 1 module....its not OO?
>
> Its not so much a matter of whether its in one module or not.  It's a
> matter of whether or not there is an obvious and convenient fracture
> zone that could be used to separate the modules.
>
>>> However, if the high level policy modules are independent, or depend
>>> solely on polymorphic interfaces that the lower level modules
>>> implement, then the system is OO.
>>
>>So if a physical module (set of classes) depends on 1 non polymorphic
>>interfaces and n (>0) polymorphic ones, its not OO??!?!?!?!?!?
>
> OO-ness is not binary, it is a continuum.  To the extent that a module
> depends on concretions, it is not OO.  To the extent that it depends
> on abstractions it is.
>
>>Would you characterise a C program where the dependencies between
>>'modules' were via function pointers as OO?
>
> Yes, so long as the decoupling created by the function pointers
> allowed high level policy to be separated from lower level details.
>
>>(I actually would
>>sympathise with an answer like 'a bit' but the lack of something
>>corresponding to an object would worry me)
>
> The function pointers are the methods of an interface.  Typically
> those pointers will be held in some kind of data structure.  That data
> structure is an object.
>
> Consider, for example, the FILE data type in C.  Deep within it there
> are function pointers to the read, write, open, close, seek methods.
> FILEs are objects.
>
>
>
> -----
> Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
> Object Mentor Inc.            | blog:  www.butunclebob.com
> The Agile Transition Experts  | web:   www.objectmentor.com
> 800-338-6716
>
>
> "The aim of science is not to open the door to infinite wisdom,
> but to set a limit to infinite error."
>    -- Bertolt Brecht, Life of Galileo 


0
nickmalik (325)
7/2/2005 3:54:29 PM
Responding to Martin...

>>First, note that OO development traditionally has three stages:
> 
> 
> [OOA, OOD, OOP]
> 
> Actually, this is not quite as traditional as you might think at
> first.  
> 
> OOP came first with languages like Simula, Smalltalk, Obj-C, and C++.

Simula had OO-like features, but it was no more an OOPL than JSD was 
an OOA/D methodology, despite having OOA/D-like features nearly a decade 
before Smalltalk.  It is true that Smalltalk preceded OOA/D but by the 
time Objective-C and C++ got on the scene there were already full OOA/D 
methodologies around. [Jacobson claims OOSE's origins in '68, but I only 
recall his stuff from the late '70s.]

Those early methodologies borrowed liberally from the distinctions 
between analysis, design, and implementation in structured programming. 
  What OOA/D/P brought to the table was a more useful and less fuzzy set 
of definitions (functional vs. nonfunctional requirements, customer vs. 
computing views, strategic vs. tactical).  IOW, the OO paradigm provided 
a better package for bridging the gap between customer spaces and 
computing spaces.

> Nobody knows exactly (or even inexactly) what OOA is.   There are a number of
> books and papers written about it, but they don't agree.  There is not
> even a set of cogent schools of thought.  OOA is a term that we bandy
> about with authority, but have no real definition for.

You keep repeating this mantra but it is still untrue.  The OOA books 
only differ in details -- just like books about OOP.  Is the definition 
of Crystal exactly the same as XP?  Is your view of dependency 
management exactly the same as Fowler's?  The OOA/D books I have on my 
bookshelf differ substantially in detail but they are agreed about the 
distinctions I originally provided.  You are the only person I know who 
has written an OOD book and claims not to know what OOA is at that level 
of distinction.

>>requirements -> OOA model -> OOD model -> OOP model -> executable
> 
> 
> Remarkably, this thought was formalized in a very famous paper written
> in 1970 by Dr. Winston Royce, entitled "Managing the Development of
> Large Software Systems".  This paper is often called "The Father of
> the Waterfall".  The remarkable part is that the paper firmly
> denounces the practice in favor of an approach in which A, D, and P
> are done iteratively.
> 
> This iterative transformation means that we go from A to D to P in a
> matter of minutes, repeating the process many times per day,
> delivering working software every week.

Where is there anything in my descriptions here and elsewhere that 
precludes iterative development?  Have I not asserted on several other 
occasions that IID is routinely practiced with this development model at 
any scale?

This is just a forensic ploy to associate OOA/D with BDUF (as defined by 
XP) by innuendo.  This is like trying to deal with Topmind.



*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
hsl@pathfindermda.com
Pathfinder Solutions  -- Put MDA to Work
http://www.pathfindermda.com
blog: http://pathfinderpeople.blogs.com/hslahman
(888)OOA-PATH



0
h.lahman (3600)
7/2/2005 5:06:28 PM
> >> OOD helps you to organize and represents the information. All people
> >> makes a mental design to resolve a problem and then programs the
> >> solution. In OO terms one first designs an OO solution and then programs
> >> it in an OO language (or not :) ).
> >
> > But non-OOD is also design.  Non-OOD focuses more on deriving the
> > algorithms, while OOD focuses more on the data, although the two
> > intersect.  One can derive data from the algorithms/relations and vice
> > versa.  So, given two different ways of doing something, and a former
> > way that is clearer and more direct, why use the latter?  If it's an
> > issue of flexibility/re-usability, what's the thinking behind that?
>
> Really, the goal is not so much to reuse things as to seperate the things
> that change a different times, to make them easier to change.  We start with
> the limitations of people and create languages that those people can use.
> If you watch the evolution away from OO and towards things like AOP and
> lightweight frameworks, it is part of an ongoing process towards the
> seperation of "things that change rarely" from "things that change
> frequently."
>
> The obvious first efforts were the function libraries that would come with a
> language.  We all knew that there had to be a way to produce the square root
> of a number.  That mechanism is older than computer science (although many
> implementations exist).  The fundamental definition doesn't change very
> often so it is easy to place something like SQRT() in a math library and be
> assured of its longetivity.  That's procedural.
>
> What OO gave us was a way to abstract that thinking a bit more... to look at
> the activities of our applications and find those activities that are,
> themselves, fundamental and rarely changing.  If we allow those activities
> to operate on interfaces, and not on actual items, we can seperate these
> fundamental activities (rarely changing) from the implemented objects
> (frequently changing).

In my domain one often cannot know ahead of time what will change. In
physics, chemistry, etc., one may be able to determine such because God
does not change the laws of physics very often, but not in most of the
business and intellectual property domains where the rules are set by
(seemingly) capricious managers, marketers, owners, and lawmakers.
Interfaces need tweaking as often as implementation.

I too seek techniques that are change-friendly; but OO does not appear
to fit that bill. Maybe if one sticks in enough indirection it might,
but then you are battling layers and layers of interfaces that are
uglier than being closer to the implementation would be.

-T-

0
topmind (2124)
7/2/2005 5:55:05 PM
Top,

> In physics, chemistry, etc., one may be able to determine such because God
> does not change the laws of physics very often,

The latest issue of Scientific American has an article by John Barrow 
and John Webb. It suggests that the fine structure "constant" - actually 
a ratio involving other "constants" such as the speed of light - changed 
over (cosmological) time.

> Interfaces need tweaking as often as implementation.

Not *exactly* as often. There are different rates of change. A whole 
spectrum of them, from "changes all the goddam time" to "changes rather 
infrequently". (With the fine structure constant at the latter end, 
perhaps...)

Capturing different rates of change in the structure of programs is a 
win. Config files, data tables, abstract data types, configuration 
management, metadata, physical distribution of processors - those are 
all tactics for expressing differently things that have different rates 
of change.
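
A toy illustration of that split (the file name and format are invented): the 
arithmetic changes rarely, so it is compiled in; the tax rate changes often, 
so it lives in a config file that can be edited without recompiling.

using System;
using System.IO;

public class TaxDemo
{
    // Changes rarely: lives in compiled code.
    public static double TotalWithTax(double subtotal, double taxRate)
    {
        return subtotal * (1.0 + taxRate);
    }

    public static void Main()
    {
        // Changes often: read from a one-line file such as "0.0635".
        double taxRate = double.Parse(File.ReadAllText("taxrate.cfg").Trim());
        Console.WriteLine(TotalWithTax(100.0, taxRate));
    }
}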

Laurent
0
laurent (379)
7/2/2005 8:39:15 PM
<xmp333@yahoo.com> wrote in message
news:1120058689.965079.293170@g43g2000cwa.googlegroups.com...
> Hi,
> bigger picture, how to "think in OOD".  I hope an example will help:
>
> Let's say I want a program that given A produces B.  I have 2 ways of
> analyzing this.

You have in effect already analyzed it, because you have described
it in functional terms: a translation of 'A' into 'B'.

Try describing a word processor program in those same terms.
I have a word processor that when given <what?> produces <what?>.
I guess it could be 'keystrokes' and 'files',
or maybe 'words', 'sentences', 'paragraphs', 'pages' and
'documents', 'letters', 'books', etc.

How you describe the problem will have a big influence on how
you analyze it and whether OO seems natural or not.

There are some problems which naturally lead to non-OO programs
because they are very simple functions. The more complex the problem,
the more likely that an OO solution will feel more natural.

> I can either formally specify how B relates to A, or I

And of course B and A could be objects...

> can use the OOD approach in which I identify nouns, relationships,
> arity, patterns, etc...

It's one approach to identifying objects and their responsibilities,
but it's not the only one, and many think it's not a great one...

> The first approach is much more direct and is very reliable, so
> why would I use the OOD approach?

Define reliable. Back to the word processor: how do you formally
define the behaviour of a word processor directly and reliably?
It is possible to build procedural word processors of course -
and some of the best were built that way - but I'd debate whether it
is any more straightforward than an OO approach...

> Or am I comparing apples to oranges?  Is OOD for designing
> applications, or is it for designing an application pattern (framework?

It can do both, but so can functional decomposition.

> "architecture" as the primary purpose and then customize it to provide
> the "program" as the secondary purpose?

You need architecture at some level regardless of the design approach.
They are simply alternatives.

> and provides examples to drive it home.  Something that explains and
> justifies OOD from a more philosophical perspective.

Have you tried reading Grady Booch's classic on OOAD? The first section
of that puts OO in a programming context.

To pick up another point you made in another post in this thread:
OOD is not based around data. It should be based around behaviour;
the data is only there to support the behaviour. There are data
oriented design techniques too, but they are no more like OOD than
functional decomposition is. When doing OOD you are looking for
concepts which exhibit key behaviour in your system. Those conceptual
things may well require some data to perform their function (sic)
within the system, but the data is supportive of the behaviour (which is
why it is hidden inside the interface).
There are lots of conceptual ways of looking at OO that have
been suggested over the years. Which ones will work for you is hard
to say, but they include things like "Actors on a stage", "Ants in
a colony", "Independent parallel processes", and so on. They all try
to capture the idea of independent objects, each with their own roles
and responsibilities within the system, communicating by sending
messages to each other. If you can get that concept in mind rather
than a bunch of function calls it might help. Maybe... :-)


HTH,

Alan G.


0
7/3/2005 7:28:14 AM
Michael Feathers wrote:
> krasicki wrote:
>
> > Not all Computer Scientists are successful with design objects.  To be
> > successful, the individual using them must be able to think in very
> > abstract ways.  Not everyone can and even when they can, corporate
> > policy and practice often make the effort impossible.
>
> I agree that it takes work, but I don't think that deep abstraction is
> involved.

I don't think deep abstraction is necessary either but the ability to
juggle, weigh, and formalize multiple, sometimes conflicting complex
ideas is.  That's not something everyone can do and, quite frankly,
it's obvious.

> To me, object design is more like concretization.  The steps
> are: 1)think of thing that can solve a problem for you, 2) think of a
> way to ask it to solve the problem 3) go inside the thing and solve
> the problem.

That's certainly one way to look at it but in the world of commercial
development it is closer to:

The company has bought a product or is orthodox about a methodology, or
is managed by raving idiots who have very short tempers and zero
patience.  If you need the money you play the game.

The game is to design something that is reasonably functional under the
circumstances knowing full well that you stand a snowball's chance in
hell of running the gauntlet.

Design amounts to ensuring that the thing accurately processes the data
the client is responsible for.  This is the minimum design criteria any
software engineer needs to accomplish.

The *problem* then becomes trying to jump through the hoops that are
usually political and sometimes insurmountable.  Here, no design
methodology or tool exists - except maybe dance lessons.

I must add that on rare occasions you have the opportunity to start and
proceed with a blank slate but it is rare indeed.

> It's just a little different from the procedural mindset
> which is: 1) think of a way to solve the problem 3) solve the problem.

Well, the difference is really the difference in asking the easiest way
to get from here to there vs. asking how best to create discrete and
logically cohesive application framework components.  And in the case
of the latter there is no one set *right* answer.  Two children's
graphic applications may function identically yet be designed wholly
differently.  This is not always obvious to the novice or even the
initiated.

And.

In some shops you aren't allowed to even think that there is another
way.

> The abstraction in OO is really all about thinking about a way to ask
> for a solution rather than leaping into a solution.

Sorry.  That does not ring true.

>
> > This frustrating truth is largely responsible for the Extreme
> > *whathaveyou-usually-Programming* phenomenons.  With a straight face,
> > these proponents assert that if design is not egalitarian and if
> > companies don't respect it then -snip, snip- out with it except for
> > perfunctory lip-service.
>
> How many XP teams have you worked with?

Why do you ask?  The methodology is well-documented.  It is considered
a lightweight methodology, is it not?  It cannot be lightweight unless
it is lighter somewhere.  What is lighter?  The sum of programming is the
same, correct?

>
> > One cannot glibly 'think' in OOD, there isn't any such thing. OOD is
> > very hard work, time-consuming, expensive and easy to derail (just have
> > bottom-up activity happening in the background that pre-empts the
> > designers).
>
> It is like anything else.  Hard when you start, but easier when you
> acclimate to it.

No.  It is not a way of thinking because, when you think about it, it
makes no sense.  A virtual rock in cyberspace can have methods attached
to it.  A real-life rock can't and doesn't.  In cyberspace there is
only the formal, there is no imagination that can override the
programmatic reality whimsically.

> I "think in OO" but I've been doing it for a long time.

Let's say you think you think in OO.  When you're working on software you
formalize your ideas into OO patterns.

cheers.

0
Krasicki1 (73)
7/4/2005 6:11:35 AM
krasicki wrote:
> Michael Feathers wrote:
> 
>>krasicki wrote:
>>
>>
>>>Not all Computer Scientists are successful with design objects.  To be
>>>successful, the individual using them must be able to think in very
>>>abstract ways.  Not everyone can and even when they can, corporate
>>>policy and practice often make the effort impossible.
>>
>>I agree that it takes work, but I don't think that deep abstraction is
>>involved.
> 
> 
> I don't think deep abstraction is necessary either but the ability to
> juggle, weigh, and formalize multiple, sometimes conflicting complex
> ideas is.  That's not something everyone can do and, quite frankly,
> it's obvious.

Agreed.  Not everyone is cut out to be a software developer.


 > Well, the difference is really the difference in asking the easiest way
 > to get from here to there vs. asking how best to create discrete and
 > logically cohesive application framework components.  And in the case
 > of the latter there is no one set *right* answer.  Two children's
 > graphic applications may function identically yet be designed wholly
 > differently.  This is not always obvious to the novice or even the
 > initiated.

I agree.  There is no one right structuring for a particular problem 
and, you know, we should be very thankful for that, because it means our 
jobs are easier (believe it or not).  If there was only one "right" 
structuring for any given problem, software development just wouldn't 
happen; it would be cost prohibitive.

So, once we get past the idea that there should be one "right" 
structuring, we are left with the issue of whether some structurings are 
better than others.  And, some definitely are.  It can be hard to come 
up with a great design out of the box, so the next best thing is to 
learn about criteria for class design and grade designs on them as you 
develop the design.  If you find that a design falls short on some 
criterion, try something different and see whether you can meet all the 
criteria at once.

A good set of criteria is the five class design principles we describe 
on our website: Single Responsibility, Open/Closed, Liskov Substitution, 
Interface Segregation, and Dependency Inversion.  As you grade designs 
on adherence to those principles, and look for alternatives when the 
designs veer, the designs get progressively better; they end up being 
more robust in the face of change.  It's not magic.  There is judgement 
involved.  But it does not require deep abstraction ability to formulate 
designs this way.  Weighing alternatives?  Yes, people do need that 
ability to be able to design.
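
To make that concrete, here is the kind of before/after judgement call I 
mean, as a small Java sketch with invented names (ReportGenerator, 
ReportSink); it is only an illustration, not code from any real project.  
The first class mixes formatting with file I/O, so we grade it down on 
Single Responsibility and Dependency Inversion; the second version pushes 
the I/O behind an interface.

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.List;

// Graded down: formatting *and* file I/O live in one class, so a change
// to either concern forces this class to change.
class ReportGenerator {
    void generate(List<String> lines) throws IOException {
        try (FileWriter out = new FileWriter("report.txt")) {
            for (String line : lines) {
                out.write("* " + line + System.lineSeparator());
            }
        }
    }
}

// Graded up: the formatting policy depends only on an abstraction
// (Dependency Inversion), and each class has a single reason to change.
interface ReportSink {
    void accept(String formattedLine);
}

class ReportFormatter {
    private final ReportSink sink;

    ReportFormatter(ReportSink sink) { this.sink = sink; }

    void generate(List<String> lines) {
        for (String line : lines) {
            sink.accept("* " + line);   // the formatting decision stays here
        }
    }
}

class FileReportSink implements ReportSink {
    private final PrintWriter out;

    FileReportSink(PrintWriter out) { this.out = out; }

    public void accept(String formattedLine) { out.println(formattedLine); }
}

Nothing magical happened there; we simply noticed which principles the 
first version strained against and tried an alternative.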


Michael Feathers
author, Working Effectively with Legacy Code (Prentice Hall)
www.objectmentor.com
0
mfeathers2 (74)
7/4/2005 2:32:17 PM
"krasicki" <Krasicki@gmail.com> wrote in message 
news:1120457495.193559.256250@g43g2000cwa.googlegroups.com...
>> > This frustrating truth is largely responsible for the Extreme
>> > *whathaveyou-usually-Programming* phenomenons.  With a straight face,
>> > these proponents assert that if design is not egalitarian and if
>> > companies don't respect it then -snip, snip- out with it except for
>> > perfunctory lip-service.
>>
>> How many XP teams have you worked with?
>
> Why do you ask?  The methodology is well-documented.  It is considered
> a lightweight methodology is it not.  It cannot be lightweight unless
> it is lighter somewhere.  What is lighter?  The sum programming is the
> same, correct?
>

It is an agile methodology because it allows for change to occur.  The WAY 
in which you do design can enable the team to respond to change, or can 
create artificial barriers to change.  If change is normal, then creating 
artificial barriers is "swimming upstream."  This creates cost and produces 
difficult choices.  XP, Scrum and other agile methods attempt to address the 
underlying problem by changing the way in which software is produced, 
thereby removing the artificial barriers to change.

Is this lighter?  I don't think so.  XP has much more rigor than waterfall. 
The practices require training and reinforcement.  Most agile processes are 
lighter on "ceremony" but not on rigorousness.

Even Scrum, which is my area of practice, requires training for all members. 
I've worked on a project that used FDD for requirements, Scrum for 
management, and TSP/PSP for software construction.  (In that case, the goal 
of TSP/PSP was to create better estimates for the feature stories that were 
managed by the scrum in a sprint.)  Our design document grew to ~80 pages 
long, and was kept up to date by the dev team (something often forgotten in 
traditional waterfall projects).  Our use case document was of similar size. 
There were no short cuts.

We delivered the functionality that the customer desired, when they desired 
it.  We worked normal working hours most of the time.  Our code was 
thoroughly tested by a professional and independently managed test team and 
our triage process, while rapid, was just as rigorous as most waterfall 
projects.  We cut the cost of development by an estimated 40% and delivered 
four full iterations to production in a 9 month window.

Where did the savings come from?  That's easy.  The IEEE reported in 2001 
that surveys of users have shown that nearly 50% of all features in a BDUF 
(Big Design Up Front) commercial software project are not used by the users. 
HALF.  Working in IT, I can say that the number is probably closer to 35% 
for custom software, but that number is still huge.  35% of the features is 
35% of the time spent writing code, 35% of the time spent testing code, 35% 
of the time spent planning and delivering and documenting.  It's a LOT of 
work for features that no one uses.

By using Feature Driven Development during planning, so that the customer 
knows the actual cost of each feature, and by involving the customer at 
every step and demonstrating the features on each sprint, the customer 
chooses the features to develop, and helps to guide each feature until it is 
actually valuable.  That cuts about 35% of the extra effort out of every 
iteration.  That's our savings.

These methodologies are well documented, but so is Object Oriented design... 
yet here we are, offering ongoing mentoring on object oriented development. 
No one really learns an idea by hearing a self-professed expert, especially 
an author in a book, stating a series of facts and assumptions.  I believe 
that people learn through their fingertips and their mistakes and their 
personal moments of revelation and reflection.  If you get a chance to work 
on an agile team, whether XP or Scrum or Crystal (etc), you may find that 
you'll pick up a bit more understanding of what Agile means in practice.

-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
-- 


0
nickmalik (325)
7/4/2005 3:33:22 PM
Hi Nick.  In my embedded comments (and this will apply to everything we
correspond about) I will sometimes criticize Microsoft in general, and I
want it made clear that all such comments have zero to do with you,
because I truly enjoy your responses.  So take a deep breath if and
when I give Microsoft heartburn ;-)

Nick Malik [Microsoft] wrote:
> "krasicki" <Krasicki@gmail.com> wrote in message
> news:1120457495.193559.256250@g43g2000cwa.googlegroups.com...
> >> > This frustrating truth is largely responsible for the Extreme
> >> > *whathaveyou-usually-Programming* phenomenons.  With a straight face,
> >> > these proponents assert that if design is not egalitarian and if
> >> > companies don't respect it then -snip, snip- out with it except for
> >> > perfunctory lip-service.
> >>
> >> How many XP teams have you worked with?
> >
> > Why do you ask?  The methodology is well-documented.  It is considered
> > a lightweight methodology is it not.  It cannot be lightweight unless
> > it is lighter somewhere.  What is lighter?  The sum programming is the
> > same, correct?
> >
>
> It is an agile methodology because it allows for change to occur.

It is agile because the term 'extreme' has been so over-exposed that the
audience for this stuff evaporated.  So, to be honest, it is called
agile to remove the tarnish of the extreme labeling AND to emphasize
peppiness rather than dwell on the ever-present shortcomings of the
pseudo-methodology.

Numerous OOD methodologies handle change much better than the agile
proponents will have you believe.  The agile practices are no more
adept at change than anything else.  What agile sells as response to
change is really immediate gratification.  IOW, because everything is
held so close to the actual coding, every time a marketing rep sneezes
the code can reflect pneumonia.

This is not responsible change management.  This results in chaos and I
have seen it in profoundly big companies whose platitudes, awards, and
self-serving hype disguise the fact that beneath the covers employees
were agile enough to artificially meet deadlines to cash in on bonuses
while the quality of the software remains dubious to this day.  Two
weeks ago I came across multiple instances of Y2K errors introduced to
code after the year 2000.

Agile to me means slippery and dodgy - I don't like it and it is
unacceptable in mission critical scenarios.  Today many corporations
have had such fun laying off staff and off-shoring applications that no
one seems to remember that if those applications collapse or function
incorrectly there will be unpleasant consequences.

> The WAY
> in which you do design can enable the team to respond to change, or can
> create artificial barriers to change.  If change is normal, then creating
> artificial barriers is "swimming upstream."  This creates cost and produces
> difficult choices.

Nick, I find this argument convenient but uncompelling.  Design is
about many things.  It is about running thought experiments on systems
and sub-systems to optimize and analyze the choices in the field of
possibility.  The design techniques give us a short-hand to try ideas
before a line of code is written, and in some cases they can automatically
generate that code.

It is also a blueprint of the system so that architects who have to
integrate one system into another when a corporate buyout occurs can
make sense of two systems.

It is also an inventory of parts, authorizations, licences, and so on.

It is also an audit trail of how the system is supposed to work so that
malicious behavior can be identified and isolated.

It is expensive to do but it is expensive not to do.  Change management
is easier and more certain in systems employing front-end design
effectively.


> XP, Scrum and other agile methods attempt to address the
> underlying problem by changing the way in which software is produced,
> thereby removing the artificial barriers to change.

Well, if the only issue were producing code you'd have a winner.  Code
has never been more than, say, 10% of systems-building time, and even
less of the cost, although hardware is obscenely cheap these days so that
cost may differ.  Software is becoming increasingly expensive as the
short-sighted cost cutting of design and documentation takes its toll
and the cacophony of bad software development ideas, frameworks, and
techno-babble adds to it.

>
> Is this lighter?  I don't think so.  XP has much more rigor than waterfall.

XP has more waterfalls than waterfall ever had.  More rapids, typhoons,
and sharks as well.  I would be astonished if you could compare the
biggest XP project's rigor with the smallest government development
effort for any militarily sensitive project.  The feasibility study
alone would trump your entire XP enterprise.

XP thrives in the loosey-goosey, let's-pretend playscape of corporate
America, where petty empires are built around inefficient applications
that ask for a piece of data that's retrieved from a database.

It's very hard to screw that up but that screwing happens so often that
even XP seems a relief to these environments.  It is incompetence and
not the methodology that is to blame here.

> The practices require training and reinforcement.  Most agile processes are
> lighter on "ceremony" but not on rigorousness.
>
> Even Scrum, which is my area of practice, requires training for all members.
> I've worked on a project that used FDD for requirements, Scrum for
> management, and TSP/PSP for software construction.  (In that case, the goal
> of TSP/PSP was to create better estimates for the feature stories that were
> managed by the scrum in a sprint.)  Our design document grew to ~80 pages
> long, and was kept up to date by the dev team (something often forgotten in
> traditional waterfall projects).  Our use case document was of similar size.
> There were no short cuts.

This should be a clue that change management is not a function of
rigorous design practice.

>
> We delivered the functionality that the customer desired, when they desired
> it.  We worked normal working hours most of the time.  Our code was
> thoroughly tested by a professional and independently managed test team and
> our triage process, while rapid, was just as rigorous as most waterfall
> projects.  We cut the cost of development by an estimated 40% and delivered
> four full iterations to production in a 9 month window.

Nine months is a long time these days, depending on the application of
course - a luxury that most commercial companies will not tolerate.
Three-month business cycles dictate development cycles and castrate any
creative processes that design entails.

>
> Where did the savings come from?  That's easy.  The IEEE reported in 2001
> that surveys of users have shown that nearly 50% of all features in a BDUF
> (Big Design Up Front) commercial software project are not used by the users.

IEEE should know that design is not an end user deliverable.  And IEEE
should ask where the requests for unused features come from.  I once
had a teacher who posed the eternal student question, "How long should
my report be!?"  He replied, "Long enough to cover the subject but
short enough to be interesting".  It seems there's a corollary there
foe software development.

Software engineers *have to* cover the subject yet make the application
easy enough to use.

On a personal note, I run Windows.  There are libraries of documented
features I don't use.  Send Bill an e-mail complaining that his product
could be cheaper without a browser and media player and whatnot.

Bill put WordPerfect under with esoteric features three people needed
and millions didn't.

> HALF.  Working in IT, I can say that the number is probably closer to 35%
> for custom software, but that number is still huge.  35% of the features is
> 35% of the time spent writing code, 35% of the time spent testing code, 35%
> of the time spent planning and delivering and documenting.  It's a LOT of
> work for features that no one uses.

SOMEone uses them or they wouldn't be requested (unless you're talking
about perfunctory stuff that needs to be added so that Microsoft will
brand the application compatible).

>
> By using Feature Driven Development during planning, so that the customer
> knows that actual cost of each feature, and by involving the customer at
> every step and demonstrating the features on each sprint, the customer
> chooses the features to develop, and helps to guide each feature until it is
> actually valuable.  That cuts about 35% of the extra effort out of every
> iteration.  That's our savings.

Maybe.  But up-front design eliminates asking this question iteratively
for each sprint.  That's my additional savings (we're price choppers,
we are).
>
> These methodologies are well documented, but so is Object Oriented design...
> yet here we are, offering ongoing mentoring on object oriented development.
> No one really learns an idea by hearing a self-professed expert, especially
> an author in a book, stating a series of facts and assumptions.  I believe
> that people learn through their fingertips and their mistakes and their
> personal moments of revelation and reflection.  If you get a chance to work
> on an agile team, whether XP or Scrum or Crystal (etc), you may find that
> you'll pick up a bit more understanding of what Agile means in practice.
>

No doubt.  Thanks for the dialogue - people do learn here as well.

cheers,

Frank

0
Krasicki1 (73)
7/4/2005 6:03:01 PM
Wow.  um... tell me what you _really_ think.  :-)

Clearly, I'm not going to convince you, on a newsgroup, that Agile methods 
are a good practice.

Note: I did not say that "Agile" is incompatible with "design."  I believe 
it is incompatible with "Big Design."  I hope to have made it clear that I 
believe that you can (and should) perform MDA on an agile project.  I've 
done it.  I've seen it done.  It appears that you've been told that agile 
methods leave no room for design.  My guess is you heard that from a 
self-proclaimed XP evangelist.  I rank them a notch below most TV Shopping 
Channel pitchmen in their respect for science, impartiality, or fundamental 
integrity.

I do hope, however, that you and I can encourage our colleagues and friends 
to keep an open mind and learn from each other.  There are opportunities for 
our industry to explore ways to reduce the costs and headaches of software 
development.  I believe that each of the "competing" mechanisms has a 
glimmer of a better way.  I'm sure that somewhere between our desire to make 
projects run smoothly, and our desire to give the users the features that 
they want to pay for, we will synthesize a process that may last more than 
the duration of the average teenage clothing fad.

With great regard,
-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
-- 


0
nickmalik (325)
7/4/2005 8:10:22 PM
krasicki wrote:
> It is agile because the term 'extreme' as been so over-exposed that the
> audience for this stuff evaporated.  So, to be honest it is called
> agile to remove the tarnish of the extreme labeling AND to emphasize
> peppiness rather than dwell on the ever-present short-comings of the
> pseudo-methodology.

Actually, it is agile because the word 'extreme' was a roadblock in some 
organizations.  What we've been finding recently is that there are 
many companies going agile who pick some agile method like Scrum and 
then start using XP practices to fill it in.  There has actually been a 
net push towards XP practices over the past five years.  Their worth
has clearly been recognized independently of their original packaging.

> This is not responsible change management.  This results in chaos and I
> have seen it in profoundly big companies whose platitudes, awards, and
> self-serving hype disguise the fact that beneath the covers employees
> were agile enough to artificially meet deadlines to cash in on bonuses
> while the quality of the software remains dubious to this day.  Two
> weeks ago I came across multiple instances of Y2K errors introduced to
> code after the year 2000.
> 
> Agile to me means slippery and dodgy - I don't like it and it is
> unacceptable in mission critical scenarios.  Today many corporations
> have had such fun laying off and off-shoring applications that no one
> seems to remember that if they collapse or are incorrectly functioning
> there will be unpleasnat consequences.

Baloney.  Agile and Iterative methods are simply what many very good 
software developers have been doing behind the scenes for years.  Gerald 
Weinberg is on record saying that Project Mercury at NASA was developed 
using a process that was pretty much indistinguishable from XP.


Michael Feathers
author, Working Effectively with Legacy Code (Prentice Hall 2005)
www.objectmentor.com
0
mfeathers2 (74)
7/5/2005 12:09:02 PM
Michael Feathers wrote:
> krasicki wrote:
> > It is agile because the term 'extreme' as been so over-exposed that the
> > audience for this stuff evaporated.  So, to be honest it is called
> > agile to remove the tarnish of the extreme labeling AND to emphasize
> > peppiness rather than dwell on the ever-present short-comings of the
> > pseudo-methodology.
>
> Actually, it is agile because the word 'extreme' was a roadblock in some
>   organizations.  What we've been finding recently is that there are a
> many companies going agile who pick some agile method like Scrum and
> then start using XP practices to fill it in.  There has actually been a
> net push towards XP practices over the past five years.  Their worth
> has clearly been recognized independently of their original packaging.

Given that *XP practices* are largely rebranded existing good practice
I will assert that no such push exists.  I submit to you that XP
advocates are simply looking outside their shell and discovering that
good programming practices existed despite their claims.  But failing
to acknowledge that, XP advocates, as usual, run ahead of the parade
claiming credit for the celebration.

>
> > This is not responsible change management.  This results in chaos and I
> > have seen it in profoundly big companies whose platitudes, awards, and
> > self-serving hype disguise the fact that beneath the covers employees
> > were agile enough to artificially meet deadlines to cash in on bonuses
> > while the quality of the software remains dubious to this day.  Two
> > weeks ago I came across multiple instances of Y2K errors introduced to
> > code after the year 2000.
> >
> > Agile to me means slippery and dodgy - I don't like it and it is
> > unacceptable in mission critical scenarios.  Today many corporations
> > have had such fun laying off and off-shoring applications that no one
> > seems to remember that if they collapse or are incorrectly functioning
> > there will be unpleasnat consequences.
>
> Baloney.  Agile and Iterative methods are simply what many very good
> software developers have been doing behind the scenes for years.  Gerald
> Weinberg is on record saying that Project Mercury at NASA was developed
> using a process that was pretty much indistinguishable from XP.
>

Skunkworks projects are a time-honored tradition in enterprises chock
full of talented people.  XP is not skunkworks.

You and other proponents are selling XP -er- *agile* as a generic
methodology that gets better results than anything else (the boogeyman
here being waterfall).

Project Mercury would never have gotten off the ground if it had been
developed in a financial services, insurance, banking, or commercial
enterprise using *agile* methodologies.

I particularly find these bait and switch analogies to be maliciously
misleading.

I have worked in places like DEC, Raytheon, Hamilton Standard and other
companies that oozed talented engineers.  And in those places, the
quality of people could make anything work using any methodology,
assuming the dullards get out of the way.

I've also worked in less discerning places where village idiots can
accuse talented people of being incompetent and get away with it.  And
in those places, 'up' may be their 'down' and everything is capricious
to whoever happens to be the audience that day.

Let's not sell kool-aid here.  There are plenty of places wallowing in
their own fecal ideas that will get on these newsgroups and testify how
good it feels - come join us.  Be careful not to sound like them.

I work at very high levels and I wallow in the trenches to stay
connected to what I love doing.  Today the quality of software is worse
than I have ever seen it in my twenty-five years in this business.  You
have a hard sell telling me bottom up methodologies make sense in that
light.

0
Krasicki1 (73)
7/5/2005 2:01:00 PM
On Sat, 2 Jul 2005 08:48:23 -0700, "Nick Malik [Microsoft]"
<nickmalik@hotmail.nospam.com> wrote:

>Really, the goal is not so much to reuse things as to seperate the things 
>that change a different times, to make them easier to change.  We start with 
>the limitations of people and create languages that those people can use. 
>If you watch the evolution away from OO and towards things like AOP and 
>lightweight frameworks, it is part of an ongoing process towards the 
>seperation of "things that change rarely" from "things that change 
>frequently."

This is very well stated.  I often say that OO is a technique for
managing dependencies; but often neglect to say what the management
criteria are.  Separating things that change at different times is one
of the most important of those criteria.  Indeed, I have written two
different principles about this topic:  The SRP (Single Responsibility
Principle) which says that every class should have one, and only one,
reason to change, is a principle that expresses this idea in the
small.  The CCP (Common Closure Principle) which says that classes
that change together should be grouped into packages together,
expresses this idea in the large.

http://www.objectmentor.com/resources/articles/srp
http://www.objectmentor.com/resources/articles/granularity.pdf
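
To illustrate (a sketch only, with invented names): an Employee class
that both computes pay and writes itself to the database has two reasons
to change, so it strains against the SRP.  Splitting persistence into its
own class leaves each class with one reason to change; and in CCP terms
the two classes would live in different packages, because they change for
different reasons and at different times.

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Reason to change #1: the accountants alter the pay rules.
class Employee {
    private final String id;
    private final BigDecimal monthlySalary;

    Employee(String id, BigDecimal monthlySalary) {
        this.id = id;
        this.monthlySalary = monthlySalary;
    }

    String id() { return id; }

    BigDecimal calculatePay(int overtimeHours) {
        // toy pay policy: flat rate per overtime hour on top of salary
        return monthlySalary.add(BigDecimal.valueOf(overtimeHours * 40L));
    }
}

// Reason to change #2 lives here instead: the schema or the SQL changes.
class EmployeeRepository {
    private final Connection connection;

    EmployeeRepository(Connection connection) { this.connection = connection; }

    void savePayment(Employee e, BigDecimal pay) throws SQLException {
        try (PreparedStatement stmt = connection.prepareStatement(
                "insert into payroll (emp_id, amount) values (?, ?)")) {
            stmt.setString(1, e.id());
            stmt.setBigDecimal(2, pay);
            stmt.executeUpdate();
        }
    }
}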



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/5/2005 2:51:14 PM
On 2 Jul 2005 10:55:05 -0700, "topmind" <topmind@technologist.com>
wrote:

>In my domain one often cannot know ahead of time what will change.

It's not so much a matter of knowing what will change, or even how it
will change.  It's a matter of recognizing that certain things will
change at a different rate than others.  For example, report formats
will change at a different rate than business rules.  GUI layout
will change at a different rate than database schemas.

You don't even have to know whether one will change more frequently
than the other.  You just have to be able to make a reasoned guess
that they will change at different rates and for different reasons.

We try not to couple report formats to business rules because it would
be a shame to inadvertently break the business rules by moving a
column on a report.  We try to decouple the GUI layout from the
database schema because it would be a shame to crash the GUI when
adding a new column to the database.
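
Here is a tiny sketch of that kind of seam, in Java with made-up names:
the business rule computes its result and hands it to whatever renderer
is plugged in, so moving a column on the report touches only the renderer
and never the rule.

import java.math.BigDecimal;
import java.util.LinkedHashMap;
import java.util.Map;

// Changes when the *business* changes.
class LateFeeRule {
    Map<String, BigDecimal> lateFees(Map<String, Integer> daysOverdueByCustomer) {
        Map<String, BigDecimal> fees = new LinkedHashMap<>();
        daysOverdueByCustomer.forEach((customer, days) ->
            fees.put(customer,
                     BigDecimal.valueOf(days).multiply(new BigDecimal("1.50"))));
        return fees;
    }
}

// Changes when the *report format* changes; the rule never sees it.
interface FeeReportRenderer {
    String render(Map<String, BigDecimal> feesByCustomer);
}

class PlainTextRenderer implements FeeReportRenderer {
    public String render(Map<String, BigDecimal> feesByCustomer) {
        StringBuilder sb = new StringBuilder("customer | fee\n");
        feesByCustomer.forEach((customer, fee) ->
            sb.append(customer).append(" | ").append(fee).append('\n'));
        return sb.toString();
    }
}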



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/5/2005 2:56:12 PM
On Sat, 2 Jul 2005 07:16:46 -0700, "Nick Malik [Microsoft]"
<nickmalik@hotmail.nospam.com> wrote:

>"Robert C. Martin" <unclebob@objectmentor.com> wrote in message 
>news:0jk7c1tj3hhc2oua6pe60ulk719pr41sd1@4ax.com...

>>Nick asked:
>
>>>OK, it is possible to invert the physical dependencies between sets of
>>>classes (modules) by moving entities between them ...we should at least
>>>agree here......are you claiming that this is the key characteristic
>>>value of OO?
>>
>> Yes.
>
>Are you sure you haven't mixed OOD with AOP?  

Yes.

>I would agree with you if your 
>statement had been "a key characteristic of AOP is the inversion of control 
>and the injection of dependencies."  

There is a certain similarity between IOC, DIJ, and AOP.
Interestingly enough IOC and DIJ are typically implemented with OO.
It is the dependency inversion capability of OO that enables IOC and
DIJ.  AOP enables IOC and DIJ through the weaving mechanism which is
also a kind of dependency inversion approach.  The difference is that
AOP weaves the callback code into the main line rather than jumping
through vectors.
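
For readers keeping track of the acronyms, a minimal Java sketch (names
invented) of the OO mechanism I mean: the high-level policy calls through
an interface, and the concrete low-level object is handed in from outside,
so the source-code dependency points from the detail toward the
abstraction while control still flows the other way.

// High-level policy: knows nothing about SMTP, files, or any other detail.
interface Notifier {
    void notify(String recipient, String message);
}

class OrderPolicy {
    private final Notifier notifier;

    // inversion of control: the dependency is supplied from outside
    OrderPolicy(Notifier notifier) { this.notifier = notifier; }

    void confirm(String customer) {
        // ... business decisions here ...
        notifier.notify(customer, "Your order is confirmed");
    }
}

// Low-level detail: depends *on* the abstraction, not the other way round.
class ConsoleNotifier implements Notifier {
    public void notify(String recipient, String message) {
        System.out.println("to " + recipient + ": " + message);
    }
}

class Main {
    public static void main(String[] args) {
        new OrderPolicy(new ConsoleNotifier()).confirm("alice@example.com");
    }
}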

>This is an innovation on top of OO.  

That it is innovation I will grant you.  However, AOP and OO are
pretty different.  You can write AO programs that are not at all OO.
Indeed, you could write AO compilers for languages that were not OO.
The whole idea of constructing a program through the weaving of many
different aspects is quite different from the ideas behind OO.  

>It 
>is true that OO enabled it. 

AOP?  I don't think so.  It might have inspired it in some way, but
I'm not clear on the history.

>>>so I can vary the OO'ness of the above code, not by changing the code,
>>>but by how I allocate each entity to a module?!?!?
>>
>> Certainly.  If the allocation of classes to modules does not
>> effectively decouple those modules, then you don't have an OO
>> solution.
>
>I would call this "definition parsing."  You do have an OO solution in that 
>it uses OO mechanisms to accomplish its goal.  You may or may not have a 
>"good" OO solution, with the subjective being the key thing I'm pointing 
>out.  The orientation towards objects is all that is required to be OO.  Not 
>the injection of dependencies.  That came much later.

Defining "Object Oriented" as "Oriented around Objects" is a bit
circular.  It also begs the questions: "What are objects, and what
does it mean to be oriented around them?"

As for what came first and what came later, I'm not sure it matters as
far as a definition of OO is concerned.  I will grant you that people
were creating data structures and functions that manipulated those
data structures long before the term OO was coined.  However, when
Alan Kay coined the term OO, it was in the context of a language that
put polymorphism at its core, to the extent that even 'true' and
'false' were two different polymorphic objects with different
implementations.

Above you talked about "good OO solutions" and "bad OO solutions".  By
this I take it that you define an OO program to be any program that
uses the trappings of OO, e.g. an OO programming language.  I take a
very different view.  I define OO as a set of principles that direct
design decisions.  A program is OO to the extent that it embodies
these principles, irrespective of the language in which it is written.
I can, for example, write an OO program in C.  It's hard, tedious, and
error prone, but I can do it.
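
As a rough sketch of that Smalltalk idea rendered in Java (Smalltalk's
real True and False classes are richer than this), 'true' and 'false'
become two objects that respond differently to the same messages, so
branching is a message send rather than an if statement:

// Two polymorphic "boolean" objects, Smalltalk-style.
interface Bool {
    Bool ifTrue(Runnable action);
    Bool ifFalse(Runnable action);
}

final class True implements Bool {
    public Bool ifTrue(Runnable action) { action.run(); return this; }
    public Bool ifFalse(Runnable action) { return this; }   // does nothing
}

final class False implements Bool {
    public Bool ifTrue(Runnable action) { return this; }    // does nothing
    public Bool ifFalse(Runnable action) { action.run(); return this; }
}

class Demo {
    public static void main(String[] args) {
        Bool accountOverdrawn = new False();
        accountOverdrawn
            .ifTrue(() -> System.out.println("send a nasty letter"))
            .ifFalse(() -> System.out.println("all is well"));
    }
}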

>In fact, if you look up the Wikipedia definition of Aspect Oriented 
>Programming, you will see that the definition's author considers AOP to be 
>"not OO" but in fact a successor to OO development.

I'd call it a distant cousin.
>
>On the other hand, NMock and Reflection *does* naturally lead folks to AOP. 

I don't follow that at all?  

>I find these innovations in OOP to be much more of an indicator of forward 
>movement towards AOP than the fundamental OO concepts of inheritance and 
>polymorphism.

I also don't necessarily consider AOP to be "forward".  Maybe it is,
or maybe it's sideways.  It could even be backwards.  

-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/5/2005 3:22:37 PM
On Sat, 02 Jul 2005 17:06:28 GMT, "H. S. Lahman"
<h.lahman@verizon.net> wrote:

>> Nobody knows exactly (or even inexactly) what OOA is.   There are a number of
>> books and papers written about it, but they don't agree.  There is not
>> even a set of cogent schools of thought.  OOA is a term that we bandy
>> about with authority, but have no real definition for.
>
>You keep repeating this mantra but it is still untrue.

We disagree.  Indeed, I don't think there is an agreed definition of
analysis, let alone object oriented analysis.  I have sat in too many
meetings (at a company that you and I know well) that devolved into 30
minute arguments about whether a certain technical topic was
"analysis" or "design".  

"We shouldn't be talking about that now, it's a design concept."
"No it's not, it's part of the problem space.  It's analysis."
"It is not, it's too low level."
"No, it's critical.  We have to decide it now."
"No, it can wait, it's just too low level to worry about now."
....

 

-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/5/2005 3:33:22 PM
Hello Robert,

just two snips (one compliment, one clarification):

>>> Certainly.  If the allocation of classes to modules does not
>>> effectively decouple those modules, then you don't have an OO
>>> solution.
>>
>>I would call this "definition parsing."  You do have an OO solution in 
>>that
>>it uses OO mechanisms to accomplish its goal.  You may or may not have a
>>"good" OO solution, with the subjective being the key thing I'm pointing
>>out.  The orientation towards objects is all that is required to be OO. 
>>Not
>>the injection of dependencies.  That came much later.
>
> Defining "Object Oriented" as "Oriented around Objects" is a bit
> circular.  It also begs the questions: "What are objects, and what
> does it mean to be oriented around them?"
>
> As for what came first and what came later, I'm not sure it matters as
> far as a definition of OO is concerned,  I will grant you that people
> were creating data structures and functions that manipulated those
> data structures, long before the term OO was coined.  However, when
> Alan Kay coined the term OO, it was in the context of a language that
> put polymorphism at its core, to the extent that even 'true' and
> 'false' were two different polymorphic objects with different
> implementations.
>
> Above you talked about "good OO solutions" and "bad OO solutions".  By
> this I take it that you define an OO program to be any program that
> uses the trappings of OO.  e.g. an OO programming language.  I take a
> very different view.  I define OO as a set of principles that direct
> design decisions.  A program is OO to the extent that it embodies
> these principles, irrespective of the language in which it is written.
> I can, for example, write an OO program in C.  It's hard, tedious, and
> error prone, but I can do it.
>

That was an eloquent and clear statement.  I'm going to take some time to 
really let that one settle in my brain.  There is something fundamentally 
appealing about what you said.  I appreciate that you shared that with me.


>>
>>On the other hand, NMock and Reflection *does* naturally lead folks to 
>>AOP.
>
> I don't follow that at all?
>

AOP is an interesting innovation.  As noted in another response, I did not 
interpret your acronyms correctly before posting my response, so it wasn't 
terribly coherent in retrospect.

I do think that the fundamental ideas of Reflection added to the direction 
that became AOP, especially since it is largely enabled, in Java and now in 
C#, by the use of reflective mechanisms in the language.

Cross-cutting concerns, and the weaving of the modules with injection at 
the fundamental level, are pretty clever ideas.  I don't think that they 
would be obvious, or easily learned, unless folks were already practicing 
reflection and using injection.

I've found that many developers learn injection, as a rigor, when they are 
told, in no uncertain terms, that they WILL use test-driven-development or 
will write unit tests for the code.  This leads to the realization that you 
cannot unit test modules effectively until you can isolate them from their 
dependencies, which leads to a lot of refactoring.

JMock/NMock ease the pain somewhat and offer a pathway to learning the 
fundamental concept of injection or dependency inversion.  As I said, I 
believe that this helps form the conceptual framework that has allowed AOP 
to thrive (somewhat).
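
To make the "isolate them from their dependencies" point concrete, here
is a hand-rolled sketch in Java (invented names; on a real project you
might reach for JMock or NMock rather than the fake written out below):
the service takes its gateway through the constructor, so a unit test can
substitute a fake and inspect what was sent.

import java.util.ArrayList;
import java.util.List;

interface PaymentGateway {
    boolean charge(String account, long cents);
}

class CheckoutService {
    private final PaymentGateway gateway;

    CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }  // injected

    boolean checkout(String account, long cents) {
        if (cents <= 0) return false;          // business rule under test
        return gateway.charge(account, cents);
    }
}

// Hand-written fake standing in for the real (slow, external) gateway.
class FakePaymentGateway implements PaymentGateway {
    final List<String> chargedAccounts = new ArrayList<>();

    public boolean charge(String account, long cents) {
        chargedAccounts.add(account + ":" + cents);
        return true;
    }
}

class CheckoutServiceTest {
    public static void main(String[] args) {
        FakePaymentGateway fake = new FakePaymentGateway();
        CheckoutService service = new CheckoutService(fake);

        assert service.checkout("acct-42", 500);
        assert fake.chargedAccounts.contains("acct-42:500");
        assert !service.checkout("acct-42", 0);  // rejected before the gateway
        System.out.println("charges recorded: " + fake.chargedAccounts);
    }
}

(Run with assertions enabled, java -ea, if you want the checks to fire;
the point is only that injection is what makes the substitution possible.)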

<aside> I think I spend more time analyzing people than code. </aside>

-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
-- 


0
nickmalik (325)
7/5/2005 3:47:51 PM
>
> In my view the OO-ness of a system can be identified by tracing the
> dependencies between the modules.  If the high level policy modules
> depend on (directly call) lower level modules, which then call even
> lower level modules, then the program is procedural and not object
> oriented.
>
> However, if the high level policy modules are independent, or depend
> solely on polymorphic interfaces that the lower level modules
> implement, then the system is OO.
>
> In other words, it all has to do with the *direction* of the
> dependencies.

Hi,
I agree with you that DIP is a valuable technique when developing
software. But so are many other techniques. Encapsulation, polymorphism,
good naming conventions, etc. are all useful techniques when developing
software. Some of these techniques are also classified as being
'OO' by developers.

Techniques for describing someone's perception of the world are also
valuable in software development. Categorizing phenomena into events,
entities, roles, values, etc. are indispensable techniques when building
software. Some of these techniques are also classified as 'OO' by
many developers.

It seems as if your definition of 'OO' is only related to the
direction of dependencies along the axis of level of abstraction. What
is the value of this definition? Stating that some software is OO says
close to nothing about the software.

Instead of focusing on meaningless definitions of OO, it would be far
more valuable to focus on important aspects of software development.
For one thing, far more effort should be put into techniques for how to
describe problems, not how to code them.

0
hansewetz (110)
7/5/2005 4:05:37 PM
On 4 Jul 2005 11:03:01 -0700, "krasicki" <Krasicki@gmail.com> wrote:

>It is agile because the term 'extreme' as been so over-exposed that the
>audience for this stuff evaporated.  

There is a grain of truth to that statement, but only a grain.  I
called the meeting, I was there, I know.  The name "agile" was
selected to represent a group of similar methodologies that included
Scrum, FDD, DSDM, Xtal, and XP.  During the discussions we did mention
that the name "Extreme" was creating both positive and negative
reactions, and we wanted something a little closer to the core
motivation. 

As for the audience for XP evaporating, I think you need to actually
check your facts instead of stating your opinion AS fact.  There is
still a large and growing audience for XP.

>So, to be honest it is called
>agile to remove the tarnish of the extreme labeling AND to emphasize
>peppiness rather than dwell on the ever-present short-comings of the
>pseudo-methodology.

To be honest, you weren't there.  All you have are opinions.  I have
no problem with you expressing your opinions, but I suggest you
represent them as opinions as opposed to fact.

>Numerous OOD methodologies handle change much better than the agile
>proponents will have you believe.  

Facts would be useful here.  My experience has shown that agile
techniques strongly prepare software for change.  I have seen
significant changes easily propagate through systems that were built
using agile techniques.  I have also seen non-agile projects falter
and stall when changes needed to be applied.

>The agile practices are no more
>adept at change than anything else.  

Again, facts would be useful.  I can provide a simple counter-fact.
Having a large batch of unit tests and acceptance tests that can be
run against the system in a matter of minutes makes it much easier to
make changes to that system, simply because it's easier to verify that
the change hasn't broken anything.

And here's an opinion, backed by a lot of observation and experience:
Writing tests first forces a design viewpoint that strongly encourages
decoupling, and that therefore fosters change.

Finally, here are some other observations from various teams that I
have coached.  Customers are very happy that their input is heard
early and often.  Executives love the fact that real progress is
measured on a regular (weekly) basis, and that stakeholders are
providing feedback in real time.  All these things promote change
IMHO.

>What agile sells as response to
>change is really immedite gratification.  

There is nothing wrong with immediate gratification so long as nothing
else is lost.  Indeed, so long as nothing else is lost, immediate
gratification is better than deferred gratification.  The evidence
suggests that nothing else is lost.  Indeed, the evidence suggests
that the systems turn out *better*.

This shouldn't be a big surprise.  Any control system works better
when you shorten the feedback loops.  

>This is not responsible change management.  This results in chaos and I
>have seen it in profoundly big companies whose platitudes, awards, and
>self-serving hype disguise the fact that beneath the covers employees
>were agile enough to artificially meet deadlines to cash in on bonuses
>while the quality of the software remains dubious to this day.  

Agile Methods are NOT a mad rush to functionality.  They are not
dotcom stupidity.  Indeed, the agile methods value high quality code
and high quality designs more than any other methods I know of.
Consider rules such as "no duplicate code", "write tests before code",
"Don't let the sun set on bad code", etc, etc.  There are very strong
values that are backed up by disciplines.

>Agile to me means slippery and dodgy - I don't like it and it is
>unacceptable in mission critical scenarios.  

You have put the name "Agile" on something that is not Agile.  Agile
does not mean "hacking".  Agile does not mean running off half-cocked.
Agile does not mean slippery and dodgy.  Agile means moving in very
small, determined, disciplined steps with a mass of verification at
each step, and lots of feedback from previous steps.  



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/5/2005 4:42:52 PM
On Mon, 4 Jul 2005 13:10:22 -0700, "Nick Malik [Microsoft]"
<nickmalik@hotmail.nospam.com> wrote:

>Note: I did not say that "Agile" is incompatible with "design."  I believe 
>it is incompatible with "Big Design."  

Not quite.  Agile methods involve much more design than "Big Design"
methods.  However, the design is done on a different schedule.  Design
is taking place all the way through the project, at every iteration.
This design is no less rigorous than a big design up front.  Indeed,
it is *more* rigorous, because each design decision is documented by
a series of unit tests and acceptance tests that must be written
*before* the code that makes them pass.  
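
A small illustration of what I mean by the tests documenting a design
decision (JUnit-style, and entirely invented): the test below is written
first and pins down the rounding rule; the production method is then
written to make it pass.

import static org.junit.Assert.assertEquals;

import java.math.BigDecimal;
import java.math.RoundingMode;

import org.junit.Test;

public class SalesTaxTest {

    // Written before the implementation: it records the decision that
    // tax is 6% and is rounded half-up to whole cents.
    @Test
    public void taxIsSixPercentRoundedHalfUpToCents() {
        assertEquals(new BigDecimal("0.61"),
                     SalesTax.on(new BigDecimal("10.10")));
    }
}

class SalesTax {
    static BigDecimal on(BigDecimal amount) {
        return amount.multiply(new BigDecimal("0.06"))
                     .setScale(2, RoundingMode.HALF_UP);
    }
}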

>I hope to have made it clear that I 
>believe that you can (and should) perform MDA on an agile project.  I've 
>done it.  I've seen it done.  

MDA is perhaps different from what you think it is.  MDA is the notion
of drawing diagrams and then automatically converting them to code
using some kind of translator.  In a true MDA environment you would do
all of your programming by drawing diagrams.  

>It appears that you've been told that agile 
>methods leave no room for design.  My guess is you heard that from a 
>self-proclaimed XP evangelist.  I rank them a notch below most TV Shopping 
>Channel pitchmen in their respect for science, impartiality, or fundamental 
>integrity.

Nick, I'd like you to name some names.  Who are these self-proclaimed
XP evangelists who are a notch below...?

Frankly I don't see any.  There are some folks out there whose
enthusiasm sometimes gets the better of them, but that's a whole
different matter.  On the other hand, there *are* people out there
making utterly ridiculous negative assertions about XP and Agile.
Some have even written books about how bad XP is.  It's clear that
these people have never done XP, don't know much about XP, and don't
care to know anything other than that they don't like it.  They simply
bash for the joy of bashing.  THESE are the folks who remind ME of TV
pitchmen.

-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/5/2005 4:51:05 PM
Robert C. Martin wrote:
> On Mon, 4 Jul 2005 13:10:22 -0700, "Nick Malik [Microsoft]"
> <nickmalik@hotmail.nospam.com> wrote:
>
> >Note: I did not say that "Agile" is incompatible with "design."  I believe
> >it is incompatible with "Big Design."
>
> Not quite.  Agile methods involve much more design than "Big Design"
> methods.  However, the design is done on a different schedule.  Design
> is taking place all the way through the project, at every iteration.
> This design is no less rigorous than a big design up front.  Indeed,
> it is *more* rigorous, because each design decisions is documented by
> a series of unit tests and acceptance tests that must be written
> *before* the code that makes them pass.

We know, Bob, we know.  We've heard this tune many times.  The OP asked
about OOD.  This is a perfect example of what it isn't.

What you are describing is bottom-up, seat-of-the-pants programming
with perfunctory salutes to unwitting users, all of whom pretend they
have immunity from reality.  And as long as the inmates are in charge
they're very happy subscribing to this stuff.  *You mean, NO BOSSES?*

Look, OOD is about designing theoretical systems that may get
implemented based on analysis, testing, cost, and security factors.
That's rigor!

The fact that programmers sweat to bolt one whimsical idea to the next
is not rigor.

Rube Goldberg software development was not invented by XP but it is
given certification status thanks to XP.  Even Rube Goldberg
contraptions are -cough- *designed*, documented, measured, and
quantified - yes, they are.  And that makes them real - just as real as
real systems.

-snip-

0
Krasicki1 (73)
7/6/2005 12:53:24 AM
Robert C. Martin wrote:
> On 4 Jul 2005 11:03:01 -0700, "krasicki" <Krasicki@gmail.com> wrote:
>
> >It is agile because the term 'extreme' as been so over-exposed that the
> >audience for this stuff evaporated.
>
> Their is a grain of truth to that statement, but only a grain.  I
> called the meeting, I was there, I know.  The name "agile" was
> selected to represent a group of similar methodologies that included
> Scrum, FDD, DSDM, Xtal, and XP.  During the discussions we did mention
> that the name "Extreme" was creating both positive and negative
> reactions, and we wanted something a little closer to the core
> motivation.
>
> As for the audience for XP evaporating, I think you need to actually
> check your facts instead of stating your opinion AS fact.  There is
> still a large and growing audience for XP.

There is still a large and growing audience for the movie, Plan 9 from
Outer Space.  I'm not holding my breath that it will be reconsidered
for an Oscar.

>
> >So, to be honest it is called
> >agile to remove the tarnish of the extreme labeling AND to emphasize
> >peppiness rather than dwell on the ever-present short-comings of the
> >pseudo-methodology.
>
> To be honest, you weren't there.  All you have are opinions.  I have
> no problem with you expressing your opinions, but I suggest you
> represent them as opinions as opposed to fact.

The posters here can google the 'fact' that a number of XP proponents
bemoaned ever calling the methodology 'extreme' because it had become
such a loaded phrase culturally and politically.  Tell me you go into
conservative insurance, banking, and financial services meetings
emphasizing the extreme nature of your methodology or the
revolutionary (enterprise) culture shock they entail.

Who's being deceitful here?

>
> >Numerous OOD methodologies handle change much better than the agile
> >proponents will have you believe.
>
> Facts would be useful here.  My experience has shown that agile
> techniques strongly prepare software for change.

My problem is that they should strongly prepare software for long-term
production activity because the software conforms to spec and QA.
There is no reason to predict change if the job is done right.  Let's
call this pernicious cost, *change-creep*.  And, let's be honest, this
is an admission that gathering reliable requirements was an exercise
akin to nailing jello to a wall - expect change because we've
conditioned the audience to have the attention span of gerbils.

> I have seen
> significant changes easily propagate through systems that were built
> using agile techniques.  I have also seen non-agile projects falter
> and stall when changes needed to be applied.

Is this mildly misleading or are we pretending the audience for this
discussion are idiots?

>
> >The agile practices are no more
> >adept at change than anything else.
>
> Again, facts would be useful.

Years ago I chased all of you around the proverbial 'fact-checking'
bush and got nowhere.  Agile is putting it mildly.  I feel like I'm in
a Marx Brothers movie discussing XP and agile - no sooner does one door
close than another opens with another comedian wanting to know Viaduct?

I agree facts would be useful. After so many years, where are your
facts?


> I can provide a simple counter fact.
> Having a large batch of unit tests and acceptance tests that can be
> run against the system in a matter of minutes, makes it much easier to
> make changes to that system simply because it's easier to verify that
> the change hasn't broken anything.

And what design do you compare the running system against to know that it
is doing what it is intended to do?  Or is your assertion that the
running design is infallible?

>
> And here's an opinion, backed by a lot of observation and experience:
> Writing tests first forces a design viewpoint that strongly encourages
> decoupling, and that therefore fosters change.

My goal is not to foster change.  And bad tests, testing faulty
assumptions, yield successful test results.  Without a well documented
design that exposes such flaws you have no metric for evaluating the
quality of what you are doing.  But let's not dwell on quality.

>
> Finally, here are some other observations from various teams that I
> have coached.  Customers are very happy that their input is heard
> early and often.  Executives love the fact that real progress is
> measured on a regular (weekly) basis, and that stakeholders are
> providing feedback in real time.  All these things promote change
> IMHO.

What promotes confidence that the thing works correctly?  Bells,
whistles, and favorite colors?

>
> >What agile sells as response to
> >change is really immedite gratification.
>
> There is nothing wrong with immediate gratification so long as nothing
> else is lost.  Indeed, so long as nothing else is lost, immediate
> gratification is better than deferred gratification.  The evidence
> suggests that nothing else is lost.  Indeed, the evidence suggests
> that the systems turn out *better*.

What evidence and how was this evidence accumulated?  All billable
hours accounted for?

>
> This shouldn't be a big surprise.  Any control system works better
> when you shorten the feedback loops.

Only if the feedback makes sense.

>
> >This is not responsible change management.  This results in chaos and I
> >have seen it in profoundly big companies whose platitudes, awards, and
> >self-serving hype disguise the fact that beneath the covers employees
> >were agile enough to artificially meet deadlines to cash in on bonuses
> >while the quality of the software remains dubious to this day.
>
> Agile Methods are NOT a mad rush to functionality.  They are not
> dotcom stupidity.  Indeed, the agile methods value high quality code
> and high quality designs more than any other methods I know of.
> Consider rules such as "no duplicate code", "write tests before code",
> "Don't let the sun set on bad code", etc, etc.  There are very strong
> values that are backed up by disciplines.

Parse the sentence.  Everything you value is code-centric.  Open your
mind to OOD.


> >Agile to me means slippery and dodgy - I don't like it and it is
> >unacceptable in mission critical scenarios.
>
> You have put the name "Agile" on something that is not Agile.  Agile
> does not mean "hacking".  Agile does not mean running off half-cocked.
> Agile does not mean slippery and dodgy.  Agile means moving in very
> small, determined, disciplined steps with a mass of verification at
> each step, and lots of feedback from previous steps.
>

Try the telephone communication game with some friends some day.
Whisper a nontrivial message to someone near you and have them do the
same with each person in the room taking a turn and see what comes out
compared to the original.

Your argument is not compelling today any more than it was many years
ago.

0
Krasicki1 (73)
7/6/2005 1:26:36 AM
"krasicki" <Krasicki@gmail.com> wrote in message 
news:1120611204.476214.162240@f14g2000cwb.googlegroups.com...

> Look, OOD is about designing theoretical systems ...

Just out of curiosity, what theory?

-- Daniel 


0
Daniel
7/6/2005 10:15:58 AM
"Robert C. Martin" <unclebob@objectmentor.com> wrote in message 
news:j5elc1t6jakj9cg1h9dbm70r6uv9ik2n7t@4ax.com...
>
> Who are these self-proclaimed
> XP evangelists who are a notch below...?
>
> Frankly I don't see any.  There are some folks out there who's
> enthusiasm sometimes gets the better of them, but that's a whole
> different matter. On the other hand, there *are* people out there
> making utterly ridiculous negative assertions about XP and Agile.
> Some have even written books about how bad XP is.  It's clear that
> these people have never done XP, don't know much about XP, and don't
> care to know anything other than that they don't like it.  They simply
> bash for the joy of bashing.  THESE are the folks who remind ME of TV
> pitchmen.
>
I think the criticism is of writing which is largely polemical, and I think 
it would be fair to put much of the writing by XP proponents in this 
category.  A good test would be to check whether the author treats XP as if 
it were the one and only methodology that has no problems, and a good number 
of articles on the subject appear to fall into this category.  It's hard to 
find blogs that dissect both successful and unsuccessful XP projects and 
systematically discuss the consequences of the various practices, which is 
what you'd expect if the author wanted to be taken seriously.  Instead, you 
get pictures of happy programmers on the cover of Software Development 
magazine, links to projects that seem to go dead after a while, and links to 
puff pieces.  None of this is anti-XP; it's just that evangelical writing 
tends to come across as a little bit silly when presented to a professional 
audience.

Regards,
Daniel Parker 


0
Daniel
7/6/2005 12:00:08 PM
Nick Malik [Microsoft] wrote:
> "Robert C. Martin" <unclebob@objectmentor.com> wrote in message 
> news:0jk7c1tj3hhc2oua6pe60ulk719pr41sd1@4ax.com...
> 
>>Whether there are philosophies of software development is irrelevant;
>>though I think there may be.  My point is that OO has often been
>>touted as a "grand overarching philosophy" having more to do with
>>life, the universe, and everything, than with software.
> 
> 
> There are some folks who still claim that the world is flat.  We don't talk 
> about them as though they were a significant part of scientific thought.  In 
> my opinion, it is fair to include "grand OO philosophers" in this category. 
> Anyone who says that OO is a grand philosophy is ignorant of both software 
> engineering and philosophy.  Kant, Camus, Sartre... now that's philosophy. 
> We are on the same page on this one.

I'm not.

I'll see your "Kant, Camus, Sartre" with Socrates, Plato, and Aristotle. 
  Don't worry though, none of us paid attention in philosophy class.

<http://gagne.homedns.org/~tgagne/articles/TheObjectOrientedParadigm.pdf>

I once listened to a great lecture on Plato and was fascinated when 
midway through it became a computer science lecture -- unknown to all, 
even the professor, but the programmers in the audience.

As to your later comments, Plato's theory was working towards a 'grand 
analysis.'  I see no reason to shy away from "OO is a philosophy about 
everything," because we're in good company.
0
tgagne (596)
7/6/2005 2:28:27 PM
On Tue, 5 Jul 2005 08:47:51 -0700, "Nick Malik [Microsoft]"
<nickmalik@hotmail.nospam.com> wrote:

>AOP is an interesting innovation.  As noted in another response, I did not 
>interpret your acronyms correctly before posting my response, so it wasn't 
>terribly coherent in retrospect.
>
>I do think that the fundamental ideas of Reflection added to the direction 
>that became AOP, especially since it is largely enabled, in Java and now in 
>C#, by the use of reflective mechanisms in the language.
>
>Cross cutting concerns , and the weaving of the modules with injection at 
>the fundamental level, are pretty clever ideas.  I don't think that they 
>would be obvious, or easily learned, unless folks were already practicing 
>reflection and using injection.

The history of AOP would be interesting to mine.  Your view of it, and
my view of it, are very different.  I haven't done the research, so I
don't know which is more valid.  I will say that from my point of view
AOP and reflection aren't strongly related.  The original weavers did their work by
weaving source code together.  Pointcuts were treated rather like
elaborate macros with very detailed insertion specifications.  Of late
the weavers have taken to weaving byte-codes together.  This is
certainly an improvement because it prevents a massive recompile of
the entire system when aspects are changed.

BTW, I fear that AOP in its current form is faltering.  The pointcuts
(am I using the right vernacular?) use something like regular
expression matching against the names of the methods and classes that
they insert code into.  This means that the aspects must have a very
low level dependency on the rest of the code, and that the system is
tied together by fragile naming conventions.  This might be solvable
through metadata like C# attributes and Java's new syntax.
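
To make that name-based coupling concrete, here is a toy sketch -- plain
Java reflection standing in for a real weaver, all names invented -- of
advice selected by a method-name pattern.  Rename save() to persist() and
the advice silently stops firing; annotation-based matching would replace
that name dependency with explicit metadata.

import java.lang.reflect.Method;
import java.util.regex.Pattern;

// Toy sketch only: a regex over method names plays the role of a pointcut.
public class NameBasedPointcutDemo {

    public static class OrderDao {
        public void save() { System.out.println("order saved"); }
    }

    // the "pointcut": any public no-arg method whose name starts with "save"
    static final Pattern POINTCUT = Pattern.compile("^save.*");

    static void invokeWithAdvice(Object target) throws Exception {
        for (Method m : target.getClass().getMethods()) {
            if (POINTCUT.matcher(m.getName()).matches()
                    && m.getParameterCount() == 0) {
                System.out.println("before advice: " + m.getName()); // the woven-in behavior
                m.invoke(target);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        invokeWithAdvice(new OrderDao());
    }
}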



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/6/2005 2:32:16 PM
Daniel Parker wrote:
> "krasicki" <Krasicki@gmail.com> wrote in message
> news:1120611204.476214.162240@f14g2000cwb.googlegroups.com...
>
> > Look, OOD is about designing theoretical systems ...
>
> Just out of curiosity, what theory?
>
> -- Daniel

Well Daniel,

Not all systems consist of a web front end that accesses data from a
well-established database though many do.

OOD, depending on the tools and techniques you use, allows system
designers to model potential solutions in numerous ways.  If this
tradeoff is made here, this benefit is forthcoming there and so on.
Design is about applying recombinant ideas to solving problems.

Systems are developed to solve problems in effective ways.  Systems are
not developed for the sake of software or to gratify the provincial and
esoteric ego needs of the employees charged with getting the system
implemented.

I'll give you a good example.  A Hilton was just built in Hartford and
the building went up fast to meet the deadline of the new convention
center being built next door.  All federal guidelines were applied.

So come inspection day, Connecticut's inspectors applied Connecticut
building statutes to the inspection and the building failed!  Now one
could say that the building was built fast, saved money, and looks real
good and boy were the builders proud of it.  Let's call this process
agile.

So sixteen of the rooms were incorrectly built for handicapped access,
an error of three inches per room.  Where to get three inches?  Push
the rooms into the hall and the hall fails.  You can see how this goes.

Another example.  A hospital builds a new wing onto an existing
structure, and these days new hospital wings look like fancy hotels.
Everything is immaculate: grand facades, fancy everything.  The
hospital wing opens without a hitch.

The first bed is rolled down the hall, the elevator button is pushed,
the door opens, the bed is pushed into the elevator as far as it can go,
but the bed still doesn't fit.  Wrong-sized elevator.

These are true stories.  Shouldn't all of the architects of these
buildings have expected change to happen as well?  Same with the
builders.  Maybe build with everything loose so that it can be
reassembled when the next minor detail arises?  Aren't we being told
this is the way things work?

Even carpenters measure before they cut.  Yet, in computer science we
are being told that we should operate as though we are all alcoholics
and take things one day at a time.

Now this philosophy can work in cases where we're building cookie-cutter,
intuitive stuff, but you will need a lot of luck and
extraordinary follow-through to build a complex system this way.

0
Krasicki1 (73)
7/6/2005 3:01:14 PM
Thomas Gagne wrote:

> As to your later comments, Plato's theory was working towards a 'grand
> analysis.'  I see no reason to shy away from "OO is a philosophy about
> everything," because we're in good company.

OO is the shadows of things on the cave wall.

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/6/2005 3:19:01 PM
krasicki wrote:
> Daniel Parker wrote:
> > "krasicki" <Krasicki@gmail.com> wrote in message
> > news:1120611204.476214.162240@f14g2000cwb.googlegroups.com...
> >
> > > Look, OOD is about designing theoretical systems ...
> >
> > Just out of curiosity, what theory?
> >
> > -- Daniel
>
> Well Daniel,
>
> OOD, depending on the tools and techniques you use, allows system
> designers to model potential solutions in numerous ways.  If this
> tradeoff is made here, this benefit is forthcoming there and so on.
> Design is about applying recombinant ideas to solving problems.
>
My question is narrower.  I don't think OOD provides any _theoretical_
guidance for building software.  For example, if I'm writing a system
that must be fault tolerant, I refer to theoretical work on
transactioning; I don't think OOD offers anything analogous to that.
Do you disagree?

Regards,
Daniel Parker

0
7/6/2005 3:41:10 PM
krasicki wrote:
> Daniel Parker wrote:
> 
>>"krasicki" <Krasicki@gmail.com> wrote in message
>>news:1120611204.476214.162240@f14g2000cwb.googlegroups.com...
>>
>>
>>>Look, OOD is about designing theoretical systems ...
>>
>>Just out of curiosity, what theory?
>>
>>-- Daniel
> 
> 
> Well Daniel,
> 
> Not all systems consist of a web front end that accesses data from a
> well-established database though many do.
> 
> OOD, depending on the tools and techniques you use, allows system
> designers to model potential solutions in numerous ways.  If this
> tradeoff is made here, this benefit is forthcoming there and so on.
> Design is about applying recombinant ideas to solving problems.
> 
> Systems are developed to solve problems in effective ways.  Systems are
> not developed for the sake of software or to gratify the provincial and
> esoteric ego needs of the employees charged with getting the system
> implemented.
> 
> I'll give you a good example.  A Hilton was just built in Hartford and
> the building went up fast to meet the deadline of the new convention
> center being built next door.  All federal guidelines were applied.
> 
> So come inspection day, Connecticut's inspectors applied Connecticut
> building statutes to the inspection and the building failed!  Now one
> could say that the building was built fast, saved money, and looks real
> good and boy were the builders proud of it.  Let's call this process
> agile.
> 
> So sixteen of the rooms were incorrectly built for handicapped access,
> an error of three inches per room.  Where to get three inches?  Push
> the rooms into the hall and the hall fails.  You can see how this goes.
> 
> Another example.  A hospital builds a new wing onto an existing
> structure, and these days new hospital wings look like fancy hotels.
> Everything is immaculate: grand facades, fancy everything.  The
> hospital wing opens without a hitch.
> 
> The first bed is rolled down the hall, the elevator button is pushed,
> the door opens, the bed is pushed into the elevator as far as it can go,
> but the bed still doesn't fit.  Wrong-sized elevator.


It's pretty amazing to me that you find anything in common with Agile in 
these scenarios.  They all sound like cases where there was no feedback 
or testing.  Sounds more like plan-driven development to me.

 > These are true stories.  Shouldn't all of the architects of these
 > buildings have expected change to happen as well?  Same with the
 > builders.  Maybe build with everything loose so that it can be
 > reassembled when the next minor detail arises?  Aren't we being told
 > this is the way things work?

Well, the fact is software is malleable.  In fact it is too malleable. 
It isn't hard to change software at all.  All you have to do is type a 
couple of characters in any program and you can break it.  Because that 
is the way that software is, we need tests to give it backbone.

 > Even carpenters measure before they cut.  Yet, in computer science we
 > are being told that we should operate as though we are all alcoholics
 > and take things one day at a time.

The problem is: misunderstanding the material you are working with. 
Code is not wood or concrete.

Michael Feathers
www.objectmentor.com


0
mfeathers2 (74)
7/6/2005 3:42:43 PM

Michael Feathers wrote:
>
> The problem is: misunderstanding the material you are working with.
> Code is not wood or concrete.
>
Some of it's more like putty, some of it's more like clay.  Nobody, not
you, not Kent Beck, not Robert Martin, not Ron Jeffries, has fully
solved that problem.

-- Daniel

0
7/6/2005 3:53:22 PM
Hi Thomas,


"Thomas Gagne" <tgagne@wide-open-west.com> wrote in message 
news:Y4OdnUFqe_QPd1bfRVn-tg@wideopenwest.com...
> Nick Malik [Microsoft] wrote:
>> "Robert C. Martin" <unclebob@objectmentor.com> wrote in message 
>> news:0jk7c1tj3hhc2oua6pe60ulk719pr41sd1@4ax.com...
>>
>>>My point is that OO has often been
>>>touted as a "grand overarching philosophy" having more to do with
>>>life, the universe, and everything, than with software.
>>
>>
>> Anyone who says that OO is a grand philosophy is ignorant of both 
>> software engineering and philosophy.  Kant, Camus, Sartre... now that's 
>> philosophy. We are on the same page on this one.
>
> I'm not.
>
> I'll see your "Kant, Camus, Sartre" with Socrates, Plato, and Aristotle. 
> Don't worry though, none of us paid attention in philosophy class.

My father is a professor of philosophy.  He holds a Ph.D. from London 
University and an Ed.D from Columbia.  I paid attention.

>
> <http://gagne.homedns.org/~tgagne/articles/TheObjectOrientedParadigm.pdf>
>

The paper you cite is interesting.  However, the author makes only one 
conclusion: that we should understand that a connection exists between 
formalism in general and OO design.  It does not make a case for the use of 
OO as a philosophy, only that it grew out of the efforts of an early 
scientist and philosopher.

First off: in the early days of modern thought, science, philosophy, and 
mathematics tended to blend together.  That is to say that the analytical 
underpinnings of all three are related through the necessity of creating 
analytical methods of observation and categorization.  These methods were 
described well by many early thinkers, including the greats that you 
subscribe to.

If you are to look at the modern outcroppings of these strains of thought, 
you will find that the early notions of forms have had their greatest 
influence on the natural sciences, where observation, categorization, and 
analysis are fundamental to our understanding of biology and chemistry. 
They did less to influence the science of mathematics, and their influence 
on philosophy is indirect at best.  As the age of reason begins, science is 
already using these notions to positive effect, so it is natural to adopt 
formalisms into philosophy.  Unfortunately, those that do adopt such 
formalisms beyond simple description and categorization find that they are 
dealing more with medicine and psychology and less with philosophy.  Their 
contributions are not part of the same canon of thought as a result.

It is true that the theory of forms is related to the categorization of 
similarities and differences, as described by Plato.  However, this theory 
forms the basis for all decompositional and observational analysis.  While 
that certainly includes the analysis that leads to well structured OO 
programs, I would posit that it leads to well structured forms in any logic 
system.  There are good examples of structured programming in Pascal that 
can trace their analytical roots to the same theories.  There are good 
examples of mathematical algorithms created for assembler that can do the 
same.

More interestingly, the cited paper does nothing to forward the notion of OO 
as a philosophy in itself.  The author shows that OO inherits the 
fundamental mechanisms of analysis originally described by Plato.  But he 
does not show that object orientation has any positive effect on, as he puts 
it, our understanding of "the world and the nature of knowledge."  It is an 
expression of formal analysis, but he makes no assertion that the expression 
is somehow better or more appropriate than other forms of expression.  He 
claims that it is part of a lineage, but offers no reason to the reader to 
believe that it is anything more than an analytical dead end.  As a result, 
the paper is an analysis with no recognizable conclusion other than to say, 
to others, "isn't this neat?"

I find it interesting that the author cites only himself, even as he 
describes ideas that he attributes to others.  If I were my father, I doubt 
I would have accepted his paper in even a freshman level philosophy course.

> As to your later comments, Plato's theory was working towards a 'grand 
> analysis.'  I see no reason to shy away from "OO is a philosophy about 
> everything," because we're in good company.

What Plato founded, with his grand analysis, was science.  Philosophy uses 
these (and other) methods to analyze human existence, but the evidence it 
leaves behind is not its methods, but its conclusions.  In that, Philosophy 
is a path to analysis of the human "form," with the results being great 
opinions on the fundamental nature of human existence, thought, 
collaboration, society, and relationship with a higher power.  These 
conclusions, though reachable through various methods of analysis, are 
independently interesting.  These conclusions form the fabric of Philosophy. 
Our infant science bears little in common with these philosophical 
conclusions, except the fact that the people who create software are humans 
themselves.  While our science is influenced by these conclusions, we 
contribute nearly nothing to them.

We are in good company to call our notions of logical expression a form of 
science or a form of mathematics.
We are a misfit in the world of Philosophy.

-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
-- 


0
nickmalik (325)
7/6/2005 3:54:31 PM
<hansewetz@hotmail.com> wrote in message 
news:1120579537.908439.237800@g44g2000cwa.googlegroups.com...
> >
> It seems as if your definition of 'OO' is only related to the
> direction of dependencies along the axis of level of abstraction. What
> is the value of this definition? Stating that some software is OO says
> close to nothing about the software.

If you read the papers he provides links to, you will understand the 
discussion.


-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
-- 


0
nickmalik (325)
7/6/2005 3:59:49 PM
Thomas Gagne wrote:
>
> Plato's theory was working towards a 'grand
> analysis.'  I see no reason to shy away from "OO is a philosophy about
> everything," because we're in good company.

It's not meaningful to talk about OO as a philosophy.  Rather, the
philosophical problem would be to critique OO, to investigate the
meaning of its statements.

-- Daniel

0
7/6/2005 4:01:40 PM
Daniel Parker wrote:
> krasicki wrote:
> > Daniel Parker wrote:
> > > "krasicki" <Krasicki@gmail.com> wrote in message
> > > news:1120611204.476214.162240@f14g2000cwb.googlegroups.com...
> > >
> > > > Look, OOD is about designing theoretical systems ...
> > >
> > > Just out of curiosity, what theory?
> > >
> > > -- Daniel
> >
> > Well Daniel,
> >
> > OOD, depending on the tools and techniques you use, allows system
> > designers to model potential solutions in numerous ways.  If this
> > tradeoff is made here, this benefit is forthcoming there and so on.
> > Design is about applying recombinant ideas to solving problems.
> >
> My question is narrower.  I don't think OOD provides any _theoretical_
> guidance for building software.  For example, if I'm writing a system
> that must be fault tolerant, I refer to theoretical work on
> transactioning; I don't think OOD offers anything analogous to that.
> Do you disagree?
>
> Regards,
> Daniel Parker

Most of the sophisticated modeling tools are just that: modeling tools;
you supply the theory.  However, in cases where transactional processes
are well-defined between, say, a database and known SQL transaction
modules of one kind or another, there is nothing stopping a modeler
from creating transaction design objects whose given
attributes include transaction type and other metrics (fault tolerance
under whatever constraints, and so on).

And if enough of these are identified, algorithms to identify the best
transaction for a given situation are certainly feasible.

I have been out of the CASE tool industry for a number of years but I
am aware of very sophisticated efforts to create design composites
that, if they succeed, will be very, very rich in sophistication.  I
have to leave it at that.

However, there is nothing stopping an individual from inventing their
own design object - say, the fault tolerant transaction object.

Make it a notational figure (circle, square).  Map it to more formal
system mechanics (code, module, plug-in, whatever)... and so on.

0
Krasicki1 (73)
7/6/2005 4:03:12 PM
Responding to Martin...

>>>Nobody knows exactly (or even inexactly) what OOA is.   There are a number of
>>>books and papers written about it, but they don't agree.  There is not
>>>even a set of cogent schools of thought.  OOA is a term that we bandy
>>>about with authority, but have no real definition for.
>>
>>You keep repeating this mantra but it is still untrue.
> 
> 
> We disagree.  Indeed, I don't think there is an agreed definition of
> analysis, let alone object oriented analysis.  I have sat in too many
> meetings (at a company that you and I know well) that devolved into 30
> minute arguments about whether a certain technical topic was
> "analysis" or "design".  
> 
> "We shouldn't be talking about that now, it's a design concept."
> "No it's not, it's part of the problem space.  It's analysis."
> "It is not, it's too low level."
> "No, it's critical.  We have to decide it now."
> "No, it can wait, it's just too low level to worry about now."

You should try attending a translation model review involving 
experienced developers.  Whether there is implementation pollution 
present is usually quite clear.  Authors may have blind spots as 
individuals, but they are quick to recognize the problem when it is 
pointed out.  The tricky part lies in eliminating implementation 
pollution, not recognizing it.

<Apocryphal example>
Back in the mid-'80s I had a similar skepticism about being able to 
eliminate implementation decisions from OOA models.  I thought I had 
found an irrefutable example and our group was too inexperienced not to 
agree with me.

Basically it involved a series of operations that were essentially large 
scale processing loops that were nested.  The loops "walked" the 
hardware to initialize its state.  Each loop involved interactions among 
several object state machines.  As it happens the hardware was under 
parallel development and the hardware guys could not yet tell us which 
loop should be the outer one for the fastest execution.  So it seemed 
obvious we had to commit to one "implementation" solution by fixing the 
order of loops and, if that turned out wrong, we would have to go back 
and change it.

[Note that none of us was confused about the fact that hard-wiring the 
order of the loops was an implementation decision.  That's because the 
loop order was a performance issue, not a functional issue.  So the 
problem was finding a way to avoid explicitly resolving it in the OOA. 
(The processing /within/ each loop was, of course, a matter of hardware 
functional requirements.)]

I cornered Mellor at a conference and presented the problem.  It took 
him maybe thirty seconds to recognize that each loop's processing was a 
daisy-chain sequence of events that remained the same regardless of the 
order of the loops.  That meant that the order of loops really came down 
to where the first event in the loop was issued.  It was trivial to 
locate two spots and insert AAL code to parametrically generate the 
right starter event based on configuration data.  We could then supply 
the value of the configuration data later when the hardware guys got 
their act together without touching the OOA model.

It was a substantial blow to the ego after spending many hours over a 
couple of months with some pretty sharp developers to become convinced 
there was no way out and then get blown away in a few seconds.  Today, 
with the benefit of experience, I would probably have no trouble 
recognizing that solution (though it would probably take me a couple of 
minutes rather than 30 seconds).  In all the intervening years I still 
have not seen a situation where one had to incorporate an implementation 
decision in an OOA model.  I have also seen very few situations where 
there was any confusion over whether a decision was implementation or not.
</Apocryphal example>


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
hsl@pathfindermda.com
Pathfinder Solutions  -- Put MDA to Work
http://www.pathfindermda.com
blog: http://pathfinderpeople.blogs.com/hslahman
(888)OOA-PATH



0
h.lahman (3600)
7/6/2005 4:20:44 PM
"Robert C. Martin" <unclebob@objectmentor.com> wrote in message 
news:j5elc1t6jakj9cg1h9dbm70r6uv9ik2n7t@4ax.com...
> On Mon, 4 Jul 2005 13:10:22 -0700, "Nick Malik [Microsoft]"
> <nickmalik@hotmail.nospam.com> wrote:
>
>>Note: I did not say that "Agile" is incompatible with "design."  I believe
>>it is incompatible with "Big Design."
>
> Not quite.  Agile methods involve much more design than "Big Design"
> methods.  However, the design is done on a different schedule.  Design
> is taking place all the way through the project, at every iteration.
> This design is no less rigorous than a big design up front.  Indeed,
> it is *more* rigorous, because each design decision is documented by
> a series of unit tests and acceptance tests that must be written
> *before* the code that makes them pass.

Alas, I'm guilty of attempting to imply a term without doing a good job of 
providing a meaningful definition.  The acronym BDUF is a bit off-putting in 
conversations where the other participant is intentionally ignorant of agile 
concepts.  Please forgive my lack of clarity.

>
>>I hope to have made it clear that I
>>believe that you can (and should) perform MDA on an agile project.  I've
>>done it.  I've seen it done.
>
> MDA is perhaps different from what you think it is.  MDA is the notion
> of drawing diagrams and then automatically converting them to code
> using some kind of translator.  In a true MDA environment you would do
> all of your programming by drawing diagrams.

Yes.  Many tools that are being used for this have the ability to be used 
many times through the process.  You can create the model, hand-modify the 
code, and recreate the model from the code.  This round-tripping is very 
useful in iterative design processes because each innovation can be 
inspected for compliance with the fundamental notion of "does it solve this 
business problem?"  This keeps with the notion of solving the problem when 
it is presented, and not before.  As I understand it, this is fairly key 
to keeping costs low in an agile environment.

I have done this on some small projects and found it to be a useful 
practice.  I'm hoping to do more of this in the future, as I believe it 
makes the "design conversation" more succinct during the sprint planning 
stages.  It also helps to find the "outliers" (objects that were created by 
programmers who didn't understand the design and just shoved something into 
a static utility class to get it out of the way).  These become stories to 
refactor them in the coming iteration.

>
>>It appears that you've been told that agile
>>methods leave no room for design.  My guess is you heard that from a
>>self-proclaimed XP evangelist.  I rank them a notch below most TV Shopping
>>Channel pitchmen in their respect for science, impartiality, or 
>>fundamental
>>integrity.
>
> Nick, I'd like you to name some names.  Who are these self-proclaimed
> XP evangelists who are a notch below...?

Certainly not you, Robert.  The members of the original Agile Alliance have 
(mostly) done a good job of using words that are fair and partial, if 
sometimes a bit strident.  However, there are many "hucksters" out there 
that have read half of a book on XP and then go out preaching about XP with 
no more understanding than a teenage high school dropout attempting to 
lecture me on the mechanics of the supply chain.  I have come across some of 
them.  One offered me a job (which I turned down).  Others have underbid me 
(when I was in the consulting business).  Others have vociferously 
challenged me when I dared mention UML when discussing the notions of OO 
design.  They were not credible, and they gave "agile" a bad name.

There are many people making money off of agile methods who are not as 
upstanding as you are, Robert.  Surely, you have run across some of them 
yourself.  Naming names will only get me into a libel suit.  Their goal is 
to make money, not improve software.

>
> Frankly I don't see any.  There are some folks out there who's
> enthusiasm sometimes gets the better of them, but that's a whole
> different matter.

see above

> On the other hand, there *are* people out there
> making utterly ridiculous negative assertions about XP and Agile.
> Some have even written books about how bad XP is.  It's clear that
> these people have never done XP, don't know much about XP, and don't
> care to know anything other than that they don't like it.  They simply
> bash for the joy of bashing.  THESE are the folks who remind ME of TV
> pitchmen.

When I first heard about agile methods, I was very dubious.  I decided that 
I had two choices: to get defensive or to read more about it.  I took the 
latter.  I've learned a lot and I find myself explaining agile methods to 
people who have mistaken notions about it.  I've used agile methods more 
frequently in the past few years and I expect that will continue.  While I 
am a scrummaster, I don't pretend to be an expert on agile methods.  I defer 
to those who have done more than I have, and I glide under the noise, being 
a change agent along the way.  On the other hand, I have found that simply 
using individual methods as "best practices" (which they demonstrably are) 
gets you in the door with people who otherwise scream and run for cover when 
you use the term "agile" in their presence.  There are ways to change 
organizations that are subtle, but powerful.

I agree that the noise out there is not rational.  I agree that there are 
folks who oppose agile methods with an almost hysterical response mechanism. 
I posit only that there are a few folks on the other side as well.  We need 
to "lower the heat and turn up the light" in all our conversations, 
recognizing both the good and bad, the known and the unknown, in order to 
achieve real change in this crazy profession we've chosen to practice.

I'm on the side of reason.

-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
-- 


0
nickmalik (325)
7/6/2005 4:30:15 PM
Responding to Martin...

> MDA is perhaps different from what you think it is.  MDA is the notion
> of drawing diagrams and then automatically converting them to code
> using some kind of translator.  In a true MDA environment you would do
> all of your programming by drawing diagrams.  

That is the translation view.

However, MDA also supports the elaboration viewpoint where there is no 
or limited code generation.  In that context MDA is focused on providing 
interoperability for the tools the developer employs.  So the round-trip 
tools like Together also fall within the MDA umbrella.  In fact, most of 
the vendors represented in OMG who are active in MDA are round-trip vendors.

[OTOH, the round-trip tools are doing more and more code generation so 
they are evolving towards translation.  Eventually they will be 
indistinguishable.]

Fundamentally MDA is just a framework that standardizes migrating 
information between different representations.  Those representations 
can be at the same level (e.g., XML vs. RDB schemas) or at different 
levels (UML OOA and 3GL code).  The transformation can be automatic, 
manual, or some combination.


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
hsl@pathfindermda.com
Pathfinder Solutions  -- Put MDA to Work
http://www.pathfindermda.com
blog: http://pathfinderpeople.blogs.com/hslahman
(888)OOA-PATH



0
h.lahman (3600)
7/6/2005 4:31:51 PM
> So sixteen of the rooms were incorrectly built for handicapped access,
> an error of three inches per room.  Where to get three inches?  Push
> the rooms into the hall and the hall fails.  You can see how this goes.

We can see how this goes for buildings. But we're not building experts, 
and unless I missed something neither are you. You're asking us to draw 
insight from a foreign domain about which we know *less* than the domain 
we're supposed to apply it to - software. That's the wrong direction to 
operate an intuition pump.

Your rant, while entertaining, is at the level of caricature rather than 
real insight; for insight, read something like Stewart Brand's /How 
Buildings Learn/, which has real stories about architecture.

Most software has the property that if you twiddle one small bit 
incorrectly, an entire system might crash with disastrous consequences. 
(Think off-by-one errors.) As far as I can tell, no building has this 
property - if you remove one brick, even a brick at the very bottom, the 
building stays up.

You're saying that software design should be based in theory. You can't 
have your cake and eat it too - the *process* whereby something is 
designed should certainly take into account the characteristic 
properties of the thing being designed. (Call that meta-design.)

Buildings are brick. Software is text. Brick and text probably call for 
different design processes.

Laurent

0
laurent (379)
7/6/2005 4:40:13 PM
Laurent Bossavit wrote:

> Your rant, while entertaining, is at the level of caricature rather than
> real insight; for insight, read something like Stewart Brand's /How
> Buildings Learn/, which has real stories about architecture.

Or /Notes on the Synthesis of Form/, by Chris Alexander, for the distinction
between "self-conscious" architecture, like Frank Lloyd Wright masterpieces
that suck, and "unself-consciously" humble dwellings that tune and adjust in
iterations, and don't suck.

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/6/2005 5:02:07 PM
On 5 Jul 2005 09:05:37 -0700, hansewetz@hotmail.com wrote:

>I agree with you that DIP is a valuable technique when developing
>software. But so are many other techniques. Encapsulation, polymorphism
>good naming conventions etc. are all useful techniques when developing
>software. Some of these techniques are also classified as being
>'OO' by developers.

True.  However, of all the techniques you mentioned above,
polymorphism is the one that is most strongly identified with OO.
Encapsulation, naming conventions, and other qualities of good
software structure have long been part of software development, even
before OO became popular.

DIP is the design principle that guides the use of polymorphism; and
polymorphism is the language feature that enables DIP.  These two,
working together, are what puts the OO in OOD.
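
A minimal sketch of that pairing, with invented names: the high-level
policy depends only on an abstraction it owns, and the low-level detail
plugs in behind it polymorphically.

// DIP in miniature: the source-code dependency runs from the detail up to
// the abstraction, against the flow of control.
interface MessageSink {                        // abstraction owned by the policy
    void send(String text);
}

class ConsoleSink implements MessageSink {     // low-level detail
    public void send(String text) { System.out.println(text); }
}

class Notifier {                               // high-level policy
    private final MessageSink sink;
    Notifier(MessageSink sink) { this.sink = sink; }    // detail injected, not created here
    void alert(String event) { sink.send("ALERT: " + event); }
}

public class DipDemo {
    public static void main(String[] args) {
        new Notifier(new ConsoleSink()).alert("disk almost full");
    }
}

Swapping ConsoleSink for some other implementation touches nothing in
Notifier; that is the polymorphic seam DIP is there to protect.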

>Techniques for describing someone's perception of the world are also
>valuable in software development. 

Absolutely!  Though this has little to do with OO.  I am much more
interested in the work of Ward Cunningham and Rick Mugridge in
expressing requirements as Tests.  (See www.fitnesse.org)

>Categorizing phenomena into events,
>entities, roles, values, etc. are indispensable techniques when building
>software. Some of these techniques are also classified as 'OO' by
>many developers.

I agree that some folks call these activities OO; but I find that
strange, since these activities have been a common part of software
engineering for a very long time, and predate OO.

>It seems as if your definition of 'OO' is only related to the
>direction of dependencies along the axis of level of abstraction. What
>is the value of this definition? Stating that some software is OO says
>close to nothing about the software.

I disagree.  I think there is a lot of value in precise definitions
that can act as a metric against which to measure software designs.
Given my definition, you can quickly ascertain whether a particular
design is OO or not.  Moreover, there are many benefits that are
associated with this structure.  Dependency Inversion is the primary
mechanism behind independently deployable binary components.

On the other hand, there is little value in using the term OO to
describe all good things that have come out of software development
over the last 40 years.  There is this nasty tendency for developers
to say to themselves:  "Everyone says OO is good.  I am a good
developer.  Therefore what I do (and what I have always done) is OO."
Through this strange logic, every good practice eventually gets
labeled OO.

>Instead of focusing on meaningless definitions of OO, it would be far
>more valuable to focus on important aspects of software development.
>For one thing, far more effort should be put into techniques for how to
>describe problems, not how to code them.

I agree that describing problems is very important, and it is an
active area of my own research (www.fitnesse.org).  On the other hand,
I think that we don't put enough effort into techniques for how to
code problems.  Far too many problems are related to poor coding
structure.  Whole systems, and whole development teams, are brought to
their knees because their code has become so tangled and impenetrable
that it cannot be cost-effectively maintained.

-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/6/2005 5:16:01 PM
On 5 Jul 2005 17:53:24 -0700, "krasicki" <Krasicki@gmail.com> wrote:

>Robert C. Martin wrote:
>> On Mon, 4 Jul 2005 13:10:22 -0700, "Nick Malik [Microsoft]"
>> <nickmalik@hotmail.nospam.com> wrote:
>>
>> >Note: I did not say that "Agile" is incompatible with "design."  I believe
>> >it is incompatible with "Big Design."
>>
>> Not quite.  Agile methods involve much more design than "Big Design"
>> methods.  However, the design is done on a different schedule.  Design
>> is taking place all the way through the project, at every iteration.
>> This design is no less rigorous than a big design up front.  Indeed,
>> it is *more* rigorous, because each design decision is documented by
>> a series of unit tests and acceptance tests that must be written
>> *before* the code that makes them pass.
>
>We know, Bob, we know.  

Courteous debate reflects better on the participants than
condescension.  Are you sure that you are in a position to take a
superior tone?

>We've heard this tune many times.  The OP asked
>about OOD.  

Yes, and YOU changed the topic to XP.  You can expect me to respond
whenever you do that.

>This is a perfect example of what it isn't.

We disagree, and that's fine.  Though I think it would be better if
you expressed your opinions as opinions, instead of as hard facts.

>What you are describing is bottom up, seat-of-the-pants programming
>with perfunctory salutes to an unwitting user all of whom pretend they
>have immunity from reality.  And as long as the inmates are in charge
>they're very happy subscribing to this stuff.  *You mean, NO BOSSES?*

No, no, and no.  This is not seat-of-the-pants, it is not an
egalitarian "no-bosses" scheme, and it is strongly tied to reality.
Nor is it bottom up.  Agile methods start with requirements and work
down.  

You are making claims of fact without sure knowledge of what you are
talking about.  You are strongly misrepresenting what XP and Agile
are.  

>Look, OOD is about designing theoretical systems that may get
>implemented based on analysis, testing, cost, and security factors.
>That's rigor!

That's a strange definition of rigor.  Rigor means stiff, disciplined,
inflexible.  Now consider just the TDD rules of XP.  No production
code can be written until there is a failing acceptance test, and
failing unit tests.  That's stiff, disciplined, and inflexible.
*That's rigor!*  And that's just one aspect of Agile/XP.
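
For concreteness, here is a tiny red/green sketch of that rule in JUnit 3
style (invented names; assumes junit.jar on the classpath).  The test is
written first and fails; only then is the small piece of production code
underneath written, and only enough of it to make the test pass.

import junit.framework.TestCase;

// Step 1: the failing test comes first.
public class CounterTest extends TestCase {
    public void testStartsAtZeroAndIncrements() {
        Counter c = new Counter();
        assertEquals(0, c.value());
        c.increment();
        assertEquals(1, c.value());
    }
}

// Step 2: production code written after the test, just enough to pass it.
class Counter {
    private int value = 0;
    public void increment() { value++; }
    public int value()      { return value; }
}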

>Rube Goldberg software development was not invented by XP but it is
>given certification status thanks to XP.  

Why do you insist on mischaracterization without evidence?  XP does
not create Rube Goldberg structures.  Take a look at the source code
of FitNesse (www.fitnesse.org) if you'd like to see the kind of
architecture and code that was created using XP. 

Really, you need to do some actual research before you spout your
incorrect opinions as facts.

-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/6/2005 5:28:49 PM
On Wed, 6 Jul 2005 08:00:08 -0400, "Daniel Parker"
<danielaparker@spam?nothanks.windupbird.com> wrote:

>I think the criticism is of writing which is largely polemical, and I think 
>it would be fair to put much of the writing by XP proponents in this 
>category.  A good test would be to check whether the author treats XP as if 
>it were the one and only methodology that has no problems, and a good number 
>of articles on the subject appear to fall into this category.  

Does it?  Oh I agree that there are some over-enthusiastic blurbs
written here and there.  But where are the serious articles and books
that treat XP as a discipline that has no problems?  Certainly the
primary XP books don't fall into that category.  Nor do the articles
by the original proponents.  While I would agree that there have been
certain excesses of enthusiasm amongst the original proponents (and I
have been as guilty as any), none of us has claimed silver-bullet
status for XP/Agile.

>It's hard to 
>find blogs that dissect both successful and unsuccessful XP projects and 
>systematically discuss the consequences of the various practices, which is 
>what you'd expect if the author wanted to be taken seriously. 

I'm astounded.  There has been a rather large amount of critical and
thoughtful writing about XP/Agile in the magazines and newsgroups.
Folks have tried it this way and that, and have commented on how the
practices apply to them and their situations.  Whole books have been
published with the research data on certain practices. 

> Instead, you 
>get pictures of happy programmers on the cover of Software Development 
>magazine, links to projects that seem to go dead after a while, and links to 
>puff pieces.  

You also get links to projects that are succeeding and continue to
succeed, as well as articles in Dr. Dobbs about the problems of
XP/Agile, and links to critical and thoughtful discussions on many
blogs and newsgroups. 

>None of this is anti-XP; it's just that evangelical writing 
>tends to come across as a little bit silly when presented to a professional 
>audience.

So does anti-writing; especially when it isn't based on any kind of
facts.
-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/6/2005 5:42:08 PM
Robert C. Martin wrote:

> Does it?  Oh I agree that there are some over-enthusiastic blurbs
> written here and there.  But where are the serious articles and books
> that treat XP as a discipline that has no problems?  Certainly the
> primary XP books don't fall into that category.

    Strong the Dark Side Is

TDD's force can cause problems. Used alone, without checks and balances from
code reviews and feature reviews, frequent testing can help add many useless
features, providing a very high apparent velocity. Used incompletely, with
huge gaps between tests, TDD can reinforce bad code.

Some folks write a few tests, then write code and "refactor" for a long
time, without frequent testing. These behaviors add back the problems that
TDD helps a healthy process avoid.

TDD applies our Agile "headlights metaphor" in miniature. Imagine headlights
that can only strobe, not shine continuously. Each time you hit the test
button, you get a glimpse of your implementation's  road ahead. So to change
direction, you must test more often, not less.

Teach your colleagues to write tests on code you understand, and learn to
write tests they understand. This learning begins as a team collaborates, at
project launch time, to install and use a testing framework.

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand



0
phlip_cpp (3852)
7/6/2005 6:16:28 PM
On 5 Jul 2005 18:26:36 -0700, "krasicki" <Krasicki@gmail.com> wrote:

>Robert C. Martin wrote:

>> As for the audience for XP evaporating, I think you need to actually
>> check your facts instead of stating your opinion AS fact.  There is
>> still a large and growing audience for XP.
>
>There is still a large and growing audience for the movie, Plan 9 from
>Outer Space.  I'm not holding my breath that it will be reconsidered
>for an Oscar.

You are watching too much "House". 
>
>>
>> >So, to be honest it is called
>> >agile to remove the tarnish of the extreme labeling AND to emphasize
>> >peppiness rather than dwell on the ever-present short-comings of the
>> >pseudo-methodology.
>>
>> To be honest, you weren't there.  All you have are opinions.  I have
>> no problem with you expressing your opinions, but I suggest you
>> represent them as opinions as opposed to fact.
>
>The posters here can google the 'fact' that a number of XP proponents
>bemoaned ever calling the methodology 'extreme' because it had become
>such a loaded phrase culturally and politically.

Wrong fact.  The fact I was disputing was that the audience for XP had
evaporated.  It has not.  

>  Tell me you go into
>conservative insurance, banking, and financial services meetings
>emphasizing the extreme nature of your methodology or the
>revolutionary (enterprise) culture shock they entail.

Sometimes, it depends on whether the folks there have developed an
unreasoned fear of the word "extreme".  
>
>Who's being deceitful here?

Nobody except those who offer opinions as fact.

>> Facts would be useful here.  My experience has shown that agile
>> techniques strongly prepare software for change.
>
>My problem is that they should strongly prepare software for long-term
>production activity because the software conforms to spec and QA.

XP demands that, every week, the software pass tests written by QA and
Business.  These tests specify the system both functionally and
non-functionally.  They establish exactly the criterion you mention
above; and they do so unambiguously and repeatably.

>There is no reason to predict change if the job is done right.

That is the silliest thing I've seen you write.  Even perfect systems
must change as the world changes around them.  Indeed, perfection
itself is a moving target.  A system that meets all stated
requirements today, will be sub-optimal tomorrow because the world
will change around it.  And you know it.  Nobody could be in this
business without the truth of that being ground into his or her bones.

>> I have seen
>> significant changes easily propagate through systems that were built
>> using agile techniques.  I have also seen non-agile projects falter
>> and stall when changes needed to be applied.
>
>Is this mildly misleading or are we pretending the audience for this
>discussion are idiots?

Neither.  It's simply the truth.  I suppose that the statement could
be considered misleading because I did not say that I have seen the
opposite.  I didn not feel the need to be balanced because I was
simply refuting your Dean-isms that XP leads to Rube Goldberg,
seat-of-the-pants, non-rigorous, systems.

>> >The agile practices are no more
>> >adept at change than anything else.
>>
>> Again, facts would be useful.
>
>Years ago I chased all of you around the proverbial 'fact-checking'
>bush and got nowhere.  

"All of us"?  Anyway, I'm not sure I understand your point.  Let's for
the moment say that your statement is accurate, and "all of us" did,
in fact, avoid the facts.  Is it your argument that you are therefore
relieved of the standard you tried to hold us to?  In any case, I
don't know what your "years ago" reference is about.  As for facts,
there are plenty out there (both positive and negative), if you are
willing to do some due diligence prior to debate.
>
>I agree facts would be useful. After so many years, where are your
>facts?

What would you like to know?  I can point you to both successful and
failed XP projects.  I can point you to articles written by companies
claiming huge productivity and quality benefits.  I can point you to
research studies, both positive and negative.  

And, in fact, with a little tiny bit of elbow grease you could find
them yourself, because they are all freely available on the net, and
respond nicely to Google searches.

>> I can provide a simple counter fact.
>> Having a large batch of unit tests and acceptance tests that can be
>> run against the system in a matter of minutes, makes it much easier to
>> make changes to that system simply because it's easier to verify that
>> the change hasn't broken anything.
>
>And what design do you contrast the running system to to know that it
>is doing what it is intended to do?  Or is your assertion that the
>running design is infallible?

I presume you are referring to the functional design; i.e. the design
of the requirements.  We contrast the running system against the
design specified by the acceptance tests and unit tests.  

>>
>> And here's an opinion, backed by a lot of observation and experience:
>> Writing tests first forces a design viewpoint that strongly encourages
>> decoupling, and that therefore fosters change.
>
>My goal is not to foster change.  And bad tests, testing faulty
>assumptions yield successful test results.  Without a well documented
>design that exposes such flaws you have no metric to evaluate the
>quality of what you are doing.  But let's not dwell on quality.

Tests *are* documents.  Bad tests are bad documents.  Bad documents
improperly specify the system.  Executable tests, written in two forms
(unit and acceptance) are a very effective way to eliminate most bad
specifications.

>> Finally, here are some other observations from various teams that I
>> have coached.  Customers are very happy that their input is heard
>> early and often.  Executives love the fact that real progress is
>> measured on a regular (weekly) basis, and that stakeholders are
>> providing feedback in real time.  All these things promote change
>> IMHO.
>
>What promotes confidence that the thing works correctly?  Bells,
>whistles, and favorite colors?

Users observing and using the system from iteration to iteration, and
release to release, while providing continuous feedback.

>> >What agile sells as response to
>> >change is really immedite gratification.
>>
>> There is nothing wrong with immediate gratification so long as nothing
>> else is lost.  Indeed, so long as nothing else is lost, immediate
>> gratification is better than deferred gratification.  The evidence
>> suggests that nothing else is lost.  Indeed, the evidence suggests
>> that the systems turn out *better*.
>
>What evidence and how was this evidence accumulated?  All billable
>hours accounted for?

I was thinking specifically of the work we've been doing on FitNesse,
and the work I've seen on JUnit, and Eclipse, as well as the systems
that I have seen in my role as a consultant and coach.

>> This shouldn't be a big surprise.  Any control system works better
>> when you shorten the feedback loops.
>
>Only if the feedback makes sense.

That statement gives you an opportunity to say something concrete as
opposed to amorphous disparagements.  What, in particular, about the
feedback loops in XP, doesn't make sense?  I suggest you do a bit of
research on just what those feedback loops are, and what control
mechanisms XP employs around those feedback loops.

>> Agile Methods are NOT a mad rush to functionality.  They are not
>> dotcom stupidity.  Indeed, the agile methods value high quality code
>> and high quality designs more than any other methods I know of.
>> Consider rules such as "no duplicate code", "write tests before code",
>> "Don't let the sun set on bad code", etc, etc.  There are very strong
>> values that are backed up by disciplines.
>
>Parse the sentence.  Everything you value is code-centric.  Open your
>mind to OOD.

I agree that I put a lot of value on code.  Code is the medium in
which I work, and in which all software systems are eventually built
(by definition).  As such, I think that it is very appropriate to
value code.  Not that I don't value requirements, I do.  Indeed, I put
a lot of research effort into finding better ways to gather, express,
and refine requirements (e.g. www.fitnesse.org).  

Open my mind to OOD?  I've been writing books and articles about OOD
for over ten years.  I was an early adopter, and have worked hard to
advance the state of the art.  I think my mind is open to OOD; though
I am always ready to learn something new.

But I'll turn this around on you.  Open your mind to XP.  For a very
long time you have posted negative statements on this newsgroup that
show that you know very little about it.

>Your argument is not compelling today any more than it was many years
>ago.

Unfortunately yours are no more informed than they were years ago.



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/6/2005 8:26:50 PM
On 5 Jul 2005 07:01:00 -0700, "krasicki" <Krasicki@gmail.com> wrote:

>Given that *XP practices* are largely rebranded existing good practice

Wait, I thought you said that XP was seat-of-the-pants, Rube Goldberg,
undisciplined, etc.  Now it's existing good practice?  

>I will assert that no such push exists.  I submit to you that XP
>advocates are simply looking outside their shell and discovering that
>good programming practices existed despite their claims.  

OK, so is your complaint more about XP *advocates* and less about
XP itself?  

>But failing
>to acknowledge that, XP advocates, as usual, run ahead of the parade
>claiming credit for the celebration.

I think that's interesting since, from the very start, "XP Advocates"
have said that the practices of XP are based on previous best
practices.  
>
>Project Mercury would have never gotten off the ground if it had been
>developed in a financial services, insurance, banking, or commercial
>enterprise using *agile* methodologies.

The software for the Mercury Space Capsule was written iteratively.
The iterations were a day long.  Unit tests were written in the
morning, and made to pass in the afternoon.

>Let's not sell kool-aid here.  There are plenty of places wallowing in
>their own fecal ideas that will get on these newsgroups and testify how
>good it feels - come join us.  Be careful not to sound like them.

By the same token, I advise you to try to sound a bit less like Howard
Dean.  If you want to engage in a civilized debate over XP, then get
your facts together, and have at it.  But spewing emotional baggage
around benefits nobody.


-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/6/2005 8:27:21 PM

Robert C. Martin wrote:
> On 2 Jul 2005 10:55:05 -0700, "topmind" <topmind@technologist.com>
> wrote:
>
> >In my domain one often cannot know ahead of time what will change.
>
> It's not so much a matter of knowing what will change, or even how it
> will change.  It's a matter of recognizing that certain things will
> change at a different rate than others.  For example, report formats
> will change at a different rate than business rules.

I am not sure what you mean. Both change, often at different
unpredictable rates.

> GUI layout
> will change at a different rate than database schemae.

Yes, but one cannot say in *advance* what will change faster.

>
> You don't even have to know whether one will change more frequently
> than the other.  You just have to be able to make a reasoned guess
> that they will change at different rates and for different reasons.

Do you mean knowing the actual reason and rate? Or just knowing they
will be different?

>
> We try not to couple report formats to business rules because it would
> be a shame to inadvertently break the business rules by moving a
> column on a report.

Please clarify. I can think of situations where relating them may save
time and situations where relating them would cause headaches.

> We try to decouple the GUI layout from the
> database schemae because it would be a shame to crash the GUI when
> adding a new column to the database.

On the flip side it is sometimes nice to have a column *automatically*
appear in the CRUD (edit screens) realm so that we don't have to make
the same column addition in two or more different places. There is no
One Right Coupling decision here. Adding new columns and having to sift
through code to manually make that addition to multiple spots can be
time-consuming.

This is one reason I like data dictionaries: describe columns in one
and only one place and have whatever needs that info use it. I agree it
is not always that simple because it is subject to the 80-20 or 90-10
rule where 10% of the time we need a custom, local tweak that deviates
from the "standard" behavior. If we add a new data dictionary
attribute/flag for each exception (deviation), we have a big mess
(large interface) after a while. This is true with any "generic"
framework whether it be via OO, procedural, FP, etc.

(The OO equiv. of data dictionaries is a Field Object, by the way.)
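
A rough sketch of that idea, with invented names: each column is described
once, in a single dictionary entry (the Field Object flavor), and both the
edit screen and the report header are generated from the same entries --
subject to the 80-20 caveat above that per-screen exceptions eventually
demand extra flags or local tweaks.

import java.util.Arrays;
import java.util.List;

public class DataDictionaryDemo {

    static final class Field {                       // one column, described once
        final String name; final String label; final int width; final boolean required;
        Field(String name, String label, int width, boolean required) {
            this.name = name; this.label = label; this.width = width; this.required = required;
        }
    }

    static final List<Field> CUSTOMER_FIELDS = Arrays.asList(
        new Field("id",    "Customer ID", 12, true),
        new Field("name",  "Name",        40, true),
        new Field("phone", "Phone",       15, false));

    static void renderEditScreen(List<Field> fields) {      // CRUD screen
        for (Field f : fields)
            System.out.printf("%s%s: [%d chars]%n",
                f.label, f.required ? "*" : "", f.width);
    }

    static void renderReportHeader(List<Field> fields) {    // report
        for (Field f : fields)
            System.out.printf("%-" + f.width + "s", f.label);
        System.out.println();
    }

    public static void main(String[] args) {
        renderEditScreen(CUSTOMER_FIELDS);   // new columns show up here automatically...
        renderReportHeader(CUSTOMER_FIELDS); // ...and here, from the same definition
    }
}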

BTW2, I would like to see you rework your "coupling" concept into a
change analysis and change pattern analysis focus. I think many would
find that more useful and subject to more objective observation and
metrics. One may disagree about the frequencies of certain changes (as
we seem to do), but the impact on code per change pattern is fairly
objective. Thus, the discipline can be divided into frequency analysis
and impact analysis, with the latter being more documentable thus far.

You may be well-positioned for this because most OO authors seem to
focus on "mental models" of the real world while you focus more on code
structure, which is more based on concrete,
western-reductionalist-style analysis instead of the prevalent OO fuzzy
eastern style that drives me up the wall. (Not that eastern style is
"bad", just less analyzable at this stage in history.)

>
>
> -----
> Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com

-T-

0
topmind (2124)
7/6/2005 11:11:38 PM
Phlip wrote:
> 
> OO is the shadows of things on the cave wall.

:-)
0
tgagne (596)
7/6/2005 11:29:28 PM
On 6 Jul 2005 16:11:38 -0700, "topmind" <topmind@technologist.com>
wrote:

>
>
>Robert C. Martin wrote:
>> On 2 Jul 2005 10:55:05 -0700, "topmind" <topmind@technologist.com>
>> wrote:
>>
>> >In my domain one often cannot know ahead of time what will change.
>>
>> It's not so much a matter of knowing what will change, or even how it
>> will change.  It's a matter of recognizing that certain things will
>> change at a different rate than others.  For example report formats
>> will change at a different rate than business rules.
>
>I am not sure what you mean. Both change, often at different
>unpredictable rates.

Exactly.  They change at different rates.  If we couple them, we will
be forced to make changes to one because of the other.

>> GUI layout
>> will change at a different rate than database schemae.
>
>Yes, but one cannot say in *advance* what will change faster.

Sometimes you can, and sometimes you can't; but it doesn't matter.
The issue is that they change for different reasons.

>Do you mean knowing the actual reason and rate? Or just knowing they
>will be different?

The latter.

>> We try not to couple report formats to business rules because it would
>> be a shame to inadvertently break the business rules by moving a
>> column on a report.
>
>Please clarify. I can think of situations where relating them may save
>time and situations where relating them would cause headaches.

I think that was pretty clear.  If we couple the report format to the
business rules, (for example, by doing computations at the same time
that we are generating the report) then a change to the format of the
report will break the business rules.  Or rather, when you change the
report format you'll have to make similar changes to the structure of
the business rule algorithm.
>
>> We try to decouple the GUI layout from the
>> database schemae because it would be a shame to crash the GUI when
>> adding a new column to the database.
>
>On the flip side it is sometimes nice to have a column *automatically*
>appear in the CRUD (edit screens) realm so that we don't have to make
>the same column addition in two or more different places. There is no
>One Right Coupling decision here. Adding new columns and having to sift
>through code to manually make that addition to multiple spots can be
>time-consuming.

Yes, it depends.  If we are writing a program to throw away in a day
or two, then we might take a short-cut like that.  On the other hand,
if we are developing a system that must survive through years of
changing requirements, then coupling the GUI to the Schema is suicide;
and the time you might save in so doing is a false economy.
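
For the decoupled direction, a minimal sketch (hypothetical names; a
plain map stands in for a database row, so no particular database API
is assumed): the GUI renders a small view model, and only the mapping
code knows the schema's column names, so a schema change is absorbed
in one place and cannot crash the screen.

import java.util.LinkedHashMap;
import java.util.Map;

public class DecoupledGuiDemo {

    // What the screen knows about: labels and values, nothing schema-shaped.
    record CustomerView(Map<String, String> fields) {}

    // Mapping layer: the only place that knows the table's column names.
    static CustomerView fromRow(Map<String, Object> row) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("Name",    String.valueOf(row.get("cust_name")));
        fields.put("Balance", String.valueOf(row.get("balance")));
        // A column added to the table but not mapped here is simply ignored;
        // the GUI neither shows it nor breaks.
        return new CustomerView(fields);
    }

    static void render(CustomerView view) {
        view.fields().forEach((label, value) ->
                System.out.printf("%-10s: %s%n", label, value));
    }

    public static void main(String[] args) {
        Map<String, Object> row = Map.of(
                "cust_name", "Acme Ltd",
                "balance", 1204.50,
                "new_column", "ignored");
        render(fromRow(row));
    }
}

The trade-off, of course, is that every new column which should be
shown needs one more line in fromRow.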



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/7/2005 12:11:38 AM
Robert C. Martin wrote:
> On 5 Jul 2005 18:26:36 -0700, "krasicki" <Krasicki@gmail.com> wrote:
>
> >Robert C. Martin wrote:
>
> >> As for the audience for XP evaporating, I think you need to actually
> >> check your facts instead of stating your opinion AS fact.  There is
> >> still a large and growing audience for XP.
> >
> >There is still a large and growing audience for the movie, Plan 9 from
> >Outer Space.  I'm not holding my breath that it will be reconsidered
> >for an Oscar.
>
> You are watching too much "House".
-snip-
>
> Wrong fact.  The fact I was disputing was that the audience for XP had
> evaporated.  It has not.

Nor has it significantly grown. And you will note that I made no
assertion that it had evaporated.  I was making a point that no matter
how bad something might be, even bad has its audience.

>
> >  Tell me you go into
> >conservative insurance, banking, and finacial services meetings
> >emphasizing the extreeme nature of your methodology or the
> >revolutionary (enterprise) culture shock they entail.
>
> Sometimes, it depends on whether the folks there have developed an
> unreasoned fear of the word "extreme".
> >
> >Who's being deceitful here?
>
> Nobody except those who offer opinions as fact.

And what about those who are deceptive with facts?

>
> >> Facts would be useful here.  My experience has shown that agile
> >> techniques strongly prepare software for change.
> >
> >My problem is that they should strongly prepare software for long-term
> >production activity because the software conforms to spec and QA.
>
> XP demands that, every week, the software pass tests written by QA and
> Business.  These tests specify the system both functionally and
> non-functionally.  They establish exactly the criterion you mention
> above; and they do so unambiguously and repeatably.

Twaddle.  One of the well-known problems with OOD in general is that
keeping complex system functional specs up to date is a full-time job.
I do not believe for a minute that comprehensive re-analysis of
interim designs can be or is being performed, let alone analysis of
what the system is NOT doing.

Tell me this is a fact.

>
> >There is no reason to predict change if the job is done right.
>
> That is the silliest thing I've seen you write.  Even perfect systems
> must change as the world changes around them.  Indeed, perfection
> itself is a moving target.  A system that meets all stated
> requirements today, will be sub-optimal tomorrow because the world
> will change around it.  And you know it.  Nobody could be in this
> business without the truth of that being ground into his or her bones.

You may find this hard to believe but systems written thirty years ago
are still in production working fine.  The systems are occasionally
upgraded and maintained but 2 + 2 is still 4.  And lots of business
functionality is just that straightforward.

>
> >> I have seen
> >> significant changes easily propagate through systems that were built
> >> using agile techniques.  I have also seen non-agile projects falter
> >> and stall when changes needed to be applied.
> >
> >Is this mildly misleading or are we pretending the audience for this
> >discussion are idiots?
>
> Neither.  It's simply the truth.  I suppose that the statement could
> be considered misleading because I did not say that I have seen the
> opposite.  I did not feel the need to be balanced because I was
> simply refuting your Dean-isms that XP leads to Rube Goldberg,
> seat-of-the-pants, non-rigorous, systems.

Oh, let me get this straight, *I* made you be misleading.  Your honor,
he made me do it - him and Dean - not to mention, uh hm, Rube.

>
> >> >The agile practices are no more
> >> >adept at change than anything else.
> >>
> >> Again, facts would be useful.
> >
> >Years ago I chased all of you around the proverbial 'fact-checking'
> >bush and got nowhere.
>
> "All of us"?  Anyway, I'm not sure I understand your point.  Let's for
> the moment say that your statement is accurate, and "all of us" did,
> in fact, avoid the facts.

Finally, some fresh air.

> Is it your argument that you are therefore
> relieved of the standard you tried to hold us to?

Are you asking that *moi* should provide facts despite your ability to
cleverly dodge the -cough- minor issue yourself? (satire, lest I be
quoted literally) I'll have you know that I am not beneath being agile,
slippery, and dodgy all on my own!  HARRUPH! The nerve of some people.
(end satire)

> In any case, I
> don't know what your "years ago" reference is about.  As for facts,
> there are plenty out there (both positive and negative), if you are
> willing to do some due diligence prior to debate.

If you google hard enough you'll note that I was one of the original
critics who provided those facts oh so many years ago.  So
discomforting did some of your colleagues become that they moved their
traveling sideshow to private yahoo group discussions.

> >
> >I agree facts would be useful. After so many years, where are your
> >facts?
>
> What would you like to know?  I can point you to both successful and
> failed XP projects.  I can point you to articles written by companies
> claiming huge productivity and quality benefits.  I can point you to
> research studies, both positive and negative.
>
> And, in fact, with a little tiny bit of elbow grease you could find
> them yourself, because they are all freely available on the net, and
> respond nicely to Google searches.
>
> >> I can provide a simple counter fact.
> >> Having a large batch of unit tests and acceptance tests that can be
> >> run against the system in a matter of minutes, makes it much easier to
> >> make changes to that system simply because it's easier to verify that
> >> the change hasn't broken anything.
> >
> >And what design do you contrast the running system to to know that it
> >is doing what it is intended to do?  Or is your assertion that the
> >running design is infallible?
>
> I presume you are referring to the functional design; i.e. the design
> of the requirements.  We contrast the running system against the
> design specified by the acceptance tests and unit tests.

And you know perfectly well that that avoids the point of the question.
 The acceptance tests and unit tests are little more than the narrative
design of what's taken place right or wrong.  How do you distinguish
right from wrong?  What's the metric unit?

> >>
> >> And here's an opinion, backed by a lot of observation and experience:
> >> Writing tests first forces a design viewpoint that strongly encourages
> >> decoupling, and that therefore fosters change.
> >
> >My goal is not to foster change.  And bad tests, testing faulty
> >assumptions yield successful test results.  Without a well documented
> >design that exposes such flaws you have no metric to evaluate the
> >quality of what you are doing.  But let's not dwell on quality.
>
> Tests *are* documents.  Bad tests are bad documents.  Bad documents
> improperly specify the system.  Executable tests, written in two forms
> (unit and acceptance) are a very effective way to eliminate most bad
> specifications.

Tea leaves strained at the bottom of tea cups can be considered
documentation as well.  The rest of your assertion makes no sense at
all.  Bad things happen.  The way to avoid bad things is to make sure
you write good things.

Gotcha.

> >> Finally, here are some other observations from various teams that I
> >> have coached.  Customers are very happy that their input is heard
> >> early and often.  Executives love the fact that real progress is
> >> measured on a regular (weekly) basis, and that stakeholders are
> >> providing feedback in real time.  All these things promote change
> >> IMHO.
> >
> >What promotes confidence that the thing works correctly?  Bells,
> >whistles, and favorite colors?
>
> Users observing and using the system from iteration to iteration, and
> release to release, while providing continuous feedback.

How do they ensure their evaluations are correct?  And if their bonuses
depend on the expeditious delivery of a system, how reliable is that
feedback?

> >> >What agile sells as response to
> >> >change is really immedite gratification.
> >>
> >> There is nothing wrong with immediate gratification so long as nothing
> >> else is lost.  Indeed, so long as nothing else is lost, immediate
> >> gratification is better than deferred gratification.  The evidence
> >> suggests that nothing else is lost.  Indeed, the evidence suggests
> >> that the systems turn out *better*.
> >
> >What evidence and how was this evidence accumulated?  All billable
> >hours accounted for?
>
> I was thinking specifically of the work we've been doing on FitNesse,
> and the work I've seen on JUnit, and Eclipse, as well as the systems
> that I have seen in my role as a consultant and coach.

Yes. Software written for developers, not commercial applications with
budgets, deadlines, interoperability issues with legacy systems still
using screen scraping techniques.

You live an enchanted life.

> >> This shouldn't be a big surprise.  Any control system works better
> >> when you shorten the feedback loops.
> >
> >Only if the feedback makes sense.
>
> That statement gives you an opportunity to say something concrete as
> opposed to amorphous disparagements.  What, in particular, about the
> feedback loops in XP, doesn't make sense?  I suggest you do a bit of
> research on just what those feedback loops are, and what control
> mechanisms XP employs around those feedback loops.
>
> >> Agile Methods are NOT a mad rush to functionality.  They are not
> >> dotcom stupidity.  Indeed, the agile methods value high quality code
> >> and high quality designs more than any other methods I know of.

High quality *code* design that is.


> >> Consider rules such as "no duplicate code", "write tests before code",
> >> "Don't let the sun set on bad code", etc, etc.  There are very strong
> >> values that are backed up by disciplines.
> >
> >Parse the sentence.  Everything you value is code-centric.  Open your
> >mind to OOD.
>
> I agree that I put a lot of value on code.  Code is the medium in
> which I work, and in which all software systems are eventually built
> (by definition).  As such, I think that it is very appropriate to
> value code.  Not that I don't value requirements, I do.  Indeed, I put
> a lot of research effort into finding better ways to gather, express,
> and refine requirements (e.g. www.fitnesse.org).

You spelled fitness wrong.

The medium I work in is thinking.  Remember the IBM reminder: THINK.
Design is about formalizing thought, not code.  And that thought is
applied to problem solving, not code generation.

Maybe it's a difference between you and me.

>
> Open my mind to OOD?  I've been writing books and articles about OOD
> for over ten years.  I was an early adopter, and have worked hard to
> advance the state of the art.  I think my mind is open to OOD; though
> I am always ready to learn something new.

I was introduced to OOD by Shelly and Cashman in the late seventies.  I
write no books and give no lectures.

>
> But I'll turn this around on you.  Open your mind to XP.  For a very
> long time you have posted negative statement on this newsgroup that
> show that you know very little about it.

I know a lot about it and my posts have contributed to its maturity and
rebranding.  I have no use for XP but I practice good code development
anyway using many of the gut techniques and practices XP claims as its
own.

XP is not the first or last word in good software development and it
holds no magic or power for me.  I don't hate it or its proponents; I
just don't advocate it.  Does that make me close-minded?

>
> >Your argument is not compelling today any more than it was many years
> >ago.
>
> Unfortunately yours are no more informed than they were years ago.
>
And, as usual, you've brought nothing to feed the hungry.

0
Krasicki1 (73)
7/7/2005 3:25:26 AM
Robert C. Martin wrote:
> On 5 Jul 2005 07:01:00 -0700, "krasicki" <Krasicki@gmail.com> wrote:
>
> >Given that *XP practices* are largely rebranded existing good practice
>
> Wait, I thought you said that XP was seat-of-the-pants, Rube Goldberg,
> undisciplined, etc.  Now it's existing good practice?

XP is a lightweight methodology of practices.

Practices within that methodology can be good or bad.  The absence of
better practices within the methodology can be good or bad.

Many of the worthwhile practices co-opted into XP existed before,
during, and after the XP gold rush, sometimes unbeknownst to the
advocates.

XP was not promoting testing years ago.  XP retreated to testing
emphasis because few people argue that it's good.  But testing advocacy
doesn't validate XP as a good methodology per se.

>
> >I will assert that no such push exists.  I submit to you that XP
> >advocates are simply looking outside their shell and discovering that
> >good programming practices existed despite their claims.
>
> OK, so is your complaint is more about XP *advocates* and less about
> XP itself?

I care precious little for either.  One has to acknowledge the noise of
XP nonetheless and the muddying of the waters having to do with
Object-oriented anything.

>
> >But failing
> >to acknowledge that, XP advocates, as usual, run ahead of the parade
> >claiming credit for the celebration.
>
> I think that's interesting since, from the very start, "XP Advocates"
> have said that the practices of XP are based on previous best
> practices.

I can remember weeks of such debates in which XP advocates were in
denial of this.

> >
> >Project Mercury would have never gotten off the ground if it had been
> >developed in a financial services, insurance, banking, or commericial
> >enterprise using *agile* methodologies.
>
> The software for the Mercury Space Capsule was written iteratively.
> The iterations were a day long.  Unit tests were written in the
> morning, and made to pass in the afternoon.

And where did the design come from?  Was that fabricated on a daily
basis as well?

>
> >Let's not sell kool-aid here.  There are plenty of places wallowing in
> >their own fecal ideas that will get on these newsgroups and testify how
> >good it feels - come join us.  Be careful not to sound like them.
>
> By the same token, I advise you to try to sound a bit less like Howard
> Dean.  If you want to engage in a civilized debate over XP, then get
> your facts together, and have at it.  But spewing emotional baggage
> around benefits nobody.

Every time I begin to feel bad at how personal some of these exchanges
sound you remind me that they are.

0
Krasicki1 (73)
7/7/2005 4:00:59 AM

Michael Feathers wrote:
> krasicki wrote:
> > Daniel Parker wrote:
> >
> >>"krasicki" <Krasicki@gmail.com> wrote in message
> >>news:1120611204.476214.162240@f14g2000cwb.googlegroups.com...
> >>
> >>
> >>>Look, OOD is about designing theoretical systems ...
> >>
> >>Just out of curiosity, what theory?
> >>
> >>-- Daniel
> >
> >
> > Well Daniel,
> >
> > Not all systems consist of a web front end that accesses data from a
> > well-established database though many do.
> >
> > OOD, depending on the tools and techniques you use, allows system
> > designers to model potential solutions in numerous ways.  If this
> > tradeoff is made here, this benefit is forthcoming there and so on.
> > Design is about applying recombinant ideas to solving problems.
> >
> > Systems are developed to solve problems in effective ways.  Systems are
> > not developed for the sake of software or to gratify the provincial and
> > esoteric ego needs of the employees charged with getting the system
> > implemented.
> >
> > I'll give you a good example.  A Hilton was just built in Hartford and
> > the building went up fast to meet the deadline of the new convention
> > center being built next door.  All federal guidelines were applied.
> >
> > So come inspection day, Connecticut's inspectors applied Connecticut
> > building statutes to the inspection and the building failed!  Now one
> > could say that the building was built fast, saved money, and looks real
> > good and boy were the builders proud of it.  Let's call this process
> > agile.
> >
> > So sixteen of the rooms were incorrectly built for handicapped access,
> > an error of three inches per room.  Where to get three inches?  Push
> > the rooms into the hall and the hall fails.  You can see how this goes.
> >
> > Another example.  A Hospital builds a new wing onto an existing
> > structure and these days new hospital wings look like fancy hotels.
> > Everything is immaculate, grand fascades, fancy everything.  The
> > hospital wing opens without a hitch.
> >
> > The first bed is rolled down the hall, the elevator button is pushed,
> > the door opens, the bed pushed into the elevetor as far as it can go,
> > but the bed still doesn't fit.  Wrong sized elevator.
>
>
> It's pretty amazing to me that you find anything in common with Agile in
> these scenarios.  They all sound like cases were there was no feedback
> or testing.  Sounds more like plan-driven development to me.

Au contraire.  The bricks all passed unit tests.  As did the cement,
steel, and so on.  And the plans all had feedback.  And the customer
surely showed up with a glowing smile watching the obvious progress.
And progress happened every day.

The elevator worked fine.  Up.  Down.  Ring, ring.  All positive
feedback.

The workers sweated.  The execs wore suits and went golfing.

>
>  > These are true stories.  Shouldn't all of the architects of these
>  > buildings have expected change to happen as well.  Same with the
>  > builders.  maybe build with everything loose so that it can be
>  > reassembled when the next minor detail arises?  Aren't we being told
>  > this is the way things work?
>
> Well, the fact is software is malleable.  In fact it is too malleable.
> It isn't hard to change software at all.  All you have to do is type a
> couple of characters in any program and you can break it.  Because that
> is the way that software is, we need tests to give it backbone.

It's not that malleable.  Once in production software is very hard to
change for all kinds of political reasons.

In fact a big problem for architects and designers is having
programmers undermine design activity with too much dog and pony
prototyping.  Bad ideas become adopted before any discussion of the
larger picture can be formulated.

Of course you need tests.  We aren't a bunch of ninnies here.

>  > Even carpenters measure before they cut.  Yet, in computer science we
>  > are being told that we should operate as though we are all alchoholics
>  > and take things one day at a time.
>
> The problem is: misunderstanding the material you are working with.
> Code is not wood or concrete.

But spent resources are.  Nobody fixes anything for free.  And bad code
applied to millions of daily transactions can cost companies or
customers lots and lots of money when wrong.

Testing is tricky stuff and complex logic errors don't get discussed
when daily iterations are the norm because there is no time.

Design and OOD are not code or code design.

0
Krasicki1 (73)
7/7/2005 4:18:35 AM
Laurent Bossavit wrote:
> > So sixteen of the rooms were incorrectly built for handicapped access,
> > an error of three inches per room.  Where to get three inches?  Push
> > the rooms into the hall and the hall fails.  You can see how this goes.
>
> We can see how this goes for buildings. But we're not buildings experts,
> and unless I missed something neither are you. You're asking us to draw
> insight from a foreign domain about which we know *less* than the domain
> we're supposed to apply it to - software. That's the wrong direction to
> operate an intuition pump.

And coders are not system architects or system designers, yet XP sells
that idea.

Anyone who has had to try to fit yet another piece of software between
the cracks of systems boundaries understands the problem.  To increase
security you slow transaction times to levels unsatisfactory to system
specifications, these are typical conundrums in systems today.  Every
tweak has a tradeoff.  You cannot tradeoff one thing for another if
you're handed coding minutia.

And you, Laurent, miss the point.  This thread is about design, not
software.  Design is not a daily feel-good touchstone with Skippy.
Design involves hard work and great responsibility above and beyond
coding group hugs.

>
> Your rant, while entertaining, is at the level of caricature rather than
> real insight; for insight, read something like Stewart Brand's /How
> Buildings Learn/, which has real stories about architecture.

I thought you didn't like the intuition pump thing.

And, it's not a rant.

>
> Most software has the property that if you twiddle one small bit
> incorrectly, an entire system might crash with disastrous consequences.
> (Think off-by-one errors.) As far as I can tell, no building has this
> property - if you remove one brick, even a brick at the very bottom, the
> building stays up.

Civic center roof collapses during the seventies are examples of just
that phenomenon.  Miscalculations of local snowfall and so on were the
culprits.

The space shuttle O-ring disaster is another example.

Or the NASA Mars probe that performed math conversions incorrectly.

During the nineties, Gingrich (I think) thought pennies were too
expensive to make.  So they stopped using copper for a short period of
time for another metal.  Trouble was that babies swallowed copper
pennies that passed through their systems without incident.  The new
pennies decomposed in babies' stomachs, making them severely ill.
Pennies are copper once again.

All kinds of things can cause disasters.

>
> You're saying that software design should be based in theory. You can't
> have your cake and eat it too - the *process* whereby something is
> designed should certainly take into account the characteristic
> properties of the thing being designed. (Call that meta-design.)

No we won't call it meta-design.  The design of OOD objects is
meta-design.

OOD does take software development into consideration as one of many
factors.

>
> Buildings are brick. Software is text. Brick and text probably call for
> different design processes.

Solving problems can all be done using design notation for the ideas
expressed.

0
Krasicki1 (73)
7/7/2005 4:49:10 AM
krasicki wrote:
> 
> Michael Feathers wrote:
> 
>>krasicki wrote:
>>
>>>Daniel Parker wrote:
>>>
>>>
>>>>"krasicki" <Krasicki@gmail.com> wrote in message
>>>>news:1120611204.476214.162240@f14g2000cwb.googlegroups.com...
>>>>
>>>>
>>>>
>>>>>Look, OOD is about designing theoretical systems ...
>>>>
>>>>Just out of curiosity, what theory?
>>>>
>>>>-- Daniel
>>>
>>>
>>>Well Daniel,
>>>
>>>Not all systems consist of a web front end that accesses data from a
>>>well-established database though many do.
>>>
>>>OOD, depending on the tools and techniques you use, allows system
>>>designers to model potential solutions in numerous ways.  If this
>>>tradeoff is made here, this benefit is forthcoming there and so on.
>>>Design is about applying recombinant ideas to solving problems.
>>>
>>>Systems are developed to solve problems in effective ways.  Systems are
>>>not developed for the sake of software or to gratify the provincial and
>>>esoteric ego needs of the employees charged with getting the system
>>>implemented.
>>>
>>>I'll give you a good example.  A Hilton was just built in Hartford and
>>>the building went up fast to meet the deadline of the new convention
>>>center being built next door.  All federal guidelines were applied.
>>>
>>>So come inspection day, Connecticut's inspectors applied Connecticut
>>>building statutes to the inspection and the building failed!  Now one
>>>could say that the building was built fast, saved money, and looks real
>>>good and boy were the builders proud of it.  Let's call this process
>>>agile.
>>>
>>>So sixteen of the rooms were incorrectly built for handicapped access,
>>>an error of three inches per room.  Where to get three inches?  Push
>>>the rooms into the hall and the hall fails.  You can see how this goes.
>>>
>>>Another example.  A Hospital builds a new wing onto an existing
>>>structure and these days new hospital wings look like fancy hotels.
>>>Everything is immaculate, grand fascades, fancy everything.  The
>>>hospital wing opens without a hitch.
>>>
>>>The first bed is rolled down the hall, the elevator button is pushed,
>>>the door opens, the bed pushed into the elevetor as far as it can go,
>>>but the bed still doesn't fit.  Wrong sized elevator.
>>
>>
>>It's pretty amazing to me that you find anything in common with Agile in
>>these scenarios.  They all sound like cases were there was no feedback
>>or testing.  Sounds more like plan-driven development to me.
> 
> 
> Au contraire.  The bricks all passed unit tests.  As did the cement,
> steel, and so on.  And the plans all had feedback.

Not the right feedback. It's a failure of requirements capture,
pure and simple. No methodology, nor anything else other
than collecting all the relevant requirements and checklisting
them, would have made a bit of difference.

> And the customer
> surely showed up with a glowing smile watching the obvious progress.
> And progress happened every day.
> 
> The elevator worked fine.  Up.  Down.  Ring, ring.  All positive
> feedback.
> 
> The workers sweated.  The execs wore suits and went golfing.
> 
> 
>> > These are true stories.  Shouldn't all of the architects of these
>> > buildings have expected change to happen as well.  Same with the
>> > builders.  maybe build with everything loose so that it can be
>> > reassembled when the next minor detail arises?  Aren't we being told
>> > this is the way things work?
>>
>>Well, the fact is software is malleable.  In fact it is too malleable.
>>It isn't hard to change software at all.  All you have to do is type a
>>couple of characters in any program and you can break it.  Because that
>>is the way that software is, we need tests to give it backbone.
> 
> 
> It's not that malleable.  Once in production software is very hard to
> change for all kinds of political reasons.
> 

It shouldn't go into production with defects that are gonna
cost people money, at least without an enforceable plan
to get the defects out, upfront.

Once in production, somebody has to make the decisions of
when, how and why to deploy upgrades.

> In fact a big problem for architects and designers is having
> programmers undermine design activity with too much dog and pony
> prototyping. 

How is that possible? Other than time being wasted, prototyping
is harmless. Prototypes should not even be attempted until
there's a specific question or suite of questions they are
to answer. If it's a sandboxed prototype, just to let the
programmers play, then chunk it, or put it away. You
still need specific deliverables from the prototyping.

> Bad ideas become adopted before any discussion of the
> larger picture can be formulated.
> 

Then they have to get rooted out and killed, or at least
triaged and weighed for "badness". Bad ideas that don't get
shot are a sign of complacency, not methodology.

> Of course you need tests.  We aren't a bunch of ninnies here.
> 
> 
>> > Even carpenters measure before they cut.  Yet, in computer science we
>> > are being told that we should operate as though we are all alchoholics
>> > and take things one day at a time.
>>
>>The problem is: misunderstanding the material you are working with.
>>Code is not wood or concrete.
> 
> 
> But spent resources are.  Nobody fixes anything for free.  And bad code
> applied to millions of daily transactions can cost companies or
> customers lots and lots of money when wrong.
> 

So somebody has to do a cost-benefit analysis of when to do what.
Good code isn't free, either. This is logistics, not particularly 
even software logistics.

> Testing is tricky stuff and complex logic errors don't get discussed
> when daily iterations are the norm because there is no time.
> 
> Design and OOD are not code or code design.
> 

You can't fix culture with tools, in other words. Mostly, yes :)

--
Les Cargill
0
lNOcargill (15)
7/7/2005 5:19:29 AM
> >>
> >> It's not so much a matter of knowing what will change, or even how it
> >> will change.  It's a matter of recognizing that certain things will
> >> change at a different rate than others.  For example report formats
> >> will change at a different rate than business rules.
> >
> >I am not sure what you mean. Both change, often at different
> >unpredictable rates.
>
> Exactly.  They change at different rates.  If we couple them, we will
> be forced to make changes to one because of the other.

Sometimes changes are related, sometimes they are not. Thing-A may
change twice as fast as thing-B, but maybe 70% of all changes to
thing-B affect thing-A also. Changing faster does not necessarily mean
something is unrelated to something that changes slower. Change speed
is only one of many factors controlling relationships.

>
> >> GUI layout
> >> will change at a different rate than database schemae.
> >
> >Yes, but one cannot say in *advance* what will change faster.
>
> Sometimes you can, and sometimes you can't; but it doesn't matter.
> The issue is that they change for different reasons.

No! Sometimes they change for the same reasons.

>
> >Do you mean knowing the actual reason and rate? Or just knowing they
> >will be different?
>
> The latter.
>
> >> We try not to couple report formats to business rules because it would
> >> be a shame to inadvertently break the business rules by moving a
> >> column on a report.
> >
> >Please clarify. I can think of situations where relating them may save
> >time and situations where relating them would cause headaches.
>
> I think that was pretty clear.  If we couple the report format to the
> business rules, (for example, by doing computations at the same time
> that we are generating the report) then a change to the format of the
> report will break the business rules.  Or rather, when you change the
> report format you'll have to make similar changes to the structure of
> the business rule algorithm.

I would like to explore specific scenarios rather than accept broad
generalizations.

> >
> >> We try to decouple the GUI layout from the
> >> database schemae because it would be a shame to crash the GUI when
> >> adding a new column to the database.
> >
> >On the flip side it is sometimes nice to have a column *automatically*
> >appear in the CRUD (edit screens) realm so that we don't have to make
> >the same column addition in two or more different places. There is no
> >One Right Coupling decision here. Adding new columns and having to sift
> >through code to manually make that addition to multiple spots can be
> >time-consuming.
>
> Yes, it depends.  If we are writing a program to throw away in a day
> or two, then we might take a short-cut like that.  On the other hand,
> if we are developing a system that must survive through years of
> changing requirements, then coupling the GUI to the Schema is suicide;
> and the time you might save in so doing is a false economy.

Why is it a "shortcut"? About 50% to 90% of the time new table columns
result in corresponding report and screen columns. If we make the two
independent, then we are doing almost double the effort when we add or
change them.

I am not saying a data dictionary is always the way to go, but you seem
to dismiss it without specific-enough reasoning here. Data dictionaries
can be good once-and-only-once (non-duplication). Are you against the
factoring of duplication? If we have to mention a given column name and
related attributes in 10 different places, then we are NOT factoring;
we are copying-and-pasting the same or similar information all over the
place.

Note that factoring tends to *increase* coupling because it makes
multiple spots reference (be coupled to) the same thing.

Before factoring:

  A -------> A1

  B -------> B1

After factoring:

  A -------> C
             ^
             |
  B ---------*
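
A tiny Java sketch of the same diagram (the tax rate and names are
hypothetical): after factoring, both callers reference the one shared
definition C, so both are now coupled to it - which is exactly the
point.

public class FactoringDemo {

    // "C" in the diagram: the single shared definition.
    static final double TAX_RATE = 0.07;

    // "A": references C instead of carrying its own copy of 0.07.
    static double invoiceTotal(double net) {
        return net * (1 + TAX_RATE);
    }

    // "B": references the same C.
    static double quotedPrice(double net) {
        return net * (1 + TAX_RATE);
    }

    public static void main(String[] args) {
        System.out.printf("%.2f%n", invoiceTotal(100.0)); // 107.00
        System.out.printf("%.2f%n", quotedPrice(250.0));  // 267.50
    }
}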

>
>
>
> -----
> Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com

-T-

0
topmind (2124)
7/7/2005 5:59:14 AM
> We try to decouple the GUI layout from the
> database schemae because it would be a shame to crash the GUI when
> adding a new column to the database.

Adding a new column to the database would in no way crash the GUI. It
is the same as adding a new method to a class. Old code using the class
will not be affected at all.

But if you add a new column to the database, it is very likely that
you want to show this column in the GUI too. If your GUI is
decoupled from the database schema, you would have to add a lot of
extra code in your application to be able to show the new column. If
you used a data-aware GUI component (low decoupling between GUI and
database), the only thing you would have to do is to tell the GUI
component to show this new column (or nothing at all if the GUI
component shows every column in the table).
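
A minimal sketch of the data-aware idea in plain JDBC (the table, the
rows, and the in-memory H2 URL are placeholders, and the H2 driver is
assumed on the classpath; any JDBC source would behave the same): the
grid asks the result set which columns exist and shows all of them, so
a newly added column appears with no application code change.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;

public class DataAwareGridDemo {

    // Render every column the table has today; no column names are hard-coded.
    static void renderAllColumns(Connection conn, String table) throws SQLException {
        // (String concatenation is fine for a sketch; real code would validate the name.)
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM " + table)) {
            ResultSetMetaData md = rs.getMetaData();
            for (int i = 1; i <= md.getColumnCount(); i++) {
                System.out.printf("%-15s", md.getColumnLabel(i));
            }
            System.out.println();
            while (rs.next()) {
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    System.out.printf("%-15s", rs.getString(i));
                }
                System.out.println();
            }
        }
    }

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo")) {
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE customer(id INT, name VARCHAR(30))");
                st.execute("INSERT INTO customer VALUES (1, 'Acme Ltd')");
            }
            renderAllColumns(conn, "customer");
        }
    }
}

Add a column to the table and rerun: it shows up. The cost, as noted
elsewhere in the thread, is that the screen is now coupled to whatever
the schema happens to contain.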

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/7/2005 6:43:46 AM
> if we are developing a system that must survive through years of
> changing requirements, then coupling the GUI to the Schema is suicide;
> and the time you might save in so doing is a false economy.

Why would it be suicide to have a coupling between the GUI and database
schema? As pointed out before, simply adding a column would require a
lot of extra coding with a decoupled approach. With the coupled
approach, the new column could appear automatically in the GUI after it
is added to the database table. It is easy to prove that the coupled
approach makes maintenance much easier.

How would it be easier to maintain a "system that must survive through
years of changing requirements" by making simple things harder?

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/7/2005 6:55:18 AM
I read the papers and understand the discussion. However, I do not agree
with the rather myopic view of software development as being mostly a
coding activity. Neither do I see any point in defining OO as was done
in RCM's comment.
Regards,
Hans

0
hansewetz (110)
7/7/2005 8:08:14 AM
> > and unless I missed something neither are you. You're asking us to draw
> > insight from a foreign domain about which we know *less* than the domain
> > we're supposed to apply it to - software. That's the wrong direction to
> > operate an intuition pump.
> 
> And coders are not system architects or system designers yet XP sells
> that idea.

Not so fast, mister. Coders *are* experts about the failure modes of 
software systems. There are many more things besides that it is 
necessary to know when running a software development effort, but this 
is certainly among the foremost.

> And you Laurent miss the point.  This thread is about design not
> software.  design is not a daily feelgood touchstone with Skippy.

Then provide specific (theory-backed) guidance about design, not warm 
fuzzies plucked out of a worthless analogy.

> Design involves hard work and great responsibility above and beyond
> coding group hugs.

You've got me hooked and curious. *What* kind of hard work, and can you 
prove that it meets the criterion for "design": it will eliminate bad 
solutions from the running at a lower cost than actually going ahead and 
building the thing? (Design is primarily a matter of economics.)

Laurent
0
laurent (379)
7/7/2005 8:12:06 AM
Krasicki,

> XP was not promoting testing years ago.

I think 1998 qualifies as "years ago". The first article ever published 
about XP started, "Extreme Programming rests on the values of 
simplicity, communication, *testing*, and aggressiveness." (Emphasis 
mine.) It went on to describe C3's thousands of unit tests.

> XP retreated to testing emphasis because few people argue that it's good.

I would think XP might "retreat to testing" because *many* (not few) 
people argue that testing is good. I wonder what you meant exactly - not 
that it matters, as we've established there was no "retreat to testing".

Laurent
0
laurent (379)
7/7/2005 8:50:02 AM
Robert C. Martin wrote:

> Absolutely!  Though this has little to do with OO.  I am much more
> interested in the work of Ward Cunningham and Rick Mugridge in
> expressing requirements as Tests.  (See www.fitnesse.org)

By testing, I'm not sure whether you are referring to validation or
verification. I assume (maybe incorrectly) that 'test' means
execution of code, i.e. validation. I have difficulty understanding
how all requirements can be expressed as 'tests'. I could see
problems with conditions that occur randomly or rarely. I'll have to
take a look at www.fitnesse.org.
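
For what it's worth, here is a minimal sketch of one requirement
expressed as an executable check, in plain Java (the overdraft rule
and names are hypothetical, and this is not FitNesse itself, just the
general "requirement as test" idea; run with -ea so the assertions
fire). Rare or randomly occurring conditions are indeed harder to
cover this way.

public class WithdrawalAcceptanceTest {

    // The behaviour under test: "a withdrawal may not overdraw the account."
    static double withdraw(double balance, double amount) {
        if (amount > balance) {
            throw new IllegalArgumentException("insufficient funds");
        }
        return balance - amount;
    }

    public static void main(String[] args) {
        // The requirement, stated as checks that pass or fail unambiguously.
        assert withdraw(100.0, 30.0) == 70.0 : "normal withdrawal reduces balance";

        boolean rejected = false;
        try {
            withdraw(100.0, 150.0);
        } catch (IllegalArgumentException expected) {
            rejected = true;
        }
        assert rejected : "overdraft must be rejected";

        System.out.println("acceptance checks passed");
    }
}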

> I disagree.  I think there is a lot of value in precise definitions
> that can act as a metric against which to measure software designs.
> Given my definition, you can quickly ascertain whether a particular
> design is OO or not.  Moreover, there are many benefits that are
> associated with this structure.  Dependency Inversion is the primary
> mechanism behind independently deployable binary components.

I don't disagree with the value of precise definitions and metrics.
However, I do disagree with hijacking the term OO with a meaning that
is very counterintuitive - at least to me. There are strong
relationships between concepts and objects and both concepts and
objects have lots of interesting aspects. You have reduced 'OO' to
mean something very specific about how a programming language models
one single operation on concepts (generalization) and how that
operation should be used.

Yes, you can ascertain if some software is 'OO' or not with your
definition. So what? There are lots of other more important aspects of
software than being DIP compatible (see my last comment in this reply).

> I agree that describing problems is very important, and it is an
> active area of my own research (www.fitnesse.org).  On the other hand,
> I think that we don't put enough effort into techniques for how to
> code problems.  Far too many problems are related to poor coding
> structure.  Whole systems, and whole development teams, are brought to
> their knees because their code has become so tangled and impenetrable
> that it cannot be cost-effectively maintained.

In the majority of cases where I have seen this (code mess) happen, it
has been because of a poor understanding of the problem being solved.
It has rarely been because of poor programming practices. I don't
believe it is possible to code effective solutions without having a
clear understanding of the problem at the level of the 'business'.

It is a mystery to me that most developers believe they can code a
solution to a problem when they don't understand the problem. DIP,
polymorphism (OO?) and any other technical wizardry will not help in
avoiding messy code when the problem is non-trivial. Only an
understanding of the problem or an understanding of how to deal with
the class of problems that the problem belongs to, will avoid code
disasters.

In summary: I do not believe poor coding practices are the main problem
in IT development. Knowing, understanding and describing problems are
at the heart of developing good software. Maybe the situation is
different in other domains, but in the financial domain I am convinced
that I am correct :-)

Regards,
Hans Ewetz

0
hansewetz (110)
7/7/2005 9:33:50 AM
"Robert C. Martin" <unclebob@objectmentor.com> wrote in message 
news:8c5oc15o20asviqcrtnnkrd1mbk944oj92@4ax.com...
> On Wed, 6 Jul 2005 08:00:08 -0400, "Daniel Parker"
> <danielaparker@spam?nothanks.windupbird.com> wrote:
>
>
>>It's hard to
>>find blogs that dissect both successful and unsuccessful XP projects and
>>systematically discuss the consequences of the various practices, which is
>>what you'd expect if the author wanted to be taken seriously.
>
> I'm astounded.  There has been a rather large amount ...

Oh, good.  Can you provide a link to a site that does the above?

Thanks,
Daniel 


0
Daniel
7/7/2005 10:13:19 AM
krasicki wrote:
> XP was not promoting testing years ago.  XP retreated to testing
> emphasis because few people argue that it's good.  But testing advocacy
> doesn't validate XP as a good methodology per se.

Nope.  Testing was a core practice of XP from the beginning.  It was in
the white book, as test-first and functional testing.  If you go to Ron 
Jeffries' site you'll find the original writeups of the C3 project's 
practices:

http://xprogramming.com/Practices/xpractices.htm

And, if I remember correctly, the paper submitted to OOPSLA by C3 in the 
late 90s, the one that spurred interest in XP, emphasized testing as well.

Where are you getting all of these odd ideas?


Michael Feathers
author, Working Effectively with Legacy Code (Prentice Hall 2005)
www.objectmentor.com
0
mfeathers2 (74)
7/7/2005 12:31:58 PM
krasicki wrote:
>>>I'll give you a good example.  A Hilton was just built in Hartford and
>>>the building went up fast to meet the deadline of the new convention
>>>center being built next door.  All federal guidelines were applied.
>>>
>>>So come inspection day, Connecticut's inspectors applied Connecticut
>>>building statutes to the inspection and the building failed!  Now one
>>>could say that the building was built fast, saved money, and looks real
>>>good and boy were the builders proud of it.  Let's call this process
>>>agile.
>>>
>>>So sixteen of the rooms were incorrectly built for handicapped access,
>>>an error of three inches per room.  Where to get three inches?  Push
>>>the rooms into the hall and the hall fails.  You can see how this goes.
>>>
>>>Another example.  A Hospital builds a new wing onto an existing
>>>structure and these days new hospital wings look like fancy hotels.
>>>Everything is immaculate, grand fascades, fancy everything.  The
>>>hospital wing opens without a hitch.
>>>
>>>The first bed is rolled down the hall, the elevator button is pushed,
>>>the door opens, the bed pushed into the elevetor as far as it can go,
>>>but the bed still doesn't fit.  Wrong sized elevator.
>>
>>
>>It's pretty amazing to me that you find anything in common with Agile in
>>these scenarios.  They all sound like cases were there was no feedback
>>or testing.  Sounds more like plan-driven development to me.
> 
> Au contraire.  The bricks all passed unit tests.  As did the cement,
> steel, and so on.  And the plans all had feedback.  And the customer
> surely showed up with a glowing smile watching the obvious progress.
> And progress happened every day.
> 
> The elevator worked fine.  Up.  Down.  Ring, ring.  All positive
> feedback.
> 
> The workers sweated.  The execs wore suits and went golfing.

And here's where your analogy crumples into a ball and falls down.  In 
software, we can try to "roll the bed down the hall" whenever we want 
to.  We can have an automated test that attempts that even before there 
is a hallway.  Running the test is free, and we can always see whether 
we are done or not.  In this sense, we have an advantage over many other 
disciplines, owing mainly to the fact that we have very malleable 
material and we have very good ways of working with it.

>> > These are true stories.  Shouldn't all of the architects of these
>> > buildings have expected change to happen as well.  Same with the
>> > builders.  maybe build with everything loose so that it can be
>> > reassembled when the next minor detail arises?  Aren't we being told
>> > this is the way things work?

It works that way, if your material allows it.

>>Well, the fact is software is malleable.  In fact it is too malleable.
>>It isn't hard to change software at all.  All you have to do is type a
>>couple of characters in any program and you can break it.  Because that
>>is the way that software is, we need tests to give it backbone.
>  
> It's not that malleable.  Once in production software is very hard to
> change for all kinds of political reasons.

Not necessarily.  It depends on how confidently you can make changes
and what your track record is.  There are teams that incorporate new 
features and deploy every day.

> In fact a big problem for architects and designers is having
> programmers undermine design activity with too much dog and pony
> prototyping.  Bad ideas become adopted before any discussion of the
> larger picture can be formulated.

Programming is a design activity.  There are no bricklayers in software 
development.

> Of course you need tests.  We aren't a bunch of ninnies here. 
> 
>> > Even carpenters measure before they cut.  Yet, in computer science we
>> > are being told that we should operate as though we are all alchoholics
>> > and take things one day at a time.
>>
>>The problem is: misunderstanding the material you are working with.
>>Code is not wood or concrete.
> 
> 
> But spent resources are.  Nobody fixes anything for free.  And bad code
> applied to millions of daily transactions can cost companies or
> customers lots and lots of money when wrong.

So true.  That's why we test continuously and adopt practices which 
decrease the chance of defects.  You "roll the bed down the hall" before 
production, many more times than you imagine.

> Testing is tricky stuff and complex logic errors don't get discussed
> when daily iterations are the norm because there is no time.

Why isn't there?  I think this is another case where you misunderstand 
agile processes.  Read up on "The Planning Game" in XP when you get a 
chance.  Plans are recalibrated continuously to allow quality work.

> Design and OOD are not code or code design. 

Yes, they are.
http://www.developerdotstar.com/mag/articles/reeves_design_main.html


Michael Feathers
author, Working Effectively with Legacy Code (Prentice Hall 2005)
www.objectmentor.com
0
mfeathers2 (74)
7/7/2005 12:59:43 PM

Les Cargill wrote:
> krasicki wrote:
> >
> > Michael Feathers wrote:
> >
> >>krasicki wrote:
> >>
> >>>Daniel Parker wrote:
> >>>
> >>>
> >>>>"krasicki" <Krasicki@gmail.com> wrote in message
> >>>>news:1120611204.476214.162240@f14g2000cwb.googlegroups.com...
> >>>>
> >>>>
> >>>>
> >>>>>Look, OOD is about designing theoretical systems ...
> >>>>
> >>>>Just out of curiosity, what theory?
> >>>>
> >>>>-- Daniel
> >>>
> >>>
> >>>Well Daniel,
> >>>
> >>>Not all systems consist of a web front end that accesses data from a
> >>>well-established database though many do.
> >>>
> >>>OOD, depending on the tools and techniques you use, allows system
> >>>designers to model potential solutions in numerous ways.  If this
> >>>tradeoff is made here, this benefit is forthcoming there and so on.
> >>>Design is about applying recombinant ideas to solving problems.
> >>>
> >>>Systems are developed to solve problems in effective ways.  Systems are
> >>>not developed for the sake of software or to gratify the provincial and
> >>>esoteric ego needs of the employees charged with getting the system
> >>>implemented.
> >>>
> >>>I'll give you a good example.  A Hilton was just built in Hartford and
> >>>the building went up fast to meet the deadline of the new convention
> >>>center being built next door.  All federal guidelines were applied.
> >>>
> >>>So come inspection day, Connecticut's inspectors applied Connecticut
> >>>building statutes to the inspection and the building failed!  Now one
> >>>could say that the building was built fast, saved money, and looks real
> >>>good and boy were the builders proud of it.  Let's call this process
> >>>agile.
> >>>
> >>>So sixteen of the rooms were incorrectly built for handicapped access,
> >>>an error of three inches per room.  Where to get three inches?  Push
> >>>the rooms into the hall and the hall fails.  You can see how this goes.
> >>>
> >>>Another example.  A Hospital builds a new wing onto an existing
> >>>structure and these days new hospital wings look like fancy hotels.
> >>>Everything is immaculate, grand fascades, fancy everything.  The
> >>>hospital wing opens without a hitch.
> >>>
> >>>The first bed is rolled down the hall, the elevator button is pushed,
> >>>the door opens, the bed pushed into the elevetor as far as it can go,
> >>>but the bed still doesn't fit.  Wrong sized elevator.
> >>
> >>
> >>It's pretty amazing to me that you find anything in common with Agile in
> >>these scenarios.  They all sound like cases were there was no feedback
> >>or testing.  Sounds more like plan-driven development to me.
> >
> >
> > Au contraire.  The bricks all passed unit tests.  As did the cement,
> > steel, and so on.  And the plans all had feedback.
>
> Not the right feedback. It's a failure of requirements capture,
> pure and simple. No methodology nor any other thing, other
> than collecting all the relevant requirements and checklisting
> them, would have made a bit of difference.

I will infer that you're saying that more planning and preparation
time might have comprehensively accumulated and accounted for these
missing design considerations.  In other words, a heavier-weight
methodology could have avoided the headaches - all things being equal.

>
> > And the customer
> > surely showed up with a glowing smile watching the obvious progress.
> > And progress happened every day.
> >
> > The elevator worked fine.  Up.  Down.  Ring, ring.  All positive
> > feedback.
> >
> > The workers sweated.  The execs wore suits and went golfing.
> >
> >
> >> > These are true stories.  Shouldn't all of the architects of these
> >> > buildings have expected change to happen as well.  Same with the
> >> > builders.  maybe build with everything loose so that it can be
> >> > reassembled when the next minor detail arises?  Aren't we being told
> >> > this is the way things work?
> >>
> >>Well, the fact is software is malleable.  In fact it is too malleable.
> >>It isn't hard to change software at all.  All you have to do is type a
> >>couple of characters in any program and you can break it.  Because that
> >>is the way that software is, we need tests to give it backbone.
> >
> >
> > It's not that malleable.  Once in production software is very hard to
> > change for all kinds of political reasons.
> >
>
> It shouldn't go into production with defects that are gonna
> cost people money, at least without an enforceable plan
> to get the defects out, upfront.
>
> Once in production, somebody has to make the decisions of
> when, how and why to deploy upgrades.

Well, my point is that if something goes through the XP methodology,
with all of the hot air and hubris about having performed a bazillion
tests on it already, but defects still exist, who will know and how
could they prove it?

Would any of us argue for long with these people?  I lose heart just
trying to get a straight answer out of them in something as
straightforward as a newsgroup.  Haven't you heard?  XP is absolutely
right because they've tested everything every which way.

Once in commercial production, software that is mission critical is not
easily changed because, as someone said elsewhere, tweaking the wrong
bit could cause system calamities.  Reintroducing code in these
environments could take six months to a year of expensive rework or
total shutdown.  It's no longer a question of tweaking code but
questioning all assumptions.  With BDUF, you can isolate the problem
and hypothetically run the system without software trying to understand
the overall implications.

>
> > In fact a big problem for architects and designers is having
> > programmers undermine design activity with too much dog and pony
> > prototyping.
>
> How is that possible? Other than time being wasted, prototyping
> is harmless. Prototypes should not even be attempted until
> there's a specific question or suite of questions they are
> to answer. If it's a sandboxed prototype, just to let the
> programmers play, then chunk it, or put it away. You
> still need specific deliverables from the prototyping.

Prototyping is political dynamite in many organizations.  Software
designers and architects are usually discussing issues that are not
near and dear to the hearts of the local application domain princess
who wants to have someone to talk to.  Enter any number of local
characters who begin prototyping their idea of what should happen.
Before long the architects and designers are entangled in favorite
color discussions and presentation fashion shows.

Add to this mix, any number of programmers who believe they know better
than the people always talking about abstract ideas and you enter the
realm of random, esoteric, and uncontrollable development.

>
> > Bad ideas become adopted before any discussion of the
> > larger picture can be formulated.
> >
>
> Then they have to get rooted out and killed, or at least
> triaged and weighed for "badness". Bad ideas that don't get
> shot are a sign of complacency, not methodology.

There is no budget to root things out and bad software is often
sponsored internally by incompetent people who control your paycheck.
XP adds authenticity to the problems involved.

Because software development is so tightly coupled to the individuals,
it is no longer a matter of correcting or eliminating problematic code.
 The XP crowd has a vociferous ego stake in what's being done.  They've
got stories and tests and feedback loops that will insist it's
right.  They all feel good about it.  And there is no impartial design
document you can point to to say otherwise because the whole ball of
wax is personal, intimate, immediate, and a treadmill of exhaustion for
everyone involved.

>
> > Of course you need tests.  We aren't a bunch of ninnies here.
> >
> >
> >> > Even carpenters measure before they cut.  Yet, in computer science we
> >> > are being told that we should operate as though we are all alcoholics
> >> > and take things one day at a time.
> >>
> >>The problem is: misunderstanding the material you are working with.
> >>Code is not wood or concrete.
> >
> >
> > But spent resources are.  Nobody fixes anything for free.  And bad code
> > applied to millions of daily transactions can cost companies or
> > customers lots and lots of money when wrong.
> >
>
> So somebody has to do a cost-benefit analysis of when to do what.
> Good code isn't free, either. This is logistics, not particularly
> even software logistics.

The key term is "has to".

>
> > Testing is tricky stuff and complex logic errors don't get discussed
> > when daily iterations are the norm because there is no time.
> >
> > Design and OOD are not code or code design.
> >
>
> You can't fix culture with tools, in other words. Mostly, yes :)
>
Thanks Les.  Arguing XP is as thankless a task as I've ever
encountered.  The proponents swarm on critics like hornets, so I try to
avoid this stuff more often than not.  I sincerely was trying to give
the OP a fair assessment of what's out there but this quagmire blocks
all light from shining through.

0
Krasicki1 (73)
7/7/2005 3:32:57 PM

Daniel Parker wrote:
> "Robert C. Martin" <unclebob@objectmentor.com> wrote in message
> news:8c5oc15o20asviqcrtnnkrd1mbk944oj92@4ax.com...
> > On Wed, 6 Jul 2005 08:00:08 -0400, "Daniel Parker"
> > <danielaparker@spam?nothanks.windupbird.com> wrote:
> >
> >
> >>It's hard to
> >>find blogs that dissect both successful and unsuccessful XP projects and
> >>systematically discuss the consequences of the various practices, which is
> >>what you'd expect if the author wanted to be taken seriously.
> >
> > I'm astounded.  There has been a rather large amount ...
>
> Oh, good.  Can you provide a link to a site that does the above?
>
> Thanks,
> Daniel

Daniel,

You will wait forever.  Every valid question you raise will be ignored
and you will exhaust yourself disputing one nonsensical assertion about
XP after another.  You need a strong sense of humor when engaging this
crowd.

I do wish you luck.

0
Krasicki1 (73)
7/7/2005 4:16:44 PM
krasicki wrote:
> Robert C. Martin wrote:
> > e.g. www.fitnesse.org
>
> You spelled fitness wrong.
>

On the contrary ...

"Thou chang'd and selfe-couerd thing, for shame
Be-monster not thy feature, wer't my fitnesse"

  Shakespeare, King Lear

0
7/7/2005 5:40:11 PM

Daniel Parker wrote:
> krasicki wrote:
> > Robert C. Martin wrote:
> > > e.g. www.fitnesse.org
> >
> > You spelled fitness wrong.
> >
>
> On the contrary ...
>
> "Thou chang'd and selfe-couerd thing, for shame
> Be-monster not thy feature, wer't my fitnesse"
>
>   Shakespeare, King Lear

So Shakespeare needed a spelling checker as well...

0
Krasicki1 (73)
7/7/2005 7:44:26 PM
On 6 Jul 2005 23:55:18 -0700, "frebe" <fredrik_bertilsson@passagen.se>
wrote:

>> if we are developing a system that must survive through years of
>> changing requirements, then coupling the GUI to the Schema is suicide;
>> and the time you might save in so doing is a false economy.
>
>Why would it be suicide to have a coupling between the GUI and database
>schema? 

Consider the following pseudocode:

Item maxItem = new Item(0, "junk");
int totalAmt = 0;
foreach item in product {
  print item.name;
  print item.amount;
  maxItem = max(maxItem,item);
  totalAmt += item.amount;
}

The calculations of 'maxItem' and 'totalAmt' are mixed in with the
printing of the items.  Other parts of the program eventually use
totalAmt and maxItem for other calculations.

Later, the folks who use the report decide that they don't want to see
the ZIGGY item printed on the report.  This item is mostly just a
placeholder and just confuses the report.  This is strictly a cosmetic
issue and should not affect any of the business rules.

A programmer makes the following change:

Item maxItem = new Item(0, "junk");
int totalAmt = 0;
foreach item in product{
  if (item.name == "ZIGGY")
    continue;
  print item.name;
  print item.amount;
  maxItem = max(maxItem,item);
  totalAmt += item.amount;
}

This fixes the report as required, but now the totalAmt and maxItem
variables are silently incorrect for any product that happens to
include a ZIGGY item.
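
To see how that plays out in compilable form, here is a minimal Java
sketch of the change above; the Item class and its fields are invented
stand-ins rather than anything from the pseudocode itself:

import java.util.List;

class Item {
    final String name;
    final int amount;
    Item(int amount, String name) { this.amount = amount; this.name = name; }
}

class ProductReport {
    int totalAmt = 0;                        // reused elsewhere for business rules
    Item maxItem = new Item(0, "junk");

    void print(List<Item> product) {
        for (Item item : product) {
            if (item.name.equals("ZIGGY"))   // added for the report only...
                continue;                    // ...but it also skips the lines below
            System.out.println(item.name + " " + item.amount);
            if (item.amount > maxItem.amount) maxItem = item;
            totalAmt += item.amount;         // now wrong for products containing ZIGGY
        }
    }
}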


-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/7/2005 11:11:03 PM
On 6 Jul 2005 23:43:46 -0700, "frebe" <fredrik_bertilsson@passagen.se>
wrote:

>Adding a new column to the database would in no way crash the GUI.

Use your imagination!  

-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/7/2005 11:11:57 PM
On 7 Jul 2005 02:33:50 -0700, hansewetz@hotmail.com wrote:

>Robert C. Martin wrote:
>
>> Absolutely!  Though this has little to do with OO.  I am much more
>> interested in the work of Ward Cunningham and Rick Mugridge in
>> expressing requirements as Tests.  (See www.fitnesse.org)
>
>By testing, I'm not sure you are referring to validation or
>verification. 

Neither.  I'm referring to *specification*.  We specify the
requirements of a system by writing tests that pass if those
requirements are implemented correctly.

>I have difficulty in understanding
>how all requirements can be expressed as 'tests'. 

Any requirement that cannot be expressed as a test, is not really a
requirement.  
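
For example, a hypothetical requirement such as "orders of at least 100
units get a 10% discount" might be written as a test along these lines;
the Order class and the rule itself are invented for illustration, in
JUnit 3 style:

import junit.framework.TestCase;

public class DiscountRequirementTest extends TestCase {

    // A hypothetical domain class, just enough to make the test concrete.
    static class Order {
        private final int units;
        private final double unitPrice;
        Order(int units, double unitPrice) { this.units = units; this.unitPrice = unitPrice; }
        double total() {
            double gross = units * unitPrice;
            return units >= 100 ? gross * 0.9 : gross;   // the rule under test
        }
    }

    // The requirement, stated as something a machine can pass or fail.
    public void testOrdersOfAtLeastOneHundredUnitsGetTenPercentOff() {
        assertEquals(450.0, new Order(100, 5.0).total(), 0.001);
        assertEquals(495.0, new Order(99, 5.0).total(), 0.001);
    }
}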

>> I disagree.  I think there is a lot of value in precise definitions
>> that can act as a metric against which to measure software designs.
>> Given my definition, you can quickly ascertain whether a particular
>> design is OO or not.  Moreover, there are many benefits that are
>> associated with this structure.  Dependency Inversion is the primary
>> mechanism behind independently deployable binary components.
>
>I don't disagree with the value of precise definitions and metrics.
>However, I do disagree with hijacking the term OO with a meaning that
>is very counter intuitive - at least to me. There are strong
>relationships between concepts and objects and both concepts and
>objects have lots of interesting aspects. You have reduced 'OO' to
>mean something very specific about how a programming language models
>one single operation on concepts (generalization) and how that
>operation should be used.

Yes, I have done that.  I think that is much closer to the original
inception of OO.  Nygaard, Dahl, et al. noticed that the block
structure of Algol was constrained.   The function stack frame was on
the stack, and therefore was destroyed when the initializing function
returned.  They realized they could move this stack frame to the heap,
and keep it alive even after the function returned.  Voila!  The
object was born in the Simula language.

OO was born in a coding environment.  OO was about ways to make code
more expressive and have better structure.  Indeed, Dahl described
their insight in the 1972 book "Structured Programming" which was all
about coding structures.

>In the majority of cases where I have seen this (code mess) happen, it
>has been because of a poor understanding of the problem being solved.
>It has rarely been because of poor programming practices. 

There is a difference between a requirements mess, and a code mess.
Messy requirements lead to systems that are difficult to use.  Messy
code leads to systems that are difficult to change for very technical
reasons.  e.g. they take forever to build, they break in strange and
unexpected places when changes are made, changes cannot be made in one
place, but must be made in many places throughout the code.  There is
massive duplication and interdependency in the code. 

Code messes are an incredible problem that have brought teams and
companies to their knees.  The developers eventually militate for "the
grand redesign in the sky".  This is not a redesign at the
requirements level, it is a redesign at the code level.  Such grand
redesigns almost always fail spectacularly.

>I don't
>believe it is possible to code effective solutions without having a
>clear understanding of the problem at the level of the 'business'.

Agreed.  However, this has nothing to do with OO.  OO is not a scheme
for better understanding business requirements IMHO.  

>It is a mystery to me that most developers believe they can code a
>solution to a problem when they don't understand the problem. 

Agreed, it is folly.

>DIP,
>polymorphism (OO?) and any other technical wizardry will not help in
>avoiding messy code when the problem is non-trivial. 

DIP will not help you understand the problem better; but DIP *will*
help you structure the code better once you *do* understand the
problem.
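
As a rough illustration of that structural benefit (all names here are
invented, not taken from this thread), DIP lets the high-level policy
own the abstraction while the low-level detail depends on it:

interface PriceSource {                        // abstraction owned by the policy side
    double[] prices();
}

class InvoiceTotaler {                         // high-level policy: no mention of any database
    private final PriceSource source;
    InvoiceTotaler(PriceSource source) { this.source = source; }
    double total() {
        double sum = 0;
        for (double p : source.prices()) sum += p;
        return sum;
    }
}

class HardCodedPrices implements PriceSource { // one low-level detail
    public double[] prices() {
        return new double[] { 3.0, 7.5, 12.25 };
    }
}

A caller wires them together with something like
new InvoiceTotaler(new HardCodedPrices()).total(), and a database-backed
PriceSource could later be dropped in without touching the totaler.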

>Only an
>understanding of the problem or an understanding of how to deal with
>the class of problems that the problem belongs to, will avoid code
>disasters.

It also requires good coding skills and design disciplines.  It is
entirely possible for programmers to make a horrible mess in the code,
even when they understand the problem perfectly.
>
>In summary: I do not believe poor coding practices are the main problem
>in IT development. 

Poor coding practices are a major issue.  They are not the only issue.

>Knowing, understanding and describing problems are
>at the heart of developing good software. 

Understanding the problem is necessary, but not sufficient.  



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/7/2005 11:28:16 PM
On Wed, 06 Jul 2005 16:20:44 GMT, "H. S. Lahman"
<h.lahman@verizon.net> wrote:

>You should try attending a translation model review involving 
>experienced developers.  Whether there is implementation pollution 
>present is usually quite clear.  Authors may have blind spots as 
>individuals, but they are quick to recognize the problem when it is 
>pointed out.  The tricky part lies in eliminating implementation 
>pollution, not recognizing it.

I could say the same about a good design review, or a good pair
programming session.  Individual authors may miss certain
partitionings that would better separate implementation from policy;
but the team is pretty good at getting it right.



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/7/2005 11:29:43 PM
Shakespeare wrote:

> "Thou chang'd and selfe-couerd thing, for shame
> Be-monster not thy feature, wer't my fitnesse"

Be-monster not thy _feature_??

You changed and self-covered thing, for shame
be-monster not your _feature_, were it my fitnesse??

Could someone put that on http://fitnesse.org 's homepage?!

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/7/2005 11:51:16 PM
Robert C. Martin wrote:
> On 6 Jul 2005 23:55:18 -0700, "frebe" <fredrik_bertilsson@passagen.se>
> wrote:
>
> >> if we are developing a system that must survive through years of
> >> changing requirements, then coupling the GUI to the Schema is suicide;
> >> and the time you might save in so doing is a false economy.
> >
> >Why would it be suicide to have a coupling between the GUI and database
> >schema?
>
> Consider the following pseudocode:
>
> Item maxItem = new Item(0, "junk");
> int totalAmt = 0;
> foreach item in product {
>   print item.name;
>   print item.amount;
>   maxItem = max(maxItem,item);
>   totalAmt += item.amount;
> }
>
> The calculation of the 'maxItem', and 'totalAmt' are mixed in with the
> printing of the the items.  Other parts of the program eventually use
> totalAmt and maxItem for other calculations.
>
> Later, the folks who use the report decide that they don't want to see
> the ZIGGY item printed on the report.  This item is mostly just a
> placeholder and just confuses the report.  This is strictly a cosmetic
> issue and should not affect any of the business rules.
>
> A programmer makes the following change:
>
> Item maxItem = new Item(0, "junk");
> int totalAmt = 0;
> foreach item in product{
>   if (item.name == "ZIGGY")
>     continue;
>   print item.name;
>   print item.amount;
>   maxItem = max(maxItem,item);
>   totalAmt += item.amount;
> }
>
> This fixes the report as required, but now the totalAmt and maxItem
> variables are silently incorrect for any product that happens to
> include a ZIGGY item.


But the flip-side could also happen. It may be that the
requirements are for Ziggy to also be excluded from the
totals. We cannot know that *in advance*. If we separate
them, then we may forget to apply the same filtering
to both loops. I have seen such issues play out both ways.

It is all back to *probability* again, just like the
last topic. You cannot say "always" in this example.
You cannot say that any such loop filtering change
will always or never be applied to the other stuff.
Summing and displaying may or may not be related
in any given filtering change. It is situational, not
absolute.

Coupling seems coupled to probability (pun intended).

>
>
> -----
> Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com

-T-

0
topmind (2124)
7/8/2005 12:03:41 AM
topmind wrote:
> Robert C. Martin wrote:
> 
>>On 6 Jul 2005 23:55:18 -0700, "frebe" <fredrik_bertilsson@passagen.se>
>>wrote:
>>
>>
>>>>if we are developing a system that must survive through years of
>>>>changing requirements, then coupling the GUI to the Schema is suicide;
>>>>and the time you might save in so doing is a false economy.
>>>
>>>Why would it be suicide to have a coupling between the GUI and database
>>>schema?
>>
>>Consider the following pseudocode:
>>
>>Item maxItem = new Item(0, "junk");
>>int totalAmt = 0;
>>foreach item in product {
>>  print item.name;
>>  print item.amount;
>>  maxItem = max(maxItem,item);
>>  totalAmt += item.amount;
>>}
>>
>>The calculation of the 'maxItem', and 'totalAmt' are mixed in with the
>>printing of the the items.  Other parts of the program eventually use
>>totalAmt and maxItem for other calculations.
>>
>>Later, the folks who use the report decide that they don't want to see
>>the ZIGGY item printed on the report.  This item is mostly just a
>>placeholder and just confuses the report.  This is strictly a cosmetic
>>issue and should not affect any of the business rules.
>>
>>A programmer makes the following change:
>>
>>Item maxItem = new Item(0, "junk");
>>int totalAmt = 0;
>>foreach item in product{
>>  if (item.name == "ZIGGY")
>>    continue;
>>  print item.name;
>>  print item.amount;
>>  maxItem = max(maxItem,item);
>>  totalAmt += item.amount;
>>}
>>
>>This fixes the report as required, but now the totalAmt and maxItem
>>variables are silently incorrect for any product that happens to
>>include a ZIGGY item.
>  
> But the flip-side could also happen. It may be that the
> requirements are for Ziggy to also be excluded from the
> totals. We cannot know that *in advance*. If we separate
> them, then we may forget to apply the same filtering
> to both loops. I have seen such issues play out both ways.
> 
> It is all back to *probability* again, just like the
> last topic. You cannot say "always" in this example.
> You cannot say that any such loop filtering change
> will always or never be applied to the other stuff.
> Summing and displaying may or may not be related
> in any given filtering change. It is situational, not
> absolute.

You have to think about what is more error prone.

When you separate responsibilities and name them, you know where to look. 
  I'd be willing to bet that the original function wasn't called:

PrintNameAndAmountForItemsAndCalculateTotalAlongWithMaxItem()

:-)

But, that's what an ugly function like that should be named, isn't it? 
If we care about communicating what our code is doing?

But, no, it's probably named something stupid like Print(), and that's 
error prone.  It's very easy to step into a function named Print() and 
think that you are only filtering out printing because the name was 
lying to you.

I'd expect that if we were going to filter the list, we'd have a named 
function that produces lists without Ziggys.  Call it FilteredList(). 
When we introduce that function, we can think about where we need 
filtered lists and use the function in those places.  A side effect of 
breaking down code into single responsibilities is that the names we 
produce will guide us rather than deceive us.

function Print
     foreach item in FilteredList()
         print item.name
         print item.amount

function Total
     foreach item in FilteredList()
         total += item.amount


The FilteredList() function is a step beyond the schema and the 
beginnings of a layer that can be populated by the db and used by the UI.

Change in code with tangled responsibilities is error-prone, so it makes 
sense to detangle.
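
For what it's worth, a compilable Java rendering of that sketch might
look like the following; the class name and Item's fields are guesses,
since the pseudocode leaves them open:

import java.util.ArrayList;
import java.util.List;

class ItemReport {
    static class Item {
        final String name;
        final int amount;
        Item(String name, int amount) { this.name = name; this.amount = amount; }
    }

    private final List<Item> product;
    ItemReport(List<Item> product) { this.product = product; }

    // One named place that decides what "the list" means for this report.
    List<Item> filteredList() {
        List<Item> result = new ArrayList<Item>();
        for (Item item : product) {
            if (!item.name.equals("ZIGGY")) result.add(item);
        }
        return result;
    }

    void print() {
        for (Item item : filteredList()) {
            System.out.println(item.name + " " + item.amount);
        }
    }

    int total() {
        int sum = 0;
        for (Item item : filteredList()) sum += item.amount;
        return sum;
    }
}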


Michael Feathers
author, Working Effectively with Legacy Code (Prentice Hall 2005)
www.objectmentor.com

0
mfeathers2 (74)
7/8/2005 1:08:40 AM
"Robert C. Martin" <unclebob@objectmentor.com> wrote in message 
news:ek7lc1plc7jdd4r1k7u45tiig7sj2c73su@4ax.com...

> We try to decouple the GUI layout from the
> database schemae because it would be a shame to crash the GUI when
> adding a new column to the database.
>

Out of curiosity ... how many times have you added a new column to a 
database table where the data is provided to the view, and not needed to 
change the view?

I am not disagreeing with the decoupling of the GUI layout from the database 
schema but I am wondering how often the GUI needs changing in some manner to 
accommodate the new column, whether decoupled or not.

Cheers
Shane


-- 
Put improvement to work by not waiting for perfection.
Find a starting place, get started, and improve from there.
XPE2E - Kent Beck 


0
shanemingins (337)
7/8/2005 3:24:17 AM
>
> You have to think about what is more error prone.

I always think about what is error-prone. I don't like
making errors any more than anybody else, but deciding is
usually a matter of weighing many tradeoffs. There is no
single right factor or dogma: just like real
life.

>
> When you separate responsibilities and name them, you know where to look.
>   I'd be willing to bet that the original function wasn't called:
>
> PrintNameAndAmountForItemsAndCalculateTotalAlongWithMaxItem()
>
> :-)
>
> But, that's what an ugly function like that should be named, isn't it?
> If we care about communicating what our code is doing?
>
> But, no, it's probably named something stupid like Print(), and that's
> error prone.  It's very easy to step into a function named Print() and
> think that you are only filtering out printing because the name was
> lying to you.
>
> I'd expect that if we were going to filter the list, we'd have a named
> function that produces lists without Ziggys.  Call it FilteredList().
> When we introduce that function, we can think about where we need
> filtered lists and use the function in those places.  A side effect of
> breaking down code into single responsibilities is that the names we
> produce will guide us rather than deceive us.
>
> function Print
>      foreach item in FilteredList()
>          print item.name
>          print item.amount
>
> function Total
>      foreach item in FilteredList()
>          total += item.amount
>
>
> The FilteredList() function is a step beyond the schema and the
> beginnings of a layer that can be populated by the db and used by the UI.
>
> Change in code with tangled responsibilities is error-prone, so it makes
> sense to detangle.

There is one scenario I didn't see you address: what if the filtering
criteria are different for printing than for totalling? Again,
we cannot assume they will be the same, and we cannot assume
they will be different over the longer run. Generally some
will be shared and some won't. Pseudo-code may resemble:

 function Print
      foreach item in FilteredList(meth: A,B,D,G)
          print item.name
          print item.amount

 function Total
      foreach item in FilteredList(meth: A,D,G,K)
          total += item.amount

where letters represent filtering methods.

Further, your solution is more complex: it has 3 "units"
whereas the original version only had one. Complexity itself
tends to introduce other risks even if it solves some.
Throwing a bunch of indirection and deep interfaces
at things sometimes makes more that has to be overhauled
when bigger changes come along. In short, it may
be over-engineered at this point. I used to over-engineer
certain things, but learned to cut down.
I target "Wise K.I.S.S.".
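
One possible middle ground, sketched below in Java purely for
illustration (ItemFilter and the criteria are invented names), is to
keep the iteration in one place and pass the filtering rule in, so that
printing and totalling can share a filter today and diverge tomorrow:

import java.util.ArrayList;
import java.util.List;

class FilteredReporting {
    static class Item {
        final String name;
        final int amount;
        Item(String name, int amount) { this.name = name; this.amount = amount; }
    }

    interface ItemFilter { boolean accept(Item item); }

    // Generic selection: the loop is written once, the criteria vary.
    static List<Item> select(List<Item> items, ItemFilter filter) {
        List<Item> out = new ArrayList<Item>();
        for (Item item : items) {
            if (filter.accept(item)) out.add(item);
        }
        return out;
    }

    static void print(List<Item> items, ItemFilter filter) {
        for (Item item : select(items, filter)) {
            System.out.println(item.name + " " + item.amount);
        }
    }

    static int total(List<Item> items, ItemFilter filter) {
        int sum = 0;
        for (Item item : select(items, filter)) sum += item.amount;
        return sum;
    }
}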

>
>
> Michael Feathers
> author, Working Effectively with Legacy Code (Prentice Hall 2005)
> www.objectmentor.com

-T-

0
topmind (2124)
7/8/2005 4:13:58 AM

Laurent Bossavit wrote:
> Krasicki,
>
> > XP was not promoting testing years ago.
>
> I think 1998 qualifies as "years ago". The first article ever published
> about XP started, "Extreme Programming rests on the values of
> simplicity, communication, *testing*, and aggressiveness." (Emphasis
> mine.) It went on to describe C3's thousands of unit tests.

I have never encountered a software development methodology that did
not include testing.  The fact of the matter is that the XP hype of the
time was not about the virtues of testing but about pair programming, 8
hour work weeks, contracts and lots of other still to be substantiated
claims of wonder.

>
> > XP retreated to testing emphasis because few people argue that it's good.
>
> I would think XP might "retreat to testing" because *many* (not few)
> people argue that testing is good. I wonder what you meant exactly - not
> that it matters, as we've established there was no "retreat to testing".


Sorry for the malformed sentence.  My intention is to say that, like a
tasty apple pie, testing is unarguably 'good'.

You will note that XP discussions these days play out as Johnny
one-note discussions of testing because this is one of the few things
that nobody really attacks.  Nor do I.  In all my criticisms of XP, I
have consistently saluted testing as a fine practice.

The fact that testing is neither a discovery of XP, nor enhanced by it,
makes my praise that much more satisfying for me.

0
Krasicki1 (73)
7/8/2005 5:26:45 AM

Michael Feathers wrote:
> krasicki wrote:
> > XP was not promoting testing years ago.  XP retreated to testing
> > emphasis because few people argue that it's good.  But testing advocacy
> > doesn't validate XP as a good methodology per se.
>
> Nope.  Testing was a core practice of XP from the beginning.  It was in
> the white book, as test-first and functional testing.  If you go to Ron
> Jeffries' site you'll find the original writeups of the C3 project's
> practices:
>
> http://xprogramming.com/Practices/xpractices.htm
>
> And, if I remember correctly, the paper submitted to OOPSLA by C3 in the
> late 90s, the one that spurred interest in XP, emphasized testing as well.
>
> Where are you getting all of these odd ideas?

Memories of previous trips around this bush.

0
Krasicki1 (73)
7/8/2005 6:07:42 AM
krasicki wrote:

> 
> Les Cargill wrote:
> 
>>krasicki wrote:
>>
>>>Michael Feathers wrote:
>>>
>>>
>>>>krasicki wrote:
>>>>
>>>>
>>>>>Daniel Parker wrote:
>>>>>
>>>>>
>>>>>
>>>>>>"krasicki" <Krasicki@gmail.com> wrote in message
>>>>>>news:1120611204.476214.162240@f14g2000cwb.googlegroups.com...
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>>Look, OOD is about designing theoretical systems ...
>>>>>>
>>>>>>Just out of curiosity, what theory?
>>>>>>
>>>>>>-- Daniel
>>>>>
>>>>>
>>>>>Well Daniel,
>>>>>
>>>>>Not all systems consist of a web front end that accesses data from a
>>>>>well-established database though many do.
>>>>>
>>>>>OOD, depending on the tools and techniques you use, allows system
>>>>>designers to model potential solutions in numerous ways.  If this
>>>>>tradeoff is made here, this benefit is forthcoming there and so on.
>>>>>Design is about applying recombinant ideas to solving problems.
>>>>>
>>>>>Systems are developed to solve problems in effective ways.  Systems are
>>>>>not developed for the sake of software or to gratify the provincial and
>>>>>esoteric ego needs of the employees charged with getting the system
>>>>>implemented.
>>>>>
>>>>>I'll give you a good example.  A Hilton was just built in Hartford and
>>>>>the building went up fast to meet the deadline of the new convention
>>>>>center being built next door.  All federal guidelines were applied.
>>>>>
>>>>>So come inspection day, Connecticut's inspectors applied Connecticut
>>>>>building statutes to the inspection and the building failed!  Now one
>>>>>could say that the building was built fast, saved money, and looks real
>>>>>good and boy were the builders proud of it.  Let's call this process
>>>>>agile.
>>>>>
>>>>>So sixteen of the rooms were incorrectly built for handicapped access,
>>>>>an error of three inches per room.  Where to get three inches?  Push
>>>>>the rooms into the hall and the hall fails.  You can see how this goes.
>>>>>
>>>>>Another example.  A Hospital builds a new wing onto an existing
>>>>>structure and these days new hospital wings look like fancy hotels.
>>>>>Everything is immaculate, grand facades, fancy everything.  The
>>>>>hospital wing opens without a hitch.
>>>>>
>>>>>The first bed is rolled down the hall, the elevator button is pushed,
>>>>>the door opens, the bed pushed into the elevetor as far as it can go,
>>>>>but the bed still doesn't fit.  Wrong sized elevator.
>>>>
>>>>
>>>>It's pretty amazing to me that you find anything in common with Agile in
>>>>these scenarios.  They all sound like cases were there was no feedback
>>>>or testing.  Sounds more like plan-driven development to me.
>>>
>>>
>>>Au contraire.  The bricks all passed unit tests.  As did the cement,
>>>steel, and so on.  And the plans all had feedback.
>>
>>Not the right feedback. It's a failure of requirements capture,
>>pure and simple. No methodology nor any other thing, other
>>than collecting all the relevant requirements and checklisting
>>them, would have made a bit of difference.
> 
> 
> > I will infer that you're saying that more planning and preparation
> > time might have comprehensively accumulated and accounted for these
> > missing design considerations.  In other words a heavier weight
> methodology could have avoided the headaches - all things being equal.
> 

Not at all - I'm just saying that missing the point is
missing the point. If the elevator width doesn't
meet spec, it simply doesn't.

How that's achieved isn't on the table. I dunno
the people, dunno the domain.

Heavier weight process might obscure the requirement.

It's ultimately an anthropology problem.

> 
>>>And the customer
>>>surely showed up with a glowing smile watching the obvious progress.
>>>And progress happened every day.
>>>
>>>The elevator worked fine.  Up.  Down.  Ring, ring.  All positive
>>>feedback.
>>>
>>>The workers sweated.  The execs wore suits and went golfing.
>>>
>>>
>>>
>>>>>These are true stories.  Shouldn't all of the architects of these
>>>>>buildings have expected change to happen as well.  Same with the
>>>>>builders.  maybe build with everything loose so that it can be
>>>>>reassembled when the next minor detail arises?  Aren't we being told
>>>>>this is the way things work?
>>>>
>>>>Well, the fact is software is malleable.  In fact it is too malleable.
>>>>It isn't hard to change software at all.  All you have to do is type a
>>>>couple of characters in any program and you can break it.  Because that
>>>>is the way that software is, we need tests to give it backbone.
>>>
>>>
>>>It's not that malleable.  Once in production software is very hard to
>>>change for all kinds of political reasons.
>>>
>>
>>It shouldn't go into production with defects that are gonna
>>cost people money, at least without an enforceable plan
>>to get the defects out, upfront.
>>
>>Once in production, somebody has to make the decisions of
>>when, how and why to deploy upgrades.
> 
> 
> Well, my point is that if something goes through the XP methodology
> with all of the hot air and hubris that one has performed a bazillion
> tests on it already but defects still exist, who will know and how
> could they prove it.
> 

No process can guarantee any result. You really need respectful
interaction between subject matter experts and the implementors.

If the tests are wrong, they're wrong. Fix 'em.

> Would any of us argue for long with these people?  I lose heart just
> trying to get a straight answer out of them in something as
> straightforward as a newsgroup.  Haven't you heard,  XP is absolutely
> right because they've tested everything every which way.
> 

I used XP, once. It contributed zero defects in five years*. This
was no more absolutely right than anything else - it modelled
a well-known system with loads of test data.

*sampling limit, in this case.

> Once in commercial production, software that is mission critical is not
> easily changed because, as someone said elsewhere, tweaking the wrong
> bit could cause system calamities.  Reintroducing code in these
> environments could take six months to a year of expensive rework or
> total shutdown.

Then you have to get it right the first time. That costs
money, you know. If it costs more than you have,
try to keep the drawing to inside straights to a minimum.

>  It's no longer a question of tweaking code but of
> questioning all assumptions.  With BDUF, you can isolate the problem
> and hypothetically run the system, without software, to try to understand
> the overall implications.
> 
> 

That sounds heartbreaking. It sounds like fighting a war on two 
fronts. Why no alliances, then?

>>>In fact a big problem for architects and designers is having
>>>programmers undermine design activity with too much dog and pony
>>>prototyping.
>>
>>How is that possible? Other than time being wasted, prototyping
>>is harmless. Prototypes should not even be attempted until
>>there's a specific question or suite of questions they are
>>to answer. If it's a sandboxed prototype, just to let the
>>programmers play, then chunk it, or put it away. You
>>still need specific deliverables from the prototyping.
> 
> 
> Prototyping is political dynamite in many organizations.  Software
> designers and architects are usually discussing issues that are not
> near and dear to the hearts of the local application domain princess
> who wants to have someone to talk to.  Enter, any number of local
> characters who begin prototyping their idea of what should happen.
> Before long the architects and designers are entangled in favorite
> color discussions and presentation fashion shows.
> 

But that's a Big Man problem. Who is the champion
for this? Who's the Boss?

Surely methodology cannot solve organizational
psychology problems at this level.

> Add to this mix, any number of programmers who believe they know better
> than the people always talking about abstract ideas and you enter the
> realm of random, esoteric, and uncontrollable development.
> 
> 

That's where the Four W's come in - Who, What, Why and When.

>>>Bad ideas become adopted before any discussion of the
>>>larger picture can be formulated.
>>>
>>
>>Then they have to get rooted out and killed, or at least
>>triaged and weighed for "badness". Bad ideas that don't get
>>shot are a sign of complacency, not methodology.
> 
> 
> There is no budget to root things out and bad software is often
> sponsored internally by incompetent people who control your paycheck.
> XP adds authenticity to the problems involved.
> 

Incompetent people are just competent people who
haven't figured it out yet. Clue - one mechanism
of competence is being transparent.

> Because software development is so tightly coupled to the individuals,
> it is no longer a matter of correcting or eliminating problematic code.
>  The XP crowd has a vociferous ego stake in what's being done.  They've
> got stories and tests and feedback loops that will insist it's
> right.  They all feel good about it.  And there is no impartial design
> document you can point to to say otherwise because the whole ball of
> wax is personal, intimate, immediate, and a treadmill of exhaustion for
> everyone involved.
> 

You don't need a methodology, you need a guillotine! Actually,
you need leadership, but I understand....


> 
>>>Of course you need tests.  We aren't a bunch of ninnies here.
>>>
>>>
>>>
>>>>>Even carpenters measure before they cut.  Yet, in computer science we
>>>>>are being told that we should operate as though we are all alcoholics
>>>>>and take things one day at a time.
>>>>
>>>>The problem is: misunderstanding the material you are working with.
>>>>Code is not wood or concrete.
>>>
>>>
>>>But spent resources are.  Nobody fixes anything for free.  And bad code
>>>applied to millions of daily transactions can cost companies or
>>>customers lots and lots of money when wrong.
>>>
>>
>>So somebody has to do a cost-benefit analysis of when to do what.
>>Good code isn't free, either. This is logistics, not particularly
>>even software logistics.
> 
> 
> The key term is "has to".
> 
> 

Exactly. But in the absence of accountability, all things
are possible.


>>>Testing is tricky stuff and complex logic errors don't get discussed
>>>when daily iterations are the norm because there is no time.
>>>
>>>Design and OOD are not code or code design.
>>>
>>
>>You can't fix culture with tools, in other words. Mostly, yes :)
>>
> 
> Thanks Les.  Arguing XP is as thankless a task as I've ever
> encountered.  The proponents swarm on critics like hornets, so I try to
> avoid this stuff more often than not.  I sincerely was trying to give
> the OP a fair assessment of what's out there but this quagmire blocks
> all light from shining through.
> 

But isn't it amazing what can be accomplished in the face of
those sorts of odds? People *are* rational, once you do
the heavy lifting for 'em. After all, that's why you're
there.

Fighting a lost cause to a draw is about as good
as it gets on this planet. And sometimes, you are the
windshield and not the bug.

I don't mean to appear arrogant; far from it. But
nobody said it's s'posed to be easy.

--
Les Cargill
0
lNOcargill (15)
7/8/2005 6:51:48 AM
Krasicki,

> I have never encountered a software development methodology that did
> not include testing.  The fact of the matter is that the XP hype of the
> time was not about the virtues of testing

I don't know about the hype. I've cited an article, the first ever 
published on XP, which devoted quite a bit of space to testing. Your 
assertion that XP "was not promoting testing years ago" is, in a 
nutshell, a false statement. (Or, if you prefer, bullcrap.)

XP *was* promoting testing. Pointing out that it's no big news for a 
software engineering discourse to promote testing is only shifting the 
goalposts: you were claiming that it did *not*, and that's a false 
claim. You've been called on it. The wise thing to do is to retract the 
claim and move on.

Laurent
0
laurent (379)
7/8/2005 8:18:05 AM
Robert C. Martin wrote:
> On 7 Jul 2005 02:33:50 -0700, hansewetz@hotmail.com wrote:
>
> >Robert C. Martin wrote:

> Neither.  I'm referring to *specification*.  We specify the
> requirements of a system by writing tests that pass if those
> requirements are implemented correctly.
>

My question was really related to whether 'testing' means execution
tests (validation) or showing that the code is intrinsically correct
(verification) (i.e. proving consistency in the design/code, showing
that there are no race conditions, etc.). The two are different beasts
and both need to be done.

Also, I'm not really sure what you mean by 'requirements'.

> Any requirement that cannot be expressed as a test, is not really a
> requirement.

Before I make comments on 'testing', let me make my terminology
clear. By 'requirements' I refer to things that should happen
outside the software/computer that we are developing. For example, an
alarm bell should sound when a patient's heartbeat stops. The
requirements are not directly related to the computer or software. By
'specification' I refer to things that should or should not happen
at the boundary of the software/computer. Requirements and
specification, using my definitions, are very different things and
should be treated differently.

With this terminology it's very simple to dream up requirements that
cannot be checked through execution tests. A requirement that specifies
that the alarm bell should sound higher each time it goes off cannot be
checked through execution tests. There is always one more test to run.
It can only be checked by proving that the machinery that manages the
alarm bell and monitors the patient is correct. In this case the
design, code, computer and alarm bell would have to be analyzed. It
is not difficult to come up with specifications that cannot be checked
through execution tests.

....

About OO: It seems like we have some disagreements that I believe are
related to semantics. Since I don't believe they will be resolved
I'll drop the subject of OO.

Regards,
Hans Ewetz

0
hansewetz (110)
7/8/2005 8:34:15 AM
Hans,

> With this terminology it's very simple to dream up requirements that
> cannot be checked through execution tests. A requirement that specifies
> that the alarm bell should sound higher each time it goes off cannot be
> checked through execution tests.

Why not ? (More precisely, what about your definition of "execution 
tests" precludes checking such a thing ?)

Laurent
0
laurent (379)
7/8/2005 10:17:13 AM
krasicki wrote:
> 
> Michael Feathers wrote:
> 
>>krasicki wrote:
>>
>>>XP was not promoting testing years ago.  XP retreated to testing
>>>emphasis because few people argue that it's good.  But testing advocacy
>>>doesn't validate XP as a good methodology per se.
>>
>>Nope.  Testing was a core practice of XP from the beginning.  It was in
>>the white book, as test-first and functional testing.  If you go to Ron
>>Jeffries' site you'll find the original writeups of the C3 project's
>>practices:
>>
>>http://xprogramming.com/Practices/xpractices.htm
>>
>>And, if I remember correctly, the paper submitted to OOPSLA by C3 in the
>>late 90s, the one that spurred interest in XP, emphasized testing as well.
>>
>>Where are you getting all of these odd ideas?
> 
> 
> Memories of previous trips around this bush.
> 

You're not learning much on each pass, are you?


Michael Feathers
www.objectmentor.com
0
mfeathers2 (74)
7/8/2005 11:41:23 AM

krasicki wrote:
> Daniel Parker wrote:
> > krasicki wrote:
> > > Robert C. Martin wrote:
> > > > e.g. www.fitnesse.org
> > >
> > > You spelled fitness wrong.
> > >
> >
> > On the contrary ...
> >
> > "Thou chang'd and selfe-couerd thing, for shame
> > Be-monster not thy feature, wer't my fitnesse"
> >
> >   Shakespeare, King Lear
>
> So Shakespeare needed a spelling checker as well...

So if it ain't Yankee talk, it ain't talk, eh?

-- Daniel

0
7/8/2005 3:08:32 PM
Laurent Bossavit wrote:
> Hans,
>
> > With this terminology it's very simple to dream up requirements that
> > cannot be checked through execution tests. A requirement that specifies
> > that the alarm bell should sound higher each time it goes off cannot be
> > checked through execution tests.
>
> Why not ? (More precisely, what about your definition of "execution
> tests" precludes checking such a thing ?)
>
> Laurent

As I said in my previous reply (the part you removed): 'There is
always one more test to run.'

Example:
-	alarm bell goes off with a sound level of 1
-	patient's heart is 'restarted' by a doctor
-	alarm bell goes off with a sound level of 2
-	patient's heart is 'restarted' by a doctor
-	alarm bell goes off with a sound level of 3
-	...

Say you stop running your execution tests when the bell goes off with a
sound level of N. How can you know that it will go off with a sound
level N+1 next time around unless you run yet another test?

The example is obviously silly. However, it should illustrate my point
that execution tests are not always enough. Execution tests can easily
create an illusion of correctness of a piece of software.

Regards,
Hans Ewetz

0
hansewetz (110)
7/8/2005 3:24:05 PM
On Fri, 8 Jul 2005 15:24:17 +1200, "Shane Mingins"
<shanemingins@yahoo.com.clothes> wrote:

>"Robert C. Martin" <unclebob@objectmentor.com> wrote in message 
>news:ek7lc1plc7jdd4r1k7u45tiig7sj2c73su@4ax.com...
>
>> We try to decouple the GUI layout from the
>> database schemae because it would be a shame to crash the GUI when
>> adding a new column to the database.
>>
>
>Out of curiosity ... how many times have you added a new column to a 
>database table where the data is provided to the view, and not needed to 
>change the view?

Consider a GUI that traverses a complex data structure on the database
to present a summary of the information in some graphic form.  Suppose
that the database is strongly interconnected with lots of tables and
relationships, and that the data being displayed is not about any
individual table or relationship, but is about a calculation based on
the whole database.  

Now imagine that deep within the database structure we add a new
column to a relationship table.  This column modifies the way the
relationship works.

Let's finally say that the GUI code depends deeply on the database
schema, but the necessary 'if' statement to check the new field of
that relationship table was not put into the GUI code.  

The GUI crashes.  
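
A minimal sketch of the decoupling being argued for (all names invented,
no real schema implied): the screen asks one narrow interface for a
finished number instead of walking tables and relationship flags itself,
so a schema change only touches whatever implements that interface.

interface PortfolioSummary {
    double riskScore();                        // computed behind the interface
}

class SummaryPanel {                           // "GUI" stand-in
    private final PortfolioSummary summary;
    SummaryPanel(PortfolioSummary summary) { this.summary = summary; }
    String render() {
        return "Risk: " + summary.riskScore(); // unaffected by schema changes
    }
}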
-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/8/2005 3:46:38 PM
Responding to Martin...

>>You should try attending a translation model review involving 
>>experienced developers.  Whether there is implementation pollution 
>>present is usually quite clear.  Authors may have blind spots as 
>>individuals, but they are quick to recognize the problem when it is 
>>pointed out.  The tricky part lies in eliminating implementation 
>>pollution, not recognizing it.
> 
> 
> I could say the same about a good design review, or a good pair
> programming session.  Individual authors may miss certain
> partitionings that would better separate implementation from policy;
> but the team is pretty good at getting it right.

Let's stay focused on the context.  I was responding to your list of 
issues where you asserted things are fuzzy regarding the line between 
OOA and OOA/P.  OOA developers should be very rarely confused about such 
lists because they carefully distinguish between functional and 
nonfunctional requirements based on their union card.  If it resolves a 
functional requirement it goes in the OOA; otherwise it doesn't.  If it 
resolves a functional requirement in an implementation-dependent manner, 
then it needs to be recast in an implementation-independent manner.

Both issues (functional vs. nonfunctional requirements and 
implementation-independence) are really pretty clear in practice.  The 
tricky part is usually limited to finding an implementation-independent 
resolution.


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
hsl@pathfindermda.com
Pathfinder Solutions  -- Put MDA to Work
http://www.pathfindermda.com
blog: http://pathfinderpeople.blogs.com/hslahman
(888)OOA-PATH



0
h.lahman (3600)
7/8/2005 5:53:37 PM
hansewetz wrote:

> My question was really related to if 'testing' means execution
> tests (validation) or showings that the code is intrinsically correct
> (verification) (i.e. proving consistency in the design/code, showing
> that there are no raise conditions etc.). The two are different beasts
> and both needs to be done.

Converting requirements to specifications must be done, and converting
specifications to code must be done. Automating the specifications
simplifies coding.

V&V comes after, and may indeed require more tests.

> Also, I'm not really sure what you mean by 'requirements'.

Things that will profit your customer.

> With this terminology it's very simple to dream up requirements that
> cannot be checked through execution tests. A requirement that specifies
> that the alarm bell should sound higher each time it goes off cannot be
> checked through execution tests. There is always one more test to run.
> It can only be checked by proving that the machinery that manage the
> alarm bell and monitors the patient is correct. In this case the
> design, code and computer and alarm bell would have to be analyzed. It
> is not difficult to come up with specifications that cannot be checked
> through execution tests.

You can't test a negative, and you equally cannot V&V a negative. That's no
excuse not to express the specifications as tests. It is just one more
reason to try.

If you can't find a way to convert a requirement like "asystolic patients
ring an alarm bell" into a specification like "strobe port 7 when the time
between systolic pressure spikes above 5 millibars exceeds 2 seconds", then
you must research those requirements.

Insisting on executable specifications is a powerful way to review and gate
requirements. If you can't write a test, you must plan first. So insisting
on automated tests forces a test-driven project to plan its most important
details.
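
As a sketch, that specification could be written as an executable test
along these lines; HeartMonitor and its methods are hypothetical, the
point being only that the 2-second / 5-millibar rule becomes something a
machine can check:

import junit.framework.TestCase;

public class AsystoleAlarmTest extends TestCase {

    static class HeartMonitor {
        private long lastSpikeMillis = -1;
        private boolean alarm = false;

        void pressureSample(double millibars, long nowMillis) {
            if (millibars <= 5.0) return;                // not a systolic spike
            if (lastSpikeMillis >= 0 && nowMillis - lastSpikeMillis > 2000) {
                alarm = true;                            // stands in for "strobe port 7"
            }
            lastSpikeMillis = nowMillis;
        }

        boolean alarmRaised() { return alarm; }
    }

    public void testAlarmWhenSpikesMoreThanTwoSecondsApart() {
        HeartMonitor monitor = new HeartMonitor();
        monitor.pressureSample(8.0, 0);
        monitor.pressureSample(8.0, 2500);               // 2.5 seconds later
        assertTrue(monitor.alarmRaised());
    }

    public void testNoAlarmWhenSpikesWithinTwoSeconds() {
        HeartMonitor monitor = new HeartMonitor();
        monitor.pressureSample(8.0, 0);
        monitor.pressureSample(8.0, 1500);
        assertFalse(monitor.alarmRaised());
    }
}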

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/8/2005 6:02:20 PM
On 8 Jul 2005 01:34:15 -0700, hansewetz@hotmail.com wrote:

>Robert C. Martin wrote:
>> On 7 Jul 2005 02:33:50 -0700, hansewetz@hotmail.com wrote:
>>
>> >Robert C. Martin wrote:
>
>> Neither.  I'm referring to *specification*.  We specify the
>> requirements of a system by writing tests that pass if those
>> requirements are implemented correctly.
>>
>
>My question was really related to whether 'testing' means execution
>tests (validation) or showing that the code is intrinsically correct
>(verification) (i.e. proving consistency in the design/code, showing
>that there are no race conditions, etc.). The two are different beasts
>and both need to be done.

Some of both.  Most of the acceptance tests that I write are based on
functional and non-functional requirements.  Some, however, are based
on software structure.  See:
http://butunclebob.com/ArticleS.UncleBob.StableDependenciesFixture.


>Also, I'm not really sure what you mean by 'requirements'.

The traditional definition. 

>> Any requirement that cannot be expressed as a test, is not really a
>> requirement.
>
>Before I make comments on 'testing', let me make my terminology
>clear. By 'requirements' I refer to things that should happen
>outside the software/computer that we are developing. For example, an
>alarm bell should sound when a patient's heartbeat stops. The
>requirements are not directly related to the computer or software. By
>'specification' I refer to things that should or should not happen
>at the boundary of the software/computer. Requirements and
>specification, using my definitions, are very different things and
>should be treated differently.
>
>With this terminology it's very simple to dream up requirements that
>cannot be checked through execution tests. A requirement that specifies
>that the alarm bell should sound higher each time it goes off cannot be
>checked through execution tests. 

Yes it can.

>There is always one more test to run.

That's just Zeno's paradox.  It's true that there's always one more
test to run to test one last little detail, or one last little obtuse
case.  But that doesn't mean that Achilles never passes the tortoise.

>It can only be checked by proving that the machinery that manages the
>alarm bell and monitors the patient is correct. 

Which from microsecond to microsecond can change its state.  There
is, after all, a positive probability that all the molecules in the
bell will, by chance, shift to the left by 3 feet.  Or that a cosmic
ray with the energy of a well-hit baseball (they have been detected!)
will miss all the intervening air molecules and "ring" the bell.

In short, there's no way to prove that the machinery works.  But it
doesn't matter, because you can prove that it works to within a
certain probability.

>In this case the
>design, code and computer and alarm bell would have to be analyzed. It
>is not difficult to come up with specifications that cannot be checked
>through execution tests.

True, but that doesn't mean that we don't write the tests.  We cannot
test *everything*.  But we can test enough.



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/8/2005 6:08:57 PM
"Robert C. Martin" <unclebob@objectmentor.com> wrote in message 
news:io7tc1p4sflc4vk839tt45djr8dp1u2eb9@4ax.com...
> On Fri, 8 Jul 2005 15:24:17 +1200, "Shane Mingins"
> <shanemingins@yahoo.com.clothes> wrote:
>
>>"Robert C. Martin" <unclebob@objectmentor.com> wrote in message
>>news:ek7lc1plc7jdd4r1k7u45tiig7sj2c73su@4ax.com...
>>
>>> We try to decouple the GUI layout from the
>>> database schemae because it would be a shame to crash the GUI when
>>> adding a new column to the database.
>>>
>>
>>Out of curiosity ... how many times have you added a new column to a
>>database table where the data is provided to the view, and not needed to
>>change the view?
>
> Consider a GUI that traverses a complex data structure on the database
> to present a summary of the information in some graphic form.  Suppose
> that the database is strongly interconnected with lots of tables and
> relationships, and that the data being displayed is not about any
> individual table or relationship, but is about a calculation based on
> the whole database.
>
> Now imagine that deep within the database structure we add a new
> column to a relationship table.  This column modifies the way the
> relationship works.
>
> Let's finally say that the GUI code depends deeply on the database
> schema, but the necessary 'if' statement to check the new field of
> that relationship table was not put into the GUI code.
>
> The GUI crashes.


I am thinking that my question was not phrased well.  If you were to add up 
the number of times that you have had to add a new column to a database 
table, of those times, how often did you have to alter the GUI vs not 
have to alter the GUI?  I am wondering what that ratio is.

My experience is mainly in the area of business applications where data is 
entered and stored in an RDBMS.  Most of the time, when adding a new column, it 
is for storing some new data.  A corresponding input widget is usually 
required on the GUI.

I would decouple my GUI for other reasons.  I have just been wondering 
whether the "adding a new column to the database" is a big enough reason if 
99% of the time adding that column requires a change in the GUI anyhow.

Am I making sense?  Or have I missed an important point or principle?

Cheers
Shane


-- 
"If I say it's grey and has a trunk why do you assume it is an elephant?"


0
shanemingins (337)
7/8/2005 7:58:26 PM

Michael Feathers wrote:
> krasicki wrote:
> >
> > Michael Feathers wrote:
> >
> >>krasicki wrote:
> >>
> >>>XP was not promoting testing years ago.  XP retreated to testing
> >>>emphasis because few people argue that it's good.  But testing advocacy
> >>>doesn't validate XP as a good methodology per se.
> >>
> >>Nope.  Testing was a core practice of XP from the beginning.  It was in
> >>the white book, as test-first and functional testing.  If you go to Ron
> >>Jeffries' site you'll find the original writeups of the C3 project's
> >>practices:
> >>
> >>http://xprogramming.com/Practices/xpractices.htm
> >>
> >>And, if I remember correctly, the paper submitted to OOPSLA by C3 in the
> >>late 90s, the one that spurred interest in XP, emphasized testing as well.
> >>
> >>Where are you getting all of these odd ideas?
> >
> >
> > Memories of previous trips around this bush.
> >
>
> You're not learning much on each pass, are you?
>
Funny you should say that.  I'm standing here off to the side reading
the same responses from the exact same handful of people as many years
ago.  

Keep patting each other on the back.

0
Krasicki1 (73)
7/9/2005 4:37:24 AM

Les Cargill wrote:
> krasicki wrote:
>
> >
> > Les Cargill wrote:
> >
> >>krasicki wrote:
> >>
> >>>Michael Feathers wrote:
> >>>
> >>>
> >>>>krasicki wrote:
> >>>>
> >>>>
> >>>>>Daniel Parker wrote:
> >>>>>
> >>>>>
> >>>>>
> >>>>>>"krasicki" <Krasicki@gmail.com> wrote in message
> >>>>>>news:1120611204.476214.162240@f14g2000cwb.googlegroups.com...
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>>Look, OOD is about designing theoretical systems ...
> >>>>>>
> >>>>>>Just out of curiosity, what theory?
> >>>>>>
> >>>>>>-- Daniel
> >>>>>
> >>>>>
> >>>>>Well Daniel,
> >>>>>
> >>>>>Not all systems consist of a web front end that accesses data from a
> >>>>>well-established database though many do.
> >>>>>
> >>>>>OOD, depending on the tools and techniques you use, allows system
> >>>>>designers to model potential solutions in numerous ways.  If this
> >>>>>tradeoff is made here, this benefit is forthcoming there and so on.
> >>>>>Design is about applying recombinant ideas to solving problems.
> >>>>>
> >>>>>Systems are developed to solve problems in effective ways.  Systems are
> >>>>>not developed for the sake of software or to gratify the provincial and
> >>>>>esoteric ego needs of the employees charged with getting the system
> >>>>>implemented.
> >>>>>
> >>>>>I'll give you a good example.  A Hilton was just built in Hartford and
> >>>>>the building went up fast to meet the deadline of the new convention
> >>>>>center being built next door.  All federal guidelines were applied.
> >>>>>
> >>>>>So come inspection day, Connecticut's inspectors applied Connecticut
> >>>>>building statutes to the inspection and the building failed!  Now one
> >>>>>could say that the building was built fast, saved money, and looks real
> >>>>>good and boy were the builders proud of it.  Let's call this process
> >>>>>agile.
> >>>>>
> >>>>>So sixteen of the rooms were incorrectly built for handicapped access,
> >>>>>an error of three inches per room.  Where to get three inches?  Push
> >>>>>the rooms into the hall and the hall fails.  You can see how this goes.
> >>>>>
> >>>>>Another example.  A Hospital builds a new wing onto an existing
> >>>>>structure and these days new hospital wings look like fancy hotels.
> >>>>>Everything is immaculate, grand facades, fancy everything.  The
> >>>>>hospital wing opens without a hitch.
> >>>>>
> >>>>>The first bed is rolled down the hall, the elevator button is pushed,
> >>>>>the door opens, the bed pushed into the elevetor as far as it can go,
> >>>>>but the bed still doesn't fit.  Wrong sized elevator.
> >>>>
> >>>>
> >>>>It's pretty amazing to me that you find anything in common with Agile in
> >>>>these scenarios.  They all sound like cases were there was no feedback
> >>>>or testing.  Sounds more like plan-driven development to me.
> >>>
> >>>
> >>>Au contraire.  The bricks all passed unit tests.  As did the cement,
> >>>steel, and so on.  And the plans all had feedback.
> >>
> >>Not the right feedback. It's a failure of requirements capture,
> >>pure and simple. No methodology nor any other thing, other
> >>than collecting all the relevant requirements and checklisting
> >>them, would have made a bit of difference.
> >
> >
> > I will infer that you're saying that more planning and preparation
> > time might have comprehensively accumulated and accounted for these
> > missing design considerations.  In other words, a heavier-weight
> > methodology could have avoided the headaches - all things being equal.
> >
>
> Not at all - I'm just saying that missing the point is
> missing the point. If the elevator width doesn't
> meet spec, it simply doesn't.
>
> How that's achieved isn't on the table. I dunno
> the people, dunno the domain.
>
> Heavier weight process might obscure the requirement.
>
> It's ultimately an anthropology problem.
>
> >
> >>>And the customer
> >>>surely showed up with a glowing smile watching the obvious progress.
> >>>And progress happened every day.
> >>>
> >>>The elevator worked fine.  Up.  Down.  Ring, ring.  All positive
> >>>feedback.
> >>>
> >>>The workers sweated.  The execs wore suits and went golfing.
> >>>
> >>>
> >>>
> >>>>>These are true stories.  Shouldn't all of the architects of these
> >>>>>buildings have expected change to happen as well?  Same with the
> >>>>>builders.  Maybe build with everything loose so that it can be
> >>>>>reassembled when the next minor detail arises?  Aren't we being told
> >>>>>this is the way things work?
> >>>>
> >>>>Well, the fact is software is malleable.  In fact it is too malleable.
> >>>>It isn't hard to change software at all.  All you have to do is type a
> >>>>couple of characters in any program and you can break it.  Because that
> >>>>is the way that software is, we need tests to give it backbone.
> >>>
> >>>
> >>>It's not that malleable.  Once in production software is very hard to
> >>>change for all kinds of political reasons.
> >>>
> >>
> >>It shouldn't go into production with defects that are gonna
> >>cost people money, at least without an enforceable plan
> >>to get the defects out, upfront.
> >>
> >>Once in production, somebody has to make the decisions of
> >>when, how and why to deploy upgrades.
> >
> >
> > Well, my point is that if something goes through the XP methodology
> > with all of the hot air and hubris that one has performed a bazillion
> > tests on it already, but defects still exist, who will know, and how
> > could they prove it?
> >
>
> No process can guarantee any result. You really need respectful
> interaction between subject matter experts and the implementors.

Wait a minute here.  System design methodologies generally do guarantee
a number of things assuming the commitment of the purchaser to follow
through.

They guarantee that all requirements will be respectfully accounted for
in feasibility and cost to the degree that these can be known.

Once these requirements are accounted for, designs can be modeled
accurately enough to predict, with a fair degree of certainty, what
any number of implementations might cost and deliver.

And once the desired system is decided upon, any fair implementation
will guarantee exactly what was designed.

The examples I cited describe exactly how good intentions and
well-tested parts are not necessarily enough to design complex wholes.
You don't have to be a domain expert to understand the lesson being
presented unless you're just playing the XP belligerence game.

>
> If the tests are wrong, they're wrong. Fix 'em.

If you *know* the tests are wrong you would fix them just like the
builders would have corrected their mistakes before making them.

In software development, if no whole design rigor is exercised up front
you have no context for questioning, let alone fixing something that's
wrong.  You won't know it.  There's no decoupling of design from code
or code from test or chummy BA from chummy developers.  Everyone has
the same vested interest in being absolutely right in the assumptions
being adopted *even when the buy-in is yielding a false positive*.

>
> > Would any of us argue for long with these people?  I lose heart just
> > trying to get a straight answer out of them in something as
> > straightforward as a newsgroup.  Haven't you heard,  XP is absolutely
> > right because they've tested everything every which way.
> >
>
> I used XP, once. It contributed zero defects in five years*. This
> was no more absolutely right than anything else - it modelled
> a well-known system with loads of test data.
>
> *sampling limit, in this case.

Is this something meaningful?

>
> > Once in commercial production, software that is mission critical is not
> > easily changed because, as someone said elsewhere, tweaking the wrong
> > bit could cause system calamities.  Reintroducing code in these
> > environments could take six months to a year of expensive rework or
> > total shutdown.
>
> Then you have to get it right the first time. That costs
> money, you know. If it costs more than you have,
> try to keep the drawing to inside straights to a minimum.

Yes.  And there have to be multiple, independent layers of insurance
that the thing is right because the goodwill of companies rides on the
success or embarrassing failure of such things.

And if it costs money to implement straight code correctly then it
costs bundles more implementing correct and comprehensive testing
strategies that are, after all, nothing more than more code.

>
> >  It's no longer a question of tweaking code but
> > questioning all assumptions.  With BDUF, you can isolate the problem
> > and hypothetically run the system without software trying to understand
> > the overall implications.
> >
> >
>
> That sounds heartbreaking. It sounds like fighting a war on two
> fronts. Why no alliances, then?

What's heartbreaking?  Discovering holes in the designer's logic up
front and not wasting time making those mistakes in implementation?
You aren't fighting wars here.  You're analyzing system logic and
fine-tuning implementation to avoid bottlenecks, security weaknesses,
and lots more.  These are not things that can be managed at the code
and BA GUI requirements level.  Implementation becomes surgical and not
haphazard.

>
> >>>In fact a big problem for architects and designers is having
> >>>programmers undermine design activity with too much dog and pony
> >>>prototyping.
> >>
> >>How is that possible? Other than time being wasted, prototyping
> >>is harmless. Prototypes should not even be attempted until
> >>there's a specific question or suite of questions they are
> >>to answer. If it's a sandboxed prototype, just to let the
> >>programmers play, then chunk it, or put it away. You
> >>still need specific deliverables from the prototyping.
> >
> >
> > Prototyping is political dynamite in many organizations.  Software
> > designers and architects are usually discussing issues that are not
> > near and dear to the hearts of the local application domain princess
> > who wants to have someone to talk to.  Enter, any number of local
> > characters who begin prototyping their idea of what should happen.
> > Before long the architects and designers are entangled in favorite
> > color discussions and presentation fashion shows.
> >
>
> But that's a Big Man problem. Who is the champion
> for this? Who's the Boss?

Bean-counters and clock-watchers.

>
> Surely methodology cannot solve organizational
> psychology problems at this level.
>
> > Add to this mix, any number of programmers who believe they know better
> > than the people always talking about abstract ideas and you enter the
> > realm of random, esoteric, and uncontrollable development.
> >
> >
>
> That's where the Four W's come in - Who, What, Why and When.

That's where HCE comes in: here comes everybody with a hot-button
answer to all four questions.

>
> >>>Bad ideas become adopted before any discussion of the
> >>>larger picture can be formulated.
> >>>
> >>
> >>Then they have to get rooted out and killed, or at least
> >>triaged and weighed for "badness". Bad ideas that don't get
> >>shot are a sign of complacency, not methodology.
> >
> >
> > There is no budget to root things out and bad software is often
> > sponsored internally by incompetent people who control your paycheck.
> > XP adds authenticity to the problems involved.
> >
>
> Incompetent people are just competent people who
> haven't figured it out yet. Clue - one mechanism
> of competence is being transparent.

I like that.

>
> > Because software development is so tightly coupled to the individuals,
> > it is no longer a matter of correcting or eliminating problematic code.
> >  The XP crowd has a vociferous ego stake in what's being done.  They've
> > got stories and tests and feedback loops that will insist it's
> > right.  They all feel good about it.  And there is no impartial design
> > document you can point to to say otherwise because the whole ball of
> > wax is personal, intimate, immediate, and a treadmill of exhaustion for
> > everyone involved.
> >
>
> You don't need a methodology, you need a guillotine! Actually,
> you need leadership, but I understand....

I believe you do.  In flat, matrixed organizations, leadership is the fellow
walking around with the bump on their head.  (Corporations hammer everyone
who sticks out.)

>
>
> >
> >>>Of course you need tests.  We aren't a bunch of ninnies here.
> >>>
> >>>
> >>>
> >>>>>Even carpenters measure before they cut.  Yet, in computer science we
> >>>>>are being told that we should operate as though we are all alcoholics
> >>>>>and take things one day at a time.
> >>>>
> >>>>The problem is: misunderstanding the material you are working with.
> >>>>Code is not wood or concrete.
> >>>
> >>>
> >>>But spent resources are.  Nobody fixes anything for free.  And bad code
> >>>applied to millions of daily transactions can cost companies or
> >>>customers lots and lots of money when wrong.
> >>>
> >>
> >>So somebody has to do a cost-benefit analysis of when to do what.
> >>Good code isn't free, either. This is logistics, not particularly
> >>even software logistics.
> >
> >
> > The key term is "has to".
> >
> >
>
> Exactly. But in the absence of accountability, all things
> are possible.

I ran across shocking stuff just recently.  The absence of
accountability is equivalent to malfeasance.

>
>
> >>>Testing is tricky stuff and complex logic errors don't get discussed
> >>>when daily iterations are the norm because there is no time.
> >>>
> >>>Design and OOD are not code or code design.
> >>>
> >>
> >>You can't fix culture with tools, in other words. Mostly, yes :)
> >>
> >
> > Thanks Les.  Arguing XP is as thankless a task as I've ever
> > encountered.  The proponents swarm on critics like hornets, so I try to
> > avoid this stuff more often than not.  I sincerely was trying to give
> > the OP a fair assessment of what's out there but this quagmire blocks
> > all light from shining through.
> >
>
> But isn't it amazing what can be accomplished in the face of
> those sorts of odds? People *are* rational, once you do
> the heavy lifting for 'em. After all, that's why you're
> there.
>
> Fighting a lost cause to a draw is about as good
> as it gets on this planet. And sometimes, you are the
> windshield and not the bug.
>
> I don't mean to appear arrogant; far from it. But
> nobody said it's s'posed to be easy.

I like that as well.  It is a good rule of thumb to simplify complex
problems.  But sometimes the complexity is what it is.

0
Krasicki1 (73)
7/9/2005 5:33:24 AM
I asked you " Why would it be suicide to have a coupling between the
GUI and  database schema?"

You gave me an example that did not involve any database access at all.
Can you please give me an example of the problem with coupling
between the GUI and database schema?

If your example had anything to do with a database, maxItem and
totalAmt would be calculated this way:
select max(item) from product
select sum(amount) from product

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/9/2005 11:57:46 AM
>> Adding a new column to the database would in no way crash the GUI.

> Use your imagination!

There is no way. The only possible way is if you have insert SQL
statements that do not specify columns, which is considered to be
very bad coding. Any other select, update or insert statement will
produce exactly the same result after a new column is added.

I can guarantee you that you can add as many columns as you want to any
table in a database, without breaking anything at all.

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/9/2005 12:01:59 PM
> When you separate responsibilities and name them, you know where to look.
>  I'd be willing to bet that the original function wasn't called:
> PrintNameAndAmountForItemsAndCalculateTotalAlongWithMaxItem()
> But, that's what an ugly function like that should be named, isn't it?

You have to remember that this function was written by RCM in an
attempt to show why the GUI and database schema must be separated.
Nobody is arguing for writing such a function.

But, if the function had ended with printing of the sum and max values,
it would be a common and reasonable example of a report program. The
name of the function would be printProductReport().

If you have a report that ends with a summary, you want the sum to be made
up of the items that appear in the report and nothing else. If the
selection of the products was complex and used from multiple points in
the application, it should be extracted into a separate function. But
in this example, every product is selected, so there is (almost)
nothing to separate.

> function Print
>    foreach item in FilteredList()
>         print item.name
>         print item.amount

>function Total
>     foreach item in FilteredList()
>         total += item.amount

Now, you have made the report twice as slow. And you have created the
possibility that the total will differ from what is shown
in the report.

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/9/2005 12:17:53 PM
> Now imagine that deep within the database structure we add a new
> column to a relationship table.  This column modifies the way the
> relationship works.

Your initial statement didn't say anything about modifying primary or
foreign keys. You said "add a new column". If you change a primary or
foreign key, the GUI will of course be affected. Can you give an example
of a primary or foreign key change that can be isolated to only the
persistence layer?

> Let's finally say that the GUI code depends deeply on the database
> schema, but the necessary 'if' statement to check the new field of
> that relationship table was not put into the GUI code.
> The GUI crashes.

Can you give an example? I am not sure I understand why the GUI would
crash.

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/9/2005 12:29:08 PM
frebe wrote:
> I asked you " Why would it be suicide to have a coupling between the
> GUI and  database schema?"
> 
> You gave me an example that did not involve any database access at all.
>  Can you please give me an example of the bad thing with coupling
> between the GUI and database schema?
> 
> If your example would have any thing to do with a database, maxItem and
> totalAmt would be calculated this way:
> select max(item) from product
> select sum(amount) from product
> 
> Fredrik Bertilsson
> http://butler.sourceforge.net
> 

Well, here's one and it is a very sad tale.  I once worked with a team 
that embedded database calls throughout their application.  They were 
all over the place.  The vendor raised the licensing rate and they 
simply could not make money with the product.  The team wished that they 
could break the dependencies on the vendor but the calls were too 
pervasive.  The whole thing ended up with a very expensive rewrite 
which, by the way, had a good amount of separation between the db and 
the rest of the app.  They learned a very expensive lesson about 
dependencies.


Michael Feathers
author, Working Effectively with Legacy Code (Prentice Hall 2005)
www.objectmentor.com

0
mfeathers2 (74)
7/9/2005 12:57:12 PM
frebe wrote:
>>When you separate responsibilties and name them, you know where to look.
>> I'd be willing to bet that the original function wasn't called:
>>PrintNameAndAmountForItemsAndCalculateTotalAlongWithMaxItem()
>>But, that's what an ugly function like that should be named, isn't it?
> 
> 
> You have to remember that this function was written by RCM in an
> attempt to show why the GUI and database schema must be separated.
> Nobody are arguing for making such function.

It was an example of the sort of intertwining that was being discussed.

> But, if the function had ended with printing of the sum and max values,
> it would be a common and resonable example of a report program. The
> name of the function would be printProductReport().

To me it would be reasonable if the function didn't contain the 
calculation code.  Functions should have single responsibilities.
I definitely wouldn't look for the calculation code there.  What would 
happen if we wanted two reports?  Would you duplicate the code in both 
reports?

> If you have a report that ends with a summary, you want the sum to made
> up of the items that appear in the report and nothing else. If the
> selection of the products was complex and used from multiple points in
> the application, it should be separated into a separate function. But
> in this example, every product is selected, so there is (almost)
> nothing to separate.

Yes, it is a very simple example and there isn't much to separate, but 
it is prone to the error that Bob used in his example.

>>function Print
>>   foreach item in FilteredList()
>>        print item.name
>>        print item.amount
> 
> 
>>function Total
>>    foreach item in FilteredList()
>>        total += item.amount
> 
> 
> Now, you have made the report twice as slow. And you had created the
> possibility that the total would be something else from what is shown
> in the report. 

I didn't want to freak topmind out, because I know he isn't too keen on 
OO, but after I posted what I had above I thought "no, another way of 
doing it is to have an abstraction that holds the filtered list.  We can 
ask it to render items for printing and ask it to total."  I don't know 
how quickly I'd go that far; it depends on the problem.  But this is 
part of abstracting away from the bare data in the database.
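
A rough sketch of that abstraction in Java (ItemReport, renderTo and total 
are names I'm making up here, and Item is assumed to be a trivial 
name/amount pair):

import java.util.List;

// Item is assumed to be a trivial name/amount pair; in the real app it
// would come from whatever the filtered query returns.
class Item {
    final String name;
    final double amount;
    Item(String name, double amount) { this.name = name; this.amount = amount; }
}

class ItemReport {
    private final List<Item> filteredItems;  // held once, so print and total agree

    ItemReport(List<Item> filteredItems) {
        this.filteredItems = filteredItems;
    }

    // Rendering responsibility: one line per item.
    void renderTo(StringBuilder out) {
        for (Item item : filteredItems) {
            out.append(item.name).append(' ').append(item.amount).append('\n');
        }
    }

    // Calculation responsibility: total over the same held list.
    double total() {
        double total = 0;
        for (Item item : filteredItems) {
            total += item.amount;
        }
        return total;
    }
}

Because both methods walk the same held list, the total can't quietly differ 
from what was printed; the second pass is still there, which is the 
performance question below.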

Re performance, it matters when it matters.  If this were a performance 
critical app, and this code was on the critical path I wouldn't do it 
this way, but that isn't always the case. One of the best ways to 
totally mess up an application is to believe that things are performance 
critical when they aren't.  That belief leads to bad structuring 
decisions that are hard to work your way out of.  On the other hand, 
when the code is factored well, into small single responsibility based 
methods, and covered with tests, you have more places to profile and 
more options when optimizing.


Michael Feathers
author, Working Effectively with Legacy Code (Prentice Hall 2005)
www.objectmentor.com



0
mfeathers2 (74)
7/9/2005 1:19:54 PM
> To me it would be reasonable if the function didn't contain the
> calculation code.  Functions should have single responsibilities.
> I definitely wouldn't look for the calculation code there.  What would
> happen if we wanted two reports?  Would you duplicate the code in both
> reports?

I agree that if the calculation is complex, it should be put into a
separate function. Instead of
total += item.amount
this could be used:
total = calcSum(total, item.amount)

But a + operation or max function is not very much to separate.

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/9/2005 1:32:48 PM
frebe wrote:
>>To me it would be reasonable if the function didn't contain the
>>calculation code.  Functions should have single responsibilities.
>>I definitely wouldn't look for the calculation code there.  What would
>>happen if we wanted two reports?  Would you duplicate the code in both
>>reports?
> 
> 
> I agree that if the calculation is complex, it should be put into a
> separate function. Instread of
> total += item.amount
> this could be used:
> total = calcSum(total, item.amount)
> 
> But a + operation or max function is not very much to separate.
> 
> Fredrik Bertilsson
> http://butler.sourceforge.net

Yes, it is minor, but often it's good to catch things when they are minor.

Let me tell you about an experiment I run when I work with teams sometimes.
A pair of developers introduces a switch statement with two legs.  I say
"oh, looks like you could use the command pattern there and get rid of the
switch."  They say "yeah, but it's only two cases, adding the command
pattern seems like overkill."  So I say, "yeah, you're right."  They go
on and add the third leg.  "So, what about the command pattern now?"
They say "yeah, but it's still small."  So, I come back an hour later
and they have seven or eight legs in the switch and they try to dodge
me, but the truth is, they are looking at the work of introducing the
command pattern then and thinking "Man! All of that work!"  So that is
when we sit down and have a conversation about catching things early.
We sit down together and fix the code.  The next time through, they are
separating things preemptively because they realize how much easier it is.
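
For what it's worth, the shape of the change is small.  A minimal sketch in 
Java (the Command interface and the Map-based dispatcher are invented names, 
just one way to do it):

import java.util.HashMap;
import java.util.Map;

// What each leg of the switch did becomes one small object.
interface Command {
    void execute();
}

class CommandDispatcher {
    private final Map<String, Command> commands = new HashMap<String, Command>();

    void register(String name, Command command) {
        commands.put(name, command);
    }

    // What used to be the growing switch statement becomes a lookup.
    void dispatch(String name) {
        Command command = commands.get(name);
        if (command == null) {
            throw new IllegalArgumentException("unknown command: " + name);
        }
        command.execute();
    }
}

Adding the eighth leg is then one more small class and one more register() 
call, not another case wedged into a switch nobody wants to touch.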


Michael Feathers
author, Working Effectively with Legacy Code (Prentice Hall 2005)
www.objectmentor.com
0
mfeathers2 (74)
7/9/2005 1:43:55 PM
> The vendor raised the licensing rate and they
> simply could not make money with the product.

There are many free RDBMSs available.

> The team wished that they could break the dependencies on the vendor

Just keep to ANSI SQL and you will be able to change database vendor at
any time. The applications I currently work with are database vendor
independent without separating the database schema from the GUI or
"business" logic. Using ANSI SQL, the database schema is already
separated from the database vendor.
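
For illustration (a sketch only; the JDBC URL, the credentials and the
product table are made up), code like this contains nothing vendor specific:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ProductTotals {
    // Standard SQL through the standard JDBC API; no vendor classes anywhere.
    public static double totalAmount(String jdbcUrl, String user, String password)
            throws Exception {
        Connection connection = DriverManager.getConnection(jdbcUrl, user, password);
        try {
            PreparedStatement statement =
                connection.prepareStatement("select sum(amount) from product");
            ResultSet result = statement.executeQuery();
            result.next();
            return result.getDouble(1);
        } finally {
            connection.close();
        }
    }
}

Changing vendor means changing the URL and the driver on the classpath, not
the code.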

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/9/2005 2:38:59 PM
I don't say that separation should never be done. But you can't always
separate everything from everything. For example, if you are producing
XML documents in a Java application, two popular frameworks are jdom
and dom4j. Does my application have to be separated from the XML
framework used? Do I have to write a layer between my application and
jdom/dom4j, so I can switch between them? The answer might be yes in
some scenarios, but in most scenarios the answer would be no. You
always have to compare the benefits to the costs.
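
If the answer is yes in some scenario, the layer does not have to be big.
A sketch in Java (the names are invented, and the jdom/dom4j adapters are
left out):

// The application depends only on this small interface; the jdom and
// dom4j adapters (not shown) are the only classes that touch the library.
interface XmlWriter {
    void startElement(String name);
    void attribute(String name, String value);
    void text(String content);
    void endElement();
}

class CustomerXml {
    void write(XmlWriter out, String customerId, String name) {
        out.startElement("customer");
        out.attribute("id", customerId);
        out.startElement("name");
        out.text(name);
        out.endElement();
        out.endElement();
    }
}

Even a seam this thin is code to write and maintain, which is exactly the
cost that has to be weighed.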

If you are separating the "business" logic from the database schema,
what are the benefits? As pointed out before, vendor independence has
nothing to do with this. The database is already separated from the
application in two layers: ANSI SQL and JDBC/ODBC/ADO.

About your example, can you show how the command pattern would be
better than the switch statement? Are you saying that switch statements
are always bad, or are you talking about a specific scenario?

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/9/2005 3:08:58 PM
> I am not disagreeing with the decoupling of the GUI layout from the database schema

What are your arguments for decoupling the GUI from the database
schema?

> but I am wondering
> how often the GUI needs changing in some manner to accommodate the new column whether
> decoupled or not.

If a new column also causes changes in the GUI, wouldn't a decoupled
architecture cause you extra work?

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/9/2005 4:20:14 PM
frebe wrote:

> I asked you " Why would it be suicide to have a coupling between the
> GUI and  database schema?"

Because that is the definition of coupling: When it's suicide.

;-)

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/9/2005 4:29:43 PM
Robert C. Martin wrote:
> On 8 Jul 2005 01:34:15 -0700, hansewetz@hotmail.com wrote:

> Some of both.  Most of the acceptance tests that I write are based on
> functional and non-functional requirements.  Some, however, are based
> on software structure.  See:
> http://butunclebob.com/ArticleS.UncleBob.StableDependenciesFixture.

OK, I assume that by 'test' you mean 'execution test'.

> >Also, I'm not really sure what you mean by 'requirements'.
>
> The traditional definition.

Since you are expressing requirements as tests it seems pretty
important to have a clear definition of 'requirements'! The term
'requirement' is one of the terms that have different meanings to
different people. Check out environments with coders, architects,
business analysts, business people, bean counters, project managers ...

I asked the question about 'requirements' to make sure we were talking
about the same thing. I'm still not sure if you are mixing requirements
from the customer with specification of the behavior of the software.

> That's just Zeno's paradox.  It's true that there's always one more
> test to run to test one last little detail, or one last little obtuse
> case.  But that doesn't mean that Achilles never passes the tortoise.

Zeno's paradox is meant to illustrate a paradox; my example does not
describe a paradox.

You might be able to show that 'that last little detail' is handled
correctly by analyzing the design. If the 'last little detail' is,
let's say, a race condition, I don't think your execution tests will
always do the job.

> Which from microsecond to microsecond can change its state.  There
> is, after all, a positive probability that all the molecules in the
> bell will, by chance, shift to the left by 3 feet.  Or that a cosmic
> ray with the energy of a well hit baseball (they have been detected!)
> will miss all the intervening air molecules and "ring" the bell.
>
> In short, there's no way to prove that the machinery works.  But it
> doesn't matter, because you can prove that it works to within a
> certain probability.

Of course it's not possible to 'prove' that the machinery works.
My apologies for not clearly expressing what I meant. My point, not
explained well, was that it is sometimes possible to show that
logically a design or piece of code should work if certain assumptions
are made. For example, an assumption could be that when writing a value
to a register the 'bell' reacts in a certain way.

You can increase the probability of ending up with software that works
correctly if you not only run execution tests but also verify that some
important parts of the design are internally consistent and under
certain assumptions will perform as expected. One such assumption is
that the design is implemented correctly. Another one is that the
environment in which the software executes maintains certain
characteristics. For example, no cosmic rays with the energy of a well
hit baseball!

Execution tests can increase the confidence that the code actually
implements the design. In general I cannot see that execution tests can
show that the design is consistent since an execution test can only
check a few samples of all possible states of the software.

> True, but that doesn't mean that we don't write the tests.  We cannot
> test *everything*.  But we can test enough.

To say that 'we can test enough' is a pretty strong statement. The
meaning of 'enough' depends on what the software does.

If you are writing safety or mission critical applications you will
most certainly have to show that your design or some parts of your
design are consistent and performs in a specific way under certain
conditions without execution tests. However, if you are writing a word
processor, execution tests are probably 'enough'.

I do not question the value of writing execution tests. However, I do
question the blind belief in the illusion that checking software through
execution tests is 'enough'.

Regards,
Hans Ewetz

0
hansewetz (110)
7/9/2005 5:34:28 PM
frebe wrote:

> Just keep to ANSI SQL and you will be able to change database vendor at
> any time. The applications I currently work with, are database vendor
> independent without separating the database schema from the GUI or
> "business" logic. Using ANSI SQL the database schema are already
> separated from the database vendor.

I want to switch to XML.

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/9/2005 5:39:26 PM
frebe wrote:

> > I am not disagreeing with the decoupling of the GUI layout from the
> > database schema
>
> What are your arguments for decoupling the GUI from the database
> schema?

Data are most efficiently stored in normalized tables. Users most
efficiently enter data in the order they think of them (name, address,
telephone number, etc.).

These two efficiencies contradict each other. The user should not, for
example, enter a telephone number and a CustomerID into a grid, representing
the phone number table. The user should not memorize the CustomerID. The GUI
should have a default phone number slot, and an option to push in extra
numbers of various types.

If you propose a system that reconciles the two efficiencies, then you
propose a decoupling.
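
To make that concrete, a rough Java sketch (CustomerForm, CustomerMapper and
the table names are all invented): the form holds fields in the order the
user thinks of them, and a small mapper spreads them over the normalized
tables.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

class CustomerForm {
    String name;
    String address;
    String defaultPhoneNumber;   // no CustomerID anywhere on the form
}

class CustomerMapper {
    void save(Connection connection, CustomerForm form) throws Exception {
        // The mapper, not the user, deals with the generated key and with
        // the fact that phone numbers live in their own normalized table.
        PreparedStatement insertCustomer = connection.prepareStatement(
            "insert into customer (name, address) values (?, ?)",
            Statement.RETURN_GENERATED_KEYS);
        insertCustomer.setString(1, form.name);
        insertCustomer.setString(2, form.address);
        insertCustomer.executeUpdate();

        // getGeneratedKeys() support varies by driver; that detail stays here.
        ResultSet keys = insertCustomer.getGeneratedKeys();
        keys.next();
        long customerId = keys.getLong(1);

        PreparedStatement insertPhone = connection.prepareStatement(
            "insert into phone_number (customer_id, number) values (?, ?)");
        insertPhone.setLong(1, customerId);
        insertPhone.setString(2, form.defaultPhoneNumber);
        insertPhone.executeUpdate();
    }
}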

> > but I am wondering
> > how often the GUI needs changing in some
> > manner to accomodate the new column whether
> > decoupled or not.
>
> If a new column also cause changes in the GUI, wouldn't a decoupled
> architechture cause you extra work?

The meaning of "decoupled" is that it would not cause extra work. But that's a
goal, not a requirement. Sometimes you perform a little extra work.

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/9/2005 5:45:22 PM
Phlip wrote:
> frebe wrote:
> 
> 
>>Just keep to ANSI SQL and you will be able to change database vendor at
>>any time. The applications I currently work with, are database vendor
>>independent without separating the database schema from the GUI or
>>"business" logic. Using ANSI SQL the database schema are already
>>separated from the database vendor.
> 
> 
> I want to switch to XML.
> 

"I'm sorry, but we didn't plan for XML.  So, you can't without, 
essentially, a rewrite."

"Well, can't you change the code quickly to accomodate XML?  I read 
about a team that converted persistence strategies in a 100 KSLOC code 
base in about a week.  They said that all of the tests that they had 
helped them do it."

"Er, I'm sorry about that too, but you see once we figured out that had 
to populate a database every time that we wanted to setup a test, well, 
we stopped writing as many tests.  Plus, the only way to test our app is 
through the GUI with a screen-scraping tool and well, the tests are 
horrible to maintain.  We have some tests but nowhere near as many as it 
would take to do a quick safe conversion."

"Please tell me I didn't pay for this."


Michael Feathers
author, Working Effectively with Legacy Code (Prentice Hall)
www.objectmentor.com
0
mfeathers2 (74)
7/9/2005 6:17:49 PM
Phlip wrote:

> Converting requirements to specifications must be done, and converting
> specifications to code must be done. Automating the specifications
> simplifies coding.

Please show me how to automate specifications of software!
Not sure if we are talking about the same thing here.


> > Also, I'm not really sure what you mean by 'requirements'.
>
> Things that will profit your customer.

Lots of things will 'profit' the customer. Developers working long
hours for free is an important one. Your definition of
'requirements' seems far too general to be useful for expressing
requirements as 'tests'. If the goal is to express requirements as
tests, I believe you need a narrower definition.


> If you can't find a way to convert requirements "asystolic patients ring an
> alarm bell" into specifications "strobe port 7 when the time between
> systolic pressure spikes above 5 millibars exceeds 2 seconds" then you must
> research those requirements.

The example you give here is not the example I gave.

I never said that the requirements could not be converted into a
specification ...?

I said that by using execution tests you might not always be able to
check that the software behaves according to the requirements. To be
more clear, you might only be able to check a subset of the
requirements through execution tests.

>Insisting on executable specifications is a powerful way to review and gate
>requirements. If you can't write a test, you must plan first. So insisting
>on automated tests forces a test-driven project to plan its most important
>details.

Does this also mean that if you 'can write a test' for the behavior of
the software you don't have to plan?

Are you also saying that the behavior of your software reveals its
'most important' details? If you are building a compiler there are
lots of important details that are not revealed at the boundary of the
compiler!

What is a 'test-driven project'? Even if you express requirements
or specifications of the software as tests, I don't see tests as the
driving force in a project.

I believe automated tests are important. But it is equally important to
understand 'what' it means and 'what it does not mean' when the
software passes an automated test suite. To believe that all
requirements are met because the software passes all tests is sticking
your head in the sand. Automated tests are simply one of many ways
to ensure that the software behaves correctly.

Regards,
Hans Ewetz

0
hansewetz (110)
7/9/2005 6:30:11 PM
> Data are most efficiently stored in normalized tables. Users most
> efficiently enter data in the order they think of them (name, address,
> telephone number, etc.).

Isn't that the same? In your examples you have attributes that are
associated with a person. Wouldn't it be natural to have a person table
with the columns name, address, telephone, etc? In the GUI you would
most likely have a form/panel for editing persons.

> The user should not, for
> example, enter a telephone number and a CustomerID into a grid, representing
> the phone number table.

In a coupled architecture using database-aware GUI components, you
don't have to use a grid. You could, for example, use a list view
combined with a popup detail panel. What would your GUI look like?

"Phone number table"?? The phone number column should belong to the
customer table, shouldn't it?

> The user should not memorize the CustomerID.
Of course not. Using database-aware GUI components, you could have a
search panel allowing the user to search for customers using any
attribute. Tight coupling between the GUI and database makes such a
search panel easy to build. When I create applications, I just supply the
where-clause, and a search panel is automatically created based on the
columns and operators in the where-clause.

> The GUI
> should have a default phone number slot,
The concept of default values is supported by almost every database.
Support for default values is very easy to implement in database-aware
GUI components.

> and an option to push in extra
> numbers of various types.
To do this, you need to add an extra table
extra_phone_number(customerid, runningno, telephoneno)
In a database-aware detail panel, related records could be edited too.
All the examples you give just show the power of a database-aware GUI. The
features you are asking for are quickly implemented by changing a
descriptor object for the GUI component. Doing it your decoupled way,
it would take days.

> If you propose a system that reconciles the two efficiencies, then you
> propose a decoupling.
In that case, database-aware GUI components are "decoupled". A coupled
GUI can of course add extra functionality on top of the database, such
as validation and formatting.

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/9/2005 6:58:41 PM
hansewetz wrote:

> Phlip wrote:
>
> > Converting requirements to specifications must be done, and converting
> > specifications to code must be done. Automating the specifications
> > simplifies coding.
>
> Please show me how to automate specifications of software!
> Not sure if we are talking about the same thing here.

Humans collect requirements. Engineers convert them into specifications, and
write these down.

Documenting specifications is very important.

It's so important, each document should have a Test button, so you can test
the specification against the live code.

So, all specifications should be automated. That makes coding easier,
because you always know when you deviate from specifications, and you know
the specifications are refined enough that you can code to them.
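
For example (a sketch only; AlarmMonitor and its methods are invented here,
and JUnit is assumed on the classpath), the asystole requirement I used
earlier becomes a specification with a Test button:

import java.util.HashSet;
import java.util.Set;
import junit.framework.TestCase;

// Hypothetical monitor, just enough to make the specification executable;
// only the spec itself (strobe port 7 when the gap between systolic spikes
// exceeds 2 seconds) comes from the earlier example.
class AlarmMonitor {
    private Double lastSpikeTime = null;
    private final Set<Integer> strobedPorts = new HashSet<Integer>();

    void recordSystolicSpike(double timeSeconds) {
        if (lastSpikeTime != null && timeSeconds - lastSpikeTime > 2.0) {
            strobedPorts.add(7);   // the specified alarm port
        }
        lastSpikeTime = timeSeconds;
    }

    boolean wasPortStrobed(int port) {
        return strobedPorts.contains(port);
    }
}

public class AsystoleSpecificationTest extends TestCase {
    public void testAlarmStrobesWhenSpikeGapExceedsTwoSeconds() {
        AlarmMonitor monitor = new AlarmMonitor();
        monitor.recordSystolicSpike(0.0);   // seconds
        monitor.recordSystolicSpike(2.5);   // 2.5 s gap > 2 s threshold
        assertTrue("port 7 should be strobed", monitor.wasPortStrobed(7));
    }
}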

> Lots of things will 'profit' the customer.

Snipping out the gainsaying here...

> I said that by using execution tests you might not always be able to
> check that the software behaves according to the requirements. To be
> more clear, you might only be able to check a subset of the
> requirements through execution tests.

That's hardly a reason not to try.

> Does this also mean that if you 'can write a test' for the behavior of
> the software you don't have to plan?

How could you write the test without planning?

The point: If you can't write the test, you must plan more. So the answer to
the question "how much planning is needed in software engineering" is "write
tests for everything, just before writing each feature. That will right-size
the amount of planning."

> Are you also saying that the behavior of your software reveals it's
> 'most important' details? If you are building a compiler there are
> lots of important details that are not revealed at the boundary of the
> compiler!

You are my customer. You just said you want a complex system - a compiler -
with known boundaries. To get there, do you want me to "think", or write
tests?

I "think" our compiler will behave correctly within its boundaries. Here I
go - think think think. Is this "thinking" thing working? How do we know?

> What is a 'test-driven project'?

A project where you express all requirements as tests, so they drive the
project.

Engineers already generally do everything we are (both) discussing. But they
do them without the names we are using for the activities. And, sometimes,
they get bits wrong. They write specifications that cannot be tested. Then
they write their code, read it, and "think" about whether it meets specs.
Don't do that.

> Even if you express requirements
> or specification of the software as tests, I don't see tests as the
> driving force in a project.

Any force you need to drive your project, if it's a good force, you can
express it as a test.

Most projects use two driving forces - paperwork, and debugging. They only
commit to specifications that can be written down (untestably), and then
then they only write the kind of code that they can debug. Both those
systems are not working; they cause rework and endless delays for bug hunts.

> I believe automated tests are important. But it is equally important to
> understand 'what' it means and 'what it does not mean' when the
> software passes an automated test suite. To believe that all
> requirements are met because the software passes all tests is sticking
> the head in the sand. Automated tests are simply one out of many ways
> to ensure that the software behaves correctly.

Well, while my head is in this sand of testage, I see projects with a test /
code ratio of 4:1, and I run all these tests after the fewest possible
edits, usually just one line of code.

If you don't have experience with that kind of project, then you are
comparing my claim that tests should drive development at all levels with
your experience with incomplete testing. So without experience with a
_minimum_ volume of testage to support specifying and implementing, you are
not qualified to then ask about the _next_ level, which is helping tests
(and thinking) rate the odds a program will fail in the field.

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand



0
phlip_cpp (3852)
7/9/2005 7:06:24 PM
> I want to switch to XML.

Why?

The data model in XML is hierarchical, and the hierarchical data model was
abandoned in favour of the relational model a long time ago. XML is
almost only used for read-only configuration files and data transfer.
It is not used as a database.

The relational model is superior to the hierarchical model because of
referential integrity and superior querying. For tasks where these
features are not necessary, like data transfer, XML is an option. But
not as a database.

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/9/2005 7:11:02 PM
> "I'm sorry, but we didn't plan for XML.  So, you can't without,
> essentially, a rewrite."

Let's say you have these classes with the given properties:

class Customer
  -CustomerID
  -Name
  -Telephone

class Order
  -OrderID
  -CustomerID
  -DeliveryDate

class OrderRow
  -OrderID
  -PartNo
  -Quantity

class Part
  -PartNo
  -Description

What would the DTD for your XML look like? If you want to find all
customers that ordered a given part, how would you implement that
search? How would you check that only existing PartNos are entered
by the user?

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/9/2005 7:19:51 PM
frebe wrote:
>>I want to switch to XML.
> 
> 
> Why?
> 
> The data model in XML is hierachical and the hierachical data model was
> abandoned in favour of the relational model, long time ago. XML is
> almost only used for read-only configuration files and data transfer.
> It is not used as databases.
> 
> The relational model is superior to hierachical model, because of
> referential integrity and superior quering. For tasks there these
> features are not necessary, like data transfer, XML is an option. But
> not as databases.

Requirements change.  Our new users don't want a database.  They can't 
have another server, and all of their desktop machines have tight memory. 
They want to save their 'work in progress' as documents in XML to 
bridge to tools they already have.

This does happen ;-)


Michael Feathers
author, Working Effectively with Legacy Code (Prentice Hall 2005)
www.objectmentor.com
0
mfeathers2 (74)
7/9/2005 7:24:21 PM
frebe wrote:
>>I want to switch to XML.
> 
> 
> Why?
> 
> The data model in XML is hierachical and the hierachical data model was
> abandoned in favour of the relational model, long time ago. 


Now that you mention it, I have seen hierarchical data access built 
directly into a UI with no separation.

Separation would've come in handy when relational became popular, right?

You're in the same situation when something better than relational comes 
along, aren't you?  Even if it isn't "better" in some global sense, but 
only in some particular business specific way which makes the difference 
between making money and not.

The scenario that I outlined in my previous example is real.  I've 
worked with companies that are moving from centralized databases to 
document based storage.


Michael Feathers
author, Working Effectively with Legacy Code (Prentice Hall 2005)
www.objectmentor.com







0
mfeathers2 (74)
7/9/2005 7:37:09 PM
frebe wrote:

> How would the DTD for your XML look like? If you want to find all
> customers that ordered a given part, how would you implement that
> search? How would you do to check that only existing PartNo are entered
> by the user?

You are asking how to make XML behave like SQL. That's a different question,
and _yes_ there is more work involved.

Regardless of the extra code to support queries in XML, we don't want this
change to drag down the rest of our modules. The change should be isolated
to the Persistence Layer.

And do you persist in claiming that coupling to SQL is less odious than all
the other couplings we seek to defeat??

Michael Feathers wrote:

> This does happen ;-)

Changed my mind, Holmes. XML is too last-millennium...

Now I want YAML.

Sorry.

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/9/2005 7:51:57 PM
Phlip wrote:

> Humans collect requirements.

> I "think" our compiler will behave correctly within its boundaries. Here I
> go - think think think. Is this "thinking" thing working? How do we know?

> Any force you need to drive your project, if it's a good force, you can
> express it as a test.

> Well, while my head is in this sand of testage, I see projects with a test /
> code ratio of 4:1, and I run all these tests after the fewest possible
> edits, usually just one line of code.

> If you don't have experience with that kind of project, then you are
> comparing my claim that tests should drive development at all levels with
> your experience with incomplete testing. So without experience with a
> _minimum_ volume of testage to support specifying and implementing, you are
> not qualified to then ask about the _next_ level, which is helping tests
> (and thinking) rate the odds a program will fail in the field.

Well, after 'thinking' a little, I 'think' I'll bail out at this point.
I believe the quality of your comments speaks for itself.

Regards,
Hans Ewetz

0
hansewetz (110)
7/9/2005 8:28:05 PM
> Requirements change.  Our new users don't want a database.  They can't
> have another server and all of their desktop machines have tight memory.
> They want to save their 'work in progress' as documents in XML to
> bridge to tools they already have.
> This does happen ;-)

There exist a number of very lightweight RDBMSs, hsqldb and McKoi for
example. XML files use much more space on disk than the corresponding
database file. And a DOM document uses a large amount of memory after
being parsed.

XML files may be used for some applications, but it is very easy to
decide if you need an RDBMS for your application or if direct file
access is enough. It must be extremely rare that an application is
converted from using an RDBMS to using XML files.

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/10/2005 4:46:54 AM
> Now that you mention it, I have seen hierarchical data access built
> directly into a UI with no separation.
> Separation would've come in handy when relational became popular, right?

The problem is that every new generation of data model moves the border
between database logic and "business" logic. For example, before the
relational model we had a hierarchical database like this:

customer A
  -order 1
     -part A
  -order 2
  -part B
customer B
  -order 3
    -part B
  -order 4
    -part A

Now we want to write a function to find every customer ordering part
A. The "business" logic might look like this:

customers = Set()
for each customer:
  for each order:
    for each part:
      if (part = 'A'):
         customers.add(customer)

Then relational databases came, and this would no longer be "business"
logic. It would be replaced by:
select distinct customerid from order join orderrow on
order.orderid=orderrow.orderid
where partno='A'

Or worse, the old "business" logic would be kept and the relational
database would be traversed as a hierarchical one (this I have seen many
times).

The idea that "business" logic could be separated from database logic
is an illusion. The next generation of data model will probably again
move the border, and the "business" logic you write today will
tomorrow be regarded as low-level data access logic.

> The scenario that I outlined in my previous example is real.  I've
> worked with companies that are moving from centralized databases to
> document based storage.
Can you give a more detailed description of the case?

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/10/2005 5:02:15 AM
frebe wrote:

> There exists a number of very lightwight RDBMS, hsqldb and McKoi for
> example. XML files uses much more memory on disk than the corresponding
> database file. And a DOM document uses a large amout of memory after
> being parsed.

We decouple to permit programs to exchange these tradeoffs as the project
grows. If the customer requests such-and-so balance between flexibility and
speed, we may pick XML or RDBMS.

(And everyone here knows XML is not a database, it's a notation that helps
you build one...)

> XML files may be used for some applications, but it is very easy to
> decide if you need a RDBMS for your application or if direct file
> access is enough. It must be extremly rare that an applications is
> converted from using a RDBMS to using XML files.

Programs start by investing in certain libraries. Decoupling our modules
from these libraries, even favored ones, keeps designs clean before the need
to replace arises. If it does, we are ready. If not, we still get the
benefits of clean design and easy testing.

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/10/2005 5:11:36 AM
> Regardless of the extra code to support queries in XML, we don't want this
> change to drag down the rest of our modules. The change should be isolated
> to the Persistence Layer.

The question is if we really need switchability between different data
paradigms. In history there have been three - hierarchical, network and
relational. All but relational are obsolete. There might come new
paradigms, but it seems to take at least 10-20 years between each
generation. As pointed out in my answer to Feathers, it is impossible to
know how a future paradigm would affect the application structure, so
trying to prepare for an unknown future in this case is just a waste of
money.

Once we have identified the need for an RDBMS in an application, we can be
reasonably sure that an RDBMS can be used during the entire life-cycle
of the application. The good thing with modern relational databases is
the high level of standardization. If a new generation of databases
were to come soon, we can be sure that there would be SQL support and
JDBC/ODBC/ADO drivers to emulate relational databases in the next
generation of databases.

I think you base your argument on the fact that yesterday's databases had
a very low level of standardization. If you changed vendor, you had to
rewrite your application. But today it is easy to separate your
application from a database vendor by using standard interfaces.

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/10/2005 5:16:10 AM
> We decouple to permit programs to exchange these tradeoffs as the project
> grows. If the customer requests such-and-so balance between flexibility and
> speed, we may pick XML or RDBMS.

The tradeoffs are handled by choosing different RDBMS implementations,
not by switching to different data paradigms.

> Programs start by investing in certain libraries. Decoupling our modules
> from these libraries, even favored ones, keeps designs clean before the need
> to replace arises.

That is why a standard API like JDBC should be used. By using JDBC, your
modules are separated from the actual database vendor used.

The problem is that it is hard to make an API like JDBC that would also work
with all the obsolete database paradigms. If you did that, you would have to
use the lowest common denominator, and you would end up with a simple map
for retrieving records by id only.
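
Such a smallest-denominator API would look something like this (RecordStore
and findById are names invented for the sketch):

import java.util.Map;

// Roughly all that is left once the API has to work over hierarchical,
// network and relational stores alike.
interface RecordStore {
    // Returns column/field name -> value for the record with the given id,
    // or null if there is no such record. Joins, sums and ad hoc queries
    // cannot be expressed here; that is the cost of the common denominator.
    Map<String, Object> findById(String recordType, Object id);
}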

But if your application really needs switchability between database
paradigms, I think such an API should be used. The reason why there are
very few such APIs is that the need for applications to be switchable
between database paradigms is extremely low.

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/10/2005 5:26:32 AM
frebe wrote:

> The question is if really need switchability between different data
> paradigms. In history there have been three -hierachical, network and
> relational. All but relational is obsolete. There might come new
> paradigms, but it seem to take at least 10-20 years between every
> generation. As pointed out in my answer to Feathers it is impossible to
> know how a future paradigm would affect the application structure, so
> trying to prefare for an unknown future in this case, is just wast of
> money.

So you are saying that the paradigm can couple out to the other modules,
even when the actual technology is insulated. Fair enough.

> Once we identified the need for a RDBMS in an application, we can be
> resonable sure that a RDBMS could be used during the entire life-cycle
> of the application.

Hope for the best, prepare for the worst. ;-)

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/10/2005 5:49:41 AM
It's a nice example, but I can't imagine ever calling that business 
logic.  To me, it is all data access.


frebe wrote:
>>Now that you mention it, I have seen hierarchical data access built
>>directly into a UI with no separation.
>>Separation would've come in handy when relational became popular, right?
> 
> 
> The problem is that every new generation of data model moves the border
> between database logic and "business" logic. For example, before the
> relational model we had a hierachical database like this:
> 
> customer A
>   -order 1
>      -part A
>   -order 2
>   -part B
> customer B
>   -order 3
>     -part B
>   -order 4
>     -part A
> 
> Now we want to write a function to find every customers ordering part
> A. The "busniess" logic might look like this:
> 
> customers = Set()
> for each customer:
>   for each order:
>     for each part:
>       if (part = 'A'):
>          customers.add(customer)
> 
> Then relational databases came, this would not longer be "business"
> logic. It would be replaced by
> select distinct customerid from order join orderrow on
> order.orderid=orderrow.orderid
> where partno='A'
> 
> Or worse, the old "business" logic would be keept and the relational
> database would be traversed as a hierarchial one (this I have seen many
> times).
> 
> The idea that "business" logic could be separated from database logic
> is an illusion. The next generation of data model will probably again
> move the border and the "business" logic you write to today will
> tomorrow be regarded as low-level data access logic.
> 
> 
>>The scenario that I outlined in my previous example is real.  I've
>>worked with companies that are moving from centralized databases to
>>document based storage.
> 
> Can you give a more detailed description of the case?

There are a couple of them, and it is hard to do without betraying 
confidentiality.  But, in general, these are applications where someone 
thought a central database was perfect for the job, but some new 
category of customer decided that they didn't want a server, and would 
rather manage the data as documents.  In these several cases, it was an 
explicit rejection of workgroup features and a desire to interoperate 
with other tools in existing workflows.

Michael Feathers
www.objectmentor.com


0
mfeathers2 (74)
7/10/2005 2:12:19 PM
hansewetz@hotmail.com wrote:

> Of course it's not possible to 'prove' that the machinery works.
> My apologies for not clearly expressing what I meant. My point, not
> explained well, was that it is sometimes possible to show that
> logically a design or piece of code should work if certain assumptions
> are made. For example, an assumption could be that when writing a value
> to a register the 'bell' reacts in a certain way.
....
> If you are writing safety or mission critical applications you will
> most certainly have to show that your design or some parts of your
> design are consistent and performs in a specific way under certain
> conditions without execution tests. However, if you are writing a word
> processor, execution tests are probably 'enough'.

The question is, how do you show it?  A chain of reasoning on a piece of 
paper is capable of being misunderstood, and worse it is capable of 
being totally irrelevant if the implementation doesn't match it in any
slight detail.

 > I do not question the value of writing execution tests. However, I do
 > question the blind belief in an illusion that checking software through
 > execution tests are 'enough'.

I don't think anyone advocates that.  Agile development is *very* 
thought intensive.  The thing is, we try to tie the thoughts directly to 
the code whenever we can so that our reasoning is grounded.

Best to avoid the whimsical state that Knuth wrote about: "Beware of 
bugs in the above code; I have only proven it correct, not tried it."

:-)


Michael Feathers
author, Working Effectively with Legacy Code (Prentice Hall 2005)
www.objectmentor.com
0
mfeathers2 (74)
7/10/2005 3:21:28 PM
> It's a nice example, but I can't imagine ever calling that business
> logic.  To me, it is all data access.

That is because you know about relational databases. But if you had
written the code when hierarchical databases were used, before the
introduction of relational databases, very few people would have been
able to imagine that it was not business logic. The same thing will happen
when the next generation of databases comes. Stuff that we call business
logic today will be considered data access tomorrow. Look at a web
shop application: it is all "data access", showing records for the
user, letting the user enter order records. Almost no "business" logic
at all.

> they didn't want a server
You don't need a server to use an RDBMS. McKoi, for example, has a
footprint < 1M.

> would rather manage the data as documents.
Why? You are losing a lot of features, like referential integrity and
querying, by doing this. What were the benefits?

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/10/2005 3:41:22 PM
On 9 Jul 2005 05:01:59 -0700, "frebe" <fredrik_bertilsson@passagen.se>
wrote:

>>>Adding a new column to the database would in no way crash the GUI.
>
>> Use your imagination!
>
>There is no way.

There are a million ways.  Use your imagination.  For example, imagine
a column named "price".  The GUI totals it up in two dozen or so
different places using the appropriate SQL. 

The requirements change.  The price field is split between net-price
and shipping.  Some of the totals should be the total of both, some
should be just the total of net-price.  
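
For illustration, a minimal Java/JDBC sketch of the kind of coupled GUI
code meant here; the order_row table and its columns are made-up names,
not taken from any real system:

import java.sql.*;

class PriceTotals {
    // Before the change: dozens of places in the GUI do something
    // like this, each with its own SQL against the "price" column.
    static double totalPrice(Connection con) throws SQLException {
        Statement st = con.createStatement();
        ResultSet rs =
            st.executeQuery("select sum(price) from order_row");
        rs.next();
        double total = rs.getDouble(1);
        rs.close();
        st.close();
        return total;
    }
    // After "price" is split into net_price and shipping, every one of
    // those places must be found and edited: some should now total
    // net_price + shipping, others net_price only.
}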

>The only possible way is if you have insert SQL
>statements that are not specifying columns. That is considered to be
>very bad coding. 

The name of the field changed.  

>Any other select, update or insert statement will
>produce exactly the same result after the adding of a new column.

Not if adding the new column changed the meaning of a previous column.

>I can guarantee you that you can add as many columns as you want to any
>table in a database, without breaking anything at all.

And I guarantee you that sometimes adding a column causes the meanings
of other columns to change.


-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/10/2005 4:30:04 PM
On Sat, 9 Jul 2005 07:58:26 +1200, "Shane Mingins"
<shanemingins@yahoo.com.clothes> wrote:

>"Robert C. Martin" <unclebob@objectmentor.com> wrote in message 
>news:io7tc1p4sflc4vk839tt45djr8dp1u2eb9@4ax.com...
>> On Fri, 8 Jul 2005 15:24:17 +1200, "Shane Mingins"
>> <shanemingins@yahoo.com.clothes> wrote:
>>
>>>"Robert C. Martin" <unclebob@objectmentor.com> wrote in message
>>>news:ek7lc1plc7jdd4r1k7u45tiig7sj2c73su@4ax.com...
>>>
>>>> We try to decouple the GUI layout from the
>>>> database schemae because it would be a shame to crash the GUI when
>>>> adding a new column to the database.
>>>>
>>>
>>>Out of curiosity ... how many times have you added a new column to a
>>>database table where the data is provided to the view, and not needed to
>>>change the view?
>>
>> Consider a GUI that traverses a complex data structure on the database
>> to present a summary of the information in some graphic form.  Suppose
>> that the database is strongly interconnected with lots of tables and
>> relationships, and that the data being displayed is not about any
>> individual table or relationship, but is about a calculation based on
>> the whole database.
>>
>> Now imagine that deep within the database structure we add a new
>> column to a relationship table.  This column modifies the way the
>> relationship works.
>>
>> Let's finally say that the GUI code depends deeply on the database
>> schema, but the necessary 'if' statement to check the new field of
>> that relationship table was not put into the GUI code.
>>
>> The GUI crashes.
>
>
>I am thinking that my question was not phrased well.  If you were to add up 
>the number of times that you have had to add a new column to a database 
>table, of those times, how often did you have to alter the GUI vs. not 
>have to alter the GUI?  I am wondering what the ratio is of having to alter 
>the GUI vs. not having to alter the GUI.

I don't know the ratio, but it's not uncommon to change the schema and
not change the GUI.

>I have just been wondering 
>whether the "adding a new column to the database" is a big enough reason if 
>99% of the time adding that column requires a change in the GUI anyhow.

I don't think you are missing anything.  My statement was an extreme
one.  Coupling the GUI to the Database creates the possibility that
adding a column will crash the GUI.  I agree that there are much
better reasons for decoupling the Database from the GUI; but there are
not too many more poignant fears.

-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/10/2005 4:34:08 PM
On 9 Jul 2005 05:29:08 -0700, "frebe" <fredrik_bertilsson@passagen.se>
wrote:

>> Now imagine that deep within the database structure we add a new
>> column to a relationship table.  This column modifies the way the
>> relationship works.
>
>Your initial statement didn't say anything about modifying primary or
>foreign keys. You said "add a new column". If you change a primary or
>foreign key, the GUI will of course be affected. Can you give some example
>of a primary or foreign key change that can be isolated to only the
>persistence layer?
>
>> Let's finally say that the GUI code depends deeply on the database
>> schema, but the necessary 'if' statement to check the new field of
>> that relationship table was not put into the GUI code.
>> The GUI crashes.
>
>Can you give an example? I am not sure I understand why the GUI would
>crash.

The new structure of the database introduces a logical flaw in the
calculations within the GUI.  The GUI code inadvertently frees an
allocated structure *twice*.  The heap gets corrupted.  A billion
instructions later the GUI crashes.

However, let me say that when I used the term "crash" in the original
post, I simply meant "malfunction". 


-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/10/2005 4:48:21 PM
On 9 Jul 2005 09:20:14 -0700, "frebe" <fredrik_bertilsson@passagen.se>
wrote:

>> I am not disagreeing with the decoupling of the GUI layout from the database schema
>
>What are your arguments for decoupling the GUI from the database
>schema?
>
>> but I am wondering
>> how often the GUI needs changing in some manner to accommodate the new column
>> whether decoupled or not.
>
>If a new column also causes changes in the GUI, wouldn't a decoupled
>architecture cause you extra work?

Not necessarily.  The GUI might make calls to a business rule layer.
The BR layer will probably need to change because of the new schema,
but the UI might not have to change at all.
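
For illustration, a minimal Java sketch of what such a business rule
layer might look like; the InvoiceTotals interface, the JdbcInvoiceTotals
class and the SQL inside it are hypothetical, not code from any of the
systems discussed here:

import java.sql.*;

// The GUI depends only on this interface.
interface InvoiceTotals {
    double totalDue(long orderId) throws SQLException;
}

// Only this implementation knows the schema.  When the schema changes
// (say, price is split into net_price and shipping), the SQL here is
// edited, but GUI code calling totalDue() stays the same.
class JdbcInvoiceTotals implements InvoiceTotals {
    private final Connection con;

    JdbcInvoiceTotals(Connection con) { this.con = con; }

    public double totalDue(long orderId) throws SQLException {
        PreparedStatement ps = con.prepareStatement(
            "select sum(net_price + shipping) from order_row"
            + " where orderid = ?");
        ps.setLong(1, orderId);
        ResultSet rs = ps.executeQuery();
        rs.next();
        double total = rs.getDouble(1);
        rs.close();
        ps.close();
        return total;
    }
}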


-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/10/2005 4:49:52 PM

Michael Feathers wrote:
> hansewetz@hotmail.com wrote:
>
> > Of course it's not possible to 'prove' that the machinery works.
> > My apologies for not clearly expressing what I meant. My point, not
> > explained well, was that it is sometimes possible to show that
> > logically a design or piece of code should work if certain assumptions
> > are made. For example, an assumption could be that when writing a value
> > to a register the 'bell' reacts in a certain way.
> ...
> > If you are writing safety or mission critical applications you will
> > most certainly have to show that your design or some parts of your
> > design are consistent and performs in a specific way under certain
> > conditions without execution tests. However, if you are writing a word
> > processor, execution tests are probably 'enough'.
>
> The question is, how do you show it?  A chain of reasoning on a piece of
> paper is capable of being misunderstood, and worse it is capable of
> being totally irrelevant if the implementation doesn't match it in any
> slight detail.

I have rarely encountered a more belligerent statement about OOD in my
life.

Iconic notations for designing software are always tightly coupled to
implementation.  The abstract always decomposes to something closer to
the implementation - sometimes directly to code generation.

The absolute horseshit I'm reading is just offensive.  The flawed
reasoning of anyone working in software will manifest itself whether
the person is working with a design notation or working code, and the
same is true with gestalt groupings.

>
>  > I do not question the value of writing execution tests. However, I do
>  > question the blind belief in an illusion that checking software through
>  > execution tests are 'enough'.
>
> I don't think anyone advocates that.  Agile development is *very*
> thought intensive.

As opposed to what?  These comments of yours are just bizarre.

Let's see, "Nobody advocates stupid, dumb, or obviously wrong stuff...
therefore XP is free of all bad thoughts..."

Is that what you're telling us?

> The thing is, we try to tie the thoughts directly to
> the code whenever we can so that our reasoning is grounded.

Next thing you know you'll create a Flash Gordon helmet that directly
translates the pure thoughts of the Righteous League of XP Purists
directly to code.

The rhetoric is offensive because you are turning a discussion of
pragmatic methodological approaches to software development into
righteous thinking vs. wrongful thinking.

That has no place in these dialogues.

0
Krasicki1 (73)
7/10/2005 5:30:19 PM
Michael Feathers wrote:
> hansewetz@hotmail.com wrote:

> > If you are writing safety or mission critical applications you will
> > most certainly have to show that your design or some parts of your
> > design are consistent and performs in a specific way under certain
> > conditions without execution tests. However, if you are writing a word
> > processor, execution tests are probably 'enough'.
>
> The question is, how do you show it?

One way to show that your software system will behave correctly is to
first build a model of it. Once you have a model you can show that the
model has certain desirable characteristics.

You might be able to prove that:
   - with certain assumptions made, the model that you base your code
on is internally consistent.
   - a large set of business rules are consistent, i.e., do not
contradict each other.
   - your system will terminate - if coded correctly.
   - the system will cause the stuff outside the computer to behave
according to requirements - if coded correctly and if the surrounding
environment maintains certain characteristics.

However, there is no general answer to your question, any more than
there is a general answer to how a 'problem is solved'.

> A chain of reasoning on a piece of
> paper is capable of being misunderstood,

I don't believe this is a valid argument for not doing paper analysis.

> and worse it is capable of
> being totally irrelevant if the implementation doesn't match it in any
> slight detail.

You must have done some analysis in your head before you code. After
all, code is not a general problem solving tool - it won't solve
problems automatically for you. I can ask you the same question: how do
you know that your code matches all the details that you keep in your
head? Most likely, the details in your head are different from the
details in your colleagues' heads.

>  > I do not question the value of writing execution tests. However, I do
>  > question the blind belief in an illusion that checking software through
>  > execution tests are 'enough'.
>
> I don't think anyone advocates that.  Agile development is *very*
> thought intensive.

Sounds good!

> The thing is, we try to tie the thoughts directly to
> the code whenever we can so that our reasoning is grounded.

I don't see how reasoning about a problem can be grounded in code. As I
mentioned earlier, code is not a general problem solving tool.
Reasoning comes before coding I believe.

> Best to avoid the whimsical state that Knuth wrote about: "Beware of
> bugs in the above code; I have only proven it correct, not tried it."
>
> :-)

As I have mentioned before, I believe that both execution tests and
analysis of the problem/solution should be done when developing IT
systems. The two are not contradictory to each other - they show
different aspects of the IT system.

At times it is possible to be very 'agile' by analyzing a problem using
paper and pen before coding. Here is an example I encountered a couple
of years ago (apologies in advance for the rather long example):

A large set of rather complex business rules (>100) had to be
implemented. Without some in-depth knowledge of the business it was
difficult to understand the relationship between the rules. However,
the 'requirements' were expressed as text in a very thick binder. Now,
it turned out that by spending a not insignificant amount of time
(approximately one month) it was possible to see that the rules could
be understood as a cross product of a handful of very simple rules. There
were some important benefits to implementing the rules as a cross
product of simple rules instead of a large set of complex rules. First,
the implementation was very quick. Second, we could easily add new
rules - which we later did on a regular basis. Third, the simple rules
had a clear business meaning and we could, using paper and pen, easily
show the correctness of a new rule (again, by making a number of
assumptions). We also followed 'standard' IT practice of having a large
execution test suite - 'agile' development does not have a monopoly on
execution tests!
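
For illustration only - the real rules are confidential - here is a
generic Java sketch of the idea of building a large rule set from
combinations of a handful of simple rules; the Rule and CompositeRule
names are made up:

import java.util.*;

// Each simple rule checks one aspect of a business case.
interface Rule {
    boolean applies(Map<String, Object> businessCase);
}

// A "complex" rule is a combination of already-verified simple rules.
class CompositeRule implements Rule {
    private final List<Rule> parts;

    CompositeRule(List<Rule> parts) { this.parts = parts; }

    public boolean applies(Map<String, Object> businessCase) {
        for (Rule r : parts) {
            if (!r.applies(businessCase)) return false;
        }
        return true;
    }
}

// Adding a new rule then means choosing a new combination of simple
// rules, rather than hand-coding it from the text in the binder.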

I have heard arguments like:  'if you just had coded the complex rules
first you would have seen that they were composed of simpler rules'.
This is not true - or we were not smart enough. In fact, we did code
the rules first - with a full regression test suite. However, the
relationship between the complex rules and the simple rules required
far more business knowledge than we had. Later one person realized that
it is not possible to have such a large set of business rules without
some clear logic behind them. That's when the paper and pen analysis
started.

I cannot say that our first implementation was a mistake. With time
pressure and deadlines it is possible that we did not have a choice.
However, I can say that the second implementation was far superior when
compared to the first one.

I mentioned the example since I find it difficult to buy into the
sometimes dogmatic statements from proponents of 'agile' development,
XP or any other methodology. Obviously there cannot be one methodology
that fits all environments. For example, I would not want to be X-rayed
with a machine that was developed using (what I believe is) an XP
approach. On the other hand, I would not want to pay the amount of
money a word processor would cost had it been developed in a safety
critical environment.

Regards,
Hans Ewetz

0
hansewetz (110)
7/10/2005 5:31:45 PM
> The price field is split between net-price and shipping.

First you talk about adding a column. Now you talk about splitting a
column (removing one and adding two). Removal of columns will of course
break the client code in the same way as the removal of a method would
do.

> Some of the totals should be the total of both, some
> should be just the total of net-price.

If you do it your way, wouldn't you have to change your "business"
logic too? This is not an isolated change in the "persistence" layer as
you claimed.

> Not if adding the new column changed the meaning of a previous
In that case, you are doing something more than just adding a new column.

> And I guarantee you that sometimes adding a column causes the meanings
> of other columns to change.
Maybe sometimes, but not always. If you change the meaning of a column,
it means that you are reusing an old column name for a new purpose.
That would be considered the same as deleting one column and adding
another. Anyway, if you change the semantics of a column, you change
the business logic, and you do not have an isolated change in the
"persistence" layer.

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/10/2005 6:06:47 PM
> The new structure of the database introduces a logical flaw in the
> calculations within the GUI.  The GUI code inadvertently frees an
> allocated structure *twice*.  The heap gets corrupted.  A billion
> instructions later the GUI crashes.

If you have bugs in your software anything can happen. But I don't
think saying "my application might have a bug causing problems" is an
argument that proves your claims.

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/10/2005 6:11:13 PM
>>If a new column also causes changes in the GUI, wouldn't a decoupled
>>architecture cause you extra work?
> Not necessarily.  The GUI might make calls to a business rule layer.
> the BR layer will probably need to change because of the new schema;
> but the UI might not have to change at all.

So it will cause you extra work?

Let's take a very common example. You have an employee table which is
edited in a client that shows employees as rows in a list box. When you
double-click an employee, a new window pops up for editing one employee
record. Now I add a new column "telephoneno" to the employee table.
Using your approach you have to change the employee data access object,
the employee business object and finally the client. Using my approach
I just add the new column to the client descriptor.

Doesn't the decoupled approach cause a lot of extra work for many very
common tasks?
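
For illustration, a minimal Java sketch of what a descriptor-driven
client might look like; ColumnDescriptor and EmployeeClient are made-up
names and are not taken from Fredrik's framework:

import java.util.*;

// A descriptor tells a generic grid/editor which columns to show.
class ColumnDescriptor {
    final String column;   // database column name
    final String caption;  // header shown in the GUI

    ColumnDescriptor(String column, String caption) {
        this.column = column;
        this.caption = caption;
    }
}

class EmployeeClient {
    // Adding the new telephoneno column means adding one line here;
    // the generic client builds its SQL and its widgets from this list.
    static final List<ColumnDescriptor> EMPLOYEE_COLUMNS = Arrays.asList(
        new ColumnDescriptor("employeeid", "Id"),
        new ColumnDescriptor("name", "Name"),
        new ColumnDescriptor("telephoneno", "Telephone"));
}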

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/10/2005 6:17:49 PM
On 8 Jul 2005 08:24:05 -0700, hansewetz@hotmail.com wrote:

>The example is obviously silly. However, it should illustrate my point
>that execution tests are not always enough. Execution tests can easily
>create an illusion of correctness of a piece of software.

So can stray cosmic rays, errant mosquitos, and random water drips.
The fact that test results can be confounded, does not mean that tests
should not be written.  



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/10/2005 6:20:46 PM
On 9 Jul 2005 10:34:28 -0700, hansewetz@hotmail.com wrote:

>Robert C. Martin wrote:
>> On 8 Jul 2005 01:34:15 -0700, hansewetz@hotmail.com wrote:
>
>> Some of both.  Most of the acceptance tests that I write are based on
>> functional and non-functional requirements.  Some, however, are based
>> on software structure.  See:
>> http://butunclebob.com/ArticleS.UncleBob.StableDependenciesFixture.
>
>OK, I assume that by 'test' you mean 'execution test'.

I mean an automated test that, by its execution, demonstrates that
some part of the system is working correctly.
>
>> >Also, I'm not really sure what you mean by 'requirements'.
>>
>> The traditional definition.
>
>Since you are expressing requirements as tests it seems pretty
>important to have a clear definition of 'requirements'! 

Agreed.

>The term
>'requirement' is one of the terms that have different meanings to
>different people. Check out environments with coders, architects,
>business analysts, business people, bean counters, project managers ...

Yes, and still...  Confounding the issue with many different
definitions does not change the underlying theme.  We express
requirements as executable tests.

>I asked the question about 'requirements' to make sure we were talking
>about the same thing. I'm still not sure if you are mixing requirements
>from the customer with specification of the behavior of the software.

I am speaking specifically of the specification of the behavior of the
software as defined/agreed by the customer.

>You might be able to show that 'that last little detail' is handled
>correctly by analyzing the design. If the 'last little detail' is,
>let's say, a race condition, I don't think your execution tests will
>always do the job.

Any line of code that can be written can be tested.

>Of course it's not possible to 'prove' that the machinery works.
>My apologies for not clearly expressing what I meant. My point, not
>explained well, was that it is sometimes possible to show that
>logically a design or piece of code should work if certain assumptions
>are made. For example, an assumption could be that when writing a value
>to a register the 'bell' reacts in a certain way.
>
>You can increase the probability of ending up with software that works
>correctly if you not only run execution tests but also verify that some
>important parts of the design are internally consistent and under
>certain assumptions will perform as expected. 

Agreed, and agreed.

>I do not question the value of writing execution tests. However, I do
>question the blind belief in an illusion that checking software through
>execution tests are 'enough'.

Execution tests are necessary, but insufficient.  You also have to see
the system work.  However, execution tests can be the primary
mechanism that you depend upon to ensure that the system is correct.  


-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/10/2005 6:33:01 PM
Fredrik,

From where I sit, as an observer to this conversation, Robert made his 
point.  It is not only quite possible for a change in the database to crash 
or invalidate the GUI, but there are many ways for that to occur.

> First you talk about adding a column. Now you talk about splitting a
> column (removing one and adding two). Removal of columns will of course
> break the client code in the same was as a removal of a method would
> do.

He made an offhand remark about decoupling.  Think about the ideas behind 
the words.

>The only possible way is if you have insert SQL
>statements that are not specifying columns. That is considered to be
>very bad coding.

Have you ever thought about WHY this is considered to be bad coding?  Most 
of the time, your dev lead or dev manager will dictate "This practice is 
bad" when that practice violates a principle.  In this case, the principle 
that makes this practice 'bad' is the same one that Robert is trying to 
convey to you:  That the front end should be coupled as loosely as 
possible to the existence, order, location, and relations of fields in 
the database, to allow change in one without affecting the other.
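
For illustration, the two styles of insert statement in question, as
they might appear in Java/JDBC code; the employee table and its columns
are made up:

import java.sql.*;

class EmployeeInsert {
    // Fragile: depends on the number and order of columns in the table.
    // Adding a column to the table breaks this statement.
    static final String IMPLICIT =
        "insert into employee values (?, ?, ?)";

    // Robust to added columns: only the columns it names must still exist.
    static final String EXPLICIT =
        "insert into employee (employeeid, name, department)"
        + " values (?, ?, ?)";

    static void insert(Connection con, long id, String name, String dept)
            throws SQLException {
        PreparedStatement ps = con.prepareStatement(EXPLICIT);
        ps.setLong(1, id);
        ps.setString(2, name);
        ps.setString(3, dept);
        ps.executeUpdate();
        ps.close();
    }
}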

To wit: I have removed a column from a database, added two more (in 
different tables), and had minimal change to my application.  My changes 
maintained reporting without changes, and the CRUD screens were NOT changed 
in this instance.  The business rules were changed instead.  That isn't 
typical, but it happens, and it happened to me.  Separating the database 
from the front end made it possible.

> if you change the semantics of a column, you change
> the business logic, and you have not an isolated change in the
> "persistence" layer.

No one said that you would isolate the change in the persistence layer.  The 
point wasn't to isolate the change.  The point was to allow the different 
layers to change as independently as possible.  Many changes will happen 
across multiple layers.  That is inevitable.  In simple CRUD screens, it is 
normal.  However, it is not always the case, and to assume that it is means 
that you do not have experience with other kinds of systems, like 
integration brokers, B-to-B systems, Data ETL systems, etc.

I am not one of the folks who believe that you should overcode your 
middleware.  Personally, I believe in using the tools you have to the 
greatest extent possible.  That means using ADODB and the Dataset object so 
that you don't have to create 30 classes to model 55 tables, if you can do 
it with datasets and well-built views and stored procs.  On the other hand, 
keeping true to the principles will always help you create code that you 
will not feel guilty about 3 years later, when you realize that the poor 
schmuck who gets to maintain it has to make changes in two dozen places when 
a varchar field is restricted from 255 chars to 42 in order to allow better 
integration with a downstream system.

-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
--


0
nickmalik (325)
7/10/2005 7:46:53 PM
> It is not only quite possible for a change in the database to crash
> or invalidate the GUI, but there are many ways for that to occur.

You are changing his statement. His claim was "adding a new column
can crash the GUI". You say "a change in the database can crash the
GUI". You are correct and he is wrong.

>>The only possible way is if you have insert SQL
>>statements that are not specifying columns. That is considered to be
>>very bad coding.
>Have you ever thought about WHY this is considered to be bad coding?

Because adding a new column will crash old insert statements.

> I have removed a column from a database, added two more (in
> different tables), and had minimal change to my application.  My changes
> maintained reporting without changes, and the CRUD screens were NOT changed
> in this instance.  The business rules were changed instead.  That isn't
> typical, but it happens, and it happened to me.  Seperating the database
> from the front end made it possible.
Yes, it is possible. But the question is whether the benefits are
larger than the costs. You admit that it isn't typical. Why base your
architecture on rare scenarios? I prefer to have an architecture that
makes common scenarios easy.

> No one said that you would isolate the change in the persistence layer.
I think it has been said many times.

> The point was to allow the different  layers to change as indepedently as possible.
My point is to show that there is no reason to change the database
schema independently. So far, nobody has shown an example of the
opposite.

> However, it is not always the case, and to assume that it is means
> that you do not have experience with other kinds of systems, like
> integration brokers, B-to-B systems, Data ETL systems, etc.

My experience is from applications in the areas of logistics, production
control, human resource administration and accounting. 80 percent of
the logic in these kinds of applications is very CRUD-related. And a lot
of business logic is implemented in the database schema. All schema
changes are also changes in business logic.

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/10/2005 8:09:10 PM
frebe wrote:

> > The price field is split between net-price and shipping.
>
> First you talk about adding a column. Now you talk about splitting a
> column (removing one and adding two). Removal of columns will of course
> break the client code in the same was as a removal of a method would
> do.

You have a bad habit of addressing the details of an example supporting a
point instead of addressing the point. This was two different examples in a
row.

Decoupling is not a way to keep any GUI from crashing. It is a way, in
combination with tests and careful planning, to make sure the failure is
loud early and obvious.

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/10/2005 8:15:19 PM
Finally... We get to the crux of the matter... what does it mean to isolate 
one from another?

You appear to assume that, in order to isolate the database from the GUI, 
you have to abandon data-bound controls.

I would challenge that assumption by asking you to look at the definitions a 
little differently.

In general, let's say that layers are bound to each other across interfaces. 
In other words, the GUI layer uses a specific interface of the business 
objects layer.  Therefore, when data is transferred across these boundaries, 
the specification of that data is included in the specification of the 
interface.

You appear to be happy with using automated components that will actively 
adapt to changes in the contents of the data view.  I agree with this 
notion.  As much as possible, and where data transactional efficiency is not 
a problem (as in the case of a CRUD interface), this is a good idea.

I'd like to point out that, in this case, the specification of data 
interaction did not change even when the number of columns did, because the 
interface will handle ANY and ALL fields, not just specific fields.

The problem for maintenance comes when you need to make exceptions to the 
basic rules of "user knows best" to do things like restrict a single column 
to a preset list of values, or to validate that a particular field, present 
in the data structure, should not appear in a grid along with the rest of 
the columns.

This is where you need to place logic against the data structure that 
doesn't just apply to ALL fields.  This is where headaches start.

I am hoping to encourage caution about this kind of binding.  Suppose your 
code excludes a particular 500-byte-wide column from a grid by name.  If 
that column is renamed in the database, the exclusion stops working, the 
grid scrolls wildly, and the user won't be happy.  In that case it is fair 
to say that you have not isolated the db change from the front end, because 
the name of the database column appears in the front end code, potentially 
in many places, in order to perform this very specific exclusion.

On the other hand, if your data layer always _maps_ a database column name 
to a gui-friendly name that allows the db field name to change 
independently, then you can continue to use a data-bound control without 
fear that these exceptions can cause additional cost.
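
For illustration, the same mapping idea in a minimal Java sketch; the
ColumnMap class is made up and is not the .Net DataTableMapping
mentioned below:

import java.util.*;

// Maps physical column names to the names the front end binds to.
// If a database column is renamed, only this map changes; the
// data-bound grid keeps using the gui-friendly name.
class ColumnMap {
    private final Map<String, String> dbToGui =
        new HashMap<String, String>();

    void map(String dbColumn, String guiName) {
        dbToGui.put(dbColumn, guiName);
    }

    String guiName(String dbColumn) {
        String name = dbToGui.get(dbColumn);
        return name != null ? name : dbColumn;
    }
}

// Usage:
//   ColumnMap map = new ColumnMap();
//   map.map("cust_tel_no_v2", "Telephone");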

There is a little known feature of the Microsoft .Net framework that does 
this for data sets.  If you use the Visual Studio wizard to build a data 
adapter, and then take a look at the code that was generated, you'll see the 
use of the DataTableMapping object that allows for this level of 
indirection, while still allowing you to use the data bound controls for 
front-end mechanisms.  I do not know if you use Microsoft .Net for writing 
code, and I'm not trying to convince you to do so.  However, it may be 
interesting to take a look at the following topic in the documentation to 
see how it works:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vbcon/html/vbcontablemappingindataadapters.asp

This keeps with the notion of independence between the database and the 
front end while still allowing the use of automated tools that interpret the 
data structure in a consistent manner for the creation of automated GUIs.

Also, another little-known feature of the .Net platform is that you can 
create your middleware objects, and then bind the data controls directly to 
them, giving you the ability to code your business rules directly to your 
own objects, and then handle persistence downstream independently.  These 
are patterns that are not often used, because their structural benefits are 
not well understood by the majority of developers.  However, they have the 
real ability to make maintenance more efficient.

-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
-- 


0
nickmalik (325)
7/10/2005 8:17:41 PM
> You appear to assume that, in order to isolate the database from the gui,
> that you have to abandon data bound controls.
No, but that is what I have been told in this forum. Personally I think
that the database is already separated by SQL and JDBC.

> You appear to be happy with using automated components that will actively
> adapt to changes in the contents of the data view.
Not always. If I add a column, I also need to add the descriptor for
the GUI control. But it should also be possible to tell the control to
show all columns in the table.

> This is where you need to place logic against the data structure that
> doesn't just apply to ALL fields.  This is where headaches start.
Why? It is just a matter of telling the GUI control which fields to show.

> On the other hand, if your data layer always _maps_ a database column name
> to a gui-friendly name that allows the db field name to change
> independently, then you can continue to use a data-bound control without
> fear that these exceptions can cause additional cost.
Of course you don't have to show the physical column name in the header
in the GUI grid. In the framework I use, I can associate a "caption"
property to every database column (which will be different in different
languages).

> I do not know if you use Microsoft .Net for writing code, and I'm not trying to convince
> you to do so.
I use Java because I want to decouple my applications from any specific
OS.

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/10/2005 8:48:27 PM
>
> Yes, it is minor, but often it's good to catch things when they are minor.
>
> Tell about an experiment I run when I work with teams sometimes.  A pair
>   of developers introduces a switch statement with two legs.  I say "oh,
> looks like you could use command pattern there and get rid of the
> switch."  They say "yeah, but it's only two cases, adding the command
> pattern seems like overkill."  So I say, "yeah, you're right."  They go
> on and add the third leg.  "So, what about the command pattern now?"
>   They say "yeah, but it's still small."  So, I come back an hour later
> and they have seven or eight legs in the switch and they try to dodge
> me, but the truth is, they are looking at the work of introducing the
> command pattern then and thinking "Man! All of that work!"  So that is
> when we sit down and have a conversation about catching things early.
> We sit down together and fix the code.  The next time through, they are
> separating things preemptively because they realize how much easier it is.

But you can't assume that things always expand the same way they did
last time things grew. Things often don't follow the same growth
patterns each time. I personally do not encounter too many fully
duplicate switch statement lists unless there is a bigger design flaw
somewhere else causing it. In other words, duplicate switch statement
lists are often a symptom, not the disease itself.

> 
> 
> Michael Feathers

-T-

0
topmind (2124)
7/11/2005 4:57:42 AM
Robert C. Martin wrote:

>Yes, and still...  Confounding the issue with many different
>definitions does not change the underlying theme.  We express
>requirements as executable tests.

Stating that requirements are expressed as tests does not define what
requirements are. I seriously doubt that all automated tests express
requirements (unless of course you define 'execution test' to mean
'test of requirement'). The reason I'm harping on the issue of what
is meant by a 'requirement' is because it is important to
understand 'what it is' that is being demonstrated when a piece of
software passes an automated execution test suite.

For example, a customer requiring that a bell should sound when a
patient's heart stops does not care if the software writes 0x65 to
memory location 0x6754. The customer only cares that the bell goes off.
If an automated test suite checks that 0x65 is written to location
0x6754, it does not check the requirement. It simply checks that some
part of the specification of the software is met.

>I am speaking specifically of the specification of the behavior of the
>software as defined/agreed by the customer.

Customers do not in general express the behavior of the software as
requirements. They express the behavior of stuff outside the computer
(or possibly outside the software being written) as requirements. I see
this as the main difference between specification of the software and
requirements from customers.

>Any line of code that can be written can be tested.

Of course, but many lines of code may produce a large number of states
that cannot be tested through execution tests. Also, not all problems
can be decomposed into parts that can be analyzed independently.

>However, execution tests can be the primary
>mechanism that you depend upon to ensure that the system is correct.

Agree and disagree. For some systems the primary mechanism may be
automated execution tests. For others, it may be equations from
physics, mechanics or chemistry together with automated execution tests
that are the primary mechanisms that ensure that the system is
correct. I feel a little uncomfortable with the word 'primary'
since it sounds as if it is OK to exclude other mechanisms for ensuring
that the software is correct. 

Regards,
Hans Ewetz

0
hansewetz (110)
7/11/2005 4:32:50 PM
hansewetz@hotmail.com wrote:
>
> Stating that requirements are expressed as tests does not define what
> requirements are. I seriously doubt that all automated tests express
> requirements (unless of course you define 'execution test' to mean
> 'test of requirement'). The reason I'm harping on the issue of what
> is meant by a 'requirement' is because it is important to
> understand 'what it is' that is being demonstrated when a piece of
> software passes an automated execution test suite.
> 
> For example, a customer requiring that a bell should sound when a
> patient's heart stops does not care if the software writes 0x65 to
> memory location 0x6754. The customer only cares that the bell goes off.
> If an automated test suite checks that 0x65 is written to location
> 0x6754, it does not check the requirement. It simply checks that some
> part of the specification of the software is met.

Yet, amazingly that is good enough because we don't ask the customer to 
care about 0x65 or 0x6754.  They write tests in terms of the domain. 
Here is a simple example:

|heart|start|
|heart sensor|on|
|heart|stop|
|check|alarm|sounds|in|interval|0|300|


Real tests would probably have rows for EKG values at some sample 
interval so that the customer can verify that the software detects 
stoppage correctly.


Michael Feathers
www.objectmentor.com


0
mfeathers2 (74)
7/11/2005 5:04:39 PM

Robert C. Martin wrote:
> On 30 Jun 2005 02:38:48 -0700, "Mark Nicholls"
> <Nicholls.Mark@mtvne.com> wrote:
>
> >> >Hi,
> >> >
> >> >
> >> >I understand the mechanics of objects; I use objects, polymorphism,
> >> >even patterns.  I understand some of OOD, but I'm trying to grasp the
> >> >bigger picture, how to "think in OOD".  I hope an example will help:
> >>
> >> There has been an enormous amount of debate and discussion about this
> >> topic.  Some folks believe that OO is an extremely high level
> >> technique used to make models of the world.  Others believe that OO is
> >> a mechanism for structuring source code.  I fall into the latter camp.
> >
> >I fall into both (though philosophy is a little strong), I'd rather
> >call it a school.
> >
> >>
> >> To me, the whole notion that OO is a philosophy is silly.
> >
> >do you think there are any philosophical notions behind SE...or simply
> >that OO is not (a distinctive) one?
>
> Whether there are philosophies of software development is irrelevant;
> though I think there may be.  My point is that OO has often been
> touted as a "grand overarching philosophy" having more to do with
> life, the universe, and everything, than with software.

Its roots certainly lie outside the narrow view of OO, though the same
can be said for other paradigms.

>
> OO is certainly a different way of thinking about software.  From that
> point of view it is a kind of philosophy.  But it is a way of thinking
> about software at the structural level; not the grand "analysis" level
> (whatever that word happens to mean.)

I don't buy this.

If you believe that writing software is about modelling the real world
in some manner, and that the behavioural structure of the software must
be the same as the structure of the real world then it is a
philosophy....(to me this is the definition of analysis and at the root
of any modelling endeavour).

I accept that we embed more structure in the model in order to simplify
our software construction/deployment/maintenance and that this is also
encoded in 'objects', but it is behaviorally neutral...(to me this is
the definition of design).

You can if you like reject the mapping-from-the-model-to-the-real-world
approach (as some do...e.g. Copeland)....but frankly I don't, I believe
it lies at the heart of a scientific/mathematical view of the world.

>
> >OK, it is possible to invert the physical dependencies between sets of
> >classes (modules) by moving entities between them ...we should at least
> >agree here......are you claiming that this is the key characteristic
> >value of OO?
>
> Yes.

good

> >
> >> If the high level policy modules
> >> depend on (directly call) lower level modules, which then call even
> >> lower level modules, then the program is procedural and not object
> >> oriented.
> >
> >this is utterly bizarre....you are claiming that the OO ness of a
> >system is not dependent on the logical model (i.e. 'class',
> >'interface'), but on the allocation of a class or interface to a
> >physical deployment entity (module)...and further that even if an
> >application uses classes, interfaces, encapsulation, abstraction,
> >polymorphism, it can be considered (strictly) functional in certain
> >deployments........I disagree.
>
> Not quite. There is a logical component.  Higher level policies are
> decoupled from lower level policies by having both depend on
> interfaces or abstract classes.

but if they do not depend on interfaces but on implementation then they
are not OO?

> >
> >i.e.
> >
> >interface IA
> >{
> >}
> >
> >class CA : IA
> >{
> >}
> >
> >interface IB
> >{
> >}
> >
> >class CB1
> >{
> > IA a = new CA();
> >}
> >
> >class CB2
> >{
> >}
> >
> >so I can vary the OO'ness of the above code, not by changing the code,
> >but by how I allocate each entity to a module?!?!?
>
> Certainly.  If the allocation of classes to modules does not
> effectively decouple those modules, then you don't have an OO
> solution.  The fact that classes and interfaces are used is
> irrelevant, if those classes and interfaces are not used to create an
> OO structure.

I'm actually quite pleased you said that....at least we can proceed on
me thinking I know what you're saying and disagreeing.....which seems
rare around here.

'irrelevant' is the key word.

to you they seem only relevant if we can decouple physical modules.

>
> >If I put it all in 1 module....its not OO?
>
> Its not so much a matter of whether its in one module or not.  It's a
> matter of whether or not there is an obvious and convenient fracture
> zone that could be used to separate the modules.

OK, A->B (A depends on B)....can be separated into two separate modules,
so then all software containing classes is OO.....that's good.....I
think I would agree with the conclusion if not the route there.

Are functional DLLs OO then? This fracture zone would appear to be in
those too.

>
> >> However, if the high level policy modules are independent, or depend
> >> solely on polymorphic interfaces that the lower level modules
> >> implement, then the system is OO.
> >
>So if a physical module (set of classes) depends on 1 non-polymorphic
>interface and n (>0) polymorphic ones, it's not OO??!?!?!?!?!?
>
> OO-ness is not binary, it is a continuum.

then it's not a very good definition to me.

and as a goal then it is not a good one as it would seem to imply
decoupling at all costs...to me that is only one side of the coin.

> To the extent that a module
> depends on concretions, it is not OO.  To the extent that it depends
> on abstractions it is.

to me you are confusing OO with (de)coupling.

it is perfectly possible to do this in C at runtime with function
pointers, yet with no entity identifiable as an 'object'.

it is perfectly possible to do this in C without function pointers
simply by editing the link file.

function prototypes give us a degree of abstraction and encapsulation,
yet we do not need 'objects' to do this.

I believe you said in a previous post that you did not think that OO
should be boiled down to a syntactic argument about the difference
between

f(a,b) and a.f(b)

but to me this is the central point: what, semantically, is the difference
between these constructions?

encapsulation and abstraction!

we now have 2 scopes to A, private and public, and a syntactic mechanism
to identify them and which f can see.....without that OO is just C with
function pointers (not Rumbaugh C...as we manually construct a mechanism
to do this via the v-table).

Yes that means we can now substitute a different a....and thus
physically decouple its implementation, but who cares most of the time,
this is a symptom, not a defining cause, it is because it is logically
decoupled that it can be physically decoupled, if it is not logically
decoupled it cannot be physically decoupled, if there is no logical
abstraction (and encapsulation with it), then no logical interface
exists.
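
For illustration, a minimal Java sketch of the two constructions; the
names A, B and f are just placeholders:

// a.f(b): f lives inside A and can see A's private scope.
class A {
    private int state;            // visible only to A's own methods

    int f(B b) { return state + b.value(); }
}

class B {
    private final int v;

    B(int v) { this.v = v; }

    int value() { return v; }
}

// f(a, b): a free-standing function only has A's public scope to work
// with, so either 'state' must be exposed or the function must move
// into A - which is the encapsulation point above.
class Procedural {
    static int f(A a, B b) {
        return a.f(b);  // cannot read a.state from here
    }
}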

>
> >Would you characterise a C program where the dependencies between
> >'modules' were via function pointers as OO?
>
> Yes, so long as the decoupling created by the function pointers
> allowed high level policy to be separated from lower level details.
>
> >(I actually would
> >sympathise with an answer like 'a bit' but the lack of something
> >corresponding to an object would worry me)
>
> The function pointers are the methods of an interface.  Typically
> those pointers will be held in some kind of data structure.

that is Rumbaugh C; we do not need to go that route to exhibit the
characteristic that you define OO'ness by, just good old fashioned
procedural C, with some (int *)(foo)(1)'s thrown in....(except I cannot
remember the syntax).

> That data
> structure is an object.

that data structure need not exist, I believe you have introduced it
because your definition fails to need the concept of an object...and
that is problematic....OO with no objects.

>
> Consider, for example, the FILE data type in C.  Deep within it there
> are function pointers to the read, write, open, close, seek methods.
> FILEs are objects.
>

I agree (to a degree), this is an example but this would seem to make
all C's procedural libraries examples of OO.

I still cannot see how your definition does not include procedural
libraries and dll's in particular.

And I cannot see how your definition implies the existence or value of
'objects', short of stateless single method ones....i.e.
procedures/functions.

You seem to be putting the cart before the horse in order to validate a
view of OO that seems skewed towards physical dependencies.

0
Nicholls.Mark (1061)
7/11/2005 5:11:21 PM
Mark Nicholls wrote:
>
> If you believe that writing software is about modelling the real world
> in some manner, and that the behavioural structure of the software must
> be the same as the structure of the real world then it is a
> philosophy....(to me this is the definition of analysis and at the root
> of any modelling endeavour).
>
Philosophy isn't about "belief."  Philosophers take a critical stance
towards their subject matter.  Philosophy as applied to OO would be
about studying the statements of OO, looking for logical coherency,
critiquing OO, etc.  If you're talking about belief, you're talking
about something more akin to religion.  Religion is not philosophy.

-- Daniel

0
7/11/2005 5:47:49 PM

Daniel Parker wrote:
> Mark Nicholls wrote:
> >
> > If you believe that writing software is about modelling the real world
> > in some manner, and that the behavioural structure of the software must
> > be the same as the structure of the real world then it is a
> > philosophy....(to me this is the definition of analysis and at the root
> > of any modelling endeavour).
> >
> Philosophy isn't about "belief."  Philosophers take a critical stance
> towards their subject matter.

i.e. beliefs.

> Philosophy as applied to OO would be
> about studying the statements of OO,

i.e. the statements based on the axiomatic beliefs.

maths is beliefs, on axioms.
science is beliefs, on maths and observation.
religion is beliefs, based on god.

> looking for logical coherency,
> critiquing OO, etc.

logic....based on axiomatic beliefs see ZFC.

> If you're talking about belief, you're talking
> about something more akin to religion.  Religion is not philosophy.
>

religion i.e. based on a belief in a god.

If you don't like it then I'll drop 'philosophy', it's not a big deal
for me....my complaint with Mr Martin is not whether OO is a
philosophy, but his definition does not seem to need the existence of
objects, and thus does not seem to characterise general OO technology,
and is thus not a very useful definition.

0
Nicholls.Mark (1061)
7/12/2005 9:13:27 AM
On 11 Jul 2005 10:47:49 -0700, Daniel Parker wrote:

> Mark Nicholls wrote:
>>
>> If you believe that writing software is about modelling the real world
>> in some manner, and that the behavioural structure of the software must
>> be the same as the structure of the real world then it is a
>> philosophy....(to me this is the definition of analysis and at the root
>> of any modelling endeavour).
>>
> Philosophy isn't about "belief."  Philosophers take a critical stance
> towards their subject matter.

Well, clerics are very critical towards the devil... (:-))

OK, the difference is not in the subject. It is in the method. There are
scientific and religious methods of knowledge. The gap between philosophy
and religion is not that great, if any...

> Philosophy as applied to OO would be
> about studying the statements of OO, looking for logical coherency,
> critiquing OO, etc.

No. Philosophy is about the most general things such as truth, matter,
mind, cognition, existence etc. OO does not fit this scale. It would be
silly to think of OO as a subject of philosophy. There could be a philosophy
behind OO, but it is not the same as "philosophy of OO". So this is really
about a belief, that the "real" world and the world of software are
controlled by the same laws. And not because lazy people are accustomed to
particular methods, but because these laws were *indeed* created the same by
either God Almighty or the Big Bang!

> If you're talking about belief, you're talking
> about something more akin to religion.  Religion is not philosophy.

Yes, philosophy is religion! (:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
0
mailbox2 (6357)
7/12/2005 9:20:36 AM

Dmitry A. Kazakov wrote:
>
> Philosophy is about the most general things such as truth, matter,
> mind, cognition, existence etc.

I hate to tell you this, but the hottest topic in philosophy today is
"business ethics."

> OO does not fit this scale. It would be
> silly to think OO as a subject of philosophy. There could be a philosophy
> behind OO, but it is not the same as "philosophy of OO".

I recall reading a short book by a philosopher on "system theory."  If
philosophers can talk about that, surely they can talk about OO.
Philosophers can have fun too.

-- Daniel

0
7/12/2005 1:31:14 PM
Hans,

You are suffering from the same disorientations anyone familiar with
software development methodologies has in discussing XP.

XP proponents have done something unusual with their vernacular:
they've co-opted well-known and familiar software development
terminology and given it a subversive twist.

So when you ask about requirements they will respond, "No problem. We
have them right here."  When you ask where, they say "in the test".
When you say, "but a test can only describe the provincial, myopic, and
quite exacting immediate answer to a given factoid", they say, "no
matter, we do thousands of these and therein must lie a greater
meaning."  Your frustration will be endless.

XP has a self-fulfilling prophecy built in, and it is self-insulating in
circular logics that protect it from even reasonable analysis.

cheers,

krasicki


hansewetz@hotmail.com wrote:
> Robert C. Martin wrote:
>
> >Yes, and still...  Confounding the issue with many different
> >definitions does not change the underlying theme.  We express
> >requirements as executable tests.
>
> Stating that requirements are expressed as tests does not define what
> requirements are. I seriously doubt that all automated tests express
> requirements (unless of course you define 'execution test' to mean
> 'test of requirement'). The reason I'm harping on the issue of what
> is meant by a 'requirement' is because it is important to
> understand 'what it is' that is being demonstrated when a piece of
> software passes an automated execution test suite.
>
> For example, a customer requiring that a bell should sound when a
> patient's heart stops does not care if the software writes 0x65 to
> memory location 0x6754. The customer only cares that the bell goes off.
> If an automated test suite checks that 0x65 is written to location
> 0x6754, it does not check the requirement. It simply checks that some
> part of the specification of the software is met.
>
> >I am speaking specifically of the specification of the behavior of the
> >software as defined/agreed by the customer.
>
> Customers do not in general express the behavior of the software as
> requirements. They express the behavior of stuff outside the computer
> (or possibly outside the software being written) as requirements. I see
> this as the main difference between specification of the software and
> requirements from customers.
>
> >Any line of code that can be written can be tested.
>
> Of course, but many lines of code may produce a large number of states
> that cannot be tested through execution tests. Also, not all problems
> can be decomposed into parts that can be analyzed independently.
>
> >However, execution tests can be the primary
> >mechanism that you depend upon to ensure that the system is correct.
>
> Agree and disagree. For some systems the primary mechanism may be
> automated execution tests. For others, it may be equations from
> physics, mechanics or chemistry together with automated execution tests
> that are the primary mechanisms that ensure that the system is
> correct. I feel a little uncomfortable with the word 'primary'
> since it sounds as if it is OK to exclude other mechanisms for ensuring
> that the software is correct. 
> 
> Regards,
> Hans Ewetz

0
krasicki (41)
7/12/2005 1:43:00 PM
Mark Nicholls wrote:
> Daniel Parker wrote:
>
> > If you're talking about belief, you're talking
> > about something more akin to religion.  Religion is not philosophy.
> >
>
> religion i.e. based on a belief in a god.

I agree that God is the ultimate undefined concept, but I don't think
the Protestant community in which I grew up would have looked at it
this way.

>
> If you don't like it then I'll drop 'philosophy', its not a big deal
> for me....my complaint with Mr Martin is not whether OO is a
> philosophy, but his definition does not seem to need the existence of
> objects, and thus does not seem to characterise general OO technology,
> and is thus not a very useful definition.

I agree that it would be helpful to have a coherent exposition of OO
concepts.  Abadi and Cardelli have a coherent exposition of OO types,
but they don't talk about the notion of behaviour, the word behaviour
doesn't appear in the index, and they also don't talk about the idea of
dependency, except in regards to the order of chapters.  I also don't
recall reading about any theory of the relation between object types.

-- Daniel

0
7/12/2005 1:48:57 PM

Daniel Parker wrote:
> Mark Nicholls wrote:
> > Daniel Parker wrote:
> >
> > > If you're talking about belief, you're talking
> > > about something more akin to religion.  Religion is not philosphy.
> > >
> >
> > religion i.e. based on a belief in a god.
>
> I agree that God is the ultimate undefined concept, but I don't think
> the Protestant community in which I grew up would have looked at it
> this way.

I cannot comment, religion is beyond me, religion is certainly a belief
system, but so is science.

>
> >
> > If you don't like it then I'll drop 'philosophy', its not a big deal
> > for me....my complaint with Mr Martin is not whether OO is a
> > philosophy, but his definition does not seem to need the existence of
> > objects, and thus does not seem to characterise general OO technology,
> > and is thus not a very useful definition.
>
> I agree that it would be helpful to have a coherent exposition of OO
> concepts.
> Abadi and Cardelli have a coherent exposition of OO types,
> but they don't talk about the notion of behaviour,

can't say I understand it, I've just bought a book on model theory and
got stuck on page 4. £10 a page so far...not good.

maybe behaviour doesn't exist?

maybe there are only theorems (method signatures) and proofs,
implementations.....where does behaviour live? I've always found the
notion elusive, except in the context of informally talking about
pre/post conditions i.e. theorems.

> the word behaviour
> doesn't appear in the index, and they also don't talk about the idea of
> dependency,

I suspect dependency is unnecessary from a formal concept of types, you
would simply include dependants in your universe.

i.e. I suspect it is an engineering issue, rather than a modelling
issue, and thus unnecessary for a conceptual definition of 'object' or
'type'.

> except in regards to the order of chapters.  I also don't
> recall reading about any theory of the relation between object types.
>

model theory talks extensively about the relationship between
structures (types) and their equivalence, unfortunately the maths is
beyond me for the moment, I need a 'model theory for dummies' book.

0
Nicholls.Mark (1061)
7/12/2005 2:09:13 PM
Michael Feathers wrote:

> Yet, amazingly that is good enough because we don't ask the customer to
> care about 0x65 or 0x6754.

Sorry, I don't understand 'what it is' that is good enough.

Also, it is not clear at all 'what' it is that is proven (or
checked) when you run an execution test. I'm pretty sure that
execution testing can be categorized into many different types. For
example, an execution test can check that:
   - the specification of the software is correct.
   - the 'world' around the computer/software behaves according to
     requirements
   - that there is no regression (not the same as the software behaving
     correctly)
   - the runtime quality of the executable code is acceptable.
   - etc.

Equally important is to understand what 'is not' checked when
running an execution test. For example:
   - race conditions cannot occur
   - inconsistencies between business rules do not exist (if the system
     is 'complex enough')
   - etc.

Understanding different aspects of execution tests is not much different
than applying the 'principle' of separation of concerns. So far I
have only seen that 'requirements' are expressed as execution
tests. I have yet to see a definition of what is meant by
'requirements' - I'm ignoring Phlip's declaration that a
requirement is something that profits the customer.


>They write tests in terms of the domain.
> Here is a simple example:
>
> |heart|start|
> |heart sensor|on|
> |heart|stop|
> |check|alarm|sounds|in|interval|0|300|

Here I assume that by 'They' you refer to the customer. If I'm
wrong you can scratch out the rest of this comment.

If you are saying that the customer performs tests of the domain, then
you must be inferring that the tests you write are only checking the
specification of the software. There is clearly a problem with this. If
you deduce a software specification from requirements, it is up to you
to test that your software meets the requirements. It is not the
responsibility of the customer to show that your software specification
is correct.

Regards,
Hans Ewetz

0
hansewetz (110)
7/12/2005 2:46:17 PM
On 12 Jul 2005 06:31:14 -0700, Daniel Parker wrote:

> Dmitry A. Kazakov wrote:
>>
>> Philosophy is about the most general things such as truth, matter,
>> mind, cognition, existence etc.
> 
> I hate to tell you this, but the hottest topic in philosophy today is
> "business ethics."

Better it be jurisprudence... (:-))

>> OO does not fit this scale. It would be
>> silly to think OO as a subject of philosophy. There could be a philosophy
>> behind OO, but it is not the same as "philosophy of OO".
> 
> I recall reading a short book by a philosopher on "system theory."  If
> philosophers can talk about that, surely they can talk about OO.
> Philosophers can have fun too.

They can. I recall how AI suffered from an invasion of philosophers. It
wasn't good for AI. Philosophy has the function of carrion vultures in wild
life... (:-)) It is very amusing, a great reading, but alas it does not
really help to understand anything.

I think OOA/D/P could become a subject of philosophy, when the complexity
of the systems designed reaches a certain level (the complexity of a true
AI; of 1 square meter of physical space; of a biotope etc.)

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
0
mailbox2 (6357)
7/12/2005 3:34:25 PM
On 10 Jul 2005 11:17:49 -0700, "frebe"
<fredrik_bertilsson@passagen.se> wrote:

>>>If a new column also cause changes in the GUI, wouldn't a decoupled
>>>architechture cause you extra work?
>> Not necessarily.  The GUI might make calls to a business rule layer.
>> the BR layer will probably need to change because of the new schema;
>> but the UI might not have to change at all.
>
>So it will will cause you extra work?
>
>Lets have a very common example. You have a employee table which is
>edited in a client that shows employees as rows in a list box. When you
>double click a employee a new windows pops up for editing one emplyee
>record. Now I add a new column "telephoneno" in the employee table.
>Using you approach you have to change the customer data access object,
>the customer business object and finally the client. Using my approach
>I just change add the new column to the client descriptor.
>
>Doesn't the decoupled apprach case a lot of extra work for many very
>common tasks?

Yes, it adds work.  It also removes work.  For example, I can write an
automated test (more work) that checks what the business rules think
ought to be put on the screen.  (e.g. it tests the Customer DAO).  I
can run this test any time I want, without the GUI running.  How many
manual testing sessions will this save me?  


-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/12/2005 4:01:54 PM
On 11 Jul 2005 10:11:21 -0700, "Mark Nicholls"
<Nicholls.Mark@mtvne.com> wrote:

>If you believe that writing software is about modelling the real world
>in some manner, and that the behavioural structure of the software must
>be the same as the structure of the real world then it is a
>philosophy....(to me this is the definition of analysis and at the root
>of any modelling endeavour).

I don't believe the initial premise.  Software is not about modeling
the real world (whatever that means).  Software is about describing
behaviors that solve problems.  



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/12/2005 4:07:10 PM
On 11 Jul 2005 10:11:21 -0700, "Mark Nicholls"
<Nicholls.Mark@mtvne.com> wrote:

>> Not quite. There is a logical component.  Higher level policies are
>> decoupled from lower level policies by having both depend on
>> interfaces or abstract classes.
>
>but if they do not depend on interfaces but on implementation then they
>are not OO?

Yes, they are not OO.  



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/12/2005 4:08:05 PM
On 11 Jul 2005 10:11:21 -0700, "Mark Nicholls"
<Nicholls.Mark@mtvne.com> wrote:

>>
>> >If I put it all in 1 module....its not OO?
>>
>> Its not so much a matter of whether its in one module or not.  It's a
>> matter of whether or not there is an obvious and convenient fracture
>> zone that could be used to separate the modules.
>
>OK, A->B (depends on B)....can be seperated into two seperate modules,
>so then all software containing classes is OO.....thats good.....I
>think I would agree with the conclusion if not the route there.

If A names B, then they can't be separated nicely.  If, however, A
names an interface that B implements, then A does not depend on B, and
B does not depend on A.

    A--->I<----B

This makes the design OO.

>are funtional dll's OO then? this fracture zone would appear to be in
>those too.

Yes.  And so is the device independence in most operating systems.  In
C, for example, getchar() and putchar() are OO constructs.  
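
Here is a rough sketch of the shape I mean, with invented names.
reader.h plays the role of I in the picture; A includes only reader.h
and never names B:

/* reader.h -- the interface 'I' */
typedef struct Reader {
    int  (*get_char)(struct Reader *self);  /* < 0 means end of input */
    void  *ctx;                             /* implementation's own data */
} Reader;

/* a.c -- module 'A': depends only on reader.h */
#include "reader.h"

int count_lines(Reader *r)
{
    int c, lines = 0;
    while ((c = r->get_char(r)) >= 0)
        if (c == '\n')
            lines++;
    return lines;
}

/* b.c -- module 'B': implements the interface */
#include <stdio.h>
#include "reader.h"

static int file_get_char(Reader *self)
{
    return fgetc((FILE *)self->ctx);  /* fgetc returns EOF (< 0) at end */
}

Reader make_file_reader(FILE *f)
{
    Reader r = { file_get_char, f };
    return r;
}

a.o and b.o never reference each other; whoever owns main() wires a
concrete Reader into count_lines() at runtime.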



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/12/2005 4:11:33 PM
On 11 Jul 2005 10:11:21 -0700, "Mark Nicholls"
<Nicholls.Mark@mtvne.com> wrote:

>> To the extent that a module
>> depends on concretions, it is not OO.  To the extent that it depends
>> on abstractions it is.
>
>to me you are confusing OO with (de)coupling.
>
>it is perfectly possibly to do this in C at runtime with function
>pointers, yet with no entity identifiable as an 'object'.

The function pointers themselves form aggregates that are, for all
intents and purposes, objects.

Consider, for example, the FILE data structure in UNIX.  It has five
pointers to functions buried deep inside it.  open, close, read,
write, lseek.  This is an object.

I am not confusing OO with decoupling.  I am defining OO as a strategy
for decoupling.
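
Something with roughly this shape (field names invented here; the real
struct varies between UNIX flavors):

struct file_ops {
    void *impl;                     /* device/driver specific state */
    int  (*open) (void *impl, const char *path, int flags);
    int  (*close)(void *impl);
    long (*read) (void *impl, void *buf, long n);
    long (*write)(void *impl, const void *buf, long n);
    long (*lseek)(void *impl, long offset, int whence);
};

/* callers manipulate the structure only through the pointers it carries */
long copy_block(struct file_ops *src, struct file_ops *dst, void *buf, long n)
{
    long got = src->read(src->impl, buf, n);
    return (got > 0) ? dst->write(dst->impl, buf, got) : got;
}

copy_block() has no idea whether it is talking to a disk, a terminal,
or a socket.  That is the decoupling.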



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/12/2005 4:13:48 PM

Robert C. Martin wrote:
> On 11 Jul 2005 10:11:21 -0700, "Mark Nicholls"
> <Nicholls.Mark@mtvne.com> wrote:
>
> >> To the extent that a module
> >> depends on concretions, it is not OO.  To the extent that it depends
> >> on abstractions it is.
> >
> >to me you are confusing OO with (de)coupling.
> >
> >it is perfectly possibly to do this in C at runtime with function
> >pointers, yet with no entity identifiable as an 'object'.
>
> The function pointers themselves form aggregates that are, for all
> intents and purposes, object.
>
> Consider, for example, the FILE data structure in UNIX.  It has five
> pointers to functions buried deep inside it.  open, close, read,
> write, lseek.  This is an object.
>
> I am not confusing OO with decoupling.  I am defining OO as a strategy
> for decoupling.
>

Is *any* data structure that holds a function or references a function
considered "OO" to you?

That is a rather wide definition and has existed since the invention of
subroutines.

>
>
> -----
> Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
> Object Mentor Inc.            | blog:  www.butunclebob.com
> The Agile Transition Experts  | web:   www.objectmentor.com
> 800-338-6716
>
>
> "The aim of science is not to open the door to infinite wisdom,
>  but to set a limit to infinite error."
>     -- Bertolt Brecht, Life of Galileo

0
topmind (2124)
7/12/2005 4:45:40 PM

Robert C. Martin wrote:
> On 11 Jul 2005 10:11:21 -0700, "Mark Nicholls"
> <Nicholls.Mark@mtvne.com> wrote:
>
> >If you believe that writing software is about modelling the real world
> >in some manner, and that the behavioural structure of the software must
> >be the same as the structure of the real world then it is a
> >philosophy....(to me this is the definition of analysis and at the root
> >of any modelling endeavour).
>
> I don't believe the initial premise.  Software is not about modeling
> the real world (whatever that means).  Software is about describing
> behaviors that solve problems.
>

"Software is about describing behaviors that solve problems.  "

substitute 'describe' for 'model'.

insert 'real world' in front of behaviours and.....

"Software is about modelling real world behaviors that solve problems."

the fact that those 'behaviours' are executed by computers is
irrelevant.

So I see little difference.

0
Nicholls.Mark (1061)
7/12/2005 4:56:10 PM

Robert C. Martin wrote:
> On 11 Jul 2005 10:11:21 -0700, "Mark Nicholls"
> <Nicholls.Mark@mtvne.com> wrote:
>
> >>
> >> >If I put it all in 1 module....its not OO?
> >>
> >> Its not so much a matter of whether its in one module or not.  It's a
> >> matter of whether or not there is an obvious and convenient fracture
> >> zone that could be used to separate the modules.
> >
> >OK, A->B (depends on B)....can be seperated into two seperate modules,
> >so then all software containing classes is OO.....thats good.....I
> >think I would agree with the conclusion if not the route there.
>
> If A names B, then they can't be separated nicely.  If, however, A
> names an interface that B implements, then A does not depend on B, and
> B does not depend on A.
>
>     A--->I<----B
>
> This makes the design OO.
>
> >are funtional dll's OO then? this fracture zone would appear to be in
> >those too.
>
> Yes.  And so is the device independence in most operating systems.  In
> C, for example, getchar() and putchar() are OO constructs.
>

I think you need to define 'nicely' in some sense....your definition so
far is that if physical modules can be 'nicely' separated, then it's OO.

the other big problem is that your definition seems to encompass the
concept of code libraries, including procedural ones.

and it does not even seem to depend on the existence of objects...to
me this is problematic.

0
Nicholls.Mark (1061)
7/12/2005 5:02:05 PM

Robert C. Martin wrote:
> On 11 Jul 2005 10:11:21 -0700, "Mark Nicholls"
> <Nicholls.Mark@mtvne.com> wrote:
>
> >> To the extent that a module
> >> depends on concretions, it is not OO.  To the extent that it depends
> >> on abstractions it is.
> >
> >to me you are confusing OO with (de)coupling.
> >
> >it is perfectly possibly to do this in C at runtime with function
> >pointers, yet with no entity identifiable as an 'object'.
>
> The function pointers themselves form aggregates that are, for all
> intents and purposes, object.

so all procedural technology that implements functional indirection is
OO.....this makes any procedural technology into which functional
libraries can be linked object oriented; I find that problematic.

>
> Consider, for example, the FILE data structure in UNIX.  It has five
> pointers to functions buried deep inside it.  open, close, read,
> write, lseek.  This is an object.
>
> I am not confusing OO with decoupling.  I am defining OO as a strategy
> for decoupling.
>

You are not....you have not said OO is XYZ and a consequence is
decoupling....you've said "OO-ness of a system can be identified by
tracing the dependencies between the modules".....you are defining the
measure of OOness as the degree of decoupling...the same....

So C without any libraries, except the following:

----------------------------------

Foo.h

void Foo(void);

----------------------------------

Foo.c

void Foo(void)
{
 return;
}

----------------------------------

by your definition is OO, at least at link time, the prototype acting
as the abstraction (for it is an abstraction...a *procedural* one).

To me you are confusing OO with decoupling, because your definition is
that anything that can be 'nicely' decoupled is OO......that makes lots
of things that most people would consider not OO to be OO, like
standard C with no function pointers except function prototypes.

I would expect some notion of structural encapsulation to need to
exist...i.e. the object, and probably composite behavioural
encapsulation...i.e. the interface.

objects + interfaces = OO.

or

structural encapsulation + behavioural encapsulation = OO

seems better to me.

I think OO needs stateful objects, else it is just procedural.

0
Nicholls.Mark (1061)
7/12/2005 5:21:07 PM
kraz wrote:
> Hans,
>
> You are suffering from the same disorientations anyone familiar with
> software development methodologies has in discussing XP.

I know - but I'm not ready to give up yet!

> So when you ask about requirements they will respond, "No problem. We
> have them right here."  When you say where they say "in the test".
> When you say, "but a test can only describe the provincial, myopic, and
> quite exacting immmediate answer to a given factoid.", they say, "no
> matter, we do thousands of these and therein must lie a greater
> meaning."  Your frustration will be endless.

Agree.

Anyone slightly familiar with Systems theory knows that most
interesting systems have a very large number of states. The ratio
between the number of states that is checked in a 'test driven'
environment and the number of states the system can take on, is most
likely very close to zero. This is why I believe that it is important
to understand what 'can and cannot be checked' with automated
execution tests so other strategies for ensuring that the system
behaves correctly can also be applied.
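
To put a rough, purely illustrative number on it: a component with only
32 independent boolean conditions already has 2^32 = 4,294,967,296
states; a suite of 10,000 execution tests touches at most 10,000 / 2^32
of them, i.e. well under a thousandth of a percent.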

Regards,
Hans Ewetz

0
hansewetz (110)
7/12/2005 5:23:26 PM

Robert C. Martin wrote:
> On 11 Jul 2005 10:11:21 -0700, "Mark Nicholls"
> <Nicholls.Mark@mtvne.com> wrote:
>
> >> Not quite. There is a logical component.  Higher level policies are
> >> decoupled from lower level policies by having both depend on
> >> interfaces or abstract classes.
> >
> >but if they do not depend on interfaces but on implementation then they
> >are not OO?
>
> Yes, they are not OO.
>

so

------------------------------------------------------

#include <iostream>

class CBar
{
public:
  CBar(const char* message) : message_(message) {}

  void Execute()
  {
     std::cout << message_;
  }

private:
  const char* message_;
};

class CFoo
{
public:
  void Execute()
  {
     CBar bar("Hello");   // CFoo depends directly on the concrete CBar
     bar.Execute();
  }
};

------------------------------------------------------------

is not OO....even though we are using objects, and (unfortunately) our
OO tool couples the interfaces of CFoo and CBar to their
implementations for convenience.......

but

-------------------------------------------------

#include <stdio.h>

void Foo(void);
void Bar(const char *message);

void Bar(const char *message)
{
  printf("%s", message);
}

void Foo(void)
{
   Bar("Hello");
}

----------------------------------------------------

is OO, even though it's a C program, and no concept of object exists
here at all, but we can physically isolate Foo, Bar and the prototypes
from each other 'nicely'?

0
Nicholls.Mark (1061)
7/12/2005 5:28:51 PM
hansewetz wrote:

> Anyone slightly familiar with Systems theory knows that most
> interesting systems have a very large number of states. The ratio
> between the number of states that is checked in a 'test driven'
> environment and the number of states the system can take on, is most
> likely very close to zero. This is why I believe that it is important
> to understand what 'can and cannot be checked' with automated
> execution tests so other strategies for ensuring that the system
> behaves correctly can also be applied.

Regardless where the specifications come from, I want to write lots of
tests. I want to use a simple rule of thumb to make sure that most tests are
most important. A QA department can take care of the rest, if needed.

The rule of thumb is I use is this: All requirements get converted into
tests, and all implementation is via test-first.

So are you saying I should not follow such a simple, obvious rule of thumb
because it _might_ go wrong. That smart people know that such-and-so
combination of requirements cannot create tests that cover such-and-so code.

How could so many tests hurt?

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/12/2005 7:00:43 PM
hansewetz wrote:

> Understanding different aspects of execution test is not much different
> than applying the 'principle' of separation of concerns. So far I
> have only seen that 'requirements' are expressed as execution
> tests. I have yet to see a definition of what is meant by
> 'requirements' - I'm ignoring 'Phlip's declaration that a
> requirement is something that profits the customer.

Okay. You are my customer. You ask me to write code that does Q. I start
writing X.

You ask "where's Q". Either I can show you a simple, direct, obvious link
from X to Q, or you ought to get another programmer.

That is the way of things.

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/12/2005 7:07:03 PM
On 12 Jul 2005 07:09:13 -0700, "Mark Nicholls"
<Nicholls.Mark@mtvne.com> wrote:

>I cannot comment, religion is beyond me, religion is certainly a belief
>system, but so is science.

Careful.  Religion is a belief system based upon faith.  Science is a
belief system based upon empirical observation.  Saying that they are
both belief systems creates a false comparison.



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/12/2005 7:49:44 PM
On 12 Jul 2005 09:56:10 -0700, "Mark Nicholls"
<Nicholls.Mark@mtvne.com> wrote:

>
>
>Robert C. Martin wrote:
>> On 11 Jul 2005 10:11:21 -0700, "Mark Nicholls"
>> <Nicholls.Mark@mtvne.com> wrote:
>>
>> >If you believe that writing software is about modelling the real world
>> >in some manner, and that the behavioural structure of the software must
>> >be the same as the structure of the real world then it is a
>> >philosophy....(to me this is the definition of analysis and at the root
>> >of any modelling endeavour).
>>
>> I don't believe the initial premise.  Software is not about modeling
>> the real world (whatever that means).  Software is about describing
>> behaviors that solve problems.
>>
>
>"Software is about describing behaviors that solve problems.  "
>
>substitute 'describe' for 'model'.

No, don't.  The word "model" is very different from the word
"describe".  A better word might be "specify".  A model is not a
specification.  A model is an intentional approximation.  

>insert 'real world' in front of behaviours and.....

Inserting the term 'real world' in front of behaviors is misleading.
There are many behaviors that software can describe that have nothing
whatever to do with the "real world".  We can, for example, specify
behaviors in software that are not part of the real world at all.  

The term "real world" is an emotional term.  It is used in order to
gain credibility.  The term "OO helps us create models of the real
world" is an attempt to use both "real world" and "model" to gain
credibility for OO.  A naive person will react to the statement by
being impressed.  And, rather like emperor's clothing, will see some
special meaning in those words that does not, in fact, exist.
>
>"Software is about modelling real world behaviors that solve problems."

No, software is about specifying behaviors that solve problems.  
>
>the fact that those 'behaviours' are executed by computers is
>irrelevant.

I disagree.  The fact that those behaviors are executed by computers
is the ESSENCE.

>So I see little difference.

I see a real world of difference.



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/12/2005 7:56:10 PM
On 12 Jul 2005 10:28:51 -0700, "Mark Nicholls"
<Nicholls.Mark@mtvne.com> wrote:

>
>
>Robert C. Martin wrote:
>> On 11 Jul 2005 10:11:21 -0700, "Mark Nicholls"
>> <Nicholls.Mark@mtvne.com> wrote:
>>
>> >> Not quite. There is a logical component.  Higher level policies are
>> >> decoupled from lower level policies by having both depend on
>> >> interfaces or abstract classes.
>> >
>> >but if they do not depend on interfaces but on implementation then they
>> >are not OO?
>>
>> Yes, they are not OO.
>>
>
>so
>
>------------------------------------------------------
>
>class CFoo
>{
>  public CFoo(...) ()
>
>  public void Execute()
>  {
>     CBar bar = new CBar("hello");
>     bar.Execute();
>  }
>}
>
>class CBar
>{
>  public CBar(..) {}
>
>  public void Execute()
>  {
>     cout "Hello";
>  }
>}
>
>------------------------------------------------------------
>
>is not OO....even though we are using objects, and (unfortunately) our
>OO tool couples the interface to CBar and CFoo for convenience to the
>implementation.......

Yes. this is not OO.

>
>but
>
>-------------------------------------------------
>
>void Foo(void);
>void Bar(char[] message);
>
>
>void Bar(char[] message)
>{
>  printf(message);
>}
>
>void Foo()
>{
>   Bar("Hello");
>}
>
>----------------------------------------------------
>
>is OO, even though its a C program, and no concept of object exists
>here at all, but we can physically isolate Foo, Bar and the prototypes
>from each other 'nicely'?

No, this is not OO either.  Foo mentions Bar by name.  Both Foo and
Bar are concrete.  No polymorphism, no dependencies on abstractions,
no OO.



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/12/2005 8:00:39 PM
On 12 Jul 2005 09:45:40 -0700, "topmind" <topmind@technologist.com>
wrote:

>
>
>Robert C. Martin wrote:
>> On 11 Jul 2005 10:11:21 -0700, "Mark Nicholls"
>> <Nicholls.Mark@mtvne.com> wrote:
>>
>> >> To the extent that a module
>> >> depends on concretions, it is not OO.  To the extent that it depends
>> >> on abstractions it is.
>> >
>> >to me you are confusing OO with (de)coupling.
>> >
>> >it is perfectly possibly to do this in C at runtime with function
>> >pointers, yet with no entity identifiable as an 'object'.
>>
>> The function pointers themselves form aggregates that are, for all
>> intents and purposes, object.
>>
>> Consider, for example, the FILE data structure in UNIX.  It has five
>> pointers to functions buried deep inside it.  open, close, read,
>> write, lseek.  This is an object.
>>
>> I am not confusing OO with decoupling.  I am defining OO as a strategy
>> for decoupling.
>>
>
>Is *any* data structure that holds a function or references a function
>considered "OO" to you?

Yes, if other elements of the program make use of the function
pointers without knowing what functions they point to.
>
>That is a rather wide definition and has existed since the invention of
>subroutines.

Sure.  OOPLs have given a conventional syntax to it.



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/12/2005 8:02:28 PM
On 12 Jul 2005 10:21:07 -0700, "Mark Nicholls"
<Nicholls.Mark@mtvne.com> wrote:


>by your definition is OO, at least at link time, the prototype acting
>as the abstaction (for it is an abstaction...a *procedural* one).

No, I don't consider link-time polymorphism as part of OO.  

>To me you are confusing OO with decoupling, because your definition is
>that anything that can be 'nicely' decoupled is OO

Anything that is nicely decoupled using runtime polymorphism.

>I would expect some notion of structural encapsulation to need to
>exist...

Once you decide to invoke functions through function pointers, it's a
very short step to aggregating those function pointers into data
structures that also hold the data that those functions manipulate.

Indeed, I have often defined OO as:

"A programming style in which data structures are manipulated by
functions that are called through pointers contained by the
manipulated data structure."

However, I find this definition to be too complicated.  I prefer:

"A programming style in which all source code dependencies target
runtime abstractions."   





-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/12/2005 8:12:05 PM
Mark Nicholls wrote:
> Robert C. Martin wrote:
> > On 11 Jul 2005 10:11:21 -0700, "Mark Nicholls"
> > <Nicholls.Mark@mtvne.com> wrote:
> >
> > >If you believe that writing software is about modelling the real world
> > >in some manner, and that the behavioural structure of the software must
> > >be the same as the structure of the real world then it is a
> > >philosophy....(to me this is the definition of analysis and at the root
> > >of any modelling endeavour).
> >
> > I don't believe the initial premise.  Software is not about modeling
> > the real world (whatever that means).  Software is about describing
> > behaviors that solve problems.

Robert, seems like you missed off the catch-all phrase at the end of
the sentence - "modelling the real world in some manner" - which afaict
would include making a very selective description of just those aspects
of the 'real world' relevant to the problem-at-hand.


> "Software is about describing behaviors that solve problems. "
> substitute 'describe' for 'model'.
> insert 'real world' in front of behaviours and.....
> "Software is about modelling real world behaviors that solve problems."
> the fact that those 'behaviours' are executed by computers is
> irrelevant.
> So I see little difference.

Mark, "writing software is about modelling the real world in some
manner" would include purposeless undirected
modelling-for-the-sake-of-modelling (which may be what Robert was
objecting to).

Michael Jackson writes about "Descriptions and Models" in "Aspects of
System Description" 2003
http://mcs.open.ac.uk/mj665/papers.html

0
igouy (1009)
7/12/2005 8:41:58 PM
Robert C. Martin wrote:
-snip-
> The term "real world" is an emotional term.

And as the "Universe 'too queer' to grasp" probably isn't what we
actually mean.
http://news.bbc.co.uk/2/hi/science/nature/4676751.stm


-snip-
> I disagree.  The fact that those behaviors are executed by computers
> is the ESSENSE.

Certainly seems relevant :-)

0
igouy (1009)
7/12/2005 8:54:42 PM

Robert C. Martin wrote:
> On 12 Jul 2005 09:45:40 -0700, "topmind" <topmind@technologist.com>
> wrote:
>
> >
> >
> >Robert C. Martin wrote:
> >> On 11 Jul 2005 10:11:21 -0700, "Mark Nicholls"
> >> <Nicholls.Mark@mtvne.com> wrote:
> >>
> >> >> To the extent that a module
> >> >> depends on concretions, it is not OO.  To the extent that it depends
> >> >> on abstractions it is.
> >> >
> >> >to me you are confusing OO with (de)coupling.
> >> >
> >> >it is perfectly possibly to do this in C at runtime with function
> >> >pointers, yet with no entity identifiable as an 'object'.
> >>
> >> The function pointers themselves form aggregates that are, for all
> >> intents and purposes, object.
> >>
> >> Consider, for example, the FILE data structure in UNIX.  It has five
> >> pointers to functions buried deep inside it.  open, close, read,
> >> write, lseek.  This is an object.
> >>
> >> I am not confusing OO with decoupling.  I am defining OO as a strategy
> >> for decoupling.
> >>
> >
> >Is *any* data structure that holds a function or references a function
> >considered "OO" to you?
>
> Yes, if other elements of the program make use of the function
> pointers without knowing what functions they point to.
> >
> >That is a rather wide definition and has existed since the invention of
> >subroutines.
>
> Sure.  OOPLS have given a conventional syntax to it.
>

And attributes are just keys in glorified associative arrays (aka
maps)?

I can live with those kinds of definitions; however, that makes OO map
centric. I would rather see table-centric designs for non-trivial
projects. Map-centric designs are the "navigational" structures that
Dr. Codd rallied against. He was mostly focusing on attributes back then,
but it can apply to code snippets or executables also.

>
>
> -----
> Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com

-T-

0
topmind (2124)
7/12/2005 11:29:32 PM

hansewetz@hotmail.com wrote:
> kraz wrote:
> > Hans,
> >
> > You are suffering from the same disorientations anyone familiar with
> > software development methodologies has in discussing XP.
>
> I know - but I'm not ready to give up yet!
>
> > So when you ask about requirements they will respond, "No problem. We
> > have them right here."  When you say where they say "in the test".
> > When you say, "but a test can only describe the provincial, myopic, and
> > quite exacting immmediate answer to a given factoid.", they say, "no
> > matter, we do thousands of these and therein must lie a greater
> > meaning."  Your frustration will be endless.
>
> Agree.
>
> Anyone slightly familiar with Systems theory knows that most
> interesting systems have a very large number of states. The ratio
> between the number of states that is checked in a 'test driven'
> environment and the number of states the system can take on, is most
> likely very close to zero. This is why I believe that it is important
> to understand what 'can and cannot be checked' with automated
> execution tests so other strategies for ensuring that the system
> behaves correctly can also be applied.
>

I enjoy reading your sparring sessions and I'm empathetic with the
baseline you're attempting to establish.

The cultish response mechanism employed by the Agile/XP proponents
makes these discussions frustrating.  In any case, I tip my hat to you
for continuing to try making sense of this stuff in the context of
Software Development Methodologies.  Everything not having to do with
programming has long been drowned out.  I get the impression these days
that we may as well be discussing the Knights of the Round Table.

0
Krasicki1 (73)
7/13/2005 1:24:02 AM
>  It also removes work.  For example, I can write an
> automated test (more work) that checks what the business rules think
> ought to be put on the screen.

I write automated tests that use a database. In the setup you fill the
database with test data. After the test, you evaluate if the database
has the state it is supposed to have. In what way would separation
make it easier to create automated tests?
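
For example, a test of a (hypothetical) give_raise() routine looks
roughly like this - sketched against SQLite purely for illustration,
the real tests run against the actual RDBMS:

#include <assert.h>
#include <stdlib.h>
#include <sqlite3.h>

/* give_raise() stands in for whatever code is under test */
void give_raise(sqlite3 *db, int employee_id, int amount);

static int read_int(void *out, int ncols, char **vals, char **names)
{
    *(int *)out = atoi(vals[0]);
    return 0;
}

void test_give_raise(void)
{
    sqlite3 *db;
    int salary = 0;

    /* setup: fill the database with test data */
    assert(sqlite3_open(":memory:", &db) == SQLITE_OK);
    sqlite3_exec(db, "CREATE TABLE employee (id INTEGER, salary INTEGER)",
                 0, 0, 0);
    sqlite3_exec(db, "INSERT INTO employee VALUES (1, 1000)", 0, 0, 0);

    /* exercise: the code under test works directly against the database */
    give_raise(db, 1, 100);

    /* verify: the database has the state it is supposed to have */
    sqlite3_exec(db, "SELECT salary FROM employee WHERE id = 1",
                 read_int, &salary, 0);
    assert(salary == 1100);

    sqlite3_close(db);
}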

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/13/2005 5:39:43 AM
frebe wrote:

> I write automated tests that uses a database. In the setup you fill the
> database with test data. After the test, you evaluate if the database
> has the state it is supposed to have. In what way would separation
> makes it easier to create automated tests?

The short answer is when you need to replace the database with a Mock
object, to observe the other layers under extraneous conditions that the
database cannot, or should not, supply. Things like bizarre errors.

Most defects reported in the field come from untested error handling code.
You should be able to test things like disk exhaustion without filling your
database up with blobs.

If a layer expects the worst of the layers around it, then it won't be
surprised.

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/13/2005 6:08:10 AM
> The short answer is when you need to replace the database with a Mock
> object, to observe the other layers under extraneous conditions that the
> database cannot, or should not, supply. Things like bizarre errors.
I am not sure I understand what you are saying. Can you give some
examples of how you would do this? Assuming that we have your
findEmployeeByAge() method, how would we test bizarre errors?

> Most defects reported in the field come from untested error handling code.
I agree that it is hard to test error handling code. It is important
to have a good error handling strategy that you know works.

> You should be able to test things like disk exhaustion without filling your
> database up with blobs.
It would be a little bit hard to emulate disk exhaustion without using
the actual RDBMS. Different vendors may behave differently. But I think
we can assume that the RDBMS would throw some sort of exception, or?

Fredrik Bertilsson
http://butler.sourceforge.net

0
7/13/2005 8:54:30 AM

Robert C. Martin wrote:
> On 12 Jul 2005 09:56:10 -0700, "Mark Nicholls"
> <Nicholls.Mark@mtvne.com> wrote:
>
> >
> >
> >Robert C. Martin wrote:
> >> On 11 Jul 2005 10:11:21 -0700, "Mark Nicholls"
> >> <Nicholls.Mark@mtvne.com> wrote:
> >>
> >> >If you believe that writing software is about modelling the real world
> >> >in some manner, and that the behavioural structure of the software must
> >> >be the same as the structure of the real world then it is a
> >> >philosophy....(to me this is the definition of analysis and at the root
> >> >of any modelling endeavour).
> >>
> >> I don't believe the initial premise.  Software is not about modeling
> >> the real world (whatever that means).  Software is about describing
> >> behaviors that solve problems.
> >>
> >
> >"Software is about describing behaviors that solve problems.  "
> >
> >substitute 'describe' for 'model'.
>
> No, don't.  The word "model" is very different from the word
> "describe".  A better word might be "specify".  A model is not a
> specification.  A model is an intentional approximation.

"A mathematical model is the use of mathematical language to describe
the behaviour of a system. Mathematical models are used in particularly
in the sciences such biology, electrical engineering, physics but also
in other
fields such as economics, sociology and political science."

http://en.wikipedia.org/wiki/Mathematical_model

There is nothing intentionally approximate about modelling;
unfortunately the real world is very complex, thus one of the tools of
modelling is to make assumptions....something we do in SE all the time,
so in your sense SE is usually hugely intentionally approximate.

I think this is a pretty good definition of a model; can you find a
definition anywhere that coincides with your interpretation? I can't.

>
> >insert 'real world' in front of behaviours and.....
>
> Inserting the term 'real world' in front of behaviors is misleading.
> There are many behaviors that software can describe that have nothing
> whatever to do with the "real world".  We can, for example, specify
> behaviors in software that are not part of the real world at all.
>
> The term "real world" is an emotional term.  It is used in order to
> gain credibility.  The term "OO helps us create models of the real
> world" is an attempt to use both "real world" and "model" to gain
> credibility for OO.  A naive person will react to the statement by
> being impressed.  And, rather like emperor's clothing, will see some
> special meaning in those words that does not, in fact, exist.

OK, I'll drop "real world"; it doesn't really matter what we are
modelling (I'll go with Meyer's "model of a model of reality"). The
nature of modelling is building an abstraction in some language where
there exists a mapping from the thing we are describing to the
description (if that mapping is isomorphic then there is no
approximation.....usually in SE it is approximate, e.g. our data types
are usually bounded, whereas the things we model usually aren't).

> >
> >"Software is about modelling real world behaviors that solve problems."
>
> No, software is about specifying behaviors that solve problems.

OK, from the above definition I still see no reason not to substitute
'model' for 'specify'....then we are left with a simple rejection of the
existence of the mapping from the thing we seek to
specify/model/describe to our specification/model/description? you
would not be alone.....do you reject this mapping?

> >
> >the fact that those 'behaviours' are executed by computers is
> >irrelevant.
>
> I disagree.  The fact that those behaviors are executed by computers
> is the ESSENSE.

why?

>
> >So I see little difference.
>
> I see a real world of difference.
>

which is?

P.S.

the 'M' in UML stands for 'modelling' for a reason....we are building
models.
see Meyer on "objects as a modelling tool"....."a software system
is.....a model of a model of a subset of ....reality"......

0
Nicholls.Mark (1061)
7/13/2005 8:55:47 AM
Mark Nicholls wrote:

> I would expect some notion of structural encapsulation to need to
> exist...i.e. the object, and probably composite behavioural
> encapsulation...i.e. the interface.

> objects + interfaces = OO.

> or

> structural encapsulation + behavioural encapsulation = OO

> seems better to me.

> I think OO needs statefull objects, else it is just procedural.

ADT = interface + encapsulation + implementation hiding
Object-Based = ADT + self-reference
Object-Oriented = Object-Based + Inheritance

Very succinct. Very precise.
And very consistent with the various definitions given by the OO
protagonists.

The concept of ADTs is well-established and understood.

The concept of object-based introduces ADTs that are state-based
(self-reference is state). It also allows a major distinction between,
say, the style in which an ADT is constructed in Modula-2 etc. and in
Java.

Object-oriented brings inheritance, which is a far greater
semantic paradigm than object-based.

In maths tradition, we have axiomatic specifications that enable
each to build on the power/value of its predecessor.
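
Hanging some purely illustrative, C-flavoured code on those layers
(one possible reading, not the only one):

/* ADT: interface + encapsulation + implementation hiding */
typedef struct Stack Stack;          /* representation hidden from clients */
Stack *stack_new(void);
void   stack_push(Stack *s, int x);
int    stack_pop(Stack *s);

/* Object-based: ADT + self-reference (the thing carries its own state
   and is handed to its own operations) */
typedef struct Account {
    int   balance;                                      /* state          */
    void (*deposit)(struct Account *self, int amount);  /* self-reference */
} Account;

/* Object-oriented: object-based + inheritance (sketched here by
   embedding the base and overriding its operations) */
typedef struct SavingsAccount {
    Account base;                    /* 'is an' Account */
    int     rate;
} SavingsAccount;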


Regards,
Steven Perryman

0
ggroups5 (201)
7/13/2005 9:10:11 AM

Isaac Gouy wrote:
> Mark Nicholls wrote:
> > Robert C. Martin wrote:
> > > On 11 Jul 2005 10:11:21 -0700, "Mark Nicholls"
> > > <Nicholls.Mark@mtvne.com> wrote:
> > >
> > > >If you believe that writing software is about modelling the real world
> > > >in some manner, and that the behavioural structure of the software must
> > > >be the same as the structure of the real world then it is a
> > > >philosophy....(to me this is the definition of analysis and at the root
> > > >of any modelling endeavour).
> > >
> > > I don't believe the initial premise.  Software is not about modeling
> > > the real world (whatever that means).  Software is about describing
> > > behaviors that solve problems.
>
> Robert, seems like you missed off the catch-all phrase at the end of
> the sentence - "modelling the real world in some manner" - which afaict
> would include making a very selective description of just those aspects
> of the 'real world' relevant to the problem-at-hand.

This appears to be why he rejects it, modelling (to him) is about
approximation, while SE is not.

>
>
> > "Software is about describing behaviors that solve problems. "
> > substitute 'describe' for 'model'.
> > insert 'real world' in front of behaviours and.....
> > "Software is about modelling real world behaviors that solve problems."
> > the fact that those 'behaviours' are executed by computers is
> > irrelevant.
> > So I see little difference.
>
> Mark, "writing software is about modelling the real world in some
> manner" would include purposeless undirected
> modelling-for-the-sake-of-modelling (which may be what Robert was
> objecting to).

and it is not possible to write purposeless undirected OO software?

If OO captured the essence of 'purpose' it would be a truly incredible
philosophical concept truly akin to a religion.


> Michael Jackson writes about "Descriptions and Models" in "Aspects of
> System Description" 2003
> http://mcs.open.ac.uk/mj665/papers.html

very interesting......

It largely coincides with what I am trying to say.....but then I'm
hugely biased, and only skimmed it.
but specifically

"The desired relationship between a model domain and the domain it
models is in principle simple. There should be a one to one
correspondence between phenomena of the two domains and their values"

i.e. the object of the exercise is to create something isomorphic to
the domain (though I agree with Meyer that this is highly subjective
and should be viewed as a model of a subjective model of a domain).

This is not Jackson's invention or discovery, this is central to the
role of scientific/mathematical modelling, and not the preserve of
computer scientists....the similarities between OO and model theory are
not coincidence; the Curry-Howard correspondence between software and
the writing of proofs is not coincidence.

i.e. modelling is not "intentionally approximate"...

he goes on to state the reasons why "practical choices" introduce
"unavoidable departures"...i.e. approximations in the example of
modelling a lift. That is an engineering skill, what choices have to be
made in the 'real world' to actually implement the model........(to me
this is the distinction between analysis and design).

0
Nicholls.Mark (1061)
7/13/2005 9:24:33 AM

Robert C. Martin wrote:
> On 12 Jul 2005 10:28:51 -0700, "Mark Nicholls"
> <Nicholls.Mark@mtvne.com> wrote:
>
> >
> >
> >Robert C. Martin wrote:
> >> On 11 Jul 2005 10:11:21 -0700, "Mark Nicholls"
> >> <Nicholls.Mark@mtvne.com> wrote:
> >>
> >> >> Not quite. There is a logical component.  Higher level policies are
> >> >> decoupled from lower level policies by having both depend on
> >> >> interfaces or abstract classes.
> >> >
> >> >but if they do not depend on interfaces but on implementation then they
> >> >are not OO?
> >>
> >> Yes, they are not OO.
> >>
> >
> >so
> >
> >------------------------------------------------------
> >
> >class CFoo
> >{
> >  public CFoo(...) ()
> >
> >  public void Execute()
> >  {
> >     CBar bar = new CBar("hello");
> >     bar.Execute();
> >  }
> >}
> >
> >class CBar
> >{
> >  public CBar(..) {}
> >
> >  public void Execute()
> >  {
> >     cout "Hello";
> >  }
> >}
> >
> >------------------------------------------------------------
> >
> >is not OO....even though we are using objects, and (unfortunately) our
> >OO tool couples the interface to CBar and CFoo for convenience to the
> >implementation.......
>
> Yes. this is not OO.
>
> >
> >but
> >
> >-------------------------------------------------
> >
> >void Foo(void);
> >void Bar(char[] message);
> >
> >
> >void Bar(char[] message)
> >{
> >  printf(message);
> >}
> >
> >void Foo()
> >{
> >   Bar("Hello");
> >}
> >
> >----------------------------------------------------
> >
> >is OO, even though its a C program, and no concept of object exists
> >here at all, but we can physically isolate Foo, Bar and the prototypes
> >from each other 'nicely'?
>
> No, this is not OO either.  Foo mentions Bar by name.

Foo mentions the abstraction Bar (as defined by its prototype, i.e. its
abstraction) by name.

> Both Foo and
> Bar are concrete.  No polymorphism, no dependencies on abstractions,
> no OO.
>

you've added polymorphism and abstraction to your definition, whereas
before it was just physical decoupling.

unfortunately this example does have all the above properties, but at
link time, if I create a virtual machine that compiles C code and links
it at run time (which very oddly I have), then that makes the above C
code OO?

you add more to your definition, but still my simple C program
seems to be OO, and still your definition does not imply the existence
of objects....that's not good.

0
Nicholls.Mark (1061)
7/13/2005 10:24:42 AM
Phlip wrote:

> Regardless where the specifications come from, I want to write lots of
> tests.

The fact that you 'want' to write lots of tests is not really
relevant for the discussion.

> That smart people know that such-and-so
> combination of requirements cannot create tests that cover such-and-so code.

I don't understand what you are saying.

> How could so many tests hurt?

Unless you work for free, someone will pay money for the tests. If you
don't have a good understanding of what it is that is being proven by
the tests, someone is wasting money.

Also, quality costs money and not all software is required to have the
same quality. If all 'testing' is done through execution tests, as
you propose, high quality software will cost more money to test.
Therefore, it seems to be important to really understand what you test
and how much you should test.

Regards,
Hans Ewetz

0
hansewetz (110)
7/13/2005 12:33:56 PM
hansewetz wrote:

> The fact that you 'want' to write lots of tests are not really
> relevant for the discussion.

My code wants me to write lots of tests.

The most important point here is there should be lots of tests, and they
should all be relevant tests. How will you get lots of them, and how can you
improve their relevance?

I think I can come close by _pretending_ that specifications are tests. We
still need other practices, but that one seems to provide a good bang/buck
ratio.

> > That smart people know that such-and-so
> > combination of requirements cannot create tests that cover such-and-so
code.
>
> I don't understand what you are saying.

Someone said "tests cannot prove the absence of bugs, because they cannot
cover all possible combinations where the bugs might be."

True. They can't.

> > How could so many tests hurt?
>
> Unless you work for free, someone will pay money for the tests.

Someone pays for debugging and rework all the time. I think they'd be
thrilled to have a cheaper alternative.

> If you
> don't have a good understanding of what it is that is being proven by
> the tests, someone is wasting money.

Okay. To get a good understanding of what the tests are proving, use the
requirements as ... "inspiration" to figure what tests to write.

> Also, quality costs money and all software is not required to have the
> same quality.

Don't make me remind you I am an eXtremo who has studied Statistical Process
Control. Quality is free. The higher your quality gets, the easier changes
become. So the cost of higher quality pays for itself, by reducing the costs
of all future features.

> If all 'testing' is done through execution tests, as
> you propose, high quality software will cost more money to test.

That is a non-sequitur. If all seagulls carry machine guns, then seagulls
are dangerous. They don't, and they are less so. So what?

> Therefore, it seems to be important to really understand what you test
> and how much you should test.

Okay. To get a good understanding of what the tests are proving, use the
requirements as ... "inspiration" to figure what tests to write.

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/13/2005 1:24:17 PM
frebe wrote:

> It would be a little bit hard to emulate disk exhaustion without using
> the actual RDBMS. Different vendors may behaive different. But I think
> we can assume that the RDBMS would throw some sort of exception, or?

Sure. Manually test, once, and capture the error event. Then write an
automated test with a mock object that throws the error, without the risk
and expense of toasting the hard drive.

> > The short answer is when you need to replace the database with a Mock
> > object, to observe the other layers under extraneous conditions that the
> > database cannot, or should not, supply. Things like bizarre errors.

> I am not sure I understand what you are saying. Can you give some
> examples of how you would do this. Assuming that we have you
> findEmployeeByAge() -method. How would be test bizarre errors?

By passing that mock in instead of the real database wrapper object. So
findEmployeeByAge(cn) thinks it has a chance, then cn blows up instead of
doing anything. Then the tests check that findEmployeeByAge(cn) caught the
error and passed it to the GUI's failure handler.

If all this stuff is decoupled, you don't need to write a complex Mock to
test a given effect. Research "construction encapsulation".
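
In C-ish shape (every name here is invented, just to show the wiring):

#include <assert.h>

/* cn: the 'database wrapper' the caller passes in */
typedef struct Connection {
    int (*query)(struct Connection *self, const char *sql);  /* < 0 = failure */
} Connection;

typedef void (*FailureHandler)(const char *msg);

/* code under test: must route query failures to the GUI's failure handler */
int findEmployeeByAge(Connection *cn, int age, FailureHandler fail)
{
    (void)age;   /* parameter binding elided in this sketch */
    if (cn->query(cn, "SELECT * FROM employee WHERE age = ?") < 0) {
        fail("employee lookup failed");
        return -1;
    }
    return 0;    /* result handling elided */
}

/* the mock: blows up instead of doing anything */
static int exploding_query(Connection *self, const char *sql) { return -1; }

static int failures = 0;
static void spy_handler(const char *msg) { failures++; }

void test_lookup_failure_reaches_the_gui_handler(void)
{
    Connection mock = { exploding_query };
    findEmployeeByAge(&mock, 42, spy_handler);
    assert(failures == 1);
}

No real database anywhere, and the error path runs every time the suite
does.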

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/13/2005 1:42:04 PM
On 13 Jul 2005 01:55:47 -0700, "Mark Nicholls"
<Nicholls.Mark@mtvne.com> wrote:

>
>
>Robert C. Martin wrote:
>> On 12 Jul 2005 09:56:10 -0700, "Mark Nicholls"
>> <Nicholls.Mark@mtvne.com> wrote:
>>
>> >
>> >
>> >Robert C. Martin wrote:
>> >> On 11 Jul 2005 10:11:21 -0700, "Mark Nicholls"
>> >> <Nicholls.Mark@mtvne.com> wrote:
>> >>
>> >> >If you believe that writing software is about modelling the real world
>> >> >in some manner, and that the behavioural structure of the software must
>> >> >be the same as the structure of the real world then it is a
>> >> >philosophy....(to me this is the definition of analysis and at the root
>> >> >of any modelling endeavour).
>> >>
>> >> I don't believe the initial premise.  Software is not about modeling
>> >> the real world (whatever that means).  Software is about describing
>> >> behaviors that solve problems.
>> >>
>> >
>> >"Software is about describing behaviors that solve problems.  "
>> >
>> >substitute 'describe' for 'model'.
>>
>> No, don't.  The word "model" is very different from the word
>> "describe".  A better word might be "specify".  A model is not a
>> specification.  A model is an intentional approximation.
>
>"A mathematical model is the use of mathematical language to describe
>the behaviour of a system. Mathematical models are used in particularly
>in the sciences such biology, electrical engineering, physics but also
>in other
>fields such as economics, sociology and political science."
>
>http://en.wikipedia.org/wiki/Mathematical_model

I certainly agree.  I'll also assert that the only model in software
that has the rigor of the mathematical models you are talking about
is the source code. 

>There is nothing intentionally approximate about modelling,

Wow!  The entire motivation behind modeling is to eliminate detail
that is irrelevant to the situation being studied.  We build models of
airframes to see if they are aerodynamic.  We build wire frame models
of buildings and bridges to see if they are structurally sound.  These
are intentional approximations.

UML diagrams are also intentional approximations.  If they weren't, we
would just write the source code.


>> >"Software is about modelling real world behaviors that solve problems."
>>
>> No, software is about specifying behaviors that solve problems.
>
>OK, from the above definition I still see no reason not to insert
>'model' for specify

The term "model" implies approximation.  The term "specify" implies
exactitude.  Software is not about getting it sorta-right.  Software
is about getting it right.

>....then we are left with a simple rejection of the
>existence of the mapping from the thing we seek to
>specify/model/describe to our specification/model/description? you
>would not be alone.....do you reject this mapping?

I don't understand the sentence.

>> >the fact that those 'behaviours' are executed by computers is
>> >irrelevant.
>>
>> I disagree.  The fact that those behaviors are executed by computers
>> is the ESSENSE.
>
>why?

Because it seems to me that software is about writing programs that
run inside computers.

>P.S.
>
>the 'M' in UML stand for 'modelling' for a reason....we are building
>models.
>see Meyer on "objects as a modelling too"....."a software system
>is.....a model of a model of a subset of ....reality"......

Yes, in the mid '90s the term 'model' was very chic.  Now it has
become so badly overused and abused that it has no definition other
than "a good thing to do".  One can gain credibility for oneself by
using the word "model" more than once in a paragraph.

In any case, UML *is* a modeling language.  I have no qualms about
creating models of software systems.  I do it all the time.  But I
don't fall into the trap of thinking that they are provably correct by
any means other than by implementing them and testing them.

Models *sometimes* help you think about the problem.  They can also be
misleading.  The only real truth is when the program executes.



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/13/2005 2:23:35 PM
On 13 Jul 2005 02:24:33 -0700, "Mark Nicholls"
<Nicholls.Mark@mtvne.com> wrote:

>This appears to be why he rejects it, modelling (to him) is about
>approximation, while SE is not.

The only activity in software development that is not approximate, is
program execution.



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/13/2005 2:24:33 PM
On 13 Jul 2005 03:24:42 -0700, "Mark Nicholls"
<Nicholls.Mark@mtvne.com> wrote:

>
>
>Robert C. Martin wrote:

>> Both Foo and
>> Bar are concrete.  No polymorphism, no dependencies on abstractions,
>> no OO.
>>
>
>you've added polymorphism to your definition and absraction, whence
>before it was just physical decoupling.

Yes, I have added polymorphism.  That is very consistent with many
other postings, articles, and books that I have written.  

Newsgroup postings are intentional approximations (i.e. models) of our
opinions.  We don't have the time or desire to specify them in the
excruciating detail that would allow us to escape the semantic
analysis of a determined debater.

>unfortunately this example does have all the above properties, but at
>link time, if I create a virtual machine that compiles C code and links
>it at run time (which very oddly I have), then that makes the above C
>code OO?

Almost.  If I could create different instances of Bar at runtime, and
swap them into the currently running Foo, then I would agree that this
is OO.
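
For illustration, here is a minimal Java sketch of that criterion. Foo
and Bar are hypothetical stand-ins, not the C code under discussion.

// Foo depends only on an abstraction; different Bars can be created at
// runtime and swapped into a running Foo without recompiling Foo.
interface Bar {
    String doSomething();
}

class Foo {
    private Bar bar;                 // the currently plugged-in Bar

    Foo(Bar bar) { this.bar = bar; }

    void swap(Bar replacement) {     // runtime substitution
        this.bar = replacement;
    }

    String run() { return bar.doSomething(); }
}

class RuntimeSwapDemo {
    public static void main(String[] args) {
        Foo foo = new Foo(() -> "first Bar");
        System.out.println(foo.run());   // first Bar
        foo.swap(() -> "second Bar");    // swapped while foo keeps running
        System.out.println(foo.run());   // second Bar
    }
}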

>you add more to your definition, but still it seems my simple C program
>seems to be OO, and still your definition does not imply the existence
>of objects....thats not good.

Why not?  Isn't it a good thing to define "object oriented" in terms
that don't use the word "object"?  Doesn't the alternative lead to
circularity?

BTW, the argument you are making about the C program seems to suggest
that if you don't have an object, it's not OO.  Consider, the Command
pattern.  There are no data elements in the command pattern.  It's
nothing more than a jump vector (rather like your dynamically loaded
Bar functions).  Is the command pattern OO?  
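
As a sketch of that point, a Command that is nothing more than a jump
vector might look like this in Java; the interface and the table entries
are made up for illustration.

// No data elements at all: each Command is a late-bound jump to some
// behavior, selected and invoked at runtime.
interface Command {
    void execute();
}

class CommandDemo {
    public static void main(String[] args) {
        Command[] jumpTable = {
            () -> System.out.println("save"),
            () -> System.out.println("print"),
        };
        for (Command c : jumpTable) {
            c.execute();             // dispatch through the "jump vector"
        }
    }
}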



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/13/2005 2:32:26 PM
Phlip wrote:
> hansewetz wrote:
>
> > Understanding different aspects of execution test is not much different
> > than applying the 'principle' of separation of concerns. So far I
> > have only seen that 'requirements' are expressed as execution
> > tests. I have yet to see a definition of what is meant by
> > 'requirements' - I'm ignoring 'Phlip's declaration that a
> > requirement is something that profits the customer.
>
> Okay. You are my customer. You ask me to write code that does Q. I start
> writing X.
>
> You ask "where's Q". Either I can show you a simple, direct, obvious link
> from X to Q, or you ought to get another programmer.
>
> That is the way of things.

The nature of customer requirements obviously depends on the business
you are in. However, customers usually do not state that 'the code'
should do this or that. Customers normally state what should occur
outside the software/computer.

I'm still waiting - probably forever - for what you mean by a
'requirement'.

Regards,
Hans Ewetz

0
hansewetz (110)
7/13/2005 3:09:49 PM
hansewetz wrote:

> The nature of customer requirements obviously depends on the business
> you are in. However, customers usually do not state that 'the code'
> should this or that. Customers normally states what should occur
> outside the software/computer.

Then you are using the R word for "things which cannot be converted into
testable specifications".

If we pick another word, "Foo", to mean "things which we can express as
tests, to prove^W demonstrate our software has them", then "Foo" is a very
useful and unambiguous word, which can be used to facilitate communication
with customer representatives, with colleagues, with testers, and with code.

If I were your customer, and I asked your program to do X and pass test X',
and you said that's not a requirement, I would be confused. But you would do
X' and X. Foo.

> I'm still waiting - probably forever - for what you mean by a
> 'requirement'.

Things users need your software to do so they can profit.

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/13/2005 3:16:52 PM
Phlip wrote:
> frebe wrote:
>
> > I write automated tests that uses a database. In the setup you fill the
> > database with test data. After the test, you evaluate if the database
> > has the state it is supposed to have.

So do I.

> > In what way would separation
> > makes it easier to create automated tests?
>
> The short answer is when you need to replace the database with a Mock
> object, to observe the other layers under extraneous conditions that the
> database cannot, or should not, supply. Things like bizarre errors.
>
> Most defects reported in the field come from untested error handling code.
> You should be able to test things like disk exhaustion without filling your
> database up with blobs.
>
I don't get it.  What does changing data sources have to do with
testing resiliency to disk faults?  Why do you think you would have to
otherwise fill a database up with blobs to test that?

Regards,
Daniel Parker

0
7/13/2005 7:23:30 PM
Daniel Parker wrote:

> I don't get it.  What does changing data sources have to do with
> testing resiliency to disk faults?

My example was not testing resiliency to disk faults. It was testing (hence
designing) the failure path from the database thing (the real thing or a
mock) back up to the representation layer.

> Why do you think you would have to
> otherwise fill a database up with blobs to test that?

Because databases generally don't report errors they don't really have. They
are kind'a hard-nosed about that sort of thing.

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/13/2005 7:28:25 PM
Robert C. Martin wrote:
-snip-
> >There is nothing intentionally approximate about modelling,
>
> Wow!  The entire motivation behind modeling is to eliminate detail
> that is irrelevant to the situation being studied... These
> are intentional approximations.

Are you saying they are intentional approximations because:
1) some aspects of the situation are intentionally excluded
2) included aspects of the situation are not represented exactly

0
igouy (1009)
7/13/2005 10:10:22 PM
Mark Nicholls wrote:
-snip-
> > Robert, seems like you missed off the catch-all phrase at the end of
> > the sentence - "modelling the real world in some manner" - which afaict
> > would include making a very selective description of just those aspects
> > of the 'real world' relevant to the problem-at-hand.
>
> This appears to be why he rejects it, modelling (to him) is about
> approximation, while SE is not.

Maybe we should inquire what Robert actually means by approximate.


> > > "Software is about describing behaviors that solve problems. "
> > > substitute 'describe' for 'model'.
> > > insert 'real world' in front of behaviours and.....
> > > "Software is about modelling real world behaviors that solve problems."
> > > the fact that those 'behaviours' are executed by computers is
> > > irrelevant.
> > > So I see little difference.
> >
> > Mark, "writing software is about modelling the real world in some
> > manner" would include purposeless undirected
> > modelling-for-the-sake-of-modelling (which may be what Robert was
> > objecting to).
>
> and it is not possible to write purposeless undirected OO software?

afaik Software cannot be purposeful or purposeless, software is not
capable of intentionality - programmers are.


-snip-
> > Michael Jackson writes about "Descriptions and Models" in "Aspects of
> > System Description" 2003
> > http://mcs.open.ac.uk/mj665/papers.html
>
> very interesting......
>
> It largely coincides with what I am trying to say.....but then I'm
> hugely biased, and only skimmed it.
> but specifically
>
> "The desired relationship between a model domain and the domain it
> models is in principle simple. There should be a one to one
> correspondence between phenomina of the two domains and their values"
>
> i.e. the object of the excercise is to create something isomorphic to
> the domain (though I agree with Meyer that this is highly subjective
> and should be viewed as a model of a subjective model of a domain).
>
> This is not Jacksons invention or discovery, this is central to the
> role of scientific/mathematical modelling, and not the preserve of
> computer scientists....the similarities between OO and model theory are
> not coincidence, the Curry-XXXX (can't remember his name) direct
> correspondence between software and the writing of proofs is not
> coincidence.
>
> i.e. modelling is not "intentianally approximate"...
>
> he goes on to state the reasons why "practical choices" introduce
> "unavoidable departures"...i.e. approximations in the example of
> modelling a lift. That is an engineering skill, what choices have to be
> made in the 'real world' to actually implement the model........(to me
> this is the distinction between analysis and design).

Well, after "in principle simple" he quickly reminds us that practical
models are almost never so "ideal".

It's difficult, but maybe really reading the paper might prove more
interesting, than skimming for how it might help this little argument
;-)

0
igouy (1009)
7/13/2005 10:37:38 PM
Robert C. Martin wrote:
> On 13 Jul 2005 02:24:33 -0700, "Mark Nicholls"
> <Nicholls.Mark@mtvne.com> wrote:
>
> >This appears to be why he rejects it, modelling (to him) is about
> >approximation, while SE is not.
>
> The only activity in software development that is not approximate, is
> program execution.

Here I am questioning "The Customer" and making a digital recording of
the conversation. When "The Customer" says 1 plus 1 equals 2, what
approximate thing are you trying to draw our attention to?

0
igouy (1009)
7/13/2005 10:49:32 PM
On 13 Jul 2005 02:10:11 -0700, ggroups@bigfoot.com wrote:

>Mark Nicholls wrote:
>
>> I would expect some notion of structural encapsulation to need to
>> exist...i.e. the object, and probably composite behavioural
>> encapsulation...i.e. the interface.
>
>> objects + interfaces = OO.
>
>> or
>
>> structural encapsulation + behavioural encapsulation = OO
>
>> seems better to me.
>
>> I think OO needs statefull objects, else it is just procedural.
>
>ADT = interface + encapsulation + implementation hiding
>Object-Based = ADT + self-reference
>Object-Oriented = Object-Based + Inheritance
>
>Very succinct. Very precise.

I agree that these are succinct and precise definitions.  Furthermore,
I don't have any problem with the first two.  However, the third (OO)
bothers me because it lacks intent.  To me, OO is much more about
intent, than about inheritance.  

On a pickier point, I think the definition above puts far too much
emphasis on inheritance.  Indeed, I think it was written by someone
who was used to statically typed languages and therefore thought that
polymorphism and inheritance were inextricably linked.  I would be
much happier (though I would still have the issue of intent) with the
above definition if it were stated as:

Object-Oriented = Object-Based + Dynamic Polymorphism.
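
A minimal Java sketch of that reading follows; the Shape types are
hypothetical, and in a dynamically typed language such as Smalltalk even
the shared interface declaration would be unnecessary.

// The caller binds to an abstraction; the concrete behavior is chosen
// at runtime. No implementation inheritance is involved.
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Square implements Shape {
    private final double s;
    Square(double s) { this.s = s; }
    public double area() { return s * s; }
}

class PolymorphismDemo {
    static void report(Shape shape) {      // depends only on the abstraction
        System.out.println(shape.area());  // concrete method chosen at runtime
    }

    public static void main(String[] args) {
        report(new Circle(1.0));
        report(new Square(2.0));
    }
}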



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/14/2005 1:43:52 AM
Robert C. Martin wrote:
> On 13 Jul 2005 02:10:11 -0700, ggroups@bigfoot.com wrote:
>
> >Mark Nicholls wrote:
> >
> >> I would expect some notion of structural encapsulation to need to
> >> exist...i.e. the object, and probably composite behavioural
> >> encapsulation...i.e. the interface.
> >
> >> objects + interfaces = OO.
> >
> >> or
> >
> >> structural encapsulation + behavioural encapsulation = OO
> >
> >> seems better to me.
> >
> >> I think OO needs statefull objects, else it is just procedural.
> >
> >ADT = interface + encapsulation + implementation hiding
> >Object-Based = ADT + self-reference
> >Object-Oriented = Object-Based + Inheritance
> >
> >Very succinct. Very precise.
>
> I agree that this are succinct and precise definitions.  Furthermore,
> I don't have any problem with the first two.  However, the third (OO)
> bothers me because it lacks intent.  To me, OO is much more about
> intent, than about inheritance.
>
> On a pickier point, I think the definition above puts far too much
> emphasis on inheritance.  Indeed, I think it was written by someone
> who was used to statically typed languages and therefore thought that
> polymorphism and inheritance were inextricably linked.

From that we might assume you have extensive experience with
dynamically checked OO languages, but iirc that isn't the case.

Let me assure you that Smalltalk programmers make extensive use of
inheritance.


> I would be
> much happier (though I would still have the issue of intent) with the
> above definition if it were stated as:
> 
> Object-Oriented = Object-Based + Dynamic Polymorphism.

0
igouy (1009)
7/14/2005 5:15:13 AM

Robert C. Martin wrote:
> On 13 Jul 2005 01:55:47 -0700, "Mark Nicholls"
> <Nicholls.Mark@mtvne.com> wrote:
>
> >
> >
> >Robert C. Martin wrote:
> >> On 12 Jul 2005 09:56:10 -0700, "Mark Nicholls"
> >> <Nicholls.Mark@mtvne.com> wrote:
> >>
> >> >
> >> >
> >> >Robert C. Martin wrote:
> >> >> On 11 Jul 2005 10:11:21 -0700, "Mark Nicholls"
> >> >> <Nicholls.Mark@mtvne.com> wrote:
> >> >>
> >> >> >If you believe that writing software is about modelling the real world
> >> >> >in some manner, and that the behavioural structure of the software must
> >> >> >be the same as the structure of the real world then it is a
> >> >> >philosophy....(to me this is the definition of analysis and at the root
> >> >> >of any modelling endeavour).
> >> >>
> >> >> I don't believe the initial premise.  Software is not about modeling
> >> >> the real world (whatever that means).  Software is about describing
> >> >> behaviors that solve problems.
> >> >>
> >> >
> >> >"Software is about describing behaviors that solve problems.  "
> >> >
> >> >substitute 'describe' for 'model'.
> >>
> >> No, don't.  The word "model" is very different from the word
> >> "describe".  A better word might be "specify".  A model is not a
> >> specification.  A model is an intentional approximation.
> >
> >"A mathematical model is the use of mathematical language to describe
> >the behaviour of a system. Mathematical models are used in particularly
> >in the sciences such biology, electrical engineering, physics but also
> >in other
> >fields such as economics, sociology and political science."
> >
> >http://en.wikipedia.org/wiki/Mathematical_model
>
> I certainly agree.  I'll also assert that the only model in software
> that has the rigor of the mathematical models you are talking about,
> is the source code.

I agree

>
> >There is nothing intentionally approximate about modelling,
>
> Wow!  The entire motivation behind modeling is to eliminate detail
> that is irrelevant to the situation being studied.

that is a practical necessity, but there is nothing in the above
definition that stipulates it being approximate....if I want to create
models that satisfy the following theorem

"for all a,b in S, a+b is in S"

then there are a whole raft of valid exact models that satisfy
this.....it is only practical necessity that drives us to approximate.

you can view the above as a system requirement if you so wish.

> We build models of
> airframes to see if they are aerodynamic.  We build wire frame models
> of building and bridges to see if they are structurally sound.  These
> are intentional approximations.

for practical reasons.

>
> UML diagrams are also intentional approximations.  If they weren't, we
> would just write the source code.

they are usually incomplete....but they are simply a representation, as
the ASCII in your editor is. An approximation, to me, implies that they
are wrong but behave almost like the domain they intend to model;
incompleteness implies they are correct but lack the full rigour of
source code. CASE tools require complete representations in order to
generate the code.....you seem to be confusing the model with a
language that represents it, and with the general practical use of that
language...i.e. most people don't use CASE tools, thus their use of UML
is usually as a sketching tool.

The model itself is usually an approximation, but that is a problem of
the complexity of the domain, not of modelling itself.

>
>
> >> >"Software is about modelling real world behaviors that solve problems."
> >>
> >> No, software is about specifying behaviors that solve problems.
> >
> >OK, from the above definition I still see no reason not to insert
> >'model' for specify
>
> The term "model" implies approximation.

you need to give me some reason to believe this is not just your
personal interpretation.......mathematical modelling is *not* (in terms
of model theory) about approximation, it is about absolute
satisfiability of a set of theorems.

Even statistical models are exact in terms of the axioms and
assumptions they are given; it is the assumptions that are the
approximation, not the model.

> The term "specify" implies
> exactitude.  Software is not about getting it sorta-right.  Software
> is about getting it right.

Well TDD aint going to give you that.

Economic activity is about getting it fit for purpose.

but it is irrelevant: even if I agree, the only way to really do
this is through the formalism of mathematical modelling.

>
> >....then we are left with a simple rejection of the
> >existence of the mapping from the thing we seek to
> >specify/model/describe to our specification/model/description? you
> >would not be alone.....do you reject this mapping?
>
> I don't understand the sentence.

"The desired relationship between a model domain and the domain it
models is in principle simple. There should be a one to one
correspondence between phenomena of the two domains and their values"

M. Jackson

As I say in the other post, this is not his invention, but is central to
the nature of western scientific philosophy (at least); some, notably
Jim Coplien, seem to reject it.

>
> >> >the fact that those 'behaviours' are executed by computers is
> >> >irrelevant.
> >>
> >> I disagree.  The fact that those behaviors are executed by computers
> >> is the ESSENSE.
> >
> >why?
>
> Because it seems to me that software is about writing programs that
> run inside computers.

does an algorithm need to be executed in order to be correct?

algorithms existed long before computers.

chefs use recipes....some work, some don't.

there are obviously engineering/economic considerations about
algorithms being executed on computers, but these are incidental to
the basis of algorithms, models and specifications.

>
> >P.S.
> >
> >the 'M' in UML stand for 'modelling' for a reason....we are building
> >models.
> >see Meyer on "objects as a modelling too"....."a software system
> >is.....a model of a model of a subset of ....reality"......
>
> Yes, in the mid '90s the term 'model' was very chic.  Now it has
> become so badly overused and abused that it has no definition other
> than "a good thing to do".  One can gain credibility for oneself by
> using the word "model" more than once in a paragraph.

I use it in the strict mathematical sense; I won't let them devalue the
language if you don't.

>
> In any case, UML *is* a modeling language.  I have no qualms about
> creating models of software systems.  I do it all the time.  But I
> don't fall into the trap of thinking that they are provably correct by
> any means other than by implementing them and testing them.

in practical terms I agree (though we may differ on the degree of
provability and the nature of the tests).

>
> Models *sometimes* help you think about the problem.  They can also be
> misleading.  The only real truth is when the program executes.
>

I don't understand; the program is a model, and its execution once does
not give any absolute truth about how it will execute a second time,
unless the contexts are identical (if we include time in the context,
then by definition never).

I agree we can take an approximate view of correctness, but I suspect
you would reject such a stance on the grounds that SE is not about
getting it almost right, but absolutely right.....a lofty ambition
indeed, and for some specifications absolutely impossible, and for
others absolutely impossible to verify.

0
Nicholls.Mark (1061)
7/14/2005 10:15:34 AM
Phlip wrote:
> hansewetz wrote:

> > I'm still waiting - probably forever - for what you mean by a
> > 'requirement'.
>
> Things users need your software to do so they can profit.

So, requirements no longer have anything to do with tests?

Regards,
Hans Ewetz

0
hansewetz (110)
7/14/2005 10:41:57 AM
Phlip wrote:
> hansewetz wrote:
>
> > The fact that you 'want' to write lots of tests are not really
> > relevant for the discussion.
>
> My code wants me to write lots of tests.

I'm starting to understand the situation better now ...


> The most important point here is there should be lots of tests, and they
> should all be relevant tests. How will you get lots of them, and how can you
> improve their relevance?
>
> I think I can come close by _pretending_ that specifications are tests. We
> still need other practices, but that one seems to provide a good bang/buck
> ratio.

Specifications of the software or requirements from the customer? Your
arguments are not really coherent.

> Someone pays for debugging and rework all the time. I think they'd be
> thrilled to have a cheaper alternative.

True. That's why it is also important to show that your 'test only
driven' approach will deliver. Some analysis of what it can deliver
is needed - just stating that it will deliver is not good enough.

> > Also, quality costs money and all software is not required to have the
> > same quality.
>
> Don't make me remind you I am an eXtremo who has studied Statistical Process
> Control. Quality is free. The higher your quality gets, the easier changes
> become. So the cost of higher quality pays for itself, by reducing the costs
> of all future feature.

And, of course your general claims are always valid in all domains and
all environments.

Regards,
Hans Ewetz

0
hansewetz (110)
7/14/2005 10:52:55 AM
Robert C. Martin wrote:
> On 11 Jul 2005 10:11:21 -0700, "Mark Nicholls"
> <Nicholls.Mark@mtvne.com> wrote:
>
> >If you believe that writing software is about modelling the real world
> >in some manner, and that the behavioural structure of the software must
> >be the same as the structure of the real world then it is a
> >philosophy....(to me this is the definition of analysis and at the root
> >of any modelling endeavour).
>
> I don't believe the initial premise.  Software is not about modeling
> the real world (whatever that means).  Software is about describing
> behaviors that solve problems.

Software at the core has to be about describing someone's perception
of (problems in) the world. If you cannot do this I can't understand
what the code will solve. Once the problem is described it may be
possible to describe behaviors that solve the problem.

Personally, I find it useful to distinguish between analogic and analytic
models. Many problems can be encoded in a computer as an analogic model
where there is a close correspondence at the highest level of
abstraction between how we perceive phenomena in the world and
constructs inside the computer (records, objects, tables etc.).

Regards,
Hans Ewetz

0
hansewetz (110)
7/14/2005 12:58:29 PM
hansewetz wrote:

> > Quality is free.

> And, of course your general claims are always valid in all domains and
> all environments.

My bad. There is a domain and environment out there where quality is not
free. Where increased quality doesn't help you go faster.

What was its name again?

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/14/2005 1:10:56 PM
On 14 Jul 2005 05:58:29 -0700, hansewetz@hotmail.com wrote:

>Robert C. Martin wrote:
>> On 11 Jul 2005 10:11:21 -0700, "Mark Nicholls"
>> <Nicholls.Mark@mtvne.com> wrote:
>>
>> >If you believe that writing software is about modelling the real world
>> >in some manner, and that the behavioural structure of the software must
>> >be the same as the structure of the real world then it is a
>> >philosophy....(to me this is the definition of analysis and at the root
>> >of any modelling endeavour).
>>
>> I don't believe the initial premise.  Software is not about modeling
>> the real world (whatever that means).  Software is about describing
>> behaviors that solve problems.
>
>Software at the core has to be about describing someone's perception
>of (problems in) the world. 

No, it's about specifying the details of someone's plan to have a
computer solve someone else's problem.  

>If you cannot do this I can't understand
>what the code will solve. Once the problem is described it may be
>possible to describe behaviors that solve the problem.

I certainly agree that the problem must be identified and described.
Unfortunately many problems are difficult to understand and polluted
by political and physical issues.  Often the only way to truly
identify a problem is to apply a series of ever-improving solutions
and incrementally measure the benefit.
>
>Personally, I find it useful to separate between analogic and analytic
>models. 

I'm not sure what you mean.  What do you think is the difference
between a model created by analogy, and a model created by analysis?

>Many problems can be encoded in a computer as an anlogic model
>where there is a close correspondence at the highest level of
>abstraction between how we perceive phenomenon in the world and
>constructs inside the computer (records, objects, tables etc.).

If I understand you correctly, you are saying that many problems can
be solved by creating structures in the program that are analogous to
structures in the problem domain.  I certainly agree.  Creating
simulations or analogies is an important and powerful tool available
to software professionals.  It is not, however, the sole, primary, or
even the central tool.  This tool does not define what software is
about.



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
7/14/2005 2:39:06 PM
Robert C. Martin wrote:
> On 14 Jul 2005 05:58:29 -0700, hansewetz@hotmail.com wrote:

> >Software at the core has to be about describing someone's perception
> >of (problems in) the world.
>
> No, it's about specifying the details of someone's plan to have a
> computer solve someone else's problem.

This is simply one out of many opinions.

> >Personally, I find it useful to separate between analogic and analytic
> >models.
>
> I'm not sure what you mean.  What do you think is the difference
> between a model created by analogy, and a model created by analysis?

By an analogic model I mean another 'reality' that has certain things in
common with our perception of the world. For example, a software system
keeping track of employees could be an analogic model. A Monopoly game
can also be viewed as an analogic model. By an analytic model I mean a
description of our perception of the world.


> >Many problems can be encoded in a computer as an anlogic model
> >where there is a close correspondence at the highest level of
> >abstraction between how we perceive phenomenon in the world and
> >constructs inside the computer (records, objects, tables etc.).
>
> If I understand you correctly, you are saying that many problems can
> be solved by creating structures in the program that are analogous to
> structures in the problem domain.  I certainly agree.  Creating
> simulations or analogies is an important and powerful tool available
> to software professionals.  It is not, however, the sole, primary, or
> even the central tool.  This tool does not define what software is
> about.

I disagree. I see it as a central tool.

However, I do agree that it does not 'define what software is about'.
Discussing what 'software development is about' is an endless
discussion since different people have different experiences and
different backgrounds. I believe that the best that can out of such a
discussion is a sharing of different views of best practices in
different domains. I also believe that the worst that can come out are
dogmatic statements about what something is and is not.

Regards,
Hans Ewetz

0
hansewetz (110)
7/14/2005 3:02:46 PM
hansewetz wrote:

> > No, it's about specifying the details of someone's plan to have a
> > computer solve someone else's problem.
>
> This is simply one out of many opinions.

Software isn't about specifying the details of a plan to solve a problem?

I can't tell your point from all the cheap potshots. Why don't you say, "Oh,
and an X-ray controller software is just about solving a problem, not about
modeling the real world of the X-ray machine. You could give someone cancer
with that attitude!"

Sheesh...

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/14/2005 4:13:04 PM
Robert C. Martin wrote:
> On 13 Jul 2005 03:24:42 -0700, "Mark Nicholls"
> <Nicholls.Mark@mtvne.com> wrote:
>
> >
> >
> >Robert C. Martin wrote:
>
> >> Both Foo and
> >> Bar are concrete.  No polymorphism, no dependencies on abstractions,
> >> no OO.
> >>
> >
> >you've added polymorphism to your definition and absraction, whence
> >before it was just physical decoupling.
>
> Yes, I have added polymorphism.  That is very consistent with many
> other postings, articles, and books that I have written.
>
> Newsgroup postings are intentional approximations (i.e. models) of our
> opinions.  We don't have the time or desire to specify them in the
> excruciating detail that would allow us to escape the semantic
> analysis of a determined debater.

I think that's a bit of a cop out. I've given you a definition; your
definition appears to be amorphous, dependent on my objections.

If you have a feeling for what OO is and can't define it then that's fine,
feel free to say so. I have given one; it's probably not perfect, but it
at least directly implies stateful objects and interfaces.

>
> >unfortunately this example does have all the above properties, but at
> >link time, if I create a virtual machine that compiles C code and links
> >it at run time (which very oddly I have), then that makes the above C
> >code OO?
>
> Almost.  If I could create different instances of Bar at runtime, and
> swap them into the currently running Foo, then I would agree that this
> is OO.
>
> >you add more to your definition, but still it seems my simple C program
> >seems to be OO, and still your definition does not imply the existence
> >of objects....thats not good.
>
> Why not?

I would expect a definition of OO to imply the existence of
objects....is that unreasonable? I would take one of the
definitive characteristics of OO to be the 'object'.

> Isn't it a good thing to define "object oriented" in terms
> that don't use the word "object"?  Doesn't the alternative lead to
> circularity?
>

I haven't done this, and I don't expect you to. I've defined it in
terms of other well-defined terms and then defined the term 'object'
(possibly via class) in terms of those; the problem with your current
definition is that it does not define the term 'object' at all.

Objects do not seem to be necessary for OO by your definition, just
decoupling, thus any system that exhibits the nature of decoupling is
OO....but now it seems any system that exhibits run time substitution
is OO.......the problem with that is it seems to span almost all
technology.

Yes, we can model OO systems procedurally and procedural systems in OO;
that makes everything interchangeable. We need to find something distinctive
in OO systems that does not exist in procedural systems in order to be
able to give a good definition....to me that's the object (and the
interface), and currently your definition, though consistent with OO, does
not imply the existence of objects....or structural encapsulation.

> BTW, the argument you are making about the C program seems to suggest
> that if you don't have an object, it's not OO.  Consider, the Command
> pattern.  There are no data elements in the command pattern.  It's
> nothing more than a jump vector (rather like your dynamically loaded
> Bar functions).  Is the command pattern OO?
>

Yes, but "There are no data elements in the command pattern" is not
true.

Execute() takes no parameters, and the behaviour of Execute() requires
that the data it needs is available in some manner, generally within
the scope of the 'object'....in OO it is structurally encapsulated (see
my other post for a definition of OO).

the canonical form of command is

class CXYZCommand
{
   // the command captures its arguments as state...
   private type1 param1;
   private type2 param2;
   private type3 param3;
   // ...etc

   public CXYZCommand(type1 param1, type2 param2, type3 param3 /* ...etc */)
   {
      this.param1 = param1;
      this.param2 = param2;
      this.param3 = param3;
   }

   // ...and replays them when executed
   public void Execute()
   {
      underlyingFunction(param1, param2, param3);
   }
}

That's very stateful.....

If you wish to invoke that trivial mapping between function
f(x,y,z) and f(x,y,z).Execute(), then all procedural languages are
OO (see curried functions).

Yes, it is possible to model procedural systems in objects....that does
not (to me) make them OO.

There does exist a stateless command pattern isomorphic to the function
pointer, but it takes no parameters.

class CFunctionPointerFoo
{
   void Execute()
   {
      Foo();
   }
}

0
Nicholls.Mark (1061)
7/14/2005 4:43:45 PM
Phlip wrote:

>
> > > No, it's about specifying the details of someone's plan to have a
> > > computer solve someone else's problem.
> >
> hansewetz wrote:
> > This is simply one out of many opinions.
>
> Software isn't about specifying the details of a plan to solve a problem?

Just want to clarify that my comment was 'This is simply one out of
many opinions'. The first comment was written by R. Martin.

> I can't tell your point from all the cheap potshots. Why don't you say, "Oh,
> and an X-ray controller software is just about solving a problem, not about
> modeling the real world of the X-ray machine. You could give someone cancer
> with that attitude!"

I don't believe software development is 'just about solving a
problem'. In fact, I do believe that a large part of developing
software for an x-ray machine is about modeling the machine.

My point is very simple. I believe that both execution tests and
analysis are necessary when developing software. I also believe that it
is important to understand 'what it is' that is being tested during
execution tests. I believe that stating that someone will execute
'lots of tests' says close to nothing about the reliability of the
software even if the software passes all the tests.

So far I have not seen anything from the people promoting a 'test
driven approach' that even remotely looks like an analysis of 'what
it is' that is being tested. Neither have I seen anything that
attempts to understand what and how to test to ensure that the software
works correctly. Finally, I haven't seen anything that tries to
understand how to ensure that the software works for things that cannot
be tested. Up until now, I have only heard brave statements about how
great a test driven approach is without any substance behind it.

You are claiming that a test driven approach is the way to go. It is up
to you to show that the claim is valid. I'm simply trying to
understand what it means to do a test driven development project and
what the potential benefits are.

Finally, I'm trying to understand the notion of 'requirements equal
to tests'. So far I have not been convinced about this notion. If you
are interested in promoting this idea you should have some solid
arguments behind it instead of lots of statements that sound good but
in fact are just opinions.

(Sometimes it's hard not to throw a few cheap potshots - your
comments are not always readable and the arguments are often lacking
:-) )

Regards,
Hans Ewetz

0
hansewetz (110)
7/14/2005 4:47:33 PM
hansewetz wrote:

> (Sometimes it's hard not to throw a few cheap potshots - your
> comments are not always readable and the arguments are often lacking
> :-) )

Bring them on.

> My point is very simple. I believe that both execution tests and
> analysis are necessary when developing software.

How much analysis should we do? [More than] enough to write [too many]
tests. That makes analysis useful and focused.

> So far I have not seen anything from the people promoting a 'test
> driven approach' that even remotely looks like an analysis of 'what
> it is' that is being tested.

That's because it's impossible to write tests without analyzing. So there's
almost no reason to say it. Hey! Analyze before you write tests, everyone!

(Yes, here comes a potshot, writing useless tests without analysis is
possible. It's impossible within the context of reviews and releases.)

> Neither have I seen anything that
> attempts to understand what and how to test to ensure that the software
> works correctly.

What is the ratio of test to production code you are familiar with?

> Finally, I haven't seen anything that tries to
> understand how to ensure that the software works for things that cannot
> be tested. Up until now, I have only heard brave statements about how
> great a test driven approach is without any substance behind it.

If the customer says "Give me Q that I might profit", you say "I can't test
Q. Let me analyze a little. Okay, can I give you tests and code for Q1, Q2
and Q3?" The customer collaborates with this analysis and says, "Okay,
that's enough Q for now."

Cryptography is impossible to test. You can't test that the NSA can't crack
your protocol. You can, however, test that you have 128-bit RSA encryption
with one-way passwords and a tear-off pad, etc. etc. Going from "no crack"
to "RSA" is analysis.

Analysis to write tests is a powerful way to focus customer and team
activities on important things.

> Finally, I'm trying to understand the notion of 'requirements equal
> to tests'.

You keep saying "You guys don't analyze", then you say "how do you know what
to test?" Put them together...

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/14/2005 5:06:53 PM
Mark Nicholls wrote:
> Robert C. Martin wrote:
-snip-
> > Newsgroup postings are intentional approximations (i.e. models) of our
> > opinions.  We don't have the time or desire to specify them in the
> > excruciating detail that would allow us to escape the semantic
> > analysis of a determined debater.
>
> I think thats a bit of a cop out

Yes. As we are free to engage in as few discussions as time allows,
it's more graceful to simply acknowledge when we forget to mention
something or were too vague or any of those other minor failings that
take place in the normal course of conversation.


-snip-
> I would expect a definition of OO to imply the existence of
> objects....is that not unreasonable. I would take one of the
> defininitive characteristics of OO was the 'object'.

And what does an object have that's special - identity.

0
igouy (1009)
7/14/2005 5:37:15 PM

Robert C. Martin wrote:
> On 12 Jul 2005 07:09:13 -0700, "Mark Nicholls"
> <Nicholls.Mark@mtvne.com> wrote:
>
> >I cannot comment, religion is beyond me, religion is certainly a belief
> >system, but so is science.
>
> Careful.  Religion is a belief system based upon faith.  Science is a
> belief system based upon empirical observation.  Saying that they are
> both believe systems creates a false comparison.
>

Careful: mathematics is a belief system based (usually) on the axioms of
ZFC.

These seem to coincide with empirical evidence.

Maths is a belief system.

I am not arrogant enough (despite my personal atheism) to discount
other people's empirical observations (beliefs)...and philosophically it
seems impossible to discount the existence of a creational god, just
one that "plays dice"....I'm with Einstein.

0
Nicholls.Mark (1061)
7/14/2005 5:37:44 PM
"Mark Nicholls" <Nicholls.Mark@mtvne.com> writes:
>Careful mathematics is a belief system based (usually) on the
>axioms of ZFC.

  It suffices to just specify the axioms as an assertion A and
  then investigate the consequences of A. It is not necessary
  to "believe" that A is actually true.

0
ram (2986)
7/14/2005 6:04:58 PM
Phlip wrote:

> > My point is very simple. I believe that both execution tests and
> > analysis are necessary when developing software.
>
> How much analysis should we do? [More than] enough to write [too many]
> tests. That makes analysis useful and focused.

I don't think that there is a single answer to your question.

> What is the ratio of test to production code you are familiar with?

Don't see the relevance of your comment.

> You keep saying "You guys don't analyze", then you say "how do you know what
> to test?" Put them together...


I can't put them together since you don't present a single strategy
for testing. At least I can come up with a few strategies for analyzing
problems, solutions and problem solving.

For example, I can choose to develop a meta model for my domain. I can
then show that any model instantiated from the meta model has certain
characteristics I want it to have. I can also enumerate the different
elements I will use in my model. For example, I may choose to use
entities, events, values, etc. - each being well defined. When I
solve a problem I have multiple problem solving strategies available:
solve difficult parts first, solve easy parts first and hope the
difficult ones disappear, generalize the problem and maybe it becomes
easier to solve, map the problem into mathematical notation and apply
mathematical analysis, remove assumptions - maybe the assumptions
prevent me from solving the problem, reformulate the problem, solve a
similar problem to get hints, etc.

I could see a few test related issues that should be analyzed. You may
want to build tests that detect simple code bugs - i.e. the design
has been coded incorrectly. You may want to build tests that show that
the design or algorithms are logically working according to
specifications. You may want to build tests that show that there are no
inconsistencies among different behaviors in the software. There are
probably many other categories of tests that may be done. I'm not
saying it is possible to practically separate tests this way since you
are applying tests on something that implements a mix of concerns.
However, it is probably worth giving it a shot - maybe someone has
already done it. Or, maybe you are already doing it ...?

Regards,
Hans Ewetz

0
hansewetz (110)
7/14/2005 6:15:19 PM
hansewetz wrote:

> I can't put them together since you don't present a single strategy
> for testing.

I said it over and over again. You could probably repeat it. Unless you
moved the definition of "strategy"...

> At least I can come up with a few strategies for analyzing
> problems, solutions and problem solving.

At least I can too. I hold them subservient to you-know-whats.

> For example, I can choose to develop a meta model for my domain. I can
> then show that any model instantiated from the meta model has certain
> characteristic I want it to have. I can also enumerate the different
> elements I will use in my model. For example, I may choose to use
> entities, events, values, etc. - each being well defined. When I
> solve a problem I have multiple problem solving strategies available:
> solve difficult parts first, solve easy parts first and hope the
> difficult ones disappears, generalize the problem and maybe it becomes
> easier to solve, map the problem into mathematical notation and apply
> mathematical analysis, remove assumptions - maybe the assumptions
> prevents me from solving the problem, reformulate the problem, solve a
> similar problem to get hints, etc.

That is orthogonal, at least, to the goal of tests driving development. Of
course you still need to do all that, but...

Can you put an 'assert(false)' anywhere in your code, run all your tests,
and get failing test cases? Done right, you should at least fail "unit"
tests, and each of these should associate with a user story. And you will
very probably fail an acceptance test, and this _will_ associate with a user
story. (Done perfectly, an acceptance test _will_ fail, but that's more
perfect than most development needs.)
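
A minimal sketch of that thought experiment in Java, assuming a made-up
"member discount" story; uncommenting the sabotage line makes the
story-named test fail.

class Pricing {
    static int discountedPrice(int price) {
        // throw new AssertionError("sabotage");  // the 'assert(false)' probe
        return price * 90 / 100;                  // story: 10% member discount
    }
}

class MemberDiscountStoryTest {
    public static void main(String[] args) {
        if (Pricing.discountedPrice(200) != 180)
            throw new AssertionError("user story 'member discount' broken");
        System.out.println("member discount story still holds");
    }
}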

So in your scenario, you have tangled this traceability up into an elaborate
object model. It's a fine model, and a fine system to generate it, but my
thought experiment illustrates how your system is not necessarily driven by
executable specifications derived from requirements derived from field use.

> I could see a few test related issues that should be analyzed. You may
> want to build tests that detect simple code bugs - i.e. the design
> has been coded incorrectly. You may want to build tests that show that
> the design or algorithms are logically working according to
> specifications. You may want to build tests that show that there are no
> inconsistencies among different behaviors in the software. There are
> probably many other categories of tests that may be done. I'm not
> saying it is possible to practically separate tests this way since you
> are applying tests on something that implements a mix of concerns.
> However, it is probably worth giving it a shot - maybe someone has
> already done it. Or, maybe you are already doing it ...?

Yes, we hire QA departments to add these kinds of tests that target coverage
and combinatoric issues. Our QA has a much easier time of it.

And unless you use SPARKS, we debug much less than you.

-- 
  Phlip
  http://www.c2.com/cgi/wiki?ZeekLand


0
phlip_cpp (3852)
7/14/2005 10:19:08 PM
Phlip wrote:
-snip-
> And unless you use SPARKS, we debug much less than you.

I have quite a few comments to make here - but I have to bite the
bullet and keep quiet! Have too many things to do right now.

Regards,
Hans Ewetz

0
hansewetz (110)
7/15/2005 7:36:32 AM

Stefan Ram wrote:
> "Mark Nicholls" <Nicholls.Mark@mtvne.com> writes:
> >Careful mathematics is a belief system based (usually) on the
> >axioms of ZFC.
>
>   It suffices to just specify the axioms as an assertion A and
>   then investigate the consequences of A. It is not necessary
>   to "believe" that A is actually true.

OK, I agree, but if we are to believe that we can apply the model to
the universe around us, we must believe that the axioms apply to that
universe.

In pure maths, it doesn't really matter as it is an exercise in mental
gymnastics (people usually use a different word).

In applied maths and sciences it is hugely relevant.

So I should really say science and applied maths are belief systems.

If you believe F=MA applies, you must believe that time is absolute for
all observers; if you believe relativity, you can scrub this belief,
but there are a whole lot more, including the existence of the empty
set, the axiom of choice (or its negation), ZFC (or equivalent), etc.,
etc., and a set of existential beliefs about existence, and that the
universe itself is deterministic in the same manner that maths
is.......that's a lot of belief; thankfully it seems to work.

0
Nicholls.Mark (1061)
7/15/2005 8:35:15 AM

Isaac Gouy wrote:
> Mark Nicholls wrote:
> -snip-
> > > Robert, seems like you missed off the catch-all phrase at the end of
> > > the sentence - "modelling the real world in some manner" - which afaict
> > > would include making a very selective description of just those aspects
> > > of the 'real world' relevant to the problem-at-hand.
> >
> > This appears to be why he rejects it, modelling (to him) is about
> > approximation, while SE is not.
>
> Maybe we should inquire what Robert actually means by approximate.
>
>
> > > > "Software is about describing behaviors that solve problems. "
> > > > substitute 'describe' for 'model'.
> > > > insert 'real world' in front of behaviours and.....
> > > > "Software is about modelling real world behaviors that solve problems."
> > > > the fact that those 'behaviours' are executed by computers is
> > > > irrelevant.
> > > > So I see little difference.
> > >
> > > Mark, "writing software is about modelling the real world in some
> > > manner" would include purposeless undirected
> > > modelling-for-the-sake-of-modelling (which may be what Robert was
> > > objecting to).
> >
> > and it is not possible to write purposeless undirected OO software?
>
> afaik Software cannot be purposeful or purposeless, software is not
> capable of intentionality - programmers are.

I think that's my point, i.e. I don't see the relevance of the
purposefulness(!).

>
>
> -snip-
> > > Michael Jackson writes about "Descriptions and Models" in "Aspects of
> > > System Description" 2003
> > > http://mcs.open.ac.uk/mj665/papers.html
> >
> > very interesting......
> >
> > It largely coincides with what I am trying to say.....but then I'm
> > hugely biased, and only skimmed it.
> > but specifically
> >
> > "The desired relationship between a model domain and the domain it
> > models is in principle simple. There should be a one to one
> > correspondence between phenomina of the two domains and their values"
> >
> > i.e. the object of the excercise is to create something isomorphic to
> > the domain (though I agree with Meyer that this is highly subjective
> > and should be viewed as a model of a subjective model of a domain).
> >
> > This is not Jacksons invention or discovery, this is central to the
> > role of scientific/mathematical modelling, and not the preserve of
> > computer scientists....the similarities between OO and model theory are
> > not coincidence, the Curry-XXXX (can't remember his name) direct
> > correspondence between software and the writing of proofs is not
> > coincidence.
> >
> > i.e. modelling is not "intentianally approximate"...
> >
> > he goes on to state the reasons why "practical choices" introduce
> > "unavoidable departures"...i.e. approximations in the example of
> > modelling a lift. That is an engineering skill, what choices have to be
> > made in the 'real world' to actually implement the model........(to me
> > this is the distinction between analysis and design).
>
> Well, after "in principle simple" he quickly reminds us that practical
> models are almost never so "ideal".

I completely agree, but the lack of idealism is not in the model, but
in the specification.

>
> It's difficult, but maybe really reading the paper might prove more
> interesting, than skimming for how it might help this little argument
> ;-)

It's quite long; I was going to give it a go on the train, but forgot.

0
Nicholls.Mark (1061)
7/15/2005 8:45:26 AM

Robert C. Martin wrote:
> On 13 Jul 2005 02:24:33 -0700, "Mark Nicholls"
> <Nicholls.Mark@mtvne.com> wrote:
>
> >This appears to be why he rejects it, modelling (to him) is about
> >approximation, while SE is not.
>
> The only activity in software development that is not approximate, is
> program execution.
>

but you rejected modelling because....

"The term "model" implies approximation.  The term "specify" implies
exactitude.  Software is not about getting it sorta-right.  Software
is about getting it right. "

now you seem to agree with me, at least, that specification is
approximate (generally by being incomplete, with programmers filling in
the gaps, like Newton assuming time is absolute), whereas before it was
an "exactitude".

I don't understand; there seems to be a contradiction in what you're
saying.

If SE is approximate then how can you reject modelling on the grounds
of it being an approximation?

0
Nicholls.Mark (1061)
7/15/2005 9:05:24 AM
Isaac Gouy wrote:
> Mark Nicholls wrote:
> > Robert C. Martin wrote:
> -snip-
> > > Newsgroup postings are intentional approximations (i.e. models) of our
> > > opinions.  We don't have the time or desire to specify them in the
> > > excruciating detail that would allow us to escape the semantic
> > > analysis of a determined debater.
> >
> > I think that's a bit of a cop-out
>
> Yes. As we are free to engage in as few discussions as time allows,
> it's more graceful to simply acknowledge when we forget to mention
> something or were too vague or any of those other minor failings that
> take place in the normal course of conversation.

Sometimes I change my mind even.

>
>
> -snip-
> > I would expect a definition of OO to imply the existence of
> > objects....is that not unreasonable? I would take one of the
> > definitive characteristics of OO was the 'object'.
>
> And what does an object have that's special - identity.

Maybe; currently I simply see identity as a label by which we can
refer to 'objects' in both specification and implementation.

i.e. it probably is necessary, but as a syntactic mechanism for
referring to something that encapsulates structure.
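
To make that concrete with a throwaway, made-up C sketch (purely an
illustration, all names invented): two objects can have identical
contents and still be two distinct things, and the 'label' is just
whatever lets you tell them apart - here nothing deeper than their
addresses.

    #include <stdio.h>

    struct point { int x, y; };

    int main(void) {
        struct point a = { 1, 2 };
        struct point b = { 1, 2 };

        /* same value: field by field the two objects are indistinguishable */
        printf("equal contents: %d\n", a.x == b.x && a.y == b.y);

        /* distinct identity: they are nevertheless two separate objects */
        printf("same object:    %d\n", (void *)&a == (void *)&b);
        return 0;
    }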

0
Nicholls.Mark (1061)
7/15/2005 9:14:15 AM
>
> >by your definition is OO, at least at link time, the prototype acting
> >as the abstraction (for it is an abstraction...a *procedural* one).
>
> No, I don't consider link-time polymorphism as part of OO.

what about dynamic link time?

>
> >To me you are confusing OO with decoupling, because your definition is
> >that anything that can be 'nicely' decoupled is OO
>
> Anything that is nicely decoupled using runtime polymorphism.

OK. So polymorphism is the key attribute.

>
> >I would expect some notion of structural encapsulation to need to
> >exist...
>
> Once you decide to invoke functions through function pointers, it's a
> very short step to aggregating those function pointers into data
> structures that also hold the data that those functions manipulate.

yes, but it is an extra step that seems to divide C and C++,
procedural languages and OO ones, and it is one that is absent from
your original definition.

>
> Indeed, I have often defined OO as:
>
> "A programming style in which data structures are manipulated by
> functions that are called through pointers contained by the
> manipulated data structure."

this is a rather mechanical explanation; I suspect it does not apply
to some OO languages, and again it does not seem to imply the existence
of structural encapsulation, i.e. stateful objects.
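
For concreteness, here is the sort of thing I picture when I read that
definition - a made-up C sketch (all names invented): a struct carrying
both its state and pointers to the functions that manipulate it, with
the functions called through those pointers. Whether plain C written in
this style already gives you 'structural encapsulation' is, I suppose,
exactly the question.

    #include <stdio.h>

    /* state plus pointers to the functions that manipulate that state */
    struct counter {
        int value;
        void (*increment)(struct counter *self);
        int  (*get)(const struct counter *self);
    };

    static void counter_increment(struct counter *self) { self->value++; }
    static int  counter_get(const struct counter *self) { return self->value; }

    /* a 'constructor' that wires state and behaviour together */
    static struct counter make_counter(void) {
        struct counter c = { 0, counter_increment, counter_get };
        return c;
    }

    int main(void) {
        struct counter c = make_counter();
        c.increment(&c);             /* called through a pointer the object holds */
        printf("%d\n", c.get(&c));   /* prints 1 */
        return 0;
    }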

>
> However, I find this definition to be too complicated.  I prefer:

I agree...too mechanical.

>
> "A programming style in which all source code dependencies target
> runtime abstractions."
>

'runtime abstractions'? Is not an interface a compile-time
abstraction?

OK, we may as well leave it; I think it's a reflection of your
personal hang-ups about physical dependency, and my definition probably
reflects my personal hang-up about the existence of objects and
interfaces.
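
Since we may be talking past each other on 'runtime abstraction',
here is how I read it, again as a made-up C sketch (names invented):
the interface is declared at compile time, and the caller's source
depends only on it, but which concrete behaviour actually runs behind
it is only decided at runtime, through the function pointer.

    #include <stdio.h>

    /* the abstraction the caller's source code depends on */
    struct shape {
        double (*area)(const struct shape *self);
    };

    /* the caller knows nothing about any concrete shape */
    static void print_area(const struct shape *s) {
        printf("area = %f\n", s->area(s));   /* bound at runtime */
    }

    /* one concrete implementation, invisible to print_area */
    struct square {
        struct shape base;   /* placed first so a square can be passed as a shape */
        double side;
    };

    static double square_area(const struct shape *self) {
        const struct square *sq = (const struct square *)self;
        return sq->side * sq->side;
    }

    int main(void) {
        struct square sq = { { square_area }, 3.0 };
        print_area(&sq.base);    /* prints area = 9.000000 */
        return 0;
    }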

0
Nicholls.Mark (1061)
7/15/2005 12:37:51 PM
Mark Nicholls wrote:
> Isaac Gouy wrote:
> > Mark Nicholls wrote:
> > -snip-
> > > > Robert, seems like you missed off the catch-all phrase at the end of
> > > > the sentence - "modelling the real world in some manner" - which afaict
> > > > would include making a very selective description of just those aspects
> > > > of the 'real world' relevant to the problem-at-hand.
> > >
> > > This appears to be why he rejects it, modelling (to him) is about
> > > approximation, while SE is not.
> >
> > Maybe we should inquire what Robert actually means by approximate.
> >
> >
> > > > > "Software is about describing behaviors that solve problems. "
> > > > > substitute 'describe' for 'model'.
> > > > > insert 'real world' in front of behaviours and.....
> > > > > "Software is about modelling real world behaviors that solve problems."
> > > > > the fact that those 'behaviours' are executed by computers is
> > > > > irrelevant.
> > > > > So I see little difference.
> > > >
> > > > Mark, "writing software is about modelling the real world in some
> > > > manner" would include purposeless undirected
> > > > modelling-for-the-sake-of-modelling (which may be what Robert was
> > > > objecting to).
> > >
> > > and it is not possible to write purposeless undirected OO software?
> >
> > afaik Software cannot be purposeful or purposeless, software is not
> > capable of intentionality - programmers are.
>
> I think that's my point, i.e. I don't see the relevance of the
> purposefulness(!).

I was just sifting through the likely layers of miscommunication - in
my mind (and perhaps Robert's), a phrase like 'modelling the "real"
world' sets off an alarm labelled "analysis paralysis".
The only way to shut up the alarm is to quickly qualify 'modelling the
"real" world' with some notion of a limited domain and a specific
purpose (problem)...


-snip-
> > Well, after "in principle simple" he quickly reminds us that practical
> > models are almost never so "ideal".
>
> I completely agree, but the lack of idealism is not in the model but
> in the specification.

Sorry, I really don't understand what you mean.

0
igouy (1009)
7/15/2005 3:39:39 PM
Mark Nicholls wrote:
>
> religion is certainly a belief system, but so is science.
>
Who is to be honoured: Doubting Thomas, or "those who have not seen
and yet believe?"  Science says the former; religion says the latter.

Regards,
Daniel Parker

0
7/15/2005 5:38:33 PM
Mark Nicholls wrote:

> you add more to your definition, but still it seems my simple C
> program seems to be OO, and still your definition does not imply the
> existence of objects....that's not good.

As far as I know, early C++ compilers would simply translate C++ source code 
to C source code. Would you say that that transformation removed the 
"OOness" from the code?

Curious, Ilja 


0
it3974 (470)
7/15/2005 5:51:27 PM
"Isaac Gouy" <igouy@yahoo.com> writes:
>I was just sifting through the likely layers of mis-communication - in
>my mind (and perhaps Roberts), some phrase like 'modelling the "real"
>world' sets off an alarm labelled "analysis paralysis".

  The "real world" is also a model.

0
ram (2986)
7/15/2005 6:58:18 PM
On Fri, 15 Jul 2005 19:51:27 +0200, Ilja Preuß wrote:

> Mark Nicholls wrote:
> 
>> you add more to your definition, but still it seems my simple C
>> program seems to be OO, and still your definition does not imply the
>> existence of objects....that's not good.
> 
> As far as I know, early C++ compilers would simply translate C++ source code 
> to C source code. Would you say that that transformation removed the 
> "OOness" from the code?

What happens to "flightness" when you board a train? "OOness" is not a
property of the problem, or of a program solving the problem; it is a
property of the method.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
0
mailbox2 (6357) <