XP Requirement Analysis?

I often read that in XP you should not create a lot of documentation - that
it's all temporary stuff to help one understand the system.  First off, I
applaud XP's desire to actually help the developers understand the domain
for the system they are building, but I think this is the wrong way of
"helping them understand," for a couple of reasons.  Many domains, such as
accounting, manufacturing, and supply chain, are well understood, with a
wealth of existing documentation.  Rather than trying to figure things out
as you go, wouldn't it be best to train up developers on the business domain
before jumping into a (possibly incomplete, poorly defined) scenario and
rewriting it over and over until you've captured all the requirements?
I'm not saying it's all bad; I would rather have developers understand the
domain they are implementing, even if it is piece by piece, than be handed
a set of specifications (that are always missing something or just plain
wrong) and, like zombies, do exactly what they say without question.  My
biggest complaint, however, is this: how do you go to the customer and say
"this wasn't a requirement of the original agreement; you will have to
increase the project budget, extend the timeline, etc." if the requirements
have not been thoroughly analyzed, documented, and signed?

- Kurt


kurth (11)
9/14/2004 5:40:13 AM

kurth wrote:

> I often read that in XP you should not create a lot of documentation

Citation?

> - that
> it's all temporary stuff to help one understand the system.

The best way to state the goal is this: Minimize the time between fully
specifying a feature and deploying it. Small cycles can return business
value early by tackling requirements in order of business value.

> First off I
> applaud XP's desire to actually help the developers understand the domain
> for the system they are building but I think this is the wrong way of
> "helping them understand" for a couple of reasons:  Many domains such as
> accounting, manufacturing, supply chain, are well understood with a wealth
> of existing documentation.

Right. It's not the documentation that's the problem - it's the
requirements, sitting on a shelf. That is exactly like inventory, sitting on
a shelf, in manufacturing. It can only decay in value over time.

> Rather than trying to figure things out as you
> go, wouldn't it be best to train up developers on the business domain before
> jumping into a (possibly incomplete, poorly defined) scenario and rewriting
> it over and over until you've captured all the requirements?

That's why the OnsiteCustomer is _onsite_ - so developers can learn what
they need (including RTFM), in parallel with implementing it.

> I'm not saying it's all bad; I would rather have developers understand the
> domain they are implementing, even if it is piece by piece, than be handed
> a set of specifications (that are always missing something or just plain
> wrong) and, like zombies, do exactly what they say without question.

Citation?

> My
> biggest complaint, however, is this: how do you go to the customer and say
> "this wasn't a requirement of the original agreement; you will have to
> increase the project budget, extend the timeline, etc." if the requirements
> have not been thoroughly analyzed, documented, and signed?

(Reputedly) per Craig Larman's /Agile and Iterative Development: A Manager's
Guide/, if you perform "big requirements up front", you invest in a huge
risk of failure. That's why "more artifacts", in the "spend millions on
requirements first" range, lead to so many spectacular software failures.

Fundamentally, the most important requirements are also the ones someone is
lying awake at night thinking about, and can't stop talking about. While
the low-priority requirements are obviously fuzzy, the high-priority ones
ought to be obvious to everyone in the building.

Every XP practice, observed alone in isolation, has huge obvious gaps. Other
practices fill them. Given the practice "Minimize the pipeline between
specifying requirements and deploying them", here is how the other
practices fit in:

 - planning game - onsite customer sorts requirements by business priority.
              The ones on top are usually compelling and obvious; implementing
              them will help teach about the ones further down the stack

 - simple design - the code does not fill up with hooks and features that
              address requirements not yet as fully understood as the
              ones of high business priority

 - pair programming - whoever knows less about a requirement is learning
              from the pair

 - onsite customer - the customer can answer questions about requirements
              at the exact moment that the briefest answer - most obvious
              to the customer - has the greatest impact on the result

 - frequent releases - Every week the project displays the ability to boost
              user productivity by satisfying the most important requirements

 - common workspaces - If I realize I don't know enough about a domain
              aspect, and I overhear someone pairing and discussing it, I can
              pause and listen in, or I can boot out one of the pair and
              learn directly

 - shared code ownership - I can change code supporting some requirement
              and see what happens!

 - acceptance tests - the specifications are written in a literate, readable,
              unambiguous and executable format, reviewed by everyone

 - refactoring - because the code was grown by refactoring, it is
              safe to refactor again. Hence the project can accept
              feature requests in any order, not just in order of technical
              risk

 - scope control - the onsite customer reviews features growing in
             real-time

The only "evidence" that XP "works" is the rate programming shops learn to
profit from Agility. Honest businesses are systems for collecting scientific
data-they just call it "money".

XP addresses these common risks to velocity:

* long bug hunts
* delays before releasing
* reworking previously deployed features.

To speed development while preventing long bug hunts, XP recommends
Test-Driven Development, which treats the lack of an ability as a minor bug,
and writes a test to capture that bug before killing it. The supporting
practices are Merciless Refactoring, to paint yourself into corners and then
cut new doors; Simple Design, to bypass excessive engineering efforts; Pair
Programming, to learn from each change; Common Code Ownership, to minimize
political or esthetic reasons not to change code; a System Metaphor, to
shorten sentences; and Continuous Integration, to merge code changes before
they conflict.
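
To make that cycle concrete, here is a minimal sketch in Python (the
function and the "missing ability" are invented for illustration, not taken
from any real project): the test is written first, fails, and then the least
code makes it pass:

import unittest

def parse_quantity(text):
    # Existing ability: plain integers.
    if text.isdigit():
        return int(text)
    # The least code that kills the captured "bug":
    # quantities written like "3 dozen".
    count, unit = text.split()
    return int(count) * {"dozen": 12}[unit]

class QuantityTest(unittest.TestCase):
    def test_plain_integer(self):
        self.assertEqual(parse_quantity("42"), 42)

    def test_dozen(self):
        # Written first, while it still failed: the missing ability,
        # captured as a reproducible minor bug before killing it.
        self.assertEqual(parse_quantity("3 dozen"), 36)

if __name__ == "__main__":
    unittest.main()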

To avoid delays before releasing, XP teams do not just release often, they
Release Frequently, typically every week, rain or shine. (Some releases
don't deploy all the way to real users. Each one need only demonstrate a
hypothetical productivity boost.) Whole Teams know each release's status
non-verbally, and have the mandate to automate much of their workflow. The
business side reviews and extends Customer Tests to learn exactly what
features appear in each release. Teams who Frequently Release must face
disturbing issues, like installers or databases, and incrementally develop
their scripts.

To avoid rework, XP teams boost user productivity early and often. The
Planning Game sorts User Stories in order by business value. This is a
hill-climbing algorithm -- a search for the maximum productivity gain from
the current position. On hills without secondary peaks, the shortest path up
is always the steepest path from each point. In the space of programming, a
hill-climbing algorithm encounters no secondary peaks when all application
features deform continuously. Simple Design, Merciless Refactoring, and
Test-Driven Development change code smoothly and safely.
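
Read as an algorithm (a toy Python sketch with invented stories and
numbers): from the current position, always implement the story with the
steepest value-for-effort slope:

# Toy hill-climb over user stories: at each step take the story
# with the best value/effort ratio from where we stand now.
stories = {"invoicing": (8, 2), "search": (6, 3), "theming": (2, 4)}
# name: (business value, effort points)

order = []
while stories:
    name = max(stories, key=lambda n: stories[n][0] / stories[n][1])
    order.append(name)
    del stories[name]
print(order)  # -> ['invoicing', 'search', 'theming']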

The fixes for those risks overlap. That's the point: The XP Practices are
all "best practices" when used alone -- in moderation. Put together, they
bind and reinforce each other, permitting low overhead and extreme velocity.
Questions about practices have answers in other practices. A Sustainable
Pace also keeps programmers fresh, to learn about requirements.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces




phlip_cpp (3852)
9/14/2004 6:10:45 AM
On Mon, 13 Sep 2004 23:40:13 -0600, "kurth" <kurth@avanade.com> wrote:

>I often read that in XP you should not create a lot of documentation - that
>it's all temporary stuff to help one understand the system.  First off I
>applaud XP's desire to actually help the developers understand the domain
>for the system they are building but I think this is the wrong way of
>"helping them understand" for a couple of reasons:  Many domains such as
>accounting, manufacturing, supply chain, are well understood with a wealth
>of existing documentation.  Rather than trying to figure things out as you
>go wouldn't it be best to train up developers on the business domain before
>jumping into a (possibly incomplete poorly defined) scenario and rewriting
>it over and over until you've captured all the requirements?
>I'm not saying that's all bad; I would rather have developers understand the
>domain they are implementing even if it is piece by piece than to be handed
>a set of specifications (that are always missing something or just plain
>wrong) and like zombies do exactly what it says without question.  My
>biggest complaint however is that how do you go to the customer and say this
>wasn't a requirement of the original agreement, you will have to increase
>the project budget, extend the timeline, etc.. if the requirements have not
>thoroughly been analyzed, documented, and signed?
>
>- Kurt
>

I see this debate quite often..


Documentation of any kind should be produced only if it is effective
and benefits the project -- whether it's to safeguard against
contractual requirements, to educate newer developers coming on board
later in the development cycle, or simply to serve as a reference.
You only produce it if you have a need to do so.

Documentation that's just being produced to sit on a shelf is just a
waste of time and money.

People who try and preach a magical solution to a complex problem
usually don't understand the problem. I mean, who could be stupid enough
to believe that just because you have a customer on site, you should
be doing everything they ask? Yet some XP proponents use this as a
justification for not having to produce effective documentation.



foo_ (331)
9/14/2004 9:49:44 AM
AndyW wrote:

> Documentation that's just being produced to sit on a shelf is just a
> waste of time and money.

And it adds risk.

> People who try and preach a magical solution to a complex problem
> usually don't understand the problem. I mean, who could be stupid enough
> to believe that just because you have a customer on site, you should
> be doing everything they ask? Yet some XP proponents use this as a
> justification for not having to produce effective documentation.

The magic comes from a slight adjustment to priorities. The documentation is
optional. Onsite customers, user stories, and tests are not.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


phlip_cpp (3852)
9/14/2004 3:37:18 PM
Kurt,

> My biggest complaint however is that how do you go to the customer and say this
> wasn't a requirement of the original agreement, you will have to increase
> the project budget, extend the timeline, etc.. 

How would doing all that advance the project's (and the customer's)
goals?

Laurent
http://bossavit.com/thoughts/
laurent (379)
9/14/2004 9:30:08 PM
On Tue, 14 Sep 2004 23:30:08 +0200, Laurent Bossavit
<laurent@dontspambossavit.com> wrote:

>Kurt,
>
>> My biggest complaint however is that how do you go to the customer and say this
>> wasn't a requirement of the original agreement, you will have to increase
>> the project budget, extend the timeline, etc.. 
>
>How would doing all that advance the project's (and the customer's)
>goals?
>
>Laurent
>http://bossavit.com/thoughts/


I think it comes down to contractual agreement and terms of payment.
Many sensitive projects are contractually negotiated on what was
originally asked for, with amendments being additional cost paid for
and approved by the customer.   If you can't prove what they asked for,
there is no way you can really get paid for any amendments.

It's pretty standard in medium-sized projects ($100 million+) and upwards,
where a change, no matter how simple, runs into the tens of thousands of
dollars.

It's not uncommon for a customer to request a trivial change - one that may
only take an hour to implement - yet the cost of that hour's work will
be perhaps $60,000.

A classic example occurred a couple of years ago on a project I was on
that had an hourly running cost of about $4500.  The customer
asked one of their own developers to make a change that ended up
costing $30,000 by the time it was implemented.  When the customer's
senior management was shown the expense they went ballistic.

Luckily, developers at the company I was working for wouldn't do any
work until the requirement was signed, sealed, and delivered, complete
with a spec and accurate cost/timescales attached.  The cost of one of
our folks producing that documentation is only $2-3k at most, which
comes off the final development price should it be implemented.

Requirements documentation is critical here because it is often used
by the customer to plan changes to the actual operating process
and/or business model of their company in advance of any decision to
make the change. The requirements are also used much later when
analysing defects in the operating process vs actual operation of the
s/w.  When a piece of s/w is shown to be working to requirements
(conforming to requirements) we can then analyse external entities
such as the people involved or the operating procedure to see if one
of those is defective and needs to be brought back into line, or if
the environment has changed enough to warrant a change to the
original requirement.

The timescales I am talking about here are over a period of years,
rather than months.  You cannot use a method that commits only to
temporary documentation as a necessary evil.

While XP and the old style try to achieve the same goal, often the old
style turns out better because its more rigorous approach contains all the
required safeguards.

This does not mean that one is better than the other.  My view is that
it's really down to the capability and maturity of the organisation
performing the work.

I work to a CMM level 5 standard of work, so I expect my employers to
try and achieve that.  There is a huge visible difference between what is
produced at CMM 5 and, say, at CMM 3.  It doesn't matter what method you
use.

Usually at this level the old style is the flavour, but XP tweaks are
sometimes used.  In the example above, conformance to requirements
requires that test documentation be produced and maintained; however,
there is nothing to say that testing down at the programmers' level
can't be done XP style.
foo_ (331)
9/15/2004 12:48:39 AM
On Mon, 13 Sep 2004 23:40:13 -0600, "kurth" <kurth@avanade.com> wrote:
>   Rather than trying to figure things out as you
>go wouldn't it be best to train up developers on the business domain before
>jumping into a (possibly incomplete poorly defined) scenario and rewriting
>it over and over until you've captured all the requirements?

This was pretty much standard operating procedure twenty years ago.

The "problem" is that it takes several years to learn a domain, and in
the IT industry today, companies want to hire programmers off the
shelf and then fire them.  No time to learn a domain.

XP makes a virtue of necessity and trains developers to be exactly the
opposite of how things were, back in the day.

Just how you evaluate the wisdom of these various modern trends is a
matter for debate.

J.


9/15/2004 1:05:37 AM
"Phlip" <phlip_cpp@yahoo.com> wrote in message news:<FXv1d.335$6s7.185@newssvr16.news.prodigy.com>...
> kurth wrote:
> 
> > I often read that in XP you should not create a lot of documentation
> 
> Citation?

Are you kidding?

> > - that
> > its all temporary stuff to help one understand the system.
> 
> The best way to state the goal is this: Minimize the time between fully
> specifying a feature and deploying it. Small cycles can return business
> value by seeking requirements in order of business value.

Small cycles, at the XP level of granularity and in the absence of
real requirements analysis, are wasteful.  The simplistic
widgetization of XP ignores the synergies/dependencies between
features.  The much vaunted "prioritized list" of features is thus
arbitrary and has little real meaning, other than as a marketing
device.

Coding functionality to confirm that it's required amounts to little
more than trial-and-error.  It's fine to say you're doing "the most
important stuff", but in reality, there is a minimum set of
functionality required in any non-trivial application to make it
useful.  You may claim that with every feature you have to throw away
or modify (doing your little refactoring dance every time), you're
learning something.  But you're really just burning money.

Are you bragging that you can't think without a compiler? 

> 
> > First off I
> > applaud XP's desire to actually help the developers understand the domain
> > for the system they are building but I think this is the wrong way of
> > "helping them understand" for a couple of reasons:  Many domains such as
> > accounting, manufacturing, supply chain, are well understood with a wealth
> > of existing documentation.
> 
> Right. It's not the documentation that's the problem - it's the
> requirements, sitting on a shelf. That is exactly like inventory, sitting on
> a shelf, in manufacturing. It can only decay in value over time.

So does implemented functionality.  You just bury the decay better. 
That's the advantage procrastinators have.  People don't get to see
what the results would have been had you been more proactive.  They
just see you putting out the fires as they flare up, and assume you
know what you're doing.

A good priming scan of the problem domain and requirements will cause
more of the features you code to add value, without having to be
modified or replaced.  You can bury requirements churn in refactoring
and call it "agility" all you want.  You're building useless stuff to
(hopefully) get to the useful stuff, and trying to convince people
that this is better than trying to reason through the problem first.

> > Rather than trying to figure things out as you
> > go wouldn't it be best to train up developers on the business domain before
> > jumping into a (possibly incomplete poorly defined) scenario and rewriting
> > it over and over until you've captured all the requirements?
> 
> That's why the OnsiteCustomer is _onsite_ - so developers can learn what
> they need (including RTFM), parallel with implementing it.

If the Onsite Customer lacks the skills to properly analyze the
problem domain (simply having been a worker in the domain doesn't
deliver those skills) then they, like you, will simply be muddling
their way through.

> > My
> > biggest complaint however is that how do you go to the customer and say this
> > wasn't a requirement of the original agreement, you will have to increase
> > the project budget, extend the timeline, etc.. if the requirements have not
> > thoroughly been analyzed, documented, and signed?
> 
> (Reputedly) per Craig Larman's /Agile and Iterative Development: A Manager's
> Guide/, if you perform "big requirements up front", you invest in a huge
> risk of failure. That's why "more artifacts", in the "spend millions on
> requirements first" range, lead to so many spectacular software failures.

Going to the other extreme is likely to produce less showy, but
equally catastrophic, failures.  Just because something compiles and
runs and passes the tests doesn't mean it's useful or valuable.  The
best way to give a project a direction is to do enough upfront
analysis to know what it is you're building and what goals you're
trying to achieve.  You don't make stuff like that up as you're going
along, or you deserve what you end up with.

 
Cy
cycoe (74)
9/15/2004 2:18:36 AM
"kurth" <kurth@avanade.com> wrote in message news:<_LmdncZLBLO5GNvcRVn-iw@comcast.com>...
> I often read that in XP you should not create a lot of documentation - that
> it's all temporary stuff to help one understand the system.  First off I
> applaud XP's desire to actually help the developers understand the domain
> for the system they are building but I think this is the wrong way of
> "helping them understand" for a couple of reasons:  Many domains such as
> accounting, manufacturing, supply chain, are well understood with a wealth
> of existing documentation.  Rather than trying to figure things out as you
> go wouldn't it be best to train up developers on the business domain before
> jumping into a (possibly incomplete poorly defined) scenario and rewriting
> it over and over until you've captured all the requirements?
> I'm not saying that's all bad; I would rather have developers understand the
> domain they are implementing even if it is piece by piece than to be handed
> a set of specifications (that are always missing something or just plain
> wrong) and like zombies do exactly what it says without question.  My
> biggest complaint however is that how do you go to the customer and say this
> wasn't a requirement of the original agreement, you will have to increase
> the project budget, extend the timeline, etc.. if the requirements have not
> thoroughly been analyzed, documented, and signed?
> 
> - Kurt

Responding to Kurt:

I agree with your statement regarding established industries and the
intelligence of becoming, in effect, a domain expert before setting
down requirements, designing, and--god forbid--coding. I have seen
this error first hand: committing way too aggressively to a schedule
and telling the software team to "go code", then finding out that systems
analysis is REALLY needed, switching gears, and spending as much as 6
times the original schedule to properly analyze the problem and become
truly knowledgeable about the domain.

Regarding "how to go to the customer", I would be glad to go into
detail about how to contain "scope creep", how should "derived
requirements" be handled, how to work issues with the customer, etc.
Email me directly and we can take it offline, if you like.

I share your comments, and suspicions, regarding XP, and I think this
excerpt from my latest article--due to appear in the next issue of PC AI
online--sums it up.

---------------------------------------------------------------------------

Methodologies involving joint application development (JAD) and
evolving prototypes have gained popularity over the last decade, most
notably with the popularization of Extreme Programming (XP).
Proponents of XP tout the advantages of "growing" a system through
rigorous testing and customer participation. Without question, the
benefits of close customer involvement and testing are enormous,
regardless of methodology. Yet while XP has proven it can work without
several of the cumbersome and labor-intensive artifacts of the
traditional SDLC, there are substantial risks inherent in any spiral
development methodology. When formal analysis and design are bypassed
in favor of a hyper-iterative system evolution strategy, what might
appear effective or ideal during one iteration may prove suboptimal--or
a real showstopper--in the final release. This is the "locally optimal,
globally suboptimal" situation that can often manifest itself during
later stages of a development project.

XP's emphasis on "user stories" and iteration is not very different
from earlier approaches involving use cases, and the insistence on
high customer involvement is a fundamental concept of JAD. Both use
cases and JAD have been proven effective for obvious reasons: use
cases help identify user interfaces and expected system behavior,
while customer involvement ensures the project is on track and
delivering the desired functionality. Considering XP's reliance on
these earlier techniques and methodologies, including pair
programming--where "buddies" share in the development and inspection of
code--XP's real contribution to RAD lies in its "test-first" approach
to development. Unlike the traditional approach, with its rigid
sequence of design, develop, integrate and test, XP encourages writing
both unit and integration test code before the actual system code.

In the absence of a full set of quality system requirements, spiral
development methodologies, such as XP, offer distinct advantages in
that they foster discovery of both required and desired functionality
as the system evolves. Moreover, there is always a working system, and
with every new release both customer and development organization
alike gain a sense of security, promise, and progress. But while "its
team-based approach works well for smaller projects, it scales up
poorly to larger projects--and places too little emphasis on analysis
and architectural design."
 
-------------------------------------------------------------------------
 
Brian S. Smith
http://www.leapse.com
techsupport@leapse.com
sales3288 (24)
9/15/2004 2:29:18 AM
On Mon, 13 Sep 2004 23:40:13 -0600, "kurth" <kurth@avanade.com> wrote:

>I often read that in XP you should not create a lot of documentation - that
>it's all temporary stuff to help one understand the system.

That's not quite right.  XP says that you should create no documents
that you aren't going to need, and need soon.  In other words, don't
write a bunch of documents that nobody is going to read for the next
five months (if ever).

On the other hand XPers will create any document that they find
immediately useful.  

>First off I
>applaud XP's desire to actually help the developers understand the domain
>for the system they are building but I think this is the wrong way of
>"helping them understand" for a couple of reasons:  Many domains such as
>accounting, manufacturing, supply chain, are well understood with a wealth
>of existing documentation.  Rather than trying to figure things out as you
>go wouldn't it be best to train up developers on the business domain before
>jumping into a (possibly incomplete poorly defined) scenario and rewriting
>it over and over until you've captured all the requirements?

Yes.  It would be good to train the developers in the domain.  XP does
not say you shouldn't do that.  XP also does not say that you should
write things over and over until you've captured all the requirements.

What XP says is that you should capture your requirements in short
iterations.  That you should have the customer (i.e. the domain
expert) write acceptance tests for the features to be delivered in the
next iteration.  That the customer should be communicating closely
with the developers -- in the same room -- so that requirements are
quickly clarified.

>I'm not saying that's all bad; I would rather have developers understand the
>domain they are implementing even if it is piece by piece than to be handed
>a set of specifications (that are always missing something or just plain
>wrong) and like zombies do exactly what it says without question.  

Of course.

>My
>biggest complaint however is that how do you go to the customer and say this
>wasn't a requirement of the original agreement, you will have to increase
>the project budget, extend the timeline, etc.. if the requirements have not
>thoroughly been analyzed, documented, and signed?

How do you do it *when* the requirements have been thoroughly
analyzed, documented, and signed?  You *know* that thoroughly
analyzing, documenting, and signing the requirements isn't going to
stop them from changing, or being wrong.

There is an amazing amount of peer-reviewed research that correlates
thoroughly analyzed, documented, and signed requirements with project
failure.  Indeed there are studies of thousands of projects that have
shown that when requirements are treated this way, far fewer than half
of them are ever actually used by the customer (i.e. more than half
the code is wasted).  Other research shows that only a tiny fraction
of those projects ever get used at all -- even though they meet the
requirements.

On the other hand, there is a lot of research correlating success
with projects that are released very early in a minimal form, and
evolved through a series of releases during which requirements are
constantly being gathered.


-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
unclebob2 (2724)
9/15/2004 4:46:31 AM
On 14 Sep 2004 19:29:18 -0700, sales@leapse.com (Brian S. Smith)
wrote:

>But while its
>team-based approach works well for smaller projects, it scales up
>poorly to larger projects--and places too little emphasis on analysis
>and architectural design.

I have found that agile methods like XP and Scrum scale very nicely
into the ~100 developer range.  As for analysis and architectural
design, these teams are doing more of that now than they used to do
when they did waterfall.  

Analysis and architectural design are daily exercises in an Agile
environment.  They are not simply done up front.  They are done *all
the time*.  This can't be stressed enough.  Some people envision XP
teams as a bunch of wildly hacking coders.  No.  The best XP teams are
those that take a very deliberate and serious view of design and
architecture, and invest a great deal of time in it *every day*.  The
fact that they write code every day does not diminish the design and
architecture work they do.



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
unclebob2 (2724)
9/15/2004 4:59:21 AM
<rant>Any successful project, and therefore methodology, *must identify risk,
plan for it, and continually monitor for signs that it is occurring*.  XP
does address some important risks -- poor quality product, communication
issues, lack of customer feedback -- with methods to mitigate them, such as
TDD, continual integration, frequent releases, and pair programming.  But to
even suggest there is a chance we don't need any documentation is probably to
misunderstand why most of it is being created in the first place, and to deny
the lessons learned over and over by nearly every project that failed due to
poorly defined requirements, scope creep, impossible-to-implement features,
solving the wrong problem, vendor dependency, internal politics, or
insufficient training, to name a few.  Not every project has every risk;
however, no matter the size of the project, if it requires time and money
*there will be risk*, and there *must be* some *essential* documentation,
such as a scope document (*every single project* I have ever worked on has
had scope creep), a risk mitigation plan (it could be that the risks are
small and/or few, but *unless you identify them you cannot mitigate them* --
scope creep being one of them, thus the need for the scope document), and a
handful of others.  I will agree that large amounts of documents can be
costly to produce, and there is a point where one must say enough is enough.
I'm all for the argument that too much can be too much, but I cannot think of
a single instance of a project that has failed solely due to an overwhelming
amount of unused documentation; *there is always at least one other
improperly mitigated risk that was or should have been known and was not
addressed.*  Any methodology that warns of the "risk of creating too much
documentation" without providing an exhaustive list of when and why each
document is or is not necessary, and/or an alternate mitigation plan, is
inviting far greater risks to go undetected or improperly mitigated, and is
building the possibility of real failure into the methodology to save a few
bucks on docs.  I think perhaps this is why I have not infrequently heard the
statement "XP is only suitable for small projects where it doesn't really
matter anyway."  </rant>

The project I am currently working on is nearly 2 million lines of code.
The timeline is very important and missed deadlines can cost millions in
penalties.  A lot of documentation was produced up front, much of which has
many of the symptoms mentioned -- very out of date, hard to find, etc.
However, I still find this documentation valuable: strict timelines make
even the smallest change orders potentially very costly and therefore risky.
We have many on-site customers and perform continual integration (every
checkin to VSS triggers a new build and test run -- the build server never
sleeps -- which is deployed first to a test environment, twice a week to a
user acceptance test environment, and eventually into production).  We have
unit tests for every component and dozens of testers who execute written
test scripts against the actual applications.  TDD and continual integration
have been very valuable; however, I don't believe an on-site customer has
anything to do with scope control -- they only want to add more features.
Continual refactoring is okay at the top levels, and localized changes are
okay in the lower levels, but an abstraction that has 100 thousand lines of
code dependent on it... *if it works, it stays*, no matter how much you
dislike it (if it's broken then you have no choice).  That's just how it is.
I think anyone who has worked on a project of any truly significant size
will agree with me: you'll break hundreds of tests and you'll be working
late until every single one is fixed!  This is why it is so important on a
large project to create detailed specifications.  For small projects a few
high-level requirements may be sufficient, but if you have multiple teams
working on different scenarios, they can come up with solutions that have an
inconsistent view of the common details.  When you can spend a few days
rewriting several hundred lines of code it may not be that important, and
even small systems quickly grow to tens of thousands of lines -- that may
still be okay, since not all of them need to change -- but in a much larger
system the details are very difficult to change.  They are, after all, at
the lowest levels of the system, and having a shared understanding can be
critically important:

We recently discovered a bug in one of the database functions:

-- the buggy version (UNION silently removes duplicate rows):
select ... from T1
union
select ... from T2

-- what it should have been (UNION ALL keeps every row):
select ... from T1
union all
select ... from T2
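
For anyone who hasn't been bitten by this: UNION deduplicates rows across
the two SELECTs, while UNION ALL keeps every row.  A throwaway Python
sqlite3 sketch (toy tables, obviously not our schema) makes the difference
visible:

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE T1 (id INTEGER);
    CREATE TABLE T2 (id INTEGER);
    INSERT INTO T1 VALUES (1), (2);
    INSERT INTO T2 VALUES (2), (2);
""")

# UNION deduplicates across both SELECTs: repeated rows vanish.
print(sorted(con.execute("SELECT id FROM T1 UNION SELECT id FROM T2")))
# -> [(1,), (2,)]

# UNION ALL preserves every row from both tables.
print(sorted(con.execute("SELECT id FROM T1 UNION ALL SELECT id FROM T2")))
# -> [(1,), (2,), (2,), (2,)]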

It's a bit more complicated than that (sorry I can't be more specific), but
essentially that's the core of it.  Our system is logically divided into two
almost independent systems with a shared infrastructure and a couple of
touch points.  System 2 was never concerned with T2; it was nearly
irrelevant to what that system did.  Not a single one of their scenarios
involved T2, and anything involving T1 was part of System 1.  System 2,
however, navigated through the subtype from one entity to another entity
required by one of the touch points.  When we found a bug where System 2 did
not perform as expected, the bug was reassigned, and apparently they
determined it was because there was a missing record in T1 and conveniently
stuffed one in there (and created a conversion script to create them); when
tests broke in System 1, some developers somehow found a way to code around
it.  When the converted data was actually introduced to the system, this
function was performing badly, and the union was changed to union all, as it
should be.  At this point tests started breaking all over the place:
duplicate records appeared on tons of views, and it was obvious System 2
wasn't working with anything related to the T2 records.  When the change
from union to union all was found, questions started being asked, and it was
determined that the T1 records didn't belong there in the first place... so
the records (and the conversion script) were removed, thinking we would just
have to fix whatever issues arose.  This fixed nearly all the broken tests
except two that depended on System 2.  They had to refactor thousands of
lines of code that were incorrectly dependent on T1 rather than the subtype,
all to solve *two critical* test cases that won't work if the union is put
back in.  I'm sure many will argue the tests *should* have caught this
sooner, but with such a small number of scenarios involving it, a bug in a
database function that hid the underlying problem, and the introduction of
bad data when necessary, it's hard to see how it could have been caught
earlier.  Mostly I think it was a communication breakdown (I know,
pair programming -- try that in a multi-site environment).  I don't think
more or fewer specifications would have prevented this; without them,
however, I think these kinds of errors would have been far more common,
*especially* when there are communication issues (as nearly all projects
have).  If nothing else, they can't say no one told them.
Thankfully I'm not on System 2!

I totally agree with TDD and continual integration; I can't say anything
about pair programming because I haven't done much of it.  These are
excellent practices to mitigate the risk of project failure due to poor
quality, lack of customer acceptance, and some others.  However, <rant>you
are going to have to have a lot more supporting documentation to say that
there is a serious risk of failure due solely or primarily to creating too
much documentation (as opposed to just a waste of money on unused
documents).</rant>


>There is an amazing amount of peer-reviewed research that correlates
>thoroughly analyzed, documented, and signed requirements with project
>failure.  Indeed there are studies of thousands of projects that have
>shown that when requirements are treated this way, far fewer than half
>of them are ever actually used by the customer. (i.e. more than half
>the code is wasted).  Other research shows that only a tiny fraction
>of those projects ever get used at all -- even though the meet the
>requirements.

<rant>There are surveys that show correlations between practically anything
and everything, such as cancer, yet many have been proven incorrect...
If you look at all of the projects that have failed and have signed
documents, I'll guarantee you there will not be *just* a few issues
involved, but many.  Large projects have an exceptional amount of risk and
politics involved, which always makes things messy.  There are always users
who hate change and will refuse to use the system, and always some early
adopters who will sponsor it, so that statement doesn't surprise me in the
least.  Most of these large projects are not even the effort of a single
company -- each is in it for profit.  Not getting paid is a much larger
concern than whether the customer actually uses the system; provided the
requirements have been documented and signed, the customer is contractually
obligated.  We don't want the customer not to use it any more than they want
to spend a lot of money on a system they don't use... but it is poorly
defined requirements that are the most common reason for this!  Enterprise
software is unbelievably complicated and there are thousands of potential
scenarios that need to be explored; failure to do this results in very, very
serious scope creep.  Most software companies, however, do not know how to
do this properly, and rather than placing the risk on the customer -- who
actually has the option to pull the plug when costs rise higher than
expected -- they sign a set of requirements they cannot deliver, and thus
continue to attempt to implement the impossible until they have no choice
but to admit defeat, deliver poor quality software the users despise, fail
to meet all the requirements, sabotage other vendors so they will fail
first, etc... just so they can meet the contractual obligations they should
never have signed in the first place.</rant>

Anyways I think I'm done now :)

- Kurt


kurth (11)
9/15/2004 8:31:05 AM
On Wed, 15 Sep 2004 02:31:05 -0600, "kurth" <kurth@avanade.com> wrote:

>Any successful project, and therefore methodology, *must identify risk,
>plan for it, and continually monitor for signs that it is occurring*.

Agreed.

>XP does
>address some important risks -- poor quality product, communication issues,
>lack of customer feedback -- with methods to mitigate them, such as TDD,
>continual integration, frequent releases, and pair programming,

Agreed.

>but to even
>suggest there is a chance we don't need any documentation is probably to
>misunderstand why most of it is being created in the first place, and to
>deny the lessons learned over and over by nearly every project that failed
>due to poorly defined requirements, scope creep, impossible-to-implement
>features, solving the wrong problem, vendor dependency, internal politics,
>or insufficient training, to name a few.

The Agile Movement was started by a group of seasoned experts who were
concerned about waste.  The agile methods, like XP, do not suggest
that we don't need documentation.  Rather they suggest that documents
should not be produced unless they are needed, and needed soon.  The
advice is often to defer documentation until the need for that
documentation is unambiguous, and the information to be documented is
clear.

Indeed, Agile methods like XP demand certain forms of documentation,
typically in the form of tests.  Some folks pooh-pooh this as
non-documentation, but I beg to differ.  The acceptance tests that XP
style methods demand are formal, readable, and executable.  They
define the requirements in unambiguous and repeatable terms.
Similarly, the unit tests written in a TDD style are formal and
repeatable documents written about the production code, *before* that
production code is written.  They are documents of design and intent.
The fact that they are also code only enhances their formality and
accuracy.
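
As a minimal sketch of what that can look like -- the fare rules and names
below are invented for illustration, not from any actual project -- a
customer-authored table of examples doubles as specification and test:

import unittest

# Customer-authored examples: each row is (age, expected fare).
# The table reads as documentation; running it verifies the system.
FARE_EXAMPLES = [
    (30, 2.50),  # adults pay the flat fare
    (8,  1.25),  # children travel half price
    (70, 0.00),  # seniors ride free
]

def compute_fare(age):
    base = 2.50  # hypothetical flat fare
    if age >= 65:
        return 0.0
    if age < 12:
        return base / 2
    return base

class FareAcceptanceTest(unittest.TestCase):
    def test_examples(self):
        for age, expected in FARE_EXAMPLES:
            self.assertEqual(compute_fare(age), expected, msg=age)

if __name__ == "__main__":
    unittest.main()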

Clearly not all documentation can be of this highly formal and
repeatable kind.  Some is going to have to be prose, or diagrams.  XP,
and the Agile Methods have no problem with this.  They simply suggest
that these documents be written as they are needed, and when the
information is most accurate.

>Not every project has every risk; however, no matter the size of the
>project, if it requires time and money *there will be risk*, and there
>*must be* some *essential* documentation, such as a scope document (*every
>single project* I have ever worked on has had scope creep), a risk
>mitigation plan (it could be that the risks are small and/or few, but
>*unless you identify them you cannot mitigate them* -- scope creep being
>one of them, thus the need for the scope document), and a handful of others.

Agile methods do not consider scope creep to be bad.  Indeed, the
basic premise is that scope is fuzzy in the early stages of a project,
and the only way to reduce that fuzziness is to make frequent releases
to the stakeholders so they can see the system in operation and
resolve their own internal indecision. 

Agile Methods like XP or Scrum always produce a scope document of
some kind.  In Scrum it's a backlog document.  In XP it's a deck of
story cards.  In each case, however, this document is designed to be
volatile -- to *embrace* scope creep and thereby manage it.

I suppose "creep" is not the right word.  A better word would be
resolution.  Early on the scope is not resolved.  From iteration to
iteration and release to release the resolution of the scope
increases.

>I will
>agree that large amounts of documents can be costly to produce, and there
>is a point where one must say enough is enough.

My own view is that *every* document must be challenged.  Only those
that are clearly necessary should be created; and even then their
creation should be deferred until the need becomes imminent.

>I'm all for the argument that
>too much can be too much, but I cannot think of a single instance of a
>project that has failed solely due to an overwhelming amount of unused
>documentation; *there is always at least one other improperly mitigated
>risk that was or should have been known and was not addressed.*

I have experienced many projects that were canceled long before a line
of code was produced.  Millions of dollars were spent on the up-front
documents demanded by the process and yet no user ever saw even the
tiniest bit of the system execute.

>Any
>methodology that warns of the "risk of creating too much documentation"
>without providing an exhaustive list of when and why each document is or
>is not necessary, and/or an alternate mitigation plan, is inviting far
>greater risks to go undetected or improperly mitigated, and is building
>the possibility of real failure into the methodology to save a few bucks
>on docs.

I disagree.  The risk of waste is huge.  Indeed the risk of waste is
probably the single largest risk in any project.  Years of research
have shown that software projects generate more waste than usefulness,
by a huge margin.   One such study of $37 Billion worth of projects
showed that 46% "so egregiously did not meet the real needs (although
they met the specifications) that they were never used."  Another 20%
required extensive rework to meet the true needs rather than the
specifications.  Another study showed that only 7% of the developed
features were "always used", another 13% were used "often", a full
45% were never used, and the remaining 35% were used "sometimes" or
"rarely".

Another way to look at that is that about half the budget of every
project produced features that were useless. That kind of waste is the
overriding risk that we want to get under control. 

>I think perhaps
>this is why I have not infrequently heard the statement "XP is only
>suitable for small projects where it doesn't really matter anyway."

This belies what's going on in the industry.  Many large projects that
are mission critical are transitioning to agile techniques including
XP.

>The project I am currently working on is nearly 2 million lines of code.

I'm consulting for a group right now that has 70 million and needs to
be replaced.  They are convinced that the only way they can succeed is
to use an agile approach.  They've tried twice before using less
iterative approaches, and experienced some very high cost failures.

Let me quote from the process document for the production of the Space
Shuttle software:

"Due to the size, complexity, and evolutionary nature of the program
it was recognized early that the [waterfall] software development
cycle could not be strictly applied and still satisfy the objectives.
However, an implementation approach was devised [] which met the
objectives by applying the [waterfall] cycle to small elements [six
weeks in size] of the overall software package on an iterative basis."

In short, the more complex a project gets, the more iterative it needs
to be, and that means iteration through *all* aspects of the project
including requirements, documents, code, tests, etc.    


-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
unclebob2 (2724)
9/15/2004 2:32:54 PM
On Wed, 15 Sep 2004 02:31:05 -0600, "kurth" <kurth@avanade.com> wrote:

>however I don't believe an on-site customer has anything to do
>with scope control -- they only want to add more features.

If the on-site customer is not responsible for budget, then you are
right, all they want is more features.  If, however, the on-site
customer is given a budget (usually in points or a percentage of
weekly velocity) then they'll start jealously controlling scope.
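
A sketch of the arithmetic (hypothetical numbers, not a prescription): give
the customer a weekly point budget derived from measured velocity, and
scope control falls out of the fact that an over-budget pick gets rejected:

# Hypothetical planning-game budget check: the on-site customer
# "spends" points up to a fraction of last week's velocity.
def fits_budget(picked_points, velocity, reserve=0.2):
    return sum(picked_points) <= velocity * (1 - reserve)

velocity = 20  # points actually finished last week
print(fits_budget([5, 8, 3], velocity))     # True:  16 points fit
print(fits_budget([5, 8, 3, 2], velocity))  # False: drop something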



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
unclebob2 (2724)
9/15/2004 2:52:40 PM
On Wed, 15 Sep 2004 02:31:05 -0600, "kurth" <kurth@avanade.com> wrote:

>Continual
>refactoring is okay at the top levels, and localized changes are okay in
>the lower levels, but an abstraction that has 100 thousand lines of code
>dependent on it... *if it works, it stays*, no matter how much you dislike
>it (if it's broken then you have no choice).  That's just how it is.  I
>think anyone who has worked on a project of any truly significant size
>will agree with me: you'll break hundreds of tests and you'll be working
>late until every single one is fixed!

That is certainly the fear.  On the other hand, I am currently making
a sweeping change to a system, and I'm managing to keep all the tests
passing while I do it.  I've been making this change for some weeks
now, and we have actually released the project while this change was
in mid-stride.

You *can* change the 100,000 line abstraction.  You don't have to do
it all at once, and you don't have to break all the tests.  You *do*
have to be creative about how you do it.  Doing it is often
worthwhile.
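
One common shape for that kind of creativity is a parallel change (expand,
migrate, contract).  A hedged sketch with invented names -- not Bob's actual
system -- of how the old entry point keeps every caller and test passing
while the replacement grows:

# Expand: the replacement lives alongside the old abstraction.
def total_price_v2(items, tax_rate):
    subtotal = sum(qty * price for qty, price in items)
    return subtotal * (1 + tax_rate)

# Migrate: the heavily-depended-on entry point becomes a thin
# delegate, so its thousands of dependent lines (and their tests)
# keep passing while call sites move over a few at a time.
def total_price(items):
    return total_price_v2(items, tax_rate=0.08)  # legacy fixed rate

# Contract: once nothing calls total_price() any more, delete it.
assert total_price([(2, 10.0)]) == total_price_v2([(2, 10.0)], 0.08)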



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
unclebob2 (2724)
9/15/2004 2:55:08 PM
On Wed, 15 Sep 2004 02:31:05 -0600, "kurth" <kurth@avanade.com> wrote:

>This is why it is so important on a large project to create detailed
>specifications.  For small projects a few high-level requirements may be
>sufficient, but if you have multiple teams working on different scenarios,
>they can come up with solutions that have an inconsistent view of the
>common details.  When you can spend a few days rewriting several hundred
>lines of code it may not be that important, and even small systems quickly
>grow to tens of thousands of lines -- that may still be okay, since not
>all of them need to change -- but in a much larger system the details are
>very difficult to change; they are, after all, at the lowest levels of the
>system, and having a shared understanding can be critically important:

This is a realistic fear.  However, I've found that it is not a law.
By careful design, and careful refactoring of the design, we can make
it possible to migrate the details of a system without massive
thrashing and changes.  It takes care, and deliberate attention, but
is a *lot* cheaper than leaving a mess to fester and rot the system.
IMHO.





-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
unclebob2 (2724)
9/15/2004 2:57:40 PM
On Wed, 15 Sep 2004 02:31:05 -0600, "kurth" <kurth@avanade.com> wrote:

><rant>you are going to have to have a lot more supporting documentation to
>say that there is a serious risk of failure due solely or primarily to
>creating too much documentation (as opposed to just a waste of money on
>unused documents).</rant>

In 1987 Fred Brooks was commissioned to evaluate the DOD's software
process, 2167A.  In his evaluation he writes: "As drafted, it
continues to reinforce exactly the document-driven, specify-then-build
approach that lies at the heart of so many DoD software problems." 

Indeed, there is a plethora of research that shows a huge risk
associated with documentation-driven, specify-then-build approaches.
There are some 50 pages summarizing that research in Craig Larman's
recent book, "Agile and Iterative Development: A Manager's Guide".



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
unclebob2 (2724)
9/15/2004 3:04:51 PM
On Wed, 15 Sep 2004 02:31:05 -0600, "kurth" <kurth@avanade.com> wrote:

><rant>There are surveys that show correlations between practically
>anything and everything, such as cancer, yet many have been proven
>incorrect...

You are missing the point.  There is a massive amount of peer reviewed
research that shows that document-driven, specify-then-build
approaches are horrendously risky.  There is virtually *no* peer
reviewed research that shows the opposite.  The asymmetry is striking.

-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
unclebob2 (2724)
9/15/2004 3:06:34 PM
"Robert C. Martin" <unclebob@objectmentor.com> wrote in message
news:4bigk0d5sk2b76cord5n4ek87mbolgpajh@4ax.com...
>
> Agile methods do not consider scope creep to be bad.
> Indeed, the basic premise is that scope is fuzzy in the
> early stages of a project, and the only way to reduce
> that fuzziness is to make frequent releases to the
> stakeholders so they can see the system in operation
> and resolve their own internal indecision.

Some agile methods apply time-boxing to the scope of the project.  The
requirements can and will change, even dramatically, but the customer and
developer must keep the overall magnitude of the scope constant.

-- 
Roger L. Cauvin
nospam_roger@cauvin.org (omit the "nospam_" part)
Cauvin, Inc.
http://www.cauvin-inc.com


roger (257)
9/15/2004 3:14:08 PM
kurth wrote:

> Any successful project, and therefore methodology, *must identify risk,
> plan for it, and continually monitor for signs that it is occurring*.  XP
> does address some important risks -- poor quality product, communication
> issues, lack of customer feedback -- with methods to mitigate them, such
> as TDD, continual integration, frequent releases, and pair programming.
> But to even suggest there is a chance we don't need any documentation is
> probably to misunderstand why

The team decides when and what to document. How does that lead to "suggest
there is a chance we don't need any documentation"?

There's also a chance we don't need any code. Party!

> most of it is being created in the first place, and to deny the lessons
> learned over and over by nearly every project that failed due to poorly
> defined requirements, scope creep, impossible-to-implement features,
> solving the wrong problem, vendor dependency, internal politics, or
> insufficient training, to name a few.  Not every project has every risk;
> however, no matter the size of the project, if it requires time and money
> *there will be risk*, and there *must be* some *essential* documentation,
> such as a scope document (*every single project* I have ever worked on
> has had scope creep), a risk mitigation plan (it could be that the risks
> are small and/or few, but *unless you identify them you cannot mitigate
> them* -- scope creep being one of them, thus the need for the scope
> document), and a handful of others.

The best risk mitigator is feedback from tests. XP suggests if you indeed
need code, you need tests. You also need anything else the team agrees on.

> I will agree that large amounts of documents can be costly to produce,
> and there is a point where one must say enough is enough.  I'm all for
> the argument that too much can be too much, but I cannot think of a
> single instance of a project that has failed solely due to an
> overwhelming amount of unused documentation; *there is always at least
> one other improperly mitigated risk that was or should have been known
> and was not addressed.*  Any methodology that warns of the "risk of
> creating too much documentation" without providing an exhaustive list of
> when and why each document is or is not necessary, and/or an alternate
> mitigation plan, is inviting far greater risks to go undetected or
> improperly mitigated, and is building the possibility of real failure
> into the methodology to save a few bucks on docs.  I think perhaps this
> is why I have not infrequently heard the statement "XP is only suitable
> for small projects where it doesn't really matter anyway."  </rant>

XP was invented by consultants who observed that big _design_ up front was
not working, and they tried emergent design matched with incremental
requirements discovery. The studies showing that big _requirements_ up front
causes such problems came later.

Again: Where is the citation you read that implied an XP project might not
document anything? I'm aware they are out there; I'm just curious which one.

> The project I am currently working on is nearly 2 million lines of code.
> Timeline is very important and missed deadlines can cost millions in
> penalties.  A lot of documentation was produced up front, much of which
> has many of the symptoms mentioned (very outdated, hard to find, etc.);
> however, I still find this documentation valuable: strict timelines make
> even the smallest change orders potentially very costly and therefore
> risky.

How many million lines of test do you have?

> We have many on-site customers and perform continual integration: every
> checkin to VSS triggers a new build and test run (the build server never
> sleeps), which is deployed first to a test environment, twice a week to a
> user acceptance test environment, and eventually into production.  We have
> unit tests for every component and dozens of testers who execute written
> test scripts using the actual applications.

Okay. Add pair programming and a common workspace, and you are doing XP.
Note nobody is obeying every line of that documentation; they are just using
it for research.

> TDD and continual integration have been very valuable; however, I don't
> believe an on-site customer has anything to do with scope control, as they
> only want to add more features.

Do they work _onsite_, with the team, and use that position to cancel
line-items of features that they figure nobody wants?

> Continual refactoring is okay at the top levels, and for localized changes
> in the lower levels, but an abstraction that has 100 thousand lines of
> code dependent on it... *if it works, it stays* no matter how much you
> dislike it (if it's broken then you have no choice); that's just how it
> is,

If that code had grown via TDD, it would be the easiest and safest to
change, not the most risky.

> I think anyone who has worked on a project of any truly significant size
> will agree with me: you'll break hundreds of tests and you'll be working
> late until every single one is fixed!

Which is why you run them after the fewest possible edits.
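The loop is tiny: a failing test, the fewest edits to pass it, run again.
A minimal sketch of that rhythm (Python's unittest; the Stack example is
hypothetical, not from Kurt's project):

    import unittest

    class Stack:
        """Grew one small edit at a time, each edit guarded by the tests."""
        def __init__(self):
            self._items = []

        def push(self, item):
            self._items.append(item)

        def pop(self):
            return self._items.pop()

    class StackTest(unittest.TestCase):
        # Written *before* the code above; it documents the intended design.
        def test_pop_returns_the_last_item_pushed(self):
            stack = Stack()
            stack.push(42)
            self.assertEqual(42, stack.pop())

    if __name__ == "__main__":
        unittest.main()  # run after every few edits

With hundreds of such tests running in seconds, a red bar implicates the
last two or three lines you touched, not a month of work.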

> This is why it is so important on a large project to create detailed
> specifications.  For small projects a few high level requirements may be
> sufficient, but if you have multiple teams working on different scenarios
> they can come up with solutions that have an inconsistent view of the
> common details.  When you can spend a few days to rewrite several hundred
> lines of code it may not be that important, but even small systems quickly
> grow to tens of thousands of lines.  That may still be okay, as not all of
> them need to change, but in a much larger system details are very
> difficult to change (they are, after all, at the lowest levels of the
> system) and having a shared understanding can be critically important:
>
> We recently discovered a bug in one of the database functions
> select ... from T1
> union
> select ... from T2
> should have been:
> select ... from T1
> union all
> select ... from T2
>
> It's a bit more complicated than that (sorry I can't be more specific),
> but essentially that's the core of it.  Our system is logically divided
> into two almost independent systems with a shared infrastructure and a
> couple of touch points.  System 2 was never concerned with T2; it was
> nearly irrelevant to what that system did, and not a single one of their
> scenarios involved T2.  Anything involving T1 was part of System 1.
> System 2, however, navigated through the subtype from one entity to
> another entity required by one of the touch points.  When we found a bug
> where System 2 did not perform as expected, the bug was reassigned;
> apparently they determined it was because there was a missing record in T1
> and conveniently stuffed one in there (and created a conversion script to
> create them).  When tests broke in System 1, some developers somehow found
> a way to code around it.  When the converted data was actually introduced
> to the system, this function was performing badly and the union was
> changed to union all, as it should be.  At this point tests started
> breaking all over the place, duplicate records appeared on tons of views,
> and it was obvious System 2 wasn't working with anything related to the T2
> records.  When the change from union to union all was found, questions
> started being asked, and it was determined that the T1 records didn't
> belong there in the first place... so the records (and conversion script)
> were removed, thinking we would just have to fix whatever issues arose.
> This fixed nearly all the broken tests except two that depended on
> System 2.  They had to refactor thousands of lines of code that were
> incorrectly dependent on T1 rather than the subtype, all to solve *two
> critical* test cases that won't work if the union is put back in.  I'm
> sure many will argue the tests *should* have caught this sooner, but with
> such a small number of scenarios involving it, a bug in a database
> function that hid the underlying problem, and the introduction of bad data
> as necessary, it's not hard to see how it was missed.  Mostly I think it
> was a communication breakdown (I know, pair-programming -- try that in a
> multi-site environment).

When that happens you can often clone the misbehaving function, and let the
system that needs the bug fix use the new version. Then mark the old version
as deprecated, and incrementally remove it.
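To make the union example concrete, here is a minimal sketch of that move
(Python with sqlite3; the tables and view names are hypothetical stand-ins
for the real, confidential schema):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE T1 (id INTEGER, name TEXT);
        CREATE TABLE T2 (id INTEGER, name TEXT);
        INSERT INTO T1 VALUES (1, 'widget');
        INSERT INTO T2 VALUES (1, 'widget');  -- a legitimate duplicate row

        -- The old behavior, cloned and kept only for the system still
        -- coded against it (deprecated, to be removed incrementally):
        CREATE VIEW all_records_deprecated AS
            SELECT id, name FROM T1
            UNION          -- silently collapses duplicate rows (the bug)
            SELECT id, name FROM T2;

        -- The fixed behavior, for the system that needs it now:
        CREATE VIEW all_records AS
            SELECT id, name FROM T1
            UNION ALL      -- keeps duplicates, as the requirement intended
            SELECT id, name FROM T2;
    """)

    print(con.execute("SELECT * FROM all_records_deprecated").fetchall())
    # [(1, 'widget')]                -- UNION hid the second record
    print(con.execute("SELECT * FROM all_records").fetchall())
    # [(1, 'widget'), (1, 'widget')] -- both rows survive under UNION ALL

Each dependent system migrates to the new view on its own schedule, and the
deprecated one dies quietly once nothing selects from it.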

> I don't think more or fewer specifications would have prevented this;
> without them, however, I think these kinds of errors would have been far
> more common, *especially* when there are communication issues (as nearly
> all projects have).  If nothing else, they can't say no one told them.
> Thankfully I'm not on System 2!
>
> I totally agree with TDD and continual integration; I can't say anything
> about pair-programming because I haven't done much of it.

All XP practices (and a few more) reduce the importance of early
documentation.

In your case, are your customer/functional/acceptance tests in a literate
format that the customer team can directly author?

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
9/15/2004 4:50:13 PM
Robert C. Martin <unclebob@objectmentor.com> wrote in message news:<4bigk0d5sk2b76cord5n4ek87mbolgpajh@4ax.com>...
> On Wed, 15 Sep 2004 02:31:05 -0600, "kurth" <kurth@avanade.com> wrote:
> 
> >Any successful project and therefore methodology *must identify risk, plan
> >for it, and continually monitor for signs that it is occuring*.  
> 
> Agreed.
> 
> >XP does
> >addresses some important risks, poor quality product, communication issues,
> >lack of customer feedback with and methods to mitigate such as TDD, and
> >continual integration, frequent releases, pair programming, 
> 
> Agreed.
> 
> >but to even suggest there
> >is a chance we don't need any documentation probably doesn't understand why
> >most
> >of it is being created in the first place and denies the lessonsed learned
> >repeadly over and over by nearly every project that failed due poorly
> >defined requirements, scope creep, impossible to implement, solves wrong
> >problem, vendor dependency, internal politics, insufficient training, to
> >name a few.  
> 
> The Agile Movement was started by a group of seasoned experts who were
> concerned about waste.  The agile methods, like XP, do not suggest
> that we don't need documentation.  Rather they suggest that documents
> should not be produced unless they are needed, and needed soon.  The
> advice is often to defer documentation until the need for that
> documentation is unambiguous, and the information to be documented is
> clear.
> 
> Indeed, Agile methods like XP demand certain forms of documentation,
> typically in the form of tests.  Some folks poo-poo this as
> non-documentation, but I beg to differ.  The acceptance tests that XP
> style methods demand are formal, readable, and executable.  They
> define the requirements in unambiguous and repeatable terms.
> Similarly, the unit tests written in a TDD style are formal and
> repeatable documents written about the production code, *before* that
> production code is written.  They are documents of design and intent.
> The fact that they are also code only enhances their formality and
> accuracy.
> 
> Clearly not all documentation can be of this highly formal and
> repeatable kind.  Some is going to have to be prose, or diagrams.  XP,
> and the Agile Methods have no problem with this.  They simply suggest
> that these documents be written as they are needed, and when the
> information is most accurate.
> 
> >Not every project has every risk, however no matter what the
> >size of the project if it requires time and money *there will be risk* and
> >there *must be* some *essential* documentation such as scope document
> >(*every single project* I have ever worked on has had scope creep), a risk
> >mitigation plan (it could be that the risks are small and/or few but *unless
> >you identify them you cannot mitigate them*, scope creep being one of them
> >thus the need for the scope document), and a handful of others.  
> 
> Agile methods do not consider scope creep to be bad.  Indeed, the
> basic premise is that scope is fuzzy in the early stages of a project,
> and the only way to reduce that fuzziness is to make frequent releases
> to the stakeholders so they can see the system in operation and
> resolve their own internal indecision. 
> 
> Agile Methods like XP, or Scrum always produce a scope document of
> some kind.  In Scrum it's a backlog document.  In XP it's a deck of
> story cards.  In each case, however, this document is designed to be
> volatile -- to *embrace* scope creep and thereby manage it.
> 
> I suppose "creep" is not the right word.  A better word would be
> resolution.  Early on the scope is not resolved.  From iteration to
> iteration and release to release the resolution of the scope
> increases.
> 

I agree: scope creep must not be prevented but embraced, with
well-communicated procedures to address it, such as change request forms
(documentation).  When the customer brings you a few hundred more
stories than you had before you started development, are you going to
just say "OK, great, more refactoring" and eat the extra development
cost?


> >I will
> >agree that large amounts of documents can be costly to produce and there is
> >a point where one must say enough is enough.  
> 
> My own view is that *every* document must be challenged.  Only those
> that are clearly necessary should be created; and even then their
> creation should be deferred until the need becomes imminent.
> 
> >I'm all for the argument that
> >too much can be too much, but I cannot think of a single instance of a
> >project that has failed soley due to an overwellming amount of unused
> >documentation, *there is always at least one other imporperly mitigated risk
> >that was or should have been known and was not addressed.*   
> 
> I have experienced many projects that were canceled long before a line
> of code was produced.  Millions of dollars were spent on the up-front
> documents demanded by the process and yet no user ever saw even the
> tiniest bit of the system execute.

I don't necessarily consider every case of this to be "failure."  It
is quite possible, and not infrequent, that after spending time
analyzing all the requirements it became apparent that the system was
too risky to continue, and the plug was pulled before risking a far
larger investment in actual development activities, rather than waiting
until the issues became overwhelming and the project had to be cancelled,
or continued only because so much had already been invested.

> 
> >Any
> >methodology that suggests the 'risk of creating too much documentation'
> >without providing an exaustive list of when and why the document is or is
> >not
> >necessary and/or an alternate mitigation plan, is inviting far greater risks
> >to go undetected, improperly mitigated, and building the possiblity of real
> >failure into the methodology to save a few bucks on docs.  
> 
> I disagree.  The risk of waste is huge.  Indeed the risk of waste is
> probably the single largest risk in any project.  Years of research
> have shown that software projects generate more waste than usefulness,
> by a huge margin.   One such study of $37 Billion worth of projects
> showed that 46% "so egregiously did not meet the real needs (although
> they met the specifications) that they were never used."  Another 20%
> required extensive rework to meet the true needs rather than the
> specifications.  Another study showed that only 7% of the developed
> features were "always used".  Another 13% were used "often", a full
> 45% were never used, and the remaining 35% were used "sometimes" or
> "rarely".

It could be that the 35% of functionality used "sometimes" or "rarely"
handles events that occur only sometimes or rarely but are nonetheless
requirements.  Take a person changing his or her name: the system must
support it, but how often does it really occur?  I do agree there is
likely to be some completely unnecessary functionality in the end
result; business processes adapt and change, finding better ways to
accomplish their goals, or perhaps too many unexpected requirements led
to partial development of the functionality, which was not complete
enough, or too buggy, to actually meet the true business requirements
(despite the documented requirements).

> 
> Another way to look at that is that about half the budget of every
> project produced features that were useless. That kind of waste is the
> overriding risk that we want to get under control. 
> 
> >I think perhaps
> >this is why I have not infrequently heard the statement "XP is only sutible
> >for small projects where it doesn't really matter anyway." 
> 
> This belies what's going on in the industry.  Many large projects that
> are mission critical are transitioning to agile techniques including
> XP.
> 
> >The project I am currently working on is nearly 2 million lines of code.
> 
> I'm consulting for a group right now that has 70 million and needs to
> be replaced.  They are convinced that the only way they can succeed is
> to use an agile approach.  They've tried twice before using less
> iterative approaches, and experienced some very high cost failures.
> 
> Let me quote from the process document for the production of the Space
> Shuttle software:
> 
> "Due to the size, complexity, and evolutionary nature of the program
> it was recognized early that the [waterfall] software development
> cycle could not be strictly applied and still satisfy the objectives.
> However, an implementation approach was devised [] which met the
> objectives by applying the [waterfall] cycle to small elements [six
> weeks in size] of the overall software package on an iterative basis."
> 
> In short, the more complex a project gets, the more iterative it needs
> to be, and that means iteration through *all* aspects of the project
> including requirements, documents, code, tests, etc.    
>

I agree.  However, I'm not arguing for the Waterfall methodology; I'm
arguing against the 'risk of too much documentation.'  XP is not the
only alternative methodology, nor the only Agile method.  Frequent
releases and iterative development are essential to mitigating many of
these risks.  This is not a concept unique to XP.  Detailing every
requirement at the start definitely goes too far.  Function-by-function
pseudo-code goes too far; you might as well write and include the tests
at that point.  I disagree, however, that once you have this group of
requirements picked out you jump into the IDE and start coding (code or
tests).  All essential high-level requirements need to be tested for
feasibility, but aside from that there is no need to go into the
details of every single requirement from the start.  Requirements
should be prioritized and grouped to create many releases that each
meet a group of related requirements (it wouldn't be very useful if the
create method was implemented but there was no method to retrieve the
saved data); the release has to be usable, and some groupings of
functionality can take months to implement.  IMHO, tests might be
clearly readable to programmers, but they don't help the domain experts
validate your understanding of the abstractions.  OO thinking is not
like real-world thinking: Lawn.Mow(user)... shouldn't that be
User.Mow(lawn)? (see the toy sketch below)  After telling them that's
how it should be, they'll accept that you are crazy and refrain from
correcting you in the future. <kidding/> :)  Unless entity
relationships are very well understood, it would be beneficial to
create some relationship diagrams to define the constraints and such.
Overlap in scenarios also needs to be looked for and identified, or
you're going to end up with many inconsistent solutions (I'll create my
report in Excel, I'll use Word and bookmarks, I'll use ASP.NET,
etc.).
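A toy sketch of that Lawn.Mow(user) point, in Python (the classes are
obviously hypothetical):

    class User:
        # What a domain expert expects to read: the actor does the action.
        def mow(self, lawn):
            lawn.grass_height = 0

    class Lawn:
        # Where OO routing often puts it: on the object whose state changes.
        def __init__(self):
            self.grass_height = 10

        def mow(self, user):
            self.grass_height = 0  # reads as "the lawn mows, given a user"

    # Both calls cut the grass; only one reads like the business domain.
    User().mow(Lawn())
    Lawn().mow(User())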


- Kurt


> 
> -----
> Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
> Object Mentor Inc.            | blog:  www.butunclebob.com
> The Agile Transition Experts  | web:   www.objectmentor.com
> 800-338-6716   
> 
> 
> "The aim of science is not to open the door to infinite wisdom, 
>  but to set a limit to infinite error."
>     -- Bertolt Brecht, Life of Galileo
0
kurbylogic (51)
9/15/2004 6:59:36 PM
On 15 Sep 2004 11:59:36 -0700, kurbylogic@hotmail.com (Kurt) wrote:


>I agree, scope creep must not be prevented but embraced, with
>well-communicated procedures to address it, such as change request forms
>(documentation).

IMHO it would be a shame if change request forms were necessary.  I
agree that sometimes they are necessary, but I would work hard to make
it so that they weren't.  

>When the customer brings you a few hundred more
>stories than you had before you started development, are you going to
>just say "OK, great, more refactoring" and eat the extra development
>cost?

No, I'm going to tell him what impact those stories have on the
schedule and cost by estimating them and adjusting the plan to show
how and when they might be added.  I'll do that every week if he
brings me new stories every week.  I fully expect that he's going to,
and will be disappointed if he doesn't.

>> I have experienced many projects that were canceled long before a line
>> of code was produced.  Millions of dollars were spent on the up-front
>> documents demanded by the process and yet no user ever saw even the
>> tiniest bit of the system execute.
>
>I don't necessarily consider every case of this to be "failure."  It
>is quite possible, and not infrequent, that after spending time
>analyzing all the requirements it became apparent that the system was
>too risky to continue, and the plug was pulled before risking a far
>larger investment in actual development activities, rather than waiting
>until the issues became overwhelming and the project had to be
>cancelled, or continued only because so much had already been invested.

The more common case IMHO is that the business gets frustrated with
the amount of money being spent without anything but paper to show
for it.

>> In short, the more complex a project gets, the more iterative it needs
>> to be, and that means iteration through *all* aspects of the project
>> including requirements, documents, code, tests, etc.    
>>
>
>I agree.  However, I'm not arguing for the Waterfall methodology; I'm
>arguing against the 'risk of too much documentation.'  XP is not the only
>alternative methodology, nor the only Agile method.

Agreed.  

>Frequent releases and
>iterative development are essential to mitigating many of these risks.
> This is not a concept unique to XP.  

Agreed, and agreed.

>Detailing every requirement at
>the start definitely goes too far.  Function-by-function pseudo-code goes
>too far; you might as well write and include the tests at that
>point.  I disagree, however, that once you have this group of
>requirements picked out you jump into the IDE and start coding (code or
>tests).

XP says nothing about how one starts coding.  It does not say to "jump
into the IDE".  It also does not say to spend three days drawing UML.
It leaves that part up to the team.  Most teams find it useful to put
their heads together and plan out a design that will get them through
the iteration.   Often they'll draw their designs in UML on a
whiteboard, or document them on CRC cards.  Some teams don't draw the
designs, but just talk about them.  These design sessions occur at the
start of the iteration and throughout it as well.  Indeed, some XPers
may spend more time designing than non-XPers.

>All essential high-level requirements need to be tested for
>feasibility, but aside from that there is no need to go into the
>details of every single requirement from the start.

Agreed.

>Requirements
>should be prioritized and grouped to create many releases that each meet a
>group of related requirements (it wouldn't be very useful if the create
>method was implemented but there was no method to retrieve the saved
>data); the release has to be usable, and some groupings of functionality
>can take months to implement.

I would still break up the development into iterations of a week or
two.  And I'd try very hard to make each iteration deliver something
demonstrable to the business -- showing some kind of business value.

>IMHO, tests might be clearly readable to
>programmers but they don't help the domain experts validate your
>understanding of the abstractions.  

Certainly unit tests are not readable by the domain experts.  However,
acceptance tests are.  Indeed they are written by the domain experts.
Many teams make use of high level executable specification languages
for this purpose.  See www.fitnesse.org for an example.
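For example, a FIT-style decision table reads like a spreadsheet.  The
classic Division example from the FitNesse documentation looks roughly like
this (eg.Division is the stock example fixture; your own fixtures would
name your domain's inputs and expected outputs):

    |eg.Division|
    |numerator|denominator|quotient?|
    |10       |2          |5.0      |
    |12.6     |3          |4.2      |
    |100      |4          |25.0     |

When the table runs, the quotient? column turns green where the system
agrees and red where it doesn't.  A domain expert can add rows without
writing a line of code.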

>Unless
>entity relationships are very well understood it would be beneficial
>to create some relationship diagrams to define the constraints and such,

Agreed.


-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
9/15/2004 11:25:43 PM
Robert C. Martin <unclebob@objectmentor.com> wrote in message news:<9hmgk0lfveev4m45u523366hvcam0o67b0@4ax.com>...
> On Wed, 15 Sep 2004 02:31:05 -0600, "kurth" <kurth@avanade.com> wrote:
> 
> ><rant>There are surveys that show correlations between practically anything
> >and everything such as cancer, yet many have been proven incorrect...
> 
> You are missing the point.  There is a massive amount of peer reviewed
> research that shows that document-driven, specify-then-build
> approaches are horrendously risky. 

With all due respect, I find that one hard to swallow. That's like
saying the old adage "A problem well defined is a problem
half-solved" is a bunch of nonsense. See the STANDISH CHAOS REPORT from
the 90s. Programs suffering from poor requirement definition were
found to have failed, run over budget, or grossly missed their
schedules at an alarmingly high percentage. Granted, the old 2167A DoD
and FAA requirements guidelines are archaic and inefficient, but that's
really taking a polar extreme as an example. Virtually any project,
other than one that has little or no firm requirements to begin with,
will benefit IMMENSELY from a well-defined set of system
specifications. We can discuss what constitutes a solid requirement,
and what does not, but to claim that such a wealth of information
leads to greater program risk is, IMHO, just not so. Thanks,

-Brian

Brian S. Smith
http://www.leapse.com
techsupport@leapse.com

> There is virtually *no* peer
> reviewed research that shows the opposite.  The asymmetry is striking.
> 
> -----
> Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
> Object Mentor Inc.            | blog:  www.butunclebob.com
> The Agile Transition Experts  | web:   www.objectmentor.com
> 800-338-6716   
> 
> 
> "The aim of science is not to open the door to infinite wisdom, 
>  but to set a limit to infinite error."
>     -- Bertolt Brecht, Life of Galileo
0
sales3288 (24)
9/16/2004 1:06:16 AM
On 15 Sep 2004 18:06:16 -0700, sales@leapse.com (Brian S. Smith)
wrote:

>With all due respect, I find that one hard to swallow. That's like
>saying the old adage "A problem well defined, is a problem
>half-solved" is a bunch of nonesense. See the STANDISH CHAOS REPORT in
>the 90s. Programs suffering from poor requirement definition were
>found to have failed or run over budget, or grossly missed scheduled,
>at an alarmingly high percentage. Granted the old 2167A DoD and FAA,
>etc., requirements guidelines are archaic and inefficient, but that's
>really taking a polar extreme as an example. Virtually any project,
>other than one that has little or no firm requirements to begin with,
>will benefit IMMENSELY from a well-defined set of system
>specifications. We can discuss what constitutes a solid requirement,
>and what does not, but to claim that such a wealth of information
>leads to greater program risk is, IMHO, just not so. Thanks,

It may seem counterintuitive, but in fact it /is/ so. See Larman's
book for page after page of referenced detail.

-- 
Ron Jeffries
www.XProgramming.com
I'm giving the best advice I have. You get to decide if it's true for you.
0
ronjeffries2 (313)
9/16/2004 2:19:59 AM
Ronald E Jeffries wrote:

> Brian S. Smith wrote:
>
> >With all due respect, I find that one hard to swallow. That's like
> >saying the old adage "A problem well defined, is a problem
> >half-solved" is a bunch of nonesense. See the STANDISH CHAOS REPORT in
> >the 90s. Programs suffering from poor requirement definition were
> >found to have failed or run over budget, or grossly missed scheduled,
> >at an alarmingly high percentage. Granted the old 2167A DoD and FAA,
> >etc., requirements guidelines are archaic and inefficient, but that's
> >really taking a polar extreme as an example. Virtually any project,
> >other than one that has little or no firm requirements to begin with,
> >will benefit IMMENSELY from a well-defined set of system
> >specifications. We can discuss what constitutes a solid requirement,
> >and what does not, but to claim that such a wealth of information
> >leads to greater program risk is, IMHO, just not so. Thanks,
>
> It may seem counterintuitive, but in fact it /is/ so. See Larman's
> book for page after page of referenced detail.

Can you draw a distinction between up-front research and up-front
speculative planning? The former can't be bad.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
9/16/2004 3:43:06 AM
"Robert C. Martin" <unclebob@objectmentor.com> wrote in message 
news:9hmgk0lfveev4m45u523366hvcam0o67b0@4ax.com...
> On Wed, 15 Sep 2004 02:31:05 -0600, "kurth" <kurth@avanade.com> wrote:
>
>><rant>There are surveys that show correlations between practically 
>>anything
>>and everything such as cancer, yet many have been proven incorrect...
>
> You are missing the point.  There is a massive amount of peer reviewed
> research that shows that document-driven, specify-then-build
> approaches are horrendously risky.  There is virtually *no* peer
> reviewed research that shows the opposite.  The asymmetry is striking.

Okay, I can't resist... when was the last time you googled?
XP has had its share of peer review; TDD has gone over fairly well, but XP
as a methodology gets pretty ugly.

What is up with the "success of C3"??
Sounds like a rather *extreme* failure to me: a (high-quality, I'm sure)
product that missed (by a long shot) the essential requirement (Y2K).
I don't think C3 was in any way atypical or high risk; excuse after excuse,
all completely unjustified.  A 5-year waste of time and money if you ask me
(at least documentation was minimal).

XP can be incorporated into and supplement existing methodologies, and in
that respect it could be quite successful.  However, as a methodology it is
grossly incomplete, and a self-proclaimed success that is completely
unsupported, at least not by anything measurable.
I found a few interesting quotes during my google that I can't resist
sharing.

Ron Jeffries "My assessment overall is that XP has some characteristics in
common with the higher SEI levels, up to and including level 5. However, I
would not assert that an XP team is a level 5 team. It takes a lot more
documentation and "proving" going on in CMM than we recommend for XP. XP is
in some ways a "vertical" slice through the SEI levels 2 through 5."
http://www.xprogramming.com/xpmag/xp_and_cmm.htm#Level%20Five

Mark C. Paulk (SEI) "Many of the KPAs that XP either ignores or only
partially covers are undoubtedly addressed in real projects. XP needs
management and infrastructure support, even if it does not specifically
call for it."
"Should organizations use XP, as published, for life-critical or
high-reliability systems? Probably not. XP's lack of design documentation
and de-emphasis on architecture are risky."
http://www.agilealliance.org/articles/reviews/Paulk2/articles/XPFromACMMPerspective-Paulk.pdf


- Kurt


>
> -----
> Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
> Object Mentor Inc.            | blog:  www.butunclebob.com
> The Agile Transition Experts  | web:   www.objectmentor.com
> 800-338-6716
>
>
> "The aim of science is not to open the door to infinite wisdom,
> but to set a limit to infinite error."
>    -- Bertolt Brecht, Life of Galileo 


0
kurth (11)
9/16/2004 4:52:36 AM
Brian S. Smith wrote:
> Robert C. Martin <unclebob@objectmentor.com> wrote in message
> news:<9hmgk0lfveev4m45u523366hvcam0o67b0@4ax.com>...
>> On Wed, 15 Sep 2004 02:31:05 -0600, "kurth" <kurth@avanade.com>
>> wrote:
>>
>>> <rant>There are surveys that show correlations between practically
>>> anything and everything such as cancer, yet many have been proven
>>> incorrect...
>>
>> You are missing the point.  There is a massive amount of peer
>> reviewed research that shows that document-driven, specify-then-build
>> approaches are horrendously risky.
>
> With all due respect, I find that one hard to swallow. That's like
> saying the old adage "A problem well defined, is a problem
> half-solved" is a bunch of nonesense.

Not to me.

First, it's important that you are solving the *right* problem. But writing
a problem down, even having it signed by a bunch of important people, has
a surprisingly bad history of making sure that the defined problem is the
right one. Speaking with people and showing them proposed solutions early
and often seems to be a good alternative.

Second, having the problem well defined on paper is not what solves the
problem, but having it well defined in the heads of the people whose job it
is to solve it. Using paper as the communication medium might well not be
the most effective way to get the problem into those heads.

Cheers, Ilja


0
it3974 (470)
9/16/2004 6:35:39 AM
On Mon, 13 Sep 2004 23:40:13 -0600, "kurth" <kurth@avanade.com> wrote:

>I often read that in XP you should not create a lot of documentation - that
>its all temporary stuff to help one understand the system.  First off I
>applaud XP's desire to actually help the developers understand the domain
>for the system they are building but I think this is the wrong way of
>"helping them understand" for a couple of reasons:  Many domains such as
>accounting, manufacturing, supply chain, are well understood with a wealth
>of existing documentation.  Rather then trying to figure things out as you
>go wouldn't it be best to train up developers on the business domain before
>jumping into a (possibly incomplete poorly defined) scenario and rewriting
>it over and over until you've captured all the requirements?
>I'm not saying that's all bad; I would rather have developers understand the
>domain they are implementing even if it is piece by piece then to be handed
>a set of specifications (that are always missing something or just plain
>wrong) and like zombies do exactly what it says without question.  My
>biggest complaint however is that how do you go to the customer and say this
>wasn't a requirement of the original agreement, you will have to increase
>the project budget, extend the timeline, etc.. if the requirements have not
>thoroughly been analyzed, documented, and signed?
>
>- Kurt
>

I suspect that XP will gradually get modified to handle the
differing environments that methods can be used in, until it comes back
to looking like 'old style' development.  Considering that's where it
gets most of its techniques from, it just needs to put the missing ones
back, really :)

0
foo_ (331)
9/16/2004 11:13:01 AM
AndyW <foo_@bar_no_email.com> wrote in 
news:e7tik01sqa04hhfkipkr8gnhp66940jvbs@4ax.com:

>  'old style' development

What is 'old style' development?
0
9/16/2004 11:40:09 AM
"Robert C. Martin" <unclebob@objectmentor.com> wrote in message
news:9jhfk0dak6u6h9u7j8o4fsthr2d0tn49ad@4ax.com...
> On Mon, 13 Sep 2004 23:40:13 -0600, "kurth" <kurth@avanade.com> wrote:
>
>
> There is an amazing amount of peer-reviewed research that correlates
> thoroughly analyzed, documented, and signed requirements with project
> failure.  Indeed there are studies of thousands of projects that have
> shown that when requirements are treated this way, far fewer than half
> of them are ever actually used by the customer (i.e. more than half
> the code is wasted).  Other research shows that only a tiny fraction
> of those projects ever get used at all -- even though they meet the
> requirements.
>
> On the other hand, there is a lot of research correlating success
> with projects that are released very early in a minimal form, and
> evolved through a series of releases during which requirements are
> constantly being gathered.
>
References?

The studies that I want to see never seem to be tackled.  For example, I'd
like to see a study of why an earlier payroll project delivered real
business value to Chrysler, but C3 did not.  Everyone passionately
interested in XP should want to see that study, but I suspect that they do
not.  "A man hears what he wants to hear and..."

Regards,
Daniel Parker


0
Daniel
9/16/2004 11:53:21 AM
kurth wrote:

> Okay I can't resist... when was the last time you googled?
> XP has had its share of peer-review, TDD has gone over fairly well but XP
> as a methodology gets pretty ugly.

Googling for "Extreme Programming", in quotes, returns 362,000 hits.

> What is up with "success of C3"??

Plenty of projects, in history, have been technical successes and political
failures. If companies' stocks somehow reflected the value of their
products, society as we know it would unravel.

XP is becoming recognized in the venture capital world as a way to determine
success early, and to avoid burning up lots of money. More and more job
descriptions request XP and TDD, to ensure high velocity towards business
goals.

Here's a recent post from the XP mailing list:

George Paci wrote:

> Just thought I'd share today's success story:
>
> The Big Demo for our project was today, with lots of
> higher-ups in attendance, guys who decide whether we
> get the goodies we want (like Lear jet flights, and
> continued employment).
>
> Toward the middle of the very upbeat and
> well-received demo,
> my boss made a point of mentioning how flexible and
> responsive the software team was, and how we used
> this methodology called "Extreme Programming."  After a
> couple sentences of praise, he said (and I quote):
>
> "I guess this is kind of an ad for Extreme
> Programming."
>
>
> (I should add that we're not 100% XP; we have the
> dials turned up to
> an average of about 8, with Metaphor (3) and Pair
> Programming (4)
> trailing the pack.  Our iterations got smashed down
> to anywhere from
> one to three days lately; I'm not sure if that's
> more XP or less.)

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
9/16/2004 12:01:40 PM
On Wed, 15 Sep 2004 22:52:36 -0600, "kurth" <kurth@avanade.com> wrote:

>
>"Robert C. Martin" <unclebob@objectmentor.com> wrote in message 
>news:9hmgk0lfveev4m45u523366hvcam0o67b0@4ax.com...
>> On Wed, 15 Sep 2004 02:31:05 -0600, "kurth" <kurth@avanade.com> wrote:
>>
>>><rant>There are surveys that show correlations between practically 
>>>anything
>>>and everything such as cancer, yet many have been proven incorrect...
>>
>> You are missing the point.  There is a massive amount of peer reviewed
>> research that shows that document-driven, specify-then-build
>> approaches are horrendously risky.  There is virtually *no* peer
>> reviewed research that shows the opposite.  The asymmetry is striking.
>
>Okay, I can't resist... when was the last time you googled?
>XP has had its share of peer review; TDD has gone over fairly well, but XP
>as a methodology gets pretty ugly.

We need to make a distinction between peer-reviewed research, and a
bunch of folks on a newsgroup who apparently agree with each other.
We can line up folks on both sides of the XP/C3 debate and yammer at
each other for years.  What you can't do, apparently, is line up an
equal number of peer-reviewed research papers that support iterative
and non-iterative techniques.  The majority (a virtual monopoly IMHO)
support the iterative approach.

>What is up with "success of C3"??

You've been reading too much Rosenberg.  Go sing some dumb Beatle song
parodies and get it out of your system.   C3 was a project that
failed.  C3 was a project that used XP.  You can make the type I error
if you wish.  I advise you, however, to consider whether there might
have been other factors.  Certainly the developers on the C3 team do
not associate the failure with XP.

>XP can be incorporated into and supplement existing methodologies, and in
>that respect it could be quite successful.

This has been done many times, and it often is quite successful.  

>However, as a methodology it is
>grossly incomplete, and a self-proclaimed success that is completely
>unsupported, at least not by anything measurable.

Your data?  I can point you to a rather large number of success
stories.  The data coming out of XP/Agile projects is pretty
compelling.  Companies (like Sabre, Jet-Direct, Symantec, Workshare,
and others) are reporting ~50% productivity increases, and 10X
reduction in defects.

I quite agree that XP (or any agile technique) is not the only tool
that a development organization should use.  Paulk's quote is
particularly relevant.  There are process areas that XP/Agile do not
address.  XP provides virtually no concrete guidance for large teams,
or regulated teams.  However, large and regulated teams have managed
to incorporate XP/Agile methods into their way of working.



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
9/16/2004 12:22:54 PM
On Thu, 16 Sep 2004 23:13:01 +1200, AndyW <foo_@bar_no_email.com>
wrote:

>A suspect that XP will gradually get modified in order to handle the
>differing environments that methods can be used in until it comes back
>to looking like 'old style' development.  Considering thats where it
>gets most of its techniques from, just needs to put the missing ones
>back really :)

The software for the Mercury space capsule was written using an
iterative method.  The iterations were one day long.  At the end of
every day they delivered new working code.  They wrote unit tests in
the morning, and made those unit tests pass in the afternoon.  

So, in that regard, I think you are right.  XP does look a lot like
that 'old-style' development method.  

The space shuttle software was built in increments of 6 weeks.  Every
six weeks they delivered new working software.

So in that regard I think you are right.  XP does look a little like
that 'old-style' development method.

However, there are some other 'old-style' development methods that
neither XP nor any other agile technique will ever resemble.  They
will not resemble techniques in which all requirements are gathered up
front, and then analyzed up front, and then designed up front, and
then implemented at the end, in a set of linear phases.  

This process, known as "waterfall" was adopted in the seventies, and
became remarkably popular.  It managed to become the standard for all
software development in the US DoD (ref DoD 2167).  Many other
government agencies, other countries, and major industries followed
suit.  (Clearly if the US government is doing it, it must be good. ;-)

If you follow the references back you find *one* paper at the root of
the tree.  Many, if not all, papers that support the waterfall way of
working can be traced back to a paper written in 1970 by Dr. Winston
Royce.  It was entitled "Managing the Development of Large Software
Systems".  Early in this paper Royce draws the typical waterfall
diagram.  The thrust of his paper was that this technique is
"grandiose" and inappropriate for all but the most trivial projects. 

Apparently none of the people who cited his paper bothered to read it.

In Craig Larman's book: "Agile and Iterative Development: A Manager's
Guide", you can find the wonderful story of how he tracked down the
author of DoD 2167.  A tech writer living in Mass.  Larman had lunch
with this fellow a few years back.  His first words upon meeting Craig
were "I'm so sorry".  

-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
9/16/2004 12:38:41 PM
Cristiano wrote:

> AndyW wrote:
>
> >  'old style' development
>
> What is 'old style' development?

You know. Everyone sits in a ring, with two keyboards and mice for each
workstation. Everyone tests every 1~10 edits, integrates continuously,
spends extra time writing literate acceptance tests, sorts feature requests
in business priority, and releases a new version, each Friday, without bugs.

Just like we have always been doing...

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
9/16/2004 1:02:41 PM
Daniel Parker wrote:

> The studies that I want to see never seem to be tackled.  For example, I'd
> like to see a study why an ealier payroll project delivered real business
> value to Chrysler, but C3 did not.  Everyone passionately interested in XP
> should want to see that study, but I suspect that they do not.

What part of "Daimler Chrysler still uses XP" don't you understand?

> "A man hears
> what he wants to hear and..."

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
9/16/2004 1:03:57 PM
"Phlip" <phlip_cpp@yahoo.com> wrote in news:R9g2d.2988$IU4.2278
@newssvr15.news.prodigy.com:

> Just like we have always been doing...

I figured.. ;)
0
9/16/2004 1:56:41 PM
"Robert C. Martin" <unclebob@objectmentor.com> wrote in message
news:9b0jk0dhlq2tcmb5bnoqukujim73qpsgi1@4ax.com...

> You've been reading too much Rosenberg.  Go sing some dumb Beatle song
> parodies and get it out of your system.   C3 was a project that
> failed.  C3 was a project that used XP.  You can make the type I error
> if you wish.  I advise you, however, to consider whether there might
> have been other factors.  Certainly the developers on the C3 team do
> not associate the failure with XP.

Of course, at least a few of them are making $$$ from XP.


Shayne Wissler
http://www.ouraysoftware.com


0
9/16/2004 2:47:01 PM
 sales@leapse.com (Brian S. Smith)  wrote:
 
>With all due respect, I find that one hard to swallow. That's like
>saying the old adage "A problem well defined, is a problem
>half-solved" is a bunch of nonesense. See the STANDISH CHAOS REPORT in
>the 90s. Programs suffering from poor requirement definition were
>found to have failed or run over budget, or grossly missed scheduled,
>at an alarmingly high percentage. Granted the old 2167A DoD and FAA,
>etc., requirements guidelines are archaic and inefficient, but that's
>really taking a polar extreme as an example. Virtually any project,
>other than one that has little or no firm requirements to begin with,
>will benefit IMMENSELY from a well-defined set of system
>specifications. We can discuss what constitutes a solid requirement,
>and what does not, but to claim that such a wealth of information
>leads to greater program risk is, IMHO, just not so. Thanks,

Hear, hear!

Elliott
-- 
dip refers to *abstract interfaces*, a form of class
abstraction.  The opposite form is a concrete
implementation class abstraction.
0
universe5 (202)
9/16/2004 8:48:21 PM
kurbylogic@hotmail.com (Kurt) wrote:

> ...
> I agree.  However I'm not arguing for the Waterfall methodology, I'm
> arguing against 'risk of too much doucmentation.' XP is not the only
> alternative methodology nor only Agile method.  Frequent releases and
> iterative development are essential to mitigating many of these risks.
>  This is not a concept unique to XP.  Detailing every requirement at
> the start is definatly too far.  Function by function pseudo-code is
> too far, you might as well be write and include the tests at that
> point.  I disagree however that once you have this group of
> requirements picked out you jump into to IDE and start coding (code or
> tests).  All essential high level requirements need to be tested for
> feasability but aside from that there is no need to go into the
> details of every single requirement from the start.  Requirements
> should prioritized and grouped to create many releases that meet a
> group of related requirements (It wouldn't be very useful if create
> method was implemented but no method to retrieve the saved data), the
> release has to be useable and some groupings of functionality can take
> months to implement.  IMHO, tests might be clearly readable to
> programmers but they don't help the domain experts validate your
> understanding of the abstractions.  OO thinking is not like real-world
> thinking:  Lawn.Mow(user)... shouldn't that be User.Mow(lawn)?.. after
> telling them thats how it should be they'll accept you are crazy and
> refrain from correcting you in the future. <kidding/> :)  Unless
> entity relationships are very well understood it would be benificial
> to create some relationship diagrams define the constraints and such,
> overlap in scenarios also needs to be looked for and identified or
> your going to end up with many inconsistant solutions (I'll create my
> report in excel, I'll use Word and bookmarks, I'll use ASP.NET,
> etc..).


Hear, hear!

Elliott
-- 
dip refers to *abstract interfaces*, a form of class
abstraction.  The opposite form is a concrete
implementation class abstraction.
0
universe5 (202)
9/16/2004 8:51:38 PM
"Phlip" <phlip_cpp@yahoo.com> wrote in message news:<1bg2d.2989$N26.2232@newssvr15.news.prodigy.com>...
> Daniel Parker wrote:
> 
> > The studies that I want to see never seem to be tackled.  For example, I'd
> > like to see a study why an ealier payroll project delivered real business
> > value to Chrysler, but C3 did not.  Everyone passionately interested in XP
> > should want to see that study, but I suspect that they do not.
> 
> What part of "Daimler Chrysler still uses XP" don't you understand?
> 
The relevence to my posting.

Daniel
0
9/16/2004 9:28:12 PM
Daniel Parker wrote:

> > > The studies that I want to see never seem to be tackled.  For example,
I'd
> > > like to see a study why an ealier payroll project delivered real
business
> > > value to Chrysler, but C3 did not.  Everyone passionately interested
in XP
> > > should want to see that study, but I suspect that they do not.

Disregarding your implication that XPers want to be gullible, would you
respect a "study" with one sample?

> > What part of "Daimler Chrysler still uses XP" don't you understand?
> >
> The relevence to my posting.

The "debate" is whether XP caused XP's first (titular) project to succeed or
fail.

I can't think of anyone more qualified to judge if XP succeeded at DC than
DC's programmers. If XP caused the C3 project's failure, why would DC
still use XP?

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
9/16/2004 9:41:52 PM
Cy Coe wrote:

> If the Onsite Customer lacks the skills to properly analyze the
> problem domain (simply having been a worker in the domain doesn't
> deliver those skills) then they, like you, will simply be muddling
> their way through.

I agree with this, but the implication is merely that XP's "Customer Role"
needs to be a highly skilled individual.  No less skilled than, and not
dissimilar to, the traditional "Software Analyst".

The difference as I see it in XP/Agile methodology is...

Firstly, the analyst/customer begins onsite with the development team early,
working and learning in parallel with the developers, then stays with the
team during most of the life-cycle, as opposed to swiftly moving on to the
next "interesting" project shortly after coding begins.

Secondly, the analyst/customer is a specialist in a particular domain.  In
other words, they are finance, manufacturing, health, etc. experts first,
and IT experts second.  (This is just an efficient division of skills, and
not something XP advocates explicitly AFAICT, just something I've inferred.)

Domain driven, rather than IT driven.

--
Justin


0
null38 (68)
9/16/2004 9:56:16 PM
Justin Farley wrote:

> Secondly, the analyst/customer is a specialist in a particular domain.  In
> other words, they are finance, manufacturing, health, etc. experts first,
> and IT experts second.  (This is just an efficient division of skills, and
> not something XP advocates explicitly AFAICT, just something I've
inferred.)

The XP Customer and Developer Bills of Rights list matching pairs of
responsibilities and authorities for the business and technical sides. Roles
occupying the business side are not required to write code, and not allowed
to interfere with programmers programming as they see fit.

(This means, if one of them has a good idea, then they occupy the other
role, with its responsibilities, when they espouse it.)

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
9/16/2004 10:11:19 PM
"Phlip" <phlip_cpp@yahoo.com> wrote in message
news:AMn2d.18657$ZC7.11100@newssvr19.news.prodigy.com...
> Daniel Parker wrote:
>
> > > > The studies that I want to see never seem to be tackled.  For
example,
> I'd
> > > > like to see a study why an ealier payroll project delivered real
> business
> > > > value to Chrysler, but C3 did not.  Everyone passionately interested
> in XP
> > > > should want to see that study, but I suspect that they do not.
>
> Disregarding your implication that XPers want to be gullible,

Why disregard it?  If you believe in something passionately, like XP, it's
human nature to be somewhat uninterested in the wider picture; it would be
surprising if it were not so.  See, for example,
http://www.everreader.com/shak2.htm.  When RCM refers to studies that
support his beliefs, I'm quite sure that he didn't approach them from a
dispassionate point of view, read them carefully, and make up his mind from
the studies.  Rather, he feels in his bones that XP is the right way of
doing things, based on lifelong experience, considered thought, and much
practice, and refers to the studies in an offhand way, most likely claiming
more than they can support.  There are people who, when they believe in
something passionately, make it a point of studying opposing views,
challenging their own perceptions, reading right wing papers if they are
left wing and vice versa, but such people are rare.

> would you
> respect a "study" with one sample?
>
As a case study, yes.  It would be interesting to compare the two projects
in terms of management expectations, approach to documentation and
specification, time to completion, cost, quality of programmers and
analysts, number of people involved, quality of results relative to business
expectations, return on investment, business commitment, and so on.  I don't
know anything at all about the previous Chrysler payroll project, but it
appears that it was successful.  (The old payroll systems that I know about
in my neck of the woods were written in COBOL on mainframes, and the sheets
that would get transcribed into punch cards were often filled out in the
local pubs.)

Regards,
Daniel Parker


0
Daniel
9/16/2004 10:45:27 PM
On Wed, 15 Sep 2004 22:19:59 -0400, Ronald E Jeffries wrote:

> On 15 Sep 2004 18:06:16 -0700, sales@leapse.com (Brian S. Smith) wrote:
> 
>>With all due respect, I find that one hard to swallow. That's like
>>saying the old adage "A problem well defined, is a problem half-solved"
>>is a bunch of nonesense. See the STANDISH CHAOS REPORT in the 90s.
>>Programs suffering from poor requirement definition were found to have
>>failed or run over budget, or grossly missed scheduled, at an alarmingly
>>high percentage. Granted the old 2167A DoD and FAA, etc., requirements
>>guidelines are archaic and inefficient, but that's really taking a polar
>>extreme as an example. Virtually any project, other than one that has
>>little or no firm requirements to begin with, will benefit IMMENSELY
>>from a well-defined set of system specifications. We can discuss what
>>constitutes a solid requirement, and what does not, but to claim that
>>such a wealth of information leads to greater program risk is, IMHO,
>>just not so. Thanks,
> 
> It may seem counterintuitive, but in fact it /is/ so. See Larman's book
> for page after page of referenced detail.

It doesn't even seem particularly counter-intuitive to me.

The old adage "A problem well defined, is a problem half-solved" is indeed
a bunch of nonsense.

There are many quite well defined problems that are not only not half
solved, but that are in fact unsolvable.
0
droby2 (108)
9/17/2004 1:35:24 AM
On Thu, 16 Sep 2004 11:40:09 +0000 (UTC), Cristiano
<cristianoTAKEsadun@THIShotmailOUT.com> wrote:

>AndyW <foo_@bar_no_email.com> wrote in 
>news:e7tik01sqa04hhfkipkr8gnhp66940jvbs@4ax.com:
>
>>  'old style' development
>
>What is 'old style' development?

The stuff we used to do before they dumbed it down so that kids could
understand it.

 The older methodologies like SSADM, Yourdon and the stuff used even
before that :)

0
foo_ (331)
9/17/2004 3:03:33 AM
On Thu, 16 Sep 2004 07:38:41 -0500, Robert C. Martin
<unclebob@objectmentor.com> wrote:

>On Thu, 16 Sep 2004 23:13:01 +1200, AndyW <foo_@bar_no_email.com>
>wrote:
>
>>A suspect that XP will gradually get modified in order to handle the
>>differing environments that methods can be used in until it comes back
>>to looking like 'old style' development.  Considering thats where it
>>gets most of its techniques from, just needs to put the missing ones
>>back really :)
>
>The software for the Mercury space capsule was written using an
>iterative method.  The iterations were one day long.  At the end of
>every day they delivered new working code.  They wrote unit tests in
>the morning, and made those unit tests pass in the afternoon.  
>
>So, in that regard, I think you are right.  XP does look a lot like
>that 'old-style' development method.  
>
>The space shuttle software was built in increments of 6 weeks.  Every
>six weeks they delivered new working software.
>
>So in that regard I think you are right.  XP does look a little like
>that 'old-style' development method.
>
>However, there are some other 'old-style' development methods that
>neither XP nor any other agile technique will ever resemble.  They
>will not resemble techniques in which all requirements are gathered up
>front, and then analyzed up front, and then designed up front, and
>then implemented at the end, in a set of linear phases.  
>
>This process, known as "waterfall," was adopted in the seventies, and
>became remarkably popular.  It managed to become the standard for all
>software development in the US DoD (ref DoD 2167).  Many other
>government agencies, other countries, and major industries followed
>suit.  (Clearly if the US government is doing it, it must be good. ;-)
>
>If you follow the references back you find *one* paper at the root of
>the tree.  Many, if not all, papers that support the waterfall way of
>working can be traced back to a paper written in 1970 by Dr. Winston
>Royce.  It was entitled "Managing the Development of Large Software
>Systems".  Early in this paper Royce draws the typical waterfall
>diagram.  The thrust of his paper was that this technique is
>"grandiose" and inappropriate for all but the most trivial projects. 
>
>Apparently none of the people who cited his paper bothered to read it.
>
>In Craig Larman's book: "Agile and Iterative Development: A Manager's
>Guide", you can find the wonderful story of how he tracked down the
>author of DoD 2167.  A tech writer living in Mass.  Larman had lunch
>with this fellow a few years back.  His first words upon meeting Craig
>were "I'm so sorry".  
>
>-----
>Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
>Object Mentor Inc.            | blog:  www.butunclebob.com
>The Agile Transition Experts  | web:   www.objectmentor.com
>800-338-6716   
>
>
>"The aim of science is not to open the door to infinite wisdom, 
> but to set a limit to infinite error."
>    -- Bertolt Brecht, Life of Galileo


I think you forgot about mini-waterfall - that came out before RAD
techniques became common.   Most large scale projects tended, and still
tend, to use this mechanism because it [mini-waterfall] is tried and
tested.

All of the modern techniques are really just re-hacks of that with
buzzwords  put in so the young noobs will become impressed and adopt
it. :)


0
foo_ (331)
9/17/2004 3:09:28 AM
"Phlip" <phlip_cpp@yahoo.com> wrote in message news:<AMn2d.18657$ZC7.11100@newssvr19.news.prodigy.com>...

-snip-
> The "debate" is whether XP caused XP's first (titular) project to succeed or
> fail.

And a great deal was said about that on
comp.software.extreme-programming

Some in this thread:
http://groups.google.com/groups?q=g:thl2544022514d&dq=&hl=en&lr=&ie=UTF-8&c2coff=1&selm=j0ki80pb4p214usj7c13c03ulsp1tqq13d%404ax.com
0
igouy (1009)
9/17/2004 4:09:54 AM
On Thu, 16 Sep 2004 14:47:01 GMT, "Shayne Wissler"
<thalesNOSPAM000@yahoo.com> wrote:

>
>"Robert C. Martin" <unclebob@objectmentor.com> wrote in message
>news:9b0jk0dhlq2tcmb5bnoqukujim73qpsgi1@4ax.com...
>
>> You've been reading too much Rosenberg.  Go sing some dumb Beatle song
>> parodies and get it out of your system.   C3 was a project that
>> failed.  C3 was a project that used XP.  You can make the type I error
>> if you wish.  I advise you, however, to consider whether there might
>> have been other factors.  Certainly the developers on the C3 team do
>> not associate the failure with XP.
>
>Of course, at least a few of them are making $$$ from XP.

Yes, that's no secret.  It's what you do with good ideas.


-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
9/17/2004 4:19:35 AM
On Thu, 16 Sep 2004 07:53:21 -0400, "Daniel Parker"
<danielaparker@spam?nothanks.windupbird.com> wrote:

>"Robert C. Martin" <unclebob@objectmentor.com> wrote in message
>news:9jhfk0dak6u6h9u7j8o4fsthr2d0tn49ad@4ax.com...
>> On Mon, 13 Sep 2004 23:40:13 -0600, "kurth" <kurth@avanade.com> wrote:
>>
>>
>> There is an amazing amount of peer-reviewed research that correlates
>> thoroughly analyzed, documented, and signed requirements with project
>> failure.  Indeed there are studies of thousands of projects that have
>> shown that when requirements are treated this way, far fewer than half
>> of them are ever actually used by the customer. (i.e. more than half
>> the code is wasted).  Other research shows that only a tiny fraction
>> of those projects ever get used at all -- even though they meet the
>> requirements.
>>
>> On the other hand, there is a lot of research correlating success
>> with projects that are released very early in a minimal form, and
>> evolved through a series of releases during which requirements are
>> constantly being gathered.
>>
>References?

See Larman's book.  There are 50 pages of references there.  

>The studies that I want to see never seem to be tackled.  For example, I'd
>like to see a study of why an earlier payroll project delivered real business
>value to Chrysler, but C3 did not.  

Odd.  The earlier project was canceled before it delivered anything.
C3 was paying real employees for quite a while, so it *did* deliver
business value.  C3 was canceled before it was completed for a bunch
of reasons that have been exhaustively reported upon in this
newsgroup.  There is little evidence to suggest that XP was the cause,
or even complicit, in that cancellation.

>Everyone passionately interested in XP
>should want to see that study, but I suspect that they do not. 

In fact we have gone over it and over it in this newsgroup.

>"A man hears
>what he wants to hear and..."

"...disregards the rest."  Yes, that's often too true.


-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
9/17/2004 4:23:52 AM
On Thu, 16 Sep 2004 18:45:27 -0400, "Daniel Parker"
<danielaparker@spam?nothanks.windupbird.com> wrote:

>There are people who, when they believe in
>something passionately, make it a point of studying opposing views,
>challenging their own perceptions, reading right wing papers if they are
>left wing and vice versa, but such people are rare.

I suppose that you are one such person.  



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
9/17/2004 4:25:46 AM
On Thu, 16 Sep 2004 18:45:27 -0400, "Daniel Parker"
<danielaparker@spam?nothanks.windupbird.com> wrote:

>When RCM refers to studies that
>support his beliefs, I'm quite sure that he didn't approach them from a
>dispassionate point of view, read them carefully, and make up his mind from
>the studies.  Rather, he feels in his bones that XP is the right way of
>doing things, based on lifelong experience, considered thought, and much
>practice, 

I think that's quite true.  

>and refers to the studies in an offhand way, most likely claiming
>more than they can support.

I think that's speculation on your part.  The studies I've been
referring to, that were described by Larman in his book, are generally
older studies from the '80s and '90s, before XP was an issue.  You
might want to look them over.



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
9/17/2004 4:29:45 AM
On Fri, 17 Sep 2004 15:09:28 +1200, AndyW <foo_@bar_no_email.com>
wrote:

>I think you forgot about mini-waterfall - that came out before RAD
>techniques became common.   Most large scale projects tended, and still
>tend, to use this mechanism because it [mini-waterfall] is tried and
>tested.

Indeed.  I did not forget about it.  On the other hand I disagree that
most large scale projects tend to use it.  Perhaps I travel in the
wrong circles, but I see a *lot* of straight waterfall.

I will say, however, that mini-waterfall, done in very short
iterations, is a reasonable approach.  

>All of the modern techniques are really just re-hacks of that with
>buzzwords  put in so the young noobs will become impressed and adopt
>it. :)

Clearly most of the stuff in XP/Agile is not new.  (TDD might be an
exception in some ways.)  I don't think you've got the motivation
right.  The folks who are really keen on XP/Agile are the oldsters who
have tired of the significant waste in the industry.



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
9/17/2004 4:36:21 AM
AndyW <foo_@bar_no_email.com> wrote in 
news:ltkkk0t9poscvon8k5kfnn2i1krnos640p@4ax.com:

>  The older methodologies like SSADM, Yourdon and the stuff used even
> before that :)

Ah. Then no, I don't think your suspicion is correct. 
0
9/17/2004 7:11:22 AM
The statements I had read -- that in XP, documentation is temporary and/or 
should not be created -- were an overgeneralization.
Documentation is clearly needed continually in the project, from start to 
finish.
We agree this is true, but disagree how much.
XP is only concerned with a small subset of the 'big picture' project; it 
does not address managing ... anything.
XP perspective = As Needed.
Project management perspective = Essential

We agree iterative development, frequent releases, continual integration, 
and TDD as used in XP have demonstrated success.
These are not new techniques, nor unique to XP.  Assert(XP Is Iterative == 
true); Assert(Iterative Is XP == false)

What then is XP?
What criteria must a project meet to be considered XP vs. not XP?

Can a "large" (subjectively speaking) project satisfy all criteria of an XP 
project? Possibly.
Can all large projects satisfy all criteria of an XP project? I don't think 
so (considering the impossibility of shared ownership and pair programming 
on a multi-site, multi-vendor project).

What criteria must be met in an XP project for it to be considered a success 
or failure?  (e.g. big picture... failure, XP successful)

IMHO, XP is an incomplete methodology, and any project demonstrating success 
using some of the techniques is somehow considered +1 to XP even though 
management activities may have had a far greater impact on project success 
than XP.  Furthermore, the claim that all documentation should be questioned, 
if taken at face value, could lead to a breakdown in essential project 
management activities that very likely will put the 'big picture' project at 
greater risk than if XP had not been introduced at all.  Yet XP keeps its 
nose clean by saying other factors (namely poor project management) resulted 
in the big picture failure.


- Kurt 


0
kurth (11)
9/17/2004 9:02:15 AM
Robert C. Martin <unclebob@objectmentor.com> wrote in message news:<3spkk0po6d1gd5ef62tf783oa9jeu4jmag@4ax.com>...
> On Thu, 16 Sep 2004 18:45:27 -0400, "Daniel Parker"
> <danielaparker@spam?nothanks.windupbird.com> wrote:
> 
> >When RCM refers to studies that
> >support his beliefs, I'm quite sure that he didn't approach them from a
> >dispassionate point of view, read them carefully, and make up his mind from
> >the studies.  Rather, he feels in his bones that XP is the right way of
> >doing things, based on lifelong experience, considered thought, and much
> >practice, 
> 
> I think that's quite true.  
> 
> >and refers to the studies in an offhand way, most likely claiming
> >more than they can support.
> 
> I think that's speculation on your part.  

Well, so was my other point :-)

> The studies I've been
> referring to, that were described by Larman in his book, are generally
> older studies from the '80s and '90s, before XP was an issue.  You
> might want to look them over.
> 
I'll have a look.

Daniel
0
9/17/2004 6:47:39 PM
Robert C. Martin <unclebob@objectmentor.com> wrote in message news:<nppkk0p2k7kidqg1u5tk0o9h4bnqp90op3@4ax.com>...
> On Thu, 16 Sep 2004 18:45:27 -0400, "Daniel Parker"
> <danielaparker@spam?nothanks.windupbird.com> wrote:
> 
> >There are people who, when they believe in
> >something passionately, make it a point of studying opposing views,
> >challenging their own perceptions, reading right wing papers if they are
> >left wing and vice versa, but such people are rare.
> 
> I suppose that you are one such person.  
> 
On the contrary, I'm quite sure that on most matters I'm as pig headed
as anyone else.

Regards,
Daniel Parker
0
9/17/2004 7:00:32 PM
On 17 Sep 2004 12:00:32 -0700, danielaparker@hotmail.com (Daniel
Parker) wrote:

>Robert C. Martin <unclebob@objectmentor.com> wrote in message news:<nppkk0p2k7kidqg1u5tk0o9h4bnqp90op3@4ax.com>...
>> On Thu, 16 Sep 2004 18:45:27 -0400, "Daniel Parker"
>> <danielaparker@spam?nothanks.windupbird.com> wrote:
>> 
>> >There are people who, when they believe in
>> >something passionately, make it a point of studying opposing views,
>> >challenging their own perceptions, reading right wing papers if they are
>> >left wing and vice versa, but such people are rare.
>> 
>> I suppose that you are one such person.  
>> 
>On the contrary, I'm quite sure that on most matters I'm as pig headed
>as anyone else.

      :-)  



-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
9/18/2004 1:17:04 AM
On Fri, 17 Sep 2004 03:02:15 -0600, "kurth" <kurth@avanade.com> wrote:

>The statements I had read -- that in XP, documentation is temporary and/or 
>should not be created -- were an overgeneralization.
>Documentation is clearly needed continually in the project, from start to 
>finish.
>We agree this is true, but disagree how much.
>XP is only concerned with a small subset of the 'big picture' project; it 
>does not address managing ... anything.

No, this is incorrect.  XP is concerned with managing the day-to-day,
week-to-week, and month-to-month specification, design, delivery,
deployment, and quality of software.

>XP perspective = As Needed.

>Project management perspective = Essential

There are lots and lots of documents produced in a project that are
utterly irrelevant.  I'm sure that you don't think that every possible
document should always be produced.  I'm sure that instead you think
that there is a reasonable subset of documents that are important to a
project, and a much smaller subset that are essential.  Perhaps we
disagree about where to draw the line, but that's just a disagreement
of degree, rather than essence.
>
>We agree iterative development, frequent releases, continual integration, 
>and TDD as used in XP have demonstrated success.
>These are not new techniques, nor unique to XP.  Assert(XP Is Iterative == 
>true); Assert(Iterative Is XP == false)

Agreed, and agreed.
>
>What then is XP?

A set of about a dozen very discrete practices.

>What criteria must a project meet to be considered XP vs. not XP?

I'm not sure why this is an important question, but I suppose that the
answer is that the project was strongly guided by the values of XP,
and that the project team practiced all or most of the XP practices.
Putting absolute numbers on this is a fool's errand IMHO.

I don't know what this has to do with the point about XP and
documentation.

>Can a "large" (subjectivly speaking) project satisfiy all criteria of an XP 
>project? possibly

IMHO, yes.  More is needed, but the project can still be very aligned
to XP.

>Can all large projects satisfy all criteria of an XP project? 

Who cares?  The values of XP, and the intent of the practices can be
preserved.  The important issue is not whether a project is "XP" or
"Not XP".  The important point is whether or not the project is done
in very quick iterations, buy a collaborative team using intense
feedback, and customer input.  I would also add TDD to the mix,
because I think that's an essential ingredient.

>I don't think 
>so (considering the impossibility of shared ownership and pair programming 
>on a multi-site, multi-vendor project).

Those things are not impossible.  There are many companies who have
managed to solve those problems by breaking the uber-team into many
co-located subteams.  Though, again, who cares if you can call it XP
or not.  The spirit is there, even if the letter is not.

>What criteria must be met in an XP project for it to be considered a success 
>or failure?  (e.g. big picture... failure, XP successful)

Good question.  I don't know.  I don't care.  The only thing that
really matters is whether or not the project succeeds.  And project
success is only indirectly related to process success.  I think XP or
Agile methods help.  Indeed, I am convinced that they have a
significant positive effect on quality, repeatability,
predictability, and productivity.  That can often be enough to shove a
project past the "success" line.  Sometimes it's not.
>
>IMHO, XP is an incomplete methodology 

No argument.  Indeed, *every* methodology is incomplete.  

>and any project demonstrating success 
>using some of the techniques is somehow considered +1 to XP even though 
>management activities may have had a far greater impact on project success 
>than XP.  

Fair enough.  On the other hand when you see enough +1s with XP and
enough -1s without XP, you can begin to draw a conclusion. 

>Furthermore, the claim that all documentation should be questioned, if 
>taken at face value, could lead to a breakdown in essential project 
>management activities that very likely will put the 'big picture' project 
>at greater risk than if XP had not been introduced at all.  

A reasonable fear.  But remember, the "customer" (which includes the
entire stakeholder organization) can write a story for any document
that they think is necessary.  They can schedule that story any time
they think the document ought to be produced.  If the company believes
any particular document, or set of documents is important enough, the
XP team will produce it.

>Yet XP keeps its nose clean 
>by saying other factors (namely poor project management) resulted in the big 
>picture failure.

Interestingly enough that's how *every* methodology keeps its nose
clean.  There's always something else to blame.  

-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
9/18/2004 1:36:02 AM
"Robert C. Martin" <unclebob@objectmentor.com> wrote in message 
news:m83nk015vrqoirmi7ro0317o9rn8omcdm4@4ax.com...
> On Fri, 17 Sep 2004 03:02:15 -0600, "kurth" <kurth@avanade.com> wrote:
>
>>The statements I had read -- that in XP, documentation is temporary and/or
>>should not be created -- were an overgeneralization.
>>Documentation is clearly needed continually in the project, from start to
>>finish.
>>We agree this is true, but disagree how much.
>>XP is only concerned with a small subset of the 'big picture' project; it
>>does not address managing ... anything.
>
> No, this is incorrect.  XP is concerned with managing the day-to-day,
> week-to-week, and month-to-month specification, design, delivery,
> deployment, and quality of software.
>
>>XP perspective = As Needed.
>
>>Project management perspective = Essential
>
> There are lots and lots of documents produced in a project that are
> utterly irrelevant.  I'm sure that you don't think that every possible
> document should always be produced.  I'm sure that instead you think
> that there is a reasonable subset of documents that are important to a
> project, and a much smaller subset that are essential.  Perhaps we
> disagree about where to draw the line, but that's just a disagreement
> of degree, rather than essence.


Agreed, sometimes things go overboard, but it's the polar extreme, the 
minimalist view, that I disagree with.
Instead of asking *why was it created*, perhaps the better question might be 
*why are we not using it*.
It is only a waste if it's not used.


>>
>>We agree iterative development, frequent releases, continual integration,
>>and TDD as used in XP have demonstrated success.
>>These are not new techniques, nor unique to XP.  Assert(XP Is Iterative ==
>>true); Assert(Iterative Is XP == false)
>
> Agreed, and agreed.
>>
>>What then is XP?
>
> A set of about a dozen very discrete practices.
>
>>What criteria must a project meet to be considered XP vs. not XP?
>
> I'm not sure why this is an important question, but I suppose that the
> answer is that the project was strongly guided by the values of XP,
> and that the project team practiced all or most of the XP practices.
> Putting absolute numbers on this is a fool's errand IMHO.
>
> I don't know what this has to do with the point about XP and
> documentation.
>


Something must set XP apart from other methodologies?  Or are you renaming 
an old idea?
To test it for success or failure, surely you need some criteria to determine 
*if* XP was even used.
I think parts of XP are fundamentally flawed, and it is doubtful that projects 
claiming to use XP actually adhere to all of XP's core values; therefore, 
regardless of whether a project is a success or a failure, we won't know 
whether that is because of a failure to apply the methodology (for better or 
for worse).

I think XP goes something like this:
+1 to TDD,
+1 to iterative/agile,
+1 continual integration
+/-? (TBD) pair programming
+/- ...
-1 insufficient documentation
-1 continual refactoring (delays delays delays...)
-5 difficult to manage



>>Can a "large" (subjectivly speaking) project satisfiy all criteria of an 
>>XP
>>project? possibly
>
> IMHO, yes.  More is needed, but the project can still be very aligned
> to XP.
>
>>Can all large projects satisfy all criteria of an XP project?
>
> Who cares?  The values of XP, and the intent of the practices can be
> preserved.  The important issue is not whether a project is "XP" or
> "Not XP".  The important point is whether or not the project is done
> in very quick iterations, by a collaborative team using intense
> feedback and customer input.  I would also add TDD to the mix,
> because I think that's an essential ingredient.
>
>>I don't think
>>so (considering the impossibility of shared ownership and pair programming
>>on a multi-site, multi-vendor project).
>
> Those things are not impossible.  There are many companies who have
> managed to solve those problems by breaking the uber-team into many
> co-located subteams.  Though, again, who cares if you can call it XP
> or not.  The spirit is there, even if the letter is not.
>
>>What criteria must be met in an XP project for it to be considered a success
>>or failure?  (e.g. big picture... failure, XP successful)
>
> Good question.  I don't know.  I don't care.  The only thing that
> really matters is whether or not the project succeeds.  And project
> success is only indirectly related to process success.  I think XP or
> Agile methods help.  Indeed, I am convinced that they have a
> significant positive effect on quality, repeatability,
> predictability, and productivity.  That can often be enough to shove a
> project past the "success" line.  Sometimes it's not.


We do not optimize code without first creating a baseline; if the changes 
are not measurable, you cannot claim to have improved anything.
I have an idea -- how about a Test Driven Methodology?  First define 
success, then see if your methodology can accomplish it.
Ours does: before we start a project, we define success using SMART 
(Specific, Measurable, Attainable, Realistic, Time-based) goals.
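
To make the idea concrete, here is a minimal sketch -- the names and 
numbers are hypothetical, not our actual templates -- of success criteria 
expressed as runnable checks, in the spirit of the Assert() quip above:

from dataclasses import dataclass

# Hypothetical sketch only.  Each SMART goal becomes a check that can be
# run at project closure.  "Attainable" and "Realistic" are judgment
# calls made up front, so only the Measurable and Time-based parts are
# asserted here.
@dataclass
class Goal:
    description: str     # Specific
    target: float        # Measurable
    actual: float        # filled in at project closure
    met_deadline: bool   # Time-based

    def achieved(self) -> bool:
        return self.actual >= self.target and self.met_deadline

goals = [
    Goal("Customer satisfaction rating (%)", 100.0, 98.0, True),
    Goal("Acceptance tests passing (%)", 100.0, 100.0, True),
]

for g in goals:
    print(("PASS" if g.achieved() else "FAIL") + ": " + g.description)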

>>
>>IMHO, XP is an incomplete methodology
>
> No argument.  Indeed, *every* methodology is incomplete.


Agreed, however ours is continually improving.  We don't end our project 
after the product has been delivered; project closure is very important and 
often overlooked.  This is the lessons learned and preparation for the 
future.  We ask the questions: what worked well? what did not work well? what 
issues did we have? were they preventable? what are the symptoms? what steps 
can we take after symptoms have appeared? were they addressed properly? .... 
and most importantly we document our findings and update our methodology as 
necessary.  As you might suspect, there is a massive amount of documentation 
in our methodology.  The methodology is very focused on good project 
management, but by no means is it only for project managers; it has specific 
deliverables (deliverables meaning a product of work: code, documents, ...) 
expected from each role: PM (risk, scope, etc.), developers (test plan, 
code), system engineers (test environment, system performance baseline). 
Every project is different and the methodology is a custom fit; many 
deliverables are considered optional, others essential, and some are 
essential only for specific project types such as large scale or multi-site. 
It includes templates, samples, instructions, guidelines, and lessons 
learned.  It is a huge investment, but we don't create all this 
documentation just for the sake of creating it; we believe it is *critical* 
to our success and it is very well maintained and organized.  Every person 
in our organization, regardless of role -- junior, senior, even HR -- must 
attend training on basic project management, consultancy skills, and our 
methodology.  My prior employer had a smattering of basic templates, and 
lessons learned were primarily word of mouth.  The difference is night and 
day.



>
>>and any project demonstrating success
>>using some of the techniques is somehow considered +1 to XP even though
>>management activities may have had a far greater impact on project success
>>than XP.
>
> Fair enough.  On the other hand when you see enough +1s with XP and
> enough -1s with X, you can begin to draw a conclusion.
>
>>Furthermore, the claim that all documentation should be questioned, if
>>taken at face value, could lead to a breakdown in essential project 
>>management activities that very likely will put the 'big picture' project 
>>at greater risk than if XP had not been introduced at all.
>
> A reasonable fear.  But remember, the "customer" (which includes the
> entire stakeholder organization) can write a story for any document
> that they think is necessary.  They can schedule that story any time
> they think the document ought to be produced.  If the company believes
> any particular document, or set of documents is important enough, the
> XP team will produce it.
>
>>Yet XP keeps its nose clean
>>by saying other factors (namely poor project management) resulted in the 
>>big
>>picture failure.
>
> Interestingly enough that's how *every* methodology keeps its nose
> clean.  There's always something else to blame.
>

Absolutely not true.  If the project is not a *complete* success (100% 
customer satisfaction) we have failed, and we take that quite seriously; we 
want to know why a 98% rating was not 100%.  100% on every project is 
idealistic -- it just doesn't happen -- but we get amazingly close.

- Kurt

> -----
> Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
> Object Mentor Inc.            | blog:  www.butunclebob.com
> The Agile Transition Experts  | web:   www.objectmentor.com
> 800-338-6716
>
>
> "The aim of science is not to open the door to infinite wisdom,
> but to set a limit to infinite error."
>    -- Bertolt Brecht, Life of Galileo 


0
kurbylogic (51)
9/18/2004 7:22:59 AM
KurtH wrote:

> Agreed, sometimes things go overboard, but it's the polar extreme, the
> minimalist view, that I disagree with.
> Instead of asking *why was it created*, perhaps the better question might be
> *why are we not using it*.
> It is only a waste if it's not used.

It can also be a waste if it's used and causes trouble.

The cited research of those "big requirements up front" documents shows that
effect. Folks tried to type in all those requirements, and got in trouble
with the complexity.

So take such a document (and its author), sort the features in order of
business priority, start with a few simple features, and restrict their
scope. Now the rest of those documents are not used, yet.

After each iteration, re-evaluate business priority. This culls out more
scope. Eventually, a project could be deploying value without obeying much
of those up-front documents. Then they become not used, and hence waste. But
at least they didn't cause trouble.
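
To illustrate the loop, a minimal sketch -- the story records and value
numbers are invented, not from any real backlog:

# Hypothetical sketch: re-sort by business value each iteration,
# implement only the top story, and leave the rest of the big
# document sitting unused.
stories = [
    {"name": "pay hourly employees", "business_value": 90},
    {"name": "union dues report", "business_value": 40},
    {"name": "fancy login screen", "business_value": 10},
]

iteration = 0
while stories:
    iteration += 1
    # Re-evaluate business priority at the start of every iteration.
    # (In real life the customer re-scores or culls stories here.)
    stories.sort(key=lambda s: s["business_value"], reverse=True)
    story = stories.pop(0)
    print("iteration %d: implement %s" % (iteration, story["name"]))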

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
9/18/2004 11:40:58 AM
On Sat, 18 Sep 2004 01:22:59 -0600, "KurtH" <kurbylogic@hotmail.com>
wrote:


>Something must set XP appart from other methodologies?  Or are you renaming 
>an old idea?

Some of the disciplines are very old.  Others (like TDD) are somewhat
revolutionary (though a few people have practiced them for quite some
time).  What sets XP (and other methods like SCRUM) apart is the
aggregation of these principles into a whole with a set of unifying
values.

>I think XP goes something like this:
>+1 to the TDD,
>+1 to iterative/agile,
>+1 continual integration

OK

>+/-? (TBD) pair programming

The industry seems to agree that this is equivocal -- at least when
practiced 100%.  Personally I prefer pairing when I can possibly do it
(which is not often since I'm on the road so much).  I also chide many
of my clients for not pairing enough and allowing knowledge silos to
build up. (I was recently working with a team who were blocked in a
task.  I asked them who knew about the subsystem.  Their response was:
"Bill".  Of course Bill had just quit, and nobody knew about his
stuff.)

In the end it seems that most teams who really give pairing a decent
shot, wind up using it at a 60%-80% level.

>-1 insufficient documentation

In XP the amount of documentation depends on the needs of the project.
Any needed documentation will be produced.  So the documentation is
always sufficient.  

>-1 continual refactoring (delays delays delays...)

No no no.  Speed, speed, speed!  It's the refactoring that allows you
to go fast.  When you see a mess, you clean it up!  Of course you try
never to make the mess in the first place, but none of us are really
*that* good.  Nobody has *all* the best ideas up front.  When we see
an opportunity to improve, we take it; and that helps us go faster.  

I was recently at a client site, working on some code, and we saw a
chance to remove some duplicate code.  So we took that opportunity,
and spent 20 min. cleaning it up.  Then, some time later we needed to add
a new function to the subsystem.  Sure enough, the refactoring we had
done exposed an obvious chance at using the Template Method pattern.
The change we needed to make just slipped right in, and enabled a
whole cadre of similar changes.  
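
I can't show the client's code, but a generic, hypothetical sketch of
Template Method shows why the new function just slipped right in:

from abc import ABC, abstractmethod

# Hypothetical sketch, not the client's code.  The duplication we
# removed became the invariant skeleton; each new function only has
# to supply the one step that varies.
class SubsystemOperation(ABC):
    def run(self, record):
        self.validate(record)          # shared, formerly duplicated
        result = self.process(record)  # the step that varies
        self.log(result)               # shared, formerly duplicated
        return result

    def validate(self, record):
        if not record:
            raise ValueError("empty record")

    def log(self, result):
        print("done: %s" % result)

    @abstractmethod
    def process(self, record):
        ...

# The new function is now one small subclass.
class Reverse(SubsystemOperation):
    def process(self, record):
        return record[::-1]

Reverse().run("payroll")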

>-5 difficult to manage

XP, and all the agile methods, produce an immense amount of data about
how the team is doing.  We track velocity, quality, build success,
acceptance test success, etc.  This data is the grist that good
project managers need to manage a project.  
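
As a toy example -- the numbers are invented -- here is the kind of
per-iteration record such a team publishes, and the velocity figure a
manager reads off it:

# Invented numbers: a sketch of per-iteration tracking, not real data.
iterations = [
    {"points_done": 21, "acceptance_pass_rate": 0.90, "build_ok": True},
    {"points_done": 24, "acceptance_pass_rate": 0.95, "build_ok": True},
    {"points_done": 19, "acceptance_pass_rate": 0.97, "build_ok": True},
]

velocity = sum(i["points_done"] for i in iterations) / len(iterations)
print("mean velocity: %.1f points per iteration" % velocity)
print("latest acceptance pass rate: %.0f%%"
      % (100 * iterations[-1]["acceptance_pass_rate"]))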


-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
9/18/2004 10:28:11 PM
"Donald Roby" <droby@acm.org> wrote in message news:<pan.2004.09.17.01.30.19.201180@acm.org>...
> On Wed, 15 Sep 2004 22:19:59 -0400, Ronald E Jeffries wrote:
> 
> > On 15 Sep 2004 18:06:16 -0700, sales@leapse.com (Brian S. Smith) wrote:
> > 
> >>With all due respect, I find that one hard to swallow. That's like
> >>saying the old adage "A problem well defined, is a problem half-solved"
> >>is a bunch of nonsense. See the STANDISH CHAOS REPORT in the 90s.
> >>Programs suffering from poor requirement definition were found to have
> >>failed, run over budget, or grossly missed schedule, at an alarmingly
> >>high percentage. Granted the old 2167A DoD and FAA, etc., requirements
> >>guidelines are archaic and inefficient, but that's really taking a polar
> >>extreme as an example. Virtually any project, other than one that has
> >>little or no firm requirements to begin with, will benefit IMMENSELY
> >>from a well-defined set of system specifications. We can discuss what
> >>constitutes a solid requirement, and what does not, but to claim that
> >>such a wealth of information leads to greater program risk is, IMHO,
> >>just not so. Thanks,
> > 
> > It may seem counterintuitive, but in fact it /is/ so. See Larman's book
> > for page after page of referenced detail.
> 
> It doesn't even seem particularly counter-intuitive to me.
> 
> The old adage "A problem well defined, is a problem half-solved" is indeed
> a bunch of nonsense.

Disagree. I liken systems analysis to outlining a novel before writing
it, or architecting a building before ordering the materials, hiring
contractors, and pouring the foundation. I have undertaken projects
where I later regretted not having spent the time up-front doing the
research, defining my objectives, building a strawman--or
storyboarding the scenarios. So much time and effort could have been
saved if I had taken the time to "do it right". So I will always be an
advocate of organization and thorough planning...and while others on
this newsgroup will point to the various strategies and even products
built into the XP methodology, I still feel it passes on the big
picture--which may be fine if the big picture is nebulous.

-- 
Brian S. Smith
Leap Systems
sales@leapse.com
http://www.leapse.com
"Turn requirements into object models, instantly, with Leap SE."
......RAD from the source  


> 
> There are many quite well defined problems that are not only not half
> solved, but that are in fact unsolvable.
0
sales3288 (24)
9/19/2004 3:45:21 AM
Brian,

> I liken systems analysis to outlining a novel before writing
> it, or architecting a building before ordering the materials, hiring
> contractors, and pouring the foundation.

Your chosen set of comparisons is interesting. "Architect before build" 
may be, for the little I know about it, a common feature of the process 
that produced all successful buildings. "Outline before writing" most 
definitely *isn't* a common feature of the process that produced all 
successful novels.

Laurent
http://bossavit.com/thoughts/
0
laurent (379)
9/19/2004 8:11:50 AM
Laurent Bossavit <laurent@dontspambossavit.com> wrote in message news:<MPG.1bb75dab475f4b09897cd@news.noos.fr>...
> Brian,
> 
> > I liken systems analysis to outlining a novel before writing
> > it, or architecting a building before ordering the materials, hiring
> > contractors, and pouring the foundation.
> 
> Your chosen set of comparisons is interesting. "Architect before build" 
> may be, for the little I know about it, a common feature of the process 
> that produced all successful buildings. "Outline before writing" most 
> definitely *isn't* a common feature of the process that produced all 
> successful novels.
> 
> Laurent
> http://bossavit.com/thoughts/

It's the only way I've ever done creative writing, and from what I've
read, many successful novelists and playwrights have used outlines to
structure their work.

Perhaps these work practices come down to simple preferences and
relative aptitudes.  Perhaps some people are simply up-front planners,
and others are do-it-as-you-go types, and the only "wrong" approach is
one that forces a person to work in a way to which he isn't suited. 
Any approach that doesn't allow me to plan, outline, analyze, scheme
and examine the big picture before executing any large effort simply
won't work for me.  For other people, perhaps the attempt to create
such a plan simply hurts more than it helps.

Wouldn't it be nice if a software development process recognized these
differences in cognitive and working styles, and allowed for the
strengths of all these different types to be utilized?  Of course, in
order to do so, such a process would have to allow specialization in
terms of job roles.  Could it simply be that traditional methodologies
haven't offered enough to the do-it-as-you-go types?  If that is the
case, then some newer methodologies have simply erred in the other
direction.


Cy
0
cycoe (74)
9/19/2004 3:48:00 PM
"Phlip" <phlip_cpp@yahoo.com> wrote in message news:<e9V2d.4571$Qv5.1218@newssvr33.news.prodigy.com>...
> KurtH wrote:
> 
> > Agreed sometimes things go overboard, but its the polar extreme, the
> > minimalistic view I disagree with.
> > Instead of asking *why was it created* perhaps the better question might
>  be
> > *why are we not using it*.
> > It is only a waste if its not used.
> 
> It can also be a waste if it's used and causes trouble.
> 
> The cited research of those "big requirements up front" documents shows that
> effect. Folks tried to type in all those requirements, 

....in a single, monolithic, locked-in chunk.  Waterfall's an easy
target, and you guys have been trying to tar all non-agile processes
with that brush for too long now.

> and got in trouble
> with the complexity.

But at least they captured the breadth of the problem space.  

> So take such a document (and its author), sort the features in order of
> business priority, start with a few simple features, and restrict their
> scope. Now the rest of those documents are not used, yet.

But the process of creating the document in the first place
crystallized the wish list in such a way that useful prioritization
can take place.  Not doing that research and analysis before
attempting to create some top ten list of desired features is skipping
a step.  In other words, you need to know what is wanted before you
can figure out what is needed.  Requirements documents become big
because what is wanted is always the larger set, and sorting out the
"nice-to-haves" from the "need-to-haves" is a difficult job, never
mind sorting the "need-to-haves" into a sensible order.

In any complex business application, the features work together to
satisfy business goals.  They are rarely, if ever, independent islands
of business utility.  They are also rarely, if ever, independent
islands of development cost, but that's another discussion.  If a
business has a ten-step process that it's trying to automate, coding
step 5 only adds real business value if steps 1-4 and 6-10 are also
handled, either through automation or manual workaround.  On the other
hand, analyzing the business process and requirements for all ten
steps may reveal things about step 5 that would have been missed had a
more piecemeal requirements approach been taken.
 
> After each iteration, re-evaluate business priority. This culls out more
> scope. Eventually, a project could be deploying value without obeying much
> of those up-front documents. Then they become not used, and hence waste. But
> at least they didn't cause trouble.

"Business priority" in the absence of considering the
synergies/dependencies between features is meaningless.  Prioritizing
a list that you haven't fully researched and assembled is simply
creating an illusion of rigour.


Cy
0
cycoe (74)
9/19/2004 4:12:16 PM
Robert C. Martin wrote:

> KurtH wrote:

> >-1 continual refactoring (delays delays delays...)
>
> No no no.  Speed, speed, speed!  It's the refactoring that allows you
> to go fast.

Right. But it's about preventing any big messes, not cleaning them up.

> When you see a mess, you clean it up!

Practicing refactoring is good, because you will know how to clean something
up.

However, refactoring is part of adding new code. You make a test pass, and
only after you have the safety of a green bar do you refactor - just a
little - to merge the new code with the existing code. Refactoring to merge
permits many more opportunities for more green bars. Designing during a red
bar would make too many edits before returning to a green bar (including
designing on paper).
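
A tiny hypothetical example of that rhythm (the function and figures are
invented):

import unittest

# Step 1, red: write one failing test for the next small behavior.
# Step 2, green: write the simplest code that makes it pass.
# Step 3, refactor on a green bar: tiny edits, such as naming the
#         magic number, merging the new code with the old.

def price_with_tax(amount):
    TAX_RATE = 0.05  # named during the refactor step
    return amount * (1 + TAX_RATE)

class PriceWithTaxTest(unittest.TestCase):
    def test_adds_five_percent(self):
        self.assertAlmostEqual(price_with_tax(100.00), 105.00)

    def test_zero_amount(self):
        self.assertAlmostEqual(price_with_tax(0.00), 0.00)

if __name__ == "__main__":
    unittest.main()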

> I was recently at a client site, working on some code, and we saw a
> chance to remove some duplicate code.  So we took that opportunity,
> and spent 20 min. cleaning it up.

And you had a green bar after every few edits. Those unfamiliar with
refactoring like this are unqualified to say it's slow.

> Then, some time later we needed add
> a new function to the subsystem.  Sure enough, the refactoring we had
> done exposed an obvious chance at using the Template Method pattern.
> The change we needed to make just slipped right in, and enabled a
> whole cadre of similar changes.

Right. Changing code to fit current requirements increases the odds that it
will accept the next requirements _without_ refactoring.

> >-5 difficult to manage
>
> XP, and all the agile methods produce an immense amount of data about
> how the team is doing.  We track velocity, quality, build success,
> acceptance test success, etc.  This data is the grist that good
> project managers need to manage a project.

Those who have managed XP projects have found it incredibly easy to manage.

Explaining XP is hard. Doing it is surprisingly easy.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
9/19/2004 4:28:48 PM
"Justin Farley" <null@void.com> wrote in message news:<4_n2d.893161$ic1.89758@news.easynews.com>...
> Cy Coe wrote:
> 
> > If the Onsite Customer lacks the skills to properly analyze the
> > problem domain (simply having been a worker in the domain doesn't
> > deliver those skills) then they, like you, will simply be muddling
> > their way through.
> 
> I agree with this, but the implication is merely that XP's "Customer Role"
> needs to be a highly skilled individual.  No less skilled than, and not
> dissimilar to, the traditional "Software Analyst".

Except that some XP commentators seem to believe that having detailed
first-hand business knowledge makes those skills less important or
completely unnecessary for a Customer.  I believe the opposite - that
the ideal Customer would be someone very much like a
software/systems/business analyst, and that specific domain knowledge,
while undoubtedly valuable, is not as important as the analytical
skills.

> The difference as I see it in XP/Agile methodology is...
> 
> Firstly, the analyst/customer begins onsite with the development team early,
> working and learning in parallel with the developers.  Then stays with team
> during most of the life-cycle, as opposed to swiftly moving on to the next
> "interesting" project shortly after coding begins.

I have no problem with this, as long as the customer can do some
research and analysis on features that aren't on the current coding
hopper.

> Secondly, the analyst/customer is a specialist in a particular domain.  In
> other words, they are finance, manufacturing, health, etc. experts first,
> and IT experts second.  (This is just an efficient division of skills, and
> not something XP advocates explicitly AFAICT, just something I've inferred.)

And that's the problem I have with the XP view.  A good analyst can
learn about a business domain more quickly than a business worker can
acquire requirements analysis skills.  Simply knowing about a domain
isn't enough for someone to determine software requirements for it. 
That knowledge is certainly helpful to the process, and I'm not
opposed to the Customer team having end-users on it, but I've seen the
folly of turning over all requirements responsibilities to end-users
first hand.

> Domain driven, rather than IT driven.

Unless you have people with analysis skills on the Customer team,
there will be no "driving" at all, but rather a random walk.


Cy
0
cycoe (74)
9/19/2004 6:54:52 PM
Laurent Bossavit <laurent@dontspambossavit.com> wrote:

> Brian,
> 
> > I liken systems analysis to outlining a novel before writing
> > it, or architecting a building before ordering the materials, hiring
> > contractors, and pouring the foundation.
> 
> Your chosen set of comparisons is interesting. "Architect before build" 
> may be, for the little I know about it, a common feature of the process 
> that produced all successful buildings. "Outline before writing" most 
> definitely *isn't* a common feature of the process that produced all 
> successful novels.

Though it seems roughly a third to half of authors do outline.  An
outline simply lists the major points and their relevant sub-points.
How could that not be a widely used practice when writing a novel?

Elliott
-- 
"'Business priority' in the absence of considering the
synergies/dependencies between features is meaningless.  Prioritizing
a list that you haven't fully researched and assembled
is simply creating an illusion of rigour."   ~ Cy Coe
0
Universe
9/19/2004 7:26:06 PM
Cy,

> It's the only way I've ever done creative writing, and from what I've
> read, many successful novelists and playwrights have used outlines to
> structure their work.

Undeniably. Many, but by no means all - authors are apparently all over 
the map with respect to using outlines; the following sample hints at a 
broad range of differences:
  http://raxilach.notlong.com

I suspect, too, that authors may outline at various points in their 
work, for instance to pin down structure after writing a first draft. 
This gives us a further distinction between "outlining at all" and 
"outlining before writing the first line", the latter being what Brian 
likened systems analysis to.

Complicating the picture, creative writing is in most cases an 
individual effort, whereas quite a lot of software work is supposed to 
involve teams, or at least groups.

Cognitive styles within the group will usually not be homogenous - 
probably *cannot* be homogenous along more than one or two dimensions: 
if the group has been selected among, say, people who are uniformly 
strong introverts, that particular trait is likely to be the result of 
very different quirks in each person.

> Wouldn't it be nice if a software development process recognized these
> differences in cognitive and working styles, and allowed for the
> strengths of all these different types to be utilized?

For me, the suggestion raises more questions than it brings answers. 
(Not necessarily a bad thing...) Yes, it would be nice, but would it 
look anything like "processes" currently on offer ? Do we have to hold 
out hope for a single "process" that is able to reconfigure itself in 
that way, for any combination of styles within one group ?

What do you think would characterize such a process ?

> Of course, in order to do so, such a process would have to allow
> specialization in terms of job roles.

Why "of course" ?

> Could it simply be that traditional methodologies haven't offered
> enough to the do-it-as-you-go types?  If that is the case, then
> some newer methodologies have simply erred in the other direction.

Reminds me of a favorite quote, "Truth emerges more readily from error 
than from confusion." If we keep erring in the *same* direction, when 
are we ever going to learn anything ? Erring in the other direction (if 
that's indeed what these newer methods do) may seem like the best way to 
illuminate how to find the appropriate balance. At least it gives us a 
sense of what the directions are.

For instance Extreme Programming, to focus on one of these "newer" 
methods, does have quite a bit of planning and outlining built into it, 
especially compared to widespread "code first and tweak until it works" 
approaches. What it doesn't have is a lot of formal examining of the 
"big picture" in solution space. I suspect that these two directions 
were conflated with each other in previous styles of process 
description, and that we may learn, thanks to XP, that processes can 
usefully vary along these two distinct dimensions.

Laurent
http://bossavit.com/thoughts/
0
laurent (379)
9/20/2004 8:32:33 AM
"KurtH" <kurbylogic@hotmail.com> wrote in
news:r42dnSfc1JzLf9bcRVn-sw@comcast.com: 

> -5 difficult to manage

What is your take on "manage"?

I ask since I've seen different people having very different ideas on what 
managing a software project means.

> Every person in our organization, 
> regardless of role -- junior, senior, even HR -- must attend training on
> basic project management, consultancy skills, and our methodology.  My
> prior employer had a smattering of basic templates, and lessons
> learned were primarily word of mouth.  The difference is night and
> day. 

Indeed it is. But isn't that orthogonal to the methodology?
What you're saying is that, if everyone's given the basic ideas and 
rationales on why things are done the way they are done, the entire process 
benefits. That's true no matter the process. Try to do a cooking recipe in 
two without first talking over the ingredients.. 

0
9/20/2004 1:19:19 PM
cycoe@hotmail.com (Cy Coe) wrote:

> Laurent Bossavit <laurent@dontspambossavit.com> wrote in message news:<MPG.1bb75dab475f4b09897cd@news.noos.fr>...
> > Brian,
> > 
> > > I liken systems analysis to outlining a novel before writing
> > > it, or architecting a building before ordering the materials, hiring
> > > contractors, and pouring the foundation.
> > 
> > Your chosen set of comparisons is interesting. "Architect before build" 
> > may be, for the little I know about it, a common feature of the process 
> > that produced all successful buildings. "Outline before writing" most 
> > definitely *isn't* a common feature of the process that produced all 
> > successful novels.
> > 
> > Laurent
> > http://bossavit.com/thoughts/
> 
> It's the only way I've ever done creative writing, and from what I've
> read, many successful novelists and playwrights have used outlines to
> structure their work.
> 
> Perhaps these work practices come down to simple preferences and
> relative aptitudes.  Perhaps some people are simply up-front planners,
> and others are do-it-as-you-go types, and the only "wrong" approach is
> one that forces a person to work in a way to which he isn't suited. 

The problem is that typically "do-it-as-you-go types" also:
	` fail to take a systems - all around big picture - approach to
investigation and design.  They generally fail to view any one issue in
the light of all related key issues.
	` fail to take care of project macro issues like creating adequate
documentation
	` fascinatingly, I've found they tend to explicitly place parts above
wholes in Part/Whole entities
	` tend to inefficiently and uneconomically "reinvent the wheel"
rather than attempting to reuse and/or build off of prior related work
	` tend to view things from a code-centric stance

Elliott
-- 
"'Business priority' in the absence of considering the
synergies/dependencies between features is meaningless.  Prioritizing
a list that you haven't fully researched and assembled
is simply creating an illusion of rigour."   ~ Cy Coe    :- }
0
universe5 (202)
9/20/2004 9:50:31 PM
Elliott,

> The problem is that typically "do-it-as-you-go-types" also:

Wonderful list - thanks for posting it. I assume you are a "big 
picture" person, so I'm sure you will have no difficulty completing it 
with the other one - the one that lists the main failure modes of "big 
picture" thinkers. Please post it here - you wouldn't want to leave us 
with an incomplete understanding, would you ?

Laurent
http://bossavit.com/thoughts/
0
laurent (379)
9/21/2004 7:08:29 AM
"Robert C. Martin" <unclebob@objectmentor.com> wrote in message
news:9b0jk0dhlq2tcmb5bnoqukujim73qpsgi1@4ax.com...
> On Wed, 15 Sep 2004 22:52:36 -0600, "kurth" <kurth@avanade.com> wrote:
>
> C3 was a project that
> failed.  ...  I advise you, however, to consider whether there might
> have been other factors.  Certainly the developers on the C3 team do
> not associate the failure with XP.
>
That's a shame.  When a project fails, the only helpful, the only useful
point of view to take is that the failure is attributable to the team, to
the practices, the judgements, and the approaches, and to examine them
dispassionately.  It's not the only defensible point of view, it's often
defensible to point to the "other", but it's the only point of view that is
useful, the only one that will result in the practices, the judgements, and
the approaches getting better.  It's always possible to say it failed for
these other reasons, it was those other guys, but doing so overlooks the
fact that many, many projects have succeeded in delivering real business
value under the most adverse circumstances, with people who were not the
best, and with lots of reasons why the project should have failed.

What makes the newsgroup discussions of C3 so empty is that the participants
closest to the scene were generally unwilling to enter into any meaningful
discussion, and contented themselves with standard rhetorical responses,
such as "if you haven't tried XP, you can't comment on XP", and the like.
About the only thing we got out of those discussions is that the project
took an awfully long time.

Fortunately, your claim is not completely true, since one of the more
thoughtful members of the team wrote an interesting article on some
difficulties associated with the XP customer model.  We can learn from that.
There should be a lot more of that.

Regards,
Daniel Parker


0
Daniel
9/21/2004 10:59:03 AM
"Daniel Parker" <danielaparker@spam?nothanks.windupbird.com> wrote in 
news:gTT3d.13210$pA.812066@news20.bellglobal.com:

> When a project fails, the only helpful, the only useful
> point of view to take is that the failure is attributable to the team, to
> the practices, the judgements, and the approaches, and to examine them
> dispassionately. 

What makes you think that examination hasn't been carried out?
After such an analysis is done, one should be able to say to what 
degree the methodology has had an impact or not.

From reading a little about that project, it seems reasonably clear to me 
that the participants haven't jumped to conclusions, but have indeed 
analyzed the failure; and they explicitly indicate as main culprits some 
management occurrences and missing or declining stakeholder interest and 
responsibility.

I wasn't there, so I can't say for sure. But that, it seems to me, is the 
analysis that the developers who were there have done.

Of course you can say that you don't believe them. But then you should say 
why. Or you can point to faults in their analysis, and illustrate how your 
alternatives would have performed better.

For me, I don't know of any process which could save a project where, for 
example, the amount of work actually necessary is willingly not disclosed to 
the stakeholders -- who are given an imaginary "pretty" figure instead, 
until it becomes obviously unattainable.

The only serious question is whether another process could have brought, for 
example, this figure down, and how, and why.

Otherwise it's just noise.
0
9/21/2004 12:52:48 PM
On Tue, 21 Sep 2004 06:59:03 -0400, "Daniel Parker"
<danielaparker@spam?nothanks.windupbird.com> wrote:

>"Robert C. Martin" <unclebob@objectmentor.com> wrote in message
>news:9b0jk0dhlq2tcmb5bnoqukujim73qpsgi1@4ax.com...
>> On Wed, 15 Sep 2004 22:52:36 -0600, "kurth" <kurth@avanade.com> wrote:
>>
>> C3 was a project that
>> failed.  ...  I advise you, however, to consider whether there might
>> have been other factors.  Certainly the developers on the C3 team do
>> not associate the failure with XP.
>>
>That's a shame.  When a project fails, the only helpful, the only useful
>point of view to take is that the failure is attributable to the team, to
>the practices, the judgements, and the approaches, and to examine them
>dispassionately.  

Agreed.  So let me rephrase: After a serious amount of soul-searching
and examination of many aspects of the project, the developers that I
have spoken to cannot attribute the failure of C3 to XP.  These people
may have missed a connection; however, there are other hypotheses for
the failure that appear much more probable.

>What makes the newsgroup discussions of C3 so empty is that the participants
>closest to the scene were generally unwilling to enter into any meaningful
>discussion, 

This statement, and the following, seem to be at odds with each other.
Indeed, the conversation around C3 was quite loud and prolific.  Much
of the fodder that Rosenberg used for his "book" came from those
discussions.  Ron and Chet were then, and are now, very candid about
C3.
>
>Fortunately, your claim is not completely true, since one of the more
>thoughtful members of the team wrote an interesting article on some
>difficulties associated with the XP customer model.  We can learn from that.
>There should be a lot more of that.

Look at the papers that were submitted to the many XP/Agile
conferences over the last four years.  Many of them discuss the very
issues you are interested in.  There's a lot of critical thinking
going on within the Agile/XP community.
-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
9/21/2004 1:37:27 PM
Cristiano Sadun <cristianoTAKEsadun@THIShotmailOUT.com> wrote in message news:<Xns956B975E66BC9Sadun@212.45.188.38>...
> "Daniel Parker" <danielaparker@spam?nothanks.windupbird.com> wrote in 
> news:gTT3d.13210$pA.812066@news20.bellglobal.com:
> 
> > When a project fails, the only helpful, the only useful
> > point of view to take is that the failure is attributable to the team, to
> > the practices, the judgements, and the approaches, and to examine them
> > dispassionately. 
> 
> What makes you think that examination hasn't been carried out?
> After such an analysis is done, one should be able to say to what 
> degree the methodology has had an impact or not.
> 
> From reading a little about that project, it seems reasonably clear to me 
> that the participants haven't jumped to conclusions, but have indeed 
> analyzed the failure; and they explicitly indicate as the main culprits some 
> management occurrences and missing or declining stakeholder interest and 
> responsibility.
> 
> I wasn't there, so I can't say for sure. But that, it seems to me, is the 
> analysis that the developers who were there have made. 
> 
> Of course you can say that you don't believe them. But then you should say 
> why. Or you can point to faults in their analysis, and illustrate how your 
> alternatives would have performed better.
> 
My comments were based strictly on the discussions in the newsgroups,
particularly the XP group, and I think they accurately reflect the
tenor of those discussions.  I didn't see anything that could usefully
be called analysis.

Regards,
Daniel Parker
0
9/21/2004 6:30:03 PM
danielaparker@hotmail.com (Daniel Parker) wrote:

> Cristiano Sadun <cristianoTAKEsadun@THIShotmailOUT.com> wrote in message news:<Xns956B975E66BC9Sadun@212.45.188.38>...
> > "Daniel Parker" <danielaparker@spam?nothanks.windupbird.com> wrote in 
> > news:gTT3d.13210$pA.812066@news20.bellglobal.com:
> > 
> > > When a project fails, the only helpful, the only useful
> > > point of view to take is that the failure is attributable to the team, to
> > > the practices, the judgements, and the approaches, and to examine them
> > > dispassionately. 
> > 
> > What makes you think that examination hasn't been carried out?
> > After such an analysis is done, one should be able to say to what 
> > degree the methodology has had an impact or not.
> > 
> > From reading a little about that project, it seems reasonably clear to me 
> > that the participants haven't jumped to conclusions, but have indeed 
> > analyzed the failure; and they explicitly indicate as the main culprits some 
> > management occurrences and missing or declining stakeholder interest and 
> > responsibility.

But o'course no XP development team failure and responsibility.  Nawww,
nope, uh-uh, "not meee!",  "look at 'em, not us", ....  We're your good
old reliable "XP be defect free" team.

Elliott


> > 
> > I wasn't there, so I can't say for sure. But that, it seems to me, is the 
> > analysis that the developers who were there have made. 
> > 
> > Of course you can say that you don't believe them. But then you should say 
> > why. Or you can point to faults in their analysis, and illustrate how your 
> > alternatives would have performed better.
> > 
> My comments were based strictly on the discussions in the newsgroups,
> particularly the XP group, and I think they accurately reflect the
> tenor of those discussions.  I didn't see anything that could usefully
> be called analysis.
> 
> Regards,
> Daniel Parker

-- 
Theory Leads, Practice Verifies

 Profiteer US Out of Iraq Now!
0
universe5 (202)
9/21/2004 7:15:03 PM
danielaparker@hotmail.com (Daniel Parker) wrote in 
news:33feb190.0409211030.3d34fccf@posting.google.com:

> My comments were based strictly on the discussions in the newsgroups,
> particularly the XP group, and I think they accurately reflect the
> tenor of those discussions.  I didn't see anything that could usefully
> be called analysis.
> 

Curious; but I'll take your word for it, so ok. 

I looked at some wiki pages - for example, at 
http://c2.com/cgi/wiki?CthreeProjectTerminated and 
http://c2.com/cgi/wiki?ChryslerComprehensiveCompensation - and there's some 
fact reporting from various participants (among much chatter, of course, 
but that's a wiki).

They reported their takes on what they think are the reasons for failure, 
which amount to:

 - artificial manipulation of effort estimates by a manager, and the 
   unwillingness of some team members to perform heroics (which is not 
   something I can blame anybody for)
 - disappearance of the original backers of the project, both financial 
   and functional
 - lack of interest by their replacements.

Again, unless you have reasons to deny these claims, I don't see *any* 
project surviving in these circumstances, regardless of the process.

(Please note that I'm not a particular XP backer in any way. I just find 
some of its messages interesting since they resonate with situations and 
problems I've faced in a few years of working in the industry. I 
routinely use RUP-style iterative/incremental methodologies and love UML 
sketches).

In the C3 case - as always - the question I ask myself is: "had my butt 
been on the line, what could I have done to ensure another outcome?"
The answer is a sad "not much".

If you have a different one which you think is viable, I'd love to hear 
it - I would learn something.
0
9/22/2004 9:09:53 AM
Cristiano Sadun <cristianoTAKEsadun@THIShotmailOUT.com> wrote in message news:<Xns956C71937839ESadun@212.45.188.38>...
> danielaparker@hotmail.com (Daniel Parker) wrote in 
> 
> I looked at some wiki pages - for example, at 
> http://c2.com/cgi/wiki?CthreeProjectTerminated and 
> http://c2.com/cgi/wiki?ChryslerComprehensiveCompensation - and there's some 
> fact reporting from various participants (among much chatter, of course, 
> but that's a wiki).
> 
> They reported their takes on what they think are the reasons for failure, 
> which amount to:
> 
>  - artificial manipulation of effort estimates by a manager, and the 
>    unwillingness of some team members to perform heroics (which is not 
>    something I can blame anybody for)
>  - disappearance of the original backers of the project, both financial 
>    and functional
>  - lack of interest by their replacements.
> 
> Again, unless you have reasons to deny these claims, I don't see *any* 
> project surviving in these circumstances, regardless of the process.
> 
This is precisely my point: these are the kinds of reasons that people
come up with who do not want to take responsibility for failure
themselves.  Regardless of their validity, focusing on these kinds of
things is not helpful; it does not lead to the kind of introspection
that is necessary to become better at delivering results in an
imperfect world where the probability of failure is never zero. Many
projects have survived artificial manipulation of effort estimates,
and if new management lacked interest in the project, the question
becomes what could the project leadership have done differently that
would have attracted that management interest? How could they have
justified their existence better in terms of delivering business
value?

Regards,
Daniel Parker
0
9/22/2004 6:37:14 PM
Cy Coe wrote:
> "Justin Farley" <null@void.com> wrote in message
> news:<4_n2d.893161$ic1.89758@news.easynews.com>...

>> Secondly, the analyst/customer is a specialist in a particular
>> domain.  In other words, they are finance, manufacturing, health,
>> etc. experts first, and IT experts second.  (This is just an
>> efficient division of skills, and not something XP advocates
>> explicitly AFAICT, just something I've inferred.)
>
> And that's the problem I have with the XP view. [...]

To clarify, that second point is _not_ the XP view AFAIK.  It is my personal
opinion about the analyst/customer role.

> [...] A good analyst can
> learn about a business domain more quickly than a business worker can
> acquire requirements analysis skills.  Simply knowing about a domain
> isn't enough for someone to determine software requirements for it.
> That knowledge is certainly helpful to the process, and I'm not
> opposed to the Customer team having end-users on it, but I've seen the
> folly of turning over all requirements responsibilities to end-users
> first hand.

Yes, domain knowledge alone is not enough to determine software
requirements.  But I am talking about a team, not an individual.

Your notion that "business workers" are less able to acquire analysis skills
is ignoring their ability to contribute.  Why do "software analysts" need to
learn about a business domain, when an intelligent "domain expert" on the
team can contribute all the domain knowledge needed?

I don't see it as "turning over all requirements responsibilities to the
end-users", although it does place /more/ responsibility on the end-users
(or rather, the end-users' representative aka domain expert aka customer
role).  It is just a more efficient allocation of skills, and nobody has to
learn anything more quickly than anybody else, they just need to be in the
same room as each other for the whole life-cycle.

--
Justin


0
null38 (68)
9/22/2004 11:16:52 PM
danielaparker@hotmail.com (Daniel Parker) wrote:

> Cristiano Sadun <cristianoTAKEsadun@THIShotmailOUT.com> wrote in message news:<Xns956C71937839ESadun@212.45.188.38>...
> > danielaparker@hotmail.com (Daniel Parker) wrote in 
> > 
> > I looked at some wiki pages - for example, at 
> > http://c2.com/cgi/wiki?CthreeProjectTerminated and 
> > http://c2.com/cgi/wiki?ChryslerComprehensiveCompensation - and there's some 
> > fact reporting from various participants (among much chatter, of course, 
> > but that's a wiki).
> > 
> > They reported their takes on what they think are the reasons for failure, 
> > which amount to:
> > 
> >  - artificial manipulation of effort estimates by a manager, and the 
> >    unwillingness of some team members to perform heroics (which is not 
> >    something I can blame anybody for)
> >  - disappearance of the original backers of the project, both financial 
> >    and functional
> >  - lack of interest by their replacements.
> > 
> > Again, unless you have reasons to deny these claims, I don't see *any* 
> > project surviving in these circumstances, regardless of the process.
 
> This is precisely my point: these are the kinds of reasons that people
> come up with who do not want to take responsibility for failure
> themselves.  Regardless of their validity, focusing on these kinds of
> things is not helpful; it does not lead to the kind of introspection
> that is necessary to become better at delivering results in an
> imperfect world where the probability of failure is never zero. 

While everyone knows I'm not keen on XP (to say the least :- ), but
the reasons seem like pretty heavy "showstoppers".

I must state that, I don't know if they were in fact the real reasons
for canceling the project.

Further, given the piecemeal, hackerish nature of XP, there's every
good reason for the project not to have made it given that deleterious
nature.

> Many
> projects have survived artificial manipulation of effort estimates,
> and if new management lacked interest in the project, the question
> becomes what could the project leadership have done differently that
> would have attracted that management interest? How could they have
> justified their existence better in terms of delivering business
> value?

I do think it's valid for you to question the actual extent to which
these factors did play in the project's downfall.

I really would like to see the effort appraised by 2-3 3rd parties:
non-Chrysler, non-XP.

It's significant that XP's promoters can point to few other notable XP
efforts, unlike the much more widely adopted RUP.

Elliott
-- 
"'Business priority' in the absence of considering the
synergies/dependencies between features is meaningless.  Prioritizing
a list that you haven't fully researched and assembled
is simply creating an illusion of rigour."   ~ Cy Coe
0
Universe
9/22/2004 11:49:43 PM
Justin Farley wrote:

> >> Secondly, the analyst/customer is a specialist in a particular
> >> domain.  In other words, they are finance, manufacturing, health,
> >> etc. experts first, and IT experts second.  (This is just an
> >> efficient division of skills, and not something XP advocates
> >> explicitly AFAICT, just something I've inferred.)
> >
> > And that's the problem I have with the XP view. [...]
>
> To clarify, that second point is _not_ the XP view AFAIK.  It is my
> personal opinion about the analyst/customer role.

The XP view is that the Onsite Customer has business-side authorization to
schedule user stories and write their test specifications.

It's up to the business side to select finance, manufacturing, or health
experts if they need them. (It's also up to the programmers to report any
troubles they perceive in the process..;-)

So White Book XP cannot declare all Onsite Customers must be domain experts,
but real life tends to lead that way.

> I don't see it as "turning over all requirements responsibilities to the
> end-users", although it does place /more/ responsibility on the end-users
> (or rather, the end-users' representative aka domain expert aka customer
> role).  It is just a more efficient allocation of skills, and nobody has
> to learn anything more quickly than anybody else, they just need to be in
> the same room as each other for the whole life-cycle.

Ah, but if I pretend I know of a software process that can somehow work
without a customer, expert, or analyst, I can tease you about it for many
many posts.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
9/23/2004 12:38:57 AM
danielaparker@hotmail.com (Daniel Parker) wrote in 
news:33feb190.0409221037.58299e0a@posting.google.com:

> This is precisely my point: these are the kinds of reasons that people
> come up with who do not want to take responsibility for failure
> themselves.  

Well, I'd agree if they were talking generically. But these two things 
seem pretty good reasons - not just generic coverups. Of course, you're 
entitled to a different opinion - no problem there. I only asked since I 
wondered if you had any idea on how to tackle that kind of situation.

> Regardless of their validity, focusing on these kinds of
> things is not helpful; it does not lead to the kind of introspection
> that is necessary to become better at delivering results in an
> imperfect world where the probability of failure is never zero.

I don't understand. For any given methodology, there are internal factors 
(i.e. the ones supposedly tackled and controlled by the methodology) and 
external factors. Trying to extend a methodology to tackle all possible 
external factors is, imho, a hopeless perspective.

> Many
> projects have survived artificial manipulation of effort estimates,
> and if new management lacked interest in the project,

Maybe so, even if I haven't seen many. Still, is this something that 
should be addressed by a *software development methodology* (SDM)? It can 
be that the charm and communication skills of a team leader allow 
him/her to sell bullshit and go on thriving (gosh, did someone say 
politicians? :-). Still, this seems to me well outside the scope of any 
methodology.

> the question
> becomes what could the project leadership have done differently that
> would have attracted that management interest? How could they have
> justified their existence better in terms of delivering business
> value?

Sure. But that question does not pertain to a SDM, does it? 
0
9/23/2004 7:50:02 AM
"Cristiano Sadun" <cristianoTAKEsadun@THIShotmailOUT.com> wrote in message
news:Xns956D640913B89Sadun@212.45.188.38...
> danielaparker@hotmail.com (Daniel Parker) wrote in
> news:33feb190.0409221037.58299e0a@posting.google.com:
>

> > the question
> > becomes what could the project leadership have done differently that
> > would have attracted that management interest? How could they have
> > justified their existence better in terms of delivering business
> > value?
>
> Sure. But that question does not pertain to a SDM, does it?

But SDM is only ever peripherally a factor in success/failure anyway.
Success/failure is ultimately a management issue, including the immediate
project leads and managers, and the professional responsibility of
developers to raise things that don't make sense with management.  SDM only
becomes a factor when it imposes rigidity where flexibility and judgement
are called for.

Regards,
Daniel Parker


0
Daniel
9/23/2004 1:06:53 PM
"Daniel Parker" <danielaparker@spam?nothanks.windupbird.com> wrote in
news:fXz4d.24460$bL1.1056522@news20.bellglobal.com: 

> But SDM is only ever peripherally a factor in success/failure anyway.
> Success/failure is ultimately a management issue, including the
> immediate project leads and managers, and the professional
> responsibility of developers to raise things that don't make sense
> with management.  SDM only becomes a factor when it imposes rigidity
> where flexibility and judgement are called for

Couldn't agree more.

It was exactly this that left me puzzled when I read your first 
post - where you seemed to imply that XP was the main culprit and the 
"other" reasons (management-related) were just used as a cover.

In particular, you answered 

>Certainly the developers on the C3 team do
> not associate the failure with XP.

with

>That's a shame. [...] It's always possible to say it failed for
>these other reasons, it was those other guys [...]

I re-read the post now, and the positioning of your sentence may have been 
accidental. If you were simply advocating that XP (or any other 
methodology) is not enough for ensuring the success of any project, I 
apologize for the misunderstanding. 
0
9/23/2004 1:40:37 PM
Cristiano Sadun <cristianoTAKEsadun@THIShotmailOUT.com> wrote in message news:<Xns956D9F79F5844Sadun@212.45.188.38>...
> "Daniel Parker" <danielaparker@spam?nothanks.windupbird.com> wrote in
> news:fXz4d.24460$bL1.1056522@news20.bellglobal.com: 
> 
> In particular, your answering to 
> 
> >Certainly the developers on the C3 team do
> > not associate the failure with XP.
> 
> with
> 
> >That's a shame. [...] It's always possible to say it failed for
> >these other reasons, it was those other guys [...]
> 
> I re-read the post now, and the positioning of your sentence may have been 
> accidental.

Oh, I see, yes, well, I blame RCM, who should have worded his
assertion somewhat differently to better fit with my response :-)

Regards,
Daniel Parker
0
9/23/2004 9:57:01 PM
"Justin Farley" <null@void.com> wrote in message news:<EJn4d.1889823$y4.325489@news.easynews.com>...
> Cy Coe wrote:
> > "Justin Farley" <null@void.com> wrote in message
> > news:<4_n2d.893161$ic1.89758@news.easynews.com>...
>  
> >> Secondly, the analyst/customer is a specialist in a particular
> >> domain.  In other words, they are finance, manufacturing, health,
> >> etc. experts first, and IT experts second.  (This is just an
> >> efficient division of skills, and not something XP advocates
> >> explicitly AFAICT, just something I've inferred.)
> >
> > And that's the problem I have with the XP view. [...]
> 
> To clarify, that second point is _not_ the XP view AFAIK.  It is my personal
> opinion about the analyst/customer role.

It seems to fit pretty well with what I've read from XP gurus here.
 
> > [...] A good analyst can
> > learn about a business domain more quickly than a business worker can
> > acquire requirements analysis skills.  Simply knowing about a domain
> > isn't enough for someone to determine software requirements for it.
> > That knowledge is certainly helpful to the process, and I'm not
> > opposed to the Customer team having end-users on it, but I've seen the
> > folly of turning over all requirements responsibilities to end-users
> > first hand.
> 
> Yes, domain knowledge alone is not enough to determine software
> requirements.  But I am talking about a team, not an individual.

Well, XP didn't originally talk about a team.  It talked about a
single, monolithic Customer, specifically a business decision maker
with extensive domain knowledge but no IT background.  The "team"
thing (with its implicit support for business analyst and QA roles)
apparently came along later, perhaps as a result of organizations
balking at the idea of firing a bunch of skilled analysts and testers
and replacing them with line-area middle managers or software end
users.  The original intent was clearly to do away with non-coding
specialists on software projects, and replace them with a combination
of programmers and business subject-matter experts.

> Your notion that "business workers" are less able to acquire analysis skills
> is ignoring their ability to contribute.

Not at all.  I have no problem with such people being on the team. 
Their knowledge is important, and valuable.  But shaping that
knowledge into a form useful for specifying software requirements is a
skill that most "business workers" (and, I'd argue, most programmers)
do not have.

> Why do "software analysts" need to
> learn about a business domain, when an intelligent "domain expert" on the
> team can contribute all the domain knowledge needed.

Because they can structure and model that domain in a way that
facilitates the development of an automated solution.  A typical
"domain expert" will tend to simply want to "pave the cow path", to
computerize a manual process or recreate a legacy system.

> I don't see it as "turning over all requirements responsibilities to the
> end-users", although it does place /more/ responsibility on the end-users
> (or rather, the end-users' representative aka domain expert aka customer
> role).  It is just a more efficient allocation of skills, and nobody has to
> learn anything more quickly than anybody else, they just need to be in the
> same room as each other for the whole life-cycle.

If just being in a room together and talking were a sure-fire way to
accomplish something, meetings wouldn't have the bad name they have. 
And the longer you have that domain expert with you full time instead
of doing his regular job, the more stale his knowledge will be.  If
business requirements really change as fast as XP'ers claim they do,
do you really want such an important person out of touch with the
business for that long?


Cy
0
cycoe (74)
9/25/2004 4:37:13 PM
Cy Coe wrote:
> "Justin Farley" <null@void.com> wrote in message
> news:<EJn4d.1889823$y4.325489@news.easynews.com>...

>> Yes, domain knowledge alone is not enough to determine software
>> requirements.  But I am talking about a team, not an individual.
>
> Well, XP didn't originally talk about a team.  It talked about a
> single, monolithic Customer, specifically a business decision maker
> with extensive domain knowledge but no IT background.  The "team"
> thing (with its implicit support for business analyst and QA roles)
> apparently came along later, perhaps as a result of organizations
> balking at the idea of firing a bunch of skilled analysts and testers
> and replacing them with line-area middle managers or software end
> users.  The original intent was clearly to do away with non-coding
> specialists on software projects, and replace them with a combination
> of programmers and business subject-matter experts.

I don't know what XP originally talked about.  I'll leave it to others to
dispute the above, but I suspect the idea that "XP didn't originally talk
about a team" will be treated with a big sigh.  I can only comment on my own
opinion...

>> Your notion that "business workers" are less able to acquire
>> analysis skills is ignoring their ability to contribute.
>
> Not at all.  I have no problem with such people being on the team.
> Their knowledge is important, and valuable.  But shaping that
> knowledge into a form useful for specifying software requirements is a
> skill that most "business workers" (and, I'd argue, most programmers)
> do not have.

I agree.  But again, your only point is that we need intelligent domain
experts and intelligent IT experts on the team.  My point is that the
ideal/practical division of skills is to have both on the team communicating
with each other, rather than have domain experts learning IT skills or
vice versa.

>> Why do "software analysts" need to
>> learn about a business domain, when an intelligent "domain expert"
>> on the team can contribute all the domain knowledge needed?
>
> Because they can structure and model that domain in a way that
> facilitates the development of an automated solution.  A typical
> "domain expert" will tend to simply want to "pave the cow path", to
> computerize a manual process or recreate a legacy system.

Not if software analysts specialize, and provide a more efficient role as
domain experts rather than IT experts.  There is no such thing as a
"typical" domain expert when I advocate a change in that role.

>> I don't see it as "turning over all requirements responsibilities to
>> the end-users", although it does place /more/ responsibility on the
>> end-users (or rather, the end-users' representative aka domain
>> expert aka customer role).  It is just a more efficient allocation
>> of skills, and nobody has to learn anything more quickly than
>> anybody else, they just need to be in the same room as each other
>> for the whole life-cycle.
>
> If just being in a room together and talking were a sure-fire way to
> accomplish something, meetings wouldn't have the bad name they have.
> And the longer you have that domain expert with you full time instead
> of doing his regular job, the more stale his knowledge will be.  If
> business requirements really change as fast as XP'ers claim they do,
> do you really want such an important person out of touch with the
> business for that long?

The domain expert's regular job *is* working with IT experts.  I don't see
how that will result in his knowledge becoming stale, since he will continue
to work in the same domain.  Quite the opposite, and this is my whole point
about the most efficient division of skills.  By specializing in a domain
(with good IT skills), he is far more valuable than a general IT analyst who
thinks he can pick up any domain quickly.

--
Justin


0
null38 (68)
9/28/2004 9:23:13 PM
Cy Coe wrote:
> "Justin Farley" <null@void.com> wrote in message news:<EJn4d.1889823$y4.325489@news.easynews.com>...
> 
>>Cy Coe wrote:
>>
>>>"Justin Farley" <null@void.com> wrote in message
>>>news:<4_n2d.893161$ic1.89758@news.easynews.com>...
>>
>> 
>>
>>>>Secondly, the analyst/customer is a specialist in a particular
>>>>domain.  In other words, they are finance, manufacturing, health,
>>>>etc. experts first, and IT experts second.  (This is just an
>>>>efficient division of skills, and not something XP advocates
>>>>explicitly AFAICT, just something I've inferred.)
>>>
>>>And that's the problem I have with the XP view. [...]
>>
>>To clarify, that second point is _not_ the XP view AFAIK.  It is my personal
>>opinion about the analyst/customer role.
> 
> 
> It seems to fit pretty well with what I've read from XP gurus here.
>  
> 
>>>[...] A good analyst can
>>>learn about a business domain more quickly than a business worker can
>>>acquire requirements analysis skills.  Simply knowing about a domain
>>>isn't enough for someone to determine software requirements for it.
>>>That knowledge is certainly helpful to the process, and I'm not
>>>opposed to the Customer team having end-users on it, but I've seen the
>>>folly of turning over all requirements responsibilities to end-users
>>>first hand.
>>
>>Yes, domain knowledge alone is not enough to determine software
>>requirements.  But I am talking about a team, not an individual.
> 
> 
> Well, XP didn't originally talk about a team.  It talked about a
> single, monolithic Customer, specifically a business decision maker
> with extensive domain knowledge but no IT background.  The "team"
> thing (with its implicit support for business analyst and QA roles)
> apparently came along later, perhaps as a result of organizations
> balking at the idea of firing a bunch of skilled analysts and testers
> and replacing them with line-area middle managers or software end
> users.  The original intent was clearly to do away with non-coding
> specialists on software projects, and replace them with a combination
> of programmers and business subject-matter experts.
> 

You are right, Kent's Book 'talked' about 'a' customer, when in fact it 
should have talked about a group of customers talking with 
'OneCustomerVoice'. But like everything in life, XP has evolved, been 
refined, refactored, to help the XP community understand and use its ideas.

One of these refinements has, as you have said, been what 'a' customer 
is. To quote ExtremeProgrammingInstalled: Whether they are one or many 
people, the XP customer always speaks with one voice. The determination 
of what will have business value, and the order of building that value, 
rests solely with the customer.

Another example is that the '40-hour week' practice has now become 
'sustainable pace'.

At the end of the day ALL software development processes get refined; it 
doesn't mean they are therefore 'bad' just because they change.  It 
means people have identified weaknesses in them and addressed them.

0
news248 (706)
9/28/2004 10:16:15 PM
Andrew McDonagh <news@andrewcdonagh.f2s.com> wrote:

> At the end of the day ALL software development processes get refined; it 
> doesn't mean they are therefore 'bad' just because they change.  It 
> means people have identified weaknesses in them and addressed them.

We had been getting XP revisions at about the rate of 1 every 2-3 days
for a couple of months there.

Elliott

-- 
Before you manage dependency, you should have an object
model network of key domain and use case entities that 
are collaborating together to get BUSINESS, SCIENTIFIC,
ARTISTIC requirements *done*!
    Something to by god manage in the first place!
0
universe5 (202)
9/28/2004 10:47:44 PM
Universe wrote:
> Andrew McDonagh <news@andrewcdonagh.f2s.com> wrote:
> 
> 
>>At the end of the day ALL software development processes get refined; it 
>>doesn't mean they are therefore 'bad' just because they change.  It 
>>means people have identified weaknesses in them and addressed them.
> 
> 
> We had been getting XP revisions at about the rate of 1 every 2-3 days
> for a couple of months there.
> 
> Elliott
> 


I have no idea what those were; it's been pretty much stable as far as 
I'm personally aware.  That's not to say it didn't happen, just that I 
didn't see it.

But it is worth noting that the one thing that has been common to most 
of the changes has been trying to clarify and express intent, rather 
than changing direction.

Andrew
0
news248 (706)
9/28/2004 10:58:08 PM
Andrew McDonagh <news@andrewcdonagh.f2s.com> wrote:

> Universe wrote:
> > Andrew McDonagh <news@andrewcdonagh.f2s.com> wrote:
> > 
> > 
> >>At the end of the day ALL software development processes get refined; it 
> >>doesn't mean they are therefore 'bad' just because they change.  It 
> >>means people have identified weaknesses in them and addressed them.
> > 
> > 
> > We had been getting XP revisions at about the rate of 1 every 2-3 days
> > for a couple of months there.
> > 
> > Elliott
> > 
> 
> 
> I have no idea what those were; it's been pretty much stable as far as 
> I'm personally aware.  That's not to say it didn't happen, just that I 
> didn't see it.
> 
> But it is worth noting that the one thing that has been common to most 
> of the changes has been trying to clarify and express intent, rather 
> than changing direction.
> 
> Andrew

Not sure about the logical consistency of each and every XP revision
during that time - beginning roughly 6 months ago back to 9 months (see
Google archive), but I'll agree that overall in sum, XP remains a
codification of:
	~ the piecemeal pragmatist over holistic rational
	~ part over whole
	~ spontaneous outcome over feedback modified planning
	~ evolve design by test coding over modifiable architectural plan

Elliott
-  
XP TDD tests are created from some degree of pre-coding analysis.
(Though XP does not mandate doing minimally *overall* analysis of
the use case currently being coded, much less minimally *overall*
investigation of at least all key project use cases, singly then as a
group [project scope]).  However, Dijkstra recommends that system
design be *driven* (driven: key design elements overall are "steered"
in one direction or another) by core abstraction decomposition, the
interrelationships of core abstractions, tackling highest risks at the
outset, etc.  Given that Dijkstra's approach here is not central to
XP/Alliance TDD, there is a much greater chance of the averse as
opposed to the reverse occurring.  !;- >
0
universe5 (202)
9/28/2004 11:14:37 PM
On Tue, 28 Sep 2004 18:47:44 -0400, Universe wrote:

> Andrew McDonagh <news@andrewcdonagh.f2s.com> wrote:
> 
>> At the end of the day ALL software development processes get refined; it 
>> doesn't mean they are therefore 'bad' just because they change.  It 
>> means people have identified weaknesses in them and addressed them.
> 
> We had been getting XP revisions at about the rate of 1 every 2-3 days
> for a couple of months there.
> 
Perhaps it was a period of highly iterative methodology improvement.

Were the changes improvements?
0
droby2 (108)
9/28/2004 11:20:51 PM
Elliott,

> Google archive), but I'll agree that overall in sum, XP remains a
> codification of:
> 	~ the piecemeal pragmatist over holistic rational

"Pragmatist" sounds like the kind of slur one might prefer over 
"pragmatic" which could be interpreted favorably... And I sort of grok 
the distinction between "piecemeal" and "holistic" - though I'd frame it 
in terms of giving equal regard to bottom-up and top-down pressures - 
but above all I would be curious how you mean "rational".

Laurent
http://bossavit.com/thoughts/
0
laurent (379)
9/29/2004 12:22:32 AM
Andrew McDonagh <news@andrewcdonagh.f2s.com> wrote in message news:<cjcnrt$6ih$1@news.freedom2surf.net>...
> Cy Coe wrote:

> > Well, XP didn't originally talk about a team.  It talked about a
> > single, monolithic Customer, specifically a business decision maker
> > with extensive domain knowledge but no IT background.  The "team"
> > thing (with its implicit support for business analyst and QA roles)
> > apparently came along later, perhaps as a result of organizations
> > balking at the idea of firing a bunch of skilled analysts and testers
> > and replacing them with line-area middle managers or software end
> > users.  The original intent was clearly to do away with non-coding
> > specialists on software projects, and replace them with a combination
> > of programmers and business subject-matter experts.
> > 
> 
> You are right, Kent's Book 'talked' about 'a' customer, when in fact it 
> should have talked about a group of customers talking with 
> 'OneCustomerVoice'. But like everything in life, XP has evolved, been 
> refined, refactored, to help the XP community understand and use its ideas.

It shouldn't have taken evolution, refinement or refactoring to
discover that skills in requirements gathering, analysis and business
modeling can be useful in software projects, even "agile" ones.  XP
marketing played to a certain attitude common among programmers - that
*they* are the only ones who actually do anything useful on software
projects, and all the other non-coding specialists (analysts, testers,
architects, project managers, even DBAs) are merely useless overhead.

The customer wasn't supposed to be an analyst, or a QA person.  Those
represent dilutions of the original concept - non-specialized
programmers collaborating directly with end-users in an environment
where tactical design decisions replace considerations of
architecture.  It's almost as if the XP values list should have
included (with apologies to the screenwriter of the film "Starship
Troopers") "Everyone codes, nobody quits."


Cy
0
cycoe (74)
9/29/2004 4:07:32 AM
Andrew McDonagh wrote:

> But it is worth noting that the one thing that has been common to most
> of the changes have been in trying to clarify and express intent, rather
> than direction changes.

Q: I don't "believe in" your methodology, because I like doing X.

A: We only said X is optional, and Y often prevents the need for it.

Q: But what about [contrived scenario Z]?

A: Then you do X, of course. Use your head.

Q: You just have an answer for everything, huh? Are you going
     to change your methodology again?

A: Well, changing the implementation, per project, is Practice W...

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
9/29/2004 11:44:20 AM
"Justin Farley" <null@void.com> wrote in message news:<5Dk6d.2280133$6p.385250@news.easynews.com>...
> Cy Coe wrote:

> >> Your notion that "business workers" are less able to acquire
> >> analysis skills is ignoring their ability to contribute.
> >
> > Not at all.  I have no problem with such people being on the team.
> > Their knowledge is important, and valuable.  But shaping that
> > knowledge into a form useful for specifying software requirements is a
> > skill that most "business workers" (and, I'd argue, most programmers)
> > do not have.
> 
> I agree.  But again, your only point is that we need intelligent domain
> experts and intelligent IT experts on the team.  My point is that the
> ideal/practical division of skills is to have both on the team communicating
> with each other, rather than have domain experts learning IT skills or
> vica-versa.

I believe you need some (or at least one) person whose knowledge
crosses the business/IT boundary.  In truth, you're already going down
that road when you have your "Customer" specify acceptance tests. 
Sure, he's still happily oblivious about what objects do what, but
you're still requiring a certain degree of technical savvy, no matter
how fancy you make the testing framework.
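
Even a toy example shows the kind of savvy involved.  As a minimal
sketch (Python, purely illustrative - the overtime rule and the
compute_gross_pay function here are hypothetical, not anything from a
real project), this is the sort of input/expected-output table a
Customer gets asked to specify, wired to a trivial test runner:

    # A customer-specified acceptance test, sketched as a table of cases.
    # Hypothetical payroll rule: straight time up to 40 hours,
    # time-and-a-half beyond that.

    def compute_gross_pay(hours_worked, hourly_rate):
        regular = min(hours_worked, 40.0) * hourly_rate
        overtime = max(hours_worked - 40.0, 0.0) * hourly_rate * 1.5
        return regular + overtime

    # The "table" the Customer fills in: hours, rate, expected gross pay.
    ACCEPTANCE_CASES = [
        (40.0, 10.00, 400.00),  # a plain 40-hour week
        (45.0, 10.00, 475.00),  # 5 hours of overtime at 1.5x
        ( 0.0, 10.00,   0.00),  # no hours, no pay
    ]

    if __name__ == "__main__":
        for hours, rate, expected in ACCEPTANCE_CASES:
            actual = compute_gross_pay(hours, rate)
            if abs(actual - expected) < 0.005:
                status = "PASS"
            else:
                status = "FAIL"
            print("%s: hours=%s rate=%s expected=%s got=%s"
                  % (status, hours, rate, expected, actual))

The programmers supply the code; the Customer "only" fills in the table.
But choosing those rows well - the boundary at exactly 40 hours, the
zero case - is analysis work, whatever title the person doing it holds.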

> >> Why do "software analysts" need to
> >> learn about a business domain, when an intelligent "domain expert"
> >> on the team can contribute all the domain knowledge needed?
> >
> > Because they can structure and model that domain in a way that
> > facilitates the development of an automated solution.  A typical
> > "domain expert" will tend to simply want to "pave the cow path", to
> > computerize a manual process or recreate a legacy system.
> 
> Not if software analysts specialize, and provide a more efficient role as
> domain experts rather than IT experts.  There is no such thing as a
> "typical" domain expert when I advocate a change in that role.

I tend to think of "domain expert" as question-answerer, a source of
unstructured knowledge gained from real-world experience.  A "software
analyst" can also be a domain expert, but one who knows how to do
things with that knowledge that a more conventional domain expert does
not.

> The domain expert's regular job *is* working with IT experts.  I don't see
> how that will result in his knowledge becoming stale, since he will continue
> to work in the same domain.  Quite the opposite, and this is my whole point
> about the most efficient division of skills.  By specializing in a domain
> (with good IT skills), he is far more valuable than a general IT analyst who
> thinks he can learn any domain however quickly.

Okay, but now you're switching gears.  You're talking about an analyst
who knows about a specific domain, rather than someone pulled in from
a line area because he knows the domain from an administrative
perspective, which is what I understand the more traditional/intended
profile of the XP Customer to be.


Cy
0
cycoe (74)
9/29/2004 12:36:52 PM
On 28 Sep 2004 21:07:32 -0700, cycoe@hotmail.com (Cy Coe) wrote:

>It shouldn't have taken evolution, refinement or refactoring to
>discover that skills in requirements gathering, analysis and business
>modeling can be useful in software projects, even "agile" ones.

It shouldn't have taken 2,500 years to realize that heavy things don't
fall faster than light things.  Go figure.

-----
Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
Object Mentor Inc.            | blog:  www.butunclebob.com
The Agile Transition Experts  | web:   www.objectmentor.com
800-338-6716   


"The aim of science is not to open the door to infinite wisdom, 
 but to set a limit to infinite error."
    -- Bertolt Brecht, Life of Galileo
0
unclebob2 (2724)
9/29/2004 12:36:58 PM
cycoe@hotmail.com (Cy Coe) wrote in 
news:1bd6d89a.0409250837.50e1ddd@posting.google.com:

> Well, XP didn't originally talk about a team.  It talked about a
> single, monolithic Customer

I suspect you're mixing roles with concrete instances.

The notion of Customer - as I understood it when reading XP - is that there 
must be one logical entity deciding what's important or not for the 
project.

The rationale being that you've got to know who has the understanding and 
authority to decide what the system under construction's supposed to do.

Then, this entity can be the entire board if necessary. In time, several 
different people can be the Customer. That's far less important than the 
idea that every piece of functionality implemented *has* been agreed with 
someone who, in turn, represents the organization you're selling software 
to.

This is, at least, what seemed obvious to me when I did read about 'the 
Customer'. Simply because most methodologies assume this implicitly - XP 
simply states the fact explicitly in order to declare that 'the Customer' 
must be involved not only at the start, but continuously.





0
9/29/2004 12:39:06 PM
cycoe@hotmail.com (Cy Coe) wrote in 
news:1bd6d89a.0409282007.795ed39e@posting.google.com:

> It shouldn't have taken evolution,

What exactly is your point?

Are you saying

1) XP doesn't work
2) XP is stating the obvious
3) XP is worse than <X>

or..

?


0
9/29/2004 12:58:46 PM
Cristiano Sadun wrote:

> Cy Coe wrote:
>
> > It shouldn't have taken evolution,
>
> What exactly is your point?
>
> Are you saying
>
> 1) XP doesn't work
> 2) XP is stating the obvious
> 3) XP is worse than <X>
>
> or..
>
> ?

Ask him if he's a programmer.

I could dial up an open-heart surgery newsgroup, read the debates for a
while, mimic them, and sound kind'a like a surgeon.

I'm not saying that's a bad thing...

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
9/29/2004 1:24:19 PM
Robert C. Martin <unclebob@objectmentor.com> wrote:

> On 28 Sep 2004 21:07:32 -0700, cycoe@hotmail.com (Cy Coe) wrote:
> > 
> >It shouldn't have taken evolution, refinement or refactoring to
> >discover that skills in requirements gathering, analysis and business
> >modeling can be useful in software projects, even "agile" ones.
 
> It shouldn't have taken 2,500 years to realize that heavy things don't
> fall faster than light things.  Go figure.

It didn't take that long based upon historical facts and prescience:
	 ~ many historians maintain the Egyptians and Arabs were aware
	 ~ many other ancients were aware, but Galileo formalized.  
			What's so hard about seeing that a legume and kettle for
		boiling oil both hit the moat at the same time?

To "haughto-sarcastically" tsk, tsk the ability of proven knowledge -
that is *theory* - to rationally inform and *lead* our work practice is
the height of ultra-regressive piecemeal and spontaneity worshiping
cretinism.  I.e. the reactionary hardcore hacker mentality.

Elliott
-- 
Theory Leads, Practice Verifies

 Profiteer US Out of Iraq Now!
0
universe5 (202)
9/29/2004 1:45:53 PM
> I could dial up an open-heart surgery newsgroup, read the debates for a
> while, mimic them, and sound kind'a like a surgeon.

Catch Phlip If You Can. :)

Laurent
0
laurent (379)
9/29/2004 1:54:39 PM
Cristiano Sadun <cristianoTAKEsadun@THIShotmailOUT.com> wrote:

> cycoe@hotmail.com (Cy Coe) wrote in 
> news:1bd6d89a.0409250837.50e1ddd@posting.google.com:
> 
> > Well, XP didn't originally talk about a team.  It talked about a
> > single, monolithic Customer
> 
> I suspect you're mixing roles with concrete instances.
> 
> The notion of Customer - as I understood it when reading XP - is that there 
> must be one logical entity deciding what's important or not for the 
> project.
> 
> The rationale being that you've got to know who has the understanding and 
> authority to decide what the system under construction's supposed to do.
> 
> Then, this entity can be the entire board if so necessary. In time, several 
> different people can be the Customer. That's far less important than the 
> idea that every piece of functionality implemented *has* been agreed with 
> someone who, in turn, represents the organization you're selling software 
> to.
> 
> This is, at least, what seemed obvious to me when I did read about 'the 
> Customer'. Simply because most methodologies assume this implicitly - XP 
> simply states the fact explicitly in order to declare that 'the Customer' 
> must be involved not only at the start, but continously.

Dood.

Not "obvious", but the unalloyed fact:

XP initially and now argued for months here on comp.object and on their
Wiki,  *1* customer rep.  period.

Elliott
-  
XP TDD tests are created from some degree of pre-coding analysis.
(Though XP does not mandate doing minimally *overall* analysis of
the use case currently being coded, much less minimally *overall*
investigation of at least all key project use cases, singly then as a
group [project scope]).  However, Dijkstra recommends that system
design be *driven* (driven: key design elements overall are "steered"
in one direction or another) by core abstraction decomposition, the
interrelationships of core abstractions, tackling highest risks at the
outset, etc.  Given that Dijkstra's approach here is not central to
XP/Alliance TDD, there is a much greater chance of the averse as
opposed to the reverse occurring.  !;- >
0
universe5 (202)
9/29/2004 1:59:23 PM
Robert C. Martin wrote:
> On 28 Sep 2004 21:07:32 -0700, cycoe@hotmail.com (Cy Coe) wrote:
> 
> 
>>It shouldn't have taken evolution, refinement or refactoring to
>>discover that skills in requirements gathering, analysis and business
>>modeling can be useful in software projects, even "agile" ones.
> 
> 
> It shouldn't have taken 2,500 years to realize that heavy things don't
> fall faster than light things.  Go figure.

Since heavy things *do* fall faster than light things, maybe it should 
have taken longer.

> 
> -----
> Robert C. Martin (Uncle Bob)  | email: unclebob@objectmentor.com
> Object Mentor Inc.            | blog:  www.butunclebob.com
> The Agile Transition Experts  | web:   www.objectmentor.com
> 800-338-6716   
> 
> 
> "The aim of science is not to open the door to infinite wisdom, 
>  but to set a limit to infinite error."
>     -- Bertolt Brecht, Life of Galileo
0
reiersol (156)
9/29/2004 7:48:52 PM
Dagfinn Reiersol wrote:

> Since heavy things *do* fall faster than light things, maybe it should
> have taken longer.

Is that supposed to be some kind of joke???

Puzzled, Ilja 


0
it3974 (470)
9/29/2004 8:14:59 PM
"Phlip" <phlip_cpp@yahoo.com> wrote in message news:<7Iy6d.11641$Qv5.3740@newssvr33.news.prodigy.com>...
> Cristiano Sadun wrote:
> 
> > Cy Coe wrote:
> >
> > > It shouldn't have taken evolution,
> >
> > What exactly is your point?
> >
> > Are you saying
> >
> > 1) XP doesn't work
> > 2) XP is stating the obvious
> > 3) XP is worse than <X>
> >
> > or..
> >
> > ?
> 
> Ask him if he's a programmer.

I'll ask you why that question is relevant, considering that we're
talking about requirements analysis, something that non-programmers
do, both in traditional methodologies and your own.  The debate is
about what background and skills those non-programmers should have.

> I could dial up an open-heart surgery newsgroup, read the debates for a
> while, mimic them, and sound kind'a like a surgeon.

But you never know what kind of interesting things an anesthetist or OR
nurse or plain old cardiologist might be able to contribute to such a
group.  Surely that's the more valid comparison, isn't it?
 
> I'm not saying that's a bad thing...

I wonder if passive-aggressiveness is a common feature of second/third
tier wannabe gurus.


Cy
0
cycoe (74)
9/29/2004 10:53:01 PM
Laurent Bossavit wrote:

> > I could dial up an open-heart surgery newsgroup, read the debates for a
> > while, mimic them, and sound kind'a like a surgeon.
>
> Catch Phlip If You Can. :)

I can guess what to do next. Scrub in and say, "prep for a midline incision"
in a bored and distracted tone. ;-)

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
9/29/2004 11:37:02 PM
cycoe@hotmail.com (Cy Coe) wrote in 
news:1bd6d89a.0409291453.45c13b5e@posting.google.com:

> I'll ask you why that question is relevant,

Er. That wasn't my question. My question was - what is your point?
0
9/30/2004 7:19:59 AM
Cristiano Sadun <cristianoTAKEsadun@THIShotmailOUT.com> wrote in message news:<Xns95745EF11C94ASadun@212.45.188.38>...
> cycoe@hotmail.com (Cy Coe) wrote in 
> news:1bd6d89a.0409291453.45c13b5e@posting.google.com:
> 
> > I'll ask you why that question is relevant,
> 
> Er. That wasn't my question. My question was - what is your point?

I wasn't answering your question.  I was responding to Phlip's cheesy
TV lawyer "discredit the witness" stunt.

As for your question, my point was that the developers of XP attempted
to eliminate certain roles and skill sets from software development
projects, either out of a specific desire to do so or as a consequence
of a more general minimalist bent.

The doctrine was subsequently amended to allow for some of these roles
(analysts, testers), as long as these individuals served as low-level
subordinates to a real, live business expert who was not an IT-type
(no egalitarian brotherhood on the business side, it would seem). 
Among certain of the gurus, it would seem, things were further relaxed
so that someone who had previously been an analyst in the bad old BDUF
days could serve in the new regime as the Onsite Customer himself, as
long as he worked for the business and not the development shop.

As someone who has worked mainly in the requirements analysis field, I
tend to focus on intentions and goals as much as specific results and
implementations.  One need only look on c2.com to see the disdain that
many programmers have for anyone involved in IT who doesn't write
code, be they analysts, architects, database administrators, managers,
project managers, interaction designers, technical writers or QA
specialists.  They believe that these roles are either unimportant or
should always be performed by people who also write code.  The
programmers who hold these opinions most strongly also tend to be
those for whom XP holds the greatest appeal.  I believe Beck himself
has been known to say that software development is just coding plus
administration.  And he was the guy behind the monolithic, non-IT
Customer, as I recall.

It doesn't take a lot of imagination to conclude that at least some of
the motivation behind XP involves a desire to get rid of non-coders
(other than the end users, whose value is based on accumulated
knowledge rather than skills).  It's certainly not hard to imagine
this being part of the appeal for Phlip, especially as he considers my
not being a programmer to invalidate anything I have to say about any
aspect of software development.

I think that any of these newly-allowed non-coding specialists on XP
projects should keep in mind that their invitations to the party were
late and grudging.


Cy
0
cycoe (74)
9/30/2004 10:45:32 PM
Cy Coe wrote:

> I wasn't answering your question.  I was responding to Phlip's cheesy
> TV lawyer "discredit the witness" stunt.

I take offense at "lawyer". However, wha'd I do this time?

> As for your question, my point was that the developers of XP attempted
> to eliminate certain roles and skill sets from software development
> projects, either out of a specific desire to do so or as a consequence
> of a more general minimalist bent.

Objection. Leading. Refers facts not in evidence. Hearsay.

> ...It's certainly not hard to imagine
> this being part of the appeal for Phlip, especially as he considers my
> not being a programmer to invalidate anything I have to say about any
> aspect of software development...

When you discuss your real self - not the Cy Coe persona - you say valid
things.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
9/30/2004 10:55:54 PM
"Phlip" <phlip_cpp@yahoo.com> wrote in message news:<_907d.4227$Rf1.3094@newssvr19.news.prodigy.com>...
> Cy Coe wrote:

> > ...It's certainly not hard to imagine
> > this being part of the appeal for Phlip, especially as he considers my
> > not being a programmer to invalidate anything I have to say about any
> > aspect of software development...
> 
> When you discuss your real self - not the Cy Coe persona - you say valid
> things.

Please enlighten me as to the distinction.  Are you saying that I
display multiple personalities online?


Cy
0
cycoe (74)
10/1/2004 2:51:53 AM
"Cristiano Sadun" <cristianoTAKEsadun@THIShotmailOUT.com> wrote in message
news:Xns95739861A2CBASadun@212.45.188.38...
> cycoe@hotmail.com (Cy Coe) wrote in
> news:1bd6d89a.0409282007.795ed39e@posting.google.com:
>
> > It shouldn't have taken evolution,
>
> What exactly is your point?
>
> Are you saying
>
> 1) XP doesn't work
> 2) XP is stating the obvious
> 3) XP is worse than <X>
>
I think one of his points might be that nobody seems to know what XP is
anymore, except perhaps RCM, who believes that XP projects are projects that
exhibit courage.  "One customer in the room" in particular was one of the
defining characteristics of XP, which to someone working on traditional IT
projects is naive beyond belief, but if anyone points that out, you get a
variety of responses, ranging from Mr Jeffries' "but wouldn't it be nice if
you could have one customer", to your apparent rejection of the idea, on the
grounds that what doesn't make sense couldn't be a part of XP because XP
must be "good."  People, it isn't important that something is "good" or not,
it's important that it's well defined, so that we can talk about it.  XP
should be what it's defined to be, not what you happen to practice in your
workplace.

Regards,
Daniel Parker


0
Daniel
10/1/2004 12:51:14 PM
Daniel Parker wrote:

> XP
> should be what it's defined to be, not what you happen to practice in your
> workplace.

XP happens when a team follows a short list of practices, and any small
adjustments and extra practices they find their project needs.

And thinking any methodology could work without a customer liaison is naive
beyond belief.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/1/2004 1:38:06 PM
"Phlip" <phlip_cpp@yahoo.com> wrote in message news:<25d7d.5676$wC4.4059@newssvr16.news.prodigy.com>...
> Daniel Parker wrote:
> 
> > XP
> > should be what it's defined to be, not what you happen to practice in your
> > workplace.
> 
> XP happens when a team follows a short list of practices, and any small
> adjustments and extra practices they find their project needs.
> 
Could you remind me what's still on that short list of practices?
(It's recently been suggested that pair programming and "one customer
in the room" need not be on that list.)

Regards,
Daniel Parker
0
10/1/2004 8:44:02 PM
Daniel Parker wrote:

> Could you remind me what's still on that short list of practices?

Asked and answered. Read the Web sites, or the White Book, or the White Book
2nd Ed.

> (It's recently been suggested that pair programming and "one customer
> in the room" need not be on that list.)

Citation?

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/2/2004 12:04:54 AM
"Phlip" <phlip_cpp@yahoo.com> wrote in message
news:Ggm7d.5713$NJ5.5078@newssvr16.news.prodigy.com...
> Daniel Parker wrote:
>
> > (It's recently been suggested that pair programming and "one customer
> > in the room" need not be on that list.)
>
> Citation?
>
Cristiano Sadun (this thread) on "one customer in the room", RCM (hard to
find, he posts a lot) on pair programming not always being mandatory.

Regards,
Daniel Parker


0
Daniel
10/2/2004 12:19:03 AM
Daniel Parker wrote:

> Cristiano Sadun (this thread) on "one customer in the room", RCM (hard to
> find, he posts a lot) on pair programming not always being mandatory.

Oh no! RCM said that not every single line of code might need a pair!

The foundations of my universe are crumbling!! Parker, how COULD you DO this
to me??

(Uh, big tip: Anyone trying to learn, or debate, XP would do well to look
beyond one newsgroup...)

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/2/2004 3:04:09 AM
"Phlip" <phlip_cpp@yahoo.com> wrote in message
news:JUo7d.5735$Eh6.3969@newssvr16.news.prodigy.com...
>
> (Uh, big tip: Anyone trying to learn, or debate, XP would do well to look
> beyond one newsgroup...)
>
XP'ers don't debate, Phlip, you don't talk to them, you listen.

Regards,
Daniel Parker


0
Daniel
10/2/2004 9:38:55 AM
Daniel Parker wrote:

> XP'ers don't debate, Phlip, you don't talk to them, you listen.

Correct. When you make things up about a topic, then try to debate them with
people who have studied the topic for years, that's what happens: You
listen.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/2/2004 4:00:44 PM
"Phlip" <phlip_cpp@yahoo.com> wrote in message
news:MgA7d.11138$cK5.7607@newssvr15.news.prodigy.com...
>
> When you make things up about a topic,

As near as I can tell, you are the first person on usenet who has ever
accused me of making things up about a topic.  You therefore have the honour
of being the first person to make it into my kill file.

Best wishes,
Daniel Parker


0
Daniel
10/2/2004 5:55:23 PM
Daniel Parker wrote:

> Could you remind me what's still on that short list of practices?
> (It's recently been suggested that pair programming and "one customer
> in the room" need not be on that list.)

Then Daniel Parker wrote:

> > When you make things up about a topic,
>
> As near as I can tell, you are the first person on usenet who has ever
> accused me of making things up about a topic.  You therefore have the
> honour of being the first person to make it into my kill file.

And now you make things up about me.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/2/2004 6:39:25 PM
Daniel Parker wrote:
> "Phlip" <phlip_cpp@yahoo.com> wrote in message
> news:JUo7d.5735$Eh6.3969@newssvr16.news.prodigy.com...
> 
>>(Uh, big tip: Anyone trying to learn, or debate, XP would do well to look
>>beyond one newsgroup...)
>>
> 
> XP'ers don't debate, Phlip, you don't talk to them, you listen.
> 

What would the XP'ers be doing differently if they *were* debating? What 
kinds of things would they be saying that they aren't saying now? What 
kinds of things would they stop saying that they are saying now?

I'm asking out of genuine curiosity.
0
reiersol (156)
10/2/2004 7:11:50 PM
Dagfinn Reiersol wrote:

> What would the XP'ers be doing differently if they *were* debating? What
> kinds of things would they be saying that they aren't saying now? What
> kinds of things would they stop saying that they are saying now?

To discuss a complex topic, each party states what it thinks the other
party's position is.

I think certain posters think XP is a set of marketing rhetoric, with either
no content or obvious old-fashioned best practices.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces




0
phlip_cpp (3852)
10/2/2004 7:30:12 PM
"Phlip" <phlip_cpp@yahoo.com> wrote in
news:yGH6d.3353$Rf1.825@newssvr19.news.prodigy.com: 

> Laurent Bossavit wrote:
> 
> I can guess what to do next. Scrub in and say, "prep for a midline
> incision" in a bored and distracted tone. ;-)

Did you stay at a Holiday Inn?
0
Rich
10/2/2004 7:54:07 PM
cycoe@hotmail.com (Cy Coe) wrote in
news:1bd6d89a.0409301445.706e0247@posting.google.com: 

Was that (1), (2) or (3)?
0
Rich
10/2/2004 7:55:17 PM
Rich MacDonald <rich@@clevercaboose.com> wrote in message news:<Xns95769764514D4richclevercaboosecom@24.94.170.86>...
> cycoe@hotmail.com (Cy Coe) wrote in
> news:1bd6d89a.0409301445.706e0247@posting.google.com: 
> 
> Was that (1), (2) or (3) ?

In truth, none of the above.  I argue a variation of (3), that certain
aspects of XP, particularly those involving requirements analysis, are
worse than practices found in other methodologies.

XP, as described in the books, does not treat requirements engineering
as a significant undertaking, and seems to assume that as long as you
have someone in the room who knows about the business and who is
willing to ask for small-grained features and author acceptance tests,
that everything will work out in terms of the solution meeting the
organization's needs.  In contrast with this treatment of requirements
analysis (the what and why), XP has a great deal to say about writing
code and small-scale design issues (the how).

Normally, one would take this as a hint that it's up to the Customer
to decide how he's going to come up with the requirements that will be
reflected in the XP workflow as user stories.  But not so fast!  Any
kind of upfront effort aimed at researching/clarifying business
requirements, even if it doesn't involve the programmers in any way,
is apparently incompatible with XP.  XP appears to not only recommend
that trial-and-error coding be the main requirements gathering
technique, but *insists* upon it.

I've spent a good portion of my career helping the organizations I
work for reduce uncertainty and clarify their needs in terms of
software.  I've seen enough false starts, blind alleys and poorly
thought-out concepts to convince me that baby-stepping incrementalism
will not yield a good solution unless you do your homework before
starting down the road of implementation.

And no, I'm not talking about "waterfall", "big design up front", or
whatever other bogeymen XP'ers trot out to scare the townsfolk.  I'm
prepared to accept that designing and coding can take place in a more
cyclical and concurrent manner than they have in traditional projects.
 But I think that requirements are different.  The old waterfall "lock
'em in stone" approach has had its problems, but I think XP's
"do-it-as-you-go" approach errs in the opposite direction.  When
developing business software, focusing on the small at the expense of
proper consideration of the big picture (and yes, implementing too
soon pushes you into that mindset) will cause you to miss
opportunities for simplifying the workflow and enhancing the value of
the solution.

You may ask "Well, isn't that the Customer's job?"  To which I respond
"Yes, it is!  That's why you should let him do it, with the help of
those people who are skilled in this sort of work."

Once you have implemented software out there, you're in maintenance
mode.  Significant changes to the architecture may be possible (maybe
even easy) due to the code structuring and refactoring rules that XP
dictates, but a working system resists significant change in a way
that has nothing to do with the cost of writing code.  It's a
psychological/sociological barrier to change.  Having something,
however suboptimal, makes it harder to justify asking for something
different.  Like it or not, your best chance at ensuring that the
right thing is built comes before you start construction, even in
software.


Cy
0
cycoe (74)
10/3/2004 11:31:57 PM
Cy Coe wrote:

> In truth, none of the above.  I argue a variation of (3), that certain
> aspects of XP, particularly those involving requirements analysis, are
> worse than practices found in other methodologies.

XP's founders came to the same conclusion, before the founding Agile
Alliance conference, as the founders of several other methodologies - Scrum,
Crystal Clear, etc. They decided that sorting features in order of business
value was the Prime Directive of requirements gathering.

> XP, as described in the books, does not treat requirements engineering
> as a significant undertaking,

Software engineering follows two nested cycles. The inner cycle is writing
code, and the outer cycle is deploying versions. The biggest risk in the
inner cycle is typically endless runaway debugging, and the biggest risk in
the outer cycle is typically deploying the wrong feature, and getting poor
acceptance.

The best fixes found so far for the inner cycle are test-driven development
by pair programmers in a common workspace, continuously integrating,
following a common style guide, and working at a sustainable pace.

XP considers the risks to the outer cycle so significant that the remaining
half of the practices address them.

> and seems to assume that as long as you
> have someone in the room who knows about the business and who is
> willing to ask for small-grained features and author acceptance tests,
> that everything will work out in terms of the solution meeting the
> organization's needs.

XP supplements the Onsite Customer practice with:

 - literate acceptance tests that everyone can review and execute
     (see the sketch after this list)

 - short iterations, each producing a release that could be deployed

 - frequent deployments to end users, to make absolutely certain
     they are getting what they need, not what they asked for

 - sorting features in order of business priority

 - aggressive scope control, so Onsite Customers can review
     features as they emerge, censor their unimportant details
     early, and declare features finished as soon as possible

If the only practice here were Onsite Customer, then all the familiar risks
to the outer cycle would come back.
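
A minimal sketch of what such a "literate" test might look like - a table
the Onsite Customer can read and extend, run by a trivial checker. Python
here, and the discount rule and figures are invented for illustration, not
from any XP text:

    # Customer-readable table: order total -> expected discount.
    CASES = [
        (100.00,  0.00),    # no discount at or under 400
        (500.00, 25.00),    # 5% discount over 400
    ]

    def discount(total):
        # Production rule under test (invented for this sketch).
        return round(total * 0.05, 2) if total > 400 else 0.00

    for total, expected in CASES:
        assert discount(total) == expected, (total, expected)

The table, not the harness, is the document the Customer reviews.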

> In contrast with this treatment of requirements
> analysis (the what and why), XP has a great deal to say about writing
> code and small-scale design issues (the how).

Many of the common outer-cycle practices simply have new names and locations
within the XP framework. The point of the framework is focus.

> Normally, one would take this as a hint that it's up to the Customer
> to decide how he's going to come up with the requirements that will be
> reflected in the XP workflow as user stories.  But not so fast!  Any
> kind of upfront effort aimed at researching/clarifying business
> requirements, even if it doesn't involve the programmers in any way,
> is apparently incompatible with XP.  XP appears to not only recommend
> that trial-and-error coding be the main requirements gathering
> technique, but *insists* upon it.

They are not incompatible with XP. They have been shown to be incompatible
with reality in general. Think of the Denver Airport Baggage problem.
Stacking up requirements and trying to implement them all at the same time
adds risk, no matter what your methodology.

XP simply "says" to sort features in order of business priority. I don't see
how anyone could disagree with that technique by itself. XP makes it work
better.

It sounds deceptively simple - that's because it's a hill-climbing
algorithm. If you always pick the steepest path up from your current
position, and if the mountain has no secondary peaks or valleys, you will
find the shortest path up. Hill-climbing algorithms work in spaces that can
continuously deform. XP's inner cycle, TDD and refactoring, ensure that code
can always preserve its features while growing its design.
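
The algorithm itself is tiny. A minimal steepest-ascent climber, as a
sketch of the analogy only (Python; not from any XP text):

    def hill_climb(score, neighbors, start):
        # Always move to the best-scoring neighbor; stop at the first
        # position no neighbor improves on (a local peak).
        current = start
        while True:
            best = max(neighbors(current), key=score)
            if score(best) <= score(current):
                return current    # global peak only if the mountain
                                  # has no secondary peaks
            current = best

    # A single-peaked mountain, so the climber finds the top:
    assert hill_climb(lambda x: -(x - 3) ** 2,
                      lambda x: [x - 1, x + 1],
                      start=0) == 3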

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/4/2004 1:08:00 AM
cycoe@hotmail.com (Cy Coe) wrote in
news:1bd6d89a.0410031531.18415670@posting.google.com: 
 
> XP, as described in the books, does not treat requirements engineering
> as a significant undertaking, and seems to assume that as long as you
> have someone in the room who knows about the business and who is
> willing to ask for small-grained features and author acceptance tests,
> that everything will work out in terms of the solution meeting the
> organization's needs.

I'm certain that any XPer with experience treats the need for and 
development of requirements as a significant undertaking. XP just asks 
(and answers): "Which would you prefer: The requirements doc or the 
author of the requirements doc sitting in the same room as you?"

> In contrast with this treatment of requirements
> analysis (the what and why), XP has a great deal to say about writing
> code and small-scale design issues (the how).
> 
> Any
> kind of upfront effort aimed at researching/clarifying business
> requirements, even if it doesn't involve the programmers in any way,
> is apparently incompatible with XP.

Not so.

> XP appears to not only recommend
> that trial-and-error coding be the main requirements gathering
> technique, but *insists* upon it.

Not so. It acknowledges that trial-and-error is a fact of life and tries 
to find an approach that survives/thrives under that reality.
 
> I've spent a good portion of my career helping the organizations I
> work for reduce uncertainty and clarify their needs in terms of
> software.  I've seen enough false starts, blind alleys and poorly
> thought-out concepts to convince me that baby-stepping incrementalism
> will not yield a good solution unless you do your homework before
> starting down the road of implementation.
> 
> And no, I'm not talking about "waterfall", "big design up front", or
> whatever other bogeymen XP'ers trot out to scare the townsfolk.  I'm
> prepared to accept that designing and coding can take place in a more
> cyclical and concurrent manner than they have in traditional projects.
>  But I think that requirements are different.  The old waterfall "lock
> 'em in stone" approach has had its problems, but I think XP's
> "do-it-as-you-go" approach errs in the opposite direction.  When
> developing business software, focusing on the small at the expense of
> proper consideration of the big picture (and yes, implementing too
> soon pushes you into that mindset) will cause you to miss
> opportunities for simplifying the workflow and enhancing the value of
> the solution.

While I have strong sympathy with the belief that "local optimization" 
can miss the "global solution", I have to confess that my XP experience 
has yet to bear that out. I have not hit an impassable barrier since I 
adopted "unit testing as you go". Instead, I've found that my "false 
starts, blind alleys and poorly thought-out concepts" now actually 
survive and evolve into a useful result. Amazingly, the "measure twice, 
cut once" mentality is inferior to "write once, rewrite forever". 
Just last month I rewrote something in its entirety twice in the same 
week. Inefficient from the outside, but not from where I'm sitting. 
Certainly, there are many organizations and projects where this will not 
be possible, but I assure you it's true in my arena. Small teams and 
great IDEs are doing the trick as far as I am concerned. 
 
> You may ask "Well, isn't that the Customer's job?"  To which I respond
> "Yes, it is!  That's why you should let him do it, with the help of
> those people who are skilled in this sort of work."

But how? With a painfully slowly developed piece of paper that has not 
yet proved itself in the real world, or with a mirror for the Customer 
to look into? Have you *ever* coded a project where you didn't have a 
single question about the requirements doc? Of course not. And have you 
ever run into a new question when coding that you would never have 
thought of while simply reading the doc? Of course you have. And isn't 
it the task of every programmer to put his/her head into the mind of the 
Customer? Of course it is. Ideally to "become" that customer? Of course 
it is. Have you ever paired with *the* Customer? Great thing if you 
have. You've missed out if you haven't.
 
> Once you have implemented software out there, you're in maintenance
> mode.  Significant changes to the architecture may be possible (maybe
> even easy) due to the code structuring and refactoring rules that XP
> dictates, but a working system resists significant change in a way
> that has nothing to do the cost of writing code.  It's a
> psychological/sociological barrier to change.  Having something,
> however suboptimal, makes it harder to justify asking for something
> different.  Like it or not, your best chance at ensuring that the
> right thing is built comes before you start construction, even in
> software.
> 
That's just not my experience when I am involved from start to finish. 
When I have to maintain someone else's code, yes. But when I own it, I 
have no barrier to change, psychological or sociological or anything 
else beyond hours in the day and cost-effectiveness. In fact, if I don't 
"change it" in some way every single day, then its aging badly and there 
is a problem. I fervently believe in the quote: "If its useful, it will 
be changed". Its been a truism IME.
0
Rich
10/4/2004 2:22:05 AM
"Phlip" <phlip_cpp@yahoo.com> wrote in
news:Qn18d.3514$5b1.446@newssvr17.news.prodigy.com: 

> It sounds deceptively simple - that's because it's a hill-climbing
> algorithm. If you always pick the steepest path up from your current
> position, and if the mountain has no secondary peaks or valleys, you
> will find the shortest path up.

One of the XP criticisms I find interesting is the claim that all real-
world mountains *do* have secondary peaks and valleys. I agree entirely 
with the claim. What I find interesting is that this fact has not bitten me 
in my XP ass.

> Hill-climbing algorithms work in
> spaces that can continuously deform.

But not optimally. At least in theory. However, I suspect that they work as 
well as any other approach "in the worst case" and better "in general". 
Perhaps the problem is "XP-complete" :-?

> XP's inner cycle, TDD and
> refactoring, ensure that code can always preserve its features while
> growing its design. 

I often think back to a thread I wrote some years ago entitled something 
like: "XP lets me write atrocious code and get away with it". I was 
researching a mathematical framework and had rewritten/refactored the code 
so often I had a heap of spaghetti internally. But all the tests still 
worked and the code was useful! I was tongue in cheek but my point was that 
without XP I would have had a heap of spaghetti that didn't work and would 
have hit the wall months earlier. Now with an OT paragraph like this I'll 
try to tie it into the subject of the thread by saying that this was 
*algorithm research*, i.e., the only way to have written those requirements 
(beyond an absolutely useless "the program shall ..." paragraph) was to 
have written the code itself in order to prove the requirements were 
correct.
0
Rich
10/4/2004 2:36:51 AM
Rich MacDonald wrote:

> Phlip wrote:
>
> > It sounds deceptively simple - that's because it's a hill-climbing
> > algorithm. If you always pick the steepest path up from your current
> > position, and if the mountain has no secondary peaks or valleys, you
> > will find the shortest path up.
>
> One of the XP criticisms I find interesting is the claim that all real-
> world mountains *do* have secondary peaks and valleys. I agree entirely
> with the claim. What I find interesting is that this fact has not bitten
> me in my XP ass.

The inner cycle of development should generally hill-climb, but the space is
_not_ continuously deformable. You and I both spent recent days re-writing a
subsystem, then re-writing it again. I did not deform one subsystem into its
replacement; I deprecated the old system, wrote the new one, and replaced
the old with the new. To hill-climb, I went down a valley and up its steeper
wall.
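
In code, crossing that valley is less dramatic than it sounds. A sketch of
deprecate-then-replace, with every name invented (Python):

    def render_report_v1(rows):
        # Old subsystem: kept alive, and passing its tests, while the
        # replacement grows beside it.
        return "\n".join(str(row) for row in rows)

    def render_report_v2(rows):
        # New subsystem, written under the same tests.
        return "\n".join(", ".join(str(c) for c in row) for row in rows)

    # The "replace" step is one line; reverting is pointing back at v1.
    render_report = render_report_v2

    assert render_report([(1, 2), (3, 4)]) == "1, 2\n3, 4"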

TDD does _not_ find the shortest path to the highest peak. If you try to use
it to generate a new algorithm, it is very sensitive to initial conditions.
Early refactors that obey the simplicity principles can inhibit, not
discover, the later abstractions that can make a new algorithm work.

However, TDD still makes re-writing cheaper and more cognitively efficient
than designing-up-front. And that is the only way a hill-climbing algorithm
for the outer cycle, feature deployments, can continuously deform a project.
Today the feature is in the client, in Ruby; tomorrow the same feature
resides on the server in C++.

> I often think back to a thread I wrote some years ago entitled something
> like: "XP lets me write atrocious code and get away with it". I was
> researching a mathematical framework and had rewritten/refactored the code
> so often I had a heap of spaghetti internally. But all the tests still
> worked and the code was useful! I was tongue in cheek but my point was
> that without XP I would have had a heap of spaghetti that didn't work and
> would have hit the wall months earlier. Now with an OT paragraph like this
> I'll try to tie it into the subject of the thread by saying that this was
> *algorithm research*, i.e., the only way to have written those
> requirements (beyond an absolutely useless "the program shall ..."
> paragraph) was to
> have written the code itself in order to prove the requirements were
> correct.

Right - TDD without continuous review can _support_ a big ball of mud!

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/4/2004 3:02:32 AM
"Rich MacDonald" <rich@@clevercaboose.com> wrote in message
news:Xns9577D8F91D72Arichclevercaboosecom@24.94.170.86...
> cycoe@hotmail.com (Cy Coe) wrote in
> news:1bd6d89a.0410031531.18415670@posting.google.com:
>
> > XP, as described in the books, does not treat requirements engineering
> > as a significant undertaking, and seems to assume that as long as you
> > have someone in the room who knows about the business and who is
> > willing to ask for small-grained features and author acceptance tests,
> > that everything will work out in terms of the solution meeting the
> > organization's needs.
>
> I'm certain that any XPer with experience treats the need for and
> development of requirements as a significant undertaking. XP just asks
> (and answers): "Which would you prefer: The requirements doc or the
> author of the requirements doc sitting in the same room as you?"
>
Hi Richard,

Nice to see a thoughtful post, but...I think you would find that in many
environments, the answer would be the requirements document :-)

I speak from a variety of perspectives:  (i) as a developer in start-ups,
building things like telco products for provisioning network elements, where
if I had a question I would simply drop in the product manager's office;
(ii) as a developer in large banks on multi-million dollar, multi year
horizon trading system projects; (iii) as a customer in large banks
responsible for providing requirements to developers on sizeable risk
projects.

I'll speak now from my current perspective, as a customer, or more
precisely, as an analyst.  In the last year or so, I've prepared something
like forty specs, with accompanying acceptance tests.  Other analysts on the
same project have prepared a similar number.  The specs go to a developer
and the developer can usually implement a spec in 2-4 days, including unit
testing, leveraging existing infrastructure.  This is an environment where
the process can be moved into production on an incremental basis, so there
is a steady stream of specs, development, QA, and deployment to production.

Would the developers prefer to have me rather than my requirements doc?  I
don't think so.  I don't have the information until I've communicated with
the various users, analyzed the requirements, checked with audit, etc. and
prepared a specification.  The specification captures my understanding of
the requirements, and by capturing them in written form, other analysts and
managers, and developers as well, can provide feedback.  It would be unusual
that I had a full understanding of the subject until I had worked through my
thoughts by expressing them in a document.  In practice, I get very few
questions back from the developers, and the ones I do get tend to be related
to conceptual issues rather than detail issues.

When I finish a spec, I go on to the next.  I can't remember all the details
of the previous ones, that's what written documents are for.  I also need to
be able to allocate a high percentage of my day to analyzing, communicating
with users, etc., and the last thing I want to do is spend my day talking
about details that can easily be captured and looked up in a document.

>
> > You may ask "Well, isn't that the Customer's job?"  To which I respond
> > "Yes, it is!  That's why you should let him do it, with the help of
> > those people who are skilled in this sort of work."
>
> But how? With a painfully slowly developed piece of paper that has not
> yet proved itself in the real world, or with a mirror for the Customer
> to look into? Have you *ever* coded a project where you didn't have a
> single question about the requirements doc? Of course not. And have you
> ever run into a new question when coding that you would never have
> thought of while simply reading the doc? Of course you have. And isn't
> it the task of every programmer to put his/her head into the mind of the
> Customer? Of course it is. Ideally to "become" that customer? Of course
> it is. Have you ever paired with *the* Customer? Great thing if you
> have. You've missed out if you haven't.
>
My experience has varied.  I've worked with no requirements documents in
pure winging-it environments, in the company of very highly skilled
programmers.  I've worked from requirement documents where I've had very few
questions.  I've worked from requirements documents which were completely
useless and I had to repeat the analysis by bypassing the analysts and going
back to the end users.  And I've prepared requirements documents that don't
seem to require very many supplemental questions.

In banks, on sizeable projects that involve a year or so of work, the
business wants to be able to manage their budget and time within small
tolerances.  They become an exercise in managing detail and tracking time.
And people become good at it.

I really don't enjoy this type of work very much, I would much rather be on
the development side in a start up environment.  Maybe it's partly because
of the tools, the state of the art in analyst tools is pretty
dismal, nothing like developer tools, and if I ever have the time I'd like
to start up an open source project to do something better.  But it lends
perspective, enough to know that many of the statements being made about the
requirements document in this thread are a little on the narrow side.

Regards,
Daniel Parker


0
Daniel
10/4/2004 4:24:14 AM
"Daniel Parker" <danielaparker@spam?nothanks.windupbird.com> wrote in
message news:Fj48d.1218$HO1.60368@news20.bellglobal.com...

> I really don't enjoy this type of work very much, I would much rather be
> on the development side in a start up environment.  Maybe it's partly
> because of the tools, the state of the art in analyst tools is pretty
> dismal, nothing like developer tools, and if I ever have the time I'd
> like to start up an open source project to do something better.

What'd you have in mind?


Shayne Wissler
http://www.ouraysoftware.com



0
10/4/2004 4:51:38 AM
"Daniel Parker" <danielaparker@spam?nothanks.windupbird.com> wrote in
news:Fj48d.1218$HO1.60368@news20.bellglobal.com: 

[Good stuff snipped]

Points taken.

> ...it lends perspective, enough to know that many of the statements being
> made about the requirements document in this thread are a little on the
> narrow side.

Including mine. Valid in the situations I was thinking of, but certainly 
narrow and I'm glad you pointed that out. I've spent a few years as the 
analyst, requirements doc writer, myself, so I can appreciate what you say.

> I really don't enjoy this type of work very much [meaning: the task of
> writing requirements docs]

Me neither. More fun to work both sides of the fence at the same time and 
write your requirements in code. But this isn't feasible beyond a certain 
project size. And if the requirement provider's time is constrained, the 
requirement doc is of practical benefit.
0
Rich
10/4/2004 5:25:07 AM
"Daniel Parker" <danielaparker@spam?nothanks.windupbird.com> wrote in 
news:Jxm7d.29456$tT2.1926490@news20.bellglobal.com:

> "Phlip" <phlip_cpp@yahoo.com> wrote in message
> news:Ggm7d.5713$NJ5.5078@newssvr16.news.prodigy.com...
>> Daniel Parker wrote:
>>
>> > (It's recently been suggested that pair programming and "one customer
>> > in the room" need not be on that list.)
>>
>> Citation?
>>
> Cristiano Sadun (this thread) on "one customer in the room", RCM (hard to
> find, he posts a lot) on pair programming not always being mandatory.

To clarify, that is only my personal take on the subject - what I 
understood when reading the XP book. Customer in the room = minimizing the 
overhead of customer contact, making his feedback continuously available - 
however the customer is embodied.

In opposition to a bureaucratic process where, say, to contact the customer 
you have to send a mail to the project leader, who sends a mail to a 
management representative, who asks around for the technical resource, who 
answers back to the representative after a couple of days, whereupon the 
representative rewrites the answer and sends it back to the project leader, 
who finally forwards it to you.

0
10/4/2004 7:06:53 AM
Cy,

> When developing business software, focusing on the small at the expense of
> proper consideration of the big picture (and yes, implementing too
> soon pushes you into that mindset) will cause you to miss opportunities
> for simplifying the workflow and enhancing the value of the solution.

I think I would understand better if we could discuss concrete examples 
of projects that fall in the "business software" category, and concrete 
examples of "opportunities for simplifying the workflow" that could be 
missed by focusing on small bits of value and/or implementing too soon.

Laurent
0
laurent (379)
10/4/2004 2:31:25 PM
"Laurent Bossavit" <laurent@dontspambossavit.com> wrote in message
news:MPG.1bcb7d26bfe439679897ed@news.noos.fr...
> Cy,
>
> > When developing business software, focusing on the small at the expense
> > of proper consideration of the big picture (and yes, implementing too
> > soon pushes you into that mindset) will cause you to miss opportunities
> > for simplifying the workflow and enhancing the value of the solution.
>
> I think I would understand better if we could discuss concrete examples
> of projects that fall in the "business software" category, and concrete
> examples of "opportunities for simplifying the workflow" that could be
> missed by focusing on small bits of value and/or implementing too soon.

I don't see why you'd need something concrete. Cy's point about workflow
design is fundamentally no different than XP's point about refactoring.

But then, I do understand why you ask for something concrete. That's the
XPer way of sticking your head in the sand. The tactic is a little bit
clever - you get to appear interested in facts while evading them - but it
wears thin after years and years of everyone doing it.


Shayne Wissler
http://www.ouraysoftware.com



0
10/4/2004 3:00:20 PM
Shayne Wissler ad hominemed:

> I don't see why you'd need something concrete. Cy's point about workflow
> design is fundamentally no different than XP's point about refactoring.
>
> But then, I do understand why you ask for something concrete. That's the
> XPer way of sticking your head in the sand. The tactic is a little bit
> clever - you get to appear interested in facts while evading them - but it
> wears thin after years and years of everyone doing it.

Cy's pitch is always "more exact than thou". So how can he claim that
requirements engineering must be so exact, while his posts are so nebulous?

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/4/2004 3:31:05 PM
Shayne,

> I don't see why you'd need something concrete. Cy's point about workflow
> design is fundamentally no different than XP's point about refactoring.

I have a good grasp of what "refactoring" means. I can't say the same 
with respect to "workflow" or "workflow design". Also, when I have a 
discussion on "refactoring" with a practitioner, I can usually tell when 
what they mean by the term is what I mean. That isn't the case with 
"workflow" either. Neither issue is totally unexpected with words which 
aren't in the dictionary.

> But then, I do understand why you ask for something concrete.

Apparently not.

I ask for something concrete in somewhat the same spirit as Richard 
Feynman in that anecdote about theorems and hairy balls. I construct my 
own interpretation of what "opportunities for simplifying the workflow" 
can possibly mean. Next I intend to compare it to Cy's own illustration.

In the current instance, "workflow" tends to make me think of 
bureaucratic tasks, such as rubber-stamping invoices for payment. There 
is a "business process" which consists of, say, people getting approval 
to purchase something, sending out a P.O., getting their goodies, 
authorizing payment, and other people receiving invoices and cutting 
checks. And a "workflow" is the formal structure of who does what when 
in this process.

Simplifying might mean recognizing that it's not necessary to print out 
one of the documents in the process, because in the new version of the 
information system it's going to be keyed in by the same person who 
prints it out. So you might want to eliminate these two feature requests 
from the project plan altogether.

If this is indeed along generally correct lines, I might ask whether 
such simplifications could not possibly become apparent when the set of 
all user stories for the project are laid out in front of the customer - 
including the two stories "Print out report X" and "Key in contents of 
report X". Also, I might ask whether Cy is suggesting that documenting 
the business process up front is *bound* to detect such simplification, 
or if specific techniques are required to spot them.
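
To keep my interpretation concrete, here is the kind of check that laying
the story set out invites. The story names are from the example above; the
representation is invented (Python):

    stories = [
        ("Print out report X",            "payables clerk"),
        ("Key in contents of report X",   "payables clerk"),
        ("Cut checks for approved items", "payables clerk"),
    ]

    # If the same role prints a document and then keys it back in,
    # both stories are candidates for elimination:
    for (a, role_a), (b, role_b) in zip(stories, stories[1:]):
        if role_a == role_b and a.startswith("Print out") \
                and b.startswith("Key in"):
            print("candidates for elimination:", a, "/", b)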

> The tactic is a little bit clever - you get to appear interested in
> facts while evading them - but it wears thin after years and years
> of everyone doing it.

I'm curious - how do you claim to distinguish between people who are 
genuinely interested in facts, and people who pretend to be ?

I was tempted to reciprocate by calling your own guess "clever". But 
then, I have no interest in damning with faint praise: you're just plain 
wrong about the above.

Laurent
0
laurent (379)
10/4/2004 4:36:26 PM
"Laurent Bossavit" <laurent@dontspambossavit.com> wrote in message
news:MPG.1bcb988cc652039b9897ee@news.noos.fr...
> Shayne,
>
> > I don't see why you'd need something concrete. Cy's point about workflow
> > design is fundamentally no different than XP's point about refactoring.
>
> I have a good grasp of what "refactoring" means. I can't say the same
> with respect to "workflow" or "workflow design". Also, when I have a
> discussion on "refactoring" with a practitioner, I can usually tell when
> what they mean by the term is what I mean. That isn't the case with
> "workflow" either. Neither issue is totally unexpected with words which
> aren't in the dictionary.

"Workflow" was not the only term Cy used. His other verbiage indicated what
he meant: the high-level design, the set of features that are allegedly
needed, i.e., the domain that the XPers don't really want to know the
reasons for, since "analysis is lying" (which I guess means "let the
customer do the lying").

> Simplifying might mean recognizing that it's not necessary to print out
> one of the documents in the process, because in the new version of the
> information system it's going to be keyed in by the same person who
> prints it out. So you might want to eliminate these two feature requests
> from the project plan altogether.
>
> If this is indeed along generally correct lines, I might ask whether
> such simplifications could not possibly become apparent when the set of
> all user stories for the project are laid out in front of the customer -
> including the two stories "Print out report X" and "Key in contents of
> report X". Also, I might ask whether Cy is suggesting that documenting
> the business process up front is *bound* to detect such simplification,
> or if specific techniques are required to spot them.

I think the main point here is that it's useful to analyze/evaluate the
problem before starting to code, not that analysis requires documentation.

> > The tactic is a little bit clever - you get to appear interested in
> > facts while evading them - but it wears thin after years and years
> > of everyone doing it.
>
> I'm curious - how do you claim to distinguish between people who are
> genuinely interested in facts, and people who pretend to be ?

I'm merely pointing out what is a very clear pattern among XPers. I don't
claim that there aren't exceptions. And I'm not surprised that you don't
like the pattern to be exposed for what it is.


Shayne Wissler
http://www.ouraysoftware.com


0
10/4/2004 5:45:29 PM

Shayne Wissler wrote:
> I think the main point here is that it's useful to analyze/evaluate the
> problem before starting to code, not that analysis requires documentation.

If I have a story that says implement a VPN, I think we might want
to do some analysis and perhaps even write something down (horrors).
0
grace33 (48)
10/4/2004 6:08:42 PM
"Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message news:<uF48d.105919$wV.78399@attbi_s54>...
> "Daniel Parker" <danielaparker@spam?nothanks.windupbird.com> wrote in
> message news:Fj48d.1218$HO1.60368@news20.bellglobal.com...

> > if I ever have the time I'd like
> > to start up an open source project to do something better.
> 
> What'd you have in mind?
> 
Dynamic document generation, where the notion of a document becomes
rather abstract.  Separation of content from presentation;
specification documents tend to be heavily templated.  Content
assembly from a variety of sources, including sampling from live
sources such as databases, flat files of every description, and real
time feeds; glossaries and data dictionaries both real and virtual,
e.g. database metadata; and textual content prepared and reviewed
across the enterprise. Facilities for testing and reconciling data and
incorporating that into documents.  Facilities for versioning and
automating conversion across versions.  A gui for assembling the
pieces.  An extendable framework, of course.  That sort of thing.
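
A sketch of the content/presentation split I have in mind - template, live
data source, and assembled document as separate pieces. Everything here is
hypothetical (Python):

    from string import Template

    spec = Template(
        "Feed: $feed\n"
        "Rows sampled at assembly time: $rows\n")

    def sample_row_count(source):
        # Stand-in for sampling a live database or real-time feed.
        return len(source)

    fx_rates = [("EURUSD", 1.23), ("USDJPY", 110.40)]
    print(spec.substitute(feed="fx_rates",
                          rows=sample_row_count(fx_rates)))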

Regards,
Daniel Parker
0
10/4/2004 6:10:38 PM
Shayne,

> > I'm curious - how do you claim to distinguish between people who are
> > genuinely interested in facts, and people who pretend to be ?
> 
> I'm merely pointing out what is a very clear pattern among XPers. I don't
> claim that there aren't exceptions.

It sure looked as if you were claiming just that. "Workflow" isn't in 
the dictionary, but "everyone" is.

And you have evaded the question as to how you know that supposed 
"pattern" is in fact a pattern.

Laurent
0
laurent (379)
10/4/2004 6:24:18 PM
"Laurent Bossavit" <laurent@dontspambossavit.com> wrote in message
news:MPG.1bcbb3c18229891c9897ef@news.noos.fr...

> And you have evaded the question as to how you know that supposed
> "pattern" is in fact a pattern.

I.e., you want a concrete?


Shayne Wissler
http://www.ouraysoftware.com


0
10/4/2004 8:03:24 PM
Shayne,

> > And you have evaded the question as to how you know that supposed
> > "pattern" is in fact a pattern.
> 
> I.e., you want a concrete?

I'm going to be greedy and insist upon a demonstration of the claim (that 
you somehow know that someone asking for data is doing so insincerely), not 
just a pointer to a supposed example.

Laurent
0
laurent (379)
10/4/2004 8:26:09 PM
"Laurent Bossavit" <laurent@dontspambossavit.com> wrote in message
news:MPG.1bcbd01559459f569897f2@news.noos.fr...
> Shayne,
>
> > > And you have evaded the question as to how you know that supposed
> > > "pattern" is in fact a pattern.
> >
> > I.e., you want a concrete?
>
> I'm going to be greedy and insist upon a demonstration of the claim (that
> you somehow know that someone asking for data is doing so insincerely),
> not just a pointer to a supposed example.

Do you believe that just because you ask me to demonstrate something, that I
am therefore required to?

So Laurent, why are we not talking about your post anymore?


Shayne Wissler
http://www.ouraysoftware.com


0
10/4/2004 8:38:53 PM
Robert Grace wrote:

> If i have a story that says implement a VPN, i think we might want
> to do some analysis and perhaps even write something down (horrors).

"implement a VPN" is not a story. It might be a starting point for coming up 
with a usable mission statement, and with a bunch of stories later. Both 
would require some analysis to be done, of course.

Take care, Ilja 


0
it3974 (470)
10/4/2004 8:57:55 PM
Laurent Bossavit <laurent@dontspambossavit.com> wrote in message news:<MPG.1bcb7d26bfe439679897ed@news.noos.fr>...
> Cy,
> 
> > When developing business software, focusing on the small at the expense of
> > proper consideration of the big picture (and yes, implementing too
> > soon pushes you into that mindset) will cause you to miss opportunities
> > for simplifying the workflow and enhancing the value of the solution.
> 
> I think I would understand better if we could discuss concrete examples 
> of projects that fall in the "business software" category, and concrete 
> examples of "opportunities for simplifying the workflow" that could be 
> missed by focusing on small bits of value and/or implementing too soon.
> 
> Laurent

I have seen projects that sought to "automate" business administrative
processes in order to improve efficiency.  The programmers would
dutifully create the screens, programs and databases that would
support the process as they observed it in the manual/legacy world,
with the blessing of the user reps, who believed that technology alone
would make all their problems go away.

The problem is that while the process was technically "automated", in
that things previously written down on paper were now keyed into
computers, no thought was put into eliminating manual checks,
approvals and steps in the process that were no longer required.  And
the paper was kept around as a backup and routed via a file folder
(the file number being keyed into the system for tracking).  Paper
forms were checked to ensure that the *client-calculated* totals for
columns matched the actual numerical totals, the checking for which
was now done using totals displayed on the screen instead of paper
tape (at least in most cases).

These sorts of systems, which are little more than computer scaffolding
on manual paper-based processes, have a tendency to stick around in
the form they're in.  Why?  Because people get used to them, and the
user-reps who wanted it this way become (or remain) supervisors and
later managers.  When the much-hyped improvements in efficiency failed
to materialize, you know what?  They'd start another project to
replace the system with something using fancier technology, to
"improve efficiency".

The case is a composite of sorts, but is based in entirety on things I
have seen in my career (hope this doesn't violate your requirement of
"concreteness").  This is what I think of when people talk of "letting
the customer steer".

My background includes a good deal of business process re-engineering
work.  While a full-blown BPR exercise is not a must for every
software project, it's amazing how much you can improve things when
you don't take the current manual or legacy system-driven process as a
given.


Cy
0
cycoe (74)
10/4/2004 10:15:07 PM
Robert Grace wrote:

> Shayne Wissler wrote:
> > I think the main point here is that it's useful to analyze/evaluate the
> > problem before starting to code, not that analysis requires
documentation.
>
> If i have a story that says implement a VPN, i think we might want
> to do some analysis and perhaps even write something down (horrors).

What's the Virtual Private Network for? Who will use it, doing what, and how
will they benefit?

Look outward, at the profitable goal, not inward, towards the cuddly cute
little bits.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces



0
phlip_cpp (3852)
10/5/2004 12:24:13 AM
Cy Coe wrote:

> I have seen projects that sought to "automate" business administrative
> processes in order to improve efficiency.

Given such a challenge, there are those who would ask the business side and
programmers to agree on a big, simple, obvious metric for "efficiency". For
example, maybe the average time from a sales call to printing a picking slip
must go from 30 to 15 minutes. Then the team agrees on a future date to hit
that goal. The goal is not nebulous, or a placebo, it is a specific
efficient thing.

Then, after each 1-week iteration, the team leaders compare the current
efficiency to the goal. Halfway through the time window, the goal should be
half achieved.
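
A sketch of that comparison; the numbers are from the example above, the
function itself is invented (Python):

    def expected_minutes(start, goal, total_weeks, week):
        # Linear ramp: halfway through the window, halfway to the goal.
        return start + (goal - start) * week / total_weeks

    # 30 -> 15 minutes over a ten-week window; after week 5 the team
    # should be at or under 22.5 minutes:
    assert expected_minutes(30, 15, 10, 5) == 22.5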

> The programmers would
> dutifully create the screens, programs and databases that would
> support the process as they observed it in the manual/legacy world,
> with the blessing of the user reps, who believed that technology alone
> would make all their problems go away.

Did they try the most important screens online, with real users, as soon as
possible? Or did the programmers simply tell everyone "leave us alone -
everything's on track"?

> The problem is that while the process was technically "automated", in
> that things previously written down on paper were now keyed into
> computers, no thought was put into eliminating manual checks,
> approvals and steps in the process that were no longer required.  And
> the paper was kept around as a backup and routed via a file folder
> (the file number being keyed into the system for tracking).  Paper
> forms were checked to ensure that the *client-calculated* totals for
> columns matched the actual numerical totals, the checking for which
> was now done using totals displayed on the screen instead of paper
> tape (at least in most cases).

Right. That sounds like the infernal paperwork silo within programming
itself that XP seeks to reduce.

> These sort of systems, which are little more than computer scaffolding
> on a manual paper-based processes, have a tendency to stick around in
> the form they're in.  Why?  Because people get used to them, and the
> user-reps who wanted it this way become (or remain) supervisors and
> later managers.

If they were not subject to round-trip review, from their beginnings, then
they added risk and flab to the process.

> This is what I think of when people talk of "letting
> the customer steer".

And that's exactly why XP helps the customer steer with the headlights
turned on, not off.

One of the hardest things to explain about XP is the level of visibility it
provides. Books like /Rapid Development/ by Steve McConnell say that the
more visible a process is, the slower it is. If a software design process
has a low bug rate, and can force visibility up with hierarchical testing,
then the process has headlights that can illuminate obstacles farther away
than the process's turning radius.

Visibility changes everything.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/5/2004 2:09:18 AM
Rich MacDonald <rich@@clevercaboose.com> wrote in message news:<Xns9577D8F91D72Arichclevercaboosecom@24.94.170.86>...
> cycoe@hotmail.com (Cy Coe) wrote in
> news:1bd6d89a.0410031531.18415670@posting.google.com: 
>  
> > XP, as described in the books, does not treat requirements engineering
> > as a significant undertaking, and seems to assume that as long as you
> > have someone in the room who knows about the business and who is
> > willing to ask for small-grained features and author acceptance tests,
> > that everything will work out in terms of the solution meeting the
> > organization's needs.
> 
> I'm certain that any XPer with experience treats the need for and 
> development of requirements as a significant undertaking. XP just asks 
> (and answers): "Which would you prefer: The requirements doc or the
> author of the requirements doc sitting in the same room as you?"

Or, more accurately, "Which would you prefer:  The requirements doc or
the guy who would have authored the requirements document had he been
given the time to assemble his thoughts and research?"

Or, as some XP'ers would prefer, "The requirements doc or one of the
users who would have been interviewed by the guy who would have
written the requirements doc?"

Regardless of whether the Customer is a real live end user or a
business analyst, having that individual there answering programmers'
questions is taking time that could be spent by him dealing with the
project stakeholders and gathering those requirements that will be
needed in subsequent iterations (sharpening the saw, if you will).

XP seems to need the Customer to have all the answers.  The only
question is, how does he acquire that depth of knowledge if all he
does is talk to programmers?  How does the Customer both stay in touch
with the business and its needs and make himself perpetually available
to the programmers?

> > XP appears to not only recommend
> > that trial-and-error coding be the main requirements gathering
> > technique, but *insists* upon it.
> 
> Not so. It acknowledges that trial-and-error is a fact of life and tries 
> to find an approach that survives/thrives under that reality.

Whether you use prose, UML or code/tests as your analysis tool, churn
is still your enemy (or is, at the very least, your clients').

In the film "Unforgiven", Gene Hackman's character debunked the "fast
draw" myth of the American West.  He stated that the guy more likely
to win kept his cool and took the time to aim carefully before pulling
the trigger.  Requirements analysis is like this both in the sense of
keeping your cool (not going with the first half-baked solution that
occurs to you) and aiming carefully (thinking about what features you
want, what goals they support and how they work with other features -
both existing and anticipated).
  
> > And no, I'm not talking about "waterfall", "big design up front", or
> > whatever other bogeymen XP'ers trot out to scare the townsfolk.  I'm
> > prepared to accept that designing and coding can take place in a more
> > cyclical and concurrent manner than they have in traditional projects.
> >  But I think that requirements are different.  The old waterfall "lock
> > 'em in stone" approach has had its problems, but I think XP's
> > "do-it-as-you-go" approach errs in the opposite direction.  When
> > developing business software, focusing on the small at the expense of
> > proper consideration of the big picture (and yes, implementing too
> > soon pushes you into that mindset) will cause you to miss
> > opportunities for simplifying the workflow and enhancing the value of
> > the solution.
> 
> While I have strong sympathy with the belief that "local optimization" 
> can miss the "global solution", I have to confess that my XP experience 
> has yet to bear that out.

How would you know if it has?  What I mean by this is if you focus on
the small strokes instead of the big picture, and there aren't large,
glaring deficiencies in your solution, how do you know if you've
missed an opportunity to significantly strengthen the global solution?
 Bugs, failing tests or missing pieces of functionality are a more
obvious indicator of failure than a solution that simply doesn't do
the job it was built for well.  I suppose that XP, in a sense,
reflects a satisficing strategy - build something that's just good
enough, get it out there quick, and fix it later if you have to.

> I have not hit an impassable barrier since I 
> adopted "unit testing as you go". Instead, I've found that my "false 
> starts, blind alleys and poorly thought-out concepts" now actually 
> survive and evolve into a useful result.

Perhaps you don't abandon them when you should, because you believe
that, given enough iterations, you really can turn a sow's ear into a
silk purse.  And even if you can work such transmogrifications, is
this really the best way to get a silk purse?

> > You may ask "Well, isn't that the Customer's job?"  To which I respond
> > "Yes, it is!  That's why you should let him do it, with the help of
> > those people who are skilled in this sort of work."
> 
> But how? With a painfully slowly developed piece of paper that has not 
> yet proved itself in the real world, or with a mirror for the Customer 
> to look into? 

Working software only "proves itself" in the real world when real
users make use of it in their jobs.  The catch is that you don't want
to give it to real users to do real work until the software is
somewhat stable.

> Have you *ever* coded a project where you didn't have a 
> single question about the requirement's doc?

The other way to look at this is - the requirements doc *stimulated*
the question.  The Customer has the background of having written the
document and you have the background of having read it.  Both of you
know more than you would have had there not been a document.

The mistake isn't in writing the requirements document, it's in
assuming that it always has the last word.  But the process of writing
the requirements document forces the analyst and the business
stakeholders to think through the solution end-to-end, taking into
account how all the individual pieces fit together and reinforce one
another.  User stories chop all that into pieces, forcing each
"feature" to stand on its own as an island of functionality, with no
real expression of business intent.  You lose that sense of the whole.
  
> > Once you have implemented software out there, you're in maintenance
> > mode.  Significant changes to the architecture may be possible (maybe
> > even easy) due to the code structuring and refactoring rules that XP
> > dictates, but a working system resists significant change in a way
> > that has nothing to do the cost of writing code.  It's a
> > psychological/sociological barrier to change.  Having something,
> > however suboptimal, makes it harder to justify asking for something
> > different.  Like it or not, your best chance at ensuring that the
> > right thing is built comes before you start construction, even in
> > software.
> > 
> That's just not my experience when I am involved from start to finish. 
> When I have to maintain someone else's code, yes. But when I own it, I 
> have no barrier to change, psychological or sociological or anything 
> else beyond hours in the day and cost-effectiveness. In fact, if I don't 
> "change it" in some way every single day, then its aging badly and there 
> is a problem. I fervently believe in the quote: "If its useful, it will 
> be changed". Its been a truism IME.

I'm not talking about the programmer in this case.  I'm talking about
the user/customer, and the effect that having a half-assed solution in
place has on the desire/will to seek a better one.  Many organizations
give lip service to the idea of "mistakes as a learning vehicle", but
in reality, getting something wrong simply means "somebody screwed
up".  And what's more, that's *exactly* the right attitude to take!

And having a poorly thought-out solution reflected in working
functionality makes that failure more concrete and visible than a few
easily edited lines in a requirements doc.


Cy
0
cycoe (74)
10/5/2004 2:42:43 AM
Cy Coe wrote:

> Or, more accurately, "Which would you prefer:  The requirements doc or
> the guy who would have authored the requirements document had he been
> given the time to assemble his thoughts and research?"

Analyzing requirements is a full-time job. The programming team is there to
assist, via visibility.

You would have the guy research in isolation.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/5/2004 2:59:17 AM

Ilja Preuß wrote:
> "implement a VPN" is not a story. It might be a starting point for coming up 
> with a usable mission statement, and with a bunch of stories later. Both 
> would require some analysis to be done, of course.

Seems like end-to-end customer business value to me. To break it down
would take quite a bit of analysis, one would imagine.
0
grace33 (48)
10/5/2004 5:05:22 AM
Shayne Wissler wrote:
> "Laurent Bossavit" <laurent@dontspambossavit.com> wrote in message
> news:MPG.1bcbd01559459f569897f2@news.noos.fr...
> 
>>Shayne,
>>
>>
>>>>And you have evaded the question as to how you know that supposed
>>>>"pattern" is in fact a pattern.
>>>
>>>I.e., you want a concrete?
>>
>>I'm going to be greedy and insist upon demonstrating the claim (that you
>>somehow know that someone asking for data is doing so insincerely), not
>>just pointing at a supposed example.
> 
> 
> Do you believe that just because you ask me to demonstrate something, that I
> am therefore required to?

You might want to try for your own good. This is your chance to 
demonstrate that your innuendo is something other than a poor substitute 
for rational argument.

> 
> So Laurent, why are we not talking about your post anymore?
> 
> 
> Shayne Wissler
> http://www.ouraysoftware.com
> 
> 
0
reiersol (156)
10/5/2004 6:33:25 AM
Robert Grace wrote:
> Ilja Preuß wrote:
>> "implement a VPN" is not a story. It might be a starting point for
>> coming up with a usable mission statement, and with a bunch of
>> stories later. Both would require some analysis to be done, of
>> course.
>
> Seems like end-to-end customer business value to me. To break it
> down would take quite a bit of analysis, one would imagine.

Yes. But a User Story has more properties than just being end-to-end 
customer business value. By definition, it needs to be

- able to be estimated by the developers (typically as being of size 1, 2
  or 3),
- small enough that you can do at least a handful of them per iteration, and
- tangible enough that the Customer knows how to write automated Acceptance 
  Tests for it.

You are right that this takes some amount of analysis. That's what XP teams 
do.
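
To make "tangible" concrete, here is a minimal, made-up sketch of what
such an Acceptance Test could look like. Plain assert() stands in for
whatever test rig the team really uses; the story and all names are
invented:

#include <cassert>
#include <cmath>

// Invented story: "An order's total includes 5% sales tax, rounded
// to the nearest cent."  This is the code the story would drive out.
double orderTotal(double subtotal)
{
    double taxed = subtotal * 1.05;
    return std::floor(taxed * 100.0 + 0.5) / 100.0; // round to cent
}

int main()
{
    // Concrete cases the Customer can read, check, and extend.
    assert(orderTotal(100.00) == 105.00);
    assert(orderTotal(19.99) == 20.99); // 20.9895 rounds up
    assert(orderTotal(0.00) == 0.00);
    return 0;
}

A story that can't be pinned down by a handful of cases like these is
probably not yet small or tangible enough.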

Take care, Ilja 


0
it3974 (470)
10/5/2004 6:42:39 AM
Cy,

Thanks for providing a detailed narrative. You are indeed saying what I 
thought you were saying. And, in the main, I agree with you: without a 
big-picture, structured understanding of how a given project is supposed 
to create value, the project is liable to go awry.

What I'm wondering is, what does "big picture structured understanding"  
entail that *cannot possibly be provided* by whatever process would, in 
an XP project, result in the team having a few dozen index cards to 
discuss, with titles such as "Provide form to key in sheet X" or "Print 
out report X". Which is what is supposed to happen before any coding 
gets done.

Another thing I notice in your narrative is this. There are people whose 
jobs will be made redundant by a successful "reengineering", since the 
computer checks will take a lot less time than manual checks. (To be 
blunt about it, "business value" in some projects means being able to 
fire people.) And there are people whose jobs may become more important 
if they manage to prevent the project from succeeding at eliminating 
redundant work.

I can well understand that if "the customer who steers" is someone from 
that constituency, it's all too likely that the project will create 
value for that constituency, not for the business as a whole. Even if 
(maybe especially if) they *do* understand the big picture.

These are issues of managing conflicts of interest, not issues of 
understanding requirements.

Laurent
http://bossavit.com/thoughts/
0
laurent (379)
10/5/2004 8:07:30 AM
Cy,

> But the process of writing the requirements document forces the
> analyst and the business stakeholders to think through the solution
> end-to-end, taking into account how all the individual pieces fit
> together and reinforce one another. 

You're suggesting that it would not be accurate to write:

  The process of writing titles of user stories on index cards forces
  the analyst and the business stakeholders to think through the
  solution end-to-end, taking into account how all the individual
  stories fit together and reinforce one another.

And I'm not quite sure I understand why. Level of detail is lower, to be 
sure. But presumably a requirements document may get its start in life 
as an outline. And I would expect that an outline would give a better 
"sense of the whole" than a minutely detailed document. I don't see what 
is crucial about level of detail.

Can you set me straight ?

Laurent
http://bossavit.com/thoughts/
0
laurent (379)
10/5/2004 9:24:45 AM
> One of the hardest things to explain about XP is the level of visibility it
> provides. Books like /Rapid Development/ by Steve McConnell say that the
> more visible a process is, the slower it is. If a software design process
> has a low bug rate, and can force visibility up with hierarchical testing,
> then the process has headlights that can illuminate obstacles farther away
> than the process's turning radius.
>
> Visibility changes everything.

I don't think McConnell is saying transparency is bad; in fact there are whole
chapters devoted to the cause of reducing the iterative development cycle
and increasing feedback. It just comes at a price, and thus there is a
judgement about how much transparency is sensible.


0
Nicholls.Mark (1061)
10/5/2004 12:14:00 PM

Ilja Preuß wrote:
> You are right that this takes some amount of analysis. That's what XP teams 
> do.

When would the XP team do the analysis? It would probably take a solid
two weeks or more of effort. Not something that can be done
in a one day planning game.
0
grace33 (48)
10/5/2004 1:48:39 PM
Mark Nicholls wrote:

> I don't think McConnell is saying transparency is bad; in fact there are
> whole chapters devoted to the cause of reducing the iterative development
> cycle and increasing feedback. It just comes at a price, and thus there is a
> judgement about how much transparency is sensible.

I refer to the diagrams. One shows many blackouts, and is short. The other
shows fewer blackouts, and is longer.

Steve, at that time, was unfamiliar with test & review techniques which
invert those diagrams. Like /Mike Mulligan and his Steam Shovel/, the more
review we get, the faster we go.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/5/2004 2:18:56 PM
Laurent Bossavit wrote:

> What I'm wondering is, what does "big picture structured understanding"
> entail that *cannot possibly be provided* by whatever process would, in
> an XP project, result in the team having a few dozen index cards to
> discuss, with titles such as "Provide form to key in sheet X" or "Print
> out report X". Which is what is supposed to happen before any coding
> gets done.

Stop teasing them about the index cards.

They are reading (or pretending to read): "You are not allowed to specify
requirements anywhere except on an index card."

XP teams specify requirements however they see fit. The only parts that are
not _optional_ are the index cards and acceptance tests.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/5/2004 2:33:32 PM
Robert Grace wrote:
> Ilja Preuß wrote:
>> You are right that this takes some amount of analysis. That's what
>> XP teams do.
>
> When would the XP team do the analysis?

In the project's Exploration Phase:
http://www.agilemodeling.com/essays/agileModelingXPLifecycle.htm

> It would probably take a solid
> two weeks or more of effort. Not something that can be done
> in a one day planning game.

Depending on the project, that phase can last from a couple of hours to a
couple of weeks, yes.

Cheers, Ilja


0
preuss (368)
10/5/2004 3:27:05 PM
"Dagfinn Reiersol" <reiersol@online.no> wrote in message
news:Yar8d.272$rh1.4748@news2.e.nsc.no...
> Shayne Wissler wrote:
> > "Laurent Bossavit" <laurent@dontspambossavit.com> wrote in message
> > news:MPG.1bcbd01559459f569897f2@news.noos.fr...
> >
> >>Shayne,
> >>
> >>
> >>>>And you have evaded the question as to how you know that supposed
> >>>>"pattern" is in fact a pattern.
> >>>
> >>>I.e., you want a concrete?
> >>
> >>I'm going to be greedy and insist upon demonstrating the claim (that you
> >>somehow know that someone asking for data is doing so insincerely), not
> >>just pointing at a supposed example.
> >
> >
> > Do you believe that just because you ask me to demonstrate something,
that I
> > am therefore required to?
>
> You might want to try for your own good. This is your chance to
> demonstrate that your innuendo is something other than a poor substitute
> for rational argument.

My "good" is not a function of what you think of me. Second, it would be a
contradiction for me to say "this is evident" and then proceed to give an
argument as to why it's evident. Either it's clear to you as well or not.


Shayne Wissler
http://www.ouraysoftware.com


0
10/5/2004 4:20:53 PM
"Phlip" <phlip_cpp@yahoo.com> wrote in message
news:k3y8d.14223$Qv5.584@newssvr33.news.prodigy.com...
> Mark Nicholls wrote:
>
> > I don't think McConnell is saying transparency is bad, in fact there a
> whole
> > chapters devoted to the cause of reducing the iterative development
cycle
> > and increasing feedback, just it comes at a price and thus there is a
> > judgement about how much transparency is sensible.
>
> I refer to the diagrams. One shows many blackouts, and is short. The other
> shows fewer blackouts, and is longer.

I haven't got it in front of me....but I think I can remember the ones you
mean.

I accept that in general SD lacks transparency, and increasing that
transparency would often but not always shorten the development process.

We agree, I just thought it may have been a little harsh.

>
> Steve, at that time, was unfamiliar with test & review techniques which
> invert those diagrams. Like /Mike Mulligan and his Steam Shovel/, the more
> review we get, the faster we go.
>
Is the 'invert' a joke.....please define 'invert'....noooooooo....

I agree, as I say I think the point is (but it was a while ago, so you may
well be right) that the more transparency there is in a process the more
resource involved, if the information does not highlight any problems then
it is essentially wasted effort...in extreme in a bean canning factory we
don't necessarily need to know exactly how many beans are in each tin, and
to make the information transparent and available would clearly be wasted
effort and potentially make the bean canning process slower.

SD is probably at the other end of the scale, the more feedback we get the
more directed we are and the better we get at examining our own navels, and
the more we learn, not always, but often.

We agree...I just thought it potentially harsh on a generally very good
book.


0
Nicholls.Mark (1061)
10/5/2004 4:46:14 PM
Robert Grace wrote:

> Ilja Preuß wrote:
> > You are right that this takes some amount of analysis. That's what XP
teams
> > do.
>
> When would the XP team do the analysis? It would probably take a solid
> two weeks or more of effort. Not something that can be done
> in a one day planning game.

The one day planning game only starts the analysis for the highest business
value feature requests. (There are both technical and financial reasons to
delay worrying about lower priority features.)

The analysis happens during the week. First, people occupying the Customer
and Tester roles collude to convert requirements to measurable
specifications. This requires analyzing the meaning of feature requests.
Requests like "the server should be fast" convert into line-items like "we
need to perform 500 sales order transactions in 1 minute". These requests
also convert into marching orders to the programmers, such as "the customer
needs the server to be fast, so take your time while optimizing any server
code you understand."

The technical specifications are written as storytests. These fail until the
programmers finish their stories and pass them. All this effort requires
analyzing the meaning of requirements, and analyzing how they will fit into
the project's architecture.
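
As an illustration only (the helper postSalesOrder() is invented, and
plain assert() stands in for the team's real acceptance-test rig), the
"500 sales order transactions in 1 minute" line-item might become a
storytest shaped like this:

#include <cassert>
#include <ctime>

// Invented placeholder for the call the programmers will make real.
void postSalesOrder() { /* submit one order to the server under test */ }

int main()
{
    std::time_t start = std::time(0);

    for (int i = 0; i < 500; ++i)
        postSalesOrder();

    // The measurable specification, straight from the Customer.
    // It fails until the programmers finish the story and pass it.
    assert(std::difftime(std::time(0), start) <= 60.0);
    return 0;
}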

Much traditional analysis also occurs. XP simply recommends a framework to
hang it on.

Analysis is a full-time job. Nobody said it was just the planning game.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/5/2004 4:56:25 PM
Mark Nicholls wrote:

> I accept that in general SD lacks transparency, and increasing that
> transparency would often but not always shorten the development process.

Turn it around. Shortening the development process, via automated and manual
review, improves transparency.

A project becomes "Agile" when the rate it collects feedback about its
location and direction exceeds the rate developers change that location and
direction.

Using a metaphor of steering with headlights, any analysis without live code
is like flooring the gas pedal, and turning off the lights.

> I agree, as I say I think the point is (but it was a while ago, so you may
> well be right) that the more transparency there is in a process the more
> resource involved, if the information does not highlight any problems then
> it is essentially wasted effort...in extreme in a bean canning factory we
> don't necessarily need to know exactly how many beans are in each tin, and
> to make the information tranparent and available would clearly be wasted
> effort and potentially make the bean canning process slower.

All XP visibility comes by collecting a side-effect of another process. For
example, pair programming is good for the code right now, but it's also good
later when your pair reports an analysis to his next pair.

> We agree...I just thought it potentially harsh on a generally very good
> book.

It's a good book because it surveys dozens of real projects.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/5/2004 5:00:10 PM
"Phlip" <phlip_cpp@yahoo.com> wrote in message
news:uqA8d.14249$Qv5.9503@newssvr33.news.prodigy.com...
> Mark Nicholls wrote:
>
> > I accept that in general SD lacks transparency, and increasing that
> > transparency would often but not always shorten the development process.
>
> Turn it around. Shortening the development process, via automated and
manual
> review, improves transparency.

the point I (and Mr McConnell) am making is that this is not always true.

McConnell recommends manual review and automated transparency.

Shortening the development process (without adversely affecting cost or
product) may be possible by increasing transparency, but it may be by
decreasing transparency....there is a point where peer review and other
techniques become worth less than investing that time in creating the
product...and diminishing returns eventually go to negative returns.

Have you not ever worked on a project where there seems to be more time spent
in meetings talking about what we may or may not do than actually doing
something and then talking about that?

>
> A project becomes "Agile" when the rate it collects feedback about its
> location and direction exceeds the rate developers change that location and
> direction.

I don't understand this.

>
> Using a metaphor of steering with headlights, any analysis without live code
> is like flooring the gas pedal, and turning off the lights.

McConnell would probably not disagree...there are chapters devoted to this,
in fact possibly a third of the book...but he does not propose a one size
fits all mentality......canning beans is different from writing
software.....research projects are different from simple database apps.

>
> > I agree, as I say I think the point is (but it was a while ago, so you may
> > well be right) that the more transparency there is in a process the more
> > resource involved, if the information does not highlight any problems then
> > it is essentially wasted effort...in extreme in a bean canning factory we
> > don't necessarily need to know exactly how many beans are in each tin, and
> > to make the information transparent and available would clearly be wasted
> > effort and potentially make the bean canning process slower.
>
> All XP visibility comes by collecting a side-effect of another process. For
> example, pair programming is good for the code right now, but it's also good
> later when your pair reports an analysis to his next pair.

This is possibly true...but it can delay the development process in the
short term...there is a pay off even when pair programming works.

>
> > We agree...I just thought it potentially harsh on a generally very good
> > book.
>
> It's a good book because it surveys dozens of real projects.
>

And then says there are all sorts of different ways of going about
this...one size does not fit all.



0
Nicholls.Mark (1061)
10/5/2004 5:22:48 PM
"Laurent Bossavit" <laurent@dontspambossavit.com> wrote in message
news:MPG.1bcc749a40e397429897f3@news.noos.fr...
> Cy,
>
> Thanks for providing a detailed narrative. You are indeed saying what I
> thought you were saying. And, in the main, I agree with you: without a
> big-picture, structured understanding of how a given project is supposed
> to create value, the project is liable to go awry.
>
> What I'm wondering is, what does "big picture structured understanding"
> entail that *cannot possibly be provided* by whatever process would, in
> an XP project, result in the team having a few dozen index cards to
> discuss, with titles such as "Provide form to key in sheet X" or "Print
> out report X". Which is what is supposed to happen before any coding
> gets done.

There is no such methodology where some good thing "cannot possibly be
provided". Even in an XP project, things can go well, as long as the
individuals make up for the bad parts of the methodology, which they often
do.


Shayne Wissler
http://www.ouraysoftware.com


0
10/5/2004 7:06:24 PM
Mark Nicholls wrote:

> the point I (and Mr McConnell) am making is that this is not always true.
>
> McConnell recommends manual review and automated transparency.
>
> Shortening the development process (without adversely affecting cost or
> product) may be possible by increasing transparency, but it may be by
> decreasing transparency....there is a point where peer review and other
> techniques become worth less than investing that time in creating the
> product...and diminishing returns eventually go to negative returns.
>
> Have you not ever worked on a project where there seems to be more time spent
> in meetings talking about what we may or may not do than actually doing
> something and then talking about that?

Yes. And I have also experienced using pairing, TDD, continuous integration,
and frequent releases to push the bug rate so low that the project was easy
and safe to steer. When Steve wrote /RAD/, he hadn't experienced that yet.

> > A project becomes "Agile" when the rate it collects feedback about its
> > location and direction exceeds the rate developers change that location
> and
> > direction.
>
> I don't understand this.

Draw a triangle. At one corner is Code-and-Fix, which is still the most
popular methodology. You never know where the project is, so if you call a
meeting, the programmers will begrudgingly admit where the project could be
if they could get thru the current round of bugs. Then they add a couple
features, and these shake loose another round of bugs.

At another corner is Waterfall, which is more popular than anyone wants to
admit. You never know when a phase is over, and during that phase you give a
project a lot of velocity, with no feedback from live code (or bugs).

At the third corner are highly iterative processes that leverage tests to
prevent long endless bug hunts. As a side-effect, these tests provide much
of the role of requirements analysis and designing. But programmers can't
change the project's location without causing more illumination on its
real-time situation.

> > Using a metaphor of steering with headlights, any analysis without live
> > code is like flooring the gas pedal, and turning off the lights.
>
> McConnell would probably not disagree...there are chapters devoted to this,
> in fact possibly a third of the book...but he does not propose a one size
> fits all mentality......canning beans is different from writing
> software.....research projects are different from simple database apps.

There are many disciplines where one best solution has indeed risen above
the alternatives. The germ "theory" of pathology, for example...

> > All XP visibility comes by collecting a side-effect of another process.
> > For
> > example, pair programming is good for the code right now, but it's also
> > good
> > later when your pair reports an analysis to his next pair.
>
> This is possibly true...but it can delay the development process in the
> short term...there is a pay off even when pair programming works.

It is good for engineers to remember there are always trade-offs. And
there's always a little solo coding going on, in an XP project.

However, if you can find another way to provide short and long term benefits
at the same time, by going faster, tell us!

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/5/2004 7:30:46 PM
Daniel Parker wrote:

snipped

> 
> My experience has varied.  I've worked with no requirements documents in
> pure winging-it environments, in the company of very highly skilled
> programmers.  I've worked from requirement documents where I've had very few
> questions.  I've worked from requirements documents which were completely
> useless and I had to repeat the analysis by bypassing the analysts and going
> back to the end users.  And I've prepared requirements documents that don't
> seem to require very many supplemental questions.
> 

I think everyone here agrees that none of us can dispute your 
personal experience.

However, some of us (myself included) have experienced the same as you 
have described and now know what XP has to offer.

In doing so, we see things differently, that's all, and we aren't the only 
ones, for example...

> In banks, on sizeable projects that involve a year or so of work, the
> business wants to be able to manage their budget and time within small
> tolerances.  They become an exercise in managing detail and tracking time.
> And people become good at it.
> 

.... the UK online Egg bank uses XP for ALL of its software development.

http://www.computing.co.uk/analysis/1148508


0
news248 (706)
10/5/2004 11:12:40 PM
Daniel Parker wrote:
> "Shayne Wissler" <thalesNOSPAM000@yahoo.com> wrote in message news:<uF48d.105919$wV.78399@attbi_s54>...
> 
>>"Daniel Parker" <danielaparker@spam?nothanks.windupbird.com> wrote in
>>message news:Fj48d.1218$HO1.60368@news20.bellglobal.com...
> 
> 
>>> if I ever have the time I'd
>>
>> like
>>
>>>to start up an open source project to do something better.
>>
>>What'd you have in mind?
>>
> 
> Dynamic document generation, where the notion of a document becomes
> rather abstract.  Separation of content from presentation;
> specification documents tend to be heavily templated.  Content
> assembly from a variety of sources, including sampling from live
> sources such as databases, flat files of every description, and real
> time feeds; glossaries and data dictionaries both real and virtual,
> e.g. database metadata; and textual content prepared and reviewed
> across the enterprise. Facilities for testing and reconciling data and
> incorporating that into documents.  Facilities for versioning and
> automating conversion across versions.  A gui for assembling the
> pieces.  An extendable framework, of course.  That sort of thing.
> 
> Regards,
> Daniel Parker

I have to say, it sounds awfully like the automated acceptance tests we 
use in XP.  The acceptance tests are 'executable' requirements document 
contents. They grow with the system as the system grows.
0
news248 (706)
10/5/2004 11:15:23 PM
"Phlip" <phlip_cpp@yahoo.com> wrote in message news:<0hy8d.14224$Qv5.12616@newssvr33.news.prodigy.com>...
> Laurent Bossavit wrote:
> 
> > What I'm wondering is, what does "big picture structured understanding"
> > entail that *cannot possibly be provided* by whatever process would, in
> > an XP project, result in the team having a few dozen index cards to
> > discuss, with titles such as "Provide form to key in sheet X" or "Print
> > out report X". Which is what is supposed to happen before any coding
> > gets done.
> 
> Stop teasing them about the index cards.
> 
> They are reading (or pretending to read): "You are not allowed to specify
> requirements anywhere except on an index card."

Why forbid outright when a sneer and a cute acronym will suffice?

> XP teams specify requirements however they see fit.

Really?  So the XP gurus wouldn't declare a project "not XP" (and the
team by extension not an "XP team") if the team decided to write a
requirements document using use cases or business process models
before deriving a set of XP-style user stories to write on the cards? 
If I were the Customer on an XP team, that's how I'd want to do it.

> The only parts that are
> not _optional_ are the index cards and acceptance tests.


Cy
0
cycoe (74)
10/6/2004 1:46:43 AM
Shayne Wissler wrote:
> "Dagfinn Reiersol" <reiersol@online.no> wrote in message
> news:Yar8d.272$rh1.4748@news2.e.nsc.no...
> 
>>Shayne Wissler wrote:
>>
>>>"Laurent Bossavit" <laurent@dontspambossavit.com> wrote in message
>>>news:MPG.1bcbd01559459f569897f2@news.noos.fr...
>>>
>>>
>>>>Shayne,
>>>>
>>>>
>>>>
>>>>>>And you have evaded the question as to how you know that supposed
>>>>>>"pattern" is in fact a pattern.
>>>>>
>>>>>I.e., you want a concrete?
>>>>
>>>>I'm going to be greedy and insist upon demonstrating the claim (that you
>>>>somehow know that someone asking for data is doing so insincerely), not
>>>>just pointing at a supposed example.
>>>
>>>
>>>Do you believe that just because you ask me to demonstrate something,
> 
> that I
> 
>>>am therefore required to?
>>
>>You might want to try for your own good. This is your chance to
>>demonstrate that your innuendo is something other than a poor substitute
>>for rational argument.
> 
> 
> My "good" is not a function of what you think of me. Second, it would be a
> contradiction for me to say "this is evident" and then proceed to give an
> argument as to why it's evident. Either it's clear to you as well or not.

We're not talking about pure mathematics and logic, we're talking about 
your claim that XPers are only pretending to be interested in facts. You 
can't mean that it would be evident to someone who has never even heard 
of XP, so it must be based on some kind of observation--evidence.
0
reiersol (156)
10/6/2004 7:53:48 AM
Dagfinn Reiersol wrote:
> Shayne Wissler wrote:
> 
>> "Dagfinn Reiersol" <reiersol@online.no> wrote in message
>> news:Yar8d.272$rh1.4748@news2.e.nsc.no...
>>
>>> Shayne Wissler wrote:
>>>
>>>> "Laurent Bossavit" <laurent@dontspambossavit.com> wrote in message
>>>> news:MPG.1bcbd01559459f569897f2@news.noos.fr...
>>>>
>>>>
>>>>> Shayne,
>>>>>
>>>>>
>>>>>
>>>>>>> And you have evaded the question as to how you know that supposed
>>>>>>> "pattern" is in fact a pattern.
>>>>>>
>>>>>>
>>>>>> I.e., you want a concrete?
>>>>>
>>>>>
>>>>> I'm going to be greedy and insist upon demonstrating the claim 
>>>>> (that you
>>>>> somehow know that someone asking for data is doing so insincerely), 
>>>>> not
>>>>> just pointing at a supposed example.
>>>>
>>>>
>>>>
>>>> Do you believe that just because you ask me to demonstrate something,
>>
>>
>> that I
>>
>>>> am therefore required to?
>>>
>>>
>>> You might want to try for your own good. This is your chance to
>>> demonstrate that your innuendo is something other than a poor substitute
>>> for rational argument.
>>
>>
>>
>> My "good" is not a function of what you think of me. Second, it would 
>> be a
>> contradiction for me to say "this is evident" and then proceed to give an
>> argument as to why it's evident. Either it's clear to you as well or not.
> 
> 
> We're not talking about pure mathematics and logic, we're talking about 
> your claim that XPers are only pretending to be interested in facts. You 
> can't mean that it would be evident to someone who has never even heard 
> of XP, so it must be based on some kind of observation--evidence.

And let me remind you that you said "it wears thin after years and years 
of everyone doing it", so you're implying more than a few casual 
observations.
0
reiersol (156)
10/6/2004 8:29:00 AM
>
> > the point I (and Mr McConnell) am making is that this is not always
> > true.
> >
> > McConnell recommends manual review and automated transparency.
> >
> > Shortening the development process (without adversely affecting cost or
> > product) may be possible by increasing transparency, but it may be by
> > decreasing transparency....there is a point where peer review and other
> > techniques become worth less than investing that time in creating the
> > product...and diminishing returns eventually go to negative returns.
> >
> > Have you not ever worked on a project where there seems to be more time spent
> > in meetings talking about what we may or may not do than actually doing
> > something and then talking about that?
>
> Yes. And I have also experienced using pairing, TDD, continuous integration,
> and frequent releases to push the bug rate so low that the project was easy
> and safe to steer. When Steve wrote /RAD/, he hadn't experienced that yet.

I believe most of those things are mentioned...daily smoke and build and
assertions are a form of TDD, he talks about integration, though I do
believe he doesn't believe in premature integration, he believes in frequent
releases, and he believes in low bug rates, in fact it's one of the central
tenets of the book, detect and fix bugs before moving on.

So I don't really see where the problem is.

You accept that you can have too much transparency, and I think this is
simply the point he is making.

I see no contradiction, in fact I would have thought it would be considered
the trail blazer for much of what you talk about.

>
> > > A project becomes "Agile" when the rate it collects feedback about its
> > > location and direction exceeds the rate developers change that location and
> > > direction.
> >
> > I don't understand this.
>
> Draw a triangle. At one corner is Code-and-Fix, which is still the most
> popular methodology. You never know where the project is, so if you call a
> meeting, the programmers will begrudgingly admit where the project could be
> if they could get thru the current round of bugs. Then they add a couple
> features, and these shake loose another round of bugs.

I agree.

>
> At another corner is Waterfall, which is more popular than anyone wants to
> admit. You never know when a phase is over, and during that phase you give a
> project a lot of velocity, with no feedback from live code (or bugs).

I agree

>
> At the third corner are highly iterative processes that leverage tests to
> prevent long endless bug hunts.

the smoke and build chapter....a best practice according to McConnell, yes.
the spiral lifecycle chapter...a best practive according to McConnell, yes.

> As a side-effect, these tests provide much
> of the role of requirements analysis and designing.

hmmm, I don't see this.

They may highlight that the analysis and design are wrong, but the tests are not
a substitute; they are just an experiment on the basis of the current
analysis and design.

> But programmers can't
> change the project's location without causing more illumination on its
> real-time situation.

I don't understand this.

If you mean there needs to be feedback and transparency.....again I see no
contradiction to RAD, except his observation that it is susceptible to
diminishing marginal returns (as all things are in the short run).

>
> > > Using a metaphor of steering with headlights, any analysis without live
> > > code is like flooring the gas pedal, and turning off the lights.
> >
> > McConnell would probably not disagree...there are chapters devoted to this,
> > in fact possibly a third of the book...but he does not propose a one size
> > fits all mentality......canning beans is different from writing
> > software.....research projects are different from simple database apps.
>
> There are many disciplines where one best solution has indeed risen above
> the alternatives. The germ "theory" of pathology, for example...

so all disease is caused by germs?
so all disease can be cured by antibiotics?

no, if you go to the doctor with flu and he is a germ specialist, you won't
get very far, antibiotics are no use, it's a
virus....malaria...nope...cancer....nope....brain haemorrhage,
no...schizophrenia...no....pathology does not contain a one size fits all
approach, neither does SD.

Are you saying that you have a set of techniques that is always the very
best practice in all contexts, for all projects, for all sizes, for research
projects and for "hello world", for building the software on the space
shuttle and for writing an excel formula for calculating your monthly
expenses, for all organisations?

I don't think you do.

You have some interesting tools, that applied sensibly will probably yield
results, applied badly won't.....silver bullet?.....no....

I see little in what you say that I don't see in RAD....except that RAD
claims the world is a complex place of trade offs and compromise.

>
> > > All XP visibility comes by collecting a side-effect of another process. For
> > > example, pair programming is good for the code right now, but it's also good
> > > later when your pair reports an analysis to his next pair.
> >
> > This is possibly true...but it can delay the developement process in the
> > short term...there is a pay off even when pair programming works.
>
> It is good for engineers to remember there are always trade-offs. And
> there's always a little solo coding going on, in an XP project.

But isn't that the point of the RAD book....there are always trade offs, there
is no one best practice.

why is there a little solo coding going on? is it because it's not a one
size fits all technique? is it because it would be a waste of resource in
many contexts and delay the development schedule rather than decrease it.

>
> However, if you can find another way to provide short and long term benefits
> at the same time, by going faster, tell us!

I never claimed I could.

And actually that was my point to you!.....this is what you seem to
claim....in the short run we will always go quicker, in the long run we will
always go quicker, the more we apply the quicker we go...

We don't particularly disagree...I just think you're being harsh on McConnell.


0
Nicholls.Mark (1061)
10/6/2004 12:27:40 PM
cycoe@hotmail.com (Cy Coe) wrote in
news:1bd6d89a.0410041842.15eb2ea2@posting.google.com: 

> Rich MacDonald <rich@@clevercaboose.com> wrote in message
> news:<Xns9577D8F91D72Arichclevercaboosecom@24.94.170.86>... 
>> cycoe@hotmail.com (Cy Coe) wrote in
>> news:1bd6d89a.0410031531.18415670@posting.google.com: 
> 
> Regardless of whether the Customer is a real live end user or a
> business analyst, having that individual there answering programmers'
> questions is taking time that could be spent by him dealing with the
> project stakeholders and gathering those requirements that will be
> needed in subsequent iterations (sharpening the saw, if you will).

Lame.
 
> XP seems to need the Customer to have all the answers.  The only
> question is, how does he acquire that depth of knowledge if all he
> does is talk to programmers?  How does the Customer both stay in touch
> with the business and its needs and make himself perpetually available
> to the programmers?

Lamer.
 
>> > XP appears to not only recommend
>> > that trial-and-error coding be the main requirements gathering
>> > technique, but *insists* upon it.
>> 
>> Not so. It acknowledges that trial-and-error is a fact of life and
>> tries to find an approach that survives/thrives under that reality.
> 
> Whether you use prose, UML or code/tests as your analysis tool, churn
> is still your enemy (or is, at the very least, your clients').

Churn is the reality. Get over it.
 
> I suppose that XP, in a sense,
> reflects a satisficing strategy - build something that's just good
> enough, get it out there quick, and fix it later if you have to.

You're finally starting to get it. BTW, you *will* have to, and we like 
to say "improve" rather than "fix" :-)
 
>> I have not hit an impassable barrier since I 
>> adopted "unit testing as you go". Instead, I've found that my "false 
>> starts, blind alleys and poorly thought-out concepts" now actually 
>> survive and evolve into a useful result.
> 
> Perhaps you don't abandon them when you should, because you believe
> that, given enough iterations, you really can turn a sow's ear into a
> silk purse.  And even if you can work such transmogrifications, is
> this really the best way to get a silk purse?

IME, yes. For my personal work, upfront requirements...analysis 
models...anything prior to code that goes further than sketches on a 
piece of paper or whiteboard has always proven to be a waste of time. 
Since my experience shows me that the end product won't look anything 
like the start, why should I spend half my time drawing a detailed map?
 
>> > You may ask "Well, isn't that the Customer's job?"  To which I
>> > respond "Yes, it is!  That's why you should let him do it, with the
>> > help of those people who are skilled in this sort of work."
>> 
>> But how? With a painfully slowly developed piece of paper that has
>> not yet proved itself in the real world, or with a mirror for the
>> Customer to look into? 
> 
> Working software only "proves itself" in the real world when real
> users make use of it in their jobs.  The catch is that you don't want
> to give it to real users to do real work until the software is
> somewhat stable.

Exactly. However, stable in the sense of bug-free, not stable in the 
sense of unchanging.
 
>> Have you *ever* coded a project where you didn't have a 
>> single question about the requirement's doc?
> 
> The other way to look at this is - the requirements doc *stimulated*
> the question.  The Customer has the background of having written the
> document and you have the background of having read it.  Both of you
> know more than you would have had there not been a document.

This is good. And the "second wave" of questions comes when you're 
actually trying to code. Don't want to wait too long to catch those 
issues either, now.
 
> The mistake isn't in writing the requirements documents, it's in
> assuming that it always has the last word.

Excellent.

> But the process of writing
> the requirements document forces the analyst and the business
> stakeholders to think through the solution end-to-end, taking into
> account how all the individual pieces fit together and reinforce one
> another.  User stories chop all that into pieces, forcing each
> "feature" to stand on its own as an island of functionality, with no
> real expression of business intent.  You lose that sense of the whole.

Started strong then finished lame. Pullease. You're not writing a novel.
0
Rich
10/6/2004 2:33:55 PM
Andrew McDonagh <news@andrewcdonagh.f2s.com> wrote in message news:<cjv9uq$fg9$2@news.freedom2surf.net>...
> Daniel Parker wrote:

> > Dynamic document generation, where the notion of a document becomes
> > rather abstract.  Separation of content from presentation;
> > specification documents tend to be heavily templated.  Content
> > assembly from a variety of sources, including sampling from live
> > sources such as databases, flat files of every description, and real
> > time feeds; glossaries and data dictionaries both real and virtual,
> > e.g. database metadata; and textual content prepared and reviewed
> > across the enterprise. Facilities for testing and reconciling data and
> > incorporating that into documents.  Facilities for versioning and
> > automating conversion across versions.  A gui for assembling the
> > pieces.  An extendable framework, of course.  That sort of thing.
> > 
> I have to say, its sounds awfully like the automated acceptance test we 
> use in XP.  The acceptance test are 'executable' requirements document 
> contents. They grow with the system as the system grows.

I don't think so.  The focus is more on preparing specifications
documents, which in the current context are largely expressed in terms
of data, vast collections of data distributed in many forms across the
enterprise, with the aim of achieving minimal redundancy in the
expression of that data, and in the recording of vocabularies for
describing that data.  The problem with existing tools is that they
usually increase redundancy rather than lowering it.  There would of
course be some overlap with acceptance testing, in particular with
reconciliation of data, and with documenting test cases as part of a
specification.  But one has to limit the scope of things.

Regards,
Daniel Parker
0
10/6/2004 2:55:02 PM
Mark Nicholls wrote:

> I believe most of those things are mentioned...daily smoke and build and
> assertions are a form of TDD, he talks about integration, though I do
> believe he doesn't believe in premature integration, he believes in frequent
> releases, and he believes in low bug rates, in fact it's one of the central
> tenets of the book, detect and fix bugs before moving on.
>
> So I don't really see where the problem is.
>
> You accept that you can have too much transparency, and I think this is
> simply the point he is making.
>
> I see no contradiction, in fact I would have thought it would be considered
> the trail blazer for much of what you talk about.

"Too much meetings" is a kind of visibility that's bad.

If you target the right kind of feedback, you get a lot of external
visibility as a side-effect.

I don't think Steve, when he wrote the book, had experienced making <10 very
small edits, and passing all tests. Yes yes yes, many folks wrote lots of
tests before TDD. You noodle around for a long time, then pass some or all
high-level tests. If you think that more tests would slow you down, you
don't add the high-risk kind that could break after a legitimate code
change. And because some tests fail sometimes, you never know when it's
perfectly safe to integrate.

> > At the third corner are highly iterative processes that leverage tests to
> > prevent long endless bug hunts.
>
> the smoke and build chapter....a best practice according to McConnell, yes.
> the spiral lifecycle chapter...a best practice according to McConnell, yes.

And not out at the tip of that triangle. Specifically, each arc of a spiral
is a miniature waterfall: analyze, design, code, debug, test.

To move further from Waterfall, test first and analyze last.

To move further from Code-and-Fix, don't debug. If tests fail, hit Undo (or
discard your sandbox) until they pass, then start your current drive again.

If you couldn't _manually_ undo the last few changes, back to a passing
test, you ain't doing TDD, so you get fewer external benefits.

> > As a side-effect, these tests provide much
> > of the role of requirements analysis and designing.
>
> hmmm, I don't see this.
>
> They may highlight that the analysis and design are wrong, but the tests are not
> a substitute; they are just an experiment on the basis of the current
> analysis and design.

If you invest your A&D effort into them, then they do. They are the
diagrams, the architectural experiments, and the main force that decouples
your design.
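
Here is one way to picture that, as a made-up sketch (the names are
mine, and plain assert() stands in for a test harness). Writing the
test first makes a hard-wired dependency painful, so the design grows
a seam:

#include <cassert>
#include <string>

// The test wants to control "now", so an interface appears...
struct Clock
{
    virtual std::string today() const = 0;
    virtual ~Clock() {}
};

// ...and production code depends on the abstraction, not on the
// system clock.
std::string reportHeader(const Clock& clock)
{
    return "Daily Report for " + clock.today();
}

// A test double the test can nail down.
struct FixedClock : Clock
{
    std::string today() const { return "2004-10-06"; }
};

int main()
{
    FixedClock clock;
    assert(reportHeader(clock) == "Daily Report for 2004-10-06");
    return 0;
}

The abstract Clock exists only because a test demanded it; that is the
decoupling force at work.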

> > But programmers can't
> > change the project's location without causing more illumination on its
> > real-time situation.
>
> I don't understand this.
>
> If you mean there needs to be feedback and transparency.....again I see no
> contradiction to RAD, ezcept his observation that it is suceptable to
> diminiting marginal returns (as all things are in the short run).

Programmers cannot change a program with so many tests without adding tests
and tweaking them.

TEST_(TestEvaluations, testThreeTubes)
{

    layEgg( "egg = startOvum(4, 30, 100)\n"
            "base = egg.newRootAxiom\n"
            "base.tube.tube.tube\n"
            "hatch(egg)\n" );

    incubateTheEgg();
    FleaOpenGL * fog = pMainWindow->glWidget;

    CPPUNIT_ASSERT_EQUAL(uint(0), uint(fog->spheres.size()));
    CPPUNIT_ASSERT_EQUAL(uint(0), uint(fog->triangles.size()));
    CPPUNIT_ASSERT_EQUAL(uint(3), uint(fog->cylinders.size()));

}

The big string at the top is this program's input, exactly as a user would
type it in. If this were from an XP project, its domain expert could pair
with me and understand this test.

(The system metaphor is DNA, in the language Ruby, inside an egg. The
program's core logic "incubates" the egg and expresses its DNA. In this case
it's three tubes.)

If the string were more complex, the domain expert could pair with me to
write the correct assertions to analyze the output. And we would collude to
escalate our analysis into both reusable code and reusable tests which
expose that analysis to our colleagues. For example, a Web page could store
a list of representative input Ruby strings, and their graphical outputs,
for easy review:

    http://flea.sourceforge.net/reference.html

Instead of requiring analysis up-front, emerge it from all the tests. That
keeps it locked to our program's reality, not floating around.

These techniques contradict RAD when the headlights go from dim to bright.
Then you can see things farther away than your turning radius. When you
cross that threshold, your process's abilities change. For example, you no
longer need to carry the weight of the fender repair kit in your trunk.

> > There are many disciplines where one best solution has indeed risen above
> > the alternatives. The germ "theory" of pathology, for example...
>
> so all disease is caused by germs?

They ain't (directly) caused by evil spirits...

> But isn't that the point of RAD book....there are always trade offs, there
> is no one best practice.

Twice as many tests as code. All tests pass before you integrate. Integrate
every <90 minutes. This ain't RAD.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/6/2004 3:01:11 PM
Rich MacDonald wrote:

> Exactly. However, stable in the sense of bug-free, not stable in the
> sense of changing.

In traditional programming, you encode a design, and then debug until the
behavior stabilizes. In XP you encode behaviors, and then refactor until the
design stabilizes.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/6/2004 3:09:34 PM
"Dagfinn Reiersol" <reiersol@online.no> wrote in message
news:esN8d.401$Km6.8709@news4.e.nsc.no...

> > My "good" is not a function of what you think of me. Second, it would be
a
> > contradiction for me to say "this is evident" and then proceed to give
an
> > argument as to why it's evident. Either it's clear to you as well or
not.
>
> We're not talking about pure mathematics and logic, we're talking about
> your claim that XPers are only pretending to be interested in facts. You
> can't mean that it would be evident to someone who has never even heard
> of XP, so it must be based on some kind of observation--evidence.

First, I wouldn't claim that about every XP advocate. Second, if you expect
me to dig through the comp.object archives to find quotes for you... well
you won't see that happening.

I agree that people who've not seen XPers do this won't believe what I'm
saying, and they shouldn't. I just think it happens so often in this forum
that regulars will have noticed it at some level already.


Shayne Wissler
http://www.ouraysoftware.com


0
10/6/2004 3:52:59 PM
"Dagfinn Reiersol" <reiersol@online.no> wrote in message
news:dZN8d.512$rh1.8919@news2.e.nsc.no...

> > We're not talking about pure mathematics and logic, we're talking about
> > your claim that XPers are only pretending to be interested in facts. You
> > can't mean that it would be evident to someone who has never even heard
> > of XP, so it must be based on some kind of observation--evidence.
>
> And let me remind you that you said "it wears thin after years and years
> of everyone doing it", so you're implying more than a few casual
> observations.

I was a little sloppy - not everyone in any movement, including XP, all does
some one thing, and I don't mean to accuse all XPers of that kind of
dishonesty (I've never observed Bob Martin do it, for example). I could name
names: I think Ron Jeffries is the father of that particular technique, and
Laurent often argues like Ron.


Shayne Wissler
http://www.ouraysoftware.com


0
10/6/2004 3:57:53 PM
"Phlip" <phlip_cpp@yahoo.com> wrote in message
news:XMT8d.7274$Rf1.4317@newssvr19.news.prodigy.com...
> Mark Nicholls wrote:
>
> > I believe most of those things are mentioned...daily smoke and build and
> > assertions are a form of TDD, he talks about integration, though I do
> > believe he doesn't believe in premature integration, he believes in frequent
> > releases, and he believes in low bug rates, in fact it's one of the central
> > tenets of the book, detect and fix bugs before moving on.
> >
> > So I don't really see where the problem is.
> >
> > You accept that you can have too much transparency, and I think this is
> > simply the point he is making.
> >
> > I see no contradiction, in fact I would have thought it would be considered
> > the trail blazer for much of what you talk about.
>
> "Too much meetings" is a kind of visibility that's bad.

too little meetings are bad as well.

>
> If you target the right kind of feedback, you get a lot of external
> visibility as a side-effect.

right kind of feedback?

tests.......we can have too many tests, we can waste our time testing stuff
that we know works, and in some cases can prove works.
pair programming.....you've told me you think there are cases where this is
not sensible...so we can have too much of that, and we can certainly have
too many people...i.e. maybe triplet programming is a good idea? in some
cases it probably is.

>
> I don't think Steve, when he wrote the book, had experienced making <10 very
> small edits, and passing all tests. Yes yes yes, many folks wrote lots of
> tests before TDD.

I know, I did and so did most other professional software engineers...it
ain't news to me.

> You noodle around for a long time, then pass some or all
> high-level tests.

nope, don't understand.

I have always added specific assertions throughout my C code, with specific
validation 'methods' to validate those....see /Writing Solid Code/ - Maguire
1993...the book is devoted to writing and testing code....it's very good.

> If you think that more tests would slow you down,

I do, there comes a point when you are testing things that are not worth
testing.

for example if I write an add function over the integers, should I write a
test that makes sure it works for all possible input values?

no

I just write them to test specific cases and boundary conditions....too many
tests is thus bad in this scenario, at some point you're just wasting time.
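
Concretely, the boundary-style tests I mean look something like this
throwaway sketch (plain assert(), names mine):

#include <cassert>
#include <climits>

int add(int a, int b) { return a + b; }

int main()
{
    assert(add(0, 0) == 0);             // identity
    assert(add(2, 3) == 5);             // a representative case
    assert(add(-1, 1) == 0);            // crossing zero
    assert(add(INT_MAX, 0) == INT_MAX); // upper boundary
    assert(add(INT_MIN, 0) == INT_MIN); // lower boundary
    return 0;
}

Five asserts cover the interesting edges; the remaining billions of
inputs add nothing.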

> you
> don't add the high-risk kind that could break after a legitimate code
> change. And because some tests fail sometimes, you never know when it's
> perfectly safe to integrate.

I'm not with this, if it's broke fix it, but don't test for behaviour that
you don't want, test for behaviour you do.

>
> > > At the third corner are highly iterative processes that leverage tests
> to
> > > prevent long endless bug hunts.
> >
> > the smoke and build chapter....a best practice according to McConnell,
> yes.
> > the spiral lifecycle chapter...a best practive according to McConnell,
> yes.
>
> And not out at the tip of that triangle. Specifically, each arc of a
spiral
> is a miniature waterfall. Analyze design code debug test.

not according to McConnell; smoke and build is a best practice...here he
recommends continuous system compilations from day 1 of the build.

He recommends as a best practice prototyping and 'testing' against the
client, regularly throughout the process.

>
> To move further from Waterfall, test first and analyze last.

If you analyse last, how do you know what to test? weird.

>
> To move further from Code-and-Fix, don't debug. If tests fail, hit Undo (or
> discard your sandbox) until they pass, then start your current drive again.

What, even if it's a typo that's obvious, by tripping an assertion?

"assertion error string should start with '!' at line 456 foo.c."

I go to line 456 and find I've written a '?'......do I then ctrl-u 10
minutes of hard work?

>
> If you couldn't _manually_ undo the last few changes, back to a passing
> test, you ain't doing TDD, so you get fewer external benefits.

I'm happy not undoing 10 minutes' work for the sake of a 2-second typo fix.

>
> > > As a side-effect, these tests provide much
> > > of the role of requirements analysis and designing.
> >
> > hmmm, I don't see this.
> >
> > They may highlight that the analysis and design are wrong, but the tests are not
> > a substitute; they are just an experiment on the basis of the current
> > analysis and design.
>
> If you invest your A&D effort into them, then they do.

but you said "To move further from Waterfall, test first and analyze last."

so I do do analysis first, in order to know what to test....you're not telling
me anything I don't know and that is not in RAD.

> They are the
> diagrams, the architectural experiments, and the main force that decouples
> your design.

hmmm, this is just plain weird.

do you not have diagrams as your diagrams? OK, we don't need to do the full
UML standard template stuff, but a few sketches that communicate the problem
domain between client, analyst and developer are quite useful.

very sometimes.

>
> > > But programmers can't
> > > change the project's location without causing more illumination on its
> > > real-time situation.
> >
> > I don't understand this.
> >
> > If you mean there needs to be feedback and transparency.....again I see no
> > contradiction to RAD, except his observation that it is susceptible to
> > diminishing marginal returns (as all things are in the short run).
>
> Programmers cannot change a program with so many tests without adding tests
> and tweaking them.
>
> TEST_(TestEvaluations, testThreeTubes)
> {
>
>     layEgg( "egg = startOvum(4, 30, 100)\n"
>             "base = egg.newRootAxiom\n"
>             "base.tube.tube.tube\n"
>             "hatch(egg)\n" );
>
>     incubateTheEgg();
>     FleaOpenGL * fog = pMainWindow->glWidget;
>
>     CPPUNIT_ASSERT_EQUAL(uint(0), uint(fog->spheres.size()));
>     CPPUNIT_ASSERT_EQUAL(uint(0), uint(fog->triangles.size()));
>     CPPUNIT_ASSERT_EQUAL(uint(3), uint(fog->cylinders.size()));
>
> }
>
> The big string at the top is this program's input, exactly as a user would
> type it in. If this were from an XP project, its domain expert could pair
> with me and understand this test.

OK, the direct involvement of the domain expert in testing is not mentioned in RAD.

The test itself is a 'best practice' and old hat, fine, feel free to promote
it, but the above has been around for 25 years+.

>
> (The system metaphor is DNA, in the language Ruby, inside an egg. The
> program's core logic "incubates" the egg and expresses its DNA. In this case
> it's three tubes.)

yes I've heard about this metaphor thing.

It isn't in RAD, and is new to me......I'm not convinced, but that's not to
say it's a bad idea....weird.....but if it works then why not.

>
> If the string were more complex, the domain expert could pair with me to
> write the correct assertions to analyze the output. And we would collude to
> escalate our analysis into both reusable code and reusable tests which
> expose that analysis to our colleagues. For example, a Web page could store
> a list of representative input Ruby strings, and their graphical outputs,
> for easy review:
>
>     http://flea.sourceforge.net/reference.html
>
> Instead of requiring analysis up-front, emerge it from all the tests. That
> keeps it locked to our program's reality, not floating around.

hmmm, OK...I see the mentality....though I still think you're doing some
analysis before you write the test.

I get the feeling that you think the rest of the world goes around doing
waterfall, or sitting in dark rooms doing code and fix. I'm not, and neither
are the people sitting around me; this is just iterative development,
spiral lifecycle...welcome to the 1980s.

>
> These techniques contradict RAD when the headlights go from dim to bright.
> Then you can see things farther away than your turning radius. When you
> cross that threshold, your process's abilities change. For example, you no
> longer need to carry the weight of the fender repair kit in your trunk.

the analogy is getting weird.

There is a cost here....you need to pay the bloke to sit next to you and
tell you what to test for.....that cost may be sensible or may not be; it
may be better spent on another programmer, or on a bigger office, or on a day
off for everyone because they've worked their arses off for the last four
weeks hitting a deadline.

I still think this is just a spiral development model, driven to an extreme,
and does not contradict anything that RAD says.
I think it is smoke and build driven to an extreme.
I think it is communication driven to an extreme.

That is not to say the extreme is wrong or bad, just that it has a cost, and
is probably not always warranted....whether it is ever warranted I cannot
comment.

>
> > > There are many disciplines where one best solution has indeed risen above
> > > the alternatives. The germ "theory" of pathology, for example...
> >
> > so all disease is caused by germs?
>
> They ain't (directly) caused by evil spirits...

again you've lost me; one minute you were saying that germs were the cause
of all disease and now you're off into clairvoyance.

>
> > But isn't that the point of the RAD book....there are always trade offs, there
> > is no one best practice.
>
> Twice as many tests as code. All tests pass before you integrate. Integrate
> every <90 minutes. This ain't RAD.
>

It does not contradict RAD in my opinion.

It is smoke and build.
In fact it seems that you do not believe in continuous integration as I
thought, so it is one of the ten classic mistakes...i.e. do not force
integration prematurely...RAD.
It is the spiral development model.

And if it was in front of me it would probably be some others as well, and I
would take that as a compliment.

It is 'extreme'; it's extreme RAD.

Why it needs to be portrayed as a revolution as opposed to an evolution I
don't know. Personally I would like to think it had roots based in valued
and established thought.

Is it sensible? I don't know.

At least not all the time; few things are.


0
Nicholls.Mark (1061)
10/6/2004 4:52:28 PM
Daniel Parker wrote:
> Andrew McDonagh <news@andrewcdonagh.f2s.com> wrote in message news:<cjv9uq$fg9$2@news.freedom2surf.net>...
> 
>>Daniel Parker wrote:
> 
> 
>>>Dynamic document generation, where the notion of a document becomes
>>>rather abstract.  Separation of content from presentation;
>>>specification documents tend to be heavily templated.  Content
>>>assembly from a variety of sources, including sampling from live
>>>sources such as databases, flat files of every description, and real
>>>time feeds; glossaries and data dictionaries both real and virtual,
>>>e.g. database metadata; and textual content prepared and reviewed
>>>across the enterprise. Facilities for testing and reconciling data and
>>>incorporating that into documents.  Facilities for versioning and
>>>automating conversion across versions.  A gui for assembling the
>>>pieces.  An extendable framework, of course.  That sort of thing.
>>>
>>
>>I have to say, it sounds awfully like the automated acceptance tests we
>>use in XP.  The acceptance tests are 'executable' requirements document
>>contents. They grow with the system as the system grows.
> 
> 
> I don't think so.  

fine.

But from an XP project's perspective, automated acceptance tests are
equivalent to old-style requirements, but with more meat.

A traditional requirement would be something like ... "..shall handle
n000 transactions an hour".  Our XP automated acceptance tests invoke
n000 transactions an hour and fail until the product does handle
that amount.
0
news248 (706)
10/6/2004 6:01:15 PM
>
> > Exactly. However, stable in the sense of bug-free, not stable in the
> > sense of changing.
>
> In traditional programming, you encode a design, and then debug until the
> behavior stabilizes. In XP you encode behaviors, and then refactor until the
> design stabilizes.
>
> -- 

Again I think this is an example of XP revisionist history...the claim is
that before XP we all sat there in our waterfalls and in darkened rooms code
and fixing; iterative development existed long before XP was a twinkle in
anyone's eye.

It is called iterative development, or spiral development lifecycle.

Why do we have to think of XP as some sort of revolutionary approach that
will change the world, rather than an evolution or a specific implementation
of things that have been established for many years? It may have some very
sensible things to say, but this style of debate debases the argument.

I read the Agile manifesto and think 'ooo yes that all seems a sensible
summary of sensible aspirations based largely on experience and existing
knowledge', then I come here and read all this hype, spin, distortion and
revisionist history and it saddens me...honestly it does.

Manifesto is the correct term; it has been debased into politics and spin
and hype. It's more Stock, Aitken and Waterman than Booch, Rumbaugh and
Jacobson.

(That probably doesn't translate.....but one set made Kylie Minogue and
other soap stars into pop stars and the other three obviously didn't.)


0
Nicholls.Mark (1061)
10/7/2004 9:45:34 AM
Mark Nicholls wrote:

>> In traditional programming, you encode a design, and then debug
>> until the behavior stabilizes. In XP you encode behaviors, and then
>> refactor until the design stabilizes.
>>
>> --
>
> the claim is that before XP we all sat there in our waterfalls and in
> darkened rooms code and fixing.

I don't see how you can read that into the above paragraph.

So are you saying that deciding on part of the design, coding that part, and
*then* testing and debugging it until it works, is atypical for traditional
software development?

> iteractive development existed long
> before XP was a twinkle in anyones eye.

I don't remember anyone stating anything different. Perhaps XP is more than
"just" iterative development.

> It is called iterative development, or spiral development lifecycle.

That still has testing *after* coding in the cycle, if I remember correctly.
It still has design before coding inside an iteration, doesn't it?

> Why do we have to think of XP as some sort of revolutionary approach
> that will change the world rather than a evolution or a specific
> implementation of things that have been established for many years,
> it may have some very sensible things to say but this style of
> debate, debases the argument.

Well, for most of us who try to do it, it obviously feels quite different
from what we experienced beforehand. Perhaps "Quantity changes into
quality".

And many people who didn't try it yet feel that it sounds way "too extreme"
for their taste - which suggests to me that it might actually be far from
"what everyone is already doing anyway."

Cheers, Ilja


0
preuss (368)
10/7/2004 10:45:26 AM
Mark Nicholls wrote:

> > In traditional programming, you encode a design, and then debug until the
> > behavior stabilizes. In XP you encode behaviors, and then refactor until
> > the design stabilizes.

> Again I think this is an example of XP revisionist history...the claim is
> that before XP we all sat there in our waterfalls and in darkened rooms code
> and fixing, iterative development existed long before XP was a twinkle in
> anyone's eye.
>
> It is called iterative development, or spiral development lifecycle.

"Before" XP nobody ever recommended "don't analyze until you start with some
code."

That's not an XP rule, it's a general recommendation based on team
experience.

Spiral goes "analyze, design, code, debug, test". Agile goes "test, code,
design, analyze".

Debugging code is such an anxious, scarring process that some folks might
refuse to believe software can develop without it. This is called Stockholm
Syndrome.

> Why do we have to think of XP as some sort of revolutionary approach that
> will change the world rather than a evolution or a specific implementation
> of things that have been established for many years, it may have some very
> sensible things to say but this style of debate, debases the argument.

"Before" XP, there were many highly iterative projects that used tests to
leverage code. There were no books saying "don't design up front".

> I read the Agile manifesto and think 'ooo yes that all seems a sensible
> summary of sensible asperations based largely on experience and existing
> knowledge', then I come here and read all this hype, spin, distortion and
> revisionist history and it saddens me...honestly it does.

The Agile Manifesto sucks. It was written by a committee, trying to find
their average.

Read /Lean Software Development/ by the Poppendiecks. Then try to tell us
"claiming the Japanese had a car manufacturing system in the 1960s superior
to Detroit's is revisionist history."

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/7/2004 2:09:07 PM
Ilja Preuß wrote:

> Well, for most of us who try to do it, it obviously feels quite different
> from what we experienced beforehand. Perhaps "Quantity changes into
> quality".

A dirt-simple executive overview:

    "Without Agile practices, software engineering projects occupy a
    spectrum, from excessive planning to undisciplined hacking. GUI
    implementation, for example, trends toward the hacking end, lead
    by vendors' tools that make GUI programming appear as easy as
    painting. Their tools irresistibly link painting and coding to full-
    featured debuggers, and neglect hooks to enable testing. These
    biases resemble old-fashioned manufacturing methodologies that
    planned to over-produce hardware parts, then measured them all
    and threw away the defects. Both speculative planning and endless
    hacking risk extra rework and endless debugging."

The deconstruction:

I did not say "before Agile practices", as if a bell rang and we all started
pairing. I said "without", meaning practices that did not call themselves
"Agile" are still good if they are not "Anti-Agile".

Next, I called the planning "excessive". I did not say "ANY planning is
bad!" snarl snarl snarl. What's "excessive"? Enough to slow you down and
risk rework.

Next, I called hacking "undisciplined". Everything else is discipline, but
even Dr. Laura Schlessinger might tell you that you can have too little OR
too much discipline.

I did not throw the term "hacking" out as BAD, without qualifications. Hacking
has short-term benefits.

Everyone busts on Waterfall (which still exists, folks). I use GUI coding as
our industry's most common example of institutionalized code-and-fix.

Vendors ship GUI toolkits without hooks for testing. This helps those
vendors promote their debuggers. The Visible Hand of the Marketplace at work
again...

Some companies carefully plan their back-ends, hack their GUIs, and rewrite
their GUIs from scratch for each version. They repeatedly gain those
short-term benefits.

The parallel with hardware, "These biases resemble old-fashioned
manufacturing methodologies that planned to over-produce hardware parts,
then measured them all and threw away the defects," refers directly to
Taylorism, and Lean Manufacturing.

Next, "Both speculative planning and endless hacking risk extra rework and
endless debugging." That sentence uses parallel construction on parallel
concepts. When "traditional" engineers grumble about "changing
requirements", sometimes they are grumbling about planning to fixed
requirements, implementing code, deploying the code, receiving feature
requests, and discovering their plans made their code rigid. The ability to
change easily is a _must_have_ feature of an Agile project. Next, naturally
endless hacking leads to endless debugging - the kind where you assign a bug
on Monday and by Thursday your engineer says, "I figured out which module
the bug is in!"

However, those terms appear in one sentence. That's because we Agilistas
suspect that planning leads to debugging, and hacking leads to rework. These
relationships are harder to justify in print, but we have seen them in
practice.

Now the next paragraph in the executive overview:

    "Agile projects rise above that spectrum. We apply discipline
    to the good parts of planning and hacking. We carefully plan
    to write many tests, until new features are as easy as hacking-
    without the low design quality, high bug rate, and inevitable
    decay. Agile projects become useful early, add value
    incrementally, and maintain their value."

By "rise above that spectrum" I refer to the triangle, with waterfall at one
corner, hacking in the other, and iterative development on the third axis.

By "discipline", I am reassuring the executives that Agility is _not_ a way
to avoid careful and diligent work, and is _not_ a way to avoid following a
process.

Planning is not bad. Agile projects start with one big invisible User Story,
before all the others, saying "The code shall be easy to change". Getting
there requires extraordinary up-front planning.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/7/2004 2:35:18 PM
"Ilja Preu�" <preuss@disy.net> wrote in message
news:ck36hj$iao$1@stu1id2.ip.tesion.net...
> Mark Nicholls wrote:
>
> >> In traditional programming, you encode a design, and then debug
> >> until the behavior stabilizes. In XP you encode behaviors, and then
> >> refactor until the design stabilizes.
> >>
> >> --
> >
> > the
> > claim is that before XP we all sat their in our waterfalls and in
> > darkened rooms code and fixing.
>
> I don't see how you can read that into the above paragraph.

OK, I am linking to another thread to a degree, I accept it is not explicit
in this sentence.

But I don't see the difference in the two statements.

In order to 'encode behaviours' you need to have some sort of analysis of
the behaviour and some sort of design as to how you want to expose these
behaviours.

If I ask you to write a test, you will ask me what I want you to test, and
how that behaviour is exposed...i.e. you will want me to do the analysis and
design.

Analysis then design then encode then refactor, this isn't new to me, it's
just iterative development.

>
> So are you saying that deciding on part of the design, coding that part, and
> *then* testing and debugging it until it works, is atypical for traditional
> software development?
>
> > iteractive development existed long
> > before XP was a twinkle in anyones eye.
>
> I don't remember anyone stating anything different. Perhaps XP is more than
> "just" iterative development.

That's fine....perhaps it is...the statements seem to imply not that it is
more, but that it is different.

In A we do B, in C we do D.

rather than

in A we do B, in C we do B and D.

there is no revolution, just evolution.

i.e. we should be able to consensually agree about B(!) and then go on to
argue about the value of D.....I see a lot of polarised positions.

>
> > It is called iterative development, or spiral development lifecycle.
>
> That still has testing *after* coding in the cycle, if I remember correctly.
> It still has design before coding inside an iteration, doesn't it?

I agree, but I cannot see how you write a test before you know what to test,
or what the thing we want to test looks like. I accept that you are
potentially moving the test before writing the implementation, where
'traditionally' it would be done at the same time via assertions, or after
via unit testing.

If you write all the tests before writing any code, I would suggest that the
normal use of assertions inside code reduces the iteration even further
..........is this XXP, or just the use of assertions in software development?

>
> > Why do we have to think of XP as some sort of revolutionary approach
> > that will change the world rather than a evolution or a specific
> > implementation of things that have been established for many years,
> > it may have some very sensible things to say but this style of
> > debate, debases the argument.
>
> Well, for most of us who try to do it, it obviously feels quite different
> from what we experienced beforehand. Perhaps "Quantity changes into
> quality".

I do see interesting approaches, new ideas, some of which may be sensible,
some not, but whenever people come down to identifying what those actual
differences are, they seem to be an adaptation or new implementation of
existing thought.......that's good and fine......but don't oversell it....it
isn't the silver bullet. If XP reduces the iteration time in the evolutionary
prototyping and iterative development process then good...but it doesn't
negate the value of those concepts.


>
> And many people who didn't try it yet feel that it sounds way "too extreme"
> for their taste - which suggests to me that it might actually be far from
> "what everyone is already doing anyway."

You may be right......I am largely ignorant of most of it.....I have no
problem with new ideas. I do believe that XP and the like are promoting
genuinely innovative processes, but the logic and value of them can be
traced back into the established knowledge of 40 years of software
experience....I as yet see no contradiction, just innovation; some of it may
be good, some of it not, none of it will apply to everything.


0
Nicholls.Mark (1061)
10/7/2004 2:36:21 PM
Mark Nicholls wrote:

> In order to 'encode behaviours' you need to have some sort of analysis of
> the behaviour and some sort of design as how you want to expose these
> behaviours.

Right, but it's very small. You are allowed to only attend to that behavior
you need to test-in. Briefly, you are allowed to pretend it's the only
requirement. So you do a very small amount of analysis and design.

Minutes later, after you have passing tests, you are required to analyze and
design the rest of the project, to merge the new feature in AS IF you had
designed all the current features up front, and implemented them all at
once.

The distinction is we believe long analytic and design sessions, up-front,
add risk without tempering feedback from live code. If code is safe to
change, and bug-resistant, then it can participate in those design sessions.
And so can releasing and deploying new versions.

> If I ask you to write a test, you will ask me what I want you to test, and
> how that behaviour is exposed...i.e. you will want me to do the analysis and
> design.
>
> Analysis then design then encode then refactor, this isn't new to me, it's
> just iterative development.

It's just iterative development.

However, previous authors never said "Don't design up front".

The reason is we have found that TDD and refactoring lead to a very simple
solution, typically one more simple than we could have designed.

Previous authors did not say "You should be able to deploy any integration."

Someone said "premature integration" a while ago. Yes, you could deploy one
of them, too.

> I agree, but I cannot see how you write a test before you know what to test
> or what the thing we want to test looks like, I accept that you are
> potentially moving the test before writing the implementation, where
> 'traditionally' it would be done at the same time via assertions or after
> via unit testing.

The test is very small. It's at the level where you know what's the next
statement to write, so you should perforce know what complementary test
would fail without that statement.

> If you write all the tests before writing any code, I would suggest that the
> normal use of assertion inside code, reduced the iteration even further
> .........is this XXP, or just the use of assertions in software development?

Big test up front is bad.

What's revolutionary is our insistence on growing everything relevant to a
program in lockstep. Testing, coding, designing, analyzing, and requirements
must experience the fewest latencies between them.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/7/2004 2:49:13 PM
"Mark Nicholls" <nicholls.mark@mtvne.com> wrote in
news:2skhisF1lbfcdU1@uni-berlin.de: 

> Why do we have to think of XP as some sort of revolutionary approach
> that will change the world rather than a evolution or a specific
> implementation of things that have been established for many years

Who cares?

The important bit is either if it works, or it doesn't; and if it does, if 
and when it works better than the alternatives.
0
10/7/2004 2:52:22 PM
Cristiano Sadun wrote:

> > Why do we have to think of XP as some sort of revolutionary approach
> > that will change the world rather than a evolution or a specific
> > implementation of things that have been established for many years
>
> Who cares?
>
> The important bit is either if it works, or it doesn't; and if it does, if
> and when works better than the alternatives.

8~12 programmers sitting in a ring, with shared workstations, each with dual
keyboards and mice, making ~10 edits between each test run, predicting the
result of each test run correctly, writing 3 times as much test as
production code, integrating every 10~60 minutes, releasing a new version
each Friday.

Nope. Nothing revolutionary about that. Been doing it since the 1970s.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces



0
phlip_cpp (3852)
10/7/2004 2:55:46 PM
>
> > > In traditional programming, you encode a design, and then debug until the
> > > behavior stabilizes. In XP you encode behaviors, and then refactor until
> > > the design stabilizes.
>
> > Again I think this is an example of XP revisionist history...the claim is
> > that before XP we all sat there in our waterfalls and in darkened rooms code
> > and fixing, iterative development existed long before XP was a twinkle in
> > anyone's eye.
> >
> > It is called iterative development, or spiral development lifecycle.
>
> "Before" XP nobody ever recommended "don't analyze until you start with
some
> code."
>
> That's not an XP rule, it's a general recommendation based on team
> experience.
>
> Spiral goes "analyze, design, code, debug, test". Agile goes "test, code,
> design, analyze".

OK, let's test this process.

I am a client and I want a system.

Now write a test.

>
> Debugging code is such an anxious, scarring process that some folks might
> refuse to believe software can develop without it. This is called the
> Stockholm Effect.

And they may be right....I'm open minded on the issue.

>
> > Why do we have to think of XP as some sort of revolutionary approach that
> > will change the world rather than an evolution or a specific implementation
> > of things that have been established for many years, it may have some very
> > sensible things to say but this style of debate, debases the argument.
>
> "Before" XP, there were many highly iterative projects that used tests to
> leverage code. There were no books saying "don't design up front".

Fine, so write the test.

>
> > I read the Agile manifesto and think 'ooo yes that all seems a sensible
> > summary of sensible aspirations based largely on experience and existing
> > knowledge', then I come here and read all this hype, spin, distortion and
> > revisionist history and it saddens me...honestly it does.
>
> The Agile Manifesto sucks. It was written by a committee, trying to find
> their average.

nice....genuinely...it made me laugh.

>
> Read /Lean Software Development/ by the Poppendiecks. Then try to tell us
> "claiming the Japanese had a car manufacturing system in the 1960s
superior
> to Detroit's is revisionist history."
>

I know a little of the Japanese method......that was a revolution.....try
reading Henry Mintzberg.


0
Nicholls.Mark (1061)
10/7/2004 2:57:01 PM
"Cristiano Sadun" <cristianoTAKEsadun@THIShotmailOUT.com> wrote in message
news:Xns957BABA6EA6BFSadun@212.45.188.38...
> "Mark Nicholls" <nicholls.mark@mtvne.com> wrote in
> news:2skhisF1lbfcdU1@uni-berlin.de:
>
> > Why do we have to think of XP as some sort of revolutionary approach
> > that will change the world rather than a evolution or a specific
> > implementation of things that have been established for many years
>
> Who cares?
>
> The important bit is either if it works, or it doesn't; and if it does, if
> and when works better than the alternatives.

In trying to ascertain what works and what doesn't, it's usually best to work
out what's changed and what hasn't.

In order to know when to use a technique or not, it's nice to understand why
the technique works in order to know that it will apply to a given scenario.

I expect NASA use a different process from VBScriptsRUs....not because NASA
knows better....they just have a different problem to solve.

To believe, I like to understand.


0
Nicholls.Mark (1061)
10/7/2004 3:00:15 PM
>
> > > Why do we have to think of XP as some sort of revolutionary approach
> > > that will change the world rather than a evolution or a specific
> > > implementation of things that have been established for many years
> >
> > Who cares?
> >
> > The important bit is either if it works, or it doesn't; and if it does, if
> > and when it works better than the alternatives.
>
> 8~12 programmers sitting in a ring, with shared workstations, each with dual
> keyboards and mice, making ~10 edits between each test run, predicting the
> result of each test run correctly, writing 3 times as much test as
> production code, integrating every 10~60 minutes, releasing a new version
> each Friday.
>
> Nope. Nothing revolutionary about that. Been doing it since the 1970s.
>

The underlying process is not revolutionary, it does not contradict existing
thought, it builds on it; if we can both agree it is an example of iterative
development, we can argue about the specific implementation and its value.

Iterative development in many scenarios is very good, reducing the length of
each iteration in many scenarios is very sensible.....but it's still
iterative development, so why throw the baby out with the bath water.



0
Nicholls.Mark (1061)
10/7/2004 3:04:17 PM
"Mark Nicholls" <nicholls.mark@mtvne.com> wrote in
news:2sl40tF1m4o9qU1@uni-berlin.de: 

 
> In trying to ascertain what works and what doesn't it's usually best
> to work out what's changed and what hasn't.
> 
> In order to know when to use a technique or not, it's nice to
> understand why the technique works in order to know that it will apply
> to a given scenario. 
> 
> I expect NASA use a different process from VBScriptsRUs....not because
> NASA knows better....they just have a different problem to solve.
> 
> To believe, I like to understand.

Not for these issues, imvho - not in the meaning of "understand" I think 
you're implying.

A software development process is an extremely complex one, for which 
analytical methods may not work very well. The hints are all over the 
place: the same process works with some teams and doesn't with others; 
some team is affected heavily by one set of issues while another doesn't 
even notice them; in spite of a sizeable amount of work and research and 
effort in implementation, lots of projects still fail; and so on.

For example, I may believe that a certain practice that I follow leads to
high success rates; still, the same practice applied by someone else may
not work at all. The reason for that is that the amount of information
and experience needed to apply that practice vastly outgrows my ability
(or will) to explain and clarify.

Hence, the only real way of evaluating software development processes is,
exactly as for other similar phenomena, to use statistics: to try them,
and collect the results. This approach tends to bypass "beliefs" and
look for a statistical correlation between the use of a certain approach
and the degree of success.

Of course, defining "success" is a key factor - but you see immediately
that it's much easier to do. For example, from the discussion of some
weeks ago, it's clear to me that for a certain definition of "success" C3
is a success; for another, a failure. Same thing with most projects (a
typical case is projects which deliver functionality which isn't really
useful to anyone.. :).

Using this approach, the only thing that I strongly believe is that
waterfall efforts tend to be much worse than iterative ones, and high-
bureaucracy processes tend to be much worse than lower-bureaucracy ones.

For XP - either in positive or in negative - the question is still open 
and will stay open until sufficient statistical evidence of its impact 
(positive, negative or irrelevant) is available.

That is the sense of the "who cares" - that something works or not is not 
decided by arguing (or even understanding), but by experiment.
0
10/7/2004 3:14:42 PM
Andrew McDonagh <news@andrewcdonagh.f2s.com> wrote in message news:<ck1btc$1us$1@news.freedom2surf.net>...
> 
> But from an XP projects perspective, Automated Acceptance tests are 
> equivalent to old style requirements, but with more meat.
> 
No, they're not equivalent :-)

> A traditional requirement would be something like ... "..shall handle
> n000 transactions an hour".  Our XP automated acceptance tests invoke
> n000 transactions an hour and fail until the product does handle
> that amount.

Well, consider a file extract process to a downstream system.  From a
business point of view, the only thing of interest is that the process
take place within a certain window of time in the UAT and production
environments.  That's it, and a statement to that effect is all that
belongs in a specification document.  It's up to the development team
to meet that requirement, the business doesn't care how they do it. 
If it's an easily meetable target, then there's no point in
considering it further.  If they really can't meet it, then the
business process would have to be revisited.  You don't need an
automatic acceptance test, by the way, to tell whether a process takes
too long to run.

Automated acceptance tests are good things, but they have to be
regarded as a sampling of the specification, not the specification
itself.

Regards,
Daniel Parker
0
10/7/2004 3:23:47 PM
>
> > In order to 'encode behaviours' you need to have some sort of analysis of
> > the behaviour and some sort of design as to how you want to expose these
> > behaviours.
>
> Right, but it's very small. You are allowed to only attend to that behavior
> you need to test-in. Briefly, you are allowed to pretend it's the only
> requirement. So you do a very small amount of analysis and design.

OK, we can agree on it then.

it goes..

1. a tiny bit of requirements gathering
2. a tiny bit of analysis
3. a tiny bit of development
4. a tiny bit of running the test
5. goto 1; iterate until finished

I would put writing the test in 3; you would put it before 3...but it's
not worth getting too worked up about.

To me it's an example of iterative development, an interesting innovative one,
maybe a good one, maybe not.

>
> Minutes later, after you have passing tests, you are required to analyze and
> design the rest of the project, to merge the new feature in AS IF you had
> designed all the current features up front, and implemented them all at
> once.

I'm not with this.

>
> The distinction is we believe long analytic and design sessions, up-front,
> add risk without tempering feedback from live code.

I agree, and so does McConnell, so does spiral, so does iterative
development.

> If code is safe to
> change, and bug-resistant, then it can participate in those design sessions.
> And so can releasing and deploying new versions.

I'm not with this either...are you just saying it works therefore it's safe?

>
> > If I ask you to write a test, you will ask me what I want you to test, and
> > how that behaviour is exposed...i.e. you will want me to do the analysis and
> > design.
> >
> > Analysis then design then encode then refactor, this isn't new to me, it's
> > just iterative development.
>
> It's just iterative development.

good, we have a seed of consensus, and I am genuinely open minded as to the
value of XP and the value of 'agile' techniques with respect to the
development process; I just get irritated with the flashing lights and
sirens, and some of the more dubious stuff on the hard science of OO...but
that's a different story.

>
> However, previous authors never said "Don't design up front".
>
> The reason is we have found that TDD and refactoring lead to a very simple
> solution, typically one more simple than we could have designed.

I think that's because you are mitigating your design risk with each
iteration; this is not new...it's often good.

>
> Previous authors did not say "You should be able to deploy any integration."

I'm not with this either.

>
> Someone said "premature integration" a while ago. Yes, you could deploy one
> of them, too.

McConnell did; 'do not integrate too soon' is a classic mistake according to
him....I think. I haven't got the book here, so I could be talking a load
of old arse.

>
> > I agree, but I cannot see how you write a test before you know what to test
> > or what the thing we want to test looks like, I accept that you are
> > potentially moving the test before writing the implementation, where
> > 'traditionally' it would be done at the same time via assertions or after
> > via unit testing.
>
> The test is very small. It's at the level where you know what's the next
> statement to write, so you should perforce know what complementary test
> would fail without that statement.
>
> > If you write all the tests before writing any code, I would suggest that the
> > normal use of assertion inside code, reduced the iteration even further
> > .........is this XXP, or just the use of assertions in software development?
>
> Big test up front is bad.

#include <cassert>   // assert
#include <climits>   // INT_MAX

int Times2(int i)
{
    // Preconditions: doubling must not overflow.
    assert(i <  INT_MAX / 2);
    assert(i > -INT_MAX / 2);

    int answer = i * 2;

    // Postcondition: doubling equals adding the number to itself.
    assert(answer == (i + i));

    return answer;
}

I see no big tests.

Assertions exist at every level, as small as the above, or as large as testing
the complete output given a set of inputs.

>
> What's revolutionary is our insistence on growing everything relevant to a
> program in lockstep. Testing, coding, designing, analyzing, and requirements
> must experience the fewest latencies between them.
>
OK, we're both splitting hairs now on the definition of revolutionary....
To me it's an evolution of iterative development.
To you it's a revolutionary evolution of iterative development?
and thus we should be able to sleep soundly in our beds?


0
Nicholls.Mark (1061)
10/7/2004 3:40:58 PM
Daniel Parker wrote:

> Well, consider a file extract process to a downstream system.  From a
> business point of view, the only thing of interest is that the process
> take place within a certain window of time in the UAT and production
> environments.  That's it, and a statement to that effect is all that
> belongs in a specification document.  It's up to the development team
> to meet that requirement, the business doesn't care how they do it.
> If it's an easily meetable target, then there's no point in
> considering it further.  If they really can't meet it, than the
> business process would have to be revisited.  You don't need an
> automatic acceptance test, by the way, to tell whether a process takes
> too long to run.

Yes you do. Anything that's important to business, if it breaks, should
raise an instant alarm.

Consider a system whose performance slowly decayed over time. Knowing which
element decayed first would have been good. It should have had a test, and
this should have alerted before the decay became noticeable to humans.

> Automated acceptance tests are good things, but they have to be
> regarded as a sampling of the specification, not the specification
> itself.

If any test is hard to write, you must ask why. Maybe the code has poor
infrastructure and can't submit to tests. Maybe the requirement sounds
correct but has internal flaws.

The act of writing these tests is a kind of analysis. Our code needs the
highest quality analysis possible. Are you saying analysis is bad, or should
sometimes be skipped?

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/7/2004 3:42:17 PM
Mark Nicholls wrote:

> Iterative development in many scenarios is very good, reducing the length of
> each iteration in many scenarios is very sensible.....but it's still
> iterative development, so why throw the baby out with the bath water.

We don't. One Ken Schwaber says if you suspect a given boss is going to
stress about the words "Agile" or "eXtreme", you should start saying
"iterative and incremental development".

> The underlying process is not revolutionary, it does not contradict existing
> thought, it builds on it, if we can both agree it is an example of iterative
> development, we can argue about the specific implementation and its value.

If I return to the idea that testing FIRST is revolutionary, you will point
to numerous examples of folks writing tests before writing the code to pass
them. It's not just the "first" part that makes it revolutionary.

 - work at the level of statements within methods, not entire methods
 - the fewest possible edits between testing, say 10 at the most
 - don't design until the tests pass
 - design by refactoring
 - force the design you plan to emerge
 - always correctly predict the result of the next test run

Put together, nobody worked at that level of granularity. The inversion goes
beyond simply writing the test first.

To add a new feature, the test must fail for the correct reason. After it
fails, perform the smallest possible edit to pass the test. After the test
passes, refactor to recover from the poor design of that small edit.

By "small" I mean you should be able to undo the edits manually, without
using the Undo button (though you could). The edits should fit in your
short-term memory.

The inversion is you should pay more attention to whether a test fails for
the correct reason than you should pay to the code that makes it pass. When
you force a Red Bar, and want to write a new feature, now is the time to
step thru the code and see if its behavior is exactly what you expect.
However, when you write code to pass the test, you are allowed to lie. If
the test asks for 10, the new code could just return 10, without logic.

Writing more assertions and more tests, to force out that lie, to make the
simplest code that can pass all those tests also be the correct production
code, is where analysis happens.

You ought to try the TDD lifecycle with a small test project, and try to
generate these effects for yourself.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/7/2004 3:52:37 PM
"Cristiano Sadun" <cristianoTAKEsadun@THIShotmailOUT.com> wrote in message
news:Xns957BAF7047A5BSadun@212.45.188.38...
> "Mark Nicholls" <nicholls.mark@mtvne.com> wrote in
> news:2sl40tF1m4o9qU1@uni-berlin.de:
>
>
> > In trying to ascertain what works and what doesn't it's usually best
> > to work out what's changed and what hasn't.
> >
> > In order to know when to use a technique or not, it's nice to
> > understand why the technique works in order to know that it will apply
> > to a given scenario.
> >
> > I expect NASA use a different process from VBScriptsRUs....not because
> > NASA knows better....they just have a different problem to solve.
> >
> > To believe, I like to understand.
>
> Not for these issues, imvho - not in the meaning of "understand" I think
> you're implying.

don't get me wrong...there is a world of difference between maths and
physics, physics and engineering, engineering and brick laying.

All are of equal value, but there are few hard facts about brick laying.
While maths is just about hard facts.

I do not need absolute knowledge, but I do need a believable and hopefully
empirically testable cause and effect chain.

>
> A software development process is an extremely complex one, for which
> analytical methods may not work very well. The hints are all over the
> place: the same process works with some teams and doesn't with others;
> some team is affected heavily by one set of issues while another doesn't
> even notice them; in spite of a sizeable amount of work and research and
> effort in implementation, lots of projects still fail; and so on.

I completely agree.

>
> For example, I may believe that a certain practice that I follow leads to
> high success rates; still the same practice applied by someone else may
> not work at all. The reason for that is that the amount of information
> and experience needed to apply that practice vastly outgrows my ability
> (or will) to explain and clarify.

I agree.

>
> Hence, the only real way of evaluating software development processes is,
> exactly like for other similar phenomena, use statistics: to try and use
> them, and collect the results. This approach tends to bypass "beliefs" and
> look for a statistical correlation between the use of a certain approach
> and the degree of success.
>

I agree; it is a dangerous route, though, to believe something just from
empirical evidence...generally we need both evidence and a theoretical
mechanism that explains the results.

Otherwise I can simply claim that the increased use of safety razors
explains the increased use of ballpoint pens....it may ask a question....is
the correlation direct, indirect or accidental...if you give me a cause and
effect mechanism that I can build evidence about, then eventually it comes to
the point where it makes more sense to believe you than not.

> Of course, defining "success" is a key factor - but you see immediately
> that it's much easier to do. For example, from the discussion of some
> weeks ago, it's clear to me that for a certain definition of "success" C3
> is success; for another, is a failure. Same thing with most projects (a
> typical case is projects which deliver functionality which isn't really
> useful to anyone.. :).
>
> Using this approach, the only thing that I strongly believe is that
> waterfall efforts tend to be much worse than iterative ones; and high-
> bureaucracy processes tend to be much worse than lower-bureaucracy ones.

Again I agree.

>
> For XP - either in positive or in negative - the question is still open
> and will stay open until sufficient statistical evidence of its impact
> (positive, negative or irrelevant) will be available.

We agree again....but it makes it easier and quicker to 'believe' by basing
the argument on already accepted ideology...if we have evidence that RAD
works, and XP can be seen as an evolution of RAD, then we can compare XP with
RAD and see if the XP knobs and whistles truly make it better or not.....if
we compare with waterfall or code and fix then the jump is much further, and
the world generally does not believe in either of those, so the comparison
becomes meaningless.

>
> That is the sense of the "who cares" - that something works or not is not
> decided by arguing (or even understanding), but by experiment.

I would like to have a coherent, believable cause and effect chain before
turning an SD department upside down.
I believe in many of the practices of RAD; if we establish that XP is a
derivative or an evolution, then the risk seems far less than a complete
revolution.


0
Nicholls.Mark (1061)
10/7/2004 3:58:13 PM
> > The underlying process is not revolutionary, it does not contradict existing
> > thought, it builds on it, if we can both agree it is an example of iterative
> > development, we can argue about the specific implementation and its value.
>
> If I return to the idea that testing FIRST is revolutionary, you will point
> to numerous examples of folks writing tests before writing the code to pass
> them. It's not just the "first" part that makes it revolutionary.

OK, I thought we'd agreed that really there was a tiny bit of requirements
gathering and analysis before we could do that.

>
>  - work at the level of statements within methods, not entire methods
>  - the fewest possible edits between testing, say 10 at the most
>  - don't design until the tests pass
>  - design by refactoring
>  - force the design you plan to emerge
>  - always correctly predict the result of the next test run
>
> Put together, nobody worked at that level of granularity. The inversion goes
> beyond simply writing the test first.

ooo you said that word.

>
> To add a new feature, the test must fail for the correct reason. After it
> fails, perform the smallest possible edit to pass the test. After the test
> passes, refactor to recover from the poor design of that small edit.
>
> By "small" I mean you should be able to undo the edits manually, without
> using the Undo button (though you could). The edits should fit in your
> short-term memory.
>
> The inversion is you should pay more attention to whether a test fails for
> the correct reason than you should pay to the code that makes it pass. When
> you force a Red Bar, and want to write a new feature, now is the time to
> step thru the code and see if its behavior is exactly what you expect.
> However, when you write code to pass the test, you are allowed to lie. If
> the test asks for 10, the new code could just return 10, without logic.
>
> Writing more assertions and more tests, to force out that lie, to make the
> simplest code that can pass all those tests also be the correct production
> code, is where analysis happens.
>

There is a lot here that makes sense...some I have seen before....and some I
haven't....and I stay open minded.

> You ought to try the TDD lifecycle with a small test project, and try to
> generate these effects for yourself.
>
I've been putting asserts in code for 12 years+.
I use NUnit.
I have built dummy implementations for 8 years+ in order to run tests
against live code.

I still do some analysis up front in order to understand what the client
actually wants. I believe that direct communication between development and
client is the quickest way to establish some sense of the problem domain
without the obstruction of keyboard, monitor, code and test; I think it
reduces the early iterations to a 5 minute conversation rather than a 30
minute code and test cycle. Diminishing returns hit quickly in any analysis
phase, and I have long believed that any attempt at exhaustive analysis is
doomed.


0
Nicholls.Mark (1061)
10/7/2004 4:11:33 PM
Daniel Parker wrote:
> Andrew McDonagh <news@andrewcdonagh.f2s.com> wrote in message news:<ck1btc$1us$1@news.freedom2surf.net>...
> 
>>But from an XP projects perspective, Automated Acceptance tests are 
>>equivalent to old style requirements, but with more meat.
>>
> 
> No, they're not equivalent :-)

we'll have to disagree then. :-)

> 
> 
>>A traditional requirement would be something like ... "..shall handle 
>>n000 transactions an hour".  Our XP automated acceptance test invoke 
>>n000 transactions an hour and pass or fail until the product does handle 
>>that amount.
> 
> 
> Well, consider a file extract process to a downstream system.  

OK

> From a business point of view, the only thing of interest is that the process
> take place within a certain window of time in the UAT and production
> environments.  

So, from a business point of view, doesn't it make sense to know if the
system can do this?

If it does make sense, why can't the test be automated?

If it doesn't make sense, why is it a requirement?


> That's it, and a statement to that effect is all that
> belongs in a specification document.  

Absolutely agree with you.

And in XP that document would be executable.

> It's up to the development team
> to meet that requirement, the business doesn't care how they do it. 

The same is true when XP is being used.  The requirements (stories) are
met by the what ever implementation the developers see fit. The
automated acceptance tests don't care about the implementation.

> If it's an easily meetable target, then there's no point in
> considering it further.  

Could you explain what you mean by this sentence?  i.e. consider what
further?

> > If they really can't meet it, then the
> business process would have to be revisited.  

Yes thats true for an XP project too.

>You don't need an automatic acceptance test, by the way, to tell whether a process takes
> too long to run.

Correct, it doesn't need an automated test, but every requirement needs
some form of test to prove the system supports it.

If we can automate that test, then wouldn't this be better than having
to run a manual test?

> 
> Automated acceptance tests are good things, but they have to be
> regarded as a sampling of the specification, not the specification
> itself.

In non-XP projects you are absolutely correct. In XP, acceptance tests
are the requirements, regardless of whether they are automated or not.
XP teams just strive to automate them.

Andrew
0
news248 (706)
10/7/2004 8:40:20 PM
Mark Nicholls wrote:

snipped

> 
> There is a lot here that makes sense...and I have seen before....and some I
> haven't....and I stay open minded.
> 
> 
>>You ought to try the TDD lifecycle with a small test project, and try to
>>generate these effects for yourself.
>>
> 
> I've been putting assert in code for 12 years+.

using asserts in production code has nothing to do with unit testing or
TDD; it's more about reporting when the API contract between the caller
and callee objects is being broken.

> I use n-unit.
> I have built dummy implementations for 8 years+ in order to run tests
> against live code.

At best, that's Unit Testing, not TDD; the two are very different.
0
news248 (706)
10/7/2004 10:15:16 PM
"Andrew McDonagh" <news@andrewcdonagh.f2s.com> wrote in message
news:ck49jo$sie$1@news.freedom2surf.net...
> Daniel Parker wrote:
> >
> >
> > Well, consider a file extract process to a downstream system.
>
> OK
>
> > From a business point of view, the only thing of interest is that the process
> > take place within a certain window of time in the UAT and production
> > environments.
>
> So, from a business point of view, doesn't it make sense to know if the
> system can do this?
>
> If it does make sense, why can't the test be automated?
>
> If it doesn't make sense, why is it a requirement?
>
It's a requirement because that's the window of time in which the process
has to run.  The business side doesn't care whether there's an automated
test or not, they just care that the process runs in that time.
>
> > That's it, and a statement to that effect is all that
> > belongs in a specification document.
>
> Absolutely agree with you.
>
> And in XP that document would be executable.
>
> > It's up to the development team
> > to meet that requirement, the business doesn't care how they do it.
>
> The same is true when XP is being used.  The requirements (stories) are
> met by the what ever implementation the developers see fit. The
> automated acceptance tests don't care about the implementation.
>
> > If it's an easily meetable target, then there's no point in
> > considering it further.
>
> Could you explain what you mean by this sentence?  i.e. consider what
> further?
>
Consider allocating any further analysis or development time to the issue.
Time is an expensive resource.

> > If they really can't meet it, than the
> > business process would have to be revisited.
>
> Yes thats true for an XP project too.
>
> > You don't need an automatic acceptance test, by the way, to tell whether
> > a process takes too long to run.
>
> Correct, it doesn't need an automated test, but every requirement needs
> a some form of test to prove the system supports it.
>
> If we can automate that test, then wouldn't this be better than having
> to run a manual test?
>
If it's free, sure.  But there's a cost to specifying the test and
implementing the test, and then the specification "document" becomes bigger,
which is a bad thing, so if there's no business value to it, why bother?
We'll know how long it takes to run the process when we do our first UAT
volume test, there's nothing more to test.  It would be a little like
writing an acceptance test to see if the application started, which would be
silly, even though that too is a requirement.
> >
> > Automated acceptance tests are good things, but they have to be
> > regarded as a sampling of the specification, not the specification
> > itself.
>
> In non XP projects you are absolutely correct. In XP Acceptance tests
> are the requirements, regardless of whether they are automated or not.
> XP teams just strive to automated them.
>
I'm all in favour of automating a great number of non-trivial acceptance
tests, especially when it comes to reconciling expected and realized
outputs, but I claim you can't show that automated tests can replace a
specification.  There was an exchange before on this subject and the "tests
are the specification" people didn't demonstrate anything, Phlip was going
to take up the challenge of showing how the XSLT 2 specification could be
fully represented as a set of automated tests, but never came up with
anything.

Regards,
Daniel Parker


0
Daniel
10/8/2004 2:29:57 AM
>
> snipped
>
> >
> > There is a lot here that makes sense...some I have seen before....and some I
> > haven't....and I stay open minded.
> >
> >
> >>You ought to try the TDD lifecycle with a small test project, and try to
> >>generate these effects for yourself.
> >>
> >
> > I've been putting assert in code for 12 years+.
>
> using asserts in production code has nothing to do with unit testing or
> TDD, its more about reporting when the API contract between the caller
> and callee objects are being broken.
>
> > I use n-unit.
> > I have built dummy implementations for 8 years+ in order to run tests
> > against live code.
>
> At best, that's Unit Testing not TDD, these two are very different.

how?


0
Nicholls.Mark (1061)
10/8/2004 1:33:21 PM
Daniel Parker wrote:

> I'm all in favour of automating a great number of non-trivial acceptance
> tests, especially when it comes to reconciling expected and realized
> outputs, but I claim you can't show that automated tests can replace a
> specification.  There was an exchange before on this subject and the "tests
> are the specification" people didn't demonstrate anything, Phlip was going
> to take up the challenge of showing how the XSLT 2 specification could be
> fully represented as a set of automated tests, but never came up with
> anything.

I was? That sounds like a ton of tedious work. But, as acceptance tests go,
each test is simple enough for a newbie, so maybe you'd like to get us
started.

In terms of business value, a published "recommendation" has many more
details than an application-specific module. To satisfy business value, if
we needed to write an XSLT module, we would only implement the few features
our other modules need, and would leave the others in our (thin) formal
written specification document. Implementing the module via test-first would
lead naturally to literate acceptance tests.

Here's an example of a literate acceptance test. I have more up my sleeve,
but I keep posting this one because it's so photogenic:

    http://flea.sourceforge.net/reference.html

Most keywords in that miniature declarative language come with a small
sample. The acceptance test builds the documentation for all the keywords,
with the samples rendered as little pictures. The specification _is_ the
acceptance test output.

If you think there are specifications that can't be expressed as literate
acceptance tests, well, argue for your limitations and sure enough they're
yours.

If you want me to do all of XSLT, e-mail me privately and I'll send you my
rate sheet.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces




0
phlip_cpp (3852)
10/8/2004 1:57:30 PM
Mark Nicholls wrote:

> > At best, that's Unit Testing not TDD, these two are very different.
>
> how?

http://flea.sourceforge.net/TDD_in_a_nut_shell.pdf

When you add the TDD principles...

 - write a line of test to force a line of code
 - test fails for the correct reason
 - minimal code to pass the test
 - <10 edits between correctly predicting a test run
 - Undo to get rid of unexpected failures
 - refactor by returning to passing tests as often as possible
 - refactor to reduce fluff
 - refactor to improve self-documentation
 - refactor to merge duplicated behavior

...you transition into a different mode than "automate lots of tests and run
them often".

Each of those principles is an example of stimulus and response. When the
project does X[i] you instantly do Y[i]. Simple, distinct rules create
complex emergent behaviors that, in turn, create elegant code.

When no refactor can reduce code's ability to be tested, the average
refactor improves its ability. Some advanced testing works by walking code
through many permutations of test inputs. As a side-effect of rapid
development, we apply more and more tests to steadily contracting
permutations of code.

These forces form a cycle that models a dynamic attractor. A real attractor
converges on a provable point. A biological attractor tends to converge on
its operator's intent. If we intend clean elegant code, husbanding its
emergence is much more efficient than heroism.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/8/2004 3:36:19 PM
"Phlip" <phlip_cpp@yahoo.com> wrote in message
news:Tty9d.15070$Qv5.5620@newssvr33.news.prodigy.com...
> Mark Nicholls wrote:
>
> > > At best, that's Unit Testing not TDD, these two are very different.
> >
> > how?
>
> http://flea.sourceforge.net/TDD_in_a_nut_shell.pdf
>
> When you add the TDD principles...
>
>  - write a line of test to force a line of code
>  - test fails for the correct reason
>  - minimal code to pass the test
>  - <10 edits between correctly predicting a test run
>  - Undo to get rid of unexpected failures
>  - refactor by returning to passing tests as often as possible
>  - refactor to reduce fluff
>  - refactor to improve self-documentation
>  - refactor to merge duplicated behavior
>
> ...you transition into a different mode than "automate lots of tests and run
> them often".
>
> Each of those principles is an example of stimulus and response. When the
> project does X[i] you instantly do Y[i]. Simple, distinct rules create
> complex emergent behaviors that, in turn, create elegant code.
>
> When no refactor can reduce code's ability to be tested, the average
> refactor improves its ability. Some advanced testing works by walking code
> through many permutations of test inputs. As a side-effect of rapid
> development, we apply more and more tests to steadily contracting
> permutations of code.
>
> These forces form a cycle that models a dynamic attractor. A real attractor
> converges on a provable point. A biological attractor tends to converge on
> its operator's intent. If we intend clean elegant code, husbanding its
> emergence is much more efficient than heroism.
>
nice....I also think sitting down for half an hour with a pencil and piece
of paper is very useful.

again..I have no specific problem except the implication that this is
'different', rather than an example of existing practice.

thus my question how is it 'different'....it isn't....it's more......it's an
assembly of lots of established knowledge....that should not be conceived as
a criticism, but a compliment....I do have potential problems with it, we
can go there if we want. I simply am trying to establish that testing is and
has been for a long time central to and part of the micro development process
rather than a separate process. Putting assertions in the code and around
the code in formal tests is not unusual or new. Isolating, separating
concerns, analysing behaviour and simplifying is and always has been part of
SD from day one....so what do I see as new and extra;

Refactoring is a recent 'process', that is not to say it didn't happen
before, it did, it's just now there are published techniques built around
making it relatively safe and sensible...that's good...these tools are
powerful tools, so I buy refactoring now, and I don't see it as a new part
of XP.

>  - <10 edits between correctly predicting a test run

yep....this is 'extreme' at the very least if it is a rule.

>  - Undo to get rid of unexpected failures

yep...normally one would expect to edit/debug/correct failures rather than
remove them as a rule; that is 'different'.

>  - write a line of test to force a line of code

hmmmmm, I still don't really believe this you see.

to write the test, I must have an interface to an implementation, even if I
have no implementation. So the interface *must* come first.

In order to know what the interface is, I surely need to know something
about the problem in question, so I *must* have to do some sort of analysis.

In order to do some sort of analysis, I must know what the system's intent
is, so I *must* have to do some sort of requirements gathering.

I'm not saying waterfall, as I've said, we look in RAD and we see all sorts
of ways of doing fast, incremental gathering and analysis, and I believe
this is far from rare, in fact it may well be the dominant approach.

So what are we left with?

A more formalised approach to adding tests to our code....that's probably
good.

My central criticism of the extra stuff is this.

I have long found that the most productive part of my day is walking to
work, walking home, and standing in the freezing cold smoking a cigarette.
I have no empirical evidence to back this up, just my personal experience,
so I might be talking about just me.

So why do I believe that these enforced absences from my keyboard are the
most productive?

because I can think.

clearly.

without the clutter of code and a keyboard.

I can model and abstract, I can see new simpler ways of what I've spent the
last 2 hours 'evolving' in my development tools.

Simplicity is the key, and I find *not* sitting in front of a computer, but
spending thirty minutes with a pencil and a piece of paper, to be the best
way to derive simplicity and *then* return to the toil at the keyboard.

So if it works for you, fine, you may be in the majority, but I doubt it
will work for me.....though I do believe in small incremental development,
there is a lower bound where any thought or analysis is wrung from the
process, there is a bigger picture that must be established or at least
sketched.

In short, it seems to be a form of incremental code and fix....and that
probably is not as good as what I do now...sorry.


0
Nicholls.Mark (1061)
10/8/2004 4:25:14 PM
Mark Nicholls wrote:

> nice....I also think sitting down for half an hour with a pencil and piece
> of paper is very useful.

Of course it is. So is checking out a sand-box of your project and screwing
with it for a while. Then copy the best back into your real project.

(And my dynamic attractor parable is _not_ about inspiration, or beliefs,
but they can help.)

When an existing design needs to be encoded via TDD, I get the mental image
of an old-fashioned clothes wringer. You take wet dripping clothes, run them
through the wringer, and it squeezes out most of the water.

To squeeze any latent cruft out of a pre-existing design, return to your
production project, and use TDD to force the same design to emerge. Of
course you can cheat a little. Given an early choice of what to refactor, you
can lead the design towards your plan a little.

However, if you can't find a way to add the last few design elements that
you planned, without exceeding the number of features you currently need,
you learned something.

> >  - <10 edits between correctly predicting a test run
>
> yep....this is 'extreme' at the very least if it is a rule.

We don't have "rules" we have "principles".

In this case, the <10 edits rule^W principle could have come first, and it
could have generated the TDD principle. Or, the TDD rules can come first,
and generate the <10 edits principle.

> >  - Undo to get rid of unexpected failures
>
> yep...normally one would expect to edit/debug/correct failure rather than
> remove them as a rule, that is 'different'.

Sometimes production code is very near to what you need, so you just debug a
little, or fix an obvious typo. You always have the _option_ to Undo, and
the <10 edits principle forces you to only make changes that you can
remember, so you could even Undo manually.

These principles, used with other XP situations like pair programming,
prevent those long hideous open-ended bug hunts.

On a "traditional project" (in my experience), the longer you code, the
slower you go. On a project following these principles, the longer you go
the faster you go.

> >  - write a line of test to force a line of code
>
> hmmmmm, I still don't really believe this you see.
>
> to write the test, I must have an interface to an implementation, even if I
> have no implementation. So the interface *must* come first.

Read my "nut_shell" paper.

The interface _does_ come first. You write the simplest interface that could
be tested, and the decoupling starts here.

> In order to know what the interface is, I surely need to know something
> about the problem in question, so I *must* have to do some sort of analysis.

Yep. Just enough to get to the interface, to the test, and to the feature.

Here's a dirt simple example:

    #include <cassert>
    #include <string>
    using std::string;

    // Source doesn't exist yet - this test is written first, to force it.
    int main()
    {
        Source aSource("a b\nc,  d");
        string
        token = aSource.pullNextToken();  assert("a" == token);
        token = aSource.pullNextToken();  assert("b" == token);
        token = aSource.pullNextToken();  assert("c" == token);
        token = aSource.pullNextToken();  assert("d" == token);
        token = aSource.pullNextToken();  assert(""  == token);
                                              //  EOT!
    }

From the tested interface, we can tell that Source is going to be an object
that parses strings, removes blanks, and returns tokens.

That sounds too trivial to reflect real-world project needs, right?

 ==> that's the point. it's trivial <==

To avoid high-risk code, we start all tests low-risk, and only push the risk
up as needed, to buy business value.
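
To round out the example, here is a minimal sketch (not from the original
post) of the kind of Source the test above would force into existence; the
separator set - blanks, newlines, commas - is an assumption read off the
test's input string:

    #include <cctype>
    #include <string>

    class Source
    {
        std::string text;
        std::string::size_type pos;

        bool isSeparator(char c) const
        {
            return std::isspace(static_cast<unsigned char>(c)) || c == ',';
        }

    public:
        explicit Source(const std::string & input) : text(input), pos(0) {}

        std::string pullNextToken()
        {
            while (pos < text.size() && isSeparator(text[pos]))
                ++pos;                     // skip leading separators
            std::string token;
            while (pos < text.size() && !isSeparator(text[pos]))
                token += text[pos++];      // collect the token
            return token;                  // "" signals end of tokens
        }
    };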

> In order to do some sort of analysis, I must know what the system's intent
> is, so I *must* have to do some sort of requirements gathering.

Yes. You start the project with a mission statement, a top-ten wish-list,
and whatever verbiage the participants _feel_ like writing. Then your
requirements-donor narrows in on the highest business value features, and
you analyze them in bite-sized chunks.

Because everyone in your business should have at least a nodding
understanding of your business goals, the high business value features
should be very obvious. Your receptionist should know them.

However, lower-priority features won't be so obvious, and the minimal set of
them that maximizes business value is not obvious.

Expecting requirements analysis, without code, to locate that minimal set
adds incredible risk.

> I'm not saying waterfall, as I've said, we look in RAD and we see all sorts
> of ways of doing fast, incremental gathering and analysis, and I believe
> this is far from rare, in fact it may well be the dominant approach.
>
> So what are we left with?
>
> A more formalised approach to adding tests to our code....that's probably
> good.

And a way to allow emergent design to propel - instead of interfere with -
incremental requirement analysis.

> My central criticism of the extra stuff is this.
>
> I have long found that the most productive part of my day is walking to
> work, walking home, and standing in the freezing cold smoking a cigarette.
> I have no empirical evidence to back this up, just my personal experience,
> so I might be talking about just me.

I don't like to discuss the things that make me most productive among
potential employers. ;-)

> So why do I believe that these enforced absences from my keyboard are the
> most productive?
>
> because I can think.
>
> clearly.
>
> without the clutter of code and a keyboard.
>
> > I can model and abstract, I can see new simpler ways of what I've spent the
> > last 2 hours 'evolving' in my development tools.

Right. We call this either Sustainable Pace or Balanced Life.

> > Simplicity is the key, and I find *not* sitting in front of a computer, but
> > spending thirty minutes with a pencil and a piece of paper, to be the best
> > way to derive simplicity and *then* return to the toil at the keyboard.
>
> So if it works for you, fine, you may be in the majority, but I doubt it
> will work for me.....though I do believe in small incremental development,
> there is a lower bound where any thought or analysis is wrung from the
> process, there is a bigger picture that must be established or at least
> sketched.
>
> In short, it seems to be a form of incremental code and fix....and that
> probably is not as good as what I do now...sorry.

Try it. You'll like it. ;-)

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/8/2004 5:10:39 PM
>
> > nice....I also think sitting down for half an hour with a pencil and piece
> > of paper is very useful.
>
> Of course it is. So is checking out a sand-box of your project and screwing
> with it for a while. Then copy the best back into your real project.

I agree, I have a think, and then a mess about, and then I think
either...nope, you've disappeared up your arse again or, yippee, I can now
delete 10% of the code from my app and plan.

>
> (And my dynamic attractor parable is _not_ about inspiration, or beliefs,
> but they can help.)
>
I didn't really understand it, so I can't comment.

> When an existing design needs to be encoded via TDD, I get the mental image
> of an old-fashioned clothes wringer. You take wet dripping clothes, run them
> through the wringer, and it squeezes out most of the water.
>
> To squeeze any latent cruft out of a pre-existing design, return to your
> production project, and use TDD to force the same design to emerge. Of
> course you can cheat a little. Given an early choice of what to refactor, you
> can lead the design towards your plan a little.
>
> However, if you can't find a way to add the last few design elements that
> you planned, without exceeding the number of features you currently need,
> you learned something.
>
> > >  - <10 edits between correctly predicting a test run
> >
> > yep....this is 'extreme' at the very least if it is a rule.
>
> We don't have "rules" we have "principles".
>

oh no, here we go again.....heuristics you mean? (principles make me shudder
slightly, as there seems to be a cultural gap as to the meaning).

> In this case, the <10 edits rule^W principle could have come first, and it
> could have generated the TDD principle. Or, the TDD rules can come first,
> and generate the <10 edits principle.

I drank 2/3rds of a bottle of red wine last night; red wine always does my
head in, so I'm not with the above, but it may be me.

>
> > >  - Undo to get rid of unexpected failures
> >
> > yep...normally one would expect to edit/debug/correct failures rather than
> > remove them as a rule; that is 'different'.
>
> Sometimes production code is very near to what you need, so you just debug a
> little, or fix an obvious typo. You always have the _option_ to Undo, and
> the <10 edits principle forces you to only make changes that you can
> remember, so you could even Undo manually.

OK, now I see the mentality about the <10 edits, and actually I can believe
there is a place for it.

>
> These principles, used with other XP situations like pair programming,
> prevent those long hideous open-ended bug hunts.

reduce? maybe, maybe not, I won't argue because I am ignorant of such
projects except via hearsay.

>
> On a "traditional project" (in my experience), the longer you code, the
> slower you go. On a project following these principles, the longer you go
> the faster you go.

nope....'the longer you go'.....at what?

>
> > >  - write a line of test to force a line of code
> >
> > hmmmmm, I still don't really believe this you see.
> >
> > to write the test, I must have an interface to an implementation, even if I
> > have no implementation. So the interface *must* come first.
>
> Read my "nut_shell" paper.

I did browse, I admit to being tired and lazy at the moment, it's quite
long.

I apologise, I am probably not doing your argument justice....I'll have a go
on Monday.

>
> The interface _does_ come first. You write the simplest interface that could
> be tested, and the decoupling starts here.

OK, but you need to know what that interface is, so you must do some
analysis and some requirements gathering and some design.

These must come before you write any client (test) or implementation
(potentially even a dummy).

>
> > In order to know what the interface is, I surely need to know something
> > about the problem in question, so I *must* have to do some sort of analysis.
>
> Yep. Just enough to get to the interface, to the test, and to the feature.


I've had this argument with Lahman.........

If you've got the interface....then you've got the abstraction....you've
done the analysis.

If you map out the interfaces of the system (concrete class = implementation
+ interface) it looks like a big abstract model of your system.

You are doing bog standard SD.....with some knobs on....and I'm not too sure
about all of the knobs.

>
> Here's a dirt simple example:
>
>     #include <cassert>
>     #include <string>
>     using std::string;
>
>     // Source doesn't exist yet - this test is written first, to force it.
>     int main()
>     {
>         Source aSource("a b\nc,  d");
>         string
>         token = aSource.pullNextToken();  assert("a" == token);
>         token = aSource.pullNextToken();  assert("b" == token);
>         token = aSource.pullNextToken();  assert("c" == token);
>         token = aSource.pullNextToken();  assert("d" == token);
>         token = aSource.pullNextToken();  assert(""  == token);
>                                               //  EOT!
>     }
>
> From the tested interface, we can tell that Source is going to be an object
> that parses strings, removes blanks, and returns tokens.

pullNextToken()

what does that do...how did you name it...how do you know what it
returns...how do you know what to pass it, how do you know what to test
for...i.e. you have some preconceived idea of what it does.

It didn't pop out of the ether.

You designed it, didn't you? You shouldn't be ashamed, we all have to do it
sometimes....hopefully most times.

And now you've just written a client and an implementation for it......it was
the test.

requirements
analysis.
design.....here come some interfaces
coding....here come clients and implementations...including tests and dummy
implementations.
run tests.

maybe this is based on a misconception of what testing means when put in
things like iterative development. My interpretation is 'run tests' comes
after 'write some code'. My practice has always been to embed the tests in
the micro coding process via assertions.

>
> That sounds too trivial to reflect real-world project needs, right?

I don't know....I have a sneaky feeling I've missed your point.

>
>  ==> that's the point. it's trivial <==
>
> To avoid high-risk code, we start all tests low-risk, and only push the risk
> up as needed, to buy business value.

RAD......!!!!!

spiral process...analysis..assess risk.....mitigate.....move
forward....analysis.....assess risk.....mitigate.....move
forward.......analysis.....assess risk.....mitigate.....move
forward.......analysis.....assess risk.....mitigate.....move forward.......

it's good yes....it's not an XP knob though....it's an ordinary, accepted,
established RAD knob...it's normal, natural and kind.

>
> > In order to do some sort of analysis, I must know what the system's intent
> > is, so I *must* have to do some sort of requirements gathering.
>
> Yes. You start the project with a mission statement, a top-ten wish-list,
> and whatever verbiage the participants _feel_ like writing. Then your
> requirements-donor narrows in on the highest business value features, and
> you analyze them in bite-sized chunks.

see above....this I think is normal....except maybe you should be assessing
greatest risk v greatest value......the idea being that you don't spend 18
months writing the new version of Windows to find out that the kernel
architecture will not and cannot ever work.

>
> Because everyone in your business should have at least a nodding
> understanding of your business goals, the high business value features
> should be very obvious. Your receptionist should know them.

I agree this is normal.....unless you are not taking risk into account...in
which case it genuinely is different and probably foolhardy.

>
> However, lower-priority features won't be so obvious, and the minimal set of
> them that maximizes business value is not obvious.

this is just economics.....Wealth of Nations...Adam Smith (18th
century).....stop when marginal revenue = marginal cost.

it's in RAD....under feature removal....i.e. cost and schedule are not always
obvious on day 1.

>
> Expecting requirements analysis, without code, to locate that minimal set
> adds incredible risk.

again you're assuming some sort of waterfall process...that fell from favour
in about 1990.

>
> > I'm not saying waterfall, as I've said, we look in RAD and we see all sorts
> > of ways of doing fast, incremental gathering and analysis, and I believe
> > this is far from rare, in fact it may well be the dominant approach.
> >
> > So what are we left with?
> >
> > A more formalised approach to adding tests to our code....that's probably
> > good.
>
> And a way to allow emergent design to propel - instead of interfere with -
> incremental requirement analysis.

that sounds good...but I don't see it. Do you mean evolutionary prototyping
with the client in some sense...if so this is normal, standard
practice...not an XP knob.

>
> > My central criticism of the extra stuff is this.
> >
> > I have long found that the most productive part of my day is walking to
> > work, walking home, and standing in the freezing cold smoking a cigarette.
> > I have no empirical evidence to back this up, just my personal experience,
> > so I might be talking about just me.
>
> I don't like to discuss the things that make me most productive among
> potential employers. ;-)

I have no cheque book....first rule of sales....identify the man who really
signs the cheques.

>
> > So why do I believe that these enforced absences from my keyboard are the
> > most productive?
> >
> > because I can think.
> >
> > clearly.
> >
> > without the clutter of code and a keyboard.
> >
> > I can model and abstract, I can see new simpler ways of what I've spent the
> > last 2 hours 'evolving' in my development tools.
>
> Right. We call this either Sustainable Pace or Balanced Life.

genuine laughter.

I'll tell that to the wife, when I come home next in some sort of partial
nervous breakdown trying to finish something.

I call it analysis, informal, unstructured.

>
> > Simplicity is the key, and I find *not* sitting in front of a computer, but
> > spending thirty minutes with a pencil and a piece of paper, to be the best
> > way to derive simplicity and *then* return to the toil at the keyboard.
> >
> > So if it works for you, fine, you may be in the majority, but I doubt it
> > will work for me.....though I do believe in small incremental development,
> > there is a lower bound where any thought or analysis is wrung from the
> > process, there is a bigger picture that must be established or at least
> > sketched.
> >
> > In short, it seems to be a form of incremental code and fix....and that
> > probably is not as good as what I do now...sorry.
>
> Try it. You'll like it. ;-)
>

I think someone's put it in another post.....there always seems to be this
XP/Agile v's grumpy old men battle.....it's highly polarised and no one ever
says....'ooo actually that's not completely mad' or 'well actually now you
say that maybe that aspect of XP is debatable'.

I like testing.....if XP promotes testing that's a good thing.
I like communication......if XP promotes communication that's a good thing.
I like small iterative development......if XP promotes small iterative
development that's a good thing.

But I also like making people think and communicate and draw rough
sketches of data architecture and system architecture, and not do this with
the IDE open and the compiler whirring.

Unfortunately I don't think XP does promote this, I think it may actually
undermine it.


0
Nicholls.Mark (1061)
10/8/2004 5:57:31 PM
Mark,

First, put this thread aside to resume reading on Monday. Proceed now at 
your own risk. :)

> pullNextToken()
> what does that do...how did you name it...how do you know what it
> returns...how do you know what to pass it, how do you know what to test
> for...i.e. you have some preconceived idea of what it does.

Doing your analysis "on paper" amounts to jotting down thoughts. These 
are probably in somewhat symbolic form - squares, arrows, and so on. But 
you could just as well do it in English, I suppose.

Imagine you were writing French, and for the fun of it decided to write 
down your analysis thoughts in French. At first the process would be 
quite laborious - you'd have to formulate your thoughts in English to 
get them into a coherent state, then translate to French. After a while, 
though, as you grew proficient in French, you would /directly/ put down 
your thoughts in French. (I know I'm painting a rosy picture - past a 
certain age one's brain is no longer plastic enough to learn to think in 
a new language, so it might take forever to get to that level of 
proficiency - but the process is a good enough description.)

Now, in the above, imagine replacing "French" with "tests". Phlip is 
doing analysis in the language of tests. His "dirt simple example" is 
the same as your quick jottings on paper, but it will morph into an 
executable test seamlessly.

Laurent
http://bossavit.com/thoughts/
0
laurent (379)
10/8/2004 6:30:26 PM
> First, put this thread aside to resume reading on Monday. Proceed now at
> your own risk. :)

I will, I've now been to the pub and had a pint, so I'm 'tired and
emotional'.

>
> > pullNextToken()
> > what does that do...how did you name it...how do you know what it
> > returns...how do you know what to pass it, how do you know what to test
> > for...i.e. you have some preconceived idea of what it does.
>
> Doing your analysis "on paper" amounts to jotting down thoughts. These
> are probably in somewhat symbolic form - squares, arrows, and so on. But
> you could just as well do it in English, I suppose.
>
> Imagine you were writing French, and for the fun of it decided to write
> down your analysis thoughts in French. At first the process would be
> quite laborious - you'd have to formulate your thoughts in English to
> get them into a coherent state, then translate to French. After a while,
> though, as you grew proficient in French, you would /directly/ put down
> your thoughts in French. (I know I'm painting a rosy picture - past a
> certain age one's brain is no longer plastic enough to learn to think in
> a new language, so it might take forever to get to that level of
> proficiency - but the process is a good enough description.)
>
> Now, in the above, imagine replacing "French" with "tests". Phlip is
> doing analysis in the language of tests. His "dirt simple example" is
> the same as your quick jottings on paper, but it will morph into an
> executable test seamlessly.
>
> Laurent
> http://bossavit.com/thoughts/


0
Nicholls.Mark (1061)
10/8/2004 6:40:50 PM
I wrote,

> Imagine you were writing French

That should have read "imagine you were *learning* French".

Laurent
0
laurent (379)
10/8/2004 6:41:24 PM
Mark Nicholls wrote:

> > First, put this thread aside to resume reading on Monday. Proceed now at
> > your own risk. :)
>
> I will, I've now been to the pub and had a pint, so I'm 'tired and
> emotional'.

I understand that more than a time-zone difference is involved here,
however...

...The mighty Elliott drinks too. And impassioned gain-sayings of the
support verbiage between the points are unfair to those of us who are not
under the same influence and hence can't share the joke.

> > pullNextToken()

> what does that do...

It ... pulls the next ... token. Maybe. ;-)

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/8/2004 8:10:39 PM
Mark Nicholls wrote:
> So why do I believe that these enforced absences from my keyboard are the
> most productive?

There seems to be little recognition that people are different
and have different thinking and working styles. XP has a
one-size-fits-all attitude. Even worse, XP paints anyone who wants
to think and design as a BDUFer, which is worse than being a communist
homosexual from France.
0
grace33 (48)
10/8/2004 8:50:24 PM
Robert Grace wrote:
>
> Mark Nicholls wrote:
> > So why do I believe that these enforced absences from my keyboard are the
> > most productive?
>
> There seems to be little recognition that people are different
> and have different thinking and working styles. XP has a
> one-size-fits-all attitude. Even worse, XP paints anyone who wants
> to think and design as a BDUFer, which is...

...perfectly acceptable, so long as you don't waste time with it, don't
worship it, don't sell it, and so long as you convert your design to code
via pairing and TDD.

> ...worse than being a communist
> homosexual from France.

You forgot "terrorist" and "vegan".

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/9/2004 1:09:34 AM
"Mark Nicholls" <nicholls.mark@mtvne.com> wrote in
news:2sntc9F1nm9l8U1@uni-berlin.de: 
 
> Refactoring is a recent 'process', that is not to say it didn't happen
> before, it did, it's just now there are published techniques built
> around making it relatively safe and sensible...that's good...these
> tools are powerful tools, so I buy refactoring now, and I don't see it
> as a new part of XP.

Whats "new" is not refactoring itself, its that refactoring became powerful 
enough (thank the IDEs) that it fundamentally changed the equation: We were 
able to go from "do it right the first time because it gets more and more 
expensive to change it later" to "do it well enough now to move on to the 
next version while still remaining agile". This is true for all 
methodologies, of course, not just XP. But IMHO XP did more than just 
recognize the change, i.e., to accept the improvement as a bonus but not 
fundamentally alter anything else. Rather, XP built itself *around* this 
change. That was new.
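
As an invented illustration of that changed equation - an Extract Method of 
the kind an IDE performs in one safe, automated step, with a 
behavior-preserving test guarding it:

    #include <cassert>
    #include <string>

    // Before the refactoring, the formatting logic sat inline at call sites:
    //     std::string label = "[" + name + "]";    // repeated here and there
    // One automated Extract Method later, the duplication has a name...
    std::string bracket(const std::string & name)
    {
        return "[" + name + "]";
    }

    // ...and the test stays green, so the change was cheap and safe.
    int main()
    {
        assert(bracket("total") == "[total]");
    }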
0
Rich
10/9/2004 2:41:17 AM
"Mark Nicholls" <nicholls.mark@mtvne.com> wrote in
news:2so2pfF1koputU1@uni-berlin.de: 
 
> I like testing.....if XP promotes testing that's a good thing.
> I like communication......if XP promotes communication that's a good thing.
> I like small iterative development......if XP promotes small iterative
> development that's a good thing.
> 
> But I also like making people think and communicate and draw
> rough sketches of data architecture and system architecture, and not do
> this with the IDE open and the compiler whirring.

This is great stuff. In my team, we *do* do a lot of thinking with the IDE 
open and the compiler whirring :-) We also do a lot of thinking when we all 
sit back and talk about the issue. And we have a whiteboard that covers the 
entire room and we do a *lot* of thinking up there. We'll draw something 
up, work on it, keep at it until we think we know where we want to go next, 
then we'll get back to the keyboard and go for a while, while stealing 
glances back at the whiteboard. And after a while, we erase the board.

Sometimes our whiteboard model turns out to be a bit crappy. Sometimes we 
decide that we don't understand the issue well enough to draw it on the 
board. Then we think with the IDE. XP calls that prototyping.
 
> Unfortunately I don't think XP does promote this, I think it may
> actually undermine it.

Either (1) you're wrong, which is a good thing because then you may really 
like XP. Or (2) you're right, in which case, screw calling yourself an 
XPer, take the other good stuff it has to offer and keep your existing good 
practices. Either way, you win.

FWIW, I think the answer is (1), that XP is not incompatible with what you 
just said. Even so, has anyone ever truly adopted a methodology in 100% 
form and abandoned everything prior? I do lots of XP things and don't 
really care about calling myself a card-carrying XPer who follows every XP 
practice. I think most "XPers" are similar; they do bits of XP but not all 
of it.(*) It's probably only in these forums that we all take hard-line 
positions and pound on our methodology bibles.

(*) I'm programming at home this weekend on a solo project. Does that mean 
I'm not an XPer? Who cares? The Monday morning CVS check-in will answer the 
question just fine.
0
Rich
10/9/2004 3:04:50 AM
Robert Grace <grace33@aol.com> wrote in
news:10mdv6mbrmitnb5@news.supernews.com: 

> 
> Mark Nicholls wrote:
>> So why do I believe that these enforced absences from my keyboard are
>> the most productive?
> 
> There seems to be little recognition that people are different
> and have different thinking and working styles. XP has a
> one-size-fits-all attitude. Even worse, XP paints anyone who wants
> to think and design as a BDUFer, which is worse than being a communist
> homosexual from France.
> 

Our team has taken to walking a mile to and from the local Starbucks once 
or twice a day. Great things happen on those walks. (We're concerned now 
that winter is approaching.) Must I turn in my XP badge? Keep your Mickey 
Mouse T-shirt, speedo and purse imagery to yourself. What an idiot.
0
Rich
10/9/2004 3:10:39 AM
Rich MacDonald wrote:

> Mark Nicholls wrote:
>
> > I like testing.....if XP promotes testing that's a good thing.
> > I like communication......if XP promotes communication that's a good thing.
> > I like small iterative development......if XP promotes small iterative
> > development that's a good thing.
> >
> > But I also like making people think and communicate and draw
> > rough sketches of data architecture and system architecture, and not do
> > this with the IDE open and the compiler whirring.
>
> This is great stuff. In my team, we *do* do a lot of thinking with the IDE
> open and the compiler whirring :-) We also do a lot of thinking when we all
> sit back and talk about the issue. And we have a whiteboard that covers the
> entire room and we do a *lot* of thinking up there. We'll draw something
> up, work on it, keep at it until we think we know where we want to go next,
> then we'll get back to the keyboard and go for a while, while stealing
> glances back at the whiteboard. And after a while, we erase the board.

The C3 project had a slogan "no documentation". (Gee, maybe _that's_ why it
"failed"!!) This was essentially encouragement to avoid the documentational
excesses of previous projects, but sadly it became part of the earliest XP
verbiage. The "communication" value moderates the guilty pleasure of telling
suits, "hey, we don't write documentation here!"

Agile Modeling follows these practices:

 * iterative refinement
  - create several models in parallel
  - iterate models into code soon
 * simplicity
  - use the simplest tool (whiteboard!)
  - display models simply (whiteboard!!)
 * documentation
  - discard temporary models
  - update models only at great need
 * teamwork
  - model with others
  - display models publicly

What stays inside the loop is the idea that modeling is a creative activity,
akin to going out and smoking, or writing scratch code.

What Agile Modeling refutes is the worship, paranoia, and anxiety invested
in traditional modeling. If you invert each AM practice, you get...

 - write one kind of diagram all day
 - store the model and try to code it later
 - use a complex tool
 - hide the models in a complex archive
 - escalate all models to hallowed permanence
 - never update models even when reality changes
 - model alone
 - don't display the models

Please don't try to tell me nobody ever did any of those.

AM leverages the creative and communicative aspects of a medium softer than
code, but less rigorous.

> Sometimes our whiteboard model turns out to be a bit crappy. Sometimes we
> decide that we don't understand the issue well enough to draw it on the
> board. Then we think with the IDE. XP calls that prototyping.

 - iterate models into code soon

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces



0
phlip_cpp (3852)
10/9/2004 3:52:40 AM
Robert Grace wrote:

> There seems to be little recognition that people are different
> and have different thinking and working styles. XP has a
> one-size-fits-all attitude.

That's simply not true. XP is expected to be adapted.

http://www.xprogramming.com/Practices/justrule.htm
http://www.xprogramming.com/xpmag/jatNotRules.htm

BUT - there certainly *is* value in having a *team* agree on "rules"

http://c2.com/cgi/wiki?ItsJustaRule

And there *is* value in doing something "by the book" for some while.

http://c2.com/cgi/wiki?ThreeLevelsOfAudience

> Even worse, XP paints anyone who wants
> to think and design as a BDUFer,

Not true.

First, every XPer thinks and designs, of course, so you are probably talking 
about designing before implementing.

Well, even that is an integral part of XP - design happens when you come up 
with the Metaphor, in the Planning Game, in short design sessions during 
the day. CRC cards and whiteboards are used quite a lot by XP teams.

Cheers, Ilja 


0
it3974 (470)
10/9/2004 7:08:16 AM
Robert,

> There seems to be little recognition that people are different
> and have different thinking and working styles. XP has a
> one-size-fits-all attitude.

Sauce for the goose is sauce for the gander. If we want to acknowledge 
different thinking styles, we have more to gain by tearing down the 
notion that "visual modeling" is intrinsically better than the non-
visual kinds, and dethrone UML. That's the larger tyranny.

Laurent
http://bossavit.com/thoughts/
0
laurent (379)
10/9/2004 7:18:46 AM

Laurent Bossavit wrote:
> Sauce for the goose is sauce for the gander. If we want to acknowledge 
> different thinking styles, we have more to gain by tearing down the 
> notion that "visual modeling" is intrinsically better than the non-
> visual kinds, and dethrone UML. That's the larger tyranny.

Tyranny is tyranny. The argument over which is larger is a luxury.
0
grace33 (48)
10/9/2004 1:32:56 PM

Ilja Preuß wrote:
> That's simply not true. XP is expected to be adapted.
> 
> http://www.xprogramming.com/Practices/justrule.htm
> http://www.xprogramming.com/xpmag/jatNotRules.htm

Yes, of course, that's exactly the attitude we've all seen
over the years from XPers.

I guess that you need to work the same hours so you can
attend the same meeting have the same setup so you can
pair program while following the same coding standard
while programming test first is a false impression?


>>Even worse XP paints anywone who wants
>>to think and design as a BDUFer,
> 
> Not true.

So BDUF is ok with XP. Didn't know that. What is XP again?
0
grace33 (48)
10/9/2004 1:40:19 PM
Laurent Bossavit wrote:

> Robert,
>
> > There seems to be little recognition that people are different
> > and have different thinking and working styles. XP has a
> > one-size-fits-all attitude.
>
> Sauce for the goose is sauce for the gander. If we want to acknowledge
> different thinking styles, we have more to gain by tearing down the
> notion that "visual modeling" is intrinsically better than the non-
> visual kinds, and dethrone UML. That's the larger tyranny.

Geez, Robert - everyone really liked your post here!

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/9/2004 1:42:56 PM
Robert Grace wrote:

> Ilja Preuß wrote:

> > That's simply not true. XP is expected to be adapted.
> >
> > http://www.xprogramming.com/Practices/justrule.htm
> > http://www.xprogramming.com/xpmag/jatNotRules.htm
>
> Yes, of course, that's exactly the attitude we've all seen
> over the years from XPers.

Yep. Only XP says you adapt the rules to your situation. All the other
methodologies say ... something. I haven't read them.

> I guess that you need to work the same hours so you can
> attend the same meeting have the same setup so you can
> pair program while following the same coding standard
> while programming test first is a false impression?

Uh, one team's coding standards might be different from another's.

And you forgot the keyboards - all have QWERTY layout.

Quite the conspiracy, huh?

> >>Even worse XP paints anywone who wants
> >>to think and design as a BDUFer,
> >
> > Not true.
>
> So BDUF is ok with XP. Didn't know that. What is XP again?

A strategy to make BDUF safe.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces



0
phlip_cpp (3852)
10/9/2004 2:23:56 PM
Robert,

> I guess that you need to work the same hours so you can
> attend the same meeting have the same setup so you can
> pair program while following the same coding standard
> while programming test first is a false impression?

If you're a chess player, you don't complain that the game is 
"inflexible" just because your opponent won't let you move a pawn 
backwards or a bishop sideways.

Your options for flexibility are either a) to play a different game 
altogether or b) to find room for expressing your individual skill and 
style within the constraints of the game.

"Different games" may not mean replacing every practice, either - you 
can play checkers (draughts) on a chess board. Test-driven development 
works fine within a team using more up-front design.

Laurent
http://bossavit.com/thoughts/
0
laurent (379)
10/9/2004 2:36:03 PM
Laurent Bossavit wrote:

> Robert,
>
> > I guess that you need to work the same hours so you can
> > attend the same meeting have the same setup so you can
> > pair program while following the same coding standard
> > while programming test first is a false impression?
>
> If you're a chess player, you don't complain that the game is
> "inflexible" just because your opponent won't let you move a pawn
> backwards or a bishop sideways.
>
> Your options for flexibility are either a) to play a different game
> altogether or b) to find room for expressing your individual skill and
> style within the constraints of the game.
>
> "Different games" may not mean replacing every practice, either - you
> can play checkers (draughts) on a chess board. Test-driven development
> works fine within a team using more up-front design.

Uh, and it also lets you move the pawns backwards if that wins the battle,
and if your teammates agree...

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/9/2004 2:47:37 PM
Mark Nicholls wrote:
>>snipped
>>
>>
>>>There is a lot here that makes sense...and I have seen before....and some I
>>>haven't....and I stay open minded.
>>>
>>>
>>>
>>>>You ought to try the TDD lifecycle with a small test project, and try to
>>>>generate these effects for yourself.
>>>>
>>>
>>>I've been putting assert in code for 12 years+.
>>
>>using asserts in production code has nothing to do with unit testing or
>>TDD, it's more about reporting when the API contract between the caller
>>and callee objects is being broken.
>>
>>
>>>I use n-unit.
>>>I have built dummy implementations for 8 years+ in order to run tests
>>>against live code.
>>
>>At best, that's Unit Testing not TDD, these two are very different.
> 
> 
> how?
> 
> 

It seems everyone else has answered this for me, cheers guys.
0
news248 (706)
10/9/2004 5:08:35 PM
Mark Nicholls wrote:

> If I ask you to write a test, you will ask me what I want you to
> test, and how that behaviour is exposed...i.e. you will want me to do
> the analysis and design.

Analysis, yes; design no.

> Analysis then design then encode then refactor, this isn't new to me,
> it's just iterative development.

I think the main difference is not in the practices, but in the philosophy. 
Even with iterative development, traditionally people will want to get the 
design "as right as possible" before starting to code - they might refactor, 
but will see it as a necessary evil instead of the main practice to get to a 
good design. They might write automated tests, but they will see it more as 
a practice to find bugs instead of preventing them. And they typically will 
want their testing to be minimally invasive, in contrast to specifically 
using the tests to form the code.

But perhaps all old news to you. In my experience, it isn't to the majority 
of developers, though.

>> So are you saying that deciding on part of the design, coding that
>> part, and *then* testing and debugging it until it works, is
>> atypical for traditional software development?

Sadly you didn't reply to this...

>>> iterative development existed long
>>> before XP was a twinkle in anyone's eye.
>>
>> I don't remember anyone stating something different. Perhaps XP is
>> more than "just" iterative development.
>
> That's fine....perhaps it is...the statements seem to imply not that
> it is more, but that it is different.

Well, every method is more than just being iterative, of course. And it's 
part of that "more" that's different.

> In A we do B, in C we do D.
>
> rather than
>
> in A we do B, in C we do B and D.
>
> there is no revolution, just evolution.

As far as I know, there is no contradiction between evolution and 
revolution. A small step in an evolution can often have revolutionary 
consequences - especially in complex systems.

> i.e. we should be able to consensually agree about B(!) and then go
> on to argue about the value of D

The value of a process isn't simply the sum of the values of its practices. 
It's to a great extent defined by its philosophy, by its values and their 
impact on why, when, how and how much the practices are used, and how they 
interact.

Therefore I'm not sure that "agreeing about B" buys us much.


>>> It is called iterative development, or spiral development lifecycle.
>>
>> That still has testing *after* coding in the cycle, if I remember
>> correctly. It still has design before coding inside an iteration,
>> hasn't it?
>
> I agree, but I cannot see how you write a test before you know what
> to test or what the thing we want to test looks like

I only need to know what it looks like from the outside, I can decide on 
that while writing the test, and that decision can be a rough first cut that 
gets improved once I have a green bar.

>, I accept that
> you are potentially moving the test before writing the
> implementation, where 'traditionally' it would be done at the same
> time via assertions or after via unit testing.

Yes. And that makes a *big* difference. It's possibly as revolutionary as 
physicians washing their hands before the surgery instead of only 
afterwards.

> If you write all the tests before writing any code,

Just for the record: I don't - that's not what TDD is about.

> I would suggest
> that the normal use of assertion inside code, reduced the iteration
> even further

How would they do that??? Wouldn't you still need outside test code that 
called the production code?

> is this XXP, or just the use of assertions in
> software development?

If "taken to to the extremes", it's called Design By Contract, is orthogonal 
to TDD as far as I can tell, and far from being common practice. It's 
certainly worth a look.

> I do see interesting approaches, new ideas, some of which may be
> sensible, some not,

Of which you can't be sure before you tried, I'd add...

> but whenever people come down to identify what
> those actual differences are, they seem to be an adaptation or new
> implementation of existing thought

Well, yes. And still until recently they weren't used that much in that 
specific symbiotic combination.

> that's good and
> fine......but don't oversell it

How do you know whether something is oversold?

> ....it isn't the silver bullet,

That's true. For many teams it seems to improve things somewhat, though.

> if XP
> reduces the iteration time in evolutionary prototyping and
> iterative development process then good

I think XP does a little bit more than that....

> ...but it doesn't negate the
> value of those concepts.

Did someone say it did???

> the logic and value of
> them could be traced back into the established knowledge of 40 years
> of software experience....

There is one very significant difference: 40 years ago, feedback was slow 
and costly - it was expensive even to compile a program, and it took ages. 
(That's what I hear, at least - I didn't exist at that time.) Therefore it 
was important to check everything extensively before taking the next step.

Today, the IDE checks the syntax as fast as I type; and running an extensive 
suite of tests is a matter of seconds. That considerably changes the forces 
of software development.

Cheers, Ilja 


0
it3974 (470)
10/9/2004 6:30:25 PM
Robert Grace wrote:
> Ilja Preuß wrote:
>> That's simply not true. XP is expected to be adapted.
>>
>> http://www.xprogramming.com/Practices/justrule.htm
>> http://www.xprogramming.com/xpmag/jatNotRules.htm
>
> Yes, of course, that's exactly the attitude we've all seen
> over the years from XPers.

I don't follow you.

> I guess that you need to work the same hours so you can
> attend the same meeting

We have a daily Stand Up Meeting at 11 o'clock - it's so late because one of 
the team members is a late riser. I typically start work between 8:30 and 
9:00. One of our two project managers doesn't attend on Tuesdays. One of the 
team members only works 7 hours/day instead of 8 like the others.

Works quite well.

> have the same setup so you can
> pair program

Half of our team works on Linux, half on Windows. We all use Eclipse by 
choice. Compiler settings are standardized, of course. The remaining 
settings show some individuality, though the important shortcuts seem to 
converge naturally.

Works quite well.

> while following the same coding standard

We decided on a coding standard, because it made our lives easier, yes. If 
your team finds that it doesn't need one - who is going to blame you???

> while programming test first is a false impression?

What I find more tragic is that I'm forced to program in Java... ;)

>>> Even worse XP paints anywone who wants
>>> to think and design as a BDUFer,
>>
>> Not true.
>
> So BDUF is ok with XP. Didn't know that.

Didn't say that. You can think and design without doing BDUF.

One of my coworkers used to stare at a blank wall for hours before starting 
to code. That was ok - he needed that. He doesn't need it any longer, 
though - he has learned to evolve the design and now finds that his BUF 
staring would just be a waste of time. He still stares at the monitor for a 
minute from time to time while pairing. It's something you get used to.

Cheers, Ilja 


0
it3974 (470)
10/9/2004 6:58:40 PM
On Sat, 09 Oct 2004 06:40:19 -0700, Robert Grace wrote:



> Ilja Preu� wrote:
>> That's simply not true. XP is expected to be adapted.
>> 
>> http://www.xprogramming.com/Practices/justrule.htm
>> http://www.xprogramming.com/xpmag/jatNotRules.htm
> 
> Yes, of course, that's exactly the attitude we've all seen over the
> years from XPers.
> 
> I guess that you need to work the same hours so you can attend the same
> meeting have the same setup so you can pair program while following the
> same coding standard while programming test first is a false impression?
> 
> 
>>>Even worse, XP paints anyone who wants to think and design as a BDUFer,
>> 
>> Not true.
> 
> So BDUF is ok with XP. Didn't know that. What is XP again?

I don't think that's what Ilja said...

Thinking and designing is a good idea.  In fact, we should keep doing it
instead of limiting it to an up front phase.



0
droby2 (108)
10/9/2004 7:10:37 PM
Mark Nicholls wrote:
> try reading Henry Mintzberg.

Do you have a specific book in mind?

Thanks, Ilja 


0
it3974 (470)
10/9/2004 7:15:37 PM
>
> > If I ask you to write a test, you will ask me what I want you to
> > test, and how that behaviour is exposed...i.e. you will want me to do
> > the analysis and design.
>
> Analysis, yes; design no.

if we create a method signature...to me that is design.

>
> > Analysis then design then encode then refactor, this isn't new to me,
> > it's just iterative development.
>
> I think the main difference is not in the practices, but in the philosophy.
> Even with iterative development, traditionally people will want to get the
> design "as right as possible" before starting to code

Again I think this is a mild misrepresentation.

I want to get it as right as possible given that the cost of that is not
greater than the benefit of doing something else.

Like writing some code and some tests. Or checking back with the client that
where we've gone is sort of what they were thinking...all sorts of other
stuff.

At the heart of iterative development is risk analysis...if the risks are
associated with the code...i.e. it is demanding in some sense,
architecturally or performance or whatever, I go there next...if the risk is
that the development team may not actually understand what the client
wants...then I would try to mitigate there, it would not be sensible for me
to dive into testing, if I wasn't sure what I wanted to test.

The driver to me is risk/cost/benefit.

> - they might refactor,
> but will see it as a necessary evil instead of the main practice to get to a
> good design. They might write automated tests, but they will see it more as
> a practice to find bugs instead of preventing them. And they typically will
> want their testing to be minimally invasive, in contrast to specifically
> using the tests to form the code.
>

Agreed.

> But perhaps all old news to you. In my experience, it isn't to the majority
> of developers, though.

It's not all old news; as I've said, if we get to 90% of this being old hat
that we can agree on, then that is good. We can then argue about the last
10%....and maybe agree on some of that as well.

>
> >> So are you saying that deciding on part of the design, coding that
> >> part, and *then* testing and debugging it until it works, is
> >> atypical for traditional software development?
>
> Sadly you didn't reply to this...

What do you mean by testing...there seems to be an idea that we write the
code and then create tests, I don't believe this is atypical or typical.
Certainly as a C programmer asserts explicitly exist as part of the micro
development process, compiling often (testing?) is normal, running against a
test harness (testing?) is quite normal....I accept TDD may move even
further, but to stylise typical as waterfall would be unhelpful....at
least to me.

>
> >>> iterative development existed long
> >>> before XP was a twinkle in anyone's eye.
> >>
> >> I don't remember anyone stating something different. Perhaps XP is
> >> more than "just" iterative development.
> >
> > That's fine....perhaps it is...the statements seem to imply not that
> > it is more, but that it is different.
>
> Well, every method is more than just being iterative, of course. And it's
> part of that "more" that's different.

OK, that's fine, I started this because of statements about XP contradicting
the logic of RAD, to me it doesn't, it builds on it, and by taking a club to
RAD we may simply be pushing ourselves into entrenched positions.

>
> > In A we do B, in C we do D.
> >
> > rather than
> >
> > in A we do B, in C we do B and D.
> >
> > there is no revolution, just evolution.
>
> As far as I know, there is no contradiction between evolution and
> revolution. A small step in an evolution can often have revolutionary
> consequences - especially in complex systems.

evolutions can create revolutionary consequences...but the process is
evolving; claiming it's a revolutionary process though loses any potential
for consensus...XP to me is XRAD, that should be a compliment, not a
problem.

>
> > i.e. we should be able to consensually agree about B(!) and then go
> > on to argue about the value of D
>
> The value of a process isn't simply the sum of the values of its practices.
> It's to a great extent defined by its philosophy, by its values and their
> impact on why, when, how and how much the practices are used, and how they
> interact.

Then we can discuss the last 10%.

>
> Therefore I'm not sure that "agreeing about B" buys us much.
>

90%.

>
> >>> It is called iterative development, or spiral development lifecycle.
> >>
> >> That still has testing *after* coding in the cycle, if I remember
> >> correctly. It still has design before coding inside an iteration,
> >> hasn't it?
> >
> > I agree, but I cannot see how you write a test before you know what
> > to test or what the thing we want to test looks like
>
> I only need to know what it looks like from the outside, I can decide on
> that while writing the test, and that decision can be a rough first cut
> that gets improved once I have a green bar.

OK.

>
> >, I accept that
> > you are potentially moving the test before writing the
> > implementation, where 'traditionally' it would be done at the same
> > time via assertions or after via unit testing.
>
> Yes. And that makes a *big* difference. It's possibly as revolutionary as
> physicians washing their hands before the surgery instead of only
> afterwards.

maybe.

>
> > If you write all the tests before writing any code,
>
> Just for the record: I don't - that's not what TDD is about.

then I am bamboozled slightly.

>
> > I would suggest
> > that the normal use of assertion inside code, reduced the iteration
> > even further
>
> How would they do that??? Wouldn't you still need outside test code that
> called the production code?

yes, but embedding your tests in your code means that you are testing all
code all of the time....that's as clear as mud!

If you are writing a unit test for class A, and A uses B, and B is stuffed
full of assertions, then the test for A does some testing on B. So in a
sense writing A is in fact writing a test for B. XXP!
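
A contrived sketch of what I mean (invented names, no real library):

    #include <assert.h>

    /* B: stuffed full of assertions */
    int b_divide(int num, int denom)
    {
        assert(denom != 0);              /* B checks its own precondition */
        return num / denom;
    }

    /* A: uses B internally */
    int a_average(int total, int count)
    {
        return b_divide(total, count);
    }

    /* a unit test written only for A... */
    void test_a_average(void)
    {
        assert(a_average(10, 2) == 5);   /* ...also trips every assert in B */
    }

    int main(void)
    {
        test_a_average();
        return 0;
    }

Every path the A test drives through B gets B's internal checks for free.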

>
> > is this XXP, or just the use of assertions in
> > software development?
>
> If "taken to the extremes", it's called Design By Contract, is
> orthogonal to TDD as far as I can tell, and far from being common
> practice. It's certainly worth a look.

No I'm talking about embedding assertions inside the code, as opposed to
outside in the tests....it's old hat....but very powerful.

>
> > I do see interesting approaches, new ideas, some of which may be
> > sensible, some not,
>
> Of which you can't be sure before you tried, I'd add...

True, but as I pointed out, in order to do something, I need to have a
reasonable sense that it is better than what I'm doing now....I could hop to
work...I've never tried, but I think walking is better.

I have and use n-unit...I see it as a test harness. I follow many of the
practices of RAD...so I should be 90% along the XP, BP route.

>
> > but whenever people come down to identify what
> > those actual differences are, they seem to be an adaptation or new
> > implementation of existing thought
>
> Well, yes. And still until recently they weren't used that much in that
> specific symbiotic combination.

Possibly, possibly not....RAD has been around since the '90s and is big now...very
big....probably the norm....and 90% in the same line as XP etc.

>
> > that's good and
> > fine......but don't oversell it
>
> How do you know whether something is oversold?

When people start claiming revolution when there is only evolution, when
they start rewriting history in order to make the claims seem better, when
the definition of typical is waterfall and not RAD (at least as well).

how do you know it isn't...I feel it is...which is why we get 300+ posts
with people falling into entrenched positions.


>
> > ....it isn't the silver bullet,
>
> That's true. For many teams it seems to improve things somewhat, though.

That's good, genuinely. I'd be interested to know what they were doing
before...but it's still good.

>
> > if XP
> > reduces the iteraction time in evolutionary prototyping and
> > iteractive development process then good
>
> I think XP does a little bit more than that....

maybe, maybe not...that's still a good thing to me though.

>
> > ...but it doesn't negate the
> > value of those concepts.
>
> Did someone say it did???

it all started....as these things often do....with a relatively innocent
swipe at RAD....and now I'm embroiled.

I usually steer clear of process discussions as I don't particularly
subscribe to any as a rule.

>
> > the logic and value of
> > them could be traced back into the established knowledge of 40 years
> > of software experience....
>
> There is one very significant difference: 40 years ago, feedback was slow
> and costly - even just compiling a program was expensive and took ages.
> (That's what I hear, at least - I didn't exist at that time.) Therefore it
> was important to check everything extensively before taking the next step.

Which is why I subscribe to iterative development.

>
> Today, the IDE checks the syntax as fast as I type; and running an
> extensive suite of tests is a matter of seconds. That considerably
> changes the forces of software development.
>
I like VB6 as well!

oh no, I've just undermined all potential credibility.

Mark


0
Nicholls.Mark (1061)
10/11/2004 12:20:37 PM
> > try reading Henry Mintzberg.
>
> Do you have a specific book in mind?
>
> Thanks, Ilja

Strategy Safari.


0
Nicholls.Mark (1061)
10/11/2004 12:21:53 PM
> First, put this thread aside to resume reading on Monday. Proceed now at
> your own risk. :)

It's Monday.

>
> > pullNextToken()
> > what does that do...how did you name it...how do you know what it
> > returns...how do you know what to pass it, how do you know what to test
> > for...i.e. you have some preconceived idea of what it does.
>
> Doing your analysis "on paper" amounts to jotting down thoughts. These
> are probably in somewhat symbolic form - squares, arrows, and so on. But
> you could just as well do it in English, I suppose.

OK, pictures are a very powerful communication medium, it is far easier to
engage a 'customer' or any other stakeholder with a few boxes and arrows.
Clients are usually very good at process analysis (using a few naive
pictures), because often it's part of their job (informally); use cases are
often quite natural, and the jump to logical constructs like classes or ER
models is a hugely powerful thing. Don't get the poor soul to sign these
things off, as people sometimes do, it's unfair, but it is a very valuable
way to communicate and establish the way ahead.

>
> Imagine you were writing French, and for the fun of it decided to write
> down your analysis thoughts in French. At first the process would be
> quite laborious - you'd have to formulate your thoughts in English to
> get them into a coherent state, then translate to French. After a while,
> though, as you grew proficient in French, you would /directly/ put down
> your thoughts in French. (I know I'm painting a rosy picture - past a
> certain age one's brain is no longer plastic enough to learn to think in
> a new language, so it might take forever to get to that level of
> proficiency - but the process is a good enough description.)
>
> Now, in the above, imagine replacing "French" with "tests". Phlip is
> doing analysis in the language of tests. His "dirt simple example" is
> the same as your quick jottings on paper, but it will morph into an
> executable test seamlessly.

OK, but try building a bridge like that.

Engineers do not exhaustively model the physics of a construction, they
sketch, communicate and model generally in pictures based on established
architectures....I think doing this in tests is a step backwards. By all
means construct your tests to test your construction, but I don't believe it
is a sensible way to design the bridge.

I accept this is a way to do it...I don't think it's the best way though.


0
Nicholls.Mark (1061)
10/11/2004 12:38:29 PM
Mark Nicholls wrote:

> OK, but try building a bridge like that.

They did. But it took 500,000 years of iterations. Each bridge you see is
the result of one compile. The source is the design. The next engineer will
learn from the last bridge, build the next one, and tweak the design just a
little.

> Engineers do not exhaustively model the physics of a construction, they
> sketch, communicate and model generally in pictures based on established
> architectures....I think doing this in tests is a step backwards. By all
> means construct your tests to test your construction, but I don't
> believe it is a sensible way to design the bridge.

Software doesn't have gravity, or material costs. So we have only begun to
explore the space of all possible useful designs.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/11/2004 2:15:39 PM
Mark Nicholls wrote:

> Strategy Safari.

Thanks again! 


0
it3974 (470)
10/11/2004 6:18:45 PM
Mark Nicholls wrote:
>>> If I ask you to write a test, you will ask me what I want you to
>>> test, and how that behaviour is exposed...i.e. you will want me to do
>>> the analysis and design.
>>
>> Analysis, yes; design no.
>
> if we create a method signature...to me that is design.

Yes. I will decide on that while writing the test - just tell me what 
behaviour you want. And I actually might change it once the test runs...
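
To make that concrete, a hypothetical C sketch - every name below was
invented while typing the test, not before:

    #include <assert.h>
    #include <string.h>

    typedef struct { const char *pos; } tokenizer;

    /* this signature is *decided* here, in the test */
    const char *pull_next_token(tokenizer *t);

    void test_pull_next_token(void)
    {
        tokenizer t = { "a b" };
        assert(strcmp(pull_next_token(&t), "a") == 0);
        assert(strcmp(pull_next_token(&t), "b") == 0);
    }

    /* rough first cut, written after the test; improved on a green bar */
    const char *pull_next_token(tokenizer *t)
    {
        static char buf[32];
        int i = 0;
        while (*t->pos == ' ')
            t->pos++;
        while (*t->pos != '\0' && *t->pos != ' ' && i < 31)
            buf[i++] = *t->pos++;
        buf[i] = '\0';
        return buf;
    }

    int main(void)
    {
        test_pull_next_token();
        return 0;
    }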

>>> Analysis then design then encode then refactor, this isn't new to
>>> me, it's just iterative development.
>>
>> I think the main difference is not in the practices, but in the
>> philosophy. Even with iterative development, traditionally people
>> will want to get the design "as right as possible" before starting
>> to code
>
> Again I think this is a mild misrepresentation.
>
> I want to get it as right as possible given that the cost of that is
> not greater than the benefit of doing something else.

Well, yes, of course.

What seems to be rather revolutionary to many people is the
suggestion that, when using a certain set of practices in concert with 
modern technology, the cost of trying to get the design right upfront is 
virtually always at least as high as getting to the right design by starting 
with a minimal design and improving it continuously.

> if the
> risk is that the development team may not actually understand what
> the client wants...then I would try to mitigate there, it would not
> be sensible for me to dive into testing, if I wasn't sure what I
> wanted to test.

Well, as an aside, one way to get the understanding would be to write the 
tests together with the client...

> The driver to me is risk/cost/benefit.

Naturally. The question is how to deal with a specific risk, though. To 
understand a requirement, do you let the customer check a requirements 
document, or a running system?

>> - they might refactor,
>> but will see it as a necessary evil instead of the main practice to
>> get to a good design. They might write automated tests, but they
>> will see it more as a practice to find bugs instead of preventing
>> them. And they typically will want their testing to be minimally
>> invasive, in contrast to specifically using the tests to form the
>> code.
>>
>
> Agreed.

For those people, it is likely that they will have some reservations about
doing XP. Perhaps the practices aren't fundamentally different, but doing
them would require them to question some deeply held beliefs - "never touch
a running system", "tests can't prevent bugs, only find some", etc.

I think that is why XP often feels revolutionary - not because of the
practices, but because of the assumptions you need to make to feel safe
using them.

>>>> So are you saying that deciding on part of the design, coding that
>>>> part, and *then* testing and debugging it until it works, is
>>>> atypical for traditional software development?
>>
>> Sadly you didn't reply to this...
>
> What do you mean by testing...

Good question - and I don't have an answer at hand... :o

>> Well, every method is more than just being iterative, of course. And
>> it's part of that "more" that's different.
>
> OK, that's fine, I started this because of statements about XP
> contradicting the logic of RAD

I must have missed that, sorry.

> to me it doesn't, it builds on it,
> and by taking a club to RAD we may simply be pushing ourselves into
> entrenched positions.

I don't know enough about RAD to comment.

But again in general I don't see an inherent contradiction between building 
on something existing and being revolutionary...

>> As far as I know, there is no contradiction between evolution and
>> revolution. A small step in an evolution can often have revolutionary
>> consequences - especially in complex systems.
>
> evolutions can create revolutionary consequences...but the process is
> evolving, claiming it's a revolutionary process though looses any
> potential for consensus

Well, perhaps those people claiming that XP "is a revolutionary process" 
just mean something different from what you understand it to mean?


>>> If you write all the tests before writing any code,
>>
>> Just for the record: I don't - that's not what TDD is about.
>
> then I am bamboozled slightly.

Are you?

>>> I would suggest
>>> that the normal use of assertion inside code, reduced the iteration
>>> even further
>>
>> How would they do that??? Wouldn't you still need outside test code
>> that called the production code?
>
> yes, but embedding your tests in your code means that you are
> testing all code all of the time....that's as clear as mud!

If you don't have external tests, the first time the assertions would be 
executed would be in production!

> If you are writing a unit test for class A, and A uses B, and B is
> stuffed full of assertions, then the test for A does some testing on
> B. So in a sense writing A is in fact writing a test for B. XXP!

The same is true without assertions. When A uses B and B doesn't work 
correctly, A doesn't work correctly either.

>>> is this XXP, or just the use of assertions in
>>> software development?
>>
>> If "taken to the extremes", it's called Design By Contract, is
>> orthogonal to TDD as far as I can tell, and far from being common
>> practice. It's certainly worth a look.
>
> No I'm talking about embedding assertions inside the code, as opposed
> to outside in the tests....

What makes you think that I wasn't???

>>> I do see interesting approaches, new ideas, some of which may be
>>> sensible, some not,
>>
>> Of which you can't be sure before you tried, I'd add...
>
> True, but as I pointed out, in order to do something, I need to have a
> reasonable sense that it is better than what I'm doing now....I could
> hop to work...I've never tried, but I think walking is better.

I think I tried that in my early years of school... ;)

And I was seriously thinking about trying inline skates when they became 
popular here some years ago.

> I have and use n-unit...I see it as a test harness. I follow many of
> the practices of RAD...so I should be 90% along the XP, BP route.

That doesn't mean that you get 90% of the benefits, though. Actually it 
doesn't even mean that it needs to feel similar to TDD/XP at all...

>> Today, the IDE checks the syntax as fast as I type; and running an
>> extensive suite of tests is a matter of seconds. That considerably
>> changes the forces of software development.
>>
> I like VB6 as well!

:eek: ;)

Cheers, Ilja 


0
it3974 (470)
10/12/2004 1:19:31 PM

Ilja Preuß wrote:
> What seems to be rather revolutionary to many people is the 
> suggestion that, when using a certain set of practices in concert with 
> modern technology, the cost of trying to get the design right upfront is 
> virtually always at least as high as getting to the right design by starting 
> with a minimal design and improving it continuously.

"Virtually always" across all projects and all domains? If you
are designing your nuclear power plant you are happy with evolving
a design rather than figuring out what makes a safe plant? Same
for your car braking system? Are you comfortable evolving a
security system rather than learning about security, algorithms,
attacks, etc? I don't find that position credible.
0
grace33 (48)
10/12/2004 2:20:01 PM
I've just started reading Meyer, and my brain's beginning to hurt on page 50;
it's very, very dry.

But it says on the 1st page...

"...inevitable...is the reaction that meets the introduction of a new
methodological principle: (1) it's trivial (2) it cannot work (3) that's how I
did it all along"

I consider myself chastened by (3)....but I won't be too hard on myself.

"Ilja Preuß" <it@iljapreuss.de> wrote in message
news:416bdc90@news.totallyobjects.com...
> Mark Nicholls wrote:
> >>> If I ask you to write a test, you will ask me what I want you to
> >>> test, and how that behaviour is exposed...i.e. you will want me to do
> >>> the analysis and design.
> >>
> >> Analysis, yes; design no.
> >
> > if we create a method signature...to me that is design.
>
> Yes. I will decide on that while writing the test - just tell me what
> behaviour you want. And I actually might change it once the test runs...

you wouldn't change the behaviour I hope...though you might put it in a more
generic context.

To me what is happening is...
requirements gathering (me telling you)
analysis (you interpreting what I'm saying)
design (you deciding how this behaviour should be accessed).

We're splitting hairs, or at least Meyer would contend I am....to me this is
code and fix, mitigated by a test....that's not to say it's bad...it may be
the best way of doing it....and I think writing the client (test) before any
implementation is mildly different.

I just worry that any system wide/holistic architecture may be short changed
by this mentality...though you yourself may take it as read, that you need
to sit down, have a cup of coffee, have a chat with your client about his
holiday/dog/house/beard...but these activities to me are actually very
important in establishing an overall feel for what's required.

>
> >>> Analysis then design then encode then refactor, this isn't new to
> >>> me, it's just iterative development.
> >>
> >> I think the main difference is not in the practices, but in the
> >> philosophy. Even with iterative development, traditionally people
> >> will want to get the design "as right as possible" before starting
> >> to code
> >
> > Again I think this is a mild misrepresentation.
> >
> > I want to get it as right as possible given that the cost of that is
> > not greater than the benefit of doing something else.
>
> Well, yes, of course.

OK, but this is not explicit in TDD or XP (or at least not to my ignorant
knowledge of them).

I think this is a better heuristic (I didn't invent it), and may well end up
with you calling your client and chatting about his dog, and then
saying...'ooo, and you know when you said you wanted it to do XYZ, did you
mean it or did you mean WXYZ'.

>
> What seems to be rather revolutionary to many people is the
> suggestion that, when using a certain set of practices in concert with
> modern technology, the cost of trying to get the design right upfront is
> virtually always at least as high as getting to the right design by
> starting with a minimal design and improving it continuously.

So the heuristic is, get the design good enough to do a bit of TDD, but no
further?

"good enough" is open to interpretation, and I wouldn't necessarily
disagree...though it does irk me slightly, I can feel hundreds of
developers stampeding to their keyboards (in pairs of course) waiting to get
to grips with the first line of code....what did he say?.....white, with two
sugars?.....what sort of coffee?.....java!

So maybe it's just a question of emphasis.

>
> > if the
> > risk is that the development team may not actually understand what
> > the client wants...then I would try to mitigate there, it would not
> > be sensible for me to dive into testing, if I wasn't sure what I
> > wanted to test.
>
> Well, as an aside, one way to get the understanding would be to write the
> tests together with the client...

As a Luddite myself, most of my clients thankfully share my cynical view of
technology...I think they prefer a few boxes and arrows with some hand
waving....but I accept that many others would be more than willing.

Is it an efficient way to gather requirements?

I wouldn't have thought so on day 1, but there may well be a place further down
the line as a form of evolutionary prototyping.

>
> > The driver to me is risk/cost/benefit.
>
> Naturally. The question is how to deal with a specific risk, though. To
> understand a requirement, do you let the customer check a requirements
> document, or a running system?

good question.

both are hugely risky.

there seems to be a feeling among 'managers' to take a contractual SLA
approach to SD that I don't really think works (thus I actually quite like
the agile manifesto for opposing this sort of thing); to me requirements are
a joint process, but the onus really is on the supplier to get at the correct
requirements. If the client signs them off and they're wrong, I would
consider it my failure first.

Generally if there were an ambiguity about the requirement I would go for
the sit-down-with-a-cup-of-coffee approach, tiptoe up to the problem, and
explain where the ambiguity is and how it may affect the business process
under different interpretations; I don't think a TDD approach really fits
most of the time in this sort of context.

>
> >> - they might refactor,
> >> but will see it as a necessary evil instead of the main practice to
> >> get to a good design. They might write automated tests, but they
> >> will see it more as a practice to find bugs instead of preventing
> >> them. And they typically will want their testing to be minimally
> >> invasive, in contrast to specifically using the tests to form the
> >> code.
> >>
> >
> > Agreed.
>
> For those people, it is likely that they will have some reservations about
> doing XP. Perhaps the practices aren't fundamentally different, but doing
> them would require them to question some deeply held beliefs - "never
> touch a running system", "tests can't prevent bugs, only find some", etc.

I subscribe in principle to both of those views, the second more than the
first, the first is usually a question of scale...i.e. if I have 1 live
server app, I may be more brave than 100,000.

>
> I think that is why XP often feels revolutionary - not because of the
> practices, but because of the assumptions you need to make to feel safe
> using them.

I haven't seen it then, as I subscribe to the things you don't.

>
> >>>> So are you saying that deciding on part of the design, coding that
> >>>> part, and *then* testing and debugging it until it works, is
> >>>> atypical for traditional software development?
> >>
> >> Sadly you didn't reply to this...
> >
> > What do you mean by testing...
>
> Good question - and I don't have an answer at hand... :o
>
> >> Well, every method is more than just being iterative, of course. And
> >> it's part of that "more" that's different.
> >
> > OK, that's fine, I started this because of statements about XP
> > contradicting the logic of RAD
>
> I must have missed that, sorry.

You did, I was sitting around minding my own business thinking "now don't
get involved in that XP thread, it looks like a nightmare", and Phlip took a
swipe at McConnell's view of visibility versus schedule and stupidly I got
involved.

>
> > to me it doesn't, it builds on it,
> > and by taking a club to RAD we may simply be pushing ourselves into
> > entrenched positions.
>
> I don't know enough about RAD to comment.

It's a good book, McConnell.
It's not about OO (which I think is a good thing in a process book), but
it's stuffed with well-founded common sense based on hard data.

>
> But again in general I don't see an inherent contradiction between
> building on something existing and being revolutionary...

To me the root is 'revolt', i.e. to overturn some existing authority... I
don't see it, but it's just semantics so we may as well leave it.

>
> >> As far as I know, there is no contradiction between evolution and
> >> revolution. A small step in an evolution can often have revolutionary
> >> consequences - especially in complex systems.
> >
> > evolutions can create revolutionary consequences...but the process is
> > evolving; claiming it's a revolutionary process, though, loses any
> > potential for consensus
>
> Well, perhaps those people claiming that XP "is a revolutionary process"
> just mean something different from what you understand it to mean?
>

yep, it will make me wince though when it's said.......cost two shillings,
made of wood, in my day the sun shone all year, it never rained, we were
poor as church mice, but twice as happy as people nowadays, cars! we had
those as well, but we couldn't afford one, they were at least twelve
shillings and sixpence, we had tandem unicycles, made of wood....etc.

>
> >>> If you write all the tests before writing any code,
> >>
> >> Just for the record: I don't - that's not what TDD is about.
> >
> > then I am bamboozled slightly.
>
> Are you?

yes.

I think I need to read some standard text to really get a feel for what it's
about, different people have different interpretations, which makes it
difficult for me to actually work out what the common theme is....I thought
it was testing up front.

>
> >>> I would suggest
> >>> that the normal use of assertion inside code, reduced the iteration
> >>> even further
> >>
> >> How would they do that??? Wouldn't you still need outside test code
> >> that called the production code?
> >
> > yes, but embedding your tests in your code means that you are
> > testing all code all of the time....that's as clear as mud!
>
> If you don't have external tests, the first time the assertions would be
> executed would be in production!

OK, but unit tests are old hat....whatever Meyer may say about me claiming
"that's how I did it all along", the 'standard' approach for me is: write
some code and embed loads of assertions all over the place, some simple,
some complex, e.g. doing a calculation in two completely different ways given
the inputs and comparing results, maybe one in the SQL and the other in the
middle tier. Write a test harness...this to me usually comes after I've
actually got some implementation, but only just; compile, run the test
harness and then see what happens, then iterate. The loop may not be as
tight as <10 lines of code, but it's probably <40....."Writing Solid Code",
Maguire 1993, another good book.
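
For example - made-up names and numbers, just to show the shape of it:

    #include <assert.h>

    /* the obvious calculation... */
    int order_total_pence(int unit_pence, int qty, int discount_pence)
    {
        int total = unit_pence * qty - discount_pence;
    #ifndef NDEBUG
        /* ...re-derived a completely different way and compared; in real
           code one side might come back from the SQL instead */
        {
            int check = -discount_pence;
            int i;
            for (i = 0; i < qty; i++)
                check += unit_pence;
            assert(total == check);
        }
    #endif
        return total;
    }

    int main(void)
    {
        return order_total_pence(250, 4, 50) == 950 ? 0 : 1;
    }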

>
> > If you are writing a unit test for class A, and A uses B, and B is
> > stuffed full of assertions, then the test for A does some testing on
> > B. So in a sense writing A is in fact writing a test for B. XXP!
>
> The same is true without assertions. When A uses B and B doesn't work
> correctly, A doesn't work correctly either.

That's not necessarily true...A can work by accident...it may ignore the
erroneous result, or B may actually work in a scenario it isn't meant to,
and may not work in a similar yet slightly different scenario.

My code will also go......

"assertion failure line 231 in module B.c in function Foo....argument Bar
cannot be negative" rather than
"assertion failure line 95 in Test TestA ABC does not equal XYZ"

If you do this as you go along then running a simple unit test actually
exponentially exercises lots of little tests.

Now maybe you can do this in n-unit...I haven't tried...I've viewed it as
the harness not the internal assertions, and I don't like the idea of n-unit
being statically linked into production code, rather than just assertions
conditionally compiled in.
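
Something along the lines of the standard <assert.h> arrangement -
hand-rolled here just to show the mechanism, not any particular library's
macro:

    /* debug builds get the checks and the diagnostics; release builds
       (compiled with -DNDEBUG) get nothing at all, so no test framework
       ends up statically linked into production code */
    #ifdef NDEBUG
    #define ASSERT(cond) ((void)0)
    #else
    #include <stdio.h>
    #include <stdlib.h>
    #define ASSERT(cond) \
        ((cond) ? (void)0 \
                : (fprintf(stderr, "assertion failure line %d in module %s: %s\n", \
                           __LINE__, __FILE__, #cond), abort()))
    #endif

    int main(void)
    {
        int denominator = 4;
        ASSERT(denominator != 0);    /* vanishes entirely under NDEBUG */
        return (100 / denominator) == 25 ? 0 : 1;
    }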

>
> >>> is this XXP, or just the use of assertions in
> >>> software development?
> >>
> >> If "taken to the extremes", it's called Design By Contract, is
> >> orthogonal to TDD as far as I can tell, and far from being common
> >> practice. It's certainly worth a look.
> >
> > No I'm talking about embedding assertions inside the code, as opposed
> > to outside in the tests....
>
> What makes you think that I wasn't???

I've only just started Meyer, I am aware of DbC and assertions as two
distinct things; it's only now that the two things have been linked in my
brain.

>
> >>> I do see interesting approaches, new ideas, some of which may be
> >>> sensible, some not,
> >>
> >> Of which you can't be sure before you tried, I'd add...
> >
> > True, but as I pointed out, in order to do something, I need to have a
> > reasonable sense that it is better than what I'm doing now....I could
> > hop to work...I've never tried, but I think walking is better.
>
> I think I tried that in my early years of school... ;)

I used to walk in the lotus position on my knees....not to school but every
now and then.

>
> And I was seriously thinking about trying inline skates when they became
> popular here some years ago.

That probably is a better way.

>
> > I have and use n-unit...I see it as a test harness. I follow many of
> > the practices of RAD...so I should be 90% along the XP, BP route.
>
> That doesn't mean that you get 90% of the benefits, though. Actually it
> doesn't even mean that it needs to feel similar to TDD/XP at all...

maybe not, but I have yet to 'believe'; after Meyer I may try reading some
hard core XP stuff, and come back quietly chanting 'xp is good, xp is best,
xp is more than rad...' etc.

>
> >> Today, the IDE checks the syntax as fast as I type; and running an
> >> extensive suite of tests is a matter of seconds. That considerably
> >> changes the forces of software development.
> >>
> > I like VB6 as well!
>
> :eek: ;)
>
It gets a bad rap for being easy! To me that's good, and a completely
different topic.

Regards

Mark.


0
Nicholls.Mark (1061)
10/12/2004 2:20:43 PM
Robert Grace wrote:
>
> Ilja Preuß wrote:

> > What seems to be rather revolutionary to many people is the
> > suggestion that, when using a certain set of practices in concert with
> > modern technology, the cost of trying to get the design right upfront is
> > virtually always at least as high as getting to the right design by
> > starting with a minimal design and improving it continuously.
>
> "Virtually always" across all projects and all domains? If you
> are designing your nuclear power plant you are happy with evolving
> a design rather than figuring out what makes a safe plant? Same
> for your car braking system? Are you comfortable evolving a
> security system rather than learning about security, algorithms,
> attacks, etc? I don't find that position credible.

The highly iterative design methodologies - in concert with pairing and
test-first - have been frequently shown to run with very low defect rates.

Would you like your local nuke designed using techniques that repeated
studies have shown increase their defect rate?

Your argument is "fallacy of the excluded middle". One can evolve a design
while researching safety at the same time. Evolving that design in code
makes it safer and more verified, than on paper.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces



0
phlip_cpp (3852)
10/12/2004 2:24:32 PM
Ilja Preuß wrote:

> > to me it doesn't, it builds on it,
> > and by taking a club to RAD we may simply be pushing ourselves into
> > entrenched positions.
>
> I don't know enough about RAD to comment.
>
> But again in general I don't see an inherent contradiction between
building
> on something existing and being revolutionary...

RAD was a movement to leverage these techniques:

 - an acceptable defect rate ("just good enough")
 - phased delivery
 - programming in the debugger
 - mild refactoring

The idea was to take a form-painter based language, like Visual Basic, slap
up some forms, rapidly add their code, catalog all the bugs, and ship when
the defect rate got low enough. Design quality was expected to only mildly
suck.

Like modern game programming, the technique also depended on hiring hotshots
who had delivered before, so they could do it again without an organization
learning to reproduce their results.

The idea was that modern form-painters and debuggers had progressed far
enough to institutionalize code-and-fix, within large iterations, limiting
the "week of all nighters" to once per quarter.

> Well, perhaps those people claiming that XP "is a revolutionary process"
> just mean something different from what you understand it to mean?

XP is revolutionary, but not for any reason that anyone seeking to migrate
to iterative techniques needs to know or deal with.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/12/2004 2:32:21 PM

Phlip wrote:
> The highly iterative design methodologies - in concert with pairing and
> test-first - have been frequently shown to run with very low defect rates.

How about its work rate? You seem to think a methodology replaces
knowledge. How do you build a security system approach that works?
What does any of what you said have to do with anything about knowing
about which algorithms are secure etc? Would you like your surgeon
not to have studied surgery or just figure it out on the fly?

> Would you like your local nuke designed using techniques that repeated
> studies have shown increase their defect rate?

You definitely have a hammer.

> Your argument is "fallacy of the excluded middle". One can evolve a design
> while researching safety at the same time. Evolving that design in code
> makes it safer and more verified, than on paper.

So on the space shuttle you would have had a few "accidents" before
you decided on the triplicate/voting system approach for
fault tolerance? Or you wouldn't have bothered having a goal
for reliability or having a strategy for getting there. You
would have started with while (1) { fly(); } and then say we'll
evolve it from there? My god.
0
grace33 (48)
10/12/2004 3:01:44 PM
Robert Grace wrote:
>
> Phlip wrote:

> > The highly iterative design methodologies - in concert with pairing and
> > test-first - have been frequently shown to run with very low defect
> > rates.
>
> How about its work rate? You seem to think a methodology replaces
> knowledge. How do you build a security system approach that works?

This is a different topic than whether design should be created
incrementally, with high feedback.

> What does any of what you said have to do with anything about knowing
> about which algorithms are secure etc? Would you like your surgeon
> not to have studied surgery or just figure it out on the fly?

Surgery ain't design.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/12/2004 3:35:56 PM
Robert,

> If you are designing your nuclear power plant you are happy
> with evolving a design rather than figuring out what makes
> a safe plant?

This is an appeal to emotion, and one that begs the question.

You are asking, in effect: "If you were designing a nuclear plant, would 
you be scared ?" There's only one sensible answer to this - yes, if I 
were designing a nuclear plant, I would be scared out of my pants of 
making even the slightest mistake in design.

That has nothing to do with comparing design processes. If we do show 
that an evolutionary design process yields, overall, a final design with 
fewer defects than a "pipelined" design process - then that is what we 
/should/ adopt to design a nuclear plant. "Figuring out what makes a 
safe plant" is part and parcel of the design objective for that kind of 
project, so anything which makes the design unsafe is a defect.

If you're designing a nuclear plant, are you happier with a process that 
produces more or fewer defects ?

Laurent
http://bossavit.com/thoughts/
0
laurent (379)
10/12/2004 3:53:38 PM
>
> > > to me it doesn't, it builds on it,
> > > and by taking a club to RAD we may simply be pushing ourselves into
> > > entrenched positions.
> >
> > I don't know enough about RAD to comment.
> >
> > But again in general I don't see an inherent contradiction between
> > building on something existing and being revolutionary...
>
> RAD was a movement to leverage these techniques:

is

>
>  - an acceptable defect rate ("just good enough")
>  - phased delivery
>  - programming in the debugger
>  - mild refactoring

I use McConnell to define RAD, and I don't recognise any of this except that
'phased delivery' is *one* of the best practices as a delivery technique,
where has the rest of it come from?

>
> The idea was to take a form-painter based language, like Visual Basic,
> slap up some forms, rapidly add their code, catalog all the bugs, and ship
> when the defect rate got low enough. Design quality was expected to only
> mildly suck.

Again this is not in RAD; there is a practice called evolutionary
prototyping (I think), where front ends are developed in rapid iterations
with the client. This is a good technique in my view, and it has absolutely
nothing to do with shipping with high levels of bugs.

>
> Like modern game programming, the technique also depended on hiring
> hotshots who had delivered before, so they could do it again without an
> organization learning to reproduce their results.

where are you getting this from?

>
> The idea was that modern form-painters and debuggers had progressed far
> enough to institutionalize code-and-fix, within large iterations, limiting
> the "week of all nighters" to once per quarter.
>

there is an acceptance of the week of all nighters.....that's life....

> > Well, perhaps those people claiming that XP "is a revolutionary process"
> > just mean something different from what you understand it to mean?
>
> XP is revolutionary, but not for any reason that anyone seeking to migrate
> to iterative techniques needs to know or deal with.
>

I try to keep out of this sort of thing, but you keep on portraying RAD as
something it is not, and then effectively stealing its clothes.


0
Nicholls.Mark (1061)
10/12/2004 4:00:10 PM

Phlip wrote:
>>How about its work rate? You seem to think a methodology replaces
>>knowledge. How do you build a security system approach that works?
> 
> This is a different topic than whether design should be created
> incrementally, with high feedback.

That's design in my book. If you are just talking about low
level coding details that's different. But design to me
is about making a solution that works and that encompasses
everything. With your very strong code fixation I can see
how you would miss this.

>>What does any of what you said have to do with anything about knowing
>>about which algorithms are secure etc? Would you like your surgeon
>>not to have studied surgery or just figure it out on the fly?
> 
> Surgery ain't design.

Sure it is. Most everything is design.
0
grace33 (48)
10/12/2004 4:02:33 PM
Laurent Bossavit wrote:
> Robert,
> 
>>If you are designing your nuclear power plant you are happy
>>with evolving a design rather than figuring out what makes
>>a safe plant?
> 
> 
> This is an appeal to emotion, and one that begs the question.

Your response is evasive and begs a response. I see
at the end you just resort to your false guru level mantra again.

It's not an appeal to anything other than how do you get
out of the cracker jack box prize level of responses this
group has evolved into and start dealing with real problems.


> If you're designing a nuclear plant, are you happier with a process that 
> produces more or fewer defects ?

I hear the cracker jack box opening...

How would you know what a defect was because you would have
no idea what should be done to make a safe plant? Getting
the wrong thing right isn't a win.

And your defect will be a doozy because you didn't bother
to understand your domain because you don't need to. You
have the magick formula that works the same way on any problem.

And it's no problem if you have to move the site two years into
your project because the river you picked for a cooling
source doesn't have enough flow in winter because you thought
any river is a good place to start, you aren't going to need it,
we'll just evolve it.

Or am i appealing to emotion again?
0
grace33 (48)
10/12/2004 4:21:30 PM
Mark Nicholls wrote:

> > RAD was a movement to leverage these techniques:
>
> is

May I have a show of hands? Who here calls what they do "RAD"?

> >  - an acceptable defect rate ("just good enough")
> >  - phased delivery
> >  - programming in the debugger
> >  - mild refactoring
>
> I use McConnell to define RAD, and I don't recognise any of this except
> that 'phased delivery' is *one* of the best practices as a delivery
> technique, where has the rest of it come from?

Read between the lines. And McConnell's /RAD/ book came out after that
definition of RAD circulated.

> > The idea was to take a form-painter based language, like Visual Basic,
> > slap up some forms, rapidly add their code, catalog all the bugs, and
> > ship when the defect rate got low enough. Design quality was expected
> > to only mildly suck.
>
> Again this is not in RAD; there is a practice called evolutionary
> prototyping (I think), where front ends are developed in rapid iterations
> with the client. This is a good technique in my view, and it has absolutely
> nothing to do with shipping with high levels of bugs.

If you pull the definition of RAD out of that mud, then what do you have
left to define? IID?

> > Like modern game programming, the technique also depended on hiring
> > hotshots who had delivered before, so they could do it again without
> > an organization learning to reproduce their results.
>
> where are you getting this from?

Personal observations?

Oh, sorry, I forgot - we can only cite textbooks here.

> > The idea was that modern form-painters and debuggers had progressed far
> > enough to institutionalize code-and-fix, within large iterations,
> > limiting the "week of all nighters" to once per quarter.
>
> there is an acceptance of the week of all nighters.....that's life....

There is not. XP teams routinely report the week before deployment is just
as relaxed as any other week. I'm sure some other methodologies report
similar effects...

> I try to keep out of this sort of thing, but you keep on portraying RAD as
> something it is not, and then effectively stealing its clothes.

If you call RAD IID, then what's left to define the "just good enough" and
"slap up a GUI" movements?

Recall at one time tool vendors promoted their tools as RAD because "you
just paint the GUI, and you are half done!"

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/12/2004 4:30:09 PM
Laurent Bossavit wrote:

> Robert,
>
> > If you are designing your nuclear power plant you are happy
> > with evolving a design rather than figuring out what makes
> > a safe plant?
>
> This is an appeal to emotion, and one that begs the question.
>
> You are asking, in effect: "If you were designing a nuclear plant, would
> you be scared ?" There's only one sensible answer to this - yes, if I
> were designing a nuclear plant, I would be scared out of my pants of
> making even the slightest mistake in design.

Didn't someone once say, "Write tests until fear turns into boredom"?

So I think I'd be writing, planning, researching, and designing the most
elaborate tests, in software and hardware, to do a nuke. And most of those
tests would stay in production (and hence have their own tests). Because
"leave the tests on in production" violates several methodology guidelines,
including you-know-what, I couldn't use those methodologies on a nuke.

According to Larman's book, I could use RUP, and test-first.

(Besides, I personally would be bucking for an energy cell plant, not a dumb
nuke whose only purpose is to create toxic waste and inflate my country's
nuclear weapons program's core competency...)

Chernobyl blew up because they were testing it, folks...

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/12/2004 4:30:26 PM
Robert,

Don't confuse an argument for evolutionary design with an argument 
against acquiring design skills.

> Would you like your surgeon not to have studied surgery or just
> figure it out on the fly?

A surgeon, in many cases - such as emergency wards - cannot rehearse the 
entirety of an operation before she performs it. She necessarily relies 
on a repertoire of skills (just like a programmer has internalized, say, 
the structure of a wide range of design patterns) but will assess the 
situation "on the spot", and respond as is appropriate - not 
(necessarily) according to a predetermined plan.

Laurent
http://bossavit.com/thoughts/
0
laurent (379)
10/12/2004 4:31:23 PM
"Robert Grace" <grace33@aol.com> wrote in message 
news:10mo0uib8se4732@news.supernews.com...

> And it's no problem if you have to move the site two years into
> your project because the river you picked for a cooling
> source doesn't have enough flow in winter because you thought
> any river is a good place to start, you aren't going to need it,
> we'll just evolve it.
>
> Or am i appealing to emotion again?

I'll just post their response since it's coming anyway: "Nuclear power 
plants don't compare to software. Software is flexible blah blah..."


Shayne Wissler
http://www.ouraysoftware.com



0
10/12/2004 4:35:17 PM
Robert Grace wrote:

> And it's no problem if you have to move the site two years into
> your project because the river you picked for a cooling
> source doesn't have enough flow in winter because you thought
> any river is a good place to start, you aren't going to need it,
> we'll just evolve it.

Well, you could move the site, or build a cooling water pipeline to the 
site, or find an alternative way of cooling.

All of the solutions are probably quite expensive - mostly because hardware 
is involved.

The premise of XP is that software can be built in a way that makes a whole 
lot of those kinds of changes inexpensive enough to make deferring decisions 
about them a reasonable choice.

Cheers, Ilja 


0
it3974 (470)
10/12/2004 4:38:42 PM

Laurent Bossavit wrote:
> A surgeon, in many cases - such as emergency wards - cannot rehearse the 
> entirety of an operation before she performs it. She necessarily relies 
> on a repertoire of skills (just like a programmer has internalized, say, 
> the structure of a wide range of design patterns) but will assess the 
> situation "on the spot", and respond as is appropriate - not 
> (necessarily) according to a predetermined plan.

Certainly. Part of any process is this. But that's not the whole of it.
There's the years of practice and study and research as well.
Having "a predetermined plan" and "no plan" is a false dichotomy.
0
grace33 (48)
10/12/2004 4:41:50 PM

Shayne Wissler wrote:

> I'll just post their response since it's coming anyway: "Nuclear power 
> plants don't compare to software. Software is flexible blah blah..."

It's all building. It's all design. It's all creating. It's all making
a system.
0
grace33 (48)
10/12/2004 4:49:53 PM

Ilja Preuß wrote:
> The premise of XP is that software can be built in a way that makes a whole 
> lot of those kinds of changes inexpensive enough to make deferring decisions 
> about them a reasonable choice.

I don't really care about premises. I care about results. Your result
in this case was horrible.
0
grace33 (48)
10/12/2004 4:50:52 PM
Robert,

> How would you know what a defect was because you would have
> no idea what should be done to make a safe plant?

You and I may not be qualified to say what makes a plant safe. The 
people designing a plant would be selected, in large part, for knowledge 
germane to that task. In coming up with a design, what they are doing 
/is/ encoding that knowledge into a specific solution.

Saying that the design process is not a straight pipeline is not the 
same as saying that the entire domain should be unstudied.

"Evolutionary design" doesn't mean "let design be done by novices".

> And it's no problem if you have to move the site two years into
> your project because the river you picked for a cooling
> source doesn't have enough flow in winter

It is a problem, obviously. The question is, when in the process of 
designing the nuclear plant can this decision be made ? If we don't have 
the appropriate site picked, is it absolutely impossible to go on with 
making any other decisions about the plant ?

"Evolutionary design" in this context would mean selecting a site that's 
"good enough", or maybe several candidates, assuming that they are 
representative of plant sites in general. We'd go on to design the rest 
of the plant based on these assumptions, with provisions for later 
revision. There's nothing sinister in that, and I suspect that it's the 
way nuclear plants are /actually/ designed.

Feedback from the later design decisions, or raising the issue above 
later on, may result in having to pick a different site, the 
characteristics of which /may/ impact significant subsets of the design 
decisions already made. Just how significant would be yet another 
attribute of the performance of the design process: if what we design is 
so intertwingled that getting a single decision wrong invalidates the 
whole ball o'wax, we won't get anywhere.

Laurent
http://bossavit.com/thoughts/
0
laurent (379)
10/12/2004 4:50:56 PM
Robert Grace wrote:
>
> Laurent Bossavit wrote:
> > Robert,
> >
> >>If you are designing your nuclear power plant you are happy
> >>with evolving a design rather than figuring out what makes
> >>a safe plant?
> >
> >
> > This is an appeal to emotion, and one that begs the question.
>
> Your response is evasive and begs a response. I see
> at the end you just resort to your false guru level mantra again.
>
> It's not an appeal to anything other than how do you get
> out of the cracker jack box prize level of responses this
> group has evolved into and start dealing with real problems.

A direct answer to the question "use XP on a nuke?":

Alistair Cockburn charts software projects in these categories:

 Life          |  L6  L20  L40  L100
               |
 Essential     |  E6  E20  E40  E100
 Money         |
               |
 Discretionary |  D6  D20  D40  D100
 Money         |
               |
 Comfort       |  C6  C20  C40  C100
               +---------------------
     number of   1~6  ~20  ~40  ~100
     engineers

An iterative lifecycle like RUP (which encourages both TDD and merciless
refactoring) can cover any project category, from trivial Web sites at C6,
to nuclear power plants at L100. [Larman2004]

If you advise moving to a policy of "less documentation is better", you will
get surprisingly good results with XP. However, it only covers the quadrant
from C6 to E20. If you simply must build a nuke, write more documentation,
within a process that supports it, to cover your ass (and slightly reduce
the odds the thing blows up).

A participant of the XP mailing list, whom I will refer to as Glen, works on
a project cleaning up after a breeder reactor. His project is CMM >=4 or
something. He has reported they have learned from XP, but they cannot pair
program, because anti-terrorism authorities need an exact papertrail of who
saw what code.

A direct answer, cited from a peer-reviewed book, with a supplemental
citation closer to the question's details. I suppose I'll get either an
insult or some form of evasion in reply.

> > If you're designing a nuclear plant, are you happier with a process that
> > produces more or fewer defects ?
>
> I hear the cracker jack box opening...
>
> How would you know what a defect was because you would have
> no idea what should be done to make a safe plant? Getting
> the wrong thing right isn't a win.

Okay, only Laurent wants to build a nuke without paperwork. You have
certainly eloquently and gracefully made your point, Robert.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/12/2004 4:54:05 PM
Shayne Wissler wrote:

> I'll just post their response since it's coming anyway: "Nuclear power
> plants don't compare to software. Software is flexible blah blah..."

The software for a nuke ain't flexible.

(I suspect we are discussing writing the software for a nuke, not building a
nuke in iterations. But you know the answer to that one too, Shayne..)

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/12/2004 4:55:41 PM

Laurent Bossavit wrote:

> You and I may not be qualified to say what makes a plant safe. The 
> people designing a plant would be selected, in large part, for knowledge 
> germane to that task.

You are part of the people. You have to learn. You can't
pass the buck and hide under the "I just do what I am told"
umbrella.


> Feedback from the later design decisions, or raising the issue above 
> later on, may result in having to pick a different site, the 
> characteristics of which /may/ impact significant subsets of the design 
> decisions already made. Just how significant would be yet another 
> attribute of the performance of the design process: if what we design is 
> so intertwingled that getting a single decision wrong invalidates the 
> whole ball o'wax, we won't get anywhere.

I guess you'll just have to stick with the simple safe problems then.
There are many many products where you have to make such
decisions. Being a professional is knowing how to get to the
point of making them and accepting the responsibility.
0
grace33 (48)
10/12/2004 4:56:16 PM
>
> > > RAD was a movement to leverage these techniques:
> >
> > is
>
> May I have a show of hands? Who here calls what they do "RAD"?

If I do it then, there is no 'was'....I do it more than any other technique.

>
> > >  - an acceptable defect rate ("just good enough")
> > >  - phased delivery
> > >  - programming in the debugger
> > >  - mild refactoring
> >
> > I use McConnell to define RAD, and I don't recognise any of this except
> > that 'phased delivery' is *one* of the best practices as a delivery
> > technique, where has the rest of it come from?
>
> Read between the lines. And McConnell's /RAD/ book came out after that
> definition of RAD circulated.

you deleted this line

"RAD was a movement to leverage these techniques:", followed by a complete
mischaracterisation.

If you don't like RAD then fine...but don't portray it as something it
isn't.


>
> > > The idea was to take a form-painter based language, like Visual Basic,
> > > slap up some forms, rapidly add their code, catalog all the bugs, and
> > > ship when the defect rate got low enough. Design quality was expected
> > > to only mildly suck.
> >
> > Again this is not in RAD; there is a practice called evolutionary
> > prototyping (I think), where front ends are developed in rapid iterations
> > with the client. This is a good technique in my view, and it has
> > absolutely nothing to do with shipping with high levels of bugs.
>
> If you pull the definition of RAD out of that mud, then what do you have
> left to define? IID?

unfortunately I don't know what IID is.

I personally define RAD by the book RAD; even if it wasn't the first
treatment, it is by far the most influential.

>
> > > Like modern game programming, the technique also depended on hiring
> > > hotshots who had delivered before, so they could do it again without
> > > an organization learning to reproduce their results.
> >
> > where are you getting this from?
>
> Personal observations?
>
> Oh, sorry, I forgot - we can only cite textbooks here.

It helps, otherwise we cannot judge whether your judgements are based on any
facts or just personal subjectivity.

You can cite personal opinions, but if they are opinions about things
written in black and white that bear no relationship to those statements,
then don't be surprised when it's pointed out.

>
> > > The idea was that modern form-painters and debuggers had progressed
> > > far enough to institutionalize code-and-fix, within large iterations,
> > > limiting the "week of all nighters" to once per quarter.
> >
> > there is an acceptance of the week of all nighters.....that's life....
>
> There is not. XP teams routinely report the week before deployment is just
> as relaxed as any other week. I'm sure some other methodologies report
> similar effects...

So XP teams never do the 60 hour week every now and then because someone's
oversold a project....it truly is magic, not only does it revolutionise SD,
but also marketing and organisational management....deadlines are never
missed? budgets never broken?...I don't know why you're telling us about it,
you could make a fortune taking Sun, IBM and Microsoft to the cleaners.

The 60 hour week once every blue moon is life, we all have to do it once
every year or so; the key is not to do it for too long or too often.....it's
a subject that is dealt with very admirably in RAD (McConnell), under burnout
and heroics...maybe you should give it a read.

>
> > I try to keep out of this sort of thing, but you keep on portraying RAD
> > as something it is not, and then effectively stealing its clothes.
>
> If you call RAD IID, then what's left to define the "just good enough" and
> "slap up a GUI" movements?

I don't understand IID.

>
> Recall at one time tool vendors promoted their tools as RAD because "you
> just paint the GUI, and you are half done!"
>
It can be used for evolutionary prototyping, a RAD technique, thus a RAD
tool. It's a weak claim, but then marketing spin and hype are often empty
and transparent when looked at in detail.



0
Nicholls.Mark (1061)
10/12/2004 4:56:31 PM
Phlip wrote:
> If you advise moving to a policy of "less documentation is better", you will
> get surprisingly good results with XP. However, it only covers the quadrant
> from C6 to E20. If you simply must build a nuke, write more documentation,
> within a process that supports it, to cover your ass (and slightly reduce
> the odds the thing blows up).

And what does writing more or less documentation have to do with
anything? Rather than deal with substance you break things
down into more or less documentation?

> A direct answer, cited from a peer-reviewed book, with a supplemental
> citation closer to the question's details. I suppose I'll get either an
> insult or some form of evasion in reply.

Actually I don't see how any of it has to do with anything
more than just form.

0
grace33 (48)
10/12/2004 5:04:58 PM
Mark Nicholls wrote:

> > as relaxed as any other week. I'm sure some other methodologies report
> > similar effects...
>
> So XP teams never do the 60-hour week every now and then because someone's

XP teams occasionally commit to a 60-hour week. Not all-nighters.

> oversold a project....it truly is magic; not only does it revolutionise SD,

This is the Stockholm Effect, again. Yes, Virginia, there's both a Santa
Claus and corrupt newspaper editors. Yes, Mark, some teams beat their
competition to market without working long hours.

If someone oversold a project, the team will know after a few weeks. The
current "burndown rate" - the rate the team finishes features - will show
that the promised feature set will not be in the box by the promised time.

If someone oversold a project, they will either emend their forecasts, or
_they_ will be discredited.
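
To make the arithmetic concrete, here is a minimal sketch (the numbers and
names are invented for illustration, not taken from any XP text):

    # Hypothetical burndown projection: divide remaining work by the
    # measured rate of finishing features. Assumes features are of
    # roughly comparable size.
    features_remaining = 30      # features not yet done
    features_done = 8            # finished so far
    weeks_elapsed = 4

    burndown_rate = features_done / weeks_elapsed    # features per week
    weeks_needed = features_remaining / burndown_rate

    print("Projected weeks to finish:", weeks_needed)   # -> 15.0

    weeks_until_deadline = 10
    if weeks_needed > weeks_until_deadline:
        print("The forecast was oversold; emend it or cut scope.")

No magic - just division, repeated every week as the measured rate changes.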

> but also marketing and organisational management....deadlines are never
> missed? Budgets never broken?...I don't know why you're telling us about it;
> you could make a fortune taking Sun, IBM and Microsoft to the cleaners.

I suspect all three of them have started XP projects. I know MS is
aggressively on board.

> The 60-hour week once in a blue moon is life; we all have to do it once
> every year or so. The key is not to do it for too long or too often.....it's
> a subject that is dealt with very admirably in /RAD/ (McConnell), under
> burnout and heroics...maybe you should give it a read.

That trick doesn't work here, Mark. I read /RAD/ when it came out, cover to
cover. Assumptions like that will make you look bad.

And /RAD/ had like 2 pages on Unit Tests.

> I don't understand IID.

Incremental and Iterative Development.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/12/2004 5:09:35 PM
Robert Grace wrote:

> Laurent Bossavit wrote:

> > A surgeon, in many cases - such as emergency wards - cannot rehearse the
> > entirety of an operation before she performs it. She necessarily relies
> > on a repertoire of skills (just like a programmer has internalized, say,
> > the structure of a wide range of design patterns) but will assess the
> > situation "on the spot", and respond as is appropriate - not
> > (necessarily) according to a predetermined plan.
>
> Certainly. Part of any process is this. But that's not the whole of it.
> There's the years of practice and study and research as well.
> Having "a predetermined plan" and "no plan" is a false dichotomy.

Then the debate is over which aspects can be predetermined.

Delaying expensive decisions until the last cheap moment, with the most
responsibility and the most information to make the decision, is the heart
of Lean Manufacturing. The process Toyota used in the 1960s to kick
Detroit's butt.

Oh yeah - L100 projects, too: Cars.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces



0
phlip_cpp (3852)
10/12/2004 5:12:36 PM
Mark Nicholls wrote:
> I've just started reading Meyer, and my brain's beginning to hurt on
> page 50; it's very, very dry.

Haven't read it yet... :o

> But it says on the 1st page...
>
> "...inevitable...is the the reaction that meets the introduction of a
> new methodological principle (1) its trivial (2) it cannot work (3)
> thats how I did it all along"
>
> I consider myself chastened by (3)....but I won't be too hard on
> myself.

;)


>> Yes. I will decide on that while writing the test - just tell me what
>> behaviour you want. And I actually might change it once the test
>> runs...
>
> you wouldn't change the behaviour I hope...though you might put it in
> a more generic context.

Yes. That's what I have the tests for - to hopefully tell me when I changed 
behaviour.

> to me
> this is code and fix, mitigated by a test....

More like code-and-fix-the-design than code-and-fix-the-behaviour, I would 
say.
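
For what it's worth, here is a minimal sketch of what a test that pins
behaviour looks like (the Money class and all names are invented for
illustration):

    import unittest

    # A deliberately naive first design. The test below asserts only
    # behaviour, so the class can be refactored - fields renamed,
    # collaborators introduced - without the test changing.
    class Money:
        def __init__(self, amount, currency):
            self.amount = amount
            self.currency = currency

        def add(self, other):
            assert self.currency == other.currency
            return Money(self.amount + other.amount, self.currency)

    class MoneyTest(unittest.TestCase):
        def test_addition_keeps_currency(self):
            total = Money(3, "EUR").add(Money(4, "EUR"))
            self.assertEqual(7, total.amount)
            self.assertEqual("EUR", total.currency)

    if __name__ == "__main__":
        unittest.main()

If a refactoring accidentally changes behaviour, the test goes red; if it
only changes the design, the test stays green.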

> that's not to say it's
> bad...it may be the best way of doing it....and I think writing the
> client (test) before any implementation is mildly different.

You should try it some time!

> I just worry that any system wide/holistic architecture may be short
> changed by this mentality...though you yourself may take it as read,
> that you need to sit down, have a cup of coffee, have a chat with
> your client about his holiday/dog/house/beard...but these activities
> to me are actually very important in establishing an overall feel for
> what's required.

I agree that the overall feel is very important!

>>> I want to get it as right as possible given that the cost of that is
>>> not greater than the benefit of doing something else.
>>
>> Well, yes, of course.
>
> OK, but this is not explicit in TDD or XP (or at least not to my
> ignorant knowledge of them).
>
> I think this is a better heuristic (I didn't invent it), and may well
> end up with you calling your client and chatting about his dog, and
> then saying...'ooo, and you know when you said you wanted it to do
> XYZ, did you mean it or did you mean WXYZ'.

Well, yes - that's why XP proposes to have a client representative sitting 
in the team room - to reduce the costs and overhead of this action.

>> What seems to be rather revolutionary to many people seems to be the
>> suggestion that, when using a certain set of practices in concert
>> with modern technology, the cost of trying to get the design right
>> upfront is virtually always at least as high as getting to the right
>> design by starting with a minimal design and improving it
>> continuously.
>
> So the heuristic is, get the design good enough to do a bit of TDD,
> but no further?

Get the design sketch good enough that you know where to start your TDD 
session. Then while implementing using TDD, get the design of the whole 
system "perfect" for what the system currently does.

>> Well, as an aside, one way to get the understanding would be to
>> write the tests together with the client...
>
> As a Luddite myself, most of my clients thankfully share my cynical
> view of technology...I think they prefer a few boxes and arrows with
> some hand waving....but I accept that many others would be more than
> willing.

Would they perhaps be willing to fill some test data into Excel sheets?
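
Something like this minimal sketch is what I have in mind (everything here -
the rows standing in for the spreadsheet, the discount rule, the names - is
invented for illustration):

    # Customer-supplied examples: each row says what discount a given
    # order total should earn. In practice the rows would come from the
    # client's spreadsheet, exported to CSV and read with csv.DictReader.
    rows = [
        {"order_total": "100",  "expected_discount": "0"},
        {"order_total": "500",  "expected_discount": "25"},
        {"order_total": "1000", "expected_discount": "100"},
    ]

    def discount(order_total):
        # The production rule under test (invented for this sketch).
        if order_total >= 1000:
            return order_total * 0.10
        if order_total >= 500:
            return order_total * 0.05
        return 0.0

    for row in rows:
        total = float(row["order_total"])
        expected = float(row["expected_discount"])
        actual = discount(total)
        status = "ok" if actual == expected else "FAIL"
        print(status, "discount(%s) = %s, expected %s" % (total, actual, expected))

The client maintains the rows; the team wires them to the code. When a row
fails, either the rule or the row is wrong - and that is exactly the
conversation to have over coffee.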

> Generally if there were an ambiguity about the requirement I would go
> for the, sit down with a cup of coffee approach, tip toe up to the
> problem, explain where the ambiguity is and how it may affect the
> business process in different interpretations, I don't think a TDD
> approach really fits most of the time in this sort of context.

Well, unless you include writing Customer Tests first, which might require 
exactly that! :)

>>> OK, that's fine, I started this because of statements about XP
>>> contradicting the logic of RAD
>>
>> I must have missed that, sorry.
>
> You did, I was sitting around minding my own business thinking "now
> don't get involved in that XP thread, it looks like a nightmare", and
> phlip took a swipe at McConnel's view of visibility versus schedule
> and stupidly I got involved.

Oh, that thing. I tried reading some sentences, but couldn't follow. That 
sometimes happens to me when I read Phlip's posts - must be some brain 
incompatibility or something...

>>>>> If you write all the tests before writing any code,
>>>>
>>>> Just for the record: I don't - that's not what TDD is about.
>>>
>>> then I am bamboozled slightly.
>>
>> Are you?
>
> yes.
>
> I think I need to read some standard text to really get a feel for
> what it's about

http://www.objectmentor.com/resources/articles/xpepisode.htm is a good short 
example of a TDD/PP session.

Cheers, Ilja 


0
it3974 (470)
10/12/2004 5:24:29 PM
Robert Grace wrote:
> Ilja Preuß wrote:
>> The premise of XP is that software can be built in a way that makes
>> a whole lot of those kinds of changes inexpensive enough to make
>> deferring decisions about them a reasonable choice.
>
> I don't really care about premises. I care about results. Your result
> in this case was horrible.

I don't follow you. What result?

Confused, Ilja 


0
it3974 (470)
10/12/2004 5:27:06 PM
Robert Grace wrote:
>
> Phlip wrote:

> > If you advise moving to a policy of "less documentation is better", you
> > will get surprisingly good results with XP. However, it only covers the
> > quadrant from C6 to E20. If you simply must build a nuke, write more
> > documentation, within a process that supports it, to cover your ass
> > (and slightly reduce the odds the thing blows up).
>
> And what does writing more or less documentation have to do with
> anything? Rather than deal with substance you break things
> down into more or less documentation?

Pick any other difference between XP and the process you espouse, and
substitute it for documentation. If you need more of it, you can't do XP.

No XP for nukes, folks...

> > A direct answer, cited from a peer-reviewed book, with a supplemental
> > citation closer to the question's details. I suppose I'll get either an
> > insult or some form of evasion in reply.
>
> Actually I don't see how any of it has to do with anything
> more than just form.

Well, at least I haven't devolved into these constant cheap shots. Accusing
a post of being vague, or lacking substance, is an old trick that tries to imply
the writer wrote an incomplete post. It really means the reader lacks
reading comprehension skills.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/12/2004 5:27:36 PM

Phlip wrote:
>>Certainly. Part of any process is this. But that's not the whole of it.
>>There's the years of practice and study and research as well.
>>Having "a predetermined plan" and "no plan" is a false dichotomy.
> 
> Then the debate is over which aspects can be predetermined.
> 
> Delaying expensive decisions until the last cheap moment, with the most
> responsibility and the most information to make the decision, is the heart
> of Lean Manufacturing. The process Toyota used in the 1960s to kick
> Detroit's butt.

So you want your surgeon to delay his training until he operates
on you for the first time? You want your real-time calculations
for signaling in the nuclear power-plant to be delayed until
everything is installed and the power plant is turned on?
You want your backup disk sizing requirements to be delayed
until you run out of space and can't do a backup?

In the manufacturing space they didn't just order the machines
the first time they made a car. They figured out what they
wanted, ordered them ahead of time, and then installed them
in a building designed to fit them. Then on other parts of
the process they would delay.

Problems have structure. Different strategies apply to
different parts of the structure. You want to apply the
same strategy everywhere and say it is good.
0
grace33 (48)
10/12/2004 5:29:51 PM
Robert Grace wrote:

> Phlip wrote:

> > Delaying expensive decisions until the last cheap moment, with the most
> > responsibility and the most information to make the decision, is the
> > heart of Lean Manufacturing. The process Toyota used in the 1960s to
> > kick Detroit's butt.
>
> So you want your surgeon to delay his training until he operates
> on you for the first time?

No.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/12/2004 5:31:40 PM
Ilja Preuß wrote:

> Well, yes - that's why XP proposes to have a client representative sitting
> in the team room - to reduce the costs and overhead of this action.

Ilja, that's stupid! You are writing unclear and evasive posts again!! You
XPers are all alike!!!

Do you want your insurance representative, in the operating room with you,
advising your doctor which organs to cut out??

Do you want the ghosts of Fermi and Bose in the room with you, while you
build a nuclear power plant by just routing its pipes here and there? Oh,
and then you refactor them mercilessly, right?

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/12/2004 5:35:52 PM

Phlip wrote:
> Pick any other difference between XP and the process you espouse, and
> substitute it for documentation. If you need more of it, you can't do XP.
> 
> No XP for nukes, folks...

Only because of your magick formula mentality. It would be fine
as long as you are willing to consider where it can be used
and where it can't be used and that strategy is usually
problem specific. You want to listen to code. Listening to
the problem is more important. It will tell you what it needs.


>>>A direct answer, cited from a peer-reviewed book, with a supplemental
>>>citation closer to the question's details. I suppose I'll get either an
>>>insult or some form of evasion in reply.
>>
>>Actually i don't see how any of it has to do with anything
>>more than just form.
> 
> Well, at least I haven't devolved into these constant cheap shots. Accusing
> a post of being vague, or lacking substance, is an old trick that tries to imply
> the writer wrote an incomplete post. It really means the reader lacks
> reading comprehension skills.

By form I mean the form of the process, not your argument. Talking about
more or less of anything is really just about form without saying if it
is the right amount for the problem.
0
grace33 (48)
10/12/2004 5:36:53 PM
Robert Grace wrote:
>
> Phlip wrote:

> > Pick any other difference between XP and the process you espouse, and
> > substitute it for documentation. If you need more of it, you can't do XP.
> >
> > No XP for nukes, folks...
>
> Only because of your magick formula mentality. It would be fine
> as long as you are willing to consider where it can be used
> and where it can't be used and that strategy is usually
> problem specific. You want to listen to code. Listening to
> the problem is more important. It will tell you what it needs.

I just told you XP wasn't contraindicated for C6 to E20 projects, and you
accuse me of a "magic formula mentality". Incredible.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/12/2004 5:39:30 PM
Robert Grace wrote:
> Ilja Preuß wrote:
>> What seems to be rather revolutionary to many people seems to be the
>> suggestion that, when using a certain set of practices in concert
>> with modern technology, the cost of trying to get the design right
>> upfront is virtually always at least as high as getting to the right
>> design by starting with a minimal design and improving it
>> continuously.
>
> "Virtually always" across all projects and all domains?

All software projects.

> If you
> are designing your nuclear power plant you are happy with evolving
> a design rather than figuring out what makes a safe plant?

I'd be very happy if they evolved the design, built a new plant daily to the 
current design and ran it through tests like earthquakes, nuclear attacks, 
plane crashes, cutting off cooling water etc. Of course they would have 
experienced testers on the team working together with power plant experts to 
come up with new tests every day.

Sadly I don't know how to do that with power plants. Fortunately I do know 
how to do that with software.

Take care, Ilja 


0
it3974 (470)
10/12/2004 5:39:44 PM
Robert Grace wrote:
> 
> 
> Ilja Preu� wrote:
> 
>> What seems to be rather revolutionary to many people seems to be the 
>> suggestion that, when using a certain set of practices in concert with 
>> modern technology, the cost of trying to get the design right upfront 
>> is virtually always at least as high as getting to the right design by 
>> starting with a minimal design and improving it continuously.
> 
> 
> "Virtually always" across all projects and all domains?

I seem to understand this question differently from everybody else. I 
choose to take it literally as all projects and all domains, not just 
software. And I would say the answer is unequivocally "no" both from my 
standpoint and from the standpoint of XP.

>If you
> are designing your nuclear power plant you are happy with evolving
> a design rather than figuring out what makes a safe plant?

XP is about developing software, not nuclear power plants. You might use 
XP to develop the software for the power plant, but not the power plant 
itself. Again taking you literally, your question would be about the 
power plant itself. And my answer is: personally, I have no idea how to 
design a power plant, nuclear or otherwise, and I would not dream of 
trying to do so without studying the subject extensively first. But I 
realize that building physical things is totally different from building 
software. Above all, the cost of changing a design after it's been built 
must be vastly greater.

> Same
> for your car braking system? Are you comfortable evolving a
> security system rather than learning about security, algorithms,
> attacks, etc? I don't find that position credible.
0
reiersol (156)
10/12/2004 5:42:08 PM

Dagfinn Reiersol wrote:

> XP is about developing software, not nuclear power plants. You might use 
> XP to develop the software for the power plant, but not the power plant 
> itself. 

You are developing a nuclear power plant. You are developing an iPod.
You are developing a security appliance. You are developing a navigation
system. You are developing a point-of-sale system.

Software is part of that. Everything impacts the software and
the software impacts everything. Software is in the physical
thing and determines much of the form of the physical thing.
And the form of the thing impacts the software. There's no
point where one stops and the other starts.

Trying to make the simplifying assumption that software can be
teased out of the implicate order and treated in isolation
is missing how systems are co-evolving and co-dependent.

0
grace33 (48)
10/12/2004 5:52:48 PM
Robert Grace wrote:
> 
> 
> Dagfinn Reiersol wrote:
> 
>> XP is about developing software, not nuclear power plants. You might 
>> use XP to develop the software for the power plant, but not the power 
>> plant itself. 
> 
> 
> You are developing a nuclear power plant. You are developing an iPod.
> You are developing a security appliance. You are developing a navigation
> system. You are developing a point-of-sale system.
> 
> Software is part of that. Everything impacts the software and
> the software impacts everything. Software is in the physical
> thing and determines much of the form of the physical thing.

Software determines much of the form of a nuclear power plant?

> And the form of the thing impacts the software. There's no
> point where one stops and the other starts.
> 
> Trying to make the simplifying assumption that software can be
> teased out of the implicate order and treated in isolation
> is missing how systems or co-evolving and co-dependent.
> 

I think that's the simplifying assumption that's generally made in the 
real world. The software and the hardware are separate projects with 
separate project managers and separate people designing and 
implementing. Different methodologies, different expertise, different 
vocabularies.
0
reiersol (156)
10/12/2004 6:43:02 PM

Dagfinn Reiersol wrote:

> Software determines much of the form of a nuclear power plant?

What do you think controls and monitors every inch of it?
How does that work? It's all through some physical medium
and that has form. If you control the rods one way you'll
use one physical form, do it another way you'll use another.
Same with all the sensors. Same with the fire suppression
systems, the security systems, the valves, etc etc.


> I think that's the simplifying assumption that's generally made in the 
> real world. The software and the hardware are separate projects with 
> separate project managers and separate people designing and 
> implementing. Different methodologies, different expertise, different 
> vocabularies.

This doesn't work any better than having software groups
break up into specialties that are independent of each
other. See http://www-cad.eecs.berkeley.edu/~polis/.
0
grace33 (48)
10/12/2004 6:53:04 PM
Robert Grace wrote:

> Software is part of that. Everything impacts the software and
> the software impacts everything. Software is in the physical
> thing and determines much of the form of the physical thing.
> And the form of the thing impacts the software. There's no
> point where one stops and the other starts.

If the physical things are designed upfront, and therefore stable, what 
keeps me from designing the software evolutionarily?

And if the physical things aren't stable (that is, their design), how could 
you expect software to be?

Curious, Ilja 


0
it3974 (470)
10/12/2004 7:00:12 PM
Robert Grace wrote:

> Dagfinn Reiersol wrote:
>
> > Software determines much of the form of a nuclear power plant?
>
> What do you think controls and monitors every inch of it?
> How does that work? It's all through some physical medium
> and that has form. If you control the rods one way you'll
> use one physical form, do it another way you'll use another.
> Same with all the sensors. Same with the fire supression
> systems, the security systems, the valves, etc etc.

As far as I can tell, your first post on the subject of nukes was "If you
are designing your nuclear power plant you are happy with evolving a design
rather than figuring out what makes a safe plant?"

Since then, we have wandered between designing nukes incrementally, or
designing their software incrementally.

Nobody is telling you to design them without up-front analysis. You keep
pretending we do, in order to effect a false sense of shock.

However, you have completely switched from "design nukes" to "design
software for nukes".

A 20-year-old Scientific American article recommended Lean Manufacturing for
nukes, but don't ask me to quote from it...

> > I think that's the simplifying assumption that's generally made in the
> > real world. The software and the hardware are separate projects with
> > separate project managers and separate people designing and
> > implementing. Different methodologies, different expertise, different
> > vocabularies.
>
> This doesn't work any better than having software groups
> break up into specialties that are independent of each
> other. See http://www-cad.eecs.berkeley.edu/~polis/.

Another slip. You went from "software and hardware need different
methodologies" to "software and hardware need separated teams".

Do us a favor? Please don't build any nukes...

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/12/2004 7:11:43 PM
Robert Grace wrote:
> 
> 
> Dagfinn Reiersol wrote:
> 
>> Software determines much of the form of a nuclear power plant?
> 
> 
> What do you think controls and monitors every inch of it?
> How does that work? It's all through some physical medium
> and that has form.

You have a way with words. It has a physical form and it's controlled by 
software. That's not the same thing as the physical form being 
determined by software.
0
reiersol (156)
10/12/2004 9:48:00 PM
Robert Grace <grace33@aol.com> wrote in message news:<10mns8cjaufjl51@news.supernews.com>...
> Phlip wrote:
> > The highly iterative design methodologies - in concert with pairing and
> > test-first - have been frequently shown to run with very low defect rates.
> 
> How about its work rate? You seem to think a methodology replaces
> knowledge. How do you build a security system approach that works?
> What does any of what you said have to do with anything about knowing
> about which algorithms are secure etc? Would you like your surgeon
> not to have studied surgery or just figure it out on the fly?
> 
> > Would you like your local nuke designed using techniques that repeated
> > studies have shown increases their defect rate?
> 
> You definitely have a hammer.
> 
> > Your argument is "fallacy of the excluded middle". One can evolve a design
> > while researching safety at the same time. Evolving that design in code
> > makes it safer and more verified, than on paper.
> 
> So on the space shuttle you would have had a few "accidents" before
> you decided on the triplicate/voting system approach for
> fault tolerance? Or you wouldn't have bothered having a goal
> for reliability or having a strategy for getting there. You
> would have started with while (1) { fly(); } and then say we'll
> evolve it from there? My god.


I agree with Robert. Thorough analysis, requirements definition, and
design are invaluable for project success, although I know the XP'ers
will debate me on emphasizing preliminary work on software development
projects. I find that too many engineers and managers come up with
excuses to avoid the up front work, whether it be because of schedule
dates, laziness, you name it; there always seems to be a rationale for
avoiding the work--be it end-user interviews, definition of an
Interface Control Document (ICD), logical modeling, technology
research for optimal COTS solutions, etc. But strangely enough, there
always seems to be time to DO IT OVER AGAIN... and to even do it over
yet again. I fight this battle quite often. In many cases,
engineers--systems engineers and software engineers--convince
themselves that there aren't enough resources to do the analysis. But
their efforts are far from exhaustive. They tend to give it the old
college try and say, "well, we'll get going on what we have and work
it out as we go." This has been the cause of months of rework. I see
this with senior engineers (who should know better) as well as with
newbies. This may sound preachy, but analysis is GOOD.
-- 
Brian S. Smith
Leap Systems
http://www.leapse.com
"Turn requirements into object models, instantly, with Leap SE."
......RAD from the source
0
10/13/2004 12:43:56 AM
Brian S. Smith wrote:

> I agree with Robert.

You have a typo there. It's spelled "Phlip".

> Thorough analysis, requirements definition, and
> design are invaluable for project success, although I know the XP'ers
> will debate me on emphasizing preliminary work on software development
> projects. I find that too many engineers and managers come up with
> excuses to avoid the up front work, whether it be because of schedule
> dates, laziness, you name it; there always seems to be a rationale for
> avoiding the work--be it end-user interviews, definition of an
> Interface Control Document (ICD), logical modeling, technology
> research for optimal COTS solutions, etc. But strangely enough, there
> always seems to be time to DO IT OVER AGAIN... and to even do it over
> yet again. I fight this battle quite often. In many cases,
> engineers--systems engineers and software engineers--convince
> themselves that there aren't enough resources to do the analysis. But
> their efforts are far from exhaustive. They tend to give it the old
> college try and say, "well, we'll get going on what we have and work
> it out as we go." This has been the cause of months of rework. I see
> this with senior engineers (who should know better) as well as with
> newbies. This may sound preachy, but analysis is GOOD.

Analysis is good, so do it all the time.

-- 
  Phlip
  http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces


0
phlip_cpp (3852)
10/13/2004 12:55:29 AM
Robert Grace <grace33@aol.com> wrote in
news:10mnppqkergnr91@news.supernews.com: 
> 
> If you
> are designing your nuclear power plant you are happy with evolving
> a design rather than figuring out what makes a safe plant?

For the record, I designed safety systems for chemical plants in a previous 
career. I would have killed for a system that would let me efficiently do 
the following: (1) "Hey, this scenario could happen." (2) "Let's write a 
test that simulates it." (3) "Yes, you're right, the model says the plant 
just blew up." (4) "Let's design a relief system that handles the scenario 
safely." (5) "Great, the model says it does".

....

(6) Repeat until no one can think of any additional realistic scenarios.

(7) And at any point for the rest of the life of the design, construction, 
operation and *additional modifications*, have a button that let me replay 
each of those scenarios.

Robert, you're confusing XP testing with test-coverage. Test-coverage 
for life-critical software systems is orthogonal to XP and compatible with 
XP. If you need it, include it.
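
A minimal sketch of that scenario-replay loop (the plant model, numbers, and
names are all invented for illustration - a real relief-system model would be
vastly more involved):

    # Each scenario is a named test that can be replayed for the life
    # of the design - steps (1) through (7) above, in miniature.

    BURST_PRESSURE = 100

    def residual_pressure(relief_capacity, scenario_load):
        # Toy model: pressure left over after the relief system vents.
        return max(0, scenario_load - relief_capacity)

    SCENARIOS = {
        "cooling water lost": 150,
        "external fire": 220,
        "blocked outlet": 90,
    }

    def replay_all(relief_capacity):
        # The "button" that re-runs every scenario against the design.
        for name, load in SCENARIOS.items():
            p = residual_pressure(relief_capacity, load)
            verdict = "safe" if p < BURST_PRESSURE else "BLEW UP"
            print("%-20s residual pressure %3d -> %s" % (name, p, verdict))

    replay_all(relief_capacity=50)    # (3) the model says the plant just blew up
    replay_all(relief_capacity=130)   # (5) the redesigned relief system handles it

Each new "this could happen" becomes one more entry in SCENARIOS, and the
whole suite runs again at the push of a button.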