


Databases as objects

An unexpected thing happened while debating topmind: I had an epiphany.  
Instead of responding to the news group I thought about it for a short 
bit (very short) and posted an article to my blog titled, "The RDB is 
the biggest object in my system."

<http://blogs.in-streamco.com/anything.php?title=the_rdb_is_the_biggest_object_in_my_syst>

What I realized while trying to describe my preference for using DB 
procedures as the primary (read: only) interface between my applications 
and the database is that I believe my DB's physical representation of 
data belongs to it alone, and that customers of the DB oughtn't be 
permitted to directly manipulate (change or query) its data.  I realized 
this is exactly what data-hiding is all about, and why expert object 
oriented designers and programmers emphasize the importance of 
interfaces over direct data manipulation.

I thought more about this and posted a second article, Databases as 
Objects: My schema is my class, which explored more similarities between 
databases and objects and their classes.

<http://blogs.in-streamco.com/anything.php?title=my_schema_is_an_class>

I intend next to explore various design patterns from GoF and Smalltalk 
Best Practice Patterns to see if the similarities persist or where they 
break down, and what can be learned from both about designing and 
implementing OO systems with relational databases.

If you agree there's such a thing as an object-relational impedance 
mismatch, then perhaps it's because you're witnessing the negative 
consequences of tightly coupling objects that shouldn't be tightly coupled.

There's a hypothesis in there somewhere.

As always, if you know of existing research on the subject I'm anxious 
to read about it.

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
tgagne (596)
12/20/2006 9:25:20 PM

Thomas Gagne <tgagne@wide-open-west.com> writes:
>As always, if you know of existing research on the subject I'm
>anxious to read about it.

  No research - but I'd like to quote from my previous post:

      "When you use a relational database via SQL, you also use
      polymorphism: The same operation �SELECT� is supported by
      an object, independent of whether it is a base table or a
      �virtual� table (a �view�), independent of whether this is
      a result of a join operation or of a union operation,
      while the /implementation/ of �SELECT� might differ for
      base tables and different types of views. [...]

      When Alan Kay made up OOP, he thought of objects as of
      independent entities communicating via messages, just like
      cells or like computers within a network. Thus, a database
      (server) is an example of ab object: It accepts messages
      (in SQL, for example). It also has information hiding (you
      can only communicate with it via those messages, but do
      not access the data files directly).

      Different SQL database programs also are an example for
      useful polymorphism: They all understand the same
      language, but might have different implementations
      internally.  Their client does not have to be aware of the
      implementation details."

<db-20060706061022@ram.dialup.fu-berlin.de>
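
  To make that concrete, here is a small SQL sketch (the table and view
  names are merely illustrative, not from the quoted post): the same
  'SELECT' is accepted by a base table and by a view, while the
  implementation behind each differs.

      -- illustrative only: a base table and a "virtual" table (a view)
      CREATE TABLE account ( id INTEGER PRIMARY KEY, balance NUMERIC );
      CREATE VIEW funded_account AS
          SELECT * FROM account WHERE balance > 0;

      -- the same operation is supported by both objects
      SELECT * FROM account;         -- base table
      SELECT * FROM funded_account;  -- view, with a different implementation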

ram (2986)
12/20/2006 9:42:27 PM
Thomas Gagne wrote:
> An unexpected thing happened while debating topmind: I had an epiphany.
> Instead of responding to the news group I thought about it for a short
> bit (very short) and posted an article to my blog titled, "The RDB is
> the biggest object in my system."
>
> <http://blogs.in-streamco.com/anything.php?title=the_rdb_is_the_biggest_object_in_my_syst>

From the link:

"Why shouldn't applications have embedded SQL? Because it's the same as
accessing the private data members of an object. It shouldn't be done.
OO programmers know the correct way to interface with an object is to
use its method interface--not attempt direct manipulation of the
object's data. OO programmer's attempts to violate that rule is what
causes so much frustration mapping the application's data graph into a
relational database's tables, rows, and columns. Those things belong to
the DB--not to the application."

(end quote)

You OO'ers keep forgetting: SQL *is* an interface. I repeat, SQL *is*
an interface. It is *not* "low level hardware".  You OO'ers keep
viewing it as low-level stuff because you don't seem to like it, and
you wrap anything you don't like behind OO and call it "low level" so
that it fits your personal subjective preference and world view. OO may
fit your mind better for whatever reason, but you cannot assume your
head is God's template for every *other* individual.

BTW, Microsoft has ADO, DAO, etc. which are OO wrappers around RDBMS.
Java and other vendors do also. Whether OO is the best way to wrap RDBMS
calls is another debate. My point is they already exist.

Further, even if OO *was* the best way to access RDBMS thru an app,
that does not necessarily extrapolate to all domains. OO being good for
X does not automatically imply it is good for Y also. I have already
agreed that OO may be good for writing device drivers and
device-driver-like things; but it has not been shown useful to view
everything as a device driver. I am more interested in seeing how OO
models biz objects rather than how it wraps system services and the
like. Biz modeling has been OO's toughest evidence cookie to crack (but
perhaps not the only one).

And finally, just because one *can* view everything as objects does not
necessarily mean one should. One can also view everything as Lisp or
assembler or that Brainf*ck language.

>
> What I realized while trying to describe my preference to use DB
> procedures as the primary (re: only) interface between my applications
> and the database is because I believe my DB's physical representation of
> data belongs to it alone and that customers of the DB oughtn't be
> permitted to directly manipulate (change or query) its data.

When was the last time you've seen this happen? Again, SQL is NOT
"physical representation". For that matter, neither are files even.
File systems are a hierarchical database-like thing. "POKE 462625" is
accessing the physical directly.

> I realized
> this is exactly what data-hiding is all about and why expert object
> oriented designers and programmers emphasize the importance of
> interfaces to direct data manipulation.

"Data hiding"? I am working on "OO hiding". Relational is a high-level
modeling technique which tends to use "declarative interfaces".
Declarative interfaces are not necessarily worse than "behavioral
interfaces", which OO relies on. This sounds like yet another battle
between declarative interfaces versus behavioral interfaces. Note that
one could potentially mix them in RDBMS, but so far it does not appear
very practical. And this is largely because the tight association
between data and behavior that OO likes simply does not work well in
biz apps. Thus, heavy behavioraltizing of RDBMS is not useful. I am
just pointing out it could be done and probably would be done if it
proved useful. OO forces an overly tight view of data and behavior.
Relational provides a consistency to declarative interfaces, but OO
does not provide any real structure and consistency to behavioral
interfaces. It creates shanty-town biz models.

GOF patterns are supposed to be a solution, but GOF patterns have no
clear rules about when to use what, and they force a kind of IS-A view on
modeling instead of HAS-A.  GOF patterns are like an attempt to catalog
GO TO patterns instead of getting rid of GO TOs. Relational is comparable
to the move from GO TOs to structured programming: it provides more
consistency and factors common activities into a single interface
convention (relational operators). OO lets people re-invent their own
just like there are a jillion ways to do the equivalent of IF blocks
with GO TOs.

OO has simply failed to factor and standardize common relationship and
collection idioms!!!!
OO has simply failed to factor and standardize common relationship and
collection idioms!!!!

That is why OO is such a mess and it's hard to figure out people's OO
designs. It makes me feel like a building inspector in a shanty town:
there are no building codes and rules. Relational operators and
normalization rules rein in the "creativity" that should be reined
in.  The "self-handling noun" view of OOP means that you get
self-reinventing nouns.

>
> I thought more about this and posted a second article, Databases as
> Objects: My schema is my class, which explored more similarities between
> databases and objects and their classes.
>
> <http://blogs.in-streamco.com/anything.php?title=my_schema_is_an_class>

See my comments about ADO and DAO above.

>
> I intend next to explore various design patterns from GoF and Smalltalk:
> Best Practice Patterns

How measured as "best"? Subjective internal votes?

>  to see if the similarities persist or where they
> break down, and what can be learned from both about designing and
> implementing OO systems with relational data bases.
>
> If you agree there's such a thing as an object-relational impedance
> mismatch, then perhaps its because you're witnessing the negative
> consequences of tightly coupling objects that shouldn't be tightly coupled.
>
> There's a hypothesis in there somewhere.
>
> As always, if you know of existing research on the subject I'm anxious
> to read about it.
>
> --
> Visit <http://blogs.instreamfinancial.com/anything.php>
> to read my rants on technology and the finance industry.

oop.ismad.com
-T-

topmind (2124)
12/21/2006 12:25:29 AM
Thomas Gagne wrote:
> If you agree there's such a thing as an object-relational impedance
> mismatch, ...

This mismatch is mostly caused by the lack of education of object
propellerheads. Witness pathetic attempts to enhance method dispatch
with predicates. Finally, some folks begin to understand predicate
importance! The problem is that if you do it on an ad-hoc basis you'll
get some inconsistent messy design.

Here are a few facts which may help you to further appreciate the power
of relations (a small SQL sketch follows the list below).
1. A function call is formally a relational join (followed by
projection). That is,

f(a)   is the same as   pi_y ( `y=f(x)`  |><|  `x=a`)

where  `y=f(x)` is a binary relation corresponding to the function, and
`x=a` is a relation that has a single tuple.

A consequence of this fact is that function calls (or arithmetic
expressions) fit naturally into the SQL select and where clause.

2. Function composition is a join (again followed by projection).

3. Predicates can be mixed with relations, and an arbitrary relational
algebra expression can be transformed into a normal
'select-project-join' form. This explains why most queries fit nicely
into the "select from where" SQL template.

4. The aggregate/group by construct reflects yet another important
mathematical construction: the equivalence relation. This is why it is
so easy to write queries that count things in SQL.
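
A small SQL sketch of facts 1 and 4 (purely illustrative; the table name
f_rel and the argument value 3 are assumptions, not part of the facts
above): the binary relation y=f(x) is stored as a table, the argument is
a single-tuple relation, and f(3) falls out of a join followed by a
projection onto y.

    -- f_rel holds the binary relation y = f(x)
    CREATE TABLE f_rel ( x INTEGER PRIMARY KEY, y INTEGER );

    -- fact 1: f(3) expressed as select-project-join
    -- (join f_rel with the single-tuple relation x = 3, project onto y)
    SELECT f_rel.y
    FROM   f_rel
    JOIN   ( SELECT 3 AS x ) AS arg ON f_rel.x = arg.x;

    -- fact 4: GROUP BY partitions rows into equivalence classes,
    -- which is why counting things is so easy
    SELECT y, COUNT(*) FROM f_rel GROUP BY y;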

This is only the beginning of the list, and I assure you that you'll
get more return on your investment not by spending your time
"brainstorming" how to fit databases into objects, but by educating
yourself about what database management really is.

12/21/2006 1:07:51 AM
> 1. Function call is formally a relational join (followed by
> projection). That is

In my opinion, it's not very useful to say "X is really a Y". A lot of
paradigms, idioms, and ideas are interchangeable such that one can be
viewed as the other and vice versa.  Thus, implementing one in the
other does not carry much weight.

As far as the impedance mismatch goes, I do think it exists. The main reason
is that relational is heavily based on sets, but OO is based on
navigational structures (pointers). The two are very hard to reconcile.
A model based on graphs is very different to work with than one based
on sets. (They are also interchangeable, but the choice is a matter of
human convenience and thus productivity.)

-T-

topmind (2124)
12/21/2006 6:15:12 AM
topmind wrote:

> Thomas Gagne wrote:
> > An unexpected thing happened while debating topmind: I had an
> > epiphany.  Instead of responding to the news group I thought about
> > it for a short bit (very short) and posted an article to my blog
> > titled, "The RDB is the biggest object in my system."
> > 
> > <http://blogs.in-streamco.com/anything.php?title=the_rdb_is_the_bigg
> > est_object_in_my_syst>
> 
> > From the link:
> 
> "Why shouldn't applications have embedded SQL? Because it's the same
> as accessing the private data members of an object. It shouldn't be
> done.  OO programmers know the correct way to interface with an
> object is to use its method interface--not attempt direct
> manipulation of the object's data. OO programmer's attempts to
> violate that rule is what causes so much frustration mapping the
> application's data graph into a relational database's tables, rows,
> and columns. Those things belong to the DB--not to the application."
> 
> (end quote)
> 
> You OO'ers keep forgetting: SQL is an interface. I repeat, SQL is
> an interface. It is not "low level hardware".  

	SQL is a set-oriented language; it's not an interface, as a language
doesn't do anything without context (in this case a parser-interpreter
combination).

> You OO'ers keep
> viewing it as low-level stuff because you don't seem to like it, and
> you wrap anything you don't like behind OO and call it "low level" so
> that it fits your personal subjective preference and world view. OO
> may fit your mind better for whatever reason, but you cannot assume
> your head is God's template for every other individual.

	of course it's not low level stuff, SQL is a set-oriented language and
therefore doesn't match object-oriented languages, so a 'translation'
has to be made as you can't project one onto another in a 1:1 fashion.

> BTW, Microsoft has ADO, DAO, etc. which are OO wrappers around RDBMS.

	no they're not. ADO and DAO aren't OO, as they're COM based so they're
actually procedural (library interfaces implemented on a live object).
Furthermore, they provide the interface you talked about to the DB,
which is often referred to as 'the client interface' or 'provider' when
it comes to database access.

> Further, even if OO was the best way to access RDBMS thru an app,
> that does not necessarily extrapolate to all domains. OO being good
> for X does not automatically imply it is good for Y also. 

	you don't get the point: in an OO application, which works on data IN
the application, you want to do that in an OO fashion. Obtaining the
data from the outside is initiated INSIDE the application, thus also in
an OO fashion. As an RDBMS doesn't understand OO in most cases but
works with SQL (it has a SQL interpreter in place to let you program
its internal relational algebra statements in a more readable way),
you have to map statements from OO to SQL and set-oriented results (the
sets) from the DB back to OO objects.

> I have
> already agreed that OO may be good for writing device drivers and
> device-driver-like things; but it has not been shown useful to view
> everything as a device driver. I am more interested in seeing how OO
> models biz objects rather than how it wraps system services and the
> like. Biz modeling has been OO's toughest evidence cookie to crack
> (but perhaps not the only).

	huh? walls full of books have been written about this topic and you
declare it the toughest cookie to crack...

> And finally, just because one can view everything as objects does not
> necessarily mean one should. One can also view everything as Lisp or
> assembler or that Brainf*ck language.

	sure, but that doesn't mean the language necessarily fits the purpose
you want to use it for. Data-oriented operations on sets are best suited
to SQL, as it is designed for that. Other languages are designed for
other purposes. Mixing the two is often not that successful, though
that's not a problem per se, as processing data is more or less a 3-step
process:
- move data from data producer to data consumer
- process data in data consumer
- move data from original data consumer to original data producer

so you can easily chop up this process in 3 parts and implement the
parts in the language best fit for the job.

> > I realized
> > this is exactly what data-hiding is all about and why expert object
> > oriented designers and programmers emphasize the importance of
> > interfaces to direct data manipulation.
> 
> "Data hiding"? I am working on "OO hiding". Relational is a high-level
> modeling technique which tends to use "declarative interfaces".
> Declarative interfaces are not necessarily worse than "behavioral
> interfaces", which OO relies on. This sounds like yet another battle
> between declarative interfaces versus behavioral interfaces. 

	Could you define 'interface' for me, as it gets more and more ambiguous
definitions in this post alone.

> Note that
> one could potentially mix them in RDBMS, but so far it does not appeal
> very practical. And this is largely because the tight association
> between data and behavior that OO likes simply does not work well in
> biz apps. Thus, heavy behavioraltizing of RDBMS is not useful. I am
> just pointing out it could be done and probably would be done if it
> proved useful. OO forces an overly tight view of data and behavior.
> Relational provides a consistency to declarative interfaces, but OO
> does not provide any real structure and consistency to behavioral
> interfaces. It creates shanty-town biz models.

 	you declare a lot of IMHO rubbish as 'truth' here. E.g.: why wouldn't
biz apps be helped with OO?

> GOF patterns are supposed to be a solution, but GOF patterns have no
> clear rules about when to use what and force a kind of IS-A view on
> modeling instead of HAS-A. 
	
	You also fall into the 'use pattern first, find problem for it
later'-antipattern.

	a pattern is a (not the) solution for a well-defined, recognizable
problem. So if you recognize the problem in your application, you can
use the pattern which solves THAT problem to solve THAT problem in your
application. That's IT. The GoF book names a set of patterns and also
the problems they solve. If you don't have the problems they solve, you
don't need the patterns.

	Btw, the GoF book discourages inheritance a lot, just read it. It says
don't use inheritance if you don't have to.

> GOF patterns are like an attempt to
> catalog GO TO patterns instead of rid GO TO's. Relational is
> comparable to the move from structured programming from GO TO's: it
> provides more consistency and factors common activities into a single
> interface convention (relational operators). OO lets people re-invent
> their own just like there are a jillion ways to do the equivalent of
> IF blocks with GO TO's.

	I've read a lot of nonsense in your post, but this is one of the most
striking examples. What on earth do GO TOs have to do with the topic at
hand?

> OO has simply failed to factor and standardize common relationship and
> collection idioms!!!!
> OO has simply failed to factor and standardize common relationship and
> collection idioms!!!!

	take your pills, you apparently forgot them ;)

		FB

-- 
------------------------------------------------------------------------
Lead developer of LLBLGen Pro, the productive O/R mapper for .NET
LLBLGen Pro website: http://www.llblgen.com
My .NET blog: http://weblogs.asp.net/fbouma
Microsoft MVP (C#) 
------------------------------------------------------------------------
Frans
12/21/2006 9:53:17 AM
topmind wrote:
> <snip>
>
> BTW, Microsoft has ADO, DAO, etc. which are OO wrappers around RDBMS.
> Java and other vendors do also. Whether OO is the best way wrap RDBMS
> calls is another debate. My point is they already exist.
>
> Further, even if OO *was* the best way to access RDBMS thru an app,
> <snip>
>   
You're missing something.  I am not advocating wrapping the RDB with OO 
stuff.  I am not saying OO is the best way to access a 
database--directly or through any of the frameworks mentioned above.  In 
fact, I'm advocating the opposite.  Deal with the DB on its own terms, 
but treat it as an object.  I'm recommending against accessing it using 
its low-level interface (SQL); instead, a higher-level, 
application/schema/problem-domain-specific API should be constructed, most 
likely using procedures, and applications should access the DB that 
way.

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
tgagne (596)
12/21/2006 10:30:53 AM
On 20 Dec 2006 17:07:51 -0800, aloha.kakuikanu wrote:

> Thomas Gagne wrote:
>> If you agree there's such a thing as an object-relational impedance
>> mismatch, ...
> 
> This mismatch is mostly caused by the lack of education of object
> propellerheads. Witness pathetic atempts to enhance method dispatch
> with predicates. Finally, some folks begin to understand predicate
> importance! The problem is that if you do it in ad-hock basis you'll
> get some inconsistent messy design.
> 
> Here are few facts, which may help you to further appreciate the power
> of relations.
> 1. Function call is formally a relational join (followed by
> projection). That is
> 
> f(a)   is the same as   pi_y ( `y=f(x)`  |><|  `x=a`)
> 
> where  `y=f(x)` is a binary relation coressponding to the function, and
> `x=a` is a relation that has a single tuple.

LOL! That's amusing.

When you are going to turn your TV-set on, do you do a relational join over
its diodes and resistors followed by a majestic projection, or just press
the button "ON?"

> A consequence of this fact is that function calls (or arithmetic
> expressions) fit naturally into the SQL select and where clause.
> 
> 2. Function composition is a join (again followed by projection).

Ah, you mean "ON" followed by "1"! GREAT!

> 3. Predicates can be mixed with relations, and arbitrary relational
> algebra expression can be transformed into a normal
> 'select-project-join' form. This explains why most queruies fit nicely
> into "select from where" SQL template.

These should be the buttons "Vol+" and "Vol-." But, wait, how to select
the next-to-last diode in SQL?

> 4. The aggregate/group by construct reflects yet another important
> mathematical construction: the equivalence relation. This is why it is
> so easy to write queries that count things in SQL.

Yes, I always wondered how many diodes the damned thing has...

> This is only the beginning of the list, and I assure you that you'll
> get more return on your investment not if you spend your time
> "brainstorming" how to fit databases into objects, but educating
> yourself what database management really is.

Isn't FORMAT C: /q everything one should know about it? (:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
mailbox2 (6357)
12/21/2006 10:43:09 AM
Let me try to clear something up, and thanks to Topmind, Frans, and 
Stefan for helping me get there.

In OO, objects are subclassed to make them more specific, not more 
general.  I consider SQL to be a low level language, as far as RDBs are 
concerned, because it is application-ignorant.  It's like C for 
relational operations.  SQL doesn't know anything about my application.

So, I subclass Model (so-to-speak) and add data that is domain-specific 
to create my domain-specific database.  Why access it from applications 
using the same domain-ignorant language?  Instead, I construct 
procedures that create a domain-specific interface.  Why write the 
lower-level

    select * from account, user where user.userId=X and account.userId =
    user.userId

when instead I can use

    exec getAccountsFor @userId=X

?

Besides its brevity, the procedure name clearly communicates the intent 
of the operation (the Smalltalk Best Practice Patterns "Intention 
Revealing Message" pattern), makes obvious its parameters, and provides a 
layer of indirection behind which its implementation can change without 
affecting the procedure's users.
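
To make the example concrete, here is roughly what the procedure behind 
that call might look like (a sketch only, assuming SQL Server-style 
syntax; the table and column names come from the select statement above):

    -- sketch: callers see only getAccountsFor, never the underlying join,
    -- so the join can change without affecting them
    create procedure getAccountsFor @userId int
    as
    begin
        select account.*
        from   account
        join   [user] on account.userId = [user].userId
        where  [user].userId = @userId;
    end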

From SQL I've constructed procedures to provide a higher-level, 
domain-specific, language-and-paradigm-neutral interface to a 
domain-specific database.

To find all the places in my application source code that get account 
information with user IDs it is much easier to find senders (callers) of 
getAccountsFor than it would be to find all the SQL referencing both the 
account and user tables.  Could the latter be done?  Sure, but when a 
more efficient and accurate alternative exists why would you?

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
tgagne (596)
12/21/2006 10:52:35 AM
Stefan, what "subject" were you replying to when you wrote that?

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
tgagne (596)
12/21/2006 10:55:02 AM

aloha.kakuikanu wrote:
> <snip>
>
> This is only the beginning of the list, and I assure you that you'll
> get more return on your investment not if you spend your time
> "brainstorming" how to fit databases into objects, but educating
> yourself what database management really is.
>
>   
I'm going to keep saying this in different ways until I finally say it, or 
draw a picture, that makes it clear I am not trying to fit the database 
inside an object.  I am not trying to wrap it inside an object.  I am 
not advocating an OO framework to arbitrate all DB access.

I am saying domain-specific databases (what makes your application's DB 
design different from mine) can be thought of, and ultimately treated, 
as objects.  Now--don't run off and think I'm trying to wrap anything.  
What I'm saying is that the rules OO designers use to decide what 
methods an object should have (and not have) and the justifications for 
resisting direct data manipulation can be applied to how applications 
interface to the database by deciding that 1) no application should directly 
access the DB's data (no SQL) and 2) applications should use the DB only 
through its interface.  Stored procedures are the best example of the 
latter I know of.

Ultimately, I think I may need to come up with another name for a 
domain-ized database.  The word 'database' has too many possibilities.  
It's too general.  After I've applied my schema to it, it no longer has 
all the possibilities it once had.  After my schema's applied it becomes 
something different.  It becomes my domain's data base.  My domainabase?

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
tgagne (596)
12/21/2006 11:15:29 AM
> 2) applications should use the DB only
> through its interface.  Stored procedures are the best example of the 
> latter I know of.

I grasp what you're saying Thomas - I think your main problem is that
if you want the idea to spread and be adopted, you'd have to support
all the popular databases. Essentially it comes down to convenience of
development.

Let's say you decide that Smalltalk is a great place for this
technique to be adopted - which doesn't seem too unreasonable to me.
So you want to be able to define your object model and relational
model and talk to your database seamlessly - essentially forgetting
about the mess that goes on underneath. You believe that talking to
the database via procedures instead of SQL will be better.

I'm not going to agree or disagree with you on that matter, but I will
say that what you'll need to do is avoid work for the developer to
convince anyone that this is a good idea. In short, you'll want to
invent a framework that implements the procedures in the database
based on the Smalltalk model.

In other words, you'd want it to behave a bit like Gemstone in that
some of your code is in your host Smalltalk environment and some of
your code is in the Gemstone database environment. You want to blur
the lines between the two but you want to do it natively instead of
using some arbitrary set-theory based interfacing language called
'sql'.

Okay, fair enough... but to gain groundswell support you have to pour
some cement. Which database procedure languages are you going to
support? That is your biggest barrier to entry. Why? Because all
RDBMSs support a unified interface language - SQL. Pretty much all
databases support a non-unified stored procedure language... from
PL/SQL through to Python and all things in between... and to varying
degrees of expressiveness and power and features.

I'd be interested to hear how you'd tackle such things. After all, you
don't want to tie yourself down to only one kind of database - do you?

Cheers,
Michael

Michael
12/21/2006 12:05:29 PM
Thomas Gagne wrote:
> An unexpected thing happened while debating topmind: I had an epiphany.  
> Instead of responding to the news group I thought about it for a short 
> bit (very short) and posted an article to my blog titled, "The RDB is 
> the biggest object in my system."
> 
> <http://blogs.in-streamco.com/anything.php?title=the_rdb_is_the_biggest_object_in_my_syst> 
> 
> 
snip ...	
> 

Hej, Thomas.

I know nothing about databases.

(I always feel like I've just been to Confession whenever I write that 
line; it's quite liberating.)

Your (very interesting) proposal, however, seems similar(ish) to the 
ideas that pop up from time to time whenever DBers and OOers meet for 
a hoedown and some line-dancing. The general result of these fun 
evenings is that DBers point to OO's diluting of the power of the DB in 
some way (I've never been sure how). The point is: if you really want 
some good DB-centric advice on your proposal, you should consider 
posting to comp.databases.theory - those folks know everything there is 
to know about DBs and could help you plug any leaks in your endeavours 
(and perhaps save you some time in your studies).

If you do make such a post, however, do please multi-post; don't 
cross-post (amazing: there is a use for multi-posting after all). The 
reason for this request is that cross-posts between c.o and c.d.t tend 
to deteriorate alarmingly quickly into sulking, name-calling, and 
kill-files bloating to significant proportions of their hosting discs.

Despite this, their DB expertise is, as mentioned, extraordinary; so do 
consider popping over there for a chat ... but don't tell them who sent you.

Just a thought.

..ed

PS On an OO note, regarding your, "My schema is a class/my DB is an 
object," concept (again, very interesting). I presume here you mean the 
DB as the data it contains, rather than a particular vendor's DB such as 
Oracle, etc. If so, then there should be a concept of changing the data 
wholesale for some other data, without affecting the users of that data 
(i.e., the application). I can't really see this happening. If I have 
the data for a suit-tailoring business, and applications that graze this 
data, then I can't really see that the applications will remain 
unchanged when this data is dropped and the data for, say, a 
car-manufacturer inserted instead. Silly example, of course, but I hope 
it gets the point across: how often do you have one schema with multiple 
data-sets conforming to it?  Do bear in mind my first sentence ...

-- 
www.EdmundKirwan.com - Home of The Fractal Class Composition.

Download Fractality, free Java code analyzer:
www.EdmundKirwan.com/servlet/fractal/frac-page130.html
iamfractal (493)
12/21/2006 12:14:20 PM
Thomas Gagne <tgagne@wide-open-west.com> writes:
>Stefan, what "subject" were you replying to when you wrote that?

  Here are excerpts from the header lines of the post I wrote in July:

Newsgroups: comp.object
Subject: polymorphism (was: Poly Couples)
References: <1151999442.600805.230670@75g2000cwc.googlegroups.com> <OO-20060704232303@ram.dialup.fu-berlin.de> <1152157551.514387.8840@j8g2000cwa.googlegroups.com>
Expires: 28 Nov 2006 11:59:59 GMT
X-No-Archive: Yes
Message-ID: <db-20060706061022@ram.dialup.fu-berlin.de>

ram (2986)
12/21/2006 1:30:40 PM
Ed Kirwan wrote:
> <snip>
>
> Your (very interesting) proposal, however, seems similar(ish) to the 
> ideas that pop up from time to time whenever DBers and OOers meeting 
> for a ho'down and some line-dancing. The general result of these fun 
> evenings is that DBers point to OO's diluting of the power of the DB 
> in some way (I've never been sure how). The point is: if you really 
> want some good DB-centric advice on your proposal, you should consider 
> posting to comp.databases.theory - those folks know everything there 
> is to know about DBs and could help you plug any leaks in your 
> endeavours (and perhaps save you some time in your studies).
Thank you for the recommendation.  I'll post it separately to avoid the 
devolving arguments. ;-)
>
> <snip>
>
> PS On an OO note, regarding your, "My schema is a class/my DB is an 
> object," concept (again, very interesting). I presume here you mean 
> the DB as the data it contains, rather than a particular vendor's DB 
> such as Oracle, etc. If so, then there should be a concept of changing 
> the data wholesale for some other data, without affecting the users of 
> that data (i.e., the application).
In theory (isn't it always) if the interface was consistent between 
tailors and auto manufacturers then the answer would be yes.  That would 
be a kind of polymorphism.  Of course, it depends on the interface 
staying intact.  If the interface is broken it matters little what's on 
either side of it--the system is broken.

The metanoia I'm advocating cares little for the DB's vendor.  Once 
you've created a specific database model to support your application 
domain you've created a hypostasis.  You started with an empty database 
with infinite potential and tailored its purpose for your specific needs 
and given it an identity of its own.  You started with the general and 
hypostatized to the specific.  Your database is no longer general 
purpose.  Its design is intellectual property and its contents proprietary.

This is what happens when we subclass Object to create something 
specific, like a Date.  Object has the potential to be anything, but 
Date has been modified for a specific purpose.  It, too, has been 
hypostatized.  Date has become a species independent of its superclass 
with unique data and behavior.

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
tgagne (596)
12/21/2006 2:21:25 PM
Thomas Gagne wrote:
> Let me try to clear something up, and thanks to Topmind, Frans, and
> Stefan for helping me get there.
>
> In OO, objects are subclassed to make them more specific, not more
> general.  I consider SQL to be a low level language, as far as RDBs are
> concerned, because it is application-ignorant.  It's like C for
> relational operations.  SQL doesn't know anything about my application.
>
> So, I subclass Model (so-to-speak) and add data that is domain-specific
> to create my domain-specific database.  Why access it from applications
> using the same domain-ignorant language?  Instead, I construct
> procedures that create a domain-specific interface.  Instead of the
> lower-level
>
>     select * from account, user where user.userId=X and account.userId =
>     user.userId
>
> when instead I can use
>
>     exec getAccountsFor @userId=X
>
> ?
>
> Besides its brevity, the procedure name clearly communicates the intent
> of the operation (stbpp pattern: intention revealing message), makes
> obvious its parameters, and provides a layer of indirection behind which
> its implementation can change without affecting the procedure's users.
>
>  From SQL I've constructed procedures to provide a higher-level,
> domain-specific, language-and-paradigm-neutral interface to a
> domain-specific database.
>
> To find all the places in my application source code that get account
> information with user IDs it is much easier to find senders (callers) of
> getAccountsFor than it would be to find all the SQL referencing both the
> account and user tables.  Could the latter be done?  Sure, but when a
> more efficient and accurate alternative exists why would you?
>
> --
> Visit <http://blogs.instreamfinancial.com/anything.php>
> to read my rants on technology and the finance industry.

Stefan's point is very interesting as well, though. To use your same
example,

select * from account, user where user.userId=X and account.userId =
user.userId

could also be seen as a message send, something like:

(Table join: account and: user on: [ :a :u | a id = u id ])
  select: #(#field1 #field2) where: [ :each | each user id = X ]

NB. The above is for illustrative purposes, I am not saying that SQL
should be mapped to Smalltalk in that way.

It is true that this is a low-level interaction, and
application-ignorant, but so are all the methods of String, or Integer,
or Array, Socket, SystemDictionary... that doesn't prevent them from
being OO.

Mike

google434 (9)
12/21/2006 2:24:39 PM
On reflection, you've actually covered this off in your blog post (as I
see it, anyway); the database is a very large Facade, covering all of
the tables, which are lower-level objects. Do you agree?

google434 (9)
12/21/2006 2:35:57 PM
Mike Anderson wrote:
> On reflection, you've actually covered this off in your blog post (as I
> see it, anyway); the database is a very large Facade, covering all of
> the tables, which are lower-level objects. Do you agree?
>   
I've tried, but I've been made aware of some weaknesses in the 
description.  Not in the premise, but there's confusion in the words 
I've chosen--particularly the words transaction and database.  After 
I've created a database uniquely suited to my application domain's 
specific needs it is no longer a 'database' in the generic form, but 
something else.  After discussing it a bit with a coworker educated in 
ancient Greek, we believe hypostasis may be a better term to describe a 
customized database.

From <http://www.webster.com/dictionary/hypostasis>

    *3 a* *:* the substance or essential nature of an individual *b* *:*
    something that is hypostatized
    <http://www.webster.com/dictionary/hypostatized>

Once customized for its purpose, the hypostasis is "the substance or 
essential nature.." of my system.  Plumbers and carpenters are both 
skilled tradesmen, but have unique skills peculiar to their specific 
trades.  We wouldn't (deliberately or accidentally) confuse a plumber's 
skillbase with a carpenter's, and yet we don't have a word 
that adequately differentiates the skills of either, just as we don't 
have a word that differentiates a plumber's apparel manufacturer's 
database from an auto dealer's database.  We just use the word 
'database' in different contexts and hope our readers follow us.

Or at least, I do.

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
tgagne (596)
12/21/2006 3:18:02 PM
Thomas Gagne wrote:
> Mike Anderson wrote:
> > On reflection, you've actually covered this off in your blog post (as I
> > see it, anyway); the database is a very large Facade, covering all of
> > the tables, which are lower-level objects. Do you agree?
> >
> I've tried, but I've been made aware of some weaknesses in the
> description.  Not in the premise, but there's confusion in the words
> I've chosen--particularly the words transaction and database.  After
> I've created a database uniquely suited to my application domain's
> specific needs it is not longer a 'database' in the generic form, but
> something else.  After discussing it a bit with a coworker educated in
> ancient Greek, we believe hypostasis may be a better term to describe a
> customized database.
>
>  From <http://www.webster.com/dictionary/hypostasis>
>
>     *3 a* *:* the substance or essential nature of an individual *b* *:*
>     something that is hypostatized
>     <http://www.webster.com/dictionary/hypostatized>
>
> Once customized for its purpose, the hypostasis is "the substance or
> essential nature.." of my system.  Plumbers and carpenters are both
> skilled tradesmen, but have unique skills peculiar to their specific
> trades.  We wouldn't confuse a plumber's (deliberately or accidentally)
> a plumber's skillbase with a carpenter's, but yet we don't have a word
> that adequately differentiates the skills of either, just as we don't
> have a word that differentiates a plumber's apparel manufacturer's
> database from an auto dealer's database.  We just use the word
> 'database' in different contexts and hope our readers follow us.
>
> Or at least, I do.

Well, let me see if I am following you :)

Before your addition of procedures to the database, you can see it as a
large object with poor data hiding. Specifically, it has no methods, so
the only way to interact with it is to send messages directly to its
instance variables (the tables within it).

Once you have added procedures to the database, you have your
hypostasis, and now you can interact with it instead of its contained
objects. In fact, now you can enforce the encapsulation by revoking
permissions on the tables and only granting them on the procedures.
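
A minimal sketch of that enforcement (the role name app_user is an
assumption; the table and procedure names are from the thread's running
example):

    -- revoke direct access to the table, grant access only via the procedure
    revoke select, insert, update, delete on account from app_user;
    grant execute on getAccountsFor to app_user;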

I don't think that's too controversial, actually; many people would
regard this as a Best Practice for updates. I've seen it advocated for
selects too. However, I haven't seen anyone talking about it in OO
terms.

I find this very interesting, because it seems to me that databases
have a strong similarity to images, but whereas images are mostly an
unknown concept in mainstream programming, databases are commonplace.

google434 (9)
12/21/2006 3:40:50 PM
Michael Lucas-Smith wrote:
> <snip>
> So you want to be able to define your object model and relational
> model and talk to your database seemlessly - essentially forgetting
> about the mess that goes on underneath. You believe that talking to
> the database via procedures instead of sql will be better.
>
> I'm not going to agree or disagree with you on that matter, but I will
> say that what you'll need to do is avoid work for the developer to
> convince any one that this is a good idea. In short, you'll want to
> invent a framework that implements the procedures in the database
> based on the Smalltalk model.
>   
I shudder at the idea of a framework.  Do you consider patterns 
frameworks or just patterns?  Is Object Orientedness a framework or a 
paradigm?  Whichever anyone may /think/ it is, the first step (I think) 
is measuring the value and benefits of the metanoia and deciding whether 
or not it simplifies or complicates designs. 

Frameworks are not without expense.  Sometimes the abstraction comes at 
the cost of performance, flexibility, licensing, dependency, or obviousness.

It is possible (and I believe I have at least two systems that prove it) 
that frameworks are unnecessary when we treat our products' databases as 
though they were peer objects (really big ones) and their use was 
governed by the same guidelines, patterns, and practices we apply today 
to even the simplest objects.

For instance, we'd never (or shouldn't) think of accessing a Date 
instance's variables directly--we'd use its interface--which makes dates 
easy-to-use.  Rather than accessing an X.500 directory using low-level 
protocols, we use a higher-level interface called LDAP, which makes 
directories easier to use.  But for some reason when we want to talk to 
our system's database too often we use SQL instead of a higher-level 
interface that makes it easier to use, and enforces consistent use 
across applications and languages.

So, I guess I'm not sure if a framework is needed.  Maybe some examples, 
but at this point I'm unsure a framework is necessary or if it would be 
overkill.

Perhaps pictures would help...

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
Thomas
12/21/2006 3:46:32 PM
Frans Bouma wrote:
> topmind wrote:
>
> > Thomas Gagne wrote:
> > > An unexpected thing happened while debating topmind: I had an
> > > epiphany.  Instead of responding to the news group I thought about
> > > it for a short bit (very short) and posted an article to my blog
> > > titled, "The RDB is the biggest object in my system."
> > >
> > > <http://blogs.in-streamco.com/anything.php?title=the_rdb_is_the_bigg
> > > est_object_in_my_syst>
> >
> > > From the link:
> >
> > "Why shouldn't applications have embedded SQL? Because it's the same
> > as accessing the private data members of an object. It shouldn't be
> > done.  OO programmers know the correct way to interface with an
> > object is to use its method interface--not attempt direct
> > manipulation of the object's data. OO programmer's attempts to
> > violate that rule is what causes so much frustration mapping the
> > application's data graph into a relational database's tables, rows,
> > and columns. Those things belong to the DB--not to the application."
> >
> > (end quote)
> >
> > You OO'ers keep forgetting: SQL is an interface. I repeat, SQL is
> > an interface. It is not "low level hardware".
>
> 	SQL is a set-oriented language, it's not an interface as a language
> doesn't do anything without context (in this case a parser-interpreter
> combi)

Perhaps we need to clear up our working semantics with regard to
"language" and "interface".  Are methods interfaces or a language? I am
not sure it really matters and I don't want to get tangled in a
definition battle.

>
> > You OO'ers keep
> > viewing it as low-level stuff because you don't seem to like it, and
> > you wrap anything you don't like behind OO and call it "low level" so
> > that it fits your personal subjective preference and world view. OO
> > may fit your mind better for whatever reason, but you cannot assume
> > your head is God's template for every other individual.
>
> 	of course it's not low level stuff, SQL is a set-oriented language and
> therefore doesn't match object-oriented languages, so a 'translation'
> has to be made as you can't project one onto another in a 1:1 fashion.

An option is to not use OO.

>
> > BTW, Microsoft has ADO, DAO, etc. which are OO wrappers around RDBMS.
>
> 	no they're not. ADO and DAO aren't OO, as they're COM based so they're
> actually procedural (library interfaces implemented on a live object).

Being an OO wrapper on top of procedural calls does not necessarily
turn something into non-OO. Please clarify your labelling criteria.

> Furthermore, they provide the interface you talked about to the DB,
> which is often referred to as 'the client interface' or 'provider' when
> it comes to database access.

This does not contradict anything I've said.

>
> > Further, even if OO was the best way to access RDBMS thru an app,
> > that does not necessarily extrapolate to all domains. OO being good
> > for X does not automatically imply it is good for Y also.
>
> 	you don't get the point: in an OO application, which works on data IN
> the application, you want to do that in an OO fashion.

Why? Is OO proven objectively better?

> To obtain the
> data from the outside is initiated INSIDE the application, thus also in
> an OO fashion. As an RDBMS doesn't understand OO in most cases, but it
> works with SQL as it has a SQL interpreter in place to let you program
> its internal relational algebra statements in a more readable way,
> you've to map statements from OO to SQL and set oriented results (the
> sets) from the DB back to OO objects.

Are you suggesting methods such as "Add_AND_Clause(column,
comparisonOperator, Value)"?

Those are bloaty and ugly in my opinion, but let's save that value
judgement for a later debate on clause/criteria wrappers.

>
> > I have
> > already agreed that OO may be good for writing device drivers and
> > device-driver-like things; but it has not been shown useful to view
> > everything as a device driver. I am more interested in seeing how OO
> > models biz objects rather than how it wraps system services and the
> > like. Biz modeling has been OO's toughest evidence cookie to crack
> > (but perhaps not the only).
>
> 	huh? walls full of books have been written about this topic and you
> declare it the toughest cookie to crack...

Such as? I've seen biz examples in OOP books, but they did not show how
they were better than the alternative. Showing how to make an Employee
class does not by itself tell you why an Employee class is better than
not using OO.

>
> > And finally, just because one can view everything as objects does not
> > necessarily mean one should. One can also view everything as Lisp or
> > assembler or that Brainf*ck language.
>
> 	sure, but that doesn't mean the language necessarily fits the purpose
> you want to use it for. data oriented operations on sets is best suited
> with SQL, as it is designed for that. other languages are designed for
> other purposes. Mixing the two is often not that successful, though
> that's not a problem per se as processing data is more or less a 3 step
> process:
> - move data from data producer to data consumer
> - process data in data consumer
> - move data from original data consumer to original data producer
>
> so you can easily chop up this process in 3 parts and implement the
> parts in the language best fit for the job.
>
> > > I realized
> > > this is exactly what data-hiding is all about and why expert object
> > > oriented designers and programmers emphasize the importance of
> > > interfaces to direct data manipulation.
> >
> > "Data hiding"? I am working on "OO hiding". Relational is a high-level
> > modeling technique which tends to use "declarative interfaces".
> > Declarative interfaces are not necessarily worse than "behavioral
> > interfaces", which OO relies on. This sounds like yet another battle
> > between declarative interfaces versus behavioral interfaces.
>
> 	Could you define 'interface' for me, as it gets more and more abiguous
> definitions in this post alone.

If "language" suits you better, that is fine by me. My main point is
that it is not "physical implementation" to be wrapped away.  The use
of "direct data manipulation" was what I was responding to.

>
> > Note that
> > one could potentially mix them in RDBMS, but so far it does not appeal
> > very practical. And this is largely because the tight association
> > between data and behavior that OO likes simply does not work well in
> > biz apps. Thus, heavy behavioraltizing of RDBMS is not useful. I am
> > just pointing out it could be done and probably would be done if it
> > proved useful. OO forces an overly tight view of data and behavior.
> > Relational provides a consistency to declarative interfaces, but OO
> > does not provide any real structure and consistency to behavioral
> > interfaces. It creates shanty-town biz models.
>
>  	you declare a lot of IMHO rubbish as 'truth' here. E.g.: why wouldn't
> biz apps be helped with OO?

(For the record, the use of "rubbish" is a sign of rudeness. Thus, I
did not start the rudeness between us.)

I've never seen it happen. I am not claiming it can't or doesn't, only
that there is no public objective inspectable evidence that it does.
Yet, many push it thru as if it already "passed". I don't claim that
unicorns don't exist, only that I have not seen any captured for
analysis.

>
> > GOF patterns are supposed to be a solution, but GOF patterns have no
> > clear rules about when to use what and force a kind of IS-A view on
> > modeling instead of HAS-A.
>
> 	You also fall into the 'use pattern first, find problem for it
> later'-antipattern.
>
> 	a pattern is a (not the) solution for a well defined recognizable
> problem. So if you recognize the problem in your application, you can
> use the pattern which solves THAT problem to solve THAT problem in your
> application. THat's IT. The GoF book names a set of patterns and also
> the problems they solve. If you don't have the problems they solve, you
> don't need the patterns.

Well, a look-up table is usually simpler and more inspectable than
Visitor. Thus, if usefulness is our guide, then GOF patterns are often
not the best.

>
> 	Btw, the GoF book discourages inheritance a lot, just read it. It says
> don't use inheritance if you don't have to.

If you take away inheritance, you get "network structures" (AKA tangled
pasta). Dr. Codd sought to escape those by applying set theory, and
network structures thankfully fell out of favor, until the OO crowd
tried to bring them back from the dead.

>
> > GOF patterns are like an attempt to
> > catalog GO TO patterns instead of rid GO TO's. Relational is
> > comparable to the move from structured programming from GO TO's: it
> > provides more consistency and factors common activities into a single
> > interface convention (relational operators). OO lets people re-invent
> > their own just like there are a jillion ways to do the equivalent of
> > IF blocks with GO TO's.
>
> 	I've read a lot of nonsense in your post,

No, the nonsense comes from the OO zealots. They have no proof for biz
apps. Two paradigms are equal or unknown until proven otherwise. I want
to see science, not brochures.

> but this is one of the most
> striking examples. WHat on earth have GO TO's to do with the topic at
> hand?

It is an analogy.

>
> > OO has simply failed to factor and standardize common relationship and
> > collection idioms!!!!
> > OO has simply failed to factor and standardize common relationship and
> > collection idioms!!!!
>
> 	take your pills, you apparently forgot them ;)

And you took your LSD: you hallucinate evidence that ain't there.

> 
> 		FB
> 

-T-
oop.ismad.com

topmind (2124)
12/21/2006 4:55:09 PM
Thomas Gagne wrote:
> Let me try to clear something up, and thanks to Topmind, Frans, and
> Stefan for helping me get there.
>
> In OO, objects are subclassed to make them more specific, not more
> general.  I consider SQL to be a low level language, as far as RDBs are
> concerned, because it is application-ignorant.

So you are defining "low level" as application-ignorant? I find that a
stretch, but let's continue with it as a working/local definition.

> It's like C for
> relational operations.  SQL doesn't know anything about my application.
>
> So, I subclass Model (so-to-speak) and add data that is domain-specific
> to create my domain-specific database.

In the app? Please clarify. Do you mean create the database, or an OO
*view* of the database?

> Why access it from applications
> using the same domain-ignorant language?  Instead, I construct
> procedures that create a domain-specific interface.  Instead of the
> lower-level
>
>     select * from account, user where user.userId=X and account.userId =
>     user.userId
>
> when instead I can use
>
>     exec getAccountsFor @userId=X

A side note here before I continue. First, some versions of SQL can do
a "natural join" such that you don't have to explicitly declare the
common/default joins between two or more tables. Thus, there are
shorter possibilities. (SQL is hardly the pinnacle of relational
languages IMO, but it is still better than being without an RDBMS.)
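
For what it's worth, here is that shorthand (a sketch; it assumes userId
is the only column name the account and user tables have in common,
otherwise the natural join would silently match on the extra columns):

    -- the earlier explicit join, rewritten with NATURAL JOIN
    select *
    from   account natural join user
    where  userId = X;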

Now, the advantage of embedded SQL is that one can add to or change it
as needed without having to hop around. If we need an additional
criteria or columns, we just add it in ONE module. If you have a
separate place for SQL and another for app code, then you have to visit
and modify two different modules. That is more work because 2 is
greater than 1. Hopping around slows down development and
modifications.

>
> ?
>
> Besides its brevity, the procedure name clearly communicates the intent
> of the operation (stbpp pattern: intention revealing message), makes
> obvious its parameters, and provides a layer of indirection behind which
> its implementation can change without affecting the procedure's users.

Use comments. And, "getFoo" is hardly an improvement over
"select...from Foo".

>
>  From SQL I've constructed procedures to provide a higher-level,
> domain-specific, language-and-paradigm-neutral interface to a
> domain-specific database.

How is it more "neutral" than SQL? A stored procedure still has its own
syntax and rules.  What common scenarios are you saving us from?

>
> To find all the places in my application source code that get account
> information with user IDs it is much easier to find senders (callers) of
> getAccountsFor than it would be to find all the SQL referencing both the
> account and user tables.  Could the latter be done?  Sure, but when a
> more efficient and accurate alternative exists why would you?

Again, this gets back to the change effort cost and frequency analysis
that was part of the last topic. I weigh the costs of all the kinds of
changes when I decide to embed or separate SQL. Most changes that I
encounter in the field favor embedding. If your experience or shop
pattern is different, then we will just have to agree to disagree.

It again comes down to frequencies, and I disagree with your frequency
assessment. We are back to where we ended on the last topic. One should
look at the human effort and frequency involved, not just use
(disputed) labels such as "low level" etc. to shape our decision.

It is a kind of Frederick Winslow Taylor (time and motion studies)
style of decision making. In my years of experience, embedding reduces
the *net* hopping-around effort. Yes, there are times where isolation
of all the SQL would save time, but not enough to make up for the
others.

>
> --
> Visit <http://blogs.instreamfinancial.com/anything.php>
> to read my rants on technology and the finance industry.

-T-

0
topmind (2124)
12/21/2006 8:30:04 PM
topmind wrote:
> Thomas Gagne wrote:
>   
>> Let me try to clear something up, and thanks to Topmind, Frans, and
>> Stefan for helping me get there.
>>
>> In OO, objects are subclassed to make them more specific, not more
>> general.  I consider SQL to be a low level language, as far as RDBs are
>> concerned, because it is application-ignorant.
>>     
>
> So you are defining "low level" as application-ignorant? I find that a
> stretch, but let's continue with it as a working/local definition.
>   
Low-level meaning further away from my business.  Assembly is even 
further away from my business than SQL is.  Imagine if I'd created a 
language that was application-specific, it would be higher-level than 
SQL.  For instance, if I'd created a language that understood:

    purchase 10 shares of IBM into anAccount

It would be pretty darn high-level.  By grafting application-aware 
constructs into SQL (views and procedures) it becomes increasingly 
higher-level.
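
A rough sketch of what that grafting might look like (loosely T-SQL; the
procedure, table, and column names here are made up for illustration):

    create procedure purchaseShares
        @accountId int, @symbol varchar(10), @shares int
    as
    begin
        -- record the trade itself
        insert into trade (accountId, symbol, shares, tradeDate)
        values (@accountId, @symbol, @shares, getdate());

        -- keep the running position current
        update position
           set shares = shares + @shares
         where accountId = @accountId and symbol = @symbol;
    end

The application then speaks in something much closer to the business
sentence:

    exec purchaseShares @accountId=X, @symbol='IBM', @shares=10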
>   
>> It's like C for
>> relational operations.  SQL doesn't know anything about my application.
>>
>> So, I subclass Model (so-to-speak) and add data that is domain-specific
>> to create my domain-specific database.
>>     
>
> In the app? Please clarify. Do you mean create the database, or an OO
> *view* of the database?
>   
I actually mean, "create the database" as in:
create database bookstore;
create table book (...);
>   
>> Why access it from applications
>> using the same domain-ignorant language?  Instead, I construct
>> procedures that create a domain-specific interface.  Instead of the
>> lower-level
>>
>>     select * from account, user where user.userId=X and account.userId =
>>     user.userId
>>
>> when instead I can use
>>
>>     exec getAccountsFor @userId=X
>>     
>
> <snip>
>> ?
>>
>> Besides its brevity, the procedure name clearly communicates the intent
>> of the operation (stbpp pattern: intention revealing message), makes
>> obvious its parameters, and provides a layer of indirection behind which
>> its implementation can change without affecting the procedure's users.
>>     
>
> Use comments. And, "getFoo" is hardly an improvement over
> "select...from Foo".
>   
That's a strawman.  I'm sure you can imagine a more complicated SELECT 
statement joining 12 tables, with or without natural joins, UNION'ing to 
another select.  Sure, I could comment it, or I could just create a 
procedure and "exec searchForAccount @accountId=..."
>   
>>  From SQL I've constructed procedures to provide a higher-level,
>> domain-specific, language-and-paradigm-neutral interface to a
>> domain-specific database.
>>     
>
> How is it more "neutral" than SQL? A stored procedure still has its own
> syntax and rules.  What common scenarios are you saving us from?
>   
It's neutral in the sense it can be invoked from C, Python, Java, PHP, 
and SQL scripts--all with exactly the same behavior in the same 
database.  Remember, I'm not wrapping anything in OO, I'm just giving 
the DB an API that facilitates my domain solution across language and 
paradigm boundaries.

A view does the same thing, only it's not as capable as a stored 
procedure is.  I can create a view to do all kinds of useful projections 
that can be called from any language with an attachment to the database, 
but a view can't be extended later to record that a user queried it.
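
A minimal sketch of that kind of extension (loosely T-SQL; the audit
table and its columns are hypothetical):

    create procedure getAccountsFor @userId int
    as
    begin
        -- something a plain view cannot do: note who asked, and when
        insert into queryAudit (userId, queriedAt)
        values (@userId, getdate());

        select a.* from account a where a.userId = @userId;
    end

Callers keep issuing the same "exec getAccountsFor @userId=X"; only the
procedure body grows.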
> <snip>
> It again comes down to frequencies, and I dissagree with your frequency
> assessment. We are back to where we ended on the last topic. One should
> look at the human effort and frequency involved, not just use
> (disputed) labels such as "low level" etc. to shape our decision.
>
> It is a kind of Frederick Winslow Taylor (time and motion studies)
> style of decision making. In my years of experience, embedding reduces
> the *net* hopping-around effort. Yes, there are times where isolation
> of all the SQL would save time, but not enough to make up for the
> others.
>   
I have insider knowledge of two decent sized commercial finance 
applications.  One can count the number of lines of SQL because it's 
separated into procedures and views, the other can only estimate because 
the SQL is embedded.  The first /knows/ that 37% of its source code is 
SQL (sql, procedures, and views) making up 570 distinct requests of the 
DB.  The second estimates it around 33% but can not count (so quickly) 
the number of distinct DB requests there may be.  The designer of the 
second (rightly, I think) believes counting the number of SQL lines 
would be difficult since they're distributed throughout his code, and 
include string concatenation for variables and discriminations spread 
throughout functions.

If any SQL has to be modified in the first system, a new procedure can 
be loaded into the database without affecting any of dozens of 
applications.  In fact, their source code needn't even be grep'ed to 
find references to tables or columns.  The second example would require 
a grep, a fix, and a redeployment.  Even in an ASP moving something to 
production prudently requires a trip through some testing.

The chance of something needing fixing or enhancing landing in that
33-40% is 1 in 3 or 2 in 5, depending on which estimate you want to
take.  All fixes being equal (they are not), the first system's
applications will spend 37% less time going through QA, suggesting they
can move more quickly than competitors with 37% of their source code not
isolated from their applications (as yours sounds to be).
0
tgagne (596)
12/21/2006 9:02:16 PM
Thomas Gagne wrote:
> topmind wrote:
> > <snip>
> >
> > BTW, Microsoft has ADO, DAO, etc. which are OO wrappers around RDBMS.
> > Java and other vendors do also. Whether OO is the best way wrap RDBMS
> > calls is another debate. My point is they already exist.
> >
> > Further, even if OO *was* the best way to access RDBMS thru an app,
> > <snip>
> >
> You're missing something.  I am not advocating wrapping the RDB with OO
> stuffs.  I am not saying OO is the best way to access a
> database--directly or through any of the frameworks mentioned above.  In
> fact, I'm advocating the opposite.  Deal with the DB on its own terms,
> but treat it as an object.

Could you be more specific on "treat it as an object"?  OOP is not
consistently defined such that we have to be careful about labelling
stored procedures as an OO concept.

> I'm recommending against accessing it using
> its low-level interface (SQL),

In a sister reply, I challenged your labelling of SQL as "low level".

> but instead that a higher-level
> application/schema/problem domain-specific API be constructed, most
> likely using procedures, and that applications should access the DB that
> way.

Like I've described many times, there are some labor-intensive
drawbacks to that.

>
> --
> Visit <http://blogs.instreamfinancial.com/anything.php>
> to read my rants on technology and the finance industry.

-T-

0
topmind (2124)
12/21/2006 9:06:36 PM
topmind wrote:
> Thomas Gagne wrote:
>   
>> topmind wrote:
>>     
>>> <snip>
>>>
>>> BTW, Microsoft has ADO, DAO, etc. which are OO wrappers around RDBMS.
>>> Java and other vendors do also. Whether OO is the best way wrap RDBMS
>>> calls is another debate. My point is they already exist.
>>>
>>> Further, even if OO *was* the best way to access RDBMS thru an app,
>>> <snip>
>>>
>>>       
>> You're missing something.  I am not advocating wrapping the RDB with OO
>> stuffs.  I am not saying OO is the best way to access a
>> database--directly or through any of the frameworks mentioned above.  In
>> fact, I'm advocating the opposite.  Deal with the DB on its own terms,
>> but treat it as an object.
>>     
>
> Could you be more specific on "treat it as an object"?  OOP is not
> consistently defined such that we have to be careful about labelling
> stored procedures as an OO concept.
>
>   
>> I'm recommending against accessing it using
>> its low-level interface (SQL),
>>     
>
> In a sister reply, I challenged your labelling of SQL as "low level".
>
>   
>> but instead that a higher-level
>> application/schema/problem domain-specific API be constructed, most
>> likely using procedures, and that applications should access the DB that
>> way.
>>     
>
> Like I've described many times, there are some labor-intensive
> drawbacks to that.
>   
You've said it many times before, but perhaps you can give an example of 
some SQL that's easier to fix when embedded than as a 
procedure--including your normal QA procedures and promotion to production.
0
tgagne (596)
12/21/2006 9:13:26 PM
> If you take away inheritence, you get "network structures" (AKA tangled
> pasta). Dr. Codd sought to escape those by applying set theory, and
> network structures thankfully fell out of favor, until the OO crowd
> tried to bring them back from the dead.

Can you give an example of such as tangled pasta? How did Dr Codd make
network structures fall out of favor?

0
neo55592 (356)
12/21/2006 9:22:00 PM
Thomas Gagne wrote:
> topmind wrote:
> > Thomas Gagne wrote:
> >
> >> Let me try to clear something up, and thanks to Topmind, Frans, and
> >> Stefan for helping me get there.
> >>
> >> In OO, objects are subclassed to make them more specific, not more
> >> general.  I consider SQL to be a low level language, as far as RDBs are
> >> concerned, because it is application-ignorant.
> >>
> >
> > So you are defining "low level" as application-ignorant? I find that a
> > stretch, but let's continue with it as a working/local definition.
> >
> Low-level meaning further away from my business.  Assembly is even
> further away from my business than SQL is.  Imagine if I'd created a
> language that was application-specific, it would be higher-level than
> SQL.  For instance, if I'd created a language that understood:
>
>     purchase 10 shares of IBM into anAccount
>
> It would be pretty darn high-level.  By grafting application-aware
> constructs into SQL (views and procedures) it becomes increasingly
> higher-level.

Why not say "application-specific" instead of "high-level"?

> >
> >> It's like C for
> >> relational operations.  SQL doesn't know anything about my application.
> >>
> >> So, I subclass Model (so-to-speak) and add data that is domain-specific
> >> to create my domain-specific database.
> >>
> >
> > In the app? Please clarify. Do you mean create the database, or an OO
> > *view* of the database?
> >
> I actually mean, "create the database" as in:
> create database bookstore;
> create table book (...);
> >
> >> Why access it from applications
> >> using the same domain-ignorant language?  Instead, I construct
> >> procedures that create a domain-specific interface.  Instead of the
> >> lower-level
> >>
> >>     select * from account, user where user.userId=X and account.userId =
> >>     user.userId
> >>
> >> when instead I can use
> >>
> >>     exec getAccountsFor @userId=X
> >>
> >
> > <snip>
> >> ?
> >>
> >> Besides its brevity, the procedure name clearly communicates the intent
> >> of the operation (stbpp pattern: intention revealing message), makes
> >> obvious its parameters, and provides a layer of indirection behind which
> >> its implementation can change without affecting the procedure's users.
> >>
> >
> > Use comments. And, "getFoo" is hardly an improvement over
> > "select...from Foo".
> >
> That's a strawman.  I'm sure you can imagine a more complicated SELECT
> statement joining 12 tables, with or without natural joins, UNION'ing to
> another select.  Sure, I could comment it, or I could just create a
> procedure and "exec searchForAccount @accountId=..."

Regardless of how large it is, if it has to be changed it has to be
changed. Since SQL changes also tend to mirror app changes and vice
versa, if they are in the same module, we have less hopping around to
do.

> >
> >>  From SQL I've constructed procedures to provide a higher-level,
> >> domain-specific, language-and-paradigm-neutral interface to a
> >> domain-specific database.
> >>
> >
> > How is it more "neutral" than SQL? A stored procedure still has its own
> > syntax and rules.  What common scenarios are you saving us from?
> >
> It's neutral in the sense it can be invoked from C, Python, Java, PHP,
> and SQL scripts--all with exactly the same behavior in the same
> database.

Same with SQL. That is not a distinguishing feature.

> Remember, I'm not wrapping anything in OO, I'm just giving
> the DB an API that facilitates my domain solution across language and
> paradigm boundaries.
>
> A view does the same thing, only it's not as capable as a stored
> procedure is.  I can create a view to do all kinds of useful projections
> that can be called from any language with an attachment to the database,
> but a view can't be extended later to record that a user queried it.

Again, that is a vendor-specific limitation. It is like complaining
about OO because Java does not have multiple inheritance. The lack of
MI is a Java-specific lack, not an OO one.

> > <snip>
> > It again comes down to frequencies, and I dissagree with your frequency
> > assessment. We are back to where we ended on the last topic. One should
> > look at the human effort and frequency involved, not just use
> > (disputed) labels such as "low level" etc. to shape our decision.
> >
> > It is a kind of Frederick Winslow Taylor (time and motion studies)
> > style of decision making. In my years of experience, embedding reduces
> > the *net* hopping-around effort. Yes, there are times where isolation
> > of all the SQL would save time, but not enough to make up for the
> > others.
> >
> I have insider knowledge of two decent sized commercial finance
> applications.  One can count the number of lines of SQL because it's
> separated into procedures and views, the other can only estimate because
> the SQL is embedded.  The first /knows/ that 37% of its source code is
> SQL (sql, procedures, and views) making up 570 distinct requests of the
> DB.  The second estimates it around 33% but can not count (so quickly)
> the number of distinct DB requests there may be.  The designer of the
> second (rightly, I think) believes counting the number of SQL lines
> would be difficult since they're distributed throughout his code, and
> include string concatenation for variables and discriminations spread
> throughout functions.

That is a very minor reason to separate. Is it worth making the app 10%
to 25% more time-consuming to maintain *just* to be able to count
more easily? I have to object. Perhaps you have weird managers.

One can get an approximation by sampling about 20 modules, counting the
SQL, finding the percentage of source code it consumes, and then counting
all the source lines and multiplying by the sample percentage.

>
> If any SQL has to be modified in the first system, a new procedure can
> be loaded into the database without affecting any of dozens of
> applications.

It depends. Again, if the same query is used by *multiple* places in an
app, I am not against putting the SQL into a subroutine to simplify its
change.

> In fact, their source code needn't even be grep'ed to
> find references to tables or columns.

Why not? Why are SPs on the database more searchable than SQL in source
code?

> The second example would require
> a grep, a fix, and a redeployment.  Even in an ASP moving something to
> production prudently requires a trip through some testing.

I am not against testing either.

>
> The chances of something needing fixing or enhancing in 33-40% is 1 in 3
> or 2 in 5, depending on which estimate you want to take.

I am not sure what you are measuring here. 33-40% of what?

> All fixes
> being equal (they are not), the first system's applications will spend
> 37% less time going through QA, suggesting they can more more quickly
> than competitors with 37% of their source code not isolated from their
> applications (as yours sounds to be).

Please clarify what you are measuring/comparing.

Again, my decision to embed most SQL is based on my experience with
various change frequencies and change scenarios. It is not a "random"
decision. If your experience differs, so be it. Just don't claim it as a
universal "best practice"; otherwise I will hold you to the scientific
method.

By the way, one valid reason to separate is that the SQL "programmer"
is different from the app programmer and one does not know the other
language.

-T-

0
topmind (2124)
12/21/2006 9:33:54 PM
Neo wrote:
> > If you take away inheritence, you get "network structures" (AKA tangled
> > pasta). Dr. Codd sought to escape those by applying set theory, and
> > network structures thankfully fell out of favor, until the OO crowd
> > tried to bring them back from the dead.
>
> Can you give an example of such as tangled pasta?

OO Visitor pattern.

> How did Dr Codd make
> network structures fall out of favor?

By using examples and logic. And users of RDBMS found them more useful
than network DB's. Large network DB's only exist now for specialized
niches. If it was not for an OODBMS push from OO fans, nobody would
even talk about them anymore.

-T-

0
topmind (2124)
12/21/2006 9:39:14 PM
> > Can you give an example of such as tangled pasta?
> OO Visitor pattern.

Thx, I am still trying to understand it at wikipedia.

> > How did Dr Codd make network structures fall out of favor?
>
> By using examples and logic. And users of RDBMS found them more useful
> than network DB's. Large network DB's only exist now for specialized
> niches. If it was not for an OODBMS push from OO fans, nobody would
> even talk about them anymore.

The CODASYL network data model is mostly a misnomer. It is better called
a Hierarchical/Relational Hybrid Data Model. A true network database
should allow each thing to be related to any other thing, possibly
similar to the human mind. Much data does tend to fit in table-like
structures, making the RMDB an excellent tool.

Yet, network structures are everywhere. The following example represents
a network where john likes mary, john hates bob, and like is opposite of
hate. A query finds the person with whom john's relationship is
opposite of that with mary. Can an RMDB user post an equivalent solution
to model/query this simple network? If possible, the solution's
schema/queries should be resilient to future/unknown data requirements.

(new 'john) (new 'mary) (new 'bob)
(new 'like) (new 'hate) (new 'opposite)

(set  like opposite hate)
(set  hate opposite like)

(set  john like mary)
(set  john hate bob)

(; Get person with whom
    john's relationship is opposite of that with mary)
(; Gets bob)
(get  john (get (get john * mary) opposite *) *)

(; Get person with whom
    john's relationship is opposite of that with bob)
(; Gets mary) 
(get  john (get (get john * bob) opposite *) *)

0
neo55592 (356)
12/21/2006 10:43:18 PM
Thomas Gagne wrote:
> topmind wrote:
> > Thomas Gagne wrote:
> >
> >> topmind wrote:
> >>
> >>> <snip>
> >>>
> >>> BTW, Microsoft has ADO, DAO, etc. which are OO wrappers around RDBMS.
> >>> Java and other vendors do also. Whether OO is the best way wrap RDBMS
> >>> calls is another debate. My point is they already exist.
> >>>
> >>> Further, even if OO *was* the best way to access RDBMS thru an app,
> >>> <snip>
> >>>
> >>>
> >> You're missing something.  I am not advocating wrapping the RDB with OO
> >> stuffs.  I am not saying OO is the best way to access a
> >> database--directly or through any of the frameworks mentioned above.  In
> >> fact, I'm advocating the opposite.  Deal with the DB on its own terms,
> >> but treat it as an object.
> >>
> >
> > Could you be more specific on "treat it as an object"?  OOP is not
> > consistently defined such that we have to be careful about labelling
> > stored procedures as an OO concept.
> >
> >
> >> I'm recommending against accessing it using
> >> its low-level interface (SQL),
> >>
> >
> > In a sister reply, I challenged your labelling of SQL as "low level".
> >
> >
> >> but instead that a higher-level
> >> application/schema/problem domain-specific API be constructed, most
> >> likely using procedures, and that applications should access the DB that
> >> way.
> >>
> >
> > Like I've described many times, there are some labor-intensive
> > drawbacks to that.
> >
> You've said it many times before, but perhaps you can give an example of
> some SQL that's easier to fix when embedded than as a
> procedure--including your normal QA procedures and promotion to production.

I thought I already did. Anyhow, here is another:

We have a typical Employees table. After 9/11 we want to add a new
column "security clearance level". We need to add this to the Employee
input screen and the Query By Example screen.

For the input screen, we need to change the SQL from:

UPDATE emp SET ... foo=&bar& WHERE empID = &empID&

To

UPDATE emp SET ... foo=&bar&, secClrLvl = &secClrLvl& WHERE empID =
&empID&

It is in the same module that generates the screen. Thus I only have to
visit one module to add this column. I can add it both to the screen
field specification and to the SQL related to inserting and updating.

You would have to visit both the screen app module and the SP(s).
Plus, you have to add new parameters.
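
For contrast, the stored-procedure side of the same change might look
roughly like this (loosely T-SQL; the names follow the hypothetical
example above):

    alter procedure updateEmployee
        @empID int,
        @bar varchar(50),
        @secClrLvl int          -- new parameter for the new column
    as
        update emp
           set foo = @bar,
               secClrLvl = @secClrLvl
         where empID = @empID;

And the screen app module still has to be touched anyway to collect and
pass the new parameter.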

Often I also use techniques to generate most of the SET clause based on
data dictionaries or validation routines to kill 2 birds with one
stone. Passing such a generated string can be a PITA with stored
procedures.

-T-

0
topmind (2124)
12/21/2006 11:05:16 PM
Neo wrote:
> > > Can you give an example of such as tangled pasta?
> > OO Visitor pattern.
>
> Thx, I am still trying to understand it at wikipedia.
>
> > > How did Dr Codd make network structures fall out of favor?
> >
> > By using examples and logic. And users of RDBMS found them more useful
> > than network DB's. Large network DB's only exist now for specialized
> > niches. If it was not for an OODBMS push from OO fans, nobody would
> > even talk about them anymore.
>
> The CODYSL network data model is mostly a misnomer. It is better called
> a Hierarchal/Relational Hybrid Data Model. A true network database
> should allows each thing to be related to any other thing, possibly
> similar to the human mind. Much data does tends to fit in table-like
> structures making RMDB an excellent tool.
>
> Yet, network structures are everywhere. Following example represent a
> network where john likes mary, john hates bob, and like is opposite of
> hate. A query finds the person with whom john's relationship is
> opposite that of with Mary. Can a RMDB user post an equivalent solution
> to model/query this simple network? If possible, the solution's
> schema/queries should be resilent to future/unknown data requirements.

One generally uses many-to-many tables for such. Example:

table: Likes
-------------
personRef1
personRef2

table: Hates
--------------
personRef1
personRef2

table: Opposites
------------
factorRef1
factorRef2

Or we could meta-tize it to make it more flexible:

table: PeopleRelationships
-----------
personRef1
personRef2
relationRef   // Example: "Hate" ("relation" table not shown)

table: RelationRelationships
--------
relationRef1
relationRef2
relationRef  // Example: "Opposite"

I will leave the query work to somebody else.

But this is kind of a "toy" example, like something from an AI lab. I
would like to see something more practical.

>
> (new 'john) (new 'mary) (new 'bob)
> (new 'like) (new 'hate) (new 'opposite)
>
> (set  like opposite hate)
> (set  hate opposite like)
>
> (set  john like mary)
> (set  john hate bob)
>
> (; Get person with whom
>     john's relationship is opposite of that with mary)
> (; Gets bob)
> (get  john (get (get john * mary) opposite *) *)
>
> (; Get person with whom
>     john's relationship is opposite of that with bob)
> (; Gets mary) 
> (get  john (get (get john * bob) opposite *) *)

-T-

0
topmind (2124)
12/21/2006 11:26:31 PM
> But this is kind of a "toy" example, such as an AI lab. I would like to
> see something more practical. I will leave the query work to somebody else.

If it is a "toy" example, how difficult could it be to post the query
to find the person with whom john's relationship is opposite of, that
with mary? Then I can proceed to compare how an rmdb vs a network-type
db handle additional data requirements.

0
neo55592 (356)
12/22/2006 3:37:27 AM
Neo wrote:
> > But this is kind of a "toy" example, such as an AI lab. I would like to
> > see something more practical. I will leave the query work to somebody else.
>
> If it is a "toy" example, how difficult could it be to post the query
> to find the person with whom john's relationship is opposite of, that
> with mary?

Toy examples are not necessarily trivial. The problem is their
representativeness, not simplicity level.  (Whether the solution is
simple or not, I won't bother with.)

> Then I can proceed to compare how an rmdb vs a network-type
> db handle additional data requirements.

Based on past experience with the dubious utility of toy/lab examples,
I think I will elect to skip this.  

-T-

0
topmind (2124)
12/22/2006 7:02:11 AM
topmind wrote:
> Frans Bouma wrote:
> > topmind wrote:
> > > You OO'ers keep forgetting: SQL is an interface. I repeat, SQL is
> > > an interface. It is not "low level hardware".
> > 
> > 	SQL is a set-oriented language, it's not an interface as a language
> > doesn't do anything without context (in this case a
> > parser-interpreter combi)
> 
> Perhaps we need to clear up our working semantics with regard to
> "language" and "interface".  Are methods interfaces or a language? I
> am not sure it really matters and I don't want to get tangled in a
> definition battle.

	Methods are part of an interface written in a language. SQL is a
language, a set of stored procs is an interface.

	it's not getting much simpler than that. 

> > > BTW, Microsoft has ADO, DAO, etc. which are OO wrappers around
> > > RDBMS.
> > 
> > 	no they're not. ADO and DAO aren't OO, as they're COM based so
> > they're actually procedural (library interfaces implemented on a
> > live object).
> 
> Being an OO wrapper on top of procedural calls does not necessarily
> turn something into non-OO. Please clarify your labelling criteria.

	ADO isn't OO, it's COM. COM isn't OO, despite the fact that it lets you
believe you're working with objects; that is actually a facade. You're
not working with OOP-style objects, as there's no inheritance or
polymorphism; you just talk to an interface implemented by an
object-esque construct in memory, which could be seen as a C struct with
function pointers.

> > > Further, even if OO was the best way to access RDBMS thru an app,
> > > that does not necessarily extrapolate to all domains. OO being
> > > good for X does not automatically imply it is good for Y also.
> > 
> > 	you don't get the point: in an OO application, which works on data
> > IN the application, you want to do that in an OO fashion.
> 
> Why? Is OO proven objectively better?

	why would one WANT to use 2 paradigms, which aren't related (as in, one
derived from the other), in a single application? (let's redirect the
'what's a paradigm' posts to /dev/null first)

> > To obtain the
> > data from the outside is initiated INSIDE the application, thus
> > also in an OO fashion. As an RDBMS doesn't understand OO in most
> > cases, but it works with SQL as it has a SQL interpreter in place
> > to let you program its internal relational algebra statements in a
> > more readable way, you've to map statements from OO to SQL and set
> > oriented results (the sets) from the DB back to OO objects.
> 
> Are you suggesting methods such as "Add_AND_Clause(column,
> comparisonOperator, Value)"?

	No.

> Those are bloaty and ugly in my opinion, but let's save that value
> judgement for a later debate on clause/criteria wrappers.

	You can perfectly well write a set of predicate classes which the
developer can inherit to make them more specific to the domain the
developer is working with.

> > > I have
> > > already agreed that OO may be good for writing device drivers and
> > > device-driver-like things; but it has not been shown useful to
> > > view everything as a device driver. I am more interested in
> > > seeing how OO models biz objects rather than how it wraps system
> > > services and the like. Biz modeling has been OO's toughest
> > > evidence cookie to crack (but perhaps not the only).
> > 
> > 	huh? walls full of books have been written about this topic and you
> > declare it the toughest cookie to crack...
> 
> Such as? I've seen biz examples in OOP books, but they did not show
> how they were better than the alternative. Showing how to make an
> Employee class does not by itself tell you why an Employee class is
> better than not using OO.

	I'm not saying everything should be OO because it's otherwise not
possible; you can write any program in plain C. OO is often more
suitable for writing an application because the resulting application
is developed faster (code re-use) and is more maintainable. Business
apps can be very suitable for an OO language, simply because you
have data and logic operating on that data, which is IMHO the ideal
environment for an OOP approach.

> > > GOF patterns are supposed to be a solution, but GOF patterns have
> > > no clear rules about when to use what and force a kind of IS-A
> > > view on modeling instead of HAS-A.
> > 
> > 	You also fall into the 'use pattern first, find problem for it
> > later'-antipattern.
> > 
> > 	a pattern is a (not the) solution for a well defined recognizable
> > problem. So if you recognize the problem in your application, you
> > can use the pattern which solves THAT problem to solve THAT problem
> > in your application. THat's IT. The GoF book names a set of
> > patterns and also the problems they solve. If you don't have the
> > problems they solve, you don't need the patterns.
> 
> Well, a look-up table is usually simpler and more inspectable than
> Visitor. Thus, if usefulness is our guide, then GOF patterns are often
> not the best.

	Visitor pattern is a pattern I don't think is very useful as the
problem it solves isn't very common.

	But if your point is that OO is crap because Visitor pattern is silly
and thus all that's said in the GoF book is therefore also retarded
then we're done here.

> > > GOF patterns are like an attempt to
> > > catalog GO TO patterns instead of rid GO TO's. Relational is
> > > comparable to the move from structured programming from GO TO's:
> > > it provides more consistency and factors common activities into a
> > > single interface convention (relational operators). OO lets
> > > people re-invent their own just like there are a jillion ways to
> > > do the equivalent of IF blocks with GO TO's.
> > 
> > 	I've read a lot of nonsense in your post,
> 
> No, the nonsense comes from the OO zealots. They have no proof for biz
> apps. Two paradigms are equal or unknown until proven otherwise. I
> want to see science, not brochures.

	you also have no proof for your claims either. As you started the
claims, let's see them.


		FB

-- 
------------------------------------------------------------------------
Lead developer of LLBLGen Pro, the productive O/R mapper for .NET
LLBLGen Pro website: http://www.llblgen.com
My .NET blog: http://weblogs.asp.net/fbouma
Microsoft MVP (C#) 
------------------------------------------------------------------------
0
Frans
12/22/2006 8:51:56 AM
topmind wrote:
> Neo wrote:
> > Can you give an example of such as tangled pasta?
>
> OO Visitor pattern.

I was going to let it ride earlier, as you seemed to be having a good
rant, and I didn't want to spoil it, but since you've mentioned the
Visitor pattern twice, I would like to know exactly what you understand
by it. Earlier, you wrote:

"Well, a look-up table is usually simpler and more inspectable than
Visitor. Thus, if usefulness is our guide, then GOF patterns are often
not the best."

I can't think of a situation where a lookup table is interchangeable
with something you would use a Visitor pattern for. The Visitor pattern
is used to flatten arbitrary structures; often trees, but applicable to
any kind of graph.

It's also a pattern that isn't necessarily OO. If you pass a lambda to
a function that walks a tree (or other graph), that's effectively the
same thing.

Mike

0
google434 (9)
12/22/2006 10:02:29 AM
topmind wrote:
> Thomas Gagne wrote:
>   
>> topmind wrote:
>>     
>>> Thomas Gagne wrote:
>>>
>>>       
>>>> topmind wrote:
>>>>
>>>>         
>>>>> <snip>
>>>>>
>>>>> BTW, Microsoft has ADO, DAO, etc. which are OO wrappers around RDBMS.
>>>>> Java and other vendors do also. Whether OO is the best way wrap RDBMS
>>>>> calls is another debate. My point is they already exist.
>>>>>
>>>>> Further, even if OO *was* the best way to access RDBMS thru an app,
>>>>> <snip>
>>>>>
>>>>>
>>>>>           
>>>> You're missing something.  I am not advocating wrapping the RDB with OO
>>>> stuffs.  I am not saying OO is the best way to access a
>>>> database--directly or through any of the frameworks mentioned above.  In
>>>> fact, I'm advocating the opposite.  Deal with the DB on its own terms,
>>>> but treat it as an object.
>>>>
>>>>         
>>> Could you be more specific on "treat it as an object"?  OOP is not
>>> consistently defined such that we have to be careful about labelling
>>> stored procedures as an OO concept.
>>>
>>>
>>>       
>>>> I'm recommending against accessing it using
>>>> its low-level interface (SQL),
>>>>
>>>>         
>>> In a sister reply, I challenged your labelling of SQL as "low level".
>>>
>>>
>>>       
>>>> but instead that a higher-level
>>>> application/schema/problem domain-specific API be constructed, most
>>>> likely using procedures, and that applications should access the DB that
>>>> way.
>>>>
>>>>         
>>> Like I've described many times, there are some labor-intensive
>>> drawbacks to that.
>>>
>>>       
>> You've said it many times before, but perhaps you can give an example of
>> some SQL that's easier to fix when embedded than as a
>> procedure--including your normal QA procedures and promotion to production.
>>     
>
> <snip decent example>
>   
Had you used a procedure the amount of coding change would have been the 
same.  True, you visited one location rather than two.

Your example is a change to an interface.  It doesn't matter what 
language or problem you're working with.  When an interface changes 
everything that uses the interface must change.

Consider an example that doesn't change the interface.

Our system tracks the buying, selling, and payoffs of financial 
contracts.  Our users like to report on arbitrary time periods to see 
which contracts were open.  Using transaction history, SQL can answer the
question for any time period--but with unsatisfactory performance.

To help historical queries run faster (fewer IOs) we created a 
dailyContractBalance table (yes--it is denormalized and redundant--RDB 
purists may be balking now).  Simply put, it records for every day 
(really) which contracts were open that day.  The reports now run much 
faster but we have to maintain the table.  The report is presented on a 
webpage from PHP.  The transactions are created in the back office by a 
Smalltalk application.  Two procedures are involved, the one that 
returns the report and the one that adds transactions.

To effect this non-trivial change we updated the procedure that added 
transactions to add and remove contracts from the dailyContractBalance 
table as they are purchased (insert) and paid-off (delete).  We were 
able to test the new procedure in isolation to prove it added and 
removed contracts correctly from the dailyContractBalance table.

No modifications to the Smalltalk were necessary.  No Smalltalk code had 
to be shipped.

We then modified the report procedure to query the dailyContractBalance 
table instead of the transaction history table.  We were able to test it 
in isolation to prove it returned the right answer AND that it performed 
faster.  The same procedure was used by users to export data into 
spreadsheets.  That module was unchanged as well.

No modification to the website was necessary.  No PHP code had to be 
shipped.
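
A rough sketch of the shape of the change (loosely T-SQL; the table,
procedure, and column names here are simplified stand-ins for ours):

    create table dailyContractBalance (
        balanceDate date not null,
        contractId  int  not null,
        primary key (balanceDate, contractId)
    );

    -- inside the existing addTransaction procedure (interface unchanged):
    -- purchases start recording the contract as open ...
    insert into dailyContractBalance (balanceDate, contractId)
    values (@tradeDate, @contractId);

    -- ... and payoffs stop recording it from that day forward
    delete from dailyContractBalance
     where contractId = @contractId and balanceDate >= @payoffDate;

    -- the report procedure now reads the denormalized table instead of
    -- reconstructing open contracts from transaction history
    select contractId
      from dailyContractBalance
     where balanceDate = @asOfDate;

The procedures' parameters and result sets are untouched, which is why
nothing outside the database had to ship.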

This is an example of what happens when the database is treated as 
though it were an object.  If the interface doesn't change then modules 
that depend on the interface don't need to change.  We were able to do 
some significant changes inside the database (new table) AND change the 
implementation of two procedures without affecting their interface 
(parameters and result set).

Whether SQL is embedded by hand (what it sounds like you do), is 
generated by a framework, or is the result of some other kind of 
OO-to-RDB mapping, the change I described would have been more painful 
to implement, would have involved more modules and more programs, and 
depending on your QA policy may have required a longer pass through QA, 
leading to a delayed production upgrade.

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
12/22/2006 11:51:24 AM
>>>>> Topmind: Dr. Codd sought to escape those by applying set theory, and network structures thankfully fell out of favor, until the OO crowd tried to bring them back from the dead.
>>>>
>>>> Neo: How did Dr Codd make network structures fall out of favor?
>>>
>>>Topmind: By using examples and logic...
>>
>> Neo: Following example represents a network where john likes mary, john hates bob, and like is opposite of hate. A query finds the person with whom john's relationship is opposite that of with mary. Can a RMDB user post an equivalent solution to model/query this simple network? If possible, the solution's schema/queries should be resilent to future/unknown data requirements.
>
> Topmind: Based on past experience with the dubious utility of toy/lab examples, I think I will elect to skip this.

Ok, I will think of another example.

0
neo55592 (356)
12/22/2006 4:09:42 PM
Neo wrote:
> >>>>> Topmind: Dr. Codd sought to escape those by applying set theory, and network structures thankfully fell out of favor, until the OO crowd tried to bring them back from the dead.
> >>>>
> >>>> Neo: How did Dr Codd make network structures fall out of favor?
> >>>
> >>>Topmind: By using examples and logic...
> >>
> >> Neo: Following example represents a network where john likes mary, john hates bob, and like is opposite of hate. A query finds the person with whom john's relationship is opposite that of with mary. Can a RMDB user post an equivalent solution to model/query this simple network? If possible, the solution's schema/queries should be resilent to future/unknown data requirements.
> >
> > Topmind: Based on past experience with the dubious utility of toy/lab examples, I think I will elect to skip this.
>
> Ok, I think of another example.

For some ideas, an airline reservation system or a grades/class college
tracking system make fairly good examples. Boring, perhaps, but that is
why they reflect the real world more  :-)

(P.S. I apologize for the duplicate post. My internet connection
burped.)

-T-

0
topmind (2124)
12/22/2006 4:31:51 PM
> Based on past experience with the dubious utility of toy/lab examples, I think I will elect to skip this.

Here is another network example. Adam has children named John(male),
Jack(male) and Mary(female). Find John's sibling of opposite gender.
Below is an implementation using a network-type db. What RMDB
schema/query implements the equivalent? Note that the query does not
refer to John's father (Adam) or John's gender (male) directly.

(new 'male 'gender) (new 'female 'gender)

(new 'opposite 'verb)
(set male opposite female) (set female opposite male)

(new 'adam)
(new 'john)  (set john gender male)
(new 'jack)  (set jack gender male)
(new 'mary) (set mary gender female)

(set adam child john)
(set adam child jack)
(set adam child mary)

(; Get john's sibling of opposite gender
    by getting the thing
    whose gender is opposite
    and is child of john's parent
    and that person is not himself)
(; Gets mary)
(!= (and (get * gender (get (get john gender *) opposite *))
            (get (get * child john) child *))
     john)

0
neo55592 (356)
12/22/2006 4:42:16 PM
> For some ideas, an airline reservation system or a grades/class college tracking system make fairly good examples. Boring, perhaps, but that is why they reflect the real world more  :-)

Can you describe what each example should store and be able to query?
If possible, could you give some sample data?

0
neo55592 (356)
12/22/2006 5:20:48 PM
Thomas Gagne wrote:
> topmind wrote:

> >> You've said it many times before, but perhaps you can give an example of
> >> some SQL that's easier to fix when embedded than as a
> >> procedure--including your normal QA procedures and promotion to production.
> >>
> >
> > <snip decent example>
> >
> Had you used a procedure the amount of coding change would have been the
> same.  True, you visited one location rather than two.
>
> Your example is a change to an interface.  It doesn't matter what
> language or problem you're working with.  When an interface changes
> everything that uses the interface must change.

Agreed. But most changes *are* interface-related changes, I've come to
find out. OO'ers seem to overemphasize *implementation* changes, while
what I see for my domain (custom biz apps) is that most changes are
requirements changes, not implementation changes. The OO books
emphasize the wrong kinds of changes. Thus, I optimize my designs for
requirements changes, not implementation changes.

>
> Consider an example that doesn't change the interface.
>
> Our system tracks the buying, selling, and payoffs of financial
> contracts.  Our users like to report on arbitrary time periods to see
> which contracts were open.  Using transaction history SQL can answer the
> question for any time period--but with unsatisfactory performance.
>
> To help historical queries run faster (fewer IOs) we created a
> dailyContractBalance table (yes--it is denormalized and redundant--RDB
> purists may be balking now).  Simply put, it records for every day
> (really) which contracts were open that day.  The reports now run much
> faster but we have to maintain the table.  [....]

If I am not mistaken, you presented this example a few weeks ago in the
"old" topic. I don't dispute that particular situation may have been
helped by separation (although some of your issues seemed
vendor-specific). But again, software design is a lot like investment
management: you have to weigh your investment options (design
decisions) against estimated future probabilities of different kinds of
changes.  There is rarely a free lunch; it is a matter of playing the
odds. If you overhaul the tables or DB schemas, it may indeed require
visiting a lot of embedded SQL. However, those happen maybe once every
2 years or so (in my experience) such that you spend a week or so
making the changes. However, feature changes happen just about every
week. It is more economical to save 3 hours every week for the 2 years
rather than waste 3 hours a week to save 60 hours once in those two years.

For those two years:

Separation:  3hr x 50 x 2 = 300 hours (assume 50 work weeks per year)

Embedded: 1 x 60hr = 60 hours

Based on the givens, it is clear that embedding is more economical.

[...]

> This is an example of what happens when the database is treated as
> though it were an object.  If the interface doesn't change then modules
> that depend on the interface don't need to change.  We were able to do
> some significant changes inside the database (new table) AND change the
> implementation of two procedures without affecting their interface
> (parameters and result set).

That is a property of function/procedures, not OOP. There is no
polymorphism nor inheritance that I could see in your example since you
don't have multiple implementations active at the same time. You simply
gutted the old implementation and replaced it with a new one. This is
old-fashioned function/procedure "implementation hiding". Giving OO
credit is a big stretch.

-T-

0
topmind (2124)
12/22/2006 5:33:58 PM
Neo wrote:
> > For some ideas, an airline reservation system or a grades/class college tracking system make fairly good examples. Boring, perhaps, but that is why they reflect the real world more  :-)
>
> Can you describe what each example should store and be able to query?
> If possible, could you give some sample data?

This may help some:

http://c2.com/cgi/wiki?CampusExample

-T-

0
topmind (2124)
12/22/2006 5:41:59 PM
topmind wrote:
> Thomas Gagne wrote:
>   
>> topmind wrote:
>>     
>
>   
>>>> You've said it many times before, but perhaps you can give an example of
>>>> some SQL that's easier to fix when embedded than as a
>>>> procedure--including your normal QA procedures and promotion to production.
>>>>
>>>>         
>>> <snip decent example>
>>>
>>>       
>> Had you used a procedure the amount of coding change would have been the
>> same.  True, you visited one location rather than two.
>>
>> Your example is a change to an interface.  It doesn't matter what
>> language or problem you're working with.  When an interface changes
>> everything that uses the interface must change.
>>     
>
> Agreed. But most changes *are* interfaces-related changes I've come to
> find out. OO'ers seem to overemphasize *implementation* changes, when
> what I see for my domain (custom biz apps) most changes are
> requirements changes, not implementation changes. The OO books
> emphasize the wrong change kinds. Thus, I optimize my designs for
> requirements changes, not implementation changes.
>
>   
>> Consider an example that doesn't change the interface.
>>
>> Our system tracks the buying, selling, and payoffs of financial
>> contracts.  Our users like to report on arbitrary time periods to see
>> which contracts were open.  Using transaction history SQL can answer the
>> question for any time period--but with unsatisfactory performance.
>>
>> To help historical queries run faster (fewer IOs) we created a
>> dailyContractBalance table (yes--it is denormalized and redundant--RDB
>> purists may be balking now).  Simply put, it records for every day
>> (really) which contracts were open that day.  The reports now run much
>> faster but we have to maintain the table.  [....]
>>     
>
> If I am not mistaken, you presented this example a few weeks ago in the
> "old" topic. I don't dispute that particular situation may have been
> been helped by separation (although some of your issues seemed
> vendor-specific). But again, software design is a lot like investment
> management: you have to weigh your investment options (design
> decisions) against estimated future probabilities of different kinds of
> changes.  There is rarely a free lunch; it is a matter of playing the
> odds. If you overhaul the tables or DB schemas, it may indeed require
> visiting a lot of embedded SQL. However, those happen maybe once every
> 2 years or so (in my experience) such that you spend a week or so
> making the changes. However, feature changes happen just about every
> week. It is more economical to save 3 hours every week for the 2 years
> rather waste 3 hours a week to save 60 hours once in that two years.
>   
It's hard to buy your estimates without supporting data.  All our fixes 
and enhancements are entered into a bug tracking system and our code 
into CVS.  I'm wondering now if I can query CVS in such a way as to 
identify how often we change implementation v. interface.

As to your estimates, if the interface breaks, the same amount of source 
work is needed whether the change is in two places or one.  Changes to 
the application are the most expensive since touching one part of it 
necessarily requires testing all of it as well as shipping it.

In terms of investment management, a portfolio manager wouldn't ignore 
the fact that he can improve returns on 40% of his portfolio if he 
separated SQL from the application.  Even if such changes happen infrequently, 
he's successfully hedged against them having a negative effect on his 
overall investment--especially when that insurance is free (it didn't 
create more code).
> <snip>
>
> That is a property of function/procedures, not OOP. There is no
> polymorphism nor inheritence that I could see in your example since you
> don't have multiple implementations active at the same time. You simply
> gutted the old implementation and replaced it with a new one. This is
> old-fashioned function/procedure "implementation hiding". Giving OO
> credit is a big stretch.
>   
First, you're inside comp.object, so using OO terms makes sense, don't 
you think?  Second, I'm not giving OO credit--just using OO terminology 
since it's applicable.  Third, I believe there's sufficient evidence 
that OO designers and programmers are likely to benefit most from 
database interfaces since many of them are trying (very hard) to marry 
object models to DB models.  I've discovered that enterprise is 
unnecessary and that a solution is well within both the technical and 
ideological grasp of OO's practitioners.

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
12/22/2006 5:52:04 PM
Thomas Gagne wrote:
> 1) no application should directly
> access the DB's data (no SQL) and 2) applications should use the DB only
> through its interface.  Stored procedures are the best example of the
> latter I know of.

Thomas, there is a lot of nonsense in your post that I don't have any
desire to address. I just highlighted a couple of sentences.

First, when you mention "stored procedure" you dumbed down the discussion
significantly, because most of the stored procedure usages in
application programming practice are really disgusting. What is wrapping
an SQL command like this

procedure insert_new_employee( <100 of primitive data type arguments> )
begin
    #sql insert into employee( <100 of column names> ) values ( <100 of
values> );
end

supposed to achieve?

Next, it has been pointed out to you repeatedly that a query via a function
call (object-wrapped or not) is inferior to an SQL query. Let me give you
an example: square root calculation. In procedural programming you
call

squareRoot(9)

Meyer made a great deal out of the so-called "design by contract" idea. He
noticed that the square root function is just one manifestation of the square
relation

y = x^2

but he doesn't go further than introducing an assertion which validates
the y = x^2 predicate between the function argument and return value.

In the relational world you query the square relation. Given a known value
of x, you query all the matching values of y (there happens to be only
one). Or, given a known value of y, you query all the matching values of
x (there happen to be two).

As you may notice, even in this trivial example you need at least 2
functions
squareRoot(int)
and
square(int)
to perform all possible queries against the database of square numbers.
The number of functions that "interface" your database explodes quickly
with increasing complexity. In the relational world you just compose your
query out of a small set of well-defined operations (selection
followed by projection in this square root example).
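
A minimal sketch of that relational reading (table and column names are
made up for illustration):

    -- the square relation held as data: y = x * x
    create table square (x int not null, y int not null);

    -- "square of 3": restrict on x, project y
    select y from square where x = 3;

    -- "square root of 9": restrict on y, project x (yields 3 and -3)
    select x from square where y = 9;

Both directions fall out of the one relation; no second "interface"
function is needed.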

On a final note, adding objects into the picture really changes nothing.

0
12/22/2006 6:27:46 PM
Thomas Gagne wrote:
> topmind wrote:
> > Thomas Gagne wrote:
> >
> >> topmind wrote:
> >>
> >
> >
> >>>> You've said it many times before, but perhaps you can give an example of
> >>>> some SQL that's easier to fix when embedded than as a
> >>>> procedure--including your normal QA procedures and promotion to production.
> >>>>
> >>>>
> >>> <snip decent example>
> >>>
> >>>
> >> Had you used a procedure the amount of coding change would have been the
> >> same.  True, you visited one location rather than two.
> >>
> >> Your example is a change to an interface.  It doesn't matter what
> >> language or problem you're working with.  When an interface changes
> >> everything that uses the interface must change.
> >>
> >
> > Agreed. But most changes *are* interfaces-related changes I've come to
> > find out. OO'ers seem to overemphasize *implementation* changes, when
> > what I see for my domain (custom biz apps) most changes are
> > requirements changes, not implementation changes. The OO books
> > emphasize the wrong change kinds. Thus, I optimize my designs for
> > requirements changes, not implementation changes.
> >
> >
> >> Consider an example that doesn't change the interface.
> >>
> >> Our system tracks the buying, selling, and payoffs of financial
> >> contracts.  Our users like to report on arbitrary time periods to see
> >> which contracts were open.  Using transaction history SQL can answer the
> >> question for any time period--but with unsatisfactory performance.
> >>
> >> To help historical queries run faster (fewer IOs) we created a
> >> dailyContractBalance table (yes--it is denormalized and redundant--RDB
> >> purists may be balking now).  Simply put, it records for every day
> >> (really) which contracts were open that day.  The reports now run much
> >> faster but we have to maintain the table.  [....]
> >>
> >
> > If I am not mistaken, you presented this example a few weeks ago in the
> > "old" topic. I don't dispute that particular situation may have been
> > been helped by separation (although some of your issues seemed
> > vendor-specific). But again, software design is a lot like investment
> > management: you have to weigh your investment options (design
> > decisions) against estimated future probabilities of different kinds of
> > changes.  There is rarely a free lunch; it is a matter of playing the
> > odds. If you overhaul the tables or DB schemas, it may indeed require
> > visiting a lot of embedded SQL. However, those happen maybe once every
> > 2 years or so (in my experience) such that you spend a week or so
> > making the changes. However, feature changes happen just about every
> > week. It is more economical to save 3 hours every week for the 2 years
> > rather waste 3 hours a week to save 60 hours once in that two years.
> >
> It's hard to buy your estimates without supporting data.

This works both ways. If your ratio is flipped for whatever reason,
then go with it. I am only describing what I observe and the reasoning
that I use based on these observations.

But I want to at least make sure we agree on the impact of such changes
even if we disagree on the frequency. Readers can plug in their own
observed frequencies.

> All our fixes
> and enhancements are entered into a bug tracking system and our code
> into CVS.  I'm wondering now if I can query CVS in such a way as to
> identify how often we change implementation v. interface.

It is not my fault that your CVS brand is not query-able.  But, if your
management is hard-up to count every speck of dust and separating the
SQL facilitates this for your existing tools, then go ahead and
separate. Your management's priorities may be questionable, but they
call the shots and you have to please them. Speak your mind, but if
they don't want to listen, let it go. The boss is the boss; the work
place is not a democracy.

>
> As to your estimates, if the interface breaks the same amount of source
> work is needed whether the change is in two places or one.  Changes to
> the application are the most expensive since touching one part of it
> necessarily requires testing all of it as well as shipping it.

You should test it *all* anyhow. One can unit test to make sure a given
SP is fine for the tests given to it, but sometimes it may be called by
the app in ways not anticipated by the unit tests. Thus, I don't see
how testing effort would be a different issue either way.

And, I am not (fully) disputing that the particular scenario you gave
would be more costly. I am just doing a frequency-cost analysis and
comparing it to other scenarios.

>
> In terms of investment management, a portfolio manager wouldn't ignore
> the fact that he can improve returns on 40% of his portfolio if he
> separated SQL from the application.  Even if they happen infrequently
> he's successfully hedged against it having a negative effect on his
> overall investment--especially when that insurance is free (it didn't
> create more code).

It is not free. It increases the development time for feature changes.

> > <snip>
> >
> > That is a property of function/procedures, not OOP. There is no
> > polymorphism nor inheritance that I could see in your example since you
> > don't have multiple implementations active at the same time. You simply
> > gutted the old implementation and replaced it with a new one. This is
> > old-fashioned function/procedure "implementation hiding". Giving OO
> > credit is a big stretch.
> >
> First, you're inside comp.object, so using OO terms makes sense, don't
> you think?

No matter where it is discussed, it is not an OO-specific concept. A
rose is still a rose in Timbuktu.

> Second, I'm not giving OO credit--just using OO terminology
> since it's applicable.

How about we find another term. "Hiding implementation behind an
interface" is what I'd call it right now without a better alternative.
Some would use "encapsulation", but there is no agreement on what encap
really means.

> Third, I believe there's sufficient evidence
> that OO designers and programmers are likely to benefit most from
> database interfaces since many of them are trying (very hard) to marry
> object models to DB models.

I would suggest they focus very hard on finding the *best* solution,
not necessarily an OO one. OO is oversold and is highly questionable
for custom biz apps.

> I've discovered that enterprise is
> unnecessary and that a solution is well within both the technical and
> ideological grasp of OO's practitioners.

I don't think marrying them will be practical unless you cripple one or
the other. Set theory and graphs are fundamentally different ways to
organize and sift info. Maybe someday there will be a magic breakthru
to meld them nicely. Until then, I say chuck OO for custom biz apps.
The world is not ready for it. Now you can only embrace the RDBMS or
embrace OO and must choose one or the other or risk a bloated mess.
Given that choice, I say OO goes.

>
> --
> Visit <http://blogs.instreamfinancial.com/anything.php>
> to read my rants on technology and the finance industry.

-T-

0
topmind (2124)
12/22/2006 7:02:08 PM
"topmind" <topmind@technologist.com> writes:
> Neo wrote:
> > > > If you take away inheritance, you get "network structures" (AKA
> > > tangled pasta). Dr. Codd sought to escape those by applying set
> > > theory, and network structures thankfully fell out of favor,
> > > until the OO crowd tried to bring them back from the dead.
> >
> > Can you give an example of such as tangled pasta?
>
> OO Visitor pattern.

     The Visitor pattern provides two capabilities:  1) simulation of
double dispatch and 2) non-intrusively adding new operations to
existing classes.  Both of these address limitations of languages such
as C++ and Java.  The existence of this pattern does not demonstrate a
general flaw in the object oriented approach.
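
     To make the double-dispatch part concrete, here is a minimal Java
sketch (the class names are mine and purely illustrative): accept() is
dispatched on the shape's runtime type, and inside it v.visit(this)
resolves to the overload for that concrete shape, which simulates double
dispatch.  A new operation is just a new visitor class, with no edits to
the shape classes.

interface Shape { void accept(ShapeVisitor v); }

class Circle implements Shape {
    public void accept(ShapeVisitor v) { v.visit(this); } // dispatches as Circle
}

class Square implements Shape {
    public void accept(ShapeVisitor v) { v.visit(this); } // dispatches as Square
}

// New operations are added by writing new visitors, not by editing Shape.
interface ShapeVisitor {
    void visit(Circle c);
    void visit(Square s);
}

class AreaReporter implements ShapeVisitor {
    public void visit(Circle c) { System.out.println("circle area"); }
    public void visit(Square s) { System.out.println("square area"); }
}

class Demo {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(), new Square() };
        ShapeVisitor report = new AreaReporter();
        for (Shape s : shapes) s.accept(report); // the right visit() is chosen
    }
}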

Sincerely,

Patrick

------------------------------------------------------------------------
S P Engineering, Inc.  | Large scale, mission-critical, distributed OO
                       | systems design and implementation.
          pjm@spe.com  | (C++, Java, Common Lisp, Jini, middleware, SOA)
0
pjm (703)
12/22/2006 10:17:23 PM
> > > For some ideas, an airline reservation system or a grades/class college tracking system make fairly good examples. Boring, perhaps, but that is why they reflect the real world more  :-)
> >
> > Can you describe what each example should store and be able to query?
> > If possible, could you give some sample data?
>
> http://c2.com/cgi/wiki?CampusExample

Ok, do you already have data for the example or should I create it?

0
neo55592 (356)
12/22/2006 10:30:50 PM
Neo wrote:
> > Based on past experience with the dubious utility of toy/lab examples, I think I will elect to skip this.
>
> Here is another network example. Adam has children named John(male),
> Jack(male) and Mary(female). Find John's sibling of opposite gender.
> Below is an implementation using a network-type db. What RMDB
> schema/query implements the equivalent? Note that the query does not
> refer to John's father (Adam) or John's gender (male) directly.
>

Well, use a table like:
People
  Name
  Father
  Sex
(not precise DDL, I know).

insert into People values ('John', 'Adam', 'M'), ('Jack', 'Adam', 'M'),
('Mary', 'Adam', 'F');

select other_sibling.name from people other_sibling join people john on
other_sibling.father = john.father and other_sibling.sex != john.sex
where john.name = 'John';

I think this satisfies your requirements.

I would like to note that you should be able to satisfy most
queries/problems in most languages (unless they are sufficiently
crippled); it is just the amount of work involved in achieving the
desired results.  Some issues are better resolved in one language;
other issues are better resolved in other languages.  The same rule, I
think, holds true in databases as well.  (This problem looks pretty
easy in Relational databases to me, for instance).

-Chris

0
12/22/2006 10:41:15 PM
aloha.kakuikanu wrote:
> Thomas Gagne wrote:
>> 1) no application should directly
>> access the DB's data (no SQL) and 2) applications should use the DB only
>> through its interface.  Stored procedures are the best example of the
>> latter I know of.
> 
> Thomas, there is a lot of nonsense in your post that I don't have any
> desire to address. I just highlighted couple of sentences.
> 
['highlight' sentences snipped]

> 
> On the final note, adding objects into a picture changes really nothing.
> 

I think we're mixing oranges and apples here. Thomas is trying to show 
that when you have a specific Database (Schema + data fed in) and an 
application that uses it (but not for ad hoc queries), his proposition 
brings value.

If we want to consider the Database as a repository of data with the need 
for doing different queries on a case-by-case basis, then your statement 
is correct about the explosion of methods ('functions').

HTH

--
Cesar Rabak
0
csrabak (402)
12/22/2006 10:48:29 PM
> http://c2.com/cgi/wiki?CampusExample

Below is an initial implementation with some sample data. What can we
verify?

(new 'contact)
(new 'alias)
(new 'address)
(new 'street)
(new 'apt#)
(new 'city)
(new 'state)
(new 'zip)
(new 'country)

(new 'phn#)
(new 'cell#)
(new 'pager#)
(new 'fax#)
(new 'email)

(new)
(set address instance (it))
(set+ (it) street (set '123 'main 'st))
(set+ (it) city 'chicago)
(set+ (it) state 'illinois)
(set+ (it) zip '56789)
(set+ (it) country 'usa)

(new)
(set address instance (it))
(set+ (it) street (set '547 'elm 'rd))
(set+ (it) apt# '54A)
(set+ (it) city 'houston)
(set+ (it) state 'texas)
(set+ (it) zip '774433-7654)
(set+ (it) country 'usa)

(new 'adam 'person)
(set (it) address (get * street (. '123 'main 'st)))
(set+ (it) phn# '234-6789)
(set+ (it) cell# '435-8766)
(set+ (it) email 'adam&ibm.com)
(set+ (it) email 'adam.smith&gm.com)

(new 'eve  'person 'teacher)
(set (it) address (and (get * street (. '547 'elm 'rd))
                               (get * apt# 54A)))
(set+ (it) cell# '457-8779)

(new 'john 'person)
(set+ (it) alias 'jojo)
(set+ (it) pager# '568-5866)
(set  (it) contact adam)
(set  (it) contact eve)

(new 'mary 'person)
(set+ (it) fax# '587-8978)
(set  (it) contact eve)
(set  (it) contact eve)

(new 'student)
(new 'harvard 'university)
(set harvard teacher eve)
(set harvard student john)
(set harvard student mary)

(new 'credit)
(new 'category)
(new 'prerequiste)

(new 'course1 'course)
(set+ (it) credit '4)
(set+ (it) category 'science)

(new 'course2 'course)
(set+ (it) credit '6)
(set+ (it) category 'math)
(set+ (it) category 'logic)
(set  (it) prerequiste course1)

(new 'semester1 'semester)
(new 'semester2 'semester)

(new 'class1 'class 'course1)
(set semester1 has class1)

(new 'class2 'class 'course2)
(set semester2 has class2)

(new 'grade)
(new 'took 'verb)

(set+ john took class1 grade '85)
(set+ mary took class2 grade '95)


(; Get harvard student
   who took a class
   whose course category is math and logic)
(; Gets mary)
(and (get harvard student *)
     (get * took (get (and (get * category math)
                           (get * category logic)
                      )
                      instance *)))

0
neo55592 (356)
12/23/2006 12:10:25 AM
> > Here is another network example. Adam has children named John(male),
> > Jack(male) and Mary(female). Find John's sibling of opposite gender.
> > Below is an implementation using a network-type db. What RMDB
> > schema/query implements the equivalent? Note that the query does not
> > refer to John's father (Adam) or John's gender (male) directly.
> >
> table like: People (Name, Father, Sex);
> insert into People values:
> ('John', 'Adam', 'M'),
> ('Jack', 'Adam', 'M'),
> ('Mary', 'Adam', 'F');
>
> select other_sibling.name from people other_sibling join people john on
> other_sibling.father = john.father and other_sibling.sex != john.sex
> where john.name = 'John';

A very efficient and unexpected solution!

> I think this satisfies your requirements.

Almost; there is the part about the solution being resilient to future
data requirements. Now suppose we want to store Adam's age. Here is how
to do it in the network-type db and still have the original query work.

(new 'age)
(set+ adam age '30)

In the RMDB solution, how can I add Adam's age without impacting the
original query? If that is not possible, go ahead revise the original
schema and query.

0
neo55592 (356)
12/23/2006 12:32:26 AM
Neo wrote:
> > http://c2.com/cgi/wiki?CampusExample
>
> Below is an initial implementation with some sample data. What can we
> verify?
>
> (new 'contact)
> (new 'alias)
> (new 'address)
[...]

I remember you. You're the guy who proposed that Lisp-like database
thingy about a year ago. We had a "query battle" back then IIRC.
Something about finding all houses with 2 refrigerators or the like.

-T-

0
topmind (2124)
12/23/2006 1:24:49 AM
> I remember you. You're the guy who proposed that Lisp-like database thingy about a year ago. We had a "query battle" back then IIRC. Something about finding all houses with 2 refrigerators or the like.

You have a good memory. And what about Judge Judy :)

0
neo55592 (356)
12/23/2006 1:33:27 AM
Neo wrote:
> > > Here is another network example. Adam has children named John(male),
> > > Jack(male) and Mary(female). Find John's sibling of opposite gender.
> > > Below is an implementation using a network-type db. What RMDB
> > > schema/query implements the equivalent? Note that the query does not
> > > refer to John's father (Adam) or John's gender (male) directly.
> > >
> > table like: People (Name, Father, Sex);
> > insert into People values:
> > ('John', 'Adam', 'M'),
> > ('Jack', 'Adam', 'M'),
> > ('Mary', 'Adam', 'F');
> >
> > select other_sibling.name from people other_sibling join people john on
> > other_sibling.father = john.father and other_sibling.sex != john.sex
> > where john.name = 'John';
>
> A very efficient and unexpected solution!
>
> > I think this satisfies your requirements.
>
> Almost; there is the part about the solution being resilient to future
> data requirements. Now suppose we want to store Adam's age. Here is how
> to do it in the network-type db and still have the original query work.
>
> (new 'age)
> (set+ adam age '30)
>
> In the RMDB solution, how can I add Adam's age without impacting the
> original query? If that is not possible, go ahead revise the original
> schema and query.
First, it is better to add Adam to the People table. Since Adam's father
is unknown, I set that column to NULL:
insert into People values
('Adam', NULL, 'M');

Then add column Age and set Adam's age.
ALTER TABLE People ADD COLUMN Age SMALLINT DEFAULT NULL;
UPDATE People SET Age = 30 WHERE Name = 'Adam';

This way, existing queries are not impacted (no modification is
required), including the following:
select other_sibling.name from people other_sibling join people john on
other_sibling.father = john.father and other_sibling.sex != john.sex
where john.name = 'John';

0
tonkuma (6)
12/23/2006 10:03:43 AM
On 22 Dec 2006 10:27:46 -0800, aloha.kakuikanu wrote:

> Next, you have been pointed out repeatedly that a query via  function
> call (object wrapped, or not) is inferior to SQL query. Let me give you
> an example, square root calculation. In procedural programming you do
> call
> 
> squareRoot(9)
> 
> Meyer made a great deal out of so called "design by contract" idea. He
> noticed that square root function is just one manifestation of a square
> relation
> 
> y = x^2
> 
> but he don't go further than introducing an assertion which validates
> the y = x^2 predicate between function argument and return value.
> 
> In relational world you query the square relation. Given a known value
> of x you query all the matching values of y (there happen to be only
> one). Or  given a known value of y you query all the matching values of
> x (there happen to be two).
> 
> As you may notice, even in this trivial example you need at least 2
> functions
> squareRoot(int)
> and
> square(int)
> to perform all possible queries against the database of square numbers.
> This number of function that "interface" your database explodes quickly
> with increasing complexity. In relational world you just compose your
> query out of the small set of well defined operations (selection
> followed by projection in this square root example).

Note that there is no semantic difference between

   SQRT(9)

   SELECT X FROM SQRT WHERE Y=9

You count SQUARE as an extra interface? But

   SELECT Y FROM SQRT WHERE X=3

is as well. What you wrote is basically about syntax sugar. And I can bet
without taking any poll that the SQRT sugar is much sweeter.

> On the final note, adding objects into a picture changes really nothing.

Only if you don't understand what abstraction is. Objects are supposed to
bind 9 with the table SQRT and the places in the tuples. It is a higher
level theory, which allows questions like what is an integral of SQRT from
0 to 100. Care to write a SELECT for it?

Think about this a bit, and you will understand the reason why it goes in
this direction and not in the one you wished. You cannot implement SQRT: it is
not decomposable in relational tables, because that would require an
uncountable number of states. What OO offers is a cut of an infinite
recursion.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
0
mailbox2 (6357)
12/23/2006 11:14:48 AM
Please bear with me if I'm off your point.

Thomas Gagne wrote:
>
>     select * from account, user where user.userId=X and account.userId =
>     user.userId
>
> when instead I can use
>
>     exec getAccountsFor @userId=X
>
I feel your comparison is not fair.
Does OO automatically supply the method getAccountsFor?
Didn't you have to code the method first?
If a different data requirement comes up, you may need to code a new
method or modify the existing method.

select * from account, user where user.userId=X and account.userId
=user.userId

No pre-coding or other preparation is required.
After installing the DBMS, creating the tables, and loading the data, you can
issue that SELECT statement.

If you and your colleagues use that pattern frequently, you can create
it as a view.
If not all data in the tables is required, you can restrict the result to
only the required columns.
Of course, user.userId should be included.
CREATE VIEW account_for (userId, other-required-columns-list) AS
SELECT user.userId, other-required-columns-list
FROM account, user WHERE account.userId = user.userId;
Then, you can use the view like this:
SELECT * FROM account_for WHERE userId=X;

And if the base tables are changed (columns or constraints added/removed,
etc.), you will not need to change the VIEW or the SELECT statement
as long as the changed items are not used explicitly or implicitly in
the VIEW (explicitly specifying other-required-columns-list makes the
view more robust against change). Conversely, if the requirements for
account_for change, you can change the view without affecting the
SELECT statement, as long as the select list and the condition
userId=X stay the same. I think this is similar to changing a method in OO programming.

0
tonkuma (6)
12/23/2006 11:47:36 AM
Dmitry A. Kazakov wrote:
> On 22 Dec 2006 10:27:46 -0800, aloha.kakuikanu wrote:
> > On the final note, adding objects into a picture changes really nothing.
>
> Only if you don't understand what abstraction is. Objects are supposed to
> bind 9 with the table SQRT and the places in the tuples. It is a higher
> level theory, which allows questions like what is an integral of SQRT from
> 0 to 100. Care to write a SELECT for it?

So what is the object insight into what an integral is?

> Think about this a bit, and you will understand the reason why it goes in
> this direction and not in one you wished. You cannot implement SQRT, it is
> not decomposable in relational tables, because that would require an
> uncountable number of states. What OO offers is a cut of an infinite
> recursion.

Function u=f(x,y,z) in general is a predicate Pf(x,y,z,u) with some
additional constraint. Given that a predicate is often an infinite
relation we indeed have obvious difficulty implementing it.

Suppose we have the relation R(x,y)  where y=x^2. No function yet, just
a relation.

As it has two columns the simplest joins we can think of are:

R /\ `y=9`

R /\ `x=3`

where `y=9` and `x=3` are constant unary relations.

In the first case, we probably want to project the resulting relation
to column `x`

project_x (R /\ `y=9`)

in the second case to `y`

project_y (R /\ `x=3`)

Informally, in the first case we want to know the result of sqrt(9), in
the second case just 3^2.

How is the join evaluated in both cases? By the standard optimization
technique pioneered by System R! We enumerate all possible join
orderings, compute their costs, and pick the most efficient one.

Let's go into the second
case, because it's easier. As one relation is infinite, the only
feasible join method is nested loops.  Moreover, we have to start from
the relation which is finite, that is `x=3`. Now, there are two
possibilities:
1. Scan the whole R relation and find all the matching tuples. Not
feasible either!
2. Find matching tuples by some kind of index:

create index unique_x on R(x);

(pseudo SQL syntax). This index is not a conventional b-tree of course,
as all that needs to be done when x is known, and y is not, is to
calculate y by the simple formula x^2. The corollary here is that the
function x -> x^2 is essentially an index.

The other case is only marginally more complex. Likewise, we quickly
arrive at the conclusion that the only feasible execution is the
indexed nested loops with the scan of the `y=9` as the leading
relation. Then, the required index is the function y->sqrt(y).

In a word, functions are to predicates what indexes are to tables
in a traditional RDBMS. In the RDBMS world there are index-organized
tables; in the functions analogy we have "function organized"
predicates.
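
A toy sketch of that analogy in Java (the Index interface and the names
below are mine, nothing standard): the same lookup signature can be backed
by a stored structure or, as here, by plain computation.

public class FunctionAsIndex {
    interface Index<K, V> { V lookup(K key); }   // a lookup "access path"

    public static void main(String[] args) {
        // "Indexes" over the infinite squares relation R(x, y), backed by
        // computation rather than by a stored b-tree.
        Index<Double, Double> squareByX = x -> x * x;        // x known
        Index<Double, Double> sqrtByY   = y -> Math.sqrt(y); // y known

        System.out.println(squareByX.lookup(3.0)); // 9.0
        System.out.println(sqrtByY.lookup(9.0));   // 3.0
    }
}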

Indexes are not something that is supposed to be exposed to the end
user, at least in theory. Indexes should be created/destroyed
automatically by RDBMS engine. In todays imperfect world this is done
by DBAs.

This little snippet is just an unconventional perspective on Meyer's
"design-by-contract" idea.  The sqrt() function interface is defined
by the assertion y=x^2, and the programmer's job is to write the
sqrt() implementation. We see that functions being implementation
details have much in common with indexes being implementation details.

Here is a little more complex example. For x^2+y^2=1, there are 3 access
methods:
1. Given x, and y return {(x,y)} if they satisfy the equation, and {}
otherwise.
2. Given x return {(x,sqrt(1-x^2)),(x,-sqrt(1-x^2))} or empty set.
3. Given y return {(sqrt(1-y^2),y),(-sqrt(1-y^2),y)} or empty set.
Once again the way to represent infinite predicates in future
relational systems is via access path specification.
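
As a rough illustration of access-path specification (the class and method
names below are mine), each access method can be an ordinary function
returning the matching tuples of x^2+y^2=1 as a possibly empty set:

import java.util.HashSet;
import java.util.Set;

public class UnitCircle {
    public record Point(double x, double y) {}

    // 1. Both x and y known: a membership test.
    public static Set<Point> given(double x, double y) {
        Set<Point> result = new HashSet<>();
        if (Math.abs(x * x + y * y - 1.0) < 1e-9) result.add(new Point(x, y));
        return result;
    }

    // 2. x known: solve for y.
    public static Set<Point> givenX(double x) {
        Set<Point> result = new HashSet<>();
        double d = 1.0 - x * x;
        if (d > 0) {
            double y = Math.sqrt(d);
            result.add(new Point(x, y));
            result.add(new Point(x, -y));
        } else if (d == 0) {
            result.add(new Point(x, 0.0)); // tangent point, single tuple
        }
        return result;
    }

    // 3. y known: solve for x.
    public static Set<Point> givenY(double y) {
        Set<Point> result = new HashSet<>();
        double d = 1.0 - y * y;
        if (d > 0) {
            double x = Math.sqrt(d);
            result.add(new Point(x, y));
            result.add(new Point(-x, y));
        } else if (d == 0) {
            result.add(new Point(0.0, y));
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(given(1.0, 0.0)); // on the circle: one tuple
        System.out.println(givenX(0.0));     // (0, 1) and (0, -1)
        System.out.println(givenY(0.6));     // x = +/- 0.8, approximately
    }
}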

0
12/23/2006 6:16:47 PM
> > > > network example. Adam has children named John(male),
> > > > Jack(male) and Mary(female). Find John's sibling of opposite gender.
> > > > Below is an implementation using a network-type db. What RMDB
> > > > schema/query implements the equivalent? Note that the query does not
> > > > refer to John's father (Adam) or John's gender (male) directly.
> > > >
> > > People (Name, Father, Sex);
> > > ('John', 'Adam', 'M'),
> > > ('Jack', 'Adam', 'M'),
> > > ('Mary', 'Adam', 'F');
> > > SELECT name from poeple other_sibling join people john on
> > > other_sibling.father = john.father and other_sibling.sex != john.sex
> > > where john.name = 'John';
> >
> > Now suppose we want to store Adam's age. Here is how
> > to [add] it in the network-type db and still have the original query work.
> >     (new 'age) (set+ adam age '30)
> > how can I add Adam's age [in RMDB] without impacting original query?
>
> ...add Adam into People table. add column Age ...
> This way will not impact to existing queries...

Great, so the data looks something like below:

Table People
Name   Father  Sex   Age
Adam   NULL    M     30
John   Adam    M     NULL
Jack   Adam    M     NULL
Mary   Adam    F     NULL

Now suppose we want to add another child of Adam named Francis who is
bisexual. Here is how to add it in the network-type db and still have
the original query work: (new 'francis) (set+ francis gender 'bisexual)
(set adam child francis). How can I add Francis to the above RMDB without
affecting the original query?

Please note that I am trying to create a network to "tangle"
RM/RMDBs. The names of those nodes and edges are only meant for
convenience. If those names make it appear as if a certain network is
impossible (ie a person's gender being bisexual), rename the network
elements in general terms such as node1, node2, edge1, edge2, etc.

0
neo55592 (356)
12/23/2006 8:48:57 PM
Hello Thomas,

"Thomas Gagne" <tgagne@wide-open-west.com> wrote in message 
news:DrydnSQnbsmmNxTYnZ2dnUVZ_t2tnZ2d@wideopenwest.com...
> An unexpected thing happened while debating topmind: I had an epiphany.

You see... it is GOOD to have 'topmind' around!  I like 'topmind' for the 
continuous and fervent challenge he applies to many of the decisions that we 
often take for granted.  By being forced to 'support' our best practices 
with long involved explanations about 'why they are best,' we can better see 
the times when those practices are useful and the times when they are not. 
At worst, we discover that we understand the limitations of our ideas much 
better.  At best, we learn something.

That said, when he joins a thread, it is often not useful for people who are 
simply surfing the thread to read the discussion, because it tends to drop 
down to a point-by-point argument fairly quickly.  For that reason, most of 
the contribution that he has added to /this/ thread... I have ignored. 
(Sorry, T).

> Instead of responding to the news group I thought about it for a short bit 
> (very short) and posted an article to my blog titled, "The RDB is the 
> biggest object in my system."
>
> <http://blogs.in-streamco.com/anything.php?title=the_rdb_is_the_biggest_object_in_my_syst>
>
> What I realized while trying to describe my preference to use DB 
> procedures as the primary (re: only) interface between my applications and 
> the database is because I believe my DB's physical representation of data 
> belongs to it alone and that customers of the DB oughtn't be permitted to 
> directly manipulate (change or query) its data.  I realized this is 
> exactly what data-hiding is all about and why expert object oriented 
> designers and programmers emphasize the importance of interfaces to direct 
> data manipulation.

While I did not read this blog entry (yet), I agree, in general, with the 
statement above.

That said, an RDBMS can present MANY interfaces to your code, not all of 
which have to be presented through stored procs.  You could present through 
views, for example, and still hide some of the details of your db design.

I would also say that the db presents the data for 'many' objects instead of 
a single one.  Viewing the db as a single object begs the question: what 
behavior are you encapsulating in it?

>
> I thought more about this and posted a second article, Databases as 
> Objects: My schema is my class, which explored more similarities between 
> databases and objects and their classes.
>
> <http://blogs.in-streamco.com/anything.php?title=my_schema_is_an_class>
>
> I intend next to explore various design patterns from GoF and Smalltalk: 
> Best Practice Patterns to see if the similarities persist or where they 
> break down, and what can be learned from both about designing and 
> implementing OO systems with relational data bases.

this blog entry, I did read, and I replied to it in the blog comments.

>
> If you agree there's such a thing as an object-relational impedance 
> mismatch, then perhaps its because you're witnessing the negative 
> consequences of tightly coupling objects that shouldn't be tightly 
> coupled.

Nope.  I see the Object Relational Impedance as a conceptual disconnect 
between the 'traditional' RDBMS interface, which doesn't present a mechanism 
for encapsulating both data and operations in the same object wrapper, and 
the object oriented interface, which absolutely requires it.  Creating an 
object wrapper that DOES present these two together is the goal of Object 
Relational Mapping (ORM) tools.

Note that even when you do this, you run into the impedance, and that is 
because RDBMS systems are not the appropriate place to put every business 
capability.  (If they were, all apps would be very thin user interfaces on 
very thick databases).

You can certainly place some activities in the db, including calculations, 
validations, and some data translations (including XML-to-SQL and vice 
versa), but I'd argue against placing too many business activities there, 
especially things like complex (multi-path) workflow (because of the 
difficulty with synchronization logic across a set-oriented interface like 
SQL) or cross-system messaging, etc.

Most business capabilities are described as behaviors first, and data 
second.  The data is the operand, not the operator.  This causes us to have 
to recast the data into an entirely different view than the one that is used 
in RDBMS systems.  Key concerns in RDBMS systems, like indexing, Referential 
Integrity, volume scaling, and data value ranges should be encapsulated and 
'hidden' (but not in the sense of data hiding, but in the sense of 'behavior 
hiding' which is an OO notion).

Recasting that data from the efficient storage mechanism presented by 
Relations to the more behavior-oriented mechanism required by objects is the 
responsibility of the Data layer and is the focus of the discussion in 
Object Relational Impedance.

>
> There's a hypothesis in there somewhere.
>
> As always, if you know of existing research on the subject I'm anxious to 
> read about it.
>

A good starting point for finding research is to go to a general article 
like the following and follow the reference links off the page.
http://en.wikipedia.org/wiki/Object_Relational_Mapping


I guess one thing that stands out for me: you reached a valuable conclusion 
about the application of OO design methods to RDBMS design, but you didn't 
prove the initial assumption: that stored procedures should be used as the 
only interface for code to access the data in the database.  In this 
respect, I am not convinced.

Stored procedures are VALUABLE, don't get me wrong.  You get security 
benefits and you get the ability to control the data manipulations against 
multiple tables, but as I pointed out in my blog response, stored procedures 
are not object methods and they are not tied to any objects.  They are 
effectively unconstrained procedural code, with access to any and every 
table in the database (in many db systems, this visibility extends to tables 
in other databases, both on the same server and in other servers). 
Therefore, I have a very difficult time viewing stored procs as methods.

I also think you lose something valuable with Stored Procs.  Excellent 
efforts have been expended to consider the basic principles of RDBMS design 
in objects, and to create objects that will effectively assist with Object 
Relational Mapping as a first step to addressing the Impedance mismatch. 
Those objects are defeated by the artificial barriers placed by stored 
procedures.   I'm referring to various attempts at Data Access Objects, 
including the .Net Data objects in the Microsoft .Net framework.

Some would say that this reduces the value of the DAO-style objects.  I 
would reply that RDBMS systems are based on a mathematical simplicity, an 
approach that is fairly pure and extremely versatile.  Hiding that 
mathematical simplicity may or may not be a valuable enterprise, but it is 
clearly the effect of restricting all data access to a stored procedure 
layer.  In that aspect, perhaps it is the value of the stored proc that 
should be questioned, and not the value of the Data objects in the OO 
library.

-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
-- 


0
nickmalik (325)
12/23/2006 8:53:12 PM
"topmind" <topmind@technologist.com> wrote in message 
news:1166736833.957095.16410@79g2000cws.googlegroups.com...
> Thomas Gagne wrote:
>>  The designer of the
>> second (rightly, I think) believes counting the number of SQL lines
>> would be difficult since they're distributed throughout his code, and
>> include string concatenation for variables and discriminations spread
>> throughout functions.
>
> That is a very minor reason to separate. Is it worth making the app 10%
> to 25% more time-consuming to maintain *just* to be able to count
> easier? I have to object. Perhaps you have weird managers.


That's FUNNY, T!  You are arguing with a CTO.

-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
--


0
nickmalik (325)
12/23/2006 9:06:19 PM
On 23 Dec 2006 10:16:47 -0800, aloha.kakuikanu wrote:

> Dmitry A. Kazakov wrote:
>> On 22 Dec 2006 10:27:46 -0800, aloha.kakuikanu wrote:
>>> On the final note, adding objects into a picture changes really nothing.
>>
>> Only if you don't understand what abstraction is. Objects are supposed to
>> bind 9 with the table SQRT and the places in the tuples. It is a higher
>> level theory, which allows questions like what is an integral of SQRT from
>> 0 to 100. Care to write a SELECT for it?
> 
> So what is object insight into what integral is?

9 is an object (a value of a type, say, real number). Real number is
roughly a model of mathematical analysis in the program. We can refine that
model by adding SQRT, integral, etc to the type, and thus to the sets of
its objects. You cannot do that with the relation SQRT.

>> Think about this a bit, and you will understand the reason why it goes in
>> this direction and not in one you wished. You cannot implement SQRT, it is
>> not decomposable in relational tables, because that would require an
>> uncountable number of states. What OO offers is a cut of an infinite
>> recursion.
> 
> Function u=f(x,y,z) in general is a predicate Pf(x,y,z,u) with some
> additional constraint. Given that a predicate is often an infinite
> relation we indeed have obvious difficulty implementing it.
> 
> Suppose we have the relation R(x,y)  where y=x^2. No function yet, just
> a relation.
> 
> As it has two columns the simplest joins we can think of are:
> 
> R /\ `y=9`
> 
> R /\ `x=3`
> 
> where `y=9` and `x=3` are constant unary relations.
> 
> In the first case, we probably want to project the resulting relation
> to column `x`
> 
> project_x (R /\ `y=9`)
> 
> in the second case to `y`
> 
> project_y (R /\ `x=3`)
> 
> Informally, in the first case we want to know the result of sqrt(9), in
> the first case just 3^2
> 
> How the join is evaluated in both cases? By the standard optimization
> technique pioneered by System R! We enumerate all possible join
> ordering, and compute their cost, to pick up the most efficient one.

No, that's not the question. The question is how the relation SQRT is decomposed. The
function SQRT can be decomposed into {+,-,*,/} plus an imperative control flow
instruction set using, say, Newton's method. If you wanted to present an
alternative, you would have to show a doable method of constructing the
table of SQRT from scratch. Note that * is also a function, and + is too. The
recursion stops at the hardware/axiomatic level.

(It is allowed to claim that your hardware has SQRT, but then I'll move to
exp, or to modified Bessel's function etc.)

> Let's go into the second
> case, because it's easier. As one relation is infinite, the only
> feasible join method is nested loops.

BTW, looping is not a relational concept. I mean forall x in X { do F(x) }.
do F is not decomposable into SELECTs. You have to have "do F" as a
relation in advance: "do F" : S x X -> Boolean, where S is the set of
computational states. Then you need some sort of accumulation and
serialization of states as you navigate them in forall.

[...]
> This little snippet  is just an unconventional  perspective to  Meyer's
> "design-by-contract" idea.

I don't think it was. The idea was Dijkstra's, i.e. that of the proven
correctness of a program. Meyer's DbC was this idea applied narrowly to
types. Note that both issues of types and of correctness are orthogonal to
relational algebra, and all three are to the point you seem to be trying to make,
which is IMO the old worn imperative vs. declarative.

> The sqrt()  function  interface is defined
> by the assertion y=x^2,

(Pedantically: no, the assertion does not define SQRT. SQRT is defined by its
implementation. The assertion is there to check its correctness.)

> and the programmer's job is to write the
> sqrt() implementation. We see that functions being implementational
> detail has much in common with indexes being implementation details
> too.

OK. However, note that all programs are implementation details of some
semantics, which is outside the computer and the programming language being
used.

> Here is little more complex example. For x^2+y^2=1, there are 3 access
> methods:
> 1. Given x, and y return {(x,y)} if they satisfy the equation, and {}
> otherwise.
> 2. Given x return {(x,sqrt(1-x^2)),(x,-sqrt(1-x^2))} or empty set.
> 3. Given y return {(sqrt(1-y^2),y),(-sqrt(1-x^2),y)} or empty set.

4. Given nothing, return {(x,y) | x^2+y^2<1}
5. Given a,b return {(x,y) | x^2+y^2=1 & a*x+b*y=0}
    ...
The set of "access methods" is obviously uncountable.

> Once again the way to represent infinite predicates in future
> relational systems is via access path specification.

That depends on the hardware. Our programming languages are built on
hardware in which non-linear constraints like the above cannot be efficiently
computed. In analogue computers, for example, differential equations were
not a problem. Sqrt was a problem; if I recall correctly, the module was
bigger than the rest of the computer.

Merry Christmas,

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
0
mailbox2 (6357)
12/24/2006 11:39:32 AM
On Thu, 21 Dec 2006 06:15:29 -0500, Thomas Gagne
<tgagne@wide-open-west.com> wrote:

>Ultimately, I think I may need to come up with another name for a 
>domain-ized database.  The word 'database' has too many possibilities.  
>It's too general.  After I've applied my schema to it it no longer has 
>all the possibilities it once had.  After my schema's applied it becomes 
>something different.  It's becomes my domain's data base.  My domainabase?

It's your application's data model.

J.


0
12/24/2006 4:22:27 PM
On 21 Dec 2006 13:22:00 -0800, "Neo" <neo55592@hotmail.com> wrote:
>How did Dr Codd make
>network structures fall out of favor?

Y'know, I've often wondered about that.

Explicit b-trees and b-trees crosslinked into "networks" can be very
efficient and straightforward, so why did they lose so completely?
The textbook answer is that they are not reusable, and a normalized
relational database, is.  But I think the answer is elsewhere.
Because of its independence, the relational database requires some
kind of language interface, pretty much universally some form of SQL
these days, which further hides the implementation from the
application (I agree with topmind on this).

But the bottom line is that Codd provided an alternative with
strengths and weaknesses, and the combination just seemed to make
developers and managers happier over time.

J.

0
12/24/2006 4:34:14 PM
Responding to Malik...

>>An unexpected thing happened while debating topmind: I had an epiphany.
> 
> 
> You see... it is GOOD to have 'topmind' around!  <snip>
> At best, we learn something.

Bryce is a bright guy but I don't think his motivation is to challenge 
ideas.  I've been observing him for a decade or so and I think he just 
engages in these debates to annoy OO people for his own amusement.  Note 
the following observations:

If I had the task of designing a web site that would really annoy OO 
people because of things like unsupported assertions and misstatements 
about what the OO paradigm was about, it would be Bryce's Geocities web 
site.  There is no way that anyone practicing the OO paradigm would not 
be outraged on reading the material.  IMO, it is actually very cleverly 
done because to be so universally inflammatory he has to know what 
buttons to push.  IOW, I think he knows a lot more about OO development 
than he lets on and he uses that in his debates to pull OO people's chains.

He has been pushing the same "challenges" for a decade or more and 
completely ignores any refutations.  In particular, he continues making 
his more outrageous misstatements about what OO development is about no 
matter how many times he is corrected.  Thus he continues to use 
assertions like OO development being "noun-driven" that have a tiny 
grain of truth (i.e., Peter Coad's primer technique for object blitz 
mechanics) as an overall generalization of the entire paradigm.  IOW, he 
already knows he is wrong when making the statements.

He consistently employs a suite of forensic ploys that are designed to 
pull the opponent down a rabbit hole of misdirection.  Thus one of his 
favorites is to ask for an example, which he then proceeds to tear apart 
for reasons unrelated to the original point that triggered the example. 
  Those ploys are quite predictable once one watches a few of his 
debates.  Unfortunately they are so overused that it becomes very clear 
that Bryce's game is the debate itself rather than the content.

Bottom line: don't feed the troll.  There is nothing to be learned in a 
debate with Bryce except the effectiveness of debating ploys.  He is 
just amusing himself by infuriating OO people.


*************
There is nothing wrong with me that could
not be cured by a capful of Drano.

H. S. Lahman
hsl@pathfindermda.com
Pathfinder Solutions
http://www.pathfindermda.com
blog: http://pathfinderpeople.blogs.com/hslahman
"Model-Based Translation: The Next Step in Agile Development".  Email
info@pathfindermda.com for your copy.
Pathfinder is hiring: 
http://www.pathfindermda.com/about_us/careers_pos3.php.
(888)OOA-PATH



0
h.lahman (3600)
12/24/2006 5:01:03 PM
Nick Malik [Microsoft] wrote:
> "topmind" <topmind@technologist.com> wrote in message
> news:1166736833.957095.16410@79g2000cws.googlegroups.com...
> > Thomas Gagne wrote:
> >>  The designer of the
> >> second (rightly, I think) believes counting the number of SQL lines
> >> would be difficult since they're distributed throughout his code, and
> >> include string concatenation for variables and discriminations spread
> >> throughout functions.
> >
> > That is a very minor reason to separate. Is it worth making the app 10%
> > to 25% more time-consuming to maintain *just* to be able to count
> > easier? I have to object. Perhaps you have weird managers.
>
>
> That's FUNNY, T!  You are arguing with a CTO.

As W shows, rank != right.  Managers are often too distant from the
nitty-gritty of daily work such that they over-focus on "big" changes,
such as schema overhauls that need to be planned and managed days in
advance. However, everyday slowness created by a particular design
decision may not be apparent to them. I am not saying for sure this is
the case, but it is something I have witnessed.

Similarly, a cubicle-dweller's perspective may miss things that a
manager has to deal with. Nobody has a perfect perspective. We have to
listen and cooperate to get the best results.

>
> --
> --- Nick Malik [Microsoft]
>     MCSD, CFPS, Certified Scrummaster
>     http://blogs.msdn.com/nickmalik
>
> Disclaimer: Opinions expressed in this forum are my own, and not
> representative of my employer.
>    I do not answer questions on behalf of my employer.  I'm just a
> programmer helping programmers.
> --

-T-

0
topmind (2124)
12/24/2006 8:52:16 PM
Nick Malik [Microsoft] wrote:
> <snip>
> That said, an RDBMS can present MANY interfaces to your code, not all of 
> which have to be presented through stored procs.  You could present through 
> views, for example, and still hide some of the details of your db design.
>
> I would also say that the db presents the data for 'many' objects instead of 
> a single one.  Viewing the db as a single object begs the question: what 
> behavior are you encapsulating in it?
>   
Objects are often composed of many other objects.  My database object is 
no different.  Primarily, through stored procedures the DB has methods 
and projections.  The projections can either be returned as collections 
of tuples or I can send a lambda expression (especially helpful for 
really large result sets) that evaluates one row at a time.  Streams can 
also be returned.  This is where a facade can be helpful to make the DB's 
interface more idiomatic for your favorite OOPL.  Enjoy!
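
As a minimal sketch of such a facade in Java over plain JDBC (the
AccountGateway and RowHandler names are mine; I'm assuming the
getAccountsFor procedure mentioned earlier in the thread):

import java.sql.*;

interface RowHandler {                        // the "lambda" the caller sends
    void handle(ResultSet row) throws SQLException;
}

class AccountGateway {
    private final Connection conn;
    AccountGateway(Connection conn) { this.conn = conn; }

    // Calls the stored procedure and hands rows to the callback one at a
    // time, so large projections never have to be materialized client-side.
    void accountsFor(int userId, RowHandler perRow) throws SQLException {
        try (CallableStatement cs = conn.prepareCall("{call getAccountsFor(?)}")) {
            cs.setInt(1, userId);
            try (ResultSet rs = cs.executeQuery()) {
                while (rs.next()) {
                    perRow.handle(rs);
                }
            }
        }
    }
}

A caller passes a lambda, e.g.
gateway.accountsFor(42, row -> System.out.println(row.getString(1))),
and never sees a table name or a line of SQL.
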
>   
> <snip>
>
>   
>> If you agree there's such a thing as an object-relational impedance 
>> mismatch, then perhaps its because you're witnessing the negative 
>> consequences of tightly coupling objects that shouldn't be tightly 
>> coupled.
>>     
>
> Nope.  I see the Object Relational Impedance as a conceptual disconnect 
> between the 'traditional' RDBMS interface, which doesn't present a mechanism 
> for encapsulating both data and operations in the same object wrapper, and 
> the object oriented interface, which absolutely requires it.  Creating an 
> object wrapper that DOES present these two together is the goal of Object 
> Relational Mapping (ORM) tools.
>   
I agree it is the goal, but should every DB-ish object in your 
application actually map to a tuple inside the DB (bean)?  Why do that 
when objects should communicate to each other through messages?  Aren't 
the OR tools distracting OO programmers from how they ought really talk 
to the DB?
> Note that even when you do this, you run into the impedance, and that is 
> because RDBMS systems are not the appropriate place to put every business 
> capability.  (If they were, all apps would be very thin user interfaces on 
> very thick databases).
>   
I don't know everyone's experience, but every DB I've worked with /was/ 
my system.  It stored the entire state of my system in neat tables and 
rows with glorious relations between them to answer every question I 
could possibly ask.  Everything else was one of two things: automation 
or cosmetics.  Portfolio management, trading, banking, and 
insurance--the DB recorded everything.  If the system stopped the DB 
knew where.  When the system started the DB knew where from.  In fact, 
before there was a system there was a DB.  It was designed, proved 
correct, constraints implemented, procedures created, load tested, and 
all kinds of fun unit-testing kinds of things before a single line of 
application code was created.

In fact, the DB isn't only the biggest object in my system, but it was 
also the first object--and an OOPL wasn't even necessary to realize it.
> You can certainly place some activities in the db, including calculations, 
> validations, and some data translations (including XML-to-SQL and vice 
> versa), but I'd argue against placing too many business activities there, 
> especially things like complex (multi-path) workflow (because of the 
> difficulty with synchronization logic across a set-oriented interface like 
> SQL) or cross-system messaging, etc.
>   
The database knows how to do things (load, update, remove, and query) 
but it doesn't know why things are done.  That's what applications and 
users are for.
> Most business capabilities are described as behaviors first, and data 
> second.
I disagree, only because behaviors are based on weak assumptions and 
common practices.  Whatever is done can either be done well or poorly.  
Behaviors are based on the weakest facts--the /way/ things are done.  In 
fact, after analyzing the data and comparing the state of a DB before 
and after some behavior, programmers often discover how behavior can be 
improved.

    But the DB must always be correct.  Whether the behaviors are
    correct or not, the DB must maintain its integrity.  It must protect
    its state.  In fact, our DR plan is based on the premise that the
    DB's integrity is the most critical--everything else is cosmetic.
    <http://blogs.in-streamco.com/anything.php?title=rules_for_production>

> <snip>
>>     
>
> A good starting point for finding research is to go to a general article 
> like the following and follow the reference links off the page.
> http://en.wikipedia.org/wiki/Object_Relational_Mapping
>
>
> I guess one thing that stands out for me: you reached a valuable conclusion 
> about the application of OO design methods to RDBMS design, but you didn't 
> prove the initial assumption: that stored procedures should be used as the 
> only interface for code to access the data in the database.  In this 
> respect, I am not convinced.
>   
I don't blame you.  I need to present more evidence, which I will do 
through examples.
> <snip>
>
> I also think you lose something valuable with Stored Procs.  Excellent 
> efforts have been expended to consider the basic principles of RDBMS design 
> in objects, and to create objects that will effectively assist with Object 
> Relational Mapping as a first step to addressing the Impedance mismatch. 
>   
You're right.  Some great research has been spent here--as there was in 
alchemy.

Consider my situation, I have a single database which I know is correct 
because it's guarded by procedures, constraints, unit and integrity 
tests.  I have multiple applications--some of them share common data 
models but others of them do not.  Which model is correct?

Consider you have 20 programs each doing specific things.  Between the 
20 you've discovered there's three different object models that best 
reflect their dependent applications needs and designs.  Which of the 
three should be mapped to the DB?  Should the DB's model be massaged to 
reflect any of them, or should it be designed to be perfect for the 
business data?
> Those objects are defeated by the artificial barriers placed by stored 
> procedures.   I'm referring to various attempts at Data Access Objects, 
> including the .Net Data objects in the Microsoft .Net framework.
>
> Some would say that this reduces the value of the DAO-style objects.  I 
> would reply that RDBMS systems are based on a mathematical simplicity, and 
> approach that is fairly pure and extremely versatile.  Hiding that 
> mathematical simplicity may or may not be a valuable enterprise, but it is 
> clearly the effect of restricting all data access to a stored procedure 
> layer.  In that aspect, perhaps it is the value of the stored proc that 
> should be questioned, and not the value of the Data objects in the OO 
> library.
>   
Would you make that same argument about a Date object, or any other 
object in your system?

"I would reply that Date objects are based on mathematical simplicity, 
an approach that is fairly pure and extremely versatile.  Hiding that 
mathematical simplicity may or may not be a valuable enterprise, but it 
is clearly the effect of restricting all data access to Date's interface 
methods.  In that aspect, perhaps it is the value of Date's interface 
that should be questioned and not the value of the Date objects in the 
OO library."

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
12/25/2006 3:48:00 AM
Patrick May wrote:
> "topmind" <topmind@technologist.com> writes:
> > Neo wrote:
> > > > If you take away inheritance, you get "network structures" (AKA
> > > > tangled pasta). Dr. Codd sought to escape those by applying set
> > > > theory, and network structures thankfully fell out of favor,
> > > > until the OO crowd tried to bring them back from the dead.
> > >
> > > Can you give an example of such as tangled pasta?
> >
> > OO Visitor pattern.
>
>      The Visitor pattern provides two capabilities:  1) simulation of
> double dispatch and 2) non-intrusively adding new operations to
> existing classes.  Both of these address limitations of languages such
> as C++ and Java.  The existence of this pattern does not demonstrate a
> general flaw in the object oriented approach.

How about you Visitor defenders present a somewhat practical biz
example where Visitor allegedly makes maintenance easier?

> 
> Sincerely,
> 
> Patrick
> 

-T-

0
topmind (2124)
12/25/2006 4:14:22 AM
JXStern wrote:
> On 21 Dec 2006 13:22:00 -0800, "Neo" <neo55592@hotmail.com> wrote:
> >How did Dr Codd make
> >network structures fall out of favor?
>
> Y'know, I've often wondered about that.
>
> Explicit b-trees and b-trees crosslinked into "networks" can be very
> efficient and straightforward, so why did they lose so completely?

Efficient, maybe. Straightforward? No way.  Relational offers more
consistency. There are fewer ways to model the same business
in relational. It sounds like a bad thing, but relational killed the
"creativity". It is similar to how structured blocks killed GOTO
creativity. IBM's IMS is dead for a reason.


> The textbook answer is that they are not reusable, and a normalized
> relational database, is.  But I think the answer is elsewhere.
> Because of its independence, the relational database requires some
> kind of language interface, pretty much universally some form of SQL
> these days, which further hides the implementation from the
> application (I agree with topmind on this).

Navigational query languages were proposed. They were ugly because
navigational is ugly.

>
> But the bottom line is that Codd provided an alternative with
> strengths and weaknesses, and the combination just seemed to make
> developers and managers happier over time.
> 
> J.

-T-

0
topmind (2124)
12/25/2006 4:27:12 AM
Frans Bouma wrote:
> topmind wrote:
> > Frans Bouma wrote:
> > > topmind wrote:
> > > > You OO'ers keep forgetting: SQL is an interface. I repeat, SQL is
> > > > an interface. It is not "low level hardware".
> > >
> > > 	SQL is a set-oriented language, it's not an interface as a language
> > > doesn't do anything without context (in this case a
> > > parser-interpreter combi)
> >
> > Perhaps we need to clear up our working semantics with regard to
> > "language" and "interface".  Are methods interfaces or a language? I
> > am not sure it really matters and I don't want to get tangled in a
> > definition battle.
>
> 	Methods are part of an interface written in a language. SQL is a
> language, a set of stored procs is an interface.

It is possible to represent one with the other if an RDBMS supports
triggers. In other words, they are technically interchangeable, sort of
a Turing Equivalency Principle at work.

>
> 	it's not getting much simpler than that.

Then we are in trouble. I don't think it matters much whether we call
it language or interface so far, and thus see no reason to make a big
deal out of it just yet. IF it becomes pivotal, then we can revisit
the definition.

>
> > > > BTW, Microsoft has ADO, DAO, etc. which are OO wrappers around
> > > > RDBMS.
> > >
> > > 	no they're not. ADO and DAO aren't OO, as they're COM based so
> > > they're actually procedural (library interfaces implemented on a
> > > live object).
> >
> > Being an OO wrapper on top of procedural calls does not necessarily
> > turn something into non-OO. Please clarify your labelling criteria.
>
> 	ADO isn't OO, it's COM. COM isn't OO, despite the fact it lets you
> believe you're working with objects, which is actually a facade, you're
> not working with OOP style objects, as there's no inheritance nor
> polymorphism, you just talk to an interface implemented by an
> object-esk construct in memory, which could be seen as a C struct with
> function pointers.

Some claim "structures with function pointers" is in fact OO. The
author Robert C. Martin generally uses this definition IIANM.
Polymorphism is when one puts different function pointers in structure
"cells" with the same name.

>
> > > > Further, even if OO was the best way to access RDBMS thru an app,
> > > > that does not necessarily extrapolate to all domains. OO being
> > > > good for X does not automatically imply it is good for Y also.
> > >
> > > 	you don't get the point: in an OO application, which works on data
> > > IN the application, you want to do that in an OO fashion.
> >
> > Why? Is OO proven objectively better?
>
> 	why would one WANT to use 2 paradigms, which aren't related as in one
> is derived from the other, in a single application? (let's redirect the
> 'what's a paradigm' posts to /dev/null/ first)

Yin and Yang. RDBMS do attribute management far better than code (and
OO), but code does things like logic expressions (if shipment quantity is greater
than bin size and it is Sunday, then.....) far better.  This Yin/Yang between
procedural and RDBMS is what makes them shine.  Each does best what it
does best. OO does neither best.

>
> > > To obtain the
> > > data from the outside is initiated INSIDE the application, thus
> > > also in an OO fashion. As an RDBMS doesn't understand OO in most
> > > cases, but it works with SQL as it has a SQL interpreter in place
> > > to let you program its internal relational algebra statements in a
> > > more readable way, you've to map statements from OO to SQL and set
> > > oriented results (the sets) from the DB back to OO objects.
> >
> > Are you suggesting methods such as "Add_AND_Clause(column,
> > comparisonOperator, Value)"?
>
> 	No.

How can you completely wrap SQL into OO without them?

>
> > Those are bloaty and ugly in my opinion, but let's save that value
> > judgement for a later debate on clause/criteria wrappers.
>
> 	You can perfectly write a set of predicate classes which can be
> inherited by the developer and make them more specific to the domain
> the developer is working with.

Example?

>
> > > > I have
> > > > already agreed that OO may be good for writing device drivers and
> > > > device-driver-like things; but it has not been shown useful to
> > > > view everything as a device driver. I am more interested in
> > > > seeing how OO models biz objects rather than how it wraps system
> > > > services and the like. Biz modeling has been OO's toughest
> > > > evidence cookie to crack (but perhaps not the only).
> > >
> > > 	huh? walls full of books have been written about this topic and you
> > > declare it the toughest cookie to crack...
> >
> > Such as? I've seen biz examples in OOP books, but they did not show
> > how they were better than the alternative. Showing how to make an
> > Employee class does not by itself tell you why an Employee class is
> > better than not using OO.
>
> 	I'm not saying everything should be OO because it's otherwise not
> possible, as you can write any program in plain C. It's often more
> suitable for writing an application because the resulting application
> is developed faster (code re-use) and is more maintainable and business
> apps can be very suitable for using an OO language,

Please reread what you replied to. I did NOT claim that OO does not
run.

> simply because you
> have data and logic operating on that data, so IMHO the ideal
> environment for using an OOP approach.

When I see coded proof for custom biz apps, I will believe you. Until
then, I will not take your word for it.

>
> > > > GOF patterns are supposed to be a solution, but GOF patterns have
> > > > no clear rules about when to use what and force a kind of IS-A
> > > > view on modeling instead of HAS-A.
> > >
> > > 	You also fall into the 'use pattern first, find problem for it
> > > later'-antipattern.
> > >
> > > 	a pattern is a (not the) solution for a well defined recognizable
> > > problem. So if you recognize the problem in your application, you
> > > can use the pattern which solves THAT problem to solve THAT problem
> > > in your application. THat's IT. The GoF book names a set of
> > > patterns and also the problems they solve. If you don't have the
> > > problems they solve, you don't need the patterns.
> >
> > Well, a look-up table is usually simpler and more inspectable than
> > Visitor. Thus, if usefulness is our guide, then GOF patterns are often
> > not the best.
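
As a rough sketch of the look-up table alternative meant here (all
names hypothetical), dispatch is driven by a plain table of function
pointers rather than by a Visitor hierarchy:

#include <cstdio>
#include <map>
#include <string>

// One handler per node kind; the table itself is the dispatch logic.
void exportInvoice(const std::string& id) { std::printf("export invoice %s\n", id.c_str()); }
void exportPayment(const std::string& id) { std::printf("export payment %s\n", id.c_str()); }

int main() {
    // The look-up table: kind -> operation. Adding a kind is one new row.
    std::map<std::string, void (*)(const std::string&)> exporters = {
        {"invoice", exportInvoice},
        {"payment", exportPayment},
    };
    exporters["invoice"]("A-1001");  // dispatch by key instead of double dispatch
    return 0;
}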
>
> 	Visitor pattern is a pattern I don't think is very useful as the
> problem it solves isn't very common.
>
> 	But if your point is that OO is crap because Visitor pattern is silly
> and thus all that's said in the GoF book is therefore also retarded
> then we're done here.

The rest of GOF sucks for somewhat similar reasons.

> > > > GOF patterns are like an attempt to
> > > > catalog GO TO patterns instead of rid GO TO's. Relational is
> > > > comparable to the move from structured programming from GO TO's:
> > > > it provides more consistency and factors common activities into a
> > > > single interface convention (relational operators). OO lets
> > > > people re-invent their own just like there are a jillion ways to
> > > > do the equivalent of IF blocks with GO TO's.
> > >
> > > 	I've read a lot of nonsense in your post,
> >
> > No, the nonsense comes from the OO zealots. They have no proof for biz
> > apps. Two paradigms are equal or unknown until proven otherwise. I
> > want to see science, not brochures.
>
> 	you also have no proof for your claims either. As you started the
> claims, let's see them.


I don't claim my favorite approaches are objectively better. I am only
claiming that there is no evidence that OO is better, nor that one
should wrap everything they hate behind OO classes, until OO is proven
better. It is the K.I.S.S. principle, and wrapping is not KISS if
there is no repetition being factored out.

> 
> 
> 		FB
> 
> -- 

-T-

0
topmind (2124)
12/25/2006 4:48:15 AM
> Navigational query languages were proposed.
> They were ugly because navigational is ugly.

People have historically associated navigational queries with
so-called "network" databases that were actually relational/hierarchical
hybrid dbs. Below are two nearly equivalent queries based on nearly equivalent
data structures. The first one is for a true network-type db. Which of
the below is navigational/ugly?

(!= (and (get person instance *)
            (get * gender male)
            (get (get * child john) child *))
     john)

SELECT P2.*
FROM ((person INNER JOIN link ON person.ID = link.childID)
INNER JOIN link AS link2 ON link.parentID = link2.parentID)
INNER JOIN person AS P2 ON link2.childID = P2.ID
WHERE (((P2.name)<>"John")
             AND ((person.name)="John")
             AND ((P2.sex)="Male"));

0
neo55592 (356)
12/25/2006 7:15:53 AM
Neo wrote:
> > Navigational query languages were proposed.
> > They were ugly because navigational is ugly.
>
> People have historically related navigational queries with supposed
> "network" databases that were actually relational/hierarchal hybrid
> dbs. Below are two nearly equivalent queries based on nearly equivalent
> data structures. The first one is for a true network-type db. Which of
> the below is navigational/ugly?
>
> (!= (and (get person instance *)
>             (get * gender male)
>             (get (get * child john) child *))
>      john)
>
> SELECT P2.*
> FROM ((person INNER JOIN link ON person.ID = link.childID)
> INNER JOIN link AS link2 ON link.parentID = link2.parentID)
> INNER JOIN person AS P2 ON link2.childID = P2.ID
> WHERE (((P2.name)<>"John")
>              AND ((person.name)="John")
>              AND ((P2.sex)="Male"));

The biggest bloater of the SQL version is the joins. Some RDBMS offer
"natural joins" to avoid having to explicity state common joins. Thus,
the fault is SQL (or at least this version) and not relational in
general in this case. Without the join jabber, they would be quite
comparable. Your version also tends to use symbols instead of
key-words. Early lab relational languages also did this, but IBM
rejected that approach, afraid it would scare away customers. Thus,
that is also a language-specific issue.

(What is with the extra parentheses? It looks like the MS-Access-style
bastardization of SQL code.)

-T-

0
topmind (2124)
12/25/2006 7:48:29 AM
Thomas Gagne wrote:
> Nick Malik [Microsoft] wrote:
> > <snip>
> >> If you agree there's such a thing as an object-relational
> impedance >> mismatch, then perhaps its because you're witnessing the
> negative >> consequences of tightly coupling objects that shouldn't
> be tightly >> coupled.
> >>     
> > 
> > Nope.  I'm seeing the Object Relational Impedence a conceptual
> > disconnect between the 'traditional' RDBMS interface that doesn't
> > present a mechanism for encapsulating both code and operations in
> > the same object wrapper with the object oriented interface which
> > absolutely requires it.  Creating an object wrapper that DOES
> > present these two together is the goal of Object Relational Mapping
> > (ORM) tools.    
> I agree it is the goal, but should every DB-ish object in your
> application actually map to a tuple inside the DB (bean)?  Why do
> that when objects should communicate to each other through messages?
> Aren't the OR tools distracting OO programmers from how they ought
> really talk to the DB?

	I wouldn't say 'distracting', I'd say 'providing an alternative way to
work with the same well-known relational database.'

	Example: if you model your relational model with NIAM, you can use the
model for both your domain classes and also for the database model
(generate an E/R model from it). An O/R mapper provides a way to use
the entities defined at the level of abstraction NIAM provides both in
your own code and also in the DB, i.o.w. it provides a way to utilize
the model at runtime in an OO fashion.

> > Most business capabilities are described as behaviors first, and
> > data second.
> I disagree, only because behaviors are based on weak assumptions and
> common practices.  Whatever is done can either be done well or
> poorly.  Behaviors are based of the weakest facts--the way things are
> done.  In fact, after analyzing the data and comparing the state of a
> DB before and after some behavior, programmers often discover how
> behavior can be improved.
> 
>     But the DB must always be correct.  Whether the behaviors are
>     correct or not, the DB must maintain its integrity.  It must
> protect     its state.  In fact, our DR plan is based on the premise
> that the     DB's integrity is the most critical--everything else is
> cosmetic.
> <http://blogs.in-streamco.com/anything.php?title=rules_for_production>

	Though what is 'correctness' in the DB? Isn't that close to semantical
interpretation of the data in such a way that you need to transform the
data into information first to be sure it's correct?

	I mean, sure, there are referential integrity rules, but if I store a
row in the customer table with an address where city is New York and
Country is THe Netherlands, it's not correct, though the db doesn't
object.

> > <snip>
> > 
> > I also think you lose something valuable with Stored Procs.
> > Excellent efforts have been expended to consider the basic
> > principles of RDBMS design in objects, and to create objects that
> > will effectively assist with Object Relational Mapping as a first
> > step to addressing the Impedence mismatch.    
> You're right.  Some great research has been spent here--as there was
> in alchemy.
> 
> Consider my situation, I have a single database which I know is
> correct because it's guarded by procedures, constraints, unit and
> integrity tests.  

	that doesn't mean anything. It still can be incorrect in some form.
There's a difference between data and information, and unless you have
your data -> information transformation code inside your db for ALL
your applications (which thus means, your applications are written
inside the db) you still can have incorrect data.

	Before people start jumping up and down that that's impossible, the
keyword is 'incorrect'. You might have a relational model which of
course forces integrity rules on you and the data stored in the
physical representation of that relational model thus obeys these
rules, but that doesn't mean the data is correct.

	This means that data is only correct in the semantical context the
data is used in, i.e. when it becomes information. An address with 'New
York' as city and 'The Netherlands' as country isn't violating your
model's rules, but is violating a rule for a valid address in the
Netherlands as we don't have a city called 'New York' in the
netherlands.

	So what you're saying doesn't apply solely to a db with procs; it also
applies to db's with an external api.

> I have multiple applications--some of them share
> common data models but others of them do not.  Which model is correct?
> Consider you have 20 programs each doing specific things.  Between
> the 20 you've discovered there's three different object models that
> best reflect their dependent applications needs and designs.  Which
> of the three should be mapped to the DB?  Should the DB's model be
> massaged to reflect any of them, or should it be designed to be
> perfect for the business data?

	In my opinion, there's just 1 model possible, and it comes down to a
NIAM/ORM (object role modelling) model. With that model, you can speak
about an entity and how it's used in the persistent storage and also in
your application.
See:
http://weblogs.asp.net/fbouma/archive/2006/08/23/Essay_3A00_-The-Database-Model-is-the-Domain-Model.aspx

	Everyone who has created NIAM models or even E/R models knows that
there are a couple of different models possible when you look solely on
the information analysis results. So one could for example say: to ease
development, aggregated entities should be possible, e.g. a
'SalesOrder' which contains 'Order', 'Customer', order lines etc. and
is used as a single entity in a piece of code. This however comes down
to the fact that the entity 'SalesOrder' is a denormalization of a
normal NIAM model with customer, order, order lines etc.

	Question now is: would it be best to store the SalesOrder directly
into the db as a single table, or do you want to keep the normalized
tables and use the denormalized SalesOrder entity in your code?

	Either way, the SalesOrder entity has an advantage: you can define a
set of rules for this SalesOrder and implement them inside that
salesorder, and your own code working with that SalesOrder is then
simpler.

	Choosing this route IMHO implies a new abstraction layer above a NIAM
model. It then comes down to: to which abstraction level do you want to
talk in your code, the NIAM level or the abstraction level just above
it?

		FB

-- 
------------------------------------------------------------------------
Lead developer of LLBLGen Pro, the productive O/R mapper for .NET
LLBLGen Pro website: http://www.llblgen.com
My .NET blog: http://weblogs.asp.net/fbouma
Microsoft MVP (C#) 
------------------------------------------------------------------------
0
Frans
12/25/2006 11:43:28 AM
On 25 Dec 2006 11:43:28 GMT, Frans Bouma wrote:

> Thomas Gagne wrote:

>> Consider my situation, I have a single database which I know is
>> correct because it's guarded by procedures, constraints, unit and
>> integrity tests.  
> 
> 	that doesn't mean anything. It still can be incorrect in some form.
> There's a difference between data and information, and unless you have
> your data -> information transformation code inside your db for ALL
> your applications (which thus means, your applications are written
> inside the db) you still can have incorrect data.
> 
> 	Before people start jumping up and down that thats impossible, the
> keyword is 'incorrect'. You might have a relational model which of
> course forces integrity rules on you and the data stored in the
> physical representation of that relational model thus obeys these
> rules, but that doesn't mean the data is correct.
> 
> 	This means that data is only correct in the semantical context the
> data is used in, i.e. when it becomes information. An address with 'New
> York' as city and 'The Netherlands' as country isn't violating your
> model's rules, but is violating a rule for a valid address in the
> Netherlands as we don't have a city called 'New York' in the
> netherlands.

You address here one issue, that is, that the RA rules applied to wrong data
may produce a wrong outcome. It is important, but it is not *the* problem,
which is that having a correct (consistent) set of rules (like RA) and some
valid input (data set), one can still produce semantically wrong outcomes.
This is universally true. Arithmetic is certainly consistent, but 1 apple + 1
orange is not 2 Ampere. Arithmetic does not define the meaning of 1 and +.
Equivalently, the operations of RA should have a meaning in the application
domain. That meaning lies outside RA, and nothing in RA can warrant
anything about it. The same is true for any programming language. So the usual
chat about "DB correctness" is either trivial or rubbish.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
0
mailbox2 (6357)
12/25/2006 1:11:22 PM
> What is with the extra parenthesis?
> It looks like the MS-Access-style bastardization of SQL code.

Yes, it was auto-generated by MS Access. I suppose the extra
parentheses are helpful to their parser.

> The biggest bloater of the SQL version is the joins. Some RDBMS offer
> "natural joins" to avoid having to explicity state common joins.

Which RDBMS? What might it look like in this case?

> Thus, the fault is SQL (or at least this version)
> and not relational in general in this case.
> Your version also tends to use symbols instead of key-words.
> Early lab relational languages also did this, but IBM
> rejected that approach, afraid it would scare away customers.
> Thus, that is also a language-specific issue.

IBM rejected earlier relational languages with less join jabber and
fewer key-words because they thought those would scare away customers?
Can you explain further?

> Without the join jabber, they would be quite comparable.

For a majority of cases, SQL expressions are simpler. For some more
complex networks that need the flexibility to meet new data requirements
with minimal impact, like the "toy" examples, the network-type db's
expressions can be simpler.

0
neo55592 (356)
12/25/2006 4:00:19 PM
"topmind" <topmind@technologist.com> writes:
> Patrick May wrote:
> > "topmind" <topmind@technologist.com> writes:
> > > Neo wrote:
> > > > > If you take away inheritence, you get "network structures"
> > > > > (AKA tangled pasta). Dr. Codd sought to escape those by
> > > > > applying set theory, and network structures thankfully fell
> > > > > out of favor, until the OO crowd tried to bring them back
> > > > > from the dead.
> > > >
> > > > Can you give an example of such as tangled pasta?
> > >
> > > OO Visitor pattern.
> >
> >      The Visitor pattern provides two capabilities: 1) simulation
> > of double dispatch and 2) non-intrusively adding new operations to
> > existing classes.  Both of these address limitations of languages
> > such as C++ and Java.  The existence of this pattern does not
> > demonstrate a general flaw in the object oriented approach.
>
> How about you Visitor defenders present a somewhat practical biz
> example where Visitor allegedly makes maintenence easier.

     Non-sequitur.  I never claimed to be a "fan" of the pattern, nor
did I assert that it makes maintenance easier.  I merely pointed out
the reasons for using it and provided an example of the context in
which it applies.  My only purpose was to demonstrate that your use of
this pattern to support your contention that OO techniques result in
"tangled pasta" is without merit.

Sincerely,

Patrick

------------------------------------------------------------------------
S P Engineering, Inc.  | Large scale, mission-critical, distributed OO
                       | systems design and implementation.
          pjm@spe.com  | (C++, Java, Common Lisp, Jini, middleware, SOA)
0
pjm (703)
12/25/2006 6:22:19 PM
Patrick May wrote:
> "topmind" <topmind@technologist.com> writes:
> > Patrick May wrote:
> > > "topmind" <topmind@technologist.com> writes:
> > > > Neo wrote:
> > > > > > If you take away inheritence, you get "network structures"
> > > > > > (AKA tangled pasta). Dr. Codd sought to escape those by
> > > > > > applying set theory, and network structures thankfully fell
> > > > > > out of favor, until the OO crowd tried to bring them back
> > > > > > from the dead.
> > > > >
> > > > > Can you give an example of such as tangled pasta?
> > > >
> > > > OO Visitor pattern.
> > >
> > >      The Visitor pattern provides two capabilities: 1) simulation
> > > of double dispatch and 2) non-intrusively adding new operations to
> > > existing classes.  Both of these address limitations of languages
> > > such as C++ and Java.  The existence of this pattern does not
> > > demonstrate a general flaw in the object oriented approach.
> >
> > How about you Visitor defenders present a somewhat practical biz
> > example where Visitor allegedly makes maintenence easier.
>
>      Non-sequitur.  I never claimed to be a "fan" of the pattern, nor
> did I assert that it makes maintenance easier.  I merely pointed out
> the reasons for using it and provided an example of the context in
> which it applies.  My only purpose was to demonstrate that your use of
> this pattern to support your contention that OO techniques result in
> "tangled pasta" is without merit.

Fine, pick another OO pattern and kick procedural/relational's ass with
it. I don't care how you kick its ass, just do it and show it. Put your
money where your mouth is and beat the hell out of me with OO.

> 
> Sincerely,
> 
> Patrick
> 

-T-

0
topmind (2124)
12/25/2006 10:36:53 PM
Neo wrote:
> > What is with the extra parenthesis?
> > It looks like the MS-Access-style bastardization of SQL code.
>
> Yes, it was auto-generated by MS Access. I suppose the extra
> parenthesis are helpful to their parser.
>
> > The biggest bloater of the SQL version is the joins. Some RDBMS offer
> > "natural joins" to avoid having to explicity state common joins.
>
> Which RMDB? What might it look like in this case?

It goes something like:

....FROM tableA NATURAL JOIN tableB NATURAL JOIN tableC
WHERE...

The natural join clause then uses either a data dictionary or column
names to perform the join rather than explicit matches. I have not used
it myself, so I am not an expert on it. The point is that joins can be
simplified. If a dialect of SQL itself does not provide it, then one
can use something like:

SELECT ...
#joinClause("tableA,tableB,tableC")#
WHERE ...

Our embedded function (returning a string) can then supply the
commonly-used joins and create the SQL join phrase for us.
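
For readers who have not seen it, a rough before/after sketch (table
and column names invented, and assuming custID is the only column name
the two tables share):

-- Explicit form:
SELECT o.orderID, c.name
FROM orders AS o
INNER JOIN customers AS c ON o.custID = c.custID
WHERE o.total > 100;

-- With NATURAL JOIN, the matching column names carry the join condition:
SELECT orderID, name
FROM orders NATURAL JOIN customers
WHERE total > 100;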

>
> > Thus, the fault is SQL (or at least this version)
> > and not relational in general in this case.
> > Your version also tends to use symbols instead of key-words.
> > Early lab relational languages also did this, but IBM
> > rejected that approach, afraid it would scare away customers.
> > Thus, that is also a language-specific issue.
>
> IBM rejected earlier relational languages with less join jabbers and
> key-words because they thought it would scare away customers? Can you
> explain further?

The early experiments used math-based symbols instead of key-words
because researchers were math savvy and because Dr. Codd presented his
query language with math-based symbols. IBM felt that it would be more
marketable if key-words were used instead. Perhaps they felt it should
be natural to COBOL programmers. The early symbols did have a natural
join if I remember correctly, but I am not positive. I don't know which
vendors supply a natural join. It seems to be based on column names, which
to me is a mistake: it should be based upon a data dictionary or join
reference table IMO.

>
> > Without the join jabber, they would be quite comparable.
>
> For a majority of the cases, SQL expressions are simpler. For some more
> complex networks with the flexibility to meet new data requirements
> with minimal impact, like the "toy" examples, network-type db's
> expressions can be simpler.

So network DB's are better at toy examples? I won't even bother to
challenge that :-)

Generally a problem domain tends to present common needs/patterns such
that views or custom functions (if available) can often simplify SQL
queries.  I do wish there was more competition in relational query
languages. SQL has grown a bit stale.

Note that Dr. Codd and his supporters used to have "query-offs" with
Bachman's group, the premier navigational/network supporter at the
time.  By most accounts, Dr. Codd's group won.

-T-

0
topmind (2124)
12/25/2006 11:16:35 PM
"topmind" <topmind@technologist.com> writes:
> Patrick May wrote:
>> "topmind" <topmind@technologist.com> writes:
>> > > > > Can you give an example of such as tangled pasta?
>> > > >
>> > > > OO Visitor pattern.
>> > >
>> > >      The Visitor pattern provides two capabilities: 1)
>> > > simulation of double dispatch and 2) non-intrusively adding new
>> > > operations to existing classes.  Both of these address
>> > > limitations of languages such as C++ and Java.  The existence
>> > > of this pattern does not demonstrate a general flaw in the
>> > > object oriented approach.
>> >
>> > How about you Visitor defenders present a somewhat practical biz
>> > example where Visitor allegedly makes maintenence easier.
>>
>>      Non-sequitur.  I never claimed to be a "fan" of the pattern,
>> nor did I assert that it makes maintenance easier.  I merely
>> pointed out the reasons for using it and provided an example of the
>> context in which it applies.  My only purpose was to demonstrate
>> that your use of this pattern to support your contention that OO
>> techniques result in "tangled pasta" is without merit.
>
> Fine, pick another OO pattern and kick procedural/relational's ass
> with it. I don't care how you kick its ass, just do it and show
> it. Put your money where your mouth is and beat the hell of out of
> me with OO.

     You've got it backwards.  You used the Visitor pattern in support
of one of your claims in your conversation with Neo.  I simply pointed
out that it does not, in fact, support your argument.  The burden of
proof is still on you to provide an example of OO techniques leading
to "tangled pasta".

     Alternatively, you could simply admit to Neo that you cannot
support your assertion.

Sincerely,

Patrick

------------------------------------------------------------------------
S P Engineering, Inc.  | Large scale, mission-critical, distributed OO
                       | systems design and implementation.
          pjm@spe.com  | (C++, Java, Common Lisp, Jini, middleware, SOA)
0
pjm (703)
12/25/2006 11:17:25 PM
Patrick May wrote:
> "topmind" <topmind@technologist.com> writes:
> > Patrick May wrote:
> >> "topmind" <topmind@technologist.com> writes:
> >> > > > > Can you give an example of such as tangled pasta?
> >> > > >
> >> > > > OO Visitor pattern.
> >> > >
> >> > >      The Visitor pattern provides two capabilities: 1)
> >> > > simulation of double dispatch and 2) non-intrusively adding new
> >> > > operations to existing classes.  Both of these address
> >> > > limitations of languages such as C++ and Java.  The existence
> >> > > of this pattern does not demonstrate a general flaw in the
> >> > > object oriented approach.
> >> >
> >> > How about you Visitor defenders present a somewhat practical biz
> >> > example where Visitor allegedly makes maintenence easier.
> >>
> >>      Non-sequitur.  I never claimed to be a "fan" of the pattern,
> >> nor did I assert that it makes maintenance easier.  I merely
> >> pointed out the reasons for using it and provided an example of the
> >> context in which it applies.  My only purpose was to demonstrate
> >> that your use of this pattern to support your contention that OO
> >> techniques result in "tangled pasta" is without merit.
> >
> > Fine, pick another OO pattern and kick procedural/relational's ass
> > with it. I don't care how you kick its ass, just do it and show
> > it. Put your money where your mouth is and beat the hell of out of
> > me with OO.
>
>      You've got it backwards.  You used the Visitor pattern in support
> of one of your claims in your conversation with Neo.  I simply pointed
> out that it does not, in fact, support your argument.  The burden of
> proof is still on you to provide an example of OO techniques leading
> to "tangled pasta".

There are no real rules for when to use what GOF pattern, especially if
there are competing factors.  The rules of relational normalization are
governed mostly by duplication removal. All else being equal,
consistency trumps inconsistency.

>
>      Alternatively, you could simply admit to Neo that you cannot
> support your assertion.
> 
> Sincerely,
> 
> Patrick
> 

-T-

0
topmind (2124)
12/25/2006 11:50:10 PM
Dmitry A. Kazakov wrote:

> On 25 Dec 2006 11:43:28 GMT, Frans Bouma wrote:
> 
> > Thomas Gagne wrote:
> 
> >> Consider my situation, I have a single database which I know is
> >> correct because it's guarded by procedures, constraints, unit and
> >> integrity tests.  
> > 
> > 	that doesn't mean anything. It still can be incorrect in some form.
> > There's a difference between data and information, and unless you
> > have your data -> information transformation code inside your db
> > for ALL your applications (which thus means, your applications are
> > written inside the db) you still can have incorrect data.
> > 
> > 	Before people start jumping up and down that thats impossible, the
> > keyword is 'incorrect'. You might have a relational model which of
> > course forces integrity rules on you and the data stored in the
> > physical representation of that relational model thus obeys these
> > rules, but that doesn't mean the data is correct.
> > 
> > 	This means that data is only correct in the semantical context the
> > data is used in, i.e. when it becomes information. An address with
> > 'New York' as city and 'The Netherlands' as country isn't violating
> > your model's rules, but is violating a rule for a valid address in
> > the Netherlands as we don't have a city called 'New York' in the
> > netherlands.
> 
> You address here one issue, that is, the RA rules applied to wrong
> data may produce wrong outcome. It is important, but it is not the
> problem. Which is that having a correct (consistent) set of rules
> (like RA) and some valid input (data set), one can still produce
> semantically wrong outcomes. This is universally true. Arithmetic is
> certainly consistent, but 1 apple + 1 orange is not 2 Ampere.
> Arithmetic does not define the meaning of 1 and +.  Equivalently, the
> operations of RA should have a meaning in the application domain.
> This meaning lies outside RA, and nothing in RA can warranty anything
> about it. Same is true for any programming language. So the custom
> chat about "DB correctness" is either trivial or rubbish.

 	Thanks Dmitry for correctly wording this. I tried to explain it; you
said it way better. :)

		FB

-- 
------------------------------------------------------------------------
Lead developer of LLBLGen Pro, the productive O/R mapper for .NET
LLBLGen Pro website: http://www.llblgen.com
My .NET blog: http://weblogs.asp.net/fbouma
Microsoft MVP (C#) 
------------------------------------------------------------------------
0
Frans
12/26/2006 10:25:26 AM
"topmind" <topmind@technologist.com> writes:
> Patrick May wrote:
>>      You've got it backwards.  You used the Visitor pattern in
>> support of one of your claims in your conversation with Neo.  I
>> simply pointed out that it does not, in fact, support your
>> argument.  The burden of proof is still on you to provide an
>> example of OO techniques leading to "tangled pasta".
>
> There are no real rules for when to use what GOF pattern, especially
> if there are competing factors.  The rules of relational
> normalization are governed mostly by duplication removal. All else
> being equal, consistency trumps inconsistency.

     So you can't provide an actual example.  You should just come out
and say so.

>>      Alternatively, you could simply admit to Neo that you cannot
>> support your assertion.

     This still appears to be your only option.

Sincerely,

Patrick

------------------------------------------------------------------------
S P Engineering, Inc.  | Large scale, mission-critical, distributed OO
                       | systems design and implementation.
          pjm@spe.com  | (C++, Java, Common Lisp, Jini, middleware, SOA)
0
pjm (703)
12/26/2006 12:47:44 PM
Patrick May wrote:
> "topmind" <topmind@technologist.com> writes:
> > Patrick May wrote:
> >>      You've got it backwards.  You used the Visitor pattern in
> >> support of one of your claims in your conversation with Neo.  I
> >> simply pointed out that it does not, in fact, support your
> >> argument.  The burden of proof is still on you to provide an
> >> example of OO techniques leading to "tangled pasta".
> >
> > There are no real rules for when to use what GOF pattern, especially
> > if there are competing factors.  The rules of relational
> > normalization are governed mostly by duplication removal. All else
> > being equal, consistency trumps inconsistency.
>
>      So you can't provide an actual example.  You should just come out
> and say so.

How exactly does one provide examples to show that there are no
consistent consensus rules for something?  If I say "There is no
evidence that unicorns exist", you cannot ask for an example. It is
YOUR burden to show that unicorns exist. Now, replace unicorns with
"consistent consensus rules".

>
> >>      Alternatively, you could simply admit to Neo that you cannot
> >> support your assertion.
>
>      This still appears to be your only option.
> 
> Sincerely,
> 
> Patrick
> 

-T-

0
topmind (2124)
12/26/2006 5:08:02 PM
"topmind" <topmind@technologist.com> writes:
> Patrick May wrote:
>> "topmind" <topmind@technologist.com> writes:
>> > Patrick May wrote:
>> >>      You've got it backwards.  You used the Visitor pattern in
>> >> support of one of your claims in your conversation with Neo.  I
>> >> simply pointed out that it does not, in fact, support your
>> >> argument.  The burden of proof is still on you to provide an
>> >> example of OO techniques leading to "tangled pasta".
>> >
>> > There are no real rules for when to use what GOF pattern,
>> > especially if there are competing factors.  The rules of
>> > relational normalization are governed mostly by duplication
>> > removal. All else being equal, consistency trumps inconsistency.
>>
>>      So you can't provide an actual example.  You should just come
>> out and say so.
>
> How exactly does one provide examples to show that there are no
> consistent consensus rules for something?

     That wasn't the claim under discussion.  You said, in message
1166720109.380669.77600@73g2000cwn.googlegroups.com:

> If you take away inheritence, you get "network structures" (AKA
> tangled pasta).

Neo, quite reasonably, asked you for an example of such tangled
pasta.  You have thus far failed to provide an example.  After this
many messages, it appears that you don't have one.  You should just
admit it.

Sincerely,

Patrick

------------------------------------------------------------------------
S P Engineering, Inc.  | Large scale, mission-critical, distributed OO
                       | systems design and implementation.
          pjm@spe.com  | (C++, Java, Common Lisp, Jini, middleware, SOA)
0
pjm (703)
12/26/2006 5:27:57 PM
topmind wrote:

> Patrick May wrote:

>>"topmind" <topmind@technologist.com> writes:

>>>Patrick May wrote:

PM>     You've got it backwards.  You used the Visitor pattern in
PM>support of one of your claims in your conversation with Neo.  I
PM>simply pointed out that it does not, in fact, support your
PM>argument.  The burden of proof is still on you to provide an
PM>example of OO techniques leading to "tangled pasta".

TM>There are no real rules for when to use what GOF pattern, especially
TM>if there are competing factors.  The rules of relational
TM>normalization are governed mostly by duplication removal. All else
TM>being equal, consistency trumps inconsistency.

> So you can't provide an actual example.  You should just come out
>>and say so.

> How exactly does one provide examples to show that there are no
> consistent consensus rules for something?

I contend that GoF have such rules: they are labelled "motivation" etc .


> If I say "There is no
> evidence that unicorns exist", you cannot ask for an example. It is
> YOUR burden to show that unicorns exist. Now, replace unicorns with
> "consistent consensus rules".

Counter-argument : it is easier to *disprove* something than it is to
*prove* it. ***

Proofs are universal : they must hold under all conditions.
Dis-proof is existential : only one condition has to be found to render
a proof statement invalid as it stands.

So, as far as GoF patterns go (and using your weird language) :

show us *one* "inconsistent consensus rule" .


It is *your burden* to show that one condition.

If you cannot, then state (as Patrick May has so amusingly had you
squirming over the months, on various different threads, to avoid doing)
that while you have doubts as to the veracity of something, you
specifically do not have the proof (and/or ability) to disprove that veracity.


Regards,
Steven Perryman

*** Lest you try to claim this is not how proof works etc, I will
*immediately provide a real-world example of one of the most important
advances in science of the last 100 yrs* (ie teaching you how to
actually disprove something) .
0
S
12/26/2006 5:50:08 PM
S Perryman wrote:
> topmind wrote:
>
> > Patrick May wrote:
>
> >>"topmind" <topmind@technologist.com> writes:
>
> >>>Patrick May wrote:
>
> PM>     You've got it backwards.  You used the Visitor pattern in
> PM>support of one of your claims in your conversation with Neo.  I
> PM>simply pointed out that it does not, in fact, support your
> PM>argument.  The burden of proof is still on you to provide an
> PM>example of OO techniques leading to "tangled pasta".
>
> TM>There are no real rules for when to use what GOF pattern, especially
> TM>if there are competing factors.  The rules of relational
> TM>normalization are governed mostly by duplication removal. All else
> TM>being equal, consistency trumps inconsistency.
>
> > So you can't provide an actual example.  You should just come out
> >>and say so.
>
> > How exactly does one provide examples to show that there are no
> > consistent consensus rules for something?
>
> I contend that GoF have such rules: they are labelled "motivation" etc .

They are often worded as "adding an X without having to change Y".
However, change needs often change over time. Up-front change needs are
often not a good guide to future change needs.

>
>
> > If I say "There is no
> > evidence that unicorns exist", you cannot ask for an example. It is
> > YOUR burden to show that unicorns exist. Now, replace unicorns with
> > "consistent consensus rules".
>
> Counter-argument : it is easier to *disprove* something than it is to
> *prove* it. ***
>
> Proofs are universal : they must hold under all conditions.
> Dis-proof is existential : only one condition has to be found to render
> a proof statement invalid as it stands.


Okay, I declare "tangled pasta" a subjective opinion. However, there is
no evidence of GOF OO patterns improving realistic business logic.
Until they are proven better in my domain with inspectable public
source code, I shall use procedural/relational techniques instead and
recommend others ignore them also.


>
> So, as far as GoF patterns go (and using your weird language) :
>
> show us *one* "inconsistent consensus rule" .
>
>
> It is *your burden* to show that one condition.
>
> If you cannot, as Patrick May has so amusingly had you squirming over
> the months on various different threads trying to avoid, state that
> while you have doubts as to the veracity of something, you specifically
> do not have the proof (and/or ability) to disprove the veracity.

The bottom line is that you cannot prove GOF OO better. Mr. May likes
nitty side-tracks to distract from the real issue. He is more
interested in bashing me than in defending OO. Kick the messenger all
you want, but OO is not proven better outside of systems software.
Whether I am a genius or Bozo, you still have no OO proof.

> 
> 
> Regards,
> Steven Perryman
> 

-T-

0
topmind (2124)
12/26/2006 6:44:02 PM
Frans Bouma wrote:
> Thomas Gagne wrote:
>   
> <snip>
>
> 	Though what is 'correctness' in the DB? Isn't that close to semantical
> interpretation of the data in such a way that you need to transform the
> data into information first to be sure it's correct?
>
> 	I mean, sure, there are referential integrity rules, but if I store a
> row in the customer table with an address where city is New York and
> Country is THe Netherlands, it's not correct, though the db doesn't
> object.
>   
For DB's to be correct, more is needed than referential integrity.  For 
instance, in financial systems the data needs to balance.  Transactions 
are supposed to balance, which in theory would keep the system 
"balanced", but production unit tests (doesn't everyone run those?) can 
prove on a daily (or more frequent) basis that the DB's state is 
referentially correct, business-rule correct, and correct in other 
respects.
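
One concrete shape such a production integrity test can take (table and
column names here are invented for illustration) is a query that should
always return zero rows, flagging any transaction whose entries do not
sum to zero:

SELECT transactionID, SUM(amount) AS imbalance
FROM journal_entry
GROUP BY transactionID
HAVING SUM(amount) <> 0;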
>   
>>> <snip>
>>>
>>> I also think you lose something valuable with Stored Procs.
>>> Excellent efforts have been expended to consider the basic
>>> principles of RDBMS design in objects, and to create objects that
>>> will effectively assist with Object Relational Mapping as a first
>>> step to addressing the Impedence mismatch.    
>>>       
>> You're right.  Some great research has been spent here--as there was
>> in alchemy.
>>
>> Consider my situation, I have a single database which I know is
>> correct because it's guarded by procedures, constraints, unit and
>> integrity tests.  
>>     
> <snip>
>
> 	So what you're saying doesn't imply solely on a db with procs, it also
> implies on db's with an external api.
>   
Even stripped of all APIs, the database should be provably correct.  
APIs provide a rampart against corruption (data errors), as does 
referential integrity (structural errors).  Whatever bugs may express 
themselves in the code, it's more important to identify and isolate them 
in the database.  Bad data in the database can infect multiple 
applications and be the cause for bad decisions--both manual and 
automated.  This is one of the reasons we focus so strongly on DB integrity.
>   
>> I have multiple applications--some of them share
>> common data models but others of them do not.  Which model is correct?
>> Consider you have 20 programs each doing specific things.  Between
>> the 20 you've discovered there's three different object models that
>> best reflect their dependent applications needs and designs.  Which
>> of the three should be mapped to the DB?  Should the DB's model be
>> massaged to reflect any of them, or should it be designed to be
>> perfect for the business data?
>>     
>
> 	In my opinion, there's just 1 model possible, and it comes down to a
> NIAM/ORM (object role modelling) model. With that model, you can speak
> about an entity and how it's used in the persistent storage and also in
> your application.
>   
I don't agree there's a chicken-and-egg problem.  Chicken-and-egg 
dilemmas are such because both the chicken and the egg are required entities.  
Object-relational design questions aren't characterized that way because 
objects and object models are optional.  They exist only because system 
designers selected object oriented languages to program with.  The 
relational model exists (and persists) with or without objects or object 
oriented languages.

Along the same theme, the article also errs in presenting a 
bifurcation.  Even if we assume both object and relational models exist 
in symbiosis we do not have to map one model onto the other.  We can 
instead implement to the interface, which is what transactions are all 
about.

In fact, I could probably prove transactions are how it should be done 
using the same proof for an intersection table's requirement to 
represent many-to-many relationships--but that proof is outside the 
scope of this reply.  It is, however, a good subject for a subsequent 
article.  It already proves the need for middleware; why not prove 
transactions are the most efficient and correct approach to joining 
applications to databases?

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
12/26/2006 6:47:48 PM
Frans Bouma wrote:
> Dmitry A. Kazakov wrote:
>
>   
>> <snip>
>> You address here one issue, that is, the RA rules applied to wrong
>> data may produce wrong outcome. It is important, but it is not the
>> problem. Which is that having a correct (consistent) set of rules
>> (like RA) and some valid input (data set), one can still produce
>> semantically wrong outcomes. This is universally true. Arithmetic is
>> certainly consistent, but 1 apple + 1 orange is not 2 Ampere.
>> Arithmetic does not define the meaning of 1 and +.  Equivalently, the
>> operations of RA should have a meaning in the application domain.
>> This meaning lies outside RA, and nothing in RA can warranty anything
>> about it. Same is true for any programming language. So the custom
>> chat about "DB correctness" is either trivial or rubbish.
>>     
>
>  	Thanks Dmitry for correctly wording this. I tried to explain what you
> said it way better. :)
>
> 		FB
>
>   
I must be the only one that didn't follow it.

Where did the wrong data come from?  Why is it solely relational 
algebra's problem to detect it?

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
12/26/2006 6:57:06 PM
Thomas Gagne wrote:
> Frans Bouma wrote:
> > Thomas Gagne wrote:
> >
> > <snip>
> >
> > 	Though what is 'correctness' in the DB? Isn't that close to semantical
> > interpretation of the data in such a way that you need to transform the
> > data into information first to be sure it's correct?
> >
> > 	I mean, sure, there are referential integrity rules, but if I store a
> > row in the customer table with an address where city is New York and
> > Country is THe Netherlands, it's not correct, though the db doesn't
> > object.
> >
> For DB's to be correct, more is needed than referential integrity.  For
> instance, in financial systems the data needs to balance.  Transactions
> are supposed to balance, which in theory would keep the system
> "balanced", but production unit tests (doesn't everyone run those?) can
> prove on a daily (or more frequent) basis that the DB's state is both
> referentially correct, business rule correct, and in other respects,
> correct.

I was talking with some tech buddies of mine about this once, and we
concluded that double-entry book-keeping was archaic. One does not need
to store the same info in two different places with modern DB's. If you
want to avoid losing a record, then use some kind of incrementing
sequence number.

Of course the usual nightly backup process should be in place, and
perhaps a DB mirror system if you have the budget.

-T-

0
topmind (2124)
12/26/2006 8:33:05 PM
On 24 Dec 2006 20:27:12 -0800, "topmind" <topmind@technologist.com>
wrote:

>> Explicit b-trees and b-trees crosslinked into "networks" can be very
>> efficient and straightforward, so why did they lose so completely?
>
>Efficient, maybe. Straitforward? no way.  

OK, but the efficiency can be a big deal.

>Relational offers more
>consistency. There are less different ways to model the same business
>in relational. It sounds like a bad thing, but relational killed the
>"creativity". It is similar to how structured blocks killed GOTO
>creativity. IBM's IMS is dead for a reason.

VSAM is more what I had in mind.

>> The textbook answer is that they are not reusable, and a normalized
>> relational database, is.  But I think the answer is elsewhere.
>> Because of its independence, the relational database requires some
>> kind of language interface, pretty much universally some form of SQL
>> these days, which further hides the implementation from the
>> application (I agree with topmind on this).
>
>Navigational query languages were proposed. They were ugly because
>navigational is ugly.

SQL isn't very pretty.  I should try using MULTIPLY/DIVIDE and other
terminology that is more precise - inner/natural join can be ambiguous
as to why you're doing it, and optimizers can assume incorrectly.

The wordiness is not a problem for me, I tend to comment extensively
anyway.

The role of compilation and optimization in relational databases is
VERY underemphasized.  Maybe it's "only" about performance, but 100x
or more is enough to pay attention to.

J.


0
12/26/2006 11:22:49 PM
JXStern wrote:
> On 24 Dec 2006 20:27:12 -0800, "topmind" <topmind@technologist.com>
> wrote:
>
> >> Explicit b-trees and b-trees crosslinked into "networks" can be very
> >> efficient and straightforward, so why did they lose so completely?
> >
> >Efficient, maybe. Straitforward? no way.
>
> OK, but the efficient can be a big deal.

I doubt a general-purpose navigational DBMS will be faster than a
general-purpose relational DBMS. Application-specific navigational DBMS
do exist and they are indeed fast. (Phone co's use them IIANM).
However, much is hard-wired in order to acheive such speed. A
hard-wired RDBMS could be created also.

>
> >Relational offers more
> >consistency. There are less different ways to model the same business
> >in relational. It sounds like a bad thing, but relational killed the
> >"creativity". It is similar to how structured blocks killed GOTO
> >creativity. IBM's IMS is dead for a reason.
>
> VSAM is more what I had in mind.

That is an implementation detail, isn't it? If one is going to create a
navigational DB standard, it should not assume a specific
implementation I would think.

>
> >> The textbook answer is that they are not reusable, and a normalized
> >> relational database, is.  But I think the answer is elsewhere.
> >> Because of its independence, the relational database requires some
> >> kind of language interface, pretty much universally some form of SQL
> >> these days, which further hides the implementation from the
> >> application (I agree with topmind on this).
> >
> >Navigational query languages were proposed. They were ugly because
> >navigational is ugly.
>
> SQL isn't very pretty.  I should try using MULTIPLY/DIVIDE and other
> terminology that is more precise - inner/natural join can be ambiguous
> as to why you're doing it, and optimizers can assume incorrectly.

Well, I am all for creating a new relational language and/or RDB
standard to replace or compete with SQL. I even proposed my own called
SMEQL. However, even with its warts, SQL beats what is currently out
there.

>
> The wordiness is not a problem for me, I tend to comment extensively
> anyway.
>
> The role of compilation and optimization in relational databases is
> VERY underemphasized.  Maybe it's "only" about performance, but 100x
> or more is enough to pay attention to.

Please explain. Are you suggesting that if RDBMS were more "pure", then
automatic optimization would be more effective? Perhaps. But again, you
are talking about a brand of apples and not apples in general.

I'll be behind you if you wish to lobby for new relational standards.
At least relational has (semi) standards. Navigational has none in
usage that I know of (other than file systems, and perhaps to some
extent XML-DBs).

> 
> J.

-T-

0
topmind (2124)
12/27/2006 12:21:33 AM
topmind wrote:
> Thomas Gagne wrote:
>   
>> Frans Bouma wrote:
>>     
>>> Thomas Gagne wrote:
>>>
>>> <snip>
>>>
>>> 	Though what is 'correctness' in the DB? Isn't that close to semantical
>>> interpretation of the data in such a way that you need to transform the
>>> data into information first to be sure it's correct?
>>>
>>> 	I mean, sure, there are referential integrity rules, but if I store a
>>> row in the customer table with an address where city is New York and
>>> Country is THe Netherlands, it's not correct, though the db doesn't
>>> object.
>>>
>>>       
>> For DB's to be correct, more is needed than referential integrity.  For
>> instance, in financial systems the data needs to balance.  Transactions
>> are supposed to balance, which in theory would keep the system
>> "balanced", but production unit tests (doesn't everyone run those?) can
>> prove on a daily (or more frequent) basis that the DB's state is both
>> referentially correct, business rule correct, and in other respects,
>> correct.
>>     
>
> I was talking with some tech buddies of mine about this once, and we
> concluded that double-entry book-keeping was archaic. One does not need
> to store the same info in two different places with modern DB's. If you
> want to avoid losing a record, then use some kind of increment
> sequencing number.
>   
Are you and your buddies appropriate authorities to dismiss a Generally 
Accepted Accounting Practice in use since the 12th century?

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
12/27/2006 2:10:43 AM
Thomas Gagne wrote:
> topmind wrote:
> > Thomas Gagne wrote:
> >
> >> Frans Bouma wrote:
> >>
> >>> Thomas Gagne wrote:
> >>>
> >>> <snip>
> >>>
> >>> 	Though what is 'correctness' in the DB? Isn't that close to semantical
> >>> interpretation of the data in such a way that you need to transform the
> >>> data into information first to be sure it's correct?
> >>>
> >>> 	I mean, sure, there are referential integrity rules, but if I store a
> >>> row in the customer table with an address where city is New York and
> >>> Country is THe Netherlands, it's not correct, though the db doesn't
> >>> object.
> >>>
> >>>
> >> For DB's to be correct, more is needed than referential integrity.  For
> >> instance, in financial systems the data needs to balance.  Transactions
> >> are supposed to balance, which in theory would keep the system
> >> "balanced", but production unit tests (doesn't everyone run those?) can
> >> prove on a daily (or more frequent) basis that the DB's state is both
> >> referentially correct, business rule correct, and in other respects,
> >> correct.
> >>
> >
> > I was talking with some tech buddies of mine about this once, and we
> > concluded that double-entry book-keeping was archaic. One does not need
> > to store the same info in two different places with modern DB's. If you
> > want to avoid losing a record, then use some kind of increment
> > sequencing number.
> >
> Are you and your buddies appropriate authorities to dismiss a Generally
> Accepted Accounting Practice in use since the 12th century?


Is it required to be implemented the same way as done on paper? Or is
one merely required to present it in double-entry form (which is just a
presentation issue)?  It seems foolish to have laws that force
denormalized data. There are better ways to get integrity than to
mirror paper. In other words:

if forced by law then
   law not rational
else
   there are better ways to ensure data integrity
end if

>
> --
> Visit <http://blogs.instreamfinancial.com/anything.php>
> to read my rants on technology and the finance industry.

-T-

0
topmind (2124)
12/27/2006 6:59:47 AM
On 26 Dec 2006 09:08:02 -0800, topmind wrote:

> If I say "There is no
> evidence that unicorns exist", you cannot ask for an example.

No, we can immediately discard this statement as illegal. "There is no"
must be applied to an observable set. You can say "In Britannica there is no
evidence that unicorns exist." Then we could go and verify that.
Otherwise, it is always your burden to prove a universally quantified
statement, for example by showing that existing unicorns would necessarily
post in comp.object within each hour.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
0
mailbox2 (6357)
12/27/2006 8:34:42 AM
Thomas Gagne wrote:
> Frans Bouma wrote:
> > Thomas Gagne wrote:
> >   
> > <snip>
> > 
> > 	Though what is 'correctness' in the DB? Isn't that close to
> > semantical interpretation of the data in such a way that you need
> > to transform the data into information first to be sure it's
> > correct?
> > 
> > 	I mean, sure, there are referential integrity rules, but if I
> > store a row in the customer table with an address where city is New
> > York and Country is THe Netherlands, it's not correct, though the
> > db doesn't object.
> >   
> For DB's to be correct, more is needed than referential integrity.
> For instance, in financial systems the data needs to balance.
> Transactions are supposed to balance, which in theory would keep the
> system "balanced", but production unit tests (doesn't everyone run
> those?) can prove on a daily (or more frequent) basis that the DB's
> state is both referentially correct, business rule correct, and in
> other respects, correct.

	that's semantic interpretation of data, i.e. the transformation of
data into information. Though, why do you need an RDBMS for this?

	About unit tests: they test what you wrote the test for, though they
don't give you an absolute answer as to whether your code is correct. They
only prove that what the test has to prove is true or not.

> >>> <snip>
> > > > 
> >>> I also think you lose something valuable with Stored Procs.
> >>> Excellent efforts have been expended to consider the basic
> >>> principles of RDBMS design in objects, and to create objects that
> >>> will effectively assist with Object Relational Mapping as a first
> >>> step to addressing the Impedence mismatch.    
> >>>       
> >> You're right.  Some great research has been spent here--as there
> was >> in alchemy.
> > > 
> >> Consider my situation, I have a single database which I know is
> >> correct because it's guarded by procedures, constraints, unit and
> >> integrity tests.  
> >>     
> > <snip>
> > 
> > 	So what you're saying doesn't imply solely on a db with procs, it
> > also implies on db's with an external api.
> >   
> Even stripped of all APIs, the database should be provably correct.

	I still have a problem with what you mean by 'correct'. Let's say
you mean by it relational integrity correctness but also the
semantical correctness of the data, as you tried to explain above.
Isn't it so that in that situation I can write another program, also
using the same database which consumes a subset of your tables and
finds incorrect data? Simply because its semantic interpretation of the
data is different? (a dutch zip code format is different from a US zip
code format for example ;))

> APIs provide a rampart against corruption (data errors), as does
> referential integrity (structural errors).  Whatever bugs may express
> themselves in the code it's more important to identify and isolate
> them in the database.  Bad data in the database can infect multiple
> applications and be the cause for bad decisions--both manual and
> automated.  This is one of the reasons we focus so strongly on DB
> integrity.

	all great, but that doesn't imply you THUS should use a set of stored
procedures to achieve that. Since part of your definition of a correct
database/dataset is based on semantic interpretation of the data, you
can also do that outside the DB, in whatever application you're writing.

> >> I have multiple applications--some of them share common data models
> >> but others of them do not.  Which model is correct?  Consider you
> >> have 20 programs each doing specific things.  Between the 20 you've
> >> discovered there's three different object models that best reflect
> >> their dependent applications needs and designs.  Which of the three
> >> should be mapped to the DB?  Should the DB's model be massaged to
> >> reflect any of them, or should it be designed to be perfect for the
> >> business data?
> > 
> > 	In my opinion, there's just 1 model possible, and it comes down to
> > a NIAM/ORM (object role modelling) model. With that model, you can
> > speak about an entity and how it's used in the persistent storage
> > and also in your application.
> >   
> I don't agree there's a chicken-and-egg problem.  

	I must be missing something, but I didn't speak of any chicken-egg
problem ? :)

> Chicken and eggs
> dilemmas are so because both the chicken and egg are required
> entities.  Object-relational design questions aren't characterized
> that way because objects and object models are optional.  They exist
> only because system designers selected object oriented languages to
> program with.  The relational model exists (and persists) with or
> without objects or object oriented languages.

	they're all technical solutions to problems which arise when an
application has to be built. They're not the origin of the problem,
they're a solution after the problem has been recognized. This means
that no-one starts with an O/R mapper or RDBMS and then looks around to
build an application. An application has to be built and then tools are
sought to make that building easier and/or to result in more
maintainable software etc.

> Along the same theme, the article also errs in presenting a
> bifurcation.  Even if we assume both object and relational models
> exist in symbiosis we do not have to map one model onto the other.
> We can instead implement to the interface, which is what transactions
> are all about.

	that won't work in practice as you can't consume a set in an OOPL
unless you transform that set to a consumable object. You too have to
find a way to translate between imperatively executed statements and
set-oriented statements.

	The mapping is necessary from a technical point of view. Semantically
you're dealing with the same thing: an entity, just in different
representation forms.

	You IMHO also make the mistake of confusing SQL with the relational model. 

> In fact, I could probably prove transactions are how it should be
> done using the same proof for an intersection table's requirement to
> represent many-to-many relationships--but that proof is outside the
> scope of this reply.  

	Well, you just need two m:1 relations to get an m:n relation, so I can
add a single (intersection) table and define an m:n relation; that's not that hard.
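
	(For concreteness, the textbook form of this, with hypothetical table
names, is an intersection table carrying the two m:1 foreign keys:)

    -- Hypothetical schema: orders and products are related m:n.
    -- The junction table order_line holds two m:1 foreign keys,
    -- which together realize the m:n relationship.
    CREATE TABLE order_line (
        order_id   INT NOT NULL REFERENCES orders(order_id),
        product_id INT NOT NULL REFERENCES products(product_id),
        quantity   INT NOT NULL,
        PRIMARY KEY (order_id, product_id)
    );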

	I also fail to see what transactions have to do with the concept of an
entity. A transaction is related to executing statements (also outside
the DB if I might add), an entity is a static concept, it's not
executing something, it's a definition of what a set of attributes mean
semantically.

> It is, however, a good subject for a subsequent
> article.  It already proves the need for middleware, why not prove
> transactions are the most efficient and correct approach to joining
> applications to databases?

	Because I don't think they are. 

		FB

-- 
------------------------------------------------------------------------
Lead developer of LLBLGen Pro, the productive O/R mapper for .NET
LLBLGen Pro website: http://www.llblgen.com
My .NET blog: http://weblogs.asp.net/fbouma
Microsoft MVP (C#) 
------------------------------------------------------------------------
0
Frans
12/27/2006 9:27:49 AM
A clarification

Frans Bouma wrote:

> Thomas Gagne wrote:
> > It is, however, a good subject for a subsequent
> > article.  It already proves the need for middleware, why not prove
> > transactions are the most efficient and correct approach to joining
> > applications to databases?
> 
> 	Because I don't think they are. 

	What I meant by this was that a 'transaction' is used in the process
of consuming an RDBMS in a client program but it doesn't cover the
whole conceptual theory behind what the client code actually
represents. As a transaction can also be used in code itself, without
ever going to a DB, I don't see it as a way to describe what you're
saying, also because my article was about a completely different thing,
namely concepts, not code.

		FB

-- 
------------------------------------------------------------------------
Lead developer of LLBLGen Pro, the productive O/R mapper for .NET
LLBLGen Pro website: http://www.llblgen.com
My .NET blog: http://weblogs.asp.net/fbouma
Microsoft MVP (C#) 
------------------------------------------------------------------------
0
Frans
12/27/2006 9:37:14 AM
On Tue, 26 Dec 2006 13:57:06 -0500, Thomas Gagne wrote:

> Frans Bouma wrote:
>> Dmitry A. Kazakov wrote:
>>
>>   
>>> <snip>
>>> You address here one issue, that is, the RA rules applied to wrong
>>> data may produce wrong outcome. It is important, but it is not the
>>> problem. Which is that having a correct (consistent) set of rules
>>> (like RA) and some valid input (data set), one can still produce
>>> semantically wrong outcomes. This is universally true. Arithmetic is
>>> certainly consistent, but 1 apple + 1 orange is not 2 Ampere.
>>> Arithmetic does not define the meaning of 1 and +.  Equivalently, the
>>> operations of RA should have a meaning in the application domain.
>>> This meaning lies outside RA, and nothing in RA can warranty anything
>>> about it. Same is true for any programming language. So the custom
>>> chat about "DB correctness" is either trivial or rubbish.
>>
>> Thanks Dmitry for correctly wording this. I tried to explain what you
>> said it way better. :)
>>
> I must be the only one that didn't follow it.
> 
> Where did the wrong data come from?  Why is it solely relational 
> algebra's problem to detect it?

No, nobody claims that. The point is that there exist many consistent
systems, therefore one cannot cite the consistency of RA as a genuine
advantage of it. That leaves us with a triviality:

- RA is consistent!
- Wow! Let's have a beer.

It would be rubbish to argue that RA solves the problem, but we have already
agreed that it does not, which is by no means RA's fault. So we stay with the
triviality.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
0
mailbox2 (6357)
12/27/2006 11:10:45 AM
Frans Bouma wrote:
> Thomas Gagne wrote:
>   
> <snip>
>
> 	I still have a problem with what you mean with 'Correct'. Let's say
> you mean by it the relational integrity correctness but also the
> semantical correctness of the data, as you tried to explain above.
> Isn't it so that in that situation I can write another program, also
> using the same database which consumes a subset of your tables and
> finds incorrect data?
In that example, where is the bug: the program or the database?  If the 
DB stored the zip code precisely as the human intended and it wasn't 
lost, overwritten, or corrupted, then the DB is correct.  If a user 
entered an incorrect zip code the database can still be correct, though 
its data may not.

But to the point, if a program was able to store an improperly formatted 
zip code inside the DB then whose fault is that?  Something, someplace, 
should make sure zip codes are properly formatted for all programs that 
may update the database, and, if both Dutch- and US-formatted zip codes 
are allowed, that both are properly formatted before adding them to the DB.

One place that edit can happen is in the DB's interface.  A stored 
procedure can be written that both validates the zip code and records 
its old value, who changed it, when, and all that other good stuff.  
Perhaps later, if other zip code formats are supported and new tables 
created to represent them, the stored procedure can be modified without 
the change dominoing into applications.
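
A minimal sketch of such a procedure, assuming SQL Server-style T-SQL and 
invented table and column names:

    -- Hypothetical T-SQL: validate the zip code format and record the old
    -- value before updating.  Table and column names are made up.
    CREATE PROCEDURE update_customer_zip
        @customer_id INT,
        @new_zip     VARCHAR(10),
        @changed_by  VARCHAR(30)
    AS
    BEGIN
        -- Accept US (99999 or 99999-9999) and Dutch (9999 AA) formats only.
        IF @new_zip NOT LIKE '[0-9][0-9][0-9][0-9][0-9]'
           AND @new_zip NOT LIKE '[0-9][0-9][0-9][0-9][0-9]-[0-9][0-9][0-9][0-9]'
           AND @new_zip NOT LIKE '[0-9][0-9][0-9][0-9] [A-Z][A-Z]'
        BEGIN
            RAISERROR('Improperly formatted zip code', 16, 1);
            RETURN;
        END;

        -- Record who changed what, and the prior value.
        INSERT INTO customer_zip_history (customer_id, old_zip, changed_by, changed_at)
        SELECT customer_id, zip, @changed_by, GETDATE()
        FROM customer WHERE customer_id = @customer_id;

        UPDATE customer SET zip = @new_zip WHERE customer_id = @customer_id;
    END;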
>  Simply because its semantic interpretation of the
> data is different? (a dutch zip code format is different from a US zip
> code format for example ;))
>
>   
>> APIs provide a rampart against corruption (data errors), as does
>> referential integrity (structural errors).  Whatever bugs may express
>> themselves in the code it's more important to identify and isolate
>> them in the database.  Bad data in the database can infect multiple
>> applications and be the cause for bad decisions--both manual and
>> automated.  This is one of the reasons we focus so strongly on DB
>> integrity.
>>     
>
> 	all great, but that doesn't imply you THUS should use a set of stored
> procedures to create that. As part of your definition of a correct
> database/dataset is based on semantical interpretation of the data, you
> can also do that outside the DB, in whatever application you're writing.
>   
What if there's more than one executable responsible for semantic 
correctness?

I once helped create a transaction processor for a credit union 
product.  Every online and batch application in the system funneled 
through the transaction processor.  Both it and the system's stored 
procedures were created to maintain system integrity and process 
transactions as fast as possible.

I wrote briefly about it a long time ago: 
<http://gagne.homedns.org/~tgagne/articles/newdef.html#casestudy1>.
>   
>>>> I have multiple applications--some of them share common data models
>>>> but others of them do not.  Which model is correct?  Consider you
>>>> have 20 programs each doing specific things.  Between the 20 you've
>>>> discovered there's three different object models that best reflect
>>>> their dependent applications needs and designs.  Which of the three
>>>> should be mapped to the DB?  Should the DB's model be massaged to
>>>> reflect any of them, or should it be designed to be perfect for the
>>>> business data?
>>> 	In my opinion, there's just 1 model possible, and it comes down to
>>> a NIAM/ORM (object role modelling) model. With that model, you can
>>> speak about an entity and how it's used in the persistent storage
>>> and also in your application.
>>>   
>>>       
>> I don't agree there's a chicken-and-egg problem.  
>>     
>
> 	I must be missing something, but I didn't speak of any chicken-egg
> problem ? :)
>   
I thought the web page you referenced did:

<http://weblogs.asp.net/fbouma/archive/2006/08/23/Essay_3A00_-The-Database-Model-is-the-Domain-Model.aspx>

> <snip>
>> Along the same theme, the article also errs in presenting a
>> bifurcation.  Even if we assume both object and relational models
>> exist in symbiosis we do not have to map one model onto the other.
>> We can instead implement to the interface, which is what transactions
>> are all about.
>>     
>
> 	that won't work in practise as you can't consume a set in an OOPL
> unless you transform that set to a consumable object. You too have to
> find a way to translate between imperative executed statements and set
> oriented statements.
>   
What is returned is a projection, and that projection has tuples.  If 
RAM allows you may treat them as a collection or a read-only stream.  
The application doesn't need to know what a set-oriented statement is if 
it uses stored procedures.
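
(A rough sketch of what that looks like from the DB side, with invented 
names: the set-oriented statement stays inside the procedure, and the 
caller just reads back the rows of the projection.)

    -- Hypothetical: the client executes "EXEC open_accounts_for @branch_id = 7"
    -- and iterates the returned rows; the join and filter live in the procedure.
    CREATE PROCEDURE open_accounts_for
        @branch_id INT
    AS
        SELECT a.account_id, a.balance_amount
        FROM account a
        JOIN branch b ON b.branch_id = a.branch_id
        WHERE b.branch_id = @branch_id
          AND a.status = 'OPEN';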
> 	The mapping is necessary from a technical point of view. Semantically
> you're dealing with the same thing: an entity, just in different
> representation forms.
>
> 	You IMHO also make the mistake to confuse SQL with a relational model.
>   
You'll have to elaborate on that--I don't know of another way to converse 
with an RDBMS other than SQL.  But one of an RDBMS's advantages over other 
DBMSs is the provision of stored procedures.  Other DBMSs can still be 
provided an interface, but it'll have to be home-rolled.

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
12/27/2006 1:09:05 PM
topmind wrote:
> Is it required to be implemented the same way as done on paper? Or is
> one merely required to present it in double-entry form (which is just a
> presentation issue)?  It seems foolish to have laws that force
> denormalized data.
How does the law require denormalized data?

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
12/27/2006 1:11:18 PM
Dmitry A. Kazakov wrote:
> On Tue, 26 Dec 2006 13:57:06 -0500, Thomas Gagne wrote:
>
>   
>> Frans Bouma wrote:
>>     
>>> Dmitry A. Kazakov wrote:
>>>
>>>   
>>>       
>>>> <snip>
>>>> You address here one issue, that is, the RA rules applied to wrong
>>>> data may produce wrong outcome. It is important, but it is not the
>>>> problem. Which is that having a correct (consistent) set of rules
>>>> (like RA) and some valid input (data set), one can still produce
>>>> semantically wrong outcomes. This is universally true. Arithmetic is
>>>> certainly consistent, but 1 apple + 1 orange is not 2 Ampere.
>>>> Arithmetic does not define the meaning of 1 and +.  Equivalently, the
>>>> operations of RA should have a meaning in the application domain.
>>>> This meaning lies outside RA, and nothing in RA can warranty anything
>>>> about it. Same is true for any programming language. So the custom
>>>> chat about "DB correctness" is either trivial or rubbish.
>>>>         
>>> Thanks Dmitry for correctly wording this. I tried to explain what you
>>> said it way better. :)
>>>
>>>       
>> I must be the only one that didn't follow it.
>>
>> Where did the wrong data come from?  Why is it solely relational 
>> algebra's problem to detect it?
>>     
>
> No, nobody claims that. The point is that there exist many consistent
> systems, therefore one cannot argue to consistency of RA as a genuine
> advantage of. That renders triviality:
>
> - RA is consistent!
> - Wow! Let's have a beer.
>
> Rubbish were to argue that RA solves the problem, but we have already
> agreed that it does not, which by no means were RA's problem. So we stay by
> triviality.
>
>   
RA is great, but it proves only structural (relational) correctness.  If 
there's an algebra for network databases then it could prove structural 
(network) correctness as well.  I think both are good things.

In addition to structural correctness we can test for semantic 
correctness.  In banking systems one way to do that is to compare all 
the debit and credit accounts to make sure they net $0.

We can also walk through transaction history to make sure the final 
answer is what's represented in derived locations, like the account 
tables.  Data change (DC) history is the same--make sure the value 
represented in the table is the last value in DC history.

Depending on your hypostasis (DB+design) other tests are likely available.
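
A couple of such tests, as a rough sketch in plain SQL against hypothetical 
ledger tables (debits stored as positive amounts, credits as negative):

    -- 1. Debits and credits across the whole ledger should net to zero.
    SELECT SUM(amount) AS imbalance          -- expected: 0
    FROM ledger_entry;

    -- 2. Each account's stored balance should equal the sum of its history.
    SELECT a.account_id, a.balance, SUM(l.amount) AS computed
    FROM account a
    JOIN ledger_entry l ON l.account_id = a.account_id
    GROUP BY a.account_id, a.balance
    HAVING a.balance <> SUM(l.amount);       -- expected: no rows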

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
12/27/2006 1:19:51 PM
"topmind" <topmind@technologist.com> writes:
> Okay, I declare "tangled pasta" a subjective opinion.

     After only a half-dozen or so request/response pairs.  Record
time.

> Mr. May likes nitty side-tracks to distract from the real issue.

     Yes, I am rather fond of "nitty side-tracks" such as providing
evidence for one's claims.  I can see how that would distract from
your "real issue" of spewing errant nonsense.  Darn me.  Darn me to
heck.

Sincerely,

Patrick

------------------------------------------------------------------------
S P Engineering, Inc.  | Large scale, mission-critical, distributed OO
                       | systems design and implementation.
          pjm@spe.com  | (C++, Java, Common Lisp, Jini, middleware, SOA)
0
pjm (703)
12/27/2006 2:28:17 PM
On 26 Dec 2006 16:21:33 -0800, "topmind" <topmind@technologist.com>
wrote:

>I'll be behind you if you wish to lobby for new relational standards.
>At least relational has (semi) standards. Navigational has none in
>usage that I know of (other than file systems, and perhaps to some
>extent XML-DBs).

I don't have anything specific in mind, except some additional
definition of what the problems are that might be solved by a new
language/standard.

For one thing, I think the implementation underneath common databases'
SQL could be improved, caching more plans per statement.  Too many
stories of sites where performance goes to h*ll when an inappropriate
cached plan is used for variant values.  I don't think there's any
language design issue there that could be resolved at compile time.
The whole need to reevaluate at runtime differentiates SQL from
familiar procedural languages.
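
(To illustrate, a hypothetical T-SQL sketch of the kind of statement that 
suffers from this: the plan cached for the first parameter value may be 
reused, badly, for very different values.  SQL Server's OPTION (RECOMPILE) 
hint is one workaround; the names below are invented.)

    -- Hypothetical: a plan compiled when @country = 'LI' (a handful of rows)
    -- may be reused when @country = 'US' (millions of rows), and vice versa.
    CREATE PROCEDURE customers_by_country
        @country CHAR(2)
    AS
        SELECT customer_id, name
        FROM customer
        WHERE country = @country
        OPTION (RECOMPILE);  -- ask for a fresh plan per call instead of reusing the cached one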

J.
0
12/27/2006 2:31:35 PM
On 26 Dec 2006 12:33:05 -0800, "topmind" <topmind@technologist.com> wrote:

>I was talking with some tech buddies of mine about this once, and we
>concluded that double-entry book-keeping was archaic. One does not need
>to store the same info in two different places with modern DB's. If you
>want to avoid losing a record, then use some kind of increment
>sequencing number.

"double-entry book-keeping" is not about having two copies it is about making
two entries from different perspectives (credit and debit).  If the entries
don't match up, there is an imbalance.  The imbalance can then be traced back to
the entry error.
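
(A concrete illustration against a hypothetical journal table: every 
transaction posts at least one debit line and one credit line, and the two 
sides must balance.)

    -- Hypothetical: a $100 deposit recorded from two perspectives.
    INSERT INTO journal (txn_id, account, side, amount)
        VALUES (42, 'cash', 'D', 100.00);
    INSERT INTO journal (txn_id, account, side, amount)
        VALUES (42, 'member_savings', 'C', 100.00);

    -- The balancing check: debits must equal credits for every transaction.
    SELECT txn_id
    FROM journal
    GROUP BY txn_id
    HAVING SUM(CASE side WHEN 'D' THEN amount ELSE -amount END) <> 0;  -- expect no rows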
-----------------------------------------------------------
Louis LaBrunda
Keystone Software Corp.
SkypeMe callto://PhotonDemon
mailto:Lou@Keystone-Software.com http://www.Keystone-Software.com
0
Louis
12/27/2006 3:14:34 PM
"Thomas Gagne" <tgagne@wide-open-west.com> wrote in message 
news:mqCdnUMy8d5v7w_YnZ2dnUVZ_smonZ2d@wideopenwest.com...
> Dmitry A. Kazakov wrote:
>> On Tue, 26 Dec 2006 13:57:06 -0500, Thomas Gagne wrote:
>>

>>> I must be the only one that didn't follow it.
>>>
>>> Where did the wrong data come from?  Why is it solely relational 
>>> algebra's problem to detect it?
>>>
>>
>> No, nobody claims that. The point is that there exist many consistent
>> systems, therefore one cannot argue to consistency of RA as a genuine
>> advantage of. That renders triviality:
>>
>> - RA is consistent!
>> - Wow! Let's have a beer.
>>
>> Rubbish were to argue that RA solves the problem, but we have already
>> agreed that it does not, which by no means were RA's problem. So we stay 
>> by
>> triviality.
>>
>>
> RA is great, but it proves only structural (relational) correctness.  If 
> there's an algebra for network databases then it could prove structural 
> (network) correctness as well.  I think both are good things.
>
> In addition to structural correctness we can test for semantic 
> correctness.  In banking systems one way to do that is to compare all the 
> debit and credit accounts to make sure they net $0.
>
> We can also walk through transaction history to make sure the final answer 
> is what's represented in derived locations, like the account tables.  Data 
> change (DC) history is the same--make sure the value represented in the 
> table is the last value in DC history.
>
> Depending on your hypostasis (DB+design) other tests are likely available.
>

Excellent examples, Thomas.  Are you stating that you consider the 
PROCEDURES you have described to verify the semantic correctness of the data 
to be, logically, at the same level of abstraction as the data persistence 
mechanism?  I do not.

The procedures you describe are programming code and live at a different 
level of abstraction.  The fact that they are written in SQL or some other 
data-native language is simply one preference over another.  Logically, it 
makes NO difference if these procedures are written in SQL, C, Cobol, or 
even APL.  (There are certain to be performance differences).  In fact, in 
the current version of SQL Server, you can write procedures in C# code that 
runs at the engine level, removing the need to collect the data and transmit 
it outside the db to be manipulated.  It is still programming code.  It is 
not more elegant for the fact that the engine calls it.

So how do you tell if a line of code is code or elemental to the RDBMS 
model?  Code can be invoked (and should be, based on your business logic and 
timing).  Referential Integrity, for example, is not invoked... It Just Is. 
That is the distinction I want to draw.  Elements and properties that make 
RDBMS systems valuable, like referential integrity and data type validation, 
are not, in and of themselves, sufficient to ensure the correctness of the 
data.  You need procedures to do the rest.

I think this is what some of the other respondents are trying to say: 
assuming correctness without code is nonsense.  I sense that you agree... 
you are just assuming a different programming language.  Am I close?

-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
--


-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
-- 


0
nickmalik (325)
12/27/2006 3:21:14 PM
"Thomas Gagne" <tgagne@wide-open-west.com> wrote in message 
news:BJSdnaRp8N2ASwzYnZ2dnUVZ_uS3nZ2d@wideopenwest.com...
> topmind wrote:
>> I was talking with some tech buddies of mine about this once, and we
>> concluded that double-entry book-keeping was archaic. One does not need
>> to store the same info in two different places with modern DB's. If you
>> want to avoid losing a record, then use some kind of increment
>> sequencing number.
>>
> Are you and your buddies appropriate authorities to dismiss a Generally 
> Accepted Accounting Practice in use since the 12th century?
>

-T- just likes to argue.  Don't try to argue GAAP with him.  You won't learn 
anything.

I think there is nothing wrong with double-entry bookkeeping if it helps you 
and your team to prove that data is correct.  The auditors are business 
customers.  We are in IT.  In IT, we make our customers happy.  Any logical 
argument that creates a design that doesn't meet the needs of the business 
is an argument that should be dismissed on its face.

-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
-- 


0
nickmalik (325)
12/27/2006 3:29:58 PM
"Thomas Gagne" <tgagne@wide-open-west.com> wrote in message 
news:XMydnVOTU7b27Q_YnZ2dnUVZ_oqmnZ2d@wideopenwest.com...
> Frans Bouma wrote:

>>
>> You IMHO also make the mistake to confuse SQL with a relational model.
>>
> You'll have to elaborate on that--I don't know another way to converse 
> with RDBMS without SQL.  But one of RDBMS' advantages over other DBMS is 
> the provision of stored procedures.  Other DBMS can still be provided an 
> interface, but it'll have to be home-rolled.
>

First off: Not every RDBMS has stored procedures.  Sybase and SQL Server do, 
but they are FAR from typical.

I must agree with Frans that you have confused SQL with Relational database 
management.  SQL is a language.  (To be fair, it is at least two languages: 
DML and DDL).  The Data Manipulation Language is just one way to access the 
data in a relational system.  There are other ways.  I used many of them. 
(I only learned SQL as a way to access data after I had been coding for 
about eight years... accessing data all along the way).

The relational model, as described by Codd, is a conceptual model for 
structuring data.  Any system that organizes data according to that model is 
Relational, including those that have no stored procs.  There is nothing in 
the relational model that indicates, requires, or even implies the use of 
Structured Query Language or Stored Procedures.  I assure you that I was 
using the relational model long before I had heard of either.

I think you do yourself a disservice to consider all procedures, written in 
SQL, to be somehow superior to code written in any other language, 
especially now that the non-SQL code can be executed with the same 
efficiency as the SQL code can.  SQL Code is Code.

-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
-- 


0
nickmalik (325)
12/27/2006 3:48:54 PM
Nick Malik [Microsoft] wrote:
> "Thomas Gagne" <tgagne@wide-open-west.com> wrote in message 
> news:mqCdnUMy8d5v7w_YnZ2dnUVZ_smonZ2d@wideopenwest.com...
>   
> <snip>
>
> Excellent examples, Thomas.  Are you stating that you think of the 
> PROCEDURES you have described to verify the semantic correctness of the data 
> to be, logically, the same level of abstraction as the data persistence 
> mechanism?  I do not.
>   
I don't think so.  The procedures are themselves created from the 
semantics of the database.  If they were not, then there's little value 
in them.  Stored procedures are the ingress to the DB's semantics.  
Updating an account balance is more complicated than simply updating 
account.balanceAmount.  The procedure is created to live inside a 
context.  For example, an account balance may not be updated without an 
open transaction, and that transaction (deposit, withdrawal, etc.) must 
be a valid type, the user must have permission, and transaction history 
must be recorded.  Any number of financial transactions must balance, so 
I may test them either as they are performed or subsequently as part of 
a DB semantics integrity test to make sure they all balance.
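
A rough T-SQL sketch of what "live inside a context" means here, with 
invented table and column names; the checks and the history write happen 
inside the procedure, not in the caller:

    -- Hypothetical: post a transaction against an account.  The caller
    -- never touches account.balance_amount directly.
    CREATE PROCEDURE post_transaction
        @account_id INT,
        @txn_type   CHAR(3),          -- e.g. 'DEP', 'WTH'
        @amount     DECIMAL(12,2),
        @user_id    INT
    AS
    BEGIN
        BEGIN TRANSACTION;

        -- The transaction type must be a valid type.
        IF NOT EXISTS (SELECT 1 FROM txn_type WHERE type_code = @txn_type)
        BEGIN
            ROLLBACK TRANSACTION;
            RAISERROR('Unknown transaction type', 16, 1);
            RETURN;
        END;

        -- The user must be permitted to post this type of transaction.
        IF NOT EXISTS (SELECT 1 FROM user_permission
                       WHERE user_id = @user_id AND type_code = @txn_type)
        BEGIN
            ROLLBACK TRANSACTION;
            RAISERROR('User not permitted', 16, 1);
            RETURN;
        END;

        -- Record history, then update the derived balance.
        INSERT INTO txn_history (account_id, type_code, amount, posted_by, posted_at)
        VALUES (@account_id, @txn_type, @amount, @user_id, GETDATE());

        UPDATE account
        SET balance_amount = balance_amount + @amount
        WHERE account_id = @account_id;

        COMMIT TRANSACTION;
    END;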

So, I agree with you, and it's an excellent point you make: the 
procedures are not at the same level of abstraction as the DB.  The DB's 
data records the system's state (within context) and the procedures are 
_how_ the state may be changed.
> The procedures you describe are programming code and live at a different 
> level of abstraction.  The fact that they are written in SQL or some other 
> data-native language is simply one preference over another.  Logically, it 
> makes NO difference if these procedures are written in SQL, C, Cobol, or 
> even APL.  (There are certain to be performance differences).  In fact, in 
> the current version of SQL Server, you can write procedures in C# code that 
> runs at the engine level, removing the need to collect the data and transmit 
> it outside the db to be manipulated.  It is still programming code.  It is 
> not more elegant for the fact that the engine calls it.
>   
You're correct.  And deciding which code belongs inside and which does 
not is a decision OO programmers make every day about which methods 
belong inside an object and which do not.  Some methods increase an 
object's cohesiveness and others dilute it.
> So how do you tell if a line of code is code or elemental to the RDBMS 
> model?  Code can be invoked (and should be, based on your business logic and 
> timing).  Referential Integrity, for example, is not invoked... It Just Is. 
> That is the distinction I want to draw.  Elements and properties that make 
> RDBMS systems valuable, like referential integrity and data type validation, 
> are not, in and by themselves, useful to insure the correctness of the data. 
> You need procedures to do the rest.
>   
Correct.  The procedures are not responsible for structural integrity.  
RI is intrinsic to relational DBs and there are better places to implement 
RI than inside procedures (declarative constraints, triggers, etc.).

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
12/27/2006 4:33:38 PM
Nick Malik [Microsoft] wrote:
> <snip>
>
> I think you do yourself a disservice to consider all procedures, written in 
> SQL, to be somehow superior to code written in any other language, 
> especially now that that the non-SQL code can be executed with the same 
> efficiency as the SQL code can.  SQL Code is Code.
>
>   
Agreed.  But I really wanted to discuss the idea inside as narrow a 
context as possible.

<http://gagne.homedns.org/~tgagne/articles/newdef.html#casestudy1> is 
really the description of a system that layered the DB semantic 
interface across two layers.  Stored procedures were available (Sybase) 
and so they were created because we didn't want to waste time compiling 
SQL if we didn't have to.  We were also uninterested in exposing our 
transaction server (it was more than middleware) to minor changes 
inside the DB.  The procedures knew how to record transactions and 
update balances, but the transaction server (tpserver in the paper) 
actually knew what a "share purchase" was or a "line of credit 
advance."  It also knew whether and how to overdraft an account and all 
kinds of other higher-level things the procedures didn't know about.

As you point out, we could have embedded SQL inside tpserver if 
procedures weren't available to us, but they were.  Their availability 
doesn't prohibit treating the DB as an object, but it can certainly 
assist people in thinking of the DB as an object of it's capable of 
implementing its own interface without outside assistance.

In the particular case described in the article above (article from the 
mid 90s, but programming from the late 80s) tpserver was written in C.

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
12/27/2006 4:49:08 PM
Patrick May wrote:
> "topmind" <topmind@technologist.com> writes:

>
> > Mr. May likes nitty side-tracks to distract from the real issue.
>
>      Yes, I am rather fond of "nitty side-tracks" such as providing
> evidence for one's claims.  I can see how that would distract from
> your "real issue" of spewing errant nonsense.  Darn me.  Darn me to
> heck.

Look who's talkin'.  Where's the objective evidence that OO is better
for biz apps? Huh huh huh?  That's all that REEEEALLY matters but you
ain't got no game on that court.

It is not my burden to show otherwise. The difference between paradigms
and techniques is considered equal or unknown until proven otherwise.
OOP appears to be a bunch of flap-trap for expensive consultants to
suck deep-pocket suckers dry.
 
> 
> Sincerely,
> 
> Patrick
> 

-t-

0
topmind (2124)
12/27/2006 4:59:50 PM
"topmind" <topmind@technologist.com> writes:
> Patrick May wrote:
>>      Yes, I am rather fond of "nitty side-tracks" such as providing
>> evidence for one's claims.  I can see how that would distract from
>> your "real issue" of spewing errant nonsense.  Darn me.  Darn me to
>> heck.
[ . . .]
> It is not my burden to show otherwise.

     When you make positive claims, as you so often do in this
newsgroup, the burden of proof is indeed yours.  You consistently fail
to meet this burden, typically retreating to "I was just voicing my
subjective opinion." when pushed hard enough.

     Perhaps people will devote the effort to providing evidence to
you when you demonstrate the intellectual integrity and common
courtesy of supporting your claims.

Sincerely,

Patrick

------------------------------------------------------------------------
S P Engineering, Inc.  | Large scale, mission-critical, distributed OO
                       | systems design and implementation.
          pjm@spe.com  | (C++, Java, Common Lisp, Jini, middleware, SOA)
0
pjm (703)
12/27/2006 5:38:51 PM
"Thomas Gagne" <tgagne@wide-open-west.com> wrote in message 
news:D7mdne4mU5111BLYnZ2dnUVZ_smonZ2d@wideopenwest.com...
> Nick Malik [Microsoft] wrote:
>> <snip>
>> That said, an RDBMS can present MANY interfaces to your code, not all of 
>> which have to be presented through stored procs.  You could present 
>> through views, for example, and still hide some of the details of your db 
>> design.
>>
>> I would also say that the db presents the data for 'many' objects instead 
>> of a single one.  Viewing the db as a single object begs the question: 
>> what behavior are you encapsulating in it?
>>
> Objects are often composed of many other objects.

Wow.  First statement has the first assumption I'd like to challenge.

Your statement is true but misses the point.  In theory, everything is an 
object.  Who cares?

An object has a PURPOSE.  If you think of objects as nouns, you are not 
designing your system very well.  Objects are 'things that encapsulate.'  In 
other words, I express the definition of an object as something that "does" 
something, rather than something that "is" something.  In my mind, the 
distinction is important, because it allows me to use OO techniques and 
pattern languages.  Moving from "is" to "does" is absolutely essential, 
because there are a great many objects in systems that I have created whose 
purpose has nothing to do with hiding data.  The encapsulation of logic, the 
separation of concerns, the creation of structure to allow for agility or to 
reduce dependence, etc, these are reasons for the existence of an object 
that extend far beyond the definition of an object that "is" something.

Therefore, if we look at the (business-specific) database as an object, we 
start to see that the database "is" a representation of the data model 
useful for managing a consistent and efficient representation of the 
information that our business collects.

However, if we start to ask: what does this data do... we run aground.  It 
doesn't do anything.  The relational model doesn't let it.  It is. 
Therefore, to move to the use of an object that "does", we must reframe our 
representation, and view the data not in terms of the relations and tuples 
that Codd described and Date and Darwen elaborated for us, but rather in 
terms of the activities that the business needs to perform on it.

> My database object is no different.  Primarily, through stored procedures 
> the DB has methods and projections.

The term DB is a bit confusing.  From your earlier posts, you appear to be 
referring not to a generic database but to a database that represents a 
specific data model.  If it is OK with you, I'll refer to this concept as a 
'SpecificDatamodel' object, rather than a DB.

As I pointed out in other places, you can view these methods as attached to 
the 'SpecificDatamodel' object, but not to the elements within that specific 
data model, even though the methods clearly apply only to specific elements. 
This is not wrong per se, but it is problematic in some respect.  I've 
worked with databases that have hundreds of tables and thousands of stored 
procedures.  I would have a difficult time considering something with 
a thousand methods and such a complex data representation mechanism to be a 
single object.

In addition, I would have no way to describe to someone what the object 
'does' or better yet, what it 'encapsulates.'

> The projections can either be returned as collections of tuples or I can 
> send a lambda expression (especially helpful for really large result sets) 
> that evaluates one row at a time.  Streams can also be returned.  This 
> where a facade can be helpful to make the DB's interface more idiomatic 
> for your favorite OOPL.  Enjoy!

ADO.Net does exactly this.  It is a generic object for manipulating 
relational data in a .Net language.  That said, there are numerous examples 
of where its strength, specifically the ability to apply to a wide variety 
of data needs, is also its weakness, for not being more able to present 
specific needs in a succinct manner.  For this reason, many articles exist 
for creating 'Named Datasets' which allow this library to return data in a 
more constrained fashion.  If the simple provision of data as tuples or 
projections were sufficient, we would not need anything more than simple 
tables and ADO.Net.

You have just argued against stored procedures.

>>   <snip>
>>
>>
>>> If you agree there's such a thing as an object-relational impedance 
>>> mismatch, then perhaps its because you're witnessing the negative 
>>> consequences of tightly coupling objects that shouldn't be tightly 
>>> coupled.
>>>
>>
>> Nope.  I'm seeing the Object Relational Impedence a conceptual disconnect 
>> between the 'traditional' RDBMS interface that doesn't present a 
>> mechanism for encapsulating both code and operations in the same object 
>> wrapper with the object oriented interface which absolutely requires it. 
>> Creating an object wrapper that DOES present these two together is the 
>> goal of Object Relational Mapping (ORM) tools.
>>
> I agree it is the goal, but should every DB-ish object in your application 
> actually map to a tuple inside the DB (bean)?

I would say that most objects get close, because relational databases do 
such an excellent job of representing data in an efficient and versatile 
format, that there is little value in viewing data much above the logical 
model when writing code.  It doesn't have to map to the physical model.  In 
modern RDBMS software, there are many techniques for hiding some of the data 
details, including using views and using the security mechanisms to hide 
columns.  That said, for the purposes of the code, it is often 
counterproductive to take this more than one small step up from the tuple 
itself.  That is because of the strength of the relational model, not its 
weakness.
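
(For example, a rough sketch with invented names: a view can present the 
logical shape of a customer while hiding internal columns and even a 
physical split across tables.)

    -- Hypothetical: applications see a logical 'customer' shape; the
    -- physical split and the internal columns stay hidden.
    CREATE VIEW customer_v AS
    SELECT c.customer_id,
           c.name,
           a.city,
           a.country
    FROM customer_core c
    JOIN customer_address a ON a.customer_id = c.customer_id;

    -- Expose the view, not the base tables:
    -- GRANT SELECT ON customer_v TO app_role;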

> Why do that when objects should communicate to each other through 
> messages?

I have no problem with sending messages/transactions to a database. 
Recognize that the systemic components that interpret that message and 
translate the message into data representation are not inherent in the 
"relational" model at all.  They are fairly recent innovations and are part 
of the system only as coded extensions, outside the core engine.  They are 
code.  They are configured using data and declarations, and only used when 
they fit the purpose to which they are designed.  They are still code.

> Aren't the OR tools distracting OO programmers from how they ought really 
> talk to the DB?

On the contrary, using an example from SQL Server and ADO.Net, the OR tools 
allow the programmers to send XML to the database for interpretation by the 
database XML engine.

>> Note that even when you do this, you run into the impedence, and that is 
>> because RDBMS systems are not the appropriate place to put every business 
>> capability.  (If they were, all apps would be very thin user interfaces 
>> on very thick databases).
>>
> I don't know everyone's experience, but every DB I've worked with /was/ my 
> system.  It stored the entire state of my system in neat tables and rows 
> with glorious relations between them to answer every question I could 
> possibly ask.  Everything else was one of two things: automation or 
> cosmetics.  Portfolio management, trading, banking, and insurance--the DB 
> recorded everything.  If the system stopped the DB knew where.  When the 
> system started the DB knew where from.

This eloquent description describes the act of "storing" the data; whether 
it is functional data (balance in an account) or state data (stage in a 
process), you are simply storing.  The system, from the user's standpoint, 
doesn't mean much if the data doesn't move.  The database, by itself, is not 
the system from the viewpoint of the user.  The db stores the data, but it 
is the application that provides the business value of "perform business 
activity" that results, ultimately, in the goal of making money.

To state that the "DB [is] the system" reveals a great deal about your 
approach to solving the needs of the business.  The business wants their 
needs met.  You have a cool tool that is pretty versatile, and you use it 
well.  However, it is simply a tool and it doesn't go all the way.  You need 
other tools to complete the picture. Those remaining tools, the 
applications, are not so easily dismissed.

The application is where the business rules actually exist.  It is where the 
code lives to ensure that the database, in addition to storing data well, 
stores useful data.  It is where the code lives to actually understand what 
the states of a process imply, and to communicate the process stages to the 
humans who need to perform activities as a result.

All SQL code that makes that conceptual leap from 'storing data well' to 
'storing useful data' is part of an application.

It is perfectly valid to place some parts of an application into SQL code. 
I'd argue for placing application logic in any language where the benefits 
of the language produce a net benefit in terms of system quality attributes 
like maintainability and performance.  The fact that application logic is in 
SQL doesn't mean that it is not part of an application.


> In fact, before there was a system there was a DB.  It was designed, 
> proved correct, constraints implemented, procedures created, load tested, 
> and all kinds of fun unit-testing kinds of things before a single line of 
> application code was created.

Nonsense.  No one pays to create a database for fun.  Data has to be stored 
for a purpose.  Only in the light of that purpose does that storage and 
retrieval effort have any meaning at all.  That purpose is to fulfill steps 
in a business process... to DO something.  It is meaningless to HAVE data 
unless you can USE it.  If you can use it, you have a system.  You cannot 
say that before you have a system, you have a db, because a db is part of a 
system and is only meaningful with respect to how it supports the ability of 
the system to do the things that the business needs.

> In fact, the DB isn't only the biggest object in my system, but it was 
> also the first object--and an OOPL wasn't even necessary to realize it.

If it is an object, it is a poorly designed one.  It has a single large, 
flat, unconstrained interface that can change radically, affecting systems 
that depend on it, without warning.  All data elements within the object are 
visible to all methods within the object, regardless of whether it may be 
useful to hide one data element from a method or not.  Only by the fortitude 
of the programmer can this be avoided.  Within the DB object, you have an 
entirely procedural space, with global variables and unconstrained data 
availability.

Try running this object past a professional Architectural Review Board. 
That would be FUN to watch.

<<snip>>

>> Most business capabilities are described as behaviors first, and data 
>> second.

> I disagree, only because behaviors are based on weak assumptions and 
> common practices.  Whatever is done can either be done well or poorly.

Interesting statement.  Of course, I can "do" things well or poorly.  I can 
also "represent" things well or poorly.  The fact is that businesses don't 
make money on 'representing' things.  They make money by 'doing' things.

> Behaviors are based of the weakest facts--the /way/ things are done.

Only to a point.  If you are good at separating the 'what' from the 'how' in 
your application code, you can often reduce the obvious churn that happens 
when the business discovers that the /way/ they are doing something needs to 
change.  This is a major goal of OO development.  That said, it is the one 
and only context in which OO is truly useful.  If stuff didn't change, I 
don't think the 'big brains' would have invented OO programming.  So if you 
are going to describe your relational database in OO terms, this is the 
world you are in.  The terms exist in a context of 'change' in the way in 
which things are done.  Expressing your db in those terms accepts that 
context.

<<snip>>

>    But the DB must always be correct.  Whether the behaviors are
>    correct or not, the DB must maintain its integrity.  It must protect
>    its state.

Are you implying that integrity + state = correct?  I seriously question 
this assertion, if only because you have defined the word 'correct' in such 
narrow terms as to allow a wide array of information that the business would 
find offensive, obstructive, or potentially perilous into a 'correct' 
database.

> In fact, our DR plan is based on the premise that the
>    DB's integrity is the most critical--everything else is cosmetic.
>    <http://blogs.in-streamco.com/anything.php?title=rules_for_production>
>

I read your post and do not disagree with the sentiment.  The 
SpecificDatamodel that you represent in your database is pretty darn 
critical.  (I like the analogy: genetic experimentation on your own 
children... hits home).  Getting data to be stored well is the job of the 
Relational model.  Making sure that the data is useful and meets business 
needs is the job of all of that application code that reads, verifies, and 
validates that data.  The validation code can exist as C# apps or SQL stored 
procs.  It is clearly outside the 'storage' of the data.

<<snip>>

>> I guess one thing that stands out for me: you reached a valuable 
>> conclusion about the application of OO design methods to RDBMS design, 
>> but you didn't prove the initial assumption: that stored procedures 
>> should be used as the only interface for code to access the data in the 
>> database.  In this respect, I am not convinced.
>>
> I don't blame you.  I need to present more evidence, which I will do 
> through examples.
>> <snip>
>>
>> I also think you lose something valuable with Stored Procs.  Excellent 
>> efforts have been expended to consider the basic principles of RDBMS 
>> design in objects, and to create objects that will effectively assist 
>> with Object Relational Mapping as a first step to addressing the 
>> Impedence mismatch.
> You're right.  Some great research has been spent here--as there was in 
> alchemy.

nice jab.

I'm asking you to consider something that you may not have considered: that 
the code in your stored procedures that actually "does" something (recognize 
a deposit to an account), or "proves" something (all credits and debits to a 
particular account balance out), is part of an /application/, and not the 
persistence mechanism, even though it is represented in a db-centric 
language.  This is not dissimilar from your notion of the "db as an object" 
because an object is clearly a construct in application code.

I have no idea if you will find this easy to accept or difficult to accept. 
It is widely accepted in the industry.

In other words, we can write an application in more than one language and we 
can distribute it across more than one system.

So, why would we put logic in Language X rather than Language Y?  Lots of 
reasons.  Perhaps we can change the logic in language X rather easily, and 
we need to have that control. Or perhaps the code in Language Y will run 
more efficiently than if you wrote it in Language X.  These are good 
reasons.

However, the fact remains that at some point, you need to transition from 
one to the other.  At some point, you need to have some code in Language X 
that invokes functionality that is written in Language Y.

All the discussion about 'use a stored procedure instead of SQL code' is 
simply another way of saying "when calling from OO language to DB language, 
here's the text string I want you to use...".  Do you think that C# gives a 
hoot if the text string is "Select prod_id from products" or "exec 
getProducts"?

If you say that you have isolated the fact that there is a column called 
'prod_id' from the code by calling the stored proc, I'd agree, and then I'd 
say that there is more than one way to do that exact thing.  I could, for 
example, have a C# class called 'Products' with a 'get' function and all 
places in the code that need to get products must call that class.  The 
class can now have the SELECT statement embedded in it and it will be no 
more difficult, in theory, to change that class than it would be to change 
the stored procedure code.

The place where practice trumps theory is in encapsulation.  Relational 
databases SUCK at being object oriented.  Therefore, if you make one minor 
change for the sake of efficiency, you have to check EVERY place where the 
SpecificDatamodel is used across that interface.  It is because of the 
procedural nature of the database that we have to work so hard to defend the 
interface.

We want to defend it, but it shouldn't be this hard.  Procedural code is 
just unnecessarily hard.  The SpecificDatamodel, because of its procedural 
nature, is difficult to constrain.  Only by force of will, excellent 
governance, and massive testing, can we be sure that this interface remains 
consistent.  We have to spend heroic efforts to keep even a small change in 
the representation from playing havoc with the interface.

As a result, we have propagated a 'best practice' that we should all use 
stored procedures to access the database tables.  This allows the database 
team to behave independently of the developers who use other languages, and 
to control the procedural code that presents the database to the 
applications.  This provides a good separation of human endeavor.  It is 
not, however, a good practice because the technology is somehow 'good' under 
the covers.  It is an adaptation.

One could just as rationally allow all applications to access the database 
using a single DLL, and disallow all other executable code from accessing 
the database, and then control the interface in that DLL.  If RDBMS systems 
allowed this kind of limitation, that would be a far better mechanism for 
controlling access to the data than the current practice of using stored 
procedures as the firewall.

This kind of limitation would require that the database management team have 
developers who understand two languages.  Doesn't seem so hard.

>
> Consider my situation, I have a single database which I know is correct 
> because it's guarded by procedures, constraints, unit and integrity tests. 
> I have multiple applications--some of them share common data models but 
> others of them do not.  Which model is correct?

Good question.

>
> Consider you have 20 programs each doing specific things.  Between the 20 
> you've discovered there's three different object models that best reflect 
> their dependent applications needs and designs.  Which of the three should 
> be mapped to the DB?  Should the DB's model be massaged to reflect any of 
> them, or should it be designed to be perfect for the business data?

I don't know about the word 'perfect' but I'd say that the data model should 
reflect the business needs for storage and retrieval, and not necessarily 
for any particular application.  Communication to and from the db should be 
in the DB's data model (or as much of it as you correctly choose to expose) 
in a canonical manner.  This means that data, transmitted in the canonical 
schema, does not need to be translated before storage.  The canonical 
transaction model should be as close as possible (preferably identical) to 
the storage model.

Applications may need to translate that to a different model that they need 
to use internally.  That shouldn't be your problem.

>> Those objects are defeated by the artificial barriers placed by stored 
>> procedures.   I'm referring to various attempts at Data Access Objects, 
>> including the .Net Data objects in the Microsoft .Net framework.
>>
>> Some would say that this reduces the value of the DAO-style objects.  I 
>> would reply that RDBMS systems are based on a mathematical simplicity, 
>> and approach that is fairly pure and extremely versatile.  Hiding that 
>> mathematical simplicity may or may not be a valuable enterprise, but it 
>> is clearly the effect of restricting all data access to a stored 
>> procedure layer.  In that aspect, perhaps it is the value of the stored 
>> proc that should be questioned, and not the value of the Data objects in 
>> the OO library.
>>
> Would you make that same argument about a Date object, or any other object 
> in your system.
>
> "I would reply that Date objects are based on mathematical simplicity, an 
> approach that is fairly pure and extremely versatile.  Hiding that 
> mathematical simplicity may or may not be a valuable enterprise, but it is 
> clearly the effect of restricting all data access to Date's interface 
> methods.  In that aspect, perhaps it is the value of Date's interface that 
> should be questioned and not the value of the Date objects in the OO 
> library."

Perhaps a better translation to the Date object (an odd choice, but I'll go 
with it) would be: "Hiding that [] simplicity [...] is clearly the effect of 
restricting all data access to Date's /logical data structure/."

The reason I find this an odd choice is that the .Net DateTime object /does/ 
expose its logical data model to the world as properties of the object. 
http://msdn2.microsoft.com/en-us/library/system.datetime_members.aspx

I think you are trying to say that the DateTime object doesn't expose its 
physical data structure (perhaps it is storing the whole thing as a single 
64 bit number and everything is calculated from there).  To bring that 
comparison full circle, you want to hide the physical model from the app 
developer and present only the logical model.

I understand that.  However, as I've said above, I think that a better way 
to provide that encapsulation, where the consumer is an OO language module, 
would be to restrict all access to the SpecificDatamodel to a single DLL, 
written in an OO language, and allow that DLL to access the database using 
direct SQL statements like Select, Insert, and Update.  That DLL can make an 
interface visible that exposes the logical, and not physical, data model, 
thus giving you the encapsulation you need.

-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
-- 


0
nickmalik (325)
12/27/2006 5:41:49 PM
"H. S. Lahman" <h.lahman@verizon.net> wrote in message 
news:jnyjh.1360$9H4.70@trndny07...
> Responding to Malik...
>
>>>An unexpected thing happened while debating topmind: I had an epiphany.
>>
>>
>> You see... it is GOOD to have 'topmind' around!  <snip>
>> At best, we learn something.
>
> Bryce is a bright guy but I don't think his motivation is to challenge 
> ideas.  I've been observing him for a decade or so and I think he just 
> engages in these debates to annoy OO people for his own amusement.

<<snip>>

> Bottom line: don't feed the troll.  There is nothing to be learned in a 
> debate with Bryce except the effectiveness of debating ploys.  He is just 
> amusing himself by infuriating OO people.
>

Hello H. S. Lahman,

I defer to your considerable experience with Bryce (aka topmind) and your 
clear frustration with his tactics.  As I noted, he has the ability to drag 
a conversation into a well fairly quickly.  I maintain that he is useful, 
however, because sometimes an obstacle, no matter how inane, forces us to 
weigh the reasons why our practices are worth following, even if we have no 
hope of convincing /him/ of that or of ending the conversation in an elegant 
fashion once our benefits have been gained.

In effect, by forcing me to defend my positions, I improve my own 
understanding of them.

Your understanding of OO greatly exceeds mine.  I'd venture a guess that 
your mastery of this topic exceeds most of the folks who visit this forum. 
In that context, it is safe to say that Bryce offers less value to you than 
to myself and other members of the community who, like me, need a little 
challenge to fine-tune our own understanding.

-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
-- 


0
nickmalik (325)
12/27/2006 6:00:30 PM
Patrick May wrote:
> "topmind" <topmind@technologist.com> writes:
> > Patrick May wrote:
> >>      Yes, I am rather fond of "nitty side-tracks" such as providing
> >> evidence for one's claims.  I can see how that would distract from
> >> your "real issue" of spewing errant nonsense.  Darn me.  Darn me to
> >> heck.
> [ . . .]
> > It is not my burden to show otherwise.
>
>      When you make positive claims, as you so often do in this
> newsgroup, the burden of proof is indeed yours.  You consistently fail
> to meet this burden, typically retreating to "I was just voicing my
> subjective opinion." when pushed hard enough.

Because it ain't worth it to pursue. With enough time in many cases I
could show objective evidence, but then I decide it is too minor a
point to matter. If that bothers you, then take drugs.

>
>      Perhaps people will devote the effort to providing evidence to
> you when you demonstrate the intellectual integrity and common
> courtesy of supporting your claims.

Don't lecture me about "common courtesy" and "intellectual integrity".
You guys often imply OO is Liquid Jesus, but offer ZERO evidence
outside of systems software. Hyp-O-Crate.

> 
> Sincerely,
> 
> Patrick
> 

-T-

0
topmind (2124)
12/27/2006 6:31:59 PM
Nick Malik [Microsoft] wrote:
> "Thomas Gagne" <tgagne@wide-open-west.com> wrote in message


> To state that the "DB [is] the system" reveals a great deal about your
> approach to solving the needs of the business.  The business wants their
> needs met.  You have a cool tool that is pretty versatile, and you use it
> well.  However, it is simply a tool and it doesn't go all the way.  You need
> other tools to complete the picture. Those remaining tools, the
> applications, are not so easily dismissed.

But a large part of the application *can* be "dumped on" DB's and
tables. The mix is up to you. True, the DB/table portion may not do
100%, but it can come fairly close. Thus, this is more of a "should"
issue than a "can" issue. In table-oriented tools I did use to shift
much of the burden to tables. When such tools fell out of favor because
of the OO hype, I had to cut back.

>
> The application is where the business rules actually exist.

I have to disagree. For example, the re-order level or warning level of
inventory can be stored in a product table. The "method" of re-order
calculation ("formula type" for lack of a better name) can also be
stored there. These are clearly "business rules". (I have even stored
formulas themselves in tables).
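
A rough sketch of what I mean (the formula names and columns are made up): 
the rule's parameters, and even the choice of formula, come out of the 
product table, so changing the rule is a data change, not a code change.

    // Sketch: reorder_level, avg_weekly_sales, and formula_type are all
    // read from the product table; the code just dispatches on them.
    public static class ReorderRules
    {
        public static double ReorderQuantity(string formulaType,
                                             double onHand,
                                             double reorderLevel,
                                             double avgWeeklySales)
        {
            switch (formulaType)
            {
                case "FIXED":   return reorderLevel - onHand;
                case "WEEKS4":  return (avgWeeklySales * 4) - onHand;
                default:        return 0;
            }
        }
    }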

> It is where the
> code lives to insure that the database, in addition to storing data well,
> stores useful data.  It is where the code lives to actually understand what
> the states of a process imply, and to communicate the process stages to the
> humans who need to perform activities as a result.
> 


> --- Nick Malik [Microsoft]

-T-

0
topmind (2124)
12/27/2006 6:53:36 PM
Patrick May wrote:
> "topmind" <topmind@technologist.com> writes:
> > Patrick May wrote:
> >>      Yes, I am rather fond of "nitty side-tracks" such as providing
> >> evidence for one's claims.  I can see how that would distract from
> >> your "real issue" of spewing errant nonsense.  Darn me.  Darn me to
> >> heck.
> [ . . .]
> > It is not my burden to show otherwise.
>
>      When you make positive claims, as you so often do in this
> newsgroup, the burden of proof is indeed yours.  You consistently fail
> to meet this burden, typically retreating to "I was just voicing my
> subjective opinion." when pushed hard enough.

And further, I don't see you getting on pro-OO asses when *they* make
informal little claims.  In short, a DOUBLE STANDARD for your pro-OO
buddies. You ignore activities from them that when I do make me a "bad
evil troll".

> Patrick

-T-
oop.ismad.com

0
topmind (2124)
12/27/2006 7:04:36 PM
Thomas Gagne wrote:
> topmind wrote:
> > Is it required to be implemented the same way as done on paper? Or is
> > one merely required to present it in double-entry form (which is just a
> > presentation issue)?  It seems foolish to have laws that force
> > denormalized data.

> How does the law require denormalized data?

That is what I was asking. I was not sure if the criticism of my
suggestion was from a legal standpoint or an implementation standpoint.
Thus, I commented on both possibilities rather than play 20 questions.

>
> --
> Visit <http://blogs.instreamfinancial.com/anything.php>
> to read my rants on technology and the finance industry.

-T-

0
topmind (2124)
12/27/2006 7:09:38 PM
Nick Malik [Microsoft] wrote:
> "Thomas Gagne" <tgagne@wide-open-west.com> wrote in message
> news:BJSdnaRp8N2ASwzYnZ2dnUVZ_uS3nZ2d@wideopenwest.com...
> > topmind wrote:
> >> I was talking with some tech buddies of mine about this once, and we
> >> concluded that double-entry book-keeping was archaic. One does not need
> >> to store the same info in two different places with modern DB's. If you
> >> want to avoid losing a record, then use some kind of increment
> >> sequencing number.
> >>
> > Are you and your buddies appropriate authorities to dismiss a Generally
> > Accepted Accounting Practice in use since the 12th century?
> >
>
> -T- just likes to argue.  Don't try to argue GAAP with him.  You won't learn
> anything.

I have an urge to tell you where you can go. If you don't like
somebody, then simply don't respond. That is the polite way to avoid
somebody you don't like. You don't trash talk them to others, you
simply ignore them. Grow some people skills. I have some non-angelic
opinions of you too, but I don't keep stating them over and over.

>
> I think there is nothing wrong with double-entry bookkeeping if it helps you
> and your team to prove that data is correct.  The auditors are business
> customers.  We are in IT.  In IT, we make our customers happy.  Any logical
> argument that creates a design that doesn't meet the needs of the business
> is an argument that should be dismissed on it's face.


You have not shown that my suggestion "doesn't meet the needs of the
business". Again, you are confusing presentation with internal
(computer) representation. With my suggestion, traditional REPORTS can
still be printed.

Perhaps you could argue that since the vast majority will want an "old
style" view, it would be easier to shape the tables that way
also. But you didn't. 
 

> 
> -- 
> --- Nick Malik [Microsoft]

-T-

0
topmind (2124)
12/27/2006 7:20:19 PM
Nick Malik [Microsoft] wrote:
> "Thomas Gagne" <tgagne@wide-open-west.com> wrote in message 
> news:D7mdne4mU5111BLYnZ2dnUVZ_smonZ2d@wideopenwest.com...
>   
>> Nick Malik [Microsoft] wrote:
>>     
>>> <snip>
>>> That said, an RDBMS can present MANY interfaces to your code, not all of 
>>> which have to be presented through stored procs.  You could present 
>>> through views, for example, and still hide some of the details of your db 
>>> design.
>>>
>>> I would also say that the db presents the data for 'many' objects instead 
>>> of a single one.  Viewing the db as a single object begs the question: 
>>> what behavior are you encapsulating in it?
>>>
>>>       
>> Objects are often composed of many other objects.
>>     
>
> Wow.  First statement has the first assumption I'd like to challenge.
>
> <snip>
>
> An object has a PURPOSE....
> <snip>
>
> However, if we start to ask: what does this data do... we run aground.  It 
> doesn't do anything.  The relational model doesn't let it.  It is. 
> Therefore, to move to the use of an object that "does", we must reframe our 
> representation, and view the data not in terms of the relations and tuples 
> that Codd described and Date and Darwen elaborated for us, but rather in 
> terms of the activities that the business needs to perform on it.
>   
All true enough.  No disagreement.  The database has both data that is 
(little more useful than C structures) and things it does (now we have 
an object).  No disagreement there.
>   
>> My database object is no different.  Primarily, through stored procedures 
>> the DB has methods and projections.
>>     
>
> The term DB is a bit confusing.  From your earlier posts, you appear to be 
> referring not to a generic database but to a database that represents a 
> specific data model.  If it is OK with you, I'll refer to this concept as a 
> 'SpecificDatamodel' object, rather than a DB.
>   
I've been thinking a lot about finding another word for that.  
Hypostasis is a good one, as is /SpecificDataModel/ (SDM).
> As I pointed out in other places, you can view these methods as attached to 
> the 'SpecificDatamodel' object, but not to the elements within that specific 
> data model, even though the methods clearly apply only to specific elements. 
> This is not wrong per se, but it is problematic in some respect.  I've 
> worked with databases that have hundreds of tables and thousands of stored 
> procedures.  I would have a difficult time considering a single object with 
> a thousand methods and such a complex data representation mechanism as a 
> single object.
>
> In addition, I would have no way to describe to someone what the object 
> 'does' or better yet, what it 'encapsulates.'
>   
I won't debate the value of a thousand procedures in front of hundreds of 
tables.  I'm sure we could find something interesting if we 
compared the ratios of tables to relations to procedures across enough 
systems, but without actually doing the research 
we can only speculate.
>   
>> The projections can either be returned as collections of tuples or I can 
>> send a lambda expression (especially helpful for really large result sets) 
>> that evaluates one row at a time.  Streams can also be returned.  This 
>> where a facade can be helpful to make the DB's interface more idiomatic 
>> for your favorite OOPL.  Enjoy!
>>     
>
> ADO.Net does exactly this.  It is a generic object for manipulating 
> relational data in a .Net language.  That said, there are numerous examples 
> of where it's strength, specifically the ability to apply to a wide variety 
> of data needs, is also its weakness, for not being more able to present 
> specific needs in a succinct manner.  For this reason, many articles exist 
> for creating 'Named Datasets' which allow this library to return data in a 
> more constrained fashion.  If the simple provision of data as tuples or 
> projections were sufficient, we would not need anything more than simple 
> tables and ADO.Net.
>
> You have just argued against stored procedures.
>   
But aren't the named datasets a semantic layer on top of the generic 
facilities provided by ADO.Net?  I don't see how that negates the use of 
stored procedures.  There's nothing that prohibits ADO.Net or any other 
DB interface from using procedures in addition to SQL.
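
For what it's worth, the row-at-a-time consumption mentioned above can be 
layered over ADO.Net without abandoning procedures.  A sketch (the 
procedure, parameter, and class names are invented):

    // Sketch: the caller supplies a delegate; it never sees the SQL, the
    // reader, or the physical schema, and no list is built up in memory.
    using System;
    using System.Data;
    using System.Data.SqlClient;

    public class TradeStore
    {
        private readonly string _connectionString;

        public TradeStore(string connectionString)
        {
            _connectionString = connectionString;
        }

        public void EachTrade(DateTime day, Action<IDataRecord> visit)
        {
            using (SqlConnection conn = new SqlConnection(_connectionString))
            using (SqlCommand cmd = new SqlCommand("get_trades_for_day", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@day", day);
                conn.Open();
                using (SqlDataReader rdr = cmd.ExecuteReader())
                {
                    while (rdr.Read())
                        visit(rdr);   // evaluated one row at a time
                }
            }
        }
    }
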
>   
>>> <snip>
>> I agree it is the goal, but should every DB-ish object in your application 
>> actually map to a tuple inside the DB (bean)?
>>     
>
> I would say that most objects get close, because relational databases do 
> such an excellent job of representing data in an efficient and versatile 
> format, that there is little value in viewing data much above the logical 
> model when writing code.  It doesn't have to map to the physical model.  In 
> modern RDBMS software, there are many techniques for hiding some of the data 
> details, including using views and using the security mechanisms to hide 
> columns.  That said, for the purposes of the code, it is often 
> counterproductive to take this more than one small step up from the tuple 
> itself.  That is because of the strength of the relational model, not its 
> weakness.
>   
But aren't you talking about structure more than semantics in an SDM?
>   
>> Why do that when objects should communicate to each other through 
>> messages?
>>     
>
> I have no problem with sending messages/transactions to a database. 
> Recognize that the systemic components that interpret that message and 
> translate the message into data representation are not inherent in the 
> "relational" model at all.  They are fairly recent innovations and are part 
> of the system only as coded extensions, outside the core engine.  They are 
> code.  They are configured using data and declarations, and only used when 
> they fit the purpose to which they are designed.  They are still code.
>   
That may be, but they are hardly recent inventions.  Sybase has had them 
since the 80s, and before that system programmers created APIs that 
stood in front of their hierarchical and network databases and 
performed a similar function.
>   
>> Aren't the OR tools distracting OO programmers from how they ought really 
>> talk to the DB?
>>     
>
> On the contrary, using an example from SQL Server and ADO.Net, the OR tools 
> allow the programmers to send XML to the database for interpretation by the 
> database XML engine.
>   
XML is a structure.  Aren't you assuming there's something semantic 
inside?  Don't those semantics exist independent of the format of their 
envelope?  Instead of simplifying how our applications communicate with 
the SDM, aren't we simultaneously complicating it (with metadata) and 
attempting to assuage that complication with mapping tools?  OR tools 
necessarily imply a tighter integration (albeit buried) between the 
application's model and the SDM.  Are OR tools necessary for the SDM to 
fulfill its contract (interface)?  No, because the SDM exists 
independent of the OR layer.  The SDM predates both the relational DB 
model and object oriented programming.  OO programming does not improve 
the SDM, it only improves our programming (how and how much are another 
debate).  Relational Algebra doesn't improve our SDM, but it does 
improve the facility of the DBMS.  A customer always had multiple 
accounts regardless which DB model implemented it.
>   
>>> Note that even when you do this, you run into the impedence, and that is 
>>> because RDBMS systems are not the appropriate place to put every business 
>>> capability.  (If they were, all apps would be very thin user interfaces 
>>> on very thick databases).
>>>
>>>       
>> I don't know everyone's experience, but every DB I've worked with /was/ my 
>> system.  It stored the entire state of my system in neat tables and rows 
>> with glorious relations between them to answer every question I could 
>> possibly ask.  Everything else was one of two things: automation or 
>> cosmetics.  Portfolio management, trading, banking, and insurance--the DB 
>> recorded everything.  If the system stopped the DB knew where.  When the 
>> system started the DB knew where from.
>>     
>
> This eloquent description describes the act of "storing" the data, whether 
> it is functional data (balance in an account) or state data (stage in a 
> process), you are simply storing.  The system, from the user's standpoint, 
> doesn't mean much if the data doesn't move.  The database, by itself, is not 
> the system from the viewpoint of the user.  The db stores the data, but it 
> is the application that provides the business value of "perform business 
> activity" that results, ultimately, in the goal of making money.
>
> To state that the "DB [is] the system" reveals a great deal about your 
> approach to solving the needs of the business.  The business wants their 
> needs met.  You have a cool tool that is pretty versatile, and you use it 
> well.  However, it is simply a tool and it doesn't go all the way.  You need 
> other tools to complete the picture. Those remaining tools, the 
> applications, are not so easily dismissed.
>   
I didn't mean to trivialize applications--only to emphasize their role 
is to animate, automate, and avail the SDM.
> The application is where the business rules actually exist.
That depends on what you think an SDM is.  If it is only a repository 
for data and contains no mechanism of its own to enforce any semantic 
rules at all, then you're correct--the business rules must live 
elsewhere.  But if you consider instead that an SDM, or hypostasis, also 
includes an interface that guards both the semantic and structural 
integrity of the data then the responsibility, if not migrated to the DB 
object, straddles both.

But business rules that straddle both are unsatisfactorily ambiguous.  That is 
why I prefer the SDM knows "how" things are done while my applications 
know "why" and/or "when" they are done.  My SDM knows how to add 
transactions and update account balances, but my applications know why 
I'm updating them (a customer is making a deposit) or when (interest is 
due).  And of course, they are able to automate those transactions and 
originate them at ATMs, teller windows, and batch programs, which the 
SDM is unable to do by itself.
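
A sketch of that division of labor (the procedure, class, and parameter 
names are invented, and the connection string would come from 
configuration): the application expresses the "why," and the procedure it 
calls owns the "how."

    // Sketch: the teller application knows *why* (a customer is making a
    // deposit); the make_deposit procedure inside the SDM knows *how*
    // (insert the transaction rows, update the balance).
    using System.Data;
    using System.Data.SqlClient;

    public class TellerSession
    {
        private readonly string _connectionString;

        public TellerSession(string connectionString)
        {
            _connectionString = connectionString;
        }

        public void CustomerDeposit(int accountId, decimal amount)
        {
            using (SqlConnection conn = new SqlConnection(_connectionString))
            using (SqlCommand cmd = new SqlCommand("make_deposit", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@acct_id", accountId);
                cmd.Parameters.AddWithValue("@amount", amount);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }
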
> <snip>
>   
>> In fact, before there was a system there was a DB.  It was designed, 
>> proved correct, constraints implemented, procedures created, load tested, 
>> and all kinds of fun unit-testing kinds of things before a single line of 
>> application code was created.
>>     
>
> Nonsense.  No one pays to create a database for fun.
I was saying (am I the only one?) that they're created first (or should 
be), and that much should be done to make certain they are as correct as 
possible before coding applications.  The latter is a far more expensive 
proposition than creating and throwing away SDMs.  On the risk/reward 
curve, it is far better to iterate SDMs without or with little 
application code than it is to iterate an SDM after code has been 
written that depends on it.  A house's foundation is pretty worthless 
without a house, but you'd better make sure that foundation is poured 
properly or the house, for all its construction expense, will be worth 
less than its raw materials.

<snip>

>
>   
>>    But the DB must always be correct.  Whether the behaviors are
>>    correct or not, the DB must maintain its integrity.  It must protect
>>    its state.
>>     
>
> Are you implying that integrity + state = correct?  I seriously question 
> this assertion, if only because you have defined the word 'correct' in such 
> narrow terms as to allow a wide array of information that the business would 
> find offensive, obstructive, or potentially perilous into a 'correct' 
> database.
>   
That depends on what your definition of integrity is.  If you're 
thinking purely of structural correctness then I'd agree with you.  But 
my definition of integrity includes semantic integrity, in which case 
I'd agree with me. :-)
>   
> <snip>
>>> I guess one thing that stands out for me: you reached a valuable 
>>> conclusion about the application of OO design methods to RDBMS design, 
>>> but you didn't prove the initial assumption: that stored procedures 
>>> should be used as the only interface for code to access the data in the 
>>> database.  In this respect, I am not convinced.
>>>       
>> I don't blame you.  I need to present more evidence, which I will do 
>> through examples.
>>     
>>> <snip>
>>>
>>> I also think you lose something valuable with Stored Procs.  Excellent 
>>> efforts have been expended to consider the basic principles of RDBMS 
>>> design in objects, and to create objects that will effectively assist 
>>> with Object Relational Mapping as a first step to addressing the 
>>> Impedence mismatch.
>>>       
>> You're right.  Some great research has been spent here--as there was in 
>> alchemy.
>>     
>
> nice jab.
>
> I'm asking you to consider something that you may not have considered: that 
> the code in your stored procedures that actually "does" something (recognize 
> a deposit to an account), or "proves" something (all credits and debits to a 
> particular account balance out), is part of an /application/, and not the 
> persistence mechanism, even though it is represented in a db-centric 
> language.  This is not dissimilar from your notion of the "db as an object" 
> because an object is clearly a construct in application code.
>   
Hmm.  When you create objects in your favorite OOPL, is all of any given 
object's data publicly accessible so that all other objects may operate 
directly on it?  Given a simple data object, should other objects add a 
day by manipulating its instance variables or should they use its 
interface--after all, it's all part of what the requesting object needs 
to do?
> I have no idea if you will find this easy to accept or difficult to accept. 
> It is widely accepted in the industry.
>   
Lots of things appeal to the masses, but that's not a logical reason to 
use, endorse, or employ them.  I think lots of teams work too hard 
designing, writing, debugging, and maintaining systems--creating 
unnecessary work for themselves.  But reading and hearing about their 
frustrations doesn't motivate me to use what everyone else uses.
> <snip>
>
> If you say that you have isolated the fact that there is a column called 
> 'prod_id' from the code by calling the stored proc, I'd agree, and then I'd 
> say that there is more than one way to do that exact thing.  I could, for 
> example, have a C# class called 'Products' with a 'get' function and all 
> places in the code that need to get products must call that class.  The 
> class can now have the SELECT statement embedded in it and it will be no 
> more difficult, in theory, to change that class than it would be to change 
> the stored procedure code.
>   
IF the SELECT statement appeared only once and IF the tables it refers 
to are its own and IF that statement isn't repeated anywhere and IF 
their names never change and IF you can deploy your enhanced application 
code as easily as loading a new procedure--you're right.

I haven't seen that application since I read about Multics.
<snip>
>
> As a result, we have propogated a 'best practice' that we should all use 
> stored procedures to access the database tables.  This allows the database 
> team to behave independently of the developers who use other languages, and 
> to control the procedural code that presents the database to the 
> applications.  This provides a good seperation of human endeavor.  It is 
> not, however, a good practice because the technology is somehow 'good' under 
> the covers.  It is an adaptation.
>   
If that adaptation is simpler, requires less code, is more flexible, 
provides more agility, guards against semantic errors, and costs less 
than you're spending now, then I'd say not only is it good under the 
covers but that I'd let it eat crackers in bed.
> One could just as rationally allow all applications to access the database 
> using a single DLL, and disallow all other executable code from accessing 
> the database, and then control the interface in that DLL.
It would be just as rational, and I've done both.  But if people are 
having such a difficult time grasping the concept without another 
language/environment/framework involved, I'm curious how they'd do if I 
relied on it?
>   If RDBMS systems 
> allowed this kind of limitation, that would be a far better mechanism for 
> controlling access to the data than the current practice of using stored 
> procedures as the firewall.
>   
Even without stored procedures, I would still recommend creating both 
micro and macro transactions--one implementing how things are done and 
another ordering them into the why and when.
> This kind of limitation would require that the database management team have 
> developers who understand two languages.  Doesn't seem so hard.
>   
Many development teams arrange programmers and DBAs in opposition to 
each other--but that's another story.

<snip>

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
12/27/2006 7:37:39 PM
topmind wrote:
> Thomas Gagne wrote:
>   
>> <snip>
>> That is what I was asking. I was not sure if the criticism of my
>> suggestion was from a legal standpoint or an implementation standpoint.
>> Thus, I commented on both possibilities rather than play 20 questions.
>>     
GAAP highly recommends double-entry bookkeeping.  Even in practice, it 
is a good tool for financial OLTP systems to make sure nothing is 
accidentally recorded to the system that doesn't balance.  After all, we 
don't want someone increasing their account balance if there wasn't a 
source of funds involved, do we?


-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
12/27/2006 7:46:26 PM
topmind wrote:

> Patrick May wrote:

TM>It is not my burden to show otherwise.

>>When you make positive claims, as you so often do in this
>>newsgroup, the burden of proof is indeed yours. You consistently fail
>>to meet this burden, typically retreating to "I was just voicing my
>>subjective opinion." when pushed hard enough.

> Because it ain't worth it to persue. With enough time in many cases I
> could show objective evidence, but then I decide it is too minor a
> point to matter.

Sorry, but if someone claims that I cannot provide information to
support my position, when I have it to hand, I don't consider it "too
minor a point to bother" .

With all the time you have wasted on comp.object over the years, I
contend that you would have all the time necessary to "show
objective evidence" .


 > If that bothers you, then take drugs.

A poor attempt IMHO to evade the charge that you don't have the
facts to support your claims.


>>Perhaps people will devote the effort to providing evidence to
>>you when you demonstrate the intellectual integrity and common
>>courtesy of supporting your claims.

> Don't lecture me about "common curtesy" and "intellectual integrity".
> You guys often imply OO is Liquid Jesus, but offer ZERO evidence
> outside of systems software. Hyp-O-Crate.

What "evidence" (greater than zero) has been offered *inside* of
"systems software" ... ??


Regards,
Steven Perryman
0
S
12/27/2006 8:19:14 PM
Thomas Gagne wrote:
> topmind wrote:
> > Thomas Gagne wrote:
> >
> >> <snip>
> >> That is what I was asking. I was not sure if the criticism of my
> >> suggestion was from a legal standpoint or an implementation standpoint.
> >> Thus, I commented on both possibilities rather than play 20 questions.
> >>

> GAAP highly recommends double-entry book keeping.  Even in practice, it
> is a good tool for financial OLTP systems to make sure nothing is
> accidentally recorded to the system that doesn't balance.

Normalization is not something I invented. Double entry goes against
normalization. Double-entry was designed with paper processes in mind,
NOT RDBMS. There are *more effective* ways to reduce errors and
mistakes. Normalization is part of it. Perhaps GAAP is insufficiently
educated on database practices and techniques. Technology is not their
main job.


> After all, we
> don't want someone increasing their account balance if there wasn't a
> source of funds involved, would we?

See above.

>
>
> --
> Visit <http://blogs.instreamfinancial.com/anything.php>
> to read my rants on technology and the finance industry.

-T-

0
topmind (2124)
12/27/2006 8:46:46 PM
topmind wrote:

> S Perryman wrote:

>>topmind wrote:

PM>You've got it backwards.  You used the Visitor pattern in
PM>support of one of your claims in your conversation with Neo.  I
PM>simply pointed out that it does not, in fact, support your
PM>argument.  The burden of proof is still on you to provide an
PM>example of OO techniques leading to "tangled pasta".

TM>There are no real rules for when to use what GOF pattern, especially
TM>if there are competing factors.  The rules of relational
TM>normalization are governed mostly by duplication removal. All else
TM>being equal, consistency trumps inconsistency.

PM>So you can't provide an actual example. You should just come out
PM>and say so.

TM>How exactly does one provide examples to show that there are no
TM>consistent consensus rules for something?

>>I contend that GoF have such rules: they are labelled "motivation" etc .

> They are often worded as "adding an X without having to change Y".

Please give us an exact reference to all the GoF patterns that are
worded in the manner that you claim.

Page numbers in the GoF book would be most helpful. Failing that, the
pattern names will suffice.


 > However, change needs often change over time. Up-front change needs
 > are often not a good guide to future change needs.

???

What is a "change need" ??
And once you have explained this, what is an "upfront" / "future"
change need ??


> Okay, I declare "tangled pasta" a subjective opinion.

And how many postings from Patrick May et al has it taken you to
admit this obvious (to us anyway) fact ??


> Until they are proven better in my domain

And what precisely is your "domain" ??

I expect *specific* information here (on-line shopping catalogues, share
dealing systems, inventory control etc) .



> with inspectable public
> source code, I shall use procedural/relational techniques instead and
> recommend others ignore them also.

A recommendation that by your own admission is based on opinion and not
fact.


>>So, as far as GoF patterns go (and using your weird language) :

>>show us *one* "inconsistent consensus rule" .

>>It is *your burden* to show that one condition.

>>If you cannot, as Patrick May has so amusingly had you squirming over
>>the months on various different threads trying to avoid, state that
>>while you have doubts as to the veracity of something, you specifically
>>do not have the proof (and/or ability) to disprove the veracity.

> The bottom line is that you cannot prove GOF OO better.

Pick a GoF pattern, and we can construct an example that you can use to
illustrate that your preferred development methods are superior to.



> Mr. May likes
> nitty side-tracks to distract from the real issue. He is more
> interested in bashing me than in defending OO.

He has done no such thing.
He has on numerous occasions easily forced you into a corner.

Specifically, that you cannot :

- cite any personal real-world experience of working on OO projects
where a problem you claim exists can be explained to us by example.

- construct a decent example that illustrates a specific problem that
you claim exists.



> Kick the messenger all you all want

No need. You do a sterling job in kicking yourself.


> but OO is not proven better outside of systems software.

Is OO proven better *inside* of systems software ??



> Whether I am a genious or Bozo, you still have no OO proof.

See above.


Regards,
Steven Perryman
0
S
12/27/2006 9:22:01 PM
S Perryman wrote:
> topmind wrote:
>
> > S Perryman wrote:
>
> >>topmind wrote:
>
> PM>You've got it backwards.  You used the Visitor pattern in
> PM>support of one of your claims in your conversation with Neo.  I
> PM>simply pointed out that it does not, in fact, support your
> PM>argument.  The burden of proof is still on you to provide an
> PM>example of OO techniques leading to "tangled pasta".
>
> TM>There are no real rules for when to use what GOF pattern, especially
> TM>if there are competing factors.  The rules of relational
> TM>normalization are governed mostly by duplication removal. All else
> TM>being equal, consistency trumps inconsistency.
>
> PM>So you can't provide an actual example. You should just come out
> PM>and say so.
>
> TM>How exactly does one provide examples to show that there are no
> TM>consistent consensus rules for something?
>
> >>I contend that GoF have such rules: they are labelled "motivation" etc .
>
> > They are often worded as "adding an X without having to change Y".
>
> Please give us an exact reference to all the GoF patterns that are
> worded in the manner that you claim.
>
>
> Page numbers in the GoF book would be most helpful. Failing that, the
> pattern names will suffice.

How come the claimer of "motivation" is not subject to your page-number
citation harassment also? Dare I say, "double standard"? You jumped
right over their claims. How convenient. Proof right here of your
double standards. Your friends can be sloppass, but I cannot.

> > Until they are proven better in my domain
>
> And what precisely is your "domain" ??
>
> I expect *specific* infomation here (on-line shopping catalogues, share
> dealing systems, inventory control etc) .

That is unrealistic. The boundaries are somewhat fuzzy. What are you
implying? BTW, I've given some suggestions that readers can relate to:
student grade tracking and airline reservations.

>
>
> > with inspectable public
> > source code, I shall use procedural/relational techniques instead and
> > recommend others ignore them also.
>
> A recommendation that by your own admission is based on opinion and not
> fact.

The default is not OO.

>
>
> >>So, as far as GoF patterns go (and using your weird language) :
>
> >>show us *one* "inconsistent consensus rule" .
>
> >>It is *your burden* to show that one condition.
>
> >>If you cannot, as Patrick May has so amusingly had you squirming over
> >>the months on various different threads trying to avoid, state that
> >>while you have doubts as to the veracity of something, you specifically
> >>do not have the proof (and/or ability) to disprove the veracity.
>
> > The bottom line is that you cannot prove GOF OO better.
>
> Pick a GoF pattern, and we can construct an example that you can use to
> illustrate that your preferred development methods are superior to.

I never claimed they were objectively superior. I believe paradigm
benefits are probably subjective. There are many roads to the same
solution and people pick what best fits their head.


>
>
> > Mr. May likes
> > nitty side-tracks to distract from the real issue. He is more
> > interested in bashing me than in defending OO.
>
> He has done no such thing.
> He has on numerous occasions easily forced you into a corner.
>
> Specifically, that you cannot :
>
> - cite any personal real-world experience of working on OO projects
> where a problem you claim exists can be explained to us by example.

Pro-OO's haven't either. Show me specifics where procedural/relational
is objectively worse in biz apps (and not language-specific faults).
Double Standard.

>
> - construct a decent example that illustrates a specific problem that
> you claim exists.

Same with you with regard to procedural/relational. D.S. again.

> > but OO is not proven better outside of systems software.
>
> Is OO is proven better *inside* of systems software ??

I did not claim that. I am only saying I am not challenging OO in
systems software. You read what you want to read, not what is actually
there. More evidence that your dislike of me distorts your view of
reality.

> Steven Perryman

-T-

0
topmind (2124)
12/27/2006 9:54:41 PM
Thomas Gagne wrote:
> An unexpected thing happened while debating topmind: I had an epiphany.

They have medication for that now.

0
topmind (2124)
12/27/2006 10:05:19 PM
Dmitry A. Kazakov wrote:
> On 26 Dec 2006 09:08:02 -0800, topmind wrote:
>
> > If I say "There is no
> > evidence that unicorns exist", you cannot ask for an example.
>
> No, we can immediately discard this statement as illegal. "There is no"
> must be applied to an observable set. You can say "In Britanica there is no
> evidences that unicorns exist." Then we could go and verify that.
> Otherwise, it is always your burden to prove a universally quantified
> statement. For example by showing that existing unicorns would necessarily
> post in comp.object within each hour.

Is this better?:

Nobody here on comp.object has presented such evidence.

>
> -- 
> Regards,
> Dmitry A. Kazakov
> http://www.dmitry-kazakov.de

-T-

0
topmind (2124)
12/27/2006 10:51:10 PM
topmind wrote:

> S Perryman wrote:

SP>I contend that GoF have such rules: they are labelled "motivation" etc .

TM>They are often worded as "adding an X without having to change Y".

>>Please give us an exact reference to all the GoF patterns that are
>>worded in the manner that you claim.

>>Page numbers in the GoF book would be most helpful. Failing that, the
>>pattern names will suffice. 

> How come the claimer of "motivation" is not subject to your page-number
> citation harassment also? Dare I say, "double standard"? You jumped
> right over their claims. How convenient.

I have done no such thing. Again, you claim that the sections of the
GoF patterns that are labelled "motivation" :

<quote>
are often worded as "adding an X without having to change Y".
</quote>


I have the GoF book in front of me.
I have quickly scanned the "Motivation" sections of a few patterns, and
see no text of the form you claim.

So (my laxness/laziness aside) , the burden of proof is on you to show
us where in the book there are sections worded as you claim.

If you are unable to do so, retract the claim.


TM>Until they are proven better in my domain

>>And what precisely is your "domain" ??

>>I expect *specific* infomation here (on-line shopping catalogues, share
>>dealing systems, inventory control etc) .

> That is unrealistic.

No.

> The boundaries a somewhat fuzzy.

No.

>What are you implying?

Nothing.

I want to *know* what domains you *work or have worked in* .
And more specifically, the types of systems in those domains.


TM>with inspectable public
TM>source code, I shall use procedural/relational techniques instead and
TM>recommend others ignore them also.

>>A recommendation that by your own admission is based on opinion and not
>>fact.

> The default is not OO.

By definition (obviously - you dislike OO so why would you recommend
it) .


TM>The bottom line is that you cannot prove GOF OO better.

>>Pick a GoF pattern, and we can construct an example that you can use to
>>illustrate that your preferred development methods are superior to.

> I never claimed they were objectively superior. I believe paradigm
> benefits are probably subjective. There are many roads to the same
> solution and people pick what best fits their head.

So you do not have GoF patterns +/- real-world examples of their use for
which we can discuss whether they are better than your preferred
development methods.


>>- cite any personal real-world experience of working on OO projects
>>where a problem you claim exists can be explained to us by example.

> Pro-OO's haven't either. Show me specifics where procedural/relational
> is objectively worse in biz apps (and not language-specific faults).
> Double Standard.

What is a "biz app" ??

If you mean "business application" , then is not the vast majority of
s/w written with intended application in some business somewhere ??

If not, provide *specific examples* of a "biz app" .


>>- construct a decent example that illustrates a specific problem that
>>you claim exists.

> Same with you with regard to procedural/relational. D.S. again.

TM>but OO is not proven better outside of systems software.

>>Is OO is proven better *inside* of systems software ??

> I did not claim that.

Who said that you did ??

Again, you wrote :

"but OO is not proven better outside of systems software."

I am asking whether it has been proven better *inside* of systems
software. If it has not, why did you write the above statement in the
first place ?? What meaning can be deduced other than that OO s/w has
been proven better in a specific subject domain ??


> I am only saying I am not challenging OO in systems software.

And now I am asking : why not ??

The same alleged problems/failings of OO must surely manifest themselves
in "systems software" as in "biz apps" (whatever the latter might be) .

What is the issue with "systems software" that you cannot discuss it in
the context of OO development ??


> You read what you want to read, not what is actually there.

I am reading only what you have written.
IMHO you appear to not even be capable of understanding why you write
things in the first place.


Regards,
Steven Perryman
0
S
12/27/2006 10:58:33 PM
S Perryman wrote:
> topmind wrote:
>
> > S Perryman wrote:
>
> SP>I contend that GoF have such rules: they are labelled "motivation" etc .
>
> TM>They are often worded as "adding an X without having to change Y".
>
> >>Please give us an exact reference to all the GoF patterns that are
> >>worded in the manner that you claim.
>
> >>Page numbers in the GoF book would be most helpful. Failing that, the
> >>pattern names will suffice.
>
> > How come the claimer of "motivation" is not subject to your page-number
> > citation harassment also? Dare I say, "double standard"? You jumped
> > right over their claims. How convenient.
>
> I have done no such thing. Again, you claim that the sections of the
> GoF patterns that are labelled "motivation" :
>
> <quote>
> are often worded as "adding an X without having to change Y".
> </quote>
>
>
> I have the GoF book in front of me.
> I have quickly scanned the "Motivation" sections of a few patterns, and
> see no text of the form you claim.
>
> So (my laxness/laziness aside) , the burden of proof is on you to show
> us where in the book there are sections worded as you claim.
>
> If you are unable to do so, retract the claim.


I already did retract it. You keep focusing on it because you found a
tiny morsel of battle that you can win despite getting your arse kicked
in the war.


> TM>Until they are proven better in my domain
>
> >>And what precisely is your "domain" ??
>
> >>I expect *specific* infomation here (on-line shopping catalogues, share
> >>dealing systems, inventory control etc) .
>
> > That is unrealistic.
>
> No.

Yes.

>
> > The boundaries a somewhat fuzzy.
>
> No.

The boundaries between "biz apps" and other domains are clear?
Evidence?


>
> If not, provide *specific examples* of a "biz app" .

I already provided 2 suggestions that we can use for examples.

>
>
> >>- construct a decent example that illustrates a specific problem that
> >>you claim exists.
>
> > Same with you with regard to procedural/relational. D.S. again.
>
> TM>but OO is not proven better outside of systems software.
>
> >>Is OO is proven better *inside* of systems software ??
>
> > I did not claim that.
>
> Who said that you did ??
>
> Again, you wrote :
>
> "but OO is not proven better outside of systems software."
>
> I am asking whether it has been proven better *inside* of systems
> software. If it has not, why did you write the above statement in the
> first place ??

I already told you.

> What meaning can be deduced other than that OO s/w has
> been proven better in a specific subject domain ??

Simple: I am excluding system software from my statement. I am putting
a Null around it, not a value into it. Read it again over and over
until it sinks in.

>
>
> > I am only saying I am not challenging OO in systems software.
>
> And now I am asking : why not ??
>
> The same alleged problems/failings of OO must surely manifest themselves
> in "systems software" as in "biz apps" (whatever the latter might be) .

Not necessarily. In custom biz apps, the requirements change more often
than implementation, but in systems software it may perhaps be the
opposite. Engineers tend to define the requirements, not salesmen,
CEO's, or congress; and thus the requirements are a little more
planned, more logical, and more conservative.


> > You read what you want to read, not what is actually there.
>
> I am reading only what you have written.
> IMHO you appear to not even be capable of understanding why you write
> things in the first place.

You are the one with a reading problem.  Absence of statement is not a
statement of absence.

> Steven Perryman

-T-

0
topmind (2124)
12/28/2006 4:19:20 AM
Thomas Gagne wrote:

> Frans Bouma wrote:
> > Thomas Gagne wrote:
> >  <snip>
> > 
> > 	I still have a problem with what you mean with 'Correct'. Let's say
> > you mean by it the relational integrity correctness but also the
> > semantical correctness of the data, as you tried to explain above.
> > Isn't it so that in that situation I can write another program, also
> > using the same database which consumes a subset of your tables and
> > finds incorrect data?
> In that example, where is the bug: the program or the database?  If
> the DB stored the zip code precisely as the human intended and it
> wasn't lost, overwritten, or corrupted, then the DB is correct.  If a
> user entered an incorrect zip code the database can still be correct,
> though its data may not.

	It's about context. A value '1' can be CORRECT in context A but
incorrect in context B. The DB is the persistent storage, the place
where the application state lives.

> But to the point, if a program was able to store improperly-formatted
> zipcode inside the DB then whose fault is that?  

	what's an improperly formatted zipcode? In the US, you have 5 digits,
in the Netherlands you have 4 digits and 2 characters. A zipcode of
1234 AA is properly formatted for a Dutch user of the application, but
not correctly formatted for the US user of the program. Hence: context.

	This means that if the db stores '1234AA', it can do so, and the
Dutch user will happily use it. The US user can't because for the US
user it's just data, 1234 and 2 characters, it's not information
(zipcode).

> Something someplace
> should make sure zipcodes are properly formatted for all programs
> that may update the database, and if both Dutch and US-formatted
> zipcode are allowed both are properly formatted before adding them to
> the DB.

	properly formatted for the context they're used in! That's the whole
point.

> One place that edit can happen is in the DB's interface.  A stored
> procedure can be written that both validates the zip code, and
> records its old value, who changed it, when, and all that other good
> stuff.  Perhaps later if other zipcode formats are supported and new
> tables created to represent them, the stored procedure can be
> modified without the change domino-ing into applications.

	I think you can cut a looooooong story short by explaining what you
apparently can do in a stored proc which I can't do in code outside the
DB.

> > <snip>
> > > Along the same theme, the article also errs in presenting a
> > > bifurcation.  Even if we assume both object and relational models
> > > exist in symbiosis we do not have to map one model onto the other.
> > > We can instead implement to the interface, which is what
> > > transactions are all about.
> >>    
> > 
> > 	that won't work in practise as you can't consume a set in an OOPL
> > unless you transform that set to a consumable object. You too have
> > to find a way to translate between imperative executed statements
> > and set oriented statements.
> >  
> What is returned is a projection, and that projection has tuples.  If
> RAM allows you may treat them as a collection or a read-only stream.
> The application doesn't need to know what a set-oriented statement is
> if it uses stored procedures.

	no, that's too easy. This is precisely the fault one makes when one
claims that using stored procedures in an OO program is as easy as
using a class model provided and managed by an O/R mapper: having a
proc which returns a set of rows is great, but you have to do something
with these rows, or better: you have to transform the concept of a
table into the concept of a set of entities, or better: you have to
semantically interpret what a row MEANS. If a row means: this is the
data for an entity of type Manager (subtype of 'Employee', lame
inheritance scheme, but illustrates the point) and another row means:
this is the data for a clerk instance (another subtype of employee),
you have to make that interpretation somewhere IF you want to be able
to consume the data in an OO fashion.

	If you just say "yeah, well, I have the data in memory so it's taken
care of", you completely miss the whole point. It's like comparing the
# of lines of code to obtain a set of employee OBJECTS from the db with
the # of lines of code to write the PROC to do the same thing: the proc
doesn't get you the data in the form you can work with, so procs are
effectively meaningless UNLESS you add a transformation layer on top of
them, which undercuts the whole point of using procs in the
first place.
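
A sketch of the interpretation step I mean (the column names and 
discriminator values are invented): something has to decide that one row IS 
a Manager and the next IS a Clerk, and that something is the transformation 
layer, not the proc.

    // Sketch: turning an anonymous row into the right entity subtype.
    using System.Data;

    public abstract class Employee
    {
        public int Id;
        public string Name;
    }

    public class Manager : Employee { public int ReportCount; }
    public class Clerk : Employee { public string DeskCode; }

    public static class EmployeeMaterializer
    {
        public static Employee FromRow(IDataRecord row)
        {
            Employee e;
            string kind = (string)row["employee_type"];  // discriminator
            if (kind == "MGR")
            {
                Manager m = new Manager();
                m.ReportCount = (int)row["report_count"];
                e = m;
            }
            else
            {
                Clerk c = new Clerk();
                c.DeskCode = (string)row["desk_code"];
                e = c;
            }
            e.Id = (int)row["employee_id"];
            e.Name = (string)row["employee_name"];
            return e;
        }
    }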

> > 	The mapping is necessary from a technical point of view.
> > Semantically you're dealing with the same thing: an entity, just in
> > different representation forms.
> > 
> > 	You IMHO also make the mistake to confuse SQL with a relational
> > model.   
> You'll have to elaborate on that--I don't know another way to
> converse with RDBMS without SQL.  But one of RDBMS' advantages over
> other DBMS is the provision of stored procedures.  Other DBMS can
> still be provided an interface, but it'll have to be home-rolled.

 	SQL is the language to consume the CONTENTS of a physical datamodel,
which is a physical representation of a relational model. It also
happens to be the language to control the RDBMS, which thus means you
can both consume the contents of a datamodel and also create the
datamodel itself by SQL statements, but that doesn't mean what the
datamodel REPRESENTS is equal to SQL.

	As I tried to explain earlier: a NIAM model represents the domain
model and also the relational model. So in theory I can implement the
domain model purely in code without any RDBMS and it would represent
the same thing without a single line of SQL.

		FB

-- 
------------------------------------------------------------------------
Lead developer of LLBLGen Pro, the productive O/R mapper for .NET
LLBLGen Pro website: http://www.llblgen.com
My .NET blog: http://weblogs.asp.net/fbouma
Microsoft MVP (C#) 
------------------------------------------------------------------------
0
Frans
12/28/2006 11:29:51 AM
In article <5KydnRcGr5oRUA_YnZ2dnUVZ_tadnZ2d@wideopenwest.com>,
 Thomas Gagne <tgagne@wide-open-west.com> wrote:

> GAAP highly recommends double-entry book keeping.  Even in practice, it 
> is a good tool for financial OLTP systems to make sure nothing is 
> accidentally recorded to the system that doesn't balance.  After all, we 
> don't want someone increasing their account balance if there wasn't a 
> source of funds involved, would we?

The question is more whether or not it really makes sense to say this is 
correct:

John -5
Bill +5

and this is not:

John Bill 5

The circumstances that made double-entry a best practice don't 
necessarily apply when we have non-physical records that computers can 
cross-reference with ease.

-- 
My personal UDP list: 127.0.0.1, 4ax.com, buzzardnews.com, googlegroups.com,
    heapnode.com, localhost, x-privat.org
0
Doc
12/28/2006 11:52:54 AM
On 27 Dec 2006 14:51:10 -0800, topmind wrote:

> Dmitry A. Kazakov wrote:
>> On 26 Dec 2006 09:08:02 -0800, topmind wrote:
>>
>>> If I say "There is no
>>> evidence that unicorns exist", you cannot ask for an example.
>>
>> No, we can immediately discard this statement as illegal. "There is no"
>> must be applied to an observable set. You can say "In Britanica there is no
>> evidences that unicorns exist." Then we could go and verify that.
>> Otherwise, it is always your burden to prove a universally quantified
>> statement. For example by showing that existing unicorns would necessarily
>> post in comp.object within each hour.
> 
> Is this better?:

Definitely, Google Groups tracks comp.object. So we could take a one-day
record and verify the existence of unicorns with regard to comp.object
and Google.

> Nobody here on comp.object has presented such evidence.

We just don't need it; see above. Misspelled sentences are ignored
for syntax reasons. No evidence or proof is required.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
0
mailbox2 (6357)
12/28/2006 12:35:50 PM
"topmind" <topmind@technologist.com> wrote in message 
news:1167247219.007135.48320@48g2000cwx.googlegroups.com...
> Nick Malik [Microsoft] wrote:

<<clip>>

>>
>> -T- just likes to argue.  Don't try to argue GAAP with him.  You won't 
>> learn
>> anything.
>
> I have an urge to tell you where you can go. If you don't like
> somebody, then simply don't respond. That is the polite way to avoid
> somebody you don't like. You don't trash talk them to others, you
> simply ignore them. Grow some people skills. I have some non-angelic
> opinions of you too, but I don't keep stating them over and over.
>

in many of the threads you are responding to, Bryce, you perform the role of 
a troll.  By that, I mean that you make off-topic statements that are 
seemingly designed to provoke an argument without resolving or learning 
anything.  Not everyone is aware of your tendencies.  If you were a complete 
troll, (e.g. never adding value), I would have said that instead.  I didn't.

>> Grow some people skills.<<

coming from you, that is funny.

>>
>> I think there is nothing wrong with double-entry bookkeeping if it helps 
>> you
>> and your team to prove that data is correct.  The auditors are business
>> customers.  We are in IT.  In IT, we make our customers happy.  Any 
>> logical
>> argument that creates a design that doesn't meet the needs of the 
>> business
>> is an argument that should be dismissed on it's face.
>
>
> You have not shown that my suggestion "doesn't meet the needs of the
> business".

I don't have to.  Developers, when left to their own devices, often come to 
the same mistaken conclusion that you did.  He mentioned GAAP.  In regulated 
industries, GAAP is required by law.  Everywhere else, it is good practice. 
Arguing about the applicability of pre-existing legislation in the financial 
field with a technologist is pretty pointless.  Setting aside the fact that 
his role is one of implementer, and not arguer, the GAAP principles /should/ 
be applied to more institutions in the USA, not fewer, as a measure to 
reduce corporate fraud.

> Again, you are confusing presentation with internal
> (computer) representation. With my suggestion, traditional REPORTS can
> still be printed.
>
> Perhaps you could argue that since the vast majority will want an "old
> style" view, that it would be easier to make the tables shaped that way
> also. But you didn't.
>

You've already shown your ignorance.  Why should I contribute to your 
embarrassment by requiring a greater display?  I was being kind.

--- Nick 


0
nickmalik (325)
12/28/2006 2:01:45 PM
I've written multiple financial OLTP systems and can confidently say 
there's no such thing as "john bill 5".

A transaction system must be able to record:

    Check for $10.
    Cash for $20.
    Deposit to checking $15.
    Payment to line-of-credit: $15.

or perhaps

    Check for $100.
    Interest payment: $98.
    Principal payment: $2.

These are all examples of double-entry/balanced transactions.  
Double-entry is a misnomer.  It's not so much recording the same 
information twice as it is creating smaller transactions and 
building bigger transactions out of them.
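
A sketch of that composition (the class and account names are invented): 
the individual entries are the small transactions, and the posting step 
refuses anything that doesn't balance.

    // Sketch: a transaction is built from smaller entries and cannot be
    // posted unless the entries net to zero.
    using System;
    using System.Collections.Generic;

    public class Entry
    {
        public readonly string Account;
        public readonly decimal Amount;   // positive and negative legs

        public Entry(string account, decimal amount)
        {
            Account = account;
            Amount = amount;
        }
    }

    public class BalancedTransaction
    {
        private readonly List<Entry> _entries = new List<Entry>();

        public void Add(string account, decimal amount)
        {
            _entries.Add(new Entry(account, amount));
        }

        public void Post()
        {
            decimal sum = 0;
            foreach (Entry e in _entries)
                sum += e.Amount;
            if (sum != 0)
                throw new InvalidOperationException("does not balance");
            // ...hand the entries to the SDM for recording here...
        }
    }

    // The "Check for $100" example above:
    //   BalancedTransaction t = new BalancedTransaction();
    //   t.Add("customer checking", -100m);
    //   t.Add("interest income",     98m);
    //   t.Add("loan principal",       2m);
    //   t.Post();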

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
12/28/2006 3:15:40 PM
Hello Thomas,

I think we agree far more than we disagree.  If you are ever in the Seattle 
area, let me know.  I'll buy you a drink.

I've looked over the points of discussion, both in our thread and in other 
threads with folks like Frans.  (He asks many of the same questions).

From what I can tell, our conversation has wandered around a few points.  I 
think the key to finding common ground is to look at the definition of what 
a database (or in the case of our conversation, a SpecificDataModel or SDM) 
is.

Your approach appears to be that of the component model, where you have 
partitioned the distributed system and you have created key responsibilities 
for the element that we alternately refer to as the database and the SDM. 
The responsibilities that you have allocated to this component (which I 
agree wholeheartedly with) include ensuring not only the database-level 
stuff (referential integrity) but the semantics of the business ("knowing 
how to debit an account").

I agree with what you are saying about this partitioning.  I agree and 
concur, with emphasis, that by separating the concerns of "how" from "why", 
you can create a more versatile model.  To use terms that I'm comfortable 
with, you are discussing the notion of 'composable services.'  That means 
that this component makes available a set of activities (like "transfer 
funds" and "open new account") as a discrete set of services.  The 
application layer can select what order to call those services as part of a 
business process.  They can, in effect, be composed in any way that is 
needed by different applications.  This is one of the key conceptual 
elements of "Service Oriented Architecture" or SOA.  I blog about SOA on a 
nearly-daily basis.
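
A sketch of what that composable service surface might look like (the 
interface and method names are invented):

    // Sketch: discrete services the application layer can compose into
    // whatever business process it needs.
    public interface IAccountServices
    {
        int OpenNewAccount(string customerName);
        void TransferFunds(int fromAccount, int toAccount, decimal amount);
        void CloseAccount(int account);
    }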

Are we on the same page so far?

The only challenge is that in your description of 'database as object,' you 
drew an equivalence between the implementation of the SDM (stored 
procedures, especially) and the object-oriented-ness of the SDM.

My challenge is with the equivalence that you have drawn between the 
procedural, non-OO oriented, implementation mechanisms presented by our 
"modern" RDBMS systems, and the conceptual framework you are attempting to 
use them in (object oriented design).

My point: they are not equivalent.  You cannot, and should not, view the 
stored procedures as object methods.  That is the place where we differ in 
this discussion, as far as I can tell.

Can we agree on the partitioning points?  Let's define the "thing" 
(database, SDM, etc) using terms that you have described.  I'm going to add 
a little context to make it useful from my viewpoint as well.  For the sake 
of the conversation, can we agree on a definition?  I think that will lower 
the confusion.  I'd like to, if I may, change the term I suggested in the 
prior thread, from Specific Datamodel to Semantic Persistence Store (SPM) to 
better capture this concept:

"A Semantic Persistence Store (SPM) is a self-contained component in a 
vertical distributed computing system that is responsible for both (a) the 
persistence of data in an ACID store and (b) the maintenance of the business 
semantics of that store.  To wit: the responsibility of this component is 
to insure that the data collected within it and the transactions that occur 
against it are not only consistent, efficient and secure but also 
semantically correct with respect to the operational, governance, and 
reporting needs of the business."

If we can agree on this foundation, then I think we can make some headway on 
the Object Oriented point of divergence in our discussion.

-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.


0
nickmalik (325)
12/28/2006 3:16:03 PM
Nick Malik [Microsoft] wrote:
<snip>
>
> Are we on the same page so far?
>   
I think so.
> The only challenge is that in your description of 'database as object,' you 
> drew an equivalence between the implementation of the SDM (stored 
> procedures, especially) and the object-oriented-ness of the SDM.
>
> My challenge is with the equivalence that you have drawn between the 
> procedural, non-OO oriented, implementation mechanisms presented by our 
> "modern" RDBMS systems, and the conceptual framework you are attempting to 
> use them in (object oriented design).
>
> My point: they are not equivalent.  You cannot, and should not, view the 
> stored procedures as object methods.  That is the place where we differ in 
> this discussion, as far as I can tell.
>
> Can we agree on the partitioning points?  Let's define the "thing" 
> (database, SDM, etc) using terms that you have described.  I'm going to add 
> a little context to make it useful from my viewpoint as well.  For the sake 
> of the conversation, can we agree on a definition?  I think that will lower 
> the confusion.  I'd like to, if I may, change the term I suggested in the 
> prior thread, from Specific Datamodel to Semantic Persistence Store (SPM) to 
> better capture this concept:
>
> "A Semantic Persistence Store (SPM) is a self-contained component in a 
> vertical distributed computing system that is responsible for both (a) the 
> persistence of data in an ACID store and (b) the maintenance of the business 
> semantics of that store.  To wit: the responsibility of this component is 
> to insure that the data collected within it and the transactions that occur 
> against it are not only consistent, efficient and secure but also 
> semantically correct with respect to the operational, governance, and 
> reporting needs of the business."
>
> If we can agree on this foundation, then I think we can make some headway on 
> the Object Oriented point of divergence in our discussion.
>
>   
I think we can agree.  Your comments and questions have been helpful. 

What I read in your description above, regarding the responsibilities of 
the SPM and its componentization (which I read to mean it is interacted 
with via messages) sounds a lot like an object to me.  Alan Kay 
emphasized the objects were the "lesser" idea and that "messages" are 
what was (and still is) important.  Looking at an SDM and treating it as 
though it were a component (object) brings a watershed of benefits that 
in practice simplify the relationship between object oriented programs 
and the SDM.  When those messages are emphasized over object/relational 
mapping, systems are simpler, more explicit, and better reflect the 
intentions of applications--or at least they do for OLTP systems.


-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
12/28/2006 5:56:30 PM
Frans Bouma wrote:
> Thomas Gagne wrote:
>
>   
>> Frans Bouma wrote:
>>     
>>> Thomas Gagne wrote:
>>>  <snip>
>>>
>>> 	I still have a problem with what you mean with 'Correct'. Let's say
>>> you mean by it the relational integrity correctness but also the
>>> semantical correctness of the data, as you tried to explain above.
>>> Isn't it so that in that situation I can write another program, also
>>> using the same database which consumes a subset of your tables and
>>> finds incorrect data?
>>>       
>> In that example, where is the bug: the program or the database?  If
>> the DB stored the zip code precisely as the human intended and it
>> wasn't lost, overwritten, or corrupted, then the DB is correct.  If a
>> user entered an incorrect zip code the database can still be correct,
>> though its data may not.
>>     
>
> 	It's about context. A value '1' can be CORRECT in context A but
> incorrect in context B. The DB is the persistent storage, the place
> where the application state lives.
>   
Is this where we're missing each other?  I'm trying to describe how a 
DB's interface implements its semantics--its "context"--so that the 
value '1' will be recognized as either correct or incorrect and an 
appropriate exception raised.
>   
>> But to the point, if a program was able to store improperly-formatted
>> zipcode inside the DB then whose fault is that?  
>>     
>
> 	what's an improperly formatted zipcode? In the US, you have 5 digits,
> in the netherlands you have 4 digits and 2 characters. A zipcode of
> 1234 AA is properly formatted for a dutch user of the application, but
> not correctly formatted for the US user of the program. Hence: context.
>   
Exactly.  Back to my comment above--procedures give context that a DB 
cannot provide by itself if it were only a data repository.
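
For example, a stored procedure can apply the country-specific zip code 
rules before the value ever reaches a table.  A rough, T-SQL-flavored 
sketch with made-up names:

    CREATE PROCEDURE setCustomerZip
        @customerId INT,
        @country    CHAR(2),
        @zip        VARCHAR(10)
    AS
    BEGIN
        -- US: five digits; NL: four digits, a space, two letters.
        IF (@country = 'US' AND @zip NOT LIKE '[0-9][0-9][0-9][0-9][0-9]')
           OR (@country = 'NL' AND @zip NOT LIKE '[0-9][0-9][0-9][0-9] [A-Z][A-Z]')
        BEGIN
            RAISERROR('Zip code %s is not valid for country %s.', 16, 1, @zip, @country);
            RETURN;
        END;

        UPDATE customer SET zip = @zip WHERE customer_id = @customerId;
    END;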
> <snip>
>
> 	I think you can cut a looooooong story short by explaining what you
> apparently can do in a stored proc which I can't do in code outside the
> DB.
>   
In another response, Nick Malik and I were discussing just this point.  
Now that I've had a night to think it over I regret agreeing.  It is 
true, you can do things in code outside the DB, great things, but doing 
it properly is much more difficult.

In that response I described how stored procedures know "how" to do 
things (update tables, record transaction history, etc.) and that 
application code knows "why" and "when" to do things.  Deciding which is 
which, is sometimes difficult.

There's already enough "art" in program design, so saying divination 
between hows, whys and whens is difficult isn't particularly helpful.

But even before attacking how that can be done (and I think it's more 
formulaic than other OO techniques) I think OO people need first to stop 
thinking of the database as an object persistence mechanism and start 
thinking of it as an object.  What Nick called a SpecificDataModel (SDM) 
has both data and methods, and accessing an SDM's data directly is as 
much a violation of OO idiom as directly accessing any other object's data.
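
On the database side that data-hiding can even be made literal with 
permissions--a minimal sketch, with hypothetical role, table, and 
procedure names:

    -- Applications get a role that may execute the interface procedures...
    CREATE ROLE app_user;
    GRANT EXECUTE ON getAccountsFor TO app_user;
    GRANT EXECUTE ON postTransaction TO app_user;

    -- ...and explicitly no rights on the underlying tables.
    DENY SELECT, INSERT, UPDATE, DELETE ON account TO app_user;
    DENY SELECT, INSERT, UPDATE, DELETE ON customer TO app_user;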
>   
>>> <snip>
>>>       
>>>> Along the same theme, the article also errs in presenting a
>>>> bifurcation.  Even if we assume both object and relational models
>>>> exist in symbiosis we do not have to map one model onto the other.
>>>> We can instead implement to the interface, which is what
>>>> transactions are all about.
>>>>    
>>>>         
>>> 	that won't work in practise as you can't consume a set in an OOPL
>>> unless you transform that set to a consumable object. You too have
>>> to find a way to translate between imperative executed statements
>>> and set oriented statements.
>>>  
>>>       
>> What is returned is a projection, and that projection has tuples.  If
>> RAM allows you may treat them as a collection or a read-only stream.
>> The application doesn't need to know what a set-oriented statement is
>> if it uses stored procedures.
>>     
>
> 	no, that's too easy. This is precisely the fault one makes when one
> claims that using stored procedures in an OO program is as easy as
> using a class model provided and managed by an O/R mapper: having a
> proc which returns a set of rows is great, but you have to do something
> with these rows, or better: you have to transform the concept of a
> table into the concept of a set of entities, or better: you have to
> semantically interpret what a row MEANS. If a row means: this is the
> data for an entity of type Manager (subtype of 'Employee', lame
> inheritance scheme, but illustrates the point) and another row means:
> this is the data for a clerk instance (another subtype of employee),
> you have to make that interpretation somewhere IF you want to be able
> to consume the data in an OO fashion.
>   
Making tuples both idiomatic (collection or a stream) and semantic (what 
is a tuple?) is a fairly simple thing that can be solved once.  If we 
believe in polymorphism and use a language that supports it we can treat 
the tuple as anything we'd like, providing the tuple can implement the 
protocol.  Programmers have an option of either using it strictly as a 
data object or another business object by simply substituting a business 
class for the tuple's.  Voila!  New behavior.
> <snip>
>>>
>>> 	You IMHO also make the mistake to confuse SQL with a relational
>>> model.   
>>>       
>> You'll have to elaborate on that--I don't know another way to
>> converse with RDBMS without SQL.  But one of RDBMS' advantages over
>> other DBMS is the provision of stored procedures.  Other DBMS can
>> still be provided an interface, but it'll have to be home-rolled.
>>     
>
>  	SQL is the language to consume the CONTENTS of a physical datamodel,
> which is a physical representation of a relational model. It also
> happens to be the language to control the RDBMS, which thus means you
> can both consume the contents of a datamodel and also create the
> datamodel itself by SQL statements, but that doesn't mean what the
> datamodel REPRESENTS is equal to SQL.
>   
I understand that.
> 	As I tried to explain earlier: a NIAM model represents the domain
> model and also the relational model. So in theory I can implement the
> domain model purely in code without any RDBMS and it would represent
> the same thing without a single line of SQL.
>   
What do you mean when you write, "..I can implement the domain model 
purely in code..".  What code?

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
12/28/2006 6:17:28 PM
Thomas Gagne wrote:
> I've written multiple financial OLTP systems and can confidently say
> there's no such thing as "john bill 5".
>
> A transaction system must be able to record:
>
>     Check for $10.
>     Cash for $20.
>     Deposit to checking $15.
>     Payment to line-of-credit: $15.

These are 4 *separate* transactions, aren't they? It is still:

A, B, 10.00
C, D, 20.00
E, F, 15.00
G, H, 15.00

Where the letters are the various account categories/codes (unspecified
here). It is still a form of "john bill 5".  Every individual
accounting transaction has to debit and credit 2 accounts. Traditional
accounting would do the above like this:

A, 10.00
B, -10.00
C, 20.00
D, -20.00
E, 15.00
F, -15.00
G, 15.00
H, -15.00

Which is bad normalization. As far as a larger grouping of multiple
transactions goes, that is not the issue here that I can see. As for
"splitting" a check, first it is deposited, and then divvied up in
further/later transactions.
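
A minimal sketch of the single-row form I'm describing (names are
illustrative only, and the letters stand in for whatever account codes
apply):

    CREATE TABLE transfer (
        transfer_id INT PRIMARY KEY,
        from_acct   CHAR(1) NOT NULL,
        to_acct     CHAR(1) NOT NULL,
        amount      DECIMAL(12,2) NOT NULL
    );

    -- The four transactions above, one row each instead of two rows each.
    INSERT INTO transfer VALUES (1, 'A', 'B', 10.00);
    INSERT INTO transfer VALUES (2, 'C', 'D', 20.00);
    INSERT INTO transfer VALUES (3, 'E', 'F', 15.00);
    INSERT INTO transfer VALUES (4, 'G', 'H', 15.00);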

>
> or perhaps
>
>     Check for $100.
>     Interest payment: $98.
>     Principal payment: $2.
>
> These are all examples of double-entry/balanced transactions.
> Double-entry is a misnomer.  It's not so much as recording the same
> information twice as much as it is creating smaller transactions and
> building bigger transactions out of them.
>
> --
> Visit <http://blogs.instreamfinancial.com/anything.php>
> to read my rants on technology and the finance industry.

0
topmind (2124)
12/28/2006 6:21:17 PM
Dmitry A. Kazakov wrote:
> On 27 Dec 2006 14:51:10 -0800, topmind wrote:
>
> > Dmitry A. Kazakov wrote:
> >> On 26 Dec 2006 09:08:02 -0800, topmind wrote:
> >>
> >>> If I say "There is no
> >>> evidence that unicorns exist", you cannot ask for an example.
> >>
> >> No, we can immediately discard this statement as illegal. "There is no"
> >> must be applied to an observable set. You can say "In Britanica there is no
> >> evidences that unicorns exist." Then we could go and verify that.
> >> Otherwise, it is always your burden to prove a universally quantified
> >> statement. For example by showing that existing unicorns would necessarily
> >> post in comp.object within each hour.
> >
> > Is this better?:
>
> Definitely, google groups tracks comp.object. So we could take a one day
> record and verify existence of unicorns with regard of existence
> comp.object and google.
>
> > Nobody here on comp.object has presented such evidence.
>
> We just don't need it, because see above.

I don't see an answer above.

> Misspelled sentences are ignored
> on syntax reasons. No evidences or proofs required.
> 
> -- 
> Regards,
> Dmitry A. Kazakov
> http://www.dmitry-kazakov.de

0
topmind (2124)
12/28/2006 6:24:53 PM
Nick Malik [Microsoft] wrote:
> "topmind" <topmind@technologist.com> wrote in message
> news:1167247219.007135.48320@48g2000cwx.googlegroups.com...
> > Nick Malik [Microsoft] wrote:
>
> <<clip>>
>
> >>
> >> -T- just likes to argue.  Don't try to argue GAAP with him.  You won't
> >> learn
> >> anything.
> >
> > I have an urge to tell you where you can go. If you don't like
> > somebody, then simply don't respond. That is the polite way to avoid
> > somebody you don't like. You don't trash talk them to others, you
> > simply ignore them. Grow some people skills. I have some non-angelic
> > opinions of you too, but I don't keep stating them over and over.
> >
>
> in many of the threads you are responding to [...] you perform the role of
> a troll.  By that, I mean that you make off-topic statements

I am not aware of such practice on my behalf. I suggest you point it
out where and when it happens instead of a summary end-of-year
complaint.

> that are
> seemingly designed to invoke an argument without resolving or learning
> anything.

Perhaps you just interpret them that way. I learn best by concrete
examples. If you want to teach me (or anybody) something, show a
concrete relevant example.  If you want to prove that OOP makes
maintenance easier, then show an example where common and realistic
change scenarios result in fewer lines/statements being changed in the
OO code than the p/r code and show what you are counting. You are too
quick to lecture and too slow to demonstrate. Lectures are cheap and
plentiful; good demonstrations are a rare commodity.


> Not everyone is aware of your tendencies.  If you were a complete
> troll, (e.g. never adding value), I would have said that instead.  I didn't.
>
> >> Grow some people skills.<<
>
> coming from you, that is funny.

I try to only be rude when others are rude to me *first*. As far as I
know, I have kept to that personal rule pretty well. If I have slipped,
I am unaware of it.

>
> >>
> >> I think there is nothing wrong with double-entry bookkeeping if it helps
> >> you
> >> and your team to prove that data is correct.  The auditors are business
> >> customers.  We are in IT.  In IT, we make our customers happy.  Any
> >> logical
> >> argument that creates a design that doesn't meet the needs of the
> >> business
> >> is an argument that should be dismissed on it's face.
> >
> >
> > You have not shown that my suggestion "doesn't meet the needs of the
> > business".
>
> I don't have to.  Developers, when left to their own devices, often come to
> the same mistaken conclusion that you did.  He mentioned GAAP.  In regulated
> industries, GAAP is required by law.

In reports or inside the DB? Those are two very different things. If
GAAP dictates table schemas, that would be big news.

> Everywhere else, it is good practice.

Poor normalization is hardly "good practice".

> Arguing about the applicability of pre-existing legislation in the financial
> field with a technologist is pretty pointless.  Setting aside the fact that
> his role is one of implementer, and not arguer, the GAAP principles /should/
> be applied to more institutions in the USA, not fewer, as a measure to
> reduce corporate fraud.

I disagree. Duplication is usually NOT the best data integrity tool
(except for backups).

>
> > Again, you are confusing presentation with internal
> > (computer) representation. With my suggestion, traditional REPORTS can
> > still be printed.
> >
> > Perhaps you could argue that since the vast majority will want an "old
> > style" view, that it would be easier to make the tables shaped that way
> > also. But you didn't.
> >
>
> You've already shown your ignorance.  Why should I contribute to your
> embarrassment by requiring a greater display?  I was being kind.

You have NOT shown any objective pivotal ignorance on my part. Can you
objectively prove that double-entry is the best data integrity tool?

> 
> --- Nick

-T-

0
topmind (2124)
12/28/2006 6:50:58 PM
topmind wrote:
> Thomas Gagne wrote:
>   
>> I've written multiple financial OLTP systems and can confidently say
>> there's no such thing as "john bill 5".
>>
>> A transaction system must be able to record:
>>
>>     Check for $10.
>>     Cash for $20.
>>     Deposit to checking $15.
>>     Payment to line-of-credit: $15.
>>     
>
> These are 4 *separate* transactions, aren't they? It is still:
>
> A, B, 10.00
> C, D, 20.00
> E, F, 15.00
> G, H, 15.00
>   
Think of it as:
A: $10
B: $20
C: $15
D: $15
> Where the letters are the various account categories/codes (unspecified
> here). It is still a form of "john bill 5".
The example, "john bill 5", was moving $5 from john to bill (or the 
reverse--it's hard to tell).
>   Every individual
> accounting transaction has to debit and credit 2 accounts. Traditional
> accounting would do the above like this:
>   
That is incorrect.  The sum of debits and credits = $0.  They aren't 
necessarily symmetric--they just cancel each other out.

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
12/28/2006 7:21:48 PM
Thomas Gagne wrote:
> topmind wrote:
> > Thomas Gagne wrote:
> >
> >> I've written multiple financial OLTP systems and can confidently say
> >> there's no such thing as "john bill 5".
> >>
> >> A transaction system must be able to record:
> >>
> >>     Check for $10.
> >>     Cash for $20.
> >>     Deposit to checking $15.
> >>     Payment to line-of-credit: $15.
> >>
> >
> > These are 4 *separate* transactions, aren't they? It is still:
> >
> > A, B, 10.00
> > C, D, 20.00
> > E, F, 15.00
> > G, H, 15.00
> >
> Think of it as:
> A: $10
> B: $20
> C: $15
> D: $15

What are the letters here? In my example, they were account codes, such
as "cash", "accounts payable", etc.

> > Where the letters are the various account categories/codes (unspecified
> > here). It is still a form of "john bill 5".
> The example, "john bill 5" was moving $5 from john to bill ( or the
> reverse--it's hard to tell ).
> >   Every individual
> > accounting transaction has to debit and credit 2 accounts. Traditional
> > accounting would do the above like this:
> >
> That is incorrect.  The sum of debits and credits = $0.  They aren't
> necessarily symmetric--they just cancel each other out.


Isn't it: A = L + E rather than "= 0"? Summing to zero is not enough.

(Some account combinations are not permitted, to ensure this.)

>
> --
> Visit <http://blogs.instreamfinancial.com/anything.php>
> to read my rants on technology and the finance industry.

-T-

0
topmind (2124)
12/28/2006 9:05:54 PM
topmind wrote:
> Thomas Gagne wrote:
>   
>> topmind wrote:
>>     
>>> Thomas Gagne wrote:
>>>
>>>       
>>>> I've written multiple financial OLTP systems and can confidently say
>>>> there's no such thing as "john bill 5".
>>>>
>>>> A transaction system must be able to record:
>>>>
>>>>     Check for $10.
>>>>     Cash for $20.
>>>>     Deposit to checking $15.
>>>>     Payment to line-of-credit: $15.
>>>>
>>>>         
>>> These are 4 *separate* transactions, aren't they? It is still:
>>>
>>> A, B, 10.00
>>> C, D, 20.00
>>> E, F, 15.00
>>> G, H, 15.00
>>>
>>>       
>> Think of it as:
>> A: $10
>> B: $20
>> C: $15
>> D: $15
>>     
>
> What are the letters here? In my example, they were account codes, such
> as "cash", "accounts payable", etc.
>   
The same--they're accounts.
>   
> <snip>
> Isn't it: A = L + E rather than "= 0"? Summing to zero is not enough.
>   
It is enough.  In fact, it's how accounting is done: 0 = debits + 
credits.  If everything your accounting transaction does adds to 0, 
you're in good shape.  Money going into one account must come from 
another account.  Adding 100 to a credit account and adding 100 to a 
debit account = 0.  Adding 100 to a debit account and subtracting 100 
from a debit account also = 0.  That's good, too.
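
In SQL terms that balancing rule is a one-query check.  A minimal
sketch, assuming a hypothetical txn_entry table that stores one signed
amount per leg of each transaction:

    -- Any row this returns is an unbalanced (and therefore broken) transaction.
    SELECT txn_id, SUM(amount) AS net
    FROM txn_entry
    GROUP BY txn_id
    HAVING SUM(amount) <> 0;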


-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
12/28/2006 9:18:10 PM
"topmind" <topmind@technologist.com> wrote in message 
news:1167165185.860615.272090@h40g2000cwb.googlegroups.com...
<...>
> I was talking with some tech buddies of mine about this once, and we
> concluded that double-entry book-keeping was archaic. One does not need
> to store the same info in two different places with modern DB's. If you
> want to avoid losing a record, then use some kind of increment
> sequencing number.

As with most techie conversations, this one has too narrow a scope. 
Double-entry book-keeping is not just about double checking numbers.  It's 
also about understanding the makeup of your data.  Knowing that you just 
added $100 to your assets is of little value if you don't know where the 
money came from.  Was it an interest accrual, a sale, a fee, an unrealised 
gain; does it have tax exposure, etc.?  All of these are covered by 
appropriately updating the balancing accounts.  The fact that your debits 
and credits must match is a nice data entry sanity check, but the big value 
is knowing more about what has happened.

Focusing on a technical aspect (redundant data) misses the business 
requirement.

Bob Nemec 


0
bobn1 (3)
12/29/2006 12:16:33 AM
Bob Nemec wrote:
> "topmind" <topmind@technologist.com> wrote in message
> news:1167165185.860615.272090@h40g2000cwb.googlegroups.com...
> <...>
> > I was talking with some tech buddies of mine about this once, and we
> > concluded that double-entry book-keeping was archaic. One does not need
> > to store the same info in two different places with modern DB's. If you
> > want to avoid losing a record, then use some kind of increment
> > sequencing number.
>
> As with most techie conversations, this one has too narrow a scope.
> Double-entry book-keeping is not just about double checking numbers.  It's
> also about understanding the makeup of your data.  Knowing that you just
> added $100 to your assets is of little value if you don't know where the
> money came from.  Was it an interest accrual, sale, fee, it is a unrealised
> gain, does it have tax exposure, etc.?  All of these are covered by
> appropriately updating the balancing accounts.  The fact that your debits
> and credits must match is a nice data entry sanity check, but the big value
> is knowing more about what has happened.
>
> Focusing on a technical aspect (redundant data) misses the business
> requirement.

I am not clear on what my suggestion is allegedly missing. It appears
to be representationally equivalent. It simply removes redundant info,
it does NOT remove account codes/classifications.

The only thing I did not explicitly include is some grouping indicator
to know that a given group of transactions should be considered as a
unit of some kind. However, it is relatively trivial to add such a
feature. A grouping table can be added to have groups such as "payment
for invoice 1234". The double-entry suggestion does not explicitly
provide it either, I would note. The basal transaction records can have
gazillion other tracking codes if need be. Columns is columns. Add them
as needed. Just remember to normalize them.

> 
> Bob Nemec

-T-

0
topmind (2124)
12/29/2006 12:33:46 AM
> RA is great, but it proves only structural (relational) correctness.  If
> there's an algebra for network databases then it could prove structural
> (network) correctness as well.  I think both are good things.
>
> In addition to structural correctness we can test for semantic
> correctness.  In banking systems one way to do that is to compare all
> the debit and credit accounts to make sure they net $0.

Why wouldn't relational algebra be able to do that? A simple select
sum() would solve the problem.

Fredrik Bertilsson
http://frebe.php0h.com

0
frebe73 (444)
12/29/2006 4:43:56 AM
frebe73@gmail.com wrote:
>> RA is great, but it proves only structural (relational) correctness.  If
>> there's an algebra for network databases then it could prove structural
>> (network) correctness as well.  I think both are good things.
>>
>> In addition to structural correctness we can test for semantic
>> correctness.  In banking systems one way to do that is to compare all
>> the debit and credit accounts to make sure they net $0.
>>     
>
> Why wouldn't relational algebra be able to do that? A simple select
> sum() would solve the problem.
>
> Fredrik Bertilsson
> http://frebe.php0h.com
>
>   
In that case, at least, sum() is how we do it.

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
12/29/2006 5:10:45 AM
> >> RA is great, but it proves only structural (relational) correctness.  If
> >> there's an algebra for network databases then it could prove structural
> >> (network) correctness as well.  I think both are good things.
> >>
> >> In addition to structural correctness we can test for semantic
> >> correctness.  In banking systems one way to do that is to compare all
> >> the debit and credit accounts to make sure they net $0.
> >>
> >
> > Why wouldn't relational algebra be able to do that? A simple select
> > sum() would solve the problem.
> >
> In that case, at least, sum() is how we do it.

Sum is a standard aggregate function available in relational algebra.
(It is also possible to define custom functions, if needed, and use
them in RA expressions.) Why can RA not be used for evaluating semantic
correctness?

Fredrik Bertilsson
http://frebe.php0h.com

0
frebe73 (444)
12/29/2006 7:45:40 AM
Thomas Gagne wrote:
> Frans Bouma wrote:
> > Thomas Gagne wrote:
> > > In that example, where is the bug: the program or the database?
> > > If the DB stored the zip code precisely as the human intended and
> > > it wasn't lost, overwritten, or corrupted, then the DB is
> > > correct.  If a user entered an incorrect zip code the database
> > > can still be correct, though its data may not.
> > 
> > 	It's about context. A value '1' can be CORRECT in context A but
> > incorrect in context B. The DB is the persistent storage, the place
> > where the application state lives.
> >  
> Is this where we're missing each other?  I'm trying to describe how a
> DB's interface implements its semantics--its "context"--so that the
> value '1' will be recognized as either correct or incorrect and an
> appropriate exception raised.

	I understand what an interface is ;) but I don't see where the
necessity comes from that I have to implement the DML code in procs. I
can do that perfectly in my OO code which targets my O/R core which
generates SQL on the fly.

	My example is of course extreme. Storing a value X in a table field F
from app A, reading it back in app B, and interpreting just the value X
in B only has real value if X means the same in A and B. But that's not
the point. I think that's where we misunderstand each other.

 	What I meant was: to use data you have to make it information first.
If you store that transformation in the one interface available, the proc
API, you can't possibly add additional transformations outside
the DB. Say you have order data in your db, and the application, A,
using it (and thus the app the API supports) doesn't need aggregated
order data, like order totals.

	Now another application, B, is written and B does need aggregated
order data. Do you suggest this feature was already implemented in the
proc API before B was written? I highly doubt it was. So B adds context
bound information to the SAME data already in the DB. If the code had
been placed outside the DB, you could just write B and be
done with it. Now, with procs, you have to adjust the proc API to
support B, which is actually wrong, as you never should alter an
interface. So in theory, you should add ANOTHER interface, especially
for B, so B can get the interface it wants/needs, and A can keep its
known interface.

	Welcome to maintenance nightmare. 
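
	To sketch what I mean (hypothetical procedure and table names):
application A's original procedure, plus the aggregate procedure that
only exists because B came along:

    CREATE PROCEDURE getOrdersForCustomer @customerId INT AS
        SELECT order_id, order_date, status
        FROM orders
        WHERE customer_id = @customerId;
    GO

    -- Added later, solely for application B's aggregated view of the same data.
    CREATE PROCEDURE getOrderTotalsForCustomer @customerId INT AS
        SELECT o.order_id, SUM(ol.quantity * ol.unit_price) AS order_total
        FROM orders o
        JOIN order_line ol ON ol.order_id = o.order_id
        WHERE o.customer_id = @customerId
        GROUP BY o.order_id;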

> > 	what's an improperly formatted zipcode? In the US, you have 5
> > digits, in the netherlands you have 4 digits and 2 characters. A
> > zipcode of 1234 AA is properly formatted for a dutch user of the
> > application, but not correctly formatted for the US user of the
> > program. Hence: context.   
> Exactly.  Back to my comment above--procedures give context that a DB
> can not by itself if it were only a data repository.
> > <snip>
> > 
> > 	I think you can cut a looooooong story short by explaining what you
> > apparently can do in a stored proc which I can't do in code outside
> > the DB.
> >  
> In another response, Nick Malik and I were discussing just this
> point.  Now that I've had a night to think it over I regret agreeing.
> It is true, you can do things in code outside the DB, great things,
> but doing it properly is much more difficult.

	Keep in mind that code outside the DB can just send a parameterized
DML statement to the DB to perform work.

> In that response I described how stored procedures know "how" to do
> things (update tables, record transaction history, etc.) and that
> application code knows "why" and "when" to do things.  Deciding which
> is which, is sometimes difficult.

	Why can only a proc know this? My SQL generators also know how to do
things, and my OO code just performs OO code. When I want to save a
customer, it's contained Order entities, and the orderline entities
inside that customer, I just do
using(DataAccessAdapter adapter = new DataAccessAdapter())
{
	adapter.SaveEntity(myCustomer);
}

	and everything is saved in a transaction in the right order, pk/fk's
are synced etc.

	The OO code knows what it's doing, it's saving the changed STATE of
the entity copies in memory (stored in entity class instances), it
knows why, but how it's done is not the concern of the code, that's the
concern of the o/r core. Do I need a proc to make this happen? No, as
DML is generated on the fly.

	In the small snippet above, the 'adapter' is a service providing
object. It provides a persistence service to the application, to take
care of persistence logic for the entities in memory, and it's for the
particular DB type the adapter represents.

	Using procs instead of an o/r mapper, and seeing the db as a singleton
object with a bunch of non-related methods, will only be useful if you
can solve the problems at the same level of abstraction. I mean: if I
want to persist a new customer, new order entities and new order line
entities (so a full graph), I don't want to write transaction
management code, pk-fk sync code and, ABOVE all, low-level code deciding
which entity to save first. For a simple graph like customer-order-orderline
it's easy, but it's often not that easy, so you need to implement some
kind of DAG sorter.

	Now, as a person who knows what kind of code it takes to make these
'simple' things work, I can tell you: you won't get that from a proc
API. So a DB object with a proc API isn't usable on the same level of
abstraction as an o/r mapper: it will operate on a much lower
level, so the application developer has to fall back on plumbing code
to manage the data access code.

	It then comes down to: using a library with functions in an OO
application.

> There's already enough "art" in program design, so saying divination
> between hows, whys and whens is difficult isn't particularly helpful.
> 
> But even before attacking how that can be done (and I think it's more
> formula than other OO techniques) I think OO people need first to
> stop thinking of the database as an object persistence mechanism and
> start thinking of it as an object.  

	I think most OO people don't want to think about a database at all,
just want to tell a governing object in their code some operation is
done and the state of an entity or entities can be persisted. HOW
that's done, that's up to the governing object, after all, that's what
OO is good at: hiding implementation in a class so you don't duplicate
code.

> > > What is returned is a projection, and that projection has tuples.
> > > If RAM allows you may treat them as a collection or a read-only
> > > stream.  The application doesn't need to know what a set-oriented
> > > statement is if it uses stored procedures.
> >>    
> > 
> > 	no, that's too easy. This is precisely the fault one makes when one
> > claims that using stored procedures in an OO program is as easy as
> > using a class model provided and managed by an O/R mapper: having a
> > proc which returns a set of rows is great, but you have to do
> > something with these rows, or better: you have to transform the
> > concept of a table into the concept of a set of entities, or
> > better: you have to semantically interpret what a row MEANS. If a
> > row means: this is the data for an entity of type Manager (subtype
> > of 'Employee', lame inheritance scheme, but illustrates the point)
> > and another row means:  this is the data for a clerk instance
> > (another subtype of employee), you have to make that interpretation
> > somewhere IF you want to be able to consume the data in an OO
> > fashion.   
> Making tuples both idiomatic (collection or a stream) and semantic
> (what is a tuple?) is a fairly simple thing that can be solved once.
> If we believe in polymorphism and use a language that supports it we
> can treat the tuple as anything we'd like, providing the tuple can
> implement the protocol.  Programmers have an option of either using
> it strictly as a data object or another business object by simply
> substituting a business class for the tuple's.  Voila!  New behavior.

	It sounds all nice in a couple of sentences, but it's not that easy.
Fetching a graph of entities efficiently (using 1 query per graph node)
for example requires sophisticated code. Syncing fk's / pk's and
setting up that syncing so it's performed at the right spot isn't as
simple as just saying what you're saying, it takes a heck of a lot of
code.

	OF COURSE, just reading an array of values and ASSUMING it's a
customer and copying over value at ordinal X to field which is mapped
on ordinal X is easy, that's a small routine, it's all the little
plumbing code around it which eventually ends up in that small routine
which makes it not that easy.

	So that's why I'd like to ask you: IF you make remarks like the one
above, prove to me that they are as easily implemented as you say they
are. I guarantee you: they're not. Even projecting resultsets which are
straightforward, in a generic way onto whatever type you push into the
routine isn't as simple as it sounds in a single sentence, these things
cost code, thinking and if the developer is unlucky, a glorified
afternoon of debugging. And what has the developer gained after that
afternoon? A routine which adds _NOTHING_ to his/her application, other
than plumbing.

	You see, serializing an object graph to a binary file takes 3 lines of
code using simple .net framework classes, (and I assume on Java it's
not different) so why would the developer need to write a truckload of
code first to even get to work with the data in a database in an OO
fashion? Why do you imply the developer first has to write a truckload
of plumbing code?

	Frankly, I don't take remarks like the one above very seriously,
because the remark shows that the idea is there, but no clear
knowledge of how much work that idea actually still requires before it's
even useful. That's a common property of most ideas, and we all have
them, so I don't blame you, but it's not as simple as you suggest it is.

> > 	As I tried to explain earlier: a NIAM model represents the domain
> > model and also the relational model. So in theory I can implement
> > the domain model purely in code without any RDBMS and it would
> > represent the same thing without a single line of SQL.
> >  
> What do you mean when you write, "..I can implement the domain model
> purely in code..".  What code?

	err... just any code written in an OOPL? In theory, I can store my
objects in an internal datastructure or set of datastructures without
ever needing an RDBMS. The application code would look the same, as it
still works with entities.

		FB

-- 
------------------------------------------------------------------------
Lead developer of LLBLGen Pro, the productive O/R mapper for .NET
LLBLGen Pro website: http://www.llblgen.com
My .NET blog: http://weblogs.asp.net/fbouma
Microsoft MVP (C#) 
------------------------------------------------------------------------
0
Frans
12/29/2006 10:34:54 AM
"Thomas Gagne" <tgagne@wide-open-west.com> wrote in message 
news:cdydnQUXZ7bbmAnYnZ2dnUVZ_vamnZ2d@wideopenwest.com...
> Nick Malik [Microsoft] wrote:
> <snip>
>>
>> Are we on the same page so far?
>>
> I think so.

Wonderful!  Of course, after discussing it further, you may wish to alter 
that definition somewhat, and that would be fine, but for now, let's start 
there.  I do need to alter it just a tiny bit: I created the acronym for 
Semantic Persistence Store as "SPM".  Where does the "M" come from?  In my 
love of three letter acronyms (which cannot be explained by rational 
thought), I managed to misspell it.  It's SPS.

Regardless, I live surrounded by Information Architects, and they have 
convinced me to spell out my acronyms as often as possible, so I'll try to 
use the full term frequently.

So, let me put the definition up top (with the acronym corrected):

>> "A Semantic Persistence Store (SPS) is a self-contained component in a 
>> vertical distributed computing system that is responsible for both (a) 
>> the persistence of data in an ACID store and (b) the maintenance of the 
>> business semantics of that store.  To wit: the responsibility of this 
>> component is to insure that the data collected within it and the 
>> transactions that occur against it are not only consistent, efficient and 
>> secure but also semantically correct with respect to the operational, 
>> governance, and reporting needs of the business."

Now, I'll add another layer of context to make sure we stay on the same 
page:

When you call Sybase or SQL Server (which are very similar, since they share 
ancestry) from an application of any kind, you do so by sending a message. 
I haven't worked directly with Oracle or DB-2 but I strongly believe that 
the same is true.  In each case, the software system is running on a server 
and is "listening" for the messages arriving in a known message format over 
a known networking protocol.

Regardless of whether an application runs on the same machine as the 
database software, or on a different machine, regardless of whether we would 
call that machine a 'client workstation' or a 'server', this interface is 
still managed, on the database side, by a listener that looks for messages 
in a known format in a known protocol.  That listener interprets the 
message.  This is the foundation of the connection point and it is the 
hard-and-unwavering line that nearly always creates the interface between 
the database and the application.

We all know this and it is part of the assumption set that everyone brought 
to this thread.  I want to challenge some of your statements (not to dismiss 
them but to refine them), and I want to do so in light of these facts.  That 
is why I call them out.

So we start with database software.  Most software vendors, including all of 
those mentioned above, accept statements in the Structured Query Language as 
a 'good' way to format the message.  In one aspect, SQL serves us well: it 
is a way to issue commands across the network boundary to a listener, and to 
get a nicely formatted response.  The SQL language was invented prior to 
Object Orientation and, I believe we agree on this, is not particularly 
aware of OO constraints or principles.  To wit, most of the things that we 
accept as 'definition' in the OO world, like inheritance and polymorphism, 
range from inelegant to impossible in SQL.

However, the commands themselves must be issued by code that is written 
using the OO paradigm if we are to have our OO applications call databases 
written in one of these packages.  It is this point, where the rubber meets 
the road, that has generated so much discussion.  How, conceptually, can we 
view this point of communication to make it easier to maintain, manage, 
control, and defend our distributed system?

While SQL is not object oriented, your proposition (correct me if I am 
wrong) is that we should create a layer of stored procedures that live on 
the database-side of this communication boundary.  These stored procedures, 
necessarily written in SQL code, could then be viewed as methods of a grand 
'database' object, hiding the implementation of the data elements in the 
tables themselves.  You suggest (once again, please correct me) that by 
doing so, the OO developer needs not rely on (seemingly) complex O/R mapping 
code to bridge the semantic divide between the OO understanding of data and 
the relational mechanisms used to manage it.

Regardless of the complexity or lack of complexity of the code needed to 
bridge the gap, there will be a layer of code that brings data into the 
application and 'characterizes it' or 'translates it' or 'relabels it' in 
some way to make it easier for the application to consume.  This is because 
the internal application will nearly always have its own data model, 
specifically designed to meet the needs of the app, usually expressed as an 
object hierarchy or graph.  It takes code to go from a data stream to an 
object graph.  I don't believe we disagree on this point.  (If we do, you 
need to chat with your best OO developer and get him or her to show you an 
application that doesn't have, or need, this layer.)

Without suggesting that your idea is good or bad, I'd like to suggest that 
the definition of a Semantic Persistence Store (SPS) can be implemented well 
using stored procedures in the manner in which you suggest.  The SPS is more 
than a database.  It is a fully functioning component in a distributed 
system, with its own responsibilities.  It also has physical implementation 
details that it needs to hide.

In your suggested model, the command interface is not Structured Query 
Language.  It is the set of stored procedures that the SPS makes available 
to be called. SQL exists under the covers, but the applications do not use 
it.  Commands arrive, at the listener, in the form of a string similar to 
this: "exec procname param1, param2,..."

Advantages:
A1) the existing listening mechanism that has been created by the software 
vendors can be used by all applications to access the SPS.  These listeners 
are well understood, scalable, and mature.  They are therefore relatively 
defect free.
A2) The team that maintains the SPS need only know a single language, SQL, 
in order to maintain this component.  From a resource management 
perspective, this is very useful as the duties of individuals can be shifted 
over time to match the development, testing, support, and maintenance needs 
of the organization with fewer constraints.
A3) SQL-oriented tools are fairly mature, and allow a fairly rich paradigm 
for inspection, testing, and auditing of the activities of the SPS.

Disadvantages:
D1) The command interface (stored procedure names and parameters) that needs 
to be invoked by OO programming modules has no inherent support for OO 
concepts.  Therefore, while naming conventions and good practices can make 
the interface understandable, the listener will not validate the incoming 
message to insure that the proper method was called for the desired 
activity.  The burden of this constraint falls to the developer of the 
calling code.  In other words, once you can call any procedure, you can 
effectively call them all (security notwithstanding), and no guidance can be 
provided by the command interface to 'encourage' you to pick the correct 
one.

D2) As stated above, plumbing code must exist on the OO side to frame up the 
message and its results in the internal data model of the application. 
However, the command interface for this particular implementation of the SPS 
is based on good practices and naming conventions.  It is not based on 
repeatable and enforced (mathematical) principles.  Therefore, the plumbing 
code must be carefully hand crafted and tested.  It is a labor-intensive 
task that is very difficult to automate.

D2-alternate) If the SPS does not offer up a single canonical model for 
consumption, but rather offers up different models to make it easier for 
application developers to consume the data using objects that are easy to 
map, then the SPS is likely, over time, to offer up a large list of 
interfaces.  (application-specific stored procedures).  The plumbing layer 
still exists... it just exists in SQL instead of C# (or Java).  Some teams 
prefer this model.  I don't have evidence that this mechanism is better or 
worse than the other.

D3) The underlying language, on the db side, is not object oriented.  Thus, 
even if the stored procedure interface has been carefully crafted to 
resemble and reflect OO thinking, the code written in these procedures is 
not 'defended' or 'supported' by the OO paradigm.  Therefore, the conceptual 
hierarchy carefully crafted by the application developer cannot be used in 
the persistence of the data, even when the models match.  As a result, every 
data object must have two bodies of code to interpret it, one in the OO code 
and one in the SQL code.  This is inefficient in terms of human as well as 
code resources.

D4) the choice of the technology (SQL and stored procs) to implement the SPS 
means that a SQL-oriented software package must be installed in the 
application's environment.  While this is often a very reasonable choice, 
there can conceivably be cases where it is not useful to install a 
SQL-oriented software system to meet the needs of the business.  Therefore, 
the choice of implementation technologies has the effect of limiting the 
distributed system design to situations where a SQL-oriented db system is 
appropriate.  While this applies to "94 out of 100" cases, it is not 
universal.

I assume we are on the same page, because I just offered an analysis that 
supports your contention that an SPS can be implemented using a stored 
procedure interface.  Note: I'm not saying whether or not a Semantic 
Persistence Store should be implemented at all.  We have both agreed that it 
should.  I have agreed that the mechanism that you support, in this thread 
and in your blog, will work.  I have tried to outline the tradeoffs that I 
see in doing so.  You may agree or disagree with individual tradeoffs.  I 
don't think we will disagree that, with any implementation of an idea, 
tradeoffs will exist.

Now, I'd like you to consider an alternative.  This is one that I am fond 
of, but don't often get the chance to use.

Let's consider implementing the SPS, but the boundary between the 
application and the SPS will not be the native listener provided by the DB 
software vendor.

Let's create an alternate listener.  It will live in the same manner as the 
db listener, as a process that runs on a server, listening for a known 
message in a known protocol.  It will interpret that message when it gets 
it.

Just as in your model, the code that runs 'under' the listener is part of 
the SPS.  In your suggested implementation, that code was written in SQL and 
presented as a library of stored procedures.  I, on the other hand, would 
like to write the code using an Object Oriented language.  I would like to 
present the interface in terms of objects, as OO languages are comfortable 
doing.  I want the language to help constrain me, so that my interface is 
constrained, and my assumptions are clear (to me, at least ;-).

Envision this: I have created a system whereby the interface is natively 
exposed as objects and methods.  The interface itself is managed using 
Windows Communication Foundation, and we call the objects 'services' and 
their methods are called 'service methods.'

The object oriented code in my service is the ONLY code that is allowed to 
call the native database listener.  No other system is allowed to call the 
RDBMS.  I defend this with network security or some other hack because RDBMS 
systems don't have good provisions for turning off native capabilities.

All applications that wish to use the abilities of my Semantic Persistence 
Store must call my SPS service interface.

Now, in the code within the SPS, I still need to have an OO module call the 
underlying database tables.  However, in this model, I can easily constrain 
the number of places where a particular column is referenced, and I can 
fairly readily constrain things like the pk/fk relationships.  In fact, much 
of the actual code can be generated, rather than hand-written.  In this, 
very narrow and constrained place, in the OO modules that form the interface 
to my database, an O/R mapping tool can be safely and appropriately used, 
resulting in considerably less code to create, debug, and maintain.

In this model, there are stored procedures, but the code inside the SPS can 
call the data tables directly, and usually it does.  Stored procedures are 
useful in many ways, to encapsulate algorithms and mechanisms, much as we 
would with the Strategy pattern, but they are NOT the only interface to the 
database for this narrow segment of OO code.

The services interface is implemented using a base class that each of the 
objects inherit from.  Therefore, in addition to using a scalable 
communication technology like WCF, we can be sure that basic standards will 
be followed and that specific methods are available in every service.

Applications still need plumbing code.  They will call the service interface 
and will get back a canonical model of the data.  They will need to 
recharacterize that data to meet the internal data model in their 
environment.  That will still occur in apps that cannot adopt the canonical 
model directly.

Advantages:
A1) The command interface (services and methods) that needs to be invoked by 
OO programming modules can be crafted in an OO manner (or not, anything 
worth doing, is worth doing really badly at least once, to be an example to 
others ;-).  Assuming good craftsmanship at the service interface, the OO 
developer has a rich object-oriented model to drive the consumption of the 
data services available through the Semantic Persistence Store.

A2) As stated above, applications still need plumbing code to call the 
Semantic Persistence Store.  However, since we derived the services from a 
base class, we may be able to generate some of this application code as well 
because we can write object-to-object mapping tools that take advantage of 
things like reflection.  On the other hand, many applications will be able 
to adopt the canonical model, making the plumbing layer trivial (more like a 
Proxy pattern than a Facade).

A3) The underlying language of the SPS is object oriented.  Therefore, the 
conceptual hierarchy carefully crafted by the application developer is 
translated, as closely as possible, to SQL using O/R mapping tools.  This is 
a bit more efficient in terms of code and human resources.  Note that this 
is not a perfect answer.  As I said before, stored procs are still needed 
and extremely useful.  However, the direct translation from the canonical 
object model to SQL code can largely be generated.

A4)  the SPS can be implemented without using SQL technology at all.  For 
enterprises that need a small or light or minimal data store that runs in 
process, SQL is a massive overkill.  The SPS paradigm doesn't require SQL at 
all.  You could store the data directly in an XML file if you would find 
that useful.  This allows the SPS mechanism to be written in whatever 
technology makes the most sense.


Disadvantages:
D1) the listening mechanism is going to be less mature and will likely be 
implementation specific.  It could be based on an ESB (on JMS) or perhaps 
SOAP, but it will surely be less ubiquitous, and may be less scalable, less 
mature, and may have defects that have not yet been found and resolved.
D2) The team that maintains the SPS will need to know at least two 
languages, SQL and C# (or SQL and Java), in order to maintain this 
component.  From a resource management perspective, this adds a few wrinkles 
into the mix.  How many of each do you need?  If your needs change over 
time, will you be ready to respond?  How many should be cross-trained?  That 
said, the fact that our staffing model would need to be updated doesn't 
invalidate the design.  It does mean that we have to practice good change 
management principles, though, to bring this about.

D3) Service-oriented tools are fairly immature, and do not yet allow a rich 
paradigm for inspection, testing, and auditing of the activities of the SPS. 
This is rapidly changing, but has not reached the level of sophistication of 
the DB-oriented model just yet.

>
> What I read in your description above, regarding the responsibilities of 
> the SPM and its componentization (which I read to mean it is interacted 
> with via messages) sounds a lot like an object to me.

I view it as more than one object.  I view it as an interface composed of 
many objects ('services') that are fairly independent of one another.

> Alan Kay emphasized the objects were the "lesser" idea and that "messages" 
> are what was (and still is) important.  Looking at an SDM and treating it 
> as though it were a component (object) brings a watershed of benefits that 
> in practice simplify the relationship between object oriented programs and 
> the SDM.  When those messages are emphasized over object/relational 
> mapping, systems are simpler, more explicit, and better reflect the 
> intentions of applications--or at least they do for OLTP systems.

On this, we violently agree.

--- Nick 


0
nickmalik (325)
12/29/2006 4:59:27 PM
> I consider SQL to be a low level language, as far as RDBs are
> concerned, because it is application-ignorant.  It's like C for
> relational operations.  SQL doesn't know anything about my application.

All languages are application-ignorant. SQL is no different from Java
or C# in that respect. Java doesn't know anything about your application
either.

> So, I subclass Model (so-to-speak) and add data that is domain-specific
> to create my domain-specific database.

You are talking about creating tables here, right?

> Why access it from applications
> using the same domain-ignorant language?  Instead, I construct
> procedures that create a domain-specific interface.

Isn't the database schema already a domain-specific interface? Your
procedures won't make it more domain-specific.

> Instead of the
> lower-level
>
>     select * from account, user where user.userId=X and account.userId =
>     user.userId
>
> when instead I can use
>
>     exec getAccountsFor @userId=X
>
> ?
>
> Besides its brevity, the procedure name clearly communicates the intent
> of the operation (stbpp pattern: intention revealing message),

Doesn't the word "select" clearly communicate the intent of the
operation?

> makes obvious its parameters,

If you can read the schema definition, the parameters are already
obvious.

> and provides a layer of indirection behind which
> its implementation can change without affecting the procedure's users.

But you also make it all very inflexible. In your example, you select
all columns in the account and user tables. Let's say you are only
interested in some of the columns, or in other columns from a third
table; then you need to create a new procedure that is almost identical
to the first one.

>  From SQL I've constructed procedures to provide a higher-level,
> domain-specific, language-and-paradigm-neutral interface to a
> domain-specific database.

You have not created something higher-level. Procedures are rather
low-level compared to relational calculus. Your procedures are indeed
domain-specific, but so is the schema; your procedures don't contribute
anything new. The procedures are written in some language and need to
be called using a specific technology, so I can't understand how you
can claim they are language-neutral. Claiming that procedures are
paradigm-neutral is not really correct either. In OO, procedures don't
exist. The only reason procedures can in some way be considered
paradigm-neutral is the fact that they are very low-level.

> To find all the places in my application source code that get account
> information with user IDs it is much easier to find senders (callers) of
> getAccountsFor than it would be to find all the SQL referencing both the
> account and user tables.

But getAccountsFor is not the only procedure that uses the account or
user tables. How do you know which procedures use which tables?

> Could the latter be done?  Sure, but when a
> more efficient and accurate alternative exists why would you?

Why is it so important to "find all the places in my application source
code that get account information with user IDs"? Your assumption that
your way is more efficient is only true if there are just one or a few
procedures performing this task. Because any procedure can do the same
task, I don't find your way more accurate either. You basically need to
browse the implementation of all the procedures.

Fredrik Bertilsson
http://frebe.php0h.com

0
frebe73 (444)
12/31/2006 5:47:51 AM
Thomas,

I was hoping to get your feedback on my reply from the 29th on this part of 
the thread.

-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
--
"Thomas Gagne" <tgagne@wide-open-west.com> wrote in message 
news:cdydnQUXZ7bbmAnYnZ2dnUVZ_vamnZ2d@wideopenwest.com...
> Nick Malik [Microsoft] wrote:
> <snip>
>>
>> Are we on the same page so far?
>>
> I think so.
>> The only challenge is that in your description of 'database as object,' 
>> you drew an equivalence between the implementation of the SDM (stored 
>> procedures, especially) and the object-oriented-ness of the SDM.
>>
>> My challenge is with the equivalence that you have drawn between the 
>> procedural, non-OO oriented, implementation mechanisms presented by our 
>> "modern" RDBMS systems, and the conceptual framework you are attempting 
>> to use them in (object oriented design).
>>
>> My point: they are not equivalent.  You cannot, and should not, view the 
>> stored procedures as object methods.  That is the place where we differ 
>> in this discussion, as far as I can tell.
>>
>> Can we agree on the partitioning points?  Let's define the "thing" 
>> (database, SDM, etc) using terms that you have described.  I'm going to 
>> add a little context to make it useful from my viewpoint as well.  For 
>> the sake of the conversation, can we agree on a definition?  I think that 
>> will lower the confusion.  I'd like to, if I may, change the term I 
>> suggested in the prior thread, from Specific Datamodel to Semantic 
>> Persistence Store (SPS) to better capture this concept:
>>
>> "A Semantic Persistence Store (SPS) is a self-contained component in a 
>> vertical distributed computing system that is responsible for both (a) 
>> the persistence of data in an ACID store and (b) the maintenance of the 
>> business semantics of that store.  To wit: the responsibility of this 
>> component is to ensure that the data collected within it and the 
>> transactions that occur against it are not only consistent, efficient and 
>> secure but also semantically correct with respect to the operational, 
>> governance, and reporting needs of the business."
>>
>> If we can agree on this foundation, then I think we can make some headway 
>> on the Object Oriented point of divergence in our discussion.
>>
>>
> I think we can agree.  Your comments and questions have been helpful.
> What I read in your description above, regarding the responsibilities of 
> the SPS and its componentization (which I read to mean it is interacted 
> with via messages) sounds a lot like an object to me.  Alan Kay emphasized 
> that objects were the "lesser" idea and that "messages" are what was (and 
> still is) important.  Looking at an SDM and treating it as though it were 
> a component (object) brings a watershed of benefits that in practice 
> simplify the relationship between object oriented programs and the SDM. 
> When those messages are emphasized over object/relational mapping, systems 
> are simpler, more explicit, and better reflect the intentions of 
> applications--or at least they do for OLTP systems.
>
>
> -- 
> Visit <http://blogs.instreamfinancial.com/anything.php> to read my rants 
> on technology and the finance industry. 


0
nickmalik (325)
12/31/2006 6:40:47 PM
Nick Malik [Microsoft] wrote:
> Thomas,
>
> I was hoping to get your feedback on my reply from the 29th on this part of 
> the thread.
>
>   
I'm going to get around to it, but it required more thinking and typing 
than I had available to me over the holiday.

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
1/2/2007 3:16:29 PM
Nick Malik [Microsoft] wrote:
> <snip>
We're on the same page at this point, and agree on the advantages.  I 
might add more but they wouldn't further our mutual understanding enough 
to keep them off the cutting room floor.
> Disadvantages:
> D1) The command interface (stored procedure names and parameters) that needs 
> to be invoked by OO programming modules has no inherent support for OO 
> concepts.  Therefore, while naming conventions and good practices can make 
> the interface understandable, the listener will not validate the incoming 
> message to insure that the proper method was called for the desired 
> activity.
While you're correct that SQL is as unaware of OO concepts as it is of 
my C code's memory management, that doesn't mean the procedures don't 
have dependencies between them that impose a "correct" order.

For example, before transaction detail can be added, a transaction must 
exist.  Before a transaction exists, a valid session (recording the 
user's ID, login time, application, etc.) must also be available.  The 
SQL message for this may resemble:

    declare @sessionKey IDENTREF,
            @transactionKey IDENTREF,
            @transactionDetailKey IDENTREF
    declare @rc INT

    begin tran

    set @sessionKey = <a previously validated session key>
    exec @rc = addTransaction
         @sessionKey = @sessionKey,
         @transactionKey = @transactionKey OUTPUT
    if (@rc = 0) begin
        exec @rc = addTransactionDetail
             @transactionKey = @transactionKey,
             @transactionDetailKey = @transactionDetailKey OUTPUT,
             @transactionTypeName = '....',
             @accountKey = <an account key>,
             @transactionAmount = <some amount>
    end
    if (@rc = 0) begin
        exec @rc = addTransactionDetail
             @transactionKey = @transactionKey,
             @transactionDetailKey = @transactionDetailKey OUTPUT,
             @transactionTypeName = '....',
             @accountKey = <another account key>,
             @transactionAmount = <some amount>
    end
    if (@rc = 0) begin
        exec @rc = checkTransactionBalance
             @transactionKey = @transactionKey
    end
    if (@rc != 0) begin
        -- some exception handling stuff
        rollback tran
    end
    else begin
        -- some finishing stuff
        commit tran
    end

addTransactionDetail requires a valid transactionKey, which can only be 
obtained from a successful call to addTransaction.  addTransaction can 
only be called with a valid sessionKey, which it checks to make sure the 
session is valid (user is still logged on, the database is in read-write 
mode, etc.).  So even if it knows nothing of OO data graphs, the order of 
the stored procedure calls is not random or arbitrary.

There are other semantic rules buried in the procedures that are not 
obvious.  For instance, each account is allowed only a single currency, 
and all transactions (including their detail) must be in a common 
currency.  Valid accountKeys are also required as are transaction names, 
etc.

I'm not so sure I would call this a disadvantage as much as I would 
consider it the nature of the beast.
> D2) As stated above, plumbing code must exist on the OO side to frame up the 
> message and its results in the internal data model of the application. 
> However, the command interface for this particular implementation of the SPS 
> is based on good practices and naming conventions.  It is not based on 
> repeatable and enforced (mathematical) principles.  Therefore, the plumbing 
> code must be carefully hand crafted and tested.  It is a labor-intensive 
> task that is very difficult to automate.
>   
I agree it must be carefully implemented to be useful, but not that it 
is labor-intensive.  If you look at the SQL you can see that it has a 
structure.  That structure is the transaction, and a transaction can be 
modeled inside OO very easily.  Detail objects can be added to the 
transaction object very easily.  When complete, the transaction object 
can be asked to express itself in SQL, or told to put itself onto the 
SPS, and it can also be prepared to handle exceptions properly.
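Here is a minimal sketch of what I mean, in Java rather than Smalltalk so it 
isn't tied to my shop (the class names are invented, and a real implementation 
would bind parameters instead of concatenating values into the SQL):

    import java.util.ArrayList;
    import java.util.List;

    // A transaction object that can express itself as the stored-procedure
    // batch shown earlier.  Names and types are illustrative only.
    public class SpsTransaction {
        private final String sessionKey;
        private final List<Detail> details = new ArrayList<Detail>();

        public SpsTransaction(String sessionKey) { this.sessionKey = sessionKey; }

        public void addDetail(String typeName, String accountKey, String amount) {
            details.add(new Detail(typeName, accountKey, amount));
        }

        // Ask the transaction to express itself in SQL.
        public String asSql() {
            StringBuilder sql = new StringBuilder();
            sql.append("declare @rc INT, @transactionKey IDENTREF, @transactionDetailKey IDENTREF\n");
            sql.append("begin tran\n");
            sql.append("exec @rc = addTransaction @sessionKey = '").append(sessionKey)
               .append("', @transactionKey = @transactionKey OUTPUT\n");
            for (Detail d : details) {
                sql.append("if (@rc = 0) exec @rc = addTransactionDetail")
                   .append(" @transactionKey = @transactionKey,")
                   .append(" @transactionDetailKey = @transactionDetailKey OUTPUT,")
                   .append(" @transactionTypeName = '").append(d.typeName).append("',")
                   .append(" @accountKey = '").append(d.accountKey).append("',")
                   .append(" @transactionAmount = ").append(d.amount).append("\n");
            }
            sql.append("if (@rc = 0) exec @rc = checkTransactionBalance @transactionKey = @transactionKey\n");
            sql.append("if (@rc != 0) rollback tran else commit tran\n");
            return sql.toString();
        }

        private static class Detail {
            final String typeName, accountKey, amount;
            Detail(String typeName, String accountKey, String amount) {
                this.typeName = typeName;
                this.accountKey = accountKey;
                this.amount = amount;
            }
        }
    }

Building the balanced pair of entries above becomes two addDetail calls 
followed by asSql, and the string that comes back is exactly the sort of 
message shown earlier.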

It is in doing this that an OO system can easily interface with a 
relational database without mapping OO datagraphs to relational 
projections or tuples.  It should not go unappreciated that the SQL 
above has no OO baggage with it, and can just as easily be created by C, 
COBOL, FORTRAN, or by hand.

And remember, when used manually there is no plumbing code necessary.  
As this might be the first interface client the SPS has, this is an 
important feature.  In the alternate model you describe below (way 
below) there is no human-accessible interface to the SPS.
> D2-alternate) If the SPS does not offer up a single canonical model for 
> consumption, but rather offers up different models to make it easier for 
> application developers to consume the data using objects that are easy to 
> map, then the SPS is likely, over time, to offer up a large list of 
> interfaces.  (application-specific stored procedures).  The plumbing layer 
> still exists... it just exists in SQL instead of C# (or Java).  Some teams 
> prefer this model.  I don't have evidence that this mechanism is better or 
> worse than the other.
>   
I'm unsure if I understood this one.  At first I thought you might be 
suggesting the SPS would be creating client-friendly procedures for each 
of the client environments it may have to support.  This sounds like a 
kind of interface entropy.  By making the SPS' interface compatible with 
multiple clients (and making the interface larger with more procedures) 
you're suggesting that the mapping (or some portion of it) has migrated 
into the SPS interface.  I disagree this necessarily occurs, and if it 
did it would violate an OO designer's sense of cohesiveness.  Though I 
don't have any dedicated DBAs, I would also suspect it would violate 
their sense of aesthetics and would contribute to an SPS that is more 
difficult to maintain.

If I understood your point, I would not include it among the model's 
disadvantages since it is not a necessary evil.  It would instead be a 
self-inflicted wound which makes it little different from the other 
mutilations programmers are responsible for creating themselves, and 
there's no limit to the number or kinds of these in everyone else's 
code--but not yours or mine.
> D3) The underlying language, on the db side, is not object oriented.  Thus, 
> even if the stored procedure interface has been carefully crafted to 
> resemble and reflect OO thinking, the code written in these procedures is 
> not 'defended' or 'supported' by the OO paradigm.  Therefore, the conceptual 
> heirarchy carefully crafted by the application developer cannot be used in 
> the persistence of the data, even when the models match.  As a result, every 
> data object must have two bodies of code to interpret it, one in the OO code 
> and one in the SQL code.  This is inefficient in terms of human as well as 
> code resources.
>   
Why would the SPS' procedures require defending by a 
paradigm-in-execution?  Why should the presence of OO applications 
require anything of the SPS beyond the same provision for other 
paradigms like structured, aspect, or functional?

If we created a web service, we would not create different interfaces for 
each application language or paradigm, whether it be C#, Smalltalk, C, 
assembly, LISP, Prolog, Haskell, or AspectJ.

How much time do programmers spend defending a Date object?  And how 
much of that is duplicative?  Shouldn't a date object be responsible 
for its own defense?
> D4) the choice of the technology (SQL and stored procs) to implement the SPS 
> means that a SQL-oriented software package must be installed in the 
> application's environment.
True, but stored procedures are just one way to implement an SPS's 
interface.  Rather than discuss how those are implemented (they're 
conceptually as simple as procedures but are attended by other 
distracting nuances), for the moment I'd like to stick to relational 
databases that support procedures (mindful that MySQL only recently 
added these).
> <snip description of alterante>
Much of what you're describing isn't conceptually different.  You've 
constructed an interface between your applications and your database, 
and that interface combined with your database is your SPS.

The biggest difference seems to be that the model's interface, being 
either OO or XML-based (not to mention Windows-based), is arrogant (the 
interface--not you!).  It prefers to speak only with applications that 
share its vocabulary and pedigree.  Lower-level languages or manual 
entries are not permitted to eat with the grown-ups.

Taking your example a little further, why not just use an ODBMS?  The 
interface has already created an exclusive club of clients; why not use 
an exclusive database management system (XDBMS)?

Granted, that exclusivity has value (I tried it once with an ODBMS) but 
one of the problems (aside from performance) was reporting--but we'll 
save that for another thread.
> Now, in the code within the SPS, I still need to have an OO module call the 
> underlying database tables.  However, in this model, I can easily constrain 
> the number of places where a particular column is referenced, and I can 
> fairly readily constrain things like the pk/fk relationships.  In fact, much 
> of the actual code can be generated, rather than hand-written.  In this, 
> very narrow and constrained place, in the OO modules that form the interface 
> to my database, an O/R mapping tool can be safely and appropriately used, 
> resulting in considerably less code to create, debug, and maintain.
>   
Doesn't that defeat, or at least deflate, the value of an OR mapping 
tool or framework?  With only a single client, OR tools have little or no 
bang for the buck.  It's like getting a pellet gun for the purchase price, 
training, maintenance, and servicing fees of an Apache helicopter.  It 
seems disproportionate to me.
> <snip>
>
> The services interface is implemented using a base class that each of the 
> objects inherit from.  Therefore, in addition to using a scalable 
> communication technology like WCF, we can be sure that basic standards will 
> be followed and that specific methods are available in every service.
>   
That's what I mean by arrogant.  Aloof is perhaps a better word.  It 
doesn't carry the pejorative baggage arrogant does.

In my pedestrian way of thinking, a different way of doing or thinking 
about things shouldn't require exotic ingredients or access to private 
clubs.  C programmers are able to create OO-inspired interfaces to their 
systems without resorting to OO languages (ctlib and dblib come to 
mind).  Resourceful programmers ought to be able to assemble them from 
whatever is lying around, and the end result should reflect its humble 
beginnings and avoid becoming presumptuous (I'm looking hard for 
synonyms for arrogant).

This is why I think stored procedures in front of a DB remain the 
simplest and best example of treating a database as an object, or as we 
have started to call it, a Semantic Persistent Store (SPS).
> <snip advantages>
>
>
> Disadvantages:
> D1) the listening mechanism is going to be less mature and will likely be 
> implementation specific.  It could be based on an ESB (on JMS) or perhaps 
> SOAP, but it will surely be less ubiquitous, may be less scalable, and may 
> have defects that have not yet been found and resolved.
>   
Let's assume you're at least using mature message-oriented middleware so 
network nonsense isn't an issue.  If your SPS component is the only 
pathway in and out, it is likely to mature fairly quickly as every client 
application in the system will be using it.  So, though you may list 
this as a disadvantage, half of it is eliminated with MOM, and the other 
half disappears over a fairly short period of time.
> D2) The team that maintains the SPS will need to know at least two 
> languages, SQL and C# (or SQL and Java), in order to maintain this 
> component.  From a resource management perspective, this adds a few wrinkles 
> into the mix.  How many of each do you need?  If your needs change over 
> time, will you be ready to respond?  How many should be cross-trained?  That 
> said, the fact that our staffing model would need to be updated doesn't 
> invalidate the design.  It does mean that we have to practice good change 
> management principles, though, to bring this about.
>   
But this problem already exists in many shops and isn't unique to this 
model.  Nor does this model necessarily exacerbate the problem.  The 
shops that probably need to worry more than others are those with 
separate (and antagonistic) database and systems programming groups--but 
those shops already have other problems that aren't unique to this model.
> D3) Service-oriented tools are fairly immature, and do not yet allow a rich 
> paradigm for inspection, testing, and auditing of the activities of the SPS. 
> This is rapidly changing, but has not reached the level of sophistication of 
> the DB-oriented model just yet.
>   
I don't think they will, because in making things easier for computers 
to use and/or consume (like XML) those structures become increasingly 
difficult for humans to read.  Think of sendmail.cf, and how that may be 
really easy for sendmail to parse; but have you ever tried creating one 
by hand, or even describing to a fellow programmer /exactly/ what's going 
on inside there?  Even if you could, think about the level of training 
and genius required and ask yourself how many other people share that.

As you indicated, we agree more than we disagree.  I think we both 
understand and recognize the concept and the model, even if I think 
yours is snootier than mine. ;-)

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
1/2/2007 8:06:50 PM
frebe73@gmail.com wrote:
>> I consider SQL to be a low level language, as far as RDBs are
>> concerned, because it is application-ignorant.  It's like C for
>> relational operations.  SQL doesn't know anything about my application.
>>     
>
> All languages are application-ignorant. SQL is not different from Java
> or C# in that aspect. Java doesn't know anything about your application
> either.
>
>   
It is true that out-of-the-box all languages are application ignorant.  
Out-of-the-box so are most programmers.  The difference is what becomes 
of them after a few months time?

I don't know what happens in other shops, but in ours Smalltalk may 
remain essentially Smalltalk but its class hierarchy starts to look more 
like our business.  The more business-oriented classes we have, the 
larger the building blocks we have to solve business problems.  I only need 
to describe what a loan is once, and from then on I can solve problems 
with loans.  It is the ability of a language to assimilate a business' 
vocabulary that moves it from being a 1st, 2nd, or 3rd-level language and 
if not becoming 4th-level, at least makes it a large value of 3.
>> So, I subclass Model (so-to-speak) and add data that is domain-specific
>> to create my domain-specific database.
>>     
>
> Your are talking about creating tables here, right?
>   
Yes.  In the same way I add instance variables and methods when I 
subclass Object, I add tables and procedures when I "subclass" the model DB.
>   
>> Why access it from applications
>> using the same domain-ignorant language?  Instead, I construct
>> procedures that create a domain-specific interface.
>>     
>
> Isn't the database schema already a domain-specific interface? Your
> procedures won't make it more domain-specific.
>   
Au contraire.  When I add procedures for doing a business-specific thing 
I make my Semantic Persistent Store (discussed elsewhere on this thread) 
more aware of the semantics of my business.
>   
>> Instead of the
>> lower-level
>>
>>     select * from account, user where user.userId=X and account.userId =
>>     user.userId
>>
>> when instead I can use
>>
>>     exec getAccountsFor @userId=X
>>
>> ?
>>
>> Besides its brevity, the procedure name clearly communicates the intent
>> of the operation (stbpp pattern: intention revealing message),
>>     
>
> Doesn't the word "select" clearly communicate the intent of the
> operation?
>   
You know, that's the problem with simplified examples.  People tend not 
to extrapolate the simple example into a more complex one.  Yes, the 
word SELECT does indicate you're doing a query, but what exactly is 
being queried and the rules expressed by way of table names, joins, and 
discriminations tend to obfuscate the intent.  No matter how complex a 
SELECT may be (or several selects) the procedure's signature should be 
simpler to read.

It seems odd to me that programmers accustomed to creating functions in 
their own code to add structure, increase readability and reuse resist 
doing the same for their databases.
>   
>> makes obvious its parameters,
>>     
>
> If you can read the schema definition, the parameters are already
> obvious.
>   
Telling, perhaps, but not "obvious".  Though you might be given a clue 
as to the nature of my business by looking at the tables, are you 
prepared to assert you can correctly update the tables?
>   
>> and provides a layer of indirection behind which
>> its implementation can change without affecting the procedure's users.
>>     
>
> But you also makes it all very inflexible. In your example, you select
> all columns in the account and user tables. Lets say you are only
> interested in some of the columns or other columns from a third table,
> you need to create a new procedure that are almost identical to the
> first one.
>   
You could do that, but I wouldn't.  If a single SELECT can satisfy both 
then I can update the existing procedure to include the new fields and 
now both may use it.
>   
>>  From SQL I've constructed procedures to provide a higher-level,
>> domain-specific, language-and-paradigm-neutral interface to a
>> domain-specific database.
>>     
>
> You have not created something higher-level. Procedures are rather
> low-level, compared to relational calculus.
You and I are not using the same definition of level.  See my 
description above.
>  Your procedures are indeed
> domain-specific but so are the schema, your procedures didn't
> contribute to anything.
Is that how you think of your objects?  That the data inside them is 
domain-specific enough that methods don't add anything useful to them?
>  The procedures are written in some language and
> need to be called using a specific technology, so I can't understand
> how you can claim them to be language-neutral.
They're language neutral in the sense that they can be equally invoked, 
and their results are accessible, from C, Java, Smalltalk, Perl, PHP, 
LISP, FORTRAN, etc.  It doesn't matter which language the procedure is 
called from; the results from the database (object) are the same.
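For instance, calling it from Java through JDBC looks like this (the 
connection URL, the user ID value, and the column name are made up for the 
example); the Smalltalk, Perl, or hand-typed isql versions send the very same 
message:

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class GetAccountsForExample {
        public static void main(String[] args) throws SQLException {
            // Connection details are illustrative; any JDBC driver will do.
            try (Connection con = DriverManager.getConnection(
                     "jdbc:sybase:Tds:dbhost:5000/finance", "appUser", "secret");
                 CallableStatement call = con.prepareCall("{call getAccountsFor(?)}")) {
                call.setInt(1, 42);                       // @userId
                try (ResultSet rs = call.executeQuery()) {
                    while (rs.next()) {
                        // Column name assumed for the example.
                        System.out.println(rs.getString("accountName"));
                    }
                }
            }
        }
    }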
>  Claiming that procedures
> are paradigm-neutral is not really correct either.
Not true.  They are paradigm neutral in the sense both their invocation 
and results are understandable by aspect-oriented, functional, 
object-oriented, and structured languages.  Procedures don't reserve the 
benefits of their existence to OO-only languages or XML-capable 
languages or any other exclusive feature a language may support.  The 
bar is set deliberately by APIs so as not to discriminate against client 
languages and environments.
>> To find all the places in my application source code that get account
>> information with user IDs it is much easier to find senders (callers) of
>> getAccountsFor than it would be to find all the SQL referencing both the
>> account and user tables.
>>     
>
> But getAccountsFor is not the only procedure that uses the account or
> user tables. How do you know which procedures that uses which tables?
>   
I don't know which DBMS you're using, but most modern DBMSs support 
reflection in the form of system tables and functions that can answer 
the question, "which procedures use table x?" in the same way modern OO 
environments can answer the question, "which methods access instance 
variable x?" or "which methods call the method f(x)?" or "who implements 
the method f(x)?"
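For example, on a DBMS that exposes the INFORMATION_SCHEMA views (SQL Server 
and MySQL do; other vendors have their own catalogs, such as Oracle's 
ALL_DEPENDENCIES), the question can be asked from any client.  A rough sketch 
in Java--the LIKE search is crude and some catalogs truncate the stored 
definition, so the vendor's dependency views give more precise answers:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class WhoUsesTable {
        // Lists stored procedures whose source text mentions the given table.
        public static void listProceduresUsing(Connection con, String table) throws SQLException {
            String sql = "select routine_name from information_schema.routines "
                       + "where routine_type = 'PROCEDURE' "
                       + "and routine_definition like ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, "%" + table + "%");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("routine_name"));
                    }
                }
            }
        }
    }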
>   
>> Could the latter be done?  Sure, but when a
>> more efficient and accurate alternative exists why would you?
>>     
>
> Why is it so important to "To find all the places in my application
> source code that get account
> information with user IDs"? Your assumption that it is more efficient
> using your way is only true, if it only exists one or a few procedures,
> performing this task. Because any procedure can do the same task, I
> don't find your more more accurate. You basically need to browse the
> implementation of all procedures.
>   
Again, I doubt that's how you create your OO code, ".. any [method] can 
do the same task.."  I suspect that like many good OO designers you 
don't have multiple methods that do the same thing for any given class.  
How many methods are there to add days to a date?  Even if they took the 
same number and types of arguments, would you create multiple methods with 
different names?

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
1/3/2007 3:30:28 PM
Thomas Gagne wrote:
> frebe73@gmail.com wrote:
> >> I consider SQL to be a low level language, as far as RDBs are
> >> concerned, because it is application-ignorant.  It's like C for
> >> relational operations.  SQL doesn't know anything about my application.
> >>
> >
> > All languages are application-ignorant. SQL is not different from Java
> > or C# in that aspect. Java doesn't know anything about your application
> > either.
> >
> >
> It is true that out-of-the-box all languages are application ignorant.
> Out-of-the-box so are most programmers.  The difference is what becomes
> of them after a few months time?
>
> I don't know what happens in other shops, but in ours Smalltalk may
> remain essentially Smalltalk but it class hierarchy starts to look more
> like our business.  The more business-oriented classes we have the
> larger building blocks we have to solve business problems.  I only need
> to describe what a loan is once, and from then on I can solve problems
> with loans.  It is the ability of a language to assimilate a business'
> vocabulary that moves it from being 1st, 2nd, or 3rd-level language and
> if not becoming 4th-level, at least makes it a large value of 3.

Can you demonstrate publicly that Smalltalk does this better than
procedural/relational?

> >> Instead of the
> >> lower-level
> >>
> >>     select * from account, user where user.userId=X and account.userId =
> >>     user.userId
> >>
> >> when instead I can use
> >>
> >>     exec getAccountsFor @userId=X
> >>
> >> ?
> >>
> >> Besides its brevity, the procedure name clearly communicates the intent
> >> of the operation (stbpp pattern: intention revealing message),
> >>
> >
> > Doesn't the word "select" clearly communicate the intent of the
> > operation?
> >
> You know, that's the problem with simplified examples.  People tend not
> to extrapolate the simple example into a more complex one.  Yes, the
> word SELECT does indicate you're doing a query, but what exactly is
> being queried and the rules expressed by way of table names, joins, and
> discriminations tend to obfuscate the intent.  No matter how complex a
> SELECT may be (or several selects) the procedure's signature should be
> simpler to read.
>
> It seems odd to me that programmers accustomed to creating functions in
> their own code to add structure, increase readability and reuse resist
> doing the same for their databases.

People not doing X to a tool and a tool not capable of doing X are 2
different things. I will agree that many database-centric shops may
resist abstraction and simplification techniques, but that does not
mean it is not possible.  I've found lots of ways to simplify the
"view" of a business using tables and table-oriented techniques. In
many cases people resisted for political reasons or (probably
unjustified) performance concerns.  I agree that the "raw" tables may
not be the best abstraction for a given department or application
(sometimes due to bad schema design). But I've almost always found ways
to rework the data view to fit those. In some cases it requires
creating "views", and in others it requires reprocessing the data
nightly to produce a set of tables that better fit the given niche.

I will also agree that existing RDBMS and SQL are often lacking in
useful abstraction and meta-abilities. But this is not the fault of
relational itself. I believe it to be from an obsession over
performance. But even with their limits, I'll take them over
navigational techniques.

I will agree that the "Smalltalk culture" spends more time on
abstraction issues. However, this is due to culture, not the ability of
relational. Idiots can mess up any language and any tool. Relational
algebra and table-based models can be very powerful abstraction tools.
OO designs tend to reinvent a lot of it the hard way, wasting code on
mundane collection-handling and set-oriented issues because OO does not
provide it natively. OO is obsessed with "hiding" rather than managing
interdependencies, the real work.

> >> and provides a layer of indirection behind which
> >> its implementation can change without affecting the procedure's users.
> >>
> >
> > But you also makes it all very inflexible. In your example, you select
> > all columns in the account and user tables. Lets say you are only
> > interested in some of the columns or other columns from a third table,
> > you need to create a new procedure that are almost identical to the
> > first one.
> >
> You could do that, but I wouldn't.  If a single SELECT can satisfy both
> then I can update the existing procedure to include the new fields and
> now both may use it.

Yes, but you have to make 2 stops to make the change. Also, I've done
things like pass criteria expressions (WHERE clause). You can't compete
with this flexibility very easily with a pure OOP interface because
passing expressions is more powerful than passing individual
parameters. It thus increases reuse. There are some OO API's that can
be used to build up Boolean expressions, but that is highly verbose and
takes one back to the assembler language days.


> --
> Visit <http://blogs.instreamfinancial.com/anything.php>
> to read my rants on technology and the finance industry.

-T-

0
topmind (2124)
1/3/2007 5:34:18 PM
"topmind" <topmind@technologist.com> wrote in message 
news:1167845658.536831.82820@42g2000cwt.googlegroups.com...
<...>
>> I don't know what happens in other shops, but in ours Smalltalk may
>> remain essentially Smalltalk but it class hierarchy starts to look more
>> like our business.  The more business-oriented classes we have the
>> larger building blocks we have to solve business problems.  I only need
>> to describe what a loan is once, and from then on I can solve problems
>> with loans.  It is the ability of a language to assimilate a business'
>> vocabulary that moves it from being 1st, 2nd, or 3rd-level language and
>> if not becoming 4th-level, at least makes it a large value of 3.
>
> Can you demonstrate publicly that Smalltalk does this better than
> procedural/relational?

How about this... we added a 'Function' class that allows us to configure a 
numeric relationship between numeric value holders.  The user can define 
'net asset value = assets - liabilities' by defining a Function instance 
with an expression of 'a - b' and link it as the function of the 'net asset 
value' numeric item (which itself is then linked to the assets item and the 
liabilities item, based on accounting rules).  BTW: a neat trick is that in 
GemStone we store the 'a - b' as a compiled block, [:a :b | a - b], so that 
our configured code runs at byte code speeds.

We then added a whole library of financial functions to the Function class, 
effectively making our Smalltalk environment more of a 'financial app' 
development language.

Even something as simple as extending Number can change the 'language' to be 
more app specific. For example, we added #displayStringTo: to Number, so 
that 123.4567 will display as 123.46 when sent #displayStringTo: 2.
Then, we added #displayStringTo: to UndefinedObject to answer '<n/a>' and to 
Object to answer #displayString.

Ours is a financial app, so printing numbers in various formats is something 
we do a lot of.  If the base language had not provided these display tools, 
we'd have to code them somehow.  In our Smalltalk app, they are no different 
from #printString.

Bob Nemec
Northwater Objects 


0
bobn1 (3)
1/3/2007 6:08:49 PM
> >> I consider SQL to be a low level language, as far as RDBs are
> >> concerned, because it is application-ignorant.  It's like C for
> >> relational operations.  SQL doesn't know anything about my application.
.....
> It is the ability of a language to assimilate a business'
> vocabulary that moves it from being 1st, 2nd, or 3rd-level language and
> if not becoming 4th-level, at least makes it a large value of 3.

Isn't SQL able to assimilate business vocabulary? A view definition
might be very application-specific.

> >> Why access it from applications
> >> using the same domain-ignorant language?  Instead, I construct
> >> procedures that create a domain-specific interface.
> >
> > Isn't the database schema already a domain-specific interface? Your
> > procedures won't make it more domain-specific.
> >
> Au contraire.  When I add procedures for doing a business-specific thing
> I make my Semantic Persistent Store (discussed elsewhere on this thread)
> more aware of the semantics of my business.

Are you saying that a database schema is not domain-specific? What
about
create table order (orderid integer, customerid integer, order_date date)

> >> Instead of the
> >> lower-level
> >>
> >>     select * from account, user where user.userId=X and account.userId =
> >>     user.userId
> >>
> >> when instead I can use
> >>
> >>     exec getAccountsFor @userId=X
> >>
> >> ?
> >>
> >> Besides its brevity, the procedure name clearly communicates the intent
> >> of the operation (stbpp pattern: intention revealing message),
> >
> > Doesn't the word "select" clearly communicate the intent of the
> > operation?
> >
> You know, that's the problem with simplified examples.  People tend not
> to extrapolate the simple example into a more complex one.  Yes, the
> word SELECT does indicate you're doing a query, but what exactly is
> being queried and the rules expressed by way of table names, joins, and
> discriminations tend to obfuscate the intent.  No matter how complex a
> SELECT may be (or several selects) the procedure's signature should be
> simpler to read.

If you have a complex select statement, define a view. That is much
more flexible than hiding the select statement behind a procedure.

> It seems odd to me that programmers accustomed to creating functions in
> their own code to add structure, increase readability and reuse resist
> doing the same for their databases.

Views are an excellent way of increasing readability and reuse.

> >> makes obvious its parameters,
> >
> > If you can read the schema definition, the parameters are already
> > obvious.
> >
> Telling, perhaps, but not "obvious".  Though you might be given a clue
> as to the nature of my business by looking at the tables are you
> prepared to assert you can correctly update the tables?

Yes. Input validation should be defined as referential and check
constraints.

> >> and provides a layer of indirection behind which
> >> its implementation can change without affecting the procedure's users.
> >
> > But you also makes it all very inflexible. In your example, you select
> > all columns in the account and user tables. Lets say you are only
> > interested in some of the columns or other columns from a third table,
> > you need to create a new procedure that are almost identical to the
> > first one.
> >
> You could do that, but I wouldn't.  If a single SELECT can satisfy both
> then I can update the existing procedure to include the new fields and
> now both may use it.

That means you will always execute the join, even if you are not really
interested in the joined values? This is one of the reasons why your
solution has a major performance drawback.

> You and I are not using the same definition of level.  See my
> description above.

Your definition of high-level is application-specific. Using that
definition it is not possible to see any difference in level between
assembler and Java. Both can be used for making application-specific
procedures.

My definition of level is the amount of code you have to write to
perform a given task. Java source code will in almost all situations
have fewer lines of code than the corresponding assembler source code. A
SQL source will have fewer LOC than the corresponding Java source.

> >  Your procedures are indeed
> > domain-specific but so are the schema, your procedures didn't
> > contribute to anything.
> Is that how you think of your objects.  That the data inside them is
> domain specific enough that methods don't add anything useful to them?

The database schema is domain-specific, but that doesn't contradict the
fact that procedures may add something useful.

> >  The procedures are written in some language and
> > need to be called using a specific technology, so I can't understand
> > how you can claim them to be language-neutral.
> They're language neutral in the sense they can be equitably invoked and
> their results accessible to C, Java, Smalltalk, Perl, PHP, LISP,
> FORTRAN, etc.  It doesn't matter which language the procedure is called
> from, the results to the database (object) are the same.

You can call SQL code from C, Java, etc too.

> >  Claiming that procedures
> > are paradigm-neutral is not really correct either.
> Not true.  They are paradigm neutral in the sense both their invocation
> and results are understandable by aspect-oriented, functional,
> object-oriented, and structured languages.  Procedures don't reserve the
> benefits of their existence to OO-only languages or XML-capable
> languages or any other exclusive feature a language may support.  The
> bar is set deliberately by APIs so as not to discriminate against client
> languages and environments.

Procedures do not exist in OO. The closest thing to a procedure is a
static method. Functional languages do not accept procedures with
side effects (insert or update), or procedures returning values not
derived from their arguments.

> >> To find all the places in my application source code that get account
> >> information with user IDs it is much easier to find senders (callers) of
> >> getAccountsFor than it would be to find all the SQL referencing both the
> >> account and user tables.
> >
> > But getAccountsFor is not the only procedure that uses the account or
> > user tables. How do you know which procedures that uses which tables?
> >
> I don't know which DBMS you're using, but most modern DBMSs support
> reflection in the form of system tables and functions that can answer
> the question, "which procedures use table x?" in the same way modern OO
> environments can answer the question, "which methods access instance
> variable x?" or "which methods call the method f(x)?" or "who implements
> the method f(x)?"

An IDE (or even grep) could solve the same problem if you embed the
SQL statements in your application code.

> >> Could the latter be done?  Sure, but when a
> >> more efficient and accurate alternative exists why would you?
> >>
> >
> > Why is it so important to "To find all the places in my application
> > source code that get account
> > information with user IDs"? Your assumption that it is more efficient
> > using your way is only true, if it only exists one or a few procedures,
> > performing this task. Because any procedure can do the same task, I
> > don't find your more more accurate. You basically need to browse the
> > implementation of all procedures.
> >
> Again, I doubt that's how you create your OO code, ".. any [method] can
> do the same task.."  I suspect that like many good OO designers you
> don't have multiple methods that do the same thing for any given class.
> How many methods are their to add days to a date?  Even if they take the
> same number and types of arguments would you create multiple method with
> different names?

Your approach will result in a rather large number of procedures. When
a new database query is needed, the programmer has to browse many of
the existing procedures to see if the same or a similar query is
already implemented. I have seen many times that programmers choose to
add a new procedure without carefully verifying whether some existing
procedure might already do the job. It is also probable that you will
end up with many similar (but not identical) procedures.

/Fredrik

0
frebe73 (444)
1/3/2007 6:52:32 PM
> > Can you demonstrate publicly that Smalltalk does this better than
> > procedural/relational?
>
> How about this... we added a 'Function' class that allows us to configure a
> numeric relationship between numeric value holders.  The user can define
> 'net asset value = assets - liabilities' by defining a Function instance
> with an expression of 'a - b' and link it as the function of the 'net asset
> value' numeric item (which itself is then linked to the assets item and the
> liabilities item, based on accounting rules).

Isn't it possible to define or import functions in non-OO procedural
languages too?

function assetValue(assets, liabilities) { return assets - liabilities; }

> Even something as simple as extending Number can change the 'language' to be
> more app specific. For example, we added #displayStringTo: to Number, so
> that 123.4567 will display as 123.46 when send #displayStringTo: 2.
> Then, we added #displayStringTo: to UndefinedObject to answer '<n/a>' and to
> Object to answer #displayString

I think that formatting numbers is extremely simple in procedural
languages too.

> Ours is a financial app, so printing numbers in various formats is something
> we do a lot of.  If the base language had not provided these display tools,
> we'd have to code them somehow.  In our Smalltalk app, they are no different
> from #printString.

I think you have to supply a larger example if you want to
demonstrate why Smalltalk or some other OO language prints numbers
better than procedural languages.

/Fredrik

0
frebe73 (444)
1/3/2007 7:03:18 PM
<...>
> Isn't it possible to define or import functions in non-OO procedural
> languages too?
>
> function assetValue(assets, liabilities) { return assets - liabilities;}

But how would you use this new function without code changes?
Our example is something an end user configures; no code change required.

<...>
> I think you have to supply the larger example if you want to
> demonstrate why Smalltalk or some other OO language prints numbers
> better than procedural languages.

The intent was not to show that Smalltalk prints numbers better, but to show 
that Smalltalk can itself be 'morphed' into something that is more of an 
'application-specific' language.  Our Smalltalk environment is more tightly 
bound to our problem domain than would normally be the case for other tools.

Bob Nemec
Northwater Objects.


0
bobn1 (3)
1/3/2007 7:42:46 PM
Bob Nemec wrote:
> "topmind" <topmind@technologist.com> wrote in message
> news:1167845658.536831.82820@42g2000cwt.googlegroups.com...
> <...>
> >> I don't know what happens in other shops, but in ours Smalltalk may
> >> remain essentially Smalltalk but it class hierarchy starts to look more
> >> like our business.  The more business-oriented classes we have the
> >> larger building blocks we have to solve business problems.  I only need
> >> to describe what a loan is once, and from then on I can solve problems
> >> with loans.  It is the ability of a language to assimilate a business'
> >> vocabulary that moves it from being 1st, 2nd, or 3rd-level language and
> >> if not becoming 4th-level, at least makes it a large value of 3.
> >
> > Can you demonstrate publicly that Smalltalk does this better than
> > procedural/relational?
>
> How about this... we added a 'Function' class that allows us to configure a
> numeric relationship between numeric value holders.  The user can define
> 'net asset value = assets - liabilities' by defining a Function instance
> with an expression of 'a - b' and link it as the function of the 'net asset
> value' numeric item (which itself is then linked to the assets item and the
> liabilities item, based on accounting rules).  BTW: a neat trick is that in
> GemStone we store the 'a - b' as a compiled block, [:a :b | a - b], so that
> our configured code runs at byte code speeds.
>
> We then added a whole library of financial functions to the Function class,
> effectively making it our Smalltalk environment more of a 'financial app'
> development language.

I've stored formulas in tables fairly often back in my xBase days
(although current tools don't support it as well). Most dynamic
languages support some form of this. I don't know why you think it is
unique to Smalltalk.

And what happens when we build up all these formula repositories? I find
tables better than the navigational structures that Smalltalk builds.
Navigational structures turn into a big mess that only the original
author can grok. Perhaps you somehow think better with navigational
structures, but you cannot assume that others will also.

Nobody has documented how to query and navigate navigational structures
in a clear, consistent sense. Relational is better understood; it's
more "tamed" and better tied to math (set theory) than navigational.
Navigational is the structuring equivalent of GOTO's. If you drew
relationships between all your objects, you would get a big tangled
messy graph (web).

>
> Even something as simple as extending Number can change the 'language' to be
> more app specific. For example, we added #displayStringTo: to Number, so
> that 123.4567 will display as 123.46 when send #displayStringTo: 2.
> Then, we added #displayStringTo: to UndefinedObject to answer '<n/a>' and to
> Object to answer #displayString
>
> Ours is a financial app, so printing numbers in various formats is something
> we do a lot of.  If the base language had not provided these display tools,
> we'd have to code them somehow.  In our Smalltalk app, they are no different
> from #printString.

Please clarify. How exactly does Smalltalk make formatting functions
and/or a format template sub-language better? Formatting functions and
template sub-languages are not unique to ST.

> 
> Bob Nemec
> Northwater Objects

-T-

0
topmind (2124)
1/3/2007 9:28:04 PM
Bob Nemec wrote:
> <...>
> > Isn't it possible to define or import functions in non-OO procedural
> > languages too?
> >
> > function assetValue(assets, liabilities) { return assets - liabilities;}

(not my example, BTW)

>
> But how would you use this new function without code changes?
> Our example is something an end user configures; no code change required.

rs = query("select * from formulaTable where $criteria");
while (row = nextRow(rs)) {
   result = eval(row["formula"]);
}

A language like TCL will give you more evaluation options, such as
scope control. But most modern dynamic languages support at least some
form of "eval".
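Even a static language like Java can approximate this with the standard
javax.script API. A rough sketch (it assumes a JSR-223 JavaScript engine is
available--older JDKs bundled one, newer ones need it as a dependency--and
the table and column names are invented; the formula text would really come
from the formulaTable row):

    import javax.script.ScriptEngine;
    import javax.script.ScriptEngineManager;
    import javax.script.ScriptException;
    import javax.script.SimpleBindings;

    public class FormulaEval {
        public static void main(String[] args) throws ScriptException {
            ScriptEngine js = new ScriptEngineManager().getEngineByName("JavaScript");

            // In practice this string comes from the database, e.g.
            //   select formula from formulaTable where name = 'netAssetValue'
            String formula = "assets - liabilities";

            SimpleBindings vars = new SimpleBindings();
            vars.put("assets", 1000.0);
            vars.put("liabilities", 250.0);

            Object result = js.eval(formula, vars);   // evaluate the configured expression
            System.out.println("net asset value = " + result);   // 750.0
        }
    }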


>
> <...>
> > I think you have to supply the larger example if you want to
> > demonstrate why Smalltalk or some other OO language prints numbers
> > better than procedural languages.
>
> The intent was not to show that Smalltalk prints numbers better, but to show
> that Smalltalk can itself be 'morphed' into something that is more of an
> 'application-specific' language.  Our Smalltalk environment is more tightly
> bound to our problem domain than would normally be the case for other tools.

Example? How are you measuring boundness?

> 
> Bob Nemec
> Northwater Objects.

-T-

0
topmind (2124)
1/3/2007 9:37:29 PM
>
> Views are an excellent way of increasing readability and reuse.

Unfortunately, an app developer often does not have access to create or
change views very easily. RDBMS vendors could help things by making it
easier to assign app-specific view management tools. And the ability to
create ad-hoc temporary or virtual views would be nice also. But
this is an implementation issue, not a paradigm fault. Some developers
end up writing big long run-on bloaty SQL because ad-hoc views are not
available.


> > >> To find all the places in my application source code that get account
> > >> information with user IDs it is much easier to find senders (callers) of
> > >> getAccountsFor than it would be to find all the SQL referencing both the
> > >> account and user tables.
> > >
> > > But getAccountsFor is not the only procedure that uses the account or
> > > user tables. How do you know which procedures that uses which tables?
> > >
> > I don't know which DBMS you're using, but most modern DBMSs support
> > reflection in the form of system tables and functions that can answer
> > the question, "which procedures use table x?" in the same way modern OO
> > environments can answer the question, "which methods access instance
> > variable x?" or "which methods call the method f(x)?" or "who implements
> > the method f(x)?"
>
> An IDE (or even grep) could solve the same problem if you embed the
> SQL statements in your application code.

The problem here is greedy vendors that limit the scope of where their
inspection tools look. A good system wouldn't care if you stored your
SQL in files, an RDBMS, tables, etc.

One problem with SQL inspection is dynamic SQL. For example,
Query-By-Example is difficult to write without run-time generated SQL
because the WHERE conditions, and perhaps the ordering, are not known
until the user selects them (via the QBE form).
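A rough sketch of the QBE case in Java with JDBC (table and column names
invented): the WHERE clause is assembled from whichever criteria the user
actually filled in, while the values themselves stay as bound parameters.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    public class QbeQuery {
        // Builds and runs a query from optional QBE form fields.
        // Pass null for any criterion the user left blank.
        // Caller closes the statement via rs.getStatement() when done.
        public static ResultSet findAccounts(Connection con, String nameLike,
                                             Double minBalance, String orderBy)
                throws SQLException {
            StringBuilder sql = new StringBuilder("select * from account where 1 = 1");
            List<Object> params = new ArrayList<Object>();

            if (nameLike != null) {
                sql.append(" and accountName like ?");
                params.add("%" + nameLike + "%");
            }
            if (minBalance != null) {
                sql.append(" and balance >= ?");
                params.add(minBalance);
            }
            if (orderBy != null) {
                // A column name cannot be a bound parameter; check it against a
                // whitelist of sortable columns in real code.
                sql.append(" order by ").append(orderBy);
            }

            PreparedStatement ps = con.prepareStatement(sql.toString());
            for (int i = 0; i < params.size(); i++) {
                ps.setObject(i + 1, params.get(i));
            }
            return ps.executeQuery();
        }
    }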

-T-

0
topmind (2124)
1/3/2007 9:54:12 PM
Thomas Gagne wrote:
> frebe73@gmail.com wrote:
>>> I consider SQL to be a low level language, as far as RDBs are
>>> concerned, because it is application-ignorant.  It's like C for
>>> relational operations.  SQL doesn't know anything about my application.
>>>     
>>
>> All languages are application-ignorant. SQL is not different from Java
>> or C# in that aspect. Java doesn't know anything about your application
>> either.
>>
>>   
> It is true that out-of-the-box all languages are application ignorant.  
> Out-of-the-box so are most programmers.  The difference is what becomes 
> of them after a few months time?

Thomas, sorry to so pedantically disagree but:

- Smalltalk out of the box, at least as seen in VW, Squeak, Dolphin, VA, 
and all the others that come in an image with the development 
environment built-in, is extremely aware of (at least) the Interactive 
Development Environment application

- Mathematica out of the box is replete with graphing facilities, etc, 
etc (I'm not very familiar with it but it does seem whizzy)

- Logo is perfectly adapted to doing turtle diagrams

...and no mention of application-knowledge in programming languages 
would be complete without mentioning domain-specific languages...

-- 
The surest sign that intelligent life exists elsewhere in      Calvin &
the universe is that none of it has tried to contact us.       Hobbes.
--
Eliot     ,,,^..^,,,    Smalltalk - scene not herd
0
eliotm (17)
1/3/2007 9:59:29 PM
topmind wrote:
> Bob Nemec wrote:
>> "topmind" <topmind@technologist.com> wrote in message
>> news:1167845658.536831.82820@42g2000cwt.googlegroups.com...
>> <...>
>>>> I don't know what happens in other shops, but in ours Smalltalk may
>>>> remain essentially Smalltalk but it class hierarchy starts to look more
>>>> like our business.  The more business-oriented classes we have the
>>>> larger building blocks we have to solve business problems.  I only need
>>>> to describe what a loan is once, and from then on I can solve problems
>>>> with loans.  It is the ability of a language to assimilate a business'
>>>> vocabulary that moves it from being 1st, 2nd, or 3rd-level language and
>>>> if not becoming 4th-level, at least makes it a large value of 3.
>>> Can you demonstrate publicly that Smalltalk does this better than
>>> procedural/relational?
>> How about this... we added a 'Function' class that allows us to configure a
>> numeric relationship between numeric value holders.  The user can define
>> 'net asset value = assets - liabilities' by defining a Function instance
>> with an expression of 'a - b' and link it as the function of the 'net asset
>> value' numeric item (which itself is then linked to the assets item and the
>> liabilities item, based on accounting rules).  BTW: a neat trick is that in
>> GemStone we store the 'a - b' as a compiled block, [:a :b | a - b], so that
>> our configured code runs at byte code speeds.
>>
>> We then added a whole library of financial functions to the Function class,
>> effectively making it our Smalltalk environment more of a 'financial app'
>> development language.
> 
> I've stored formulas in tables fairly often back in my xBase days
> (although current tools don't support it as well). Most dynamic
> languages support some form of this. I don't know why you think it is
> unique to Smalltalk.

Where did he say he thought this unique to Smalltalk?  You're putting 
words in his mouth.  Read what was asked and what he said in reply.


-- 
The surest sign that intelligent life exists elsewhere in      Calvin &
the universe is that none of it has tried to contact us.       Hobbes.
--
Eliot     ,,,^..^,,,    Smalltalk - scene not herd
0
eliotm (17)
1/3/2007 10:02:06 PM
Eliot Miranda wrote:
> topmind wrote:
> > Bob Nemec wrote:
> >> "topmind" <topmind@technologist.com> wrote in message
> >> news:1167845658.536831.82820@42g2000cwt.googlegroups.com...
> >> <...>
> >>>> I don't know what happens in other shops, but in ours Smalltalk may
> >>>> remain essentially Smalltalk but it class hierarchy starts to look more
> >>>> like our business.  The more business-oriented classes we have the
> >>>> larger building blocks we have to solve business problems.  I only need
> >>>> to describe what a loan is once, and from then on I can solve problems
> >>>> with loans.  It is the ability of a language to assimilate a business'
> >>>> vocabulary that moves it from being 1st, 2nd, or 3rd-level language and
> >>>> if not becoming 4th-level, at least makes it a large value of 3.
> >>> Can you demonstrate publicly that Smalltalk does this better than
> >>> procedural/relational?
> >> How about this... we added a 'Function' class that allows us to configure a
> >> numeric relationship between numeric value holders.  The user can define
> >> 'net asset value = assets - liabilities' by defining a Function instance
> >> with an expression of 'a - b' and link it as the function of the 'net asset
> >> value' numeric item (which itself is then linked to the assets item and the
> >> liabilities item, based on accounting rules).  BTW: a neat trick is that in
> >> GemStone we store the 'a - b' as a compiled block, [:a :b | a - b], so that
> >> our configured code runs at byte code speeds.
> >>
> >> We then added a whole library of financial functions to the Function class,
> >> effectively making it our Smalltalk environment more of a 'financial app'
> >> development language.
> >
> > I've stored formulas in tables fairly often back in my xBase days
> > (although current tools don't support it as well). Most dynamic
> > languages support some form of this. I don't know why you think it is
> > unique to Smalltalk.
>
> Where did he say he thought this unique to Smalltalk?  You're putting
> words in his mouth.  Read what was asked and what he said in reply.

The implication was that Smalltalk or OOP could do something other
techniques cannot, or could do it better. If betterment is not the issue,
then frankly nobody cares if Smalltalk merely supports something common
among many languages and paradigms. It is like telling us your house
has a garage.

If I misunderstood the implication, I apologize. So, is there
betterment?


>
> --
> The surest sign that intelligent life exists elsewhere in      Calvin &
> the universe is that none of it has tried to contact us.       Hobbes.
> --
> Eliot     ,,,^..^,,,    Smalltalk - scene not herd

-T-
oop.ismad.com

0
topmind (2124)
1/3/2007 10:44:50 PM
topmind wrote:
> Eliot Miranda wrote:
>> topmind wrote:
>>> Bob Nemec wrote:
>>>> "topmind" <topmind@technologist.com> wrote in message
>>>> news:1167845658.536831.82820@42g2000cwt.googlegroups.com...
>>>> <...>
>>>>>> I don't know what happens in other shops, but in ours Smalltalk may
>>>>>> remain essentially Smalltalk but it class hierarchy starts to look more
>>>>>> like our business.  The more business-oriented classes we have the
>>>>>> larger building blocks we have to solve business problems.  I only need
>>>>>> to describe what a loan is once, and from then on I can solve problems
>>>>>> with loans.  It is the ability of a language to assimilate a business'
>>>>>> vocabulary that moves it from being 1st, 2nd, or 3rd-level language and
>>>>>> if not becoming 4th-level, at least makes it a large value of 3.
>>>>> Can you demonstrate publicly that Smalltalk does this better than
>>>>> procedural/relational?
>>>> How about this... we added a 'Function' class that allows us to configure a
>>>> numeric relationship between numeric value holders.  The user can define
>>>> 'net asset value = assets - liabilities' by defining a Function instance
>>>> with an expression of 'a - b' and link it as the function of the 'net asset
>>>> value' numeric item (which itself is then linked to the assets item and the
>>>> liabilities item, based on accounting rules).  BTW: a neat trick is that in
>>>> GemStone we store the 'a - b' as a compiled block, [:a :b | a - b], so that
>>>> our configured code runs at byte code speeds.
>>>>
>>>> We then added a whole library of financial functions to the Function class,
>>>> effectively making it our Smalltalk environment more of a 'financial app'
>>>> development language.
>>> I've stored formulas in tables fairly often back in my xBase days
>>> (although current tools don't support it as well). Most dynamic
>>> languages support some form of this. I don't know why you think it is
>>> unique to Smalltalk.
>> Where did he say he thought this unique to Smalltalk?  You're putting
>> words in his mouth.  Read what was asked and what he said in reply.
> 
> The implication was that Smalltalk or OOP could do something or
> something better than other techniques. If betterment is not the issue,
> then frankly nobody cares if Smalltalk merely supports something common
> among many languages and paradigms. It is like telling us your house
> has a garage.
> 
> If I misunderstood the implication, I apologize. So, is there
> betterment?
> 
Let's take your example elsethread:

>> (not my example, BTW)
>> 
>>> But how would you use this new function without code changes?
>>> Our example is something an end user configures; no code change required.
>> 
>> rs = query("select * from formulaTable where $criteria");
>> while (row = nextRow(rs)) {
>>    result = eval(row["formula"]);
>> }

In the fragment above, the procedural nature of the query language does 
not allow the Database Engine to offer a critique of whether the '"formula"' is 
applicable to the 'result', right?

>> A language like TCL will give one more evaluation options, such as
>> scope control. But most modern dynamic languages support at least some
>> form of "eval".

So you are proposing to use a Database, SPs, and code written in (say) Tcl 
to be called by an external interpreter; is that correct?

--
Cesar Rabak

0
csrabak (402)
1/3/2007 11:38:32 PM
Cesar Rabak wrote:
> topmind escreveu:
> > Eliot Miranda wrote:
> >> topmind wrote:
> >>> Bob Nemec wrote:
> >>>> "topmind" <topmind@technologist.com> wrote in message
> >>>> news:1167845658.536831.82820@42g2000cwt.googlegroups.com...
> >>>> <...>
> >>>>>> I don't know what happens in other shops, but in ours Smalltalk may
> >>>>>> remain essentially Smalltalk but it class hierarchy starts to look more
> >>>>>> like our business.  The more business-oriented classes we have the
> >>>>>> larger building blocks we have to solve business problems.  I only need
> >>>>>> to describe what a loan is once, and from then on I can solve problems
> >>>>>> with loans.  It is the ability of a language to assimilate a business'
> >>>>>> vocabulary that moves it from being 1st, 2nd, or 3rd-level language and
> >>>>>> if not becoming 4th-level, at least makes it a large value of 3.
> >>>>> Can you demonstrate publicly that Smalltalk does this better than
> >>>>> procedural/relational?
> >>>> How about this... we added a 'Function' class that allows us to configure a
> >>>> numeric relationship between numeric value holders.  The user can define
> >>>> 'net asset value = assets - liabilities' by defining a Function instance
> >>>> with an expression of 'a - b' and link it as the function of the 'net asset
> >>>> value' numeric item (which itself is then linked to the assets item and the
> >>>> liabilities item, based on accounting rules).  BTW: a neat trick is that in
> >>>> GemStone we store the 'a - b' as a compiled block, [:a :b | a - b], so that
> >>>> our configured code runs at byte code speeds.
> >>>>
> >>>> We then added a whole library of financial functions to the Function class,
> >>>> effectively making it our Smalltalk environment more of a 'financial app'
> >>>> development language.
> >>> I've stored formulas in tables fairly often back in my xBase days
> >>> (although current tools don't support it as well). Most dynamic
> >>> languages support some form of this. I don't know why you think it is
> >>> unique to Smalltalk.
> >> Where did he say he thought this unique to Smalltalk?  You're putting
> >> words in his mouth.  Read what was asked and what he said in reply.
> >
> > The implication was that Smalltalk or OOP could do something or
> > something better than other techniques. If betterment is not the issue,
> > then frankly nobody cares if Smalltalk merely supports something common
> > among many languages and paradigms. It is like telling us your house
> > has a garage.
> >
> > If I misunderstood the implication, I apologize. So, is there
> > betterment?
> >
> Let's take your example elsethread:
>
> >> (not my example, BTW)
> >>
> >>> But how would you use this new function without code changes?
> >>> Our example is something an end user configures; no code change required.
> >>
> >> rs = query("select * from formulaTable where $criteria");
> >> while (row = nextRow(rs)) {
> >>    result = eval(row["formula"]);
> >> }
>
> In the fragment above the procedural nature of the query language does
> not allow the Database Engine to make a critic if the '"formula"' is
> applicable to the 'result', right?

If you mean validate it, no, it does not. Better error handling could
probably be added, depending on the language. But the Smalltalk version
wouldn't validate it either.
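
For example, a minimal sketch of adding such error handling around the
eval loop (Python here purely for illustration; the formulaTable name
follows the earlier fragment, the column names and helper are made up):

    import sqlite3

    def evaluate_formulas(conn, a, b):
        """Fetch stored formulas and evaluate each one, trapping bad rows
        instead of letting a single malformed formula abort the whole run."""
        results = {}
        for name, formula in conn.execute("SELECT name, formula FROM formulaTable"):
            try:
                # Evaluate with empty __builtins__ and only the named operands
                # in scope, so a formula can reference 'a' and 'b' but nothing else.
                results[name] = eval(formula, {"__builtins__": {}}, {"a": a, "b": b})
            except (SyntaxError, NameError, TypeError, ZeroDivisionError) as err:
                results[name] = None
                print(f"formula {name!r} rejected: {err}")
        return results

    # Example usage with an in-memory table holding one good and one bad formula.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE formulaTable (name TEXT, formula TEXT)")
    conn.executemany("INSERT INTO formulaTable VALUES (?, ?)",
                     [("net asset value", "a - b"), ("broken", "a -")])
    print(evaluate_formulas(conn, 100, 40))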

>
> >> A language like TCL will give one more evaluation options, such as
> >> scope control. But most modern dynamic languages support at least some
> >> form of "eval".
>
> In which you proposing use a Database, SPs and code written in (say) Tcl
> to called by an external interpreter, it that correct?

External? Please clarify.

> 
> --
> Cesar Rabak

-T-

0
topmind (2124)
1/4/2007 5:26:16 AM
topmind wrote:
> Cesar Rabak wrote:
[snipped]
>>>>> But how would you use this new function without code changes?
>>>>> Our example is something an end user configures; no code change required.
>>>> rs = query("select * from formulaTable where $criteria");
>>>> while (row = nextRow(rs)) {
>>>>    result = eval(row["formula"]);
>>>> }
>> In the fragment above the procedural nature of the query language does
>> not allow the Database Engine to make a critic if the '"formula"' is
>> applicable to the 'result', right?
> 
> If you mean validate it, no, it does not. Better error handling could
> probably be added, depending on the language. But the smalltalk version
> wouldn't either.

I am not writing about 'validating' in the sense you were discussing with 
Thomas, but rather about getting automatic help if the object 'result' selected 
would not be appropriate for the formula.

Since the SP is written in a typeless language, it is not impossible to 
make a calculation which is meaningless.

> 
>>>> A language like TCL will give one more evaluation options, such as
>>>> scope control. But most modern dynamic languages support at least some
>>>> form of "eval".
>> In which you proposing use a Database, SPs and code written in (say) Tcl
>> to called by an external interpreter, it that correct?
> 
> External? Please clarify.

I understand Tcl is a language that requires an interpreter in addition 
to the Database, right?
0
csrabak (402)
1/4/2007 10:28:59 AM
Eliot Miranda wrote:
> Thomas Gagne wrote:
>> It is true that out-of-the-box all languages are application 
>> ignorant.  Out-of-the-box so are most programmers.  The difference is 
>> what becomes of them after a few months time?
>
> Thomas, sorry to so pedantically disagree but: <snip>
I guess I was thinking out-of-the-box Smalltalk, LISP, C++, Java, SQL, 
C, Eiffel, etc. know  little (if anything) about commercial finance, 
insurance, forecasting, banking, payment processing, inventory 
management, supply-chain logistics, billing, manufacturing control, 
etc., which is the nature of 3GL languages.

An exception would be 4GLs and 5GLs, but I'm having trouble finding a 
satisfactory description of either.  The description I liked best came 
from a book whose title I can't remember, which makes it difficult for 
me to locate and quote.  In fact, the best example of a 4-and-a-half-GL 
may actually be the application program--which is loaded with semantic 
knowledge and which humans use every day to accomplish tasks.

In the meantime I can summarize.  The author described 1GL as being 
closest to the hardware and nGL as being closer to the _problem domain_ 
(and presumably the human).  I believe the author made a point of saying 
nGL was not to be natural language or natural language-like because 
natural language is too ambiguous to be of much use by computers.  This, 
at least, is where "business" language (domain-specific, semantic) has 
value over natural language.  Business language has concrete terms that, 
with less effort than natural language, can be given unambiguous 
definitions.

5GLs, if we believe the paint-a-program vendors, are such because 
they're graphically based and allow users to point-and-shoot their way 
to icon-o-automation.  Many anthropologists and linguists agree that as 
pretty a product as that may be, hieroglyphics are a step backward in 
communication.  The vocabulary too nuanced and too broad, and symbols 
are easily and frequently misinterpreted by humans.

Heck, look at the challenge we've had coming up with international 
traffic signs that drivers and pedestrians can understand, and someone 
wants to make a programming language out of it?

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
1/4/2007 1:04:19 PM
Hello Thomas,

Well I don't think I've ever been in a thread where someone came up with 
quite so many adjectives for a design that were, well, emotional... kinda... 
It's the first time I've really thought of whether an interface should be 
judged on the basis of 'arrogance' or 'snootiness'.  I truly enjoyed reading 
the message.  Thanks for brightening my morning.

Let me take a day to 'grok' your reply and give you a reasoned one.  As you 
pointed out, we are on the same page with respect to the core concept of the 
Semantic Persistence Store.

--- Nick

"Thomas Gagne" <tgagne@wide-open-west.com> wrote in message 
news:BoWdne5kX_XLJgfYnZ2dnUVZ_tmknZ2d@wideopenwest.com...
> Nick Malik [Microsoft] wrote:
>> <snip>
> We're on the same page at this point, and agree on the advantages.  I 
> might add more but they wouldn't further our mutual understanding enough 
> to keep them off the cutting room floor.
>> Disadvantages:
>> D1) The command interface (stored procedure names and parameters) that 
>> needs to be invoked by OO programming modules has no inherent support for 
>> OO concepts.  Therefore, while naming conventions and good practices can 
>> make the interface understandable, the listener will not validate the 
>> incoming message to insure that the proper method was called for the 
>> desired activity.
> While you're correct in that SQL is as unaware of OO concepts as it is of 
> my C code's memory management, it doesn't mean the procedures don't have a 
> dependence between them that provides a "correct" order.
>
> For example, before transaction detail can be added a transaction must 
> exist.  Before a transaction exists a valid session (recording the users 
> id, login time, application, etc.) must also be available.  The SQL 
> message for this may resemble:
>
>    declare @sessionKey IDENTREF,
>            @transactionKey IDENTREF,
>            @transactionDetailKey IDENTREF
>    declare @rc INT
>
>    begin tran
>
>    set @sessionKey = <a previously validated session key>
>    exec @rc = addTransaction
>         @sessionKey = @sessionKey,
>         @transactionKey = @transactionKey OUTPUT
>    if (@rc = 0) begin
>        exec @rc = addTransactionDetail
>             @transactionKey = @transactionKey,
>             @transactionDetailKey = @transactionDetailKey OUTPUT,
>             @transactionTypeName = "....",
>             @accountKey = <an account key>,
>             @transactionAmount = <some amount>
>    end
>    if (@rc = 0) begin
>        exec @rc = addTransactionDetail
>             @transactionKey = @transactionKey,
>             @transactionDetailKey = @transactionDetailKey OUTPUT,
>             @transactionTypeName = "....",
>             @accountKey = <another account key>,
>             @transactionAmount = <some amount>
>    end
>    if (@rc = 0) begin
>        exec @rc = checkTransactionBalance
>             @transactionKey = @transactionKey
>    end
>    if (@rc != 0) begin
>        -- some exception handling stuff
>        rollback tran
>    end
>    else begin
>        -- some finishing stuff
>        commit tran
>    end
>
> addTransactionDetail requires a valid transactionKey, which can only be 
> gotten from a successful call to addTransaction.  addTransaction can only 
> be called with a valid sessionKey, which it checks to make sure the 
> session is valid (user is still logged on, the database is in read-write 
> mode, etc.).  So even if it knows nothing of OO data graphs the order of 
> the stored procedure calls is not random or arbitrary.
>
> There are other semantic rules buried in the procedures that are not 
> obvious.  For instance, each account is allowed only a single currency, 
> and all transactions (including their detail) must be in a common 
> currency.  Valid accountKeys are also required as are transaction names, 
> etc.
>
> I'm not so sure I would call this a disadvantage as much as I would 
> consider it the nature of the beast.
>> D2) As stated above, plumbing code must exist on the OO side to frame up 
>> the message and its results in the internal data model of the 
>> application. However, the command interface for this particular 
>> implementation of the SPS is based on good practices and naming 
>> conventions.  It is not based on repeatable and enforced (mathematical) 
>> principles.  Therefore, the plumbing code must be carefully hand crafted 
>> and tested.  It is a labor-intensive task that is very difficult to 
>> automate.
>>
> I agree it must be carefully implemented to be useful, but not that it is 
> labor-intensive.  If you look at the SQL you can see that it has a 
> structure.  That structure is the transaction, and a transaction can be 
> modeled inside OO very easy.  Detail objects can be added to the 
> transaction object very easily.  When complete, the transaction object can 
> be asked to express itself in SQL, or told to put itself onto the SPS, for 
> which it can also be be prepared to handle exceptions properly.
>
> It is in doing this that an OO system can easily interface with a 
> relational database without mapping OO datagraphs to relational 
> projections or tuples.  It should not go unappreciated that the SQL above 
> has no OO baggage with it, and can just as easily be created by C, COBOL, 
> FORTRAN, or by hand.
>
> And remember, when used manually there is no plumbing code necessary.  As 
> this might be the first interface client the SPS has this is an important 
> feature.  In the alternate model you describe below (way below) there is 
> no human-accessible interface to the SPS.
>> D2-alternate) If the SPS does not offer up a single canonical model for 
>> consumption, but rather offers up different models to make it easier for 
>> application developers to consume the data using objects that are easy to 
>> map, then the SPS is likely, over time, to offer up a large list of 
>> interfaces.  (application-specific stored procedures).  The plumbing 
>> layer still exists... it just exists in SQL instead of C# (or Java). 
>> Some teams prefer this model.  I don't have evidence that this mechanism 
>> is better or worse than the other.
>>
> I'm unsure if I understood this one.  At first I thought you might be 
> suggesting the SPS would be creating client-friendly procedures for each 
> of the client environments it may have to support.  This sounds like a 
> kind of interface entropy.  By making the SPS' interface compatible with 
> multiple clients (and making the interface larger with more procedures) 
> you're suggesting that the mapping (or some portion of it) has migrated 
> into the SPS interface.  I disagree this necessarily occurs, and if it did 
> would violate OO designer's sense of cohesiveness.  Though I don't have 
> any dedicated DBAs, I would also suspect it would violate their sense of 
> aesthetics and would contribute to a more difficult to maintain SPS.
>
> If I understood your point, I would not include it among the model's 
> disadvantages since it is not a necessary evil.  It would instead be a 
> self-inflicted wound which makes it little different from the other 
> mutilations programmers are responsible for creating themselves, and 
> there's no limit to the number or kinds of these in everyone else's 
> code--but not yours or mine.
>> D3) The underlying language, on the db side, is not object oriented. 
>> Thus, even if the stored procedure interface has been carefully crafted 
>> to resemble and reflect OO thinking, the code written in these procedures 
>> is not 'defended' or 'supported' by the OO paradigm.  Therefore, the 
>> conceptual heirarchy carefully crafted by the application developer 
>> cannot be used in the persistence of the data, even when the models 
>> match.  As a result, every data object must have two bodies of code to 
>> interpret it, one in the OO code and one in the SQL code.  This is 
>> inefficient in terms of human as well as code resources.
>>
> Why would the SPS' procedures require defending by a 
> paradigm-in-execution?  Why should the presence of OO applications require 
> anything of the SPS beyond the same provision for other paradigms like 
> structured, aspect, or functional?
>
> If we created a web service we would not create different interfaces for 
> each application language or paradigm, whether it be C#, Smalltalk, C, 
> assembly, LISP, Prolog, Haskel, or AspectJ.
>
> How much time do programmers spend defending a Date object?  And if so, 
> how much of that is duplicative?  Shouldn't a date object be responsible 
> for its own defense?
>> D4) the choice of the technology (SQL and stored procs) to implement the 
>> SPS means that a SQL-oriented software package must be installed in the 
>> application's environment.
> True, but stored procedures are just one way to implement a SPS' 
> interface.  Rather than discuss how those are implemented (they're 
> conceptually as simple as procedures but are attended by other distracting 
> nuances) for the moment I'd like to stick to relational databases that 
> support procedures (mindful that MySQL just recently added these).
>> <snip description of alterante>
> Much of what you're describing isn't conceptually different.  You've 
> constructed an interface between your applications and your database, and 
> that interface combined with your database is your SPS.
>
> The biggest difference seems to be that the model's interface, being either OO 
> or XML-based (not to mention Windows-based), is arrogant (the 
> interface--not you!).  It prefers to speak only with applications that 
> share its vocabulary and pedigree.  Lower-level languages or manual 
> entries are not permitted to eat with the grown-ups.
>
> Taking your example a little further, why not just use an ODBMS?  The 
> interface has already created an exclusive club of clients, why not use an 
> exclusive database management system (XDBMS)?
>
> Granted, that exclusivity has value (I tried it once with an ODBMS) but 
> one of the problems (aside from performance) was reporting--but we'll save 
> that for another thread.
>> Now, in the code within the SPS, I still need to have an OO module call 
>> the underlying database tables.  However, in this model, I can easily 
>> constrain the number of places where a particular column is referenced, 
>> and I can fairly readily constrain things like the pk/fk relationships. 
>> In fact, much of the actual code can be generated, rather than 
>> hand-written.  In this, very narrow and constrained place, in the OO 
>> modules that form the interface to my database, an O/R mapping tool can 
>> be safely and appropriately used, resulting in considerably less code to 
>> create, debug, and maintain.
>>
> Doesn't that defeat, or at least deflate, the value of an OR mapping tool 
> or framework?  With only a single client OR tools have little-or-no bang 
> for the buck.  It's like getting a pellet gun for the purchase price, 
> training, maintenance, and servicing fees of an Apache helicopter.  It 
> seems disproportionate to me.
>> <snip>
>>
>> The services interface is implemented using a base class that each of the 
>> objects inherit from.  Therefore, in addition to using a scalable 
>> communication technology like WCF, we can be sure that basic standards 
>> will be followed and that specific methods are available in every 
>> service.
>>
> That's what I mean by arrogant.  Aloof is perhaps a better word.  It 
> doesn't carry the pejorative baggage arrogant does.
>
> In my pedestrian way of thinking, a different way of doing or thinking 
> about things shouldn't require exotic ingredients or access to private 
> clubs.  C programmers are able to create OO-inspired interfaces to their 
> systems without resorting to OO languages (ctlib and dblib come to mind). 
> Resourceful programmers ought to be able to assemble them from whatever is 
> lying around and the end result should reflect its humble beginnings and 
> avoid becoming presumptuous (I'm looking hard for synonyms to arrogant).
>
> This is why I think the simplest example remains stored procedures 
> in-front of a DB is the best example of treating a database as an object, 
> or as we have started to call it, a Semantic Persistent Store (SPS).
>> <snip advantages>
>>
>>
>> Disadvantages:
>> D1) the listening mechanism is going to be less mature and will likely be 
>> implementation specific.  It could be based on an ESB (on JMS) or perhaps 
>> SOAP, but it will surely be less ubiquitious, and may be less scalable, 
>> less mature, and may have defects that have not yet been found and 
>> resolved.
>>
> Let's assume you're at least using mature message-oriented-middleware so 
> network nonsense isn't an issue.  If your SPS component is the only 
> pathway in and out it is likely to mature fairly quickly as every client 
> application in the system will be using it.  So, though you may list this 
> as a disadvantage, half of it is eliminated with MOM, and the other half 
> disappears over a fairly short piece of time.
>> D2) The team that maintains the SPS will need to know at least two 
>> languages, SQL and C# (or SQL and Java), in order to maintain this 
>> component.  From a resource management perspective, this adds a few 
>> wrinkles into the mix.  How many of each do you need?  If your needs 
>> change over time, will you be ready to respond?  How many should be 
>> cross-trained?  That said, the fact that our staffing model would need to 
>> be updated doesn't invalidate the design.  It does mean that we have to 
>> practice good change management principles, though, to bring this about.
>>
> But this problem already exists in many shops and isn't unique to this 
> model.  Nor does this model necessarily exacerbate the problem.  The shops 
> that probably most need to worry more than others are those with separate 
> (and antagonistic) database and systems programming groups--but those 
> shops already have other problems that aren't unique to this model.
>> D3) Service-oriented tools are fairly immature, and do not yet allow a 
>> rich paradigm for inspection, testing, and auditing of the activities of 
>> the SPS. This is rapidly changing, but has not reached the level of 
>> sophistication of the DB-oriented model just yet.
>>
> I don't think they will, because in making things easier for computers to 
> use and/or consume (like XML) those structures are increasingly difficult 
> for humans to read.  Think of sendmail.cf, and how that may be really easy 
> for sendmail to parse, but have you ever tried creating one by hand or 
> even describing to a fellow programmer /exactly/ what's going on inside 
> there?  Even if you could, think about the level of training and genius 
> required and ask yourself how many other people share that?
>
> As you indicated, we agree more than we disagree.  I think we both 
> understand and recognize the concept and the model, even if I think yours 
> is snootier than mine. ;-)
>
> -- 
> Visit <http://blogs.instreamfinancial.com/anything.php> to read my rants 
> on technology and the finance industry.



-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
-- 


0
nickmalik (325)
1/4/2007 2:16:06 PM
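As a concrete sketch of the transaction-object idea discussed in the
quoted message above -- a Transaction that collects detail objects and
then "expresses itself in SQL" as the stored-procedure batch -- something
like the following could work (illustrative Python only; the procedure
names follow the quoted SQL, the class names are made up, and the declare
block, output keys, and parameter binding are omitted for brevity):

    class TransactionDetail:
        def __init__(self, type_name, account_key, amount):
            self.type_name = type_name
            self.account_key = account_key
            self.amount = amount

    class Transaction:
        """Collects detail lines, then renders itself as the stored-procedure
        batch; real code would bind parameters rather than splice values in."""
        def __init__(self, session_key):
            self.session_key = session_key
            self.details = []

        def add_detail(self, detail):
            self.details.append(detail)

        def as_sql(self):
            lines = [
                "begin tran",
                f"exec @rc = addTransaction @sessionKey = {self.session_key}, "
                "@transactionKey = @transactionKey OUTPUT",
            ]
            for d in self.details:
                lines.append(
                    "if (@rc = 0) exec @rc = addTransactionDetail "
                    "@transactionKey = @transactionKey, "
                    f"@transactionTypeName = '{d.type_name}', "
                    f"@accountKey = {d.account_key}, "
                    f"@transactionAmount = {d.amount}")
            lines.append("if (@rc = 0) exec @rc = checkTransactionBalance "
                         "@transactionKey = @transactionKey")
            lines.append("if (@rc != 0) rollback tran")
            lines.append("else commit tran")
            return "\n".join(lines)

    # Build the object graph, then hand the rendered batch to the database driver.
    t = Transaction(session_key=42)
    t.add_detail(TransactionDetail("debit", 1001, 250))
    t.add_detail(TransactionDetail("credit", 2001, 250))
    print(t.as_sql())
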
Nick Malik [Microsoft] wrote:
> Hello Thomas,
>
> Well I don't think I've ever been in a thread where someone came up with 
> quite so many adjectives for a design that were, well, emotional... kinda... 
> It's the first time I've really thought of whether an interface should be 
> judged on the basis of 'arrogance' or 'snootiness'.  I truly enjoyed reading 
> the message.  Thanks for brightening my morning.
>   
Who said computerology had to be dry and boring?

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
1/4/2007 2:19:11 PM
On Thu, 04 Jan 2007 08:04:19 -0500, Thomas Gagne wrote:

> An exception would be 4GLs and 5GLs, but I'm having trouble finding a 
> satisfactory description of either.  The description I liked best came 
> from a book I can't remember the title too, which makes it difficult for 
> me to locate and quote.  In fact, the best example of a 4-and-a-half-GL 
> may actually be the application program--which is loaded with semantic 
> knowledge and humans use them everyday to accomplish tasks.
> 
> In the meantime I can summarize.  The author described 1GL as being 
> closest to the hardware and nGL as being closer to the _problem domain_ 
> (and presumably the human).

Which is obviously an unsatisfactory definition to me.

Consider this simple question, is the problem domain a property of the
language? There are two possible answers:

1. Yes, it is. Then, automatically, each language is ooGL (in its domain).
Consider a machine language applied for, well, a machine. According to this
definition MOV, JMP etc reach the highest end of the scale!

2. No, it is not. Then the n in nGL cannot be attributed to L, the
language. So, we cannot say which n has L=Assembler before seeing a domain.

> 5GLs, if we believe the paint-a-program vendors, are such because 
> they're graphically based and allow users to point-and-shoot their way 
> to icon-o-automation.  Many anthropologists and linguists agree that as 
> pretty a product as that may be, hieroglyphics are a step backward in 
> communication.  The vocabulary too nuanced and too broad, and symbols 
> are easily and frequently misinterpreted by humans.

Another problem with graphical interfaces (which, with all due respect, don't
deserve the status of a language) is that correctness verification, formal and
informal, is very difficult if possible at all, unit testing is a nightmare,
debugging is a catastrophe, etc.
 
> Heck, look at the challenge we've had coming up with international 
> traffic signs that drivers and pedestrians can understand, and someone 
> wants to make a programming language out of it?

People are very good at pattern recognition. The mistake is to equate
recognition and description. It is easy to express "no entry." It is
considerably more difficult to do a speed limit. In fact, the system fails
already here by resorting to numbers dressed up as signs.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
0
mailbox2 (6357)
1/4/2007 2:29:33 PM
Dmitry A. Kazakov wrote:
> On Thu, 04 Jan 2007 08:04:19 -0500, Thomas Gagne wrote:
> 
>> An exception would be 4GLs and 5GLs, but I'm having trouble finding a 
>> satisfactory description of either.  The description I liked best came 
>> from a book I can't remember the title too, which makes it difficult for 
>> me to locate and quote.  In fact, the best example of a 4-and-a-half-GL 
>> may actually be the application program--which is loaded with semantic 
>> knowledge and humans use them everyday to accomplish tasks.
>>
>> In the meantime I can summarize.  The author described 1GL as being 
>> closest to the hardware and nGL as being closer to the _problem domain_ 
>> (and presumably the human).
> 
> Which is obviously an unsatisfactory definition to me.
> 
> Consider this simple question, is the problem domain a property of the
> language? There are two possible answers:

Yes, but when we talk about the 'problem domain' we discuss it in 
context, and generally we are able to convey the meaning without too 
many contorted definitions.

1GLs are 'near the hardware' and as such not close to problem domains that 
are useful to a lot of people, like hydraulics, finance, discrete 
simulation, etc.

> 
> 1. Yes, it is. Then, automatically, each language is ooGL (in its domain).
> Consider a machine language applied for, well, a machine. According to this
> definition MOV, JMP etc reach the highest end of the scale!

 From accepted nomenclature, no, because most human problems are not 
solved directly in terms of MOVs and JMPs.

> 
> 2. No, it is not. Then the n in nGL cannot be attributed to L, the
> language. So, we cannot say which n has L=Assembler before seeing a domain.

YES! Mathematica or MatLab may be of use for some people, but for a 
civil engineer a structural analysis program would still be better, one 
where [s]he could enter data and get results in a way closer to 
her/his realm.

[snipped]

> 
> Another problem of graphical interfaces, with all due respect, they don't
> deserve status of a language, is that correctness verification, formal and
> informal were very difficult, if possible, unit tests is a nightmare,
> debugging is a catastrophe etc.

Not completely true. Certain graphical representations are quite amenable 
to a lot of mathematical analysis, using (for example, and not exhaustively) 
graph theory, Petri nets, etc.

0
csrabak (402)
1/4/2007 3:43:09 PM
On Thu, 21 Dec 2006 05:30:53 -0500, Thomas Gagne
<tgagne@wide-open-west.com> wrote:

>topmind wrote:
>> <snip>
>>
>> BTW, Microsoft has ADO, DAO, etc. which are OO wrappers around RDBMS.
>> Java and other vendors do also. Whether OO is the best way wrap RDBMS
>> calls is another debate. My point is they already exist.
>>
>> Further, even if OO *was* the best way to access RDBMS thru an app,
>> <snip>
>>   
>You're missing something.  I am not advocating wrapping the RDB with OO 
>stuffs.  I am not saying OO is the best way to access a 
>database--directly or through any of the frameworks mentioned above.  In 
>fact, I'm advocating the opposite.  Deal with the DB on its own terms, 
>but treat it as an object.  I'm recommending against accessing it using 
>its low-level interface (SQL), but instead that a higher-level 
>application/schema/problem domain-specific API be constructed, most 
>likely using procedures, and that applications should access the DB that 
>way.

In other words I think you are advocating creating a set of APIs that
encapsulate the database itself.  This is a standard practice in
object-relational systems where one needs to transpose from an OO
mechanism to a relational mechanism.

Most programmers would probably use the term 'the database layer' when
talking about that set of APIs that are generally application
specific.

Note that I use the term object-relational because to me a true OO
system doesn't have the concept of a database (making them sucky at
manipulating lists and tables) - it's for this reason that I hate talking
about relational systems in the same context as OO systems.  I much
prefer to treat them as two discrete entities with some access
mechanism in between.
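
To make "the database layer" concrete, a minimal sketch of such an
access mechanism (Python and sqlite3 purely for illustration; the
customers table and the method names are made up):

    import sqlite3

    class CustomerStore:
        """Application-specific access mechanism: callers ask for customers
        by name and get plain dicts back; the SQL stays inside this class."""
        def __init__(self, conn):
            self.conn = conn

        def find_by_family_name(self, family_name):
            rows = self.conn.execute(
                "SELECT id, given_name, family_name FROM customers "
                "WHERE family_name = ?", (family_name,))
            return [{"id": r[0], "given_name": r[1], "family_name": r[2]}
                    for r in rows]

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER, given_name TEXT, family_name TEXT)")
    conn.execute("INSERT INTO customers VALUES (1, 'Ada', 'Lovelace')")
    print(CustomerStore(conn).find_by_family_name("Lovelace"))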


0
foo_ (331)
1/6/2007 8:33:01 PM
On Wed, 27 Dec 2006 09:41:49 -0800, "Nick Malik [Microsoft]"
<nickmalik@hotmail.nospam.com> wrote:

>"Thomas Gagne" <tgagne@wide-open-west.com> wrote in message 
>news:D7mdne4mU5111BLYnZ2dnUVZ_smonZ2d@wideopenwest.com...
>> Nick Malik [Microsoft] wrote:
>>> <snip>
>>> That said, an RDBMS can present MANY interfaces to your code, not all of 
>>> which have to be presented through stored procs.  You could present 
>>> through views, for example, and still hide some of the details of your db 
>>> design.
>>>
>>> I would also say that the db presents the data for 'many' objects instead 
>>> of a single one.  Viewing the db as a single object begs the question: 
>>> what behavior are you encapsulating in it?
>>>
>> Objects are often composed of many other objects.
>
>Wow.  First statement has the first assumption I'd like to challenge.
>
>Your statement is true but misses the point.  In theory, everything is an 
>object.  Who cares?
>
>An object has a PURPOSE.  If you think of objects as nouns, you are not 
>designing your system very well.  Objects are 'things that encapsulate.'  In 
>other words, I express the definition of an object as something that "does" 
>something, rather than something that "is" something.  In my mind, the 
>distinction is important, because it allows me to use OO techniques and 
>pattern languages.  Moving from "is" to "does" is absolutely essential, 
>because there are a great many objects in systems that I have created whose 
>purpose has nothing to do with hiding data.  The encapsulation of logic, the 
>seperation of concerns, the creation of structure to allow for agility or to 
>reduce dependence, etc, these are reasons for the existence of an object 
>that extend far beyond the definition of an object that "is" something.

I would suggest that an object is a 'thing that does something', to
use an old marketing phrase.   To me, in programming, one views an
object by the set of functionality offered by its public interface; the
rest (protected and private interfaces) really should be ignored.

I might add that if one looks at a distributed system, then a database
is just an object (entity) that is located somewhere in the system
and, there may be more than one separate database in that system (for
example, one in each country).   From an application programming point
of view, one can treat all of the separate databases as one single
object (one issues a query and some middleware figures out which
database entity handles the request) or one can deal with individual
databases (for example, one is looking up a local customer).  

The result in any case is that one can treat the databases as objects
because at the end of the day, they are just 'things that do
something'.

A classic example of this is when building CORBA based systems (which
I have mentioned in other threads) where an object is represented in
both the client and the server at the same(ish) time.  That is, one
calls the method on a proxy object on the client and it is executed by
the 'servant' (object implementation) on the server transparently,
without the user needing to know where the server is located (or even
if one exists since they can be started on demand).

When one combines this mechanism with a database, it makes the
database transparent to the system. From the the point of view of the
programmer, one can just access data (or more probably the behavioral
functions) as if it is there (if its not, then the appropriate
exceptions occur).

(one of the drawbacks with CORBA though is that I notice people tend to
use it to just get/put data in a database, which is not really what I
think it's about since it ignores the whole concept of RMI).

>Therefore, if we look at the (business-specific) database as an object, we 
>start to see that the database "is" a representation of the data model 
>useful for managing a consistent and efficient representation of the 
>information that our business collects.
>
>However, if we start to ask: what does this data do... we run aground.  It 
>doesn't do anything.  The relational model doesn't let it.  It is. 
>Therefore, to move to the use of an object that "does", we must reframe our 
>representation, and view the data not in terms of the relations and tuples 
>that Codd described and Date and Darwen elaborated for us, but rather in 
>terms of the activities that the business needs to perform on it.

For the non-OO people, I would suggest this shift from viewing the
data model to viewing the business model is why there is usually a
'translational layer' in an object-relational system.  One often needs
to convert the data from its raw form to a more usable form (a logical view)
that can be used within the application.
>
>> My database object is no different.  Primarily, through stored procedures 
>> the DB has methods and projections.
>
>The term DB is a bit confusing.  From your earlier posts, you appear to be 
>referring not to a generic database but to a database that represents a 
>specific data model.  If it is OK with you, I'll refer to this concept as a 
>'SpecificDatamodel' object, rather than a DB.
>
>As I pointed out in other places, you can view these methods as attached to 
>the 'SpecificDatamodel' object, but not to the elements within that specific 
>data model, even though the methods clearly apply only to specific elements. 
>This is not wrong per se, but it is problematic in some respect.  I've 
>worked with databases that have hundreds of tables and thousands of stored 
>procedures.  I would have a difficult time considering a single object with 
>a thousand methods and such a complex data representation mechanism as a 
>single object.
>
>In addition, I would have no way to describe to someone what the object 
>'does' or better yet, what it 'encapsulates.'

I think this is just a level of granularity.  One might say, "that's
the customer database" or "the call database" at the highest level of
granularity.  At a lower level, one might need to say, "that part of
the customer database handles the msdn/customer relationships," etc.

Humans are pretty good at handling abstractions (especially if the
target audience is highly context-aware).



0
foo_ (331)
1/6/2007 9:24:06 PM
Cesar Rabak wrote:
> topmind escreveu:
> > Cesar Rabak wrote:
> [snipped]
> >>>>> But how would you use this new function without code changes?
> >>>>> Our example is something an end user configures; no code change required.
> >>>> rs = query("select * from formulaTable where $criteria");
> >>>> while (row = nextRow(rs)) {
> >>>>    result = eval(row["formula"]);
> >>>> }
> >> In the fragment above the procedural nature of the query language does
> >> not allow the Database Engine to make a critic if the '"formula"' is
> >> applicable to the 'result', right?
> >
> > If you mean validate it, no, it does not. Better error handling could
> > probably be added, depending on the language. But the smalltalk version
> > wouldn't either.
>
> I not writing about 'validating' in the sense you were discussing with
> Thomas, but rather get automatic help if the object 'result' selected
> would not be apropriate for the formula.
>
> Since the SP is written in a typeless language, it is not impossible you
> make a calculation which may be meaningless.

This is a very language-dependent issue. The ability to evaluate,
verify, etc. expressions  entered by users varies widely and has little
to do with OOP.
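
For example, in Python one could verify a stored formula before
evaluating it by walking its syntax tree and rejecting anything but
arithmetic on known operands (a sketch only; the helper below is made
up, and other languages make this kind of check easier or harder):

    import ast

    ALLOWED = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Name, ast.Constant,
               ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub, ast.Load)

    def check_formula(text, operands):
        """Return True if the formula parses and uses only arithmetic and the
        given operand names; otherwise report why it was rejected."""
        try:
            tree = ast.parse(text, mode="eval")
        except SyntaxError as err:
            print(f"rejected {text!r}: {err}")
            return False
        for node in ast.walk(tree):
            if not isinstance(node, ALLOWED):
                print(f"rejected {text!r}: {type(node).__name__} not allowed")
                return False
            if isinstance(node, ast.Name) and node.id not in operands:
                print(f"rejected {text!r}: unknown operand {node.id!r}")
                return False
        return True

    print(check_formula("a - b", {"a", "b"}))          # True
    print(check_formula("__import__('os')", {"a"}))    # rejected, returns False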

-T-

0
topmind (2124)
1/6/2007 9:56:40 PM
> For the non-oo people I would suggest this transfer from viewing the
> data model to the business model is why there is usually a
> 'translational layer' in an Object-relational system.  One often needs
> to convert the data from its raw form to a more usable (logical view)
> that can be used within the application.

Why is a network graph more usable and logical than predicates? In what
way are predicates "raw data"? 

/Fredrik B

0
frebe73 (444)
1/7/2007 5:50:36 AM
AndyW wrote:
> <snip>
> I much
> prefer to treat them as two discrete entities with some access
> mechanism in between.
>   
That's basically what I'm advocating programmers do--stop trying to make 
their database and object models identical, stop pretending they 
/should/ be the same, and respect each model's features and exploit them.

While doing that in our own application I recognized what we'd done to 
accomplish that as being OO-inspired.  We'd created an interface to our 
database through its stored procedures.

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
1/8/2007 4:49:01 PM
> Note that I use the term object-relational because to me a true OO
> system doesnt have the concept of a database (making them sucky at
> manipulating lists and tables) - its this reason why I hate talking
> about relational systems in the same context of OO systems.  I much
> prefer to treat them as two discrete entities with some access
> mechanism in between.

In your true OO system, how would you find, let's say, an employee
with a given familyname? Are you going to have a familyname object with
references to all employees with that familyname? Or do you want to use
predicate logic to find them? 

/Fredrik B

0
frebe73 (444)
1/8/2007 6:26:18 PM
Thomas Gagne wrote:

> While doing that in our own application I recognized what we'd done to
> accomplish that as being OO-inspired.  We'd created an interface to our
> database through its stored procedures.

"Tell, don't ask", as the OO mantra runs...

    -- chris


0
chris.uppal (3980)
1/8/2007 7:13:20 PM
On Mon, 8 Jan 2007 19:13:20 -0000, "Chris Uppal"
<chris.uppal@metagnostic.REMOVE-THIS.org> wrote:

>Thomas Gagne wrote:
>
>> While doing that in our own application I recognized what we'd done to
>> accomplish that as being OO-inspired.  We'd created an interface to our
>> database through its stored procedures.
>
>"Tell, don't ask", as the OO mantra runs...
>
>    -- chris
>
I've always wondered who thought up the 'tell, don't ask' principle.  
I'm not quite sure I entirely agree with it although I don't disagree
with it either.

To me "tell, dont ask implies tight coupling since the 'tell' part is
usually implemented as a function call from one class to another in
the form of HeyClass.GoDoThis().

My rule has always been  "Let resources find your, dont find them"
which instead of having tightly coupled objects ends up with a more
loose coupling in the form of  PostMessage("CanSomeonePleaseDoThis").

Obviously there is a time and place for each method, but I wouldn't
treat the 'tell, don't ask' rule as a hard and fast rule.
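
A minimal sketch of the two styles (illustrative Python; the class,
queue, and message names are made up): the first couples the caller to a
concrete collaborator, the second just posts a request and lets whichever
worker is listening pick it up.

    import queue

    # Style 1: "tell, don't ask" as a direct call -- the caller must hold a
    # reference to a concrete collaborator and know its interface.
    class ReportPrinter:
        def print_report(self, name):
            print(f"printing {name}")

    class TightlyCoupledClient:
        def __init__(self, printer):
            self.printer = printer          # direct dependency
        def run(self):
            self.printer.print_report("month-end")

    # Style 2: "let resources find you" -- the caller only posts a message;
    # any worker subscribed to the queue may service it.
    requests = queue.Queue()

    def post_message(request):
        requests.put(request)

    def worker():
        while not requests.empty():
            print(f"someone handled: {requests.get()}")

    TightlyCoupledClient(ReportPrinter()).run()
    post_message("CanSomeonePleaseDoThis")
    worker()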

AndyW
0
foo_ (331)
1/8/2007 9:45:28 PM
On Tue, 09 Jan 2007 10:45:28 +1300, AndyW wrote:

> On Mon, 8 Jan 2007 19:13:20 -0000, "Chris Uppal"
> <chris.uppal@metagnostic.REMOVE-THIS.org> wrote:
> 
>>Thomas Gagne wrote:
>>
>>> While doing that in our own application I recognized what we'd done to
>>> accomplish that as being OO-inspired.  We'd created an interface to our
>>> database through its stored procedures.
>>
>>"Tell, don't ask", as the OO mantra runs...
>>
>>    -- chris
>>
> I've always wondered who thought up the 'tell, dont ask' principle.  

Military, I suppose. Fire and forget missiles follow that principle. Not
that you could create great things using them...

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
0
mailbox2 (6357)
1/9/2007 8:59:48 AM
Hello Andy,

"AndyW" <foo_@bar_no_email.com> wrote in message 
news:li20q215v48tkjtnrdfn4i8kkodos779q1@4ax.com...
> On Wed, 27 Dec 2006 09:41:49 -0800, "Nick Malik [Microsoft]"
> <nickmalik@hotmail.nospam.com> wrote:
>
>>"Thomas Gagne" <tgagne@wide-open-west.com> wrote in message
>>news:D7mdne4mU5111BLYnZ2dnUVZ_smonZ2d@wideopenwest.com...
>>> Nick Malik [Microsoft] wrote:
>>>> <snip>
>>>> That said, an RDBMS can present MANY interfaces to your code, not all 
>>>> of
>>>> which have to be presented through stored procs.  You could present
>>>> through views, for example, and still hide some of the details of your 
>>>> db
>>>> design.
>>>>
>>>> I would also say that the db presents the data for 'many' objects 
>>>> instead
>>>> of a single one.  Viewing the db as a single object begs the question:
>>>> what behavior are you encapsulating in it?
>>>>
>>> Objects are often composed of many other objects.
>>
>>Wow.  First statement has the first assumption I'd like to challenge.
>>
>>Your statement is true but misses the point.  In theory, everything is an
>>object.  Who cares?
>>
>>An object has a PURPOSE.  If you think of objects as nouns, you are not
>>designing your system very well.  Objects are 'things that encapsulate.' 
>>In
>>other words, I express the definition of an object as something that 
>>"does"
>>something, rather than something that "is" something.  In my mind, the
>>distinction is important, because it allows me to use OO techniques and
>>pattern languages.  Moving from "is" to "does" is absolutely essential,
>>because there are a great many objects in systems that I have created 
>>whose
>>purpose has nothing to do with hiding data.  The encapsulation of logic, 
>>the
>>seperation of concerns, the creation of structure to allow for agility or 
>>to
>>reduce dependence, etc, these are reasons for the existence of an object
>>that extend far beyond the definition of an object that "is" something.
>
> I would suggest that an object is a 'thing that does some thing" to
> use an old marketing phrase.   To me in programming, one views an
> object by the set of functionality offered by its public interface the
> rest (protected and private interfaces) really should be ignored.

Not dissimilar from my two statements: objects are 'things that encapsulate' 
and an object 'does' something rather than an object 'is' something.  Your 
wording is more understandable, though.  :-)


> I might add that if one looks at a distributed system, then a database
> is just an object (entity) that is located somewhere in the system
> and, there may be more than one separate database in that system (for
> example, one in each country).

I cautioned Thomas and I will caution you as well... the database is managed 
by a relational database management system and there is a networking 
interface to that RDBMS.  However, the network 'listener' made available by 
the RDBMS does NOT have to be the only way to partition a database, and I 
argue that it is not the best way, in a distributed system, to partition a 
database.  In other words, the fact that our RDBMS offers a networking 
interface does NOT imply that this is the correct place to partition 
persistence.

Interestingly, both Thomas and I agree that we need to 'cover' the database 
with a layer of code designed to add context, defend rules, validate the 
data (both values and meaning), etc.  This layer of code, plus the database, 
is the Semantic Persistence Store.

The distinction that I draw is that I believe that the code portion of the 
SPS should be written in an OO language and made available as services while 
Thomas argues that it should be written in a data-oriented language and made 
available as stored procedures over a db-networking interface (like dblib or 
ODBC).

In your distributed model, where a database can be distributed across 
geographies, you nearly always need a layer of code to either asynchronously 
route the request or allow multiple (distant) db connections.  That layer of 
code, I argue, belongs INSIDE the semantic persistence store.

> From an application programming point
> of view, one can treat all of the separate databases as one single
> object (one issues a query and some middleware figures out which
> database entity handles the request) or one can deal with individual
> databases (for example, one is looking up a local customer).

The rules for deciding (local vs. remote) should NOT be in the application. 
They should be in the middleware.  That allows the decision to change 
without affecting the code of the app, but possibly affecting its 
throughput, performance, and reliability (hopefully to allow a business to 
rebalance the priorities for these based on changing business needs).
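
A minimal sketch of what "in the middleware" means here (illustrative
Python; the routing rule, class names, and data are all made up): the
application calls one lookup function, and the routing object decides
which database services it, so the rule can change without touching
application code.

    class Router:
        """Middleware-style routing: picks a database by a key range.
        The application never sees this rule."""
        def __init__(self, local_db, remote_db):
            self.local_db = local_db
            self.remote_db = remote_db

        def database_for(self, customer_id):
            # Hypothetical rule: ids below 10000 live in the local database.
            return self.local_db if customer_id < 10000 else self.remote_db

    def find_customer(router, customer_id):
        # The application-facing call: no knowledge of where the data lives.
        return router.database_for(customer_id).get(customer_id)

    local = {42: "local customer"}
    remote = {98765: "customer in another country"}
    router = Router(local, remote)
    print(find_customer(router, 42))
    print(find_customer(router, 98765))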

>
> The result in any case is that one can treat the databases as objects
> because at the end of the day, they are just 'things that do
> something'.

I would agree that a semantic persistence store is an object in this sense. 
A database, by itself, is not.

>
> A classic example of this is when building CORBA based systems (which
> I have mentioned in other threads) where an object is represented in
> both the client and the server at the same(ish) time.  That is one
> calls the method on a proxy object on the client and it is executed by
> the 'servant' (object implementation) on the server transparently)
> without the user needing to know where the server is located (or even
> if one exists since they can be started on demand).

no argument there.

>
> When one combines this mechanism with a database, it makes the
> database transparent to the system. From the the point of view of the
> programmer, one can just access data (or more probably the behavioral
> functions) as if it is there (if its not, then the appropriate
> exceptions occur).

no argument there either.

>
> (one of the drawbacks with corba tho is that I notice people tend to
> use it to just get/put data in a database which is not really what I
> think its about since it ignores the whole concept of RMI).
>

Corba is not alone in this.  Any technology that invites a discussion about 
partitioning usually exposes the rampant ignorance of system coders who 
don't understand the advantages and careful considerations that go along 
with making good choices about where to partition the logic.

>>As I pointed out in other places, you can view these methods as attached 
>>to
>>the 'SpecificDatamodel' object, but not to the elements within that 
>>specific
>>data model, even though the methods clearly apply only to specific 
>>elements.
>>This is not wrong per se, but it is problematic in some respect.  I've
>>worked with databases that have hundreds of tables and thousands of stored
>>procedures.  I would have a difficult time considering a single object 
>>with
>>a thousand methods and such a complex data representation mechanism as a
>>single object.
>>
>>In addition, I would have no way to describe to someone what the object
>>'does' or better yet, what it 'encapsulates.'
>
> I think  this is just a level of granularity.  One might say, "thats
> the customer database" or the "call database" at the highest level of
> granularity.  At a lower level, one might need to say,  "that part of
> the customer database handles the msdn/customer relationships) etc.
>
> Humans are pretty good at handling abstractions (especially if the
> target audience is high context aware).


agreed.  However, Thomas was making a case that the database, of and by 
itself, can implement a Semantic Persistence Store that exposes an interface 
that is roughly equivalent to a single large OO object.  I called out that a 
single object is not an appropriate thing to do, because OO doesn't work in 
the shape of a single object... it allows (and some may say requires) the 
developer to make their logic more specific in order to seperate the 
concerns that different objects have.  By creating a single object, you have 
the drawback that all methods fall into the same objects' list of methods 
(e.g. all stored procedures callable from a single object interface).  This 
is the drawback that I was discussing above.

It is not salient to point out that humans are good at abstractions.  The 
code written by humans is not, and it is the ability of code to behave at a 
level that allows humans to let those 'abstraction muscles' rest... that is 
what makes our systems more maintainable and less error prone.



-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
-- 


0
nickmalik (325)
1/9/2007 3:12:43 PM
"Thomas Gagne" <tgagne@wide-open-west.com> wrote in message 
news:BoWdne5kX_XLJgfYnZ2dnUVZ_tmknZ2d@wideopenwest.com...
> Nick Malik [Microsoft] wrote:
>> <snip>
> We're on the same page at this point, and agree on the advantages.  I 
> might add more but they wouldn't further our mutual understanding enough 
> to keep them off the cutting room floor.
>> Disadvantages:
>> D1) The command interface (stored procedure names and parameters) that 
>> needs to be invoked by OO programming modules has no inherent support for 
>> OO concepts.  Therefore, while naming conventions and good practices can 
>> make the interface understandable, the listener will not validate the 
>> incoming message to insure that the proper method was called for the 
>> desired activity.
> While you're correct in that SQL is as unaware of OO concepts as it is of 
> my C code's memory management, it doesn't mean the procedures don't have a 
> dependence between them that provides a "correct" order.
>
> For example, before transaction detail can be added a transaction must 
> exist.  Before a transaction exists a valid session (recording the users 
> id, login time, application, etc.) must also be available.

<<code clipped>>

> addTransactionDetail requires a valid transactionKey, which can only be 
> gotten from a successful call to addTransaction.  addTransaction can only 
> be called with a valid sessionKey, which it checks to make sure the 
> session is valid (user is still logged on, the database is in read-write 
> mode, etc.).  So even if it knows nothing of OO data graphs the order of 
> the stored procedure calls is not random or arbitrary.
>
> There are other semantic rules buried in the procedures that are not 
> obvious.  For instance, each account is allowed only a single currency, 
> and all transactions (including their detail) must be in a common 
> currency.  Valid accountKeys are also required as are transaction names, 
> etc.
>
> I'm not so sure I would call this a disadvantage as much as I would 
> consider it the nature of the beast.
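
(For concreteness, the call ordering described above might look roughly like
the sketch below.  This is purely illustrative -- it is not the clipped code,
and the table, column, and parameter names are invented for the example.)

-- illustrative Transact-SQL only; the 'sessions' and 'transactions' tables are assumed
create procedure addTransaction
    @sessionKey     int,
    @accountKey     int,
    @name           varchar(40),
    @transactionKey int output
as
begin
    -- refuse to proceed unless the session is still valid
    if not exists (select 1 from sessions
                   where sessionKey = @sessionKey and loggedOut is null)
        raiserror('invalid or expired session', 16, 1)
    else
    begin
        insert into transactions (sessionKey, accountKey, name)
        values (@sessionKey, @accountKey, @name)
        set @transactionKey = scope_identity()
    end
end

-- addTransactionDetail would make the same sort of check against
-- @transactionKey before inserting a detail row.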

It is interesting to note that you draw out, as an example of a good reason 
to use data-oriented language for the SPS, an example of maintaining state 
between method calls.  You clearly need to maintain state between your 
calls, and you go to some lengths to not only store the state but validate 
it on the next call.  In an OO language, state is maintained by the 
existence of the object, and requires no extra effort on the part of the 
programmer.  This is one example of where an OO language, as an interface, 
is superior to a data-oriented language.

In addition, the disadvantage that I was pointing out had nothing to do with 
the inherent dependency on one call to be made after another.  It had more 
to do with the fact that, in an OO system, I want the objects themselves to 
be independent of one another.  This allows one object to change without 
affecting another.  However, in a Semantic Persistence Store that is exposed 
by a database language like Transact SQL stored procs, all of the methods of 
all of the objects are in the same 'super object' which you started this 
thread by describing as a single object.  It is a single object... and 
that's the problem.  I'm no happier with a design where a single object has 
500+ methods that are exposed by database technology than I would be with a 
design where the same 500+ methods were exposed by an OO language.  It is 
bad design not to separate the objects to allow one to change without 
affecting the other.

Granted, careful programming can allow the stored proc that produces the 
result of one method call to change without affecting the results of any 
other stored procs.  This is completely true.  You can write good code in 
any language.  OO is not required.  That said, OO is there to help you, to 
make it easier to maintain, because the person who maintains your code may 
not be so 'clear' on the responsibilities of the object as you, the original 
designer, are.  They NEED the help.  A stored proc interface to the Semantic 
Persistence Store does not provide that help.  That is a problem.  It can be 
solved by using an interface that is written in an OO language and exposed 
through services.


>> D2) As stated above, plumbing code must exist on the OO side to frame up 
>> the message and its results in the internal data model of the 
>> application. However, the command interface for this particular 
>> implementation of the SPS is based on good practices and naming 
>> conventions.  It is not based on repeatable and enforced (mathematical) 
>> principles.  Therefore, the plumbing code must be carefully hand crafted 
>> and tested.  It is a labor-intensive task that is very difficult to 
>> automate.
>>
> I agree it must be carefully implemented to be useful, but not that it is 
> labor-intensive.  If you look at the SQL you can see that it has a 
> structure.  That structure is the transaction, and a transaction can be 
> modeled inside OO very easily.  Detail objects can be added to the 
> transaction object very easily.  When complete, the transaction object can 
> be asked to express itself in SQL, or told to put itself onto the SPS, for 
> which it can also be prepared to handle exceptions properly.

This is interesting.  Not sure if we have veered off course here.  I thought 
that you were arguing that a stored procedure interface was the only way to 
access the database from OO code, and that doing so provides an 
object-method paradigm for understanding the code on the OO side.  If this 
is the case, are you suggesting that OO code WRITES a stored procedure on 
the fly?  during design, using controls?  using a domain-specific language 
and a software factory?  If I understand your "point of automation," I can 
better understand where you want to see this go.

>
> It is in doing this that an OO system can easily interface with a 
> relational database without mapping OO datagraphs to relational 
> projections or tuples.

and that is where you lost me.

> It should not go unappreciated that the SQL above has no OO baggage with 
> it, and can just as easily be created by C, COBOL, FORTRAN, or by hand.
>
> And remember, when used manually there is no plumbing code necessary.  As 
> this might be the first interface client the SPS has this is an important 
> feature.  In the alternate model you describe below (way below) there is 
> no human-accessible interface to the SPS.
>> D2-alternate)

I disagree.  Web services are directly callable by a browser.  I'd suggest 
that a web service is MORE user-friendly than a stored proc interface 
because (a) once you pick the service that you want to call, the methods 
available on that service can help you decide if you picked the right one... 
if the method you are looking for doesn't appear, you go back and pick 
another.  (b) access to the stored proc interface requires a client tool. 
For a web service interface, access can be provided by a browser and still 
be secured through solid and tight security.

>> If the SPS does not offer up a single canonical model for consumption, 
>> but rather offers up different models to make it easier for application 
>> developers to consume the data using objects that are easy to map, then 
>> the SPS is likely, over time, to offer up a large list of interfaces. 
>> (application-specific stored procedures).  The plumbing layer still 
>> exists... it just exists in SQL instead of C# (or Java).  Some teams 
>> prefer this model.  I don't have evidence that this mechanism is better 
>> or worse than the other.
>>
> I'm unsure if I understood this one.

I was offering an alternative disadvantage, depending on coding style. 
Clearly, you agree that this disadvantage represents a poor practice.  No 
reason to beat this horse.


> At first I thought you might be suggesting the SPS would be creating 
> client-friendly procedures for each of the client environments it may have 
> to support.  This sounds like a kind of interface entropy.
<<clip>>
>
> If I understood your point, I would not include it among the model's 
> disadvantages since it is not a necessary evil.  It would instead be a 
> self-inflicted wound which makes it little different from the other 
> mutilations programmers are responsible for creating themselves, and 
> there's no limit to the number or kinds of these in everyone else's 
> code--but not yours or mine.

yep.  We are on the same page.  No disagreement there.  (nice to see, BTW. 
Not everyone understands the need to specifically manage this interface). 
Drop this part of the discussion.  It was, as I noted, an alternate 
depending on a coding style and approach that both of us reject.

>> D3) The underlying language, on the db side, is not object oriented. 
>> Thus, even if the stored procedure interface has been carefully crafted 
>> to resemble and reflect OO thinking, the code written in these procedures 
>> is not 'defended' or 'supported' by the OO paradigm.  Therefore, the 
>> conceptual hierarchy carefully crafted by the application developer 
>> cannot be used in the persistence of the data, even when the models 
>> match.  As a result, every data object must have two bodies of code to 
>> interpret it, one in the OO code and one in the SQL code.  This is 
>> inefficient in terms of human as well as code resources.
>>
> Why would the SPS' procedures require defending by a 
> paradigm-in-execution?  Why should the presence of OO applications require 
> anything of the SPS beyond the same provision for other paradigms like 
> structured, aspect, or functional?

The point is that all data interfaces require defending, but that you have 
two paradigms for making that defense (SQL and OO) and therefore have to 
make that defense in two different languages.  This makes your maintenance 
effort go up.

>
> How much time do programmers spend defending a Date object?

depends on how bad the developer is :-).  Date is not a good example because 
it is provided by all major frameworks.  Perhaps an 'InventoryItem' object 
is a better example here.  In the db, using your method, you will have 
methods for things like 'GetInventoryItembyID' and 
'PullFromInventoryForOrder' and 'RecordInventoryCycleCount'.  Each needs 
code to validate things.  Doing so requires that the code has access to some 
element data, like InventoryID number and security token, etc.  The code for 
getting access to this has to be written in a DB-specific language.  I'm 
suggesting that you have two languages: the OO calling language and the 
DB-specific receiving language, and that requires understanding of both to 
maintain the interface.  If you create an OO-based service layer, and use 
ORM mapping to generate access directly to SQL tables, then there is only 
one language, be it C# or Java or whatever is your OO passion, in which the 
developer must be trained.
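
To make that concrete, a proc of the kind I mean might look roughly like the
sketch below (the table, columns, and checks are invented for the example; the
point is only that the validation lives in the DB-specific language while the
OO caller has to restate the same expectations in its own language):

create procedure PullFromInventoryForOrder
    @inventoryID   int,
    @orderID       int,
    @quantity      int,
    @securityToken varchar(64)
as
begin
    -- validation written in the DB-specific language
    -- (order and security-token checks omitted from the sketch)
    if not exists (select 1 from inventoryItems
                   where inventoryID = @inventoryID)
        raiserror('unknown inventory item', 16, 1)
    else if @quantity <= 0
        raiserror('quantity must be positive', 16, 1)
    else
        update inventoryItems
        set    quantityOnHand = quantityOnHand - @quantity
        where  inventoryID = @inventoryID
end

The C# or Java caller has its own notion of an InventoryItem, and the two have
to be kept in agreement by hand -- which is the maintenance cost I'm describing.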

Note: I do not feel that using multiple languages is a major cost, nor do I 
believe that you can, or should, do without data-specific procedures (stored 
procs) to perform operations.  I am stating, however, that the primary 
access to code can be done in OO language, rather than SQL, just as well.

>> D4) the choice of the technology (SQL and stored procs) to implement the 
>> SPS means that a SQL-oriented software package must be installed in the 
>> application's environment.
> True, but stored procedures are just one way to implement a SPS' 
> interface.

Correct me if I'm wrong, but isn't that MY point?  :-)

> Rather than discuss how those are implemented (they're conceptually as 
> simple as procedures but are attended by other distracting nuances) for 
> the moment I'd like to stick to relational databases that support 
> procedures (mindful that MySQL just recently added these).

I'm trying to draw out that a service-oriented interface to an SPS does NOT 
require the installation of a SQL-oriented package while at the same time 
allowing the SPS to fulfill the same role in a distributed architecture as 
if a SQL-oriented package were actually installed.  That is the reason for 
listing the 'disadvantage' in my list above.  It was not to attack SQL 
oriented packages.  Personally, I think that these systems are the most mature 
and effective answer to the problem of large-scale distributed data 
persistence and availability.  There are cases where they are not useful, 
however, and in those cases, it is good to be able to deliver on the same 
design.

>> <snip description of alternate>
> Much of what you're describing isn't conceptually different.  You've 
> constructed an interface between your applications and your database, and 
> that interface combined with your database is your SPS.
>
> The biggest difference seems to be that the model's interface, being either OO 
> or XML-based (not to mention Windows-based), is arrogant (the 
> interface--not you!).  It prefers to speak only with applications that 
> share its vocabulary and pedigree.  Lower-level languages or manual 
> entries are not permitted to eat with the grown-ups.

Manual entries, as stated above, are easily performed over a browser 
interface.  "Lower level" languages can communicate with web services, 
although it may not be that easy to do.  I would suggest that it is as 
difficult for a C program to communicate with SQL Server over dblib() as it 
is for a C program to communicate with a web service using one of the many 
existing libraries developed for exactly that purpose.  There is no lack of 
'permission.'

>
> Taking your example a little further, why not just use an ODBMS?  The 
> interface has already created an exclusive club of clients, why not use an 
> exclusive database management system (XDBMS)?

Perhaps I should ask you that question.  You made the statement early on 
that you felt that the stored procedure interface provides a set of 
'methods' for the 'database object'.  I stated that other languages would do 
a better job if you moved the interface to services, and that objects don't 
have a flat address space for all methods.  Suggesting that I move from one 
object technology to another doesn't invalidate, or necessarily address, my 
point.  That is: I do not believe that a stored procedure interface makes a 
strong or supportable object-level interface, so it should not be 
described as such, and perhaps it is time to completely drop stored 
procedures as the *primary* interface to the relational database, using 
instead a service-oriented approach in an OO language that takes advantage 
of an ORM layer to make the calls.

>
> Granted, that exclusivity has value (I tried it once with an ODBMS) but 
> one of the problems (aside from performance) was reporting--but we'll save 
> that for another thread.

Note: I never said I wanted to get rid of the relational storage model or 
the mature reporting tools that live on it.  I just wanted to represent the 
business interactions in object-oriented mechanisms, and to point out that there are 
efficient ways to do that in an environment that has a relatively low number 
of stored procedures.

>> Now, in the code within the SPS, I still need to have an OO module call 
>> the underlying database tables.  However, in this model, I can easily 
>> constrain the number of places where a particular column is referenced, 
>> and I can fairly readily constrain things like the pk/fk relationships. 
>> In fact, much of the actual code can be generated, rather than 
>> hand-written.  In this, very narrow and constrained place, in the OO 
>> modules that form the interface to my database, an O/R mapping tool can 
>> be safely and appropriately used, resulting in considerably less code to 
>> create, debug, and maintain.
>>
> Doesn't that defeat, or at least deflate, the value of an OR mapping tool 
> or framework?  With only a single client OR tools have little-or-no bang 
> for the buck.  It's like getting a pellet gun for the purchase price, 
> training, maintenance, and servicing fees of an Apache helicopter.  It 
> seems disproportionate to me.

Simple OR mapping is free.  OO developers, using the .Net framework, already 
use it without, in many cases, being aware of it.  (Simple OR mapping is 
built in to the dataset object interface... many folks don't even know it is 
there).

In this case, I'd update your comparison to say that it is more like buying 
a hybrid Toyota Prius for the same price as a gasoline-only Ford Focus.

>> <snip>
>>
>> The services interface is implemented using a base class that each of the 
>> objects inherit from.  Therefore, in addition to using a scalable 
>> communication technology like WCF, we can be sure that basic standards 
>> will be followed and that specific methods are available in every 
>> service.
>>
> That's what I mean by arrogant.  Aloof is perhaps a better word.  It 
> doesn't carry the pejorative baggage arrogant does.
>
> In my pedestrian way of thinking, a different way of doing or thinking 
> about things shouldn't require exotic ingredients or access to private 
> clubs.

Then the requirement that you use ctlib or dblib or ODBC or System.Data 
objects should fall away?  What do you think is under that proprietary 
data-oriented network interface!  Exotic (your words) ingredients!  OO 
interfaces are MORE standard, not less standard.

> C programmers are able to create OO-inspired interfaces to their systems 
> without resorting to OO languages (ctlib and dblib come to mind). 
> Resourceful programmers ought to be able to assemble them from whatever is 
> lying around and the end result should reflect its humble beginnings and 
> avoid becoming presumptuous (I'm looking hard for synonyms to arrogant).

You are assuming that any network access can be done without a library of 
some kind?  I'm not sure what you are saying here, because all programmers, 
whether they are writing in C or PERL or C# or Java are using the underlying 
capabilities of the system or associated libraries to actually make a call 
to the SPS, whether that library is DBLib or System.Data or System.Web, it 
is still a library.  No one calls a database like Sybase by writing directly 
to the TCP port.  (It is entirely possible to do this with web services, 
BTW, but no one bothers... the libraries are easier).

>
> This is why I think stored procedures in front of a DB remain the simplest 
> and best example of treating a database as an object, 
> or as we have started to call it, a Semantic Persistent Store (SPS).
>> <snip advantages>
>>
>>
>> Disadvantages:
>> D1) the listening mechanism is going to be less mature and will likely be 
>> implementation specific.  It could be based on an ESB (on JMS) or perhaps 
> SOAP, but it will surely be less ubiquitous, and may be less scalable, 
>> less mature, and may have defects that have not yet been found and 
>> resolved.
>>
> Let's assume you're at least using mature message-oriented-middleware so 
> network nonsense isn't an issue.  If your SPS component is the only 
> pathway in and out it is likely to mature fairly quickly as every client 
> application in the system will be using it.  So, though you may list this 
> as a disadvantage, half of it is eliminated with MOM, and the other half 
> disappears over a fairly short period of time.

agreed

>> D2) The team that maintains the SPS will need to know at least two 
>> languages, SQL and C# (or SQL and Java), in order to maintain this 
>> component.  From a resource management perspective, this adds a few 
>> wrinkles into the mix.  How many of each do you need?  If your needs 
>> change over time, will you be ready to respond?  How many should be 
>> cross-trained?  That said, the fact that our staffing model would need to 
>> be updated doesn't invalidate the design.  It does mean that we have to 
>> practice good change management principles, though, to bring this about.
>>
> But this problem already exists in many shops and isn't unique to this 
> model.  Nor does this model necessarily exacerbate the problem.  The shops 
> that probably most need to worry more than others are those with separate 
> (and antagonistic) database and systems programming groups--but those 
> shops already have other problems that aren't unique to this model.

agreed

>> D3) Service-oriented tools are fairly immature, and do not yet allow a 
>> rich paradigm for inspection, testing, and auditing of the activities of 
>> the SPS. This is rapidly changing, but has not reached the level of 
>> sophistication of the DB-oriented model just yet.
>>
> I don't think they will, because in making things easier for computers to 
> use and/or consume (like XML) those structures are increasingly difficult 
> for humans to read.

That is just it.  Humans don't read the XML any more than humans read 
datasets.  XML is interpreted into a report, either automatically by the 
browser or using code and stylesheets.  The only humans that read XML are 
system developers, and they have code interfaces that are as efficient 
regardless of the communication paradigm.


> Think of sendmail.cf, and how that may be really easy for sendmail to 
> parse, but have you ever tried creating one by hand or even describing to 
> a fellow programmer /exactly/ what's going on inside there?

never had to, but I've certainly had my fair share of //really complex// 
configuration files.  Example: the open-source Microsoft Enterprise Library 
has a config file that is so complex, that it ships with a client tool for 
entering data into it.

> As you indicated, we agree more than we disagree.  I think we both 
> understand and recognize the concept and the model, even if I think yours 
> is snootier than mine. ;-)

I think yours reflects a type of thinking that is both common (among data 
folks) and procedural.  My only disagreement on this thread has been on the 
notion that a stored proc interface forms a useful 'object method' 
interface.  My point is that it is not useful in the OO sense, and that more 
useful ones are available, and that using them allows us to call into 
question the value of using a stored procedure interface at all.


>
> -- 
> Visit <http://blogs.instreamfinancial.com/anything.php> to read my rants 
> on technology and the finance industry. 


0
nickmalik (325)
1/9/2007 4:16:39 PM
I was re-reading some of the messages under the subject "Databases as
objects" and ran across something Frans Bouma wrote:

Frans Bouma wrote:
> Thomas Gagne wrote:
>
>
>   
>> But to the point, if a program was able to store improperly-formatted
>> zipcode inside the DB then whose fault is that?  
>>     
>
> 	what's an improperly formatted zipcode? In the US, you have 5 digits,
> in the netherlands you have 4 digits and 2 characters. A zipcode of
> 1234 AA is properly formatted for a dutch user of the application, but
> not correctly formatted for the US user of the program. Hence: context.
>
> 	This thus means that if the db stores '1234AA', it can do so, and the
> dutch user will happily use it. The US user can't because for the US
> user it's just data, 1234 and 2 characters, it's not information
> (zipcode).
If you think about it, neither zip code means anything to the database. 
They're simply characters in a field.  It's the /reader/ who derives
information from them.  The only way a DB might know something about zip
codes is if the field were a FK to known zip codes, then the DB might
assert referential integrity, but still know nothing of the meaning of
"1234 AA."  Only the post office and its customers are impacted by its
meaning.

So that takes us back to whether or not the database is correct beyond
its integrity checks.  If it stores exactly what it should store then it
is correct.  An incorrect or improperly formatted zip code isn't the
database's responsibility if that's what a human or application told it
to store.  A DB wouldn't know it was correct or not until it tried
delivering the mail--or performing some function with the data.

This is the same problem with adding apples and oranges, or USD and
CAD.  Were I able to store $1USD and $2CAD in an attribute, that by
itself isn't necessarily wrong.  A function may, however, be unable to
find meaning in adding them together, or a bank may be unwilling
to ACH 1USD+1CAD, even though people familiar with both
algebra and currency codes recognize exactly what the expression is and
what it means.  After crossing the border to Windsor I may end up with
both USD and CAD in my wallet. 

This reinforces that the DB can be responsible for structural, type, and
referential integrity, but it cannot give meaning to its data. 
"Incorrect" data, having passed the three tests the DB can apply, is
only incorrect to those it has meaning for.

So this takes us back to our responsibility as programmers and designers
to guard our database's integrity.  In the same way OO programmers guard
their objects' integrity by restricting access to object state, why
would they regard a DB's state any less by allowing any and all to
manipulate rows and attributes however and wherever they wish?

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
4/4/2007 5:11:13 PM
> >> But to the point, if a program was able to store improperly-formatted
> >> zipcode inside the DB then whose fault is that?
>
> >    what's an improperly formatted zipcode? In the US, you have 5 digits,
> > in the netherlands you have 4 digits and 2 characters. A zipcode of
> > 1234 AA is properly formatted for a dutch user of the application, but
> > not correctly formatted for the US user of the program. Hence: context.
>
> >    This thus means that if the db stores '1234AA', it can do so, and the
> > dutch user will happily use it. The US user can't because for the US
> > user it's just data, 1234 and 2 characters, it's not information
> > (zipcode).
>
> If you think about it, neither zip code means anything to the database.
> They're simply characters in a field.

With a proper type system or check constraints, an RDBMS could guarantee
that no invalid zip codes are stored into the database. If we want to
store zip codes for multiple countries, one solution is to have
different relations for different countries with different domains/
types for the zip code attributes.

address(id, country, street, ...)
dutch_address(id, country, dutch_zipcode)
us_address(id, country, us_zipcode)

The country columns in dutch_address and us_address should be constants
(enforced by a check constraint), and there should be two foreign keys
between dutch_address/us_address (id, country) referencing address(id,
country). Doing this, it would not be possible to have both a dutch
and us address. The format of dutch_zipcode and us_zipcode could be
enforced by a check constraint using regular expressions, or by
defining a custom type.
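
A rough sketch of what I mean (illustrative only; the exact check constraint
syntax and the zip code patterns vary between products):

create table address (
    id      integer not null,
    country char(2) not null,
    street  varchar(100),
    primary key (id, country)
);

create table dutch_address (
    id            integer not null,
    country       char(2) not null check (country = 'NL'),
    dutch_zipcode char(7) not null
        check (dutch_zipcode similar to '[0-9]{4} [A-Z]{2}'),
    primary key (id, country),
    foreign key (id, country) references address (id, country)
);

create table us_address (
    id         integer not null,
    country    char(2) not null check (country = 'US'),
    us_zipcode varchar(10) not null
        check (us_zipcode similar to '[0-9]{5}(-[0-9]{4})?'),
    primary key (id, country),
    foreign key (id, country) references address (id, country)
);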

To make it simpler to use, we would create a view like this:

create view address_all as
select id, country, street, dutch_zipcode as zip
from address a join dutch_address da on da.id=a.id
union
select id, country, street, us_zipcode
from address a join us_address ua on ua.id=a.id

> So that takes us back to whether or not the database is correct beyond
> its integrity checks.  If it stores exactly what it should store then it
> is correct.  An incorrect or improperly formatted zip code isn't the
> database's responsibility if that's what a human or application told it
> to store.

A RDBMS is capable of enforcing such constraints. We want all data in
the database to be valid.

> This reinforces the DB can be responsible for structural, type, and
> referential integrity, but it can not give meaning to its data.

A RDBMS can give meaning to its data in the same way as an application
can give "meaning" to its data.

> So this takes us back to our responsibility as programmers and designers
> to guard our database's integrity.

I am afraid that you say that "programmers" should guard the database
because you want to use stored procedures as proxies to the database.
The data integrity is enforced by primary and foreign key constraints,
check constraints and triggers, in that preferred order. Hiding the
database behind procedures is almost never necessary.

/Fredrik

0
frebe73 (444)
4/4/2007 5:47:47 PM
frebe wrote:
>> <snip>
>> If you think about it, neither zip code means anything to the database.
>> They're simply characters in a field.
>>     
>
> With a proper type system or check contraints, a RDBMS could guarantee
> that no invalid zip codes are stored into the database. If we want to
> store zip codes for multiple countries, one solution is to have
> different relations for different countries with different domains/
> types for the zip code attributes.
>   
But you quickly approach the DB designer's version of Heisenberg's
uncertainty principle.  How does the DB know it has all zip code formats
to test against?  How does it know what all the valid zip codes are?  As
soon as it's measured and implemented how does the DB know a new zip
code wasn't created in some growing suburb of Zurich?

As the DB more rigorously guards what it can not be certain of, it
creates obstacles, bugs, or simply starts getting in the way of people
who are certain of what they're doing.

Is there some definite point beyond which a DB designer should stop
creating constraints?  Is there a point of diminishing returns?  How
much effort should be expended to know the exact value, set of values,
and potential values of every field in the database? At some point
mightn't even the post office shrug its shoulders and deliver a letter
to a destination country and let them figure out its address if they
don't recognize anything but the country?


-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
4/4/2007 7:47:46 PM
On 4 Apr 2007 10:47:47 -0700, frebe wrote:

>>>> But to the point, if a program was able to store improperly-formatted
>>>> zipcode inside the DB then whose fault is that?
>>
>>>    what's an improperly formatted zipcode? In the US, you have 5 digits,
>>> in the netherlands you have 4 digits and 2 characters. A zipcode of
>>> 1234 AA is properly formatted for a dutch user of the application, but
>>> not correctly formatted for the US user of the program. Hence: context.
>>
>>>    This thus means that if the db stores '1234AA', it can do so, and the
>>> dutch user will happily use it. The US user can't because for the US
>>> user it's just data, 1234 and 2 characters, it's not information
>>> (zipcode).
>>
>> If you think about it, neither zip code means anything to the database.
>> They're simply characters in a field.
> 
> With a proper type system or check contraints, a RDBMS could guarantee
> that no invalid zip codes are stored into the database.

Where are the types stored?
[...]

>> This reinforces the DB can be responsible for structural, type, and
>> referential integrity, but it can not give meaning to its data.
> 
> A RDBMS can give meaning to its data in the same way as an application
> can give "meaning" to its data.

No. An RDBMS is to the application what a CPU is to a program.

A CPU cannot give any meaning to anything. Neither can an RDBMS.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
0
mailbox2 (6357)
4/4/2007 7:56:44 PM
Thomas Gagne <tgagne@wide-open-west.com> writes:
>What gives data meaning?

  A specification of a semantics that is accepted by at least
  one party engaging in an act of communication.

  A semantics is a map (in the mathematical sense) from a set
  of symbols (this set is called the syntax) to another set
  (which is the set of meanings).

  To determine the meaning of a datum, one needs the additional
  information about the semantics to be used, and one needs to
  assure that the datum is an element of the syntax, then one
  can apply the semantics to the datum to obtain the meaning.

  This was the explanation of meaning for a formal language.
  Eventually, a semantics has to be specified using a natural
  language, which has been learned by means of natural meanings
  shared by parties (e.g., when someone cries "Ouch!" after
  being hit by a hammer and you already know how this feels, you
  might be able to infer the meaning of "Ouch!").

  One can never really be sure whether meaning can really be
  shared (http://google.to/ie?q=Quine+inscrutability).

0
ram (2986)
4/4/2007 8:37:01 PM
frebe wrote:
> > >> But to the point, if a program was able to store improperly-formatted
> > >> zipcode inside the DB then whose fault is that?
> >
> > >    what's an improperly formatted zipcode? In the US, you have 5 digits,
> > > in the netherlands you have 4 digits and 2 characters. A zipcode of
> > > 1234 AA is properly formatted for a dutch user of the application, but
> > > not correctly formatted for the US user of the program. Hence: context.
> >
> > >    This thus means that if the db stores '1234AA', it can do so, and the
> > > dutch user will happily use it. The US user can't because for the US
> > > user it's just data, 1234 and 2 characters, it's not information
> > > (zipcode).
> >
> > If you think about it, neither zip code means anything to the database.
> > They're simply characters in a field.
>
> With a proper type system or check contraints, a RDBMS could guarantee
> that no invalid zip codes are stored into the database. If we want to
> store zip codes for multiple countries, one solution is to have
> different relations for different countries with different domains/
> types for the zip code attributes.
>
> address(id, country, street, ...)
> dutch_address(id, country, dutch_zipcode)
> us_address(id, country, us_zipcode)

I personally would take a different approach, partly because I have
more of a dynamic-typing (or type-free) preference than frebe.

I would have a general Address or Contact table, rather than one per
country, with all columns that any country would need. Validation
could be done by a trigger.  A validation table could look something
like this:

table: contactColumnValidation
-------
countryRef   // foreign key to country code
columnRef  // column name (such as "postal_code")
isRequired  // Boolean
regEx  // regular expression used to validate field
function  // function name or reference for any further validation
formatNote  // optional note or hint to help data entry clerk

The function name (or even a code snippet stored in the DB) points to
an optional function that can be used for validation when regular
expressions are not capable of it by themselves. (Regular expressions
hopefully handle most cases.)

Whether existing trigger implementations can handle this or not, I
don't know. But this would be the ideal IMO. If not, then at least
these tables can be used on the app side to perform validation. In
practice there is usually only one Address Entry Screen anyhow, since
maintaining multiple screens that do the same thing is usually not worth it.
Thus, there is usually only one "entry point" for address changes/
additions.
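
In SQL the validation table might be declared roughly like this (illustrative
only; "function" may need quoting since it is a keyword in some products):

create table contactColumnValidation (
    countryRef  char(2)      not null,  -- foreign key to country code
    columnRef   varchar(30)  not null,  -- column name, such as 'postal_code'
    isRequired  char(1)      not null,  -- 'Y' or 'N'
    regEx       varchar(100),           -- regular expression for the field
    "function"  varchar(60),            -- optional extra validation routine
    formatNote  varchar(100),           -- hint for the data entry clerk
    primary key (countryRef, columnRef)
);

insert into contactColumnValidation values
  ('US', 'postal_code', 'Y', '^[0-9]{5}(-[0-9]{4})?$',
   null, '5 digits, optionally followed by -1234');

The trigger (or the app-side code) then looks up the row for the address's
country and each column it is about to store, and rejects anything that fails
the regEx or the named function.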

>
> The country columns in duch_address and us_address should be constants
> (enforced by a check constraint), and there should be two foreign keys
> between dutch_address/us_address (id, country) referencing address(id,
> country). Doing this, it would not be possible to have both a dutch
> and us address. The format of dutch_zipcode and us_zipcode could be
> enforced by a check constraint using regular expressions, or by
> defining a custom type.
>
> To make it simpler to use, we would create a view like this:
>
> create view address_all
> select id, country, street, dutch_zipcode as zip
> from address a join dutch_address da on da.id=a.id
> union
> select id, country, street, us_zipcode
> from address a join us_address ua on ua.id=a.id
>
> > So that takes us back to whether or not the database is correct beyond
> > its integrity checks.  If it stores exactly what it should store then it
> > is correct.  An incorrect or improperly formatted zip code isn't the
> > database's responsibility if that's what a human or application told it
> > to store.
>
> A RDBMS is capable of enforcing such constraints. We want all data in
> the database to be valid.
>
> > This reinforces the DB can be responsible for structural, type, and
> > referential integrity, but it can not give meaning to its data.
>
> A RDBMS can give meaning to its data in the same way as an application
> can give "meaning" to its data.
>
> > So this takes us back to our responsibility as programmers and designers
> > to guard our database's integrity.
>
> I am afraid that you say that "programmers" should guard the database
> because you want to use stored procedures as proxies to the database.
> The data integrity is enforced by primary and foreign key constraints,
> check contraints and triggers, in that preferred order. Hiding the
> database behind procedures is almost never necessary.
>
> /Fredrik

-T-

0
topmind (2124)
4/4/2007 9:35:27 PM
> >> If you think about it, neither zip code means anything to the database.
> >> They're simply characters in a field.
>
> > With a proper type system or check contraints, a RDBMS could guarantee
> > that no invalid zip codes are stored into the database. If we want to
> > store zip codes for multiple countries, one solution is to have
> > different relations for different countries with different domains/
> > types for the zip code attributes.
>
> But you quickly approach the DB designer's version of Heisenberg's
> uncertainty principle.  How does the DB know it has all zip code formats
> to test against?

How does the application know all zip code formats? If the application
knows about them, the database should too. If the application doesn't
know about them, the database shouldn't either.

> How does it know what all the valid zip codes are?

If you want a zip code look-up table, you might give the user the
possibility to add new valid zip codes while typing the address.
Maybe after giving a notice first like "This zip code is not known, do
you want to add it? (Y/N)".
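
A minimal sketch of such a look-up table (the table and column names are
invented for the example):

create table known_zipcode (
    country char(2)     not null,
    zip     varchar(10) not null,
    primary key (country, zip)
);

-- the address table then carries a foreign key (country, zip) referencing
-- known_zipcode, so only confirmed codes can be stored; when the user
-- answers 'Y' to the notice, the application first inserts the new code
-- here and then saves the address.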

> As
> soon as it's measured and implemented how does the DB know a new zip
> code wasn't created in some growing suburb of Zurich?

How does the application know it?

> As the DB more rigorously guards what it can not be certain of, it
> creates obstacles, bugs, or simply starts getting in the way of people
> who are certain of what they're doing.

The same could be said about the application.

> Is there some definite point beyond which a DB designer should stop
> creating constraints?

Is there some definite point beyond which an application designer
should stop creating constraints?

/Fredrik

0
frebe73 (444)
4/5/2007 6:51:42 AM
There's a great book I can recommend here:
"Semantics in Business Systems" by Dave McComb
  - Morgan Kauffman Publishers 2004

The basic goal of this book (as I perceive it)
is to show that Programs Create Meaning,
and on another level, of course, so do the programmers
who create the programs. Or perhaps more simply
put: Programs *assign* meaning to data.

Data is just data. But a program that manipulates
it and presents it to humans, and to other programs,
turns it into meaningful information.

It is really the same phenomenon as writing a program
that DECRYPTS a senseless stream of ones and zeros.

Programs behave in a certain way when they encounter
certain data. So do people - when they encounter
certain  *information*. THIS is what gives data
its meaning, its potential of causing reproducible
events in the world.

If the data can not be interpreted, it is meaningless,
and nothing much happens. At most: an Error-report,
if it was expected that that data should be interpretable.
(Just like when your best friend or your boss starts
uttering unintelligible gibberish. An error-report is
needed)

So, humans create meaning by reacting to messages
in a reproducible, common manner. Programs create
meaning by translating data into characters and
graphics, which humans can 'understand'.

So how does this all relate to objects vs. tables
of numbers? Objects encode the rules by which to
react to certain patterns of data. Thus they are one
level closer to 'meaning'.

- Panu Viljamaa



Thomas Gagne wrote:
> I was re-reading some of the messages under the subject "Databases as
> objects" and ran across something Frans Bouma wrote:
> 
> Frans Bouma wrote:
>> Thomas Gagne wrote:
>>
>>
>>   
>>> But to the point, if a program was able to store improperly-formatted
>>> zipcode inside the DB then whose fault is that?  
>>>     
>> 	what's an improperly formatted zipcode? In the US, you have 5 digits,
>> in the netherlands you have 4 digits and 2 characters. A zipcode of
>> 1234 AA is properly formatted for a dutch user of the application, but
>> not correctly formatted for the US user of the program. Hence: context.
>>
>> 	This thus means that if the db stores '1234AA', it can do so, and the
>> dutch user will happily use it. The US user can't because for the US
>> user it's just data, 1234 and 2 characters, it's not information
>> (zipcode).
> If you think about it, neither zip code means anything to the database. 
> They're simply characters in a field.  It's the /reader/ who derives
> information from them.  The only way a DB might know something about zip
> codes is if the field were a FK to known zip codes, then the DB might
> assert referential integrity, but still know nothing of the meaning of
> "1234 AA."  Only the post office and its customers are impacted by its
> meaning.
> 
> So that takes us back to whether or not the database is correct beyond
> its integrity checks.  If it stores exactly what it should store then it
> is correct.  An incorrect or improperly formatted zip code isn't the
> database's responsibility if that's what a human or application told it
> to store.  A DB wouldn't know it was correct or not until it tried
> delivering the mail--or performing some function with the data.
> 
> This is the same problem with adding apples and oranges, or USD and
> CAD.  Where I able to store $1USD and $2CAD in an attribute that by
> itself isn't necessarily wrong.  It may, however, be an inability of a
> function to find meaning to adding them together, or the unwillingness
> of a bank to ACH 1USD+1CAD, even though people familiar with both
> algebra and currency codes recognize exactly what the expression is and
> what it means.  After crossing the border to Windsor I may end up with
> both USD and CAD in my wallet. 
> 
> This reinforces the DB can be responsible for structural, type, and
> referential integrity, but it can not give meaning to its data. 
> "Incorrect" data, having passed the three tests the DB can apply, is
> only incorrect to those is has meaning for.
> 
> So this takes us back to our responsibility as programmers and designers
> to guard our database's integrity.  In the same way OO programmers guard
> their objects' integrity by restricting access to object state, why
> would they regard a DB's state any less by allowing any and all to
> manipulate rows and attributes however and wherever they wish?
> 
0
panu1 (34)
4/9/2007 12:39:04 PM
Nick Malik [Microsoft] wrote:
> "Thomas Gagne" <tgagne@wide-open-west.com> wrote in message 
> news:BoWdne5kX_XLJgfYnZ2dnUVZ_tmknZ2d@wideopenwest.com...
>   
> <snip>
>>>
>>>       
>> I agree it must be carefully implemented to be useful, but not that it is 
>> labor-intensive.  If you look at the SQL you can see that it has a 
>> structure.  That structure is the transaction, and a transaction can be 
>> modeled inside OO very easy.  Detail objects can be added to the 
>> transaction object very easily.  When complete, the transaction object can 
>> be asked to express itself in SQL, or told to put itself onto the SPS, for 
>> which it can also be be prepared to handle exceptions properly.
>>     
>
> This is interesting.  Not sure if we have veered off course here.  I though 
> that you were arguing that a stored procedure interface was the only way to 
> access the database from OO code, and that doing so provides an 
> object-method paradigm for understanding the code on the OO side.  If this 
> is the case, are you suggesting that OO code WRITES a stored procedure on 
> the fly?  during design, using controls?
No.  If it did, that would be the equivalent of consumers writing their
own getters and setters to access another object's private data.  What
good is making the data private if consumers can write their own methods
to access it?
>  <snip>
>> It should not go unappreciated that the SQL above has no OO baggage with 
>> it, and can just as easily be created by C, COBOL, FORTRAN, or by hand.
>>
>> And remember, when used manually there is no plumbing code necessary.  As 
>> this might be the first interface client the SPS has this is an important 
>> feature.  In the alternate model you describe below (way below) there is 
>> no human-accessible interface to the SPS.
>>     
>>> D2-alternate)
>>>       
>
> I disagree.  Web services are directly callable by a browser.  I'd suggest 
> that a web service is MORE user-friendly than a stored proc interface 
> because (a) once you pick the service that you want to call, the only method 
> available on that service can help you decide if you picked the right one... 
> if the method you are looking for doesn't appear, you go back and pick 
> another.  (b) access to the stored proc interface requires a client tool. 
> For a web service interface, access can be provided by a browser and still 
> be secured through solid and tight security.
>   
Ah, but to access a web service you must have a web-client.  SOA
presumes a services oriented architecture in the same way that CORBA
presumed the existence of object requests.  Sometimes a
pedestrian-accessible store is more profitable than a Jetson's-only
accessible space store.  True, a browser may be able to access simple
SOA services, but business application transactions are more complicated
than a simple browser can handle--and most programs are not yet, and
perhaps should not be, augmented with web-service compatibility.
>   
> <snip>
>> How much time do programmers spend defending a Date object?
>>     
>
> depends on how bad the developer is :-).  Date is not a good example because 
> it is provided by all major frameworks.  Perhaps an 'InventoryItem' object 
> is a better example here.  In the db, using your method, you will have 
> methods for things like 'GetInventoryItembyID' and 
> 'PullFromInventoryForOrder' and 'RecordInventoryCycleCount'.  Each needs 
> code to validate things.  Doing so requires that the code has access to some 
> element data, like InventoryID number and security token, etc.  The code for 
> getting access to this has to be written in a DB-specific language.  I'm 
> suggesting that you have two languages: the OO calling language and the 
> DB-specific receiving language, and that requires understanding of both to 
> maintain the interface.  If you create an OO-based service layer, and use 
> ORM mapping to generate access directly to SQL tables, then there is only 
> one langauge, be it C# or Java or whatever is your OO passion, in which the 
> developer must be trained.
>   
Remember, I don't disagree that a single path to the database can be an
OO interface--but it is surely a more complicated subject than just
getting people to understand this concept using stored procedures--which
don't require the overhead an OO path might.
> <snip>
>> C programmers are able to create OO-inspired interfaces to their systems 
>> without resorting to OO languages (ctlib and dblib come to mind). 
>> Resourceful programmers ought to be able to assemble them from whatever is 
>> lying around and the end result should reflect its humble beginnings and 
>> avoid becoming presumptuous (I'm looking hard for synonyms to arrogant).
>>     
>
> You are assuming that any network access can be done without a library of 
> some kind?  I'm not sure what you are saying here, because all programmers, 
> whether they are writing in C or PERL or C# or Java are using the underlying 
> capabilities of the system or associated libraries to actually make a call 
> to the SPS, whether that library is DBLib or System.Data or System.Web, it 
> is still a library.  No one calls a database like Sybase by writing directly 
> to the TCP port.  (It is entirely possible to do this with web services, 
> BTW, but no one bothers... the libraries are easier).
>   
What I'm saying is the programmers that developed the library created it
in an OO-way (deliberately or not)--making the data structures opaque to
users of the API, but each API function requires a pointer to the
structure as an argument (usually the first), which makes it look Pythonish.

So I'm not talking about what those library programmers used to access
services, but how they created their own library's interface.

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
4/10/2007 12:15:40 AM
My apologies, Thomas.  I've lost the train of thought that went into this 
thread, and I just don't have the energy to go back and jump in on whatever 
it was.  As much as I love a good discussion, I'm going to have to disengage 
from this one.

With utmost respect,

-- 
--- Nick Malik [Microsoft]
    MCSD, CFPS, Certified Scrummaster
    http://blogs.msdn.com/nickmalik

Disclaimer: Opinions expressed in this forum are my own, and not 
representative of my employer.
   I do not answer questions on behalf of my employer.  I'm just a 
programmer helping programmers.
-- 


0
nickmalik (325)
4/12/2007 3:18:48 PM
Nick Malik [Microsoft] wrote:
> My apologies, Thomas.  I've lost the train of thought that went into this 
> thread, and I just don't have the energy to go back and jump in on whatever 
> it was.  As much as I love a good discussion, I'm going to have to disengage 
> from this one.
>
> With utmost respect,
>
>   
That's fine.  I was researching the subject for an upcoming talk and
thought I'd left you hanging.

-- 
Visit <http://blogs.instreamfinancial.com/anything.php> 
to read my rants on technology and the finance industry.
0
tgagne (596)
4/12/2007 4:59:08 PM
panu wrote:
> There's a great book I can recommend here:
> "Semantics in Business Systems" by Dave McComb
>   - Morgan Kauffman Publishers 2004
>
> The basic goal of this book (as I an perceive)
> is to  show that Programs Create Meaning,
> - and on another level, of course the programmers
> who create the programs. Or perhaps more simply
> put: Programs *assign* meaning to data.
>
> Data is just data. But a program that manipulates
> it and presents it to humans, and to other programs,
> turns it into meaningful information.

That is almost like saying that the CPU is more important than
MyProgram.exe because MyProgram.exe is "just a data file" (a list of
op-codes).  Or that a sheet of music is "less than" a conductor.
Beethoven is not a conductor, so he is less important? I view it more
as a Yin and Yang rather than "Yang has a bigger yang", or the like.
Data and behavior are generally different views of the same thing,
analogous to energy vs. matter.

>
> It is really the same phenomenon as writing a program
> that DECRYPTS a senseless stream of ones and zeros.
>
> Programs behave in a certain way when they encounter
> certain data. So do people - when they encounter
> certain  *information*. THIS is what gives data
> its meaning, its potential of causing reproducible
> events in the world.
>
> If the data can not be interpreted, it is meaningless,
> and nothing much happens. At most: an Error-report,
> if it was expected that that data should be interpretable.
> (Just like when your best friend or your boss starts
> uttering unintelligible gibberish. An error-report is
> needed)
>
> So, humans create meaning by reacting to messages
> in a reproducible, common manner. Programs create
> meaning by translating data into characters and
> graphics, which humans can 'understand'.
>
> So how does this all relate to objects vs. tables
> of numbers? Objects encode the rules by which to
> react to certain patterns of data. Thus they are one
> level closer to 'meaning'.
>
> - Panu Viljamaa
>
>
>
> Thomas Gagne wrote:
> > I was re-reading some of the messages under the subject "Databases as
> > objects" and ran across something Frans Bouma wrote:
> >
> > Frans Bouma wrote:
> >> Thomas Gagne wrote:
> >>
> >>
> >>
> >>> But to the point, if a program was able to store improperly-formatted
> >>> zipcode inside the DB then whose fault is that?
> >>>
> >> 	what's an improperly formatted zipcode? In the US, you have 5 digits,
> >> in the netherlands you have 4 digits and 2 characters. A zipcode of
> >> 1234 AA is properly formatted for a dutch user of the application, but
> >> not correctly formatted for the US user of the program. Hence: context.
> >>
> >> 	This thus means that if the db stores '1234AA', it can do so, and the
> >> dutch user will happily use it. The US user can't because for the US
> >> user it's just data, 1234 and 2 characters, it's not information
> >> (zipcode).
> > If you think about it, neither zip code means anything to the database.
> > They're simply characters in a field.  It's the /reader/ who derives
> > information from them.  The only way a DB might know something about zip
> > codes is if the field were a FK to known zip codes, then the DB might
> > assert referential integrity, but still know nothing of the meaning of
> > "1234 AA."  Only the post office and its customers are impacted by its
> > meaning.
> >
> > So that takes us back to whether or not the database is correct beyond
> > its integrity checks.  If it stores exactly what it should store then it
> > is correct.  An incorrect or improperly formatted zip code isn't the
> > database's responsibility if that's what a human or application told it
> > to store.  A DB wouldn't know it was correct or not until it tried
> > delivering the mail--or performing some function with the data.
> >
> > This is the same problem with adding apples and oranges, or USD and
> > CAD.  Where I able to store $1USD and $2CAD in an attribute that by
> > itself isn't necessarily wrong.  It may, however, be an inability of a
> > function to find meaning to adding them together, or the unwillingness
> > of a bank to ACH 1USD+1CAD, even though people familiar with both
> > algebra and currency codes recognize exactly what the expression is and
> > what it means.  After crossing the border to Windsor I may end up with
> > both USD and CAD in my wallet.
> >
> > This reinforces the DB can be responsible for structural, type, and
> > referential integrity, but it can not give meaning to its data.
> > "Incorrect" data, having passed the three tests the DB can apply, is
> > only incorrect to those is has meaning for.
> >
> > So this takes us back to our responsibility as programmers and designers
> > to guard our database's integrity.  In the same way OO programmers guard
> > their objects' integrity by restricting access to object state, why
> > would they regard a DB's state any less by allowing any and all to
> > manipulate rows and attributes however and wherever they wish?
> >

-T-

0
topmind (2124)
4/12/2007 7:36:56 PM
topmind wrote:
> panu wrote:
>> There's a great book I can recommend here:
>> "Semantics in Business Systems" by Dave McComb
>>   - Morgan Kauffman Publishers 2004
>>
>> The basic goal of this book (as I an perceive)
>> is to  show that Programs Create Meaning,
>> - and on another level, of course the programmers
>> who create the programs. Or perhaps more simply
>> put: Programs *assign* meaning to data.
>>
>> Data is just data. But a program that manipulates
>> it and presents it to humans, and to other programs,
>> turns it into meaningful information.
> 
> That is almost like saying that the CPU is more important than
> MyProgram.exe because MyProgram.exe is "just a data file" (a list of
> op-codes).  Or that a sheet of music is "less than" a conductor.
> Beetoven is not a conductor, so he is less important? I view it more
> as a Yin and Yang rather than "Yang has a bigger yang", or the like.
> Data and behavior are generally different views of the same thing,
> analogous to energy vs. matter.

If you believe in a soul then the bio-chemical substrate of the body is 
irrelevant to meaning.  But if you're an atheist/materialist you'll 
think that without the bio-chemical substrate that implements the brain 
there can be no meaning.  The meaning of music resides in epiphenomena 
of the brain but without the brain's substrate, poof, it's gone.

There are interesting issues in DNA (I'm treading on extremely thin ice 
here because my reading is mostly limited to wikipedia) but it's the 
transcription and decoding machinery that gives meaning to a strand of 
DNA.  A given strand's interpretation depends very importantly on where 
transcription/protein generation starts.  Start one base pair further on 
and the interpretation is completely different.  Apparently DNA is full 
of things like palindromes, multiple encodings of different proteins in 
the same strand obtained by starting at different places, etc, etc.  Of 
course this is fabulously involuted and mutually recursive because the 
transcription/protein generation is in some sense coded by DNA.  In a 
limited sense, because DNA is functionally inert without the biochemical 
context of the cell to provide preexisting machinery with which to 
decode it.  See e.g. Origins of Life by Freeman Dyson 
(http://www.amazon.com/exec/obidos/ASIN/0521626684/ref=nosim/completereview) 
that examines the origins of cells, concluding that cells (biochemical 
context) come before cell replication (dna), i.e. that hardware preceeds 
software.


Satan oscillate my metallic sonatas!

-- 
The surest sign that intelligent life exists elsewhere in      Calvin &
the universe is that none of it has tried to contact us.       Hobbes.
--
Eliot     ,,,^..^,,,    Smalltalk - scene not herd
0
eliotm (17)
4/13/2007 5:36:20 PM
Eliot Miranda wrote:
> topmind wrote:
> > panu wrote:
> >> There's a great book I can recommend here:
> >> "Semantics in Business Systems" by Dave McComb
> >>   - Morgan Kauffman Publishers 2004
> >>
> >> The basic goal of this book (as I an perceive)
> >> is to  show that Programs Create Meaning,
> >> - and on another level, of course the programmers
> >> who create the programs. Or perhaps more simply
> >> put: Programs *assign* meaning to data.
> >>
> >> Data is just data. But a program that manipulates
> >> it and presents it to humans, and to other programs,
> >> turns it into meaningful information.
> >
> > That is almost like saying that the CPU is more important than
> > MyProgram.exe because MyProgram.exe is "just a data file" (a list of
> > op-codes).  Or that a sheet of music is "less than" a conductor.
> > Beetoven is not a conductor, so he is less important? I view it more
> > as a Yin and Yang rather than "Yang has a bigger yang", or the like.
> > Data and behavior are generally different views of the same thing,
> > analogous to energy vs. matter.
>
> If you believe in a soul then the bio-chemical substrate of the body is
> irrelevant to meaning.  But if you're an atheist/materialist you'll
> think that without the bio-chemical substrate that implements the brain
> there can be no meaning.  The meaning of music resides in epiphenomena
> of the brain but without the brain's substrate, poof, its gone.

"Meaning" is relative.  It is like asking if pi exists.

>
> There are interesting issues in DNA (I'm treading on extremely thin ice
> here because my reading is mostly limited to wikipedia) but its the
> transcription and decoding machinery that gives meaning to a strand of
> DNA.  A given strand's interpretation depends very importantly on where
> transcription/protein generation starts.  Start one base pair further on
> and the interpretation si completely different.  Apparently DNA is full
> of things like palindromes, multiple encodings of different protiens in
> the same strand obtained by starting at different places, etc, etc.  Of
> course this is fabulously involuted and mutually recursive because the
> transcription/protein generation is in some sense coded by DNA.  It a
> limited sense because DNA is functionally inert without the biochemical
> context of the cell to provide preexisting machinery with which to
> decode it.  See e.g. Origins of Life by Freeman Dyson
> (http://www.amazon.com/exec/obidos/ASIN/0521626684/ref=nosim/completereview)
> that examines the origins of cells, concluding that cells (biochemical
> context) come before cell replication (dna), i.e. that hardware preceeds
> software.
>
>
> Satan oscillate my metallic sonatas!

I've found a tin-foil hat is helpful against that  :-)

>
> --
> The surest sign that intelligent life exists elsewhere in      Calvin &
> the universe is that none of it has tried to contact us.       Hobbes.
> --
> Eliot     ,,,^..^,,,    Smalltalk - scene not herd

-T-

0
topmind (2124)
4/14/2007 5:21:13 AM
topmind wrote:
> Eliot Miranda wrote:
>> topmind wrote:
>>> panu wrote:
>>>> There's a great book I can recommend here:
>>>> "Semantics in Business Systems" by Dave McComb
>>>>   - Morgan Kauffman Publishers 2004
>>>>
>>>> The basic goal of this book (as I an perceive)
>>>> is to  show that Programs Create Meaning,
>>>> - and on another level, of course the programmers
>>>> who create the programs. Or perhaps more simply
>>>> put: Programs *assign* meaning to data.
>>>>
>>>> Data is just data. But a program that manipulates
>>>> it and presents it to humans, and to other programs,
>>>> turns it into meaningful information.
>>> That is almost like saying that the CPU is more important than
>>> MyProgram.exe because MyProgram.exe is "just a data file" (a list of
>>> op-codes).  Or that a sheet of music is "less than" a conductor.
>>> Beetoven is not a conductor, so he is less important? I view it more
>>> as a Yin and Yang rather than "Yang has a bigger yang", or the like.
>>> Data and behavior are generally different views of the same thing,
>>> analogous to energy vs. matter.
>> If you believe in a soul then the bio-chemical substrate of the body is
>> irrelevant to meaning.  But if you're an atheist/materialist you'll
>> think that without the bio-chemical substrate that implements the brain
>> there can be no meaning.  The meaning of music resides in epiphenomena
>> of the brain but without the brain's substrate, poof, its gone.
> 
> "Meaning" is relative.  It is like asking if pi exists.

All meaning is grounded in physical existence.  There is no meaning in 
the void.  Meaning arises from computation, and computation arises from 
physical activity (the movement of electrons in synapses and 
transistors, the movement of cogs in Babbage's engine, etc.).  The 
abstract models of computation within which one can demonstrate 
computational equivalences are themselves rooted within physical 
reality, so the demonstrations themselves don't lift meaning above 
physical reality.

Pi doesn't exist except within systems that can hold the concept of 
ratio.  And that means it exists inside computational entities capable 
of conceptual thought.  These entities exist (us at the very least), 
therefore Pi exists.  But Pi doesn't exist outside of thought and so 
doesn't exist outside of the physical Universe.

Much more difficult is defining when computation rises to the level of 
thought.


>> There are interesting issues in DNA (I'm treading on extremely thin ice
>> here because my reading is mostly limited to wikipedia) but its the
>> transcription and decoding machinery that gives meaning to a strand of
>> DNA.  A given strand's interpretation depends very importantly on where
>> transcription/protein generation starts.  Start one base pair further on
>> and the interpretation si completely different.  Apparently DNA is full
>> of things like palindromes, multiple encodings of different protiens in
>> the same strand obtained by starting at different places, etc, etc.  Of
>> course this is fabulously involuted and mutually recursive because the
>> transcription/protein generation is in some sense coded by DNA.  It a
>> limited sense because DNA is functionally inert without the biochemical
>> context of the cell to provide preexisting machinery with which to
>> decode it.  See e.g. Origins of Life by Freeman Dyson
>> (http://www.amazon.com/exec/obidos/ASIN/0521626684/ref=nosim/completereview)
>> that examines the origins of cells, concluding that cells (biochemical
>> context) come before cell replication (dna), i.e. that hardware preceeds
>> software.
>>
>>
>> Satan oscillate my metallic sonatas!
> 
> I've found a tin-foil hat is helpful against that  :-)
> 
>> --
>> The surest sign that intelligent life exists elsewhere in      Calvin &
>> the universe is that none of it has tried to contact us.       Hobbes.
>> --
>> Eliot     ,,,^..^,,,    Smalltalk - scene not herd
> 
> -T-
> 


-- 
The surest sign that intelligent life exists elsewhere in      Calvin &
the universe is that none of it has tried to contact us.       Hobbes.
--
Eliot     ,,,^..^,,,    Smalltalk - scene not herd
0
eliotm (17)
4/15/2007 8:29:58 PM
Eliot Miranda wrote:
> topmind wrote:
> > Eliot Miranda wrote:
> >> topmind wrote:
> >>> panu wrote:
> >>>> There's a great book I can recommend here:
> >>>> "Semantics in Business Systems" by Dave McComb
> >>>>   - Morgan Kauffman Publishers 2004
> >>>>
> >>>> The basic goal of this book (as I an perceive)
> >>>> is to  show that Programs Create Meaning,
> >>>> - and on another level, of course the programmers
> >>>> who create the programs. Or perhaps more simply
> >>>> put: Programs *assign* meaning to data.
> >>>>
> >>>> Data is just data. But a program that manipulates
> >>>> it and presents it to humans, and to other programs,
> >>>> turns it into meaningful information.
> >>> That is almost like saying that the CPU is more important than
> >>> MyProgram.exe because MyProgram.exe is "just a data file" (a list of
> >>> op-codes).  Or that a sheet of music is "less than" a conductor.
> >>> Beetoven is not a conductor, so he is less important? I view it more
> >>> as a Yin and Yang rather than "Yang has a bigger yang", or the like.
> >>> Data and behavior are generally different views of the same thing,
> >>> analogous to energy vs. matter.
> >> If you believe in a soul then the bio-chemical substrate of the body is
> >> irrelevant to meaning.  But if you're an atheist/materialist you'll
> >> think that without the bio-chemical substrate that implements the brain
> >> there can be no meaning.  The meaning of music resides in epiphenomena
> >> of the brain but without the brain's substrate, poof, its gone.
> >
> > "Meaning" is relative.  It is like asking if pi exists.
>
> All meaning is grounded in physical existence.

"Grounded" is too strong a word IMO. It may be required for final
manifestation, but that does not mean it is "grounded" in it, but
merely that such is a *necessary requirement*, but not necessarily the
most "important" issue.  (Oh oh, a metaphysical battle over the
meaning of "importance" will now ensue.) See the tax analogy later on.

Think of it this way: our universe may be a simulation of a simulation
of a simulation, and we may not know the difference. Maybe at the
highest level there is physical hardware, but it is not (and so far
cannot be) our concern.

When you are programming in Smalltalk or PHP, are you really concerned
with the processes inside your Intel or AMD processor to execute your
code (other than knowing that there are performance limitations)?


> There is no meaning in
> the void.  Meaning arises from computation, and computation arises from
> physical activity (the movement of electrons in synapses, and
> transistors, the movement of cogs in Babbage's engine, etc).  The
> abstract models of computation within which one can demonstrate
> computational equivalences are themselves rooted within physical
> reality, so the demonstrations themselves don't lift meaning above
> physical reality.
>
> Pi doesn't exist except within systems that can hold the concept of
> ratio.  And that means it exists inside computational entities capable
> of conceptual thought.  These entities exist (us at the very least),
> therefore Pi exists.  But Pi doesn't exist outside of thought and so
> doesn't exist outside of the physical Universe.

Again, a physical universe may be necessary for "final output", but it
is not something we are *conceptually* bound to use. Abstractions
eventually require concrete implementation to be usable for real
work, but that is a mere detail, a "tax" on abstraction that we live
with. Taxes are necessary for a business (unless you are
Halliburton :-), but that does not mean that taxes are the important
part of your business. They are simply a cost of doing business. A
final concreteness of some kind is the cost (tax) of using
abstractions like pi, math, or software.

>
> Much more difficult is defining when computation rises to the level of
> thought.
>
>
> >> There are interesting issues in DNA (I'm treading on extremely thin ice
> >> here because my reading is mostly limited to wikipedia) but its the
> >> transcription and decoding machinery that gives meaning to a strand of
> >> DNA.  A given strand's interpretation depends very importantly on where
> >> transcription/protein generation starts.  Start one base pair further on
> >> and the interpretation si completely different.  Apparently DNA is full
> >> of things like palindromes, multiple encodings of different protiens in
> >> the same strand obtained by starting at different places, etc, etc.  Of
> >> course this is fabulously involuted and mutually recursive because the
> >> transcription/protein generation is in some sense coded by DNA.  It a
> >> limited sense because DNA is functionally inert without the biochemical
> >> context of the cell to provide preexisting machinery with which to
> >> decode it.  See e.g. Origins of Life by Freeman Dyson
> >> (http://www.amazon.com/exec/obidos/ASIN/0521626684/ref=nosim/completereview)
> >> that examines the origins of cells, concluding that cells (biochemical
> >> context) come before cell replication (dna), i.e. that hardware preceeds
> >> software.

Side note: I've proposed that a SETI-like search for ET messages/
artifacts in DNA may be warranted. Sure, it is a long shot, but so is
SETI. And unlike SETI, it does not require any new hardware (antennas)
once the DNA codons are available from other work (reuse). Thus,
lower probability, but also lower expense, balancing it out. Hell,
if it's cheap to test for unicorns and bigfoot, why not? However, when
it was pointed out that it was "ID-like", people went nuts and poo-
pooed the idea and started calling me nasty names. It went from an
intellectual curiosity to being in the middle of the evo-vs-create
debates.

> >>
> >>
> >> Satan oscillate my metallic sonatas!
> >
> > I've found a tin-foil hat is helpful against that  :-)
> >
> >> --
> >> The surest sign that intelligent life exists elsewhere in      Calvin &
> >> the universe is that none of it has tried to contact us.       Hobbes.
> >> --
> >> Eliot     ,,,^..^,,,    Smalltalk - scene not herd
> >
> > -T-
> >

-T-

0
topmind (2124)
4/16/2007 3:59:05 PM
topmind wrote:
> Eliot Miranda wrote:
>> topmind wrote:
>>> Eliot Miranda wrote:
>>>> topmind wrote:
>>>>> panu wrote:
>>>>>> There's a great book I can recommend here:
>>>>>> "Semantics in Business Systems" by Dave McComb
>>>>>>   - Morgan Kauffman Publishers 2004
>>>>>>
>>>>>> The basic goal of this book (as I an perceive)
>>>>>> is to  show that Programs Create Meaning,
>>>>>> - and on another level, of course the programmers
>>>>>> who create the programs. Or perhaps more simply
>>>>>> put: Programs *assign* meaning to data.
>>>>>>
>>>>>> Data is just data. But a program that manipulates
>>>>>> it and presents it to humans, and to other programs,
>>>>>> turns it into meaningful information.
>>>>> That is almost like saying that the CPU is more important than
>>>>> MyProgram.exe because MyProgram.exe is "just a data file" (a list of
>>>>> op-codes).  Or that a sheet of music is "less than" a conductor.
>>>>> Beetoven is not a conductor, so he is less important? I view it more
>>>>> as a Yin and Yang rather than "Yang has a bigger yang", or the like.
>>>>> Data and behavior are generally different views of the same thing,
>>>>> analogous to energy vs. matter.
>>>> If you believe in a soul then the bio-chemical substrate of the body is
>>>> irrelevant to meaning.  But if you're an atheist/materialist you'll
>>>> think that without the bio-chemical substrate that implements the brain
>>>> there can be no meaning.  The meaning of music resides in epiphenomena
>>>> of the brain but without the brain's substrate, poof, its gone.
>>> "Meaning" is relative.  It is like asking if pi exists.
>> All meaning is grounded in physical existence.
> 
> "Grounded" is too strong a word IMO. It may be required for final
> manifestation, but that does not mean it is "grounded" in it, but
> merely that such is a *necessary requirement*, but not necessarily the
> most "important" issue.  (Oh oh, a metaphysical battle over the
> meaning of "importance" will now ensue.) See the tax analogy later on.

"Grounded" is an excellent term, referring as t does to being based in a 
material reality.  In fact a material Universe is necessary and 
sufficient for the exstence of meaning.  No one *can* demonstrate that 
meaning exists anywhere else.  They can present an abstract system in 
which meaning exists, btu that presentation can only happen within the 
context of a material reality (even if upon an infinite tower of 
simulations).  A metacircular interpreter can bootstrap itself within 
itself only because underneath it is mechanism providing execution 
primitives.  Without the mechanism the metacircularity collapses.  You 
can think of it existing in the abstract, but it can't think of itself 
in the abstract; it needs the underlying hardware.  if you're the one 
simulating it then you're the underlying hardware.
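
To put that less abstractly, a toy sketch (Python, purely illustrative): 
an evaluator whose every step bottoms out in primitives supplied by the 
host; take the host's mechanism away and nothing executes at all.

# A toy evaluator: it interprets its own little expression language, but
# only because the host (Python) supplies the primitive operations.
def evaluate(expr):
    if isinstance(expr, int):
        return expr                      # primitive datum, supplied by the host
    op, *args = expr
    vals = [evaluate(a) for a in args]
    if op == "+":
        return vals[0] + vals[1]         # host '+' is the execution primitive
    if op == "*":
        return vals[0] * vals[1]         # host '*' likewise
    raise ValueError("unknown operator %r" % op)

print(evaluate(("+", 1, ("*", 2, 3))))   # 7 -- remove the host and nothing runs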

> Think of it this way: our universe may be a simulation of a simulation
> of a simulation, and we may not know the difference. Maybe at the
> highest level there is physical hardware, but it is not (and cannot so
> far) be our concern.

Whether we can tell whether we're a simulation or not, you take as a 
given that eventually that tower of simulations terminates in real 
hardware (a physical universe).  That's the point.  The hardware 
(physics) comes before meaning.

> When you are programming in Smalltalk or PHP, are you really concerned
> with the processes inside your Intel or AMD processor to execute your
> code (other than knowing that there are performance limitations)?

Whether I can abstract away from hardware when reasoning about a program 
or not doesn't change the fact that it's my physical brain that does the 
reasoning and not some Platonic abstraction.

--
Eliot     ,,,^..^,,,    Smalltalk - scene not herd
0
eliotm (17)
4/16/2007 6:43:46 PM
Eliot Miranda wrote:
> topmind wrote:
> > Eliot Miranda wrote:
> >> topmind wrote:
> >>> Eliot Miranda wrote:
> >>>> topmind wrote:
> >>>>> panu wrote:
> >>>>>> There's a great book I can recommend here:
> >>>>>> "Semantics in Business Systems" by Dave McComb
> >>>>>>   - Morgan Kauffman Publishers 2004
> >>>>>>
> >>>>>> The basic goal of this book (as I an perceive)
> >>>>>> is to  show that Programs Create Meaning,
> >>>>>> - and on another level, of course the programmers
> >>>>>> who create the programs. Or perhaps more simply
> >>>>>> put: Programs *assign* meaning to data.
> >>>>>>
> >>>>>> Data is just data. But a program that manipulates
> >>>>>> it and presents it to humans, and to other programs,
> >>>>>> turns it into meaningful information.
> >>>>> That is almost like saying that the CPU is more important than
> >>>>> MyProgram.exe because MyProgram.exe is "just a data file" (a list of
> >>>>> op-codes).  Or that a sheet of music is "less than" a conductor.
> >>>>> Beetoven is not a conductor, so he is less important? I view it more
> >>>>> as a Yin and Yang rather than "Yang has a bigger yang", or the like.
> >>>>> Data and behavior are generally different views of the same thing,
> >>>>> analogous to energy vs. matter.
> >>>> If you believe in a soul then the bio-chemical substrate of the body is
> >>>> irrelevant to meaning.  But if you're an atheist/materialist you'll
> >>>> think that without the bio-chemical substrate that implements the brain
> >>>> there can be no meaning.  The meaning of music resides in epiphenomena
> >>>> of the brain but without the brain's substrate, poof, its gone.
> >>> "Meaning" is relative.  It is like asking if pi exists.
> >> All meaning is grounded in physical existence.
> >
> > "Grounded" is too strong a word IMO. It may be required for final
> > manifestation, but that does not mean it is "grounded" in it, but
> > merely that such is a *necessary requirement*, but not necessarily the
> > most "important" issue.  (Oh oh, a metaphysical battle over the
> > meaning of "importance" will now ensue.) See the tax analogy later on.
>
> "Grounded" is an excellent term, referring as t does to being based in a
> material reality.  In fact a material Universe is necessary and
> sufficient for the exstence of meaning.  No one *can* demonstrate that
> meaning exists anywhere else.  They can present an abstract system in
> which meaning exists, btu that presentation can only happen within the
> context of a material reality (even if upon an infinite tower of
> simulations).

You seem to be confusing "necessary" and "most important". Yes, a
physical resting place is (eventually) needed for the representation,
but again that is merely an "existence tax" paid by everything with a
tangible representation.

> A metacircular interpreter can bootstrap itself within
> itself only because underneath it is mechanism providing execution
> primitives.  Without the mechanism the metacircularity collapses.  You
> can think of it existing in the abstract, but it can't think of itself
> in the abstract; it needs the underlying hardware.  if you're the one
> simulating it then you're the underlying hardware.
>
> > Think of it this way: our universe may be a simulation of a simulation
> > of a simulation, and we may not know the difference. Maybe at the
> > highest level there is physical hardware, but it is not (and cannot so
> > far) be our concern.
>
> Whether we can tell whether we're a simulation or not, you take as a
> given that eventually that tower of simulations terminates in real
> hardware (a physical universe).  That's the point.  The ardware
> (physics) comes before meaning.

But your "eventually" can be so far removed such as to not be a
controllable or "care-able" issue for the beings doing the abstracting
in the lowest-level universe chain. The "eventually" can approach
infinity such that our control over it (relavancy to us) approaches
zero such that we can assume it is zero in our analysis of it. If the
level was changed from say 5 deep to 5,000,000 deep, the issues at our
level would not change from our perspective. And since infinity levels
would be equivalent of no levels (all virtual), a change from
5,000,000 deep to infinity deep would not change anything for us
meaning that it is the *same* with or without from our perspective.

>
> > When you are programming in Smalltalk or PHP, are you really concerned
> > with the processes inside your Intel or AMD processor to execute your
> > code (other than knowing that there are performance limitations)?
>
> Whether I can abstract away from hardware when reasoning about a program
> or not doesn't change the fact that its my physical brain that does the
> reasoning and not some Platonic abstraction.

How do you know your "physical" brain is not really a simulation in
God's Pentium? You cannot know for sure, but it probably does not
change the issue either way.

>
> --
> Eliot     ,,,^..^,,,    Smalltalk - scene not herd

-T-

0
topmind (2124)
4/16/2007 8:21:29 PM
topmind wrote:
> Eliot Miranda wrote:
>> topmind wrote:
>>> Eliot Miranda wrote:
>>>> topmind wrote:
>>>>> Eliot Miranda wrote:
>>>>>> topmind wrote:
>>>>>>> panu wrote:
>>>>>>>> There's a great book I can recommend here:
>>>>>>>> "Semantics in Business Systems" by Dave McComb
>>>>>>>>   - Morgan Kauffman Publishers 2004
>>>>>>>>
>>>>>>>> The basic goal of this book (as I an perceive)
>>>>>>>> is to  show that Programs Create Meaning,
>>>>>>>> - and on another level, of course the programmers
>>>>>>>> who create the programs. Or perhaps more simply
>>>>>>>> put: Programs *assign* meaning to data.
>>>>>>>>
>>>>>>>> Data is just data. But a program that manipulates
>>>>>>>> it and presents it to humans, and to other programs,
>>>>>>>> turns it into meaningful information.
>>>>>>> That is almost like saying that the CPU is more important than
>>>>>>> MyProgram.exe because MyProgram.exe is "just a data file" (a list of
>>>>>>> op-codes).  Or that a sheet of music is "less than" a conductor.
>>>>>>> Beetoven is not a conductor, so he is less important? I view it more
>>>>>>> as a Yin and Yang rather than "Yang has a bigger yang", or the like.
>>>>>>> Data and behavior are generally different views of the same thing,
>>>>>>> analogous to energy vs. matter.
>>>>>> If you believe in a soul then the bio-chemical substrate of the body is
>>>>>> irrelevant to meaning.  But if you're an atheist/materialist you'll
>>>>>> think that without the bio-chemical substrate that implements the brain
>>>>>> there can be no meaning.  The meaning of music resides in epiphenomena
>>>>>> of the brain but without the brain's substrate, poof, its gone.
>>>>> "Meaning" is relative.  It is like asking if pi exists.
>>>> All meaning is grounded in physical existence.
>>> "Grounded" is too strong a word IMO. It may be required for final
>>> manifestation, but that does not mean it is "grounded" in it, but
>>> merely that such is a *necessary requirement*, but not necessarily the
>>> most "important" issue.  (Oh oh, a metaphysical battle over the
>>> meaning of "importance" will now ensue.) See the tax analogy later on.
>> "Grounded" is an excellent term, referring as t does to being based in a
>> material reality.  In fact a material Universe is necessary and
>> sufficient for the exstence of meaning.  No one *can* demonstrate that
>> meaning exists anywhere else.  They can present an abstract system in
>> which meaning exists, btu that presentation can only happen within the
>> context of a material reality (even if upon an infinite tower of
>> simulations).
> 
> You seem to be confusing "necessary" and "most important". Yes, a
> physical resting place is (eventually) needed for the representation,
> but again that is merely an "existence tax" paid by everything with a
> tangable representation.

You seem to be confused.  Show me *any* computation not grounded in 
physics.  And btw it's "tangible".

> 
>> A metacircular interpreter can bootstrap itself within
>> itself only because underneath it is mechanism providing execution
>> primitives.  Without the mechanism the metacircularity collapses.  You
>> can think of it existing in the abstract, but it can't think of itself
>> in the abstract; it needs the underlying hardware.  if you're the one
>> simulating it then you're the underlying hardware.
>>
>>> Think of it this way: our universe may be a simulation of a simulation
>>> of a simulation, and we may not know the difference. Maybe at the
>>> highest level there is physical hardware, but it is not (and cannot so
>>> far) be our concern.
>> Whether we can tell whether we're a simulation or not, you take as a
>> given that eventually that tower of simulations terminates in real
>> hardware (a physical universe).  That's the point.  The hardware
>> (physics) comes before meaning.
> 
> But your "eventually" can be so far removed such as to not be a
> controllable or "care-able" issue for the beings doing the abstracting
> in the lowest-level universe chain. The "eventually" can approach
> infinity such that our control over it (relavancy to us) approaches
> zero such that we can assume it is zero in our analysis of it. If the
> level was changed from say 5 deep to 5,000,000 deep, the issues at our
> level would not change from our perspective. And since infinity levels
> would be equivalent of no levels (all virtual), a change from
> 5,000,000 deep to infinity deep would not change anything for us
> meaning that it is the *same* with or without from our perspective.

Not so.  At each level of the simulation the computation depends on the 
existence of the (possibly simulated) hardware immediately below it. 
The number of levels doesn't reduce the direct dependence of the 
computation (simulated or otherwise) on the computing machinery 
performing that computation.

>>> When you are programming in Smalltalk or PHP, are you really concerned
>>> with the processes inside your Intel or AMD processor to execute your
>>> code (other than knowing that there are performance limitations)?
>> Whether I can abstract away from hardware when reasoning about a program
>> or not doesn't change the fact that its my physical brain that does the
>> reasoning and not some Platonic abstraction.
> 
> How do you know your "physical" brain is not really a simulation in
> God's Pentium? You cannot know for sure, but it probably does not
> change the issue either way.

Exactly.  Even if I am a simulation the computation simulating me is 
happening somewhere somehow.

But enough already.  You've brought up God.  Time to bow out.
--
Eliot     ,,,^..^,,,    Smalltalk - scene not herd
0
eliotm (17)
4/16/2007 9:31:42 PM
> We can build a Chess machine that can probably beat any human alive.
> Thus, at least in specific areas exceeding the creator is possible.
> 

I know I'm going to regret getting involved in this discussion mostly 
because prior threads that involve 'topmind' end up in no man's land.

The truth wrt Chess is that we cannot build a machine that can beat 
any human alive. The current chess machines that sometimes win against 
humans are constantly crashing and have human interventions to help them 
play. The computers are human-augmented in major ways.

For any sufficiently advanced game - take GO for example - you find that 
a computer has exactly the same problem as a human. The problem space 
is too big and no matter how many processors you throw at it that 
doesn't go away.
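
For a rough sense of scale -- using the commonly quoted ballpark figures 
for branching factor and game length, which I'm assuming here rather than 
measuring -- the numbers look like this (Python, just arithmetic):

# Ballpark game-tree sizes: (branching factor) ** (typical game length),
# with the usual rough estimates for each game.
chess = 35 ** 80      # ~10**123 with the usual b~35, d~80 estimates
go    = 250 ** 150    # ~10**360 with b~250, d~150 -- more processors won't dent this
print(len(str(chess)), len(str(go)))   # decimal digits: 124 and 360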

Quantum computing may be able to achieve the kinds of "thinking" that 
humans do - but maybe it won't either.

And let's get to the heart of the thread here.. if you do manage to make 
a chess computer that always beats a human, it's only because you're not 
making a thinking machine, you're making a calculating machine that 
models the domain well enough to make better decisions in the same fixed 
space of time as a human.


>> If we discover how the brain works then we will cease to be humans.
> 
> Huh? You are getting too poetic for me.

I agree. That's a useless statement. Sit down and meditate a moment and 
you're no longer the same person you were before.

This is as silly as Douglas Adams's idea that once we know the meaning 
of life, the universe and everything, the universe will be destroyed and 
replaced with something more complex. It was a tongue-in-cheek joke but 
you seem to have taken it to heart.

Now that we know how to flip epigenetic switches, will we cease to be 
human? Heck no, we'll become better humans. Once we know how the human 
mind works, will we cease to be human? No, we'll become better humans.

When we invented the first human-augmentation device (read: computer) 
did we cease to be human? No.


Didn't this thread start by talking about how computers can understand 
stuff? It seems silly that we've come this far off track.

I've done a reasonable amount of knowledge architecture and 
representation work for open ontologies (as opposed to fixed ontologies 
which introduce all kinds of awesome optimisations, representations, etc).

I believe somewhere in this thread someone posited that a relational 
database wasn't a good way of storing information but was a good way of 
storing data. I agree with this sentiment. The main reason is that an 
RDB assumes that the ontology is fixed. A table has a set number of 
columns and you'll kill the database if you allow the table's columns to 
grow indefinitely to accept all ontologies.
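
To make the contrast concrete, here's a minimal sketch (Python with 
sqlite3; the tables and predicate names are purely illustrative, not any 
particular product's schema). In the triple table a new kind of fact is 
just another row, whereas the fixed-column table needs its schema altered 
every time the ontology grows:

import sqlite3

con = sqlite3.connect(":memory:")

# Fixed ontology: every new attribute means changing the schema.
con.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, phone TEXT)")

# Open ontology: one triple table; predicates are just data.
con.execute("""CREATE TABLE triple (
    subject   TEXT NOT NULL,
    predicate TEXT NOT NULL,
    object    TEXT NOT NULL)""")

con.executemany("INSERT INTO triple VALUES (?, ?, ?)", [
    ("person:1", "name",             "Ada"),
    ("person:1", "phone",            "555-0100"),
    ("person:1", "favourite-colour", "green"),   # no schema change needed
])

# Everything we know about person:1, whatever predicates have accumulated.
for pred, obj in con.execute(
        "SELECT predicate, object FROM triple WHERE subject = ?", ("person:1",)):
    print(pred, obj)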

And for those of you who truly believe that I'm wrong... then you're 
also arguing with Oracle. They have recently released Oracle Semantic 
Database, which is based off Oracle Spatial, and it uses their newly 
acquired BerkeleyDB as its base to represent specialised triple 
structures for optimal speed with querying and indexing.

Their offering is pretty meager wrt Frames and Description Logics and 
other AI offerings, but hey, it's a pretty big step forward for an RDBMS 
company to make an RDF database.

Cheers,
Michael
0
Michael
4/19/2007 5:02:34 AM
Thomas Gagne <tgagne@wide-open-west.com> wrote:

>I was re-reading some of the messages under the subject "Databases as
>objects" and ran across something Frans Bouma wrote:
>
>Frans Bouma wrote:
>> Thomas Gagne wrote:
>>
>>
>>   
>>> But to the point, if a program was able to store improperly-formatted
>>> zipcode inside the DB then whose fault is that?  
>>>     
>>
>> 	what's an improperly formatted zipcode? In the US, you have 5 digits,
>> in the netherlands you have 4 digits and 2 characters. A zipcode of
>> 1234 AA is properly formatted for a dutch user of the application, but
>> not correctly formatted for the US user of the program. Hence: context.
>>
>> 	This thus means that if the db stores '1234AA', it can do so, and the
>> dutch user will happily use it. The US user can't because for the US
>> user it's just data, 1234 and 2 characters, it's not information
>> (zipcode).
>If you think about it, neither zip code means anything to the database. 
>They're simply characters in a field.  It's the /reader/ who derives
>information from them.  The only way a DB might know something about zip
>codes is if the field were a FK to known zip codes, then the DB might
>assert referential integrity, but still know nothing of the meaning of
>"1234 AA."  Only the post office and its customers are impacted by its
>meaning.
>
>So that takes us back to whether or not the database is correct beyond
>its integrity checks.  If it stores exactly what it should store then it
>is correct.  An incorrect or improperly formatted zip code isn't the
>database's responsibility if that's what a human or application told it
>to store.  A DB wouldn't know it was correct or not until it tried
>delivering the mail--or performing some function with the data.

With your statement above... what exactly is a DB?  Can you give me a
definition of one?  If a database stored metadata and understood the
semantics of zip codes, then couldn't it do that?  I guess it depends on
your concrete definition of "what exactly is a DB", because otherwise I
believe your statement is entirely incorrect -- surely a DB could do this.

I'm also going to preface that a lot of what you talk about has a lot of
overlap with semantic technology.  Have a look at the W3C too, about
semantic web technologies and the like; there is a lot of evolution going on
in these concepts and related technology.

>This is the same problem with adding apples and oranges, or USD and
>CAD.  Were I able to store $1USD and $2CAD in an attribute, that by
>itself isn't necessarily wrong.  It may, however, be an inability of a
>function to find meaning to adding them together, or the unwillingness
>of a bank to ACH 1USD+1CAD, even though people familiar with both
>algebra and currency codes recognize exactly what the expression is and
>what it means.  After crossing the border to Windsor I may end up with
>both USD and CAD in my wallet. 
>
>This reinforces that the DB can be responsible for structural, type, and
>referential integrity, but it can not give meaning to its data.

Here, again, I wholeheartedly disagree... While most commonly used databases
in existence today don't provide any kind of notional semantics in this
regard, there isn't any reason why such a system couldn't exist.  Perhaps
you wouldn't call such a system a DB, but again perhaps that depends on your
definition of a DB.  :-)
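
As a toy illustration of what I mean -- a minimal sketch in Python with
sqlite3, where the tables, country codes and patterns are purely
illustrative -- the database itself can hold the zip-code format metadata
and refuse a value that doesn't match the format registered for its
country, so the context lives with the data rather than in every client:

import re
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE address (country TEXT, zipcode TEXT)")

# Format metadata kept by the database, alongside the data it governs.
con.execute("CREATE TABLE zip_format (country TEXT PRIMARY KEY, pattern TEXT)")
con.executemany("INSERT INTO zip_format VALUES (?, ?)", [
    ("US", r"^\d{5}(-\d{4})?$"),    # 5 digits, optional +4
    ("NL", r"^\d{4} ?[A-Z]{2}$"),   # 4 digits and 2 letters
])

def insert_address(country, zipcode):
    row = con.execute("SELECT pattern FROM zip_format WHERE country = ?",
                      (country,)).fetchone()
    if row is None or not re.match(row[0], zipcode):
        raise ValueError("%r is not a valid zip code for %s" % (zipcode, country))
    con.execute("INSERT INTO address VALUES (?, ?)", (country, zipcode))

insert_address("NL", "1234 AA")   # accepted: valid for a Dutch address
insert_address("US", "1234 AA")   # rejected: raises ValueError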

Cheers, Ian

>"Incorrect" data, having passed the three tests the DB can apply, is
>only incorrect to those it has meaning for.
>
>So this takes us back to our responsibility as programmers and designers
>to guard our database's integrity.  In the same way OO programmers guard
>their objects' integrity by restricting access to object state, why
>would they regard a DB's state any less by allowing any and all to
>manipulate rows and attributes however and wherever they wish?
---
http://www.upright.net/ian/
0
ian-news2 (24)
4/26/2007 8:23:10 AM
Michael Lucas-Smith <"michael-dot-lucassmith at google mail"> wrote:

>Didn't this thread start by talking about how computers can understand 
>stuff? It seems silly that we've come this far off track.
>
>I've done a reasonable amount of knowledge architecture and 
>representation work for open ontologies (as opposed to fixed ontologies 
>which introduce all kinds of awesome optimisations, representations, etc).
>
>I believe somewhere in this thread someone posited that a relational 
>database wasn't a good way of storing information but was a good way of 
>storing data. I agree with this sentiment. The main reason is that an 
>RDB assumes that the ontology is fixed. A table has a set number of 
>columns and you'll kill the database if you allow the tables columns to 
>grow indefinitely to accept all ontologies.
>
>And for those of you who truely believe that I'm wrong.. then you're 
>also argueing with Oracle. They have recently released Oracle Semantic 
>Database which is based off Oracle Spacial and it uses their newly 
>acquired BerkeleyDB as its base to represent specialised triple 
>structures for optimal speed with querying and indexing.

While I wholeheartedly don't believe that a "triple structure" is THE
structure for optimal speed, I believe there are obvious and serious
performance problems with relational databases, and perhaps even object
databases, as you point out.  

Where I work, we use a more object-oriented model augmented with a number
of indices to make these kinds of queries efficient, as well as having very
loosely structured objects.  The triple structure encourages data
fragmentation and works extremely poorly with large and/or ordered sets, for
example.  I'm the chief engineer of our own in-house OODB-like engine built
in C++, optimized for storing ontological data and metadata, with some
capabilities in the DB to have some better semantic understanding of the
data itself.  It natively imports OWL/RDF at blazing speed, btw.  We have
many higher-level frameworks, written in Smalltalk, that use the database.
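
To show the ordered-set problem I mean, a toy sketch (plain Python; the
cell and predicate names are just illustrative): a three-element list
becomes a chain of cons-like cells, so reading it back costs a pair of
lookups per element rather than one sequential read of a natively stored
collection.

# The list (a b c) as triples: one "cell" per element, linked by "rest".
triples = {
    ("list:1", "first"): "a", ("list:1", "rest"): "list:2",
    ("list:2", "first"): "b", ("list:2", "rest"): "list:3",
    ("list:3", "first"): "c", ("list:3", "rest"): "nil",
}

def walk(node):
    out = []
    while node != "nil":
        out.append(triples[(node, "first")])   # one lookup per element...
        node = triples[(node, "rest")]         # ...plus one to find the next cell
    return out

print(walk("list:1"))   # ['a', 'b', 'c'] -- versus a single sequential read
                        # for a natively stored ordered collection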

Cheers, Ian

>Their offering is pretty meager wrt to Frames and Descriptive Logics and 
>other AI offerings, but hey, it's a pretty big step forward for an RBDMS 
>company to make an RDF Database.
>
>Cheers,
>Michael
---
http://www.upright.net/ian/
0
Ian
4/26/2007 4:28:07 PM